GET-UP picture with Senior Minister of State (Singapore) Mr. S. Iswaran
A recovery model is a database property that controls how transactions are logged. Backup and restore operations occur within the context of the recovery model of the database.
Why log transactions?
To guarantee the ACID properties across crashes or hardware failures, every action is logged to disk. In case of a system failure, the database engine reads the log and redoes/undoes the logged actions to bring the database back to a consistent state.
Where is it logged?
MS SQL Server maintains mainly two kinds of files: data files and log files. A log file can map to one or more physical files. Also note that the transaction log is different from other logs like the setup log, error log, profiler log, agent log, event log etc.
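You can see the data and log files of a database through a standard catalog view; a minimal sketch, assuming a database named MyDatabase (a placeholder):

```sql
-- List the data and log files of the current database.
-- type_desc shows ROWS for data files and LOG for log files.
USE MyDatabase;
SELECT name, type_desc, physical_name, size
FROM sys.database_files;
```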
Types of recovery model
Depending on how extensively MS SQL Server logs, there are three recovery models:
Simple recovery model
Suppose you take a backup of the database every day. If there is a failure, you lose at most one day's data, and you can afford that. In that case, you can choose the simple recovery model. Here, every transaction is still logged. However, as soon as those transactions are written to disk (at a checkpoint), the log space is reclaimed.
Here you don't have the provision to take log backups; you only back up data. As said before, in case of failure you can recover the data you backed up last time. You lose everything else done since that data backup.
You get rid of the administrative task of taking log backups, and of the risk of managing very large log files.
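Switching a database to the simple recovery model, and taking the daily full backup the scenario above relies on, can be sketched like this (the database name and backup path are assumed placeholders):

```sql
-- Placeholder names: MyDatabase and the backup path are illustrative.
ALTER DATABASE MyDatabase SET RECOVERY SIMPLE;

-- Under the simple model only data backups are possible,
-- e.g. a daily full backup:
BACKUP DATABASE MyDatabase
TO DISK = 'D:\Backups\MyDatabase_full.bak';
```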
However you lose the following features:
Log shipping – This is a disaster recovery technique where the transaction log is copied from the primary (production) database on a primary server (the production MS SQL Server instance) to one or more secondary databases (warm standby copies of the primary database) on separate secondary server instances. This feature requires the full recovery model, discussed a bit later.
AlwaysOn or Database mirroring – These features create a fail-over environment using redundancy. If the primary database fails, a secondary database (not a backup) takes over. These features also require the full recovery model.
Media recovery without data loss – If media (tape/disk) fails, recovery without data loss is not possible; it needs the full recovery model.
Point-in-time restores – You cannot restore the database to an arbitrary point defined by a date and time. For this to work, you need the full recovery model.
Full recovery model
By this time it is clear that, for the features above to work, you need the full recovery model. It logs all transactions and keeps them even after those transactions are written from memory to disk, so the log can grow very large. That is why there is provision to take log backups, and you have to take them regularly, else you might run out of disk space on the production server. When you back up the log, the backed-up portion is removed from the database log file.
Armed with the full recovery model (and the pain of administering the transaction log), you won't lose any data due to a lost or damaged data file, and you can restore the database to any point in time.
Unless the tail of the transaction log file is damaged, all is good. Otherwise, you lose the changes made since the last log backup.
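The full-model routine – regular log backups, plus a point-in-time restore when needed – can be sketched as follows; the database name, paths and the STOPAT timestamp are illustrative placeholders:

```sql
ALTER DATABASE MyDatabase SET RECOVERY FULL;

-- Regular log backups keep the log file from growing without bound.
BACKUP LOG MyDatabase
TO DISK = 'D:\Backups\MyDatabase_log.trn';

-- Point-in-time restore: restore the full backup without recovery,
-- then roll the log forward up to the chosen moment.
RESTORE DATABASE MyDatabase
FROM DISK = 'D:\Backups\MyDatabase_full.bak'
WITH NORECOVERY;
RESTORE LOG MyDatabase
FROM DISK = 'D:\Backups\MyDatabase_log.trn'
WITH STOPAT = '2014-04-08 10:00:00', RECOVERY;
```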
Bulk recovery model
This is the same as the full recovery model except that bulk operations (to be precise, the set of operations that can be minimally logged) are not fully logged. This allows bulk copy to run with high performance by not logging every row.
Point-in-time recovery is not supported; recovery is possible only to the end of any (data or log) backup.
If the log is damaged, or bulk-logged operations occurred since the most recent log backup, the changes since that last backup are lost. Otherwise, no data is lost.
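A common pattern is to switch to the bulk-logged model only around a large bulk operation and return to full afterwards, bracketing the switch with log backups; a sketch with placeholder database, table and path names:

```sql
-- Back up the log, switch models, run the bulk operation, switch back,
-- then back up the log again so point-in-time recovery resumes cleanly.
BACKUP LOG MyDatabase TO DISK = 'D:\Backups\MyDatabase_log1.trn';
ALTER DATABASE MyDatabase SET RECOVERY BULK_LOGGED;

BULK INSERT dbo.MyBigTable
FROM 'D:\Import\bigfile.dat';

ALTER DATABASE MyDatabase SET RECOVERY FULL;
BACKUP LOG MyDatabase TO DISK = 'D:\Backups\MyDatabase_log2.trn';
```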
Why is the log size too big?
This is a typical problem people face.
You can check the log_reuse_wait_desc column in sys.databases; it will tell you why.
Most likely, though, the database is using the default full recovery model without your being aware of it – especially if the database was created by SharePoint – and you forgot to take regular log backups.
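The check is a simple query against the sys.databases catalog view:

```sql
-- recovery_model_desc shows SIMPLE / FULL / BULK_LOGGED;
-- log_reuse_wait_desc shows what is preventing log truncation,
-- e.g. LOG_BACKUP when the full model is waiting for a log backup.
SELECT name, recovery_model_desc, log_reuse_wait_desc
FROM sys.databases;
```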
Let us start with HTTP
When we browse the internet by typing a web address (URL/server address) starting with http into a browser (the client), we are using the HTTP protocol. It functions as a request-response protocol in a client-server model: the client (say, a browser) sends a request and the server sends back a response. Note that a browser is just one type of client; a client does not necessarily have to be a browser.
HTTP is built over TCP
The HTTP protocol is implemented on top of a reliable transport layer protocol, mostly TCP.
HTTP has a security problem
However, an attacker can sit on the line between the client and the server and read whatever transpires between the two. Not only that, an attacker can impersonate the server, i.e., act as if it is the server, and collect client information that might be sensitive.
Here comes TLS/SSL to build HTTPS
Hence it is important to make sure that the client is talking to the real server, and that all communication between the client and the server is kept secret between the two. This is where TLS/SSL comes into the picture: it implements transport layer security to protect the HTTP line, and HTTP over it is called HTTPS. When you type a URL starting with https, you are using a secured HTTP line.
TLS/SSL uses asymmetric encryption
Asymmetric encryption, also known as public-key cryptography, uses a key pair: a private key and a public key. The private key remains known only to the server, while the public key is open/known to all. Data encrypted with either of the two keys can be decrypted only with the other.
There are two things in public-key cryptography:
· Public key encryption – public key encrypts, private key decrypts
· Digital signature – private key encrypts, public key decrypts
When a server gets a certificate (X.509) from a certificate authority (CA), it essentially generates/gets a key pair. The certificate then contains, among other identity information, the public key of the website and a signature of the CA that issued it (i.e., signed by the private key of the CA). When a browser talks to a website, the website has to first provide its certificate.
The browser has a list of trusted CAs and their public keys. The browser can use the public key of the right CA (as mentioned in the certificate) to verify the certificate's signature.
And symmetric encryption
Now that the client trusts the website, it uses the website's public key to encrypt a key that it would like to use as the symmetric key for further communication. In reality there can be a number of steps here that decide how the symmetric key is generated/negotiated/used. Typically a disposable session key is derived from a master secret generated at the beginning of secure-line creation, to implement forward secrecy. That means even if a session key is compromised at some point, only the data transferred using that session key is compromised; the attacker won't be able to figure out the next session key that will be generated after a while.
Unlike asymmetric encryption, where either of the private and public keys can encrypt data that only the other key can decrypt, symmetric encryption uses a single key for both encryption and decryption. Symmetric encryption is better for performance and key distribution, and helps implement forward secrecy through continuously changing session keys.
Here comes OpenSSL
So if you want a server that supports TLS/SSL, you need a component that implements it. OpenSSL is one such open-source implementation; there are many others, like GnuTLS.
But TCP has an issue
So now we have an HTTPS connection built over TCP, where the client sends a request and the server gives back a response. But how long should the client wait for the server to respond? At times we do need to wait in a blocking call until the response comes back. But TCP can fail to detect whether the other side is still alive – especially if the other side closes the connection without following the normal tear-down process.
Also, at times firewalls sitting on the line close idle TCP connections.
Here comes heartbeat
To make sure the client is not waiting for a response from a server on the other side of an already-closed connection, and to make sure firewalls do not close the connection when it is idle, a packet is periodically sent to the server, and the server sends back a response.
OpenSSL’s heartbeat implementation went wrong
OpenSSL's heartbeat implementation went wrong, and this bug was introduced in 2012. The buggy code could read up to around 64 KB of the server process's memory and send it back to the client in each heartbeat response. The client could construct the message in a way that, to some extent, dictated where and how much it wanted to read from the server. That would in turn allow the client to steal sensitive information to exploit.
How to exploit Heartbleed bug?
As said, a client need not be a browser. One could write one's own HTTPS client and construct the heartbeat message oneself. By sending many, and especially simultaneous, heartbeat messages, one can accumulate a large section of the server process's memory. After analysing the data, one could possibly even recover the server's private key. If the server belongs to an e-commerce site, one could possibly get the credit card information, user IDs and passwords of the users who were using that site at the time.
Even though we have mostly focused on web servers so far, they are not the only thing affected. OpenSSL (with the heartbeat extension) is also used in email servers, VPNs and other TLS/SSL-secured client-server systems. Also, it is not only the server but the client as well that can be affected.
There are a few sites, like ssllabs, where one can paste a URL and check whether it is vulnerable. OpenSSL has already released a fixed version. If you own a server that is vulnerable, it is better to assume it has been compromised; it is then best to change the certificate. If you have an ID and password on an affected server, it is better to change your password.
There are considerations, restrictions and limitations to using FILESTREAM. They are listed below:
Works as it is or with some considerations
Does not work
Recently I needed to pay some fees to some authorities. In Singapore, you typically pay somebody by cash, ATM transfer, online transfer, NETS, etc. However, there are cases when you need to pay by cheque, and this was such a case. But if you only have a savings account with a bank, they are unlikely to give you a cheque book. A cashier's order (CO) comes in handy as an alternative to a cheque, especially if you want to avoid the hassle and fees (small though they are) of opening a current account that provides a cheque book.
You can go down to a branch, fill out a form, pay a fee and collect a CO. However, some banks like DBS allow you to apply online for a CO. If you do so, you still need to go to a branch to collect it (only certain branches – Jurong Point is one of them; you can choose one while applying online), but you save the fee. Last time I did not even need to wait in a queue; a front-desk lady collected it from inside and handed it to me. You can also choose to have the bank send it to you by post.
When I tried to apply for a CO online, I found I could do so only for up to SGD 10,000. I needed more. Not finding a way, I went down to the branch and was informed that I could place as many online applications as needed; if the payee is the same, they would call me and issue a single aggregate CO (I needed a single one). Had they mentioned this online, it would have been better.
By this time I had become more careful. While issuing the COs one by one, I kept the references manually, as there was no report or anything to figure out how many I had issued so far. Well, I could log out, and at that point they do show all the activities performed in that session.
Overall, DBS provides a nice online experience. I especially like the e-statement, which saves paper.
PS. Last time, I bought a CO of S$35 but could not manage the time to collect it. They called me to collect it, and when I said I would not require it anymore, they canceled it, free of cost!
To use the FILESTREAM feature, both the Windows administrator and the SQL Server administrator have to enable it. When enabling it from Windows, there are three options to choose from:
For the SQL Server instance, there are three options to choose from:
Note that for full reads/writes of files larger than 1 MB, FILESTREAM performs worse than keeping the BLOB inside the database if only Transact-SQL access is chosen (without Win32 streaming).
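On the SQL Server side, the access level can be set with sp_configure; the value 2 below selects Transact-SQL plus Win32 streaming access, one of the three instance-level options mentioned above:

```sql
-- 0 = disabled, 1 = Transact-SQL access only,
-- 2 = Transact-SQL and Win32 streaming access.
EXEC sp_configure 'filestream access level', 2;
RECONFIGURE;
```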
Now that FILESTREAM is enabled, we can go ahead to do the rest of the setups.
First, create a file group to keep the FILESTREAM data. There is a 1:1 mapping between data containers and file groups.
ALTER DATABASE MyDatabase ADD FILEGROUP MyFileStreamGroup1 CONTAINS FILESTREAM;
Second, define the data container. In the example below, "D:\Production" must already exist; "D:\Production\FileStreamDataContainerRoot" is going to be the root of the data container.
ALTER DATABASE MyDatabase ADD FILE (NAME = FileStreamGroup1File, FILENAME = 'D:\Production\FileStreamDataContainerRoot') TO FILEGROUP MyFileStreamGroup1;
Third, create a table with a FILESTREAM column. All FILESTREAM columns of a table must go to a single FILESTREAM filegroup, which can be specified explicitly and otherwise defaults to the default FILESTREAM filegroup. The table must have a non-null uniqueidentifier column with the ROWGUIDCOL attribute and either a single-column UNIQUE or PRIMARY KEY constraint. The FILESTREAM data container will have a directory for the table, with a subdirectory corresponding to each FILESTREAM column of the table.
USE MyDatabase;
GO
CREATE TABLE TableWithFileStreamColumn (
    DocumentID INT IDENTITY PRIMARY KEY,
    Document VARBINARY(MAX) FILESTREAM NULL,
    DocGUID UNIQUEIDENTIFIER NOT NULL ROWGUIDCOL UNIQUE DEFAULT NEWID()
) FILESTREAM_ON MyFileStreamGroup1;