Gmail API with OAuth 2.0

1st Friday Fun Session – 6th Jan 2017

IMAP and POP3 are the two most prevalent standard protocols for email retrieval that work over a TCP/IP connection. However, Google also offers the Gmail API, which gives RESTful access to a user's mailbox under OAuth 2.0 authorization. To get a feel for it, we will write a small desktop application that uses the Gmail API to connect to a specific mailbox and perform some operations.

Application description

Our small C# console application connects to a specific mailbox, retrieves items filtered by a time period, and puts them back, in encrypted form, into the same folder (Inbox, Sent etc.) from where they were originally read.

We will connect to the mailbox using RESTful access through the Gmail API. That also means authorization will be done using the OAuth 2.0 protocol.

Input to the application

Input to the application would be: mailbox name, start of the retrieval period, end of the retrieval period, a password, and a salt – the last two are used for generating the symmetric key for encryption.

The input is read from a file named input.txt, placed in the application path.

Each of the five input fields is a key-value pair, separated by a single space, occupying a single line. Sample input.txt file content is shown below:

  mailbox some.email.id@gmail.com
  start_time 2016/12/14
  end_time 2016/12/15
  input_key BA994A37-901D-4777-8054-6C5D87500AB7
  salt 65DA56D0-9285-41A0-975E-323420D0602B

The key mailbox denotes the Gmail id. Both dates use the yyyy/M/d format, with end_time being no earlier than start_time. A time component could also be included. The salt must be at least 8 bytes long.

The steps

We need to do the following steps:

  1. Authentication/Authorization
  2. Getting emails from Gmail
  3. Encryption
  4. Insert emails back

Authentication/Authorization

Google APIs (Gmail being one of them) use the OAuth 2.0 protocol for authentication and authorization. All applications have to follow a basic pattern for accessing a Google API using OAuth 2.0.

Get credentials

First, we must create a Google API Console project. Once it is created, we get credentials for it. Credentials are essentially composed of the following two values, known to both Google and the application:

  1. Client ID
  2. Client Secret

Google console pages look like below.

[Screenshots: Google Console; Application Secret]

Get access token

The second step is to get an access token from the Google Authorization Server. Along with the access token request, a scope parameter is sent; it indicates all the accesses being requested.

Depending on the kind of application (web server, installed, client side), the authorization sequence might vary slightly. Since ours is an installed application, we get an authorization code first; we then exchange this code for a token. The image below, taken from Google, depicts the process.

[Image: OAuth 2.0 authorization code flow]

Get authorization code

To get an authorization code from Google, we first open an HttpListener on the local loopback address, with a dynamically found available port. However, this requires elevated privilege; hence, the application needs to run elevated. This could be avoided by doing a URL reservation, though that itself requires elevation.
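A minimal sketch of finding a free port and starting the listener could look like the following; the /authorize/ path segment and type names are illustrative only, not part of the original application.

using System.Net;
using System.Net.Sockets;

static class LoopbackListener
{
    public static HttpListener Start(out string redirectUri)
    {
        // Ask the OS for a free port by binding to port 0, then release it.
        var probe = new TcpListener(IPAddress.Loopback, 0);
        probe.Start();
        int port = ((IPEndPoint)probe.LocalEndpoint).Port;
        probe.Stop();

        // Listen on that port for the OAuth redirect.
        redirectUri = "http://127.0.0.1:" + port + "/authorize/";
        var listener = new HttpListener();
        listener.Prefixes.Add(redirectUri);
        listener.Start(); // may throw without elevation or a URL reservation
        return listener;
    }
}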

We then open an authorization request in the browser from the application (a sketch of composing the request URL follows the list below). The authorization request requires:

  1. Authorization endpoint: https://accounts.google.com/o/oauth2/v2/auth
  2. Scope: https://www.googleapis.com/auth/gmail.modify
  3. Redirect URI: local loopback address, as explained earlier
  4. Client Id: Client Id credential, as retrieved for the Google API Console project
  5. State etc.: some additional information like state etc. to verify that response from server is due to a request made by this application. Verification happens once the response from Google server is back.
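A minimal sketch of composing and opening the authorization URL from the items above; clientId, redirectUri and state are assumed to be supplied by the caller, and response_type=code marks this as an authorization code request.

using System;
using System.Diagnostics;

static class AuthorizationRequest
{
    public static void OpenInBrowser(string clientId, string redirectUri, string state)
    {
        string url =
            "https://accounts.google.com/o/oauth2/v2/auth" +
            "?response_type=code" +
            "&scope=" + Uri.EscapeDataString("https://www.googleapis.com/auth/gmail.modify") +
            "&redirect_uri=" + Uri.EscapeDataString(redirectUri) +
            "&client_id=" + Uri.EscapeDataString(clientId) +
            "&state=" + Uri.EscapeDataString(state);

        Process.Start(url); // opens the default browser (on .NET Framework)
    }
}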

The user's consent screen looks like below.

[Screenshot: user's consent screen]

The scope can vary. The modify scope, as shown here, is sufficient to call all three APIs (get, list and insert) that we need for our application. However, instead of asking for all the authorization upfront, scope should be increased incrementally.

If all is well, we get an authorization code. However, to make any API call, we need an access token. So now we have to exchange this code for a token.

We make an HttpWebRequest POST (sketched after the list below) using:

  1. Token endpoint: https://www.googleapis.com/oauth2/v4/token
  2. Code: as found earlier
  3. Client ID: credential as explained earlier
  4. Client secret: credential as explained earlier

All Gmail API endpoints are https, meaning all API calls have to be made over an encrypted SSL/TLS channel. In .NET, we can easily use the HttpWebRequest class. In other languages, if such a library is not available, we might use GnuTLS, OpenSSL etc. to create a TLS/SSL channel that enables us to make https calls.

Make API call using access token

Now we can make Gmail API calls, passing the token in an Authorization header.
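A minimal sketch of such an authorized GET, reused by the list and get calls below; url and accessToken are assumed to be supplied by the caller.

using System.IO;
using System.Net;

static class GmailApi
{
    public static string Get(string url, string accessToken)
    {
        var request = (HttpWebRequest)WebRequest.Create(url);
        // The bearer token authorizes the call.
        request.Headers.Add("Authorization", "Bearer " + accessToken);
        using (var response = request.GetResponse())
        using (var reader = new StreamReader(response.GetResponseStream()))
            return reader.ReadToEnd();
    }
}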

Refresh Access Token

An access token has a limited lifetime; we might need to refresh it when it expires.

Getting emails from Gmail

To read emails, we will use list and get APIs, using the access token that we already received.

list

For the list API call, we make an HttpWebRequest GET request to the endpoint https://www.googleapis.com/gmail/v1/users/userId/messages.

The parameter userId is the Gmail id. The optional query parameter q can be used to specify after and before dates to filter the data. It should be mentioned that there is a maximum limit (I found it to be 500) on how many emails (essentially, email id and thread id combinations) we can read in one call. We are content with as many results as are found in the first call; no attempt is made to make further calls to read more emails.
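A minimal sketch of the list call using the GmailApi.Get helper sketched earlier; userId, accessToken and the two dates (taken from input.txt) are assumed to be supplied by the caller.

// Gmail's after:/before: search operators filter by date.
string listUrl =
    "https://www.googleapis.com/gmail/v1/users/" + userId + "/messages" +
    "?q=" + Uri.EscapeDataString("after:2016/12/14 before:2016/12/15");

// The JSON response carries the (email id, thread id) combinations.
string listJson = GmailApi.Get(listUrl, accessToken);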

get

For each listed email found in the above call, we now call the get API using GET https://www.googleapis.com/gmail/v1/users/userId/messages/id.

The parameter userId is the Gmail id and id is the email id found in the previous call. The optional query parameter format is used with its value set to full. For each email, we retrieve the subject and labels. If the input has a time component, then at this stage we can further filter based on it, using the internalDate property of the Users.Message resource.
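A minimal sketch of the get call, again with the GmailApi.Get helper; id is assumed to be one message id parsed out of the list response.

string getUrl =
    "https://www.googleapis.com/gmail/v1/users/" + userId +
    "/messages/" + id + "?format=full";

// The response carries payload headers (subject), labelIds and internalDate.
string messageJson = GmailApi.Get(getUrl, accessToken);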

Encryption

Encrypt with Rijndael

For encryption, Rijndael is used. The password and salt are taken as input, and a 256-bit key is generated from them. We are using RijndaelManaged with the default 128-bit block size; hence, it is AES compatible. Since this is a symmetric key, the same key is used for both encryption and decryption. And the same key is used for all the emails.
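A minimal sketch of the key generation and encryption. Using Rfc2898DeriveBytes (PBKDF2) to derive the key from the password and salt is an assumption here; the original only states that a key is generated from the two. Note that Rfc2898DeriveBytes itself requires the salt to be at least 8 bytes, matching the input requirement above.

using System.IO;
using System.Security.Cryptography;
using System.Text;

static class Crypto
{
    public static byte[] Encrypt(string plainText, string password, string salt)
    {
        using (var derive = new Rfc2898DeriveBytes(password, Encoding.UTF8.GetBytes(salt)))
        using (var rijndael = new RijndaelManaged()) // default 128-bit block size: AES compatible
        {
            rijndael.Key = derive.GetBytes(32); // 256-bit key
            rijndael.IV = derive.GetBytes(16);  // 128-bit IV
            using (var encryptor = rijndael.CreateEncryptor())
            using (var ms = new MemoryStream())
            {
                using (var cs = new CryptoStream(ms, encryptor, CryptoStreamMode.Write))
                using (var writer = new StreamWriter(cs))
                    writer.Write(plainText);
                return ms.ToArray();
            }
        }
    }
}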

get with format raw

For each email, we make another call to the get API, this time with format set to raw. Here we are duplicating the read API call; however, this way we can directly get the raw property of the Message resource, which is necessary to make the insert API call. We don't need to construct the raw property on our own.

First, we decode the raw found above. Then we search in it for the subject (collected earlier, using the get API with format full). Starting from the first character of the subject, we encrypt the rest of raw. That encrypts the subject and body parts, along with some more data.

Regarding encryption, we could do it for the whole of raw, or only for parts like subject, body and so on. As mentioned above, for simplicity, we have encrypted everything starting from the subject. We have also written functions, and tested, that we can correctly decrypt the encrypted data.

While inserting the email into the Gmail folder, it would have been better if we could somehow indicate that the email is encrypted. That way, we could avoid encrypting an email more than once, or identify that a certain email is in encrypted form.

So, in short, the data in the raw field of the Message resource is decoded and converted to a string, encryption is applied on that plain text, and the result is converted back to base64url encoded form for insert.
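A minimal sketch of that round trip, using the Crypto.Encrypt sketch from earlier. The padding fix-up is needed because raw is base64url encoded without padding; the subject is assumed to be present in the decoded text.

using System;
using System.Text;

static string EncryptRaw(string raw, string subject, string password, string salt)
{
    // base64url -> base64: swap the URL-safe characters and restore padding.
    string s = raw.Replace('-', '+').Replace('_', '/');
    if (s.Length % 4 == 2) s += "==";
    else if (s.Length % 4 == 3) s += "=";
    string text = Encoding.UTF8.GetString(Convert.FromBase64String(s));

    // Encrypt everything from the first character of the subject onward.
    int start = text.IndexOf(subject, StringComparison.Ordinal);
    string result = text.Substring(0, start) +
        Convert.ToBase64String(Crypto.Encrypt(text.Substring(start), password, salt));

    // Back to base64url encoded form for the insert call.
    return Convert.ToBase64String(Encoding.UTF8.GetBytes(result))
        .Replace('+', '-').Replace('/', '_');
}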

Insert emails back

insert

The encoded raw, along with the labelIds collected earlier, is now used to make an insert API call using POST https://www.googleapis.com/gmail/v1/users/userId/messages.

Once again, userId is the Gmail id. Only raw and labelIds are used in the request body, the latter (labelIds) to make sure that the inserted email ends up in the same folder. The threadId parameter could easily have been used as well; however, we chose not to include it.
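A minimal sketch of the insert call; the JSON body is composed by hand for brevity, and labelIdsJson is an assumed, pre-formatted JSON array such as ["INBOX"].

using System.IO;
using System.Net;
using System.Text;

static string Insert(string userId, string accessToken, string encodedRaw, string labelIdsJson)
{
    var request = (HttpWebRequest)WebRequest.Create(
        "https://www.googleapis.com/gmail/v1/users/" + userId + "/messages");
    request.Method = "POST";
    request.ContentType = "application/json";
    request.Headers.Add("Authorization", "Bearer " + accessToken);

    // Only raw and labelIds go into the request body.
    string body = "{\"raw\":\"" + encodedRaw + "\",\"labelIds\":" + labelIdsJson + "}";
    byte[] bytes = Encoding.UTF8.GetBytes(body);
    using (var stream = request.GetRequestStream())
        stream.Write(bytes, 0, bytes.Length);

    using (var response = request.GetResponse())
    using (var reader = new StreamReader(response.GetResponseStream()))
        return reader.ReadToEnd();
}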

We have not used the upload URI. This is because we have only dealt with metadata and not (email) attachments, once again for simplicity.

Output

When the application runs, it asks the user to press a key. Once a key is pressed, it starts working asynchronously. Pressing a key again stops it.

The application first opens the Google server in a browser for the user to authenticate/authorize. Once that is done, it prints out how many emails it is going to read/encrypt/insert, based on the input parameters. It might take a while before it starts printing a line for each of the inserted emails.

This is how it looks in the Gmail folder.

[Screenshot: encrypted emails in the Gmail folder]

The GMail API client libraries could be a good starting point for writing such an application.

GitHub: GMail API


Interpreting IIS Internals

22nd Friday Fun Session (Part 2) – 16th Jun 2017

We are trying to understand how an HTTP request is processed by a .NET web application hosted in IIS, in various scenarios, with a focus on synchronization of the processing of that request. To be precise, we are interested in the thread and process contexts involved while serving an HTTP request.

Client request

Multiple clients across the globe, using their respective browsers, send HTTP requests to the web server.

Web server

All these HTTP requests end up in the IIS web server hosting the application.

IIS kernel mode

In the web server, an HTTP listener listens for HTTP requests. It is a kernel mode device driver, part of the network subsystem, and part of IIS – the kernel mode of IIS, to be precise – called the HTTP protocol stack (HTTP.sys).

HTTP.sys, as a forwarder, might directly pass the request to the right worker process; or, as a request queuer, it queues the request until a worker process picks it up. Once the response for that request reaches it, it returns the response back to the client browser. As a kernel level cacher, it also does some kernel level caching and, if possible, returns the cached output directly, without involving any user level processing.

Worker process

The worker process, w3wp.exe – an executable, a process to the OS – runs inside IIS user mode. It is a little different from other processes in the operating system (OS), in the sense that it can contain multiple application domains.

Application domain

An application domain, represented by an AppDomain object, is a .NET concept: a virtual process within a process. As said, it runs within a worker process. One application domain does not share its static variables etc. with another application domain running within the same worker process.

Usually, all static variables etc. of a process are visible to everything within that process. That is not the case for the worker process. This is how the worker process is special: one or more application domains run inside it, as if each of them were a separate process, providing isolation. So what we are usually used to thinking of as per process is, inside the worker process in the world of IIS, actually per application domain.

Why application domains, you might ask. Well, a web server can host many applications. Creating one worker process for each of them would end up creating many processes, which is quite expensive. Creating an application domain for each of them and putting them together inside a single process is much cheaper.

Note that, one of the application domains can be stopped without affecting other application domains inside a worker process.

Worker thread

When the worker process receives a request, it uses a worker thread to process that request. If two users send two requests to the same web application, both can be simultaneously executed by two worker threads and served.

At any point in time, a worker thread can execute within only a single application domain. An application domain can have multiple worker threads. But these worker threads are not confined to a single application domain; rather, they belong to the worker process. When a thread is done serving a request for a particular application domain, it is freed. At a later point in time, the same thread can be used to serve another request, belonging to a different application domain.

Web application

We develop web applications, and we are interested to know how they end up running in the IIS environment. Application domains are associated with web applications. One web application typically has one application domain running inside a worker process. One worker process might be running many application domains, each supporting a separate web application.

Application pool

An application pool is the container for (web) applications. Every application has to be assigned to a single application pool. A number of web applications can be assigned to a single application pool, but, as mentioned earlier, a single application cannot be assigned to multiple application pools. All applications belonging to an application pool share the same configuration.

The worker process runs inside the application pool.

An application pool can be recycled or restarted. Applications belonging to other application pools are not affected by this. Thus, application pools provide isolation.

So we see that a number of applications are contained within an application pool, and a worker process running inside that application pool runs a number of application domains, each application domain serving a different web application. Thus, we have two different levels of isolation.

Web garden

How many worker processes can there be inside an application pool? Well, it is configurable. By default, only one worker process runs inside an application pool, supporting multiple web applications by creating one application domain for each of the applications. And all these application domains run as if they were separate processes within that single worker process.

However, an application pool can be configured to run more than one worker process. In that case, it is called a web garden. In this situation, multiple worker processes run in a single application pool, each of them, once again, running multiple application domains, each belonging to one application.

In this scenario, each of the worker processes can have its own application domain for the same application. In other words, for a certain web application, we can have multiple application domains, each running in a separate worker process, all in the same application pool. To be precise, one application or web site can have multiple instances running on a single web server, if web garden is enabled.

This is important, as it renders the use of static variables, application-wide variables etc. problematic in a web application.

Web farm

When one web server is not enough to serve the clients' requests, we need more of them to host the same application/web site. We call this a web farm.

A load balancer sits in front of the web servers, and its IP is exposed to the external world. HTTP requests come to the load balancer first, and it then distributes the load to the different web servers.

The individual web servers can share the same database, or use a replicated/partitioned database.

In a nutshell

Single server, application pool running one worker process

So we see that multiple HTTP requests for the same web application can be simultaneously served by multiple threads. Those threads can be executed within a single application domain belonging to a single worker process. This happens when only one worker process is set to run for an application pool.

[Image: single server, one worker process]

In the above image, we see IIS having two parts – kernel mode and user mode. HTTP.sys is in kernel mode, forwarding HTTP requests to the 3rd application pool, belonging to application X. We further see that a single worker process inside that 3rd application pool is running two application domains, X and Y. Two threads within application domain X – Thread 1 and Thread 2 – are serving the two requests, respectively. The responses go back to the client browsers through HTTP.sys.

Single web server, application pool running more than one worker process, called web garden

Or the threads can come from different application domains associated with the same web application or web site, running inside different worker processes, all contained within the same application pool. This can happen in a web garden configuration, where multiple worker processes are allowed to execute within a single application pool. We can understand that any locking mechanism that works within a single process would not work in this setup. We would need to implement an inter-process synchronization mechanism, sketched below, if our application is deployed in a web garden.
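A minimal sketch of one such mechanism: a named mutex, which synchronizes across worker processes on one machine (the mutex name is illustrative). As the web farm scenario below shows, this still would not help across web servers.

using System.Threading;

static void UpdateSharedResource()
{
    // A named mutex is visible to every process on the machine,
    // so it also covers multiple worker processes in a web garden.
    using (var mutex = new Mutex(false, @"Global\MyAppSharedResource"))
    {
        mutex.WaitOne(); // blocks until no other worker process holds it
        try
        {
            // read-modify-write the shared state here
        }
        finally
        {
            mutex.ReleaseMutex();
        }
    }
}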

[Image: web garden]

In the above image, showing a web garden, two requests are being served by two worker threads belonging to two application domains (both associated with the same web application), each running in a separate worker process, both worker processes contained within the same application pool.

Multiple web servers behind load balancer, called web farm

Or the threads can come from different physical web servers. This can happen in a web farm scenario, where multiple web servers sit behind a load balancer. An inter-process synchronization mechanism works across processes within one OS; since we have multiple web servers here, each running its own OS, such a mechanism would not work for application-wide synchronization.

[Image: web farm]

In the above image, showing web farm, two requests are being served by two worker threads, each running in a separate web server.


Incision into Isolation Levels

22nd Friday Fun Session (Part 1) – 16th Jun 2017

We are trying to see how an isolation level – serializable, to be precise – can help us implement a synchronization mechanism for a web application.

Let us start with ACID

ACID stands for Atomicity, Consistency, Isolation and Durability, and is detailed in the ISO standard. Database systems implement it so that a sequence of operations, called a transaction, can be perceived as a single logical operation.

Atomicity

Either all operations of a transaction are done, or none is. Logging with undo capability can be used to achieve this.

Consistency

Given that all database constraints (foreign key, unique etc.) are valid at the beginning of a transaction, the same should hold at the end of the transaction as well.

Durability

All changes done by a committed transaction must go to storage, even if the database system crashes in the middle. Logging with redo capability can be used to achieve this.

Why Logging?

We talked about logging, and then redo/undo, in the previous sections. Why logging? Well, when a transaction changes data, the changes are not immediately written to disk. Rather, those pages are marked as dirty, and lazy writing flushes them to disk later. Writing to disk instantly is expensive; logging the operation, where the log itself is written to disk immediately, is much cheaper.

However, performance, while important, is not the real reason. Logging is essential to ensure atomicity and durability. Any modification must be written to the log before being applied to the actual database. This is known as write-ahead logging (WAL). It makes sure that, in case of a crash (say, 2 out of 5 operations of a transaction are written to database storage and then the system crashes), the system can come back, read the log, and figure out what was done and what remained to be done. By redoing and undoing the necessary operations, durability and atomicity are ensured.

Focus on the I of ACID

Today we focus on the I of ACID: isolation. When we write a transaction, we write the operations inside it thinking that nobody else is doing anything to the data we are dealing with. The isolation property defines such an environment, and database systems implement it.

So, why do we need such an environment? Well, without it, in a highly concurrent transaction execution environment, our understanding of the data we are working with would not hold true, as others would change the data simultaneously. This happens largely due to three problems: dirty read, non-repeatable read, and phantom read.

However, creating such an isolated environment is expensive in terms of performance. Hence, a number of other isolation levels are introduced, giving various degrees of isolation rather than a complete isolation.

The ISO standard defines the following isolation levels, which we will describe in terms of two transactions, T1 and T2, that execute in parallel.

Read Uncommitted

Transaction 1 (T1) updates salary for Joe
Transaction 2 (T2) reads updated salary for Joe
T1 aborts transaction

We see that T2 read dirty data (T1 did not commit the updated salary) and went ahead with the decisions/operations inside it based on that data, which was of course wrong.

As the name implies, Read Uncommitted reads uncommitted data, also called dirty data, which may be wrong. So we see that this isolation level does not guarantee the isolation property; it is an example of a weaker isolation level. Note that along with the dirty read problem, it also has the non-repeatable read and phantom read problems.

Read Committed

The next better isolation level, as the name Read Committed implies, reads only committed data and solves the dirty read problem encountered in Read Uncommitted. Let us see this through an example. Now T1 is running in the Read Committed isolation level.

T1 reads the salary of Joe
T2 updates the salary of Joe and commits
T1 reads the salary of Joe

So we see T1 reads the salary of Joe twice, and it is different in the two cases. In the second case, it reads the data that was modified and committed by T2. No more dirty read by T1. Good.

But the isolation property expects each transaction to happen in complete isolation, meaning each could assume it is the only transaction taking place. Joe's salary was not updated by T1; then why should T1 see different data when it reads a second time?

So we see that T1 could not repeat a read (the same salary for Joe). Hence, this problem is called non-repeatable read. Read Committed, like Read Uncommitted, is another weaker isolation level. Again, note that along with non-repeatable read, it also has the phantom read problem.

Repeatable Read

To solve the non-repeatable read problem, the Repeatable Read isolation level comes into the picture. Once T1 reads the salary of Joe, no other transaction should be able to modify Joe's data if we run T1 in the Repeatable Read isolation level.

If we repeat the transactions we ran earlier, we would see T2 wait for T1 to finish first, because T1 takes the right locks on the rows it reads so that others cannot delete/modify them. Repeatable Read solves the non-repeatable read problem, as the name implies.

However, that won't stop new data insertion. After all, Repeatable Read puts the necessary locks only on the data it has read, not on future data. Hence, we will see ghost/phantom data. Let's see an example.

T1 reads 4 rows in employee table
T2 inserts one record in employee table and commits
T1 reads 5 rows in employee table

We see that T1 sees a phantom row (the row newly inserted by T2) in its second read of the employee table. Repeatable Read is, once again, a weaker isolation level.

Serializable

So far, we have seen isolation levels providing different degrees of isolation, but not what the I of ACID really defines as isolation. We also know that the weaker isolation levels were introduced to avoid the performance penalty of executing transactions in complete isolation. But at times it becomes an absolute necessity to execute a transaction in full isolation. Serializable comes into the picture to implement that complete isolation. In the serializable isolation level, it is ensured that we get the effect as if all transactions happened one after another, in the order they started.

So if we rerun the earlier two transactions, we would see T2 waiting for T1 to complete first. Hence, both reads of T1 would see 4 rows. Only after T1 is done would T2 insert a new row.

This solves all the three problems: dirty read, non-repeatable read and phantom reads.

At this point, it should be mentioned that the ISO standard expects serializable, not serial, execution. The end result of a serializable execution is equivalent to executing the transactions one after another. Serializable does not necessarily mean executing transactions one after another – just that the end result is the same as if they had executed serially.

MS SQL Server implementation

With Serializable, we are done with the 4 ISO transaction isolation levels. MS SQL Server implements all of them. In addition, it implements a fifth one, called Snapshot.

Snapshot

It is an isolation level that solves all three problems, just like serializable. So, why do we have two isolation levels doing the same thing? What special thing is snapshot doing?

If we closely observe the earlier serializable isolation level, implemented using locks, we see that it is too pessimistic. T2 has to wait for T1 to finish. But T2 could be executed simultaneously; after all, T1 is only reading, not modifying any data.

Snapshot comes into the picture with optimistic concurrency control. It uses multiversion concurrency control (MVCC) to implement this. Every transaction starts with the latest committed copy of the data it sees and keeps on executing the operations inside it.

So, for the last example we saw, in snapshot, T1 would read 4 rows in both reads; after all, it has its own private copy. On the other hand, T2 would start with its own copy and add a row in the middle. At the end, it would see there was no conflict, because no other transaction, T1 in this case, did anything conflicting. So, the two transactions are simultaneously executed without violating the requirements of a serializable isolation level.

What would happen if both T1 and T2 modified the same data, creating a conflict? Well, both transactions started with their own copies, hoping that at the end there would be no conflict; hence it is called optimistic. But if there is a conflict, the one that committed first wins; the other fails and rolls back.

By the way, even though snapshot behaves much like serializable, the write skew anomaly is still present in it, and so it cannot be called serializable per the ISO definition.

The database needs to be configured to use this isolation level, which at times takes a while. Also, since each transaction uses its own private copy, it is resource intensive.

At a glance

Isolation level     Dirty read    Non-repeatable read    Phantom read
Read Uncommitted    Possible      Possible               Possible
Read Committed      Not possible  Possible               Possible
Repeatable Read     Not possible  Not possible           Possible
Snapshot            Not possible  Not possible           Not possible
Serializable        Not possible  Not possible           Not possible

Default transaction isolation level for MS SQL Server

Read Committed is the default isolation level for MS SQL Server, chosen with performance in mind. So if all along you had thought that, by default, you were getting the I of ACID from SQL Server, you were wrong. You are living with non-repeatable reads and phantom reads unless you have explicitly changed the isolation level or used locks.

How to set isolation level in MS SQL Server?

We can set one isolation level at a time using the following command:

SET TRANSACTION ISOLATION LEVEL 
   { READ UNCOMMITTED 
   | READ COMMITTED 
   | REPEATABLE READ 
   | SNAPSHOT 
   | SERIALIZABLE 
   } 
[ ; ]

As mentioned earlier, to set snapshot isolation level some database specific configuration is required before executing the above command.

How long isolation level remains active?

Once set, it lasts for the session, whose duration is largely controlled by the component that creates it. When another session starts, it starts with the default, Read Committed.

Transaction

It is obvious, yet important to remember, that isolation levels work on transactions. After all, it is called a transaction isolation level. If we want to isolate (using an isolation level) the execution of a set of operations as a single logical operation, they have to be wrapped within begin and commit transaction statements, as sketched below.
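A minimal sketch (the connection string, table and column names are illustrative) of wrapping a set of operations in a serializable transaction from C# using ADO.NET:

using System.Data;
using System.Data.SqlClient;

static void GiveJoeARaise(string connectionString)
{
    using (var connection = new SqlConnection(connectionString))
    {
        connection.Open();
        // Everything under this transaction runs at serializable isolation.
        using (var transaction = connection.BeginTransaction(IsolationLevel.Serializable))
        {
            var command = connection.CreateCommand();
            command.Transaction = transaction;
            command.CommandText =
                "UPDATE Employee SET Salary = Salary + 100 WHERE Name = 'Joe'";
            command.ExecuteNonQuery();
            transaction.Commit(); // all operations succeed together, or none does
        }
    }
}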


Heartbleed Bug

Let us start by HTTP

When we browse the internet by typing a web address (URL/server address) starting with http in a browser (client), we are using the HTTP protocol. It functions as a request-response protocol in a client-server model: the client (say, a browser) sends a request and the server sends back a response. Note that a browser is just one type of client; a client does not necessarily have to be a browser.

HTTP is built over TCP

The HTTP protocol is implemented on top of a reliable transport layer protocol, mostly TCP.

HTTP has security problem

However, an attacker can sit on the line between the client and the server and read what transpires between the two. Not only that, an attacker can impersonate the server, i.e., act as if it is the server, and collect client information that might be sensitive.

There comes TLS/SSL to build HTTPS

Hence, it is important to make sure that the client is talking to the real server. It is also important that all communication between the client and the server is kept secret between the two. TLS/SSL comes into the picture here: it implements transport layer security to protect the HTTP line, and HTTP is then called HTTPS. When you type a URL starting with https, you are using a secured HTTP line.

TLS/SSL uses asymmetric encryption

Asymmetric encryption, also known as public-key cryptography, uses a key pair: a private key and a public key. The private key remains known only to the server, while the public key is open/known to all. Data can be encrypted with either of the two keys, and then only the other key can decrypt it.

There are two things in public-key cryptography:

  1. Public key encryption – public key encrypts, private key decrypts
  2. Digital signature – private key encrypts, public key decrypts

When a server gets a certificate (X.509) from a certificate authority (CA), it essentially generates/gets a key pair. The certificate then contains, among other identity information, the public key of the website and a signature of the CA that issued it (i.e., signed by the private key of the CA). When a browser talks to a website, the website first has to provide its certificate.

The browser has a list of trusted CAs and their public keys. The browser can use the public key of the right CA (as mentioned in the certificate) to verify the certificate (signature).

And symmetric encryption

Now that the client trusts the website, it uses the public key of the website to encrypt a key that it would like to use as the symmetric key for further communication. Well, in reality there can be a number of steps here that essentially decide how the symmetric key is generated/negotiated/used. Typically, a disposable session key is generated from a master secret (itself generated at the beginning of secure line creation) to implement forward secrecy. That means, even if at some point a session key is compromised, only the data transferred using that session key is compromised; the attacker won't be able to figure out the next session key that will be generated after a while.

Unlike asymmetric encryption, where either of the private and public keys can encrypt data that only the other key can decrypt, in symmetric encryption a single key does both the encryption and decryption. Symmetric encryption is better for performance and key distribution, and helps implement forward secrecy by continuously changing session keys.

Here comes OpenSSL

So if you want a server that supports TLS/SSL, you need a component that implements it. OpenSSL is such an open source implementation. There are many other implementations, like GnuTLS etc.

But TCP has an issue

So now we have an HTTPS connection built over TCP, where the client sends a request and the server gives back a response. But how long should the client wait for the server to respond? At times we do need to wait in a blocking call till the response comes back. But TCP can at times fail to detect whether the other side is still alive, especially if the other side closes the connection without following the normal tear-down process.

Also, at times, firewalls sitting on the line close idle TCP connections.

Here comes heartbeat

To make sure the client is not waiting for a response from a server sitting on the other side of a closed connection, and to make sure firewalls are not closing the connection when it is idle, a packet is periodically sent to the server, and the server sends back a response. This is the heartbeat.

OpenSSL’s heartbeat implementation went wrong

OpenSSL's heartbeat implementation went wrong, and this bug was introduced in 2012. A heartbeat request carries a payload and a payload-length field; the buggy code echoed back payload-length bytes without verifying that the payload was actually that long. It could thus read up to around 64KB of the server process's memory and send it back to the client in each heartbeat response. A client could construct the message in a way that, to some extent, dictated where/how much it wanted to read from the server. That would in turn allow the client to steal sensitive information to exploit.

How to exploit Heartbleed bug?

A client, as said, need not be a browser. One could write one's own HTTPS client and construct the heartbeat message oneself. By using multiple, and especially simultaneous, heartbeat messages, one can accumulate a large section of the memory of the server process. After analysing the data, one can possibly even recover the server's private key. If the server belongs to an e-commerce site, one could possibly get the credit card information, user ids and passwords of the users who were using that site at the time.

Affected areas

Even though we have so far mostly focused on web servers, they are not the only thing affected. OpenSSL (the heartbeat extension) is also used in email servers, VPNs and other TLS/SSL secured client-server systems. Also, it is not only the server but also the client that can be affected.

Solution

There are a few sites, like ssllabs, where one can paste a URL and check whether the site is vulnerable. OpenSSL has already released a fixed version. If you own a server that is vulnerable, it is better to assume that it has been compromised; it is then best to change the certificate. If you have an id and password on a server that was vulnerable, it is better to change your password.
