What Is the Difference Between HTTP/1, HTTP/1.1 and HTTP/2?

February 10, 2023


HTTP, the protocol for transmitting data over the web, has gone through several generations of evolution since its inception in 1989. That evolution has produced multiple versions: HTTP/1, HTTP/1.1, HTTP/2, and the most recent one, HTTP/3.

In this article, we will briefly cover the similarities and differences between the first two versions, HTTP/1 and HTTP/1.1, and then look at how HTTP/2 differs from its predecessors. This will give you a better picture of how HTTP has evolved.

When were they published?

HTTP (HyperText Transfer Protocol) is the basis for communicating data on the Internet.

Development of HTTP began in 1989. After several iterations, HTTP/1.1 was standardized in June 1999 with the publication of RFC 2616. HTTP/2 (originally named HTTP 2.0) was officially published in May 2015 as RFC 7540 and replaced HTTP/1.1 as the current HTTP standard.

As of October 2021, 46.5% of websites worldwide support HTTP/2 (wiki).

Differences between HTTP/1 and HTTP/1.1

Before reading further, note that HTTP/1.1 exists because HTTP/1 has some less-than-ideal behaviors. Rather than memorizing the differences, it helps to understand each one from the perspective of what problem HTTP/1.1 solves.

Persistent connection (keep-alive)

HTTP/1 requires a new TCP connection to be established for each request, which wastes bandwidth and adds latency.

On the other hand, HTTP/1.1 uses persistent connections, also known as Keep-Alive, by default. This allows multiple HTTP requests to be sent over the same TCP connection, reducing the overhead of repeatedly establishing new connections.
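The effect of keep-alive can be seen with Python's standard library alone. The sketch below (the throwaway local server is an assumption for illustration, not part of the article) sends two GET requests through a single `HTTPConnection`, so both ride the same TCP connection with no second handshake:

```python
import http.client
import http.server
import threading

class Handler(http.server.SimpleHTTPRequestHandler):
    protocol_version = "HTTP/1.1"   # enable persistent connections

    def log_message(self, *args):   # keep the demo output quiet
        pass

# Throwaway local server on an ephemeral port, for illustration only.
server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_port)

statuses = []
for _ in range(2):
    conn.request("GET", "/")        # both requests reuse one TCP connection
    resp = conn.getresponse()
    resp.read()                     # drain the body so the connection is reusable
    statuses.append(resp.status)

print(statuses)                     # [200, 200]
conn.close()
server.shutdown()
```

With `protocol_version = "HTTP/1.0"` instead, the server would close the connection after each response and the client would have to reconnect.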

Status code 100 (Continue)

In some cases, the server rejects a request sent by the client. Because the request body may be included when the request is sent, every rejected request wastes additional bandwidth.

HTTP/1 has no mechanism to avoid this kind of waste; HTTP/1.1's 100 (Continue) status code provides one. Specifically, HTTP/1.1 allows the client to first send a request containing only headers and no body. After the server confirms there is no problem, it responds with status code 100 (Continue).

After receiving 100 (Continue), the client sends the request body. If anything other than 100 (Continue) is received, the client knows the server will not accept the request, so it does not need to send the body at all, reducing wasted bandwidth. (See this section of the RFC for details.)
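The two-step exchange can be reproduced with a raw socket; Python's `BaseHTTPRequestHandler` answers `Expect: 100-continue` with an interim 100 response by default. The `/upload` path and the local echo server below are assumptions for the sketch:

```python
import http.server
import socket
import threading

class Handler(http.server.BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"   # Expect: 100-continue requires HTTP/1.1

    def do_POST(self):
        body = self.rfile.read(int(self.headers["Content-Length"]))
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)      # echo the body back

    def log_message(self, *args):
        pass

server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

s = socket.create_connection(("127.0.0.1", server.server_port))
# Step 1: send only the headers, asking permission before sending the body.
s.sendall(b"POST /upload HTTP/1.1\r\n"
          b"Host: 127.0.0.1\r\n"
          b"Content-Length: 5\r\n"
          b"Expect: 100-continue\r\n"
          b"\r\n")
interim = s.recv(1024).decode()     # "HTTP/1.1 100 Continue" if accepted
# Step 2: the server agreed, so now transmit the body.
s.sendall(b"hello")
final = s.recv(1024).decode()
print(interim.splitlines()[0])
print(final.splitlines()[0])
s.close()
server.shutdown()
```

If the server had rejected the headers (say, with 401 or 413), the client could close the connection without ever transmitting the five-byte body.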

Caching

For caching, HTTP/1 mainly relies on the If-Modified-Since and Expires headers, both of which are time-based. HTTP/1.1 introduces more caching strategies, such as ETag, If-Unmodified-Since, If-Match, and If-None-Match, through which cache validation can be optimized. (The use of these headers also comes up often in interviews.)
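ETag-based revalidation works like this: the server tags a response with an opaque version string, and the client echoes it back in If-None-Match; if the resource is unchanged, the server answers 304 with no body. A minimal sketch, assuming a made-up local server with a hard-coded tag:

```python
import http.client
import http.server
import threading

ETAG = '"v1"'                       # opaque version tag for the resource
BODY = b"cached resource"

class Handler(http.server.BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"

    def do_GET(self):
        # If the client's cached copy matches, skip the body entirely.
        if self.headers.get("If-None-Match") == ETAG:
            self.send_response(304)
            self.send_header("ETag", ETAG)
            self.end_headers()
            return
        self.send_response(200)
        self.send_header("ETag", ETAG)
        self.send_header("Content-Length", str(len(BODY)))
        self.end_headers()
        self.wfile.write(BODY)

    def log_message(self, *args):
        pass

server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.request("GET", "/data")
first = conn.getresponse()
first.read()
etag = first.getheader("ETag")      # remember the server's version tag

# Revalidate with the saved ETag: unchanged content costs no body bytes.
conn.request("GET", "/data", headers={"If-None-Match": etag})
second = conn.getresponse()
second.read()

print(first.status, second.status)  # 200 on first fetch, 304 on revalidation
conn.close()
server.shutdown()
```

Unlike If-Modified-Since, the comparison is exact rather than time-based, so it still works when a file is rewritten with identical content or when clocks disagree.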

Host header

HTTP/1.1 added the Host request header to specify the domain name of the server. In HTTP/1, each server was assumed to be bound to a unique IP address, so the URL in a request did not carry a hostname. With the evolution of virtual hosting, however, one server can host multiple websites that share the same IP address. The Host header lets requests be routed to the right website on a shared server.
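Name-based virtual hosting is easy to simulate: one server, one port, and routing decided purely by the Host header. The site names below (`site-a.example`, `site-b.example`) are hypothetical:

```python
import http.client
import http.server
import threading

# Two hypothetical "sites" sharing one IP address and one port.
SITES = {"site-a.example": b"welcome to site A",
         "site-b.example": b"welcome to site B"}

class Handler(http.server.BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"

    def do_GET(self):
        # Route by the Host header, as a name-based virtual host does.
        name = self.headers.get("Host", "").split(":")[0]
        body = SITES.get(name, b"unknown site")
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass

server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

bodies = []
for host in ("site-a.example", "site-b.example"):
    conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
    conn.request("GET", "/", headers={"Host": host})
    bodies.append(conn.getresponse().read())
    conn.close()

print(bodies)
server.shutdown()
```

Both requests hit the same IP and port; only the Host header determines which site answers, which is exactly why HTTP/1.1 made the header mandatory.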

More request methods

Compared with HTTP/1, HTTP/1.1 added many new request methods. The commonly used PUT, PATCH, DELETE, CONNECT, TRACE, and OPTIONS were all added in HTTP/1.1.

Comparison of HTTP/2 and HTTP/1.1

Multiplexing to solve head-of-line blocking

HTTP/1.1 introduced the pipelining mechanism, which lets the client send multiple HTTP requests over the same TCP connection without waiting for the previous response to return. However, the server must return responses in the order the requests were received, so the client can match each response to its request.

In practice this mechanism proved difficult to implement correctly, so all major browsers disable it by default. (See this stackoverflow post for more detail.) Pipelining also suffers from head-of-line (HOL) blocking: if any request takes a long time or loses a packet in transit, it blocks the entire pipeline behind it.

HTTP/2 introduces a multiplexing mechanism, which allows multiple requests and responses to be in flight simultaneously over the same TCP connection, without waiting for earlier requests to complete. This resolves the head-of-line blocking issue at the HTTP level. (Remark: the TCP layer still has its own head-of-line blocking problem, which is solved in HTTP/3.)

Priority request order

In HTTP/2, all the data packets of a request or response form a data stream, and each stream has a unique number (stream ID). Every packet carries the ID of the stream it belongs to, and the client can also assign a priority to each stream: the higher the priority, the sooner the server responds.

Header compression

Before HTTP/2, headers were mostly left uncompressed for security reasons: the compression algorithms then in use were vulnerable to the CRIME attack. HTTP/2 uses the HPACK algorithm, which avoids this attack and therefore allows headers to be compressed. Because headers are compressed, the amount of data transmitted is greatly reduced, lowering the bandwidth burden and increasing transmission speed.

Specifically, HPACK uses an index table that defines commonly used HTTP headers. When a header appears in the table, the sender only transmits its index position instead of the complete header.
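The core indexing idea can be sketched in a few lines. This is a toy illustration, not the real HPACK wire format or the full RFC 7541 table: common header pairs live in a shared static table, so a request sends a small index instead of the full name/value pair.

```python
# A few entries of the RFC 7541 static table (indices are the real ones).
STATIC_TABLE = {
    2: (":method", "GET"),
    4: (":path", "/"),
    7: (":scheme", "https"),
}

def encode(headers, table):
    """Replace each header pair found in the table with its index."""
    inverse = {pair: idx for idx, pair in table.items()}
    return [inverse.get(pair, pair) for pair in headers]

def decode(encoded, table):
    """Expand indices back into full header pairs."""
    return [table[item] if isinstance(item, int) else item
            for item in encoded]

request = [(":method", "GET"), (":path", "/"), ("x-custom", "abc")]
wire = encode(request, STATIC_TABLE)
print(wire)                         # [2, 4, ('x-custom', 'abc')]
assert decode(wire, STATIC_TABLE) == request
```

Real HPACK adds a per-connection dynamic table (so repeated custom headers also shrink to indices) and Huffman coding for literal strings, but the table-lookup principle is the same.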

Server push

HTTP/2 allows the server to push data to the client proactively, which helps reduce the number of client requests. For example, a browser used to request index.html and then style.css to render a complete page; with server push, when the browser requests index.html, the server can also send style.css along with it, so a single round of HTTP requests fetches all the resources needed.

