
The Akamai Blog

With HTTP/2, Akamai Introduces Next Gen Web

In early 2012 something remarkable happened: a call went out for proposals for a new version of HTTP. From the perspective of an Internet whose warp and weft seemingly shift on a daily basis, this may appear to be just one change amongst many, but because of the importance of HTTP in our daily lives, its impact is difficult to overstate. If you are reading this, it is likely your current job and livelihood would not exist without HTTP. And now, with this call for proposals, the community was about to start work to change and improve that venerable protocol.
Why change? Think about what a hip, high-tech Website was like in 1999, when the HTTP/1.1 specification was published: some text, a few images, perhaps a banner ad. It was simple. Add to that how we accessed the Internet: broadband had only been introduced in the United States a few years earlier, and all of us could hum the connection song of a 56k modem. We trained ourselves to get coffee between hyperlink clicks. And if we did this on the latest and greatest home hardware, we were running a Pentium III processor.

Now fast forward 16 years - an eternity in Internet time. To put it in raw numbers, that is eight Moore's Law cycles! Everything has changed. The biggest change is our expectations: if a page is not instant, it is too slow. On top of that, it should be responsive, it should render perfectly on whatever device we are using at that moment, and the experience should be rich. And mobile - we want all of that to work over a wireless connection on the tiny devices that we carry around in our pockets all over the globe. That is a lot to ask of something designed nearly two decades ago.

So, an effort to create a new version of HTTP -- now known as HTTP/2 -- was kicked off. Its goals were simple and can be paraphrased as "to improve the performance of HTTP by targeting the way the protocol is used today." Put more simply: your Websites will load faster. And now, after three years of work by too many to count, the IETF has approved the draft to be a new standard.

When will HTTP/2 be available?


If you are using an up-to-date Firefox or Chrome, or are running Internet Explorer on an early release of Windows 10, the odds are that you have been using HTTP/2 (or h2 for short) for the past couple of months without realizing it. HTTP/2 Draft 14 has been pushed out and is active on a number of domains. For example, I am drafting this blog post using Chrome on Google Docs, happily beaming at the fact that my keystrokes and mouse clicks are shuttling back to Google over HTTP/2.

HTTP/2 is live on our network and is rolling out towards general availability in 2015. We are committed to making h2 available to all secure delivery customers as a general and automatic upgrade. To say it another way, h2 will be included at no additional cost on your secure contract. Simply by being an Akamai customer your secure domain will speak h2. (HTTP/2 effectively requires the use of TLS, i.e. HTTPS, hence the restriction to secure delivery). Today, h2 is available to a select and growing number of beta customers. Contact us if you are interested in participating in our beta program and getting involved.

If you are a developer and want to start using h2 in your own projects for fun or fortune, check out the implementations listed on the HTTP/2 site. There you can likely find clients, servers, and libraries written in your favorite language. Not there? Create one - we would love nothing more than to add it to the list.

On to the gory details.

What is it?

HTTP/2 has a host of features to help address today's Web usage patterns. The top features are:

  • Multiplexing
  • Header compression
  • Dependencies and prioritization
  • Server push

Multiplexing for HTTP is a short way of saying "requesting and receiving more than one Web element at a time." It is the cure for the head-of-line blocking that is inherent in HTTP/1.1. For example, below is a diagram that shows how requests usually flow over a single connection in HTTP/1.1:

[Figure 1: Serialized request/response flow over a single HTTP/1.1 connection]
Each request from the client must wait until the server's response to the previous request arrives. This serialization can add up to an enormous amount of time when you consider that an average Web page has around 100 objects these days. The problem gets even worse when you consider that any of these requests could stall for a variety of reasons, delaying the whole page download. For this reason, an HTTP/1.1 browser opens multiple connections to a server to achieve some semblance of parallelization. This multiple-socket solution has its own problems and still does not completely fix the head-of-line blocking.
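The cost of that serialization is easy to see with a toy model. The sketch below is a deliberate simplification (it ignores bandwidth, server think time, and TCP setup) but it captures why browsers resort to multiple sockets: each connection pays one round trip per object it serves.

```python
import math

def page_fetch_time(num_objects, rtt, connections=1):
    """Toy model of HTTP/1.1 without pipelining: each request on a
    connection must wait for the previous response, so a connection
    serving k objects costs roughly k round trips. Not a benchmark --
    it ignores bandwidth, slow start, and server processing time."""
    objects_per_connection = math.ceil(num_objects / connections)
    return objects_per_connection * rtt

rtt = 0.05  # assume a 50 ms round trip
print(page_fetch_time(100, rtt, connections=1))  # 5.0 seconds, fully serialized
print(page_fetch_time(100, rtt, connections=6))  # about 0.85 s with 6 sockets
```

Even with the six parallel connections browsers typically open per host, a stalled response still blocks every request queued behind it on that socket.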

HTTP/2 is a binary framed protocol. What that means is that requests and responses are broken up into chunks called frames, each carrying meta information that identifies which request or response it is associated with. This allows requests and responses for multiple objects to overlap on the same connection without causing confusion. An example h2 request flow would look something like this:

[Figure 2: Multiplexed request/response flow over a single HTTP/2 connection]
The client can now send multiple requests at the same time and receive the responses in whatever order the server can produce them. Note that in the above contrived example, the first request takes longer to complete but does not hold up the delivery of the other two objects. This ability means faster page load and render times.
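The meta information that makes this interleaving possible lives in a fixed 9-byte header at the front of every frame, defined in the HTTP/2 specification (RFC 7540, section 4.1): a 24-bit payload length, an 8-bit type, 8 bits of flags, and a 31-bit stream identifier that ties the frame to its request/response. A minimal parser sketch:

```python
def parse_frame_header(data: bytes):
    """Parse the 9-byte HTTP/2 frame header (RFC 7540, section 4.1).

    Layout: 24-bit payload length, 8-bit frame type, 8-bit flags,
    then 1 reserved bit plus a 31-bit stream identifier.
    """
    if len(data) < 9:
        raise ValueError("need at least 9 bytes for a frame header")
    length = int.from_bytes(data[0:3], "big")
    frame_type = data[3]
    flags = data[4]
    stream_id = int.from_bytes(data[5:9], "big") & 0x7FFFFFFF  # clear reserved bit
    return length, frame_type, flags, stream_id

# A 16-byte HEADERS frame (type 0x1) with END_HEADERS (0x4) on stream 3:
header = (16).to_bytes(3, "big") + bytes([0x1, 0x4]) + (3).to_bytes(4, "big")
print(parse_frame_header(header))  # (16, 1, 4, 3)
```

Because the stream identifier travels with every frame, the server can weave frames from different responses together on one connection and the client can reassemble them without ambiguity.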

Header Compression 

When HTTP/0.9 was released, there were no headers. HTTP/1.0 and 1.1 added them, and early on they were modest things. Headers are the meta information the browser sends along with a request to better inform the server of what it wants and what it can accept. This is how, for example, a browser indicates to a server that it is able to handle gzip compression or a WebP image. This is also where cookies are communicated, and those things can get BIG. One characteristic of headers is that they do not change much between requests. Due to the stateless nature of HTTP/1, a browser still needs to advertise support for a given file format or language on every request. This can create a ton (assume 1 byte equals one pound) of redundant bytes.

HTTP/2 uses per-connection ("session" in h2 parlance) state to help solve this problem. Using a combination of lookup tables and Huffman encoding, it can reduce the number of bytes sent in a request quite literally to zero, and over the length of a Web session, compression rates above 90% are common. What does this mean for performance? On the response side of an average Web page, likely not much: the bulk of the bytes are the objects themselves, and even reducing the headers to zero will not make a big dent. But on the request side the results are significant. Take a modest page with 75 objects and assume an average header size of a slim 500 bytes: it might take the browser four TCP round trips just to request the objects! With the same parameters and 90% compression, an h2 browser can send all of the requests in a single round trip. Priceless.
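The round-trip arithmetic above can be reproduced with a quick sketch. The model below assumes an idealized TCP slow start with the classic initial congestion window of 3 segments (per RFC 3390; modern stacks often start at 10, which would lower both counts) - a back-of-envelope illustration, not a measurement.

```python
def round_trips(total_bytes, mss=1460, init_cwnd=3):
    """Count round trips needed to send total_bytes under idealized
    TCP slow start: the congestion window starts at init_cwnd
    segments and doubles each round trip, with no loss. The classic
    initial window of 3 segments (RFC 3390) is assumed here."""
    trips, cwnd, sent = 0, init_cwnd, 0
    while sent < total_bytes:
        sent += cwnd * mss
        cwnd *= 2
        trips += 1
    return trips

print(round_trips(75 * 500))  # 75 requests at 500 bytes each: 4 round trips
print(round_trips(75 * 50))   # same headers at 90% compression: 1 round trip
```

The point survives any particular choice of window size: shrinking request headers by an order of magnitude collapses several round trips of pure overhead into one.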

Dependencies and Prioritization

Multiplexing and header compression are phenomenal, but they create a new problem. Browsers are sophisticated beasts these days. They take great pains to make certain they ask for the most important objects first. For example, the CSS is critical to determining the page layout, but a logo in the footer of a page is not. If, in the new model, a browser simply requests everything at the same time and allows the server to return objects as quickly as possible, there will ironically be a reduction in page performance: although everything may be faster overall, the objects critical to page rendering are not necessarily reaching the browser first. Rather than push the problem onto the browser with a flippant "be careful what you wish for," the designers built the ability to address it into the protocol. By communicating to the server which objects depend on which other objects, and listing the priorities of those objects, the browser lets the server make certain the critical data is delivered right away.
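The dependency information forms a tree: each stream names a parent it depends on, plus a weight relative to its siblings. The sketch below is a much-simplified scheduler (real h2 servers interleave frames proportionally by weight rather than finishing one stream before starting the next), and the example page objects and weights are invented for illustration.

```python
from collections import defaultdict, deque

def delivery_order(parents, weights):
    """Toy priority scheduler in the spirit of the h2 dependency
    tree: parents maps each stream to the stream it depends on
    (0 is the root), weights rank siblings. A stream is served
    only after its parent, heavier siblings first. Real servers
    interleave frames by weight instead of strictly ordering."""
    children = defaultdict(list)
    for stream, parent in parents.items():
        children[parent].append(stream)
    order, queue = [], deque([0])
    while queue:
        node = queue.popleft()
        for child in sorted(children[node], key=lambda s: -weights[s]):
            order.append(child)
            queue.append(child)
    return order

# Hypothetical page: CSS and JS depend on the HTML; a footer logo
# depends on the CSS and carries a low weight.
parents = {"html": 0, "css": "html", "js": "html", "logo.png": "css"}
weights = {"html": 256, "css": 200, "js": 100, "logo.png": 10}
print(delivery_order(parents, weights))  # ['html', 'css', 'js', 'logo.png']
```

The layout-critical CSS lands before the script, and the cosmetic footer logo arrives last, which is exactly the ordering the browser was fighting to preserve.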

[Figure 3: An HTTP/2 dependency and priority tree]
Server Push 

One way to address the round trip latency of an HTTP request and response is for the server to send the browser an object before it is asked for. This is the essence of Server Push. On the surface, the advantage is obvious - instant page delivery even in the worst conditions. But the devilish detail of "what should the server push?" lies just beneath. In order to push the correct objects without wasting any of the user's potentially valuable bandwidth, the server needs to know both what the user is probably going to need next, and what the state of the browser cache is. For example, pushing the ubiquitous 1x1 gif may seem obvious, but the odds are that object is already in the browser's cache, and pushing it is a waste of time.
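The shape of that decision can be sketched in a few lines. Everything here is an assumption for illustration: the dependency map and the knowledge of the client's cache are exactly the hard-to-obtain inputs discussed above, so treat this as the outline of a push policy, not a working one.

```python
def objects_to_push(requested, dependencies, client_cache):
    """Sketch of a server-push decision: when the client requests a
    page, push its known sub-resources, skipping anything already in
    the client's cache. Both inputs are assumptions -- in practice,
    learning the dependency map and the cache state is the hard part."""
    candidates = dependencies.get(requested, [])
    return [obj for obj in candidates if obj not in client_cache]

# Hypothetical site map and cache state:
site_deps = {"/index.html": ["/style.css", "/app.js", "/1x1.gif"]}
cache = {"/1x1.gif"}  # already cached: pushing it would waste bandwidth
print(objects_to_push("/index.html", site_deps, cache))  # ['/style.css', '/app.js']
```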

It is because this problem is hard that you do not see general applications of push today in protocols that support it, such as SPDY. It is also because this problem is hard that I predict some of the most interesting developments of the next few years will come in this area. Stay tuned for something awesome in this space soon from Akamai.

Want to Learn More?

Resources on HTTP/2 are few but growing. Start with the published FAQ, which should fill in many gaps. For the more technically minded, there is always the specification. If you want to tinker, check out the implementations page. Finally, stay tuned for more information and publications on HTTP/2 from Akamai over the coming months.


Stephen Ludin is a Chief Architect for Akamai's Web Experience group. He currently heads the company's Foundry team - a small group dedicated to innovating on the edge of technology. He joined Akamai in 2002 and works out of Akamai's San Francisco office. His primary focus has been on projects related to the core proxy technology that is responsible for routing, accelerating, and securing Akamai's traffic.
