The Akamai Blog

HTTP/2.0 - What is it and Why Should You Care?

An Aging Standard

HTTP is old.  How old?  Let's look at a timeline to start:

1991 - HTTP/0.9
1996 - HTTP/1.0
1999 - HTTP/1.1
2013? - HTTP/2.0

Our beloved protocol, which has been powering the information age we all live in, has been kicking around for over 21 years. Further, it has not had a major version change in 13! Using the dog-years metaphor, this puts the invention of HTTP back in the colonial era of the Internet (I have a marvelous proof of this, but limitations of the margin property prevent me from including it). It is rare for something to stand the test of that much time and remain relevant. Even the United States Constitution has needed a few major tweaks in that time (27 in all). So it should come as no surprise that the current version of HTTP is showing its age.

But before I list off HTTP's deficiencies, let me take a moment to reflect on the wonder of "the Little Protocol That Could."  It is highly likely that every one of you reading this blog would not have the job you have today if not for HTTP. Think of the evolution of a "web page" since 1991. Think of the elements that we today take for granted that were never envisioned back when HTTP first displaced Gopher for file retrieval. The very fact that HTTP has been able to adapt to the rapidly changing web and power our modern marketplace is a testament to the brilliance of the protocol.  

That said, it seems that web developers have been holding HTTP together with dental floss and glue for a number of years. We want "instant" page rendering, and good old HTTP/1.1 is seen as one of the bottlenecks.

HTTP is Dead - Long Live HTTP

Enter SPDY. "Wait! What?" You might be thinking, "I thought this was about HTTP!" Hang tight. A couple of engineers from Google, Mike Belshe and Roberto Peon, decided that in order to make the web instant, HTTP would need some work. To that end, they developed SPDY.  With the support of a strong community including input from Mozilla, Facebook, Amazon, and other pillars of innovation, SPDY brought a few core concepts to web delivery that HTTP was sorely lacking. These include:

- Multiplexing (allowing multiple requests to flow over a single connection)
- Prioritization (providing the ability to indicate that one resource is more important than another and should hence jump to the head of the line)
- Compression (making compression universal and extending it to headers)
- Server Push (allowing the server to give content to a user-agent before it is asked for)
- A strong recommendation for encryption (current implementations require it)
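To make the first of those concepts concrete, here is a toy Python sketch of multiplexing. This is not the actual SPDY or HTTP/2.0 wire format (the frame size, the round-robin scheduling, and the stream payloads below are all invented for illustration); the point is simply that frames from several logical streams can share one connection, with a stream ID on each frame so the receiver can reassemble them.

```python
# Toy sketch of multiplexing -- NOT the real SPDY/HTTP/2.0 framing.
# Each frame carries a stream id, so many request/response streams
# can be interleaved on a single connection and reassembled later.

def interleave(streams):
    """Chop each stream's payload into small frames and interleave them.

    streams: dict mapping stream_id -> bytes payload.
    Returns one list of (stream_id, chunk) frames, scheduled round-robin.
    """
    FRAME_SIZE = 4  # arbitrarily small, to force interleaving
    queues = {
        sid: [data[i:i + FRAME_SIZE] for i in range(0, len(data), FRAME_SIZE)]
        for sid, data in streams.items()
    }
    frames = []
    while any(queues.values()):
        for sid in sorted(queues):      # a real sender would use priorities here
            if queues[sid]:
                frames.append((sid, queues[sid].pop(0)))
    return frames

def demultiplex(frames):
    """Reassemble each stream's payload from the interleaved frames."""
    out = {}
    for sid, chunk in frames:
        out[sid] = out.get(sid, b"") + chunk
    return out

streams = {1: b"GET /style.css", 3: b"GET /app.js"}
frames = interleave(streams)
assert demultiplex(frames) == streams
```

With HTTP/1.1, those two requests would each need their own connection (or would queue behind one another); here they ride the same pipe, interleaved frame by frame.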

When put together in the first whitepaper advocating for the SPDY protocol, these features showed a lot of promise for acceleration. More importantly, Google showed that there was an appetite for something new and that developers and operators were willing to make a switch. The IETF kept close tabs on this and - inspired by the success of SPDY - began a process to re-charter the httpbis working group to develop HTTP/2.0.

Something Old, Something New

But isn't HTTP/2.0 just SPDY? To quote Vicky Pollard, "No, but, yeah, but, no, but..." The IETF started off the process of developing HTTP/2.0 with an open call for submissions. The IETF's request was for suggestions to establish a starting point from which HTTP/2.0 could emerge, and there were three primary drafts considered:

- SPDY
- HTTP Speed + Mobility
- Network Friendly HTTP

At the IETF meeting in Vancouver this past August, it was recommended, and soon after decided, to use SPDY as a starting point. SPDY was seen as the most mature of the three proposals with both working client and server implementations in the wild. It had already been through a couple of revisions and trials and was seen as a solid launching point for HTTP/2.0.

There were a few caveats, however. For example, server push, protocol upgrade, and header compression were all tagged for further discussion and the suggestion for requiring encryption was dropped. But with SPDY as a starting point, the IETF community will work towards their stated goals of coming up with a protocol that:

- Significantly improves perceived performance in common use cases (e.g., browsers, mobile)
- Makes more efficient use of network resources; in particular, reducing the need to use multiple TCP connections
- Has the ability to be deployed on today's Internet, using IPv4 and IPv6, in the presence of NATs
- Maintains HTTP's ease of deployment
- Retains the semantics of HTTP
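Two of those goals, efficiency and retained semantics, come together in header compression. The sketch below uses plain zlib on an invented set of request headers; SPDY's real scheme seeded zlib with a shared dictionary, and the IETF later pursued its own design, so treat this only as an illustration of the idea: the bytes on the wire shrink, while the receiver recovers exactly the same key/value pairs, so nothing about HTTP's semantics changes.

```python
import zlib

# Toy sketch: compress repetitive HTTP-style request headers with zlib.
# (SPDY used a dictionary-seeded zlib stream; this plain version just
# shows the principle.) The header names and values below are made up.
headers = {
    "host": "www.example.com",
    "user-agent": "Mozilla/5.0 (compatible; ExampleBrowser/1.0)",
    "accept": "text/html,application/xhtml+xml",
    "accept-encoding": "gzip, deflate",
    "cookie": "session=abc123; prefs=dark-mode",
}

# Serialize as HTTP/1.1-style "name: value" lines, then compress.
raw = "\r\n".join(f"{k}: {v}" for k, v in headers.items()).encode()
compressed = zlib.compress(raw)

# Round-trip: the receiver decompresses and parses back identical headers.
restored = dict(
    line.split(": ", 1)
    for line in zlib.decompress(compressed).decode().split("\r\n")
)
assert restored == headers
assert len(compressed) < len(raw)  # fewer bytes, same meaning
```

Since headers like cookies and user-agent strings are sent on every single request, shaving them down this way adds up quickly, which is exactly the "more efficient use of network resources" the working group is after.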

What Does This Mean to You?

If everything goes well and according to the working group's stated goals, HTTP/2.0 will be faster, safer, and use fewer resources than HTTP/1.1. The average user and developer will get all of the benefits and none of the pain. All that language about 'ease of deployment' and 'retaining semantics' boils down to this being mostly a black box change. Some day, you will upgrade your browser and without knowing it you will be using HTTP/2.0. You may notice that everything feels a bit smoother and zippier, perhaps even instant. Welcome to the next version of the Web. Excited?

HTTP/2.0 is still a ways away. The httpbis working group is targeting Fall of 2014 for a final draft of the protocol. You can expect to see implementations before then as browser developers, server developers, intermediaries, and delivery networks such as Akamai work together to polish it up until it shines. Meanwhile, the questions you will want to ask yourself are fairly straightforward, and are all around making sure that you and your customers can get the most from this technological advance:

- "Does my Web server have a plan to support HTTP/2.0?"
- "Will my applications need to change to support HTTP/2.0?"
- "Does my CDN have a plan to support HTTP/2.0?"
- "Will the third parties I use take advantage of HTTP/2.0?"
- "What should I do with all of the time I save using HTTP/2.0?"

As a developer you may want to look under the hood and see if there is some new and creative way to take advantage of one of the aspects of HTTP/2.0 - perhaps in a way no one envisioned. 

In short, stay tuned. As the protocol, and the story around it, develop, we'll keep you up to date.

Stephen Ludin is a chief product architect at Akamai.


While we are at it, could we add templating? One of the reasons we have XSS flaws is that we put the data and the markup in the same file. This is similar to SQL injection, where you mix a query with data, and there the solution of using prepared statements solved it. Changing the HTTP protocol so the data and markup can be sent separately would get rid of XSS.

What is currently available is Content Security Policy. It handles most of the security issues you are talking about.

Curious what the roadmap for Akamai adoption of HTTP 2.0 looks like. Will an implementation be provided before the spec is officially ratified? If so, when does Akamai (realistically) expect to make it available to its customers?