
Learning HTTP/2 by Javier Garza and Stephen Ludin



In 2009, HTTP/1.1 was well over a decade old, and arguably still the most popular application protocol on the internet. Not only was it used for browsing the web, it was the go-to protocol for a multitude of other things. Its ease of use, broad implementation, and widely shared understanding by developers and operation engineers gave it huge advantages, and made it hard to replace. Some people were even starting to say that it formed a “second waist” for the classic hourglass model of the internet’s architecture.

However, HTTP was showing its age. The web had changed tremendously in its lifetime, and its demands strained the venerable protocol. Now loading a single web page often involved making hundreds of requests, and their collective overhead was slowing down the web. As a result, a whole cottage industry of Web Performance Optimization started forming to create workarounds.

These problems were seen clearly in the HTTP community, but we didn’t have the mandate to fix them; previous efforts like HTTP-NG had failed, and without strong support for a proposal from both web browsers and servers, it felt foolish to start a speculative effort. This was reflected in the HTTP working group’s charter at the time, which said:

The Working Group must not introduce a new version of HTTP and should not add new functionality to HTTP.

Instead, our mission was to clarify HTTP’s specification, and (at least for me) to rebuild a strong community of HTTP implementers.

That said, there was still interest in more efficient expressions of HTTP’s semantics, such as Roy Fielding’s WAKA proposal (which unfortunately was never completed) and work on HTTP over SCTP (primarily at the University of Delaware).

Sometime after giving a talk at Google that touched on some of these topics, I got a note from Mike Belshe, asking if we could meet. Over dinner on Castro Street in Mountain View, he sketched out that Google was about to announce an HTTP replacement protocol called SPDY.

SPDY was different because Mike worked on the Chrome browser, and he was paired with Roberto Peon, who worked on GFE, Google’s frontend web server. Controlling both ends of the connection allowed them to iterate quickly, and testing the protocol on Google’s massive traffic allowed them to verify the design at scale.

I spent a lot of that dinner with a broad smile on my face. They were solving real problems, and they had both running code and data from deploying it. These are all things that the Internet Engineering Task Force (IETF) loves.

However, it wasn’t until 2012 that things really began to take off for SPDY; Firefox implemented the new protocol, followed by the Nginx server, followed by Akamai. Netcraft reported a surge in the number of sites supporting SPDY.

It was becoming obvious that there was broad interest in a new version of HTTP.

In October 2012, the HTTP working group was re-chartered to publish HTTP/2, using SPDY as a starting point. Over the next two years, representatives of various companies and open source projects met all over the world to talk about this new protocol, resolve issues, and ensure that their implementations interoperated.

In that process, we had several disagreements and even controversies. However, I remain impressed by the professionalism, willingness to engage, and good faith demonstrated by everyone in the process; it was a remarkable group to work with.

For example, in a few cases it was agreed that moving forward was more important than one person’s argument carrying the day, so we made decisions by flipping a coin. While this might seem like madness to some, to me it demonstrates maturity and perspective that’s rare.

In December 2014, just 16 days past our chartered deadline (which counts as early, at least in standards work), we submitted HTTP/2 to the Internet Engineering Steering Group for approval.

The proof, as they say, is in the pudding; in the IETF’s case, “running code.” We quickly had that, with support in all of the major browsers, and multiple web servers, CDNs, and tools.

HTTP/2 is by no means perfect, but that was never our intent. While the immediate goal was to clear the cobwebs and improve web performance incrementally, the bigger goal was to “prime the pump” and ensure that we could successfully introduce a new version of HTTP, so that the web doesn’t get stuck on an obsolete protocol.

By that measure, it’s easy to see that we succeeded. And, of course, we’re not done yet.
