Pitfalls of HTTP/2

HTTP/2 is still new and, although deploying it is relatively easy, there are a few things to be on the lookout for when enabling it.

By Andy Davies
January 27, 2017
Mind the gap (source: GregPlom)

HTTP/1.x (h1) was standardized in 1999. We’ve had years of experience deploying it, we understand how browsers and servers behave with it, and we’ve learned how to optimize for it, too. In contrast, it has been just 18 months since HTTP/2 (h2) was standardized, and there’s already widespread support for it in browsers, servers, and CDNs.

So what makes h2 different from h1, and what should you watch out for when enabling h2 support for a site? Here are five things to look out for along the way.


Network waterfalls look familiar yet different

Once you’ve enabled h2, one of the first things you might do is look at the network waterfall using WebPagetest or your browser’s debugging tools. Here is where you might notice the first big difference from h1, as many of the request/response bars will be wider and requests may seem to wait longer for a server response.

With h1, a browser makes six (or so) TCP connections to a host. Each connection can carry only one request and response at a time, and a new request can only be made over a connection once the previous one has completed, so requests queue up in the browser waiting for a connection to become free.

H2, on the other hand, does something very different. Instead of multiple TCP connections, it only uses one connection for each host and can send multiple requests at the same time. The responses from the server are multiplexed (i.e., the frames are interleaved), and the server is responsible for returning responses according to their priorities and dependencies. So longer response times in a waterfall may often be due to the server interleaving the responses for resources with the same priority or queuing lower priority responses.
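The queuing difference can be sketched with a toy model. This is not a real network simulation: it assumes 12 equally sized requests, a fixed one-unit response time, and ignores bandwidth sharing and TCP setup costs entirely. It only illustrates when requests can *start* under each protocol.

```python
def h1_start_times(n_requests, connections=6, response_time=1.0):
    """h1: requests queue until one of the (roughly six) connections is free."""
    free_at = [0.0] * connections
    starts = []
    for _ in range(n_requests):
        i = free_at.index(min(free_at))  # pick the earliest-free connection
        starts.append(free_at[i])
        free_at[i] += response_time
    return starts

def h2_start_times(n_requests):
    """h2: all requests go out immediately on the single connection;
    the *responses* are then interleaved by the server according to
    their priorities and dependencies."""
    return [0.0] * n_requests

print(h1_start_times(12))  # first six start at 0.0; the second batch waits until 1.0
print(h2_start_times(12))  # all start at 0.0 -- no request queuing in the browser
```

In the h2 case the waiting hasn’t vanished, it has moved: the browser’s waterfall shows wide bars because the server is interleaving or deferring responses, not because requests are stuck in a client-side queue.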

This brings us to the second thing to watch out for when deploying h2.

HTTP/2 implementations are still young

Browser and server implementations of h2 are still relatively young, some are incomplete, and others have quirks that mean they don’t deliver optimal performance. For example, some browsers don’t fully support prioritization and some servers don’t support header compression.

Header compression (via HPACK) is a key part of the h2 specification. Many of the requests and responses for a page’s resources contain duplicate headers. For example, the user-agent header should be the same for all requests but with h1 it’s sent with every one. Reducing the amount of bandwidth headers consume means more bandwidth is available for actual content, but until recently NGINX, one of the mainstream servers, didn’t fully support HPACK.
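To see why this matters, here is a toy illustration (this is gzip, not HPACK itself, which uses static/dynamic header tables and Huffman coding): the same headers are repeated on every request for a page’s resources, so they are extremely redundant. The header values below are typical examples, not taken from any particular browser.

```python
import gzip

# Example request headers, repeated verbatim on every request for a page.
headers = (
    b"accept: text/html,application/xhtml+xml\r\n"
    b"accept-encoding: gzip, deflate, br\r\n"
    b"user-agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
    b"AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0 Safari/537.36\r\n"
)

one_request = len(headers)
fifty_requests = headers * 50            # ~50 resources on a typical page
compressed = len(gzip.compress(fifty_requests))

print(one_request, len(fifty_requests), compressed)
```

Fifty copies compress down to little more than a single copy. That cross-request redundancy is exactly what HPACK’s per-connection dynamic table exploits, which is why a server that skips HPACK leaves real bandwidth on the table.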

As 99designs discovered, other servers, load-balancers, and CDNs also have oddities in their h2 support, so it’s well worth confirming which features they support and checking their compliance using something like Moto Ishizawa’s h2spec.

Server push isn’t a magic bullet

H2 also enables new techniques that allow us to speed up our pages, and server push is perhaps the most notable of these.

The h1 web works on the pull model—the browser requests a page, discovers what other resources the page needs and then requests them, too. But often the server knows what other resources a page needs, so why waste round-trips and time waiting for the browser to discover what it needs when the server could just send the resources needed directly?

Currently server push is a blunt tool. The server has no way of knowing what’s already in the browser’s cache, so it may be sending redundant data, and the resources being pushed will compete for bandwidth with other responses. For these reasons, it’s probably best to push only critical render-blocking resources and leave the browser to prioritize the rest.
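In practice, many servers and CDNs that support push take their cue from a preload `Link` response header on the HTML document; the path below is an illustrative example, and support and syntax vary by implementation, so check your server’s documentation:

```
Link: </css/critical.css>; rel=preload; as=style
```

Limiting this to the one or two resources that genuinely block rendering keeps the bandwidth competition described above to a minimum.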

Some HTTP/1.x optimizations live on

So what of all the techniques we learned to optimize our pages over h1? Are approaches such as combining files together, inlining small responses, and splitting (sharding) content across multiple domains to improve load times still relevant?

Initially we thought we wouldn’t need to continue with these hacks when sites upgraded to h2, but some still provide benefits. For example, merging text files such as CSS or JS together results in better compression (at least with gzip) and so smaller downloads.
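A small, self-contained demonstration of that compression effect, using two made-up CSS-like files that share vocabulary (real stylesheets share even more):

```python
import gzip

# Two toy "files" with overlapping tokens, as real CSS/JS files tend to have.
css_a = b".button { color: #333; border-radius: 4px; padding: 8px 16px; }\n" * 20
css_b = b".banner { color: #333; border-radius: 4px; margin: 8px 16px; }\n" * 20

separate = len(gzip.compress(css_a)) + len(gzip.compress(css_b))
combined = len(gzip.compress(css_a + css_b))

print(separate, combined)  # combined is smaller: shared compression window, one header
```

Compressing the concatenation lets the second file reference strings from the first (and pays gzip’s fixed overhead only once), so the merged download is smaller than the two separate ones.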

Sharding, on the other hand, is something sites should be careful with when deploying h2. There’s an overhead in creating each TCP connection, and h2 has no way to manage prioritization of data frames across more than one connection. So test carefully when sharding, and perhaps limit critical resources (HTML, CSS, and JavaScript) to one shard and images to another.

HTTPS performance matters

Lastly, although h2 doesn’t mandate HTTPS, all browsers will only support h2 when it’s delivered over HTTPS. While previously we might have only protected part of a site with HTTPS, we now have to protect it all. As case studies such as Yell show, moving a whole site from HTTP to HTTPS isn’t always an easy task, and even then we still need to ensure that it’s performant.

HTTPS optimizations—such as ensuring certificate chains are optimal, OCSP stapling is enabled, and an HTTP Strict Transport Security (HSTS) policy is declared—can improve HTTPS performance. Qualys SSL Labs is great for testing these, and Is TLS Fast Yet? is a good source for further information.
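As a rough sketch of what those optimizations look like on one common server, here is an nginx configuration fragment; the paths, the `max-age` value, and the server block itself are illustrative placeholders, not a drop-in config:

```
server {
    listen 443 ssl http2;

    ssl_certificate     /etc/ssl/example.crt;   # serve the full certificate chain
    ssl_certificate_key /etc/ssl/example.key;

    # Staple the OCSP response so clients don't make their own OCSP lookup
    ssl_stapling        on;
    ssl_stapling_verify on;

    # HSTS: tell browsers to always use HTTPS for this host
    add_header Strict-Transport-Security "max-age=31536000" always;
}
```

Other servers and load balancers expose equivalent settings; the point is that each of these shaves round-trips or lookups off the TLS handshake path that every h2 connection now depends on.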

Share your experiences

Ultimately, deploying h2 is easy; we just need to pick a server, load balancer, or CDN that supports it. We already know some of the wins and some of the challenges, but we’ve still got lots to learn about making the most of h2.

If you’ve deployed h2 or are thinking about it, remember to share your experiences and lessons. As a community we learn a huge amount from each other, and this enables us all to move forward and make our visitors’ experiences better and faster.


This post is a collaboration between Akamai and O’Reilly. See our statement of editorial independence.
