Having written an HTTP server+proxy recently, I haven't been super impressed with HTTP/2 so far. There is some good in it (basically everything I'm not mentioning), but also a lot of bad.<p>First, Firefox (and some others) will only support HTTP/2 over TLS: <a href="https://wiki.mozilla.org/Networking/http2" rel="nofollow">https://wiki.mozilla.org/Networking/http2</a> ; that's a big deal-breaker for a lot of people. Yes, encryption is all well and good. I'm all for it! But SSL certs either cost money, or you get them from companies that will want cash to revoke them for you if compromised. SSL/TLS errors are still a royal bitch (and pop up with less popular authorities), with browsers warning you of your impending <i>undoing</i> if you choose to continue (sometimes damn near requiring a blood contract to override.) TLS also costs extra CPU, which can matter for a site that's only hosting kitten pictures or video game news. And it's a barrier for people like me experimenting with the protocol, since now I <i>also</i> have to learn how to deploy TLS just to toy around with it.<p>Second, I don't really agree that inventing a new, custom-made compression algorithm is a smart way to handle headers. We are talking about ~300 <i>bytes</i> of headers per request ... are the bandwidth savings really large enough to outweigh the CPU cost of compressing the data, plus the added programming complexity of working with compressed headers?<p>Third, it's a fundamentally different way of doing things. Like the slides said, you're going to have to redesign how servers and website packages serve up content to be optimized for this new model, or else performance may even be worse than HTTP/1.1 ... having seen how the real world works, I'm not confident that web developers are going to take this seriously enough, and we'll likely see a lot of "HTTP/1 over HTTP/2" behavior anyway (eg not taking advantage of server push.)
Servers like Apache and nginx can only go so far toward doing this for you.<p>Fourth, since it's not backward-compatible, we're pretty much not going to be able to use HTTP/2 exclusively for another 5-10 years. Which, of course, doesn't mean we shouldn't ever upgrade HTTP/1. It's just kind of crappy that we'll have to run two very different HTTP engines, serving content very differently, for the next decade while we wait for people to upgrade their browsers.<p>I would have liked to see an HTTP/1.2 intermediary step that added a few extra headers, like 'Server-Push: "filename", ETag'; and perhaps a specification rule that no HTTP/1.2 request may ever ask for /favicon.ico or /apple-touch-icon.png. Just that would have eliminated countless wasteful request -> 304 Not Modified round trips that we have today on HTTP/1.1, without resorting to max-age and giving up the ability to update your site instantly. And it would keep silently working for HTTP/1.1 users (obviously without the 1.2 benefits.)<p>...<p>Also, all of these slide decks are pretty sparse. Given that the new header format is binary, does anyone know how clients are going to go about requesting HTTP/2 capabilities? Is there a special HTTP/1.1 header? Because at present, Apache will respond to "GET / HTTP/2" with an HTTP/1.1 200 OK response. (In fact, it responds with 200 OK even to "GET / ITS_JUST_A_HARMLESS_LITTLE_BUNNY/3.141592" ...)
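<p>On the header-compression point above, here's a quick back-of-the-envelope. This uses plain DEFLATE (what SPDY used; HTTP/2's custom scheme is different), and the header values are made up but representative of a typical browser request, just to size the tradeoff:

```python
# Rough illustration: how big is a typical HTTP/1.1 request header block,
# and how much does generic DEFLATE compression actually shave off?
import zlib

headers = (
    "GET /index.html HTTP/1.1\r\n"
    "Host: example.com\r\n"
    "User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:30.0) Gecko/20100101 Firefox/30.0\r\n"
    "Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8\r\n"
    "Accept-Language: en-US,en;q=0.5\r\n"
    "Accept-Encoding: gzip, deflate\r\n"
    "Connection: keep-alive\r\n"
    "\r\n"
).encode()

compressed = zlib.compress(headers, 9)  # level 9 = max compression
print(len(headers), len(compressed))    # raw vs. compressed byte counts
```

DEFLATE does shave a chunk off repetitive header text, but either way we're talking a couple hundred bytes per request, which is exactly why I question whether a bespoke compression format earns its complexity.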
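<p>Partially answering my own question: as far as I can tell from the draft spec, cleartext HTTP/2 is negotiated with the standard HTTP/1.1 Upgrade mechanism (token "h2c"), with the client's initial SETTINGS frame carried in a base64url-encoded HTTP2-Settings header; over TLS, negotiation happens in the handshake via ALPN instead. A sketch of what that first request would look like (the particular setting and value here are just arbitrary examples):

```python
# Sketch of a cleartext HTTP/2 upgrade request, per my reading of the draft.
import base64
import struct

# One entry of a SETTINGS frame payload: 16-bit identifier + 32-bit value.
# 0x3 is SETTINGS_MAX_CONCURRENT_STREAMS; 100 is an arbitrary example value.
settings_payload = struct.pack(">HI", 0x3, 100)
# base64url without padding, as the header requires
h2_settings = base64.urlsafe_b64encode(settings_payload).rstrip(b"=").decode()

request = (
    "GET / HTTP/1.1\r\n"
    "Host: example.com\r\n"
    "Connection: Upgrade, HTTP2-Settings\r\n"
    "Upgrade: h2c\r\n"
    "HTTP2-Settings: " + h2_settings + "\r\n"
    "\r\n"
)
print(request)
```

A server that speaks h2c answers 101 Switching Protocols and continues in HTTP/2; a server that doesn't simply ignores the Upgrade header and serves plain HTTP/1.1, so at least this part degrades gracefully.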