Founder of NuevoCloud here. If I read this right, you guys used Cloudflare for HTTP/2. So let me ask you this: when you did your comparison, were all of the images cached (i.e. x-cache: hit) at the edge?

The reason I ask is that Cloudflare, last I checked, still hasn't implemented HTTP/2's client portion. So when a file is not cached, it does this:

    client <--HTTP/2--> edge node <--HTTP/1.1--> origin server

HTTP/2 is only used for the short hop between the client and the edge node; the edge node then uses HTTP/1.1 for the connection to the origin server, which may be thousands of miles away.

In other words, depending on the client location and the origin server location, your test may have used HTTP/1.1 for the majority of the distance.

If you guys want to rerun this test on our network, we use HTTP/2 everywhere. Your test would look like this on our network:

    client <--HTTP/2--> edge node (closest to client) <--HTTP/2--> edge node (closest to server) <--HTTP/2--> origin server

So even if your origin server doesn't support HTTP/2, HTTP/1.1 is only used over the short hop between your server and the closest edge node.

You're welcome to email me if you want to discuss details you don't want to post here.

Edit: I should also mention that we use multiple HTTP/2 connections between our edge nodes and between the edge node and the origin server, removing that bottleneck. Only the client <--> edge node hop is a single HTTP/2 connection.
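For reference, here is a minimal sketch of how to answer the caching question above (the URL is made up, and the header name varies by CDN: Cloudflare reports cf-cache-status, many other CDNs report x-cache):

    # Check whether a CDN served an asset from its edge cache by
    # inspecting the response headers of a HEAD request.
    import urllib.request

    def edge_cache_status(url):
        req = urllib.request.Request(url, method="HEAD")
        with urllib.request.urlopen(req) as resp:
            for name in ("cf-cache-status", "x-cache"):
                value = resp.headers.get(name)
                if value:
                    return name, value
        return None, None  # no recognisable cache header found

    print(edge_cache_status("https://example.com/assets/hero.jpg"))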
I did not do any real tests and I might be completely wrong, but it seems to me that HTTP/2 is going to perform poorly over wireless links like 3G.

With HTTP/1.1 you had N TCP connections, and given the way TCP slowly ramps up its bandwidth use and rapidly backs off when a packet is lost, even when a packet was dropped (which happens quite a lot on 3G) the other TCP streams were not delayed or blocked, and could even use the leftover bandwidth yielded by the stream that lost the packet.

With HTTP/2, however, there is one TCP connection, so dropped packets cause under-utilization of the bandwidth. On top of that, a dropped packet causes all frames after it to be held in the kernel receive buffer until the dropped packet is retransmitted, whereas in the HTTP/1.1 case they would be available at the application level right away.

HTTP/2 being implemented on top of TCP always seemed like a weird choice. It should have been UDP, IMO. That's why network accelerators like PacketZoom make so much sense. Note: I work at PacketZoom, I did not do any in-depth research on HTTP/2, and this is my opinion, not necessarily that of the company.
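To make the bandwidth argument concrete, here is a back-of-envelope sketch (not a measurement) using the Mathis et al. approximation for steady-state TCP throughput, roughly (MSS / RTT) * C / sqrt(p); the 3G-ish numbers are assumptions, purely for illustration:

    # Rough model: one HTTP/2 connection vs. six HTTP/1.1 connections,
    # each independently limited by the same loss rate on a lossy link.
    import math

    def tcp_throughput_bps(mss_bytes, rtt_s, loss_rate, c=math.sqrt(1.5)):
        return (mss_bytes * 8 / rtt_s) * c / math.sqrt(loss_rate)

    mss, rtt, loss = 1460, 0.15, 0.02   # assumed 3G-ish values
    one_conn = tcp_throughput_bps(mss, rtt, loss)
    six_conns = 6 * tcp_throughput_bps(mss, rtt, loss)
    print(f"1 connection : {one_conn / 1e6:.2f} Mbit/s")
    print(f"6 connections: {six_conns / 1e6:.2f} Mbit/s (capped by real link capacity)")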
I don't think the server is in charge of prioritisation here. The server can do it, but there is no reason to push this responsibility onto the server when the browser can do it much better (for example, the server can't know what's in the viewport).

I expect this will be quickly sorted out by more mature HTTP/2 implementations in browsers. Downloading every image at once is obviously a bad idea, and I expect such naive behaviour will soon be replaced by decent heuristics (even just downloading eight resources at once should be better in nearly all cases).
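For what it's worth, the "at most eight at once" heuristic is easy to picture outside a browser; here is an illustrative sketch with a bounded thread pool (the URLs are made up):

    # Fetch many resources while never having more than eight in flight.
    from concurrent.futures import ThreadPoolExecutor
    import urllib.request

    urls = [f"https://example.com/img/{i}.jpg" for i in range(40)]

    def fetch(url):
        with urllib.request.urlopen(url) as resp:
            return url, len(resp.read())

    with ThreadPoolExecutor(max_workers=8) as pool:   # cap concurrency at eight
        for url, size in pool.map(fetch, urls):
            print(url, size, "bytes")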
One way to "solve" the time to visual completion would be to encode all of the images, but especially the larger ones, as progressive scans. For very large images, the difference in visual quality between 50% downloaded and 100% downloaded isn't noticeable on most devices, so the page would appear complete in half the time.
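A minimal sketch of that, assuming Pillow is installed and the filenames are placeholders:

    # Re-encode an image as a progressive JPEG so a partially downloaded
    # file already renders a full-frame (if blurry) preview.
    from PIL import Image

    img = Image.open("hero.jpg")
    img.save("hero-progressive.jpg", "JPEG",
             progressive=True,   # progressive scan instead of baseline
             quality=85, optimize=True)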
Did I read this right, that HTTP/1.1 was with CDN A (unnamed?) and HTTP/2 was with CDN B (Cloudflare)?

If so, you really can't draw any conclusions about the protocol difference, when the PoP locations, network designs, and hardware and software configurations could easily have made the kinds of differences you're seeing.
Comparing two protocols using different providers, isn't that a bit like comparing apples and oranges? And I have a doubt, which could be a bad assumption: is this running on hardware you control or own, and do you know what exactly runs on it, and potentially which other parties use it?
I'm really looking forward to seeing how much HTTP/2 will increase performance for my Bitcoin payment channel server: https://github.com/runeksvendsen/restful-payment-channel-server/

I just finished separating the front end and back end, connected by a RESTful protocol, and this roughly halved performance compared to using a native library (from ~2000 payments/second on my laptop to ~1000). I expect HTTP/2 to make a greater percentage-wise difference here, although I admit I really have no idea how much, say, ZeroMQ would have reduced performance, compared to cutting it in half with HTTP/1.x.

I expect HTTP/2 to make a much greater difference in high-performance applications, where overhead becomes more important, which static file serving doesn't really hit. So I think RESTful back-end servers will see a much more noticeable performance increase, especially since, as an end user on Chrome at least, you already get many of the HTTP/2 latency benefits through SPDY.
Some solutions:

- Serve less data. The best speedup is when there's no more data to download; if the throughput to clients is maxed out, then decreasing page weight helps.

- Use async bootstrap JS code to load other scripts once images are done loading or other page load events have fired.

- Load fewer images in parallel; use JS to load one row of images at a time.

- Use HTTP/2 push (which CloudFlare offers) to push some of the images/assets along with another response. Push images with the original HTML and the images start arriving at the browser before it even parses the HTML and starts any (prioritized) requests; see the sketch below.
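A minimal sketch of that last point, assuming a push-capable CDN sits in front of the origin: the origin tags its HTML response with a Link preload header (the asset path here is hypothetical), and the edge can use that hint to push the asset alongside the HTML.

    # Tiny WSGI origin that emits a Link preload header on the HTML response.
    def app(environ, start_response):
        html = b"<html><body><img src='/img/design-1.jpg'></body></html>"
        start_response("200 OK", [
            ("Content-Type", "text/html"),
            # One Link header per resource the edge should push.
            ("Link", "</img/design-1.jpg>; rel=preload; as=image"),
        ])
        return [html]

    if __name__ == "__main__":
        from wsgiref.simple_server import make_server
        make_server("", 8000, app).serve_forever()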
We've recently moved to Google Cloud Storage from AWS because of HTTP/2. We had a bottleneck of the browser waiting when serving multiple large files (8+ files at 10 MB+ each).

I'm wondering if 99designs looked at any sort of domain sharding to get around the timing issues. If I understand correctly, wouldn't that get around the priority queue issue? Your JS, fonts, etc. coming from a different address than your larger images would create completely separate connections.

I'm not completely sure this would get around the issues mentioned, but I'm curious whether it was looked at as a solution.
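An illustrative sketch of classic domain sharding, with hypothetical hostnames (and the usual caveat that with HTTP/2 sharding works against connection reuse and multiplexing):

    # Assign each asset path to one of a few shard hostnames, using a
    # stable hash so the same asset always maps to the same shard.
    import zlib

    SHARDS = ["img1.example-cdn.com", "img2.example-cdn.com", "img3.example-cdn.com"]

    def shard_url(path):
        host = SHARDS[zlib.crc32(path.encode()) % len(SHARDS)]
        return f"https://{host}{path}"

    print(shard_url("/designs/logo-42.png"))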
Excellent and in-depth article. Thank you for sharing!

Hopefully we'll see a follow-up with future changes and tweaks, both from web servers and browsers.
I thought that HTTP/2 didn't fix head-of-line blocking, and that this was why QUIC (https://www.chromium.org/quic) exists.

From the project page:

Key features of QUIC over existing TCP+TLS+HTTP2 include:

* Dramatically reduced connection establishment time

* Improved congestion control

* Multiplexing without head of line blocking

* Forward error correction

* Connection migration
Thanks for posting your findings; very useful data. It would be interesting to see the WebPageTest waterfalls in greater detail if you're able to share them.

Are you planning to use your resource hints to enable server push at the CDN edge?
We also did a far less sophisticated HTTP/2 reality check: https://blog.fortrabbit.com/http2-reality-check

About the same result: the real-world performance boost was not that big.