Comparing HTTP/3 vs. HTTP/2 Performance

240 points | by migueldemoura | about 5 years ago

14 comments

jrochkind1 | about 5 years ago

So, as far as the results: in their synthetic benchmarks, they find negligible to no improvement:

> For a small test page of 15KB, HTTP/3 takes an average of 443ms to load compared to 458ms for HTTP/2. However, once we increase the page size to 1MB that advantage disappears: HTTP/3 is just slightly slower than HTTP/2 on our network today, taking 2.33s to load versus 2.30s

And in their closer-to-real-world benchmarks, they find no improvement, instead some negligible degradation.

> As you can see, HTTP/3 performance still trails HTTP/2 performance, by about 1-4% on average in North America and similar results are seen in Europe, Asia and South America. We suspect this could be due to the difference in congestion algorithms: HTTP/2 on BBR v1 vs. HTTP/3 on CUBIC. In the future, we'll work to support the same congestion algorithm on both to get a more accurate apples-to-apples comparison.

As a developer of web apps, I will personally continue to not think that much about HTTP/3. Perhaps in the future network/systems engineers will have figured out how to make it bear fruit? I don't know, but it seems to me of unclear wisdom to count on it.
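As a sanity check on the quoted figures, the relative differences work out to roughly 3.3% in HTTP/3's favor for the 15KB page and a 1.3% slowdown for the 1MB page:

```python
def pct_change(new: float, old: float) -> float:
    """Relative change of `new` vs. baseline `old`, in percent."""
    return (new - old) / old * 100

# Load times quoted from the article (HTTP/3 vs. HTTP/2):
print(f"15KB page: {pct_change(443, 458):+.1f}%")    # -3.3% (HTTP/3 faster)
print(f"1MB page:  {pct_change(2.33, 2.30):+.1f}%")  # +1.3% (HTTP/3 slower)
```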
londons_explore | about 5 years ago

A major benefit of HTTP/3 is the ability to transparently switch from one network connection to another without restarting requests.

You could be midway through a gaming session over websocket, and walk away from your wifi, and you shouldn't notice a glitch.

Nearly nothing else offers that ability, and it's very annoying, especially in offices with hundreds of wifi access points - I should be able to walk down the corridor on a video call without glitchiness!

MPTCP (developed mostly by Apple) offers the same, but Google and Microsoft are holding it back, for some unknown reason.
the_duke | about 5 years ago

This does not mention whether the tests also simulated and measured packet loss.

With a good network connection and little packet loss, I wouldn't expect much benefit from HTTP/3, especially since all the server and client implementations are immature and run in user space without kernel support.

The benefits should show up with (poor) mobile connections.
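For anyone who wants to run that experiment, Linux's `tc netem` can inject loss and delay on an interface before re-running a benchmark. A minimal sketch (the interface name and the loss/delay figures are illustrative assumptions; requires root):

```python
import subprocess

def netem_cmd(action: str, dev: str, *netem_args: str) -> list[str]:
    """Build a `tc qdisc` command that adds/changes a netem qdisc on `dev`."""
    return ["tc", "qdisc", action, "dev", dev, "root", "netem", *netem_args]

def emulate_poor_mobile_link(dev: str = "eth0") -> None:
    # Emulate a poor mobile link: 2% packet loss plus 100ms of added delay.
    subprocess.run(netem_cmd("add", dev, "loss", "2%", "delay", "100ms"),
                   check=True)

def restore_link(dev: str = "eth0") -> None:
    # Remove the netem qdisc once the benchmark run is finished.
    subprocess.run(["tc", "qdisc", "del", "dev", dev, "root"], check=True)
```

Running the same HTTP/2-vs-HTTP/3 page loads with and without the qdisc attached would isolate the loss-recovery advantage QUIC is supposed to have.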
tomxor | about 5 years ago

> With HTTP/2, any interruption (packet loss) in the TCP connection blocks all streams (head-of-line blocking).

This issue is really noticeable on my crappy home mobile internet when loading web pages, in combination with the timeout being absurdly long for reasons I don't understand.
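The effect can be sketched with a toy delivery model: packets from several streams are interleaved round-robin on one connection, one packet is lost, and its retransmission arrives much later. With TCP's in-order delivery (HTTP/2) every later packet is held back behind the hole; with QUIC (HTTP/3) only the stream that owns the lost packet waits. All numbers and the scheduling are illustrative assumptions:

```python
def stream_finish_times(n_streams: int, pkts_per_stream: int, pkt_time: float,
                        lost_idx: int, retx_delay: float,
                        hol_blocking: bool) -> dict[int, float]:
    """Return {stream: time its last packet is delivered to the application}."""
    retx_arrival = (lost_idx + 1) * pkt_time + retx_delay
    finish = {s: 0.0 for s in range(n_streams)}
    for i in range(n_streams * pkts_per_stream):  # round-robin interleaving
        arrival = (i + 1) * pkt_time
        if i == lost_idx:
            arrival = retx_arrival                   # the lost packet itself
        elif hol_blocking and i > lost_idx:
            arrival = max(arrival, retx_arrival)     # TCP holds later bytes too
        finish[i % n_streams] = max(finish[i % n_streams], arrival)
    return finish

# Two streams, two packets each; packet 0 is lost, retransmit takes 10 ticks.
print(stream_finish_times(2, 2, 1, 0, 10, hol_blocking=True))   # HTTP/2-like
print(stream_finish_times(2, 2, 1, 0, 10, hol_blocking=False))  # HTTP/3-like
```

In the head-of-line-blocking case the unaffected stream finishes at tick 11 instead of tick 4, even though none of its own packets were lost.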
sholladay | about 5 years ago

In Node.js (curious to hear about other ecosystems), HTTP/2 hasn't even caught on yet. Sure, it's technically supported by Node core and various frameworks, but hardly anyone is really using it. Most of the benefits that HTTP/2 brings to the table require a new model that doesn't map cleanly to the traditional request/response lifecycle. It seems harder to program applications using HTTP/2 because of that. Perhaps some of it is what we are used to and the burden of learning something new, but I don't think that's the whole story. I wonder if future HTTP versions will address this in some way or if it is going to continue to be the new normal. It will be interesting to see what the adoption curve looks like for HTTP/3 and onward. I'm still building everything on HTTP/1.1 (RFC 7230) and have no plans to change that any time soon, even though I can appreciate the features that are available in the newer versions.
pgjones | about 5 years ago

It is possible to compare HTTP/3 to HTTP/2 and HTTP/1 using Python, as Hypercorn (via aioquic for HTTP/3) supports all three.

When I compared late last year I found HTTP/3 to be noticeably slower (https://pgjones.dev/blog/early-look-at-http3-2019/), though my test was much less comprehensive than the one here.
WhatIsDukkha | about 5 years ago

So I can't find the reference, but I believe there was a paper a few months back claiming that there were big issues with fairness (as I understand the word) with other protocols.

The gist of it was that QUIC tends to just flat out choke out TCP running on the same network paths?

Anyone know about this?

There is some mention of BBRv2 improving fairness, but not the outside academic paper I was looking for:

https://datatracker.ietf.org/meeting/106/materials/slides-106-iccrg-update-on-bbrv2
flyinprogrammer | about 5 years ago

When you're ready for an actual improvement, check out https://rsocket.io/
cletus | about 5 years ago

So in a former life I worked on Google Fiber and, among other things, wrote a pure JS speed test (before Ookla had one, although theirs might've been in beta by then). It's still there (http://speed.googlefiber.net). This was necessary because Google Fiber installers use Chromebooks to verify installations and Chromebooks don't support Flash.

This is a surprisingly difficult problem, especially given the constraints of using pure JS. Some issues that spring to mind included:

- The User-Agent is meaningless on iPhones, basically because Steve Jobs got sick of leaking new models in Apache logs. There are other ways of figuring this out, but it's a huge pain.

- Send too much traffic and you can crash the browser, particularly on mobile devices;

- To maximize throughput it became necessary to use a range of ports and simultaneously communicate on all of them. This in turn could be an issue with firewalls;

- Run the test too long and performance in many cases would start to degrade;

- Send too much traffic and you could understate the connection speed;

- Sending larger blobs tended to be better for measuring throughput, but too large could degrade performance or crash the browser. Of course, what "too large" was varied by device;

- HTTPS was abysmal for raw throughput on all but the beefiest of computers;

- To get the best results you needed to turn off a bunch of stuff like Nagle's algorithm and any implicit gzip compression;

- You'd have to send random data to avoid caching, even with careful HTTP headers that should've disabled caching.

And so on.

Perhaps the most vexing issue that I was never able to pin down was with Chrome on Linux. In certain circumstances (and I never figured out what exactly they were, other than high throughput), Chrome on Linux would write the blobs it downloaded to /tmp (default behaviour) and never release them until you refreshed the webpage. And no, there were no dangling references. The only clue this was happening was that Chrome would start spitting weird error messages to the console, and those errors couldn't be trapped.

So pure JS could actually do a lot, and I spent a fair amount of effort to get this to accurately show speeds up to 10G (I got up to 8.5G down and ~7G up in Chrome on a MBP).

But getting back to the article at hand, what you tend to find is how terribly TCP does with latency. A small increase in latency would have a devastating effect on reported speeds.

Anyone from Australia should be intimately familiar with this, as it's clear (at least to me) that many if not most services are never tested on or designed for high-latency networks. 300ms RTT vs. <80ms can be the difference between a relatively snappy SPA and something that is utterly unusable due to serial loads and excessive round trips.

So looking at this article, the first thing I searched for was the word "latency", and I didn't find it. Now sure, the idea of a CDN like Cloudflare is to have a POP close to most customers, but that just isn't always possible. Plus you hit things not in the CDN. Even DNS latency matters here, where people have shown meaningful improvements in web performance just by having a hot cache of likely DNS lookups.

The degradation in throughput in TCP that comes from latency is well-known academically. It just doesn't seem to be known about, given attention to, or otherwise catered for in user-facing services. Will HTTP/3 help with this? I have no idea. But I'd like to know before someone dismisses it as having minimal improvements or, worse, as degrading performance.
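The academic result alluded to here is often summarized by the Mathis et al. bound on steady-state TCP (Reno-style) throughput: rate <= MSS / (RTT * sqrt(p)). A quick illustration of how the same loss rate punishes a 300ms path far more than an 80ms one (the MSS and loss-rate values are illustrative assumptions):

```python
from math import sqrt

def mathis_throughput(mss_bytes: int, rtt_s: float, loss_rate: float) -> float:
    """Mathis et al. bound on steady-state TCP throughput, in bytes/second:
    rate <= MSS / (RTT * sqrt(p))."""
    return mss_bytes / (rtt_s * sqrt(loss_rate))

# 1460-byte MSS, 0.01% loss: identical link quality, different RTTs.
for rtt in (0.080, 0.300):
    mbps = mathis_throughput(1460, rtt, 1e-4) * 8 / 1e6
    print(f"RTT {rtt * 1000:.0f} ms -> at most ~{mbps:.1f} Mbit/s")
```

Under this bound the 300ms path tops out below 4 Mbit/s where the 80ms path could reach almost 15 Mbit/s, which is the "devastating effect" the comment describes.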
elsif1 | about 5 years ago

I'm curious as to how good the bandwidth estimation is. That's something that can certainly be improved over TCP, but it's also something that has a lot of corner cases and is not usually done super well in UDP protocols (e.g. WebRTC).
underdeserver | about 5 years ago

I wonder how many different artifacts Cloudflare is serving on this test page. Maybe a real test is the difference grouped by the number of files served on a single page load.
ryanthedev | about 5 years ago

So HTTP/3 will be using UDP? Makes sense.

Will we see more performance tuning when it comes to MTU sizes?
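For context on the MTU question: QUIC rides in UDP datagrams, so the space available per packet is the path MTU minus the IP and UDP headers, and RFC 9000 requires the path to support at least 1200-byte UDP payloads. A quick calculation (fixed header sizes assumed, no IP options or extension headers):

```python
QUIC_MIN_UDP_PAYLOAD = 1200  # RFC 9000: minimum datagram size a path must carry

def max_udp_payload(mtu: int = 1500, ipv6: bool = False) -> int:
    """Bytes available to a QUIC packet inside one UDP datagram."""
    ip_header = 40 if ipv6 else 20  # fixed IPv6/IPv4 header, no options
    udp_header = 8
    return mtu - ip_header - udp_header

print(max_udp_payload())           # 1472 on a typical 1500-byte Ethernet MTU
print(max_udp_payload(ipv6=True))  # 1452 over IPv6
# Even at the IPv6 minimum MTU of 1280 there is room for the required 1200:
print(max_udp_payload(1280, ipv6=True) >= QUIC_MIN_UDP_PAYLOAD)  # True
```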
KenanSulayman | about 5 years ago

The USP of h3 isn't peak performance, it's 95th-percentile latencies.
mpweiher | about 5 years ago

TL;DR: still slightly slower, but "very excited"