There isn't much point to HTTP/2 past the load balancer

334 points by ciconia | 3 months ago

28 comments

hiAndrewQuinn 3 months ago
The maximum-connections limit in HTTP/1 always makes me think of queuing theory, which gives surprising conclusions, like how adding a single extra teller at a one-teller bank can cut wait times by a factor of 50, not just 2. [1]

However, I think the Poisson process isn't really the right process to assume here. Most websites that would run afoul of the 2/6/8/etc. connection limits are probably trying to open a lot of connections *at the same time*. That's very different from a situation where only 1 new person arrives every 6 minutes on average, and 2 new people arriving within 1 second of each other is a considerably rarer event.

[1]: https://www.johndcook.com/blog/2008/10/21/what-happens-when-you-add-a-new-teller/
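The teller effect from [1] is easy to reproduce with a toy M/M/1 vs. M/M/2 simulation. This is a sketch with made-up arrival and service rates (0.9 and 1.0 per unit time), not a model of any real website:

```python
import random

def simulate_queue(servers, arrival_rate=0.9, service_rate=1.0,
                   n=50_000, seed=42):
    """Average wait time in a FIFO M/M/c queue via a simple simulation."""
    rng = random.Random(seed)
    t = 0.0
    free_at = [0.0] * servers            # when each teller becomes free
    total_wait = 0.0
    for _ in range(n):
        t += rng.expovariate(arrival_rate)        # Poisson arrivals
        i = min(range(servers), key=lambda k: free_at[k])
        start = max(t, free_at[i])                # wait if all tellers busy
        total_wait += start - t
        free_at[i] = start + rng.expovariate(service_rate)
    return total_wait / n

one = simulate_queue(1)
two = simulate_queue(2)
print(f"1 teller: {one:.2f}  2 tellers: {two:.2f}  ratio: {one / two:.0f}x")
```

At 90% utilization the second teller cuts the average wait by far more than half, which is the counterintuitive result the comment alludes to; a burst of simultaneous arrivals (the browser case) breaks the Poisson assumption this relies on.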
vasilvv 3 months ago
The article seems to assume that the application backend is in the same datacenter as the load balancer, which is not necessarily true: people often put their load balancers at the network edge (which helps reduce latency when the response is cached), or just outsource them to a CDN vendor.

> In addition to the low roundtrip time, the connections between your load balancer and application server likely have a very long lifetime, hence don't suffer from TCP slow start as much, and that's assuming your operating system hasn't been tuned to disable slow start entirely, which is very common on servers.

A single HTTP/1.1 connection can only process one request at a time (unless you attempt HTTP pipelining), so if you have N persistent TCP connections to the backend, you can only handle N concurrent requests. Since all of those connections are long-lived and sending at the same time, if you make N very large, you will eventually run into TCP congestion-control convergence issues.

Also, I don't understand why the author believes HTTP/2 is less debuggable than HTTP/1; curl and Wireshark work equally well with both.
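The one-request-per-connection constraint above can be put in arithmetic. This is a toy model with an assumed uniform 5 ms service time, ignoring pipelining, queueing variance, and network effects:

```python
import math

def http1_batch_time(requests, connections, service_ms=5.0):
    """HTTP/1.1: each connection serves one request at a time, so a
    batch of concurrent requests drains in waves of at most
    `connections` requests each."""
    waves = math.ceil(requests / connections)
    return waves * service_ms

# 100 concurrent requests over 10 persistent backend connections:
print(http1_batch_time(100, 10))   # 10 waves of 5 ms each
# Doubling the pool halves the waves -- until N gets large enough that
# congestion-control convergence (mentioned above) becomes the limit:
print(http1_batch_time(100, 20))
```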
jchw 3 months ago
Personally, I'd like to see more HTTP/2 support. I think HTTP/2's duplex streams would be useful, just like SSE. In theory, WebSockets cover the same ground, and there's also a way to run WebSockets over HTTP/2, although I'm not 100% sure how that works. HTTP/2, though, handles all of it elegantly, and although it's a bit more complicated than HTTP/1.1, it's actually simpler than WebSockets, at least in some ways, and follows the usual conventions for CORS etc.

The problem? Browsers don't have a JS API for bidirectional HTTP/2 streaming, and many don't see the point, as this article expresses. NGINX doesn't support end-to-end HTTP/2. It feels like a bit of a shame, as the streaming aspect of HTTP/2 is a more natural evolution of the HTTP/1 request/response cycle than things like WebSockets and WebRTC data channels. Oh well.
treve 3 months ago
The first 80% of the article was great, but it gets a bit handwavey when it reaches its conclusion.

One thing the article gets wrong is that non-encrypted HTTP/2 exists. Browsers won't speak it, but it works great between a load balancer and your application.
fulafel 3 months ago
There's a security angle: load balancers have big problems with request smuggling. HTTP/2 changes the picture somehow; maybe someone more up to date knows whether it's currently better or worse?

ref: https://portswigger.net/web-security/request-smuggling
jiggawatts 3 months ago
Google *measured* their bandwidth usage and discovered that something like half of it was just HTTP headers! Most RPC calls have small payloads for both requests and responses.

HTTP/2 compresses headers, and that alone can make it worthwhile to use throughout a service fabric.
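Rough arithmetic makes the half-the-bandwidth claim plausible for small RPCs. The byte counts below are illustrative assumptions, not Google's measurements:

```python
def header_share(header_bytes, payload_bytes):
    """Fraction of total request bytes spent on headers."""
    return header_bytes / (header_bytes + payload_bytes)

# An uncompressed HTTP/1.1 header block (cookies, user-agent, accept-*,
# etc.) easily reaches ~700 bytes; assume a small ~500-byte RPC payload.
plain = header_share(700, 500)
# HPACK keeps repeated headers in a per-connection dynamic table, so
# subsequent requests might send only ~50 bytes of header deltas.
hpack = header_share(50, 500)
print(f"HTTP/1.1: {plain:.0%} headers, HTTP/2 + HPACK: {hpack:.0%} headers")
```

Under these assumptions headers are over half the request bytes without compression, consistent with the figure quoted in the comment.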
LAC-Tech 3 months ago
*Personally, this lack of support doesn't bother me much, because the only use case I can see for it is wanting to expose your Ruby HTTP server directly to the internet without any sort of load balancer or reverse proxy, which I understand may seem tempting, as it's "one less moving piece", but not really worth the trouble in my opinion.*

That seems like a massive benefit to me.
Animats 3 months ago
The amusing thing is that HTTP/2 is mostly useful for sites that download vast numbers of tiny JavaScript files for no really good reason. Like Google's sites.
littlecranky67 3 months ago
One overlooked point is ephemeral source-port exhaustion. If a load balancer forwards an HTTP connection to a backend system, it needs a TCP source port for the duration of that connection (not a destination port, which is probably 80 or 443). That limits the number of outgoing connections to fewer than 65535. A common workaround is to use more outgoing IP addresses toward the backends as source IPs, multiplying the available source ports to 65535 times the number of IPs.

HTTP/2 solves this, as you can multiplex requests to backend servers over a single TCP socket. So there is actually a point to using HTTP/2 for load_balancer <-> backend_system connections.
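The port-exhaustion arithmetic is easy to sketch. The 28,000 figure is an assumption based on the default Linux `net.ipv4.ip_local_port_range` (32768-60999), and 100 streams per connection is a common `SETTINGS_MAX_CONCURRENT_STREAMS` default; both are tunable:

```python
import math

def max_backend_connections(source_ips, ephemeral_ports=28_000):
    """Upper bound on concurrent LB->backend connections to one backend:
    each connection burns one (source IP, source port) pair."""
    return source_ips * ephemeral_ports

def http2_connections_needed(concurrent_requests, streams_per_conn=100):
    """With HTTP/2 multiplexing, many in-flight requests share one socket."""
    return math.ceil(concurrent_requests / streams_per_conn)

print(max_backend_connections(1))         # one source IP
print(max_backend_connections(4))         # workaround: 4 source IPs
print(http2_connections_needed(100_000))  # 1000 sockets for 100k streams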
feyman_r 3 months ago
CDNs like Akamai still don't support H2 back to origins.

That's likely not because of the wisdom in the article per se, but because of the rising complexity of managing streams and connections downstream.
monus 3 months ago
> bringing HTTP/2 all the way to the Ruby app server is significantly complexifying your infrastructure for little benefit.

I think the author wrote this with encryption-is-a-must in mind, and after he corrected those parts, the article just ended up with these weird statements. What complexity is introduced apart from changing the serving library in your main file?
immibis 3 months ago
If your load balancer is converting between HTTP/2 and HTTP/1.1, it's a reverse proxy.

Past the reverse proxy, is there a point to HTTP at all? We could also use SCGI or FastCGI past the reverse proxy. They do a better job of passing through information gathered at the first point of entry, such as the client IP address.
chucky_z 3 months ago
gRPC?
dangoodmanUT 3 months ago
Yet in my experience I see massive speedups on my *localhost* going from 1.1 to 2. Where are the numbers and tests, OP?
gwbas1c 3 months ago
> So the main motivation for HTTP/2 is multiplexing, and over the Internet ... it can have a massive impact.

> But in the data center, not so much.

*That's a very bold claim.*

I'd like to see some data showing little difference with and without HTTP/2 in the datacenter before I believe it.
miyuru 3 months ago
The TLS requirement of HTTP/2 also hindered HTTP/2 origin uptake. The TLS handshake adds latency and is unnecessary in some instances. (This is mentioned under the heading "Extra Complexity" in the article.)
awinter-py 3 months ago
plus, in my experience some h2 features behave oddly with load balancers

I don't understand this super well, but I could not get keepalives to cross the LB boundary with GCP.
a-dub 3 months ago
i think it probably varies from workload to workload. reducing handshake time and header compression can have substantial effects.

it's a shame server-side hinting/push never caught on. that was always one of the more interesting features.
wczekalski 3 months ago
It is very useful for long-lived (bidirectional) streams.
nitwit005 3 months ago
I'd agree it's not critical, but discard the assumption that requests within the data center will be fast. People have to send requests to third parties, which will often be slow. Hopefully not as slow as across the Atlantic, but still orders of magnitude worse than an internal query.

You will often be in the state where the client uses HTTP/2, and the apps use HTTP/2 to talk to the third party, but inside the data center things are HTTP/1.1, FastCGI, or similar.
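The orders-of-magnitude gap can be put in rough numbers. All latencies below are illustrative assumptions, order-of-magnitude only:

```python
# Typical round-trip times in milliseconds (illustrative assumptions):
RTT_MS = {
    "same rack":       0.1,
    "same datacenter": 0.5,
    "third-party API": 30.0,
    "transatlantic":   80.0,
}

internal = RTT_MS["same datacenter"]
for hop, rtt in RTT_MS.items():
    print(f"{hop:>15}: {rtt:5.1f} ms ({rtt / internal:.0f}x an internal query)")
```

Even a well-placed third-party API is tens of internal round trips away, which is where HTTP/2's multiplexing starts to pay off again.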
sluongng 3 months ago
Hmm, it's weird that this submission and its comments are shown to me as "hours ago" when they are all 2 days old.
kam1kazer 3 months ago
nah, I'm using HTTP/3 everywhere
najmlion 3 months ago
HTTP/2 is needed for a gRPC route on OpenShift.
wiggidy 3 months ago
Yeah yeah, whatever, just make it work in the browser so I can do gRPC duplex streams, thank you very much.
_ache_ 3 months ago
I remember being bashed on HN for saying that HTTP is hard. Yet I see nonsense here in the comments about HTTP. The whole article is good, but:

> HTTP/2 is fully encrypted, so you need all your application servers to have a key and certificate

Nope. h2c is a thing, and it's official. But the article is right that the value HTTP/2 provides isn't for the LAN, so HTTP/1.1 or HTTP/2, it doesn't matter much.

HTTP/3, however, is fully encrypted; "h3c" doesn't exist. So yes, HTTP/3 slows your connection down; it isn't suited for the LAN and should not be used there.

BUT if you actually want to encrypt even on your LAN, use HTTP/3, not encrypted HTTP/2. You will get a small but not negligible gain from 0-RTT.
Guthur 3 months ago
The RFC said "SHOULD NOT", not "MUST NOT". Couldn't we have just ignored the 2-connection limit?
kittikitti 3 months ago
If we ever get to adopting this, I will send every byte to a separate IPv6 address. Big Tech surveillance wouldn't work, so many don't see a point, like the author.
lmm 3 months ago
I think this post gets the complexity situation backwards. Sure, you *can* use a different protocol between your load balancer and your application, and it won't do *too* much harm. But you're adding an extra protocol that you have to understand, for no real benefit.

(Also, why do you even want a load balancer/reverse proxy, unless your application language sucks? The article says it "will also take care of serving static assets, normalize inbound requests, and also probably fend off at least some malicious actors", but frankly your HTTP library should already be doing all of those. Adding that extra piece means more points of failure, more potential security vulnerabilities, and for what benefit?)