
Employing QUIC Protocol to Optimize Uber’s App Performance

363 points · by nhf · about 6 years ago

11 comments

ctime · about 6 years ago

This YouTube video does a great job illustrating how well HTTP/2 works in practice.

https://www.youtube.com/watch?v=QCEid2WCszM

A lesser-known downside to the HTTP/2-over-TCP approach was actually caused by one of its improvements: the single reusable (multiplexed) connection could end up stalled or blocked due to network issues. This behavior could go unnoticed over legacy HTTP/1.1 connections because browsers opened a huge number of connections (~20) to a host, so when one failed it wouldn't block everything.
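(A toy illustration of that head-of-line-blocking trade-off, with hypothetical numbers and no real networking: streams multiplexed onto one connection all stall together when that connection hits trouble, while streams spread over many HTTP/1.1-style connections mostly survive.)

```python
# Toy model: streams are assigned to connections; a stalled
# connection blocks every stream multiplexed onto it.
import random

def blocked_streams(num_streams: int, num_connections: int, seed: int = 42) -> int:
    rng = random.Random(seed)
    # Round-robin streams onto connections, as a multiplexer might.
    assignment = [s % num_connections for s in range(num_streams)]
    stalled = rng.randrange(num_connections)  # one connection hits packet loss
    return sum(1 for conn in assignment if conn == stalled)

# HTTP/2 style: 100 streams on 1 connection -> all 100 stall.
print(blocked_streams(100, 1))
# HTTP/1.1 style: 100 streams over 20 connections -> only ~5 stall.
print(blocked_streams(100, 20))
```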
internals · about 6 years ago

What a great case study. Successfully shifting 80% of mobile traffic to QUIC for a 50% reduction in latency is amazing. QUIC and the ongoing work with multipath TCP/QUIC will be huge QoL improvements for mobile networking.
panarky · about 6 years ago

Experiment 1:

> While we used the NGINX reverse proxy to terminate TCP, it was challenging to find an openly available reverse proxy for QUIC. We built a QUIC reverse proxy in-house using the core QUIC stack from Chromium and contributed the proxy back to Chromium as open source.

Experiment 2:

> Once Google made QUIC available within Google Cloud Load Balancing, we repeated the same experiment setup with one modification: instead of using NGINX, we used the Google Cloud load balancers to terminate the TCP and QUIC connections...

> Since the Google Cloud load balancers terminate the TCP connection closer to users and are well-tuned for performance, the resulting lower RTTs significantly improved the TCP performance.
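(For the flavor of what "terminating TCP" at a proxy means, here is a minimal sketch of a TCP-terminating relay using Python's asyncio. It is not Uber's Chromium-based QUIC proxy or Google's load balancer, and the hosts and ports are placeholders; the point is simply that the client's TCP connection ends at the proxy, which forwards bytes to the origin over its own separate connection.)

```python
# Minimal sketch of a TCP-terminating reverse proxy: the proxy
# accepts the client's TCP connection (absorbing the handshake RTT
# near the user) and relays bytes to the origin over a separate,
# typically long-lived and well-tuned, connection.
import asyncio

ORIGIN_HOST, ORIGIN_PORT = "127.0.0.1", 8080  # hypothetical origin

async def pump(reader, writer):
    try:
        while data := await reader.read(65536):
            writer.write(data)
            await writer.drain()
    finally:
        writer.close()

async def handle_client(client_reader, client_writer):
    origin_reader, origin_writer = await asyncio.open_connection(
        ORIGIN_HOST, ORIGIN_PORT)
    # Relay both directions concurrently.
    await asyncio.gather(
        pump(client_reader, origin_writer),
        pump(origin_reader, client_writer))

async def main():
    server = await asyncio.start_server(handle_client, "0.0.0.0", 9000)
    async with server:
        await server.serve_forever()

if __name__ == "__main__":
    asyncio.run(main())
```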
esaym · about 6 years ago

I recently moved and got internet with Spectrum: a 200/10 service, yet my upload speeds were rarely above 5 Mbit/s. This was a business account with some web and dev servers behind it. I didn't even try to call customer service...

With a little more testing using UDP, I could see I was getting very spotty packet loss (<0.5%). I'd never tried changing the TCP congestion-control algorithm before, but I knew random packet loss is normally interpreted as congestion and hence causes a speed backoff.

I tried all of the algorithms available at the time, but the one that stood out, not only in performance but also in simplicity, was TCP-Illinois [0]. The stats provided by `ss -i` also seemed the most accurate with TCP-Illinois. I force-enable it on every machine I come across now.

[0] https://en.wikipedia.org/wiki/TCP-Illinois
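(For anyone wanting to try this without touching the system-wide default, Linux also exposes the congestion-control choice per socket. A minimal sketch, assuming a Linux kernel with the tcp_illinois module loaded; non-root processes can only select algorithms listed in net.ipv4.tcp_allowed_congestion_control.)

```python
# Linux-only sketch: pick a congestion-control algorithm per socket
# with the TCP_CONGESTION socket option. The system-wide default
# lives in /proc/sys/net/ipv4/tcp_congestion_control.
import socket

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, b"illinois")
    algo = sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, 16)
    print(algo.rstrip(b"\x00").decode())  # -> "illinois"
    sock.connect(("example.com", 80))     # this transfer now uses TCP-Illinois
```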
m3kw9 · about 6 years ago

TCP was built for the internet of long ago; even though changes have been added over time, the architecture of the protocol makes it hard to do anything drastic. Because UDP is so simple, you can basically create a new protocol on top of it, inside the payload, and emulate TCP if you want to.
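(A minimal sketch of that idea: a stop-and-wait sender that layers sequence numbers, ACKs, and retransmission on top of UDP. This is a toy, not QUIC; real protocols add windows, congestion control, encryption, and streams.)

```python
# "TCP-like" reliability layered on UDP: prefix each datagram with a
# sequence number and retransmit until the peer ACKs that number.
import socket
import struct

def reliable_send(sock: socket.socket, addr, payload: bytes, seq: int,
                  timeout: float = 0.2, max_tries: int = 10) -> None:
    packet = struct.pack("!I", seq) + payload
    sock.settimeout(timeout)
    for _ in range(max_tries):
        sock.sendto(packet, addr)
        try:
            ack, _ = sock.recvfrom(4)
            if struct.unpack("!I", ack)[0] == seq:
                return  # delivered and acknowledged
        except socket.timeout:
            continue  # lost packet or lost ACK: retransmit
    raise TimeoutError(f"no ACK for seq {seq}")
```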
sly010 · about 6 years ago

I wish the mandated minimum MTUs of IP were just a bit bigger. Uber's traffic must be so transactional that they could really just use individual UDP packets for most messaging.
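(A sketch of that style of messaging: keep each message inside a single datagram under a conservative, fragmentation-avoiding budget. IPv6's minimum MTU is 1280 bytes, and QUIC likewise assumes roughly 1200-byte packets until the path is probed. The collector address and event format below are hypothetical.)

```python
# Fire-and-forget messaging in single UDP datagrams, capped at a
# conservative payload budget to avoid IP fragmentation.
import json
import socket

SAFE_PAYLOAD = 1200  # conservative, fragmentation-avoiding budget in bytes

def send_event(sock: socket.socket, addr, event: dict) -> None:
    datagram = json.dumps(event).encode()
    if len(datagram) > SAFE_PAYLOAD:
        raise ValueError("event too large for a single safe datagram")
    sock.sendto(datagram, addr)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_event(sock, ("203.0.113.10", 4433),  # hypothetical collector
           {"type": "location", "lat": 37.77, "lon": -122.42})
```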
7ewis · about 6 years ago

So is this essentially HTTP/3?
the8472 · about 6 years ago

Isn't TLP [0] supposed to fix the largest cause (tail losses) of this issue? It should result in retransmits far sooner than the 30 seconds they mention.

> Recently developed algorithms, such as BBR, model the network more accurately and optimize for latency. QUIC lets us enable BBR and update the algorithm as it evolves.

Again, this is available for TCP in recent Linux kernels [1]. And it's sender-side, so it should be unaffected by ancient Android devices.

Are they using ancient Linux kernels on their load balancers? Or are the sysctl knobs for these features turned off in some distros?

[0] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=6ba8a3b19e764b6a65e4030ab0999be50c291e6c

[1] https://kernelnewbies.org/Linux_4.9#BBR_TCP_congestion_control_algorithm
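(For the curious, those knobs are visible under /proc. A minimal sketch for inspecting them; names and semantics vary by kernel version, but on many kernels tcp_early_retrans=3 enables TLP, and selecting bbr requires kernel 4.9+ with the tcp_bbr module available.)

```python
# Inspect (and, as root, set) the Linux sysctls behind the features
# mentioned above: tail loss probe and the congestion-control choice.
from pathlib import Path

SYSCTL = Path("/proc/sys/net/ipv4")

def show(name: str) -> None:
    print(name, "=", (SYSCTL / name).read_text().strip())

show("tcp_early_retrans")                # 3 -> tail loss probe enabled
show("tcp_available_congestion_control")
show("tcp_congestion_control")

# Requires root; equivalent to `sysctl -w net.ipv4.tcp_congestion_control=bbr`
# (SYSCTL / "tcp_congestion_control").write_text("bbr\n")
```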
ssvss · about 6 years ago

I thought DDoS prevention was difficult with UDP compared to TCP. Is that no longer the case? Does Cloudflare provide DDoS protection for QUIC/UDP?
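(Part of QUIC's answer to spoofed-source floods is address validation: the server can answer an unvalidated source with a cheap token that only the real owner of that address will receive and can echo back. A greatly simplified, hypothetical sketch of that idea, not Cloudflare's machinery or QUIC's actual Retry wire format:)

```python
# QUIC-style address validation over UDP: before doing any expensive
# work, answer an unvalidated source with a token (an HMAC over its
# address). Spoofed sources never see the token, so they cannot
# complete the exchange.
import hashlib
import hmac
import os

SECRET = os.urandom(32)  # rotated periodically in a real deployment

def make_token(addr) -> bytes:
    host, port = addr
    return hmac.new(SECRET, f"{host}:{port}".encode(), hashlib.sha256).digest()

def handle_datagram(addr, data: bytes) -> bytes:
    token = make_token(addr)
    if data.startswith(token):
        return b"OK: address validated, proceed with handshake"
    return b"RETRY:" + token  # cheap, stateless reply
```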
jefftk · about 6 years ago

I'm surprised the "alternatives considered" section doesn't have a "write something custom for core functionality using UDP". I would be curious to read why they decided not to go that way, given their scale and the potential gains from not using a general-purpose protocol.

(Something like: make the entire standard journey from opening the app to requesting a car run over something custom, and then leave the rest of the app using TCP.)
OrgNet · about 6 years ago

This kind of latency improvement only matters if they are planning to do autopilot from the cloud? (That would be crazy, especially if they don't have a fallback.)