What nobody talks about is the lack of server-side offloads for QUIC: things like TSO, LRO, and even hardware-offloaded kTLS. Without those offloads, I estimate I'd be lucky to get 200Gb/s out of the same Netflix CDN server hardware that can serve TLS-encrypted TCP at over 700Gb/s.<p>Do the benefits of QUIC really justify the economic and environmental impacts of that kind of efficiency loss on the server side?<p>And yes, I know that some of these offloads are being worked on, but they are not here today.
A great writeup, but just to take issue with (or at least discuss) this one point:<p>> There may be some performance penalty of shifting the transport code from the kernel to user space<p>This makes it sound like the kernel merely allows a slightly more optimised implementation. But I think it's more than that - the transport code can be completely offloaded from the CPU to the network card/processor. That can only happen if the transport is abstracted behind syscalls, not implemented in user space.
I often see QUIC described as "faster than TCP", but in my experience this has only been the case when it comes to handshake latency.<p>Throughput-wise, I've found in real-world testing that QUIC is often slower than TCP:
(1) QUIC uses more CPU, due to the processing happening in user space: pushing 1Gbps required 3 CPU cores. On my single-CPU VPS, QUIC maxed out at 400Mbps because of the CPU bottleneck, while with TCP+TLS I could comfortably achieve 5Gbps.
(2) QUIC was less resilient to packet loss (surprisingly). This was particularly noticeable on mobile devices.<p>If your use case is to move bytes between powerful servers over a reliable, wired connection, QUIC may beat TCP in most ways that matter. But for use in real-world mobile apps, TCP may still offer better throughput.<p>Caveat: this is all data from using the quic-go package. The C libraries may well be more efficient :)
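The handshake-latency advantage mentioned above can be sketched with simple round-trip arithmetic. This is an illustrative sketch, not from the comment: it assumes TLS 1.3 over TCP with no TCP Fast Open or 0-RTT resumption, and the 50ms RTT is just an example value.

```python
def handshake_time_ms(rtt_ms: float, round_trips: int) -> float:
    """Time before the first application byte can be sent."""
    return rtt_ms * round_trips

RTT = 50.0  # illustrative mobile round-trip time, in milliseconds

# TCP+TLS 1.3: one RTT for the TCP handshake, then one for the TLS handshake.
tcp_tls = handshake_time_ms(RTT, 2)

# QUIC: the transport and TLS 1.3 handshakes are combined into a single RTT.
quic = handshake_time_ms(RTT, 1)

print(f"TCP+TLS 1.3: {tcp_tls:.0f} ms, QUIC: {quic:.0f} ms")
```

On this model QUIC halves connection-setup latency (100ms vs 50ms at a 50ms RTT), which matches the observation that the "faster" claim shows up mainly in handshake latency rather than in bulk throughput.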
No performance numbers, though. I was hoping for third-party benchmarks.<p>QUIC is optimized for Google's use case - Google clients talking to Google servers, with many Google streams multiplexed into one big pipe. That is not the typical non-Google case. Early performance numbers from Google indicated only a relatively small gain (15%?) even for that case.
One of the best writeups about QUIC, and my new go-to reference for anyone who asks about the topic (since the QUIC RFCs are obtuse, and I say that as someone who's read many RFCs to troubleshoot networking issues).
This is great! I've tried writing reliable stream implementations over UDP for games but always got lost in the weeds.<p>The main use case I need is to have reliable streams and unreliable datagrams that continue working even if either peer's IP address changes. Something like that would allow cell phones to form scalable p2p mesh networks, for example.<p>It needs to be encrypted and punch through NAT in a fully automated way, falling back to a (secure/anonymous) matching server if both peers are behind NAT. I don't know about that last part, but it appears that WebTransport over HTTP/3 over QUIC over UDP might be able to do most of that and be a potential replacement for WebRTC data channels:<p><a href="https://web.dev/webtransport/" rel="nofollow">https://web.dev/webtransport/</a><p><a href="https://www.w3.org/TR/webtransport/" rel="nofollow">https://www.w3.org/TR/webtransport/</a>
Under "QUIC Issues" it mentions "Private QUIC" and offhandedly remarks that there is no way to use QUIC without the built-in, CA-based TLS. This means that QUIC cannot be used without the continuing approval of a third-party incorporated entity. That is a very serious problem once QUIC takes over in user software (made by megacorps) that eventually drops HTTP/1.1 support. It would be the end of the open web and the beginning of a corporate-controlled one.<p>Ignoring this extremely dangerous outcome of a QUIC-only world, this write-up is excellent and really clears things up.