With both a DNS-over-HTTPS client and potentially a DNS-over-QUIC client in the browser, and advertisements served over QUIC... there is a good chance that the world will see unblockable advertisements in our near future.<p>I don't think this is a good idea. About a decade ago, as a research project, I ran a honeypot farm of 13 machines to learn more about malware. The honeypot machines autonomously surfed the net, parsing the DOM and choosing random links. I ran them in a sandbox and was getting weekly malware hits.<p>Much to my surprise... most of the malware was coming over advertisement networks on shady websites.
I’m not an expert, but QUIC doesn’t seem like enough of an improvement over TCP to warrant replacing it, especially given that it’s even more complex.<p>- 0-RTT handshakes are great, but there’s still the problem of slow start.<p>- QUIC’s congestion control mechanism is pretty much the same as TCP’s and doesn’t perform particularly well over e.g. mobile networks.<p>- Mandatory TLS means it’s going to be a huge PITA if you ever need to run a QUIC service locally (say, in a container).<p>- Having it in user space means there’s a good chance we’ll end up with hundreds of implementations, all with their own quirks. It’s bad enough trying to optimise for the three big TCP stacks.
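On the slow-start point: a back-of-the-envelope sketch (assuming the common initial congestion window of 10 packets and ~1200-byte packets; the numbers are illustrative, not from any spec) of why 0-RTT alone doesn't make large responses fast:

```python
# Rough illustration, not a simulator: even with a 0-RTT handshake,
# slow start still gates how quickly the first large response arrives.

def round_trips_to_deliver(total_bytes, init_cwnd_pkts=10, mss=1200):
    """Round trips needed to deliver total_bytes, with the congestion
    window doubling each RTT (classic slow start, no loss)."""
    cwnd = init_cwnd_pkts * mss
    sent = 0
    rtts = 0
    while sent < total_bytes:
        sent += cwnd
        cwnd *= 2  # exponential growth during slow start
        rtts += 1
    return rtts

# A 1 MB response still takes several round trips, 0-RTT or not.
print(round_trips_to_deliver(1_000_000))
```

So 0-RTT saves the handshake round trips, but the bulk of a big transfer is still paced by the congestion controller.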
In my opinion, an around-4% performance improvement doesn't justify the introduction of this more complicated protocol (maybe Google knows how this benefits their ad business, like forcing everybody to HTTPS so they can increase their control over the internet, since their scripts are already included by the majority of websites, reporting all the important metrics to them regardless of HTTPS).
Those are pretty modest gains for a layer 4 change. It's going to be much harder to tool/debug this stuff. Is it expected that servers pretty much always support all the HTTP protocols or is the goal to eventually replace the earlier forms?
Relying on Alt-Svc for HTTP/3 is really bad, so I hope Chromium is following this with <a href="https://blog.cloudflare.com/speeding-up-https-and-http-3-negotiation-with-dns/" rel="nofollow">https://blog.cloudflare.com/speeding-up-https-and-http-3-neg...</a> right away.
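For context, Alt-Svc only tells a client about HTTP/3 after it has already connected over TCP, which is why the DNS-based approach in the linked post matters. A rough sketch (not a full RFC 7838 parser; the header value below is a made-up example) of what a client extracts from the header:

```python
# Hedged sketch: minimal parsing of an Alt-Svc header value, ignoring
# parameters like ma (max-age). Real parsers must handle quoting and
# more syntax than this.

def parse_alt_svc(value):
    """Return {protocol-id: alternative-authority} from an Alt-Svc value."""
    services = {}
    for entry in value.split(","):
        proto, _, rest = entry.strip().partition("=")
        authority = rest.split(";")[0].strip().strip('"')
        services[proto] = authority
    return services

hdr = 'h3=":443"; ma=86400, h3-29=":443"; ma=86400'
print(parse_alt_svc(hdr))
```

Only on a later connection can the client act on this, which is the "really bad" part: the first visit always pays the TCP+TLS price.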
Does anyone else think it's weird/futile that they're building a protocol over UDP?<p>QUIC is disabled on our corporate network, simply because the network firewall/SSL inspector can't see what's going on and can't regulate traffic, so it just blocks all UDP. Our internet still works because sites see that QUIC doesn't work and fall back to TCP. Heaven forbid the entire web moves to QUIC, or we'd be in trouble.
Anyone know why there's no new URL scheme for HTTP/3? We didn't rely on Alt-Svc headers for switching to HTTPS. We gave it its own scheme. Why aren't we doing that for HTTP/3?
The level of complexity of this thing goes way beyond the HTTP-over-CORBA experiment that took place at the end of the millennium.<p>The point is: despite CORBA's convoluted complexity, at least the HTTP + CORBA experiment was somewhat sane, as it allowed multiplexed connections right out of the box and relied upon standard network capabilities without reinventing the wheel. All that in 1999 or so.<p>DNS over HTTPS, QUIC, et al. look like nothing less than a monopolistic attack on the open web. Google really wants to own the Internet.
Has the amplification attack been solved recently? Last I checked the spec still said "at most 3x amplification" (which I expect will be enough for attackers) and the server implementation that I was testing went <i>well</i> beyond that. If that's not solved and this gets deployed on a few big networks, I can already tell you what the next popular protocol will be for taking down websites.
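For reference, the limit in question: the QUIC drafts say a server may send at most three times the bytes it has received before validating the client's address, and clients must pad Initial packets to a minimum size. A trivial sketch of the arithmetic (the constants are from the drafts; the attack scenario is hypothetical):

```python
# Back-of-the-envelope check of the anti-amplification limit.

AMPLIFICATION_LIMIT = 3   # max send/receive ratio before address validation
MIN_INITIAL_SIZE = 1200   # minimum size of a client Initial packet, in bytes

def max_unvalidated_reply(bytes_received):
    """Bytes a spec-compliant server may send before the peer address
    is validated."""
    return AMPLIFICATION_LIMIT * bytes_received

# One spoofed 1200-byte Initial can legitimately draw up to 3600 bytes
# toward the victim -- already a usable ratio for a reflection attack.
print(max_unvalidated_reply(MIN_INITIAL_SIZE))
```

And that is the behavior of a *compliant* server; an implementation that exceeds the limit makes the reflection ratio worse.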
Can someone provide the tradeoffs and benefits of QUIC vs WebSockets vs WebRTC? I know WebSockets run over TCP and WebRTC requires some special tunneling logic, but aside from that I don't particularly know how QUIC is better or different, apart from using UDP.
Does anyone know when HTTP/3 is going to get wider support in gRPC? There's an open issue in the GitHub project about this [0]. In IoT use cases where you want to do bi-directional streaming of data to/from a location, getting rid of some head-of-line blocking will make me a happy camper.<p>0 - <a href="https://github.com/grpc/grpc/issues/19126" rel="nofollow">https://github.com/grpc/grpc/issues/19126</a>
While it seems good to have a more efficient transport, I can't make sense of this:<p><pre><code> > Since the subsequent IETF drafts 30 and 31 do not have compatibility-breaking changes, we currently are not planning to change the over-the-wire identifier.
</code></pre>
Is there slow-moving internal software at Google that relies on this nonce? This looks like the kind of thing that some clients will come to rely on (for reasons yet unknown). That's how clients grow the standard in unintended ways, no?<p>On another note:<p><pre><code> 3. optionally, the trailer field section, if present, sent as a single HEADERS frame.
</code></pre>
I see you're paving the way for gRPC on the Web (of browsers) by adding trailers (a header section sent after the body), which today is supported for neither HTTP/1 nor HTTP/2 by at least the top 3 browser vendors by volume.<p>I'm divided: I'd be glad to get rid of grpc-gateway and WebSockets, but isn't proto-encoded communication bad for the open Web, in principle? Maybe it's only a tooling problem.
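For anyone unfamiliar with trailers: this is roughly what a trailer section looks like on the wire in HTTP/1.1 chunked encoding (a sketch only; `Grpc-Status` is just the usual gRPC example of a trailer field, and HTTP/2 and /3 carry trailers in a final HEADERS frame rather than in chunk syntax):

```python
# Sketch of a chunked-encoded body followed by a trailer section,
# per the RFC 7230 chunked transfer coding. Field names/values here
# are illustrative.

def chunked_with_trailer(body: bytes, trailers: dict) -> bytes:
    """Encode body as a single chunk, then append trailer fields."""
    out = b"%x\r\n%s\r\n" % (len(body), body)  # one chunk: size, CRLF, data
    out += b"0\r\n"                            # zero-length chunk ends the body
    for name, value in trailers.items():
        out += f"{name}: {value}\r\n".encode() # trailer fields come after
    out += b"\r\n"
    return out

wire = chunked_with_trailer(b"hello", {"Grpc-Status": "0"})
print(wire)
```

Since the status arrives after the body, a client API has to expose fields it only learns at end-of-stream, which is exactly what browser fetch APIs haven't supported.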
Implementing QUIC is not trivial, so I suspect it will be years before it gets reasonable adoption in standard frameworks and languages that prefer not to interop with C.
I remember a Hacker News post saying that many of the top firewall vendors suggest disabling UDP over port 443. Apparently it's hard for packet inspection, restricted browsing, etc. in the enterprise space.<p>Have there been any leaps in firewall tech, or will most companies still disable this?
Does anyone know what the maturity of standalone C-compatible implementations is like?<p>Curl seems to be evaluating two different stacks: ngtcp2+nghttp3 (C, seemingly from the developers behind aria2) and Quiche (Rust, from Cloudflare).<p>Then there's Google's C++ QUICHE implementation, which seems not to be used by anyone outside of Chromium (even Node.js apparently isn't using it, unless the code is just old).<p>There are several more: <a href="https://en.wikipedia.org/wiki/HTTP/3#Libraries" rel="nofollow">https://en.wikipedia.org/wiki/HTTP/3#Libraries</a><p>It's a bit of a mess, and until Curl makes a decision I'm not sure where to go.
Unrelated: Chromium 86 brings the back-forward cache, which makes back navigation instantaneous in many cases. This was, I believe, the biggest optimization that was Firefox-only.
Is this being added to Chromium's code? It's hard to tell whether it's being added (and in which release), or whether parts of it, or all of it, are already in Chrome or Chromium and are just being enabled now.
I am generally pro QUIC, but after seeing <a href="https://tools.ietf.org/html/draft-ietf-quic-datagram-01" rel="nofollow">https://tools.ietf.org/html/draft-ietf-quic-datagram-01</a> I have to ask, why not have all the streaming stuff on top of this? Then the layering looks like:<p>1: connections management + encryption<p>2: streams and multiplexing<p>Seems pretty good to me?
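A toy sketch of the layering that comment suggests, with an invented frame format (this is not QUIC's actual wire encoding): a stream layer reduces to (stream_id, offset, data) frames carried inside datagrams, and reliability/reassembly lives entirely in layer 2:

```python
# Illustrative only: stream frames packed into an unreliable datagram
# payload. Header layout (two big-endian u64s) is made up for the sketch.
import struct

HEADER = struct.Struct("!QQ")  # stream_id, byte offset within the stream

def encode_frame(stream_id: int, offset: int, data: bytes) -> bytes:
    """Pack one stream frame for transmission in a datagram."""
    return HEADER.pack(stream_id, offset) + data

def decode_frame(frame: bytes):
    """Recover (stream_id, offset, data) from a received datagram payload."""
    stream_id, offset = HEADER.unpack(frame[:HEADER.size])
    return stream_id, offset, frame[HEADER.size:]

frame = encode_frame(4, 0, b"GET /")
print(decode_frame(frame))
```

The catch, presumably, is that congestion control, loss detection, and flow control interact with both layers, which is why QUIC defines streams and datagrams as siblings inside one connection rather than stacking one on the other.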
“Today this changes. We've found that IETF QUIC significantly outperforms HTTP over TLS 1.3 over TCP. In particular, Google search latency decreases by over 2%. YouTube rebuffer time decreased by over 9%, while client throughput increased by over 3% on desktop and over 7% on mobile.“<p>This is the most sickening sentence for me. The myopic internal focus. ‘Look we’ve made our new thing a standard and look it makes our products run faster’. This is just blatant exploitation that’s occurring as there is too much centralised ownership. In my opinion this is predatory behaviour packaged up as open source good for all.
I would like to see a performance comparison with SRT, for example, or other UDP-based protocols. I mean, if it's good for video, it must be good for the web too.
Those incremental gains don't seem much better than what Linux TCP improvements deliver each year, especially with state-of-the-art congestion control / bufferbloat algorithms turned on.
Also, TCP Fast Open is ridiculously old, and I can't see why mainstream equipment still wouldn't support it, on average.