At least some of this document is based on much older versions of QUIC. In particular, it mentions RST_STREAM, which went away by the end of 2018 in favour of RESET_STREAM.<p>In fact it's possible it isn't even talking about the proposed IETF protocol QUIC at all, but instead Google's QUIC ("gQUIC" in modern parlance), in which case this might as well be a paper saying the iPhone is vulnerable to an attack when it turns out to mean a 1980s device named "iPhone", not the Apple product.<p>It certainly references a bunch of gQUIC papers, which could mean those are blind citations from a hasty Google search by researchers who didn't read their own references, but could equally mean they really did do this work on gQUIC.
<i>"allowing adversaries to infer the users' visited websites by eavesdropping on the transmission channel."</i><p>Ah, ok, that kind of fingerprinting. I suppose this might be where our local ISPs find a way to replace their now-threatened DNS query sniffing. Assuming 95.4% accuracy means what I think it does, that's pretty impressive.
It's an odd state of the world where we will have to add significant amounts of noise to stop browsers from revealing which site they are visiting, because transfer protocols have become too efficient and browsers try to be efficient in predictable ways.
What are the proposed benefits of QUIC?<p>I may be misguided, but I feel a little uneasy about bundling TCP-like functionality, TLS and HTTP into a single protocol over UDP.
Anything other than a pipenet will be fingerprintable at some level. A pipenet is a network in which all links run at constant utilization, with dummy traffic (indistinguishable from useful traffic) sent over the links during idle periods. Pipenets are of course inefficient, but everything else is going to reveal <i>some</i> kind of signal distinguishable from noise at <i>some</i> level.
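The pipenet idea can be sketched in a few lines. This is a hypothetical toy model, not any real implementation: the link emits exactly one fixed-size cell per tick, padding with dummy cells when there is no real data, so utilization (and the wire pattern) is constant regardless of load. `CELL_SIZE` and the tick-based scheduling are assumptions chosen for illustration.

```python
import queue

CELL_SIZE = 512  # fixed wire size for every cell; an illustrative assumption


def pad_cell(payload: bytes) -> bytes:
    # Real and dummy cells are padded to the same length, so an eavesdropper
    # cannot tell them apart by size. (A real design would also encrypt them
    # so content is indistinguishable too.)
    return payload.ljust(CELL_SIZE, b"\x00")


def pipenet_link(outbound: "queue.Queue[bytes]", ticks: int) -> list[bytes]:
    """Emit exactly one fixed-size cell per tick: queued data if available,
    otherwise a dummy cell. The observable send rate never varies."""
    wire = []
    for _ in range(ticks):
        try:
            cell = pad_cell(outbound.get_nowait())
        except queue.Empty:
            cell = pad_cell(b"")  # dummy traffic during idle periods
        wire.append(cell)
    return wire
```

With one queued message and five ticks, the link still emits five identical-length cells; the inefficiency the comment mentions is exactly those four dummy cells.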
HTTP/1.1-style pipelining (not necessarily SPDY-, HTTP/2- or QUIC-style multiplexing) can effectively counter this sort of fingerprinting, which relies on analysing request-response sizes.^1 I have used HTTP/1.1 pipelining outside the browser for bulk data retrieval for decades. Although I do not normally randomise requests, the UNIX-style filters I wrote to do pipelining could easily be used for this purpose.<p>1. <a href="https://blog.torproject.org/experimental-defense-website-traffic-fingerprinting" rel="nofollow">https://blog.torproject.org/experimental-defense-website-tra...</a>
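A minimal sketch of what that randomised pipelining could look like (this is not the commenter's actual filters; `pipeline_requests` and its parameters are hypothetical). Several GET requests are concatenated into one buffer to be written back-to-back on a single keep-alive connection, and an optional seed shuffles the request order so per-resource response sizes are harder to line up against a precomputed site profile.

```python
import random


def pipeline_requests(host, paths, seed=None):
    """Build one byte string of back-to-back HTTP/1.1 GET requests for a
    single connection (classic pipelining). If a seed is given, the request
    order is randomised, perturbing the size/order signature an
    eavesdropper would observe."""
    if seed is not None:
        paths = paths[:]  # don't mutate the caller's list
        random.Random(seed).shuffle(paths)
    reqs = [f"GET {p} HTTP/1.1\r\nHost: {host}\r\n\r\n" for p in paths]
    # Ask the server to close the connection after the final response.
    reqs[-1] = reqs[-1].replace("\r\n\r\n", "\r\nConnection: close\r\n\r\n", 1)
    return "".join(reqs).encode()
```

The whole batch would then be written to the socket at once and the responses read back in order, which is what makes individual request-response size pairs harder to isolate than with one-at-a-time fetching.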
We also investigated the differences in fingerprintability between HTTP/QUIC and HTTPS (WF in the Age of QUIC, PETS '21) and found equivalent fingerprintability with deep-learning classifiers when eavesdropping on the traffic through WireGuard. It's interesting, though, to see the stark difference they found between fingerprinting HTTP/QUIC and HTTPS when using only the first ~40 packets. The trends in those early packets also allowed us to easily differentiate between the two types of traffic over the VPN.<p>Our paper, in case you want to read more on this area: <a href="https://petsymposium.org/2021/files/papers/issue2/popets-2021-0017.pdf" rel="nofollow">https://petsymposium.org/2021/files/papers/issue2/popets-202...</a>