This thing is cool. I saw a live demo at IETF 118 in Prague last month and was like "woah... I didn't think this would ever be possible." It essentially eliminates bufferbloat, which makes it awesome for video chat.<p>It requires a bit in the IP header to carry information about when buffers are filling up (I think?), but it actually works. It feels like living in the future!
I was wondering how the receiver tells the sender that there was congestion. So I tried to figure it out, but it wasn't the easiest thing to find.<p>Essentially the details are documented in <a href="https://www.rfc-editor.org/info/rfc3168" rel="nofollow noreferrer">https://www.rfc-editor.org/info/rfc3168</a><p>The short answer is that there is more than just one flag. From what I gather there are three. One flag that the sender sets to inform the routers that it can handle ECN. A second flag is used by the router to tell the recipient that the router was congested. And a third flag is set by the recipient when it sends an ACK packet back to the sender.<p>For more details, here is the relevant section:<p>* An ECT codepoint is set in packets transmitted by the sender to indicate that ECN is supported by the transport entities for these packets.<p>* An ECN-capable router detects impending congestion and detects that an ECT codepoint is set in the packet it is about to drop. Instead of dropping the packet, the router chooses to set the CE codepoint in the IP header and forwards the packet.<p>* The receiver receives the packet with the CE codepoint set, and sets the ECN-Echo flag in its next TCP ACK sent to the sender.<p>* The sender receives the TCP ACK with ECN-Echo set, and reacts to the congestion as if a packet had been dropped.<p>* The sender sets the CWR flag in the TCP header of the next packet sent to the receiver to acknowledge its receipt of and reaction to the ECN-Echo flag.
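For the curious, the IP-level part of this is just two bits in the TOS/traffic-class byte. Here's a little sketch I wrote of how the codepoints work per RFC 3168 (my own toy code, not from any real stack):

```python
# The ECN field is the two low-order bits of the IP TOS / traffic-class
# byte (RFC 3168, Section 5).
ECN_MASK = 0b11

CODEPOINTS = {
    0b00: "Not-ECT",  # sender's transport does not support ECN
    0b01: "ECT(1)",   # ECN-Capable Transport
    0b10: "ECT(0)",   # ECN-Capable Transport
    0b11: "CE",       # Congestion Experienced, set by a router
}

def ecn_codepoint(tos_byte: int) -> str:
    """Decode the ECN codepoint from an IP TOS/traffic-class byte."""
    return CODEPOINTS[tos_byte & ECN_MASK]

def mark_ce(tos_byte: int) -> int:
    """What an ECN-capable router does instead of dropping: set CE."""
    return tos_byte | ECN_MASK

tos = 0b00000010                     # sender marked the packet ECT(0)
print(ecn_codepoint(tos))            # ECT(0)
print(ecn_codepoint(mark_ce(tos)))   # CE
```

So the router never has to talk to the sender directly; it just flips those two bits to CE and the receiver echoes that back at the TCP layer.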
Bob Briscoe has been on this line of thought for a long time. I'd recommend reading a couple of his classics on the topic, including:<p><a href="http://www.sigcomm.org/sites/default/files/ccr/papers/2007/April/1232919-1232926.pdf" rel="nofollow noreferrer">http://www.sigcomm.org/sites/default/files/ccr/papers/2007/A...</a><p><a href="https://dl.acm.org/doi/pdf/10.1145/1080091.1080124" rel="nofollow noreferrer">https://dl.acm.org/doi/pdf/10.1145/1080091.1080124</a>
Some tests were done on Comcast's cable plant.<p>The slide deck below explains it:<p><a href="https://datatracker.ietf.org/meeting/118/materials/slides-118-tsvwg-sessa-61-l4s-experience-00" rel="nofollow noreferrer">https://datatracker.ietf.org/meeting/118/materials/slides-11...</a><p>Not sure where this leads, but I guess ISPs will start charging tolls for express lanes.
If you are interested in learning more about L4S, there is a webinar series starting today on understandinglatency.com. Some of the authors of L4S, the head of Comcast's L4S field trial, and some critical voices are speaking.
In case anyone else was curious, I found a brief demo of this in use with a video feed from an RC car: <a href="https://www.youtube.com/watch?v=RZmS10djDEg" rel="nofollow noreferrer">https://www.youtube.com/watch?v=RZmS10djDEg</a>
While it's a step in the right direction, there's a problem if there's at least one 'malicious' actor who ignores the congestion feedback and just wants a larger share of bandwidth. Then all other actors will retreat and the unfair actor gets what it wants. Unfortunately it is hard for a good actor to know whether the other actors are playing nicely or not. Only if a good actor knows that there's fair queuing can it trust L4S to treat it fairly.<p>This can be solved by complementing L4S with fair queuing (e.g. fq_codel) and by making sure that congestion control can detect the presence of fair queuing (<a href="https://github.com/muxamilian/fair-queuing-aware-congestion-control">https://github.com/muxamilian/fair-queuing-aware-congestion-...</a>).
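You can see the starvation effect in a tiny toy simulation (entirely my own sketch, with made-up units and a crude AIMD model, not anything from the L4S spec): a responsive flow sharing one FIFO queue with a flow that ignores marks collapses, while per-flow fair queuing protects its share.

```python
CAPACITY = 100.0  # bottleneck capacity, arbitrary units

def share_fifo(rounds: int = 200) -> float:
    """Single shared FIFO queue: both flows see congestion marks,
    but only the well-behaved flow reacts. Returns its final rate."""
    good, greedy = 1.0, 1.0
    for _ in range(rounds):
        if good + greedy > CAPACITY:  # queue congested -> marks for everyone
            good /= 2                 # responsive: multiplicative decrease
        else:
            good += 1                 # additive increase
        greedy += 1                   # 'malicious' flow ignores marks entirely
    return good

def share_fq(rounds: int = 200) -> float:
    """Per-flow fair queuing: a flow is only marked when IT exceeds its own
    fair share, so the greedy flow can't push the good one below it."""
    good = 1.0
    fair_share = CAPACITY / 2
    for _ in range(rounds):
        if good > fair_share:
            good /= 2
        else:
            good += 1
    return good

print(share_fifo())  # near zero: the responsive flow has been starved
print(share_fq())    # oscillates around its fair share of ~50
```

Obviously real congestion control is far more subtle, but the asymmetry is the point: backing off is only rational if the queue guarantees you something in return.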
What does this mean in practice as a user? Will e.g. video calls be closer to real-time? There's usually about 0.5-1 second of delay, which leads to a lot of hiccups and interruptions when speaking with each other. What other applications will be significantly improved?
Essentially, L4S shrinks the latency feedback loop. The second half of this video explains it quite nicely: <a href="https://youtu.be/tAVwmUG21OY?si=lydbqfNL80Y8Uxvp" rel="nofollow noreferrer">https://youtu.be/tAVwmUG21OY?si=lydbqfNL80Y8Uxvp</a>
How does it compare to μTP (Micro Transport Protocol)?<p><a href="https://en.wikipedia.org/wiki/Micro_Transport_Protocol" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Micro_Transport_Protocol</a>
I'm confused: everyone here is talking about improvements to video conferencing and streaming, but those applications use UDP instead of TCP, so I don't understand how this will change anything.
An RFC which simply sells two other RFCs... sigh<p>> Center TCP (DCTCP) [RFC8257] and a Dual-Queue Coupled AQM [RFC9332]<p>This only exists to ask that cable modems (and maybe mobile phones?) use those too.
Like diffserv? Allowing you to tell the ISP about low-latency traffic?<p>Of course, ISPs would have to aggressively limit this type of traffic, as it would otherwise be abused (video game gameplay traffic and voice call streams).
How does the feedback loop work? I.e. the routers need to tell the source (upstream) to back off, but this uses an IP header bit, so there is no guaranteed return stream....
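As I understand the RFC 3168 design quoted upthread, there is no separate return channel: the router only marks the forward packets, and the receiver echoes the mark back inside its ordinary transport ACKs (the TCP ECE flag). A toy model of the sender/receiver halves (my own sketch, not real stack code):

```python
class Sender:
    """Toy TCP sender reacting to the ECN-Echo (ECE) flag in ACKs."""

    def __init__(self, cwnd: float = 10.0):
        self.cwnd = cwnd        # congestion window, in packets
        self.send_cwr = False   # CWR flag to set on the next data packet

    def on_ack(self, ece: bool) -> None:
        if ece:
            self.cwnd = max(1.0, self.cwnd / 2)  # react as if a packet dropped
            self.send_cwr = True                  # acknowledge the ECN-Echo

def receiver_ack(ce_marked: bool) -> bool:
    """The receiver sets ECE in its ACK iff the data packet arrived CE-marked."""
    return ce_marked

s = Sender()
s.on_ack(receiver_ack(ce_marked=True))  # router marked CE on the way down
print(s.cwnd, s.send_cwr)               # 5.0 True
```

So the "return stream" is just whatever ACK traffic the transport already sends; if there were no ACKs at all, the sender would get no feedback either way.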
I’m having trouble determining if my 3.1 cable modem supports the draft spec. Is there a way to tell based on serial number? Are there hardware limitations that would prevent older 3.1 modems from receiving a software update to enable support?