Summary: TCP throughput drops dramatically when packet loss is present. This technique uses forward error correction to compensate for packet loss, resulting in higher effective throughput over lossy links. Many WiFi and cellular connections are lossy, so this would be helpful in those cases.<p>They haven't improved the underlying link rate at all. In fact, the FEC overhead is going to reduce the effective link rate. However, in some edge-case high packet loss scenarios, the reduced packet loss will more than make up for the reduced effective link rate.
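For anyone who wants a feel for that tradeoff, here's a rough back-of-the-envelope sketch in Python using the Mathis et al. steady-state TCP throughput approximation; the link rate, RTT, FEC overhead, and residual-loss numbers are made-up assumptions for illustration, not figures from the article or paper:

```python
# Back-of-the-envelope comparison: plain TCP on a lossy link vs. the same
# link with some capacity spent on FEC.  Numbers are illustrative only.
import math

MSS = 1460 * 8          # segment size in bits
RTT = 0.05              # 50 ms round trip
LINK_RATE = 10e6        # 10 Mbit/s raw link rate

def mathis_throughput(loss_rate):
    """Mathis et al. steady-state TCP throughput estimate (bits/s)."""
    return (MSS / RTT) * math.sqrt(1.5 / loss_rate)

def effective_throughput(loss_rate, fec_overhead, residual_loss):
    """Capacity left after FEC overhead, capped by what TCP can push
    through at the residual (post-FEC) loss rate."""
    capacity = LINK_RATE * (1 - fec_overhead)
    return min(capacity, mathis_throughput(residual_loss))

# 2% random loss, no FEC: TCP collapses well below the link rate.
print(f"no FEC  : {effective_throughput(0.02, 0.0, 0.02) / 1e6:.2f} Mbit/s")
# Spend 20% of the link on FEC; assume it pushes residual loss down to 0.01%.
print(f"with FEC: {effective_throughput(0.02, 0.2, 1e-4) / 1e6:.2f} Mbit/s")
```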
Google, don't speculate. The primary source appears to be <a href="http://www.mit.edu/~medard/papers2011/Network%20CodingMeets%20TCP-%20Theory%20and%20Implementation.pdf" rel="nofollow">http://www.mit.edu/~medard/papers2011/Network%20CodingMeets%...</a> and similar papers from the same authors.
See the original discussion & better article at <a href="https://news.ycombinator.com/item?id=4686743" rel="nofollow">https://news.ycombinator.com/item?id=4686743</a>
This looks like Fountain codes: <a href="http://en.wikipedia.org/wiki/Fountain_code" rel="nofollow">http://en.wikipedia.org/wiki/Fountain_code</a><p>Basically, you split your data into blocks, XOR random blocks together, and the client can recreate the data by solving the equations of which blocks were XORed with which.<p>A good tutorial is here: <a href="http://blog.notdot.net/2012/01/Damn-Cool-Algorithms-Fountain-Codes" rel="nofollow">http://blog.notdot.net/2012/01/Damn-Cool-Algorithms-Fountain...</a><p>And a fast implementation: <a href="http://en.wikipedia.org/wiki/Raptor_code" rel="nofollow">http://en.wikipedia.org/wiki/Raptor_code</a>
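For the curious, here's a toy sketch of that idea (uniform random subsets and a peeling decoder). A real LT/Raptor code uses a carefully tuned degree distribution, so treat this as an illustration of the XOR-and-solve mechanics, not a usable codec:

```python
# Toy fountain-style codec: each encoded symbol is the XOR of a random
# subset of source blocks; the decoder "peels" symbols that reference
# exactly one still-unknown block.
import os, random

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def encode(blocks, n_symbols, seed=0):
    rng = random.Random(seed)
    symbols = []
    for _ in range(n_symbols):
        degree = rng.randint(1, len(blocks))
        idxs = set(rng.sample(range(len(blocks)), degree))
        payload = bytes(len(blocks[0]))
        for i in idxs:
            payload = xor(payload, blocks[i])
        symbols.append((idxs, payload))      # (which blocks were XORed, result)
    return symbols

def decode(received, k):
    recovered = {}                           # block index -> bytes
    progress = True
    while progress and len(recovered) < k:
        progress = False
        for idxs, payload in received:
            unknown = idxs - set(recovered)
            if len(unknown) == 1:            # exactly one unknown block: solvable
                j = unknown.pop()
                for i in idxs - {j}:
                    payload = xor(payload, recovered[i])
                recovered[j] = payload
                progress = True
    if len(recovered) == k:
        return b"".join(recovered[i] for i in range(k))
    return None                              # peeling stalled; need more symbols

blocks = [os.urandom(32) for _ in range(8)]
symbols = encode(blocks, n_symbols=20)
received = random.Random(42).sample(symbols, 14)   # pretend 6 symbols were lost
result = decode(received, k=8)
print("decoded OK" if result == b"".join(blocks) else
      "not enough useful symbols arrived -- send a few more")
```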
It's the job of the 802.11 L2 to hide correctable packet loss from L3, so most radio link loss isn't passed on to TCP. This could just as well read "the wifi L2 error correction doesn't try hard enough".<p>This problem is very common, but it wants fixing at L2 and not in TCP. Turning up the FEC on the L2 would reduce its capacity even further, though, since more of the bandwidth is taken up by the FEC (and the same goes for this TCP-level FEC).<p>3G gets it wrong on the other extreme: it pretends to always have 0% packet loss; your packets just sometimes show up 30-100 seconds late and in order.
I see lots of comments talking about FEC. That's not how the article reads to me. Granted the author (or I?) may be completely out in left field, but here's my take on what it says:<p>Let's suppose you have a mathematical process that outputs a stream of [useful] data. The description of the process is much, much smaller than the output. You can "compress" the data by sending the process (or equation) instead. Think π. Do you transmit a million digits of π or do you transmit the instruction "π to a million digits"? The latter is shorter.<p>Now, reverse the process: given an arbitrary set of data, find an equation (or process) that represents it. Not easy for sure. Perhaps not possible. I recall as a teenager reading an article about fractals and compression that called on the reader to imagine a fractal equation that could re-output your specific arbitrary data.<p>If I've totally missed the article's point, please correct me, but explain why it also talks about algebra.<p>EDIT: I re-read and noticed this: "If part of the message is lost, the receiver can solve the equation to derive the missing data." I can see the FEC nod here.<p>Guh. I guess I'm blind tonight. "Wireless networks are in desperate need for forward error correction (FEC), and that’s exactly what coded TCP provides." I cannot for the life of me understand why they'd need to keep this a secret.
There's a company in Ottawa implementing a similar idea, but based on carving up packets and then inserting one or more redundant XOR packets (RAID-style). Their name for it is IPQ: <a href="http://liveqos.com/products/ipq/" rel="nofollow">http://liveqos.com/products/ipq/</a><p>They have a patent on this: <a href="http://www.google.com/patents/US20120201147" rel="nofollow">http://www.google.com/patents/US20120201147</a><p>(Disclosure: I was an intern there in 2009, when it was IPeak Networks.)
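The RAID-style idea is easy to see in miniature. This isn't IPQ's actual wire format, just the single-parity-packet concept: k data packets plus one XOR packet, so any one loss per group is recoverable.

```python
# RAID-style parity in miniature: for every group of k packets, send one
# extra packet that is the XOR of the group.  Any single lost packet in the
# group can be rebuilt from the survivors.
from functools import reduce

def xor_packets(packets):
    # byte-wise XOR across equally sized packets
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*packets))

def make_group(data_packets):
    # k data packets plus one parity packet (their XOR)
    return data_packets + [xor_packets(data_packets)]

def recover_missing(survivors):
    # XOR of all surviving packets (data + parity) equals the one that was lost
    return xor_packets(survivors)

group = make_group([b"pkt0....", b"pkt1....", b"pkt2...."])
survivors = [p for i, p in enumerate(group) if i != 1]   # pretend packet 1 was dropped
assert recover_missing(survivors) == b"pkt1...."
```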
Has anyone done an experiment to see what simply duplicating every TCP packet sent over wireless does? If you're in a situation where you're limited by random packet loss and not by raw bandwidth, I imagine it could help...<p>Obviously this is a much weaker and less efficient solution than what is proposed in the paper, but it would be trivial to implement. I believe netem allows you to simulate this.
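As a quick sanity check of the idea (assuming independent random loss, which real wireless links only approximate), duplication turns a loss rate of p into roughly p², at the cost of half the usable bandwidth:

```python
# Simulate "send every packet twice" over a link with independent random
# loss: TCP only sees a loss if BOTH copies are dropped, so the effective
# loss rate falls from p to about p**2.  Illustrative only.
import random

def residual_loss(p, copies, trials=100_000, rng=random.Random(1)):
    lost = sum(all(rng.random() < p for _ in range(copies)) for _ in range(trials))
    return lost / trials

for p in (0.01, 0.05):
    print(f"raw loss {p:.0%}: single copy {residual_loss(p, 1):.4%}, "
          f"duplicated {residual_loss(p, 2):.4%} of segments lost")
```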
Is this likely to be any different from introducing a little error-correction?<p>Also, how were we not doing this already?<p>Also, I need a writer. Whoever wrote this up made it sound WAY cooler than when I explain error correcting codes.
This is one of those posts where the comments on the site are better than the article submitted.<p>Coded TCP would also mean less power consumption on mobile phones: there would be no need to increase signal power to get better speed or voice quality.
I'm not up-to-date on networking technologies, but it's surprising to me that some sort of error correction hasn't already been made standard.<p>I wonder if something along the lines of old-school parity files would work in the packet world? Basically just blast out the packets, and any that were lost you reconstruct using the redundant data sent with the other packets.
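Back-of-the-envelope for how much parity that would take: with an ideal erasure code (e.g. Reed-Solomon, which PAR files use), any k of the k+m transmitted packets are enough to rebuild a group, so under an assumed independent loss rate p the recovery probability is a simple binomial sum. The group size and the 2% loss figure below are just illustrative:

```python
# Probability that a group of k data + m parity packets is fully
# recoverable with an ideal erasure code, under independent loss rate p:
# the group survives if at most m of the k+m packets are lost.
from math import comb

def p_recoverable(k, m, p):
    n = k + m
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(m + 1))

for m in range(4):
    print(f"k=10 data + {m} parity at 2% loss: "
          f"{p_recoverable(10, m, 0.02):.3%} of groups fully recoverable")
```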
Isn't the downside of FEC-encoded packets increased latency? Instead of sending each packet immediately, don't you need to accumulate n packets to encode as a group? Or does the math allow incremental encoding? Simple parity is incremental, but the FEC on DSL lines always added 40ms of latency.
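On the "simple parity is incremental" point, here's a minimal sketch of what that looks like: data packets go out the moment they're handed over, a running XOR is kept on the side, and only the repair packet waits for the group to fill, so the data itself picks up no encoding delay. This is plain parity, not the random linear combinations coded TCP actually uses:

```python
# Streaming parity encoder: forward each data packet immediately, keep a
# running XOR, and emit one repair packet per group of k packets.
import os

class IncrementalParity:
    def __init__(self, group_size, packet_size):
        self.k = group_size
        self.acc = bytearray(packet_size)   # running XOR of the current group
        self.count = 0

    def on_send(self, packet):
        """Call as each data packet is sent; returns a repair packet every k packets."""
        for i, b in enumerate(packet):
            self.acc[i] ^= b
        self.count += 1
        if self.count == self.k:
            repair = bytes(self.acc)
            self.acc = bytearray(len(self.acc))
            self.count = 0
            return repair
        return None

enc = IncrementalParity(group_size=4, packet_size=8)
for _ in range(8):
    repair = enc.on_send(os.urandom(8))     # data packet goes on the wire here
    if repair:
        print("emit repair packet:", repair.hex())
```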
What's funny here is that wireless networks already use FEC at the physical layer. This just adds more (and less conservative) FEC higher up, for apps where it makes more sense to reduce throughput and add a little average-case latency in order to avoid worst-case latency and worst-case throughput.
My friend did something similar to the concepts in this article, except he applied it to compression of audio. It was basically finding patterns in the audio and transforming these into equations. Interesting stuff.