How I love it when people without deep knowledge of a subject write authoritative-sounding articles.<p>Without guarantee of completeness, to avoid the spread of misinformation:<p>- IPv6 fragmentation has nothing to do with some "minimum payload size" (whatever that is) - there simply is no fragmentation done by routers; the sender can still fragment however it pleases, and presumably will do so whenever it has to send a packet that doesn't fit through the path MTU.<p>- The endpoints use Packet Too Big ICMPv6 messages to determine the _path_ MTU, which is different from just "the MTU".<p>- With IPv4, the sender chooses (via the DF flag) whether a router will fragment when the packet exceeds the next-hop MTU or whether the router should drop the packet and send a Fragmentation Needed ICMP message - the latter again being used for path MTU discovery (see the sketch below).<p>- Path MTU discovery is useful because it allows the sending IP implementation to push the chunking higher up the stack when the sending higher-level protocol has that capability (as is the case with TCP, but not with UDP, for example), which tends to produce lower overhead. Unfortunately, some clueless firewall administrators, such as those responsible for AWS EC2, filter all ICMP because for unknown reasons they consider it bad, thus breaking PMTUD, which can lead to hanging TCP connections.<p>- TCP sequence numbers count bytes, always, with the special case of SYN and FIN also counting as "bytes" in the sequence - never segments.
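To make the IPv4/PMTUD point concrete, here is a rough Python sketch (my own illustration, not from the article) of how a sender on Linux opts into PMTUD, which makes the kernel set the DF bit and refuse to fragment; the option values are hardcoded Linux ones and the destination address is just a placeholder:
<pre><code>
# Sketch: opt a UDP socket into path MTU discovery on Linux, so DF is set and
# oversized sends fail with EMSGSIZE instead of being fragmented by the sender.
# Constant values below are the Linux ones (assumed; not portable).
import socket

IP_MTU_DISCOVER = getattr(socket, "IP_MTU_DISCOVER", 10)  # Linux setsockopt option
IP_PMTUDISC_DO = 2                                         # always set DF, never fragment
IP_MTU = getattr(socket, "IP_MTU", 14)                     # query current path MTU estimate

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.setsockopt(socket.IPPROTO_IP, IP_MTU_DISCOVER, IP_PMTUDISC_DO)
s.connect(("192.0.2.1", 9))   # placeholder address; connect() lets us query IP_MTU later

try:
    s.send(b"x" * 65000)      # almost certainly exceeds the path MTU
except OSError as e:
    print("send failed, current path MTU estimate:",
          s.getsockopt(socket.IPPROTO_IP, IP_MTU), "-", e)
</code></pre>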
I've heard that a good way to gauge a person's general technological literacy is to simply ask "what happens when I type a URL in a browser and hit Enter?" Obviously, the question is deliberately open-ended, and any step in the process can be broken down into more detailed steps (up to a point). I'd like to see an article that initially shows high-level steps (e.g. DNS request, HTTP request, server processing, HTTP response, parsing and rendering), but allows each step to be expanded progressively with increasing detail.
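As a rough illustration of how the first few of those high-level steps could be expanded, here is a minimal sketch (placeholder host, no parsing or rendering) that does the DNS and HTTP parts by hand:
<pre><code>
# Bare-bones walk through the first steps of "what happens when I hit Enter":
# DNS lookup, TCP connect, HTTP request, HTTP response. Parsing and rendering
# are left out; example.com is just a stand-in host.
import socket

host = "example.com"

# 1. DNS request: resolve the hostname to an IP address.
addr = socket.getaddrinfo(host, 80, type=socket.SOCK_STREAM)[0][4]

# 2./3. TCP connection + HTTP request.
s = socket.create_connection(addr)
s.sendall(b"GET / HTTP/1.1\r\nHost: " + host.encode() + b"\r\nConnection: close\r\n\r\n")

# 4. HTTP response (here just dumped; a browser would parse and render it).
response = b""
while chunk := s.recv(4096):
    response += chunk
print(response.split(b"\r\n")[0])   # status line, e.g. b'HTTP/1.1 200 OK'
s.close()
</code></pre>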
If you want to further your understanding of network protocols, there's an excellent open textbook available here: <a href="http://cnp3book.info.ucl.ac.be/" rel="nofollow">http://cnp3book.info.ucl.ac.be/</a>
<i>> There’s a misconception that restarting the (HTTP) request will fix the problem. That is not the case. Again, TCP will resend those packets that need resending on its own.</i><p>But that's not true if the connection is interrupted at the socket level, right?<p>For example, if the device switches from 3G to Wi-Fi, or from Wi-Fi to a wired connection, then, I believe, its hardware address changes, its IP address changes, and the socket becomes stale. But would the TCP connection be closed right away, or would it hang until some timeout? (And does it depend on the OS?)
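For what it's worth, without any traffic the kernel can keep such a connection in an "established" state for a long time, and the defaults do depend on the OS. A sketch of the socket options that bound this (the TCP_* options are Linux-specific and the timeout values here are just example numbers):
<pre><code>
# Sketch: bound how long a dead TCP connection can hang around.
# SO_KEEPALIVE is portable; the TCP_KEEP* knobs and TCP_USER_TIMEOUT are
# Linux-specific, and the OS defaults (often ~2 hours of idle time before
# keepalive probing starts) are what make dead connections appear to hang.
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)

if hasattr(socket, "TCP_KEEPIDLE"):          # Linux: start probing after 30s idle
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 30)
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 10)  # probe every 10s
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 3)     # give up after 3 failures

if hasattr(socket, "TCP_USER_TIMEOUT"):      # Linux: abort if data stays unacked for 60s
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_USER_TIMEOUT, 60_000)
</code></pre>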
If you want to learn about IP, TCP, UDP and some of the protocols below them, I would highly recommend reading Richard Stevens' book TCP/IP Illustrated, Volume 1: The Protocols.<p>For two reasons: it's probably one of the best introductions to the subject that has ever been written, and it's a model example of how a technical book should be written.<p>I'd be hard-pressed to find a reason not to go this route at least once in your life. I know the material pretty well, but I still re-read Stevens' books every few years just because they are so good.
It's nice to see the recent increased emphasis on Web/mobile developers understanding the technologies that link it all together. The next thing I would add is a high-level overview of the sockets API (a minimal sketch below). While these topics aren't critical to most developers' day-to-day work, they are certainly useful to understand.
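As a taste of what such an overview might cover, here is a minimal sketch of the core calls; the port number is arbitrary:
<pre><code>
# Minimal sketch of the core sockets API: socket/bind/listen/accept on the
# server side, connect/send/recv on the client side. Port 8080 is arbitrary.
import socket

def echo_server(port: int = 8080) -> None:
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("", port))          # bind: claim a local address and port
    srv.listen()                  # listen: start queueing incoming connections
    while True:
        conn, peer = srv.accept() # accept: one TCP connection per client
        with conn:
            data = conn.recv(4096)   # recv/send: the byte-stream interface
            conn.sendall(data)

# The client side is the mirror image:
#   c = socket.create_connection(("localhost", 8080))
#   c.sendall(b"hello"); print(c.recv(4096))
</code></pre>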
I'm still curious about why we have both TCP and UDP.<p>For example, if you're doing peer-to-peer, you need low latency, and UDP is best for that.<p>I think it's because TCP is hardware-optimized, but it's designed to transmit a file as a stream, so if a packet is lost or corrupted, it waits for that packet to be retransmitted. In that fashion, TCP tends to be slower, but on average it's more efficient for single files or web pages.<p>You don't have good granularity with TCP, but if you want to work with UDP, you need to add redundancy and other mechanisms yourself to make sure everything arrives.<p>ENet is an example of using UDP for gaming, where the goal is the lowest possible latency.
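To illustrate the UDP side of that trade-off, a minimal sketch of the datagram model (placeholder peer address and port), where nothing is retransmitted or reordered for you:
<pre><code>
# Sketch of UDP's datagram model: each packet stands alone, nothing is
# retransmitted or reordered by the transport, so a lost packet never stalls
# later ones - but the application must cope with loss itself.
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("", 0))                 # no listen/accept: UDP has no connections

peer = ("127.0.0.1", 9999)         # placeholder peer
for seq in range(3):
    # The application adds its own sequence numbers, acks or redundancy
    # (as a library like ENet does) only where it actually needs them.
    sock.sendto(seq.to_bytes(4, "big") + b" state update", peer)
</code></pre>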
> The improvements of using HTTP pipelining can be quite dramatic over high-latency connections – which is what you have when your iPhone is not on Wi-Fi. In fact, there’s been some research that suggests that there’s no additional performance benefit to using SPDY over HTTP pipelining on mobile networks<p>Excellent summary, but I think pipelining has been oversimplified. HTTP pipelining is a FIFO queue: the responses have to be delivered in the same order as the requests. So if the first (or an early) response takes longer to generate, all the responses behind it in the pipeline have to wait. SPDY is not susceptible to this.
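A sketch of what that FIFO behaviour looks like on the wire (placeholder host and paths): two requests written back-to-back on one connection, with the second response necessarily queued behind the first:
<pre><code>
# Sketch of HTTP/1.1 pipelining: two requests written back-to-back on one TCP
# connection. The server must answer them in request order, so if /slow takes
# a while, the /fast response is stuck behind it (head-of-line blocking).
# SPDY (and later HTTP/2) multiplexes streams so responses can interleave.
import socket

s = socket.create_connection(("example.com", 80))   # placeholder host
s.sendall(
    b"GET /slow HTTP/1.1\r\nHost: example.com\r\n\r\n"
    b"GET /fast HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n"
)
# Responses arrive strictly in request order on the single byte stream:
while chunk := s.recv(4096):
    print(chunk.decode(errors="replace"), end="")
</code></pre>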
I prefer <i>The Unix and Internet Fundamentals HOWTO</i>:<p><a href="http://en.tldp.org/HOWTO/Unix-and-Internet-Fundamentals-HOWTO/" rel="nofollow">http://en.tldp.org/HOWTO/Unix-and-Internet-Fundamentals-HOWT...</a>
David Wetherall teaches this course @ Coursera.<p><a href="https://www.coursera.org/course/comnetworks" rel="nofollow">https://www.coursera.org/course/comnetworks</a><p>He pretty much wrote the book.
There's a minor typo below the HTTPS section. It should be TLS not TSL ;)<p>Edit: By the way, it was a nice article. I especially liked the tcpdump explanation.