I'd been working on network congestion in 1984-1985, and wrote the classic RFCs on this.[1][2] I did the networking work at Ford Aerospace, but they weren't in the networking business, so it was a sideline activity. Once we had the in-house networking working well, that effort was over. By 1986, I was out of networking and working on some things for a non-networking startup, which turned out very well.

There was much computer vendor hostility to TCP/IP, because it was vendor neutral. DEC had DECnet. IBM had SNA. Telcos had X.25. Networking was seen as an important part of vendor lock-in. Working for a company that was a big buyer of computing, I had the job, for a while, of making it all talk to each other.

Berkeley BSD's TCP was so influential because it was free, not because it was good. It took about five years for it to get good. Pre-Berkeley implementations included 3COM's UNET (we ran that one, after I made heavy modifications), Phil Karn's KA9Q version for amateur radio, Dave Mills' Fuzzball implementation for small PDP-11 machines, and Mark Crispin's implementation for DEC 36-bit machines. The first releases of Berkeley's TCP would only talk to other Berkeley TCPs, and only over Ethernet. Long-haul links didn't work, and it didn't interoperate properly with other implementations. (The initial release of 4.3BSD would only talk to some systems during even-numbered 4-hour periods, because Berkeley botched the sequence number arithmetic. I spent three days finding that bug, and it wasn't fun. Casts to (unsigned) had been misused; see the sketch below.)

The Berkeley crowd liked dropping packets much more than I did. I used ICMP congestion control messages (Source Quench) to tell the sender to slow down, rather than dropping packets. I was more concerned with links with large round-trip times, because we were linking multiple company locations, while Berkeley was, at the time, mostly a local area network user, so they had much lower round trip times.
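A note on that sequence number bug, since it's the kind of thing that still bites people: TCP sequence numbers are 32-bit values that wrap around, so "does A come before B" has to be computed modulo 2^32. The usual correct idiom looks something like this in C (a sketch, not the actual 4.3BSD code):

    #include <stdint.h>

    /* TCP sequence numbers wrap modulo 2^32, so "a precedes b" cannot be
       tested with a plain unsigned comparison.  The standard idiom is to
       take the unsigned difference and look at its sign as a signed value.
       (Sketch only -- not the actual BSD code.) */
    static int seq_before(uint32_t a, uint32_t b)
    {
        return (int32_t)(a - b) < 0;   /* true if a precedes b, mod 2^32 */
    }

    /* The broken approach: a bare unsigned compare, which gives the wrong
       answer whenever the two values straddle the 2^32 wrap point. */
    static int seq_before_broken(uint32_t a, uint32_t b)
    {
        return a < b;
    }

Get that wrong and two hosts whose sequence numbers happen to straddle the wrap point simply won't talk to each other.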
I'm responsible for the term "congestion collapse", and I devised "fair queuing". I also pointed out the game-theory problems of datagram networks: sending too much is a win for you, but a lose for everybody. This was all in 1984. Today we have "bufferbloat", which is a localized form of congestion collapse, fair queuing is widely used (but not widely enough), and we have enough core network bandwidth that the congestion is mostly at the edges. Today's hint: if you have something with a huge FIFO buffer feeding a bottleneck, you're doing it wrong. Looking at you, home routers. (A toy sketch of the per-flow queuing idea is below.)

Back then, I realized that fair queuing could be turned into what's now called "traffic shaping", but decided not to publish that, because it would have provided ammunition for the people who wanted to charge for Internet traffic. There were telco people who assumed that something like the Internet would have usage billing. This could easily have gone the other way. Look up "TP4", an alternative to TCP pushed by the telcos. It was supported by Microsoft up to Windows 2000.

Berkeley broke the Nagle algorithm by putting in delayed ACKs. Those were a bad idea. The fixed ACK delay is designed for keyboard echo and nothing else. When a packet needs an ACK, Berkeley delayed sending the ACK for a fixed time, in hopes that it could be piggybacked on the returning echoed-character packet. The fixed time, usually 500ms, was chosen based on human keyboarding speed. Delaying an ACK is a bet that a reply packet is coming back before the sender wants to send again. That's a lousy bet for anything but classical Telnet. Unfortunately, I didn't hear about this until years later, after it was too late; I had moved on to PC software by then. (The usual application-level workaround is sketched below.)

UNET was expensive: several thousand dollars per machine. BSD offered a free replacement. So 3COM exited TCP/IP and went off to do "PC LANs", which were a thing in the 1980s.
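To make the fair queuing point concrete, here is a toy version of the idea: keep a queue per flow and service the flows round-robin, so one heavy sender can't monopolize a big FIFO in front of the bottleneck. This is a sketch with made-up names, not the RFC 970 algorithm and nothing like a production queuing discipline such as fq_codel:

    #include <stddef.h>

    /* Toy fair queuing: one queue per flow, serviced round-robin.
       Hypothetical types and names, for illustration only; real
       implementations are byte-aware and handle flow hashing,
       timestamps, drop policy, and much more. */
    #define MAX_FLOWS 64

    struct packet { struct packet *next; /* headers/payload omitted */ };
    struct flow_queue { struct packet *head, *tail; };

    static struct flow_queue flows[MAX_FLOWS];
    static unsigned next_flow;                 /* round-robin position */

    static void fq_enqueue(unsigned flow_id, struct packet *p)
    {
        struct flow_queue *q = &flows[flow_id % MAX_FLOWS];
        p->next = NULL;
        if (q->tail) q->tail->next = p; else q->head = p;
        q->tail = p;
    }

    /* Take one packet from the next non-empty flow, then move on,
       so every active flow gets a turn at the bottleneck link. */
    static struct packet *fq_dequeue(void)
    {
        for (unsigned i = 0; i < MAX_FLOWS; i++) {
            unsigned f = (next_flow + i) % MAX_FLOWS;
            struct flow_queue *q = &flows[f];
            if (q->head) {
                struct packet *p = q->head;
                q->head = p->next;
                if (!q->head) q->tail = NULL;
                next_flow = (f + 1) % MAX_FLOWS;
                return p;
            }
        }
        return NULL;                           /* nothing queued */
    }

Contrast that with a single huge FIFO, where whoever fills the buffer first owns the latency of everyone behind them. That is the bufferbloat failure mode.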
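And on the delayed-ACK interaction: the workaround most applications reach for today is to turn off the Nagle algorithm entirely with TCP_NODELAY, which attacks the send side of the interaction rather than the delayed ACKs themselves. A minimal sketch, with error handling omitted and "sock" assumed to be an already-connected TCP socket:

    #include <netinet/in.h>
    #include <netinet/tcp.h>    /* TCP_NODELAY */
    #include <sys/socket.h>

    /* Disable sender-side small-packet batching (the Nagle algorithm)
       on a connected TCP socket.  This is the usual knob applications
       use to dodge the Nagle / delayed-ACK stall; the delayed-ACK side
       lives in the receiver's kernel and isn't portably controllable. */
    static int disable_nagle(int sock)
    {
        int one = 1;
        return setsockopt(sock, IPPROTO_TCP, TCP_NODELAY,
                          &one, sizeof(one));
    }

Note that this suppresses the send-side batching wholesale; the delayed ACKs complained about above stay.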
John Nagle

[1] https://tools.ietf.org/html/rfc896

[2] https://tools.ietf.org/html/rfc970