科技回声 (Tech Echo)
A technology-news platform built with Next.js, offering global tech news and discussion.


© 2025 科技回声 (Tech Echo). All rights reserved.

Van Jacobson Denies Averting Internet Meltdown in 1980s (2012)

47 points, by kercker, about 7 years ago

5 comments

Animats, about 7 years ago
I'd been working on network congestion in 1984-1985, and wrote the classic RFCs on this.[1][2] I did the networking work at Ford Aerospace, but they weren't in the networking business, so it was a sideline activity. Once we had the in-house networking working well, that effort was over. By 1986, I was out of networking and working on some things for a non-networking startup, which turned out very well.

There was much computer vendor hostility to TCP/IP, because it was vendor-neutral. DEC had DECnet. IBM had SNA. Telcos had X.25. Networking was seen as an important part of vendor lock-in. Working for a company that was a big buyer of computing, I had the job, for a while, of making it all talk to each other.

Berkeley BSD's TCP was so influential because it was free, not because it was good. It took about five years for it to get good. Pre-Berkeley implementations included 3COM's UNET (we ran that one, after I made heavy modifications), Phil Karn's KA9Q version for amateur radio, Dave Mills' Fuzzball implementation for small PDP-11 machines, and Mark Crispin's implementation for DEC 36-bit machines. The first releases of Berkeley's TCP would only talk to Berkeley TCP, and only over Ethernet. Long-haul links didn't work, and it didn't interoperate properly with other implementations. (The initial release of 4.3BSD would only talk to some systems during even-numbered 4-hour periods because Berkeley botched the sequence number arithmetic. I spent 3 days finding that bug, and it wasn't fun. Casts to (unsigned) had been misused.)

The Berkeley crowd liked dropping packets much more than I did. I used ICMP congestion control messages to tell the sender to slow down, rather than dropping packets. I was more concerned with links with large round-trip times, because we were linking multiple company locations, while Berkeley was, at the time, mostly a local-area network user.
So they had much lower round-trip times.

I'm responsible for the term "congestion collapse" and devised "fair queuing". I also pointed out the game-theory problems of datagram networks - sending too much is a win for you, but a lose for everybody. This was all in 1984. Today, we have "bufferbloat", which is a localized form of congestion collapse; fair queuing is widely used (but not widely enough); and we have enough core network bandwidth that the congestion is mostly at the edges. Today's hint: if you have something with a huge FIFO buffer feeding a bottleneck, you're doing it wrong. Looking at you, home routers.

Back then, I realized that fair queuing could be turned into what's now called "traffic shaping", but decided not to publish that because it would provide ammunition for the people who wanted to charge for Internet traffic. There were telco people who assumed that something like the Internet would have usage billing. This could easily have gone the other way. Look up "TP4", an alternative to TCP pushed by the telcos. That was supported by Microsoft up to Windows 2000.

Berkeley broke the Nagle algorithm by putting in delayed ACKs. Those were a bad idea. The fixed ACK delay is designed for keyboard echo and nothing else. When a packet needs an ACK, Berkeley delayed sending the ACK for a fixed time, in hopes that it could be piggybacked on the returned echoed-character packet. The fixed time, usually 500ms, was chosen based on human keyboarding speed. Delaying an ACK is a bet that a reply packet is coming back before the sender wants to send again. This is a lousy bet for anything but classical Telnet. Unfortunately, I didn't hear about this until years after it was too late, having moved to PC software.

UNET was expensive; several thousand dollars per machine. BSD offered a free replacement.
So 3COM exited TCP/IP and went off to do "PC LANs", which were a thing in the 1980s.

John Nagle

[1] https://tools.ietf.org/html/rfc896
[2] https://tools.ietf.org/html/rfc970
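The 4.3BSD interoperability bug described above came from comparing 32-bit sequence numbers with ordinary integer comparison (via misused unsigned casts) instead of modular "serial number" comparison. This is not the actual BSD code, just a minimal sketch of the distinction:

```python
MOD = 2 ** 32   # TCP sequence numbers live in a 32-bit space
HALF = 2 ** 31

def seq_lt(a, b):
    """True if sequence number a logically precedes b, modulo 2^32.

    RFC 1982-style serial comparison: b is "after" a when the forward
    distance from a to b is nonzero and less than half the space.
    """
    return a != b and (b - a) % MOD < HALF

# A connection whose sequence numbers wrap past zero:
a = MOD - 10   # just before wraparound
b = 5          # just after wraparound -- logically later than a

assert a > b           # naive integer comparison says a is later: wrong
assert seq_lt(a, b)    # modular comparison gets the order right
assert not seq_lt(b, a)
```

The "even-numbered 4-hour periods" symptom is exactly what a broken comparison produces: it gives the right answer for half the sequence space and the wrong answer for the other half.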
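Applications that hit the Nagle-plus-delayed-ACK interaction today commonly work around it by disabling the Nagle algorithm on latency-sensitive sockets with the standard TCP_NODELAY option. A minimal sketch:

```python
import socket

# Create a TCP socket and disable the Nagle algorithm, so small writes
# go out immediately instead of waiting for the ACK of earlier data.
# This sidesteps the Nagle/delayed-ACK stall for latency-sensitive
# traffic, at the cost of more small packets on the wire.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
assert s.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY) != 0
s.close()
```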
jgrahamc, about 7 years ago
Here's Van Jacobson's 1988 paper on slow start and other congestion avoidance algorithms. It's really worth reading to understand what was happening: https://cs162.eecs.berkeley.edu/static/readings/jacobson-congestion.pdf

It's personally fascinating to me because in about 1984/5 I was working on a local area network and with a friend 'invented' a connection-oriented protocol that used counters to spot dropped packets (because of Ethernet collisions) and request retransmission. We successfully overloaded the network of about 16 machines using this algorithm as it went crazy retransmitting and upping the collision rate, and we began working on very similar algorithms to fix this (but didn't get that far because we had A-levels to do).
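The failure mode described here - retransmitting harder as the network gets worse - is what exponential backoff of the retransmit timer prevents. A toy sketch of the idea (function and parameter names are my own, not from the paper):

```python
def next_rto(rto, max_rto=60.0):
    """Binary exponential backoff of a retransmission timeout.

    Each consecutive unacknowledged retransmission doubles the wait
    (capped at max_rto), so a congested network sees exponentially
    less retransmit load instead of a storm of ever-faster resends.
    """
    return min(rto * 2, max_rto)

rto = 1.0
for _ in range(3):      # three consecutive timeouts
    rto = next_rto(rto)
assert rto == 8.0       # 1 -> 2 -> 4 -> 8 seconds
```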
mhandley, about 7 years ago
The original 1988 Congestion Avoidance and Control paper is still well worth reading: http://ee.lbl.gov/papers/congavoid.pdf

The observations about timers and ACK-clocking are just as relevant today.

That's not to say that 1988-style Additive Increase Multiplicative Decrease is perfect as a congestion control scheme. There are lots of issues, ranging from not working well on paths with high bandwidth-delay product, to being overly sensitive to non-congestive packet loss, to causing bufferbloat. But I don't think there's any doubt that the Internet survived growing both in traffic and in hosts by many orders of magnitude, survived all the underlying technologies being replaced multiple times, and survived massive changes in applications, all in part because TCP does a reasonable job of matching offered load to available capacity.
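The AIMD rule mentioned above fits in a few lines. This is a toy model of the per-RTT window update (segment-based, with my own parameter names), not Jacobson's implementation:

```python
def aimd_step(cwnd, loss, alpha=1.0, beta=0.5):
    """One round-trip of Additive Increase / Multiplicative Decrease.

    cwnd is the congestion window in segments: grow linearly by alpha
    each RTT without loss; cut multiplicatively by beta on loss,
    never below one segment. The asymmetry produces the classic
    sawtooth and drives competing flows toward a fair share.
    """
    if loss:
        return max(1.0, cwnd * beta)
    return cwnd + alpha

cwnd = 8.0
cwnd = aimd_step(cwnd, loss=False)  # additive increase -> 9.0
cwnd = aimd_step(cwnd, loss=False)  # -> 10.0
cwnd = aimd_step(cwnd, loss=True)   # multiplicative decrease -> 5.0
assert cwnd == 5.0
```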
drewg123, about 7 years ago
Van is at Google, and has been for quite some time. He's been working on BBR there: https://www.networkworld.com/article/3218084/lan-wan/how-google-is-speeding-up-the-internet.html
th-ai, about 7 years ago
Van Jacobson: https://en.wikipedia.org/wiki/Named_data_networking