Way back then there were competing visions of what the internet might be. Some were corporate (and somewhat based around corporate lock-in): DNA/BNA/SNA/etc. Others came more from a postal/telegraph sort of world: X.25/OSI. In many ways TCP/IP was an outlier, and the fact that it didn't really belong to anyone had a lot to do with why it succeeded (also, its designers understood datagrams, and weren't really worried about how to charge for dropped packets).<p>I suspect (I wasn't even close to being in the room) that freezing TCP was a very pragmatic move: if you wanted to be accepted as THE internet you had to be perceived as finished, otherwise someone else's many-thousand-person-year project would have won.<p>One of the great things about IP is that it's extensible: there's still room for protocols other than UDP/TCP. You can still write something new and better, or a fixed TCP, and install it alongside the existing protocols. Of course, getting everyone to accept it and use it will be difficult.
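As a minimal sketch of that extensibility (assuming Linux and root or CAP_NET_RAW privileges; the destination 192.0.2.1 is a documentation address, not a real host), a raw socket lets a hypothetical new transport ride directly on IP next to TCP and UDP:<p><pre><code>import socket

# RFC 3692 reserves IP protocol numbers 253 and 254 for experiments,
# so a new transport can sit on IP alongside TCP (6) and UDP (17).
EXPERIMENTAL_PROTO = 253

# Raw sockets require root (or CAP_NET_RAW) on Linux; the kernel
# builds the IP header and stamps our protocol number into it.
sock = socket.socket(socket.AF_INET, socket.SOCK_RAW, EXPERIMENTAL_PROTO)

# The payload would be the new transport's own header plus data.
# 192.0.2.1 is TEST-NET-1 (documentation only); port 0 is ignored
# for raw sockets.
sock.sendto(b"hello from a brand-new transport", ("192.0.2.1", 0))</code></pre><p>Which also illustrates the adoption problem: middleboxes and firewalls that only understand TCP and UDP will happily drop those packets.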
For one, we're limited to TCP and UDP, without a better protocol for media streaming.<p>Authentication was omitted, resulting in horrifying UX and security holes.<p>And the omission of encryption led to sloppy tunneling solutions that are still being reworked 50 years later.
One of the biggest bummers is how much the Internet has collapsed down to TCP, and of that a very large share is HTTP/HTTPS. UDP is still going strong for a handful of important applications. But if your traffic isn't one of those two protocols, good luck getting end-to-end transit.
TCP/IP as we know it today was an ARPANET compromise between various views. A great first-party account of it (assume the RINA views carry some bias, but the paper is a first-principles look at networking):<p><a href="https://netfoundry.io/whitepapers/Post-IP-RINA-Advances.pdf" rel="nofollow">https://netfoundry.io/whitepapers/Post-IP-RINA-Advances.pdf</a><p>Architecturally, speed prevailed over security and control. This was great for the web's evolution over the past 20 years.<p>Now is the time to assess whether alternative architectures such as RINA are better fits for use cases that could benefit from recursive designs (likely with different business models than today's commercial web), including use cases in which security, control, and quality are of the highest value.
That's always been obvious to me. The Web & Net are built on RFCs: Requests for Comments. They aren't called PRDs (protocol requirement documents) or something along those lines. Kind of beautiful, actually, that it wasn't as top-down as I would have expected.
Reminds me of RINA (Recursive Internetwork Architecture):<p>> RINA's fundamental principles are that computer networking is just Inter-Process Communication or IPC, and that layering should be done based on scope/scale, with a single recurring set of protocols, rather than based on function, with specialized protocols. The protocol instances in one layer interface with the protocol instances on higher and lower layers via new concepts and entities that effectively reify networking functions currently specific to protocols like BGP, OSPF and ARP. In this way, RINA claims to support features like mobility, multihoming and quality of service without the need for additional specialized protocols like RTP and UDP, as well as to allow simplified network administration without the need for concepts like autonomous systems and NAT.<p><a href="https://en.wikipedia.org/wiki/Recursive_InterNetwork_Architecture" rel="nofollow">https://en.wikipedia.org/wiki/Recursive_InterNetwork_Archite...</a>
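A toy sketch of that recursion (purely illustrative, not real RINA; the class and layer names here are invented for the example): every layer is the same IPC mechanism, a Distributed IPC Facility (DIF), differing only in scope, and each one recursively uses the layer below it:<p><pre><code># Toy illustration of RINA-style recursion: one recurring mechanism
# (a Distributed IPC Facility, or DIF) stacked at different scopes,
# instead of functionally specialized layers like TCP over IP.
class DIF:
    def __init__(self, scope, lower=None):
        self.scope = scope    # e.g. "lan", "isp", "internet"
        self.lower = lower    # the DIF this layer recursively rides on

    def send(self, dest, payload):
        # Wrap the payload in this layer's header, then hand it to the
        # same send() one scope down: the "single recurring set of
        # protocols" the quote describes.
        pdu = f"[{self.scope} to {dest}] {payload}"
        if self.lower is not None:
            self.lower.send(dest, pdu)
        else:
            print("on the wire:", pdu)

# Three instances of the same layer at increasing scope.
stack = DIF("internet", lower=DIF("isp", lower=DIF("lan")))
stack.send("host-b", "hello")</code></pre>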
Just a minor gripe, perhaps. Where does it say that Haverty is an original person behind FTP? RFC 959 [1] seems to give most of the credit to Abhay Bhushan, author of one of the older RFCs, RFC 114 [2].<p>[1] <a href="https://datatracker.ietf.org/doc/html/rfc959" rel="nofollow">https://datatracker.ietf.org/doc/html/rfc959</a><p>[2] <a href="https://datatracker.ietf.org/doc/html/rfc114" rel="nofollow">https://datatracker.ietf.org/doc/html/rfc114</a>