Back in 1989, Sir Tim Berners-Lee put a lot of careful thought into the design of a protocol for sharing documents over TCP/IP. However, when Ajax and Web 2.0 got going circa 2004, the emphasis shifted to offering software over TCP, and for that the HTTP protocol was poorly suited. Rather than carefully rethink the entire stack, and ideally come up with a new one, the industry invented what amount to clever hacks, such as WebSockets, which were then bolted onto the existing system, even relying on HTTP to handle the initial "handshake" before the upgrade (the actual exchange is shown below).

What I would like to see is the industry ask itself: can HTTP be retrofitted to work for software over TCP or UDP? It is clear that HTTP is a fantastic protocol for sharing documents. But is it what we want when our goal is to offer software as a service?

I'll briefly focus on one particular issue. WebSockets undercut a lot of the original ideas that Sir Tim Berners-Lee put into the design of the Web. In particular, the idea of the URL is undercut when WebSockets are introduced. The old idea was:

1 URL = 1 document = 1 page = 1 DOM

Right now, in every web browser that exists, there is still a so-called "address bar" into which you can type exactly 1 address. And yet, for a system that uses WebSockets, what would make more sense is a field into which you can type or paste multiple URLs (a vector of URLs), since the page will end up binding to potentially many URLs (sketched below). This is a fundamental change, one that takes us to a new system that has not been thought through with anything like the soundness of the original HTTP.

Slightly off-topic, but even worse is the extent to which the whole online industry still relies on HTML/XML, which are fundamentally about documents. To give one example of how awful this is: as soon as you use HTML or XML, you end up with a hierarchical DOM. This makes sense for documents, but not for software. With software you often want either no DOM at all, or multiple DOMs. Again, the old model was:

1 URL = 1 document = 1 page = 1 DOM

We have been pushing technologies such as JavaScript and HTML and HTTP to their limits, trying to get the system that we really want. The unspecified, informal system that many of us now work toward is an ugly hybrid:

1 URL = multiple URLs via Ajax, WebSockets, etc. = 1 document (containing what we treat as multiple documents) = 1 DOM (which we struggle against, as it often doesn't match the structure, or lack of structure, that we actually want)

Much of the current madness that we see in the multiplicity of JavaScript frameworks arises from the fact that developers want to get away from HTTP and HTML and XML and DOMs and the URL = page binding, but the stack fights them every step of the way.

Perhaps the most extreme example of the brokenness is the mass of JSON APIs that now exist. Call many of these APIs and you get back multiple JSON documents, and yet, if you look at the HTTP headers, the protocol is under the misguided impression that it just sent you 1 document. At a minimum, it would be useful to have a protocol that was at least aware of how many documents it was sending you, with first-class support for counting, sorting, sending, and re-sending each of the documents that you are supposed to receive. A protocol designed for software would offer at least as much first-class support for multiple documents/objects/entities as TCP offers for multiple packets.
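To make a few of these points concrete. First, the handshake: a WebSocket connection begins life as an ordinary HTTP request. This is the example exchange from RFC 6455 (the host, path, and key values are the RFC's own placeholders):

    GET /chat HTTP/1.1
    Host: server.example.com
    Upgrade: websocket
    Connection: Upgrade
    Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==
    Sec-WebSocket-Version: 13

    HTTP/1.1 101 Switching Protocols
    Upgrade: websocket
    Connection: Upgrade
    Sec-WebSocket-Accept: s3pPLMBiTxaQ9kYGzzhZRbK+xOo=

Only after the 101 response does the connection stop speaking HTTP. The document protocol is literally the bootstrap for the software protocol.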
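Second, the many-URLs-per-page point. A minimal sketch, assuming a browser environment (all the URLs here are hypothetical):

    // One "page", three addresses -- only the first is visible in the address bar.
    const documentUrl = "https://app.example.com/dashboard";

    // Ajax: a second URL, fetched behind the scenes.
    const widgets = fetch("https://api.example.com/v1/widgets")
      .then((res) => res.json());

    // WebSocket: a third URL, holding a long-lived connection.
    const live = new WebSocket("wss://stream.example.com/live");
    live.onmessage = (event) => {
      // State arrives over an address the user never typed and cannot see.
      console.log("update:", event.data);
    };

The address bar shows one URL; the page is actually bound to three.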
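And the JSON-API point. Today a newline-delimited JSON response carries many logical documents, but the headers describe exactly one body. Below is a sketch of what per-document framing might look like; the frame format is hypothetical, invented here purely for illustration:

    // What HTTP sees today: one opaque body, however many documents it carries.
    //
    //   HTTP/1.1 200 OK
    //   Content-Type: application/x-ndjson
    //   Content-Length: 8192        <-- one length, one "document"
    //
    //   {"id": 1, "kind": "user"}
    //   {"id": 2, "kind": "user"}
    //   ...

    // A hypothetical per-document frame, analogous to a TCP segment header.
    interface DocumentFrame {
      sequence: number;    // position of this document in the response
      total: number;       // how many documents the response contains
      length: number;      // byte length of this document alone
      payload: Uint8Array; // the document itself
    }

    // With framing like this, a client could detect a single missing document
    // and re-request it, instead of re-fetching the entire response.
    function missingSequences(received: DocumentFrame[]): number[] {
      const have = new Set(received.map((f) => f.sequence));
      const total = received[0]?.total ?? 0;
      const missing: number[] = [];
      for (let i = 0; i < total; i++) {
        if (!have.has(i)) missing.push(i);
      }
      return missing;
    }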
And even that would only be a small step down the road that we need to go.

A new stack, designed for software instead of documents, is needed.

I would have been happy if they had simply let HTTP remain at 1.1 forever -- it is a fantastic protocol for exchanging documents. And then the industry could have focused its energy on a different protocol, designed from the ground up for offering software over TCP.