After the whole IPv6 story, I'm surprised the author ignores the political dimension of designing a new protocol. As Mitch Kapor said, "Architecture is Politics." It's not just about which solution is best from a technological perspective; it's about what we want our future to look like.<p>The internet has become far more important than it was when these protocols first became standard, and every time a protocol or standard is up for debate, political and commercial forces try to influence it in their favor. Some of the concepts they tried to shove into IPv6 were downright evil and would have killed the internet as we know it. Personally, I'm relieved that all that is left is a small, un-sexy improvement which, albeit slowly, will eventually spread and solve the only really critical problem we have with IPv4.<p>I really dread subjecting HTTP to that process. Although I fully agree with the author's critique of cookies, for instance, the idea of replacing them with something "better" frankly scares the crap out of me. Especially when the word "identity" is being used. You just know what kinds of suggestions some powerful parties will come up with if you open this up for debate, and fighting them will take up all the energy that should be put towards improving what we already have.<p>As techies we should learn to accept design flaws and slow adoption and look at the bigger picture of the social and political impact of technology: HTTP may be flawed, but things could be way, way worse.
His proposal is at <a href="http://phk.freebsd.dk/misc/draft-kamp-httpbis-http-20-architecture-01.txt" rel="nofollow">http://phk.freebsd.dk/misc/draft-kamp-httpbis-http-20-archit...</a><p>It comes with the caveat that "Please disregard any strangeness in the boilerplate, I may not have thrown all the right spells at xml2rfc, and also note that I have subsequently changed my mind on certain subjects, most notably Cookies which should simply be exterminated from HTTP/2.0, and replaced with a stable session/identity concept which does not make it possible or necessary for servers to store data on the clients."
While in general I understand where he is coming from, I believe his main argument about adoption is flawed.<p>What do you think is more likely to be adopted? A protocol that's not backwards compatible at all (heck, it even throws out cookies), or something that works over the existing protocol, negotiating extended support and switching to it while continuing to work exactly the same way for old clients and for the applications running behind it?<p>See SPDY, which is a candidate for becoming HTTP 2.0. People are ALREADY running it, or at least eager to try it. I don't think for a second that SPDY is having the adoption problems of IPv6, SNI issues aside.<p>Even if native sessions were a cool feature, how many years do you think it would take before something like that could be used reliably? We're still wary of supporting stuff that an 11-year-old browser didn't support.
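To make the earlier negotiation point concrete: SPDY negotiates inside the TLS handshake (via NPN), but the same fall-back idea over plain HTTP is just the Upgrade mechanism. A rough sketch in Python; "x-new-proto" is a made-up token, not any real registered protocol:

    import socket

    # Offer an upgraded protocol in-band; fall back to HTTP/1.1 if refused.
    sock = socket.create_connection(("example.com", 80))
    sock.sendall(b"GET / HTTP/1.1\r\n"
                 b"Host: example.com\r\n"
                 b"Connection: Upgrade\r\n"
                 b"Upgrade: x-new-proto\r\n\r\n")  # made-up token
    status_line = sock.recv(4096).split(b"\r\n", 1)[0]
    if status_line.startswith(b"HTTP/1.1 101"):
        print("server switched protocols; speak x-new-proto from here on")
    else:
        print("server ignored the offer; carry on as plain HTTP/1.1")

Old servers simply ignore the Upgrade header, which is exactly why this style of negotiation can deploy incrementally where IPv6 could not.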
How protocol adoption happens, in three steps:<p>1. Working code.
2. Publicity.
3. Ubiquity.<p>That's it. Kamp is making a lot of the right noises here, but he's already lost ground to SPDY just because they've shipped code. No amount of sitting round tables bashing out the finer details of a better spec will help as much as getting code written - even if it's just a placeholder for an extensible spec, as long as that placeholder does something useful.
(These are just my opinions as a web developer. Please feel free to downvote if my expectations are wrong.)<p>While I can completely agree with the technical merits of this proposal, there are some very two-faced statements in it.<p>The author begins by pointing out the painfulness of the IPv4-to-IPv6 transition and says the next HTTP upgrade should be <i>humble</i>. But then he proceeds to kill cookies and fix every architectural problem in HTTP. Isn't that exactly what IPv6 did? Wouldn't such an approach cause the implementors (that is, us, the web developers) the same amount of pain?<p>Any upgrade will certainly have some backward-incompatible changes. But if it is <i>totally</i> backward-incompatible, I don't understand why it still needs to be called HTTP. Couldn't we just call it SPDY v2 instead, or some other fancy name?<p>Cookies are a problem. But the safest way to solve that problem is in isolation: come up with a separate protocol extension, see if it works out, throw it away if it doesn't. Why marry the entire future of HTTP to such a do-or-die change?<p>I readily agree with the author that SPDY is architecturally flawed. But why is it being adopted in such large numbers? Even Facebook (deeply at war with Google) is embracing it. It's because SPDY doesn't break existing applications: just install mod_spdy to get started. But removing cookies? What happens to the millions of web apps deployed today, which have $COOKIE and set_cookie statements everywhere in their code? How do I branch them out and serve separate versions of the same application, one for HTTP/1.1 and another for HTTP/2.0? (See the sketch below.)<p>More doubts keep coming... Problem with SPDY compressing HTTP headers? Use SPDY only for communication over the internet. Within the server's data center, or within the client's organization, keep serving normal HTTP; there are no bandwidth problems there. Just make Varnish and the target server speak SPDY; that is where the real gains are.<p>I could go on. I'm not trying to say the author's suggestions are wrong. They are important and technically sound. But the way they get taken up and implemented, without pain for us developers, doesn't have to be HTTP/2.0. Good ideas don't need to be forced down others' throats.
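On the branching question, the least painful path I can imagine is a front-end shim that fakes a cookie from whatever the new protocol provides. An entirely hypothetical Python sketch (WSGI-style environ; "HTTP2_SESSION_ID" is an invented key):

    def cookie_shim(environ):
        # Hypothetical: synthesize a cookie from a native HTTP/2.0 session ID
        # so unmodified HTTP/1.1-era application code keeps working.
        sid = environ.get("HTTP2_SESSION_ID")  # invented key, for illustration
        if sid and "HTTP_COOKIE" not in environ:
            environ["HTTP_COOKIE"] = "sid=" + sid
        return environ

And even then, every code path that <i>writes</i> cookies would need a reverse mapping, which is exactly the migration pain I'm objecting to.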
I have a lot of respect for phkamp; Varnish is an impressive piece of engineering.<p>I disagree with the stab he takes at cookie sessions here, though. He seems to ignore that sessions are not only about identity but also about state.<p>Servers should be stateless; therefore client-side sessions (encrypted and signed) are usually preferable to server-side sessions.<p>Having a few more bytes of cookie payload is normally an order of magnitude cheaper (in terms of latency) than performing the corresponding lookups server-side on every request. Very low-bandwidth links might disagree, but that's a corner case, and with cookies we always have the choice.<p>Removing cookies in favor of a "client-id" would effectively remove the session pattern that has proven optimal for the vast majority of websites.
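For reference, a minimal sketch of that client-session pattern in Python, assuming HMAC signing only (real deployments typically add encryption and expiry):

    import base64, hashlib, hmac, json

    SECRET = b"server-side key, never sent to the client"  # placeholder

    def make_session_cookie(state):
        # Serialize the state and append an HMAC so the server can detect
        # tampering without storing anything itself.
        payload = base64.urlsafe_b64encode(json.dumps(state).encode())
        sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
        return payload.decode() + "." + sig

    def read_session_cookie(cookie):
        payload, sig = cookie.rsplit(".", 1)
        good = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
        if not hmac.compare_digest(sig, good):
            raise ValueError("forged or corrupted session")
        return json.loads(base64.urlsafe_b64decode(payload))

    cookie = make_session_cookie({"user": 42, "cart": [7, 9]})
    print(read_session_cookie(cookie))  # verified with zero server-side lookups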
> See for instance how SSH replaced TELNET, REXEC, RSH, SUPDUP<p>> Or I might add, how HTTP replaced GOPHER[3].<p>Telnet and Gopher were used by only a few thousand servers and were (for the most part) not consumer-facing technologies; it doesn't make sense to compare them to IPv4 and HTTP, which are used by millions (billions?) of servers.
Removing cookies from a protocol which is otherwise fully compatible with HTTP/1, in the sense of being able to be interposed as a proxy or substituted in the web server without breaking apps, is a terrible idea.<p>> Cookies are, as the EU commission correctly noted, fundamentally flawed, because they store potentially sensitive information on whatever computer the user happens to use, and as a result of various abuses and incompetences, EU felt compelled to legislate a "notice and announce" policy for HTTP-cookies.<p>> But it doesn't stop there: The information stored in cookies has potentially very high value for the HTTP server, and because the server has no control over the integrity of the storage, we are now seeing cookies being crypto-signed, to prevent forgeries.<p>Anyone with a grain of skill is capable of using cookies as identifiers only; it's hard to see what cookies-versus-identifiers has to do with "notice and announce" or security. An explicit session mechanism could provide benefits over using cookies for the same purpose, but what exactly would <i>removing</i> cookies achieve, other than breaking the world?
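What "identifiers only" looks like in practice, sketched in Python (the in-memory dict stands in for a real session store):

    import secrets

    SESSIONS = {}  # stand-in for a database or cache

    def new_session():
        # The cookie carries nothing but a random, meaningless token;
        # all sensitive state stays server-side, keyed by that token.
        token = secrets.token_urlsafe(32)
        SESSIONS[token] = {}
        return "Set-Cookie: sid=%s; HttpOnly; Secure" % token

    def lookup(cookie_header):
        token = cookie_header.partition("sid=")[2].split(";")[0]
        return SESSIONS.get(token)  # unknown token -> no session

    print(new_session())

Nothing sensitive is stored on the user's machine, which is the EU's stated complaint; removing the cookie mechanism itself buys nothing further.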
Frankly, many of the problems addressed by these proposals are not things that should be solved in a protocol like HTTP.
Do you want every HTTP request to be slowed down by a crypto token exchange and verification? Unlikely. In special cases you definitely want it, but all the time? Absolutely not.
One huge benefit I think people are missing about the notion of a protocol-standard session mechanism is that we could use HTTP auth much more easily, and perhaps get away from this notion of every site having to redo the login process. Browsers could handle the "remember login" settings, and logging out would be as simple as closing the tab. No "remember this computer" or "don't remember this computer" checkbox confusion. No random sites saying "remember me always" and requiring a manual logout on a borrowed computer. It certainly helps with the password-wallet concept too.<p>Sure, all that stuff has become semi-standard as it currently exists, but it is ugly, hacky, sometimes doesn't work, and other times opens doors for hilarious malfeasance.
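For contrast, this is roughly how protocol-level auth is already driven today, entirely outside application code (Python standard library; the URL and credentials are placeholders):

    import urllib.request

    # HTTP Basic auth handled at the protocol layer: the handler answers the
    # server's 401 challenge itself; no site-specific login form is involved.
    mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()
    mgr.add_password(None, "https://example.com/", "alice", "secret")
    opener = urllib.request.build_opener(urllib.request.HTTPBasicAuthHandler(mgr))
    # opener.open("https://example.com/private")  # sends Authorization on challenge

A standardized session concept would extend that model so the browser, not each site, owns login state and logout.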
"Overall, I find all three proposals are focused on solving yesteryears problems, rather than on creating a protocol that stands a chance to last us the next 20 years."
"We have also learned that a protocol which delivers the goods can replace all competition in virtually no time."<p>This is an argument for websockets eating protocol lunch.
In my opinion it is always a good idea in these kinds of situations to set a goal to strive toward.<p>I wonder what the ideal web protocol would look like if, for example, we didn't have the burden of billions of servers and the Internet's reliance on the HTTP/1.x protocol.<p>What would be the ideal solution to suit the Web's emerging use cases? Are there any research papers on this topic?
Load balancers aren't meant to be just "HTTP routers". They can certainly be used as such for smaller applications, and they do a good job of it, but a real load balancer needs to be quite complex, able to adapt to the underlying applications that make use of it.<p>If your goal is only to route HTTP requests, then you're only solving the first step of an increasingly complicated field of computer science (namely, web applications).<p>Cookies aren't going to go away. If you want to improve the protocol to deal with cookies better, that makes sense, but acting like they are some kind of evil on the internet that should be forgotten isn't going to work. It's a bit self-defeating to argue that some protocols failed because they provided no new benefits, and then argue against cookies in HTTP!
I can think of a few organisations that benefit financially, to a great extent, from the fact that cookies allow them to track users. Ad companies like Google and Microsoft spring to mind. The same organisations are building our major browsers... Conflict of interest? You better believe it.