Check out the IETF draft[1] and this awesome book[2] for more details on HTTP/2.<p>Some of the coolest stuff I saw was streams and server push. Streams multiplex multiple logical streams of data onto one TCP connection. So unlike the waterfall graphs you typically see in the Chrome network inspector, where one resource request ends before another begins, frames (the basic unit of data) from multiple streams are interleaved in parallel. This means only one connection (connections are persistent by default) is needed between server and client, and there are mechanisms for stream prioritization and flow control, which gives devs more opportunities for performance gains.<p>Also, headers are only sent as deltas now. Client and server each maintain header tables with previously sent header values (which persist for the connection), so only changes need to be sent after the first request. I think this will be a consistent 40-50 bytes saved per request for most connections where headers rarely change.<p>[1] <a href="http://tools.ietf.org/html/draft-ietf-httpbis-http2-14" rel="nofollow">http://tools.ietf.org/html/draft-ietf-httpbis-http2-14</a><p>[2] <a href="http://chimera.labs.oreilly.com/books/1230000000545/ch12.html" rel="nofollow">http://chimera.labs.oreilly.com/books/1230000000545/ch12.htm...</a>
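To make the header-delta idea concrete, here's a minimal sketch (not from the comment above; it uses the real golang.org/x/net/http2/hpack package, with made-up header names and values) showing that a repeated header set compresses down to a few index bytes:

    package main

    import (
        "bytes"
        "fmt"

        "golang.org/x/net/http2/hpack"
    )

    func main() {
        var buf bytes.Buffer
        enc := hpack.NewEncoder(&buf) // encoder owns the connection's dynamic table

        headers := []hpack.HeaderField{
            {Name: ":method", Value: "GET"},
            {Name: ":path", Value: "/index.html"},
            {Name: "user-agent", Value: "example-client/1.0"},
            {Name: "accept-encoding", Value: "gzip"},
        }

        // First request: fields not in the static table are sent as
        // literals and inserted into the dynamic table as a side effect.
        for _, h := range headers {
            enc.WriteField(h)
        }
        fmt.Printf("first request:  %d bytes\n", buf.Len())

        // Repeat request: every field is now a table hit, so the encoder
        // emits only short index references; the "delta" is empty.
        buf.Reset()
        for _, h := range headers {
            enc.WriteField(h)
        }
        fmt.Printf("repeat request: %d bytes\n", buf.Len())
    }

On the first pass the literal headers cost tens of bytes; on the second pass every field resolves to a table index of roughly one byte each, which is where the per-request savings come from.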
HTTP/2 is certainly not a clean separation of concerns like HTTP/1.x was, but it's something of a pragmatic approach to protocol design.<p>HTTP/1.x was neatly layered on TCP with an easy-to-parse text format. This in turn ran neatly on IPv4/IPv6, which ran on top of Ethernet and myriad other things. This separation of concerns gave us the benefit of being very easy to understand and implement, while also allowing people to subvert the system, adding things like half-baked transparent proxies that would munge streams and couldn't agree on where HTTP headers started. We ended up having to design WebSockets to XOR its frames just to work around other people's broken deployments.<p>HTTP/1.x also became so pervasive that it was overwhelmingly the most popular protocol on top of TCP, to the point where a system administrator could block everything but ports 80 and 443 and probably never hear a complaint from their userbase. This is how we ended up with earlier monstrosities like SOAP and XML-RPC: by that point HTTP had become so prevalent a transport that it was assumed, incorrectly in many cases, to be the <i>only</i> transport.<p>Perhaps the IETF should be pushing a parallel version of HTTP that moves many of these concerns into SCTP. The problem is that it would take forever to get that rolled out, and we need something to improve things now. Look at how long it's taking to roll out IPv6: something we <i>actually need</i> to fix now.
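As an aside, the XOR trick mentioned above is RFC 6455's client-to-server masking: each frame payload is XORed with a random 4-byte key so broken intermediaries never see byte sequences that look like cacheable HTTP. A minimal sketch in Go (the helper name is made up for illustration):

    // maskPayload applies the RFC 6455 client-to-server mask in place:
    // every payload byte is XORed with the mask-key byte at i mod 4.
    // XOR is its own inverse, so the server unmasks with the same loop.
    func maskPayload(key [4]byte, payload []byte) {
        for i := range payload {
            payload[i] ^= key[i%4]
        }
    }

The 4-byte key travels in the frame header in the clear, so this is obfuscation against misbehaving proxies, not encryption.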
> Why is Internet Explorer leading with HTTP/2 implementation?<p><i>Leading?</i> Firefox and Chrome already support HTTP/2 (and have supported SPDY, the basis for HTTP/2, for a long time now), just not enabled by default.
Also, Chrome has experimental support for HTTP/2 in Canary[1], and Firefox has had it since version 34 (if I'm reading [2] correctly).<p>It seems unusual for Microsoft to disable SPDY support entirely, at least until HTTP/2 support is more widely deployed...<p>[1]: <a href="http://www.chromium.org/spdy/http2" rel="nofollow">http://www.chromium.org/spdy/http2</a><p>[2]: <a href="https://wiki.mozilla.org/Networking/http2" rel="nofollow">https://wiki.mozilla.org/Networking/http2</a>
You can probably get a comparable, if not greater, performance improvement by using ad and tracker blocking. Most of the extra TCP connections opened when displaying a web page are for ads and trackers, each opening its own connection just to send a one-pixel GIF.
Will this affect the way we do AJAX requests, or their speed? Or does it have no impact on websites talking back to the server? My knowledge of networking at the HTTP level is limited and I'm trying to find some context.
Future DDoS blackmailers are happy: new leverage for amplification :)<p>I want that so bad. Coding is hard, DDoSing is so easy.<p>Thank you, architects, for making black hats' lives so easy.
HTTPS by default? YEESS, even more leverage.<p>I love progress.<p>Next great idea: implement ICMP, UDP, and routing on top of an OSI layer 7 protocol, because everybody knows religion forbids opening the firewall for protocols that do the job, or we could even create new protocols that are not HTTP. But HTTP is surely the one true protocol, since devs don't know how to write 3 lines of networking code and sysadmins don't know how to do their jobs.<p>And HTTP is still stateless \o/ wonderful, we still get to keep these wonderful hacks: cookies, OAuth, and all that shitty stuff. Central certificate authorities are now totally discredited, but let's advocate the broken stuff even more.<p>Why not implement a database-agnostic layer on top?<p>When are we going to stop this cowardly headless rush of stacking poor solutions and start solving the root problems?<p>We are stacking the old problems of GUIs (async + maintainability + costs) on top of the new problem of doing it all over HTTP.<p>I have a solution that now seems viable: let's all code in vanilla Tcl/Tk: it has a GUI, it can do HTTP and all, it works in every environment, and it is easy to deploy.<p>Seriously, Tcl/Tk now seems sexy.
Could somebody elaborate on how server push relates to WebSockets (if at all)? Are they completely independent and will both be supported, or does one build on the other?<p>Given that the web is becoming more and more real-time, this seems pretty interesting.
Is there an HTTP/2 test page out there that shows whether you're connecting with it?<p>I found this project, but nothing live:<p><a href="https://github.com/molnarg/http2-testpage" rel="nofollow">https://github.com/molnarg/http2-testpage</a>
So this terrible, NIH-y Rube Goldberg contraption actually does get to see the light of day.<p>I'm saddened. The days of good internet protocols are clearly behind us.