
TechEcho

A tech news platform built with Next.js, providing global tech news and discussions.


© 2025 TechEcho. All rights reserved.

Last Call: HTTP2

161 points by lazyloop, over 10 years ago

15 comments

jacquesm, over 10 years ago

Giant mistake in the making. HTTP is elegant; HTTP2 is a monstrosity.

Edit: downvoters, please explain what's to like about HTTP2. I have a very hard time finding anything to like.

For example: no more easy debugging on the wire, another TCP-like implementation inside the HTTP protocol, tons of binary data rather than text, and a whole slew of features that we don't really need but that please some corporate sponsor because their feature made it in. Counter-examples appreciated.

Compare: http://tools.ietf.org/html/rfc1945
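To make the "binary rather than text" complaint concrete, here is a minimal sketch of what a wire-level tool now has to do just to identify a frame: parse the fixed 9-byte HTTP/2 frame header (24-bit length, 8-bit type, 8-bit flags, 31-bit stream ID) defined in RFC 7540, section 4.1. The function name is illustrative, not from any library.

```python
def parse_frame_header(data: bytes):
    """Parse the fixed 9-byte HTTP/2 frame header (RFC 7540, section 4.1)."""
    if len(data) < 9:
        raise ValueError("need at least 9 bytes for a frame header")
    length = int.from_bytes(data[0:3], "big")        # 24-bit payload length
    frame_type = data[3]                             # e.g. 0x0 DATA, 0x1 HEADERS
    flags = data[4]                                  # e.g. 0x4 END_HEADERS
    stream_id = int.from_bytes(data[5:9], "big") & 0x7FFFFFFF  # drop reserved bit
    return length, frame_type, flags, stream_id

# A HEADERS frame (type 0x1) with END_HEADERS set, 16-byte payload, stream 1:
print(parse_frame_header(b"\x00\x00\x10\x01\x04\x00\x00\x00\x01"))  # (16, 1, 4, 1)
```

Compare that with HTTP/1.x, where the same information is a human-readable line you can type into telnet; that difference is the whole debugging argument in miniature.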
magila, over 10 years ago

It feels like HTTP2 is a classic case of "something must be done; this is something; therefore it must be done". Clearly there are shortcomings in HTTP 1.1 which would be nice to address. Google, to their credit, spent a lot of resources coming up with a solution that met their needs. The problem is that when Google then went to httpbis, the people on the WG apparently took it as an imperative that _something_ must be released as HTTP2 in relatively short order. There was a halfhearted attempt to open things up to competing ideas, but unsurprisingly SPDY was by far the most mature of the proposals. Thus SPDY became the heir apparent to HTTP by default, despite being a mud ball of complexity and layering violations.
drawkbox, over 10 years ago

Technology ebbs and flows; I feel like this is a backdrift like XHTML, but it will flow again.

Binary in Hyper Text Transfer will never seem right. I understand it is more performant, but it always creates more bugs. Ask any game developer: binary is sometimes needed, but you are then living on the edge of indexes, ordering, headers, and harder debugging. Indexing errors, overflows, and incorrect implementations will follow.

Many of the advancements in HTTP2 are good, but there are some steps backwards we'll have to re-learn again. It isn't all about performance when it comes to correct interoperability, since standards lead to many interpretations. It is why XML and then JSON won data transfer: they are easy to interoperate with. Yes, binary is more efficient over the wire, but not to interoperate with. Should we go back to binary formats for data exchange on the network? The protocol level is lower level, but it has still been beneficial in the current standards to spread innovation with lower barriers to understanding.

HTTP2 is one of those 'version 2' apps where some of the legacy genius was lost and overlooked in the redesign, like simplicity. An engineer's job is to make something complex into something simple, and black-boxing data isn't simplifying it.
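The "indexing, overflows, incorrect implementations" worry can be illustrated with a hedged sketch of the classic bug in any length-prefixed binary format: trusting a declared length without checking it against the buffer. The function and format here are hypothetical, chosen only to mirror HTTP/2's 3-byte length prefix.

```python
def read_length_prefixed(buf: bytes, offset: int):
    """Read one record with a 3-byte big-endian length prefix, validating bounds."""
    if offset + 3 > len(buf):
        raise ValueError("truncated length field")
    length = int.from_bytes(buf[offset:offset + 3], "big")
    end = offset + 3 + length
    if end > len(buf):  # omitting this check is the classic out-of-bounds bug
        raise ValueError("declared length exceeds buffer")
    return buf[offset + 3:end], end  # (payload, offset of next record)

payload, next_offset = read_length_prefixed(b"\x00\x00\x03abc", 0)
print(payload, next_offset)  # b'abc' 6
```

A text protocol fails loudly and visibly on malformed input; a binary parser that skips the bounds check fails by reading the wrong bytes, which is exactly the class of bug being predicted here.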
hjfgdx, over 10 years ago

That mess should never have made it to last call. https://www.varnish-cache.org/docs/trunk/phk/http20.html
gmzll, over 10 years ago

The fact that M. Belshe is listed as the primary author, when he didn't even work on the document, says it all. This is just Google forcing the IETF to gold-plate SPDY.
fubarred, over 10 years ago

SSL/TLS is something that needs to be thrown away and started over (not that that would realistically happen without immense pressure after another spectacular failure). The over-complexity of X.509, and the ease with which one can acquire legitimate certs for domains one doesn't own, is appalling. From recent revelations, the number and scale of private-key exfiltrations is even more troublesome, making it possible for some state actors to MITM tens to hundreds of millions of connections. (One has to put on a tinfoil hat to estimate how many countries have successfully placed staff in core IT/webops positions at Fortune 100 companies who are then able to leverage that access... not to mention high-level engagement. [The direct approach might go like this: "give us your keys, or we will send in agents to expose embarrassing details about your org, and we will still get the keys anyway."])

Perhaps folks like 'cperciva would be kind enough to propose a single, simple TOML-based cert system that is extremely lightweight with the fewest of features. (Not that TLS/SSL would change without focused, sustained herculean effort immediately after yet another Heartbleed.)
nly, over 10 years ago

I'm so glad HTTP/2 is finally here to save us from the horrors of the web stack by providing a decent session layer, privacy-preserving defaults, cross-domain and efficient differential caching, as-near-as-can-be bulletproof password-based authentication, and mandatory encryption.

Oh, wait... maybe that was a dream.
sanxiyn, over 10 years ago

While HTTP2 is a layering violation incarnate, apparently a properly layered solution is undeployable. Perfect is the enemy of good.
Pxtl, over 10 years ago

Anybody got a good summary of HTTP2 features (which I know could be described as "everything plus the kitchen sink")?
dreszg, over 10 years ago

Shouldn't a protocol as important as HTTP get more than two weeks?
cdent, over 10 years ago

HTTP2 is yet another in a long series of developments that feel like the corporate takeover of the commons. Sure, there are plenty of excellent features in it, but they are primarily of benefit to systems doing huge (on lots of dimensions) stuff.

Is this the inevitable path of any technology which has initial promise for enabling individual public expression?
TwoBit, over 10 years ago

I am against this. This is not a good standard. It's a response to Google's Microsoft-like protocol hack.
lkrubner, over 10 years ago

Back in 1989 Sir Tim Berners-Lee put a lot of careful thought into the design of a protocol for sharing documents using IP/TCP. However, when Ajax and Web 2.0 got going circa 2004, the emphasis was on offering software over TCP, and for that the HTTP protocol was poorly suited. Rather than carefully rethink the entire stack, and ideally come up with a new stack, the industry invented what amount to clever hacks, such as WebSockets, which were then bolted into the existing system, even relying on HTTP to handle the initial "handshake" before the upgrade.

What I would like to see is the industry ask itself: can HTTP be retrofitted to work for software over TCP or UDP? It is clear that HTTP is a fantastic protocol for sharing documents. But is it what we want when our goal is to offer software as a service?

I'll briefly focus on one particular issue. WebSockets undercut a lot of the original ideas that Sir Tim Berners-Lee put into the design of the Web. In particular, the idea of the URL is undercut when WebSockets are introduced. The old idea was:

1 URL = 1 document = 1 page = 1 DOM

Right now, in every web browser that exists, there is still a so-called "address bar" into which you can type exactly 1 address. And yet, for a system that uses WebSockets, what would make more sense is a field into which you can type or paste multiple URLs (a vector of URLs), since the page will end up binding to potentially many URLs. This is a fundamental change that takes us to a new system which has not been thought through with nearly the soundness of the original HTTP.

Slightly off-topic, but even worse is the extent to which the whole online industry is still relying on HTML/XML, which are fundamentally about documents. Just to give one example of how awful this is: as soon as you use HTML or XML, you end up with a hierarchical DOM. This makes sense for documents, but not for software. With software you often want either no DOM at all, or multiple DOMs. Again, the old model was:

1 URL = 1 document = 1 page = 1 DOM

We have been pushing technologies such as Javascript and HTML and HTTP to their limits, trying to get the system that we really want. The unspecified, informal system that many of us now work towards is an ugly hybrid:

1 URL = multiple URLs via Ajax, WebSockets, etc. = 1 document (containing what we treat as multiple documents) = 1 DOM (which we struggle against, as it often doesn't match the structure, or lack of structure, that we actually want).

Much of the current madness that we see with the multiplicity of Javascript frameworks arises from the fact that developers want to get away from HTTP and HTML and XML and DOMs and the url=page binding, but the stack fights against them every step of the way.

Perhaps the most extreme example of the brokenness is all the many JSON APIs that now exist. If you make a call against many of these APIs, you get back multiple JSON documents, and yet, if you look at the HTTP headers, the HTTP protocol is under the misguided impression that it just sent you 1 document. At a minimum, it would be useful to have a protocol that was at least aware of how many documents it was sending you, and that had first-class support for counting, sorting, sending, and re-sending each of the documents you are supposed to receive. A protocol designed for software would at least offer as much first-class support for multiple documents/objects/entities as TCP offers for multiple packets. And even that would only be a small step down the road that we need to go.

A new stack, designed for software instead of documents, is needed.

I would have been happy if they had simply let HTTP remain at 1.1 forever; it is a fantastic protocol for exchanging documents. The industry could then have focused its energy on a different protocol, designed from the ground up for offering software over TCP.
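The "relying on HTTP to handle the initial handshake" point is literal: RFC 6455 has the server prove it understands WebSockets by hashing the client's `Sec-WebSocket-Key` with a fixed GUID inside an HTTP 101 Upgrade response. A minimal sketch of that server-side computation (the function name is illustrative):

```python
import base64
import hashlib

# Fixed GUID specified by RFC 6455, section 1.3:
WS_GUID = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11"

def websocket_accept(client_key: str) -> str:
    """Compute the Sec-WebSocket-Accept value for an HTTP 101 Upgrade response."""
    digest = hashlib.sha1((client_key + WS_GUID).encode("ascii")).digest()
    return base64.b64encode(digest).decode("ascii")

# Example key from RFC 6455, section 1.3:
print(websocket_accept("dGhlIHNhbXBsZSBub25jZQ=="))  # s3pPLMBiTxaQ9kYGzzhZRbK+xOo=
```

A protocol that begins life as an HTTP request and then abandons HTTP semantics entirely is exactly the kind of bolted-on hack being described above.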
thomasfoster96, over 10 years ago

Pushing content to the client, emphasis on encrypted and secure connections: woo!

Waiting months/years for HTTP/2 support to appear in all the tools I use: :( ...
alexwilliamsca, over 10 years ago

Let's just skip this like we did with IPv5.