HTTP/2 all the things

335 points by matsuu over 10 years ago

14 comments

teddyh over 10 years ago
No mention of SRV records. Of course.

https://news.ycombinator.com/item?id=8404788

It really is no surprise that Google is not interested in this, since *Google* does not suffer from any of the problems which using SRV records for HTTP would solve. It is only users who could more easily run their own web servers closer to the edges of the network who would benefit, not the large companies which have CDNs and BGP AS numbers to fix any shortcomings the hard way. Google has already done the hard work of solving this problem for themselves; *of course* they want to keep the problem for everybody else.

This is going to bite them big time in the end, because Google got large by indexing the Geocities-style web, where everybody *did* have their own web page on a very distributed set of web hosts. What Google is doing is only contributing to the centralization of the Web, the conversion of the Web into Facebook, which will, in turn, kill Google, since they will then have *nothing to index*.

They sort of saw this coming, but their idea of a fix was Google+, trying to make sure that *they* were the ones on top. I think they are still hoping for this, which is why they won't allow a decentralized web by using SRV records in HTTP/2.
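For readers unfamiliar with the mechanism: an SRV lookup returns a priority, weight, port, and target host, so a client could discover where to connect without the zone owner needing control of a server on the standard ports. A minimal sketch using the third-party dnspython library; the `_http._tcp` label is illustrative only, since no SRV convention for HTTP was ever standardized (which is exactly the complaint):

```python
# Sketch: resolving a hypothetical HTTP SRV record with dnspython
# (pip install dnspython). The _http._tcp label is an assumption --
# no SRV convention for HTTP was adopted.
import dns.resolver

def lookup_http_srv(domain):
    # SRV answers carry priority, weight, port, and target host,
    # letting a client reach a server on any host/port the zone
    # owner chooses -- no CDN or BGP tricks required.
    answers = dns.resolver.resolve(f"_http._tcp.{domain}", "SRV")
    for rr in sorted(answers, key=lambda r: (r.priority, -r.weight)):
        print(f"connect to {rr.target}:{rr.port} "
              f"(priority={rr.priority}, weight={rr.weight})")

lookup_http_srv("example.com")
```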
insertnickname over 10 years ago
It's interesting that modern web developers don't consider that maybe websites are slow because they keep stuffing them to the brim with every conceivable script and other resources from 30 different domains. No, of course that's not why the web is slow. The web is obviously slow because HTTP is too slow.
kator over 10 years ago
The first 100ms to 160ms of the Yahoo call is RTB [1] delay. I don't see how HTTP/2 will fix that. Also notice all the various analytics beacons that will still be third-party in the future. I know many people hate this stuff, but when you're Yahoo! and most of your income is from advertisements on your web site, you will still have to sell that inventory and prove to third parties that they got what they paid for via various "pixels".

I don't doubt there is a lot of room to improve; clearly a lot was learned from SPDY and other projects. But I do worry that what we're actually doing is rewarding large companies who can bring all the stuff into one stream, and the small sites will be yet more disadvantaged. I could see a world where this is used as a way for PHBs [2] to justify "just move our stuff to Google/Amazon/Apple". These sorts of initiatives may incentivize more centralization, and IMHO that is not a good thing.

Also, being an old guy(tm), I worry about losing the ability for humans to talk to these services or see the conversation in a textual way. Before you explode on that comment, think about the balance of JSON vs. various binary serialization solutions. How many of us have chosen a serialization protocol because "we can read it"? I'm not arguing that as a perfect argument, but if you're already pushing TLS and HPACK, then why not at least leave the textual data in there? I guess I worry that now we head down the RFC hell where only large bodies can get extensions into the protocol, because we only have 256 options on this header and each bit counts. To be fair, I've not deep-dived into the protocol yet; I suppose these are just the rantings of a guy who read a slide show. :)

[1] http://en.wikipedia.org/wiki/Real-time_bidding
[2] http://en.wikipedia.org/wiki/Pointy-haired_Boss
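The readability worry is concrete: HTTP/2 frames open with a fixed 9-byte binary header (RFC 7540 §4.1) rather than readable text lines. A small sketch of decoding that header shows what "seeing the conversation" looks like after HTTP/1.x:

```python
# Sketch: decoding the fixed 9-byte HTTP/2 frame header (RFC 7540 §4.1).
# Unlike "GET / HTTP/1.1", none of this is readable off the wire.
import struct

FRAME_TYPES = {0x0: "DATA", 0x1: "HEADERS", 0x4: "SETTINGS", 0x6: "PING"}

def parse_frame_header(buf: bytes):
    # 24-bit length, 8-bit type, 8-bit flags,
    # then 1 reserved bit + 31-bit stream identifier.
    length = int.from_bytes(buf[0:3], "big")
    frame_type, flags = buf[3], buf[4]
    stream_id = struct.unpack(">I", buf[5:9])[0] & 0x7FFFFFFF
    return length, FRAME_TYPES.get(frame_type, hex(frame_type)), flags, stream_id

# A SETTINGS frame with an empty payload on stream 0:
print(parse_frame_header(b"\x00\x00\x00\x04\x00\x00\x00\x00\x00"))
# -> (0, 'SETTINGS', 0, 0)
```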
angersock over 10 years ago
People do realize that this is going to mainly benefit large companies, right? Anybody who still scatters their resources across many CDNs and whatnot is probably not going to see a tremendous benefit.

I'm not convinced that the 30-40% improvement in performance (in some cases) is worth the additional complexity. There are some nice features, but I can't help but think that this is something being pushed mainly by Google et al. because it benefits their server farms, as opposed to being an objectively good idea.

At least it should be relatively easy to set up on a server.
mjevans over 10 years ago
"push 'tombstone' record to invalidate cache"

For that /alone/ this is technically superior, and that's frosting on the cake.

Now if only we could also get DNS servers to reply with similar packages of useful data.
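Push-to-invalidate was a discussion point around HTTP/2 rather than a shipped API, so the following is purely a toy model of the idea: the server proactively tells clients which cached entries are stale instead of clients polling with conditional GETs. All names here are hypothetical:

```python
# Toy model of push-based cache invalidation ("tombstone" records).
# Not a real HTTP/2 API -- just an illustration of the concept.
class PushAwareCache:
    def __init__(self):
        self._entries = {}          # url -> (etag, body)

    def store(self, url, etag, body):
        self._entries[url] = (etag, body)

    def on_server_push(self, url, etag, body=None):
        if body is None:
            # Tombstone: no body means "forget what you have".
            self._entries.pop(url, None)
        else:
            self._entries[url] = (etag, body)

    def get(self, url):
        return self._entries.get(url)

cache = PushAwareCache()
cache.store("/style.css", 'W/"v1"', b"body { color: red }")
cache.on_server_push("/style.css", 'W/"v2"')   # tombstone push
assert cache.get("/style.css") is None          # forces a refetch
```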
byuu over 10 years ago
Having written an HTTP server+proxy recently, I haven't been super impressed with HTTP/2 so far. There is some good in it (basically everything I'm not mentioning), but also a lot of bad.

First, Firefox (and some others) are requiring TLS in order to use HTTP/2 (https://wiki.mozilla.org/Networking/http2); that's a big deal-breaker for a lot of people. Yes, encryption is all well and good. I'm all for it! But SSL certs either cost money, or you get them from companies that will want cash to revoke them for you if compromised. SSL/TLS errors are still a royal bitch (and pop up with less popular authorities), with browsers warning you of your impending *undoing* if you choose to continue (sometimes damn near requiring a blood contract to override). They also require extra CPU resources. This can be a problem for a site that is only hosting kitten pictures or video game news. It's also a barrier toward people like me experimenting with it, since I now *also* have to learn how to use TLS if I just want to toy around with the protocol.

Second, I don't really agree that using a new, custom-made compression algorithm is a smart way to do headers. We are talking about ~300 *bytes* of data per header... are the bandwidth gains really so superior as to outweigh the CPU costs of having to compress the data, and to overcome the added programming complexity of working with these headers?

Third, it's really a fundamentally different way to do things. Like the slides said, you're going to have to really redesign how servers and website packages serve up content to be optimized for this new model, or else performance may even be worse than HTTP/1.1. Having seen the way the real world works, I'm not very confident that web developers are going to take this seriously enough, and we'll likely see a lot of "HTTP/1 over HTTP/2" behavior anyway (e.g. not taking advantage of server push). Servers like Apache and nginx can only go so far toward doing this for you.

Fourth, since it's not backward-compatible, we're pretty much not going to be able to use HTTP/2 exclusively for another 5-10 years. Which, of course, doesn't mean we shouldn't ever upgrade HTTP/1. It's just kind of crappy that we have to basically run two very different HTTP engines that serve content very differently for the next decade, waiting for people to upgrade their browsers.

I would have liked to have seen an HTTP/1.2 intermediary step that added a few extra headers, like 'Server-Push: "filename", ETag', and perhaps a specification rule that no HTTP/1.2 request could ever ask for /favicon.ico or /apple-touch-icon.png. Just that would have eliminated the countless wasteful connection requests -> 304 Not Modified responses that we have today on HTTP/1.1, without having to resort to max-age and losing the ability to instantly update your site. And it would just silently keep working for HTTP/1.1 users (obviously without the 1.2 benefits).

Also, all of these slides are always pretty sparse. Given that the new header format is binary, does anyone know how clients are going to go about requesting HTTP/2 capabilities? Is there a special HTTP/1.1 header? Because Apache will respond to "GET / HTTP/2" with an HTTP/1.1 OK response at present. (In fact, it responds with 200 OK even to "GET / ITS_JUST_A_HARMLESS_LITTLE_BUNNY/3.141592"...)
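For what it's worth, the spec as drafted answers the negotiation question two ways: over TLS the protocol is chosen via the ALPN extension, and over cleartext via an HTTP/1.1 request carrying `Upgrade: h2c` and an `HTTP2-Settings` header (RFC 7540 §3.2). An HTTP/1.1-only server like the Apache above simply ignores the upgrade and answers normally. A sketch of the cleartext handshake, against a placeholder host:

```python
# Sketch: the cleartext HTTP/1.1 -> HTTP/2 upgrade dance (RFC 7540 §3.2).
# An HTTP/1.1-only server ignores the Upgrade header and replies normally,
# which is what makes this backward-compatible.
import base64, socket

# HTTP2-Settings carries a base64url-encoded SETTINGS payload;
# an empty payload (no settings) is legal.
settings = base64.urlsafe_b64encode(b"").rstrip(b"=").decode()

request = (
    "GET / HTTP/1.1\r\n"
    "Host: example.com\r\n"
    "Connection: Upgrade, HTTP2-Settings\r\n"
    "Upgrade: h2c\r\n"
    f"HTTP2-Settings: {settings}\r\n"
    "\r\n"
)

with socket.create_connection(("example.com", 80)) as s:
    s.sendall(request.encode())
    reply = s.recv(1024).decode(errors="replace")

# "HTTP/1.1 101 Switching Protocols" -> server speaks h2c from here on;
# "HTTP/1.1 200 OK" -> plain old HTTP/1.1, fall back gracefully.
print(reply.splitlines()[0])
```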
_stephan over 10 years ago
The current specification of priorities in HTTP/2 seems problematic (and apparently was agreed upon by coin toss): http://lists.w3.org/Archives/Public/ietf-http-wg/2014OctDec/0450.html
higherpurpose over 10 years ago
It's a shame they dropped mandatory encryption. Now "HTTP2 all the things" doesn't mean "encrypt all the things" anymore.
BorisMelnik over 10 years ago
Wow, what a great history lesson. I had no idea HTTP 1 (or 0.9) was basically an "idea framework" for the WWW. Sounds like this will help solve some major issues and organize the interwebs a bit better.
adam-a over 10 years ago
I'm curious about how the header tables are supposed to work. How does the client reliably know which request was last received by the server? In the example

> method: GET
> path: /resource
> ...

can be followed by just

> path: /other_resource

but how do I know a badly behaving router didn't delay a DELETE request from earlier? Do I have to manually table all my responses to make sure there are no potentially dangerous requests on the wire?
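The tables in question are HPACK's header tables: both endpoints keep a synchronized dynamic table, and a request need only transmit headers that differ from what the table already holds. A toy illustration of that delta idea, not real HPACK encoding (RFC 7541 additionally defines a static table, index numbering, and Huffman-coded literals):

```python
# Toy illustration of HPACK's delta idea: only headers that aren't
# already in the shared table go on the wire. Real HPACK (RFC 7541)
# is considerably more involved -- this just shows why request 2 is tiny.
class ToyHeaderTable:
    def __init__(self):
        self.table = {}             # name -> value, mirrored on both ends

    def encode(self, headers):
        wire = {k: v for k, v in headers.items() if self.table.get(k) != v}
        self.table.update(wire)     # the decoder applies the same update
        return wire

enc = ToyHeaderTable()
req1 = enc.encode({":method": "GET", ":path": "/resource",
                   "user-agent": "toy/1.0"})
req2 = enc.encode({":method": "GET", ":path": "/other_resource",
                   "user-agent": "toy/1.0"})
print(req1)   # all three headers transmitted
print(req2)   # only {':path': '/other_resource'} -- the delta
```

As for the reordering worry: the table state is scoped to a single connection, and frames on one TCP connection arrive in order, so an intermediary cannot reorder the table updates relative to the requests that depend on them.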
dsr_ over 10 years ago
What's the number one thing people who run web servers are concerned about?

It's not performance.

It's not customer experience.

It's reachability.

When a person with a browser clicks, they have to receive the page they clicked on before anything else can happen. Seems obvious, right?

In order to do that, they need a web server that they trust. Most people opt for Apache, nginx, or IIS. Which of those has well-tested and trustworthy HTTP/2 implementations?
est over 10 years ago
I wonder what QUIC's role is in this? A UDP-based multi-homed roaming protocol looks so much better for today's mobile internet world.
devanti over 10 years ago
I'm assuming clients and web servers that start to support HTTP/2 will also be backwards-compatible with HTTP/1.1, which seems necessary.
hnmcs over 10 years ago
Hell yeah, Ilya Grigorik.

https://www.igvita.com/archives

"A word to the wise is sufficient." When he speaks, it's quite often relevant at a quantum level beyond your average tech post or presentation.