New multi-page HTTP compression proposal from Google

38 points, by dmv, over 16 years ago

6 comments

axod, over 16 years ago
Support for something like this would be a step in the right direction, but I think there are a couple of simpler ways to improve HTTP.

A similar peeve of mine is HTTP headers.

If a browser opens a connection to a web server, and the connection is keep-alive, the browser will send several requests down that one connection. But for *every* single request, it'll send its full headers. That's really wasteful and idiotic. Send full headers when the connection is opened; there's no need to repeat them every single time.

Also, if the connection is keep-alive, it'd be reasonably simple to apply gzip compression over the full data stream, not per request. This would achieve the same as the Google proposal, but in a better way IMHO.

The HTTP headers can add up quite a bit if you're using XMLHttpRequest or similar, and if the data is small, compression isn't worthwhile. HTTP header spam is a PITA.

So if I had my way:

* Headers *only* sent once at the start of a connection, not per request. Resend them only if they change, e.g. a new cookie has been set since the last request :/

* A new transfer-type to specify that the data is gzipped as one stream, instead of gzipped per request.

Those two simple changes to HTTP would make things *so* much better.
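A rough sketch of axod's second suggestion, using Python's zlib with made-up response payloads (nothing here is from the thread or the proposal): compressing many similar responses through one deflate stream held open for the whole connection, versus starting a fresh stream per response.

    import zlib

    # Fifty small, similar responses, as you might see from repeated
    # XMLHttpRequest polling on one keep-alive connection (invented data).
    responses = [b'{"user": "alice", "status": "ok", "items": []}\n' * 3
                 for _ in range(50)]

    # Per-request compression: a fresh deflate stream for every response.
    per_request = sum(len(zlib.compress(r)) for r in responses)

    # Connection-level compression: one deflate stream reused across responses,
    # sync-flushed after each one so the receiver can decode incrementally.
    stream = zlib.compressobj()
    per_connection = 0
    for r in responses:
        per_connection += len(stream.compress(r))
        per_connection += len(stream.flush(zlib.Z_SYNC_FLUSH))
    per_connection += len(stream.flush())

    print("per request:", per_request, "bytes; per connection:", per_connection, "bytes")

Because the shared stream keeps its history window across responses, repeated structure costs almost nothing after the first response, which is the saving axod is describing (headers aside).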
ardit33, over 16 years ago
I read the whole thing, and I just don't like it. The beauty of HTTP headers, cookies, and elements is their simplicity (or primitiveness). They are easy to implement.

This proposal would introduce a huge amount of complexity to the HTTP spec. If you have implemented caching in a client, you know how easy it is for things to go wrong; even if the clients get it right, the server and content managers could mess this up royally, really fast.

The other thing I don't like is that when you use raw sockets and try to implement HTTP over them (there are many reasons to do this, especially on mobile), you now have to deal with more complexity.

As somebody mentioned above, the better fix is to eliminate duplicate HTTP headers and address the redundancy in the markup language itself (i.e. HTML5 or XHTML2), not in the transport protocol.
jwilliams, over 16 years ago
I haven't read the detail of the specification, but it is a great idea.

The amount of similarity between pages of markup (especially XML) or related pieces of JavaScript can be significant.

I found a Google PowerPoint that hints at some of the benefits: http://209.85.141.104/search?q=cache:RIkP-5qZ4awJ:assets.en.oreilly.com/1/event/7/Shared%2520Dictionary%2520Compression%2520Over%2520HTTP%2520Presentation.ppt+SDCH+results&hl=en

The PPT claims *about 40 percent better data reduction than gzip alone on Google search*.
dmv, over 16 years ago
Link (of a link) to the PDF: http://sdch.googlegroups.com/web/Shared_Dictionary_Compression_over_HTTP.pdf
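For a feel of what the spec describes, here is a minimal stand-in using Python's zlib preset-dictionary support. SDCH itself uses VCDIFF delta encoding rather than deflate, and the dictionary below is invented, so this only illustrates the shared-dictionary idea, not the actual protocol.

    import zlib

    # Hypothetical dictionary of boilerplate both client and server already hold.
    dictionary = (b"<html><head><title>Search results</title>"
                  b"<link rel='stylesheet' href='/site.css'></head><body>")

    page = dictionary + b"<div>result 1</div><div>result 2</div></body></html>"

    # Plain deflate of the whole page.
    plain = zlib.compress(page)

    # Deflate with the shared dictionary preloaded on both sides.
    co = zlib.compressobj(zdict=dictionary)
    with_dict = co.compress(page) + co.flush()

    # Decompression needs the same dictionary.
    do = zlib.decompressobj(zdict=dictionary)
    assert do.decompress(with_dict) + do.flush() == page

    print(len(plain), "bytes without dictionary,", len(with_dict), "bytes with it")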
andrewf, over 16 years ago
Can't be a coincidence that they started pushing this a week after Chrome arrived. I wonder what other proposals Google has coming?
bprater, over 16 years ago
Curious as to how this compares to standard GZIP compression over the course of a hundred pages on a website.
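One way to frame that comparison, again with zlib's preset dictionary standing in for SDCH and a hundred invented pages that share most of their boilerplate; the real answer depends entirely on how much a site's pages actually have in common.

    import zlib

    boilerplate = (b"<html><head><link rel='stylesheet' href='/site.css'></head>"
                   b"<body><nav>Home | News | Jobs</nav>")
    pages = [boilerplate +
             b"<h1>Article %d</h1><p>Body text for article %d.</p></body></html>" % (i, i)
             for i in range(100)]

    # Standard approach: deflate every page independently.
    gzip_only = sum(len(zlib.compress(p)) for p in pages)

    # Shared-dictionary approach: each page still compressed independently,
    # but with the common boilerplate preloaded as a dictionary.
    shared = 0
    for p in pages:
        co = zlib.compressobj(zdict=boilerplate)
        shared += len(co.compress(p) + co.flush())

    print("independent gzip:", gzip_only, "bytes; with shared dictionary:", shared, "bytes")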