Blogspam.<p>Here's the actual announcement: <a href="http://chrome.blogspot.com/2012/01/speed-and-security.html" rel="nofollow">http://chrome.blogspot.com/2012/01/speed-and-security.html</a>
This is really going to fuck up your log analysis...<p>You'll be seeing traffic that never materialized, and headers aren't part of any standard log format.<p>Every web-server configuration and every log analysis script will need to be modified, unless Chrome...<p>1. Adds a GET variable to each URL to signify this is a preview pull (ex: GET <a href="http://url/?ChromePreview=Background" rel="nofollow">http://url/?ChromePreview=Background</a>).<p>2. Hits the URL again in some way (ex: HEAD <a href="http://url/?ChromePreview=View" rel="nofollow">http://url/?ChromePreview=View</a>) to signify that it has now become a real view.<p>(<i>edit: adding GET vars is a bad idea as outlined in comments</i>)<p>Even that would solve only half the problem: you'd still need to update your analysis scripts in a non-trivial way.<p>Headers won't work well here as they aren't logged; the only things you could do with them are block the request, or have Apache, IIS, Node.js, etc. add non-standard entries to the log file, which creates more problems.<p>(<i>edit: headers are about the only way for this to work as outlined in comments</i>)<p>Not to mention the extra traffic on the web could roughly double.
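For concreteness, here's a minimal sketch of the header route in Node.js (TypeScript). The header name "x-chrome-preview" is purely an assumption on my part (nothing official has been documented); the point is just what "have your server add non-standard entries to the log" looks like:

    // Sketch: tag requests carrying a hypothetical preview header in the
    // access log so analysis scripts can filter them out later.
    import * as http from "http";
    import * as fs from "fs";

    const log = fs.createWriteStream("access.log", { flags: "a" });

    http.createServer((req, res) => {
      // "x-chrome-preview" is an assumed header name -- Chrome hasn't
      // said what, if anything, it will send with preview fetches.
      const preview = req.headers["x-chrome-preview"] !== undefined;
      // Non-standard extra field tacked onto each log line; every
      // downstream analysis script now has to know about it.
      log.write(`${new Date().toISOString()} ${req.method} ${req.url} preview=${preview}\n`);
      res.end("ok");
    }).listen(8080);

Which is exactly the problem: the information ends up in a field no existing log format or analysis tool expects.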
Google did something similar themselves several years ago as an add-in to Internet Explorer. It was called Web Accelerator or something like that (correct me if I'm wrong, it might have been for Firefox or even Chrome itself).<p>It prefetched links you were likely to click on a website, but it had to be abandoned because some GET links also triggered actions, such as removing blog posts, on way too many web pages, causing havoc.<p>I fear this will have the same problem.
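To make that failure mode concrete, a purely hypothetical sketch (the route and handler are made up, not any real blog software): a destructive action sitting behind a plain GET link, which any prefetcher would happily trigger.

    // Sketch of the classic mistake: a side effect behind a plain GET.
    import * as http from "http";
    import { URL } from "url";

    // Toy blog state, just to make the side effect visible.
    const posts = new Map<number, string>([[42, "hello world"]]);

    http.createServer((req, res) => {
      const u = new URL(req.url ?? "/", "http://localhost");
      // A prefetcher that follows a link to /posts/delete?id=42 deletes
      // the post without any user intent.
      if (req.method === "GET" && u.pathname === "/posts/delete") {
        posts.delete(Number(u.searchParams.get("id")));
        res.end("deleted");
        return;
      }
      res.end([...posts.values()].join("\n"));
    }).listen(8080);

The usual fix applies: destructive actions belong behind POST/DELETE, which prefetchers don't issue.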
One thing that bothered me about Instant, and will probably annoy me about this, was using Chrome to test REST APIs. Hitting my local development server in debug mode with half-constructed URLs used to drive me crazy. In general, for surfing around, I think the feature is great, but I would love to be able to exclude a given site from prefetch/Instant.
I understand that Chrome will now start to fetch and display pages before you finish typing the URL.<p>Google Instant is bad, but at least only Google has to bear the increased load on its servers.<p>Now Chrome is trying to build "Web Instant", which everyone will have to support.
We actually launched an extension for Chrome which did the same thing in October 2010, when Google Instant search launched. It was a one-night thing; I was in college. The adoption was not good, so I kinda ignored the project. We did it with the omnibox API, but it was an experimental API at that time, so I released it with a button you could click and type into to load pages instantly.<p>Here is the link:
<a href="https://chrome.google.com/webstore/detail/nipkbmplhlokenofofabadcmppaklbhp" rel="nofollow">https://chrome.google.com/webstore/detail/nipkbmplhlokenofof...</a><p>it does not work anymore coz of an api problem. u can see the working video. never bothered to fix it. just wanted to say that i did that before Google :)
Doesn't Chrome already do this to a certain extent? I've seen server logs myself where Chrome has tried to fetch URLs that I hadn't finished typing.
They should send a special HTTP request header along with preload requests so site owners can choose to block them.<p>E.g., the web browser sends a request with the header:<p>X-Page-Preload: Something<p>and I configure my webserver to return a 403 for any request with that header.
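A minimal sketch of that blocking in Node.js (TypeScript), assuming the hypothetical X-Page-Preload header above (nothing like it is actually specified); an Apache or nginx header-match rule could do the same thing:

    // Sketch only: 403 any request carrying the hypothetical
    // X-Page-Preload header proposed above.
    import * as http from "http";

    http.createServer((req, res) => {
      // Node lowercases incoming header names.
      if (req.headers["x-page-preload"] !== undefined) {
        res.statusCode = 403;
        res.end("Previews not allowed");
        return;
      }
      res.end("Hello, real visitor");
    }).listen(8080);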
I was pretty sure that Chrome already does this (maybe it was a flag though, actually). I switched it off after a while because it got irritating... luddite that I am.
What will this do to web traffic? Sounds like this could result in a lot of additional traffic (if the prefetched page is not what I'm interested in).