How I attacked myself with Google Spreadsheets (2012)

168 points | by aliakhtar | over 9 years ago

11 comments

tristanj | over 9 years ago

Previous discussion (763 points, 143 comments). For some reason this doesn't show up in the "past" search results but shows up if you search for the domain.

https://news.ycombinator.com/item?id=3890328
aresant | over 9 years ago

That is just such a fantastic headline. So close to linkbait, but really just a brilliant summary of what actually happened. So good I actually remember it from last time (and clearly others did too, based on the first comment!). I'm always amazed by what enters our collective consciousness based on exceptional wordsmithery.
thoman23 | over 9 years ago

Apparently this is a repost, but I for one missed it the first time around.

I'll just say I found it to be a highly entertaining and well-written account of a nightmare scenario I think many of us here can relate to: the unexpected and unexplained exploding AWS bill.
wodenokoto | over 9 years ago

I know Google has cheaper bandwidth than most, but it's still amazing that they are willing to pull 250 GB every hour of every day for a single, free spreadsheet.
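The scale of that number is easy to check with back-of-envelope arithmetic. A sketch, assuming a flat ~$0.12/GB egress rate (roughly AWS's outbound pricing of that era; the actual tiered rates differ):

```python
# Sustained egress of 250 GB/hour, priced at an assumed $0.12/GB.
# Both the rate and the flat pricing are assumptions for illustration.
GB_PER_HOUR = 250
RATE_PER_GB = 0.12

per_day_gb = GB_PER_HOUR * 24
per_day_cost = per_day_gb * RATE_PER_GB
print(f"{per_day_gb} GB/day ≈ ${per_day_cost:,.0f}/day")  # → 6000 GB/day ≈ $720/day
```

A couple of days at that pace lands in the same ballpark as the surprise bill mentioned further down the thread.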
scintill76 | over 9 years ago

> What I find fascinating in this setting is that Google becomes such a powerful weapon due to a series of perfectly legitimate design decisions.

It does have a certain "perfect storm of good intentions" quality, but no: "prefetching" hundreds of gigabytes worth of images that the user is not looking at right now*, that will not be cached for the next time the user views them, that the user did not indicate will be changing frequently or have recently changed, and doing it every hour on the hour (according to timestamps in a screenshot), is not a "perfectly legitimate" design. Calling it that implies, IMO, that there is nothing Google should change about this (maybe the author does not mean that).

Maybe I or the author am missing something here. Why did Google think it was necessary to fetch something that will neither be immediately shown to the user nor cached for later? I can understand the no-caching decision, but then why fetch at all if it's not needed *now*? Why is one hour supposedly short enough for some hypothetical user who wants their spreadsheet's embedded images to update automatically, but long enough not to cause damage (it wasn't long enough in this case)? And I hinted at "on the hour" above because it seems like some sort of staggered refreshing would be better on the CPUs and networks involved, though it wouldn't have made a difference to the author.

Even if for some reason they think fetching this aggressively and wastefully is good, it seems like it's in Google's own interest to have some kind of safety valve (bandwidth restriction, hard abort, something in between) after a few hundred megabytes on one spreadsheet's refresh cycle. If nothing else, that omission suggests it probably wasn't a "legitimate" design decision.

Wild theory: the author was accidentally causing the refresh somehow (or maybe purposely automated it and forgot). Somehow that seems more likely than Google setting it up this way on purpose...

* I'm kind of assuming here, but the author doesn't mention anything like actively viewing the spreadsheet while the attack was happening. Even if he had it open (and with all the image-linked cells in view!) for hours on end, I stand by my other points that it's strange and not a perfect design for Google to auto-refresh in this fashion.
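The "staggered refreshing" the comment proposes is a standard thundering-herd mitigation: add random jitter to each refresh interval so clients don't all fire on the hour. A minimal sketch of the idea (a hypothetical mitigation, not a description of Google's actual scheduler):

```python
import random

def next_refresh_delay(base_seconds: float = 3600.0, jitter_frac: float = 0.25) -> float:
    """Return the delay until the next refresh, spread uniformly across a
    window around the base interval instead of firing exactly on the hour.
    With the defaults, delays fall anywhere in [2700, 4500] seconds."""
    jitter = base_seconds * jitter_frac
    return base_seconds + random.uniform(-jitter, jitter)
```

Spreading the fetches wouldn't have reduced the total bandwidth in this story, as the commenter notes, but it smooths the load spikes on both the fetcher and the origin server.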
spdionis | over 9 years ago

A lot of people complain about Amazon lacking a spending cap every time such a story appears. Invariably, though, every time I've read about something like this happening, Amazon dropped the bill if the usage was not intentional.

Honestly, from what I've seen, this policy of Amazon's is really nice, and if they did otherwise they'd constantly get a lot of bad PR. Cases like this probably happen often, but not everyone writes about it. A lot more would write rant blog posts if Amazon didn't drop such bills.
x1798DE | over 9 years ago

Does anyone know if Google ended up changing their behavior on this?

I'm struggling to see why this is a legitimate design decision on their part - how is downloading a new copy every hour different from maintaining a persistent cache wherever they store it after download?
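The persistent-cache alternative the comment describes is what HTTP conditional requests are for: keep the downloaded copy, and on each refresh revalidate it instead of re-downloading. A sketch of the revalidation headers (the header names are standard HTTP; the helper function itself is hypothetical):

```python
def conditional_headers(etag=None, last_modified=None) -> dict:
    """Build revalidation headers for a previously cached copy. A server
    that still holds the same bytes answers 304 Not Modified - a few
    hundred bytes of headers - instead of resending the full image."""
    headers = {}
    if etag:
        headers["If-None-Match"] = etag
    if last_modified:
        headers["If-Modified-Since"] = last_modified
    return headers
```

With this scheme an hourly refresh of an unchanged 10 MB image costs one small request/response round trip, not 10 MB of transfer.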
vortico | over 9 years ago

Why do people choose to use hosting services with no cap, and then complain that their bills are arbitrarily high? You agreed to that in the Terms and Conditions.

Just use a service with a fixed monthly rate for a fixed capacity, and up/downgrade as needed. Of course you don't want your service to be shut down after reaching a limit, but you should be watching the resources as you would with AWS; only the consequences are much less bizarre than a surprise $1,700 bill.
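"Watching the resources" can be as simple as extrapolating month-end spend from usage so far, so an anomaly surfaces in hours rather than on the invoice. A minimal sketch; the $0.12/GB rate is an assumption for illustration, not a quoted AWS price:

```python
def projected_monthly_cost(gb_so_far: float, hours_elapsed: float,
                           rate_per_gb: float = 0.12,
                           hours_in_month: float = 720.0) -> float:
    """Extrapolate current egress usage linearly to a month-end cost.
    Compare the result against a budget to trigger an early alert."""
    return gb_so_far / hours_elapsed * hours_in_month * rate_per_gb
```

In the spreadsheet scenario (250 GB/hour), even a single hour of metered usage projects to a five-figure monthly bill, which is exactly the kind of signal a watchdog should alarm on.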
scintill76 | over 9 years ago

If content negotiation[1] had a standard way to express this, Google's client could have told the server it only needed a thumbnail of size N, and a smart server could serve fewer bytes.

[1] https://developer.mozilla.org/en-US/docs/Web/HTTP/Content_negotiation
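A standard along these lines did eventually appear: HTTP Client Hints, standardized years after this thread. A sketch of what such a size-aware request could look like; note the server must opt in via `Accept-CH` and is free to ignore the hint, and the helper function is hypothetical:

```python
def thumbnail_request_headers(width_px: int) -> dict:
    """Headers a thumbnail fetcher could send so a cooperating server
    serves a downscaled image. Sec-CH-Width is an HTTP Client Hint
    carrying the desired resource width in CSS pixels."""
    return {
        "Accept": "image/webp,image/*;q=0.8",
        "Sec-CH-Width": str(width_px),
    }
```

For the spreadsheet case, serving a 128-pixel-wide rendition instead of the full 10 MB original would have cut the transfer by orders of magnitude.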
leni536 | over 9 years ago

Is it a good idea to use AWS through a prepaid virtual card to avoid such cases? I'm planning to set up a personal site, and I could afford the site going down instead of paying $1000. The guy got a refund, though, but I would rather not go through that hassle.
bearzoo | over 9 years ago

I REALLY REALLY want someone to do this with a huge number of Google Image thumbnails so that the Google crawlers just start hitting Google servers. Would it be considered malicious to do such a thing?