I really feel for the Amazon employees working in the US-East data center this week. Not only do they have to worry about the safety of their families, but half the internet will be watching them if anything goes down. I can't even imagine all the contingencies they have to prepare for (e.g. an engineer gets paged but has no power at home, or can't drive in because of flooded streets). Best of luck to them.
I have pressureNET running on EC2 in US-East. It's collecting data about Sandy from a bunch of Android users in the region, and I only realized a few hours ago that... it's going to get hit by Sandy. I'm not sure what to do.

The Android app has a hardcoded server URL pointing at the DNS name of my instance. If the server goes down and I prepare a backup, all my users will still be sending to the old, dead server (assuming Sandy takes out AWS in that region). I could also update the app and give it a new server URL, but then I'm going to lose many hours of valuable hurricane data as users take time to get the update, etc.

Does anyone have any thoughts on how I can handle this?

I realize my mistakes and know how to fix them for next time, and my current data is obviously backed up. But for incoming data... am I screwed?

Edit: I have an idea. I'll update the app and give it a backup URL to use only if the main one is non-responsive. Then I'll publish the update and cross my fingers.
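Roughly, the failover I have in mind looks like this (just a sketch, not the actual pressureNET client code; the URLs, payload fields, and class name are placeholders):

    import java.io.IOException;
    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.nio.charset.StandardCharsets;

    // Minimal failover uploader: POST to the primary endpoint first, and only
    // retry against the backup if the primary is unreachable or errors out.
    // In the app this would run off the UI thread (background service/AsyncTask).
    public class FailoverUploader {

        // Placeholder endpoints -- substitute the real primary/backup hosts.
        private static final String PRIMARY_URL = "http://primary.example.com/submit";
        private static final String BACKUP_URL  = "http://backup.example.com/submit";

        public static void main(String[] args) {
            String payload = "{\"pressure\": 1013.25, \"lat\": 40.7, \"lon\": -74.0}";
            if (!post(PRIMARY_URL, payload)) {
                // Primary looks dead; retry once against the backup region.
                post(BACKUP_URL, payload);
            }
        }

        // Returns true if the endpoint accepted the payload (HTTP 2xx).
        private static boolean post(String endpoint, String body) {
            HttpURLConnection conn = null;
            try {
                conn = (HttpURLConnection) new URL(endpoint).openConnection();
                conn.setConnectTimeout(5000);   // fail fast if the host is down
                conn.setReadTimeout(5000);
                conn.setRequestMethod("POST");
                conn.setDoOutput(true);
                conn.setRequestProperty("Content-Type", "application/json");
                try (OutputStream out = conn.getOutputStream()) {
                    out.write(body.getBytes(StandardCharsets.UTF_8));
                }
                int status = conn.getResponseCode();
                return status >= 200 && status < 300;
            } catch (IOException e) {
                return false;   // connection refused, timeout, DNS failure, etc.
            } finally {
                if (conn != null) {
                    conn.disconnect();
                }
            }
        }
    }

The short timeouts are the important part: if the primary host is dark, the app should give up within a few seconds and fall through to the backup rather than hanging.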
"We recommend customers with production databases (Crane and up) create a follower running in us-west-2 using the --region flag (this is an alpha feature we are exposing ahead of schedule to prepare for this incident)"<p>Finally! Excited multi-AZ support is coming to Heroku.
Those on EC2 might want to read http://alestic.com/2010/10/ec2-ami-copy for their own disaster preparations. (Though ideally you should have been on it before now...)
I was hoping http://blog.linode.com/ would mention a risk assessment/strategy for their Newark data center.

This is a good summary of the data centers at risk: http://readwrite.com/2012/10/29/hurricane-sandy-vs-the-internet-in-the-path-of-frankenstorm
A few Heroku add-ons also have high-availability options, along with a new add-on status page: http://status.addons.heroku.com/

RedisToGo - http://blog.togo.io/status/redistogo-hurricane-preparation/

MongoHQ - http://blog.mongohq.com/blog/2012/10/29/monitoring-the-weather-situation-with-hurricane-sandy/
Why aren't there more data centers in the solid craton part of the continent, far from these inevitable coastal problems? Genuinely curious as to why Minnesota or somewhere isn't the datacenter capital of the US.
I never thought of that: they don't just have AWS availability to worry about, they have the availability of 79 external service providers, totally out of their hands, to worry about.
So we're complimenting companies for telling us they don't have redundant setups, when they've been claiming all along that they do?
Is US-East the least reliable of all the AWS data centres? Arguably, Virginia is more susceptible to natural disasters than Oregon or Northern California (while we wait for that 1-in-100-year earthquake).