Ask HN: Best setups to avoid availability outages on AWS

128 points, by richardv, almost 13 years ago

Using only AWS services, what do you put in place to help prevent disruptions when a single availability zone goes down?

The simplest approach would be to set up your instances across multiple AZs and then configure the ELB to round-robin requests until the health of one of the instances is poor.

Any other thoughts?
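For concreteness, a minimal sketch of that multi-AZ ELB setup in Python with boto3 (assumed here, not part of the original question); the load balancer name, zones, and instance IDs are placeholders:

    import boto3

    elb = boto3.client('elb', region_name='us-east-1')

    # Create a classic load balancer that spans several availability zones.
    elb.create_load_balancer(
        LoadBalancerName='web-lb',
        Listeners=[{'Protocol': 'HTTP', 'LoadBalancerPort': 80,
                    'InstanceProtocol': 'HTTP', 'InstancePort': 80}],
        AvailabilityZones=['us-east-1a', 'us-east-1b', 'us-east-1c'],
    )

    # Health check: stop routing to an instance after 3 failed probes.
    elb.configure_health_check(
        LoadBalancerName='web-lb',
        HealthCheck={'Target': 'HTTP:80/health', 'Interval': 30, 'Timeout': 5,
                     'UnhealthyThreshold': 3, 'HealthyThreshold': 2},
    )

    # Register one instance per zone; the ELB spreads requests across them.
    elb.register_instances_with_load_balancer(
        LoadBalancerName='web-lb',
        Instances=[{'InstanceId': 'i-aaaa1111'}, {'InstanceId': 'i-bbbb2222'}],
    )

With a health check in place the ELB only removes an instance from rotation after it fails the configured number of probes, which is the "until the health of one of the instances is poor" behavior described above.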

15 comments

PaulHoule, almost 13 years ago
I hate to sound like a simpleton, but for a small operation you're best off putting all your eggs in one basket.

I'm in one of the us-east zones and I haven't had a failure in at least a year. They retired one machine I was using, and dealing with that was as simple as starting and stopping -- at a time I chose.

With five zones in US East, the probability of a zone failure affecting a single-zone system is 1 in 5.

If you're a busybody who spreads your system across five zones, the probability of a failure affecting you becomes 1.

You're spending more money, and dealing with a lot more complexity, all to increase the probability that hardware failures will affect you.

Now, you're hoping that a zone-distributed system will be able to recover from failures, but that's tricky to do and it's quite unlikely to work if you haven't tested it. Add the fact that all the other "cool kids" will be trying to recover their systems at the same time, which can make AMZN's control plane go down.

In the meantime, with probability 4/5 I'm sleeping through the disaster and the first time I hear about it is on Hacker News.
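A worked check of that arithmetic (a sketch; it assumes exactly one of the five zones fails and that your zones are chosen uniformly):

    # Chance that a single-AZ outage touches your system,
    # by how many of the five zones you actually use.
    zones_total = 5
    for zones_used in (1, 5):
        print(zones_used, "zone(s):", zones_used / zones_total)
    # 1 zone(s): 0.2  -> you sleep through 4 out of 5 outages
    # 5 zone(s): 1.0  -> every single-zone outage hits some part of your system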
Comment #4181572 not loaded
Comment #4181541 not loaded
sehugg, almost 13 years ago
My thought: have a very nice screen for your mobile app/website that says "We are down for maintenance, please stand by."

Sorry to be a fatalist, but it's a hard problem. This last outage was more than just an AZ failure. Region-wide API usage was affected, so operations like static IP reassignment and ELB changes were not taking effect. This means you are hanging out in the wind should there be something unusual that requires manual intervention (as was the case with us).

Route 53 is a good service, but I don't know how its control plane works, and it could be that problems in a single region would disable the ability to update DNS records (I would guess that DNS reads are a lot more available than writes). And in any case, DNS is not a very good failover mechanism due to upstream caching.

Unless your business model requires higher reliability than Instagram, Netflix, and Pinterest, I'd suggest going multi-AZ, crossing your fingers, and doing everything else right.
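For reference, DNS-based failover in Route 53 looks roughly like the sketch below (boto3 assumed; the hosted zone ID, domain, and IP addresses are placeholders). The low TTL only partially mitigates the upstream-caching problem mentioned above:

    import uuid
    import boto3

    r53 = boto3.client('route53')

    # Health check against the primary endpoint.
    hc = r53.create_health_check(
        CallerReference=str(uuid.uuid4()),
        HealthCheckConfig={'IPAddress': '203.0.113.10', 'Port': 80, 'Type': 'HTTP',
                           'ResourcePath': '/health', 'RequestInterval': 30,
                           'FailureThreshold': 3},
    )

    # Primary/secondary failover records with a low TTL.
    r53.change_resource_record_sets(
        HostedZoneId='ZONEID',
        ChangeBatch={'Changes': [
            {'Action': 'UPSERT', 'ResourceRecordSet': {
                'Name': 'www.example.com.', 'Type': 'A', 'TTL': 60,
                'SetIdentifier': 'primary', 'Failover': 'PRIMARY',
                'HealthCheckId': hc['HealthCheck']['Id'],
                'ResourceRecords': [{'Value': '203.0.113.10'}]}},
            {'Action': 'UPSERT', 'ResourceRecordSet': {
                'Name': 'www.example.com.', 'Type': 'A', 'TTL': 60,
                'SetIdentifier': 'secondary', 'Failover': 'SECONDARY',
                'ResourceRecords': [{'Value': '198.51.100.20'}]}},
        ]},
    )

Note that updating these records still goes through the Route 53 control plane, which is exactly the dependency the comment above is worried about.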
Comment #4181994 not loaded
rkalla, almost 13 years ago
Mindless rambling ahead; I love this topic.

Richard,

AWS is an amazing tool and you have a few options here, but the downside is that the more options you use to be highly available (HA), the more expensive AWS gets (as you would imagine).

Your first option is to be HA across a SINGLE region; to do this you make use of elastic load balancers (ELB) plus auto-scaling. You set up auto-scaling rules to launch more instances in different availability zones (AZs), either in response to demand or in response to failures (e.g. "always keep at least 3 instances running").

You complement that with an ELB to load-balance incoming requests automatically across those instances in the different AZs. This is all fairly straightforward through the web console (except auto-scaling is still done via the CLI for some reason).

If you want to be HA ACROSS regions you can't just use ELBs anymore; you have some added complexity and an additional AWS feature you will likely want to use: Route 53.

Route 53 is Amazon's DNS service, which offers a lot of slick DNS features like removing dead endpoints from DNS rotation, latency-based routing, etc. There are also something like 29 deployments of Route 53 (and CloudFront) around the globe, so you'll hopefully never have Route 53 become a point of failure for you even if disaster strikes.

In this scenario you would set up the HA configuration for a single region as mentioned above, but you would do it in multiple regions. Put another way: 2+ servers in multiple AZs in each AWS region, with a Route 53 DNS configuration pointing to each ELB in each region representing those individual pockets of servers.

On top of that you would use Route 53 to manage all routing of client requests into your entire domain; you can leverage the new "latency-based routing" (effectively what everyone was asking for from GeoDNS for years, but even better) and its monitoring capability to ensure you aren't routing anyone to a dead region.

SIMPLIFICATION
--------------

Here is what I would recommend given the size of your budget and the need to stay up in the AWS cloud, in order of expense:

1. Launch a single instance in a region with acceptable latency that has never had an outage before (e.g. Oregon has never completely gone down but Virginia has -- yes, yes, I know VA is older, but you understand my point). This solution will be cheaper than multiple instances in any region.

2. Launch multiple instances using the web console, in multiple AZs in US-EAST (the cheapest option for multiple instances), and front them with an ELB. You skip any auto-scaling complexity here but you need to keep an eye on your servers. I think ELB fixed the issue where it would effectively route traffic into the void if all the instances in an AZ went down.

OPTIONAL: If you didn't mind spending a few dollars more, you could use this strategy in the region that has never gone down, for added peace of mind.

3. Launch single instances in multiple REGIONS and front them with Route 53. This isn't really a recommended setup, as entire regions will disappear if you lose a single instance, BUT I said I would list possibilities in order of price, so there you go. You could mitigate this by setting up auto-scaling policies to replace any dead instances quickly, in the off chance you wanted to do exactly this but not babysit the web console all day.

4. Launch multiple instances in each region, across multiple AZs, fronted by ELBs, and then the entire collection fronted by Route 53.

NOTE: The real cost comes from the additional instances and not from Route 53 or the ELB, so if you can use smaller instances (or reserved instances) to help keep costs down, that might allow you to provide a larger HA setup.

What about my data?
-------------------

Yes, yes... this is an issue that someone already touched on (data locality, below).

You will have to decide on a single region to hold your data; in this case I would recommend using DB services that aren't based on EC2 and have never (or rarely) experienced outages -- this includes S3, SimpleDB and/or DynamoDB. AWS's MySQL offering (RDS) is just custom EC2 instances with MySQL running on them, so any time EC2 goes down, RDS goes down.

The other DB offerings are all custom and, except for SimpleDB a long time ago, have never experienced outages that I am aware of.

Making this choice is all about latency and which DB store you are comfortable with (obviously don't choose SimpleDB if everything you do requires MySQL -- then use RDS); you'll want your data as close to your web tier as possible, so if you are spread across all regions you'll just want to pick the region with the smallest latency to MOST of your customers (typically the West coast if you have a lot of Asia/Australia customers and the East coast if you have a lot of European customers).

Want to go to 11?
-----------------

If you have the money and desperately want to go to 11 with this regional scale (which I love to do, so I am sharing this), you can combine services like DynamoDB and SQS to effectively create a globally distributed NoSQL datastore with behavior along the lines of:

1. A write operation comes into a region; immediately write it to the local DynamoDB instance, asynchronously queue the write command in SQS, and return to the caller.

2. In 1+ additional EC2 instances running daemons, pull messages from SQS in chunk sizes that make sense and replay them out to the other regions' DynamoDB stores; erase the messages when processed, or, if the processing fails, the next daemon to spin up will replay them.

3. On reads, just hit the local DynamoDB in any region and reply; we trust our reconciliation threads to do the work to keep us all in sync eventually.

NOTE: If you prefer to do read-repairs here you can, but it will increase complexity and inter-region communication, which all costs money.

The challenge with this approach is that you pull a lot of DB concerns up into your code, like conflict resolution, resyncing entire regions after failure, bringing new regions online and ensuring they are synchronized, diffs, etc.

There is a reason AWS doesn't offer a globally distributed data store: it is a really hard problem to get right once you make it past the 80% use case.

Your data will determine whether this is an option or not; some data allows for certain amounts of inconsistency, in which case this strategy is awesome and works great, while other data (e.g. banking data) cannot allow a single wiggle of inconsistency, in which case pulling all this DB logic up into the application is a bad idea. Your failure scenarios become catastrophic (e.g. your conflict-resolution logic is wrong and wipes out the balance from an account, or keeps re-filling the balance on an empty account... something bad, basically).

It is all a trade-off though; if you managed your own Cassandra cluster, Cassandra does all this and much more for you automatically, but then you just put your time into Cassandra administration instead of developing the logic around DynamoDB (or SimpleDB, or MySQL, or whatever); just pick which devil you feel more comfortable with.

I am not aware of a services company yet that offers cross-region AWS datastore deployments; DataStax and Iris Couch will set up something like that for you via a consulting/custom arrangement, but there isn't a dashboard for launching it automatically.

Hope that helped (and didn't bring you to tears of boredom).
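A rough sketch of the write path in steps 1 and 2 of that "go to 11" setup, assuming boto3; the table name, queue name, and regions are placeholders, and conflict resolution is deliberately left out:

    import json
    import boto3

    LOCAL_REGION = 'us-east-1'
    REMOTE_REGIONS = ['us-west-2', 'eu-west-1']

    dynamo = boto3.client('dynamodb', region_name=LOCAL_REGION)
    sqs = boto3.client('sqs', region_name=LOCAL_REGION)
    queue_url = sqs.get_queue_url(QueueName='replication-log')['QueueUrl']

    def write(item):
        # 1. Write to the local DynamoDB table, then queue the command for replay.
        dynamo.put_item(TableName='users', Item=item)
        sqs.send_message(QueueUrl=queue_url, MessageBody=json.dumps(item))

    def replication_daemon():
        # 2. Replay queued writes to the other regions; delete a message only
        #    on success, so a failed batch is re-delivered to the next daemon.
        while True:
            resp = sqs.receive_message(QueueUrl=queue_url,
                                       MaxNumberOfMessages=10,
                                       WaitTimeSeconds=20)
            for msg in resp.get('Messages', []):
                item = json.loads(msg['Body'])
                for region in REMOTE_REGIONS:
                    boto3.client('dynamodb', region_name=region).put_item(
                        TableName='users', Item=item)
                sqs.delete_message(QueueUrl=queue_url,
                                   ReceiptHandle=msg['ReceiptHandle'])

    # Example write; DynamoDB items use the attribute-value format.
    # write({'id': {'S': 'user-1'}, 'name': {'S': 'Richard'}})

Reads (step 3) just hit the local region's table directly; everything hard, such as conflicts and region resync, lives outside this sketch, which is exactly the trade-off the comment describes.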
Comment #4181708 not loaded
Comment #4181740 not loaded
Comment #4182893 not loaded
justincormack, almost 13 years ago
Decide which part of the CAP theorem (http://en.wikipedia.org/wiki/CAP_theorem) you want to give up on. Presumably you decided that availability was not it, so you need to program around a lack of consistency and/or partition tolerance. Essentially that means there is no "master database", and you will need to reconcile differing views. This can get quite application-specific, and you need to understand your data well.
rvagg, almost 13 years ago
FWIW I initially went into ELB assuming it would solve a lot of my redundancy problems. And while it has helped a lot (I spread my frontend across 3 zones), I've suffered through a number of ELB failures or disruptions, including this latest one, which is one of the worst. Even with fully functioning servers that I can connect to individually, ELB was intermittently rejecting connections and failed to reregister instances. There's no silver bullet! Just prepare for failure and attempt to handle it gracefully, learning from each one. I suppose you should also think hard before you launch into a greater AWS budget to increase availability. Most of us are tempted to do that after each major incident--which is why Amazon can walk away from these events in a better position than before (until they have a genuine competitor that is).
explodingbarrel, almost 13 years ago
We run a few decent-sized social games and we have survived all the major AWS region outages in the past year. Here's what we do and what I would suggest.

1. Use Rightscale. You can get away with the free edition, but for $500/month the basic paid edition will give you access to arrays and all the excellent scripts available on the marketplace.

2. The front end. I would strongly suggest moving away from ELB. We are using it and are about to get rid of it. The main problem is exactly what happened last night: a whole AZ goes down, the ELB for that zone can get screwed, and the DNS was not updating the CNAME to remove the bad zone. Instead of ELB, we have our own LB solution we are going to roll out that will use Rightscale server arrays and handle updating the DNS names itself. We also aren't going to use Route 53, because we learned last night that its API can go down and you can get stuck with bad DNS records.

3. Application servers. Use at least 3 AZs and space them evenly. This is easy to do in Rightscale with server arrays. Make sure your voting ratio for scaling isn't 50%, because you might not scale correctly if you lose 2 AZs. Keep the vote at 30% and you will be happy (if one zone votes to grow, let it grow).

4. Database. This is the fun one. We have been using MongoDB with pretty good success. Our multi-shard DB has 3 servers per replica set, distributed equally between AZs. We use 4-drive EBS RAID-0 arrays for storage, which have had problems in the past due to the outages that EBS sometimes has. Our best bet has been a watcher process that will kill the mongod process if there's any problem writing to the drive array. By doing this, the replica set will automatically fail over to the next server and we won't get stuck with a primary node that can't write back to disk. For backups, we just freeze writes on the secondaries and do EBS snapshots every 15 minutes. Rightscale has some great EBS tools for managing this for you. If we lose a server, we can deploy a new one in a matter of minutes and it will rebuild the RAID array from the last backup, so we have a warm spare.

5. Monitor, monitor, monitor. Rightscale has some great tools for monitoring everything. Use them, and use more monitoring on other infrastructure (e.g. Pingdom).

Doing something like this will cost a lot more than just sticking to a single AZ, but you should be able to survive one, if not two, complete datacenter outages.
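A sketch of the kind of watcher process described in point 4; the data directory path, probe interval, and use of pgrep are assumptions, not the commenter's actual tooling:

    import os
    import signal
    import subprocess
    import time

    DATA_DIR = '/data/mongodb'                  # mount point of the EBS RAID-0 array
    PROBE_FILE = os.path.join(DATA_DIR, '.disk_probe')

    def disk_is_writable():
        # A tiny synced write; if the EBS array is wedged, this raises OSError.
        try:
            with open(PROBE_FILE, 'w') as f:
                f.write(str(time.time()))
                f.flush()
                os.fsync(f.fileno())
            return True
        except OSError:
            return False

    while True:
        if not disk_is_writable():
            try:
                # Kill mongod so the replica set elects a new primary elsewhere.
                pid = int(subprocess.check_output(
                    ['pgrep', '-x', 'mongod']).decode().split()[0])
                os.kill(pid, signal.SIGKILL)
            except subprocess.CalledProcessError:
                pass                            # mongod already gone; nothing to kill
        time.sleep(15)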
Comment #4182205 not loaded
aeden, almost 13 years ago
Sending traffic to different zones isn't the challenge; the challenge is deciding where your master data will live. In fact, this has always been one of the biggest challenges of building a fault-tolerant system. If your master data store lives in one zone then you've got latency issues, but if it lives in multiple zones then you need to find a logical way to shard. You could also replicate across zones and then turn off writes if the zone with the master fails. You could even change masters in that case, but there's a risk of data loss there.

Anyhow, sorry I don't have a simple answer -- I'm not sure a simple answer exists.
Comment #4181434 not loaded
Comment #4182898 not loaded
alanbyrne, almost 13 years ago
I am on PHPFog for my front end, with an AWS RDS back end. I managed to survive this incident without an outage (I am on US East as well), although I did get some horrendous response times from RDS for about an hour there.

PHPFog are on AWS and I pay them to make sure they have the redundancy worked out. If they don't, I would yell at them until I got some money back.

I am considering configuring RDS for Multi-AZ, but I need to research it a little more first. From what I can tell you just click a button to turn it on, but there were a lot of people complaining yesterday that the failover didn't work at all when it was supposed to.

I also have a bunch of EC2 VMs that do back-end processing and have a load of cron jobs on them that need to run once every 24 hours. If these go down for a couple of hours there is no noticeable impact on my customers; they can still log into my service and access their historical data.

I have considered spreading across multiple regions etc., but at the end of the day it's just too expensive for the small increase in reliability.
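For what it's worth, turning on Multi-AZ for an existing RDS instance is a single API call; a sketch with boto3 (the instance identifier is a placeholder, and as the comment above notes, the failover behavior itself is the part worth testing):

    import boto3

    rds = boto3.client('rds', region_name='us-east-1')
    rds.modify_db_instance(
        DBInstanceIdentifier='my-rds-instance',
        MultiAZ=True,
        ApplyImmediately=False,   # apply during the next maintenance window
    )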
elijahchancey, almost 13 years ago
Assuming we want to minimize latency and maximize reliability, we want to create a stack that:

1) Has Auto Scaling groups & Elastic Load Balancers in two regions (and only two availability zones; let's keep front-end instances in the same AZ as your local/region-specific DB)

2) Has databases in two regions and uses master-master replication

3) Has instances talk to their local DB. If they detect their local DB is down, they fail over to the remote DB (i.e., the far region). If they fail over, they notify you.

4) Does geographic load balancing in DNS (pre-ELB). You'll need a provider like DynDNS or UltraDNS to give you geo load balancing & failover. Or you could pair a monitoring service like CatchPoint with Route 53.

5) Uses application caching (Memcache, Redis, etc.). Let's not put more load on the DBs than necessary.

That's a good start, at least.
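A sketch of point 3 above, local-first DB connections with a remote fallback and a notification; the hostnames, credentials, SNS topic, and use of PyMySQL are assumptions for illustration:

    import boto3
    import pymysql

    LOCAL_DB = 'db.us-east-1.internal'     # placeholder hostnames
    REMOTE_DB = 'db.us-west-2.internal'
    ALERT_TOPIC = 'arn:aws:sns:us-east-1:123456789012:db-failover'  # placeholder

    def get_connection():
        # Prefer the DB in the local region; fall back to the far region
        # and notify a human if we had to fail over.
        try:
            return pymysql.connect(host=LOCAL_DB, user='app', password='secret',
                                   database='app', connect_timeout=3)
        except pymysql.err.OperationalError:
            boto3.client('sns', region_name='us-east-1').publish(
                TopicArn=ALERT_TOPIC,
                Message='Local DB unreachable; failing over to ' + REMOTE_DB)
            return pymysql.connect(host=REMOTE_DB, user='app', password='secret',
                                   database='app', connect_timeout=3)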
mark_l_watson, almost 13 years ago
I haven't tried this (I use single EC2 deployments, some Heroku, and also have a Hetzner server), but it is something I have been thinking of: have the web services that back your web app on a single server, and yes, that will fail on hopefully rare occasions. Host the JavaScript+HTML5+CSS front end on S3 with the CloudFront CDN. The home page of your app will almost never go offline, and you control what to report to your users if your backend services are offline. Sure, you lose core functionality, but you still have static content and a friendly message about the temporary lack of services.

Going beyond that, at the cost of slow response times when trying to access a downed backend, you could deploy back-end web services to two different hosting providers, perhaps running something like CouchDB replicated on each provider. The JavaScript in your UI could switch to an alternative back end after a timeout. For "one page" style apps, you could maintain the state information that a backend host is down in the browser.
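A sketch of the static-front-end half of that idea with boto3 (the bucket name and file names are placeholders; putting CloudFront in front of the bucket is a separate step not shown here):

    import boto3

    s3 = boto3.client('s3')
    bucket = 'my-app-frontend'

    # Serve index.html (and a friendly "services are down" page) straight from S3.
    s3.put_bucket_website(
        Bucket=bucket,
        WebsiteConfiguration={
            'IndexDocument': {'Suffix': 'index.html'},
            'ErrorDocument': {'Key': 'offline.html'},
        },
    )

    # Upload the static front end.
    s3.upload_file('index.html', bucket, 'index.html',
                   ExtraArgs={'ContentType': 'text/html'})
    s3.upload_file('offline.html', bucket, 'offline.html',
                   ExtraArgs={'ContentType': 'text/html'})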
rdl, almost 13 years ago
Start here: http://aws.amazon.com/architecture/

I don't think they show how to do ELB across regions, or diversity against single-ELB problems (although I haven't seen ELB fail yet). You'd probably have to build this yourself.
Comment #4181557 not loaded
trebor, almost 13 years ago
From what I've heard, you're on the right track. However, I'd want it to not round-robin but go to the nearest working node. I don't use AWS, so I don't know how to configure the ELB, but I would assume that this is possible.
bfisher9, almost 13 years ago
Super-low TTL and refresh combined with replication to a DR provider. High availability placed exclusively on a single provider, even Amazon (albeit across different AZs), is of zero value if all of Amazon itself is offline...
neilwillgettoit, almost 13 years ago
I'm shocked no one has mentioned http://www.cedexis.com/ yet.
cardmagic, almost 13 years ago
Try a multi-infrastructure PaaS like http://appfog.com/