Interestingly, I couldn't submit this URL to HN because it would take me to the original story posted 6 months ago. So I had to put a random query string at the end of the URL, like this:

https://status.aws.amazon.com/?x=1000

@dang, any way HN can allow new submissions to the same URL? Thank you.
There actually are several good reasons to run in us-east-1.

Something I hear a lot is the latency argument: if you're a startup based in Boston and your roundtrip to us-west-2 is 80ms (this is actually my roundtrip to us-west-2 right now), it doesn't matter, because your customers on the west coast will see 80ms if you go into us-east-1, too. That's true, but your first customers will probably be local, and you almost certainly don't have the resources to be doing true multi-region deployments right out of the gate. So my personal feeling is: deploy into us-east-1 to give your first customers a good experience, _fully understanding that you are taking on some extra risk_, and pay it down as tech debt in time.

Another reason plays off this, but in a very different way: a hybrid cloud deployment, where the public cloud is used as an on-demand extension of the datacenter. Something I once saw was that the 80ms round trip from a Boston datacenter to us-west-2 actually expanded into massive connection latencies: 80ms for a DNS lookup, and then another 500+ms (!) for the TCP and TLS handshake round trips, all before a SQL query or REST call actually started. That was a complete non-starter.
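To make the compounding concrete, here's a back-of-envelope sketch of how a single 80ms RTT multiplies during connection setup. The per-phase round-trip counts are my own assumptions (e.g. a full TLS 1.2 handshake costing 2 RTTs; TLS 1.3 would cut that to 1, and an uncached DNS resolution can cost more than 1):

```python
# Back-of-envelope: how an 80 ms RTT compounds before a query even starts.
# Round-trip counts per phase are assumptions, not measurements.
RTT_MS = 80

phases = {
    "DNS lookup": 1,          # assuming a single uncached resolution
    "TCP handshake": 1,       # SYN / SYN-ACK / ACK
    "TLS 1.2 handshake": 2,   # full handshake; TLS 1.3 needs only 1
}

total = sum(rtts * RTT_MS for rtts in phases.values())
for name, rtts in phases.items():
    print(f"{name}: {rtts} RTT = {rtts * RTT_MS} ms")
print(f"Setup cost before the first SQL/REST byte: {total} ms")
# Plus at least one more RTT (80 ms) for the actual request/response.
```

That's 320ms of pure setup in the best case, before any application work, which is in the same ballpark as the 580+ms I observed once real-world DNS and handshake overheads are added.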
Interesting that they use blue as the color that indicates an "issue"; at a glance, it looks like nothing to see here. I have a feeling that if the entire AWS infrastructure were to go offline somehow, they would refer to that as "increased error rates".
> Existing instances and networks continue to work normally.

This suggests that one can minimize certain failure modes by making the configuration as static as possible, i.e. creating a fixed number of long-running instances. But then, using immutable, ephemeral instances can make for a more resilient system when the EC2 control plane is working normally. There are always tradeoffs.
I run mostly in us-west-2, and when talking about AWS I've always felt reliability was pretty good. us-east-1 really is the one that seems to get hammered the hardest on the error counts.
GCP also experienced problems. A particularly nasty one: if you deleted a service account, you ended up with a restart loop:

> We are experiencing an issue with Google Kubernetes Engine. Removing Service Account from GKE might lead to infinite cluster master restarts. Please refrain from removing GKE service accounts.