About 15 minutes ago I got a call from a customer that our site was down. "Works for me," I said, because I could bring it up on my laptop. The customer said he couldn't get it on his phone, and then I confirmed I couldn't get it on my phone either.<p>The AWS Health Dashboard (https://health.aws.amazon.com/health/status) reports no issues at all. But DownDetector (https://downdetector.com/status/aws-amazon-web-services/) shows a spike in reports.<p>I can't even reach the AWS console through my phone.<p>So, AWS has connectivity issues to certain networks and their own health dashboard is lying to us about it. What gives?<p>(All of this is accurate as of 2:10 pm CST.)<p>Update: as of 2:26 pm CST, the health dashboard reports that they are "investigating an issue". So, 45 minutes after DownDetector sees it, they do.
Everyone seems to overlook the point here: that yet again Amazon were slow as hell to be honest with their customers. I get it, up/down reports help, but why do you keep using a service which lies to you about availability? I've read on HN in the past that the dashboard can only be updated to reflect an issue with approval. (Comments section on a similar posting; believe it if you wish.)
So why not move to a hosting company that is transparent and open about its status? I'll not make suggestions, as I don't want to be accused of trying to shill for a specific provider, but there are plenty out there. 45 minutes to update their public dash is too slow. Either they don't care, they don't monitor, or they're trying to hide their stats for fear Jeff will beat the staff over SLA violations.
If any other provider lied to customers the way AWS does, it wouldn't be tolerated. Why do you tolerate this behaviour from AWS?<p>Edited to fix autocorrect issues
Keep in mind the AWS status dashboard solely reflects the owning product manager's discretion.<p>And the number of yellows ("green I" if you're old enough) is definitely a material input to PIPs :)
I'm seeing it too, and surprised their health page says nothing. The US East 2 console is unresponsive. <a href="https://us-east-2.signin.aws.amazon.com/" rel="nofollow">https://us-east-2.signin.aws.amazon.com/</a>
I'm hearing from customers and other employees that our stuff at AWS (us-east-2) is unreachable, but I'm able to get to it all without any issue (via http & ssh). Perhaps there's a problem upstream of AWS that's only affecting some ISPs?
A reminder that the public and personal health dashboards are not the only port of call.<p>If you pay for the top tier of AWS support and you have a suspected outage, you'd page AWS, who will pick up the phone and start debugging your problem.<p>If your business depends on AWS, you don't sit around clicking refresh on a status page hoping it might be updated.
About two weeks ago all three of our Aurora DB instances in eu-central-1 suddenly crashed and were offline for almost 55 minutes. Simultaneously we had random network problems within our eu-central-1 VPC which we were unable to diagnose. We still don't know what happened, because we're not getting any answers to our support request. The AWS health dashboard was all green the entire time, and no notifications were sent out.
We're still up on us-east-2, but lots of customers are calling in that they can't connect - makes me think some network is down somewhere.
It does seem to be a networking issue. I have an EC2 instance in us-east-2 that is accessible through a "Global Accelerator" but not externally through my ISP.<p>That EC2 instance can talk to other EC2 instances on us-east-2 - but none of those other instances are accessible externally.
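If anyone wants to reproduce this kind of path comparison, here's a rough sketch (the hostnames in the comments are placeholders, not real endpoints): try a TCP connect to the same instance over both the direct public address and the accelerator address, and see which one succeeds.

```python
import socket


def tcp_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


# Placeholder endpoints -- substitute your own instance and accelerator:
# direct = tcp_reachable("ec2-x-x-x-x.us-east-2.compute.amazonaws.com", 443)
# via_ga = tcp_reachable("<your-accelerator>.awsglobalaccelerator.com", 443)
#
# If via_ga succeeds while direct fails, the instance itself is healthy and
# the problem is on the direct Internet path into us-east-2.
```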
I'm still seeing it on my end. Our currently-running EC2 instances are working fine, but the EC2 us-east-2 console webpage doesn't load, and an EC2 instance in us-east-2 I rebooted has yet to come back online.
They've finally updated their status.<p>12:26 PM PST We are investigating an issue, which may be impacting Internet connectivity between some customer networks and the US-EAST-2 Region.
We're seeing a similar thing for our us-east-2 properties. Some of our team is able to reach them, but others aren't. Folks in the Midwest (Oklahoma and Michigan) can't even load the AWS console, while people in Texas, California, Arizona, and Pennsylvania can.
A vendor's cloud product is having significant issues. Figured HN would tell me which major public cloud infrastructure fell over to cause it. Never fails.
Snowflake confirmed AWS us-east-2 issues as well.<p>AWS - US East (Ohio): INC0073093
<a href="https://status.snowflake.com/incidents/yv40l966krl9" rel="nofollow">https://status.snowflake.com/incidents/yv40l966krl9</a>
Anyone with affected RDS instances? We were occasionally getting random connectivity issues today... new pods were suddenly getting timeouts connecting to MySQL DBs within 1-2 minutes after startup.
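If the timeouts really are transient network flapping, retrying the initial connection with backoff at pod startup can ride it out. A minimal sketch - `connect_fn` is a stand-in for your MySQL driver's connect call (e.g. pymysql.connect with your own args), not a real API:

```python
import time


def connect_with_retry(connect_fn, attempts: int = 5, base_delay: float = 0.5):
    """Call connect_fn until it succeeds, backing off exponentially.

    connect_fn is a zero-argument callable standing in for your driver's
    connect call. Re-raises the last error if every attempt fails.
    """
    last_err = None
    for i in range(attempts):
        try:
            return connect_fn()
        except OSError as err:  # socket timeouts / resets during the outage
            last_err = err
            time.sleep(base_delay * (2 ** i))
    raise last_err
```

Usage would be something like `conn = connect_with_retry(lambda: pymysql.connect(host=..., user=...))` so a few seconds of flapping at startup doesn't kill the pod.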