Summary of the Amazon Kinesis Event in the Northern Virginia (US-East-1) Region

355 points by codesparkle over 4 years ago

23 comments

joneholland over 4 years ago
Running out of file handles and other IO limits is embarrassing and happens at every company, but I'm surprised that AWS was not monitoring this.

I'm also surprised at the general architecture of Kinesis. What appears to be their own hand-rolled gossip protocol (clearly terrible compared to Raft or Paxos: a thread per cluster member? Everyone talking to everyone? An hour to reach consensus?) and the front-end servers being stateful, period, break a lot of good design choices.

The problem with growing as fast as Amazon has is that their talent bar couldn't keep up. I can't imagine this design being okay 10 years ago when I was there.
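For a sense of the scaling problem being criticized: with a thread per cluster member on every front-end server, per-host thread count grows linearly with fleet size and fleet-wide connections grow quadratically. A back-of-the-envelope sketch (the fleet sizes and the per-process limit below are made up, not AWS's real numbers):

    # Thread-per-peer gossip: each of N front-end servers keeps one thread per peer.
    PER_PROCESS_THREAD_LIMIT = 4096  # hypothetical OS-configured ceiling

    for fleet_size in (500, 2000, 4000, 5000):
        threads_per_server = fleet_size - 1                 # one thread per other server
        total_connections = fleet_size * (fleet_size - 1)   # everyone talks to everyone
        status = "OVER LIMIT" if threads_per_server > PER_PROCESS_THREAD_LIMIT else "ok"
        print(f"{fleet_size:>5} servers: {threads_per_server:>5} threads/server, "
              f"{total_connections:,} connections fleet-wide -> {status}")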
ris over 4 years ago
The one thing I want to know in cases like this is: why did it affect multiple Availability Zones? Making a resource multi-AZ is a significant additional cost (and often involves additional complexity) and we really need to be confident that typical observed outages would *actually* have been mitigated in return.
tnolet over 4 years ago
This is a pretty damn decent post-mortem so soon after the outage. It also gives an architectural analysis of how Kinesis works, which is something they did not have to do at all.
lytigas over 4 years ago
> During the early part of this event, we were unable to update the Service Health Dashboard because the tool we use to post these updates itself uses Cognito, which was impacted by this event.

Poetry.

Then, to be fair:

> We have a back-up means of updating the Service Health Dashboard that has minimal service dependencies. While this worked as expected, we encountered several delays during the earlier part of the event in posting to the Service Health Dashboard with this tool, as it is a more manual and less familiar tool for our support operators. To ensure customers were getting timely updates, the support team used the Personal Health Dashboard to notify impacted customers if they were impacted by the service issues.

I'm curious if anyone here actually got one of these.
freeone3000 over 4 years ago
The failure to update the Service Health Dashboard was due to reliance on internal services to update. This also happened in March 2017 [0]. Perhaps a general, instead of piecemeal, approach to removing dependencies on running services from the dashboard would be valuable here?

[0] https://aws.amazon.com/message/41926/
codesparkle over 4 years ago
From the postmortem:

*At 9:39 AM PST, we were able to confirm a root cause [...] the new capacity had caused all of the servers in the fleet to exceed the maximum number of threads allowed by an operating system configuration.*
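The postmortem does not say which limit was hit; for reference, a minimal sketch of how one might inspect the usual Linux suspects (the per-process thread/process ceiling and the system-wide ones):

    import resource

    # Per-process/user limit on threads and processes (what `ulimit -u` reports).
    soft, hard = resource.getrlimit(resource.RLIMIT_NPROC)
    print(f"RLIMIT_NPROC: soft={soft} hard={hard}")

    # System-wide ceilings that also cap thread creation on Linux.
    for path in ("/proc/sys/kernel/threads-max", "/proc/sys/kernel/pid_max"):
        with open(path) as f:
            print(path, "=", f.read().strip())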
rswail over 4 years ago
Minor detail, but is anyone else irritated by the use of the word "learnings" instead of "lessons"? "To learn" is a verb. Nouning verbs seems to be an unnecessary operationalization.
bithavoc over 4 years ago
They're calling it an "Event"; the title should say "Summary of the Amazon Kinesis Outage..."
lend000 over 4 years ago
Even today I had a few minutes of intermittent networking outages around 9:30am EST (these started on the day of the incident), and, compared to other regions, I frequently get timeouts when calling S3 from us-east-1 (although that has been happening since forever).
karmakaze over 4 years ago
Seems to me that the root problem could also be fixed by not using presumably blocking application threads talking to each of the other servers. Any async or poll mechanism wouldn't require N^2 threads across the pool.
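For illustration, a rough sketch of that alternative: a single event loop multiplexing connections to every peer, so per-host thread count stays flat no matter how large the fleet gets (the peer list and port are hypothetical):

    import asyncio

    PEERS = [(f"10.0.0.{i}", 9000) for i in range(2, 202)]  # hypothetical fleet

    async def poll_peer(host, port):
        # One lightweight coroutine per peer instead of one OS thread per peer.
        reader, writer = await asyncio.open_connection(host, port)
        writer.write(b"PING\n")
        await writer.drain()
        reply = await reader.readline()
        writer.close()
        await writer.wait_closed()
        return reply

    async def main():
        # All peers are polled concurrently on a single thread.
        results = await asyncio.gather(
            *(poll_peer(h, p) for h, p in PEERS), return_exceptions=True
        )
        ok = sum(1 for r in results if not isinstance(r, Exception))
        print(f"{ok}/{len(PEERS)} peers reachable")

    asyncio.run(main())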
ipsocannibal over 4 years ago
So the cause of the outage boils down to not having a metric on total file descriptors with an alarm if usage gets within 10% of the max, and a faulty scaling plan that should have said "for every N backend hosts we add, we must add X frontend hosts". One metric and a couple of lines in a wiki could have saved Amazon what is probably millions in outage-related costs. One wonders if Amazon retail will start hedging its bets and go multi-cloud to prevent impacts on retail customers from AWS LSEs.
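As a rough sketch of the kind of check being described, reading Linux's system-wide file-descriptor counters and alarming near the ceiling (the 90% threshold is just the commenter's figure):

    FD_ALARM_FRACTION = 0.9  # alarm when within 10% of the max, per the comment

    def fd_usage():
        # /proc/sys/fs/file-nr: allocated, unused, system-wide maximum
        with open("/proc/sys/fs/file-nr") as f:
            allocated, _unused, maximum = map(int, f.read().split())
        return allocated, maximum

    allocated, maximum = fd_usage()
    if allocated >= FD_ALARM_FRACTION * maximum:
        print(f"ALARM: {allocated}/{maximum} file descriptors in use system-wide")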
ignoramous over 4 years ago
root-cause tldr:

*...[adding] new capacity [to the front-end fleet] had caused all of the servers in the [front-end] fleet to exceed the maximum number of threads allowed by an operating system configuration [number of threads spawned is directly proportional to number of servers in the fleet]. As this limit was being exceeded, cache construction was failing to complete and front-end servers were ending up with useless shard-maps that left them unable to route requests to back-end clusters.*

fixes:

...moving to larger CPU and memory servers [and thus fewer front-end servers]. Having fewer servers means that each server maintains fewer threads.

...making a number of changes to radically improve the cold-start time for the front-end fleet.

...moving the front-end server [shard-map] cache [that takes a long time to build, up to an hour sometimes?] to a dedicated fleet.

...move a few large AWS services, like CloudWatch, to a separate, partitioned front-end fleet.

...accelerate the cellularization [0] of the front-end fleet to match what we've done with the back-end.

[0] https://www.youtube.com/watch?v=swQbA4zub20 and https://assets.amazon.science/c4/11/de2606884b63bf4d95190a3c2390/millions-of-tiny-databases.pdf
terom over 4 years ago
Unsurprising to see such outages also tickling bugs/issues in the fallback behavior of dependent services that were intended to tolerate outages. There must be some classic law of cascading failures caused by error handling code :)

> Amazon Cognito uses Kinesis Data Streams [...] this information streaming is designed to be best effort. Data is buffered locally, allowing the service to cope with latency or short periods of unavailability of the Kinesis Data Stream service. Unfortunately, the prolonged issue with Kinesis Data Streams triggered a latent bug in this buffering code that caused the Cognito webservers to begin to block on the backlogged Kinesis Data Stream buffers.

> And second, Lambda saw impact. Lambda function invocations currently require publishing metric data to CloudWatch as part of invocation. Lambda metric agents are designed to buffer metric data locally for a period of time if CloudWatch is unavailable. Starting at 6:15 AM PST, this buffering of metric data grew to the point that it caused memory contention on the underlying service hosts used for Lambda function invocations, resulting in increased error rates.
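A minimal sketch of what "best effort" buffering has to do to avoid both failure modes quoted above: stay bounded and shed load, rather than blocking the caller or growing until it starves the host (an illustration, not Cognito's or Lambda's actual code):

    from collections import deque

    class BestEffortBuffer:
        """Bounded local buffer that drops records rather than blocking or growing."""

        def __init__(self, max_items=10_000):
            self._items = deque(maxlen=max_items)  # oldest records fall off when full
            self.dropped = 0

        def publish(self, record):
            if len(self._items) == self._items.maxlen:
                self.dropped += 1  # count the shed record; never block the caller
            self._items.append(record)

        def drain(self, send):
            # Call when the downstream stream/metric service is healthy again.
            while self._items:
                send(self._items.popleft())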
londons_explore over 4 years ago
One requirement on my "production ready" checklist is that any catastrophic system failure can be resolved by starting a completely new instance of the service, and that it be ready to serve traffic inside 10 minutes.

That should be tested at least quarterly (but preferably automatically with every build).

If Amazon did that, this outage would have been reduced to 10 mins, rather than the 12+ hours that some super slow rolling restarts took...
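For illustration, roughly what such a cold-start check could look like in CI; the launch command and health endpoint are hypothetical:

    import subprocess, time, urllib.request

    DEADLINE_SECONDS = 10 * 60
    HEALTH_URL = "http://localhost:8080/healthz"   # hypothetical health endpoint

    # Start a completely fresh instance of the service (hypothetical command).
    proc = subprocess.Popen(["./run-service", "--fresh-state"])
    start = time.monotonic()

    ready = False
    while time.monotonic() - start < DEADLINE_SECONDS:
        try:
            if urllib.request.urlopen(HEALTH_URL, timeout=5).status == 200:
                ready = True
                break
        except OSError:
            time.sleep(5)  # not serving yet; poll again

    proc.terminate()
    assert ready, "cold start exceeded the 10-minute budget"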
steelframe over 4 years ago
> Cellularization is an approach we use to isolate the effects of failure within a service, and to keep the components of the service (in this case, the shard-map cache) operating within a previously tested and operated range. This had been under way for the front-end fleet in Kinesis, but unfortunately the work is significant and had not yet been completed.

Translation: The eng team knew that they had accumulated tech debt by cutting a corner here in order to meet one of Amazon's typical and insane "just get the feature out the door" timelines. Eng warned management about it, and management decided to take the risk and lean on on-call to pull heroics to just fix any issues as they come up. Most of the time yanking a team out of bed in the middle of the night works, so that's the modus operandi at Amazon. This time, the actual problem was more fundamental and wasn't effectively addressable with middle-of-the-night heroics.

Management rolled the "just page everyone and hope they can fix it" dice yet again, as they usually do, and this time they got snake eyes.

I guarantee you that the "cellularization" of the front-end fleet wasn't actually under way, but the teams were instead completely consumed with whatever the next typical and insane "just get the feature out the door" thing was at AWS. The eng team was never going to get around to cellularizing the front-end fleet because they were given no time or incentive to do so by management. During/after this incident, I wouldn't be surprised if management yelled at the eng team, "Wait, you KNEW this was a problem, and you're not done yet?!?" without recognizing that THEY are the ones actually culpable for failing to prioritize payments on tech debt vs. "new shiny" feature work, which is typical of Amazon product development culture.

I've worked with enough former AWS engineers to know what goes on there, and there's a really good reason why anybody who CAN move on from AWS will happily walk away from their 3rd- and 4th-year stock vest schedules (when the majority of your *promised* amount of your sign-on RSUs actually starts to vest) to flee to a company that fosters a healthy product development and engineering culture.

(Not to mention that, this time, a whole bunch of people's Thanksgiving plans were preempted with the demand to get a full investigation and post-mortem written up, including the public post, ASAP. Was that really necessary? Couldn't it have waited until next Wednesday or something?)
fafner over 4 years ago
From the summary I don't understand why front-end servers need to talk to each other ("continuous processing of messages from other Kinesis front-end servers"). It sounds like this is part of building the shard map or the cache. Well, in the end, an unfortunate design decision. #hugops for the team handling this. Cascading failures are the worst.
tmk1108 over 4 years ago
How does the architecture of Kinesis compare to Kafka? If you scale up the number of Kafka brokers, can you hit a similar problem? Or does Kafka not rely on creating threads to connect to each other broker?
zxcvbn4038 over 4 years ago
They didn't really discuss their remediation plans, but maybe having one fleet of servers for everything isn't the best setup. I'd love to know which OS setting they ran into. In their defense, this is exactly the sort of change that never shows up in testing because the dev and QA environments are always smaller than production.

I'm wondering how many people Amazon fired over this incident - that seems to be their go-to answer to everything.
temp0826 over 4 years ago
us-east-1 is AWS’s dirty secret. If ddb had gone down there, there would likely be a worldwide and multi-service interruption.
pps43 over 4 years ago
> the new capacity had caused all of the servers in the fleet to exceed the maximum number of threads allowed by an operating system configuration. [...] We didn't want to increase the operating system limit without further testing

Is it because operating system configuration is managed by a different team within the organization?
jaikant77 over 4 years ago
"and it turned out this wasn't driven by memory pressure. Rather, the new capacity had caused all of the servers in the fleet to exceed the maximum number of threads allowed by an operating system configuration."

An auto scaling irony for AWS! We seem to be back to the late 1990s :)
hintymad over 4 years ago
A tangential question: why would AWS even use the term "microservice"? A service is a service, right? I'm not sure what the term "microservice" signifies here.
metaedge over 4 years ago
I would have started the response with:

First of all, we want to apologize for the impact this event caused for our customers. While we are proud of our long track record of availability with Amazon Kinesis, we know how critical this service is to our customers, their applications and end users, and their businesses. We will do everything we can to learn from this event and use it to improve our availability even further.

Then move on to explain...