Summary of the Amazon S3 Service Disruption

1246 points by oscarwao, about 8 years ago

73 comments

ajross, about 8 years ago

> At 9:37AM PST, an authorized S3 team member using an established playbook executed a command which was intended to remove a small number of servers for one of the S3 subsystems that is used by the S3 billing process. Unfortunately, one of the inputs to the command was entered incorrectly and a larger set of servers was removed than intended.

It remains amazing to me that even with all the layers of automation, the root cause of most serious deployment problems remains some variant of a fat-fingered user.

conorh, about 8 years ago

This part is also interesting:

> While this is an operation that we have relied on to maintain our systems since the launch of S3, we have not completely restarted the index subsystem or the placement subsystem in our larger regions for many years.

These sorts of things make me understand why the Netflix "Chaos Gorilla" style of operating is so important. As they say in this post:

> We build our systems with the assumption that things will occasionally fail

Failure at every level has to be simulated pretty often to understand how to handle it, and it is a really difficult problem to solve well.

jph00, about 8 years ago

> From the beginning of this event until 11:37AM PST, we were unable to update the individual services' status on the AWS Service Health Dashboard (SHD) because of a dependency the SHD administration console has on Amazon S3.

Ensuring that your status dashboard doesn't depend on the thing it's monitoring is probably the first thing you should think about when designing your status system. This doesn't fill me with confidence about how the rest of the system is designed, frankly...

losteverything, about 8 years ago

"I did."

That was CEO Robert Allen's response when the AT&T network collapsed [1] on January 15, 1990. He was asked who made the mistake.

I can't imagine any CEO nowadays making a similar statement.

[1] http://users.csc.calpoly.edu/~jdalbey/SWE/Papers/att_collapse.html

savanaly, about 8 years ago

Not as interesting an explanation as I was hoping for. Someone accidentally typed "delete 100 nodes" instead of "delete 10 nodes" or something.

It sounds like the weakness in the process is that the tool they were using permitted destructive operations like that. The passage that stuck out to me: "in this instance, the tool used allowed too much capacity to be removed too quickly. We have modified this tool to remove capacity more slowly and added safeguards to prevent capacity from being removed when it will take any subsystem below its minimum required capacity level."

At the organizational level, I guess it wasn't rated as all that likely that someone would try to remove capacity that would take a subsystem below its minimum. Building in a safeguard now makes sense, as this new data point probably indicates that the likelihood of accidental deletion is higher than they had estimated.

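A minimal sketch of the kind of safeguard the quoted passage describes, assuming hypothetical subsystem minimums and a hypothetical per-run rate limit (none of these numbers or names come from the AWS report):

```python
# Sketch: refuse to remove capacity that would drop a subsystem below its
# minimum, and rate-limit how much capacity can leave in one run.

MIN_REQUIRED = {"index": 120, "placement": 80}   # hypothetical minimums
MAX_REMOVALS_PER_RUN = 5                          # hypothetical rate limit


def plan_capacity_removal(subsystem: str, current: int, to_remove: int) -> int:
    """Return how many servers may actually be removed, or raise."""
    if to_remove > MAX_REMOVALS_PER_RUN:
        raise ValueError(
            f"Refusing to remove {to_remove} servers at once; "
            f"limit is {MAX_REMOVALS_PER_RUN} per run"
        )
    floor = MIN_REQUIRED[subsystem]
    if current - to_remove < floor:
        raise ValueError(
            f"Removal would leave {subsystem} at {current - to_remove} servers, "
            f"below its minimum required capacity of {floor}"
        )
    return to_remove


# This is the class of mistake the safeguard is meant to catch.
plan_capacity_removal("index", current=130, to_remove=3)      # fine
# plan_capacity_removal("index", current=130, to_remove=30)   # raises
```
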
westernmostcoy, about 8 years ago

Take a moment to look at the construction of this report.

There is no easily readable timeline. It is not discoverable from anywhere outside of social media or directly searching for it. As far as I know, customers were not emailed about this - I certainly wasn't.

You're an important business, AWS. Burying outage retrospectives and live service health data is what I expect from a much smaller shop, not the leader in cloud computing. We should all demand better.

seanwilson, about 8 years ago

> At 9:37AM PST, an authorized S3 team member using an established playbook executed a command which was intended to remove a small number of servers for one of the S3 subsystems that is used by the S3 billing process. Unfortunately, one of the inputs to the command was entered incorrectly and a larger set of servers was removed than intended.

I find making errors on production when you think you're on staging is a big source of similar incidents. One of the best things I ever did at one job was to change the deployment script so that when you deployed you would get a prompt saying "Are you sure you want to deploy to production? Type 'production' to confirm". This helped stop several "oh my god, no!" situations when you repeated previous commands without thinking. For cases where you need to use SSH as well (best avoided but not always practical), it helps to use different colours, login banners and prompts for the terminals.

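A minimal sketch of the confirmation gate described above; the script structure and environment names are illustrative, not the commenter's actual tooling:

```python
# Sketch: a deploy wrapper that only proceeds to production when the operator
# literally types the word "production".

import sys


def confirm_production_deploy(target: str) -> None:
    """Abort unless the operator explicitly confirms the production target."""
    if target != "production":
        return  # staging/dev deploys go through without the extra prompt
    answer = input("Are you sure you want to deploy to production? "
                   "Type 'production' to confirm: ")
    if answer.strip() != "production":
        print("Aborting deploy.")
        sys.exit(1)


if __name__ == "__main__":
    env = sys.argv[1] if len(sys.argv) > 1 else "staging"
    confirm_production_deploy(env)
    print(f"Deploying to {env}...")  # the real deploy steps would follow here
```
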
dsr_, about 8 years ago

"We have not completely restarted the index subsystem or the placement subsystem in our larger regions for many years. S3 has experienced massive growth over the last several years and the process of restarting these services and running the necessary safety checks to validate the integrity of the metadata took longer than expected."

This is analogous to "we needed to fsck, and nobody realized how long that would take".

magd, about 8 years ago

Oh, that interview question: "Tell me about something you broke in your last job."

orthecreedence, about 8 years ago

TL;DR: Someone on the team ran a command by mistake that took everything down. Good, detailed description. It happens. Out of all of Amazon's offerings, I still love S3 the most.

idlewords, about 8 years ago

"We have changed the SHD administration console to run across multiple AWS regions."

Dear Amazon: please lease a $25/month dedicated server to host your status page on.

mleonhard, about 8 years ago

AWS partitions its services into isolated regions. This is great for reducing blast radius. Unfortunately, us-east-1 has many times more load than any other region. This means that scaling problems hit us-east-1 before any other region, and affect the largest slice of customers.

The lesson is that partitioning your service into isolated regions is not enough. You need to partition your load evenly, too. I can think of several ways to accomplish this (a sketch of the fourth follows this comment):

1. Adjust pricing to incentivize customers to move load away from overloaded regions. Amazon has historically done the opposite of this by offering cheaper prices in us-east-1.

2. Calculate a good default region for each customer and show that in all documentation, in the AWS console, and in code examples.

3. Provide tools to help customers choose the right region for their service. Example: http://www.cloudping.info/ (shameless plug).

4. Split the large regions into isolated partitions and allocate customers evenly across them. For example, split us-east-1 into 10 different isolated partitions. Each customer is assigned to a particular partition when they create their account. When they use services, they will use the instances of the services from their assigned partition.

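A minimal sketch of idea 4, assuming nothing about AWS internals: hash each account onto one of a fixed number of isolated partitions at account-creation time, so load spreads evenly and a bad partition only affects the customers assigned to it.

```python
# Sketch: stable, even assignment of customers to partitions.

import hashlib

NUM_PARTITIONS = 10  # hypothetical: "split us-east-1 into 10 partitions"


def partition_for_customer(account_id: str) -> int:
    """Deterministically map a customer account to a partition index."""
    digest = hashlib.sha256(account_id.encode("utf-8")).digest()
    return int.from_bytes(digest[:8], "big") % NUM_PARTITIONS


# Every request from this customer would be routed to the same partition's
# service instances.
print(partition_for_customer("123456789012"))
```
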
Dangeranger, about 8 years ago

So this is the second high-profile outage in the last month caused by a simple command-line mistake.

> Unfortunately, one of the inputs to the command was entered incorrectly and a larger set of servers was removed than intended.

If I had to guess which company could prevent mistakes like this from propagating, it would be AWS. It points to just how easy it is to make these errors. I am sure that the SRE who made this mistake is amazing and competent and just had one bad moment.

While I hope that AWS would be as understanding as GitLab, I doubt the outcome is the same.

rkuykendall-com, about 8 years ago

tl;dr: Engineer fat-fingered a command and shut everything down. Booting it back up took a long time. Then the backlog was huge, so getting back to normal took even longer. We made the command safer, and are gonna make stuff boot faster. Finally, we couldn't report any of this on the service status dashboard, because we're idiots, and the dashboard runs on AWS.

all_usernames, about 8 years ago

Overall, it's pretty amazing that the recovery was as fast as it was. Given the throughput of S3 API calls, you can imagine the kind of capacity that's needed to do a full stop followed by a full start. Cold-starting a service when it has heavy traffic immediately pouring into it can be a nightmare.

It'd be very interesting to know what kind of tech they use at AWS to throttle or do circuit breaking to allow back-end services like the indexer to come up in a manageable way.

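AWS has not described its actual mechanism, but a token-bucket admission throttle in front of a recovering backend is one common shape of the idea; the sketch below is hypothetical, not AWS's implementation:

```python
# Sketch: admit only as much traffic as a cold-starting backend can absorb.

import time


class TokenBucket:
    """Allow at most `rate` requests per second, with a small burst allowance."""

    def __init__(self, rate: float, burst: float):
        self.rate = rate
        self.capacity = burst
        self.tokens = burst
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # shed this request so the warming service isn't swamped


# During recovery, operators could start with a low rate and raise it as the
# backend's health checks improve.
bucket = TokenBucket(rate=100.0, burst=20.0)
if bucket.allow():
    pass  # forward the request to the indexer
else:
    pass  # return 503 / retry-after to the caller
```
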
hyperanthony, about 8 years ago

Something that wasn't addressed -- there seems to be an architectural issue with ELB where ELBs with S3 access logs enabled had instances fail ELB health checks, presumably while the S3 API was returning 5XX. My load balancers in us-east-1 without access logs enabled were fine throughout this event. Has there been any word on this?

djhworld, about 8 years ago

Really pleased to see this. It's good to see an organisation that's being transparent (and maybe has given us a little peek under the hood of how S3 is architected), and most importantly they seem quite humbled.

It would be easy for an arrogant organisation to fire or negatively impact the person who made the mistake. I hope Amazon doesn't fall into that trap and focuses instead on learning from what happened, closing the book and moving on.

mmanfrin, about 8 years ago

There are quite a few comments here ignoring the clarity that hindsight is giving them. Apparently the devops engineers commenting here have never fucked up.

certifiedloud, about 8 years ago

This is a bit off topic. The use of the word "playbook" suggests to me that they use Ansible to help manage S3. I wonder if that is the case, or if it's just internal lingo that means "a script". Unless there is some other configuration management system that uses the word "playbook" that I'm not aware of.

erikbye, about 8 years ago

What does everyone use S3 for?

I'm genuinely curious. My experiments with it have left me disappointed with its performance, so I'm just not sure what I could use it for. Store massive amounts of data that is infrequently accessed? Well, unfortunately the upload speed I got to the standard storage class was so abysmal it would take too much time to move the data there, and I suspect the inverse would be pretty bad as well.

i336_, about 8 years ago

> (...) [W]e have not completely restarted the index subsystem or the placement subsystem in our larger regions for many years. S3 has experienced massive growth over the last several years and the process of restarting these services and running the necessary safety checks to validate the integrity of the metadata took longer than expected.

All those tweets saying "turn it off and back on again"?

"We accidentally turned it off, but it hasn't been turned off for so long it took us hours to figure out how to turn it back on."

Poorly-presented jokes aside, this is rather concerning. The indexer and placement systems are SPOFs!! I mean, I'd presume these subsystems had ultra-low-latency hot failover, but this says they never restarted, and I wonder if AWS didn't simply invest a ton of magic pixie dust in making Absolutely Totally Sure™ the subsystems physically, literally never crashed in years. Impressive engineering, but also very scary.

At least they've restarted it now.

And I'm guessing the current hires now know a lot about the indexer and placer, which won't do any harm to the sharding effort (I presume this'll be sharded quicksmart).

I wonder if all the approval guys just photocopied their signatures onto a run of blank forms, heheh.

cnorthwood, about 8 years ago

I'm surprised how transparent this is; I often find Amazon a bit opaque when dealing with issues.

nissimk, about 8 years ago

I keep being reminded of something I read recently that made me feel uneasy about Google's Cloud Spanner [1]:

> The most important one is that Spanner runs on Google's private network. Unlike most wide-area networks, and especially the public internet, Google controls the entire network and thus can ensure redundancy of hardware and paths, and can also control upgrades and operations in general. Fibers will still be cut, and equipment will fail, but the overall system remains quite robust. It also took years of operational improvements to get to this point. For much of the last decade, Google has improved its redundancy, its fault containment and, above all, its processes for evolution. We found that the network contributed less than 10% of Spanner's already rare outages.

But when it fails it's going to be epic!

[1] https://cloudplatform.googleblog.com/2017/02/inside-Cloud-Spanner-and-the-CAP-Theorem.html

St-Clock, about 8 years ago

I am unpleasantly surprised that they do not mention why services that should be unrelated to S3, such as SES, were impacted as well, and what they are doing to reduce such dependencies.

From a software development perspective, it makes sense to reuse S3 and rely on it internally if you need object storage, but from an ops perspective, it means that S3 is now a single point of failure and that SES's reliability will always be capped by S3's reliability. From a customer perspective, the hard dependency between SES and S3 is not obvious and is disappointing.

The whole internet was talking about S3 when the AWS status dashboard did not show any outage, but very few people mentioned other services such as SES. Next time we encounter errors with SES, should we check for hints of an S3 outage before everything else? Should we also check for an EC2 outage?

tifa2up, about 8 years ago

Do you know if Amazon is giving any refunds/credits for the service outage?

ct0, about 8 years ago

I wouldn't want to be the person who wrote the wrong command! Sheesh.

EdSharkey, about 8 years ago

It's curious they needed to "remove capacity" to cure a slow billing problem.

Is that code for a "did you try to reboot the system?" kind of troubleshooting?

It sounds to me like the authorized engineer sent a command to reboot/reimage a large swath of the S3 infrastructure.

sebringj, about 8 years ago

If Amazon were a guy, he'd be a standup guy. This is a very detailed and responsible explanation. S3 has revolutionized my businesses and I love that service to no end. These problems happen very rarely, but I may have backups just in case, using an nginx proxy approach at some point; and because S3 is so good, everyone seems to adopt their API, so it's just a matter of a switch statement. Werner can sweat less. Props.

I would add, it would be awesome if there were a simulation environment, beyond just a test environment, that simulated outside servers making requests in before a command was allowed to run on production, with something like a robot deciding this; that could mitigate incidents like this one, kind of like TDD on steroids, if they don't have that already.

Exuma, about 8 years ago

Imagine being THAT guy.......... in that exact moment...... after hitting enter and realizing what he did. RIP

lasermike026, about 8 years ago

Ops and Engineering here.

My guts hurt just reading this.

With big failures it's never just one thing. There is a series of mistakes, bad choices, and ignorance that leads to a big system-wide failure.

spullara, about 8 years ago

Twitter once had 2 hours of downtime because an operations engineer accidentally asked a tool to restart all memcached servers instead of a certain server. The tool was then changed to make sure that you couldn't restart more than a few servers without additional confirmation. Sounds very similar to this situation. Something to think about when you are building your tools to be more error-proof.

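A minimal sketch of that kind of guard, with a made-up threshold and a placeholder restart call (not Twitter's actual tool):

```python
# Sketch: restarting more than a handful of servers requires explicit sign-off.

CONFIRM_THRESHOLD = 3  # hypothetical limit before confirmation is required


def restart_servers(servers: list[str], confirmed: bool = False) -> None:
    if len(servers) > CONFIRM_THRESHOLD and not confirmed:
        raise RuntimeError(
            f"Refusing to restart {len(servers)} servers without confirmation; "
            f"re-run with confirmed=True (or via an interactive prompt)"
        )
    for host in servers:
        print(f"restarting {host}")  # placeholder for the real restart call


# A typo that expands "one server" into "all servers" now fails loudly
# instead of silently taking the whole cache fleet down.
restart_servers(["cache-01"])                    # fine
# restart_servers(all_memcached_hosts)           # raises without confirmation
```
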
matt_wulfeck, about 8 years ago

> Unfortunately, one of the inputs to the command was entered incorrectly and a larger set of servers was removed than intended.

We have geo-distributed systems. Load balancing and automatic failover. We agonize over edge cases that might cause issues. We build robust systems.

At the end of the day, reliability -- a lot like security -- is most affected by the human factor.

dap, about 8 years ago

> Removing a significant portion of the capacity caused each of these systems to require a full restart.

I'd be interested to understand why a cold restart was needed in the first place. That seems like kind of a big deal. I can understand many reasons why it might be necessary, but that seems like one of the issues that's important to address.

OhHeyItsE, about 8 years ago

I find this refreshingly candid; human, even, for AWS.

throwtotheway, about 8 years ago

I hope I never have to write a post-mortem that includes the phrase "blast radius".

bandrami, about 8 years ago

"We have not completely restarted the index subsystem or the placement subsystem in our larger regions for many years."

Yeah... nothing says "resilience" quite like that...

aestetix, about 8 years ago

It sounds like this could be mitigated by making sure everything is run in dry-run mode first and, for something mission-critical, getting it double-checked by someone before removing the dry-run constraint.

It's good practice in general, and I'm kind of astonished it's not part of the operational procedures at AWS, as this would have quickly been caught and fixed before ever going out to production.

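A minimal sketch of the dry-run-by-default pattern; the command name, arguments, and flag are hypothetical:

```python
# Sketch: the operation prints its plan by default and only mutates anything
# when --execute is passed, ideally after someone has reviewed the dry-run output.

import argparse


def remove_capacity(subsystem: str, count: int, execute: bool) -> None:
    plan = f"remove {count} servers from {subsystem}"
    if not execute:
        print(f"[dry-run] would {plan}")
        return
    print(f"executing: {plan}")  # the real removal call would go here


if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Capacity removal tool (sketch)")
    parser.add_argument("subsystem")
    parser.add_argument("count", type=int)
    parser.add_argument("--execute", action="store_true",
                        help="actually perform the removal (default: dry run)")
    args = parser.parse_args()
    remove_capacity(args.subsystem, args.count, args.execute)
```
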
carlsborg, about 8 years ago

"As a result, (personal experience and anecdotal evidence suggest that) for complex continuously available systems, Operations Error tends to be the weakest link in the uptime chain."

https://zvzzt.wordpress.com/2012/08/16/a-note-on-uptime/

dorianm, about 8 years ago

Same as that db1 / db2 mix-up at GitLab: naming things is pretty important (e.g. production / staging, production-us-east-1-db-560, etc.)

pfortuny, about 8 years ago

I guess it is time to define commands whose inputs have great distance in, say, the Damerau-Levenshtein metric.

For numerical inputs, one might use both the digits and the textual expression. This would make them quite cumbersome but much less prone to errors. Or devise some shorthand for them...

156 (on fi six). 35 (th fi). 170 (on se zer). 28 (two eig). Evens have three letters, odds have two.

This is just my 2 cents.

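One way to apply this idea in tooling (a hedged adaptation, not something proposed in the thread): compute the Damerau-Levenshtein distance between an entered value and other known-valid or known-dangerous values, and demand extra confirmation when they are only an edit apart. The threshold and the value list below are made up.

```python
# Sketch: flag inputs that are a likely one-keystroke typo of something else.

def damerau_levenshtein(a: str, b: str) -> int:
    """Edit distance counting insertions, deletions, substitutions, transpositions."""
    d = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        d[i][0] = i
    for j in range(len(b) + 1):
        d[0][j] = j
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
            if i > 1 and j > 1 and a[i - 1] == b[j - 2] and a[i - 2] == b[j - 1]:
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + cost)  # transposition
    return d[len(a)][len(b)]


def needs_extra_confirmation(arg: str, risky_values: list[str]) -> bool:
    """True if the argument is within one edit of a known risky value."""
    return any(damerau_levenshtein(arg, v) <= 1 for v in risky_values)


# "100" and "10" are one edit apart, so the typo from the outage would be flagged.
print(needs_extra_confirmation("100", ["10"]))  # True
```
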
atrudeau, about 8 years ago

Are customers going to receive any kind of future credit because of this? It would be a nice band-aid after such a hard smack on the head.

mulmen, about 8 years ago

If this boils down to an engineer incorrectly entering a command, can we please refer to this outage as "Fat Finger Tuesday"?

jasonhoyt, about 8 years ago

"People make mistakes all the time... the problem was that our systems that were designed to recognize and correct human error failed us." [1]

[1] http://articles.latimes.com/1999/oct/01/news/mn-17288

hemant19cse, about 8 years ago

Amazon's AWS outage was more of a design (UX) failure and less of a human error: https://www.linkedin.com/pulse/how-small-typo-caused-massive-downtime-s3aws-hemant-kumar-singh

bsaul, about 8 years ago

I wonder if every number for critical command lines shouldn't be spelled out as well. If you think about how checks work, you're supposed to write the number as well as the words for the number. "-nbs two_hundreds" instead of "twenty" is much less likely to happen.

Just like rm -rf / should really be rm -rf `root`.

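A minimal sketch of the cheque-style idea, with a hypothetical word list and argument pair: the command refuses to run unless the digits and the spelled-out form agree.

```python
# Sketch: a destructive command takes the count both as digits and as words,
# and fails loudly when they disagree.

WORDS = {"ten": 10, "twenty": 20, "one_hundred": 100, "two_hundred": 200}


def parse_count(digits: str, words: str) -> int:
    n = int(digits)
    if WORDS.get(words) != n:
        raise ValueError(f"count mismatch: {digits!r} vs {words!r}; refusing to run")
    return n


parse_count("20", "twenty")       # ok
# parse_count("200", "twenty")    # raises: the typo is caught
```
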

eplanit, about 8 years ago

So this week the poor soul at Amazon, along with the Price-Waterhouse guy, are the poster children of Human Error.

rsynnott, about 8 years ago

> While this is an operation that we have relied on to maintain our systems since the launch of S3, we have not completely restarted the index subsystem or the placement subsystem in our larger regions for many years.

This is the bit that'd worry me most; you'd think they'd be testing this.

EternalData, about 8 years ago

This caused panic and chaos for a bit among my team, which I imagine was replicated across the web.

Moments like these always remind me that a particularly clever or nefarious set of individuals could shut down essential parts of the Internet with a few surgical incisions.

DanBlake, about 8 years ago

Seems like something like Chaos Monkey should have been able to predict and mitigate an issue like this. I'm actually curious if anyone uses it at all; curious whether anyone in here at a large company (over 500 employees) has it deployed or not.

matt_wulfeck, about 8 years ago

Remember folks, automate your systems but never forget to add sanity checks.

tuxninja, about 8 years ago

I think they should have led with insensitivity about it and maybe a white lie. Such as: "We took our main region us-east-1 down for X hours because we wanted to remind people they need to design for failure of a region." :-)

Shameless plugs (authored months ago):

http://tuxlabs.com/?p=380 - How To: Maximize Availability Efficiently Using AWS Availability Zones (note: read it, it's not just about AZs; it is very clear in stating multi-region and, better yet, multi-cloud; a segue to the second article)

http://tuxlabs.com/?p=430 - AWS, Google Cloud, Azure and the singularity of the future Internet

nsgoetz, about 8 years ago

This makes me want to write a program that would ask users to confirm commands if it thinks they are running a known playbook and deviating from it. Does anyone know if a tool like that exists?

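No such tool is named in the thread; below is a hypothetical sketch of what it might look like, with made-up playbook steps and command names:

```python
# Sketch: wrap command execution, compare each command against the current
# step of a known playbook, and ask for confirmation on any deviation.

PLAYBOOK = [
    "s3-admin drain --subsystem billing --count 2",
    "s3-admin verify --subsystem billing",
]


def run_with_playbook(commands: list[str]) -> None:
    for step, cmd in enumerate(commands):
        expected = PLAYBOOK[step] if step < len(PLAYBOOK) else None
        if cmd != expected:
            answer = input(
                f"Step {step + 1} deviates from the playbook.\n"
                f"  expected: {expected}\n"
                f"  got:      {cmd}\n"
                "Run it anyway? [y/N] "
            )
            if answer.strip().lower() != "y":
                print("Stopping.")
                return
        print(f"running: {cmd}")  # placeholder for actual execution
```
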
prh8, about 8 years ago

For as much as people jumped all over GitLab last month, this seems remarkably similar in terms of preparedness for accidental and unanticipated failure.

chirau, about 8 years ago

Deletions, shutdowns and replications should always either contain a SELECT to show affected entities or a confirmation (Y/n) of the step.

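A minimal sketch of that preview-then-confirm pattern, with made-up data and predicate:

```python
# Sketch: show the affected set first (the "SELECT"), then require an explicit
# Y/n confirmation before the destructive step runs.

def delete_where(rows: list[dict], predicate) -> list[dict]:
    affected = [r for r in rows if predicate(r)]
    print(f"{len(affected)} row(s) would be deleted:")
    for r in affected:
        print(f"  {r}")
    if input("Proceed? [y/N] ").strip().lower() != "y":
        print("Aborted; nothing deleted.")
        return rows
    return [r for r in rows if not predicate(r)]


servers = [{"id": 1, "pool": "billing"}, {"id": 2, "pool": "index"}]
servers = delete_where(servers, lambda r: r["pool"] == "billing")
```
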
pwthornton, about 8 years ago

This is the risk you run into by doing everything through the command line. This would be really hard to do through a good GUI.

fulafel, about 8 years ago

I guess this means it's much better for your app to fail over between regions than between availability zones.

asow92, about 8 years ago

Man, I'd really hate to be that guy.

tn13, about 8 years ago

Can Amazon take responsibility and offer, say, a 10% discount to all the customers who are spending >$X?

sumobob, about 8 years ago

Wow, I'm sure the person who mis-entered the command will never, ever, ever do it again.

dootdootskeltal, about 8 years ago

I don't know if it's a C thing, but those code comments are art!

CodeWriter23, about 8 years ago

I'm just going to call this "PEBKAC at scale".

feisky, about 8 years ago

It's a great shock how quickly such a big system recovered.

cagataygurturk, about 8 years ago

Sounds so Chernobyl.

davidf18, about 8 years ago

There should be some sort of GUI interface that does appropriate checks instead of allowing someone to mistakenly type incorrect information.

njharman, about 8 years ago

Did you read the fucking article?

That is EXACTLY what they are doing (among other things).

skrowl, about 8 years ago

TL;DR: never type 'EXEC DeleteStuff ALL' when you actually mean 'EXEC DeleteStuff SOME'.

thraway2016, about 8 years ago

Something doesn't pass the smell test. Over two hours to reboot the index hosts?

romanovcode, about 8 years ago

They could've just not posted anything. People have already forgotten about this disruption.

machbio, about 8 years ago

No one on HN is questioning this: "The Amazon Simple Storage Service (S3) team was debugging an issue causing the S3 billing system to progress more slowly than expected." They were debugging on a production system.

fr4egy8e9, about 8 years ago

What most AWS customers don't realize is that AWS is poorly automated. Their reliability relies on exploiting the employees to manually operate the systems. The technical bar at Amazon is incredibly low and they can't retain any good engineers.

aorloff, about 8 years ago

What's missing is addressing the problems with their status page system, and how we all had to use Hacker News and other sources to confirm that US East was borked.

edutechnion, about 8 years ago

For the many of us who have built businesses dependent on S3, is anyone else surprised at a few assumptions embedded here?

* "authorized S3 team member" -- how did this team member acquire these elevated privs?

* Running playbooks is done by one member without a second set of eyes or approval?

* "we have not completely restarted the index subsystem or the placement subsystem in our larger regions for many years"

The good news:

* "The S3 team had planned further partitioning of the index subsystem later this year. We are reprioritizing that work to begin immediately."

The truly embarrassing thing that everyone has known about for years is the status page:

* "we were unable to update the individual services' status on the AWS Service Health Dashboard"

When there is a wildly popular Chrome plugin to fix your page ("Real AWS Status"), you would think a company as responsive as AWS would have fixed this years ago.
