
Ask HN: How long till outages if Big Tech employees suddenly got Thanos-snapped?

18 points | by ryan_j_naughton | over 2 years ago
Given the recent mass layoffs and resignations at Twitter, it raises an interesting question: At what point will critical systems start failing and Twitter's Fail Whale will return?[1] When will there be an outage?

While the layoffs at Facebook, Google, Amazon, Stripe, etc. are all certainly being managed far better than Twitter's situation, it is an interesting question to understand the relationship between tech workers and the production infrastructure of modern tech companies.

One would hope that most companies have used systems equivalent to Chaos Monkey plus modern devops engineering to build automated resiliency into their systems without requiring much human intervention. But.....

So how long do you think it would be until X systems/products at any given Big Tech company would degrade in service or experience an outage if they lost all their workers?

[1] https://www.theatlantic.com/technology/archive/2015/01/the-story-behind-twitters-fail-whale/384313/

13 comments

runlevel1 | over 2 years ago

I'm surprised I haven't yet seen anyone mention the risk of insider threats.

It only takes one pissed-off person with the right knowledge to do a disproportionate amount of damage to the company.

Twitter is in an especially risky position right now:

1. The sheer size of the layoffs increases the odds that at least one former employee wants to retaliate.

2. The manner of the layoffs increases the chances that former employees feel slighted.

3. The source of the layoffs is an individual (i.e. not something abstract like a falling economy), giving them an articulable target.

4. No longer having equity means former employees have less interest in its ongoing success.

5. The scale of the departures means many former employees will no longer know anyone still there, so harm wouldn't befall someone they know.

6. The layoffs include roles with access to sensitive information. That could be anything from trade secrets, to credentials, to where the proverbial bodies are buried.

7. Security teams that would normally mitigate some of this risk might no longer be fully functioning.

I'm not confident enough to say it *will* happen -- only that the risk is much, much higher than normal.
matt_s | over 2 years ago

I think Twitter is especially vulnerable to an outage of some kind just from the massive amount of knowledge that has left. Things will start falling apart when whoever is left starts making changes; it could be 2 months, could be next summer. The other element is whether they have any infrastructure engineers (SREs) left: if nobody is monitoring and keeping tabs on the various things that always fail, those failures will pile up until something fails publicly.

People may casually look at Twitter, think it's a 280-character text field and like 4 buttons to like/retweet/reply/share (and reply is a recursive feature, in a way), and assume you don't need a lot of technical complexity for that. You're correct, you don't; someone could probably build that flow in a weekend or less. But the major feature of any public social media company is content moderation. That is invisible to end users, and I imagine a lot of back-end processing and systems are required for it. It doesn't matter what the content is, it needs to be moderated, and that usually falls on human judgment at some point for the more nuanced content. Things that need to be moderated start with content like the spam you get in your email or low-effort promotional content anywhere online, then work your way up through content that will bring lawsuits against Twitter, etc.

My guess is once Musk starts initiating changes they will start discovering how complex it is to make changes, maybe backing out changes initially when BadThingsHappen™. Then he will get tired of it, put someone in charge, and find another toy company to play with.
zaphod12 | over 2 years ago

Depends... you could rebalance to stay up forever, but anything new at all would be slow.

Your biggest issue is the data center. A lot of folks forget about this stuff, but hard drives are constantly dying, servers go bad or crash, heck, even the AC needs maintenance (though that's contracted out, I'm sure). None of that is glorious, but it's critical. The major systems are very, very reliable in the face of a few hardware failures, but give it a couple of months of falling behind on maintenance and it would all crumble.
ldjkfkdsjnv | over 2 years ago

The key point not discussed enough is that outages happen as the code is changing. If you stop deploying new changes, big FAANGs basically won't go down. Obviously they are so complex that's hard to do, but slowing the rate of feature development will slow the rate of failure. And it's probably not a linear relationship.
ozzythecat | over 2 years ago

I'm convinced this is a bit overblown.

I don't know enough about Twitter's infrastructure, so I'm only speaking at the application layer.

If the code isn't changing, things should be extremely stable and resilient. Presumably, Twitter had already made significant investments in resilient, fault-tolerant services that function independently at scale.

I'd think the riskier parts are server/hardware failures, hardware load balancers, etc.

One of the key services my big tech org owned was in support-only mode with no active feature development. Despite 500k requests per second, it had just one person on pager duty.

The majority of support issues were OS-level updates and application-level dependency updates/fixing vulnerabilities. But not doing that work wouldn't take the service down so much as create corporate policy violations for not keeping software up to date. You could also definitely swing exceptions.
gsatic | over 2 years ago

Just par for the course with tech. There are much more critical systems than Twitter running all over the world that no one has updated or fixed in a long time.

I worked with a big telco a while back. The software/hardware we maintained for the telco exchanges was used in pretty much every country. That "stack" had been in development for 30+ years. Hundreds of companies and thousands of devs had contributed to it, using as many languages and tools as you can imagine. Many don't exist anymore. Large chunks of the source code and the tooling to build/fix it just got lost with time, relocations, layoffs, mergers, etc. And things would break all the time. All we did was cook up hacks and workarounds to keep things running. No real fixes or updates were possible.
DevKoala | over 2 years ago

Twitter can continue running with 100 engineers or fewer. That said, can they iterate fast enough on moderation and fraud prevention? Apply security fixes responding to the newest threats? Deliver on advertising customer demands? Stay competitive in terms of features vs. other social network platforms? I doubt it, but I bet they can do a decent job with fewer engineers than they used to have.
akomtu | over 2 years ago
Something like Twitter can run with 100 employees and a few thousand offshore content moderators.
makeitrain | over 2 years ago
Something better crash. Otherwise, why not cut 20% more headcount?
faangiq | over 2 years ago
Can easily keep all these places running with 10% headcount.
hayst4ck | over 2 years ago

*Outages are primarily proportional to change.*

Problem one: Increasing scale

If a company is growing, increasing scale forces change. It forces change to core systems, like upsharding database clusters. It pushes the limits of various systems in ways that require architecture change. If a company is not growing, a major motivator of change is gone, and so are the outages that change would have caused.

Problem two: Adding features

If a company stops adding new features, new code doesn't really need to be pushed all that often. Bad code pushes are *by far* the number one cause of outages, although these outages generally don't have the kind of blast radius that architecture changes do.

Problem three: Rot/maintenance/upkeep

Now we get to the crux of the issue, which is something on the order of 3 machine failures per 1000 machines per day (my empirical estimate based on experience). Hard drives fail, circuits fry, network interfaces become finicky, hard drives fill up. A good portion of this can be resolved via blind auto-remediation: there's a problem on a machine? Wipe it clean and reconfigure it for its task. Assuming there are functioning auto-remediation systems and no SPOFs, and that database systems can handle master failures, etc., the most major "people need to handle this" problem is hardware failure. There must be someone actively procuring new hardware and replacing old hardware.

Systems can run at up to 70% of peak capacity, so that's likely on the order of 100 days of unaddressed machine rot before consequences are seen, depending on how capacity is allocated.

Problem four: Context change

While most change is done by the company itself, the company exists in a certain context. Governments can come down on companies via regulation like GDPR, which will definitely require the company to make changes. Security problems can require major or minor changes to be made. When the context a company exists within changes, the company must adapt, and these forced changes can result in outages. Depending on the change, the level of expertise of the remaining employees would likely dictate the outage.

So, attempting a concrete estimate, I would guess something on the order of months, maybe 3-6 months, with the caveat of good auto-remediation and no SPOFs.
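The "order of 100 days" figure follows directly from the two numbers in the comment. A back-of-envelope sketch (the 3-per-1000 daily failure rate and the 70% peak utilization come from the comment; treating failures as a geometric decay with no repairs is my simplifying assumption):

```python
import math

FAILURES_PER_MACHINE_PER_DAY = 3 / 1000  # ~3 machine failures per 1000 machines per day
PEAK_UTILIZATION = 0.70                  # fleet must retain 70% of machines to serve peak load

# With nobody replacing hardware, the surviving fraction of the fleet
# decays geometrically: alive(d) = (1 - rate)^d. Capacity trouble starts
# once alive(d) drops below the 70% the fleet needs at peak.
days_until_trouble = math.log(PEAK_UTILIZATION) / math.log(1 - FAILURES_PER_MACHINE_PER_DAY)
print(round(days_until_trouble))  # ≈ 119 days, i.e. "on the order of 100 days" of rot
```

A linear approximation (0.3% of capacity lost per day eating a 30% headroom) gives ~100 days; the geometric model stretches that slightly because the shrinking fleet loses fewer absolute machines each day.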
joshxyz | over 2 years ago

they fired some overhead and hired nerds, should be good right?
theCrowing | over 2 years ago
Netflix. 7 days.