> In one particular case at Google, a software controller–acting on bad feedback from another software system–determined that it should issue an unsafe control action. It scheduled this action to happen after 30 days. Even though there were indicators that this unsafe action was going to occur, no software engineers–humans–were actually monitoring the indicators. So, after 30 days, the unsafe control action occurred, resulting in an outage.

Isn't this the time they accidentally deleted governmental databases? I love the attempt at blameless generalization, but wow.
Thanks to all the people here pointing out how bloated, overly broad and useless this is. I went to read it thinking I would pick up something applicable, and it was written in such an overwrought, humanless style that I gave up, having learned nothing, and thought the problem was me. I am glad to learn I am not alone.
> "The class itself is very well structured. I've heard about STPA in past years, but this was the first time I saw it explained with concrete examples. The Google example at the end was also really helpful."<p>But the article itself contains no concrete examples.
I don't understand and I really, really want to.

This seems so cool at a scale that I can't fathom. Tell me specifically how it's done at Google for a specific service, at least enough information to understand what's going on. Make it concrete. Like "B lacks feedback from C": why is this bad?

You've told me absolutely nothing and it makes me angry.
In other words, STPA is a design review framework for finding less obvious failure modes. FMEA is more popular, but it relies on listing all of the knowable failure modes in a system, so the failure modes you haven't thought of never make it onto the list. STPA helps fill in some of those gaps.
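A rough way to see the difference in framing, as a minimal Python sketch (the component names and control actions here are made up; only the four unsafe-control-action guidewords are standard STPA vocabulary): FMEA scores a list of failure modes someone already wrote down, while STPA walks every control action against the guidewords, which is what surfaces the cases nobody thought to list.

```python
from dataclasses import dataclass

# FMEA-style input: you can only analyze the failure modes someone
# already wrote down.
known_failure_modes = [
    ("disk", "wears out"),
    ("network link", "drops packets"),
]

# STPA-style input: the control actions in the system, checked against
# the four standard unsafe-control-action guidewords.
UCA_GUIDEWORDS = [
    "not provided when needed",
    "provided when it causes a hazard",
    "provided too early, too late, or out of order",
    "stopped too soon or applied too long",
]

@dataclass
class ControlAction:
    controller: str
    controlled_process: str
    action: str

# Hypothetical control actions for illustration only.
control_actions = [
    ControlAction("scheduler", "storage service", "delete stale records"),
    ControlAction("autoscaler", "serving fleet", "scale down replicas"),
]

# Each (control action, guideword) pair is a prompt for the analyst:
# "could this combination lead to a hazard here?" -- including
# combinations that would never have shown up on an FMEA worksheet.
for ca in control_actions:
    for guideword in UCA_GUIDEWORDS:
        print(f"{ca.controller} -> {ca.controlled_process}: "
              f"'{ca.action}' {guideword}?")
```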
STAMP/STPA work well as a model and methodology for complex systems; I was interested in them a while ago in the context of cyber risk quantification. Having a fairly easy model for reasoning about unsafe control actions is not a given in other approaches. I just wish they were adopted by more companies; I have seen too many of them stuck with ERM-based frameworks that do not make sense most of the time when scaled down to system-level granularity.
> After working with the system experts to build this control structure, we immediately noticed missing feedback from controller C to controller B–in other words, controller B did not have enough information to support the decisions it needed to make

There is a feedback loop through D? And why does the same issue not apply to the missing directed edge from B to D?

EDIT: I figured it out on a reread: the vertical up/down orientation matters for whether an edge represents control vs feedback, so B is merely not controlling D, which is fine. But if B is only controlling C as a way to get through to D (which is what I would have guessed, absent other information), what's the issue with that?
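For anyone else trying to picture it, here is a minimal sketch of the check the quoted passage seems to describe, with hypothetical controllers A-D and edges inferred only from this comment (not the actual Google control structure): treat downward edges as control actions and upward edges as feedback, then flag any control relationship with no feedback coming back.

```python
# Hypothetical control structure, inferred from the comment above:
# downward edges are control actions, upward edges are feedback.
control_edges = {
    ("A", "B"),  # A controls B
    ("B", "C"),  # B controls C
    ("C", "D"),  # C controls D
}
feedback_edges = {
    ("B", "A"),  # B reports back to A
    ("D", "C"),  # D reports back to C
    # ("C", "B") is missing: B issues control actions to C but never
    # learns whether they had the intended effect downstream.
}

def missing_feedback(control, feedback):
    """Return control relationships that have no matching feedback edge."""
    return [(ctrl, proc) for (ctrl, proc) in control
            if (proc, ctrl) not in feedback]

for controller, process in missing_feedback(control_edges, feedback_edges):
    print(f"{controller} controls {process} but gets no feedback from it")
```

On this reading, the missing B-to-D edge is fine because B is not trying to control D directly; the flagged gap is only the C-to-B feedback, which is what leaves B making its decisions blind.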
This is peak corporate drivel: bloated storytelling, buzzwords everywhere, and a desperate attempt to make an old idea sound revolutionary.

The article spends paragraphs on some childhood radio repair story before awkwardly linking it to STPA, a safety analysis method that’s been around for decades. Google didn’t invent it, but they act like adapting it for software is a major breakthrough.

Most of the piece is just filler about feedback loops and control structures (basic engineering concepts) framed as deep insights. The actual message? "We made an internal training program because existing STPA examples didn’t click with Googlers." That’s it. But instead of just saying that, they pad it out with corporate storytelling, self-congratulation, and hand-wringing over how hard it is to teach people things.

The ending is especially cringe: You can’t afford NOT to use this! Classic corporate play: take something mundane, slap on some urgency, and act like ignoring it is a reckless gamble.

TL;DR: Google is training engineers in STPA. That’s the whole story.