Related ongoing thread:

CrowdStrike Update: Windows Bluescreen and Boot Loops - https://news.ycombinator.com/item?id=41002195 - July 2024 (3590 comments)
Light on technical content and light on details.

Putting the actual blast radius aside, this whole thing seems a bit amateurish for a "security company" that pulls in the contracts they do.
Can someone who actually understands what CrowdStrike does explain to me why on earth they don't have some kind of gradual rollout for changes? It seems like their updates go out everywhere all at once, and this sounds absolutely insane for a company at this scale.
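For context, a staged (ring-based) rollout is what most large fleets do for exactly this reason. Below is a minimal sketch of that logic in Python; the ring names, fractions, and health check are illustrative assumptions, not a description of CrowdStrike's actual pipeline.

```python
# Hypothetical sketch of a ring-based (staged) content rollout. Ring names,
# fractions, and the health check are illustrative assumptions, not a
# description of CrowdStrike's actual pipeline.
import time

RINGS = [
    ("canary",  0.001),   # internal/test machines first
    ("early",   0.05),    # a small slice of opted-in customers
    ("broad",   0.50),    # half the fleet
    ("general", 1.00),    # everyone else
]

def healthy(hosts, telemetry):
    """Stand-in health check: did the hosts that took the update keep reporting in?"""
    return all(telemetry.get(h, "ok") == "ok" for h in hosts)

def rollout(fleet, apply_update, telemetry):
    """Push an update ring by ring, halting if any ring looks unhealthy."""
    deployed = set()
    for ring_name, fraction in RINGS:
        cutoff = int(len(fleet) * fraction)
        target = [h for h in fleet[:cutoff] if h not in deployed]
        for host in target:
            apply_update(host)
        deployed.update(target)
        time.sleep(0)  # placeholder for a real soak period between rings
        if not healthy(target, telemetry):
            raise RuntimeError(f"halting rollout: ring '{ring_name}' is unhealthy")
    return deployed

if __name__ == "__main__":
    fleet = [f"host-{i}" for i in range(10_000)]
    telemetry = {}  # host -> "ok" | "crashed", filled in by real monitoring
    rollout(fleet, apply_update=lambda h: None, telemetry=telemetry)
```

The point isn't the specific percentages; it's that a bad push hits a small, observable slice of machines before it can hit everything.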
> The update that occurred at 04:09 UTC was designed to target newly observed, malicious named pipes being used by common C2 frameworks in cyberattacks

The obvious joke here is that CS runs the malicious C2 framework. So the system worked as designed: it prevented further execution and quarantined the affected machines.

But given they say it's just a configuration file (then why the hell is it suffixed with .sys?), it's actually plausible. A smart attacker could disguise themselves and use the same facilities as CS does. CS tries to block them and blocks itself in the process?
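For readers unfamiliar with the detection the quoted sentence describes: C2 frameworks often communicate over Windows named pipes with recognizable names, and one crude way to hunt for them is to enumerate open pipes and match them against known patterns. A rough sketch follows; the patterns are entirely placeholders, not real CrowdStrike signatures, and real detection correlates far more context than a name.

```python
# Minimal sketch of name-based detection of suspicious named pipes on Windows.
# The pipe-name patterns below are illustrative placeholders, not CrowdStrike's
# actual signatures.
import os
import re
import sys

# Hypothetical patterns for pipe names a C2 framework might register.
SUSPICIOUS_PIPE_PATTERNS = [
    re.compile(r"^evil_c2_"),               # placeholder pattern
    re.compile(r"^msagent_[0-9a-f]{4}$"),   # placeholder pattern
]

def list_named_pipes():
    """Enumerate currently open named pipes (Windows only)."""
    if not sys.platform.startswith("win"):
        return []
    return os.listdir(r"\\.\pipe\\")

def find_suspicious_pipes():
    hits = []
    for name in list_named_pipes():
        if any(p.search(name) for p in SUSPICIOUS_PIPE_PATTERNS):
            hits.append(name)
    return hits

if __name__ == "__main__":
    for pipe in find_suspicious_pipes():
        print(f"suspicious named pipe: {pipe}")
```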
>>> Systems that are not currently impacted will continue to operate as expected, continue to provide protection, and have no risk of experiencing this event in the future.

Given that this kind of incident has now happened twice in the space of a few months (first on Linux, then on Windows), and that, as stated in this very post, the root cause analysis is not yet complete, I find that claim of "no risk" very hard to believe.
This seems very unsatisfying. Not sure if I was expecting too much, but that's a lot of words for very little information.

I'd like more information on how these Channel Files are created, tested, and deployed. What's the minimum number of people who can push one out? How fast can the process go?
I'm not a big expert, but honestly this reads like a bunch of garbage.

> Although Channel Files end with the SYS extension, they are not kernel drivers.

OK, but I'm pretty sure user-mode software can't cause a BSOD. Clearly something running in kernel mode ate shit, and that brought the system down. Just because the channel file itself isn't kernel code doesn't mean your kernel-mode software isn't culpable. This just seems like a sleazy dodge.
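That's the crux: the channel file is data, but it's consumed by kernel-mode code, and a parsing bug on malformed data in the kernel is a bugcheck, not a recoverable exception. A user-space Python sketch of the difference between a parser that trusts its input and one that validates it; the file format here is invented purely for illustration.

```python
# Illustrative sketch (in Python, user space) of a parser that trusts its input
# versus one that validates it. In a kernel-mode driver, the equivalent of the
# unhandled exception below is a bugcheck (BSOD).
import struct

def parse_trusting(blob: bytes) -> list[int]:
    """Assumes the header is well-formed; malformed input raises."""
    (count,) = struct.unpack_from("<I", blob, 0)           # entry count from header
    return [struct.unpack_from("<I", blob, 4 + 4 * i)[0]   # read each entry blindly
            for i in range(count)]

def parse_defensive(blob: bytes) -> list[int] | None:
    """Validates sizes before reading; returns None (reject the file) on bad input."""
    if len(blob) < 4:
        return None
    (count,) = struct.unpack_from("<I", blob, 0)
    if len(blob) < 4 + 4 * count:                          # claimed count doesn't fit
        return None
    return [struct.unpack_from("<I", blob, 4 + 4 * i)[0] for i in range(count)]

bad_file = b"\xff\xff\xff\xff"    # header claims ~4 billion entries, no payload
print(parse_defensive(bad_file))  # -> None: file rejected, system keeps running
# parse_trusting(bad_file)        # -> struct.error: the analogue of a kernel crash
```

In the kernel there is no exception handler of last resort to fall back on, which is why the validation has to happen before the data is trusted.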
> The configuration update triggered a logic error that resulted in an operating system crash.

> We understand how this issue occurred and we are doing a thorough root cause analysis to determine how this logic flaw occurred.

There are always going to be flaws in the logic of the code; the trick is to keep a single error from being this catastrophic.
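One standard way to keep a single bad content push from being catastrophic is to fall back to the last configuration that parsed cleanly. A sketch, assuming the update is just a data file the agent loads at runtime; the file names, format, and functions are hypothetical, not CrowdStrike's.

```python
# Sketch of a "last known good" fallback for content updates, assuming the
# update is a data file the agent loads at runtime. File names, format, and
# functions are hypothetical, not CrowdStrike's.
import json
import logging
import shutil
from pathlib import Path

CURRENT = Path("channel/current.json")
LAST_GOOD = Path("channel/last_good.json")

def load_rules(path: Path) -> dict:
    """Parse and validate the content file; raises on anything malformed."""
    rules = json.loads(path.read_text())
    if not isinstance(rules, dict) or "signatures" not in rules:
        raise ValueError("content file missing 'signatures'")
    return rules

def load_with_fallback() -> dict:
    """Try the newest content; on any error, fall back to the last good copy."""
    try:
        rules = load_rules(CURRENT)
        shutil.copy(CURRENT, LAST_GOOD)   # promote it only after it parsed cleanly
        return rules
    except Exception:
        logging.exception("new content rejected, using last known good")
        return load_rules(LAST_GOOD)
```

The design choice here is to degrade (run with yesterday's detections) rather than to stop the operating system outright.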
> we are doing a "root cause analysis to determine how this logic flaw occurred"

That's going to find a cause: a programmer made an error. But that's not the root of the problem. The root of the problem is the process that allowed such an error to be released (made especially obvious by its widespread impact).
> This issue is not the result of or related to a cyberattack.

That should be corrected to "the issue is not the result of or related to a cyberattack by external agents".
Weak.

Very weak, and an overly corporate level of ass-covering. And it doesn't even come close to achieving that.

They should just let the EM of the team involved publish the detailed response that I'm sure is already floating around internally. Own the problem and address the questions rather than playing at politics, quite poorly.
The lower you go in the system architecture, the greater the impact when defects occur. In this instance, the CrowdStrike agent is embedded within the Windows kernel and registered with the kernel filter engine, as illustrated in the diagram linked below.

https://www.nathanhandy.blog/images/blog/OSI%20Model%20in%20Practice%20v1.1%20-%20SingleSystem%20-%20Large.png

If the initial root cause analysis is correct, CrowdStrike pushed out a bug that could easily have been stopped had software engineering best practices been followed: Unit Testing, Code Coverage, Integration Testing, Definition of Done.
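For what it's worth, the kind of unit test being described is cheap to write: feed the parser truncated and malformed content files and require a clean rejection instead of a crash. A sketch with a hypothetical stand-in parser (not CrowdStrike's actual channel format):

```python
# Sketch of the kind of unit test described above: feed malformed and truncated
# content files to the parser and require a clean rejection, not a crash.
# `parse_channel_file` is a hypothetical stand-in, not CrowdStrike's code.
import struct
import pytest

def parse_channel_file(blob: bytes) -> list[int]:
    """Hypothetical parser: 4-byte little-endian count, then that many uint32 entries."""
    if len(blob) < 4:
        raise ValueError("truncated header")
    (count,) = struct.unpack_from("<I", blob, 0)
    if len(blob) < 4 + 4 * count:
        raise ValueError("truncated body")
    return [struct.unpack_from("<I", blob, 4 + 4 * i)[0] for i in range(count)]

@pytest.mark.parametrize("blob", [
    b"",                                 # empty file
    b"\x00",                             # short header
    b"\xff\xff\xff\xff",                 # huge claimed count, no body
    b"\x02\x00\x00\x00" + b"\x00" * 4,   # claims 2 entries, provides 1
])
def test_malformed_content_is_rejected_not_crashed(blob):
    with pytest.raises(ValueError):
        parse_channel_file(blob)
```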
To my biased ears it sounds like these configuration-like files are a borderline DSL that maybe isn't being treated as such. I feel like that's a common issue: people assume that because you call it a config file it isn't a language, and so it never gets treated as actual code that gets interpreted.
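To make that concrete: a "config" that pairs match patterns with actions is effectively a tiny interpreted language, with ordering, wildcard, and precedence semantics that deserve code-level tests. A toy example; the rule format is invented, not CrowdStrike's channel format.

```python
# Illustration of the point above: a "config" file that pairs match patterns
# with actions is effectively a tiny interpreted language. The rule format is
# invented for this example, not CrowdStrike's channel format.
import fnmatch

# What ships as "just configuration"...
CONFIG = """
block  pipe:evil_c2_*
alert  pipe:msagent_*
allow  pipe:*
"""

def interpret(config_text: str, event: str) -> str:
    """Evaluate rules top to bottom: the first matching pattern decides the action."""
    for line in config_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        action, pattern = line.split(None, 1)   # ...but this is an interpreter:
        if fnmatch.fnmatch(event, pattern):     # ordering, wildcards, and precedence
            return action                       # are all program semantics.
    return "allow"

print(interpret(CONFIG, "pipe:evil_c2_backdoor"))  # -> block
print(interpret(CONFIG, "pipe:updater"))           # -> allow
```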
Can someone aim me at some RTFM that describes the sensor release and patching process, please? I'm lost trying to understand the following.

When a new version 'n' of the sensor is released, we upgrade a selected batch of machines and do some tests (mostly waiting around :-)) to see that all is well. Then we upgrade the rest of the fleet by OU. However, because we're scaredy cats, we leave some critical kit on n-1 for longer, and some really critical kit even on n-2. (Yeah, I know there's a risk in not applying patches, but there are other outage-related risks that we balance; forget that for now.)

Our assumption is that n-1, n-2, etc. are old, stable releases, so when fan and shit collided yesterday, we just hopped on the console, did a policy update to revert to n-2, and assumed we'd dodged the bullet. But of course, that failed... you know what they say about assumptions :-)

So, in a long-winded way, that leads to my three questions: Why did the 'content update' take out not just n but n-whatever sensors just as effectively? Are the n-whatever versions not actually stable? And if the n-whatever versions are not actually stable and are being patched, what's the point of the versioning?

Cheers!
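From the public reporting so far, the apparent answer is that the n-1/n-2 policy pins the sensor version, while channel/content updates travel on a separate track to every compatible sensor regardless of pinning. A sketch of that distinction; the field names are invented, and only the channel-file naming comes from public reporting.

```python
# Sketch of the distinction described above: the n-1/n-2 policy appears to pin
# the *sensor* version, while channel/content updates are delivered on a
# separate track to every compatible sensor. Field names are invented; only the
# channel-file name comes from public reporting.
from dataclasses import dataclass, field

@dataclass
class Host:
    name: str
    sensor_version: str                                   # pinned per host by update policy
    channel_files: dict = field(default_factory=dict)     # pushed fleet-wide, not pinned

def apply_sensor_policy(host: Host, pinned_version: str):
    """Sensor upgrades respect the n / n-1 / n-2 policy."""
    host.sensor_version = pinned_version

def push_content_update(fleet: list, channel_id: str, blob: bytes):
    """Content updates ignore sensor pinning: every host gets the new channel file."""
    for host in fleet:
        host.channel_files[channel_id] = blob

fleet = [Host("dc-01", "n"), Host("erp-01", "n-1"), Host("pos-01", "n-2")]
apply_sensor_policy(fleet[2], "n-2")                      # pinning only constrains the sensor
push_content_update(fleet, "C-00000291", b"...")          # channel file id from public reporting
assert all("C-00000291" in h.channel_files for h in fleet)  # the n-2 host gets it too
```

If that's accurate, it answers the first question directly and turns the other two into questions about the content track rather than the sensor versioning.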
The "technical detail" report reads more like a lawyer-generated report. This company is awful.

If I ever get a sales pitch from these shit brains, they will get immediately shut down.

Also, fuck MS and their awful operating system that spawned this god-awful product/company known as "CrowdStrike Falcon".