From <a href="https://www.fireeye.com/blog/threat-research/2017/12/attackers-deploy-new-ics-attack-framework-triton.html" rel="nofollow">https://www.fireeye.com/blog/threat-research/2017/12/attacke...</a><p>>The targeting of critical infrastructure to disrupt, degrade, or destroy systems is consistent with numerous attack and reconnaissance activities carried out globally by Russian, Iranian, North Korean, U.S., and Israeli nation state actors. Intrusions of this nature do not necessarily indicate an immediate intent to disrupt targeted systems, and may be preparation for a contingency.<p>This reeks of Stuxnet 2.0
At some point, we're going to start seeing an internet connection not just in terms of the benefits, but in terms of the liabilities too. I really would have thought that by 2019 we'd be there with industrial control systems, but apparently not.<p>One wonders if the governments of the world wouldn't be well advised to go ahead and hack a couple of bits of their own critical infrastructure a few times and horribly break it, just to make the point, before a bad actor hacks all the infrastructure. That visibly has huge costs, but it's not clear that the hidden cost of just blithely letting people keep hooking critical stuff up to the Internet isn't orders of magnitude higher.<p>And by no means could such a result be called a "Black Swan", because it is perfectly predictable that it will occur. It's only a question of when.
Redundant safety PLCs run the same program in parallel in lockstep, and if they get different results this triggers an error. I think Triconex in particular requires 2 out of 3 controllers to agree.<p>It is odd that the attackers tried to modify the program in a PLC configured this way; they should have known it would cause a noticeable disturbance.<p>The Schneider Quantum PLC literally runs a Pentium 166 or 200, and there is a steady stream of firmware and operating system (VxWorks) updates. We had one from 2006 that would simply stop communicating if it was plugged into a Cisco switch from 2016.<p>A zero-day in VxWorks, which is the operating system for a large swath of controllers, would be pretty bad.
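The 2-out-of-3 voting behavior described above can be sketched roughly like this (illustrative Python, not the actual Triconex vendor logic):

```python
def vote_2oo3(a, b, c):
    """2-out-of-3 majority vote over redundant controller outputs.

    Returns (value, fault): the agreed value and a fault flag.
    If no two of the three results agree, there is no majority and
    the system should trip / raise an error.
    """
    if a == b or a == c:
        return a, False
    if b == c:
        return b, False
    return None, True  # total disagreement: noticeable disturbance
```

A tampered program on one controller would produce a minority result and, at worst, a single-channel discrepancy alarm, which is why modifying a PLC configured this way tends to get noticed.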
<a href="https://twitter.com/SarahTaber_bww/status/1105256557154127872" rel="nofollow">https://twitter.com/SarahTaber_bww/status/110525655715412787...</a><p>>I don't I've quite articulated why I'm so critical of the tech industry. Tech isn't just software anymore. They're coming for ag, food, & manufacturing- & they're bringing a negligent attitudes towards risk & safety that they learned in the cushy world of apps.<p>And this malware is affecting industries with a strong incentive for safety. Think about what that might imply about every other industry.
> [The malware contained] an IP address that had been used to launch operations linked to the malware.<p>> That address was registered to the Central Scientific Research Institute of Chemistry and Mechanics in Moscow, a government-owned organization with divisions that focus on critical infrastructure and industrial safety.<p>Ironic: it sounds like they also have the job of <i>subverting</i> critical infrastructure and industrial safety.
This is definitely not the first time malicious software was implanted in industrial control safety systems. Here is an example from the Cold War (it caused the largest non-nuclear man-made explosion in history):<p><a href="https://www.zdnet.com/article/us-software-blew-up-russian-gas-pipeline/" rel="nofollow">https://www.zdnet.com/article/us-software-blew-up-russian-ga...</a><p>The actual sabotage involved adding an integer overflow to valve control software, and making sure it took months to hit (so testing would miss it).
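The specific numbers here are my own illustration, not the actual pipeline code, but the class of bug is easy to sketch: a fixed-width counter that only wraps after weeks of uptime (for example, 2**32 milliseconds is about 49.7 days), so any realistic test run never sees it:

```python
# Illustrative only: a latent wraparound bug of the kind the article
# describes, planted so it surfaces long after testing has finished.
WRAP = 1 << 32  # emulate an unsigned 32-bit counter

def ms_tick(uptime_ms: int) -> int:
    """Advance an emulated unsigned 32-bit millisecond uptime counter."""
    return (uptime_ms + 1) % WRAP

def naive_elapsed(start: int, now: int) -> int:
    """Buggy: goes negative once `now` wraps past the 32-bit limit."""
    return now - start

def safe_elapsed(start: int, now: int) -> int:
    """Correct: modular arithmetic survives the wraparound."""
    return (now - start) % WRAP
```

Control logic built on the naive version behaves normally for weeks, then suddenly computes nonsense elapsed times after the wrap, which is exactly the "months to hit, so testing misses it" property.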
I think people need to keep in mind that "disconnect it from the Internet; it shouldn't have been on the Internet" doesn't fix this. If the injection works from USB devices, then the typical field engineer is not going to scrub their USB stick before downloading the field upgrade. Almost everything worldwide now uses USB as a field-upgrade path. Maybe as a cost-cutting and simplification measure this was OK, but the risk side? Way, way above the benefit (in my opinion).<p>What mitigates this (if anything does) is signed code on media you have to work harder to program. Rather than a USB device, this should be some form of media which doesn't present as a bootable device to a BIOS/UEFI. The field unit should perform signature checks over images based on PKI. This is what a lot of things do, but somehow it seems not the ones which matter here?<p>Field upgrade by kermit or xmodem/ymodem would be better than this, in that narrow regard: the risk of an unexpected packet hitting the code path is lower if the code upgrade is reading a byte stream for a hash/signature check, compared to mounting a USB device, loading drivers, enabling HID mode...<p>I deliberately avoided working in engineering contexts where the risk was above my comfort level. It ruled out industrial process control, health, civil engineering, and a host of fascinating fields, but I was just too worried about the liability side and my own competency to work in these areas.<p>I did not foresee (inter)net technology becoming so critical that it exposed all of these risks in my core competency. I still feel inadequate to these risks, 37 years later.
Industrial operations are going to have to start giving a damn. A lot of them just don't right now. Most of the ones I've been in are an amalgamation of devices and software spanning the last thirty years. The number of XP boxes still controlling vital systems while connected to the internet is insane.
> "...likely through a hole in a poorly configured digital
> firewall that was supposed to stop unauthorized access. .."<p>'Every' penetration tester I talk to says this is what they find all the time: the actual 'reality' within networks does not align with the assumed network policies or topology.<p>But I don't talk to that many. Is this really the case? We take great care to design network architectures and policies that define segmentation, isolation, and other strategies to harden and protect the network. But are those policies not implemented properly, or is their technical enforcement not guaranteed over time?
Instead of having Internet connectivity 24/7 for IoT devices or critical infrastructure, why not have a small window for things like updates and so on, but be physically air-gapped the rest of the time? The window doesn't have to be at the exact same time either: if you need 20 minutes to download and apply updates once a week, then you can start that 20-minute interval at any time on whichever day. The air-gapping could also be done using analog means, or via another network that isn't connected to the Internet.<p>The best solution would be to be air-gapped 24/7, but in cases where that is not possible, there are other viable and more secure approaches than being online 24/7.
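A rough sketch of the randomized-window idea (Python; the names and the 20-minute length are assumptions for illustration, and the actual interface/relay toggling is left out — this just picks a window and decides whether the link should currently be up):

```python
import random
from datetime import datetime, timedelta

WINDOW_MINUTES = 20  # assumed length of the weekly update window

def next_update_window(now: datetime, window_minutes: int = WINDOW_MINUTES):
    """Pick a random window of `window_minutes` somewhere in the coming
    week. Outside [start, end) the link stays physically disconnected,
    so an attacker can't predict when the system is reachable."""
    week_minutes = 7 * 24 * 60
    offset = timedelta(minutes=random.randrange(week_minutes - window_minutes))
    start = now + offset
    return start, start + timedelta(minutes=window_minutes)

def link_should_be_up(t: datetime, start: datetime, end: datetime) -> bool:
    """The interface is only brought up inside the chosen window."""
    return start <= t < end
```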
Made in Russia?<p><a href="https://www.fireeye.com/blog/threat-research/2018/10/triton-attribution-russian-government-owned-lab-most-likely-built-tools.html" rel="nofollow">https://www.fireeye.com/blog/threat-research/2018/10/triton-...</a>
What causes some of the finest hackers to come from Russia? Is it attributable to their education system? Comparatively, I don’t see as many hackers coming from any other country. I don’t mean it in a bad way; just curious.
I know that this is tinfoil-hat territory, but aren't the recent Boeing 737 MAX crashes related to software issues? That would be scary. I'm referring to this article: <a href="https://www.businessinsider.com/boeing-737-max-receive-updated-control-software-2019-3?IR=T" rel="nofollow">https://www.businessinsider.com/boeing-737-max-receive-updat...</a>
Mobile platform apps run in considerably stricter environments than do desktop apps.<p>I'm wondering why MS has not come out with a similar kind of Windows, wherein every app is effectively sandboxed.
> In attacking the plant, the hackers crossed a terrifying Rubicon. This was the first time the cybersecurity world had seen code deliberately designed to put lives at risk.<p>This is no regular malware. This is war.