The good stuff is in the PDF:<p><a href="https://www.gao.gov/assets/700/694913.pdf" rel="nofollow">https://www.gao.gov/assets/700/694913.pdf</a><p>- Running a port scan caused the weapons system to fail<p>- One admin password for a system was guessed in nine seconds<p>- "Nearly all major acquisition programs that were operationally tested between 2012 and 2017 had mission-critical cyber vulnerabilities that adversaries could compromise."<p>- Taking over systems was pretty much playing on easy mode: "In one case, it took a two-person test team just one hour to gain initial access to a weapon system and one day to gain full control of the system they were testing."
I was a dev contractor for the US Army for a few years. None of this surprises me.<p>They had some goofball policies that made it seem like vulnerabilities were the goal. I could bitch at length. TSA-style security theater practices were the order of the day. The IA training was an embarrassing joke, and they made you do it often enough to make you a little crazy.<p>I just checked the Certificate of Networthiness page and they don't have a valid SSL certificate. I recall that being the case years ago too. I wonder if it's been that way for the last 7 years? That's a cute little terrarium of the whole biome I remember.<p>Off topic a bit, but that all aside... I am more proud of the work I did there than at any other place in my career. I got a lot of excitement and engaged feedback about the interactive learning materials I created.<p>I'll never know if it made any difference, but the mere fact that someone's son or daughter COULD have noticed an IED threat they wouldn't have otherwise because of my work gives me all sorts of proud fuzzies.<p>That work had way more meaning than all the other CRUD/ML/Advertainment schlock I'll get to do for the rest of my life :)
If you are interested in helping the US Government fix this particular trashfire, consider joining the Defense Digital Service. We work on a variety of DoD projects as part of the US Digital Service "tech peace corps". <a href="https://www.dds.mil/" rel="nofollow">https://www.dds.mil/</a><p>If you're not ready for that level of commitment (though it's amazing work), and you're interested in being involved as a security researcher, reach out to me and we can talk about joining our bug bounty program.
I was an operator on a weapon system within the last decade that did not use encryption. I was horrified, naturally, but the explanations were:<p>1. Well, this is rapid deployment, we can't have everything.<p>2. The enemy here is fairly low-tech. Shouldn't be a problem.<p>Needless to say, I'm not surprised by this report.
> Nearly all major acquisition programs that were operationally tested between 2012 and 2017 had mission-critical cyber vulnerabilities that adversaries could compromise.<p>It's not too surprising, and a little reminiscent of the security nightmare that is IoT devices.<p>All those weapon systems come out of hardware/engineering companies with little background in software engineering and the accompanying security best practices.
When I was at Lockheed - we were building the RFID tracking systems they used to track various everythings all over - and they were trying to make it a part of the Port Security for every port... and even had Tom Ridge join the board...<p>Well, I recall asking about the security of the systems (I was the IT lead and was to help design the global port tracking system, with which they hoped to track all shipping containers) -- there was no encryption/authentication on <i>any</i> of the tags.<p>If you had a reader, you could read/write the tags.<p>They had not even thought about securing these systems - and they were trying to tout them as a security system for weapons shipments. They even had tags with G-sensors that were supposed to be able to tell you if a munition had been dropped, or if it had armed (some weapons will only arm themselves once a certain g-force is reached, which indicates to the weapon that it has been launched).
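For context on what "no encryption/authentication on any of the tags" means in practice: anyone with a reader could impersonate or rewrite a tag. A minimal toy sketch of the missing piece — HMAC challenge-response, where the reader sends a fresh nonce and the tag proves it knows a secret key. This is purely illustrative, not any real EPC/RFID standard, and the key provisioning is an assumption:

```python
import hashlib
import hmac
import os

# Hypothetical per-tag key, provisioned at manufacture (assumption).
SECRET = b"per-tag key provisioned at manufacture"

def tag_respond(challenge: bytes, key: bytes) -> bytes:
    """Tag proves knowledge of its key without ever transmitting it."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

def reader_verify(challenge: bytes, response: bytes, key: bytes) -> bool:
    """Reader recomputes the MAC and compares in constant time."""
    expected = hmac.new(key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = os.urandom(16)                       # fresh nonce per read
genuine = tag_respond(challenge, SECRET)         # real tag's answer
forged = tag_respond(challenge, b"wrong key")    # cloned tag without the key

assert reader_verify(challenge, genuine, SECRET)
assert not reader_verify(challenge, forged, SECRET)
```

Without something like this, a "security" tag is just a writable label: the G-sensor data could be erased or replayed by anyone standing near the container with a reader.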
The graphic on page 26 of the report is kind of cute: <a href="https://i.imgur.com/MWrM2i8.png" rel="nofollow">https://i.imgur.com/MWrM2i8.png</a><p>The inclusion of this graphic makes me realize the report is not intended to explain the situation to engineers. It's to explain the problem to well-decorated higher ups that probably don't understand modern technology all that well, yet are calling all the budget shots.
"Another test team reported that they caused a pop-up message to appear on users' terminals instructing them to insert two quarters to continue operating."
A ton of commercial systems have similar vulnerabilities. Teslas have been hacked remotely multiple times over several years. People who attack systems are specialized in ways that the engineers who build them are not. None of this should be all that surprising. New recommendations on proper system design should mean future programs get budgets to hire people to mitigate these problems. However, it should always be assumed there are vulnerabilities that can be exploited by others; any claims to the contrary should be met with extreme skepticism.
Most of the comments outline how awful and dire the situation is (or probably is).<p>I'm less interested in this than I am in what we could do to fix it. Is it just more money to hire competent security engineers? Is it a more responsive talent acquisition process that gets the right people in at the right time?
I guess my question then is why have a computer attached to these systems in the first place, or if you must, why not make it as dumb as possible? Why include more points of failure?<p>Also, I couldn't help it: the DOD plans to spend $1.66 trillion on these systems! Perhaps instead of making newfangled, more complicated devices that will have tenfold more vulnerabilities to catch, we should just stick with the machines we have and harden them. I imagine it would save us loads if we did.
Good luck closing the barn door after the horse has bolted: <a href="https://www.wired.com/2011/11/counterfeit-missile-defense/" rel="nofollow">https://www.wired.com/2011/11/counterfeit-missile-defense/</a><p>I am no military expert, but it seriously looks like China has us in a stranglehold.
Are these remotely activated systems that are at risk (like drones)? If not, why is any weapon system that doesn't need remote activation actually plugged into a public network?
Silver lining: when the DOD finds good ways to harden its systems, we can all copy them.<p>Cloud: it's probably "unplug the aerial / network cable"
<i>You'll see things here that look odd, even antiquated to modern eyes. Phones with cords, awkward manual valves, computers that barely deserve the name. But all of it is intentional. It's all designed to operate in combat against an enemy who could infiltrate and disrupt all but the most basic computer systems.<p>Of course, those attitudes have changed through the years and Galactica is something of a relic. A reminder of a time when we were so frightened by the capabilities of our enemies that we literally looked backward for protection. Modern battlestars resemble Galactica only in the most superficial ways...</i>
GDC4S (now General Dynamics Mission Systems) and NICTA have been working on seL4, so it at least seems that the US DoD has <i>something</i> to build on, if they want to start providing assurances of some form on weapons systems.<p>They'll really have to set the passwords properly, though.
Not to play the Whataboutism card (proceeds to play whataboutism card), but has anybody pen tested the Russians' or the Chinese systems?<p>Just thinking this isn't a U.S.-only problem.