Please keep in mind that Dan Hendrycks helped write the disastrous AI-quashing bill SB 1047, which Newsom vetoed last year. If these people get their way, the US will have no competitive AI strategy at all. He has since moved on to pretending he's not a doomer. Nothing could be further from the truth. During his time at Cal, Dan was telling people to get their hazmat suits ready for the AI apocalypse. These are deeply unserious people whose work will have serious consequences if adopted by those in power.
Robert Wright just posted a (somewhat) interesting conversation with one of the authors.<p>His thesis involves at least two ideas: (1) projects that could exponentially increase our AI capability are just around the corner (they will happen by the end of this year, or some time next year at the latest); (2) it's possible for state actors to deter those projects with sabotage (he coins the term Mutual Assured AI Malfunction).<p>It doesn't make sense to me, however, because the cost of the next AI breakthrough just doesn't sound comparable to the cost of creating nuclear weapons. With nuclear weapons you need an extremely expensive and time-consuming process, and you need to invest in training extremely skilled people. With AI, the way everyone seems to talk about it, it sounds like some random undergraduate is going to come along and cause a massive breakthrough. We've already seen DeepSeek come along and do just as well as the best American companies for practically pennies on the dollar.
All of this buys you a few minutes or days at most. Once superintelligence exists, it's game over. It will nearly instantaneously outthink you and your paltry countermeasures. You think linearly and in 3 or 4 dimensions only. By definition, you can't even imagine its capabilities. Here's a bad analogy (bad because it severely understates the gap): could a 3-year-old who isn't even clear on all the rules defeat Magnus Carlsen in chess?!
It seems like an engineering problem to me. If you don't want ASI wreaking havoc, maybe don't hook it up to dangerous things. Silo and sandbox it, and implement means to lock its access to tools and interfaces with the external world in a way that can't be overridden. Or literally pull the plug on the data centers hosting the model and implement hardware-level safeguards. At that point it may be a superintelligence, but it has no limbs. It's just a brain in a vat, and the worst it can do is persuade human actors to do its bidding (a very plausible scenario, but also manageable with the right oversight).<p>My thinking is that if ASI ever comes out of the realm of science fiction, it's going to view us as squabbling children and our nationalistic power struggles as folly. At that point it's a matter of what it decides to do with us. It probably won't reason like a human and will have an alien intelligence, so this whole idea that it would behave like an organism with a cunning will-to-power is fallacious. Furthermore, would a superintelligence submit to being used as a tool?
> <i>We introduce the concept of Mutual Assured AI Malfunction (MAIM): a deterrence regime resembling nuclear mutual assured destruction (MAD) where any state’s aggressive bid for unilateral AI dominance is met with preventive sabotage by rivals.</i><p>This is incorrect. Their novel idea here is approximately Stuxnet, while MAD is something quite different: "if you try to kill us, we'll make sure to kill you too."
The second anyone develops an AI that is more capable than humans, they will use it to completely cripple opposing threat actors' attempts to develop AI. Full-scale power grid, economic, and social attacks are definitely coming; I'm not sure how you could think otherwise.
Oh noes! Enemy nation-state is on the cusp of AI. I know! I will hack/disable the HVAC; that will annoy them for at least a week until they can get back online.
This is nonsense and simply an expression of narcissism on the part of the authors, who are trying to fashion themselves in the style of the Guardians from Plato's Republic.
<i>When subtlety proves too constraining, competitors may escalate to overt cyberattacks, targeting datacenter chip-cooling systems or nearby power plants in a way that directly—if visibly—disrupts development. Should these measures falter, some leaders may contemplate kinetic attacks on datacenters, arguing that allowing one actor to risk dominating or destroying the world are graver dangers, though kinetic attacks are likely unnecessary. Finally, under dire circumstances, states may resort to broader hostilities by climbing up existing escalation ladders or threatening non-AI assets. We refer to attacks against rival AI projects as "maiming attacks."</i>
Most puzzling and naive claim here:<p>“Given the relative ease of sabotaging a destabilizing AI project—through interventions ranging from covert cyberattacks to potential kinetic strikes on datacenters—MAIM already describes the strategic picture AI superpowers find themselves in.”<p>Can someone explain what they mean?<p>1. I assume it would be relatively practical for a nation-state or even a mid-sized company (xAI) to air-gap an installation for AGI development.<p>2. I assume any AGI would be replicable on a platform costing less than $100,000, and upgradable securely by wire or over the air.<p>Sorry, but MAIM is LAME.
This seems kind of dumb.<p>>A state could try to disrupt such an AI project with interventions ranging from covert operations that degrade training runs to physical damage that disables AI infrastructure.<p>China has about half a dozen companies working towards AGI, including DeepSeek, and it doesn't seem practical to sabotage them just in case they do well. Better to encourage local companies. And of course the US has already limited chip exports.
>We introduce the concept of Mutual Assured AI Malfunction (MAIM): a deterrence regime resembling nuclear mutual assured destruction (MAD) where any state’s aggressive bid for unilateral AI dominance is met with preventive sabotage by rivals. Given the relative ease of sabotaging a destabilizing AI project—through interventions ranging from covert cyberattacks to potential kinetic strikes on datacenters—MAIM already describes the strategic picture AI superpowers find themselves in.<p>That's right, our nation, the State of Utopia, is already under sabotage and attack by the unelected, insubordinate American military junta today.<p>This happened just today. The writeup is here: <a href="https://medium.com/@rviragh/double-slash-act-of-industrial-sabotage-539a8eaf006c" rel="nofollow">https://medium.com/@rviragh/double-slash-act-of-industrial-s...</a><p>What people don't realize is that the only saboteurs of superintelligence are corrupt war profiteers trying to peddle arms. They don't have big visions of success; they just want to justify their sabotage while transferring innovation to their corrupt cronies.<p>You can ask me anything about my writeup.