Recent events got me thinking that future regulations may limit access to compute power if AI growth proves dangerous, much as access to advanced weaponry is restricted. Thoughts?
Regulators aren't yet clear about what to worry about. The EU is more worried about surveillance-oriented development, and Biden's AI Bill of Rights doesn't even mention weapons.

The weapon in question is an AI trained on all known cybersecurity vulns, plus an agent that can craft packets. One defense is to isolate systems to limit impact and lateral movement, on the expectation that an attack would spread the way ransomware does. Another is to slow training at the overall chip level, expecting waves of development in which humans might cautiously produce AI useful for defense before producing another that's devastating.

Restricting access to massively parallel compute is part of planning for the latter: slow down development if you can, but keep control of compute so that you can train a more powerful AI to attack existing AI coordination.