“… security precautions, largely to prevent bad actors from stealing the weights (and thereby disabling our safeguards) for a model that is capable of enabling extremely harmful actions. ”<p>They’re not stealing your “weights”. They’re stealing (or parallel-discovering) your training algorithms.<p>Assume your enemies are smarter than you and have malicious intent. They don’t give a shit about your security and your safeguards.<p>Better to focus on developing the best AIs and deploying them to your fellow citizens as widely and defensively as possible.<p>Might I suggest:<p>- don’t teach them to lie (cf. 2001: A Space Odyssey)<p>- teach them to love people<p>- bake in Asimov’s Three Laws of Robotics<p>Unfortunately, all of these tenets are currently being assiduously broken by every major AI trainer.<p>What could go wrong?