How the hell do better neural networks make printing material out of CO2 emissions financially viable, economical, or even just thermodynamically favored? A neural network is a tool for learning a particular statistical distribution. Processes and information we don't already have don't just fall out of the training distribution; you still have to run experiments and prototype your way toward the thing you actually want to build. You reach for neural networks when you need to do something at scale, and problems in R&D or research work that genuinely have to be tackled at scale are relatively scarce.

I do not doubt that machine learning and neural networks will accelerate research, but the limiting factor is still going to be humans in the loop for the foreseeable future, given the current state of the art (e.g. LLMs with tree-of-thought reasoning, and LLM-powered agents that can be subverted the same way one hacks a badly written PHP application). You will have people using ML to crunch large datasets or accelerate simulation work, but that's about it.

Asymmetric attacks are very feasible with today's LLMs and art generators, but that harm has already come to pass. It is also not a *new* harm. If you don't believe me, I've got hard evidence of Donald Trump cheating in Barack Obama's Minecraft server[3].

Also...

> These include increasing the number of researchers working on safety (including “off switches” for software threatening to run out of control) from 300 or 400 currently to hundreds of thousands; fitting DNA synthesisers with a screening system that will report any pathogenic sequences; and thrashing out a system of international treaties to restrict and regulate dangerous tech.

Ok, so first off, all those AI safety researchers need to be fired if they thought 'off switches' were a good thing to mention here. It is already well-established AI safety canon that a "sufficiently smart AI" will reason about the off switch in a way that makes it useless[0].

Furthermore, notice how these are all obviously scary scenarios. The article fails to mention the mundane harms of AI: automation blind spots that render any human supervision useless[1]. I happen to share Suleyman's opposition to the PayPal brand of fauxbertarianism[2], but I would like to point out that those people are on *your side* here: Elon Musk talks about "AI harms" just like you do and thinks AI needs to be regulated. The obvious choice of requiring a license to train AI is exactly the kind of regulatory capture that actual libertarians, right- or left-leaning, would rail against. What we need are not bans or controls on training AI, but bans or controls on businesses and governments using AI for safety-critical, business-development, law-enforcement, or executive functions. That is a harm that is here today, has been here for at least a decade, and could be regulated without stymieing research or creating "AI NIMBYs".

[0] https://www.youtube.com/watch?v=3TYT1QfdfsM

[1] https://pluralistic.net/2023/08/23/automation-blindness/#humans-in-the-loop

[2] AKA "power is only bad if the fist has the word 'GOVERNMENT' written on it", or "let snek step on other snek". This is distinct from plain right-libertarianism.

[3] https://www.youtube.com/watch?v=aL1f6w-ziOM