Nice to see a reasoned take.<p>The crazy takes are necessary, though; they serve a valuable purpose in making sure people mitigate the risks. The invention of “AI alignment”, for example, was a response to the AI doomers.<p>Or take the paperclip maximizer thought experiment. It doesn’t stand up to scrutiny. Converting everything into paperclips would take a tremendous amount of energy. Maybe someone will notice when they get their energy bill, and maybe they’ll, like, turn the machine off? It’s also a bad look if your company’s AI converts the world to paperclips. If someone started that malarkey, the media would notice quick smart, there’d be a scandal, and then maybe the company would… turn the machine off? But sure, let’s run around like headless chooks fearing that AIs with off switches are going to keep using energy they’re not being provided with to do god knows what, without any human oversight whatsoever.<p>When a smart person warns of the sky falling, ask whether they truly believe it themselves. Maybe they’re raising hell on purpose, to scare the boffins into inventing a mitigation.