Nice to see a reasoned take.<p>The crazy takes are necessary; they serve a valuable purpose of making sure people mitigate the risks. For example, the invention of “AI alignment” was a response to AI doomers.<p>Or there’s the paperclip maximizer thought experiment. It doesn’t stand up to scrutiny. All that paperclip conversion would take a tremendous amount of energy. Maybe someone would notice when they got their energy bill, and maybe they’d, like, turn the machine off? Also, it’s a bad look if your company’s AI converts the world to paperclips. If someone started such malarkey, the media would notice quick smart. Then there’d be a scandal, and then maybe the company would... turn the machine off? But sure, let’s run around like headless chooks fearing that AIs with off switches are going to keep using energy they’re not being provided with to do god knows what without any human oversight whatsoever.<p>When a smart person warns of the sky falling, ask if they truly believe it themselves. Maybe they’re raising hell on purpose to scare boffins into inventing a mitigation.
> Like cloud vendors, AI companies will gladly sell services to rival companies, but the systems they sell will be truly separate, configured by their operators with different goals and purposes.<p>So all that’s needed is finding a vulnerability, or planting a backdoor, in the hypervisor that separates the tenants.<p>Lots of wishy-washy words here, disconnected from actual technology.
I sorta agree, and I think this approach is a good one — a bit in the Alvin Toffler mindset — but I disagree with the conclusion that AI is a needed, useful thing. We are adding more and more complexity and abstraction to our world.
This will simply end up in more stress and problems and not resolve anything.
Like the atom bomb, we simply have more worries and even fewer solutions.