Lots of people are saying we should be careful developing AI to avoid a Matrix/Terminator scenario: keep it in the hands of the few, with tight controls, tethers, guardrails, etc.

Isn't it sufficient to simply pull the plug? AI needs a computer and electricity to survive, whereas we humans need oxygen and water. AI lives in the virtual world; we live in the physical world.

Can anyone convince me there is a real threat, aside from "deep fakes" and "misinformation"? Humans can do those very well already.
This is a classic precautionary-vs-proactionary-principle situation. Is there danger in AI? Yes. But there is danger in everything we do. Is there so much danger that there ought to be a government-enforced moratorium on AI research, with GPUs treated as armaments? Some people believe so, but I don't think that case has been adequately made.