> it illustrated one of the big fears behind the government’s zeal to regulate A.I.: that the technology could be used to stoke panic and sow disinformation, with potentially disastrous consequences<p>> Within minutes, internet sleuths began to debunk the image<p>> global governments should consider creating a regulator, akin to the International Atomic Energy Agency, that can inspect, audit and when necessary restrict systems that go beyond a certain level of capability. “Governance of the most powerful systems, as well as decisions regarding their deployment, must have strong public oversight,” Altman and his colleagues write.<p>So the theory is: Chicken Littles panic about AI. But the panic is formless. "AI is powerful, it's going to change the world, we need to do something." What that "something" should be isn't what people actually think about, because they don't understand the technology; they just panic.<p>So now there are some stupid ideas floating around, like pausing, or limiting compute, dataset size, or deployments. All of them obviously unworkable. People obsess over LLMs, but I would be willing to bet $1000 that a model trained only on something like object recognition and captioning would show emergent capabilities like a GPT's at enough scale.<p>Neural networks build models of the world; models of the world are useful for planning and computation. Computation is isomorphic to reality. QED. You can't stop the tide.
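For what "limiting compute" would even mean in practice: training cost for a dense transformer is usually ballparked with the standard FLOPs ≈ 6 × parameters × tokens approximation, and any cap would amount to a threshold check against that number. A toy sketch; the threshold value here is a hypothetical illustration, not from the article:

```python
# Rough training-compute estimate using the common approximation
# FLOPs ~= 6 * N * D, where N = parameter count, D = training tokens.
# THRESHOLD_FLOPS is a purely hypothetical "needs an audit" line.

THRESHOLD_FLOPS = 1e26


def training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training FLOPs for a dense transformer."""
    return 6.0 * n_params * n_tokens


def needs_audit(n_params: float, n_tokens: float) -> bool:
    """Would this training run trip the (hypothetical) compute cap?"""
    return training_flops(n_params, n_tokens) >= THRESHOLD_FLOPS


# Example: a 70B-parameter model trained on 1.4T tokens
flops = training_flops(70e9, 1.4e12)
print(f"{flops:.2e} FLOPs, audit required: {needs_audit(70e9, 1.4e12)}")
```

Note how crude the lever is: the check sees only multiply-accumulates, and says nothing about what the model was trained on or what it can do, which is exactly why capability doesn't have to live where the regulators are looking.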