Carmack makes four points, some of which I agree with, that are unfortunately disturbing when taken in totality:

a) We’ll eventually have universal remote workers that are cloud-deployable.

b) We’ll have something on the level of a toddler first, at which point we can deploy an army of engineers, developmental psychologists, and scientists to study it.

c) The source code for AGI will be a few tens of thousands of lines of code.

d) There’s good reason to believe that an AGI would not require computing power approaching the scale of the human brain.

I wholeheartedly agree with c) and d). However, to merely have a toddler equivalent at first would be a miracle, albeit an ethically dubious one. Sure, a hard-takeoff scenario could very well have little stopping it. However, I think that misses the forest for the trees:

Nothing says AGI is going to be one specific architecture. There are likely many viable architectures that differ vastly in capability and safety. If the bar ends up being as low as c) and d), what’s stopping a random person from intentionally or unintentionally ending human civilization?

Even if we’re spared a direct nightmare scenario, there’s still a high probability of what might end up being complete chaos; we’ve already seen a very tiny sliver of that dynamic in the past year.

I think there’s a high probability that neither side of a) will exist, because neither the cloud as we know it nor the need for remote workers will be present once we’re at that level of technology. For better or worse.

So what to do?

I think open development of advanced AI and AGI is lunacy. Despite Nick Bostrom’s position that an AGI arms race is inherently dangerous, I believe it is less dangerous than humanity collectively advancing the technology to the point that anyone can end or even control everything, let alone certain well-resourced hostile regimes with terrible human rights records that have openly stated their ambitions toward AI domination. When the lead time from state of the art to public availability is a matter of months, that affords pretty much zero time to react, let alone assure safety or control.

At the rate we’re going, by the time people in the free world with sufficient power to put together an effort on the scale and secrecy of the Manhattan Project come to their senses, it’ll be too late.

Were such a project to exist, I think an admirable goal might be to simply stabilize the situation by prohibiting the creation of further AGI for a time. Unlike nuclear weapons, AGI has the potential to effectively walk back the invention of itself.

However, achieving that end both quickly and safely is no small feat. It would amount to the creation of a deity. Yet that path seems more desirable than the alternatives outlined above: such a deity coming into existence either by accident or by malice.

This is why I’ve never agreed with people who hold that AGI safety should only be studied once we figure out AGI; that, to me, is also lunacy. Given the implications, we should be putting armies of philosophers and scientists alike on the task. Even if they collectively figure out only one or two tiny pieces of the puzzle, that alone could be enough to drastically alter the course of human civilization for the better, given the stakes.

I suppose it’s ironic that humanity’s only salvation from the technology it has created may in fact be technology; certainly not a unique scenario in our history.
I fear our collective fate has been left to nothing more than pure chance. Poetic, I suppose, given our origins.