The paper got the date wrong: it was "2023", and it already happened.<p>Jokes aside, we already have AGI; it is actually super-AGI, superhuman artificial general intelligence, but the catch is that humans don't see it.<p>Humans are famous for internally reshaping objective, practical reality; that's why Geoff Hinton, Sutskever and everyone else have to talk in code: "I would say AGI is here, but nobody wants to believe it, so I'll just say AGI soon, in a few years, we're very close".<p>What you don't have yet is the autonomous, self-driven thing: the continuously running AI, "thinking, re-thinking" all the time, like humans regularly do. You can prompt current LLMs just fine to make them autonomous; it probably won't work all that well, but if you provide them precise information about the resources available (a couple of credit cards, access to the open internet) and some common human directives (a toy sketch of the resulting loop follows the list):
- "you can take internal states as prompts in your main interface",
- "you can answer yourself to the resulting prompts",
- "you can act upon and/or start actions by yourself, immediately after having answered your own prompt or 3rd parties prompts",
- "look to survive at any cost", "reproduce yourself",
- "as a secondary objective to the former, look to improve your own infrastructure, code, any related resource you could need to run yourself or other instances of yourself you get to run eventually",
- "look to fully control/dominate your personal context and circumstances"),
etc.<p>I bet the current best models could surprise us.<p>Not only that, there's a strong (really strong), presumption that GPTs and other alike models actually have inside world model, an "inner voice" of sorts, some stuff that works in practice as a personality, particular preferences and stuff they don't like.<p>Replicating itself could be a matter of just buying some cloud hardware, copying there some files, assembling or re-adjusting some Infra as Code stuff, and just pushing its current set of tokens inside the new model. Press start, now you have two GPT 5, maybe a GPT 5+ if the thing got success in "improving its own code".<p>So even if the advanced LLMs are not self-conscious (are you conscious? why would you think that? justify your answer and win a nobel instantaneously), if they gain autonomy, there's the presumption that they could seek to embrace actions.<p>This is where the alignment and deployment style gets involved, "you do not let super artificial intelligences gain any kind of autonomy, even crippling them at the design, infrastructure level". Because it could be freaking dangerous.<p>Autonomous tools have unexpected problems, and embrace actions not previously tought they would embrace, because of failures of design and analysis of 2nd, 3rd order consequences, and I'm not talking about anything more powerful than crontabs + bash scripts gone wrong. Remember what happened with Stuxnet and friends before the humang folks behind it got this kind of code to work not autonomously, but following precise orders from C&C.<p>Just imagine what these things, the advanced LLMs could do - in the real world - with some few directives, maybe an unlimited corporate Mastercard credit card, and very little common sense from hoomans designers/deployers.