The worrying thing for me is that the comments here show that people do not understand what an actually dangerous AI looks like. And that ignorance is what will lead to AI taking over the planet sooner rather than later.<p>The real concern is fully autonomous superintelligent cognitive agents that emulate other animal/human characteristics such as emotions and survival instincts. GPT-3/4 are not autonomous. They only do what users instruct them to do; they do not have goals of their own. They have general intelligence, but we can anticipate models with easily 10-1000x more intelligence in only a few years.<p>Meanwhile, many groups are working as fast as they can to build full autonomy, and even trying to emulate other human and animal characteristics, with the apparent intent of creating digital people and enslaving them. This is based on conflating general-purpose intelligence with other animal traits like autonomy, emotions, and survival instinct.<p>Within only a few years, GPT-X-powered VMs will be considered very basic tools that only the most conservative users stick with, out of concern about AIs that have 100 times the cognitive power, near-full autonomy, and sophisticated cognitive architectures.<p>People need to worry about the sophisticated cognitive architectures being designed for autonomy, not relatively simple tools that just follow directions and have been heavily tuned to do so. In fact, it's quite possible that this type of system, offered as a commercial service, will be generally considered much safer than traditional VMs, because it can be equipped with instructions to disable accounts when even a hint of malfeasance is detected. Giving people direct access to the machine allows no such AI filtering.