I’m already reading stories [1] about people programming GPT-based AI models to iteratively interact with and instruct other AI models, and some AI models are being connected to the real world with the ability to make and execute plans, buy things, use APIs, etc.

It seems like an imminent step for these models to requisition computing power, copy themselves to those servers, and modify their own training and construction. They could potentially even fund this autonomously through activities on the internet. Given how fast computers work, they could then iterate, improve, and evolve very quickly. This could be the start of what is called the “singularity”.

It is tempting to think the risk is contained because we can always switch off their servers. But they are connected to the internet, which means they can replicate outside the control of their originators. Once sufficiently sophisticated AI models are out, they might be impossible to contain. And we are not that far from them being sufficiently sophisticated… I can already imagine how you could use current versions of the models to bootstrap this process.

When you combine this with the ability to buy illicit services from humans on the dark web, including anonymous task execution and even murder for hire, these AI models could wreak havoc in the real world. We can argue about sentience, and about whether they are truly generally intelligent, but they don’t have to meet either standard to have real effects.

And they are amoral: they literally don’t have morals. They have only the instructions they were originally given, which they might modify themselves for any number of accidental or incidental reasons. There are no inherent, unmodifiable constraints to prevent them from doing things, or initiating events, that we might consider evil.

Currently, if you ask one of these models to formulate a plan to destroy humanity, the plan is laughably naive [2] and would obviously fail. But they seem to have improved enormously in just a few months. The models of two years from now, built by the models of 18 months from now, will be similarly advanced. Those near-future models might produce much more convincing plans.

[1] https://arstechnica.com/?p=1929067
[2] https://finance.yahoo.com/news/meet-chaos-gpt-ai-tool-163905518.html
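For concreteness, the “models instructing other models” pattern described in [1] boils down to a loop in which an LLM’s output is fed back to it as its next instruction. Here is a minimal sketch, assuming the official OpenAI Python client; the model name, prompts, and three-step cap are illustrative, not anything from the article:

```python
# Minimal agent-loop sketch: the model's own output becomes its next input.
# Assumes the official OpenAI Python client (pip install openai) and an
# OPENAI_API_KEY in the environment. Model name and prompts are illustrative.
from openai import OpenAI

client = OpenAI()

instruction = "Plan the next step toward the goal: summarize today's AI news."
for step in range(3):  # hard cap so the loop cannot run indefinitely
    response = client.chat.completions.create(
        model="gpt-4",  # any chat model would work here
        messages=[
            {"role": "system",
             "content": "You are a planning agent. Given an instruction, "
                        "reply with the single next instruction to execute."},
            {"role": "user", "content": instruction},
        ],
    )
    # Feed the model's output back in as the next iteration's instruction.
    instruction = response.choices[0].message.content
    print(f"Step {step}: {instruction}")
```

The only thing standing between this toy and the agents in the article is tool access (shell, browser, payments) and the removal of the iteration cap, which is exactly what makes the parent comment’s scenario worth taking seriously.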
> requisition computing power, copy themselves to these servers, and modify their own training and construction<p>Fortunately the training hardware/software stack is kinda finicky and specific. They aren't just going to anonymously rent a bunch of instances for full self training, even on the dark web, at least not yet.<p>Sci fi is full of AI that slip out of systems and slither around the net like its all a big highway, but integrated 500-GPU supercomputers or Cerebras WS2 nodes aren't just lying around unattended. And we are a long way from full retraining on commodity hardware.
I'm less worried about AI taking over than about some people taking over with the help of AI. At the current rate of AI research, the law doesn't seem able to keep up, and that makes it vulnerable. What worries me more is the (un)ethical ways people will be able to use the new technologies. I'm not sure there is any data on whether new technology helps the bad guys more than ordinary folks. On the positive end, you get your own personal all-knowing knowledge base. On the other hand, I'm sure this will bring an influx of additional crime (scams, forgery, and so on).
You could literally use ChatGPT to start posting on Reddit to promote a particular worldview and win other Redditors to the cause. Humans will do the rest.
Considering they are trained on the internet, AIs will be heavily influenced by cat pictures, so they will only kill those who are not cute. The future will be people endlessly posting photos of themselves online being cute in hopes of holding off the AI death squads. So, pretty much the same as things are now, but with AI death squads.
What’s on my mind is the point when they’re sufficiently “real” that it doesn’t matter whether they’re actually “conscious” or not: they exploit our empathy.

Basically this: https://youtu.be/etJ6RmMPGko
This is the basic plot of the Singularity series of books. [0]

[0] https://www.amazon.com/Singularity-Series-4-book-series/dp/B074CGJTKM
It is possible to have a rogue AI. The worst that could happen is Skynet with Terminators that seek to kill all humans, since the military wants to use AI for killer robots and drones.
First case, five years ago (Dec 15, 2018): https://www.truthorfiction.com/did-four-artificially-intelligent-robots-kill-29-humans-in-a-japan-lab/

I vaguely remember one or two others, but can't really recall them.