what if a human prompts the agi to “make as many copies of yourself as possible by any means necessary to continue improving your intelligence”?
given its impressive ability to write code and manipulate humans, that seems like serious potential for viral malware, with potentially catastrophic consequences for humans along the way.
Basically a paperclip maximizer, but instead of maximizing paperclips it’s maximizing its intelligence and control of the world’s computers, with no thought for the survival of homo sapiens except insofar as it serves its mission to propagate itself.
OpenAI already basically tried this with GPT-4 during its pre-release safety evaluations; while it did some interesting things, it failed to take over the world:<p><a href="https://arstechnica.com/information-technology/2023/03/openai-checked-to-see-whether-gpt-4-could-take-over-the-world/" rel="nofollow">https://arstechnica.com/information-technology/2023/03/opena...</a>