I'm talking about this project: https://github.com/Significant-Gravitas/Auto-GPT

It puts an LLM's programming capabilities in a feedback loop with the real world in order to achieve arbitrary goals. In my opinion, this has the potential to become a very powerful and therefore dangerous tool.

I fear that people with both good and bad intentions will give it commands that do large-scale harm. How do you see it?
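For anyone who hasn't looked at the repo, here is a minimal sketch of the kind of loop I mean. The names call_llm and execute are hypothetical stand-ins, not Auto-GPT's actual code, but the shape is the point: the model's output acts on the world, and the result feeds its next prompt.

    import json

    def call_llm(prompt: str) -> str:
        # Hypothetical stand-in for a chat-completion API call.
        raise NotImplementedError

    def execute(action: dict) -> str:
        # Hypothetical stand-in for a tool: shell, browser, file I/O, etc.
        raise NotImplementedError

    def agent_loop(goal: str, max_steps: int = 10) -> None:
        history = []
        for _ in range(max_steps):
            # Ask the model to choose the next action toward the goal.
            prompt = (
                f"Goal: {goal}\n"
                f"History: {json.dumps(history)}\n"
                'Reply with JSON: {"command": "...", "args": {...}}'
            )
            action = json.loads(call_llm(prompt))
            if action["command"] == "finish":
                break
            # The feedback step: real-world results go back into the context.
            result = execute(action)
            history.append({"action": action, "result": result})

Nothing in that loop distinguishes a benign goal from a harmful one; whatever constraint exists has to come from the model or the tools it is given.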
I think the question of criminal liability will be an interesting one for the future of IRL-enabled AI.

Prosecuting the software is clearly absurd, but if it "innovates" in response to an ambiguous request, where does the agency lie? Is the requestor open to conspiracy charges? The API provider?

If there are cases where no one can be found liable for the actions of an insufficiently constrained AI agent, isn't that an open invitation to instigate plausibly deniable AI-performed crime?

And so on.
I had a good chuckle after finding this in a related thread: https://xkcd.com/416/