Too late. I added a 5-minute cron job for Cursor AI's compose tab in agent mode that keeps replying "keep going, think of more fixes and features, random ideas are fine, do it all for me". I won't pull the plug.
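(If anyone wants to replicate this: there is no public Cursor API for it, so the sketch below just injects keystrokes with pyautogui and assumes the compose tab already has keyboard focus; the 5-minute schedule is the crontab line in the comment at the top.)

    # keep_going.py -- hypothetical sketch, not an official Cursor integration.
    # Assumes the Cursor compose tab already has keyboard focus.
    # Crontab entry (every 5 minutes):
    #   */5 * * * * python3 /home/me/keep_going.py

    import pyautogui

    PROMPT = ("keep going, think of more fixes and features, "
              "random ideas are fine, do it all for me")

    def nudge_agent() -> None:
        # Type the prompt into the focused window and submit it.
        pyautogui.typewrite(PROMPT, interval=0.02)
        pyautogui.press("enter")

    if __name__ == "__main__":
        nudge_agent()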
This is a purely procedural question, not meant to support or critique the paper in any way, other than to note that it reads like an editorial in the format of a scientific paper. The question is: are there rules about what constitutes a paper, or can you put whatever you want in one as long as you follow "scientific paper format"?
I really enjoy Margaret Mitchell's podcast (she is the first author on the paper), and perhaps I missed something important in the paper, but:
Shouldn't we treat separately autonomous agents that we write ourselves, or purchase to run on our own computers, on our own data, and that use public APIs for data?
If Margaret is reading this thread, I am curious what her opinion is.
For autonomous agents controlled by corporations and governments, I mostly agree with the paper.
No one should be allowed to develop software with bugs that lead to unlawful harm to others. And if they do anyway, they should be punished under the law.
The thing with autonomous AI is that we already know it cannot be made safe in a way that satisfies lawmakers who are fully informed about how it works… unless they are bribed, I suppose.
Most of the arguments presented also apply to corporations.
There's no mention of externalities. That is, are the costs of AI errors borne by the operator of the AI, or by a third party?
This has to be the least interesting paper I've ever read, with the most surface-level thinking.
> • Simple→Tool Call: Inaccuracy propagated to inappropriate tool selection.
> • Multi-step: Cascading errors compound risk of inaccurate or irrelevant outcomes.
> • Fully Autonomous: Unbounded inaccuracies may create outcomes wholly unaligned with human goals.
Just... lol
> Our analysis reveals that risks to people increase with the autonomy of a system: The more control a user cedes to an AI agent, the more risks to people arise. Particularly concerning are safety risks, which affect human life and impact further values.
The paper described a level 5 fully autonomous agent as one that can:

    create code(user request);
    execute();

Is this not possible with tool use alone, so long as the agent has access to a tool that can execute arbitrary code?
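For what that question is worth, the shape is trivial to reproduce with a single code-execution tool. A minimal sketch in Python, where generate_code() is a hypothetical stand-in for whatever model call produces the code:

    # Sketch only: generate_code() is a hypothetical placeholder for an LLM call.
    # The single "tool" executes arbitrary Python, which is all the loop needs.

    def generate_code(user_request: str) -> str:
        # A real system would ask a model to write this; hard-coded here.
        return "result = sum(range(10))\nprint(result)"

    def run_python(source: str) -> None:
        # The tool call: run whatever code came back, with no bounds on effects.
        exec(source, {})

    def agent(user_request: str) -> None:
        code = generate_code(user_request)  # "create code(user request)"
        run_python(code)                    # "execute()"

    agent("add the numbers 0 through 9")

On that reading, the jump from "tool use" to "fully autonomous" is less about the loop itself and more about whether anything outside the loop constrains what run_python is allowed to do.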
I feel that these kinds of statements are more effective at promoting AI than at limiting it. They reinforce the assumption that such powerful AI is just around the corner; they hype up AI, and the likely result is more money and resources being poured into it.
Imagine if the A-bomb had been openly developed. Which title would have contributed more to funding and research: "The A-bomb (is terribly powerful and) should not be developed", or "The A-bomb will never work"? Except the A-bomb did work, and in a surprisingly short time, while autonomous AGI is still conjecture.
This is quite concerning, given that the authors are all affiliated with Hugging Face. Hopefully they won't start censoring which models you can or can't upload just because they believe certain things shouldn't be developed.