Does anyone have an example of agents like AutoGPT doing something useful? Everything I've seen seems to be stuff that GPT could do anyway without the agent cruft, and iterating seems to multiply the opportunities for LLM bugs and mistakes.

The hype seems to be that agents are an emerging form of AGI, but the fine print is always "it's not quite ready for production yet, it makes a lot of mistakes, I had to fix 20 things in the output, but it's fun to watch and I'm sure once we work the bugs out..."