"Foundational AI companies love this one trick"<p>It's part of why they love agents and tools like cursor -> turns a problem that could've been one prompt and a few hundred tokens into dozens of prompts and thousands of tokens ;)
<i>The bigger picture goal here is to explore using prompts to generate new prompts</i><p>I see this as the same thing as a reasoning loop. This is the approach I use to quickly code up pseudo reasoning loops on local projects. Someone asked in another thread, "How can I get the LLM to generate a whole book?" Well, just like this: if the model can keep prompting itself with "what would chapter N be?" until it emits "THE END", you get your book.
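A minimal sketch of that self-prompting loop. The `generate` function here is a hypothetical stand-in for whatever LLM API you wrap (a canned stub is used so the sketch runs as-is); the loop structure is the point.

```python
def generate(prompt: str) -> str:
    """Stub standing in for an LLM call. A real version would call your
    model API; this one emits three chapters, then signals completion."""
    n = int(prompt.split("chapter ")[1].split(" ")[0])
    if n > 3:
        return "THE END"
    return f"Chapter {n}: ..."

def write_book(max_chapters: int = 100) -> list[str]:
    """Keep asking the model for the next chapter until it says THE END."""
    chapters: list[str] = []
    for n in range(1, max_chapters + 1):
        text = generate(f"What would chapter {n} be?")
        if "THE END" in text:
            break
        chapters.append(text)
    return chapters

book = write_book()
```

In practice you would also feed the accumulated chapters back into each prompt so the model keeps its own context, and `max_chapters` acts as a safety cap in case the model never terminates.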
I love this! My take on it for MCP: <a href="https://github.com/kordless/EvolveMCP">https://github.com/kordless/EvolveMCP</a>
This is kind of like a self-generating agentic context. Cool. I think regular agents, especially adversarial agents, are easier to keep focused on most types of problems, though.<p>Still clever.
I feel that getting LLMs to do things like mathematical problems or citations is often much harder than simply writing software to achieve the same task.