Like many of you, I frequently write little Python scripts, create Jupyter notebooks, etc., to get work done. LLMs are often part of the process (generating copy, analyzing data, and so on), but I almost never do anything more involved than programmatically building good prompts and calling ChatGPT/Claude/etc. (and, of course, using the chat interfaces directly to assist with various tasks).

But I'm worried I'm falling into pg's "Blub trap" by not understanding what's possible:

> ...when our hypothetical Blub programmer looks in the other direction, up the power continuum, he doesn't realize he's looking up. What he sees are merely weird languages. He probably considers them about equivalent in power to Blub, but with all this other hairy stuff thrown in as well. Blub is good enough for him, because he thinks in Blub.

Am I missing out by "asking too little" of LLMs? Are there productivity gains and new capabilities I could harness through agentic workflows and other, more complex LLM functionality? What libraries and tools are folks reaching for to get the most out of LLMs in their everyday work?
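For concreteness, here is roughly what I understand "agentic" to mean beyond one-shot prompting: instead of building a prompt and taking the first reply, the model is called in a loop and can request tool invocations until it decides it is done. This is only a hypothetical sketch — `call_llm` is a stub standing in for a real chat-completion API, and the tool names are made up — but it shows the loop structure:

```python
# Minimal sketch of an agentic loop. `call_llm` is a stub standing in
# for a real chat-completion call (OpenAI, Anthropic, etc.); here it
# fakes a model that requests one tool call, then gives a final answer.

def call_llm(messages):
    # A real implementation would send `messages` to a model API.
    if not any(m["role"] == "tool" for m in messages):
        # Pretend the model asked to run a tool on the user's text.
        return {"tool": "word_count", "args": {"text": messages[0]["content"]}}
    # A tool result is present, so pretend the model answers using it.
    return {"answer": f"The text has {messages[-1]['content']} words."}

# Hypothetical tool registry the "model" can draw on.
TOOLS = {"word_count": lambda text: str(len(text.split()))}

def run_agent(user_input, max_steps=5):
    messages = [{"role": "user", "content": user_input}]
    for _ in range(max_steps):
        reply = call_llm(messages)
        if "answer" in reply:            # model says it is finished
            return reply["answer"]
        tool = TOOLS[reply["tool"]]      # model requested a tool call
        result = tool(**reply["args"])
        messages.append({"role": "tool", "content": result})
    raise RuntimeError("agent did not finish within max_steps")

print(run_agent("agentic workflows loop until done"))  # → The text has 5 words.
```

Is this the kind of structure the frameworks people recommend are wrapping, or is there more to it than a tool-use loop?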