There are two competing definitions of agents being used in industry.<p><a href="https://www.anthropic.com/engineering/building-effective-agents" rel="nofollow">https://www.anthropic.com/engineering/building-effective-age...</a><p>"- Workflows are systems where LLMs and tools are orchestrated through predefined code paths.<p>- Agents, on the other hand, are systems where LLMs dynamically direct their own processes and tool usage, maintaining control over how they accomplish tasks."<p>What Anthropic calls a "workflow" in the above definition is what most of the big enterprise software companies (Salesforce, ServiceNow, Workday, SAP, etc.) are building and calling AI Agents.<p>What Anthropic calls an "agent" in the above definition is what AI Researchers mean by the term. It's also something that mainly exists in their labs. Real world examples are fairly primitive right now, mainly stuff like Deep Research. That will change over time, but right now the hype far exceeds the reality.
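The distinction is easier to see in code. A hypothetical sketch, where `call_llm`, `search`, and everything else are stand-ins rather than any real API:

```python
def call_llm(prompt):
    # Placeholder for a real model call.
    return {"action": "done", "answer": "stub"}

def search(query):
    return f"results for {query!r}"

# Workflow (Anthropic's sense): the code path is fixed in advance;
# the LLM only fills in each predefined step.
def workflow(ticket):
    summary = call_llm(f"Summarize: {ticket}")
    category = call_llm(f"Categorize: {summary}")
    return call_llm(f"Draft a reply for a {category} ticket: {summary}")

# Agent (Anthropic's sense): the LLM decides, on every turn,
# which tool to call next and when it is finished.
def agent(goal, max_turns=10):
    history = [goal]
    for _ in range(max_turns):
        decision = call_llm("\n".join(history))
        if decision["action"] == "done":
            return decision["answer"]
        history.append(search(decision.get("query", "")))
    return None
```

The enterprise products mentioned above are mostly the first shape: the branching lives in the orchestration code, not in the model.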
I follow Mr. Huang, read/watch his content, and also plan to use PocketFlow in some cases. That's a preamble, because I don't agree with this assessment. I think agents as nodes in a DAG workflow are _an_ implementation of an agentic system, but not the kind of system I most often interact with (e.g. Cursor, Claude + MCP).<p>Agentic systems can be simply LLM + prompting + tools[1]. LLMs are more than capable (especially chain-of-thought models) of breaking problems down into steps, analyzing which tools to use, and then executing the steps in sequence, all with the model in the driver's seat.<p>I think the system described in the post needs a different name. It's a traditional workflow system with an agent operating on individual tasks. It's more rigid in that the workflow is set up ahead of time; typical agentic systems are largely undefined, or defined via prompting. For some use cases this rigidity is a feature.<p>[1] <a href="https://docs.anthropic.com/en/docs/build-with-claude/tool-use/overview" rel="nofollow">https://docs.anthropic.com/en/docs/build-with-claude/tool-us...</a>
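"LLM + prompting + tools" really can be that small. A minimal sketch of the dispatch loop, with a hard-coded `fake_model` standing in for a real chat-completion call (all names here are illustrative, not any SDK's API):

```python
import json

# Tool registry the model can draw from.
TOOLS = {
    "calculator": lambda expr: str(eval(expr, {"__builtins__": {}})),
}

def fake_model(messages):
    # Stand-in for the LLM: it decides the next step itself, returning
    # either a tool call or a final answer as JSON.
    last = messages[-1]["content"]
    if last.startswith("tool_result:"):
        return json.dumps({"final": last.removeprefix("tool_result:")})
    return json.dumps({"tool": "calculator", "input": "2 + 3"})

def run_agent(task, max_steps=5):
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        step = json.loads(fake_model(messages))
        if "final" in step:
            return step["final"]
        # The loop just dispatches; the model stays in the driver's seat.
        result = TOOLS[step["tool"]](step["input"])
        messages.append({"role": "user", "content": f"tool_result:{result}"})
    return None
```

Note there is no predefined path here: the only structure is "ask the model, run what it asked for, feed the result back".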
Hey folks! I just posted a quick tutorial explaining how LLM agents (like OpenAI Agents, Pydantic AI, Manus AI, AutoGPT or PerplexityAI) are basically small graphs with loops and branches. For example:<p>OpenAI Agents: for the workflow logic: <a href="https://github.com/openai/openai-agents-python/blob/48ff99bb736249e99251eb2c7ecf00237488c17a/src/agents/run.py#L119" rel="nofollow">https://github.com/openai/openai-agents-python/blob/48ff99bb...</a><p>Pydantic Agents: organizes steps in a graph: <a href="https://github.com/pydantic/pydantic-ai/blob/4c0f384a0626299382c22a8e3372638885e18286/pydantic_ai_slim/pydantic_ai/_agent_graph.py#L779" rel="nofollow">https://github.com/pydantic/pydantic-ai/blob/4c0f384a0626299...</a><p>Langchain: demonstrates the loop structure: <a href="https://github.com/langchain-ai/langchain/blob/4d1d726e61ed58b39278903262d19bbe9f010772/libs/langchain/langchain/agents/agent_iterator.py#L174" rel="nofollow">https://github.com/langchain-ai/langchain/blob/4d1d726e61ed5...</a><p>If all the hype has been confusing, this guide shows how they actually work under the hood, with simple examples. Check it out!<p><a href="https://zacharyhuang.substack.com/p/llm-agent-internal-as-a-graph-tutorial" rel="nofollow">https://zacharyhuang.substack.com/p/llm-agent-internal-as-a-...</a>
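The "small graph with loops and branches" framing can be boiled down to a toy example (this is my own illustrative sketch, not code from any of the frameworks linked above): each node returns the name of the next node, and a driver loop walks the graph until it hits a terminal node.

```python
def decide(state):
    # Branch: keep gathering info until we have "enough".
    return "act" if len(state["facts"]) < 2 else "answer"

def act(state):
    state["facts"].append(f"fact #{len(state['facts']) + 1}")
    return "decide"  # loop back to the decision node

def answer(state):
    state["result"] = ", ".join(state["facts"])
    return None  # terminal node: stop walking

GRAPH = {"decide": decide, "act": act, "answer": answer}

def run(start="decide"):
    state, node = {"facts": []}, start
    while node is not None:
        node = GRAPH[node](state)
    return state["result"]
```

In a real agent, `decide` is an LLM call and `act` is tool execution, but the graph-plus-loop skeleton is the same.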
It is hard to pin this one down because there are so many things wrong with this definition. There are also agent frameworks that are not rebranded workflow tools. I don't think this article explains anything; it just puts the intended audience back in the same mental box we've been stuck in since the invention of programming, i.e. it does not help.<p>Forget about boxes and deterministic control and start thinking in terms of error tolerance and recovery. That is what agents are all about.
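Concretely, the error-tolerance point is that failures get fed back into the loop instead of aborting a fixed path. A hypothetical sketch (the tools and the hard-coded `fake_model` are stand-ins; a real model would read the error text and choose its own recovery strategy):

```python
def flaky_fetch(url):
    raise TimeoutError(f"timed out fetching {url}")

def cached_fetch(url):
    return f"cached copy of {url}"

TOOLS = {"fetch": flaky_fetch, "fetch_cached": cached_fetch}

def fake_model(history):
    # Stand-in policy: after seeing a failure, fall back to the cache.
    if any(h.startswith("error:") for h in history):
        return ("fetch_cached", "http://example.com")
    return ("fetch", "http://example.com")

def run(task, max_steps=4):
    history = [task]
    for _ in range(max_steps):
        tool, arg = fake_model(history)
        try:
            return TOOLS[tool](arg)
        except Exception as e:
            # Tolerance: surface the failure to the model and keep going.
            history.append(f"error: {e}")
    return None
```

A deterministic workflow would have crashed (or needed an explicitly authored fallback branch) at the first `TimeoutError`; here recovery is just another turn of the loop.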
Everything that was previously just called automation or pipeline processing on top of LLMs is now given the buzzword "agents". The hype bubble needs constant feeding to keep from imploding.
Anthropic[0] and Google[1] are both pushing for a clear definition of an “agent” vs. an “agentic workflow”<p>tl;dr from Anthropic:<p>> Workflows are systems where LLMs and tools are orchestrated through predefined code paths.<p>> Agents, on the other hand, are systems where LLMs dynamically direct their own processes and tool usage, maintaining control over how they accomplish tasks.<p>Most “agents” today fall into the workflow category.<p>The foundation model makers are pushing their new models to be better at the second, “pure” agent, approach.<p>In practice, I’m not sure how effective the “pure” approach will work for most LLM-assisted tasks.<p>I liken it to a fresh intern who shows up with amnesia every day.<p>Even if you tell them what they did yesterday, they’re still liable to take a different path for today’s work.<p>My hunch is that we’ll see an evolution of this terminology, and agents of the future will still have some “guiderails” (note: not necessarily _guard_rails), that makes their behavior more predictable over long horizons.<p>[0]<a href="https://www.anthropic.com/engineering/building-effective-agents" rel="nofollow">https://www.anthropic.com/engineering/building-effective-age...</a><p>[1]<a href="https://www.youtube.com/watch?v=Qd6anWv0mv0" rel="nofollow">https://www.youtube.com/watch?v=Qd6anWv0mv0</a>
Great write-up! In my opinion, your description accurately models what AI agents are doing. The graph could be static or dynamic; either way, it makes sense. Also, thank you for cutting through the hype!
I found it understandable and clear. Pocket Flow looks cool, although that magic with the >> operator seems a bit obtuse... Also, I think "simply" is a trap: an agent might be modeled by a graph, but that graph can be arbitrarily complex.
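For what it's worth, the `>>` magic is just operator overloading via `__rshift__`. A toy version (my own sketch, not Pocket Flow's actual implementation):

```python
class Node:
    def __init__(self, fn):
        self.fn, self.next = fn, None

    def __rshift__(self, other):
        self.next = other  # `a >> b` wires a's successor to b...
        return other       # ...and returns b, so chains extend: a >> b >> c

    def run(self, value):
        value = self.fn(value)
        return self.next.run(value) if self.next else value

double = Node(lambda x: x * 2)
inc = Node(lambda x: x + 1)
double >> inc  # the overloaded operator builds the edge
```

So `a >> b` is sugar for "set b as a's successor"; it reads oddly at first, but it's a few lines of plain Python underneath.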