AI agents that can call external tools look a lot like workflow engines. Both move work from “Step A” to “Step B.” The difference is in the steering wheel: a workflow engine follows hard‑coded lanes, while an agent can improvise. Tell the agent to “summarize this report and email the highlights,” and it decides which tool to grab next—no rigid flowchart required.

That’s why the agent‑vs‑tool debate often gets messy. Take Google’s A2A pitch versus the earlier MCP pattern. A2A tries to label “agents” as entities that plan and reason, while MCP casts those same capabilities as just another layer of tooling. In practice the boundary is more marketing than material—the moment a tool chain makes decisions, it’s wearing an agent’s hat.

So how do you ship something useful without vanishing into taxonomy debates? Start with one agent in charge of a well‑chosen toolkit. You can validate prompts, error handling, and observability before unleashing a flock of agents and the orchestration headaches that come with them.

When should you graduate to a fleet of specialized agents? Think cognitive load. People fumble when their to‑do list mushrooms; a single agent’s reasoning also degrades as its context window fills with unrelated tools and divergent tasks. Once your “kitchen‑sink” agent juggles customer support, data cleaning, and infra ops, it’s time to spawn new agents dedicated to each domain. Smaller, purpose‑built agents keep context tight, reduce hallucinations, and make troubleshooting saner.

Bottom line: Begin with one agent plus many tools. Split the work into multiple agents when the variety—not just the volume—of tasks starts tripping the original up.
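The “one agent, many tools” starting point can be sketched in a few lines. This is a minimal illustration, not a real framework API: the `Agent` class, its tool registry, and the keyword-based `choose_tool` are all stand-ins (in practice that decision is an LLM call — the dynamic part the comment describes).

```python
from typing import Callable, Dict


class Agent:
    """A single agent that owns a registry of tools and picks one per request."""

    def __init__(self) -> None:
        self.tools: Dict[str, Callable[[str], str]] = {}

    def register(self, name: str, fn: Callable[[str], str]) -> None:
        self.tools[name] = fn

    def choose_tool(self, request: str) -> str:
        # Stand-in for the LLM's dynamic decision: crude keyword routing.
        for name in self.tools:
            if name in request.lower():
                return name
        return "summarize"  # fall back to a default tool

    def run(self, request: str) -> str:
        return self.tools[self.choose_tool(request)](request)


agent = Agent()
agent.register("summarize", lambda r: "summary of: " + r)
agent.register("email", lambda r: "email sent about: " + r)
print(agent.run("email the highlights"))  # routed to the email tool
```

Splitting into multiple agents later is then mostly a matter of partitioning the registry, so each agent carries only the tools relevant to its domain.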
You nailed the buzz around AI and automation. I laugh every time I read “AI will automate low-value tasks.”
As you mentioned, hard-coded tool chains existed long before LLMs went mainstream, and we don’t need LLMs for those. But LLMs are great at making dynamic decisions, and that’s where the magic happens. Thanks for sharing some tips on building agents.