The article starts out as if it's headed for "and that's how we did it". But no. There's no implementation.<p><i>"Imagine a virtual team of AI agents, each with its own specialism in the workflow, collaborating to solve problems and make decisions just like a human team would."</i><p>OK. Where does that go?
So far, multi-agent systems have delegated only simple, well-bounded tasks, such as "fetch the weather info for Outer Nowhere", "check airline schedules for flights from JFK to ORD", or even "what is 25% of $50". Those questions are inexpensive to answer and don't need much management. If the subagents are complex, they will need management, and probably budgeting.
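A minimal sketch of what "budgeting" for subagents might look like: the manager hands each subagent a task with a spending cap and tracks what's left. All the names, and the flat per-call cost model, are invented here for illustration; no real framework is assumed.

```python
from dataclasses import dataclass

# Hypothetical sketch of budgeted delegation, in integer cents to avoid
# float rounding. Names and the cost model are invented for illustration.

COST_PER_CALL_CENTS = 1  # assumed flat cost per subagent call

@dataclass
class Task:
    prompt: str
    budget_cents: int  # max this delegation is allowed to spend

class Subagent:
    def run(self, task: Task) -> tuple[str, int]:
        # Stand-in for a real model call; returns (answer, cents spent).
        if COST_PER_CALL_CENTS > task.budget_cents:
            raise RuntimeError("budget exhausted before any useful work")
        return f"answer to: {task.prompt}", COST_PER_CALL_CENTS

class Manager:
    def __init__(self, total_budget_cents: int):
        self.remaining_cents = total_budget_cents

    def delegate(self, prompt: str, subagent: Subagent) -> str:
        # Give the subagent at most a fixed slice of what's left.
        task = Task(prompt, min(5, self.remaining_cents))
        answer, spent = subagent.run(task)
        self.remaining_cents -= spent
        return answer

manager = Manager(total_budget_cents=10)
print(manager.delegate("what is 25% of $50", Subagent()))
print(f"remaining: {manager.remaining_cents} cents")
```

Even this toy version surfaces the management questions: who sets the slice size, and what the subagent should do when the cap is too small for the task.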
Subagents need to know when to stop and when to approximate. If the subagents are themselves generative AI systems, there's potential for hallucination at the lower levels, generating info that the higher levels take as valid. Subagents also need to be able to query their managers - "is this enough detail?" is a reasonable question to pass upwards. They may need to talk to their peer agents.<p>Now you have all the problems of organizational dynamics within a multi-agent AI system.<p>I look forward to reading papers with titles such as:<p>- "Teams of generative AI agents for coding - scrum or waterfall?"<p>- "Span of control - how many subagents should an agent manage?"<p>- "Does the agent org chart influence the solution too much?"<p>- "Resolving disagreements between specialized subagents".<p>That's where this is going. It has to. Once you start to cut a problem into pieces to be handled by different units, all those problems arise.
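The "is this enough detail?" upward query could be sketched as a callback the subagent invokes before spending more effort, with a hard round cap so it also knows when to stop and approximate. Again, the message shapes and names are entirely hypothetical:

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical sketch of a subagent querying its manager. The Query type,
# the round cap, and the manager policy are all invented for illustration.

@dataclass
class Query:
    question: str  # e.g. "is this enough detail?"
    draft: str     # the work-in-progress answer

def subagent(task: str, ask_manager: Callable[[Query], bool]) -> str:
    draft = f"rough answer to: {task}"
    for attempt in range(3):  # hard cap: know when to stop
        if ask_manager(Query("is this enough detail?", draft)):
            return draft
        draft += " [more detail]"
    return draft  # out of rounds: approximate with what we have

# One possible manager policy: accept once the draft is long enough.
def manager_policy(q: Query) -> bool:
    return len(q.draft) > 40

print(subagent("flights from JFK to ORD", manager_policy))
```

The interesting failure modes live in the policy: a manager that always says "not enough" burns the budget, and one that always says "enough" rubber-stamps hallucinated drafts - exactly the organizational dynamics the paper titles above gesture at.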