Anthropic has a good definition of this one [1] if you want a more comprehensive view.

I've been building something along the same lines [2]. I'd define an agent as a piece of software that can autonomously reason over contextual information, follow a non-predefined path to achieve an outcome, and self-correct.

Most of the "agents" people build today have their control flow encoded in some kind of graph. I don't think this will yield useful results as reasoning capability improves. I think that setting the constraints via tool calling and letting the control flow be dynamic (with a human in the loop) is the way to go.

[1] https://www.anthropic.com/research/building-effective-agents

[2] https://www.inferable.ai/blog/posts/functions-as-ai-agents
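To make the contrast concrete, here is a minimal sketch of the second approach: the agent's only constraints are a registry of tools, and the model (stubbed out here as a hypothetical `llm_choose_action` function) decides at each step which tool to call or whether it's done, rather than following a pre-defined graph. All names and tools are illustrative assumptions, not anything from the linked posts.

```python
from typing import Callable

# Constraint surface: the agent can only act through these registered tools.
# The model, not a hard-coded graph, decides which to call and in what order.
TOOLS: dict[str, Callable[[str], str]] = {
    "search": lambda query: f"results for {query}",
    "summarize": lambda text: text[:80],
}

def llm_choose_action(context: list[str]) -> tuple[str, str]:
    """Stand-in for a model call returning (tool_name, argument), or
    ("done", answer) when the model judges the goal is met.
    A real implementation would send `context` plus tool schemas to an LLM."""
    if not any(line.startswith("search:") for line in context):
        return ("search", context[0])
    return ("done", context[-1])

def run_agent(goal: str, max_steps: int = 10) -> str:
    """Dynamic control flow: loop until the model says 'done' or we hit a
    step budget. A human-in-the-loop check could gate each tool call here."""
    context = [goal]
    for _ in range(max_steps):
        tool, arg = llm_choose_action(context)
        if tool == "done":
            return arg
        context.append(f"{tool}: {TOOLS[tool](arg)}")
    return context[-1]
```

The point of the sketch is that the execution path emerges from the model's choices each turn; the only fixed structure is the tool registry and the step budget.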