Article answered "why LangGraph over roll-your-own", but failed to address "why LangGraph" in the broader sense.<p>All of the points made here are also true for Mastra, for example.<p><pre><code> > One pain point has been documentation. The framework is developing very quickly and the docs are sometimes incomplete or out of date
</code></pre>
I also found this to be the case when working with Microsoft's Semantic Kernel in the early days. Thankfully, they had a lot of examples and <i>integration tests</i> demonstrating usage.<p>Where's the AI startup using LLMs to automatically generate docs, sample code, and guides for libraries?
When building complex multi-agent systems where each agent has its own tools, prompt, persona, etc., I've found LangGraph to be better (and easier to work with) than AWS Bedrock and OpenAI's Agent framework.
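To make the pattern concrete: a minimal, framework-agnostic sketch of "each agent bundles its own tools, prompt, and persona". This is not LangGraph's actual API; the `Agent` class and its fields are hypothetical, just illustrating the structure the comment describes.

```python
from dataclasses import dataclass, field
from typing import Callable

# Hypothetical sketch (not LangGraph's API): each agent carries its
# own persona, system prompt, and tool registry.
@dataclass
class Agent:
    name: str
    persona: str
    system_prompt: str
    tools: dict[str, Callable[[str], str]] = field(default_factory=dict)

    def run_tool(self, tool_name: str, arg: str) -> str:
        # Dispatch to this agent's private tool set.
        return self.tools[tool_name](arg)

researcher = Agent(
    name="researcher",
    persona="meticulous analyst",
    system_prompt="You research claims and cite sources.",
    tools={"search": lambda q: f"results for {q!r}"},
)
print(researcher.run_tool("search", "LangGraph docs"))
```

A graph framework then wires agents like these together as nodes; the value of LangGraph (or Bedrock, or OpenAI's framework) is in that orchestration layer, not in the per-agent bundle itself.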
We explored LangGraph last November and were pleasantly surprised by the difference from LangChain. The framework had much more care put into it. It was much easier to iterate on, and the final solutions felt less brittle.<p>But the pricing model and deployment story felt odd. The business model around LangGraph reminded us of Next.js/Vercel, with solid vendor lock-in and every cent squeezed out of the solution. The lack of clarity on that front made us go with Pydantic AI.
> Testing and mocking is a huge challenge when developing LLM driven systems that aren’t deterministic. Even relatively simple flows are extremely hard to reproduce.<p>This is by far the most frustrating part of building with LLMs. Is there any good solution out there for any framework?
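The usual workaround, regardless of framework, is to inject the model client as a dependency so tests can substitute a deterministic fake. A minimal sketch, assuming a hypothetical `llm.complete()` interface (the `summarize` function and client shape are illustrative, not any particular framework's API):

```python
from unittest.mock import MagicMock

# Hypothetical agent step that takes the LLM client as a parameter,
# so tests can pass a deterministic stand-in instead of a real model.
def summarize(llm, text: str) -> str:
    return llm.complete(f"Summarize: {text}").strip()

def test_summarize_is_deterministic():
    fake_llm = MagicMock()
    fake_llm.complete.return_value = "  A short summary.  "
    # The fake always returns the same string, so the assertion is stable.
    assert summarize(fake_llm, "long document") == "A short summary."
    fake_llm.complete.assert_called_once_with("Summarize: long document")

test_summarize_is_deterministic()
```

This only tests the plumbing around the model, not the model's behavior itself; for the latter, people tend to record real responses and replay them, or run LLM-as-judge evals, neither of which fully solves the reproducibility problem the comment raises.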