Is this basically an LLM that has tools automatically configured so I don't have to handle that myself? Or am I not understanding it correctly? As in, do I just make standard requests, but the LLM does more work than normal before sending me a response? Or do I get a response for every step? (My guess at the flow is sketched below.)
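To make the question concrete, here's roughly the flow I'm picturing: the provider runs the tool loop server-side and I get back one final response, with the intermediate steps maybe attached. Everything below is made up for illustration; run_agent, ToolCall, and AgentResponse are not Mistral's actual API:

    # Hypothetical sketch of the flow I'm imagining; none of these
    # names are Mistral's real API.
    from dataclasses import dataclass, field

    @dataclass
    class ToolCall:
        name: str        # which tool the model chose
        arguments: dict  # the arguments the model filled in
        result: str      # what the tool returned

    @dataclass
    class AgentResponse:
        final_answer: str  # the single reply I get back
        steps: list = field(default_factory=list)  # tool rounds, if surfaced

    def run_agent(prompt: str) -> AgentResponse:
        # Pretend the provider ran one tool round server-side before answering.
        step = ToolCall(name="web_search", arguments={"query": prompt}, result="...")
        return AgentResponse(final_answer="answer built from tool results", steps=[step])

    resp = run_agent("standard request")
    print(resp.final_answer)  # one response...
    print(len(resp.steps))    # ...with the intermediate steps attached, maybe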
The "My MCPs" button looks very promising.<p>I was looking around at Le Chat, a thing I haven't done in months, and I thought that they've really worked on interesting stuff in interesting ways.<p>The ability to enrich either a chat or generally an agent with one or more libraries has been solved in a very friendly way. I don't think OpenAI nor Anthropic have solved it so well.
OK, I'm behind the times on MCP implementation, so I'd appreciate a sanity check: the appeal of this feature is that you can pass off the "when to call which MCP endpoint and with what" logic to Mistral, rather than implementing it yourself? If so, I'm not sure I completely understand why I'd want a model-specific, remote solution for this rather than a single local library, since in theory this logic should be the same for any given LLM/MCP toolset pairing. Just simpler?

It certainly looks easy to implement, I will say that! Docs halfway down the page: https://docs.mistral.ai/agents/mcp/
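For reference, here's a rough sketch of the local, model-agnostic version I have in mind, using the official mcp Python SDK on the client side. llm_complete and my_mcp_server.py are hypothetical stand-ins, and this is a sketch of the general loop, not anyone's actual implementation:

    # Local orchestration: the "when to call which MCP tool and with what"
    # loop, kept model-agnostic. Client side uses the official `mcp` SDK;
    # `llm_complete` is a hypothetical stand-in for any chat-completions API.
    import asyncio
    from mcp import ClientSession, StdioServerParameters
    from mcp.client.stdio import stdio_client

    def llm_complete(messages, tools):
        """Hypothetical: send messages plus tool schemas to your LLM of
        choice; return final text, or a (tool_name, arguments) pair."""
        raise NotImplementedError

    async def run(prompt: str) -> str:
        params = StdioServerParameters(command="python", args=["my_mcp_server.py"])
        async with stdio_client(params) as (read, write):
            async with ClientSession(read, write) as session:
                await session.initialize()
                tools = (await session.list_tools()).tools  # advertise to the model
                messages = [{"role": "user", "content": prompt}]
                while True:
                    reply = llm_complete(messages, tools)
                    if isinstance(reply, str):    # model produced a final answer
                        return reply
                    tool_name, arguments = reply  # model picked a tool
                    result = await session.call_tool(tool_name, arguments)
                    messages.append({"role": "tool", "content": str(result.content)})

    # asyncio.run(run("example request"))

Nothing in that loop is Mistral-specific: the model only ever sees tool schemas and returns either text or a (tool, arguments) choice, which is why I'd expect one local library to cover any model.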
Whoever made those embedded videos, here's some feedback; take it if you want, it's free:

1) It's really hard to follow some of the videos, because you're just copy-pasting the prompts for your agents into the chat, and the generated output then appears and hides the prompts. Instead, put the prompt text in an overlay or subtitle so we know what you're doing.

2) The clicking sound of you copy-pasting and typing is not ASMR; please just mute it next time.

3) Please zoom into the text more; not everyone has 20/20 super vision, 4K style.