Mistral really achieved what all the other over-hyped EU AI start-ups / collectives (Stability, Eleuther, Aleph Alpha, Nyonic, possibly Black Forest Labs, government-funded collaborations, ...) failed to, even though many of them existed long before Mistral. Congrats to them, great work.
I think this is a game changer, because data privacy is a legitimate concern for many enterprise users.<p>Btw, you can also run Mistral locally with Docker Model Runner on a Mac.
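A minimal sketch of the local route mentioned above, assuming Docker Desktop with Model Runner enabled and an `ai/mistral` image on Docker Hub (the exact image name/tag is an assumption; check Docker Hub before running):

```shell
# Hedged sketch: requires Docker Desktop with Model Runner enabled.
# `ai/mistral` is the assumed image name; verify the exact tag on Docker Hub.
if command -v docker >/dev/null 2>&1; then
  docker model pull ai/mistral                       # download the model weights
  docker model run ai/mistral "Summarize GDPR in one sentence."  # one-shot local prompt
else
  echo "Docker not found; install Docker Desktop first"
fi
```

The guard keeps the script harmless on machines without Docker; the model runs entirely locally, which is the data-privacy point.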
Not quite following. It seems to talk about features commonly associated with local servers, but then ends with "available on GCP".<p>Is this an API endpoint? A model enterprises deploy locally? A piece of software plus a local model?<p>There is so much corporate synergy speak there I can’t tell what they’re selling
This announcement accompanies the new and proprietary Mistral Medium 3, being discussed at <a href="https://news.ycombinator.com/item?id=43915995">https://news.ycombinator.com/item?id=43915995</a>
This is so fast it took me by surprise. I'm used to waiting for ages until the response is finished on Gemini and ChatGPT, but this is instantaneous.
While I am rooting for Mistral, having access to a diverse set of models is the killer app IMHO. Sometimes you want to code. Sometimes you want to write. Not all models are made equal.
Mistral's other new model, Medium 3, is great too.
Link: <a href="https://newscvg.com/r/yGbLTWqQ" rel="nofollow">https://newscvg.com/r/yGbLTWqQ</a>
Mistral models, though, are not interesting as models: context handling is weak, the language is dry, coding is mediocre. Not sure why anyone would choose them over Chinese (Qwen, GLM, DeepSeek) or American models (Gemma, Command A, Llama).