A few points that are getting overlooked:<p>- OpenAI is buying Windsurf and probably did diligence on these models before deciding to acquire them.<p>- Windsurf may have collected valuable data from its users that is helpful for training a coding-focused AI model. That data could give OpenAI a six-month lead, which is probably worth the $3B.<p>- Even if Windsurf's frontier models are not better than other models for coding overall, if they excel in a few key areas that would justify significant investment in their methodology (see points above).<p>- There are still areas of coding where even the top frontier models falter and that seem ripe for improvement via more careful training. Notably: making the model better at working within a particular framework and version, programming language version, etc. Also, better support for more obscure languages and libraries/versions, and the ability to "lock in" on the versions the developer is actually using. I've wasted a lot of time trying to convince OpenAI models to use OpenAI's own latest Python API: even when given the docs and explicit constraints to use the new API, OpenAI's frontier models routinely (and incorrectly) rewrite my code to use old API conventions, and even methods that have been removed!<p>Consider that basic competency at building a frontier coding model is likely one of the biggest opportunities in AI right now (second to reasoning, and in my opinion tied with image analysis and generation). An LLM that could both reason and code accurately could read a chapter of a textbook and produce a 3D animation illustrating all of its concepts as a one-shot exercise. We are far from that at present, even with OpenAI's best models.
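To make the API-regression complaint concrete: the pre-1.0 `openai.ChatCompletion.create` and `openai.Completion.create` calls were removed in `openai>=1.0` in favor of client-based calls like `client.chat.completions.create`. A minimal, hypothetical lint-style guard (my own sketch, not any vendor's tooling) could flag generated patches that reintroduce the removed surface:

```python
import re

# Hypothetical guard (my own sketch, not Windsurf's or OpenAI's tooling):
# flag generated patches that reintroduce the pre-1.0 OpenAI Python API,
# whose module-level classes were removed in openai>=1.0.
DEPRECATED_PATTERNS = [
    r"\bopenai\.ChatCompletion\b",  # removed; now client.chat.completions.create
    r"\bopenai\.Completion\b",      # removed; now client.completions.create
]

def uses_deprecated_openai_api(source: str) -> bool:
    """Return True if the snippet touches the removed pre-1.0 API surface."""
    return any(re.search(p, source) for p in DEPRECATED_PATTERNS)

# The style models regress to, vs. the style the current docs describe:
old_style = "openai.ChatCompletion.create(model='gpt-4', messages=msgs)"
new_style = "client.chat.completions.create(model='gpt-4o', messages=msgs)"
print(uses_deprecated_openai_api(old_style))  # True
print(uses_deprecated_openai_api(new_style))  # False
```

A check like this run over model output would at least catch the regression automatically, rather than relying on prompt constraints the model ignores.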
So because they need a better business model, they will try to move users to weaker models than the best available? This "AI inside the editor" thing makes less sense every day, on several dimensions: it traps you in the accept, accept, accept loop; it makes your interaction with the LLM too much about the code and too little about the design itself; and it prevents what many of us do: hold subscriptions to the three top LLMs (it's $60 for all three, after all) and use each for what it does best. And, by default, write your own code unaided when an LLM isn't needed in a given moment.
I'm confused why they are working on their own frontier models if they are going to be bought by OpenAI anyway. I guess this is something they were working on before the announcement?
> Enabled from the insight from our heavily-used Windsurf Editor, we got to work building a completely new data model (the shared timeline) and a training recipe that encapsulates incomplete states, long-running tasks, and multiple surfaces.<p>This data is very valuable if you're trying to create fully automated SWEs, whereas most foundation-model providers have probably been scraping together second-hand data to simulate long-horizon engineering work. Cursor probably has far more of this data, and I wonder how Microsoft's own Copilot is doing (and how, or whether, they share this data with the foundation-model providers)...
Two takes here, one cynical and one optimistic.<p>Cynical take: describing yourself as a full-stack AI IDE company sounds very investable in a "what if they're right" kind of way. They can plausibly ask for higher valuations, etc.<p>Optimistic take: fine-tuning a model for their use case (incomplete code snippets with a very specific data model of context) should work; by their own claims, it already has. It certainly sounds plausible that fine-tuning a frontier model would make it better for their needs. Whether it's reasonable to go beyond fine-tuning into pre-training, I don't know. If I remember correctly, they were a model company before the Windsurf editor, so they have the skill set.<p>Bonus take: doesn't this mean they're basically training on user data gathered at scale?
It was only a matter of time: they have too much good data not to train their own models, not to mention that Claude API calls were probably killing their profitability.<p>An open-source alternative: <a href="https://huggingface.co/SWE-bench/SWE-agent-LM-32B" rel="nofollow">https://huggingface.co/SWE-bench/SWE-agent-LM-32B</a><p>Though I haven't been able to find an MLX quant of it that wasn't completely broken.
It's a shame that my development work requires a specific VS Code extension (for a domain-specific language used in ERP systems), so my options are limited to VS Code + Copilot or Cursor.