The benchmarks compare it favorably to GPT-4-turbo, but not to GPT-4o, and the latest versions of GPT-4o are much higher in quality than GPT-4-turbo. The HN title here does not reflect what the article is saying.<p>That said, the conclusion that it's a good model for cheap is true. I'd just be hesitant to call it a great model.
Why say comparable when GPT-4o is not included in the comparison table? (Neither is the interesting Sonnet 3.5.)<p>Here's an Aider leaderboard with the interesting models included: <a href="https://aider.chat/docs/leaderboards/" rel="nofollow">https://aider.chat/docs/leaderboards/</a> Strangely, V2.5 sits below the old V2 Coder. Maybe that means we can count on a V2.5 Coder being released?
In my experience, DeepSeek is my favourite model for coding tasks. It's not as smart an assistant as 4o or Sonnet, but it has outstanding task adherence, its code quality is consistently top-notch, and it's never lazy. Unlike GPT-4o or the new Sonnet (yuck), it doesn't try to be too smart for its own good, which actually makes it easier to work with on projects. The main downside is that it has a problem with looping, where it gets some concept stuck in its context and refuses to move on from it. But if you remember the old GPT-4 (pre-turbo) days, this is really not a problem: just start a new chat.
It’s interesting to see a Chinese LLM like DeepSeek enter the global stage, particularly given the backdrop of concerns over data security with other Chinese-owned platforms, like TikTok. The key question here is: if DeepSeek becomes widely adopted, will we see a similar wave of scrutiny over data privacy?<p>With TikTok, concerns arose partly because of its reach and the vast amount of personal information it collects. An LLM like DeepSeek would arguably have even more potential to gather sensitive data, especially as these models can learn from and remember interaction patterns, potentially accessing or “training” on sensitive information users might input without thinking.<p>The challenge is that we’re not yet certain how much data DeepSeek would retain and where it would be stored. For countries already wary of data leaving their borders or being accessible to foreign governments, we could see restrictions or monitoring mechanisms placed on similar LLMs—especially if companies start using these models in environments where proprietary information is involved.<p>In short, if DeepSeek or similar Chinese LLMs gain traction, it’s quite likely they’ll face the same level of scrutiny (or more) that we’ve seen with apps like TikTok.
This 236B model came out around September 6th.<p>DeepSeek-V2.5 is an upgraded version that combines DeepSeek-V2-Chat and DeepSeek-Coder-V2-Instruct.<p>From: <a href="https://huggingface.co/deepseek-ai/DeepSeek-V2.5" rel="nofollow">https://huggingface.co/deepseek-ai/DeepSeek-V2.5</a>
<a href="https://www.youtube.com/watch?v=OW-reOkee1Y" rel="nofollow">https://www.youtube.com/watch?v=OW-reOkee1Y</a> (sorry for the shitty source)<p>A word of advice on advertising low-cost alternatives.<p>'The weaknesses make your low cost believable. [..] If you launched Ryan Air and you said we are as good as British Airways but we are half the price, people would go "it does not make sense"'
In my NYT Connections benchmark, it hasn't performed well: <a href="https://github.com/lechmazur/nyt-connections/">https://github.com/lechmazur/nyt-connections/</a> (see the table).
I run it at home at q8 on my dual Epyc server. I find it to be quite good, especially when you host it locally and are able to tweak all the settings to get the kind of results you need for a particular task.
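For anyone curious what a local q8 setup might look like: a hypothetical sketch using llama.cpp's llama-server with a GGUF quant (the model filename, thread count, and sampling settings here are assumptions, not the commenter's actual config; flag names can vary by llama.cpp version).

```shell
# Hypothetical sketch: serving a q8 GGUF quant of DeepSeek-V2.5 with
# llama.cpp's llama-server on a CPU-only dual-Epyc box.
# A 236B model at q8 needs on the order of 250 GB of RAM.
./llama-server \
  -m DeepSeek-V2.5-Q8_0.gguf \
  --ctx-size 8192 \
  --threads 64 \
  --temp 0.2 \
  --port 8080
```

Once it's up, any OpenAI-compatible client can point at `http://localhost:8080` and you can tweak sampling per task, which is the "tune the settings" part.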
tl;dr: not even close to closed-source text-only models, and a lightyear behind on the other three senses these multimodal ones have had for a year<p>Just a personal benchmark I follow; the UX on locally run stuff has diverged vastly.
Sadly it's just as useless as OpenAI's models, because the terms of use read: "3.6 You will not use the Services for the following improper purposes: 4) Using the Services to develop other products and services that are in competition with the Services (unless such restrictions are illegal under relevant legal norms)."<p>For the billionth time, there are zero products and services that are NOT in competition with general intelligence. Therefore, this kind of clause simply begs for malicious compliance…go use something else.