
How has DeepSeek improved the Transformer architecture?

258 points by superasn, 3 months ago

5 comments

juancn, 3 months ago
The compute-scheduling part of the paper is also very good: the way they balanced load to keep compute and communication in check.

There is also a lot of thought put into all the tiny bits of optimization to reduce memory usage, using FP8 effectively without significant loss of precision or dynamic range.

None of the techniques by themselves are really mind-blowing, but the whole of it is very well done.

The DeepSeek-V3 paper is really a good read: https://github.com/deepseek-ai/DeepSeek-V3/blob/main/DeepSeek_V3.pdf
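The FP8 point is worth unpacking: the usual way to use 8-bit floats without losing precision or dynamic range is fine-grained scaling, i.e. quantizing in small blocks that each carry their own scale factor, so a single outlier can't flatten the resolution of everything around it. Below is a rough NumPy emulation of that idea; the 128-wide blocks and the e4m3 range are assumptions based on the paper's description, and real FP8 kernels cast to a hardware dtype rather than rounding like this.

```python
import numpy as np

E4M3_MAX = 448.0  # largest finite value of FP8 e4m3 (assumed format)
BLOCK = 128       # per-block scaling granularity (assumed, per the paper)

def quantize_blockwise(x: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Emulate FP8 quantization with one scale per block of values."""
    blocks = x.reshape(-1, BLOCK)
    # Each block gets its own scale, so outliers only hurt their own block.
    scales = np.maximum(np.abs(blocks).max(axis=1, keepdims=True), 1e-12) / E4M3_MAX
    q = np.clip(blocks / scales, -E4M3_MAX, E4M3_MAX)
    # Crude stand-in for e4m3 rounding; a real kernel casts to an FP8 dtype.
    return np.round(q * 8) / 8, scales

def dequantize(q: np.ndarray, scales: np.ndarray) -> np.ndarray:
    return (q * scales).reshape(-1)

x = (np.random.randn(4 * BLOCK) * 10).astype(np.float32)
q, s = quantize_blockwise(x)
print("max abs error:", np.abs(dequantize(q, s) - x).max())
```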
ilaksh, 3 months ago
Why is it that the larger models are better at understanding and following more and more complex instructions, and generally just smarter?

With DeepSeek we can now run on non-GPU servers with a lot of RAM. But surely quite a lot of the 671 GB or whatever is knowledge that is usually irrelevant?

I guess what I am sort of thinking of is something like a model that comes with its own built-in vector DB and search as part of every inference cycle.

But I know that there is something about the larger models that is required for really intelligent responses. Or at least that is how it seems, because smaller models are just not as smart.

If we could figure out how to change things so that you would rarely need to update the background knowledge during inference, and most of that could live on disk, it would make this dramatically more economical.

Maybe a model could have retrieval built in, and be trained to reduce the number of retrievals the longer the context gets. Something like the sketch below.
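A toy version of that retrieval-every-cycle idea: a memory class stands in for an on-disk vector index, and each "inference cycle" first looks up background knowledge, then extends the context. The encoder and the generation step are stubs, and none of the names correspond to any real system's API.

```python
import numpy as np

DIM = 64  # embedding size, arbitrary for the sketch

def embed(text: str) -> np.ndarray:
    # Stub encoder: deterministic pseudo-embedding seeded by the text hash.
    # A real system would use a learned embedding model here.
    g = np.random.default_rng(abs(hash(text)) % 2**32)
    return g.standard_normal(DIM)

class DiskBackedMemory:
    """Stand-in for an on-disk vector index of background knowledge."""

    def __init__(self, passages: list[str]):
        self.passages = passages
        keys = np.stack([embed(p) for p in passages])
        self.keys = keys / np.linalg.norm(keys, axis=1, keepdims=True)

    def lookup(self, query: np.ndarray, k: int = 2) -> list[str]:
        query = query / np.linalg.norm(query)
        scores = self.keys @ query  # cosine similarity against all passages
        return [self.passages[i] for i in np.argsort(-scores)[:k]]

def generate_with_retrieval(prompt: str, memory: DiskBackedMemory, steps: int = 3) -> str:
    # Knowledge lives in the store, not the weights: every cycle retrieves
    # first, so the model itself could stay small.
    context = prompt
    for _ in range(steps):
        hits = memory.lookup(embed(context))
        context += "\n[retrieved] " + " | ".join(hits)
        context += "\n[model continues here]"  # a real LM step would go here
    return context

memory = DiskBackedMemory([
    "Transformers use attention over the whole context.",
    "FP8 training cuts memory use roughly in half vs BF16.",
    "Mixture-of-experts routes each token to a few experts.",
])
print(generate_with_retrieval("How does DeepSeek save memory?", memory))
```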
whimsicalism, 3 months ago
none of these techniques except MLA are new
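For readers who haven't run into it: MLA is multi-head latent attention, the DeepSeek-V2/V3 change where the model caches one low-rank latent per token and up-projects per-head keys and values from it, instead of caching full K and V. A stripped-down, single-token sketch of that compression follows; the dimensions are illustrative, and the decoupled RoPE key path of the real design is omitted.

```python
import numpy as np

d_model, n_heads, d_head, d_latent = 512, 8, 64, 128  # illustrative sizes
rng = np.random.default_rng(0)

# Shared down-projection to the KV latent, plus per-head up-projections.
W_dkv = rng.standard_normal((d_latent, d_model)) / np.sqrt(d_model)
W_uk = rng.standard_normal((n_heads, d_head, d_latent)) / np.sqrt(d_latent)
W_uv = rng.standard_normal((n_heads, d_head, d_latent)) / np.sqrt(d_latent)

def kv_from_latent(h: np.ndarray):
    """Compress one token's hidden state, then recover per-head K and V."""
    c = W_dkv @ h  # only this latent goes into the KV cache
    k = np.einsum("hdl,l->hd", W_uk, c)  # per-head keys
    v = np.einsum("hdl,l->hd", W_uv, c)  # per-head values
    return c, k, v

h = rng.standard_normal(d_model)
c, k, v = kv_from_latent(h)
print("cached floats per token:", c.size, "vs standard MHA:", 2 * n_heads * d_head)
```

The point of the down/up split is the cache: d_latent floats per token instead of 2 * n_heads * d_head, which is where the inference-time memory savings come from.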
doener, 3 months ago
I hate it so much that HN automatically removes some words from headlines, like "how". You can add them back for a while after posting by editing the headline, though.
1970-01-01, 3 months ago
Has DeepSeek tackled the very weird hallucination problem? Reducing hallucinations now seems to be the one remaining fundamental issue that needs scientific research; everything else feels like an engineering problem.