The compute scheduling part of the paper is also very good: the way they balanced load to keep compute and communication overlapped.

There is also a lot of thought put into all the tiny bits of optimization to reduce memory usage, using FP8 effectively without significant loss of precision or dynamic range.

None of the techniques is mind-blowing by itself, but the whole of it is very well done.

The DeepSeek-V3 paper is really a good read: https://github.com/deepseek-ai/DeepSeek-V3/blob/main/DeepSeek_V3.pdf
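For anyone curious what the FP8 trick looks like in practice, here is a minimal NumPy sketch of block-wise quantization. This is not DeepSeek's actual kernel; the block size and the ml_dtypes dependency are my own choices for illustration. The idea is that each block of weights gets its own scale factor so its values fit within E4M3's narrow dynamic range:

    # Minimal sketch (not DeepSeek's kernel) of block-wise FP8 quantization:
    # each block gets its own scale so values fit E4M3's limited dynamic
    # range (largest finite value ~448) without losing too much precision.
    import numpy as np
    import ml_dtypes  # pip install ml-dtypes; provides NumPy FP8 dtypes

    E4M3_MAX = 448.0  # largest finite value in float8_e4m3fn

    def quantize_blockwise_fp8(x: np.ndarray, block: int = 128):
        """Quantize a 1-D float32 array to FP8 with one scale per block."""
        x = x.reshape(-1, block)
        # Per-block scale maps the block's max magnitude onto the FP8 range.
        scales = np.abs(x).max(axis=1, keepdims=True) / E4M3_MAX
        scales = np.maximum(scales, 1e-12)  # guard against all-zero blocks
        q = (x / scales).astype(ml_dtypes.float8_e4m3fn)
        return q, scales.astype(np.float32)

    def dequantize(q, scales):
        return q.astype(np.float32) * scales

    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.02, size=4096).astype(np.float32)
    q, s = quantize_blockwise_fp8(w)
    err = np.abs(dequantize(q, s).ravel() - w).max()
    print(f"max abs error: {err:.2e}")  # small relative to the weights

The per-block scales are what preserve dynamic range: with a single tensor-wide scale, one outlier would crush everything else toward zero.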
Why is it that larger models are better at understanding and following more and more complex instructions, and generally just smarter?

With DeepSeek we can now run inference on non-GPU servers with a lot of RAM. But surely quite a lot of the 671 GB or whatever is knowledge that is usually irrelevant to the query at hand?

I guess what I am thinking of is something like a model that comes with its own built-in vector DB and does a search as part of every inference cycle.

But I know there is something about the larger models that is required for really intelligent responses. Or at least that is how it seems, because smaller models are just not as smart.

If we could figure out how to change things so that you would rarely need the background knowledge during inference, and most of it could live on disk, that would make this dramatically more economical.

Maybe a model could have retrieval built in, and be trained to reduce the number of retrievals the longer the context gets. Or something like the sketch below.
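A toy sketch of that idea, in case it helps make it concrete. Everything here is hypothetical: DiskVectorStore, the uncertainty signal, and retrieve_threshold all stand in for a trained retriever and a real LLM decode loop.

    # Toy sketch: keep "knowledge" on disk in a vector store and only pull
    # it into context when the model signals it needs a lookup. All names
    # and thresholds here are hypothetical placeholders.
    import numpy as np

    class DiskVectorStore:
        """Stands in for an on-disk ANN index (e.g. memory-mapped vectors)."""
        def __init__(self, keys: np.ndarray, texts: list[str]):
            self.keys = keys / np.linalg.norm(keys, axis=1, keepdims=True)
            self.texts = texts

        def search(self, query: np.ndarray, k: int = 2) -> list[str]:
            q = query / np.linalg.norm(query)
            sims = self.keys @ q  # cosine similarity against all keys
            return [self.texts[i] for i in np.argsort(-sims)[:k]]

    def generate(prompt_vec, store, steps=4, retrieve_threshold=0.5):
        context = []
        rng = np.random.default_rng(0)
        for _ in range(steps):
            # Hypothetical "uncertainty" signal; a trained model would emit
            # this, and could be rewarded for retrieving less as the
            # accumulated context grows.
            uncertainty = rng.random() / (1 + len(context))
            if uncertainty > retrieve_threshold:
                context += store.search(prompt_vec)
            # ... run one decode step conditioned on prompt + context ...
        return context

    store = DiskVectorStore(
        np.random.default_rng(1).normal(size=(100, 8)),
        [f"fact #{i}" for i in range(100)],
    )
    print(generate(np.ones(8), store))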
I hate it so much that HN automatically removes some words in headlines, like "how." You can add them back for a while after posting by editing the headline.
Has DeepSeek tackled the very weird hallucination problem? Reducing hallucinations now seems to be the remaining fundamental issue that needs scientific research; everything else feels like an engineering problem.