
Show HN: Chinchilla Scaling Laws Are Not Universal

1 point | by KhoomeiK | 12 months ago
Hey HN! Chinchilla (DeepMind 2022) tells us that when we scale up our language model training, we should scale the parameters and data equally.

Over the last several months I've been hacking on a research project to determine if the optimal compute allocation (scaling law) for training an LLM is sensitive to training data complexity. I found that as data complexity increases, you need even more data than Chinchilla suggests!

I released the preprint just yesterday: https://arxiv.org/abs/2405.16684
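For readers unfamiliar with the baseline being challenged here: the standard Chinchilla result is that compute-optimal training scales parameters and tokens together, each roughly as the square root of the compute budget. Below is a minimal sketch of that allocation rule, assuming the commonly cited approximation C ≈ 6·N·D (training FLOPs) and a ~20 tokens-per-parameter ratio; the function name and constants are illustrative and not taken from the preprint.

# Minimal sketch of the Chinchilla-style compute allocation described above.
# Assumes C ≈ 6 * N * D, with the optimum scaling both N (parameters) and
# D (tokens) as ~sqrt(C), i.e. roughly 20 training tokens per parameter.
# Constants are illustrative, not from the linked preprint.

def chinchilla_optimal(compute_flops: float, tokens_per_param: float = 20.0):
    """Return (parameters, tokens) that split a FLOP budget Chinchilla-style."""
    # C = 6 * N * D and D = tokens_per_param * N  =>  N = sqrt(C / (6 * ratio))
    n_params = (compute_flops / (6.0 * tokens_per_param)) ** 0.5
    n_tokens = tokens_per_param * n_params
    return n_params, n_tokens

if __name__ == "__main__":
    # Example: ~5.8e23 FLOPs, roughly the Chinchilla-70B training budget.
    n, d = chinchilla_optimal(5.8e23)
    print(f"params ~ {n:.2e}, tokens ~ {d:.2e}")  # ~7e10 params, ~1.4e12 tokens

The post's claim is that this equal-scaling split is not universal: under that framing, higher data complexity shifts the optimum toward even more tokens per parameter than the sketch above assumes.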

No comments yet