
How to Train a Million Context LLM

15 points | by 7d7n | 12 months ago

1 comment

swyx | 12 months ago
oh hey we're on HN! author/host here, we think the story of long context over the past year is worth reviewing so we invited Mark on to talk about extending Llama 3 to >1m tokens.

a year ago we were talking to MosaicML (https://x.com/swyx/status/1660033177178734592) about their 65k+ model. now people yawn when we have yet another 1m token model. wild.

the TLDR in the pod seems to be Meta choosing to train Llama with a RoPE scaling theta factor that can be tweaked for finetuning. Once Gradient noticed that it was off to the races.
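For readers unfamiliar with the knob the comment refers to: RoPE encodes positions as rotations whose wavelengths are set by a base frequency (the "theta"), and raising that base stretches the wavelengths so positions far beyond the pretraining window still map to in-distribution angles. The PyTorch sketch below is an illustration of that mechanism only, not code from the episode or from Gradient's finetune; the function names and the 4,000,000 finetuning value are illustrative assumptions, while 500,000 is Llama 3's shipped default base (vs. 10,000 in the original RoPE paper).

import torch

def rope_frequencies(head_dim: int, max_pos: int, theta_base: float = 500000.0):
    # Angle table for rotary embeddings. Raising theta_base lowers the
    # per-dimension rotation frequencies, which is the tweakable factor
    # the comment describes for long-context finetuning.
    inv_freq = 1.0 / (theta_base ** (torch.arange(0, head_dim, 2).float() / head_dim))
    positions = torch.arange(max_pos).float()
    angles = torch.outer(positions, inv_freq)          # (max_pos, head_dim/2)
    return torch.cos(angles), torch.sin(angles)

def apply_rope(x: torch.Tensor, cos: torch.Tensor, sin: torch.Tensor):
    # Rotate query/key vectors pairwise by the precomputed angles.
    # x: (batch, seq, heads, head_dim)
    x1, x2 = x[..., ::2], x[..., 1::2]
    cos = cos[: x.shape[1]].unsqueeze(0).unsqueeze(2)  # broadcast over batch/heads
    sin = sin[: x.shape[1]].unsqueeze(0).unsqueeze(2)
    rotated = torch.stack((x1 * cos - x2 * sin, x1 * sin + x2 * cos), dim=-1)
    return rotated.flatten(-2)

# A long-context finetune might simply bump theta_base before training
# (the 4M value here is a made-up example, not Gradient's setting):
cos, sin = rope_frequencies(head_dim=128, max_pos=8192, theta_base=4_000_000.0)
q = torch.randn(1, 8192, 8, 128)
q_rot = apply_rope(q, cos, sin)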