oh hey we're on HN! author/host here. we think the story of long context over the past year is worth reviewing, so we invited Mark on to talk about extending Llama 3 to >1m tokens.

a year ago we were talking to MosaicML (https://x.com/swyx/status/1660033177178734592) about their 65k+ model. now people yawn when yet another 1m token model drops. wild.

the TLDR from the pod seems to be that Meta trained Llama 3 with a RoPE theta (base frequency) that can be scaled up for long-context finetuning. Once Gradient noticed that, it was off to the races.
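for anyone curious what "tweaking theta" actually means, here's a minimal sketch of the standard RoPE angle computation. the `rope_angles` helper and the scaled theta value are my own illustration, not Gradient's actual finetuning config; the only hard numbers are Llama 3's released rope_theta of 500k and its 8k pretraining context.

```python
# minimal sketch of the RoPE theta knob, assuming the standard rotary
# embedding formulation; the "scaled" theta below is illustrative only.
import torch

def rope_angles(position: int, head_dim: int = 128, theta: float = 500_000.0) -> torch.Tensor:
    """Rotation angles applied to a query/key at `position` (one per dim pair)."""
    # per-pair inverse frequencies: theta^(-2i/d) for i = 0 .. d/2 - 1
    inv_freq = 1.0 / (theta ** (torch.arange(0, head_dim, 2).float() / head_dim))
    return position * inv_freq

# Llama 3 ships with rope_theta = 500k and an 8k training context.
# raising theta stretches the low-frequency dimensions, so a token at
# position ~1m under the bigger base lands in roughly the angular range
# the model already saw within 8k during pretraining -- which is why the
# base model can be finetuned out to long context.
base_8k   = rope_angles(8_191,     theta=500_000.0)
scaled_1m = rope_angles(1_048_575, theta=64_000_000.0)  # hypothetical scaled base
print(base_8k[-4:])
print(scaled_1m[-4:])
```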