Amazing that you can just shove a ton of multimodal data into a big transformer and get a really good multimodal model. I wonder where things will top out. For many years a lot of people (including me) were saying "you can't just take existing architectures, scale them up, feed them a lot of data, and expect something impressive", but here we are.
We've been testing it in the local LLM Discords; it turns out it's just a Llama 7B finetune that can run on any old GPU (which is cool).<p><a href="https://huggingface.co/brucethemoose/LargeWorldModel_LWM-Text-Chat-128K-55bpw" rel="nofollow">https://huggingface.co/brucethemoose/LargeWorldModel_LWM-Tex...</a><p><a href="https://huggingface.co/dranger003/LWM-Text-Chat-128K-iMat.GGUF" rel="nofollow">https://huggingface.co/dranger003/LWM-Text-Chat-128K-iMat.GG...</a><p>And its long-context recall is quite good! We've already kind of discovered this with Yi, but there are some things you can do with a mega context that you just can't get with RAG.
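<p>If anyone else wants to poke at the long-context recall locally, here is a minimal sketch using llama-cpp-python against the GGUF quant linked above. The exact .gguf filename, the context size, and the needle-style question are assumptions on my part; check the repo's file list and scale n_ctx to whatever your hardware tolerates:<p><pre><code>  from llama_cpp import Llama

  # Filename is assumed -- check the actual file list in the GGUF repo above.
  llm = Llama(
      model_path="LWM-Text-Chat-128K-iMat.Q4_K_M.gguf",
      n_ctx=32768,      # raise toward 128K if you have the memory
      n_gpu_layers=-1,  # offload all layers to the GPU when possible
  )

  long_doc = open("long_document.txt").read()

  # Ask a recall question about something buried deep in the document.
  out = llm.create_chat_completion(
      messages=[{
          "role": "user",
          "content": long_doc + "\n\nWhere in the text above is the passphrase mentioned?",
      }],
      max_tokens=256,
  )
  print(out["choices"][0]["message"]["content"])
</code></pre>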
Because it might not be clear:<p><pre><code> … d) Fully open-sourced a family of 7B parameter models capable of processing long text documents (LWM-Text, LWM-Text-Chat) and videos (LWM, LWM-Chat) of over 1M tokens.
</code></pre>
<a href="https://huggingface.co/LargeWorldModel" rel="nofollow">https://huggingface.co/LargeWorldModel</a><p>In terms of content, I am blown away yet again by the SotA speeding by as I try to catch up. Can someone with a more cynical eye point me to competitors or problems with this approach? Because as it stands… that jump to a context length of a <i>million</i> tokens is pretty impressive to an outsider.
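<p>In the meantime, if you want to kick the tires on the released weights rather than a community quant, here is a minimal sketch with plain transformers. The repo id is an assumption based on the org page linked above (pick whichever LWM-Text or LWM-Text-Chat variant you want), and anything near the full 128K–1M context will need far more memory than a single consumer GPU:<p><pre><code>  import torch
  from transformers import AutoModelForCausalLM, AutoTokenizer

  # Assumed repo id -- substitute the exact variant from the LargeWorldModel org page.
  model_id = "LargeWorldModel/LWM-Text-Chat-128K"

  tok = AutoTokenizer.from_pretrained(model_id)
  model = AutoModelForCausalLM.from_pretrained(
      model_id,
      torch_dtype=torch.bfloat16,
      device_map="auto",
  )

  # Feed a long document and ask for a summary.
  prompt = "Summarize the key findings of the report below.\n\n" + open("report.txt").read()
  inputs = tok(prompt, return_tensors="pt").to(model.device)
  out = model.generate(**inputs, max_new_tokens=300)
  print(tok.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
</code></pre>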
I wonder why the example videos are in this specific clip-compilation format.<p>It feels to me that to navigate that, you essentially have to index 500 10-second videos, and that looks a lot easier than retrieving information from an actual hour-long video, because the latter will have a lot more easy-to-mix-up moments. So maybe it hides an inability to answer questions about actual long videos (in the paper, the other example videos cap out at 3 minutes, from what I can see).<p>On the other hand, maybe it's just for presentation purposes, because it is much more readily "verifiable" for everyone than saying "trust us, in this very long video, the correct answer is unarguably there".<p>So if someone happens to know more about that, I'd be very interested.
It's pretty wild watching technology develop where, in February, I genuinely don't have a confident idea of just how far it will have progressed by December of the same year.<p>Open models have just been on fire lately, and between the next generation of SotA models to pull synthetic data from and the next generation of open models each taking nuanced and clever approaches to infrastructure improvements, I'm pretty much considering all bets to be off.<p>At this point, the bottleneck is increasingly the human ability to adapt to improving tools rather than limitations in the tools themselves.
Some pretty fascinating collaborators:<p>- Matei Zaharia, CTO of Databricks
- Pieter Abbeel, Director of the Berkeley Robot Learning Lab and Co-Director of the Berkeley Artificial Intelligence Research (BAIR) lab
- Two talented PhD students: Hao Liu, Wilson Yan
This looks really promising!<p>Other than this sentence:<p>> We curated a large dataset of videos and languages from public book and video datasets, consisting of videos of diverse activities and long-form books.<p>I didn't see any other mention of the datasets used; is this intentional?
It blows my mind how quickly we are moving with these advances in LLMs, and these are just the ones we see in PUBLIC. I'm sure there are more advanced proprietary solutions that we aren't privy to.
This implementation is similar to something Ilya Sutskever said a few months ago, but I think I am misunderstanding both: I think they are saying robots could learn how to move and what facial expressions to use by watching millions of hours of video involving humans, a sort of LLM of human behavior. I am not a scientist, so I may have this wrong.