The relevant paper: <a href="https://arxiv.org/abs/2406.02528" rel="nofollow">https://arxiv.org/abs/2406.02528</a><p>In summary, they constrain the model's weights to ternary values {-1, 0, +1} and then built a custom FPGA accelerator to process the data more efficiently. Tested to be "comparable" to small models (~3B params), projected to scale to 70B, unknown for SOTA models (>100B params).<p>We have always known custom hardware is more efficient, especially for tasks like these, where we are basically approximating an analog process (i.e., the brain). What is impressive is how fast it is progressing. These ~3B-param models would demolish GPT-2, which is, what, 4-5 years old? And they would have been pure sci-fi tech 10 years ago.<p>Now they can run on your phone.<p>A machine, running locally on your phone, that can listen and respond to anything a human may say. Who could have confidently claimed this 10 years ago?
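As a concrete illustration of what "ternary weights" means in practice: a minimal sketch of BitNet-style absmean quantization, which the paper's BitLinear layers are based on (this is my own illustration, not the paper's code; the exact scheme and the function name are assumptions):

    import numpy as np

    def ternary_quantize(W):
        """Map float weights to {-1, 0, +1} plus one per-tensor scale."""
        scale = np.abs(W).mean() + 1e-8            # absmean scaling factor
        W_t = np.clip(np.round(W / scale), -1, 1)  # round, then clip to {-1, 0, +1}
        return W_t.astype(np.int8), scale

    W = np.random.default_rng(0).standard_normal((4, 8))
    W_t, s = ternary_quantize(W)
    # At inference, y ≈ s * (W_t @ x): the ternary part needs no multiplications,
    # which is exactly what a custom accelerator (the FPGA here) can exploit.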
Note that the architecture does use matmuls. They just define ternary matmuls as not being 'real' matrix multiplication. I mean... it is certainly a good thing for power consumption to be wrangling fewer bits, but from a semantic standpoint, it is still matrix multiplication.
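To make that semantic point concrete, here is a toy sketch (my own, not the paper's code) showing that a ternary layer still computes y = W @ x; the constraint to {-1, 0, +1} just lets the hardware replace multiplies with selective adds and subtracts:

    import numpy as np

    rng = np.random.default_rng(0)
    W = rng.integers(-1, 2, size=(4, 8))   # ternary weight matrix, entries in {-1, 0, +1}
    x = rng.standard_normal(8)

    # "Multiplication-free" evaluation: select and accumulate instead of multiplying.
    y_accum = np.array([x[row == 1].sum() - x[row == -1].sum() for row in W])

    # Ordinary matrix multiplication gives the same result.
    y_matmul = W @ x
    assert np.allclose(y_accum, y_matmul)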
"Call my broker, tell him to sell all my NVDA!"<p>Combined with the earlier paper this year that claimed LLMs work fine (and faster) with trinary numbers (rather than floats? or long ints?) — the idea of running a quick LLM local is looking better and better.
[dupe]<p>Some more discussion a few weeks ago: <a href="https://news.ycombinator.com/item?id=40620955">https://news.ycombinator.com/item?id=40620955</a>
There's additional discussion on the same research in an earlier thread [1].<p><a href="https://news.ycombinator.com/item?id=40787349">https://news.ycombinator.com/item?id=40787349</a>
The pre-print is <a href="https://doi.org/10.48550/arXiv.2406.02528" rel="nofollow">https://doi.org/10.48550/arXiv.2406.02528</a>