I want to reference Groq.com here. They are developing their own inference hardware, called an LPU: <a href="https://wow.groq.com/lpu-inference-engine/" rel="nofollow">https://wow.groq.com/lpu-inference-engine/</a><p>They also released their API a week or two ago, and it's <i>significantly</i> faster than anything from OpenAI right now: Mixtral 8x7B runs at around 500 tokens per second. <a href="https://groq.com/" rel="nofollow">https://groq.com/</a>
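<p>For anyone who wants to sanity-check the tokens-per-second claim, here is a minimal sketch using their Python SDK. It assumes the groq package is installed, that a GROQ_API_KEY environment variable is set, and that mixtral-8x7b-32768 is the model id they expose for Mixtral 8x7B; treat it as a sketch, not a reference implementation:<p><pre><code>import os
import time

from groq import Groq

# Client setup; GROQ_API_KEY is assumed to be set in the environment.
client = Groq(api_key=os.environ["GROQ_API_KEY"])

start = time.perf_counter()
resp = client.chat.completions.create(
    model="mixtral-8x7b-32768",  # assumed model id for Mixtral 8x7B
    messages=[{"role": "user", "content": "Explain what an LPU is."}],
)
elapsed = time.perf_counter() - start

# The response mirrors the OpenAI schema, including token usage counts.
tokens = resp.usage.completion_tokens
print(f"{tokens} tokens in {elapsed:.2f}s -> ~{tokens / elapsed:.0f} tokens/sec")
</code></pre><p>Note that this wall-clock number includes network latency and time to first token, so it will slightly understate raw generation speed.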
> The 4.5-mm-square chip, developed using Korean tech giant Samsung Electronics Co.'s 28 nanometer process, has 625 times less power consumption compared with global AI chip giant Nvidia's A-100 GPU, which requires 250 watts of power to process LLMs, the ministry explained.<p>> processes GPT-2 with an ultra-low power consumption of 400 milliwatts and a high speed of 0.4 seconds<p>Not sure what the point of comparing the two is; an A100 will get you a lot more speed than 2.5 tokens/sec. GPT-2 is just a 1.5B-parameter model, so even a Pi 4 would get you more tokens per second with CPU inference alone.<p>Still, I'm sure there are improvements to be made, and the direction is fantastic to see, especially after Coral TPUs proved completely useless for LLM and Whisper acceleration. Hopefully it ends up as something vaguely affordable.
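<p>To make that concrete, energy per token is a fairer metric than raw wattage. A quick back-of-envelope, where the A100 throughput is an assumed ballpark for a 1.5B-parameter model rather than a measured figure:<p><pre><code># Figures from the article, plus one labelled assumption.
chip_power_w = 0.4        # 400 mW, from the article
chip_tok_per_s = 1 / 0.4  # reading "0.4 seconds" as seconds per token, i.e. 2.5 tok/s

a100_power_w = 250        # from the article
a100_tok_per_s = 1000     # ASSUMPTION: rough batch-1 throughput for a 1.5B model

chip_j_per_tok = chip_power_w / chip_tok_per_s   # 0.16 J/token
a100_j_per_tok = a100_power_w / a100_tok_per_s   # 0.25 J/token

print(f"chip: {chip_j_per_tok:.2f} J/token, A100: {a100_j_per_tok:.2f} J/token")
</code></pre><p>Under that assumption the headline "625 times less power" shrinks to roughly a 1.6x advantage in energy per token, which is exactly why comparing raw wattage alone is misleading.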
Neuromorphic computing is cool, but not new tech. However, using a neuromorphic spiking architecture to run LLMs seems new. Unfortunately, there doesn't seem to be a paper associated with this work, so there's no deeper information on what exactly they're doing.
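<p>Since there's no paper, the following is explicitly <i>not</i> their architecture; it's just the textbook leaky integrate-and-fire neuron, the basic unit of most spiking designs, to illustrate what "spiking" means computationally:<p><pre><code>def lif_neuron(input_current, threshold=1.0, leak=0.9):
    """Turn a trace of input currents into a binary spike train."""
    v, spikes = 0.0, []
    for i in input_current:
        v = leak * v + i    # leaky integration: old potential decays, input adds
        if v >= threshold:  # fire once the membrane potential crosses threshold
            spikes.append(1)
            v = 0.0         # reset after the spike
        else:
            spikes.append(0)
    return spikes

# A constant sub-threshold input still fires periodically as charge accumulates.
print(lif_neuron([0.35] * 10))  # -> [0, 0, 0, 1, 0, 0, 0, 1, 0, 0]
</code></pre><p>The appeal for inference is that computation only happens on spikes, so activity (and power draw) is sparse and event-driven rather than a dense matrix multiply every cycle.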
Quick shoutout to <a href="https://youtube.com/@TechTechPotato" rel="nofollow">https://youtube.com/@TechTechPotato</a> for those interested in keeping tabs on the AI hardware space. There is much more going on in this area than you would think if you only follow general media.