My comment is about the general idea (LLM transformers on a chip), not any particular company, as I have no insight into the latter.<p>Such a chip (with support for LoRA finetuning) would likely be the enabler for next-gen robotics.<p>Right now, there is a growing corpus of papers and demos showing what's possible, but these demos are often a talk-to-a-datacenter ordeal, which is unsuitable for any serious production use: the latency is too high, and the dependency on the Internet is too fragile.<p>With a low-latency, cost- and energy-efficient way to run finetuned LLMs locally (and to keep finetuning them on each robot's own experience), we could actually build something useful in the real world.
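<p>To make the "LoRA finetuning on a chip" point concrete, here is a minimal numpy sketch of the LoRA idea (all names and sizes are illustrative, not tied to any specific chip or model): the pretrained weight stays frozen, and only a low-rank update is trained, which is why on-device finetuning becomes tractable.

```python
import numpy as np

# Minimal sketch of the LoRA idea: instead of updating a full d_out x d_in
# weight matrix W, train only a low-rank update B @ A. On-device finetuning
# then touches r*(d_in + d_out) parameters instead of d_in*d_out.
rng = np.random.default_rng(0)

d_in, d_out, r = 512, 512, 8               # rank r is much smaller than d_in, d_out
W = rng.standard_normal((d_out, d_in))     # frozen pretrained weight (not trained)
A = rng.standard_normal((r, d_in)) * 0.01  # trainable low-rank factor
B = np.zeros((d_out, r))                   # B starts at zero: no initial drift from W

def lora_forward(x, scale=1.0):
    """y = W x + scale * B (A x): base model plus low-rank adaptation."""
    return W @ x + scale * (B @ (A @ x))

full_params = W.size           # 512 * 512 = 262144
lora_params = A.size + B.size  # 8 * (512 + 512) = 8192, ~32x fewer
print(f"full update: {full_params} params, LoRA update: {lora_params} params")
```

With the update factored this way, a robot only needs to store and optimize the small A and B matrices per task, while the big frozen W can live in efficient on-chip memory.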