Everyone is talking about large language models like ChatGPT, LLaMA, BLOOM, etc. that can handle almost any text-transformation task. I've tested them on style rewriting, localization, grammar correction, and so on, and found the quality only average. They're also too slow and unstable to use in a business at scale.

I tried building several small language models, each trained on a parallel-sentence dataset of 3-5 million pairs for one specific linguistic transformation, and I see a lot of potential there. The performance gap between ChatGPT-4 and my small models is almost 1000x.

Is anyone else doing this?
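(The post doesn't say which architecture these small models use, so the following is only a rough sketch: one common way to build a narrow transformation model from parallel source/target pairs is to fine-tune a compact seq2seq model with Hugging Face Transformers. The choice of t5-small, the toy sentence pairs, and all hyperparameters below are assumptions, not details from the post.)

```python
from datasets import Dataset
from transformers import (
    AutoTokenizer, AutoModelForSeq2SeqLM,
    DataCollatorForSeq2Seq, Seq2SeqTrainingArguments, Seq2SeqTrainer,
)

# Hypothetical parallel corpus for one narrow transformation (here: grammar correction).
# The post mentions 3-5M pairs; two toy examples stand in for illustration.
pairs = [
    {"src": "he go to school yesterday", "tgt": "He went to school yesterday."},
    {"src": "she dont like apples", "tgt": "She doesn't like apples."},
]
ds = Dataset.from_list(pairs)

model_name = "t5-small"  # ~60M parameters; a stand-in for the "small model" in the post
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

def preprocess(batch):
    # Tokenize source sentences as inputs and target sentences as labels
    inputs = tokenizer(batch["src"], truncation=True, max_length=128)
    labels = tokenizer(text_target=batch["tgt"], truncation=True, max_length=128)
    inputs["labels"] = labels["input_ids"]
    return inputs

tokenized = ds.map(preprocess, batched=True, remove_columns=["src", "tgt"])

args = Seq2SeqTrainingArguments(
    output_dir="small-transform-model",
    per_device_train_batch_size=8,
    num_train_epochs=1,
    predict_with_generate=True,
)
trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()

# Inference is a single local generate() call: no API round-trip, which is where
# the large speed difference over a hosted LLM comes from.
out = model.generate(
    **tokenizer("she dont like apples", return_tensors="pt"), max_new_tokens=64
)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```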
I think it depends on the use case. The big win IMO is cost savings for developers: tiny models can run on-device, so there's no need for inference servers. The real challenge is coming up with a use case a tiny model can actually handle. I suspect there are probably a lot, though, like rephrasing or Gmail-like suggestions. They're tiny features, not full apps, but still seem valuable to me.

FYI, a friend and I made a demo app with TinyStories that we just submitted: https://news.ycombinator.com/item?id=36960333 - from testing it out, anything under 500ms latency at 15 tok/sec feels real-time, and that makes a world of difference in UX.
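(A quick way to sanity-check a tok/sec figure like the one above is to time local generation with a TinyStories-sized model. The model/tokenizer names follow the public TinyStories model card, but the prompt, token count, and hardware are my assumptions, not the demo's actual setup.)

```python
import time
from transformers import AutoTokenizer, AutoModelForCausalLM

# The TinyStories-33M card points to the GPT-Neo tokenizer; both repos are public.
model = AutoModelForCausalLM.from_pretrained("roneneldan/TinyStories-33M")
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-125M")

inputs = tokenizer("Once upon a time", return_tensors="pt")

start = time.perf_counter()
out = model.generate(**inputs, max_new_tokens=50, do_sample=False)
elapsed = time.perf_counter() - start

# Count only newly generated tokens, not the prompt
new_tokens = out.shape[1] - inputs["input_ids"].shape[1]
print(f"{new_tokens} tokens in {elapsed:.2f}s -> {new_tokens / elapsed:.1f} tok/s")
```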