Nothing has really been proven just yet.

I say this because all available LLMs are currently running largely on "funny money": subsidized by either venture capital or government.

We won't know the real costs until they are forced to survive in the marketplace on their own merits. And based on their energy and hardware needs alone, they won't exactly be "cheap"; they will follow a "computing as a service" model subject to bait-and-switch tactics.

Basically, LLMs turn traditional computing upside down. Instead of reliable results at low cost, LLMs offer unreliable results at high cost.

And because of this, I expect the real-world use cases to be much smaller than many seem to expect. The two prominent early examples are search engines (where accuracy is not essential) and research involving trial and error (where accuracy will be verified).