This feels like a limited and perhaps naive perspective on LLMs. If you looked at computers as adding machines in the 60s/70s, you'd have missed most of what was interesting about computers. And if you look at LLMs as a question-answering service now, you are also missing a lot.

It's hard to compare trust in LLMs to trust in other computing, because many of the things that LLMs get wrong and right were previously intractable. You could ask a search engine, but it's certainly no more trustworthy than an LLM, gameable in its own way. The closest analog might be a knowledge graph or database, which can formally represent some portion of what an LLM represents.

To be fair, relational systems can and will return "no answer", while an LLM (like a search engine) always gives some answer. Certainly an issue!

But this is all in the realm of producing answers in a closed system, hardly the only way LLMs can be used. LLMs can also come up with questions, for instance generating queries for a database. Are these trustworthy? Not entirely, but the closest alternative supporting tool is perhaps some query builder? I have seen expert humans come up with untrustworthy queries as well; misinterpretation of data is easy and common.

That's just one example of how an LLM can be used. If you use an LLM for something that you can directly compare to a non-LLM system, such as speech recognition or intent parsing, it's clear that the LLM is more trustworthy. It can and does do real error correction! That is, you can get higher-quality data out of an LLM than you put in. This is not unheard of in computing, but it is uncommon. Internet networking, which Kay refers to, might be an analog: creating reliable connections on top of unreliable connections.

What we don't have right now are systematic approaches to computing with LLMs.
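To make the database-query example concrete: one systematic move is to never execute a model-generated query blindly, but to check it against the real schema first. A minimal sketch, where `llm_generate_sql` is a hypothetical stand-in for an actual model call, and validation uses SQLite's `EXPLAIN` (which parses and plans the statement without touching any rows):

```python
import sqlite3


def llm_generate_sql(question: str) -> str:
    # Hypothetical stand-in: a real version would send `question`
    # plus the table schema to an LLM and return its SQL output.
    return "SELECT name, total FROM orders WHERE total > 100"


def validate_sql(conn: sqlite3.Connection, query: str) -> bool:
    """Check a generated query against the live schema without
    running it, by asking SQLite to compile an EXPLAIN of it."""
    try:
        conn.execute("EXPLAIN " + query)
        return True
    except sqlite3.Error:
        return False


conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (name TEXT, total REAL)")

good = llm_generate_sql("Which orders exceed $100?")
bad = "SELECT nam FROM orders"  # misspelled column, as a model might emit

print(validate_sql(conn, good))  # True
print(validate_sql(conn, bad))   # False
```

This only catches queries that are malformed or reference the wrong schema; a syntactically valid query that misinterprets the data (the expert-human failure mode above) still gets through, which is exactly why trust here is partial.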