One thought technology for understanding "hallucination" is that an LLM can only predict a fact statistically, using all of the syntax available in its training data. This means that when you ask for a fact, you are really asking the computer to "postcast": to statistically predict the past based on its training data.

That's why it "hallucinates": sometimes that prediction of the past is simply wrong. This differs from what people do, in that we don't see the past or present as a statistical field; we see them as concrete and discrete. And once we learn a sufficiently believable fact, we generally assume it to be fully true, pending information to the contrary.
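
A minimal sketch of the idea (all numbers invented for illustration, not from any real model): the model never looks a fact up, it scores possible continuations by how they appeared in training text, so a plausible-but-wrong completion can still carry real probability mass and occasionally get sampled.

    # Toy sketch: a "model" that can only score continuations,
    # never look a fact up. Probabilities are made up.
    import random

    prompt = "The capital of Australia is"

    # Hypothetical next-token probabilities, roughly tracking how
    # often each continuation might appear in training text.
    next_token_probs = {
        "Canberra": 0.55,    # correct, but not overwhelmingly dominant
        "Sydney": 0.30,      # common misconception, well represented
        "Melbourne": 0.10,
        "Auckland": 0.05,    # outright wrong, still nonzero
    }

    # Greedy decoding: take the single most probable continuation.
    greedy = max(next_token_probs, key=next_token_probs.get)

    # Sampled decoding: draw from the distribution, so the "postcast"
    # is sometimes wrong about the past.
    sampled = random.choices(
        list(next_token_probs),
        weights=list(next_token_probs.values()),
    )[0]

    print(prompt, greedy)   # always "Canberra" under these toy numbers
    print(prompt, sampled)  # ~45% of the time, something else

The point of the toy numbers: the wrong answers don't come from a separate "lying" mode, they're just lower-probability regions of the same statistical field the right answer lives in.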