I am mostly a novice in the field of LLMs, but as a layman with an admittedly rough understanding of how they work algorithmically, I have a hunch that the same thing that makes these LLMs powerful AIs with interesting emergent behaviors is also what makes them occasionally get things wildly wrong and claim to know things they do not know. They are supposed to be AIs, not carefully vetted encyclopedias.

Sure, I get that people eventually want to use AI for life-critical work like surgery, and at that point "hallucinations" become a real problem. But we are nowhere close to that point yet, I think, so the focus on "hallucinations" may be misleading. It is one thing to try to get a 30-year-old doctor to stop making up nonsense on the fly at work; that makes sense. But if you try to prevent a 3-year-old kid from making up nonsense, you will probably hurt his development into a more powerful intelligence.

Note: I know that the current popular LLMs do not actually learn past the scope of a single session, but I am sure they soon will.