This is pretty obvious, no? LLMs are basically a lossy compression of their dataset. Being lossy, they will necessarily have error (hallucinations). Furthermore, the underlying data is a human approximation of truth, so it will have error as well. Shall we publish a paper on the linked list?