No, that's the only thing to understand about it! You can't trust anything it says about anything. What it does is emit plausible-sounding information. But we've seen -- I think across every single realm -- that this information includes complete nonsense, whether because it was trained on people shitposting bullshit, or because it combines tokens (words, sentence fragments, whatever) in new ways without any actual understanding of what they mean.

It can be useful, but at best it's as useful as a smart dog that's figured out how to operate a voice synthesizer, and is also on amphetamines or hallucinogens. I'm glad to see the term "LLM hallucination", because that's exactly how it has felt to me.

I find ChatGPT really useful for things I can double-check instantly (or nearly so), such as doc comments for the code I just wrote, or small unit tests that match the comment I just wrote (see the sketch at the end of this comment).

But in my experience, it's worse than useless for writing real code, or for any complex endeavor that isn't instantly verifiable or rejectable at a glance, because vetting plausible-looking code -- including dependency specifications -- is almost always more taxing than just writing the code or the package.json entries yourself.
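To make the "instantly verifiable" use case concrete, here's a minimal sketch of the kind of output I mean. The clamp function, the test, and the use of Node's built-in test runner are all hypothetical and illustrative; the point is only that a test this small can be accepted or rejected at a glance.

    import { test } from "node:test";
    import assert from "node:assert/strict";

    /** Clamps `value` to the inclusive range [min, max]. */
    function clamp(value: number, min: number, max: number): number {
      return Math.min(Math.max(value, min), max);
    }

    // The kind of small, LLM-suggested test that matches the comment above and
    // can be verified at a glance (names and values are illustrative only).
    test("clamp keeps values within [min, max]", () => {
      assert.equal(clamp(5, 0, 10), 5);   // inside the range: unchanged
      assert.equal(clamp(-3, 0, 10), 0);  // below the range: clamped to min
      assert.equal(clamp(42, 0, 10), 10); // above the range: clamped to max
    });

Something that small is cheap to vet; a whole generated module or dependency list is not, which is the asymmetry I'm describing above.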