The problem here isn't that an LLM hallucinates. The problem is that nobody asked for an AI response, and Meta pushed content to a forum that makes such claims, which could easily mislead or at least confuse people not sophisticated enough to be on the lookout for hallucinations.<p>Meta should be (and is) in the business of policing third-party spam on their forums that does exactly this. We can infer what must've happened - the model must've been fine-tuned on forum comments, and this would be the likely format for a response to that question. This sort of thing should've been caught by a wrapper/guard model, and will probably make a good case to add to such a model's instructions/training.<p>(btw: is it "an LLM" or "a LLM"? I guess I should ask an LLM which it prefers to be called)
LLMs are Internet simulators. That is a simulation of a good response you would get on the Internet.<p>Is anyone still surprised by this? If so, let me repeat: LLMs are Internet simulators. They will give you simulations of good replies you might get on the Internet.
how about everyone stop calling it AI when it very clearly is little more "intelligent" than "intelligent device for starting your heater when it gets cold" (aka thermostat) (and yes, this has really been marketed)
"Meta AI claims"<p>No it doesn't. It can't. Only people (or companies, which require people) can meaningfully "claim" things. LLMs are still not people, despite our persistent attempts to personify them.<p>This is merely a sexier headline than "Hallucination machine hallucinates." And even that word personifies a bit too much!