> And crucially, we made sure to tell the model not to guess if it wasn’t sure. (AI models are known to hallucinate, and we wanted to guard against that.)

Prompting an LLM not to confabulate won't actually prevent it from doing so. It's disappointing to see an organization like this, whose mission is to inform the public, use AI without understanding its limitations and then make a claim like this.