> AI companies can also not just 'hide' false information from users while they internally still process false information

To my understanding of this:

1. The false claims were hallucinations, i.e. they do not exist in the training data

2. OpenAI have filtered the output to prevent it from generating these particular false claims

It seems a tricky situation if the false claim merely being represented ephemerally in internal model activations, or the model having weights that can lead it to that claim, counts as defamation.
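For concreteness, point 2 presumably means something like a post-hoc filter over generated text rather than a change to the weights. A minimal sketch of that idea (all names hypothetical; OpenAI's actual mechanism isn't public):

```python
# Hypothetical post-generation output filter. BLOCKED_CLAIMS and
# filter_response are illustrative names, not a real API.
BLOCKED_CLAIMS = [
    "example false claim about a named person",  # placeholder string
]

REFUSAL = "I can't provide information about that."

def filter_response(text: str) -> str:
    """Return the model's output, or a refusal if it contains a blocked claim.

    Note the key point: the model still produced the claim internally;
    the filter only hides it from the user, which is exactly the
    distinction the quoted ruling seems to reject.
    """
    lowered = text.lower()
    if any(claim.lower() in lowered for claim in BLOCKED_CLAIMS):
        return REFUSAL
    return text
```

Under that reading, the "false information" is still represented in the activations and reachable from the weights; the filter changes only what the user sees.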