> AI companies can also not just 'hide' false information from users while they internally still process false information

To my understanding of this:

1. The false claims were hallucinations, i.e. they do not exist in the training data.
2. OpenAI have filtered the output to prevent it generating these particular false claims.

It seems a tricky situation if the false claim merely being represented ephemerally in internal model activations, or the model having weights that can lead it to that claim, counts as defamation.
If the EU applies the regulations as the group mentioned in the article alleges it should, it would mean no LLM-based tool can be legal in the EU. And then the EU will wonder why they're lacking entrepreneurs or whatever, without connecting the dots. I hope they instead revise the GDPR.