Excellent point. I think it’s critical to recognize that as AI technology becomes more advanced, we need to improve the mechanisms that enforce transparency and trustworthiness. It starts with training data. Just because something is technically reachable online doesn’t make it OK to use for AI training. A person or organization should be able to opt into having their data used, of course, but it shouldn’t be a “use first, ask to remove later” arrangement.

Then there’s the question of who watches the watchers. Models used to generate content or make decisions with real-world impact on actual humans need to be open to audit and validation, so that they don’t reproduce the biases and social limitations of the people who built them. This is not a new concept (think of algorithms for insurance coverage calculations, etc.).

So yes, a toy chatbot to play around with - sure. A fancy AI model hoovering up personal data without consent and deployed all around us by closed organizations with dubious funding - a big no. Is AI ready to offer scientific or academic value? No, and even if it were, it would need to stand up to human scrutiny for a long time before being acceptable “as is”. Kids in schools need to be taught how to recognise and work with AI-generated content. Just like dealing with fake news, it requires critical thinking and the ability to analyse information.