Side story about dinosaurs and AI...<p>I worked at a museum years ago, and one of the projects there was an AI chatbot for a titanosaur: <a href="https://www.fieldmuseum.org/exhibitions/maximo-titanosaur" rel="nofollow">https://www.fieldmuseum.org/exhibitions/maximo-titanosaur</a> (near the bottom, the "Message Máximo" section; the web chat seems discontinued, but you can still text him).<p>From a marketing angle, it was interesting enough: a way to bring a fossil to life, giving him a name and a personality, with signage around the exhibition encouraging visitors to text him and ask questions about his past life, diet, tail, etc.<p>From a technical angle, it was a simple system built on Google Dialogflow (<a href="https://cloud.google.com/dialogflow?hl=en" rel="nofollow">https://cloud.google.com/dialogflow?hl=en</a>), a natural-language-parsing and no-code response-tree system. This was all before GPTs really came into vogue, so the responses were all human-curated: parsed Dialogflow intent tokens in, highest-matching response variants out, all edited in Dialogflow's nice GUI.<p>But what I really liked about it was the scientific angle. There were some serious behind-the-scenes efforts to make his responses scientifically accurate yet easily digestible. That museum was a research institution as well, with a ton of PhD paleontologists actually researching titanosaurs and other sauropods. This was a way to collect popular questions from the public, filter them through their professional expertise, and then distill that knowledge back down into bite-size chunks suitable for families and kids, all while maintaining scientific integrity.
Every week or so, the team would collect the latest questions, points of confusion, etc., and then run them by the scientists again to update the dialog tree accordingly.<p>If the project were launched today instead, I wonder if it'd be possible to do something similar with a very tightly scoped GPT, grounded only in the scientific data (published papers, etc.), eliminating or vastly reducing hallucinations, while still giving the GPT limited room to express a personality rather than being limited to scripted responses. But there would still have to be a human in the loop to vet those responses for scientific accuracy. Not sure how best to build something like that, but it would be awesome for understaffed museum exhibitions (which is most of them!): a way for the public to ask the exhibition itself questions, instead of hoping a knowledgeable PhD happened to be available right then.
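<p>The grounding idea above could be sketched as a tiny retrieval step in front of the model: look the question up in a vetted corpus, and refuse to answer (deferring to a human) when nothing matches. Everything here is hypothetical — the corpus sentences, the Jaccard-overlap scoring, and the 0.2 threshold are stand-ins; a real system would use proper embeddings and an LLM to rephrase the retrieved fact in character.

```python
# Toy retrieval-grounded chatbot over a scientist-vetted corpus.
# The corpus, scoring method, and threshold are all illustrative assumptions.

CORPUS = [
    "Titanosaurs were herbivorous sauropods that lived in the Cretaceous period.",
    "Patagotitan mayorum is one of the largest known titanosaurs, over 30 meters long.",
    "Sauropods swallowed plants whole; they did not chew their food.",
]

def tokenize(text):
    """Lowercase word set, with trailing punctuation stripped."""
    return {w.strip(".,;?!").lower() for w in text.split()}

def retrieve(question, corpus, threshold=0.2):
    """Return the best-matching vetted fact, or None if nothing is close enough."""
    q = tokenize(question)
    best, best_score = None, 0.0
    for doc in corpus:
        d = tokenize(doc)
        score = len(q & d) / len(q | d)  # Jaccard overlap as a crude relevance score
        if score > best_score:
            best, best_score = doc, score
    return best if best_score >= threshold else None

def answer(question):
    fact = retrieve(question, CORPUS)
    if fact is None:
        # Off-corpus question: defer instead of hallucinating; log it for the
        # weekly review so scientists can extend the corpus.
        return "I'm not sure -- let me ask my paleontologist friends!"
    return fact  # a real system would have an LLM rephrase this in character
```

The key property is the fallback path: off-corpus questions get queued for the same human review loop the original project used, so the scientists still curate what the bot is allowed to say.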