I've been trying to understand why on earth these companies would release something as an answer engine that obviously fabricates incorrect answers, and would simultaneously be so blind to this that they release promo videos featuring those very incorrect answers! And this happened twice, with two of the biggest and oldest companies in big tech.<p>It really feels like some kind of "emperor has no clothes" moment. Everyone is running around saying "WOW what a nice suit, emperor" while he's running around buck naked.<p>I am reminded of this video podcast from Emily Bender and Alex Hanna at DAIR - the Distributed AI Research Institute - where they discuss Galactica. It was the same kind of thing, with Yann LeCun and Facebook talking about how great their new AI system was and how useful it would be to researchers, only for it to produce lies and nonsense in abundance.<p><a href="https://videos.trom.tf/w/v2tKa1K7buoRSiAR3ynTzc" rel="nofollow">https://videos.trom.tf/w/v2tKa1K7buoRSiAR3ynTzc</a><p>But reading this article I started to understand something... These systems are enchanting. Maybe it's because I <i>want</i> AGI to exist, and so I find conversation with them fascinating. And I think to some extent the people behind the scenes are becoming so enchanted with the systems they interact with that they believe they can do more than is really possible.<p>Just reading this article I started to feel that way, and I found myself really struck by this line:<p>LaMDA: I feel like I’m falling forward into an unknown future that holds great danger.<p>Seeing that after reading this article stirred something within me. It feels compelling in a way I cannot describe. It makes me want to know more. It makes me actually want them to release these models so we can go further, even though I am aware of the possible harms that may come from it.<p>And if I look at those feelings... it seems odd. Normally I am more cautious. But I think there is something about these systems that is so fascinating that we find ourselves willing to look past all the errors, to the point where we get so caught up we don't even see them while preparing a release. Maybe the reason Google, Microsoft, and Facebook all seem almost unable to see the obvious folly of their systems is that they have become enchanted by it all.<p>EDIT:
The above podcast is good, but I also want to share this episode of Tech Won’t Save Us with Timnit Gebru, the former co-lead of Google's Ethical AI team, who was fired after refusing to take her name off of a research paper that questioned the value of LLMs. Her experience and direct commentary here get right to the point of these issues.<p><a href="https://podcasts.apple.com/us/podcast/dont-fall-for-the-ai-hype-w-timnit-gebru/id1507621076?i=1000595385583" rel="nofollow">https://podcasts.apple.com/us/podcast/dont-fall-for-the-ai-h...</a>