I can't say that I'm a fan of this current crop of LLM chatbots.

Some things bother me about this article and the facts that warranted it in the first place. I have very similar reservations about the Bing chatbot launch (detailed below; just substitute Bing for Bard).

> Google has a lot riding on this launch. Microsoft partnered with OpenAI to make an aggressive play for Google’s top spot in search. Meanwhile, Google blundered straight out of the gate when it first tried to respond. In a teaser clip for Bard that the company put out in February, the chatbot was shown making a factual error. Google’s value fell by $100 billion overnight.

1. LaMDA is not a new project. Google is clearly releasing a chatbot based on it to the public to compete with MS/OpenAI. But if the tech existed some time ago (and as a developer I realize that iteration, time, and attention tend to improve quality), why didn't they release it before? My guess, and it is only a guess, is that they saw the quality of the output was quite poor (often factually wrong, for instance). But now that a competitor is threatening their market share, quality metrics go out the window; revenue is once again king.

2. It's telling how quickly the company's valuation dropped. As shareholders, we rely so heavily on short-term gains that there is no focus on follow-through or long-term consequences. I don't believe capitalism is inherently bad, and I am a huge fan of competition, but I think the way we practice it leaves a lot to be desired.

> “We’ll get user feedback, and we will ramp it up over time based on that feedback,” says Google’s vice president of research, Zoubin Ghahramani. “We are mindful of all the things that can go wrong with large language models.”
>
> But Margaret Mitchell, chief ethics scientist at AI startup Hugging Face and former co-lead of Google’s AI ethics team, is skeptical of this framing. Google has been working on LaMDA for years, she says, and she thinks pitching Bard as an experiment “is a PR trick that larger companies use to reach millions of customers while also removing themselves from accountability if anything goes wrong.”

3. This quote, together with the article's mention of the "Google It" button below the Bard chat, the three versions of Bard's response ("drafts", FTA), and the Google product director's remark that "There’s the sense of authoritativeness when you only see one example"... I could not agree more with what Margaret Mitchell has to say (and I had never heard of her before, to my knowledge). Isn't it clear by now that users don't have the time or attention to weigh the implications? We are busy and we don't have the energy. If we see something on a screen, we take it as fact, copy and paste it as needed, and proceed with the knowledge we have gleaned from said "facts". I suspect that if anybody really knows how misinformation works and the effects it has on society, it's the data analysts at Google Search. But the almighty revenue stream dictates that they push this not-necessarily-factual-information-producing tool anyway.

If I can find the time, I'm quite curious to read that article, published just before Bing Chat and Bard, about the dangers of using LLMs in search engines.