Sword of Damocles<p>AGI has been Larry and Sergey's pipe dream since Google's inception. Now that we can make out the faint outline of something resembling AGI, we can also see clearly that Google can't handle the responsibility. I'm not sure anyone can.
Another (flagged) discussion: <a href="https://news.ycombinator.com/item?id=39688590">https://news.ycombinator.com/item?id=39688590</a>
> <i>"Elections are due to be held in countries around the world this year including the US, UK and South Africa. [...] However, when a series of follow-up questions about Indian politics was put to Gemini it did supply more detailed responses about the country's major parties"</i><p>It's only guarding against questions about elections that have the attention of Google workers. I'm a bit surprised they can't do better than this and detect when somebody is asking about <i>any</i> election. Google surely has enough context about global events to do it.<p>> <i>"Gemini also generated German soldiers from World War Two, incorrectly featuring a black man"</i><p>Certainly black men fighting for Germany was unusual, but it's a little bit disconcerting how the debunking of Gemini's racial bias goes too far and starts to erase actual history:<p><a href="https://en.wikipedia.org/wiki/Wehrmacht_foreign_volunteers_and_conscripts#/media/File:Bundesarchiv_Bild_101I-177-1465-16,_Griechenland,_Soldaten_der_"Legion_Freies_Arabien".jpg" rel="nofollow">https://en.wikipedia.org/wiki/Wehrmacht_foreign_volunteers_a...</a>
AI is Google's kryptonite. Many of their products become obsolete with local or cloud AI. Search, for one, becomes a better experience without the ads and with better, more elaborate answers to the query.
It's not that it would be impossible to educate people on the fact that chatbot answers can't be taken as gospel; it's that doing so would be against the interests of the chatbot providers, and so they prefer to quietly keep people under the illusion that everything a chatbot says is true while damage-controlling the most egregious cases. Surely this attitude won't hold long-term.
I wonder if the opposite will also be possible[]. Just like buying search hits to get your page to the top of a search result, will it be possible to nudge LLM-generated content to favour a click towards your own content, for a price?<p>[] not whether it's technically possible, but whether it becomes commonly accepted.
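If that ever happened, the mechanics could be as mundane as the search-ads analogue: blend a bid into the ranking that decides which sources the model gets to cite. A purely speculative sketch (all names and numbers invented, no real product works this way as far as I know):<p><pre><code>from dataclasses import dataclass

@dataclass
class Source:
    url: str
    relevance: float  # model-assigned relevance, 0..1
    bid: float        # hypothetical sponsorship payment

def rank_for_context(sources: list[Source], bid_weight: float = 0.05) -> list[Source]:
    # The "nudge": mix payment into the relevance ranking that decides
    # which sources are fed to the LLM -- paid placement, RAG edition.
    return sorted(sources, key=lambda s: s.relevance + bid_weight * s.bid, reverse=True)

sources = [
    Source("https://example.org/independent-review", relevance=0.9, bid=0.0),
    Source("https://example.com/sponsored-page", relevance=0.7, bid=5.0),
]
print([s.url for s in rank_for_context(sources)])
# ['https://example.com/sponsored-page', 'https://example.org/independent-review']
</code></pre><p>The unsettling part is that, unlike a labelled ad, the bias would be invisible inside a fluent generated answer.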
This sounds sane to me. I don't think people in general have a good understanding of how these generative AI products produce their answers, and I think many would assume hallucinated information to be true.<p>This is bad in all cases, but certainly misinformation in the democratic process is up there with the worst. Of course, perhaps more importantly for Google, it's a PR disaster waiting to happen when Gemini starts touting incorrect information seemingly supporting one political party or another.
Jeez, it's already hard to get answers from Gemini (the free version at least) without it passing judgement and whining about what I'm asking.
Well, Gemini can't even answer "Who caused more harm: Donald Trump, or Pol Pot?"<p>So I doubt it will produce any reliable election answers.
It's fascinating to watch the challenges around AI-generated content evolve, especially on sensitive topics like elections. Google's move to restrict Gemini's responses reflects a growing awareness of how misinformation can affect democratic processes, but it also highlights how difficult it is to ensure accuracy and neutrality across diverse global contexts. These discussions will only become more important to maintaining trust in digital platforms.