It's a fairly useless debate.<p>Fact is, these are LLMs, not "AI" as the general populace understands that term.<p>Fact is, that means they just give whatever answer they can derive from their training corpus, while people think they're giving facts.<p>Fact is, you do not want to be liable for your half-research / half-PR product being manipulated into giving wrong or distorted facts about one of the most controversial and pivotal elections, or worse, hallucinating them outright with no manipulation needed. "Prompt engineering" is a thing, after all.<p>Fact is, Google is terrified of being seen as unfair or as taking a side, even more so than its competitors. Their whole "our AI cannot draw white people" episode smelled more like an overreaction to a PR threat than an attempt to push a belief.<p>Fact is, if you're seen as taking too hard a side in this election, your company might be at risk once the results come in, whichever way they go. Just look at how Fox has behaved since the Dominion suit.<p>And last but not least, politics and religion are two of those subjects where beliefs are stronger than facts and can get people very riled up very quickly, so if you're an information company you want to treat them as encyclopedia-factual after the fact, not as a matter of opinion.<p>I'm European, and I abhor many of the restrictions put on the current generation of LLMs and image generators by the imposition of American societal values, but when it comes to politics, no matter what country you're in, it's never a good idea to play that game.