I don't understand why Google investors put up with Sundar, who clearly doesn't care about search results being correct.<p>Ilya was working there on the Google Brain team, and DeepMind still has great people in it who could fix Google search (even some of the old search staff, who probably moved to other projects out of frustration).<p>It doesn't matter if 100,000 Google engineers test Bard vs Google search vs ChatGPT-4 if the CEO doesn't care about the product Google is monetizing.
> One employee wrote that when they asked Bard suggestions for how to land a plane, it regularly gave advice that would lead to a crash<p>What is the employee expecting? That Bard should not be released unless it is the epitome of perfection? When you have employees like these, you don't really need enemies.
For example, it's treated as unethical, or screened out, to have an AI answer a question about crime rates and demographics (race / gender).<p>The answers you get are things like "It's essential to examine the broader context and address the underlying factors contributing to criminal activity." or that crime "is influenced by various factors such as socioeconomic status, education, and access to resources and opportunities." or "It is more useful to focus on addressing social and economic equality for all communities."<p>Can you actually get things like per-capita rates of reported murders by gender/race out of the models, or is there some setting / prompt you have to use for these questions?<p>I'm wondering if maybe Bard didn't have this filtered as thoroughly?
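With raw API access (as opposed to the consumer Bard/ChatGPT web UIs), some of this filtering is configurable. Below is a minimal sketch using Google's google-generativeai Python SDK, assuming an API key in GOOGLE_API_KEY; the model name, categories, and threshold values are illustrative choices, and none of this reflects whatever Bard itself used internally:

    # Minimal sketch: relaxing per-category safety thresholds via the
    # google-generativeai Python SDK. Assumes GOOGLE_API_KEY is set;
    # the model name and thresholds here are illustrative.
    import os
    import google.generativeai as genai

    genai.configure(api_key=os.environ["GOOGLE_API_KEY"])

    model = genai.GenerativeModel(
        "gemini-pro",
        safety_settings=[
            # Only block at the highest severity for categories that tend
            # to trip on questions about crime statistics and demographics.
            {"category": "HARM_CATEGORY_DANGEROUS_CONTENT",
             "threshold": "BLOCK_ONLY_HIGH"},
            {"category": "HARM_CATEGORY_HARASSMENT",
             "threshold": "BLOCK_ONLY_HIGH"},
        ],
    )

    response = model.generate_content(
        "Summarize officially reported per-capita homicide rates broken "
        "down by demographic group, and cite the data source."
    )
    print(response.text)

Even with the filters relaxed, the instruction tuning of the underlying model still tends to hedge on this class of question, so a permissive threshold is not a guarantee of a direct numeric answer.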
Is anyone else getting tired of the "employees say", "experts say", "researchers say" tactic in journalism?<p>How many? And importantly, what percentage is that? What do the other employees or experts say?