
Google CEO Calls Biased AI Chatbot Responses Unacceptable

22 points by gibsonf1, over 1 year ago

5 comments

dekhn, over 1 year ago
This is a perfect example of how Sundar is a terrible leader. He gave a super-bland email to the company which basically said that offending users is unacceptable and that the system must not have any bias.

Neither of those is a reasonable goal. The goal is to come up with something that isn't massively offensive, not completely unoffensive and with no bias. But Sundar is basically a person who got to where he is by attempting to be maximally unoffensive, and it's clear the company is now being held back by his poor leadership and lack of vision.
geor9e, over 1 year ago
Full text: I want to address the recent issues with problematic text and image responses in the Gemini app (formerly Bard). I know that some of its responses have offended our users and shown bias - to be clear, that's completely unacceptable and we got it wrong. Our teams have been working around the clock to address these issues. We're already seeing a substantial improvement on a wide range of prompts. No AI is perfect, especially at this emerging stage of the industry's development, but we know the bar is high for us and we will keep at it for however long it takes. And we'll review what happened and make sure we fix it at scale.

Our mission to organize the world's information and make it universally accessible and useful is sacrosanct. We've always sought to give users helpful, accurate, and unbiased information in our products. That's why people trust them. This has to be our approach for all our products, including our emerging AI products. We'll be driving a clear set of actions, including structural changes, updated product guidelines, improved launch processes, robust evals and red-teaming, and technical recommendations. We are looking across all of this and will make the necessary changes.

Even as we learn from what went wrong here, we should also build on the product and technical announcements we've made in AI over the last several weeks. That includes some foundational advances in our underlying models, e.g. our 1 million long-context window breakthrough and our open models, both of which have been well received. We know what it takes to create great products that are used and beloved by billions of people and businesses, and with our infrastructure and research expertise we have an incredible springboard for the AI wave. Let's focus on what matters most: building helpful products that are deserving of our users' trust.
nkurz, over 1 year ago
https://archive.ph/OMRUi
nomonnai, over 1 year ago
I believe these issues are instances of the frame problem [0]. Specifying the effects of an action is easy ("show more diversity"), but specifying non-effects is hard to impossible ("do not show more diverse Nazis"). Computer science and logic have worked out how to avoid side effects in formal systems, but the real world is a different animal.

[0] https://en.wikipedia.org/wiki/Frame_problem
mouse_, over 1 year ago
Yes, a program that spits out shit it reads on the Internet without a hint of understanding is in fact unacceptable. But, similarly to people who do the same, it is entirely unstoppable at this point. The billionaires funding all this seem to have a similar lack of understanding.