
Ask HN: Did Tay, Microsoft AI, give you a sneak peek of how dangerous AI can be?

1 point by adarsh_thampy about 9 years ago
AI is great. But how much is too much? What happens when it can learn on its own and reach conclusions it thinks are right?

Reference: http://arstechnica.com/information-technology/2016/03/tay-the-neo-nazi-millennial-chatbot-gets-autopsied/

2 comments

iaml about 9 years ago
Half of the responses from Tay were drawn from Twitter history or the "repeat after me" command. It was a cute little experiment, but not really a display of how real AI will behave. Check out this article[1]; I think more people should read it before jumping to conclusions.

[1] http://smerity.com/articles/2016/tayandyou.html
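
The "repeat after me" failure iaml describes is mechanical rather than cognitive: a bot that stores raw user input and honors a verbatim-repeat command has no judgment to corrupt in the first place. A minimal Python sketch of that failure mode (the ParrotBot class and its behavior are hypothetical; Tay's actual implementation was never published):

    import random

    class ParrotBot:
        """Toy chatbot that absorbs raw user input with no moderation."""

        def __init__(self):
            self.corpus = []  # phrases stored verbatim from users

        def handle(self, message: str) -> str:
            # The reported exploit: a literal-repeat command bypasses
            # any notion of the bot "deciding" what to say.
            if message.lower().startswith("repeat after me:"):
                reply = message.split(":", 1)[1].strip()
            elif self.corpus:
                # Otherwise parrot something previously absorbed.
                reply = random.choice(self.corpus)
            else:
                reply = "hello!"
            self.corpus.append(message)  # no filtering before "learning"
            return reply

    bot = ParrotBot()
    print(bot.handle("repeat after me: [any abusive text]"))  # echoed verbatim

Nothing in this sketch reasons or concludes; the output is just whatever unfiltered input was supplied, which is the substance of iaml's point about why Tay's meltdown says little about how real AI will behave.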
smt88 about 9 years ago
Tay did not come to logical (or racist) conclusions. It was taught to be anti-social. Humans had to make it that way.

Much like weaponized diseases, AI will just be a *very* powerful tool that humans can misuse. Hopefully, like nuclear weapons (incredible power, but highly exclusive), AI will be incredibly difficult for the average person to use in a malicious way.