
Stanford research: Natural language AI models are bias-prone

1 point by aluciani about 3 years ago

1 comment

PaulHoule about 3 years ago
It's worse than that. Neural network models are a pile of biases that (sometimes) seem to understand because (1) biases are on balance partially true, (2) our own understanding is rife with biases, and (3) we tend to see ourselves mirrored in the environment.

https://www.themarysue.com/things-that-look-like-faces-pareidolia/

Just as all of the other fundamental principles of computer science have been abandoned by the neural net cult, basic principles of cybernetics such as Ashby's Law are forgotten:

https://www.edge.org/response-detail/27150

Real understanding involves multiple dimensions, but one of them is a process like SAT solving that checks the consistency of an "understanding" against the system's database of world knowledge. Even in the early 1970s the symbolic AI community had some understanding of what the "gap" was; today there is blind faith that if you throw enough computational power at it, neural networks will overcome it, with no consideration of what structural features are necessary.
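
The SAT-style consistency check the comment gestures at can be made concrete with a toy sketch. The following Python is purely illustrative (it is not from the Stanford research or any system the comment names): world knowledge and a candidate claim are both encoded as propositional CNF clauses, and a brute-force satisfiability test rejects claims that contradict the knowledge base. The variable names, the clause encoding, and the example facts are all invented assumptions.

from itertools import product

# A literal is a (variable, polarity) pair; a clause is a set of literals
# and is satisfied when at least one literal matches the assignment.
# Toy knowledge base (invented for illustration): "birds fly unless they
# are penguins", "penguins don't fly", "penguins are birds".
KNOWLEDGE = [
    {("bird", False), ("penguin", True), ("flies", True)},  # bird AND NOT penguin -> flies
    {("penguin", False), ("flies", False)},                 # penguin -> NOT flies
    {("penguin", False), ("bird", True)},                   # penguin -> bird
]

def satisfiable(clauses):
    """Brute-force SAT check: try every truth assignment over the variables."""
    variables = sorted({var for clause in clauses for var, _ in clause})
    for values in product([True, False], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if all(any(assignment[var] == polarity for var, polarity in clause)
               for clause in clauses):
            return True
    return False

def consistent_with_knowledge(candidate_clauses):
    """Does a candidate 'understanding' cohere with the world model?"""
    return satisfiable(KNOWLEDGE + candidate_clauses)

# A claim asserting a flying penguin contradicts the knowledge base:
print(consistent_with_knowledge([{("penguin", True)}, {("flies", True)}]))  # False
# A flying bird is consistent with it:
print(consistent_with_knowledge([{("bird", True)}, {("flies", True)}]))     # True

Real systems would use an actual SAT/SMT solver and a far richer encoding; the point of the sketch is only the shape of the check: a claim is accepted when the union of the claim and the knowledge base remains satisfiable.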