科技回声 (TechEcho)

A tech news platform built with Next.js, providing global technology news and discussion.


Seriously though – what did Ilya see?

7 points | by jaarse | over 1 year ago

1 comment

Zigurd | over 1 year ago

Of the concerns I have seen expressed, even by people working on AI safety, two biases stand out to me: first, Western sci-fi culture about robot uprisings; second, anthropomorphizing what a machine "mind" would be like. Obviously there are overlaps. But most of the flaws in that kind of thinking come from assuming that an AGI would think like a human, or like any kind of animal.

It won't. It is not alive. It has no "selfish genes." It cannot die of starvation. It is not dead while turned off. What confuses people is that AI algorithms have finally combined with enough compute power to provide a conversational interface that frequently seems more "alive" than some humans. Educated humans do a lot of clever knowledge remixing, which is exactly what generative AI is good at.

It is not like you or any other human. If sci-fi robots just said "Hey, chill, I'll do the dishes," that would have made for a dull movie. We are inventing conflicts in our own minds. And since those minds include AI safety researchers', they are barking up the wrong tree.