
Preventing an AI-Related Catastrophe

15 points, by vardhanw, over 2 years ago

2 comments

alpineidyll3, over 2 years ago
People often think of AGI as an AI which can learn to complete arbitrary tasks better than humans.

Given that we can already produce "an" AI which beats humans at almost every task we come up with (besides synthesis of broad abstract reasoning, à la Chollet), this is probably the only definition that is meaningful, in the sense that it isn't already here.

Why would evading 'alignment' not also be such a task that AGI does better? AGI is like the nuclear deterrent: a technology that's coming, inevitably, and one beyond any amount of philosophical navel-gazing to control or prevent.

AGIs will not be magical; they will have energy demands, construction costs, and environmental limitations.

I think it will be much more useful to ask how people coexist, and what role they serve, in the post-AGI world than to make statements about interpretability or alignment, which will definitely seem silly in retrospect. The machinations of an AGI will be as impossible to understand as human consciousness itself.
jdpel, over 2 years ago
AI-related catastrophes are a sexy example of catastrophes caused by people ceding control of the world to automated systems and the bureaucratic processes of huge organizations (governments, corporations, and the complex macroeconomy as a whole). AI itself isn't a major factor.