
Ask HN: How could setting an AI's goal to be “increase human autonomy” go awry?

3 points by evangow, over 7 years ago
I read Nick Bostrom's book "Superintelligence: Paths, Dangers, Strategies" a while back and found the problem of specifying goals for an AI to be interesting and difficult. He outlines various ways an AI could misconstrue its goals, eventually leading to human extinction. I think setting the goal to be "to increase human autonomy" might get around some of these problems. I'm interested to hear how people think it could go awry, though.

2 comments

schoen, over 7 years ago
I guess a natural question is how to define and measure human autonomy.

If it's the autonomy of each individual human, increasing it without bound will cause existing societies to fall apart quickly (which is potentially fine under some ethical theories), and could create severe danger for other humans, because people can use their enhanced abilities to fight and harm each other.

If it's the autonomy of humanity as a whole, you have to define some way of aggregating preferences or determining the will of humanity as a whole -- already a significant challenge today.
yorwba, over 7 years ago
Humans are obviously most autonomous if they are prevented from contact with the rest of the world and then left to fend for themselves.