科技回声 (TechEcho)

Do we understand ethics well enough to build “Friendly artificial intelligence”?

42 points · by pldpld · about 14 years ago

8 comments

aothman · about 14 years ago
As an AI researcher, I think obstacles like "not having your robot fall over all the damn time" are a little more immediate than robots having a nuanced understanding of ethics. I can understand why this stuff is fun to think about and debate, but it's just not relevant at all to where AI is going to be for the next 50 (or 100, or probably 200) years.
vessenes · about 14 years ago
Best suggestion of the article is that we scorn AI researchers who do not have a credible claim that their designs will maintain a basic agreed-on value system after a billion self-managed iterations and upgrades by the AI.

This is a fascinating and broad-ranging criticism of AI, and it's interesting to me because the author is clearly considering 'what happens if we are successful?'.

Definitely worth a read.
Hipchan · about 14 years ago
Haha, it's impossible.

Absolute power corrupts absolutely. The goal is to make AI more powerful than humans, is it not? We're not going to be able to control it, no way no how.
A1kmm · about 14 years ago
The thing is, fully autonomous AIs will most likely be tested on a simulated world (maybe at a smaller scale) before they have any kind of real world influence.

Real world resources are finite, and real world processes with real world materials take a certain finite amount of time. The singularity therefore ignores the realities of physics. It would even be possible to add artificial constraints on the total resource use and rate of resource use.
geuis · about 14 years ago
So I'm not a person who actively writes AI software, but I am a knowledgeable supporter. I'm all for the Singularity, rights for future non-human intelligences, etc.

So I *always* take issue with these kinds of esoteric debates about how to engineer ethics into an intelligence that can learn and become conscious.

Haven't any of these yahoos ever had kids or owned a pet dog?

You don't "engineer ethics" into your son or daughter. You teach them through examples of good behavior, punish them when they misbehave, and reward them when they succeed. Over the course of a few years, given a good environment, the end result is a new young intelligence that knows how to behave well and get along with others. That intelligence often goes on to bootstrap itself up into adulthood and eventually creates later iterations of itself. If it was raised well, then the new ones tend to get raised well too. We call them "grandkids".

So let's assume in 10-20 years something descended from IBM's Blue Brain (simulating cat cortexes) leads to something analogous in intellectual range to a dog or an elephant.

Most people will agree that dogs and elephants are pretty damn smart. Dogs are able to perceive human emotional states, understand some language, do work for people, and fit nicely into our social structure. Elephants aren't that close with people, but are highly intelligent, have active internal emotional states, and even grieve for their dead. In some societies, people and elephants have worked together for thousands of years.

In both these cases, we have a long history of working with other intelligences of varying scales. In general, if you don't mistreat them, they turn out to be socialized pretty well. It's only when you mistreat them that they learn to fear and hate you. The same is true for people.

So as @aothman said in another comment in this thread, AI researchers are just trying to get their projects to not fall over. There's no thought of "engineering ethics". This problem is going to be solved one little bit at a time. Artificial neural architectures are going to become more and more sophisticated over time. But there is a key difference between the underlying architecture and how you go about training these new minds.

If you raise them well, then most of these angels-on-a-pin discussions are just that: meaningless.
protomyth · about 14 years ago
I don't think we understand debugging well enough to start building "friendly A.I.", much less in robot form.
nazgulnarsil · about 14 years ago
No. And the likelihood of someone building a friendly AI first, when that is harder than building an unfriendly AI, seems minuscule. Cya humanity, sucked while it lasted anyway.
HockeyBiasDotCo · about 14 years ago
No. Not close yet...