
Robo-Ethicists Want to Revamp Asimov’s 3 Laws

15 points by naish almost 16 years ago

11 comments

growt almost 16 years ago
This is pure bullshit! Robots, or AIs, don't have consciousness (and probably never will). And as long as we haven't created a robot/AI with a truly conscious mind, we need not worry about 'punishing' robots or giving them ethical rules. It won't work! I think people who write something like this, or theorize around the topic, have time to kill or other issues, since they're solving a non-existent problem.
jknupp almost 16 years ago
<i>"If you build artificial intelligence but don’t think about its moral sense or create a conscious sense that feels regret for doing something wrong, then technically it is a psychopath," says Josh Hall, a scientist who wrote the book Beyond AI: Creating the Conscience of a Machine.</i><p>What!? How does this possibly work? Will the AI gun say "you still didn't fix that null pointer dereference that caused me to go haywire and kill people, but at least this time when I do it I'll feel bad?" This is one of the most ridiculous quotes I've ever seen.
dpark almost 16 years ago
> Already iRobot's Roomba robotic vacuum cleaner and Scooba floor cleaner are a part of more than 3 million American households. The next generation robots will be more sophisticated and are expected to provide services such as nursing, security, housework and education.

Riiiiight. Next-generation automobiles are also expected to fly.
jacoblyles almost 16 years ago
Can anyone come up with a situation involving robots that isn't already adequately covered by current tort law?
mgenzel almost 16 years ago
The article has little to do with Asimov's laws. Asimov's laws address "three fundamental Rules of Robotics, the three rules that are built most deeply into a robot's positronic brain", i.e., an engineering construct. The article mostly addresses a legal issue: "Accordingly, robo-ethicists want to develop a set of guidelines that could outline how to punish a robot, decide who regulates them and even create a 'legal machine language' that could help police the next generation of intelligent automated devices."

I also love how the article ends with "Morality is impossible to write in formal terms", and yet it expects robots complex enough to warrant the article in the first place.
philwelch almost 16 years ago
I posted this before but it also seems appropriate here. It's David Langford's Three Laws for military robots:

1. A robot will not harm authorized Government personnel but will terminate intruders with extreme prejudice.

2. A robot will obey the orders of authorized personnel except where such orders conflict with the Third Law.

3. A robot will guard its own existence with lethal antipersonnel weaponry, because a robot is bloody expensive.

Langford is an SF writer. You might know him from his "basilisk" stories--if not, go read them sometime.
leecho0 almost 16 years ago
Wow, this is seriously a bunch of bull. It doesn't say anything interesting about the problem at all, just a bunch of speculation and improbable comparisons.

Like, "punishing a robot"...? No AI scheme I've seen has a sense of self-worth. You can give it a negative reward, but the AI doesn't care much about its _current_ reward; it only performs actions to maximize the expected reward. Which means you can tell it to avoid doing these bad things, but punishing it after the fact would just leave it confused about why its reward signal changed, and it would then go back to trying to maximize its rewards.

I, for one, am all for the 3 laws of robotics, but they probably won't work for a much simpler reason -- a robot can't identify the terms. How would an AI recognize when a human is at harm? Would it show the drowning patient a picture of distorted letters and ask what the word is? Or would it jump in to save posters from being damaged? And that's the easy part... how would you define harm? These questions need to be answered before anyone tries to figure out ethics guidelines for robots to follow.

You would seriously learn more about robot ethics from I, Robot than from this poorly written article.
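leecho0's point about reward maximization can be made concrete with a toy sketch. The Python below is purely illustrative and not from the thread: the action names, the reward values, and the epsilon-greedy bandit learner are all invented assumptions. It gives a reward-maximizing agent a single after-the-fact "punishment" and shows how little that moves its running value estimate.

    # Hypothetical epsilon-greedy bandit learner (invented actions/rewards).
    import random

    ACTIONS = ["vacuum", "knock_over_vase"]
    TRUE_REWARD = {"vacuum": 1.0, "knock_over_vase": 1.5}

    def run(steps=1000, epsilon=0.1, seed=0):
        rng = random.Random(seed)
        q = {a: 0.0 for a in ACTIONS}   # estimated value per action
        n = {a: 0 for a in ACTIONS}     # times each action was taken
        punished = False
        for t in range(steps):
            # Mostly exploit the best-looking action, sometimes explore.
            a = rng.choice(ACTIONS) if rng.random() < epsilon else max(q, key=q.get)
            r = TRUE_REWARD[a]
            # One-time "punishment" after the fact, halfway through.
            if t >= steps // 2 and a == "knock_over_vase" and not punished:
                r, punished = -10.0, True
            n[a] += 1
            q[a] += (r - q[a]) / n[a]   # incremental running mean
        return q

    print(run())

With these made-up numbers, the single -10 nudges the vase-knocking estimate from roughly 1.5 to about 1.47, still well above the 1.0 for vacuuming, so the agent never changes its behavior; only rewriting the reward function itself would.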
dantheman almost 16 years ago
Asimov's 3 Laws were there to provide constraints for a story to exist -- they are the equivalent of a dead body found in a locked room.
BRadmin almost 16 years ago
I'm far removed from the AI industry, but even I was under the impression that Asimov's 3 Laws had been considered outdated for decades.
ars almost 16 years ago
Is a dog a biological robot?

If I program a robot to act like a dog, and express pain when kicked -- does the robot actually feel pain? Does the biological dog actually feel pain?
Periodic almost 16 years ago
Robots--for the foreseeable future--don't have a psyche, and thus cannot become psychopathic.