Why Self-Taught Artificial Intelligence Has Trouble with the Real World

175 points by IntronExon over 7 years ago

16 comments

skywhopper over 7 years ago
Part of the problem is... games have explicitly defined rules, start and end points, boundaries, and discrete "win" and "loss" states (and sometimes "draw"). If the game itself (i.e., all the rules, including the ability to judge "win", "lose", or "draw") can be easily represented in a simple computer program, we shouldn't be surprised that a complex computer program can master the game.

The real world is not a finite problem with explicit rules, obvious boundaries, well-known start conditions, or any way to judge a specific situation as "win", "lose", or "draw". But even if you want to argue that specific tasks can be broken down this way, you still have to be able to represent this subset of reality in the computer before AI magic can even begin to work on the problem.
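(To make this point concrete, here is a minimal Python sketch, not from the comment itself: a complete tic-tac-toe referee. The entire rule set, including the win/lose/draw judgment, fits in a dozen lines — exactly the property the real world lacks.)

```python
# A complete referee for tic-tac-toe: the full rule set, including the
# ability to judge "win", "lose", or "draw", in a few lines of code.
# The board is a 9-element list of "X", "O", or None.

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def judge(board):
    """Return "X", "O", "draw", or None (game still in progress)."""
    for a, b, c in LINES:
        if board[a] is not None and board[a] == board[b] == board[c]:
            return board[a]
    return "draw" if all(cell is not None for cell in board) else None

# Example: X has completed the top row.
print(judge(["X", "X", "X", "O", "O", None, None, None, None]))  # -> "X"
```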
wazoox over 7 years ago
> Imagine asking a computer to diagnose an illness or conduct a business negotiation. "Most real-world strategic interactions involve hidden information," said Noam Brown, a doctoral student in computer science at Carnegie Mellon University. "I feel like that's been neglected by the majority of the AI community."

Hum, Terry Winograd (author of SHRDLU) got out of AI in the 70s because of this very problem. I don't think it's been neglected; it just remained as elusive as, say, quantum gravity.
sgt101 over 7 years ago
Pretty soon someone will discover subsumption architectures. I predict that they will be called Deep Subsumption Architectures and they will be betterer and newerer than the old stupid subsumption architectures and that anyone who speaks against them is stupid and wrong and has no startup and can't work at Google or use a mac and smells and has no paper at NIPS since 1998 and then papers at NIPS were no good and also they don't have a band or a court case against them.
randomerr over 7 years ago
It just comes down to computers thinking in algorithms. Remember when Facebook had two AIs talk to each other? Within a few minutes they broke down from the complexity of English to almost an 8-bit language.

The universe, humans included, doesn't follow these bit-specific algorithms. Yes, people follow trends, but these trends are not cut and dried. Go and chess are. They follow the binary logic of moving pieces on a grid. A computer will never be able to understand the universe unless it can break out of its binary patterns and see things as biological entities do. My speculation is that the only solution is grafted neurons on a floating layer of protein inside a silicon chip.

http://www.independent.co.uk/life-style/gadgets-and-tech/news/facebook-artificial-intelligence-ai-chatbot-new-language-research-openai-google-a7869706.html
raphlinus over 7 years ago
A reminder of a recent discussion here that goes into a lot more detail about why reinforcement learning works well for specialized domains like Go but is having a very hard time generalizing to more "real-world" types of tasks: https://news.ycombinator.com/item?id=16383264
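(As a concrete illustration of why RL shines in such narrow domains, here is a minimal tabular Q-learning sketch on a hypothetical five-state corridor — the environment and names are illustrative, not from the linked discussion. It works precisely because states, actions, rewards, and termination are tiny, explicit, and fully observable.)

```python
import random

# Toy, fully observed environment: states 0..4 on a line. Action 0 moves
# left, action 1 moves right; reaching state 4 yields reward 1 and ends
# the episode. Everything RL needs (states, actions, reward, termination)
# is explicit and enumerable -- the setting where these methods shine.
N_STATES = 5
ACTIONS = (0, 1)
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    nxt = max(0, state - 1) if action == 0 else min(N_STATES - 1, state + 1)
    done = nxt == N_STATES - 1
    return nxt, (1.0 if done else 0.0), done

for _ in range(500):  # training episodes
    s, done = 0, False
    while not done:
        if random.random() < EPSILON:            # explore
            a = random.choice(ACTIONS)
        else:                                    # exploit
            a = max(ACTIONS, key=lambda act: q[(s, act)])
        s2, r, done = step(s, a)
        best_next = 0.0 if done else max(q[(s2, act)] for act in ACTIONS)
        q[(s, a)] += ALPHA * (r + GAMMA * best_next - q[(s, a)])
        s = s2

# The greedy policy should be "move right" (action 1) in every state.
print([max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(N_STATES - 1)])
```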
fizixer over 7 years ago
> ... But researchers are struggling to apply these systems beyond the arcade.

It hasn't been 2 years since AlphaGo v Sedol, and there was a gap of 5 years since Watson, about 5-10 years since self-driving AI (Google, DARPA challenges), and about 19 years since Deep Blue v Kasparov.

Zero-knowledge AI, at the level of arcade games and Go, is barely a few months old.

What is that 'struggle' that you speak of? Does it go by the name 'media wanting a new sensational story every week'?
sixQuarks over 7 years ago
The article brings up some good points, but I believe we're just in an interim phase with AI right now. Eventually, AI will be able to self-learn in areas outside of games and in environments where certain factors are hidden. My guess is that in 5 to 10 years, we will be blown away by some AI abilities.
kazinator over 7 years ago
> Imagine asking a computer to diagnose an illness or conduct a business negotiation.

To beat humans at this, it just has to have a lower misdiagnosis rate.
dwighttk over 7 years ago
The world isn't governed by a few simple rules. (Or at least we don't know the few simple rules the world is governed by yet.)

The world doesn't provide perfect knowledge of itself.
loorinm over 7 years ago
I guess I'm confused about what the goal of all this is. If we wanted a computer that thinks "just like a person", why don't we just get a person?

Is the advantage of the computer that it has no rights to being paid or treated fairly?

If that's the case, we need to decide where the rules are. What if my "AI" is 50% stem cells grown into a real brain and 50% a computer? Is it cool to enslave that too?

What about if an embryo is involved?

The whole AGI thing makes no sense. If the point here is slavery, someone needs to say it.
danans over 7 years ago
The term "self-taught" in the article doesn't really mean self-taught the way we use it for people. For the machines, it is cloned instances of the same program (hence the same objective) working adversarially, perhaps with different initializations.

Humans, or any other biological intelligence, learn adversarially and cooperatively with other entities in the world that are very different from themselves. Our training data set includes not only our own experiences, but those of others.

We also have a trainable objective, which, while rooted in instinct, is very influenced by the information systems we interact with.

I wonder if we'd have more success with AI by allowing the objective itself to be learned after setting a reasonable initial bias.
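(A hedged sketch of what "cloned instances working adversarially" can look like in practice, using a toy game of Nim rather than anything from the article: a learner improves by playing against a periodically refreshed frozen copy of itself.)

```python
import random

# Illustrative self-play sketch (not the article's system): a learner plays
# one-pile Nim (take 1-3 stones; whoever takes the last stone wins) against
# a frozen clone of itself, and the clone is refreshed periodically.
# "Self-taught" here just means copies of the same program supplying each
# other's training signal.
PILE, MOVES = 10, (1, 2, 3)
ALPHA, EPSILON = 0.3, 0.2

q = {(s, m): 0.0 for s in range(1, PILE + 1) for m in MOVES if m <= s}
clone = dict(q)  # frozen opponent: a clone with identical structure

def pick(table, pile, explore):
    legal = [m for m in MOVES if m <= pile]
    if explore and random.random() < EPSILON:
        return random.choice(legal)
    return max(legal, key=lambda m: table[(pile, m)])

for episode in range(20000):
    pile, history, learner_turn = PILE, [], True
    while pile > 0:
        table = q if learner_turn else clone
        move = pick(table, pile, explore=learner_turn)
        if learner_turn:
            history.append((pile, move))
        pile -= move
        learner_won = learner_turn and pile == 0
        learner_turn = not learner_turn
    reward = 1.0 if learner_won else -1.0
    for s, m in history:                # Monte Carlo update toward outcome
        q[(s, m)] += ALPHA * (reward - q[(s, m)])
    if episode % 1000 == 0:
        clone = dict(q)                 # refresh the frozen clone

# Optimal play leaves a multiple of 4 stones: from 10, take 2.
print(pick(q, PILE, explore=False))     # usually 2 after training
```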
norlys over 7 years ago
> "Most real-world strategic interactions involve hidden information."

> "Tay's objective was to engage people, and it did. 'What unfortunately Tay discovered,' Domingos said, 'is that the best way to maximize engagement is to spew out racist insults.'"

So even if the next Tay has "behave in a civilised manner" as an objective function, it will be hard to implement, as the ethical rules we presume in reality are not written out like the rules of a video game. In fact, they involve many grey areas and not so many strict right-or-wrong statements.
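(A small illustrative sketch of the difficulty; every scoring function below is a hypothetical stub, not any real API. Bolting a civility term onto an engagement objective just relocates the problem into a toxicity scorer that, unlike a game's rules, has no crisp definition.)

```python
# All scoring functions here are hypothetical stubs for illustration.

def engagement_score(reply: str) -> float:
    """Stub: a real system would measure replies, clicks, reactions."""
    return float(len(reply.split()))  # placeholder proxy: longer = more engaging

def toxicity_score(reply: str) -> float:
    """Stub standing in for the genuinely hard part: "civilised" has no
    crisp, video-game-style rulebook, only grey areas."""
    banned = {"insult", "slur"}
    return float(sum(word in banned for word in reply.lower().split()))

def objective(reply: str, civility_weight: float = 10.0) -> float:
    # Maximizing raw engagement alone reproduces Tay's failure mode; the
    # penalty term only helps as far as toxicity_score can be trusted.
    return engagement_score(reply) - civility_weight * toxicity_score(reply)

print(objective("a long, rambling, but polite reply about the weather"))  # -> 9.0
print(objective("a pointed insult"))                                      # -> -7.0
```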
mar77i over 7 years ago
I have a reflex, when hearing this kind of thing, to respond "no shit, sherlock". Part of me is just too aware of so-called AI's shortcomings, which are beautifully portrayed by https://imgs.xkcd.com/comics/machine_learning.png

The joke is that business as usual is kind of aware of these issues and, at the same time, to be economic, blissfully ignorant of them.
fiatjaf over 7 years ago
Isn't this point kinda obvious, and hasn't it been touched on repeatedly?
tabtab over 7 years ago
I'd like to see something like Cyc merged with pattern-learning systems. You'd get more common sense and logic to complement "blunt" pattern matching.
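(A minimal sketch of the kind of hybrid this suggests — illustrative only, and not Cyc's actual API: a learned recognizer proposes labels, and a small hand-written knowledge base vetoes proposals that contradict common-sense facts.)

```python
# Hypothetical hybrid: statistical pattern matching checked against a
# tiny symbolic knowledge base of hand-written "common sense" facts.

KNOWLEDGE_BASE = {
    ("penguin", "can_fly"): False,
    ("sparrow", "can_fly"): True,
}

def pattern_recognizer(image_id: str):
    """Stub for a learned model: returns (label, attribute, confidence)."""
    # Pretend the model saw a penguin mid-jump and guessed it was flying.
    return ("penguin", "can_fly", 0.83)

def hybrid_predict(image_id: str):
    label, attribute, confidence = pattern_recognizer(image_id)
    fact = KNOWLEDGE_BASE.get((label, attribute))
    if fact is False:
        # Logic overrides the "blunt" pattern match.
        return f"{label}: rejected '{attribute}' (contradicts knowledge base)"
    return f"{label}: {attribute} ({confidence:.0%})"

print(hybrid_predict("img_001"))  # -> penguin: rejected 'can_fly' ...
```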
steve_tan about 7 years ago
There are multiple reasons, such as: imperfect information in the real world, a big reality gap between simulation and the real world, sample inefficiency, potential risk during trial-and-error in the real world, etc.