
Tool AIs want to be agent AIs

153 points by ogennadi over 8 years ago

10 comments

eduren over 8 years ago

I highly recommend the book referenced in the article: Nick Bostrom's *Superintelligence*.

https://www.amazon.com/Superintelligence-Dangers-Strategies-Nick-Bostrom-ebook/dp/B00LOOCGB2

It has helped me make informed, realistic judgments about the path AI research needs to take. It and related works should be in the vocabulary of anybody working towards AI.
visarga over 8 years ago

Check out Gwern's Reinforcement Learning subreddit. He's practically supporting this subreddit by himself.

https://www.reddit.com/r/reinforcementlearning/
Eliezer over 8 years ago
This is excellent. If you want to see what real discussion of an AGI alignment issue looks like, please read this.
skybrian over 8 years ago

What about the relative size of the available datasets? It seems like that would make offline learning much more valuable than learning directly from experience.

The largest publicly available research datasets for machine translation are 2-3 million sentences [1]. Google's internal datasets are "two to three decimal orders of magnitudes bigger than the WMT corpora for a given language pair" [2].

That's far more data than a cell phone's translation app would receive over its entire lifetime. Similarly, the amount of driving data collected by Tesla from all its cars will be much larger than the data received by any single car.

This suggests that most learning will happen as a batch process, ahead of time. There may be some minor adjustments for personalization, but it doesn't seem like it's enough for Agent AI to outcompete Tool AI.

At least so far, it seems far more important to be in a position to collect large amounts of data from millions of users than to learn directly from experience, which happens slowly and expensively.

This is not about having a human check every individual result. It's about putting a software development team in the loop. Each new release can go through a QA process where it's compared to the previous release.

[1] https://github.com/bicici/ParFDAWMT14
[2] https://research.googleblog.com/2016/09/a-neural-network-for-machine.html
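The scale gap skybrian describes can be put in back-of-the-envelope terms. The corpus size and multiplier below come from the figures quoted in the comment; the per-phone usage numbers are invented for illustration only:

```python
# Rough scale comparison; illustrative assumptions, not measured values.
wmt_corpus_sentences = 2_500_000       # ~2-3M public WMT sentence pairs
google_multiplier = 10 ** 2.5          # "two to three decimal orders of magnitude"
google_corpus_sentences = wmt_corpus_sentences * google_multiplier

# Hypothetical on-device experience: ~20 translated sentences per day
# over a three-year phone lifetime.
phone_sentences = 20 * 365 * 3

print(f"Google-scale corpus:   ~{google_corpus_sentences:,.0f} sentences")
print(f"Single phone lifetime: ~{phone_sentences:,} sentences")
print(f"Ratio: ~{google_corpus_sentences / phone_sentences:,.0f}x")
```

Even with generous per-device assumptions, the centrally collected corpus is tens of thousands of times larger than what any single device sees, which is the core of the batch-learning argument.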
daveguy over 8 years ago

I firmly believe that general AI will not be developed without agency for the AI. An "info only" helper AI (the Tool AI) means that information has to be added manually by some intelligent agent (human or otherwise); there can be no exploration of how actions and interactions affect results.

Tool AIs will never "want" anything, because the meaning of "want" will be completely foreign to them.
PaulHoule over 8 years ago

If you have a decision process of some kind, you can get more value out of it if you can link it to a utility function, so that the "tool" tries to maximize the value it creates.

It's tough enough to get value out of A.I. that this trick should not be left on the table. Thus Tool A.I.s need to be Agent A.I.s to maximize their potential.
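The tool/agent distinction PaulHoule is drawing on can be sketched in toy form: a "tool" reports predicted outcomes and leaves the choice to a human, while an "agent" wraps the same predictor in a utility function and takes the argmax itself. All names here (`predict`, `utility`, the action labels) are hypothetical illustrations, not anyone's actual API:

```python
from typing import Callable, Dict, List

def tool_ai(predict: Callable[[str], float], actions: List[str]) -> Dict[str, float]:
    """Tool mode: report predicted outcomes for each action; a human decides."""
    return {a: predict(a) for a in actions}

def agent_ai(predict: Callable[[str], float],
             utility: Callable[[float], float],
             actions: List[str]) -> str:
    """Agent mode: the same predictor, plus a utility function and an argmax."""
    return max(actions, key=lambda a: utility(predict(a)))

# Toy predictor: made-up outcome scores for three actions.
scores = {"ship_now": 0.4, "run_more_tests": 0.7, "rewrite": 0.2}
predict = scores.get

print(tool_ai(predict, list(scores)))                # human reads, then chooses
print(agent_ai(predict, lambda v: v, list(scores)))  # picks "run_more_tests"
```

The only difference between the two functions is who applies the utility function and takes the maximum, which is why the step from tool to agent is so cheap once the predictor exists.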
EGreg over 8 years ago

The question is what happens when "what we want" is replaced with "what you should want if you were more intelligent". Sorry, Dave.
jessriedel over 8 years ago

It seems most of these arguments apply equally well to the problem of solving AI value alignment, or of preventing the development of AI at all. (I.e., it's cheaper and faster to race ahead without worrying about value alignment.) But that doesn't make us conclude that value alignment is impossible, just hard to achieve soon enough in the real world.

Yes, we should be aware of the limitations and market instability of tool AI, but I think it's unjustified to suggest that tool AI is essentially impossible (a "highly unstable equilibrium") and that all we can hope to do is solve value alignment.
leblancfg over 8 years ago

Glad to see this articulated so well. The 'Overall' paragraph sums up thoughts that had been in the back of my mind for months. Plus, hey, it's Gwern. If you're reading this: you're an inspiration, man.

What is still unclear -- to me at least -- are the technical challenges that lie ahead of this "neural networks all the way down" approach. I get the impression we'll need quite a few breakthroughs before usable Agent AIs are a thing: insights on the same order of importance as, say, backpropagation and using GPUs.
anon987 over 8 years ago

I think all AIs in my lifetime will simply be Chinese Rooms.

https://en.wikipedia.org/wiki/Chinese_room