Human-level control through deep reinforcement learning

209 points by daisystanton, about 10 years ago

17 comments

erostrate, about 10 years ago
The code is online if you want to play with it: https://sites.google.com/a/deepmind.com/dqn/

If you're interested, one of the main authors (David Silver) teaches a very good and intuitive introductory class on reinforcement learning at UCL: http://www0.cs.ucl.ac.uk/staff/d.silver/web/Teaching.html
bmh100, about 10 years ago
> ...the authors used the same algorithm, network architecture, and hyperparameters on each game...

This is huge. It shows that the algorithm was able to generalize across multiple problem sets within the same domain of "playing Atari 2600 games", and was not simply a "lucky" choice of algorithm, network architecture, or hyperparameters that a per-game random search might have produced. This is also not a violation of the No Free Lunch (NFL) theorem [1], because the domain is limited to playing Atari 2600 games, which share many characteristics.

[1]: https://en.wikipedia.org/wiki/No_free_lunch_in_search_and_optimization
sjtrny, about 10 years ago
Watch it play:

http://www.nature.com/nature/journal/v518/n7540/extref/nature14236-sv1.mov

http://www.nature.com/nature/journal/v518/n7540/extref/nature14236-sv2.mov
superfx, about 10 years ago
Here's a publicly accessible link to the full paper: http://rdcu.be/cdlg
j_m_b, about 10 years ago
It is interesting how they are using various biological models to develop their own model. They gave their model a reward system and a memory. It will be interesting to see how far deep Q-networks can be extended and at what point they hit the wall of diminishing returns.

| Nevertheless, games demanding more temporally extended planning strategies still constitute a major challenge for all existing agents including DQN.

| Notably, the successful integration of reinforcement learning with deep network architectures was critically dependent on our incorporation of a replay algorithm involving the storage and representation of recently experienced transitions.

I am not sure what data the replay algorithm has access to, but I wonder what happens if you extend the amount of data it has. That might be where this algorithm hits the brick wall of diminishing returns.

It would be interesting to hear what the authors think could help improve how their model deals with temporally extended planning strategies.

As someone who grew up on Atari, Nintendo, and Sony, this is pretty cool work.
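For context, the "replay algorithm" in that quote is experience replay: the agent stores recent (state, action, reward, next state) transitions in a fixed-size memory and trains on random minibatches drawn from it, which breaks up the correlation between consecutive frames. A minimal sketch of the idea in Python (the class and defaults here are illustrative, not DeepMind's code, though the paper reports a memory of the one million most recent transitions and minibatches of 32):

    import random
    from collections import deque

    class ReplayBuffer:
        """Fixed-size store of recent transitions, sampled uniformly at random."""

        def __init__(self, capacity=1_000_000):
            # Oldest transitions are evicted automatically once capacity is reached.
            self.memory = deque(maxlen=capacity)

        def add(self, state, action, reward, next_state, done):
            self.memory.append((state, action, reward, next_state, done))

        def sample(self, batch_size=32):
            # Uniform random sampling decorrelates consecutive frames,
            # which stabilizes training of the Q-network.
            return random.sample(self.memory, batch_size)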
albertzeyer, about 10 years ago
An interesting critique of this publication by Schmidhuber:

https://plus.google.com/100849856540000067209/posts/eLQf4KC97Bs
nl, about 10 years ago
Is this a different paper to the original DeepMind video game paper? http://arxiv.org/abs/1312.5602
discardorama, about 10 years ago
Is there a chance this paper will be available as a PDF? I'm finding it difficult to read the ReadCube version. :-(
javierluraschi, about 10 years ago
I think Q-learning is really interesting. Yesterday I posted a simple implementation/demo of Q-learning in JavaScript. This paper goes way beyond Q-learning by deducing states from the actual game rendering with a deep neural network, which is really cool. Regardless, as a first intro to Q-learning I had fun putting this together: https://news.ycombinator.com/item?id=9105818
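For anyone who wants the one-line version of plain Q-learning before the deep part: you keep a table of Q(state, action) values and nudge each entry toward the observed reward plus the discounted value of the best next action. A minimal tabular sketch in Python (the function and its learning rate and discount defaults are illustrative):

    from collections import defaultdict

    # Q[state][action] -> estimated long-term value of taking `action` in `state`
    Q = defaultdict(lambda: defaultdict(float))

    def q_update(state, action, reward, next_state, actions,
                 alpha=0.1, gamma=0.99):
        # Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        best_next = max(Q[next_state][a] for a in actions)
        Q[state][action] += alpha * (reward + gamma * best_next - Q[state][action])

DQN replaces the table with a convolutional network that maps raw pixels to Q-values, which is what lets it scale to Atari screens.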
javierluraschi, about 10 years ago
Here is the marketing side of this publication, in which Google scientists (acqui-hired from DeepMind) have developed a way to outperform humans at Atari games: http://m.phys.org/news/2015-02-hal-bests-humans-space-invaders.html
plinkplonk, about 10 years ago
Is the paper available anywhere to read without having to pay Nature? From the comments it seems as if everyone is able to read this but me! Even in their "ReadCube" access method, only the first page is (barely) visible; the rest seems blurred.
nl, about 10 years ago
The most interesting thing about this is that it shows significant progress towards goal-oriented AI. The fact that this system is effectively learning what "win" means in the context of a game is something of a breakthrough.
craftit, about 10 years ago
It is an amazingly powerful technique. We've been working on a service which lets you do this kind of learning with any JSON stream. You can see a demo here:

https://aiseedo.com/demos/cookiemonster/
viggity, about 10 years ago
Can someone convert "academia nerd language" down one notch into "regular nerd language"? On the surface this sounds interesting, but despite being a huge nerd I'm not really sure what the hell they're talking about.
sharemywin, about 10 years ago
PDF:

http://arxiv.org/pdf/1312.5602v1.pdf
eveningcoffee, about 10 years ago
I am wondering what kinds of real-life problems could be modelled this way.
Someone, about 10 years ago
For comparison: http://www.cs.cmu.edu/~tom7/mario/. That is way more of a hack, but I am not sure this is that big a step forward. Space Invaders and Breakout aren't the hardest games, and I haven't heard a convincing argument that it is just a matter of scale to create a machine that, say, plays chess.