
‘Deep learning’ technique enables robot mastery of skills via trial and error

220 points by joshwa almost 10 years ago

12 comments

karpathy almost 10 years ago

Sometimes it's hard to separate signal from noise when you're not part of a field and just hearing about projects/papers, so I wanted to quickly pitch in to say that this is a legitimately ground-breaking approach and line of work that you can expect to hear much more about in the future. It's probably the most exciting robotics/manipulation project I'm currently aware of.

What's exciting here is that the entire system is trained end-to-end (including the vision component). In other words, it's heading towards agents/robots that consist entirely of a single neural net and that's it; there is no software stack at all - it's just a GPU running a neural net "code base", from perception to actuators. In this respect this work is similar to the Atari game-playing agent that has to learn to see while also learning to play the game. Except this setting is quite a lot more difficult in some respects; in particular, the actions in the DeepMind Atari paper are few and discrete, while here the robot is an actual physical system with a very large-dimensional and continuous action space (joint torques). Also, if you're new to the field you might think "why is the robot so slow?", while someone in the field is thinking "holy crap how can it be so fast?"
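The "single neural net from pixels to torques" idea above can be sketched in a few lines. This is a toy numpy forward pass, not the paper's architecture: the layer sizes, image resolution, and 7-joint output are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy end-to-end "visuomotor policy": raw pixels in, continuous joint
# torques out. Real systems use convolutional layers; a tiny MLP keeps
# the sketch short.
class PixelToTorquePolicy:
    def __init__(self, img_pixels=64 * 64, hidden=128, n_joints=7):
        self.W1 = rng.normal(0, 0.01, (img_pixels, hidden))
        self.b1 = np.zeros(hidden)
        self.W2 = rng.normal(0, 0.01, (hidden, n_joints))
        self.b2 = np.zeros(n_joints)

    def act(self, image):
        # image: (64, 64) grayscale frame with values in [0, 1]
        h = np.tanh(image.reshape(-1) @ self.W1 + self.b1)
        return h @ self.W2 + self.b2  # one continuous torque per joint

policy = PixelToTorquePolicy()
frame = rng.random((64, 64))
torques = policy.act(frame)
print(torques.shape)  # (7,)
```

The point of the comment is that this one function is the entire control stack: there is no separate perception, planning, or control module to hand-engineer.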
deepnet almost 10 years ago

Learning motor torques directly from vision is a very important result.

https://youtu.be/EtMyH_--vnU

This talk by Sergey Levine, Pieter Abbeel's postdoc, outlines Berkeley's end-to-end deep-training of visuomotor control in detail.

Here is the paper:

End-to-End Training of Deep Visuomotor Policies, Sergey Levine*, Chelsea Finn*, Trevor Darrell, Pieter Abbeel.

http://arxiv.org/abs/1504.00702
mark_l_watson almost 10 years ago

I probably made a career direction error in the early 1990s. I had been on DARPA's neural network tools advisory panel and written the SAIC Ansim product, but moved on because of a stronger interest in natural language processing. Now, I think deep learning is getting very interesting for NLP.

This UCB project looks awesome!

BTW, I took Hinton's Coursera neural network class a few years ago, and it was excellent. Take it if that course is still online.
dm3 almost 10 years ago

Could someone explain in simple terms how the target is specified to the robot so that it can learn to accomplish the task? For example, what inputs are provided in order for it to understand that it needs to put the cap on the bottle?
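In this line of work the task is usually specified as a cost (or reward) function rather than as explicit instructions: the experimenter provides a target pose, and the robot learns whatever torques minimize accumulated cost. A hypothetical per-timestep cost, not the paper's exact formulation:

```python
import numpy as np

def task_cost(end_effector_pos, target_pos, torques, torque_penalty=1e-3):
    """Toy per-timestep cost: distance to the target plus a small effort
    term. The target (e.g. the bottle-top pose for the capping task) is
    supplied by the experimenter; the learner never sees instructions,
    only this scalar signal."""
    dist = np.linalg.norm(end_effector_pos - target_pos)
    effort = torque_penalty * np.sum(np.square(torques))
    return dist + effort

target = np.zeros(3)
cost_far = task_cost(np.array([0.5, 0.0, 0.2]), target, np.zeros(7))
cost_near = task_cost(np.array([0.01, 0.0, 0.0]), target, np.zeros(7))
print(cost_near < cost_far)  # True
```

Moving toward the target lowers the cost, which is the only sense in which the robot "understands" the goal.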
jonnycowboy almost 10 years ago

It seems most of the code behind this effort is open source as well! http://lfd.readthedocs.org/en/latest/ https://github.com/cbfinn/caffe
alkonaut almost 10 years ago

Wouldn't this benefit from simulation of the task (from the robot's perspective)? Doing something physical over and over again on ONE single robot must be very slow and inefficient compared to if it could be simulated. Even if the simulated training isn't spot on, the physical robot could start off with network weights from millions of attempts in a simulated environment.
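The warm-start idea in the comment above can be illustrated with a toy experiment: pretrain on cheap "simulator" data, then fine-tune on a few expensive "real" trials. Everything here (the linear model, sample counts, learning rates) is invented for illustration; the paper itself trains on the physical robot.

```python
import numpy as np

rng = np.random.default_rng(1)

def train(weights, observations, targets, lr=0.1, steps=200):
    # One linear layer trained with plain gradient descent on MSE.
    W = weights.copy()
    for _ in range(steps):
        pred = observations @ W
        W -= lr * observations.T @ (pred - targets) / len(observations)
    return W

# "Simulator": cheap, so we can sample many transitions.
true_W = rng.normal(size=(4, 2))
sim_obs = rng.normal(size=(1000, 4))
sim_W = train(np.zeros((4, 2)), sim_obs, sim_obs @ true_W)

# "Physical robot": only a handful of noisy, expensive trials -- but the
# warm start begins from the simulator-trained weights, not from scratch.
real_obs = rng.normal(size=(20, 4))
real_targets = real_obs @ true_W + rng.normal(0, 0.05, size=(20, 2))
warm_W = train(sim_W, real_obs, real_targets, steps=20)
cold_W = train(np.zeros((4, 2)), real_obs, real_targets, steps=20)

err_warm = np.linalg.norm(warm_W - true_W)
err_cold = np.linalg.norm(cold_W - true_W)
print(err_warm < err_cold)  # True
```

With the same tiny real-world budget, the warm-started model ends up closer to the true dynamics, which is the intuition behind sim-to-real pretraining.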
beefman almost 10 years ago

I'm impressed it (apparently) learned to align screw caps with a short backward turn at the start.

Then again, why do we make so many containers with these ungainly screw caps? Ever use those caps (popular in Japan) with the locking track that only take a quarter-turn to close? Examples:

http://www.amazon.com/Yu-Be-Moisturizing-Skin-Cream-Skin-1/dp/B0001UWRCI/

http://www.amazon.com/Biotene-PBF-Toothpaste-Ounce-Pack/dp/B00JX73B2A
adekok almost 10 years ago

While it is how humans learn, there's more to human learning than that. Babies are pre-wired to learn language, recognize shapes, determine "intent", etc.

This means that the neural nets used by babies are pre-wired to be good at specific tasks. Then, babies use those neural nets to do "deep learning" for the final part of the process.

Starting from *nothing* and learning how to do a job is a big step. But having *something* would be a better start position. What that something is, though, is hard to define.
lowglow almost 10 years ago

If you're interested in this, I'm putting together a meetup/workshop/lab at the Palace of Fine Arts in SF every weekend. Come out and share, learn, and build with other people interested in this field.

Think of it as the Homebrew Computer Club for Robotics/AI :)

https://www.facebook.com/groups/762335743881364/
Qantourisc almost 10 years ago

It acts very organically. But I have to wonder if the organic motion is a good thing. Wouldn't it be more efficient to control the arm using IK, but let the robot "think" about where the arm should be? I mean, I can easily imagine a straight line, but I can't draw one.

This would also speed it up, IMO, since some things can easily be solved using regular algorithms. Our brains also come with some pre-wired functions.
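For contrast, the classical pipeline this comment alludes to would hand the geometry to an analytic inverse-kinematics solver and only learn *where* to reach. A minimal 2-link planar example using textbook IK formulas (nothing here is from the paper):

```python
import math

def two_link_ik(x, y, l1=1.0, l2=1.0):
    """Analytic inverse kinematics for a 2-link planar arm (elbow-up
    branch): returns joint angles (theta1, theta2) reaching (x, y)."""
    r2 = x * x + y * y
    c2 = (r2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    theta2 = math.acos(max(-1.0, min(1.0, c2)))  # clamp for safety
    theta1 = math.atan2(y, x) - math.atan2(
        l2 * math.sin(theta2), l1 + l2 * math.cos(theta2))
    return theta1, theta2

def fk(t1, t2, l1=1.0, l2=1.0):
    # Forward kinematics, used to verify the IK solution round-trips.
    return (l1 * math.cos(t1) + l2 * math.cos(t1 + t2),
            l1 * math.sin(t1) + l2 * math.sin(t1 + t2))

t1, t2 = two_link_ik(1.2, 0.5)
print(fk(t1, t2))  # ≈ (1.2, 0.5)
```

The end-to-end approach in the article deliberately skips this hand-written layer, betting that a learned pixels-to-torques mapping generalizes better than a fixed kinematic model.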
platz almost 10 years ago

There was recently a Talking Machines episode that included some information (not apparent in the title) about the difficulties of modeling the world with robots:

"We learn about the Markov decision process (and what happens when you use it in the real world and it becomes a partially observable Markov decision process)"

http://www.thetalkingmachines.com/blog/2015/5/21/how-we-think-about-privacy-and-finding-features-in-black-boxes
rasz_pl almost 10 years ago
oh man, that bottle and shoe examples! :o