Fascinating:<p>"We observed that for precision grasps, such as the Tip Pinch grasp, Dactyl uses the thumb and little finger. Humans tend to use the thumb and either the index or middle finger instead. However, the robot hand’s little finger is more flexible due to an extra degree of freedom, which may explain why Dactyl prefers it. This means that Dactyl can rediscover grasps found in humans, but adapt them to better fit the limitations and abilities of its own body."<p>The learning of "emergent" behavior, specifically when it improves on natural human motion, is one of the main reasons why this type of work is so important. Similar to the way we imitate designs from nature (e.g. wings, suction cups), we can now accelerate development by observing how the bots perform the task in a variety of environments.
Does anyone know a good graduate program/route for this kind of work? My undergrad was CS with some experience in (dumb) robotics and mechanical design but no ML. I am interested in applying ML/CV to physical systems like this, but I am a bit wary of going back to a CS program. I have seen some Mechanical programs with an emphasis on control that let you 'build your own degree'. If I could take a mix of ML/CV, control systems, and kinematics I would be happy. Just looking for some input from people in this field.
Honest question: In the video, it looks like it works, but performs worse than about 90% of humans at the task of rotating a cube.<p>On the other hand, Alpha Go or even a rudimentary chess program does better than 99.99% of all humans.<p>So is it fair to say that deep learning is fundamentally missing something that humans do? Or that chess and Go are "easy" problems in some sense?<p>(It seems like with "unlimited" training hours it could eventually be better than a human? Or is that a hardware issue?)
> Learning to rotate an object in simulation without randomizations requires about 3 years of simulated experience<p>It's interesting to me that this is about the same amount of time it takes humans to develop similar levels of motor control. I don't know enough about AI or neuroscience to say whether it's likely to be a coincidence or not, though.
Very cool. There's also a Times article about Dactyl: <a href="https://www.nytimes.com/interactive/2018/07/30/technology/robot-hands.html" rel="nofollow">https://www.nytimes.com/interactive/2018/07/30/technology/ro...</a>
> Rapid used 6144 CPU cores and 8 GPUs to train our policy, collecting about one hundred years of experience in 50 hours.<p>That seemed an order of magnitude higher than I expected. Is training usually this computationally expensive?
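A quick back-of-envelope check on those numbers (my own arithmetic, not from the post) suggests the per-core load is actually modest:

```python
# Back-of-envelope check of the quoted training throughput.
SIM_YEARS = 100    # quoted simulated experience
WALL_HOURS = 50    # quoted wall-clock training time
CPU_CORES = 6144   # quoted CPU count

sim_hours = SIM_YEARS * 365 * 24      # 876,000 simulated hours
speedup = sim_hours / WALL_HOURS      # aggregate real-time factor
per_core = speedup / CPU_CORES        # real-time factor per core

print(round(speedup))      # -> 17520 (aggregate, ~17,500x real time)
print(round(per_core, 2))  # -> 2.85 (each core simulates ~3x real time)
```

So each core only has to simulate hand physics about three times faster than real time, which seems plausible; the expense comes from running thousands of cores in parallel rather than from any single simulation being heroic.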
Nice.<p>Take a look at position 44, where it seems to get stuck, with no move to make forward progress, and two fingers straight out. Did it lack image recognition to tell it what block rotation was needed?<p>It doesn't seem to work by discovering strategies for rotating the block one face at a time, then combining those. It's solving the problem as a whole. That has both good and bad implications.
>> We’ve trained a human-like robot hand to manipulate physical objects with unprecedented dexterity.<p>To be precise, the "physical objects" appear to invariably be cubes of the same dimensions. Not arbitrary "physical objects". Which is probably the best that can be done by training only in a simulated environment.
I am continually impressed by OpenAI. Every time we think something is too difficult for our current understanding of AI, as with their Dota AI and now this, they show that more can be done with a lot less than previously thought.
Holy cow, the robots are definitely coming. We really are at the ground floor of a technology that is going to change humanity, I am certain of that. Changes greater than any changes we've seen before.
I guess someone has to be the negative one: I can't help feeling its route to the correct face looks entirely accidental (and I don't mean that in a good way)... I'm sure it has "learned" some methods, but they don't look that efficient, reliable, purposeful, or controlled. In a noisier, more dynamic environment I'd expect them to fail. Granted, it's possible this is due more to training conditions than to an inherent limitation of the underlying model.
Link to paper (why no arXiv? :/): <a href="https://d4mucfpksywv.cloudfront.net/research-covers/learning-dexterity/learning-dexterity-paper.pdf" rel="nofollow">https://d4mucfpksywv.cloudfront.net/research-covers/learning...</a><p>TL;DR (quick-ish skim, feel free to correct): they train a deep neural network to control a robot hand by choosing desired joint-state changes (binned into 11 discrete values; e.g. rotate this joint by 10 degrees) for a 20-joint hand, given low-level (non-visual) input: the current and desired 3D orientation of the object and the exact numeric state of the joints. They also train a network to extract the 3D pose of the object from RGB input. All this training is done in simulation with a ton of computation, and they use a technique called domain randomization (changing colors, textures, friction coefficients, and so on) to make these learned models pretty much work in the real world despite being trained only in simulation.<p>It's pretty cool work, but if I may put my reviewer hat on, not that interesting in terms of new ideas; still, it's cool OpenAI is continuing to demonstrate what can be achieved today with established RL techniques and nice distributed compute.
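For anyone curious what domain randomization looks like mechanically, here's a minimal sketch; the parameter names and ranges are my own illustration, not the ones from the paper. The idea is just to perturb the simulator's physics before every training episode:

```python
import random

# Hypothetical simulator parameters to randomize each episode.
# Ranges are illustrative only, not the values used in the paper.
PARAM_RANGES = {
    "friction_scale": (0.7, 1.3),
    "object_mass_scale": (0.8, 1.2),
    "actuator_gain_scale": (0.75, 1.5),
    "observation_noise_std": (0.0, 0.02),
}

def sample_randomized_params(rng=random):
    """Sample one set of physics parameters for a single training episode."""
    return {name: rng.uniform(lo, hi) for name, (lo, hi) in PARAM_RANGES.items()}

# Each episode runs in a differently perturbed simulator, so the policy
# can't overfit to one exact physics model and (hopefully) transfers to
# the real robot, whose true parameters fall somewhere inside the ranges.
params = sample_randomized_params()
assert all(PARAM_RANGES[k][0] <= v <= PARAM_RANGES[k][1]
           for k, v in params.items())
```

The policy never sees the sampled parameters directly; it has to learn behavior that works across the whole distribution, which is what makes the sim-to-real transfer work without ever training on the physical hand.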
> <i>a human-like robot hand</i><p>But why?? Why should robots' hands resemble human hands? They could have any number of fingers, or tentacles, or magnets, why should they be like human hands??<p>It seems "AI" really means "as close as possible to human behavior", even if we're not really that clever in said behavior.<p>Also, human intelligence being at least debatable, it's not obvious that the obsessive imitation of humans is the best way to attain "AI".
This is a great example of why AI innovation is not moving at the pace we are told to believe. This is using the same basic algorithms we've known about for decades, just more compute and differently formulated problems. We need a paradigm shift!
That's very impressive. Robotic grasping is getting pretty good[1] but in-hand manipulation is a whole 'nother kettle of fish and this is really exciting.<p>[1] He said, tooting his employer's horn.
This is a perfect example of how AI is taking the world by storm. I don't know how people don't realize that there will be no jobs left for billions of people.
Yes, billions. Not millions.<p>I don't quite get the "new jobs will be created" fallacy.<p>Let me explain:
What is a job? An abstract way of looking at it: a job is something that requires a set of skills to accomplish a task.
What most politicians don't get: researchers like OpenAI teach machines SKILLS, not jobs.<p>A little thought experiment:
Let's say humans are capable of 100 skills.
Skills can be anything: driving, seeing, hearing, reading, walking, carrying, drawing, etc.<p>Usually, a low-paying job requires little to no training.
For example: someone in a warehouse who picks the items you have ordered. The skills required are walking, picking, and using a device.
A high-paying job usually requires more skills and/or experience.<p>We train machines to see better, hear better, sort faster, etc.
Any new job will require some skills out of the set of skills that can be trained.
But the moment you create this job, it will be automated, because a machine can do it better and faster.<p>We need to address this now, otherwise I don't see a bright future for the generations to come.
Let's be honest - the only thing we care about is "are the programming jobs safe?!" Well, are they?<p>P.S. I am trying to help a newer dev atm, and I realize I always have only one question for them while basically doing their work "What. the. hell. are. the. business. requirements?"<p>Suddenly, this makes me feel much more like a business analyst than a code monkey, though being a decent code monkey is definitely a pre-req.