Robot Dexterity Still Seems Hard

72 points by mhb 17 days ago

8 comments

sashank_1509 17 days ago
Recently I had a chance to listen to a set of talks on the technology powering Waymo. I think the average academic roboticist will be shocked by the complete lack of end-to-end deep learning models, or even large models, powering Waymo. It’s interesting to me that the only working self-driving car on the market right now has basically painstakingly listed every possible road obstacle, coded every possible piece of driving logic, and manually addressed every edge case. Maybe Tesla’s end-to-end approach will work and will be the way forward, but the real world seems to provide an almost limitless number of edge cases that neural networks don’t seem great at handling. In fact, if Waymo proves to be the right approach, the winning approach to humanoids might be listing every possible item a humanoid can encounter in an environment, detecting those items, and then planning around them.
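A minimal sketch of the contrast being described; the object classes, handler names, and model interface are invented for illustration, not Waymo's or Tesla's actual code:

    # Enumerative approach: detect known object classes and dispatch to
    # hand-written handlers; every new edge case means writing a new rule.
    HANDLERS = {
        "pedestrian":      lambda obj: "yield",
        "traffic_cone":    lambda obj: "change_lane",
        "stopped_vehicle": lambda obj: "slow_and_pass",
    }

    def fallback(obj):
        return "brake"  # the edge case nobody has listed yet

    def enumerative_plan(detections):
        return [HANDLERS.get(obj["class"], fallback)(obj) for obj in detections]

    # End-to-end approach: one learned function from raw sensors to controls.
    def end_to_end_plan(camera_frames, model):
        return model(camera_frames)  # steering/throttle straight from pixels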
levocardia 17 days ago
Surprised that there isn't any explicit discussion of *why* dexterity is so hard, beyond sensory perception. One of the root causes (IMHO the biggest one) is that modeling contact, i.e. the static and dynamic friction between two objects, is extremely complicated. There are various modeling strategies, but their results are highly sensitive to various tuning parameters, which makes contact very hard to learn in simulation. From what I remember, the OpenAI Rubik's Cube solver basically learned across a giant set of worlds with many different possible tuning parameters for the contact models, and was able to generalize okay to the real world in various situations.

It seems most likely that this sort of boring domain randomization will be what works, or works well enough, for solving contact in this generation of robotics, but it would be much more exciting if someone figured out a better way to learn contact models (or a latent representation of them) in real time.
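A minimal sketch of that kind of domain randomization, assuming a generic simulator/policy interface; the parameter names and ranges are illustrative, not OpenAI's actual setup:

    import random

    # Train across many simulated "worlds", each with randomly drawn
    # contact-model parameters, so the policy generalizes to the real
    # world's unknown parameters. Ranges here are invented.
    def sample_contact_params():
        return {
            "static_friction":   random.uniform(0.5, 1.5),
            "dynamic_friction":  random.uniform(0.3, 1.2),
            "restitution":       random.uniform(0.0, 0.3),
            "contact_stiffness": random.uniform(1e3, 1e5),
        }

    def train_with_domain_randomization(policy, make_sim, n_episodes):
        for _ in range(n_episodes):
            sim = make_sim(sample_contact_params())  # new world each episode
            policy.update(sim.run(policy))           # any RL update rule
        return policy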
iandanforth 17 days ago
For some perspective: we have not yet scaled robot training. The amount of data Pi is using to train their impressively capable robots is in the range of thousands of hours. In contrast, language models are trained over trillions of tokens comprising the entirety of human knowledge. So if you're saying things like "this still seems hard", just remember we have yet to hit this with the data hammer. Simulation is proving a great way to augment / bootstrap robot dexterity, but it still pales in comparison to data from the real world. So, as the author points out, we may get capability scaling like Waymo, where one company painstakingly collects real data over a decade, but we may also see rapid progress in simulators and simulator *speed* overtake it for practical household / industrial tasks. My bet is on the latter.
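A back-of-envelope version of that gap; the 50 Hz control rate and one-token-per-step encoding are assumptions, not Pi's actual numbers:

    robot_hours  = 10_000        # "thousands of hours" of robot data
    steps_per_hr = 50 * 3600     # assume 50 Hz control, ~1 token per step
    robot_tokens = robot_hours * steps_per_hr   # 1.8e9
    llm_tokens   = 10 ** 13                     # "trillions of tokens"
    print(f"gap: {llm_tokens / robot_tokens:,.0f}x")  # ~5,556x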
MisterTea 17 days ago
You need feedback. I started with industrial robotics in the 90s and have since done a bunch of CNC and motion control: positioning is easy. The big problem to solve is enabling the robot to feel what it's doing and understand how that relates to the coordinate space. That's why we're dexterous: we can close our eyes and feel our hands in 3D space instead of just knowing a position in some coordinate system. We can put on a pair of gloves without looking, by feel alone. I picture a robot arm as similar to when your arm goes numb from sleeping on it. You can see it, but it's dead. That's how a robot feels.
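A toy illustration of the difference, assuming hypothetical sensor and actuator hooks: servo on measured contact force instead of just commanding a position:

    # Proportional grip controller: "feel" via a force sensor rather than
    # driving to a fixed position. read_force/set_velocity are hypothetical
    # hardware hooks, not a real robot API.
    def grip(target_force_n, read_force, set_velocity,
             kp=0.05, tol=0.1, max_steps=10_000):
        for _ in range(max_steps):
            error = target_force_n - read_force()
            if abs(error) < tol:
                set_velocity(0.0)     # holding steady at the target force
                return True
            set_velocity(kp * error)  # squeeze harder, or back off
        set_velocity(0.0)
        return False  # never converged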
Zigurd 17 days ago
There are half a dozen successful commercially available surgical robot products out there. None try to mimic a surgeon's hands.

Even if biomimicry turns out to be a useful strategy in designing general-purpose robots, I would bet against humans being the right shape to mimic. And that's assuming general-purpose robots will ever be more useful than robots designed or configured for specific tasks.
beefnugs 17 days ago
It's because, after they saw how big a sucker everyone is for "AI", of course they can sell dumbasses a $60k vaguely human-shaped thing that still won't be able to do laundry or dishes, or answer the door, or screw in a screw, or step over a puppy.
DGAP 17 days ago
Do these challenges apply to surgical robots? There's a lot of interest in essentially creating automated da Vincis, for which there is a great deal of training data and for which the robots are pre-positioned.

Maybe all this setup means that completing surgical tasks doesn't count as dexterity.
m3kw9 17 days ago
Just today I noticed that, without looking, I could tell by feel that there were two objects in a bag instead of one. That tells me we likely have 1000x more types of sensors than robots do, and we combine them all to form meaning; dexterity goes hand in hand with that.
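A toy sketch of that kind of multi-channel fusion; the channels, counts, and confidences are invented for illustration:

    # Blend independent noisy "channels" (pressure, vibration, shear...)
    # into one estimate of how many objects are in the bag.
    def fuse(channels):
        # channels: {name: (estimated_count, confidence in 0..1)}
        total = sum(conf for _, conf in channels.values())
        blended = sum(n * conf for n, conf in channels.values()) / total
        return round(blended)

    print(fuse({"pressure": (2, 0.9), "vibration": (2, 0.6), "shear": (1, 0.2)}))
    # -> 2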