The video is pretty unsettling, kind of showing how strong robots can be compared with humans. (We already knew this, but it’s good to remember.)<p>Reminds me a bit of the chess robot that broke a child’s finger: <a href="https://www.theguardian.com/sport/2022/jul/24/chess-robot-grabs-and-breaks-finger-of-seven-year-old-opponent-moscow" rel="nofollow">https://www.theguardian.com/sport/2022/jul/24/chess-robot-gr...</a><p>What annoyed me at the time was them describing the child as having broken some “rule” about waiting for the robot or something.<p>We should reject this framing. Robots need to be safe and reliable if we’re going to invite them into our homes.
Why does this article read like the robot <i>actually</i> got angry at a spectator? It did not; it does not have that capacity.<p>This was definitely a glaring safety issue, and the company should review all the failure modes that show up in public, but an "emotional" response this was not.
Video of the incident: <a href="https://www.youtube.com/watch?v=0JiOAqNpxlE" rel="nofollow">https://www.youtube.com/watch?v=0JiOAqNpxlE</a>
Reminds me of that time in Russia at the chess tournament, where they repurposed an industrial robot to play chess and it broke a kid's finger.<p>Also reminds me of when Uber got kicked out of California for testing self-driving cars, so they moved to Arizona and promptly killed a woman.<p>I guess it's not surprising that safety is taking a back seat in robotics development everywhere in the world. It's a mad race for profits of untold scale. But it would be so great if the companies that win were the companies that don't fumble on human safety, perhaps taking a slower approach, but one that kills/maims fewer people.
Another news article with video: <a href="https://www.odditycentral.com/news/humanoid-robot-appears-to-attack-crowd-at-popular-chinese-festival.html" rel="nofollow">https://www.odditycentral.com/news/humanoid-robot-appears-to...</a>
Years ago when I was a student at uni I volunteered to take part in a research study with robots.<p>I went to a rented house near campus where they had a normal living room set up and sat me down on a dining chair in the room and handed me a box with a button on it.<p>"The robot will approach you. Just press the button when you feel like it is getting too close" they said.<p>They left the room so I was alone, and a few minutes later the wheeled robot entered the room and started slowly but deliberately to move towards me.<p>Let's just say the robot got too close.<p>I was sat there alone as the robot moved towards me. I was frantically mashing at the button but it did not stop until it actually collided with my feet and then stopped.<p>To this day I am not sure if it was meant to stop or not, or even if it was a robotics research project at all or actually a <i>psychology</i> research project.<p>In hindsight it was as terrifying as it sounds. Still, I got £5 for it.
Seems to me that it loses its balance and extends its arms to rebalance itself, similar to what happens in their demo videos: <a href="https://www.youtube.com/watch?v=GtPs_ygfaEA&t=24" rel="nofollow">https://www.youtube.com/watch?v=GtPs_ygfaEA&t=24</a> I've worked with their robot dogs before, and they kick their legs really fast when they sense they are falling over.
Hopefully this starts a discussion/trend toward failure-tolerant robotics. As we have seen with commercial aircraft, relying on a single sensor (or not being able to tolerate the failure of a single sensor) can spell trouble, even tragedy.<p>Having been involved in failure-tolerant design for mechanical, electronic and software systems, I think I can say that this is an aspect of engineering that is well understood by those working in industries that require it.<p>Generalizing, perhaps unfairly, I imagine that most engineers working on this class of robot have had little, if any, exposure to failure-tolerant designs. They cost more, require more attention and analysis, and demand lots of testing. However, as robots of many forms interact with humans, this type of resiliency will become critically important.<p>A practical home or warehouse robot that can lift and manipulate useful weights (say, 20 or 30 kg) will have enough power to seriously hurt someone. If a single sensor failure, disconnection or error can launch it into uncontrolled behavior, the outcome could be terrible.
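A common pattern from those industries is redundancy with voting: read three independent sensor channels and take the median, so no single failed channel can directly steer the actuators. A minimal sketch in Python (the values, thresholds and names are invented for illustration, not any real robot's API):

```python
from statistics import median

STUCK_LIMIT = 5  # consecutive rejections before a channel is declared failed


class VotingSensor:
    """Fuse three redundant channels so one failure can't corrupt the output."""

    def __init__(self, max_disagreement):
        self.max_disagreement = max_disagreement
        self.reject_counts = [0, 0, 0]

    def read(self, samples):
        # Median voting: one wild reading can never be the selected value.
        voted = median(samples)
        for i, s in enumerate(samples):
            if abs(s - voted) > self.max_disagreement:
                self.reject_counts[i] += 1
            else:
                self.reject_counts[i] = 0
        # If any channel disagrees persistently, report unhealthy so the
        # controller can command a safe stop instead of carrying on.
        healthy = all(c < STUCK_LIMIT for c in self.reject_counts)
        return voted, healthy


imu = VotingSensor(max_disagreement=0.5)
value, ok = imu.read([0.10, 0.12, 9.99])  # third channel clearly faulty; vote is 0.12
```

The point isn't this particular scheme; it's that the failure of any one input has a defined, bounded effect rather than driving the output directly.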
Tangentially, this is the sort of thing that bothers me most about AI. Well, second most. The first is it being abused by humans to do terrible things. The second is it being built and maintained by humans, where a thing can easily malfunction in ways the people building and maintaining it can't comprehend, predict, or prevent, especially when it's built by organizations with a "move fast and break things" mentality and a willingness to cut corners for profit. The torrents of half-broken tech we are already drowning in don't exactly inspire confidence.
>> The manufacturer, Unitree Robotics, attributed the incident to a "program setting or sensor error." Despite this explanation, the event has heightened ethical and safety concerns regarding the use of robots in public venues. The local community is calling for urgent measures to ensure that robots' actions align with social norms, emphasizing the need for regulatory and legal frameworks to govern robot-human interactions.<p>I did not think I'd see this in my lifetime after watching The Animatrix.
This was one of my major concerns when Elon announced Tesla Optimus. There's a real need for government regulation on the bot specs. I blogged about this a while ago.<p>Something like:<p>1. They shouldn’t be able to overpower a young adult. They should be weak.
2. They should be short.
3. They should have very limited battery power and should require human intervention to charge them.
4. They should have limited memory.
There is a video of it floating around. It's a sudden forward movement that certainly looks alarming, but I wouldn't call it "unexpectedly displayed aggressive behavior".<p>More likely it's hardcoded to do something (maintain balance or whatever) without limits on how fast it can move to achieve the goal.<p>I.e., bad safety controls rather than malice.
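That missing safeguard can be made concrete: whatever correction the balance controller demands, saturate it before it reaches the motors. A toy sketch (the limits and names are hypothetical, nothing from Unitree's actual stack):

```python
MAX_JOINT_VEL = 1.5  # rad/s: hypothetical safe speed near bystanders
MAX_JOINT_ACC = 4.0  # rad/s^2: hypothetical acceleration ceiling


def clamp(value, limit):
    """Saturate value to the range [-limit, +limit]."""
    return max(-limit, min(limit, value))


def limit_command(cmd_vel, prev_vel, dt):
    # Rate-limit acceleration first, then cap absolute velocity, so a
    # "recover balance NOW" request can't turn into a lunge at the crowd.
    step = clamp(cmd_vel - prev_vel, MAX_JOINT_ACC * dt)
    return clamp(prev_vel + step, MAX_JOINT_VEL)


# The controller demands a violent 10 rad/s correction from rest:
safe = limit_command(10.0, prev_vel=0.0, dt=0.01)  # only 0.04 rad/s this tick
```

With limits like these the robot still tries to catch itself; it just can't do so faster than is safe around people.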
I really liked the show "Humans" (<a href="https://en.wikipedia.org/wiki/Humans_(TV_series)" rel="nofollow">https://en.wikipedia.org/wiki/Humans_(TV_series)</a>). I feel like we're catching up to that timeline.
What incredible click bait.
There is nothing here that warrants "Incident", "Raising Alarm" or "Shocking".<p>"Robot in Tianjin stumbles". There, I fixed the title.