It is ridiculous to attribute intent to the motion, but I understand why people do — it really does give the impression of an aggressive, upset human. That’s unfortunate.
There is an often-unaddressed risk in robotics: we have no theory of mind for machines. We've evolved to intuit what other humans are thinking (from words, body language, and other context), which helps us predict behavior and mitigate risk. Unfortunately we can't do the same with robots, so there is a potential for more latent risk (the same as dealing with "crazy" humans, where our mental models fail to predict behavior).
IMO this means we won't be comfortable with robots in safety-critical applications until they are well, well beyond human capabilities. This is where I think the crowd that aims for "human-level performance" is wrong; society won't trust robots until they are much, much better than humans.
Ya, that makes sense to me. This is roughly how I feel about self-driving cars as well: I want very good proof that they outperform even the best drivers by a wide margin before I'll actually use one. I feel that my friends and I are better drivers than average, even though I know that's mathematically unlikely. So the self-driving has to be _really_ good before it attracts me. I know this is irrational; what I feel does not obey rational rules.
Yea, likely this was some kind of trip + glitch that happened to look like an attack. But it really did have a "boxing" style movement.
I saw a video of the Unitree [1] robot doing a kung fu routine the other day. I imagine developers are constantly programming in pre-scripted moves, similar to all the Boston Dynamics demo videos. They're great for showing off movement. It's conceivable that someone could run the wrong demo routine. Imagine the Atlas robot doing its classic backflip in the middle of a crowd.
Doing that unpredictably is almost worse. Functionally, the point of anthropomorphizing is to tell a story that makes things predictable. In other words, “unsafe if angry, safe otherwise”.
But if you can’t tell if it’s “angry” then we have to assume it’s always unsafe. Of course this was always true.
Many people with older brothers will recall the time when the brother had no intention of actually hurting them. They were merely swinging their hands, moving closer... and closer.
It is how humans operate. We don't critically analyze the situation to decide whether the danger is from an autonomous device or human action, or whether the animal is being playful or aggressive. Panic is panic; the whole point is to bypass critical thinking and act immediately. And causing panic is dangerous. The robot acts aggressively whether it is autonomous, or the operator spilled their coffee, or the operator is doing it deliberately. Just like a person can act aggressively even if they are just playing around.
That's what it looks like to me. It looked like it was trying to continue the handshake while the person was pulling back; the robot kept moving forward, stuttered, and tripped on the bottom of the barricade, causing it to lunge as it tried to stabilize itself.