
> Without any comparison to humans those numbers are completely meaningless

I disagree.

What I want to know is: is it safe? Not whether it is safer than the average human driver, but whether it is safe in an absolute sense.

When a human driver crashes, we can almost always pin the cause down to human error: errors caused by some human failing, by someone being less than they could be. The promise of machines is that they are consistent. They are never "less than they could be"; they are consistently at their best.

Comparing humans and machines is comparing dogs and roses - it is interesting that a rose smells better, interesting that a dog is more loving (or fiercer), but it is not a valid comparison.

Self-driving cars stand or fall on their own capabilities.



1. It's the adaptive cruise control, not the FSD.

2. What would a good absolute number for that period be? Our threshold can't be zero or we'd have to give up every technology from showers/baths (~60x more deadly) to fresh produce (E. coli, salmonella, listeria).


> 2. What would a good absolute number for that period be? Our threshold can't be zero or we'd have to give up every technology from showers/baths (~60x more deadly) to fresh produce (E. coli, salmonella, listeria).

Zero accidents caused by the failure of the software. (Clearly even a good system may have some obscure bug that makes it fail - but a failure of a self-driving system should cause uproar and consternation, unlike Tesla's apparent reaction to the deaths caused by its systems.)

If I slip over in the shower, it is not because the shower head went rogue and strangled me. Imagine if a shower were designed such that, over the period of one year, it strangled eleven people (a fair comparison to Tesla's record).

The point being, your comparisons (food poisoning, household mishaps) are not relevant.


- It's not clear why you no longer care that a technology is killing a lot of people if there's a human in the loop that you can blame. If a driver runs over your sister/son/friend, does it matter if you can blame someone?

- About 43,000 American motorists died in 2021 (a rough per-mile rate is sketched after this list). If we have a way to prevent many of those deaths with software, would you not want to use it unless it could prevent every single death?

- Why is a software tool failure different from any other tool failure? Brakes can fail, wheels can fall off (happened to me once), etc. Listeria in produce is a failure in the production chain. Showers can be made safer. You use your car/shower/spinach, and some day, for reasons entirely beyond your control, it might kill you.

- Why isn't the driver to blame in a Tesla? They're supposed to be watching and responsible.

- There's no clear distinction between what's a software failure and what isn't. Collisions have multiple contributing factors, and even perfect software will still be involved in collisions. The numbers reported aren't just collisions where the Tesla was at fault. One person was killed by a self-driving vehicle after jumping a concrete barrier and running across the highway at night; the human driver couldn't react in time, and the software didn't see them. Is that a software failure? At what point do we accept that a collision is no longer the car's fault?
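
For scale, here is a minimal back-of-envelope sketch in Python. The 43,000 deaths figure is from the second point above; the ~3.2 trillion vehicle miles traveled (VMT) is my assumption for illustration, not a figure from this thread:

    # Rough US road fatality rate per 100 million vehicle miles, 2021.
    # 43,000 deaths: figure from the comment above.
    # ~3.2 trillion vehicle miles traveled (VMT): assumed for illustration.
    deaths = 43_000
    vmt = 3.2e12  # assumed annual vehicle miles traveled

    rate = deaths / (vmt / 1e8)
    print(f"Fatalities per 100M vehicle miles: {rate:.2f}")  # -> ~1.34

Under those assumptions, any "absolute" safety threshold for self-driving software has to be argued against a human baseline of roughly one death per 75 million miles driven.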


Sure, it's different whether you were hit by a malfunctioning machine that was confused by sunlight or by a driver who wasn't paying attention because they were texting. In one case the driver can genuinely apologize to you; in the autonomous-car case, maybe there was no one in the car at all.

But on the other hand, if you could identify the most accident-prone group of humans and require them to use AI cars, and this significantly reduced the number of road deaths and accidents, wouldn't that be an improvement?



