
>Now the reality is some of those who are killed, die from no fault of their own. They die at the hands of the other driver.

That's far fewer of them.



This seems like a very circular argument.

The argument in this thread seems to be that we should be slow to adopt AV technology because it could kill some innocent people.

Now I accept that point as a fact. There is no doubt people will die in AVs, just as people die in cars today.

Yet, on the other hand, we have human drivers killing innocent people today, which is treated as acceptable collateral damage, even though introducing AV technology would help to greatly reduce that number.


> Yet, on the other hand, we have human drivers killing innocent people today, which is treated as acceptable collateral damage, even though introducing AV technology would help to greatly reduce that number.

Speaking for no one other than myself, vehicular deaths are not "acceptable collateral damage" so much as a statistical inevitability, one we seek to minimize as much as possible.

The problem with introducing autonomous vehicle technology, again IMHO, is the assumption that it "would help to greatly reduce that number."

This is not a known result.


> This is not a known result.

And without actually doing the implementation that question will never get answered.

But logic suggests they would help, because, with the exception of bugs in the system, an AV:

. would not drive drunk

. would not speed

. would not run red lights

. would not get tired

. has better reaction times than humans

. has better eyesight than humans

. is not distracted by mobile phones, passengers, or pretty girls/guys walking on the footpath

. is not susceptible to road rage

. etc etc.


>> This is not a known result.

> And without actually doing the implementation that question will never get answered.

Exactly. Which is why it is imperative the engineering community does not assume outcomes beforehand.

Many of the items you list I agree are likely benefits, with "not drive drunk", "not get tired", and "not susceptible to road rage" being near certainties, since they eliminate biological factors.

What autonomous vehicles might not be better at includes:

. snow

. black ice

. mud

. severe thunderstorms

. operating in high-dust environments

. adapting to rapidly changing, unexpected operating conditions

. knowing when to run a red light (emergencies, external threats, some combination of the conditions above)

Are these edge conditions? Maybe. Or you could call them functional requirements that are not often discussed.

All I'm saying is that there is a lot more to general-purpose driving than what can be shown in a limited setting.

IOW, show me an autonomous vehicle which can complete the Paris-Dakar Rally[0] and I'll show you someone who will say it's time for people to stop driving ;-).

0 - https://en.wikipedia.org/wiki/Dakar_Rally

EDIT: formatting


Humans took years to learn to drive. Initially we will be better, but as time goes on we stand no chance against an automated solution. The whole industrialization process has been exactly this: there were problems and bad things happened, but it worked in the end.


I like how you buried that huge "if" in your second sentence:

> with the exceptions of bugs in the system


I specifically added that clarification because I suspected that without it, the counterargument would have been along the lines of "but what happens when a bug in the software causes the AV to crash, a problem humans don't have?"

Being a software developer, I have no doubt there will be bugs in these AV software systems.

But these systems will also include many fail-safe overrides designed to minimize the impact of those occasional bugs.
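As a purely illustrative sketch of what such an override could look like (all names and thresholds here are hypothetical, not drawn from any real AV stack): a supervisory check that falls back to a minimal-risk maneuver whenever the planning module crashes, goes stale, or emits an implausible command.

```python
import time

# Hypothetical sketch of a fail-safe override: a supervisor that falls
# back to a minimal-risk maneuver (gentle braking, hold the wheel) if
# the planner stops responding or emits an out-of-range command.
# Every name and threshold below is illustrative, not a real AV API.

MAX_STEER_RAD = 0.6      # sanity bound on steering commands
PLANNER_TIMEOUT_S = 0.2  # how stale a plan may be before we distrust it

def safe_command(plan, now):
    """Return the planner's command if it passes sanity checks,
    otherwise a minimal-risk fallback."""
    fallback = {"steer": 0.0, "brake": 0.3}
    if plan is None:
        return fallback                       # planner crashed
    if now - plan["timestamp"] > PLANNER_TIMEOUT_S:
        return fallback                       # plan is stale
    if abs(plan["steer"]) > MAX_STEER_RAD:
        return fallback                       # implausible output (a bug?)
    return {"steer": plan["steer"], "brake": plan["brake"]}

# A buggy plan with a wildly implausible steering angle gets overridden:
cmd = safe_command({"timestamp": time.time(), "steer": 9.9, "brake": 0.0},
                   time.time())
```

The point of the sketch is only that the override logic can be far simpler, and therefore far easier to verify, than the planner it guards.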


>And without actually doing the implementation that question will never get answered.

That's BS. You can answer these questions with rigorous research (or a slow rollout).
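To make that concrete, here is a minimal sketch of how a slow rollout could answer the question statistically: treat crashes as Poisson events and compare crashes per mile between an AV fleet and a human baseline. The counts below are invented for illustration, not real crash data.

```python
import math

# Minimal sketch: compare crash rates per mile between an AV fleet and
# a human-driven baseline, modeling crashes as Poisson events.
# All counts are made up for illustration only.

def rate_per_100m_miles(crashes, miles):
    return crashes / miles * 100e6

def poisson_rate_z(c1, m1, c2, m2):
    """Approximate z-statistic for H0: both fleets share one crash rate."""
    r_pooled = (c1 + c2) / (m1 + m2)
    se = math.sqrt(r_pooled / m1 + r_pooled / m2)
    return (c1 / m1 - c2 / m2) / se

# Invented example numbers:
av_crashes, av_miles = 4, 50e6        # AV fleet
hu_crashes, hu_miles = 120, 1000e6    # human baseline

z = poisson_rate_z(av_crashes, av_miles, hu_crashes, hu_miles)
# |z| > 1.96 would suggest the rates differ at roughly the 95% level.
# With only modest AV mileage the test is underpowered, which is exactly
# why the question needs either more research or a (careful) rollout.
```

With these made-up numbers the AV fleet looks safer per mile, but the z-statistic stays well inside the 95% band, illustrating why small deployments cannot settle the question either way.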



