Like the general trolley problem, this formulation of the self-driving car problem assumes some agents are passive. The trolley problem assumes nobody is trying to rescue the potential victims; this formulation of the self-driving car problem assumes the school bus and other vehicles are passive.
Thus the conventional trolley problem doesn't really do the problem of self-driving car decision trees justice. The school bus isn't a dumb trolley. When your car is heading toward the school bus, the school bus will also be running down its decision tree, and the best outcome for its 15 passengers may be to take you out directly rather than run the risk of miscalculating what will happen if your car dives into the guardrail. Which is to say, statistical predictions bring confidence intervals into play.
I believe this is why trolley problems in general reveal more about their formulation than about our ethical reasoning. We will throw a switch to shunt the trolley because the bad outcome remains in the future and the possibility of changing circumstances remains very real. We know from experience that any of our predictions may be fallible, and that the more temporally remote the event, the more fallible the prediction. Pushing the fat man off the bridge elicits a different reaction: the outcome is immediate and our prediction less fallible.
Don't think we're going to ask computers to count the cost of a life anytime soon, especially since humans have a hard enough time doing it [1].
It will be designed, I imagine, to do the safest thing possible given an emergency - which in this instance, as most people suggest, is to stop as quickly as possible.
As silly and pointless as it is to worry about whether robocar morality should be primarily deontological or utilitarian, I have to admit I love it. I'm a sucker for this kind of science-fiction thinking, brought on by everyone's expectations of technological progress. The average person has a tremendous amount of confidence of the "they'll figure it out" kind.
I think it's a good sign, the opposite of cynicism. Also, I think moral questions are interesting and if teaching is the best way to learn, maybe imagining how to teach a machine to be moral is a good exercise.
On that note, deontology fits nicely with the way computers tend to think - rules. OTOH, utilitarianism means seeing the future - a nice use for all our future computational power. While we're on that, do computers have a duty to be true to their nature?
Your car is already designed to kill you: it doesn't contain equipment to protect you at the cost of many other people's lives. There's no cow-catcher-type device on the front, for example.
That wasn't the point of the comment. A cow catcher is a rigid set of bars designed to stop a heavy animal from hitting the rest of the car. A vehicle with a cow catcher hitting a human is more likely to injure or kill them than a vehicle without a cow catcher. The pedestrian safety features on modern cars are a bit like the opposite of a cow catcher - they try to soften the blow on the pedestrian, at the possibly higher risk of the pedestrian going through the windscreen and killing the driver. After all, if you wanted to protect the occupants of the car at all costs to pedestrians, you would just install a cow catcher.
Remember: we do not even have cruise control that is usable in the winter. Once any autonomous driving system can provide even this basic function (not soon, I fear), then we can talk about more complex behavior...
This debate about what autonomous vehicles will do in extreme situations is but the "tip of the iceberg".
Until all autonomous drive systems can be "unit tested" under a standard suite of simulated sensor inputs, one would be wise to assume that they are wildly unsuitable for anything but the most trivial driving situations (eg. Golf courses, closed estates).
This is a great point. Autonomous driving is very primitive right now and the idea of an "all knowing" computer judging our lives is ridiculous. It'll react the same way we react, with dumb reflexes. If an obstacle appears on the road, it will brake or swerve away from the target. It won't have time to weigh some massive AI fuzzy logic about who dies. It will have no idea what the survivability of any of this is, and like a human it will try to avoid the collision.
It seems that 60's AI is back in fashion and now part of autonomous car culture. There's this faulty idea of a super brain driving these cars. The reality is we're looking at dumb-as-rocks firmware that can't even drive in snow or heavy rain properly. Frankly, I think this firmware should stay dumb, like the human reflex system. Too much "engineer logic" and other questionable heuristics will just lead to bugs, lag, etc. I don't want my car to drive off a cliff because of some software bug. I don't need some Samsung engineer on a time constraint who couldn't spare the cycles for proper edge detection because he was wasting those cycles on the 'sport mode' that marketing made a big fuss over delivering this quarter. I want dumb and simple code to do what's a dumb and simple task.
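The "dumb and simple" behaviour being argued for here could be as small as a fixed-priority reflex rule. A toy sketch, with every name and input hypothetical:

    # Toy sketch of a "dumb reflex": no weighing of lives, just a fixed priority
    # of brake first, swerve only if an adjacent lane is known to be clear.
    def reflex_response(obstacle_dist_m, stopping_dist_m,
                        lane_clear_left, lane_clear_right):
        if obstacle_dist_m > stopping_dist_m:
            return "brake"                  # room to stop: just stop
        if lane_clear_left:
            return "brake_and_swerve_left"  # can't stop in time, clear escape path
        if lane_clear_right:
            return "brake_and_swerve_right"
        return "brake"                      # no escape path: shed as much speed as possible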
I don't think slippery roads are an issue for computers. We have the rapid on-off braking of ABS. We've also had Subaru's AWD for a decade, which can transfer power to the other wheels when one starts to slip. And a computer can steer into a skid more quickly and precisely than 99% of people right now.
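For what it's worth, the ABS point boils down to a slip-control loop. A toy illustration of the principle only - thresholds and names are invented, and real controllers pulse and modulate pressure far more carefully:

    def slip_ratio(vehicle_speed_mps, wheel_speed_mps):
        if vehicle_speed_mps < 1.0:   # avoid dividing by ~zero at walking pace
            return 0.0
        return (vehicle_speed_mps - wheel_speed_mps) / vehicle_speed_mps

    def abs_brake_command(vehicle_speed_mps, wheel_speed_mps, requested_pressure,
                          max_slip=0.2):
        # Release a locking wheel so it can regain grip, otherwise pass the
        # requested pressure through.
        if slip_ratio(vehicle_speed_mps, wheel_speed_mps) > max_slip:
            return 0.0
        return requested_pressure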
The main problem right now is that there is no way for you to know how any particular autonomous driving system will handle it. With every second automaker and start-up claiming to have independently developed super-duper auto-drive technology, I think it will get worse before it gets better.
Without standardised testing frameworks (ie. sending my car's autonomous driving system through a simulator to see what it will do), I remain ... unconvinced.
Being an industrial programmer for 30 years, I think I'll continue to err on the side of assuming incompetence and failure, until proven otherwise.
In the same regard, I think in the next 20 years there will be no human-driven trucks, buses, or anything else over a certain weight, because the cost in lives of not making that switch will be overwhelming.
This is because of one thing: they can all stop quickly and will have their own lane on all Interstates.
hmm... perhaps "the default" should be programmed to maximize human life as a headcount. But maybe they could sell a pricey "upgrade option" that would cause the computer to favor preserving the life of the driver as the top priority. The proceeds from these upgrade options could then go into payouts for the families of people killed by drivers with this option enabled?
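If that "upgrade option" ever existed, in software terms it would amount to little more than a policy flag over the planner's candidate maneuvers. A purely hypothetical sketch (every name and field here is invented for illustration):

    from dataclasses import dataclass

    @dataclass
    class Maneuver:
        name: str
        expected_total_casualties: float     # everyone affected, occupants included
        expected_occupant_casualties: float  # occupants of this car only

    def choose_maneuver(options, occupant_priority_upgrade=False):
        if occupant_priority_upgrade:
            # The pricey option: protect the occupants first, total harm second.
            key = lambda m: (m.expected_occupant_casualties, m.expected_total_casualties)
        else:
            # The default: minimize total casualties as a plain headcount.
            key = lambda m: (m.expected_total_casualties, m.expected_occupant_casualties)
        return min(options, key=key)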
Yeah, but if we ever get driverless cars, you can bet on it that the log files will be detailed enough for really fine accident reconstruction. In other words, a "replay" could expose such hacking.
On 4/28/2006, Radiolab (an amazing podcast) did a show on morality. They describe the moral choices people will make:
The program begins with a moral conundrum.
You're working with a crew doing track repairs (presumably, using heavy equipment such that you can't hear much else) when you happen to look up to see a locomotive rushing towards five of your buddies who are working on the tracks. But just as quickly you realize that you're standing next to a rail-switch which you could use to divert the train to a second track on which only one of your workmates is standing.
Should you throw the switch?
Ninety percent of a great many people who've been polled say they would throw the switch.
The next scenario has you and a coworker (a very large fellow) working on a footbridge above the tracks. Again, you look up to see the locomotive bearing down upon five of your friends who are working on the tracks. You quickly realize that the only way you can stop the train is to push the big guy standing next to you off the footbridge, whereupon his massive body will stop the train (yes, I know, but just go with it).
Should you push your coworker in front of the locomotive?
Ninety percent of a great many people polled say they would not push the man in front of the train, even if by doing so they could save the other five men.
Now put yourself in a car that can choose to turn right or left into a tree, killing you, or continue straight and kill five children who are crossing the road. What would you want your car to do?
I honestly think that apparent flip-flop comes down entirely to how terribly ineffective the large human sounds at stopping the train. Telling people to pretend it works is not an effective measure. People can't cancel their biases just from being told to do so, and a bias against such an awful plan is quite reasonable.
(d) The car would stop, because unlike human drivers, a self-driving car would be programmed to avoid going so fast that it can't stop if a sudden obstacle appears (aside from maybe something falling from the sky).
A human would drive too fast in icy conditions... if the computer knows there are icy conditions and it doesn't slow down before it even senses trouble, then the car was programmed to go too fast. If conditions are so dangerous that there is no way to remove these scenarios (like a blizzard), then a human should be forced to override the system, in which case the human is at fault.
I trust sensors to detect icy conditions better than I trust myself.
[edit]
Most bridges where I am have signs that explicitly warn that bridges freeze over. And people intuitively know: bridge + recent precipitation + cold weather = slow down. I don't know why a computer wouldn't know that.
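The computer's version of that intuition is just a stopping-distance calculation run with a lower friction estimate. A rough sketch, where the friction values and reaction time are assumptions:

    import math

    G = 9.81  # m/s^2

    def max_safe_speed(sight_distance_m, friction_coeff, reaction_s=0.2):
        # Largest v satisfying v*reaction_s + v**2 / (2*mu*g) <= sight_distance_m
        mu_g = friction_coeff * G
        return mu_g * (-reaction_s + math.sqrt(reaction_s**2 + 2 * sight_distance_m / mu_g))

    # Dry asphalt (mu ~ 0.7), 100 m of clear sensor range: ~36 m/s (~130 km/h).
    # Icy bridge (mu ~ 0.15), same range: ~17 m/s (~60 km/h).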
1000 m ahead, a group of 5 people walks along the pavement beside the road. Should the car slow down so that it can stop in time if they suddenly try to cross the road?
If so, self-driving cars will be going much slower than manually driven cars most of the time, which will probably hurt adoption.
I would like this to be the case, but I guess car companies will agree to some compromise to speed up adoption, and corner cases will remain. Should the software ignore them, or plan for them (= "planning to kill you")?
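For scale, a quick back-of-the-envelope check of the 1000 m scenario (friction and reaction-time figures are rough assumptions):

    G = 9.81  # m/s^2

    def stopping_distance_m(speed_kmh, friction_coeff=0.7, reaction_s=0.2):
        v = speed_kmh / 3.6  # m/s
        return v * reaction_s + v**2 / (2 * friction_coeff * G)

    print(round(stopping_distance_m(50)))   # ~17 m at city speed
    print(round(stopping_distance_m(130)))  # ~102 m at motorway speed, dry road

At dry-road motorway speeds the car needs on the order of 100 m to stop, so a group 1000 m away requires no slowdown at all; the trade-off only bites when people are within roughly one stopping distance of the roadway.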
With computer-controlled braking, the car could spin around to take the hit on a front corner. That would pose the least danger to the occupant while not hitting the people, though it does assume the truck is not stopping. In all likelihood it would just stop, assuming the truck will also stop... the same call any human driver would make if they weren't texting or otherwise distracted. It doesn't have to be perfect, just better odds than today.
The car should protect its occupants at all costs, because that would be the normal reaction of someone driving the car themselves. In that case the car has been pre-programmed not to make a choice.
When faced with a split second decision whether to live or die, humans choose to live, that's how we made it this far...
Computers may have the same intelligence as humans, but their fundamental inner drive will be completely different. Humans, like any other animal, have a deep genetically programmed drive: survive and reproduce. Hunting as entertainment, for example, is actually practice for survival - all hunting animals do it.
This deep drive is the result of evolution. AI, on the other hand, has a completely different evolutionary process - namely, us. We will ultimately decide that deep drive for AI.
That is in no way certain. In the case where an AI is the result of an evolutionary algorithm, for example, there is a distinct possibility that the AI necessarily has a taste for competition. Or if important parts of the AI are modeled on the human brain, then it is entirely possible that an AI is closer to a modified human.
I am not talking about a single AI. I am talking about AI as a species. A specific AI could have any drive, but ultimately (at least at the current stage) what determines its survival is us. Just as natural selection picks the most "fit" species during evolution, we select the AIs that fit our needs the most. Natural selection is relatively simple: "don't get killed and reproduce yourself." So only species that strongly don't want to get killed and strongly want to reproduce get to survive.
AI evolution is completely different: "nobody is going to kill you except humans, and you don't need to reproduce yourself - that's usually humans' job." I believe the result is a completely different inner drive for AI - one that is more likely to be useful to humans, because that's how you get selected.
I think that's a shallow view of things. Hunting, like many activities, is a way to tap into the thrill of survival without the price of death. Hunting is just that: a pompous exercise in fake survival. Actually, trying to catch prey with your bare hands might prove superior by rule of minimalism.
The most superior trait in humans is how often they wonder in what way they are superior to other life forms.
We are relatively superior to foxes, who hunt out of necessity; something superior to us might not hunt at all.
I agree, btw.
I am not worried that a superior, dreadful AI will come to kill humanity, but that some little half-assed AI in charge of some minor side activity will not be capable of understanding context beyond its own complexity.
Yeah, scope is important. It's like child vs adult, tool vs AI. You don't want to be in between, having too much control without understanding the consequences.
No, cars are sometimes in a situation where someone is going to die. If you don't think about these situations and plan who to kill ahead of time, then you don't know who is going to die and the outcome is unpredictable.
Wait, just one computer? Because right now I have an entire freeway full of inattentive and occasionally drunk/angry humans trying to kill me. I can deal with just the car.
If a technology poses questions like these in front of you, you know it's really disruptive. Our technical development forces us to find answers to those questions, but that will be harder than developing the tech, and those answers possibly won't hold for long. I don't think it probable, but it may still turn out that we really don't want machines to make that kind of decision.
I (perhaps somewhat naively) assumed that a self-driving car would be programmed to prioritise the occupants of the vehicle (as that most closely emulates the likely reactions of a human driver in that situation).
Self-driving cars are likely to be safer than human drivers anyway, so one has to consider how much risk there is of a situation like the Trolley Problem arising - i.e. not much.
So here's the thing. The first time an accident involving a self-driving car kills pedestrians, the creator of that car will be open to a lawsuit. And in court, the question of what that car should have been programmed to prioritize will come up. A response of "we tried to emulate the likely reactions of a human driver in this situation" won't hold up. The whole point of a self-driving car, as you said, is that it's safer - in fact, safer for the community as a whole. So a car that plows into a crowd in order to avoid colliding with an obstacle and killing the rider (something a real human would likely do without even realizing it) would be grounds for that suit to award damages against the creator of the car's software.
Actually, that's a good point - I hadn't considered the legal ramifications of such a stance. It's interesting that you mention people seeking damages from the creator of the car's software, because that makes the entire situation more complex.
As for the concept of "safer to the community as a whole" - well, the idea is that they're a safer product overall, and this is mostly targeted at the individual (i.e. "if you own a self-driving car, you're less likely to die"). If people know that their cars may, in certain situations, elect to kill them in favor of others, then I doubt that self-driving cars will sell particularly well.
It's an interesting thought experiment, but I think a real situation with such a stark binary choice is so improbable that it's barely worth considering.
Cars are not on trolley tracks and computers can brake or swerve harder and faster than any human could.
How about an animal jumping in front of your car - a very common situation - when there's no way to avoid it? The computer could assume it's a human and do absolutely anything to avoid it, crashing into a tree and seriously injuring everyone in the car. I encounter such strange situations on the road that I can't imagine a self-driving car being prepared for all of them.
Wow that would be a bad robocar. I've chosen to hit animals, rather than swerving and possibly losing control, on numerous occasions. A robocar that would rather total itself is faulty. Then again a robocar that would choose to kill a child is also faulty. Until they can reliably tell a child from a dog, robocars will not be ready for general use.
That's a high bar. A human driver with a split second to decide may not be able to do that consciously. Maybe robocars should only have to do as well as a human driver?
Manufacturers' liability for lawsuits would seem to require such a high bar? Eventually industry might be able to change the law such that the lives of children in the road are devalued (or perhaps such that animals in the road are valued more highly than car passengers?), but that will take time. Before that happens, a robocar that can't tell the difference just might have to drive real slow.
No matter how fast they brake or swerve, it's not improbable that they might be in a situation where the computer has a choice: hit someone standing on the side of the road, or don't hit them but potentially kill everyone on board the vehicle. What should it do? Would any of us want to be the programmer who writes that code? Because I know that I definitely wouldn't want to be in charge of that.
Well the issue is, we need to program the cars so that we optimize the outcome when every car on the road behaves that way and is expected to behave that way.
Do we get the best outcome when everyone expects the computer to spare the pedestrian? When everyone expects the computer to drive "selfishly" or to try to minimize its own impact velocity?
I don't think it's a good idea to program the cars with a "utilitarian" policy of saving the most human lives possible, as not only does that require quite a lot of inferential power to be packed into the car, it doesn't set up any concrete expectations for the passengers and pedestrians. You would have to add a mechanism for the car to first, in bounded real-time, infer how to save the most lives, and then signal to all passengers and bystanders exactly what course of action it has chosen, so that they can avoid acting randomly and getting more people killed.
This is why we always demand very regimented behavior in emergency procedures: trying to behave "smartly" in an emergency situation, in ways that don't coordinate well with others, actually gets more people dead. Following a policy may "feel" less "ethical" or "heroic", but it actually does save the most lives over the long run.
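That contrast between regimented behaviour and case-by-case "smart" inference can be made concrete. A hypothetical sketch, not a description of any real system:

    def fixed_emergency_policy(state):
        # Regimented behaviour: always the same, so passengers and bystanders
        # can predict it and be drilled on it.
        return ["stay_in_lane", "maximum_braking", "hazard_lights_on"]

    def utilitarian_emergency_planner(state, candidate_maneuvers, casualty_model):
        # Needs a trustworthy casualty model, a search that finishes within a
        # hard real-time deadline, and some way to signal the chosen maneuver
        # to everyone nearby -- none of which the fixed policy requires.
        return min(candidate_maneuvers, key=casualty_model)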
> it's not improbable that they might be in a situation where the computer has a choice - hit someone standing on the side of the road, or don't hit them, but potentially kill everyone on board the vehicle.
What makes you think this situation is probable? How often does it currently occur for human drivers? Ever? I don't have statistics at hand, but what exactly is the incidence of pedestrians who are not in the roadway getting hit by cars? And how many of those were caused by driver error that wouldn't exist for a self-driving car rather than a forced choice?
A discipline for hard choices in the realm of human mortality is what separates professions from mere occupations. It's what gave birth to the term 'software engineer' on the Apollo team. It's the price of lunch for the self-driving car team. Ignoring the implicit decision doesn't relieve anyone of responsibility. Engineering means accepting the fallibility of one's decisions, and the potential consequences of failure include people dying.
Yes. It certainly will. (Credentials: I've worked in the industry).
There are a lot of comments below which are variations on the theme of "this is a stupid scenario; just apply the brakes". These people have a terribly naive understanding of the reality in which cars operate.
The truth is that self-driving cars are part of a dynamic, open-ended system which includes roads of arbitrary quality, and pedestrians and cyclists who will never be under automated control. Such a system is fundamentally unsafe by design. Its operation relies on probabilistic assumptions of safety rather than secondary safety systems, as you would have in a system which is designed to be safe (e.g., airplanes and airports). These probabilistic assumptions include assuming that a tire will not blow out, or that a large pothole will not exist on the other side of a hump in a country road, or that a pedestrian will not trip laterally into the path of a high-speed vehicle. When such events do occur, property damage and/or loss of life is a certainty. This is a low-probability event, but one to which we have become so utterly habituated that many people -- such as many of the commenters on this article -- are no longer consciously aware that it is even possible.
Nonetheless, it certainly is possible, and happens all the time. Currently, cars kill well over a million people per year, worldwide. Many of those fatalities are due to drunkenness or glitches in human attentiveness -- problems which a competently designed self-driving car will not suffer from -- but many are due to scenarios such as those I mention above. Those will continue. Unless we radically change the entire system -- that is, change the way we build and maintain roads, and the way we integrate or segregate cars from other users of the road -- then this system will remain unsafe by design. Even if the world converts entirely to self-driving cars which operate perfectly, they will still, with absolute certainty, kill hundreds of thousands of people per year.
Seriously, y'all need to stop being in denial about this.
Hundreds of thousands of fatalities per year is still a tremendous improvement over more than a million fatalities per year. It's an unalloyed good and we should doubtless do it. Nonetheless, it represents a hell of a liability problem for the manufacturers. When a human being has a blowout at speed, we never question the correctness of their actions in the milliseconds immediately afterwards. Of course they don't have an opportunity to respond in an appropriate fashion. Human brains can't do that. Instead, we assign liability based on conditions before the crash. If the driver was speeding, we say that the consequences of the blowout (which can easily be fatal) are their fault. If they weren't speeding, it's the tire's fault. Liability ends somewhere between the tire and the driver. As long as the car company has not sold faulty tires, they have nothing to worry about.
When the driver is both created by the car company and has the ability to react in the milliseconds following an incident, it's a different matter. Lawsuits will be inevitable, and car companies will do everything they can to minimise their exposure.
The scenario where your self-driving car needs to make a decision between sending you over a cliff, or forcing a schoolbus over a cliff, may seem terribly contrived. But in reality, on a worldwide basis, this general class of scenario happens hundreds of times every day, creating a level of liability that manufacturers must take damn seriously. If they are able to choose between lawsuits from your family, or lawsuits from every family of every kid on that bus, they'll choose the former every time.
And this is why your car will be programmed to kill you.
If people become aware enough of this, the answer is almost certainly that cars will choose the lives of their passengers in any situation, unless manufacturers agree not to compete on this.
Four years ago I was driving down a hill in insanely slick weather. Though I was driving very slowly, the car skidded and was heading into an extremely busy street. I had enough steering to put the car into a tree, slowly, rather than risk something really serious.
Not quite a Trolley Problem -- _I_ was among those better off by cutting the losses. But yes, weird situations and choices do emerge.
I would imagine the risk of hitting a stationary object is lower than the risk of hitting an object moving towards you. Curious to know if this line of reasoning of taking the least risky option was a defense to driving his car into a tree.
Software flaws have been known to kill, so "killing software" has been with us for a while.
For more details, check out Peter G Neumann's "Computer-Related Risks". Automatic doors crushing people, radiation equipment (for the treatment of cancer) going way out of spec, and many more.
Yes, there are interesting lessons in that, but we're entering new territory when software systems will be explicitly tasked to make life-or-death "decisions" without the active control of a human. The correct answer might be ethically foggy even for humans. This is definitely problematic.
What if going straight will kill more people than swerving? You are driving through a turn, a tyre blows - a system which "gives up" and just applies the brakes as hard as it can allows the car to continue forward, maybe into the path of a school bus, maybe into a group of bystanders, maybe off a cliff. But if you allow it to turn, even a little - how do you know you are not injuring more people?
Under what situation would that happen? It happens with human drivers because they are driving too fast to stop in time. It's very simple: if you need 50 feet to stop, you make sure you have at minimum 50 feet to stop. If there are things obscuring your view to the left and right, you drive slower to reduce your stopping distance and make sure that if something jumps out you still have enough time.
Now, if something does jump out in front of your car (say, a deer), reducing your reaction time, the car should do the same thing a human would do: slam on the brakes to reduce impact speed.
A tyre blows. A rock hits the sensor on top of the car. An electrical contact gets disconnected due to vibration. Water gets into the computer. These things happen thousands of times daily to cars around the world. Autonomous cars will have to deal with all of them, one way or another. Like you said, the most "failsafe" thing the computer can do is slam the brakes, which will be good enough in most cases. But I will repeat my question: what if slamming the brakes causes more injuries than doing something else, and the computer "knows" it (as in, it has calculated that braking would cause more damage)? Should it still do it?
Edit: The difference between a human slamming on the brakes and a computer doing the same is that humans are not perfectly rational. If I see a deer in front of me, I'm probably going to brake as hard as I can. But will I take into account that the road is slippery and braking will cause the car to spin and land in front of a lorry, and that maybe the correct decision is to not brake and hit the deer? Of course I won't - humans are not quick enough to decide that. The problem is that computers are fast enough - and now we have to decide whether they should behave in a "dumb" way, like a human would, or whether they should be making those decisions no human could make, even if they verge on being unethical.