Nov 17 2015

Autonomous Cars and Ethics

In the last few years, autonomous cars seem to have turned a corner – suddenly the technology is here. This is one of those technologies that was long anticipated and then quickly arrived. They are not yet in mass production, but it feels like we are on the steep part of the curve.

The Google car is the most in the news. The company has been testing its autonomous vehicles for years. Some use the term “driverless,” but there still needs to be a human in the driver’s seat. The success of the vehicle is measured by how often the driver needs to intervene. At first they measured interventions per mile, and now they measure miles per intervention.
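Just to make the metric concrete, here is a trivial sketch of the two ways of expressing the same intervention rate. The mileage and intervention counts are made-up numbers for illustration, not Google’s actual figures.

```python
# Two ways of expressing the same safety metric.
# The figures below are hypothetical, for illustration only.

def miles_per_intervention(total_miles: float, interventions: int) -> float:
    """Average autonomous miles driven between required human interventions."""
    return total_miles / interventions

def interventions_per_mile(total_miles: float, interventions: int) -> float:
    """The same data, expressed the original way."""
    return interventions / total_miles

total_miles = 400_000   # assumed total test mileage
interventions = 300     # assumed number of driver takeovers

print(miles_per_intervention(total_miles, interventions))  # ~1333 miles per intervention
print(interventions_per_mile(total_miles, interventions))  # 0.00075 interventions per mile
```

As the fleet improves, the first number goes up while the second shrinks toward zero – the switch in reporting just makes the progress easier to read.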

The autonomous car is certain to be a transformative technology – I can’t see how it won’t be. It will transform the way we get around, could spawn new industries, and may also transform our infrastructure. For example, imagine a fleet of truly driverless cars you can summon with an app, Uber-style. Insert your credit card, and you’re off. For some people this may obviate the need to own a car.

Right now we are trying to adapt autonomous cars to our existing infrastructure. Eventually we will adapt our infrastructure to autonomous cars. At present the goal is to get the cars to drive more like humans. For example, a Google car was recently pulled over for driving too slowly. The algorithm called for the car to drive very cautiously (never more than 25 mph), which resulted in a backup of traffic. Google now says they endeavor to make the cars drive more like people (at least in some ways).

Driving and Ethics

One interesting question that has arisen is how to program the cars to react to situations in which the actions of the car will affect who is likely to die in an accident. Should the car swerve to miss one person, even if it will then plow into many people? Should the safety of the passengers or of a pedestrian be prioritized? What if there are four people in the car – should it swerve into a tree to miss one pedestrian?

Often such questions are framed in a way similar to the classic set of trolley problems in ethics. Trolley dilemmas are used to think about both the philosophy of ethics and human psychology. In the classic version you are a switch operator on a railroad. A train or trolley is out of control coming down the tracks, currently on course to kill 5 people crossing the track. You have time to execute only one decision – you can switch the track to save those 5 people, but the trolley will then hit and kill one person on the other track. Do you do it? Most people say yes.

What if, instead, you are on the trolley as it is about to hit and likely kill 5 people? You are standing in the front, next to a large person. If you push them off, their body will likely stop the trolley, saving the 5 people. (Ignore how contrived the situation is, and just focus on the ethical decision.) Do you push them off? Most people say no.

In both cases you are sacrificing one person to save five. The math makes sense. But in the first case your actions are less directly connected to the death of the one person, and that matters to us emotionally.

How do we apply all of this to the programming of autonomous cars? Essentially, will we have to include in their programs a way of prioritizing such decisions?

I think the answer is yes and no. An article at driverless-future.com addresses this question and argues that it is a non-issue. I don’t quite buy their arguments, however. Here they are:

a) No good solutions to these dilemmas exist or can exist. Humans are not able to make a ‘right’ choice when faced with such situations either.

This is the Nirvana fallacy – because there is no perfect choice, it doesn’t matter. I disagree with this. The cars will still need to make the best choice, even if that choice is horrible.

b) These dilemmas assume certainty and knowledge that does not exist in such situations.

Again, this is the Nirvana fallacy. Perfect knowledge is not required. Health care professionals make decisions every day in the absence of certainty or perfect knowledge. What is likely to develop are statistically driven algorithms – cars will be programmed to make choices that result in the statistically best outcome overall, even if the decision cannot be optimized for each individual case.
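To make that concrete, here is a minimal sketch of what a statistically driven decision rule could look like: enumerate the available maneuvers and pick the one with the lowest expected harm. Everything here – the maneuvers, the probabilities, and the harm estimates – is a hypothetical placeholder, not anything from a real system.

```python
# Minimal sketch of a statistically driven decision rule:
# choose the maneuver with the lowest expected harm.
# All probabilities and harm estimates are hypothetical placeholders.

from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    collision_probability: float  # estimated chance this maneuver ends in a collision
    people_at_risk: int           # estimated number of people harmed if it does

    @property
    def expected_harm(self) -> float:
        return self.collision_probability * self.people_at_risk

def choose_maneuver(options: list[Maneuver]) -> Maneuver:
    """Return the option with the lowest expected harm."""
    return min(options, key=lambda m: m.expected_harm)

options = [
    Maneuver("brake only", collision_probability=0.30, people_at_risk=1),
    Maneuver("brake and swerve left", collision_probability=0.10, people_at_risk=2),
    Maneuver("brake and swerve right", collision_probability=0.04, people_at_risk=4),
]

best = choose_maneuver(options)
print(best.name, best.expected_harm)  # -> brake and swerve right 0.16
```

In a real system the probabilities would come from statistical models rather than being hard-coded, and would be updated as accident data accumulates.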

This means that the autonomous car behavior will evolve over time, as we gather data on accidents, their causes and their outcomes.

c) These dilemmas are always incredibly contrived. The probability that a car faces such a situation is extremely low.

This is their best point, but I think it is a little overstated. Yes, the classic trolley dilemmas are very contrived. They are contrived to present a pure ethical choice, because that is their focus. With autonomous cars the focus is not to explore ethics or human psychology, but to make practical choices about the behavior of a machine.

There are many non-contrived situations that will still require those who program autonomous cars to make decisions about how the car’s reaction will affect the probability of harm, and to how many people. Even the choice of whether or not to swerve to avoid a pedestrian walking in front of the car is such a decision, and hardly contrived. Should the car brake only? How quickly? Should it brake and swerve, and in which direction?
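Even the question of whether to brake only, and how hard, is quantitative. Here is a rough kinematic sketch – can the car stop before the obstacle using braking alone? – where the reaction time, deceleration, and distances are assumed values chosen purely for illustration.

```python
# Rough kinematic check: can braking alone stop the car before an obstacle?
# stopping distance = speed * reaction_time + speed^2 / (2 * deceleration)
# All parameter values are assumptions for illustration only.

def stopping_distance_m(speed_mps: float, reaction_time_s: float, deceleration_mps2: float) -> float:
    return speed_mps * reaction_time_s + speed_mps ** 2 / (2 * deceleration_mps2)

def can_stop_in_time(speed_mps: float, distance_to_obstacle_m: float,
                     reaction_time_s: float = 0.1, deceleration_mps2: float = 7.0) -> bool:
    # ~0.1 s of sensing/actuation delay and ~7 m/s^2 of hard braking are assumed figures
    return stopping_distance_m(speed_mps, reaction_time_s, deceleration_mps2) <= distance_to_obstacle_m

speed = 11.2  # roughly 25 mph, in meters per second
print(can_stop_in_time(speed, distance_to_obstacle_m=15))      # True: braking alone is enough
print(can_stop_in_time(speed * 2, distance_to_obstacle_m=15))  # False: braking alone is not enough
```

If braking alone is not enough, the car is back to weighing a swerve – and to the kind of statistical trade-off sketched above.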

d) The question is wrong.

They argue that the real question is not what is ethically right, but how to avoid what is ethically (and legally) wrong. I agree that the question posed is often the wrong one, but I don’t think the article’s authors replaced it with the correct question. Avoiding legal liability is not the real issue either.

As I stated above, the goal is to minimize harm to people. Autonomous car behavior will be driven by statistical models operating on data – probably experimental at first, but eventually informed by real-world data.

I do agree that the ethical/legal issues are likely not relevant. What I suspect will happen is that regulations will evolve that determine the behavior of the vehicles, based on best practices to minimize harm and maximize safety. As long as everyone follows the regulations, no one will be liable, even if bad stuff happens.

I do agree with the authors that it is possible such issues will be rare. One of the possible advantages of autonomous cars is that they can be programmed to be very safe: computer drivers are less likely to make mistakes than humans, they never lose focus, and they can even communicate with each other and with the infrastructure in order to avoid accidents.

There will likely be a transition phase where the cars do a little better than people, but still might be prone to accidents. The cars still find snowy or icy conditions challenging, for example. What appears to be happening is that the cars are approved for limited routes and conditions, and those routes and conditions will expand as the technology progresses.

Mature autonomous car technology will likely, however, be very safe, with only rare accidents due to technical failure or truly unlikely situations. Still, there will be people in the system (even if only as passengers and pedestrians) and people always introduce an unpredictable element.

I am curious how the technology will play out over the next 20-30 years. It is a transformative technology, but it has proven very difficult to predict exactly how such technology will transform our lives. We’ll have to wait and see.
