The Ethical Dilemma of Self-Driving Cars

Self-driving cars are almost a reality. As of early 2018, Waymo and Uber are testing fleets of autonomous vehicles in the Phoenix area, and several automakers have autonomous vehicle designs in the works that should reach the road in the early 2020s. These high-tech cars are designed to be safer for passengers because their software makes instant decisions based on the surrounding environment. When it comes to the snap judgments that avoid accidents, the cars make faster and better choices than human drivers.

But they pose a challenging ethical dilemma. Even though self-driving cars will be better than human drivers, accidents, however rare, will still happen. Serious injuries and deaths are bound to occur, but how does the computer decide who lives and who dies in an accident? That question is philosophical rather than technical: the ethics of self-driving cars.

Unavoidable Accidents and the Trolley Problem

The unthinkable happens when a self-driving car comes upon an unavoidable accident. Imagine a slick country highway where you're the only passenger in a self-driving car traveling at the speed limit of 55 mph. Around a sharp curve, a group of cyclists is moving slowly uphill in your lane. Slamming on the brakes means the car careens uncontrollably into the cyclists. Swerving hard to the left means hitting an oncoming vehicle carrying four people. Swerving hard to the right means crashing through a guardrail and careening over a cliff, potentially killing you on impact. The self-driving car has an instant choice to make. Does it try to save the cyclists, the passengers in the other car or you, the lone passenger?

This life-and-death dilemma is a modern take on a classic moral scenario known as the trolley problem. The [original trolley problem](https://qz.com/1204395/self-driving-cars-trolley-problem-philosophers-are-building-ethical-algorithms-to-solve-the-problem/) poses an ethical choice to the trolley's operator. The trolley barrels down its track toward five people. There is no time to hit the brakes, because doing so would make the trolley jump the tracks and kill some of your passengers. You can pull a switch to divert the trolley onto a siding, but that choice means it bears down on one person rather than five. Either way someone dies, but the numbers are up to you.

The scenario poses a moral dilemma. Do you hit the five people on the main track, or actively decide to kill one person in order to save the other five?

Consider Your Ethical Quandary

Something else to consider is the relative safety of the vehicles themselves. If your car knows that the drop beyond the guardrail is only 10 feet, you might survive the impact; your car has airbags, crumple zones and other safety features designed to save your life. The cyclists, by contrast, have only their helmets to prevent brain injury, and the oncoming car may have fewer protections in a head-on crash. In a collision, people inside a self-driving car have a much better survival rate than cyclists, pedestrians and motorcyclists.

There are no easy answers to who lives or dies when it comes to autonomous vehicles. Millions of scenarios could play out. What if two self-driving cars collide and one person dies? How fast can a computer make a lifesaving decision? Who is liable when people seek compensation through litigation?

Practical Solutions to Challenges

These are the types of real-world situations car designers face with regard to the ethics of self-driving cars. The problem is so vexing that in September 2016 the National Highway Traffic Safety Administration asked companies developing self-driving cars to [take into account](https://www.weforum.org/agenda/2018/01/why-we-have-the-ethics-of-self-driving-cars-all-wrong) moral dilemmas in the software embedded in the technology.

A group of philosophers is working on crash-optimization algorithms to help solve the trolley problem, collaborating with an engineer to turn ethical dilemmas into code that software can act on. The point is not to settle the ethical question right now, but to give self-driving cars a moral framework expressed in mathematical language.

A few main solutions take shape in the code, and it's up to automakers, programmers and policymakers to choose among them. A utilitarian solution means the car saves as many lives as possible, regardless of whether they are inside the self-driving car. Another view assigns greater value to the lives of the people inside the self-driving car. A third protects the lives of pedestrians, even if it means swerving into vehicular traffic.
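To make those three approaches concrete, here is a minimal, hypothetical sketch of how each policy might be encoded as a cost function over predicted outcomes. The scenario numbers, weights and function names are assumptions for illustration only, not any automaker's or researcher's actual algorithm.

```python
# Hypothetical sketch: three candidate ethical policies expressed as cost
# functions over predicted crash outcomes. All data and weights are made up.

from dataclasses import dataclass

@dataclass
class Outcome:
    """Predicted result of one possible maneuver in an unavoidable crash."""
    maneuver: str
    passenger_deaths: int    # occupants of the self-driving car
    pedestrian_deaths: int   # cyclists, pedestrians, motorcyclists
    other_vehicle_deaths: int

def utilitarian_cost(o: Outcome) -> float:
    # Minimize total deaths, regardless of who they are.
    return o.passenger_deaths + o.pedestrian_deaths + o.other_vehicle_deaths

def passenger_priority_cost(o: Outcome) -> float:
    # Weigh the car's own occupants more heavily (the weight is an assumption).
    return 3.0 * o.passenger_deaths + o.pedestrian_deaths + o.other_vehicle_deaths

def pedestrian_priority_cost(o: Outcome) -> float:
    # Protect unprotected road users first, even at greater risk to vehicles.
    return 3.0 * o.pedestrian_deaths + o.passenger_deaths + o.other_vehicle_deaths

def choose_maneuver(outcomes, cost_fn):
    """Pick the maneuver whose predicted outcome has the lowest cost."""
    return min(outcomes, key=cost_fn).maneuver

# The scenario from the opening example, with made-up casualty estimates.
outcomes = [
    Outcome("brake hard",   passenger_deaths=0, pedestrian_deaths=3, other_vehicle_deaths=0),
    Outcome("swerve left",  passenger_deaths=0, pedestrian_deaths=0, other_vehicle_deaths=2),
    Outcome("swerve right", passenger_deaths=1, pedestrian_deaths=0, other_vehicle_deaths=0),
]

for name, fn in [("utilitarian", utilitarian_cost),
                 ("passenger-priority", passenger_priority_cost),
                 ("pedestrian-priority", pedestrian_priority_cost)]:
    print(name, "->", choose_maneuver(outcomes, fn))
```

Under these invented numbers, the utilitarian and pedestrian-priority rules sacrifice the lone passenger over the cliff, while the passenger-priority rule swerves into the oncoming car instead. The choice of which cost function ships in the software is exactly the decision automakers, programmers and policymakers would have to defend.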

Technological solutions and simple arithmetic offer several arguments that self-driving cars are still the wave of the future despite these ethical questions.

Points Regarding Self-Driving Car Technology

Despite advances in car safety, more than 40,000 people died in U.S. traffic accidents in 2016. Self-driving cars would surely cut those fatalities by a large margin, particularly for drivers who are impaired, distracted or suffering some kind of medical incident behind the wheel. Computers have no such problems.

As self-driving car technology advances, trolley-type scenarios should become rarer. Cars can tap into event schedules or publicly posted social media pages that show whether a parade, marathon or bicycle race is in the area. Pedestrians and cyclists could carry smartphones with GPS, or dedicated GPS tags, that warn self-driving cars ahead of time.

Smart cities and urban environments would have connected devices and cameras across the landscape that warn cars of imminent dangers. Think of an office building beside a busy street whose door sensors count how many people are leaving. A self-driving car can tap into that data several hundred feet before it reaches the building. The car can also read traffic cameras at intersections to gauge how many people might be about to cross at a crosswalk and take safety measures ahead of time. The same technology would know when a light is about to change from green to red.

Self-driving cars can tap into this wealth of connected data to avoid a trolley problem long before it becomes one. The more data a car’s software collects, the better it becomes at predicting what may happen. Infrastructure to support self-driving cars is a must, so long as that infrastructure is secure from hackers and protects people’s privacy.
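As a rough illustration of that idea, the sketch below shows how a car's planner might lower its target speed when nearby infrastructure reports a likely crowd. The alert format, thresholds and numbers are hypothetical assumptions, not a real vehicle-to-infrastructure standard.

```python
# Hypothetical sketch: folding connected-city alerts into speed planning
# before the car ever reaches a hazard. Message fields and thresholds are
# invented for illustration.

from dataclasses import dataclass
from typing import List

@dataclass
class HazardAlert:
    """A warning published by city infrastructure (door sensors, traffic cameras, event feeds)."""
    source: str            # e.g. "office-door-sensor", "traffic-camera", "event-calendar"
    distance_m: float      # distance from the car to the reported hazard, in meters
    expected_people: int   # rough count of people likely to be near the roadway

def planned_speed_kph(current_limit_kph: float, alerts: List[HazardAlert]) -> float:
    """Reduce planned speed as reported hazards get closer and more crowded."""
    speed = current_limit_kph
    for alert in alerts:
        if alert.distance_m < 150 and alert.expected_people >= 10:
            speed = min(speed, 25.0)   # a crowd is about to spill onto the street
        elif alert.distance_m < 300 and alert.expected_people >= 1:
            speed = min(speed, 40.0)   # people reported ahead; ease off early
    return speed

alerts = [
    HazardAlert("office-door-sensor", distance_m=120, expected_people=30),
    HazardAlert("event-calendar", distance_m=800, expected_people=500),
]
print(planned_speed_kph(88.0, alerts))  # 55 mph is roughly 88 km/h; prints 25.0
```

The point of such a planner is that the car slows down long before a trolley-style choice could arise, which is why the article treats secure, privacy-respecting infrastructure as a prerequisite.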

Legalities and X-Factors

State legislatures may end up deciding the ethics of self-driving cars, but that has its own difficulties: each state may adopt different rules. If a death does occur, who has liability? Is it the owner of the self-driving car, the manufacturer of the vehicle or the person who wrote the software embedded in the car? What happens if a car is hacked?

Another factor to consider has nothing to do with human behavior. Weather presents its own problems that cars must handle. How slick are roads when it rains or snows? How slowly or quickly should self-driving cars travel when the weather turns bad? Should vehicles simply pull over when rain or snow gets too heavy?

Final Thoughts

The ethics of self-driving cars aren’t perfect now, nor will they be in the future. There are no right or wrong answers when it comes to morality and this form of technology. As self-driving cars evolve, so too will the ethics behind them.

There are two certainties: This technology is on the way and it should save thousands of lives every year. Saving lives is worth any moral quandaries facing designers in 2018.