Are self-driving cars a safe and ethical choice? Who should be held accountable?


Self-driving cars are in the spotlight as the futuristic technology envisioned in Steven Spielberg’s Minority Report becomes a reality. However, accidents and controversies have raised concerns about the safety and ethics of autonomous driving, and about who should be held responsible when something goes wrong. We need a societal discussion about how self-driving cars should make ethical decisions in an unavoidable accident, and about whether the responsibility lies with the driver, the manufacturer, or the programmer.


Minority Report is Steven Spielberg’s film adaptation of Philip K. Dick’s 1956 short story of the same name. Set in Washington, D.C., in 2054, it depicts a cold, bleak society whose pre-crime system arrests murderers before they act, and Spielberg filled this future with a variety of imagined high-tech gadgets. Interestingly, some of the most advanced technologies in the movie are slowly being implemented in the real world.
Among the film’s many memorable scenes, the most notable for our purposes is the one in which a car drives itself down the road while Tom Cruise’s character, busy trying to outrun his pursuers, is unable to drive.


A scene from Minority Report (Source: Minority Report)


At the time of the movie’s release, self-driving systems were considered a technology of the distant future. Since Google officially announced its plans to develop self-driving cars in 2010, however, the automotive and IT industries have been actively researching and investing in autonomous vehicles, and commercialization is gradually becoming a reality, with manufacturers devoting considerable attention to the field.
Not everyone welcomes self-driving cars, however, and some question whether consumers can trust them. In May 2016, for example, a driver using Tesla’s Autopilot feature was killed when his car crashed into a tractor-trailer crossing the road ahead of him; the system failed to distinguish the trailer’s white side against the bright sky. The U.S. government later concluded that Autopilot was not at fault and that the driver was to blame for failing to take precautions at the time of the collision. This allowed Tesla to avoid liability for the first fatal accident involving a self-driving system.
Yet crashes involving Tesla’s self-driving systems have continued. According to NHTSA statistics, there were 736 crashes involving self-driving systems in the U.S. in the four years since 2019, and 91% of them involved Tesla systems such as Autopilot and Full Self-Driving. Consumers are questioning the safety of autonomous systems, and a survey by the U.S. marketing intelligence firm J.D. Power found that distrust of autonomous driving has increased.
The Tesla crashes did not just affect consumer sentiment. Although governments have been overhauling their rules since the accidents, there is still debate over liability, insurance coverage, and the legal framework for self-driving car accidents. Consider, for example, a fully autonomous vehicle that performs every driving action on its own once the driver sets the destination. Should the driver not be liable for an accident, and if not, who is: the car’s owner, the manufacturer, or the state under its oversight obligations? The insurance industry argues that manufacturers should bear liability because they are in the best position to control the risk of an accident, while the automotive industry counters that making the manufacturer bear 100% of the blame would be excessive. In Germany, a court has barred Tesla from advertising the feature as an “autopilot” on the grounds that it is still an incomplete, test-stage system, and South Korea, Japan, and Europe are developing and adopting standards that define the conditions under which autonomous vehicles may overtake or change lanes without the driver touching the wheel.
Beyond legal liability, self-driving cars also raise ethical questions. A thought experiment presented on the TED-Ed YouTube channel illustrates the problem. Suppose a self-driving car must react to an object that falls off the truck in front of it: it can stay in its lane and hit the object, swerve right into a motorcycle, or swerve left into an SUV. A human driver would make this decision on reflex, in the moment, but the self-driving car acts on a judgment a programmer wrote long in advance. On what basis did the programmer encode that judgment, and could the resulting choice look less like an accident and more like premeditated harm? If that hypothetical seems extreme, consider instead letting the car follow the ethical preferences of its passengers. Would those preferences be a better basis than programming the car to minimize overall harm?
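To make concrete what “a judgment written by the programmer” might mean, here is a minimal, purely illustrative Python sketch. Nothing in it comes from any real vehicle: the maneuvers, harm scores, and the occupant_weight parameter are invented placeholders, meant only to show that the trade-off has to be written down somewhere, ahead of time.

```python
# Minimal, purely illustrative sketch of a pre-programmed collision policy.
# Maneuver names, harm scores, and weights are hypothetical placeholders,
# not values from any real vehicle or vendor.

from dataclasses import dataclass


@dataclass
class Maneuver:
    name: str
    occupant_harm: float   # estimated harm to the car's own occupants (0..1)
    external_harm: float   # estimated harm to people outside the car (0..1)


def choose_maneuver(options: list[Maneuver], occupant_weight: float = 1.0) -> Maneuver:
    """Pick the option with the lowest weighted total expected harm.

    occupant_weight is the ethically loaded knob: 1.0 treats everyone
    equally, values above 1.0 increasingly favor the passengers.
    """
    return min(
        options,
        key=lambda m: occupant_weight * m.occupant_harm + m.external_harm,
    )


if __name__ == "__main__":
    # The three choices from the dilemma described above.
    options = [
        Maneuver("brake and hit the fallen object", occupant_harm=0.6, external_harm=0.0),
        Maneuver("swerve right into the motorcycle", occupant_harm=0.1, external_harm=0.9),
        Maneuver("swerve left into the SUV", occupant_harm=0.3, external_harm=0.3),
    ]
    decision = choose_maneuver(options, occupant_weight=1.0)
    print("Pre-programmed choice:", decision.name)
```

Every number and weight in such a function is a value judgment that someone had to make before the car ever left the factory.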
To gather data on exactly these ethical judgments, MIT has built a polling game called Moral Machine.


Example of choices from the Moral Machine survey (Source: Moral Machine)


Moral Machine is a platform for collecting social perceptions of the ethical decisions made by artificial intelligence such as self-driving cars. It presents situations in which a driverless car must choose between sacrificing its occupants and sacrificing pedestrians, and asks survey participants, as outside observers, which outcome they find acceptable; the number, social status, physical condition, and age of the occupants and pedestrians are randomized from scenario to scenario. If we were to program a self-driving car based on the results of this research, would it be right to set it to save as many lives as possible in an unavoidable accident, or would it be more desirable to prioritize the lives of the occupants?
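If one did try to turn such survey results into a programmable setting, a crude, hypothetical sketch might look like the following. The responses and the mapping to a single weight are invented for illustration; the real Moral Machine dataset, and any real policy-making process, are far more nuanced.

```python
# Crude, hypothetical sketch of turning Moral Machine-style survey answers
# into a single programmable parameter. The responses below are invented;
# the real Moral Machine dataset and any real policy process are far richer.

from collections import Counter

# Each answer records which outcome a participant found more acceptable
# in a randomized dilemma.
survey_responses = [
    "spare_more_lives", "spare_more_lives", "spare_occupants",
    "spare_more_lives", "spare_occupants", "spare_more_lives",
]


def derive_occupant_weight(responses: list[str]) -> float:
    """Map the share of 'spare_occupants' votes onto the occupant_weight
    knob from the earlier choose_maneuver() sketch: 1.0 means everyone is
    weighted equally, larger values increasingly favor the occupants."""
    counts = Counter(responses)
    occupant_share = counts["spare_occupants"] / len(responses)
    return 1.0 + occupant_share  # e.g. a 50/50 split yields a weight of 1.5


print("Derived occupant weight:", derive_occupant_weight(survey_responses))
```

Collapsing millions of answers into one number is exactly the kind of simplification that makes the ethical question so uncomfortable.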
If programming for these value judgments led to an actual accident, would it be possible to escape liability for the accident?
I believe the legal issues surrounding self-driving cars can be resolved, to some extent, by agreement between individuals or between individuals and society, but the ethical issues are different. Rapid technological advancement always raises ethical questions: the development of science and technology enriches our lives, yet it often creates situations that strain human ethics. When self-driving cars are commercialized, the likelihood of life-threatening accidents caused by drowsy, drunk, reckless, or retaliatory driving will decrease, and improved traffic flow will shorten travel times and free up more leisure time. But self-driving cars are not immune to ethical problems, and these are problems that will confront not only autonomous vehicles but also artificial intelligence, robots, and humanity as a whole. Questions about the value of human life, and about how human and animal lives are weighed against each other, must be addressed before autonomous vehicles are commercialized.

