What are the safety and ethical dilemmas of self-driving cars and how can we address liability in the event of an accident?


Self-driving cars are on the verge of commercialization, but many people still don’t understand the safety concerns, ethical dilemmas, and liability questions they raise in the event of an accident. Addressing them will require improving road infrastructure, setting ethical standards, and clarifying legal liability, and that in turn will demand collaboration among government, business, and academia.

 

Introduction

If you look at car commercials these days, you’re likely to see some form of artificial intelligence. Self-driving cars are on the verge of commercialization, and the U.S. Department of Transportation has already released a 15-point set of guidelines to prepare for the era of autonomous vehicles. Even at this stage, however, many people know little about the technology. With this in mind, let’s take a look at three of the hottest debates about autonomous vehicles: safety, the trolley dilemma, and liability in the event of an accident.

 

Safety

In 2022, 42,915 people were killed in traffic accidents in the United States, a 0.2% decrease from 2021. The toll nevertheless remains high, and risk factors such as cell phone use while driving, speeding, and drunk driving have increased significantly since the pandemic. According to the National Highway Traffic Safety Administration (NHTSA), about 20 percent of all traffic crashes are related to cell phone use behind the wheel.
“When we get behind the wheel, we make mistakes because we’re only human, and there comes a point where we’re no longer allowed to drive,” says Raj Rajkumar, co-director of the General Motors–Carnegie Mellon autonomous driving research lab. But Joan Claybrook, a former administrator of the National Highway Traffic Safety Administration (NHTSA), counters the argument that self-driving cars will save us from human error: “software drivers” break down too, she says, “because they’re just a bunch of machines.” As Claybrook points out, self-driving cars are built from electronics that constantly interact with their surroundings, which makes them vulnerable to hacking. Even with conventional cars, there have been cases of theft enabled by hacking vehicle electronics such as the event data recorder (EDR) connected to a dashcam; with self-driving cars, the attack surface is far larger and the potential for exploitation far greater. Jake Fisher, director of automotive testing at Consumer Reports, also cautions that “self-driving car systems are actually less capable than people think,” and that “the hardest thing about self-driving cars is dealing with humans, and humans are unpredictable.”
When discussing the safety of self-driving cars, there is also the issue of road conditions and infrastructure, beyond the possibility of system failure or hacking. Self-driving cars rely on highly advanced sensors and AI, but these work reliably only when roads are well maintained and communication between vehicles is seamless. If road signs are damaged or road surfaces are in poor condition, self-driving systems risk malfunctioning. Improving road infrastructure and ensuring the standardization and reliability of vehicle-to-vehicle communication systems are therefore essential for the widespread adoption of self-driving cars, as the sketch below illustrates.
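
The standardization point is worth making concrete. The sketch below shows, in Python, the kind of “basic safety message” vehicles might broadcast to one another. The field names are hypothetical, loosely inspired by the SAE J2735 standard; this is an illustrative sketch, not an actual protocol definition:

```python
from dataclasses import dataclass

# Illustrative sketch of a vehicle-to-vehicle "basic safety message".
# Field names are hypothetical, loosely modeled on SAE J2735; this is
# not an implementation of any deployed V2V protocol.

@dataclass
class BasicSafetyMessage:
    vehicle_id: str     # temporary rotating ID, to limit tracking
    timestamp_ms: int   # when the reading was taken
    latitude: float
    longitude: float
    speed_mps: float    # speed in meters per second
    heading_deg: float  # direction of travel, 0-360 degrees
    brake_active: bool  # whether the brakes are currently applied

def is_stale(msg: BasicSafetyMessage, now_ms: int, max_age_ms: int = 500) -> bool:
    """A message older than about half a second is useless for collision
    avoidance, which is why low-latency, standardized links matter."""
    return now_ms - msg.timestamp_ms > max_age_ms
```

If every manufacturer defines these fields differently, or delivers them too slowly, the seamless communication that self-driving cars depend on never materializes.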

 

The trolley dilemma

The “trolley dilemma” is a thought experiment in ethics that posits the following situation. A trolley is barreling down a track toward five people who cannot get out of the way. You are standing beside the track next to a switch: pull it, and the trolley diverts onto a side track, saving the five but killing one person standing on that side track. Is it morally permissible to pull the switch? For self-driving cars, this is not just an ethics-class puzzle; it maps directly onto how the AI handles an unavoidable accident, and onto the question of whether to sacrifice the driver or others. For example, if a vehicle is about to plow into a large group of pedestrians, should it protect the many even if the driver is injured, or save the driver at the expense of many others? Since the person writing the program is not omniscient, it would be very dangerous to let a self-driving car set its steering and speed simply by counting lives.
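
To see why, consider a deliberately oversimplified, purely hypothetical sketch of such a “count the lives” policy in Python. Nothing here resembles a real autonomous-driving stack, and the class and function names are invented for illustration; the point is how a moral stance collapses into a single line of arithmetic:

```python
from dataclasses import dataclass

# Hypothetical illustration only: no real self-driving system chooses
# maneuvers from a table of predicted casualties like this.

@dataclass
class Maneuver:
    name: str
    pedestrian_harm: int  # predicted number of pedestrians harmed
    occupant_harm: int    # predicted number of vehicle occupants harmed

def utilitarian_choice(options: list[Maneuver]) -> Maneuver:
    # "Count the lives" policy: minimize total predicted harm,
    # weighting occupants and pedestrians identically.
    return min(options, key=lambda m: m.pedestrian_harm + m.occupant_harm)

options = [
    Maneuver("stay_in_lane", pedestrian_harm=5, occupant_harm=0),
    Maneuver("swerve_into_barrier", pedestrian_harm=0, occupant_harm=1),
]
print(utilitarian_choice(options).name)  # -> swerve_into_barrier
```

Change the weighting in that one comparison (say, to favor occupants) and you have silently encoded a different ethics.
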
Various approaches have been proposed to resolve this ethical dilemma. Some researchers argue that self-driving cars should be programmed to follow explicit ethical principles, in effect designing the AI to make moral judgments the way a conscientious human driver would. Others believe that the decision-making process of self-driving cars should be transparent, and that the ethical standards it embodies should be set by social consensus. These discussions will need to continue as the technology evolves, and it is important that a wide range of stakeholders take part.

 

Liability in the event of an accident

In the case of self-driving cars, there is debate over whether the driver negligence covered by insurance should extend to accidents caused by the self-driving AI, since the person in the driver’s seat may not actually be driving at the moment of the accident. Traditionally, fault is determined by examining the conduct of the human behind the wheel. With self-driving cars, however, the AI is a mass-produced product of the manufacturer, so legal opinion is divided on whether an accident caused by the AI is a product defect or an objectively unforeseeable accident. Unless this question is resolved, every accident risks a lawsuit between driver and manufacturer, and that alone would be an obstacle to commercialization. Looking more closely at the levels of autonomy, however, suggests a way through.
According to the National Highway Traffic Safety Administration (NHTSA), autonomous vehicle technology is divided into four levels. Level 1 is selective active control, the automation of individual functions such as lane departure warning and cruise control, which many cars already offer today. Level 2 is integrated active control, in which driver-assistance technologies such as Tesla’s Autopilot work together, keeping the driver’s eyes on the road but freeing their hands and feet from the steering wheel and pedals. Level 3 is limited autonomy: the vehicle monitors traffic and road conditions, allowing the driver to engage in other activities such as reading, and requires intervention only in certain situations; Google’s self-driving cars fall here. Level 4 is full autonomy, in which the car drives itself in all situations and requires no driver intervention. If the law clearly assigned liability in advance according to a vehicle’s level of automation, disputes after an accident could be greatly reduced.
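
If the law were written against this four-level scale, the allocation of liability could in principle be stated almost mechanically. The sketch below is a hypothetical thought experiment in Python, not a description of any actual statute or NHTSA rule:

```python
from enum import IntEnum

class AutomationLevel(IntEnum):
    SELECTIVE_ACTIVE_CONTROL = 1   # e.g., cruise control, lane departure warning
    INTEGRATED_ACTIVE_CONTROL = 2  # combined functions; driver still supervises
    LIMITED_AUTONOMY = 3           # driver intervenes only when requested
    FULL_AUTONOMY = 4              # no driver intervention at all

def presumed_liable_party(level: AutomationLevel,
                          ignored_takeover_request: bool = False) -> str:
    """Hypothetical rule of thumb: the less the human is expected to do,
    the more responsibility shifts toward the manufacturer."""
    if level <= AutomationLevel.INTEGRATED_ACTIVE_CONTROL:
        return "driver"        # human was still supervising the system
    if level == AutomationLevel.LIMITED_AUTONOMY and ignored_takeover_request:
        return "driver"        # the system asked for help and was ignored
    return "manufacturer"      # the system was fully in charge
```

Real statutes would of course be far more nuanced, but fixing even a presumption like this in advance would spare drivers and manufacturers much of the litigation described above.
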
The question of liability for accidents involving autonomous vehicles is also likely to keep evolving as the technology advances. As AI systems learn and the error rate of autonomous driving falls, it should become easier to define clear liability standards. This remains an area of active interest and research for the insurance industry and legal experts.

 

Conclusion

In this article, we’ve looked at three controversies surrounding self-driving cars: safety, the trolley dilemma, and liability in the event of an accident. We hope it has prompted even readers who had never given self-driving cars much thought to weigh their hopes and fears for the near future.
In the United States, it took more than 70 years for landlines to reach 90% of households, compared with about 15 years for cell phones and 8 years for smartphones. Given how quickly recent technologies have spread, self-driving cars can likewise be expected to be adopted in a relatively short time; indeed, many analysts of the autonomous vehicle industry expect penetration to rise significantly over the next 10 to 15 years. It will not be long before self-driving cars are a common sight on the road, and before that day arrives, society needs to discuss and prepare for the challenges they raise.
Recent industry reports likewise note that self-driving technology still faces challenges across safety, law, and infrastructure, with ethical decision-making, cybersecurity, and data privacy emerging as the key issues. Governments, private companies, and academia must work together to develop the policies and technologies to address them. Only then can autonomous vehicles become a safe and reliable form of transportation.

 

About the author

Blogger

Hello! Welcome to Polyglottist. This blog is for anyone who loves Korean culture, whether it's K-pop, Korean movies, dramas, travel, or anything else. Let's explore and enjoy Korean culture together!
