Ethical dilemmas of self-driving cars: How do we determine liability, and what are the technical limitations?


We discuss the ethical dilemmas and questions of liability that may arise from the introduction of self-driving cars, explore how the ethical judgment of self-driving cars can be designed from utilitarian and deontological perspectives, and present different approaches to addressing these issues.


We face many conflicts in our lives. We argue with people over whose opinion is right, and we wonder whether our actions are ethically correct. Furthermore, we question why we should do what is “ethically right,” and what is “right” in the first place. In other words, we all long to know what is “right” or “good” in our lives. So what is ethics? We could give a long and complicated definition, but let’s simply think of it as what we, as human beings, are expected to do, and recall the opening statement: we face many “ethical conflicts” in our lives.
There are many kinds of ethical conflicts, but in this article we’ll focus on those related to science and technology. Most ethical issues in science and technology center on the question of responsibility: “Who is responsible?” When a technology is developed and someone is harmed as a result, there is always debate about who is responsible for the harm. When a human is operating or directing the machine, blame can be assigned to a specific person or group; but the debate drags on when responsibility is ambiguous, as it is with artificial intelligence.
As mentioned earlier, liability becomes blurred when a machine makes decisions and acts autonomously rather than being operated by a human. Self-driving cars in particular are becoming a common sight in our daily lives, yet we rarely have the opportunity to think deeply about their limitations or the ethical issues they raise. That’s why we’ve chosen self-driving cars as the topic of this article.
So, what are the ethical issues with autonomous cars? An autonomous car is a car that drives itself without the need for a driver. The ethical issues associated with autonomous cars become more prominent when there is concrete harm, such as someone getting injured or suffering financial losses. Applying this issue to self-driving cars, the question arises: “Who should be liable for harm caused by self-driving cars?”
Before discussing this, we should note that the subject of ethical responsibility is the ethical agent: one who uses rational judgment to reflect on whether what he or she is doing is right. If a person commits an ethically reprehensible act despite having sufficient capacity for rational judgment, that person is held accountable. Therefore, we need to examine whether autonomous vehicles can be defined as ethical agents.
Traditional discussions of robot ethics assume that robots are ethical agents acting on their own ethical principles. However, it is difficult to claim that autonomous vehicles act on their own ethical principles and rational judgment. Accordingly, there are two main areas of ethical responsibility for autonomous vehicles. The first concerns how self-driving cars should be designed to make decisions in the event of an accident. This is the classic “ethical dilemma”: if it is inevitable that someone will be injured, whose safety should the self-driving car prioritize, the pedestrian or the occupant? The second concerns technical defects: if a defect introduced in the production process injures a passenger or pedestrian, who is responsible?
When discussing the ethical issues of self-driving cars, the first issue is the central one, so this article will focus on it alone. Within the first issue, we can distinguish two cases: ⓐ conflicts between occupants and others, and ⓑ conflicts between pedestrians. Problem ⓐ can be illustrated by the cliff case: an autonomous vehicle on a narrow cliff-side bridge is about to collide head-on with a bus coming from the opposite direction. The vehicle faces a dilemma: should it continue on and collide with the bus, or drive off the bridge and kill its own occupants? Case ⓑ can be described by the trolley dilemma: a trolley with broken brakes is running down the tracks toward five workers, and if it continues, the five will die. The person operating the trolley can switch it onto a side track on which a single worker is working. The dilemma is whether or not to switch tracks: sacrifice the one to save the many, or spare the one and let the many die.
So which ethical algorithm should a self-driving car follow in such a dilemma? There are three main approaches to ethics in artificial intelligence (AI) and robotics, including autonomous vehicles: the top-down approach, the bottom-up approach, and the hybrid approach.
The top-down approach involves choosing a specific ethical theory and then analyzing the requirements of the computing system to design algorithms and subsystems that can implement that theory. In other words, algorithms are implemented based on a system of ethical theories, such as Bentham and Mill’s utilitarianism or Kant’s deontology. In this article, we’ll look at utilitarianism and deontology, two ethical theories that represent the top-down approach.
First, let’s look at utilitarianism. When we think of utilitarianism, we often think of the phrase “the greatest happiness of the greatest number.” In other words, the utilitarian approach considers the utility of all members of a group. If you think of utilitarianism as one big algorithm, it can have many sub-algorithms, for example, one that decides which utility values to prioritize. Applying this to the cliff case mentioned earlier: there are more people on the bus than in the self-driving car, so the total expected harm of hitting the bus, in casualties and in compensation such as medical expenses, is greater. A purely utilitarian algorithm would therefore have the self-driving car drive off the bridge.
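
To make this concrete, here is a minimal sketch in Python of a harm-minimizing decision rule applied to the cliff case. The `Outcome` class, the harm scores, and the passenger counts are hypothetical illustrations of the utilitarian idea, not any real vehicle’s decision logic.

```python
# A minimal sketch of a utilitarian decision rule (an illustration, not a
# production design). Harm is a hypothetical estimate per affected person:
# 0.0 = unharmed, 1.0 = fatal.
from dataclasses import dataclass

@dataclass
class Outcome:
    label: str           # e.g., "collide with bus"
    harms: list[float]   # estimated harm for each affected person

def choose_utilitarian(outcomes: list[Outcome]) -> Outcome:
    """Pick the outcome with the lowest total expected harm."""
    return min(outcomes, key=lambda o: sum(o.harms))

# Cliff case: assume 20 passengers on the bus plus 1 occupant in the car.
cliff_case = [
    Outcome("collide with bus", harms=[1.0] * 21),  # bus passengers + occupant
    Outcome("drive off the bridge", harms=[1.0]),   # occupant only
]
print(choose_utilitarian(cliff_case).label)         # -> drive off the bridge
```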
However, in a real-world collision, a utilitarian algorithm would be using such sub-algorithms to decide whom to hit. This process of “selecting” victims can lead to a variety of problems, including possible violations of the principle of equality (enshrined, in Korea, in Article 11 of the Constitution). Utilitarianism has the advantage that its top-level principle, “the greatest happiness of the greatest number,” is easy to computerize; but it is not clear how each individual’s utility should be calculated in a conflict situation, and treating human lives as units of utility raises fundamental questions.
The second is the deontological approach. Deontology holds that the moral evaluation of an action is determined by its conformity to a universal imperative, regardless of its consequences. Kant’s imperatives can be summarized as the formula of universal law, “Act in such a way that the maxim of your will could always hold at the same time as a principle of universal legislation,” and the formula of humanity, “Act in such a way that you treat humanity, whether in yourself or in others, always at the same time as an end and never merely as a means.” For the aforementioned cliff case, a deontological approach would follow the imperative to “never treat humanity as a means to an end.” The self-driving car would not treat the driver or the bus passengers as “means,” so it would try to avoid the collision altogether rather than weighing their lives against one another. Ethical algorithms based on a deontological approach can be viewed positively in that they can be designed to respect human life in every situation.
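
For contrast, here is a minimal sketch of how a deontological rule might be encoded, this time as a hard constraint rather than a sum. As before, the `Action` class and its flags are hypothetical illustrations.

```python
# A minimal sketch of a deontological rule as a hard constraint (hypothetical
# illustration). Instead of summing harms, each candidate action is checked
# against an inviolable duty: never deliberately use a person as a mere means.
# Totals are never compared.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Action:
    label: str
    uses_person_as_means: bool  # would this action sacrifice someone deliberately?

def choose_deontological(actions: list[Action]) -> Optional[Action]:
    """Return a duty-compliant action, or None if every option violates the duty."""
    permitted = [a for a in actions if not a.uses_person_as_means]
    return permitted[0] if permitted else None

# In the cliff case, both "hit the bus" and "sacrifice the occupants" treat
# people as means, so only avoidance maneuvers are permissible.
options = [
    Action("swerve and brake hard", uses_person_as_means=False),
    Action("collide with bus", uses_person_as_means=True),
    Action("drive off the bridge", uses_person_as_means=True),
]
choice = choose_deontological(options)
print(choice.label if choice else "no permissible action")  # -> swerve and brake hard
```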
However, the deontological approach also has limitations, most of which arise when its imperatives are applied in practice. The formula of universal law presupposes “universal legislative principles,” which raises the question of whether any principle can apply in all situations, and the criteria for defining such principles can be subjective. Finally, if human life is prioritized absolutely in every situation, it becomes difficult to make ethical judgments about other kinds of harm.
In addition to the top-down approaches discussed so far, there are bottom-up and hybrid approaches. Bottom-up approaches design algorithms by learning from empirical training data; because the algorithm is built to reflect empirical cases, it has the potential to work better in real-world situations than a top-down approach. The hybrid approach combines the strengths of both: top-down ethical principles, whether utilitarian (seeking maximum utility) or deontological (following universal imperatives), provide the frame, while bottom-up learning from data fills in how those principles apply to concrete driving situations, as in the sketch below.
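
Here is a minimal sketch of how the two stages might fit together in a hybrid pipeline. Everything in it is an assumption for illustration: the rule flags, the `learned_risk_score` placeholder, and the numbers do not come from any real system.

```python
# A minimal sketch of a hybrid pipeline (hypothetical throughout): top-down
# rules act as hard filters first, then a bottom-up, data-driven score ranks
# whatever survives the filter.
from typing import Optional

def learned_risk_score(option: dict) -> float:
    # Placeholder for a model trained bottom-up on real driving data;
    # here it just reads a precomputed estimate.
    return option["estimated_total_harm"]

def choose_hybrid(options: list[dict]) -> Optional[dict]:
    # Top-down stage: discard options that violate an inviolable rule.
    permitted = [o for o in options if not o["uses_person_as_means"]]
    if not permitted:
        return None  # no rule-compliant option; a real system would escalate
    # Bottom-up stage: among permitted options, minimize predicted harm.
    return min(permitted, key=learned_risk_score)

options = [
    {"label": "swerve and brake hard", "uses_person_as_means": False,
     "estimated_total_harm": 0.3},
    {"label": "brake in lane", "uses_person_as_means": False,
     "estimated_total_harm": 0.6},
    {"label": "collide with bus", "uses_person_as_means": True,
     "estimated_total_harm": 0.9},
]
print(choose_hybrid(options)["label"])  # -> swerve and brake hard
```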
The ethical issues of autonomous vehicles discussed in this article will only become more prominent as the technology evolves. We should focus our efforts on exploring different approaches to the ethical dilemmas autonomous vehicles will face, and on implementing those approaches in practice.


About the author

Blogger

Hello! Welcome to Polyglottist. This blog is for anyone who loves Korean culture, whether it’s K-pop, Korean movies, dramas, travel, or anything else. Let’s explore and enjoy Korean culture together!
