Can robots with human-like intelligence and morality be entrusted with human intellectual judgment and responsibility?


If robots with human-like intelligence and morality are ever developed, we will need to decide whether we can entrust them with the important decisions humans now make. I conclude that robots cannot be entrusted with all such decisions because they lack accountability, value judgment, and creativity.

 

If robots are created with intelligence and morals close to those of humans, could we entrust them not only with simple labor but also with the intellectual judgments humans make? This may sound like a topic confined to science fiction movies and novels. Yet the United States has recently built combat robots that look like primitive versions of the machines in films such as Star Wars. These robots have sparked ethical controversy: can they reliably distinguish friendly soldiers from enemies and from civilians, and is it acceptable for them to kill people at all? War is an exceptional situation, but if the technology spreads into everyday life, we could end up with the "robots that think for themselves" we know from the movies.
My answer is no: we cannot leave these judgments to robots. No matter how advanced robots become, people will never be able to hand everything over to them. Those who disagree point out that the information-processing capabilities of the machines we already use are superior to our own, and they argue that robots could take over our work because human thought, too, ultimately consists of electrical signals in the brain. I still believe, however, that robots cannot be entrusted with sophisticated intellectual judgment, and I will present three main arguments: accountability, value judgment, and creativity.
Before developing my argument, I need to state a few assumptions that will allow for a more precise discussion. The first is that the robots under discussion have intelligence and morality close to human levels. Today's robots already surpass us in raw information processing, but when we speak of intelligence here, we mean not just processing power but also the morals, emotions, and situational judgment that humans bring to decisions. This sets the discussion apart from articles that portray robots as emotionless machines; in other words, I assume the gap in empathy that currently separates robots from us has largely been erased. The second assumption is that even as robots become more human-like, they remain products, and there will therefore be industry regulations, performance standards, or laws they must satisfy. With these assumptions in place, let us turn to the arguments.
First, we cannot entrust robots with all of our work because it is unclear who would be held accountable if they did something wrong. Just as the computers, smartphones, and appliances we use today can malfunction, so can robots. With today's machines, a malfunction usually causes only a small delay or inconvenience. But if robots take over human judgment, they will be making critical decisions, and a malfunction could have far greater consequences. In the United States, for example, an error in a power-plant control system once left an entire state without power. That system was relatively simple, but even sophisticated robots are not immune to malfunction. At present we rarely use robots for such important decisions, but if they advance and are given more important tasks, the impact of their failures will not be negligible.
But when a robot does something wrong, can it be punished? Humans try to avoid mistakes and become more cautious about important decisions because errors cost them reputation or property, and when wrongdoing occurs despite their efforts, they are held accountable. Robots, however, even with human-level intelligence, are still products and have no legal personhood. They cannot own money or reputation, so they have nothing with which to compensate anyone and nothing to lose to punishment.
So who is responsible when a robot malfunctions? Whether you point to the manager, the maker, the owner, or the user, none of them is directly responsible, and the user may even be the victim yet still end up bearing the blame. Because of this, we cannot leave decisions entirely to robots. They can propose solutions and process information quickly, but the final decision must remain with humans.
Of course, some argue that as technology improves, robots will come to feel pain just as humans do, and so they could be punished. Humans feel remorse when they harm others and are punished by society for it. But even a remorseful robot, being a product, cannot compensate anyone or take responsibility for itself. One could inflict physical pain as punishment, as pre-modern societies did, but even if robots felt pain exactly as humans do, punishing them with physical harm would be barbaric, and it is doubtful that "executing" a robot would mean anything.
Another counter-argument is that legal regulations written in advance could ensure accountability when robots malfunction: just as the law holds humans accountable for wrongdoing, laws could spell out who is responsible for a robot's failure. This argument is unrealistic for two reasons. First, even with a legal basis, everything would still come down to interpretation. As contested verdicts and acquittals show, the law is not as clear-cut as it seems; provisions conflict, and the same provision can lead to different outcomes in different cases. The law is not a perfect tool that resolves every problem, only a framework that provides a basis for resolving them.
Second, in practice it can be virtually impossible to hold anyone legally accountable. Consider the candidates: the maker, the owner, and the user, each of whom could be held liable. The maker cannot be responsible for a robot's malfunctions forever. If an error appears early in the product's life, the maker will likely be held accountable, but over time the product cannot be kept in its original state. Just as our laptops and smartphones carry warranties of one or two years, robots will have a warranty period, and after it expires it will be difficult to blame the manufacturer, although a legal standard could perhaps be anchored to that period.
In that case, responsibility would fall primarily on the owner or the user. The problem is that as robots replace human judgment, the damage a malfunction can cause grows. As the book Moral Machines: Teaching Robots Right from Wrong suggests, the harm may not be limited to a single individual or a small group; the power-plant error mentioned earlier affected an entire state. The liability could then weigh far too heavily on one person or one company, and the responsible party may simply lack the means to compensate, making liability itself pointless. Furthermore, if the owner is a state rather than an individual or an organization, we get the ironic situation in which the injured citizens are compensated out of their own taxes.
Holding users accountable can be just as unfair, as noted above: if the victim and the user are the same person, it is unreasonable to make the user answer for harm done to themselves. The same goes for owners. A robot with near-human intelligence and morality is precisely the kind of machine meant to be left to decide and operate on its own, so it would be unfair to blame an owner who merely allowed it to run and never intervened in its misbehavior. Legislating liability does not solve the problem.
This brings me to the second reason robots cannot be entrusted with human decisions: the problem of value judgment. This may seem to contradict my earlier assumption, but even if robots could technically be made capable of value judgments, the question is whether those judgments would be socially acceptable. Near-human intelligence and morality do not guarantee good, harmless decisions. Looking at our own society, the answer is probably no. Most people know right from wrong, yet they sometimes do wrong anyway, depending on the situation and their personal values; in the same circumstances, some people commit crimes and others do not. The difference lies in human will and value judgment.
In other words, robots will make their own judgments just as humans do, and the consequences will be unpredictable. Like V.I.K.I. in the film I, Robot, robots might even harm humans in order to curb humanity's destructive nature. Because of this unpredictability, robots cannot be entrusted with all human tasks even if they have near-human intelligence and morality. Some may argue that robots will do no harm because they will only carry out the commands they are given, but the problem is that we ourselves cannot always be sure what is right. Utilitarianism and Kant's deontology sometimes reach opposing moral conclusions; they take different stances, for instance, on whether a well-intentioned lie is permissible.
Both utilitarianism and deontology provide criteria for moral judgment, but the motives behind a judgment are not always moral. The United States, for example, invoked the phrase "axis of evil" to justify its wars in the Middle East while also being motivated by the interests of American defense contractors. In the same way, even robots that reason from moral theories could abuse them just as humans do. Since we do not even have a single clear standard for moral judgment, I maintain that robots cannot be entrusted with human decisions and that the final decision should remain with humans.
To this, one might respond: don't humans suffer from the same problem? People disagree with one another when making decisions, so a robot with superior information-processing capabilities might actually decide better. And if robots are capable of value judgments, could several robots not deliberate together and reach a decision? I disagree, for two reasons.
The first is that people will stop using robots that behave unpredictably. A robot is still a product: it should do what we want, and if it does not, it cannot take over our work. Think of how irritating it is when a word processor keeps "correcting" a lowercase "a" at the start of a sentence to "A" against your wishes. If robots begin making their own value judgments, they are likely to cause more inconvenience than convenience.
The second reason is that not every decision is made by a group. Where a group of robots can deliberate, fine, but where that is not feasible a single robot will decide on its own, and a single wrong decision can be devastating. Even when robots do deliberate together, they may be unable to agree on a common moral standard, just as utilitarianism and deontology conflict.
The third reason robots cannot be entrusted with human work is that they cannot be creative. One could argue that human creativity is itself built on accumulated experience, but accumulating experience is one thing and generating genuinely new ideas is another. A chess robot beats a champion by calculating every possible move, not by inventing a new strategy. A robot may memorize more cases and solve problems statistically, but that only highlights the difference between humans and robots: statistical judgment can overlook small changes or extremely rare cases, which humans may be better at spotting.
Even when using deduction, one of the standard methods of scientific research, robots may underestimate the likelihood of rare possibilities, and that tendency limits their ability to generate hypotheses. Breakthrough scientific theories often rest on innovative ideas that overturn the majority view, and such ideas rarely emerge from statistics or existing data alone. Robots are also finite in their processing capacity; no single robot can carry out all the research in the world. It will still fall to humans to decide which areas to investigate and to propose new directions.
So far I have discussed three reasons robots cannot take over human decision-making, along with some counterarguments. Of course, we have not yet created a robot with human-like intelligence and morals, and we are unlikely to do so anytime soon. But science and technology have advanced rapidly, and ethical standards have sometimes failed to keep pace. Just as the hard questions about the atomic bomb were raised only after it was built, the impact of near-human robots will be enormous, and by the time they arrive it may be too late to start the discussion. We need to begin preparing for it now.

 
