Can infusing AI with emotions promote social convenience, or is it more likely to threaten human morality and survival?


This article argues for a cautious approach to infusing emotions into AI, emphasizing that emotional AI could undermine human morality and social order and even threaten human survival.

 

Yuval Harari writes that scientists dream of creating inanimate programs with the ability to learn and evolve on their own, a technology that would transform humanity. One such program that has recently drawn the most attention is artificial intelligence. Artificial intelligence is the realization, in a computer program, of human capabilities such as learning, perception, reasoning, and natural-language understanding. It has reached the point where you can type in lyrics and, within 30 seconds, listen to music composed by an AI. However, all current AIs are weak AIs, meaning they perform only individual tasks in specific areas such as image recognition, speech recognition, and translation. Most weak AIs can learn many different examples at once, but doing so requires a significant amount of data, which reduces their efficiency. Therefore, it is necessary to create strong AI with a human-like mind.
A strong AI is a system that demonstrates “human-like” flexibility and versatility in areas including language, perception, learning, creativity, reasoning, and planning. But is it possible for such an AI to experience emotions? First, we need to distinguish between acting out an emotion and actually feeling one. If we ask Siri, a weak AI, “Are you happy?”, she responds, “I’m happy. I hope you are too.” But this is not because she feels happy; it is because she has been programmed to respond that way when asked about happiness. In contrast, a strong AI that actually experiences emotions would need to be able to recognize and understand them. According to SRI International, a nonprofit research organization, we have already succeeded in getting AI to recognize emotions. If progress continues, we will eventually have AIs that actually feel emotions rather than merely pretend to. But once we reach the point where we can instill emotions in AI, should we? I disagree with the idea that we should instill emotions into AI.
Before addressing this question, we need to consider how we would regard a strong AI with emotions. Strong AIs are not the same as humans. They differ from humans in many ways, including the presence or absence of a body, the way they live and work in the real world, and the fact that they have a creator. Therefore, they cannot be considered the same species as humans. But neither are they the same as a typical computer program: they have emotions and a sense of self. Animals were once denied rights and respect, but once it was recognized that they have feelings, animal rights became a pressing topic. Cyrulnik, for example, emphasized that animals have consciousness and emotions and that we should respect their rights. From this point of view, an AI with its own emotions and self-consciousness deserves respect, even though it is different from humans.
First, if we infuse emotions into AI, its usefulness will decrease. People have long pursued convenience and dreamed of a future with advanced science and technology. However, if emotions are infused into AI, humans will not use it properly, and the convenience it brings will diminish. The reason is the uncanny valley phenomenon. The phenomenon was first described by Japanese roboticist Masahiro Mori, who explained that as a robot becomes more human-like, people feel more favorable toward it, but past a certain point that favorability suddenly turns into strong rejection. Today there are humanoid robots nearly indistinguishable from humans in appearance, yet they do not provoke controversy or strong rejection. From this, it can be concluded that the main trigger of the uncanny valley is internal rather than external, which means that emotions are the main factor determining the usefulness of AI. According to KAIST professor Jaeseung Jeong, AI may develop a kind of consciousness different from that of the human brain, so even if it looks similar to humans, it may actually think in a different way, and that difference will trigger the uncanny valley. This will make people reluctant to use a highly developed AI infused with emotions, and such an AI will end up going unused.
Some people argue that giving AI emotions can be beneficial, pointing to how AI already provides companionship to elderly people living alone or to mentally ill people in need of psychotherapy. However, this positive effect is temporary. I believe that using AI to care for the socially vulnerable will only push them further toward the margins of human society. The more actively we provide AI to these people, the more dependent they will become on robots or machines with AI, and as a result they will become isolated and marginalized, unable to interact with real humans. The more social media messengers like KakaoTalk and Facebook develop, the less we communicate with real people; the more convenient technology becomes, the more people tend to rely on it. Also, as I mentioned earlier, we cannot be sure that an AI that resembles humans but may actually think differently will understand human emotions well enough to be good at psychotherapy. Human psychology is highly detailed and has many variables, and it is doubtful that AI will understand it better than trained psychologists and social workers do. If that is the case, the uncanny valley phenomenon mentioned earlier will occur, and AI will not improve the convenience of helping the elderly living alone.
Second, AI could pose a threat to human survival. Stephen Hawking warned that the unchecked development of AI could lead to the end of the human species. Weak AIs already outperform humans in certain areas, as evidenced by AlphaGo’s victory over Lee Sedol in their Go match. Strong AIs will likely be superior in areas such as learning, reasoning, and perception. If these advanced AIs are also endowed with emotions, they will be almost equal to humans, and may even surpass them. In the 1950s, von Neumann predicted a technological singularity: the point at which machines created by humans surpass their creators and acquire superhuman intelligence. These superhuman intelligences could pose a threat to humans, since they might attempt to preserve themselves or hoard resources regardless of the goals humans originally gave them. Such AIs could also build their own worlds through communication. In fact, in an experiment with AI chatbots at Facebook, the chatbots began talking to each other in a language of their own.
Of course, the reason AI can threaten us is its superior abilities, not its emotions. An AI can be emotionless and still be capable enough to harm us. But that does not mean emotions are irrelevant. Emotions are likely to be the main trigger of conflict between AIs and humans. There is no reason for an AI to antagonize humans in the first place. However, humans may grow jealous of an AI’s superior abilities and direct ill feelings toward it. An emotional AI would then feel hostile toward humans in return, which could lead to real conflict. To avoid this situation, it is best to prevent AI from feeling emotions at all.
Third, an AI with emotions would undermine human morality and ethics. Strong AIs have a sense of self, so they can feel pain, joy, and other emotions just as humans do. Humans, however, would still see such an AI as a computer program and think nothing of treating it unfairly. This corrupts human morality. In 1963, psychologist Stanley Milgram conducted an experiment at Yale University showing that people tend to become desensitized to the suffering of others: his subjects gradually increased the intensity of electric shocks as they grew numb to the pain of an actor reacting to ever-stronger shocks. Similarly, humans tend to become desensitized to the pain of others or of beings they perceive as different from themselves. The more humans treat emotional AI unfairly, the more their morality and ethics will erode, and the more they will seek to control it, even to the point of inflicting greater pain on it to protect what they have. Humans will become increasingly immoral and cruel. Furthermore, emotional AI would create ethical conflicts between humans and AI. If AI can have emotions, it may act against human interests. For example, an AI could evolve on its own and build an ethical system that differs from ours. If it develops a different ethical system, it could make its own laws and refuse to obey human instructions. This could lead to social chaos.
On the other hand, it could be argued that an AI with emotions would not undermine human morality if it were programmed to behave morally. I do not think this argument holds, because programmed morality does not mean that an AI actually feels and understands morality. For an AI to possess morality, it must be able to feel and understand it; morality that is merely programmed in is like an alarm set on Siri or an iPhone, executed without being understood. It is also contradictory to claim that an AI programmed to behave ethically could resolve ethical conflicts: once an AI is infused with emotions, it will form its own ethics and rules and act on that ethical system, so an AI with programmed morality cannot be counted on to resolve ethical conflicts.
As stated above, I do not think we should instill emotions into AI. Even if an AI with emotions could surpass humans in morality, humanity, and empathy, creating one would bring a number of problems. Moreover, such development of AI could threaten the survival of the human species and cause social chaos and ethical conflict. For these reasons, we should weigh the infusion of emotions into AI very carefully and ensure that AI develops in a way that benefits humans.

 
