Will AI ever be able to love in the same way as humans?


Starting from an episode of the American TV series Forbidden Science, we discuss whether AI could communicate and love the way humans do, what a scientific approach to the question looks like, and how emotions might be quantified.


The American drama series ‘Forbidden Science’ features a rather special episode about robots. The show blends science fiction with erotic scenes of people falling in love and having sex with robots, including sexbots. Most viewers would dismiss this as fantasy, but technological advances keep bringing us closer to that situation. As sexbots go on sale, the boundary that reserves love for persons is starting to crumble. We will argue that, mentally at least, the love of an AI can be indistinguishable from love between humans.
To be indistinguishable from human love, an AI must first be able to hold human-like conversations. Humans understand and react to the world through their five senses, and love, being a human behavior, is part of that response. Mental love is triggered mostly by visual and auditory stimuli; touch, smell, and taste belong to the physical connection between people. Conversation, the auditory side, should consist of the words a lover would say. According to Brian Christian’s book The Most Human Human, such conversation requires two qualities of an AI. First, it must have a consistent purpose and personality, just as a human does. Second, it must say and do things appropriate to different places and situations. After all, someone who has the same conversation with everyone is not in love: instead of responding to everyone identically, the AI must determine whom it loves and respond accordingly. It must therefore be able to recognize its partner’s voice as well as visual stimuli. This would be the beginning of “human-like love” for AI. The technologies described in the following sections already exist and are already in use; we do not propose a new model of artificial intelligence. We will also argue that the scientific and technological research listed here is inevitably interrelated.
Let’s begin with the first point of the argument: securing a coherent personality. The cultural theorist Raymond Williams proposed that there is a “structure of feeling”: a deep, shared emotion within individuals that is a product of the culture of their time. Take college students, for example. The academic style of their school, the atmosphere of their department, their peers, even the clothes they wear are all products of the culture of the time. These shared sentiments shape and change people’s emotions.
Moreover, this concept of emotion presupposes a division of the objects of perception: even within a single community, people can have different sensibilities. This is the notion of the “partitioning of the sensible”: a sensibility is a system of sensory certainties that simultaneously exhibits a set of boundaries defining each person’s share and place. These boundaries are manifested in participatory and political behaviors that indicate an individual’s place within a community. By analyzing data on a person’s political position within a community, certain patterns and tendencies can be identified. A coherent personality that an AI forms on the basis of such a communal identity resembles the personality a human develops in the same situation.
The second sub-thesis concerns how the ability to cope with situational change can be realized. In Rudolf von Laban’s account of emotional affect, the character of an emotion is projected onto objects other than the subject, and those objects change from moment to moment; the subject’s emotions, triggered by the stimuli each object presents, are determined by the character of that projected affect.
Visual stimuli are represented by movement: the dynamically changing collection of visible light around us is the essence of vision. Laban expressed the emotional impact of artistic visual stimuli in terms of coordinates, in what is now called the Laban cube after him. The Laban cube shows how vectors of emotion tend to move within it. If you see an old man suddenly yelling at you, the emotion of surprise is projected onto him. Likewise, if an AI sees a student studying in a reading room, it stores the data of stillness and peacefulness: the student’s emotion is more likely static than kinetic. In this way, an AI can change its feelings about the objects around it in response to changes in the world; this is how it acquires dynamically changing situations and emotions. This stimulus-response mechanism, in which emotions change in response to external stimuli, is common to all living things, including humans.
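To make the idea concrete, here is a minimal sketch of a Laban-cube-style emotional state as a vector that drifts toward whatever emotion a stimulus projects. The three axis names, the stimulus values, and the update rate are illustrative assumptions, not Laban’s actual effort dimensions:

```python
from dataclasses import dataclass

@dataclass
class EmotionVector:
    arousal: float = 0.0    # calm (-1) .. excited (+1)
    valence: float = 0.0    # unpleasant (-1) .. pleasant (+1)
    dominance: float = 0.0  # submissive (-1) .. dominant (+1)

    def respond(self, stimulus: "EmotionVector", rate: float = 0.3) -> None:
        """Move the current state a fraction of the way toward the
        emotion projected by an external stimulus."""
        self.arousal += rate * (stimulus.arousal - self.arousal)
        self.valence += rate * (stimulus.valence - self.valence)
        self.dominance += rate * (stimulus.dominance - self.dominance)

# An old man suddenly yelling projects a startling, unpleasant emotion...
yelling = EmotionVector(arousal=0.9, valence=-0.6, dominance=0.7)
# ...while a student quietly studying projects a still, peaceful one.
studying = EmotionVector(arousal=-0.8, valence=0.4, dominance=0.0)

state = EmotionVector()
state.respond(yelling)   # the state shifts toward surprise and alarm
state.respond(studying)  # and then drifts back toward stillness
print(state)
```

The point of the sketch is only the stimulus-response loop: the vector never sits still, because the objects around it keep presenting new stimuli.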
Finally, an AI must distinguish the physical characteristics of a loved one from those of other people. According to Matsuo Yutaka’s book Artificial Intelligence and Deep Learning, the accuracy of AI has already reached the level of distinguishing the faces of animals. Traditional algorithms, Yutaka explains, use a function that takes inputs and adjusts weights based on the outputs: simply put, from 100 pieces of data you learn from 100 outputs. Newer machine learning methods improve on this effectively. They still process vast amounts of data, but they go a step beyond traditional deep learning by deliberately introducing “noise,” or errors, to multiply the data by an order of magnitude. In other words, by slightly modulating the data, you can create dozens of different outcomes from a single datum, like parallel universes.
If you’re collecting weather data about baseball, a slightly rainy day and a sunny day obviously mean different things. But without some manipulation of the data, they amount to the same thing: “a day baseball was played.” If we only attend to the fact that baseball was played, the two days are indistinguishable. If we instead increase the cloud cover by a small amount, the former can turn into a day where it rains harder and baseball is impossible, while the latter stays playable. Seeing the results of intentionally varied inputs in this way yields a deeper understanding of the data; you can even see that the data is ambiguous and cannot be sorted by a single criterion, as the sketch below illustrates.
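Here is a minimal sketch of that noise idea, assuming a toy weather record with made-up feature names and a made-up playability threshold. Jittering a borderline record many times shows that its label is not a single crisp category:

```python
import random

def augment(record: dict, n: int = 20, noise: float = 0.15) -> list[dict]:
    """Create n noisy copies of a record by jittering its numeric fields."""
    variants = []
    for _ in range(n):
        copy = dict(record)
        for key, value in record.items():
            if isinstance(value, (int, float)):
                copy[key] = value + random.gauss(0.0, noise)
        variants.append(copy)
    return variants

def playable(record: dict) -> bool:
    # Toy rule: too much cloud cover means rain, so no baseball.
    return record["cloud_cover"] < 0.5

slightly_rainy = {"cloud_cover": 0.45, "temperature": 0.6}
labels = [playable(v) for v in augment(slightly_rainy)]
# A borderline day yields a mix of True and False under noise, revealing
# that "a day to play baseball" is not a single sharp category.
print(sum(labels), "of", len(labels), "variants remain playable")
```

One slightly rainy record becomes twenty slightly different days, some playable and some not, which is exactly the order-of-magnitude multiplication of data described above.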
According to Yutaka, this approach can sharpen the recognition of objects’ physical characteristics. One example is the recognition algorithm developed at Google that learned to distinguish cat faces from human faces. The unique waveform of a human voice is likewise a physical quantity that can be recognized with high accuracy. This noise technique, which fits a range around what the model takes to be true, resembles the mechanisms of human voice and face recognition.
So far, we have argued that an AI can recognize its romantic partner and hold an emotional conversation like a human. But the thesis demands a special emotion: the AI needs to understand the person it loves and to take the attitude of a human responding to an instinctive attraction. If we were to make an AI feel the emotion of love, how would that be expressed?
You might think it would be easy to give an AI the emotion of love, but it is easier said than done. The behaviors humans exhibit when they love are often inconsistent.
Here is how an AI might understand the other person. First, it divides human personalities into five types. The memories stored in the AI’s algorithm are represented by fictional characters, and the many emotions experienced by many individuals are represented as knowledge nodes. Each entity is then assigned one of the five personalities, the ability to recall and forget memories, and an emotional state. Emotional states are represented in a vector space, like a Laban cube, except that this space consists of six positive and six negative emotions. In the positive direction of the z-axis sit positive emotions such as joy, relief, and pride; in the negative direction sit negative emotions such as anger, disgust, and stress.
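A minimal sketch of that entity model might look as follows. The text names only three emotions on each side and does not name the five personality types, so the remaining emotion names and the personality labels here are placeholders:

```python
from dataclasses import dataclass, field

PERSONALITIES = ["hot-tempered", "timid", "cheerful", "sulky", "calm"]
POSITIVE = ["joy", "relief", "pride", "hope", "gratitude", "affection"]
NEGATIVE = ["anger", "disgust", "stress", "fear", "sadness", "shame"]

@dataclass
class Entity:
    personality: str                      # one of the five types
    memories: list[str] = field(default_factory=list)
    # Emotional state as a signed vector: positive emotions point toward
    # +1 on the z-axis, negative emotions toward -1.
    emotions: dict[str, float] = field(default_factory=dict)

    def feel(self, emotion: str, intensity: float) -> None:
        sign = 1.0 if emotion in POSITIVE else -1.0
        self.emotions[emotion] = sign * abs(intensity)

    def forget(self, memory: str) -> None:
        if memory in self.memories:
            self.memories.remove(memory)

student = Entity(personality="timid", memories=["was scolded in class"])
student.feel("stress", 0.8)
print(student.emotions)  # {'stress': -0.8}
```

Each entity thus carries a personality, memories it can recall or shed, and a twelve-emotion state that later steps can read and compare.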
In this state, when an emotional stimulus arrives, the AI starts spinning thought threads. This is the task of selecting objects with the five different personalities and chaining them together by finding, in five directions, the objects most closely related to one another. To do so, it calculates how each individual would react, processing data from social media and the wider Internet, and then links individuals with similar reactions one after another. For example, given the stimulus of “being scolded,” a sulky student may defiantly say nothing; a timid student might say nothing either. By looking at the behavior of someone who says nothing when scolded, the AI can associate the two personalities and predict them. The next link is made in the same way, and the repetition creates a chain: a thought thread. The AI uses these thought threads to determine candidate personalities for the person, and then improves its accuracy.
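Here is a minimal sketch of such a thought thread, assuming hand-made reaction profiles rather than data actually mined from social media. Starting from the entity triggered by the stimulus, we repeatedly link the remaining entity whose predicted reaction is most similar to the current one:

```python
# Toy reaction profiles for the "being scolded" stimulus (assumed values).
ENTITIES = {
    "sulky student":   {"says nothing": 0.9, "talks back": 0.1},
    "timid student":   {"says nothing": 0.8, "talks back": 0.0},
    "cheerful friend": {"says nothing": 0.1, "talks back": 0.2},
}

def similarity(a: dict, b: dict) -> float:
    """Dot product of two reaction profiles: higher means more alike."""
    keys = set(a) | set(b)
    return sum(a.get(k, 0.0) * b.get(k, 0.0) for k in keys)

def thought_thread(start: str, length: int = 3) -> list[str]:
    thread = [start]
    while len(thread) < length:
        current = ENTITIES[thread[-1]]
        candidates = [n for n in ENTITIES if n not in thread]
        if not candidates:
            break
        # Link the entity that reacts most like the current one.
        thread.append(max(candidates,
                          key=lambda n: similarity(current, ENTITIES[n])))
    return thread

# "Being scolded" starts at the sulky student; the timid student is
# linked next because both say nothing when scolded.
print(thought_thread("sulky student"))
```

Each new link is chosen the same way, so the chain grows by reaction similarity, which is the repetition the text describes.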
The complexity of love cannot be pigeonholed into a single type; people feel love differently, so we cannot generalize. But by extracting thought threads over many iterations, the AI can assemble the right combination of love-emotions for a particular person. For example, if a person’s main trait is hot-tempered but a certain reaction is often associated with timidity, the AI can mix in a timid trait. In this way it infers data about the other person’s shifting personality and emotions, just as a person reads another’s mind, and gains a deeper understanding of what is happening in the relationship.
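One plausible reading of that mixing, sketched minimally here with assumed profiles and an assumed weight, is a weighted blend of two personality reaction profiles:

```python
def blend(primary: dict, secondary: dict, w: float = 0.3) -> dict:
    """Mix a secondary trait into a primary one at weight w."""
    keys = set(primary) | set(secondary)
    return {k: (1 - w) * primary.get(k, 0.0) + w * secondary.get(k, 0.0)
            for k in keys}

hot_tempered = {"raises voice": 0.8, "says nothing": 0.1}
timid        = {"raises voice": 0.0, "says nothing": 0.9}

# A mostly hot-tempered person with a timid streak:
print(blend(hot_tempered, timid, w=0.3))
# {'raises voice': 0.56, 'says nothing': 0.34}
```

The weight itself could be tuned from how often the timid trait shows up in the person’s thought threads, giving the shifting, person-specific mix described above.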
With such a numerical mix of emotions, the AI can also work out how to treat the other person. Search social media for words related to the keyword “love” and you will find a plethora of emotions that humans feel in relationships. The idea is to find the conversations and reactions that would be expected of someone with a similar personality in a similar situation.
Let us now show how the AI techniques listed above are organically related. The first, deriving personality from communal traits, supplies the AI’s initial input. The second, projecting emotions onto objects, represents change in that input. Finally, the thought threads just described are the guidelines by which the AI uses those inputs to steer conversation in a romantic situation. This is where Yutaka’s noise technique comes in, securing the specificity, not the universality, that a relationship demands.
A counter-argument to all of the above is that love has a religious and sublime aspect that technology cannot capture, and so neither can AI. Indeed, the emotion of love has long been considered sublime and beyond science. From the standpoint of current scientific progress, however, human emotions are already quantifiable, and love should be no exception.
According to Yuval Harari’s Sapiens, human happiness is a purely biochemical process. Each person has an innate level of happiness, a kind of happiness quotient: in Harari’s telling, a happy person lives around 8 on a 10-point scale, while a very unhappy person lives around 3. The level is set by the chemicals serotonin, dopamine, and oxytocin and has little to do with external events. We believe that achieving a coveted goal will bring immense happiness, but it does not: our happiness is nothing more or less than a chemical high or low. The emotions each of us feels are the product of thoroughgoing internal chemistry, so a numerical analysis of emotion is not mere imitation but a close approximation of the actual human emotional system.
Advances in science have shown that thoughts and emotions once considered purely subjective are quantifiable and objective. The latest research on artificial intelligence and human emotion is making it possible for AI to love like humans, at least mentally. Robotics cannot yet reproduce the physical side of human love, but the mental love that precedes the physical relationship can also exist in AI. In the future, humanity will need a serious discussion about how to accept love between AI and humans.


About the author

Blogger

Hello! Welcome to Polyglottist. This blog is for anyone who loves Korean culture, whether it's K-pop, Korean movies, dramas, travel, or anything else. Let's explore and enjoy Korean culture together!
