Can AI fully mimic human thought and emotion, and what should we make of it?


This article discusses the progress and limitations of artificial intelligence research, specifically exploring whether AI can mimic human thought and deductive reasoning. It also raises the ethical and philosophical issues that will arise if AI can think and feel like humans, and considers the challenges humanity will face in the future.

 

AI research, which began as an academic curiosity in the 1950s, is now woven into the fabric of our society. The technology has evolved from the simple computation and pattern recognition of its early days to partially replacing human roles in a variety of fields, including economics, mathematics, and science. AI touches human life far more broadly and deeply than we realize, and its development will have a profound impact on the future of humanity.
In particular, advances in AI have aimed at mimicking many of the abilities once thought uniquely human. Among the major mental activities of humans, such as emotion, thought, and memory, thinking is considered the most important. This is because, for an AI to fully take over human roles, it must be able to replicate all the activities that occur in the human brain. Memory is the capacity AI already handles best, and emotion can be seen as arising from thoughts about others.
René Descartes held that thinking is what demonstrates the value of human existence ("I think, therefore I am"). This rests on the premise that humans alone use language and create original cultures through thought. In other words, thinking itself can be seen as a uniquely human ability. So, are AIs not thinking?
In 1950, Alan Turing published a paper on machine thinking, "Computing Machinery and Intelligence." Because thinking is abstract and difficult to define precisely, Turing proposed the Turing Test as a practical substitute: if, in a conversation conducted over a teleprinter, a human judge cannot tell whether the interlocutor is a machine or a person, the machine may be said to think. Turing's proposal was convincing and intriguing to people at the time, but in hindsight it seems absurd. A machine can simply analyze some of the grammatical structures and words in human speech and assemble replies from language built into its programming, making it merely seem to communicate with humans.
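To make that objection concrete, here is a minimal, hypothetical sketch in Python of the kind of pattern-matching program described above. The rules and templates are invented for illustration; the point is that the program rearranges the user's own words inside canned templates, with no model of meaning behind any of it.

```python
import re

# A toy, ELIZA-style responder: it matches surface patterns in the
# input and echoes fragments back inside canned templates. Nothing
# here models meaning; the "conversation" is pure rearrangement.
RULES = [
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "Why do you say you are {0}?"),
    (re.compile(r"\bI think (.+)", re.IGNORECASE), "What makes you think {0}?"),
    (re.compile(r"\bbecause\b", re.IGNORECASE), "Is that the real reason?"),
]
FALLBACK = "Please, go on."

def respond(utterance: str) -> str:
    """Return a canned reply assembled from the user's own words."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            fragment = match.group(1).rstrip(".!?") if match.groups() else ""
            return template.format(fragment)
    return FALLBACK

print(respond("I am certain that machines can think."))
# -> "Why do you say you are certain that machines can think?"
```

A program like this can sustain a surprisingly long exchange, which is exactly why surface fluency is such a weak test of thinking.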
So what conditions must be met for an AI truly to be able to think? First, we need to clarify the definition of thinking. Thinking itself is difficult to define, and there are many perspectives. From a humanistic perspective, thinking is the process of reasoning one's way to a solution when perception or memory falls short. We can also include feelings and, by extension, emotional consideration for other people. From a neurological perspective, thinking can be defined as the interaction between the body's internal feedback and the external environment. Since AI is not a physical, embodied entity, the neurological definition is difficult to apply. It is therefore reasonable to adopt the humanistic perspective in judging whether AI thinks.
The perceptions and memories mentioned above are acquired from existing data, so reasoning from them can be considered inductive. By contrast, the process of inferring something genuinely new when acquired perception or memory falls short can be considered deductive. It is deductive thinking that allows humans to use language and create unique cultures. If AI were capable of deductive thinking, it could create its own original language or culture just as humans do. An AI capable of deductive thinking could therefore be said to think like a human. So, let us see whether AI can actually think deductively.
AlphaGo, one of the most advanced AI systems available today, relies on Monte Carlo methods. A Monte Carlo algorithm estimates a quantity probabilistically, using numbers drawn at random within a defined range. A classic example is estimating pi. Draw a quarter circle inside a unit square and scatter about 20,000 random points across the square. The fraction of points that land inside the quarter circle approximates pi/4, so multiplying it by four yields an estimate of pi; with about 20,000 points, a single run typically lands close to the true value, e.g. 3.14756. We can extend this act of scattering points to the act of placing stones on a Go board. The board is 19 by 19, and the number of possible game sequences is on the order of 361 factorial (361!). That is more than the number of atoms in the observable universe, and it is impossible for even the most powerful computer to evaluate them all. AlphaGo therefore uses Monte Carlo tree search to limit its samples to the most promising candidate moves.
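As a concrete illustration, here is a minimal Python sketch of the quarter-circle experiment just described. The point count of 20,000 follows the example above; each run gives a slightly different estimate.

```python
import random

def estimate_pi(n_points: int = 20_000) -> float:
    """Estimate pi by sampling random points in the unit square.

    A point (x, y) falls inside the quarter circle when x^2 + y^2 <= 1.
    The fraction of such points approximates the ratio of the areas,
    pi/4, so multiplying by four recovers an estimate of pi.
    """
    inside = 0
    for _ in range(n_points):
        x, y = random.random(), random.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4.0 * inside / n_points

if __name__ == "__main__":
    print(estimate_pi())  # typically prints something near 3.14
```

Sampling stones on a Go board is the same idea in spirit: instead of sampling points in a square, the algorithm samples candidate continuations of the game.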
Is this limiting process deductive? Answering that question will tell us whether AlphaGo can think like a human. The way it decides among candidate moves is strictly probabilistic. When an opponent makes a move, AlphaGo searches its database of positions for those that most closely resemble the current situation and, drawing on hundreds of thousands of recorded games, selects the next move with the highest estimated probability of winning. AlphaGo does not deliberate about whether it will win or lose when it plays a move; it simply chooses the option its statistics favor. As a result, AlphaGo's moves can look very different from the patterns played by traditional Go players. For example, in the second game between AlphaGo and Lee Sedol, AlphaGo played Black's 37th move, an unusual move that had essentially never been played before. People read it as a perfect choice planned many moves ahead, but in fact AlphaGo was not reading out the game; it was simply placing its stone where it had the best chance of winning in the current position. In this sense, AlphaGo plays Go in a completely inductive way.
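The selection step can be caricatured in a few lines. The sketch below is a deliberate simplification, not AlphaGo's actual pipeline (which combines policy and value networks with Monte Carlo tree search), and the moves and counts are hypothetical. It shows only the key point of the paragraph above: the final choice is an argmax over estimated win rates, with no plan about future positions.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    move: str        # board coordinate, e.g. "Q16" (hypothetical)
    wins: int        # simulated playouts won after this move
    playouts: int    # total simulated playouts for this move

    @property
    def win_rate(self) -> float:
        """Estimated probability of winning if this move is played."""
        return self.wins / self.playouts if self.playouts else 0.0

def pick_move(candidates: list[Candidate]) -> Candidate:
    """Return the candidate with the highest estimated win rate."""
    return max(candidates, key=lambda c: c.win_rate)

# Three made-up candidates after sampling; the choice is pure argmax.
board = [
    Candidate("Q16", wins=5400, playouts=10000),
    Candidate("D4",  wins=5100, playouts=10000),
    Candidate("O10", wins=5650, playouts=10000),
]
print(pick_move(board).move)  # -> "O10", the highest win-rate move
```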
But how do human Go players play the game? They memorize existing positions as they study, but it is impossible to memorize more than a sliver of a space as large as 361!. Humans can hold only a limited number of sequences in memory. So how did Lee Sedol, whose memory is far more limited than AlphaGo's, manage to beat it? While AlphaGo works within a fixed algorithm to find the most efficient winning moves, human players rely on intuition and experience to find the best move several moves ahead. In fact, top professionals often make their best moves by reading dozens of moves in advance.
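To put the size of that search space in perspective, a quick computation with Python's standard library shows how many decimal digits 361! has, compared with the roughly 80-digit figure usually quoted for the number of atoms in the observable universe:

```python
import math

# ln(361!) equals lgamma(362); dividing by ln(10) gives the
# approximate number of decimal digits of 361!.
digits = math.lgamma(362) / math.log(10)
print(f"361! has about {digits:.0f} decimal digits")  # about 768

# The number of atoms in the observable universe is commonly
# estimated at around 10**80, an 81-digit number. 361! dwarfs it.
```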
Looking ahead through inductive thinking has its limits. In a game as irregular and variable as Go, deductive thinking is inevitable. Because it does not depend on an algorithm, it can stay flexible. As the fourth game between AlphaGo and Lee Sedol showed, once AlphaGo makes a mistake it can become irreversibly bogged down: because it acts probabilistically, a single error can drastically shift the win probabilities of every subsequent move. In contrast, flexible human deduction is a powerful force in exactly these situations. It was the power of deductive thinking that allowed Lee Sedol to defeat AlphaGo in that fourth game.
Will we see an AI capable of deductive thinking in the future? In my opinion, it is unlikely any time soon. Human thinking is never simple: at every moment, different parts of the brain interact as we reason and empathize. Only when we fully understand all the mechanisms and structures of the brain will it become possible to implement deductive thinking in AI on a par with humans. But this is more than a technical challenge; it requires philosophical and ethical consideration as well. If AI can think and feel like humans, how will we accept it? This is one of the new problems that humanity will face in the future.

 
