AI performs well within the rules, but can it be said to think and reason beyond the rules the way humans do?


This article discusses AI’s performance in chess, Go, and other games, and argues that it may be an exaggeration to say that AI has surpassed human reason. It concludes that AI’s performance is nothing more than computation and algorithmic search within a given set of rules, and is fundamentally different from complex human thought.

Whether it’s Deep Blue conquering chess, AlphaGo defeating Lee Sedol at Go, or Appel and Haken’s computer-assisted proof of the four-color theorem, AI continues to amaze us with its seemingly endless possibilities. These advances are making what we once thought possible only in science fiction a reality. As AI solves increasingly complex and sophisticated problems, it is not only challenging human intellect but also changing the structure of our society and daily life. One by one, AI seems to be conquering fields once considered the domain of “reason”: the ability to solve problems using mathematical and logical tools. Some say that AI is already capable of reasoning beyond humans and could pose a threat to us in the future. This concern is a common theme in film and literature, which warn of AI replacing or dehumanizing us.
In his book The Most Human Human, Brian Christian argues that AI is encroaching on the realm of human reason with its superior processing power, and that we must now turn to the realm of emotion. He believes that emotions, thought to be uniquely human, will be the last thing differentiating us from machines, unless AI learns to mimic them too. But this reaction begs the question: can we assume that AI has invaded human reason just because it wins at chess and handles complex mathematical calculations with enormous computing power? Moreover, human neurons have a very complex structure that differs fundamentally from a computer’s, so it would be difficult to fully reproduce them in a perceptron-like architecture. If AI’s way of processing is so different from the human brain’s, can we say the two share the same ‘reason’?
First, let’s consider chess, which Brian Christian called a distinctly “human” game. It is true that computers have beaten humans at chess, albeit partly thanks to human mistakes, but these wins are simply conclusions reached by using fast computation and statistical processing to grind through an enormous number of cases, not by thinking. When a grandmaster played a classic line that is widely known in chess circles against Deep Blue, the computer answered with the best-known response to that line and sacrificed its own knight to win the game. Commentators joked that the computer had simply played the line without knowing what it meant, and that once the sequence was over it would only ask, “Who took my knight?” This raises the question of whether computers have what we call “reason” at all.
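Deep Blue’s actual search was far more elaborate, but the kind of “fast computation over a large number of cases” described above can be sketched as a plain minimax search. Here is a minimal Python sketch; to stay self-contained it searches a toy Nim game rather than chess (whose game tree is astronomically larger but is searched on the same principle), and every name in it is illustrative, not Deep Blue’s actual code.

```python
from functools import lru_cache

# Toy stand-in for chess: Nim with one pile. Each turn a player takes
# 1-3 sticks, and whoever takes the last stick wins. Chess's tree is
# astronomically larger, but the principle -- enumerate the cases,
# score them, pick the best -- is the same.

def legal_moves(sticks):
    return [n for n in (1, 2, 3) if n <= sticks]

@lru_cache(maxsize=None)
def minimax(sticks, maximizing):
    """Exhaustively score every continuation. Nothing here 'understands'
    a move; the search only compares the numbers that come back."""
    if sticks == 0:
        # The player who just moved took the last stick and won.
        return -1 if maximizing else 1
    scores = [minimax(sticks - n, not maximizing) for n in legal_moves(sticks)]
    return max(scores) if maximizing else min(scores)

def best_move(sticks):
    """Pick the move whose subtree scores highest: pure case enumeration."""
    return max(legal_moves(sticks), key=lambda n: minimax(sticks - n, False))

print(best_move(21))  # prints 1: leave a multiple of 4 for the opponent
```

Nothing in this loop resembles deliberation: the program plays the winning move for exactly the same reason Deep Blue sacrificed its knight, because the numbers said so.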
In chess, the number of plausible openings and moves is relatively small, small enough that Brian Christian could speak of chess’s “book” patterns, so AI could perform well on “computational power” rather than “reason.” But what about Go, whose space of possible positions is vastly larger than chess’s? Experts predicted that it would take more than a decade for AI to beat humans at Go, because the number of possibilities is far too large to calculate exhaustively, putting a limit on what superior computational power alone can achieve. This is why AlphaGo’s victory came as a shock to so many people. So, should we recognize the AI as rational because it beat a human?
According to Google, AlphaGo uses an algorithm called Monte Carlo tree search (MCTS), which reduces the number of cases to be calculated by building a database called a search tree through practice simulations, rather than trying to evaluate every possibility. A large search tree is built up over many simulations, and then the most valuable branches of that tree are selected for the real-game situation. This works because, although an enormous number of moves do not violate the rules of Go, the number that people find meaningful and actually play is relatively limited. That AlphaGo used MCTS to prune the space of moves is suggested by the fact that it made errors when Lee Sedol played moves people rarely use, that is, moves poorly represented in its search tree. MCTS thus reduces the amount of computation by means of the search tree, but it remains a computational algorithm not so different from a chess engine in that it enumerates a large number of moves within a given framework. Of course, the strategy of cutting down the computation is an enormous improvement over classical chess algorithms, but it still does not depart far from the methodology of enumerating the cases that fit the rules. And this methodology has a fatal limitation: it can perform brilliantly within a given set of rules, but it cannot operate outside them. Even AlphaGo, the most advanced AI of its day, relies on this methodology, which confines its behavior to the rules, a frog in a well of rules.
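To make the loop concrete, here is a minimal sketch of generic Monte Carlo tree search in Python. This is not AlphaGo’s implementation (which couples MCTS with deep neural networks and self-play); to keep the example runnable it searches the same toy Nim game as above instead of Go, and all names are illustrative.

```python
import math
import random

def legal_moves(sticks):
    """Toy Nim game standing in for Go: take 1-3 sticks, last taker wins."""
    return [n for n in (1, 2, 3) if n <= sticks]

class Node:
    """One position in the search tree grown through practice simulations."""
    def __init__(self, sticks, to_move, parent=None):
        self.sticks, self.to_move = sticks, to_move
        self.parent = parent
        self.children = []
        self.visits = 0
        self.wins = 0  # wins for the player who moved INTO this node

    def ucb1(self, c=1.4):
        """Balance exploiting strong branches with exploring rare ones."""
        if self.visits == 0:
            return float("inf")
        return (self.wins / self.visits
                + c * math.sqrt(math.log(self.parent.visits) / self.visits))

def rollout(sticks, to_move):
    """Play random legal moves to the end; return the winner."""
    while sticks > 0:
        sticks -= random.choice(legal_moves(sticks))
        to_move = 1 - to_move
    return 1 - to_move  # the player who took the last stick won

def mcts(sticks, to_move, iterations=2000):
    root = Node(sticks, to_move)
    for _ in range(iterations):
        # 1. Selection: descend along the highest-UCB1 children.
        node = root
        while node.children:
            node = max(node.children, key=Node.ucb1)
        # 2. Expansion: add the positions one move deeper into the tree.
        if node.sticks > 0:
            node.children = [Node(node.sticks - n, 1 - node.to_move, node)
                             for n in legal_moves(node.sticks)]
            node = random.choice(node.children)
        # 3. Simulation: a fast random playout estimates the node's value.
        winner = rollout(node.sticks, node.to_move)
        # 4. Backpropagation: update statistics along the path to the root.
        while node.parent is not None:
            node.visits += 1
            if winner != node.to_move:  # the mover into `node` won
                node.wins += 1
            node = node.parent
        root.visits += 1
    # The most-visited branch is the most valuable one found.
    best = max(root.children, key=lambda n: n.visits)
    return sticks - best.sticks  # number of sticks to take

print(mcts(sticks=21, to_move=0))  # usually prints 1 (leave a multiple of 4)
```

Note what the sketch never does: it never steps outside `legal_moves`. Every branch it grows, and every statistic it gathers, lives strictly inside the given rules.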
Since its successes in chess and Go, AI has been applied to a variety of games and continues to advance. For example, in 2048, a game that demands a good deal of thought, AI now outperforms humans. Developed in 2014, before the AlphaGo versus Lee Sedol match, 2048 lets the player slide the tiles in one of four directions each turn; when two tiles with the same number collide, they merge into a single tile with double the value, and the goal is to create a 2048 tile. One of the algorithms that has performed well at 2048 is the genetic algorithm. Genetic algorithms mimic the evolution of living organisms: many candidate algorithms are gradually mutated and made to compete, those with higher scores survive, and the rest are eliminated, as in the sketch below. The AI initially moves at random among the four possible directions, but it later becomes more skillful, for instance adopting the high-scoring strategy of “backing the large tiles into a corner.” However, this is just randomized algorithms competing with one another until the best one wins, not an AI playing the game with a mind. Where “backing the large tiles into a corner” comes from experience and intuition in humans, the AI is simply using the optimal algorithm that happened to be found by chance across a huge number of trials. Although genetic algorithms are strategic in that they explore a limited problem space, they remain, like the chess and Go algorithms, a methodology that enumerates cases within the given rules.
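A minimal sketch of this survive-mutate-repeat loop in Python follows. The `fitness` function here is a toy stand-in (a real evaluator would play 2048 games using the candidate’s heuristic weights and average the scores); the population size, mutation rate, and other names are illustrative assumptions, not the actual 2048 AI.

```python
import random

GENOME_LEN = 4     # e.g., weights for four hypothetical board heuristics
POP_SIZE = 50
GENERATIONS = 100
MUTATION_RATE = 0.1

def fitness(genome):
    """Toy stand-in for 'average score over several 2048 games'.
    A real evaluator would play the game with these weights; an
    arbitrary target keeps the sketch self-contained and runnable."""
    target = [1.0, 0.0, 0.5, 0.8]
    return -sum((g - t) ** 2 for g, t in zip(genome, target))

def mutate(genome):
    """Randomly perturb genes, mirroring the gradual mutation in the text."""
    return [g + random.gauss(0, 0.2) if random.random() < MUTATION_RATE else g
            for g in genome]

def crossover(a, b):
    """Mix two surviving strategies at a random cut point."""
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

# Start from random strategies, just as the AI first moves at random.
population = [[random.uniform(-1, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    # Higher-scoring strategies survive; the rest are eliminated.
    population.sort(key=fitness, reverse=True)
    survivors = population[: POP_SIZE // 2]
    # Refill the population with mutated offspring of the survivors.
    offspring = [mutate(crossover(random.choice(survivors),
                                  random.choice(survivors)))
                 for _ in range(POP_SIZE - len(survivors))]
    population = survivors + offspring

best = max(population, key=fitness)
print("best strategy weights:", [round(w, 2) for w in best])
```

No individual in this population ever “decides” anything; a good strategy is simply whatever set of weights happened to survive the cull.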
When you watch an AI play a game, it appears rational like a human, but in reality it is just statistically working out how to maximize its score by enumerating the cases that fit the rules within a limited framework. In other words, it excels at the methodology of “trying every possible approach within the framework of the rules,” but it cannot perform outside the rules’ scope. It may therefore be a stretch to say that AI has caught up with human reason.