Are mental states and physical mechanisms the same, and what limits do functionalism and the “Chinese Room” thought experiment pose for AI?


This article explores the relationship between mental states and physical mechanisms, discussing functionalism and its influence on AI research. It also critically examines the limits of strong AI and the nature of understanding through the Chinese Room thought experiment.


The identity theory, which emerged alongside the development of neuroscience, holds that mental states are identical to brain states. On this view, every mental experience and emotion just is a particular physical state of the brain; the two are one and the same. As debate around the theory grew, however, critics pointed out that the same mental state can be physically realized in very different ways. This objection led many philosophers to abandon the identity theory in favor of functionalism.
What does it mean to say that mental states can be physically realized in different ways? The idea is that the same mental experience or state can be produced by different physical mechanisms. To see this more clearly in the context of functionalism, consider the following scenario. Imagine an alien that feels pain exactly as we do. Instead of nerve cells, its body is filled with pipes, and the pressure of the water flowing through them opens some valves and closes others; this is how the alien feels pain. Or imagine a robot that feels pain exactly as we do, but does so through the activation of countless silicon chips and wires in its body. In both cases the alien or the robot is in the same mental state we are in, namely pain, yet the physical mechanism that realizes that pain is completely different from the human brain.
These thought experiments suggest that mental states are not tied to any particular physical state. If so, mental states are not identical to brain states, and the identity theory cannot be right. Functionalism, unlike the identity theory, places little importance on what mental states are made of. Instead, it defines mental states by their causal roles: a mental state is a process that produces certain outputs in response to certain inputs.
Take the mental state of pain, for example. When someone pinches us, we cry out "ouch" and wince. The pinch is the input, and the cry and the wince are the output. The physical mechanism that fills this causal role may be nerve cells in a human brain or silicon chips in a robot. Functionalism holds that these different physical mechanisms can realize the same mental state.
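To make this picture concrete, here is a minimal sketch in Python (purely an illustration; the class and method names are invented for this example). The same input-output role, responding to a pinch with a cry and a wince, is realized by three entirely different "physical" implementations, echoing the human, robot, and alien cases above.

```python
from abc import ABC, abstractmethod


class PainRole(ABC):
    """The causal role of pain: map a damaging input to an avoidance output."""

    @abstractmethod
    def respond_to(self, stimulus: str) -> str:
        ...


class HumanBrain(PainRole):
    # Realized by firing nerve cells.
    def respond_to(self, stimulus: str) -> str:
        return f"nerve cells fire -> 'Ouch!' and a wince at {stimulus}"


class RobotChips(PainRole):
    # Realized by activating silicon chips and wires.
    def respond_to(self, stimulus: str) -> str:
        return f"silicon circuits activate -> 'Ouch!' and a wince at {stimulus}"


class AlienHydraulics(PainRole):
    # Realized by water pressure opening and closing valves.
    def respond_to(self, stimulus: str) -> str:
        return f"valves open and close -> 'Ouch!' and a wince at {stimulus}"


# All three realize the same functional state on very different "hardware".
for subject in (HumanBrain(), RobotChips(), AlienHydraulics()):
    print(subject.respond_to("a pinch"))
```

On the functionalist view, what makes each of these the state of pain is the shared role (the interface), not the material that implements it.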
In addition to offering a new answer to the mind-body problem in philosophy, functionalism has had a profound impact on artificial intelligence research and the development of cognitive science. The ultimate goal of those working on artificial intelligence is not to build robots with the same brain structure as humans. Rather, it is to build robots that think and feel as humans do, even if they are made of different materials and structures. By recognizing that mental states can be realized in many ways, functionalism has broadened our understanding of the mind.
Another philosophical discussion relevant to AI is John Searle's "Chinese Room" thought experiment. Imagine a native English speaker who knows no Chinese locked inside a "Chinese room." The room contains a box of Chinese characters and a rulebook, written in English, that explains how to answer questions posed in Chinese. The rules tell the person how to arrange the characters from the box into well-formed sentences that answer the questions. When a question in Chinese is passed into the room, the person follows the rulebook and passes back an answer in Chinese.
But can we say that this person understands Chinese? Searle answers with a resounding "no." The person in the Chinese room is merely carrying out a computational procedure, following rules to produce an answer; at no point does he actually understand Chinese. Searle uses the thought experiment to raise doubts about strong AI, that is, AI with the ability to understand and think as a human does.
Searle argues that the way a computer processes information is just what happens in the Chinese room: the rulebook corresponds to the program, and the box full of Chinese characters to the database. The computer simply manipulates data as 1s and 0s without understanding the meaning behind them.
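As a rough sketch of this point (the function name and the sample rulebook entries below are invented for illustration), the Chinese room behaves like a program that only matches symbols against stored rules and returns other symbols; nothing in it represents what the characters mean.

```python
# A toy "Chinese room": the rulebook is the program, the box of characters
# the database. The entries here are invented purely for illustration.
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",          # "How are you?" -> "I am fine, thanks."
    "今天天气怎么样？": "今天天气很好。",   # "How is the weather today?" -> "The weather is nice today."
}


def chinese_room(question: str) -> str:
    # Purely syntactic: look the symbols up and hand back other symbols.
    # No step in this function grasps what any character means.
    return RULEBOOK.get(question, "对不起，我不明白。")  # fallback: "Sorry, I don't understand."


print(chinese_room("你好吗？"))  # fluent output, zero understanding
```

The output can look perfectly fluent to an outside observer, which is exactly Searle's worry: behavior alone does not show that understanding is present.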
For example, in the context of an open window, the sentence "Isn't it drafty?" carries a meaning that cannot be recovered from sentence rules alone: it may really be a request to close the window. A computer that simply processes the sentence according to rules cannot grasp this contextual meaning.
For Searle, who distinguished weak AI from strong AI, the Chinese Room argument was a tool for criticizing strong AI. Where functionalists equate the mind with the execution of a program performing the right functions, Searle saw such execution as mere symbol manipulation that falls short of understanding. The Chinese Room argument marked a major turning point, broadening the philosophical perception of and debate about strong AI.

