Can AI have human-like intentionality and sentience, or is it limited to mere programming?


This article discusses the possibilities and limitations of AI, especially strong AI. Drawing on John Searle's "Chinese Room Argument", which holds that programming alone cannot produce human intentionality and sentience, it argues that we must study the physical basis of the mind: the brain and its neurons.

 

Artificial intelligence (AI) has been at the center of controversy for about 30 years. The Wachowski sisters depicted a dystopian, AI-dominated future in their film series The Matrix: in a world ruled by machines, the Matrix is a simulated reality built to farm humans as an energy source, and the story follows the battle between humans and machine programs inside that simulation. The movie I, Robot likewise deals with the problems that robots with artificial intelligence can cause.
Philosophically, artificial intelligence is intelligence exhibited by an artificial, engineered system rather than a natural one. When asked "What is artificial intelligence?", many people will say "a robot that thinks", a theme common in fiction and film. In I, Robot, Detective Del Spooner (played by Will Smith) investigates the apparent murder of his benefactor, Dr. Alfred Lanning, and the suspect turns out, surprisingly, to be a robot. That robot is a prime example of what people imagine artificial intelligence to be: a being that thinks and feels on its own.
Before going further, we need to define "strong AI". Strong AI is the thesis that a digital program made up of 0s and 1s is itself a mind, and that a mind can therefore be implemented through computer programming alone. In other words, it should be possible to create a robot with a mind just by programming a computer. Classical computationalists, and most engineers, believe that strong AI is possible. This is the kind of AI people usually have in mind.
Their argument is simple: the brain is hardware, a computer or circuit, and the mind is software, a program, so a suitably programmed computer can understand facts and respond to them. On this view, robots like the Sentinels from The Matrix are examples of strong artificial intelligence.
However, I disagree. I believe a thinking robot cannot be created by programming alone; in other words, programming by itself cannot produce a computer or robot with intentionality. Intentionality is the property of mental states of being directed at, or about, objects. For example, when we think that we want to eat an apple, our thought is directed at the apple. So why can't intentionality be implemented through programming alone?
To understand this issue, let's look at John Searle's "Chinese Room Argument". Imagine a person who doesn't know any Chinese, shut inside a room. The room contains a large book of Chinese writing, a slightly smaller one, a rulebook written in English, and slips of paper with Chinese sentences and instructions. The large book is background knowledge, the small book is a story, the rulebook is the grammar (the program), the Chinese sentences are questions, and the instructions tell her how to run the program. By following the instructions, the person in the room can produce appropriate answers to the Chinese questions. Even though she doesn't know any Chinese and answers purely according to the rules, people outside the room will believe that she understands Chinese.
But in reality, she doesn't understand Chinese at all. This is the crux of the Chinese Room Argument: just as the person in the room manipulates symbols without understanding them, a program manipulates symbols without having intentionality. If a program is to be a human mind, it must have intentionality, just as human thoughts do. But can a robot have intentionality?
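To make the thought experiment concrete, here is a minimal sketch in Python (the rulebook entries are invented placeholders): a program that answers Chinese questions by pure symbol lookup, with nothing anywhere in it that represents what the symbols mean.

```python
# A minimal sketch of the Chinese Room: the "rulebook" is a lookup table
# pairing question symbols with answer symbols. The program manipulates
# the symbols purely by shape; nothing in it represents their meaning.
# (The rules and sentences below are invented placeholders.)

RULEBOOK = {
    "你叫什么名字？": "我没有名字。",    # "What is your name?" -> "I have no name."
    "你喜欢苹果吗？": "我很喜欢苹果。",  # "Do you like apples?" -> "I like apples very much."
}

def chinese_room(question: str) -> str:
    """Return the answer the rulebook pairs with this question.

    The function never parses, translates, or understands the symbols;
    it only checks whether the input string matches a stored key.
    """
    return RULEBOOK.get(question, "对不起，我不明白。")  # "Sorry, I don't understand."

print(chinese_room("你喜欢苹果吗？"))  # Looks fluent from outside the room.
```

From outside the room, the replies look fluent; inside, there is only string matching.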
Let's take the example of pain. The robots in the movie Transformers are portrayed as feeling emotions, and as feeling pain when they are attacked. But a robot like Optimus Prime, which lacks sensory structures such as nerve cells, would not actually feel pain, let alone the distress or fear that comes with it. A robot therefore cannot have intentionality about pain, and the same holds for every other sensation. A robot could programmatically mimic sensory behavior, but it would not have intentionality, because it would have no qualia (subjective sensory experience) grounded in neurons.
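Behavioral mimicry of pain is in fact trivial to program, which is exactly the point. The hypothetical sketch below (not any real robotics API) maps a damage reading to scripted distress behavior; the input-output rule is the whole story, and no subjective experience appears anywhere in it.

```python
# A hypothetical sketch of programmed "pain" behavior: a damage reading
# above a threshold triggers a scripted distress response. The mapping
# from input to output is all there is; nothing here feels anything.

PAIN_THRESHOLD = 0.5  # invented value for illustration

def react_to_damage(damage_level: float) -> str:
    """Map a damage sensor reading to a scripted behavioral response."""
    if damage_level > PAIN_THRESHOLD:
        return "Ouch! Withdrawing from hazard."  # mimics distress, experiences nothing
    return "All systems nominal."

print(react_to_damage(0.9))
```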
This is where the problem with strong AI comes in. Strong AI treats the mind as software that can run independently of any particular body, which presupposes a dualistic premise: that the mind is separable from, and independent of, the body. But if, as argued above, intentionality cannot be implemented in a program, then strong AI is unlikely ever to be achieved.
One could object to my argument. For example, it could be argued that intentionality can be achieved through parallel computing: computers that learn using parallel computation could be used to build intentional AI. But this, too, can be refuted with the Chinese Room Argument. Imagine a Chinese room staffed by as many people as a parallel computer has processors. They can pass the Chinese rules back and forth among themselves and produce appropriate responses, but they still would not understand Chinese.
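Concretely, distributing the lookup across many workers changes the throughput of the symbol shuffling, not its nature. A minimal sketch, reusing the invented rulebook from above with Python's standard thread pool:

```python
# A sketch of the "parallel" Chinese Room: many workers share the
# symbol-matching job. Parallelism changes the speed, not the nature,
# of the computation; no worker, and no combination of workers,
# represents the meaning of the symbols.
from concurrent.futures import ThreadPoolExecutor

RULEBOOK = {
    "你叫什么名字？": "我没有名字。",
    "你喜欢苹果吗？": "我很喜欢苹果。",
}

def answer(question: str) -> str:
    # Each worker does the same blind lookup as the lone person in the room.
    return RULEBOOK.get(question, "对不起，我不明白。")

questions = ["你喜欢苹果吗？", "你叫什么名字？"]

with ThreadPoolExecutor(max_workers=4) as pool:
    for reply in pool.map(answer, questions):
        print(reply)
```

Whether one worker or a thousand performs the lookup, the computation is the same blind matching of symbol shapes.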
So, if programming with 0s and 1s is not enough to create a mind, how can we create real AI? I look for the answer in duplication: reproducing what the brain actually does rather than merely simulating it. Until now, AI research has focused on programming; we have been digitally mimicking the brain with 0s and 1s without understanding exactly how it works. But the brain's behavior is not driven by digital zeros and ones; it is driven by the frequency of electrical signals, complex networks, and chemical signaling among neurons.
In I of the Vortex: From Neurons to Self, Rodolfo Llinás argues that brain function, language, and emotion begin with the single cell. Neurons are intrinsically oscillatory, and it is through the variation, transformation, and differentiation of these oscillations that the human mind is formed. I share this view: the electrical and chemical reactions in the brain, combined with its physical and chemical properties, are what allow us to think, store memories, and feel sensations and emotions.
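To illustrate the kind of signal involved, here is a minimal sketch of a leaky integrate-and-fire neuron, a standard textbook simplification (the parameter values are invented for illustration). Its output is not a bit but a firing rate that varies continuously with its input, which is part of what "the frequency of electrical signals" means.

```python
# A minimal leaky integrate-and-fire neuron (textbook simplification,
# parameter values invented for illustration). The membrane potential
# leaks toward rest while integrating input current; when it crosses a
# threshold, the neuron spikes and resets. Its "output" is a firing
# rate, not a binary value.

def firing_rate(input_current: float, steps: int = 1000, dt: float = 1.0) -> float:
    """Simulate `steps` ms of constant input and return spikes per second."""
    v, v_rest, v_thresh, tau = 0.0, 0.0, 1.0, 20.0  # arbitrary units
    spikes = 0
    for _ in range(steps):
        v += dt * (-(v - v_rest) + input_current) / tau  # leak + integrate
        if v >= v_thresh:                                # threshold crossing
            spikes += 1
            v = v_rest                                   # reset after a spike
    return spikes / (steps * dt / 1000.0)

# Stronger input -> higher firing rate: a graded, analog code.
for current in (1.1, 1.5, 2.0, 3.0):
    print(f"I = {current:.1f} -> {firing_rate(current):.0f} Hz")
```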
Take the example of Alzheimer's disease (AD). AD causes dementia, a progressive loss of memory and cognition that begins when the neurons in the brain responsible for long-term memory die. As the neurons die, the person loses not only long-term memories and the ability to recognize family members; the associated qualia die along with the neurons. Without the brain, qualia cannot exist.
Therefore, to give robots intentionality we need qualia, and to get qualia we need to study the brain and its neurons, the physical substrate in which qualia arise. To realize true AI, we must identify precisely the conditions under which the brain works and study them comprehensively. Most AI research to date, however, has focused on computer programming built on binary computation. To realize human-like AI, we need to move beyond conventional digital programming and reproduce the properties of human brain cells, combining neuron-like electrical signaling with traditional programming. If such a method is developed, it may become possible to create an AI that thinks like a human.

 
