This article discusses whether AI can think like humans. It uses the Turing Test and the Chinese Room thought experiment to illustrate the difference between simple information processing and true thinking, and asks whether AI can evolve through the dialectical process of thesis, antithesis, and synthesis.
A conversation scene from the films featuring JARVIS, an AI assistant
“Hello, sir.”
This is how JARVIS, Tony Stark’s artificial intelligence assistant, greets his master in the globally popular superhero films Iron Man and The Avengers. In the films, Tony Stark banters with JARVIS (“A little ostentatious, don’t you think?”), and JARVIS tells an enemy who tries to manipulate it, “I believe your intentions to be hostile.” In both series, the AI JARVIS is portrayed as if it can “think” and communicate with humans on an equal footing.
About human and AI “thinking”
Long ago, Descartes said, “I think, therefore I am.” For most people, the answer to the question “Can humans think?” is “Yes, humans can think,” and you’d be hard-pressed to find anyone who disagrees. As a human, I can ask myself the question “Can I think?” only because I am already thinking in the first place.
So, can AI think? Even if we reserve judgment on the fictional JARVIS, since we don’t know the level of technology behind it or the limits of its abilities, it remains an open question whether today’s AI is capable of “thinking” in this sense. What characteristics distinguish “thinking” from superficially similar behaviors, and how do humans and AI differ in this regard?
To better understand these questions, I’d like to introduce two experiments on AI and thinking: the Turing Test and the Chinese Room. The Turing Test, proposed by Alan Turing in 1950, tests how similar a computer’s responses are to human responses, based on the belief that “if a computer’s responses to arbitrary input are indistinguishable from a human’s (specifically, if the computer fools the interrogator in 30% of all trials), then the computer is intelligent and thinking.”
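To make the test’s structure concrete, here is a minimal sketch of the imitation game’s scoring loop in Python. Everything in it is illustrative: the human_reply and machine_reply functions are placeholder stand-ins, and a real test would involve a live interrogator holding free-form conversations.

```python
import random

# Minimal sketch of the imitation game's scoring loop (illustrative only;
# the responders are placeholders, and a real test needs a live human
# interrogator holding free-form conversations).

def human_reply(question: str) -> str:
    return "I'd have to think about that."   # stand-in for the human

def machine_reply(question: str) -> str:
    return "I'd have to think about that."   # stand-in for the machine

def one_trial(question: str) -> bool:
    """One trial: the interrogator sees two unlabeled answers and guesses
    which one came from the machine. Returns True if the machine fooled them."""
    answers = [("human", human_reply(question)),
               ("machine", machine_reply(question))]
    random.shuffle(answers)
    guess = random.choice([0, 1])            # a naive, guessing interrogator
    return answers[guess][0] != "machine"    # fooled = failed to spot the machine

trials = 1000
fooled = sum(one_trial("What is thinking?") for _ in range(trials))
print(f"Machine fooled the interrogator in {fooled / trials:.0%} of trials "
      f"(Turing's bar: ~30%)")
```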
When I first heard about Turing’s belief behind this experiment, I had one question: if a computer (or artificial intelligence) has a large, high-quality database and simply compares and contrasts information according to an algorithm and submits an answer, without understanding the input, should that count as “thinking”? The Chinese Room is a thought experiment designed by John Searle, who had the same question, to refute Turing’s position.
Here’s how the thought experiment works. First, you place a person who speaks no Chinese but can recognize the shapes of Chinese characters in a room with two slots, one for receiving questions and one for returning answers, and give them a list of pre-made Chinese questions and answers. An observer outside the room, who does not know that the person inside cannot speak Chinese, watches the person respond to the Chinese questions.
To the observer outside the room, it appears as if the person in the room understands all the Chinese questions and responds appropriately. However, since the person in the room is merely matching questions against a list, not understanding the Chinese questions and engaging in a thought process to come up with answers, John Searle concluded that the Turing Test cannot determine whether an AI thinks with real intelligence.
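The setup is easy to make concrete in code. Below is a minimal sketch of the person in the room as a lookup table; the Chinese question-answer pairs are invented for illustration. The responder matches questions purely by shape and understands nothing, yet looks fluent from outside.

```python
# Minimal sketch of the Chinese Room as a lookup table (illustrative only;
# the question-answer pairs are invented). The person matches each question
# purely by the shapes of its characters and understands none of them.

RULE_BOOK = {
    "你叫什么名字？": "我没有名字。",   # "What is your name?" -> "I have no name."
    "你会说中文吗？": "当然会。",       # "Do you speak Chinese?" -> "Of course."
}

def person_in_room(question: str) -> str:
    """Return the pre-made answer for a question matched by shape alone."""
    return RULE_BOOK.get(question, "请再说一遍。")  # "Please say that again."

# To the observer outside, the responses look perfectly fluent:
print(person_in_room("你会说中文吗？"))  # -> 当然会。
```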
What it means to “think” – centered on “thesis-antithesis-synthesis”
Thinking is also a form of information processing, in the sense that it involves searching a database of one’s own experience and learning to generate a response to an input, just as a calculator or search engine does, or as the person in the Chinese room did. The question of AI and thinking therefore boils down to what distinguishes ‘mere information processing’ from ‘thinking’. What elements must be present, and what must be possible, for something to be called ‘thinking’?
Of course, it’s not easy to come up with a rigorous criterion; many philosophers and engineers have failed to find one that satisfies everyone. In this article, however, I would like to propose one criterion that “information processing” must meet in order to transcend itself and enter the stage of “thinking.” It is the following short question.
Can it perform the process of ‘thesis-antithesis-synthesis’?
Thesis, antithesis, and synthesis are the three stages of the Hegelian dialectic. A thesis is simply a proposition (or assertion) that exists alongside an opposing antithesis. The antithesis is another proposition that opposes, or contradicts, the preceding thesis. When a thesis and an antithesis meet, precisely because they are contradictory, they undergo a “productive logical process” in which they collide and connect over a much longer period than when two similar or unrelated propositions meet, producing a multitude of subordinate propositions and secondary knowledge, which are then integrated into a larger propositional “synthesis” that is deeper than the original thesis and antithesis.

The resulting synthesis is qualitatively more advanced than the original thesis and antithesis, and it applies to every situation covered by the subordinate thesis and antithesis. It does not end there: the synthesis becomes a new thesis and again faces a contradictory antithesis, and since the process aims at an absolute truth applicable to many situations, the dialectic is a logical process with a singular completeness, one that leads toward the absolute.
In other words, proposing “thesis-antithesis-synthesis” as a criterion for “thinking” means the following. When a thinking being holds a proposition (here, a piece of information) and an opposing proposition (another piece of information) arrives, it should perform synthesis on the existing proposition and the new one, not simply store them in a database, list them, and compare them. Moreover, it should be able to build a qualitatively developed database by incorporating the newly created synthesis into its own database through this process, and it should search that database whenever an input arrives and produce a result.
To me, “thinking” is not keeping a huge database of all incoming propositions (information), searching through it for each input, computing, comparing and contrasting, choosing the better candidate, and submitting an output (this is how a PC stores everything in memory and selects an output). Thinking is when the elements of the database being searched collide, connect, and integrate, improving the quality of the database itself and thereby producing efficient output. Furthermore, thinking can only be said to occur when the “evolution of the output” comes from the evolution of the database itself, not from the evolution of how outputs are “selected” and “submitted” (e.g., the raw speed of a computer’s simple operations).
Here is a schematic explanation of why I believe synthesis can improve the quality of a database. Suppose we have three mutually contradictory pieces of information A, B, and C as candidate outputs for an input P. If a computation on some input Q similar to P selects A from among A and B, there is no guarantee that A remains the appropriate output once C is added, or when P and Q are similar but not identical. The computation must therefore be redone from scratch for every similar input, not just P and Q. If, however, the process of synthesis unifies A and B into D, and then C and D into E, then E is the appropriate output for P, Q, and all similar inputs. The quality of the database itself can thus be improved through the process of synthesis.
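As a minimal sketch of this schematic, the snippet below folds the contradictory candidates into successive syntheses; the synthesize function is a trivial stand-in I invented for illustration, not a real dialectical merge.

```python
# Minimal sketch of the A/B/C -> D -> E consolidation described above
# (`synthesize` is a trivial stand-in for a real dialectical merge).

def synthesize(thesis: str, antithesis: str) -> str:
    """Merge two contradictory propositions into a deeper one. A real
    system would derive a genuinely new proposition; here we just record
    the union so the shape of the process stays visible."""
    return f"({thesis} + {antithesis})"

# Contradictory candidate outputs for input P (and similar inputs like Q):
A, B, C = "candidate A", "candidate B", "candidate C"

D = synthesize(A, B)   # first synthesis: A and B collide and merge into D
E = synthesize(C, D)   # D becomes a new thesis and merges with C into E

# Instead of re-running a comparison over {A, B, C} for every similar
# input, the database now holds one consolidated entry covering them all:
database = {"P-like inputs": E}
print(database["P-like inputs"])   # -> (candidate C + (candidate A + candidate B))
```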
Does AI ‘think’?
What happens if we apply the criterion I proposed, the ability to perform thesis-antithesis-synthesis, to the question of AI thinking that we have been wondering about? Taking the current state of artificial intelligence as our examples, say Siri on the iPhone and AlphaGo, which beat Lee Sedol, it is clear that Siri, which answers a limited range of questions in set patterns, cannot perform the process of synthesis.
AlphaGo, which defied the odds in March 2016 to defeat Lee Sedol at Go, a game with an almost unimaginably large number of possible positions, is often cited as the epitome of “deep learning” technology. However, the core of deep learning is not “advancement” through the logical process of thesis-antithesis-synthesis, but rather “categorization” of vast amounts of data. AlphaGo learned by clustering data on the strength of enormous computing power, made predictions through classification, and found optimal outputs by evaluating the moves that can be played on a 19×19 Go board in each situation, which was enough to win against Lee Sedol, but it did not progress toward anything absolute in itself.
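To illustrate the distinction, here is a toy 1-nearest-neighbor classifier. It is not AlphaGo’s actual method (AlphaGo combines neural networks with tree search), but it shows what pure “categorization” looks like: the database of examples only grows and is compared against; it is never consolidated into qualitatively new propositions the way A, B, and C were merged into E above.

```python
# Toy 1-nearest-neighbor classifier (not AlphaGo's actual method, which
# combines neural networks with tree search). It illustrates pure
# "categorization": stored examples are compared against, never merged
# into qualitatively new propositions.

from math import dist

training_data = [
    ((0.0, 0.0), "bad move"),
    ((1.0, 1.0), "good move"),
    ((0.9, 0.8), "good move"),
]

def classify(features):
    """Return the label of the single closest stored example."""
    _, label = min(training_data, key=lambda item: dist(item[0], features))
    return label

print(classify((0.95, 0.9)))   # -> good move

# "Learning" here means appending more examples; the database only grows.
# It is never consolidated the way A, B, and C were merged into E above.
training_data.append(((0.1, 0.2), "bad move"))
```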
In other words, I don’t think it is fair to say that AI is thinking at this point. For now, AI merely lists, contrasts, and compares data on the strength of its superior computational capabilities to produce human-like answers that imitate a human. For an AI that does not develop through the logical process of thesis-antithesis-synthesis, attributing conscious thinking to it is unreasonable.
Conclusion
Science and technology are advancing even as you read this article, and thanks to that, the outputs AI produces for its inputs are becoming more and more human-like. Indeed, one day we may be able to discuss Goethe and Nietzsche with an AI. But even if an AI can talk about Goethe and Nietzsche, “thinking” must include the possibility of self-evolution (or, more narrowly, the evolution of its database) through the process of thesis-antithesis-synthesis.