Can AI think like humans, or is it just mimicking intelligence?


AI is usually defined as a system that mimics human intelligence, but intelligence and thought are two different things. Drawing on Alex Wissner-Gross’s formula for intelligence and on TED talks about creativity and thought, this post explores the limits of AI and asks whether it will ever be able to think like humans.

 

What is AI?

It’s easy to see that AI stands for artificial intelligence. We usually take it to mean something that mimics intelligent human behavior: AlphaGo, which played Go against Lee Sedol, or the systems built into driverless cars, are all AI in this sense. But I think we should redefine AI starting from the words themselves. AI is simply artificially created intelligence. By artificial, I mean “created” by humans, whether intentionally or not. Intelligence, however, is much harder to pin down; different scientists interpret it in different ways, and it is harder still for a layperson to define. So I will use Alex Wissner-Gross’s definition of intelligence.

 

Intelligence: an ability separate from thinking

Alex Wissner-Gross says that if he had to leave a single message for posterity to help them reconstruct or understand intelligence, it would be something like this: “Intelligence is a physical process that tries to maximize future freedom of action and avoid constraints on its own future.” He expresses it with the formula

F = T ∇ S_τ

In this formula for intelligence, F is a force, T is the strength with which the system acts, S is the diversity of possible accessible futures, and τ is the time horizon over which those futures are considered. Simple as it looks, this formula drives much of the behavior we encounter when dealing with intelligence. Dropped into a situation, a system following it can balance a pole without any instructions, play Pong on its own, grow its wealth in simulated stock trading, or keep its social network well connected. In other words, the formula drives intellectual behavior, social cooperation, and other things we think of as distinctly human.
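
To make the idea less abstract, here is a small toy sketch of my own in Python (not Wissner-Gross’s actual Entropica software, which is not public) of an agent acting in the spirit of F = T ∇ S_τ: at each step it samples a handful of random futures and moves in whichever direction keeps the greatest diversity of reachable states open. The grid size, horizon, and sample count are arbitrary choices for illustration.

```python
# Toy illustration of "maximize future freedom of action".
# An agent on a 1-D line bounded by walls at 0 and N picks, at every step,
# the move whose sampled futures reach the widest variety of end positions
# within a horizon of TAU steps.

import random
from collections import Counter
from math import log

N = 20          # positions 0..N, walls at both ends
TAU = 8         # time horizon (tau in the formula)
SAMPLES = 200   # random futures sampled per candidate move

def rollout(pos, steps):
    """Follow one random future for `steps` moves and return the end position."""
    for _ in range(steps):
        pos = min(N, max(0, pos + random.choice((-1, +1))))
    return pos

def future_entropy(pos):
    """Estimate the entropy (diversity) of reachable end states from `pos`."""
    counts = Counter(rollout(pos, TAU - 1) for _ in range(SAMPLES))
    probs = [c / SAMPLES for c in counts.values()]
    return -sum(p * log(p) for p in probs)

def entropic_move(pos):
    """Pick the move that maximizes the diversity of accessible futures."""
    scores = {step: future_entropy(min(N, max(0, pos + step)))
              for step in (-1, +1)}
    return max(scores, key=scores.get)

pos = 1                                   # start next to a wall
for _ in range(30):
    pos = min(N, max(0, pos + entropic_move(pos)))
print(pos)                                # the agent tends to drift toward the
                                          # middle, where the most futures stay open
```

With no goal programmed in, the agent ends up avoiding the walls simply because positions near a wall close off futures, which is the behavior the formula is meant to capture.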
However, having intelligence and thinking are two different things. As described above, intelligence merely aims to keep future options open and avoid future constraints. Thinking is a higher-level concept that encompasses this: it is purposeful and includes the desire to predict the future. When we see animals hunting with tools or in groups, we call them intelligent hunters, not thinking creatures. Likewise, people with intellectual disabilities, whose intellectual development is incomplete, often show great creativity in many areas; this suggests that intelligence is a means thinking can use, and that possessing intelligence is not the same as thinking. The moment an AI shows that it can think, then, the term AI should be changed, because it would already have gone beyond merely having intelligence.

 

Is there a way to prove thinking?

So far, humanity has been looking at only one side of the coin when developing AI. That visible side is the computed value the AI shows on the surface: a system that takes data A as input and returns data B as the correct answer to a question. An example makes this easier to see. In a TED talk by Ken Goldberg you can see a robot called the “Telegarden”, a system that lets anyone go online and direct a garden robot to water or sow seeds; it was installed in the lobby of a museum in Austria. But you might ask the people who control it remotely: is the robot real? A few photos on a website could convince people there is a robot there even if there isn’t one. It is Descartes’ problem of knowledge all over again, and AI can be viewed the same way: all we can verify from outside is that it is a system producing output data for input data, so whether it actually thinks remains an open question of epistemology.
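
To make the point concrete, here is a minimal sketch of my own (the function names are invented for the example): the two “systems” below are indistinguishable from their input/output behavior alone, even though one actually computes and the other only replays stored answers, much like photos standing in for a robot.

```python
# From the outside we only see that input A produces output B,
# which tells us nothing about what is behind it.

def computing_system(x: int) -> int:
    """Actually works the answer out."""
    return x * x

LOOKUP = {x: x * x for x in range(100)}

def replaying_system(x: int) -> int:
    """Only plays back a stored answer."""
    return LOOKUP[x]

for x in range(100):
    assert computing_system(x) == replaying_system(x)
print("Indistinguishable from input/output behavior alone")
```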
If so, can we not look at the other side of the coin? In a TED talk by Blaise Agüera y Arcas, which I watched a while back, he asks a question about creativity using the equation

Y = W(*)X

Here W is the complex network of neuronal connections in the brain, X is the data our five senses take in, and (*) is the way those connections interact when X arrives. Y, finally, is the data we ultimately perceive and produce in response to X. According to the talk, the map of neurons W can be approximated from examples of X, Y, and the operation (*); once we have it, feeding in X yields Y as output. This gives us a glimpse of creativity, of thinking. But it also makes us ask whether that Y is complete. In the talk, putting a dog into X produced a picture of a dog at Y. If we asked a human to draw a dog, the picture might not be as detailed or convincing as the one in the talk; but if we asked the network to draw a dog differently, I doubt it could. In other words, its output feels like nothing more than a recombination of big data. But what if humanity could interpret W, the neural network, perfectly? Then, through X, (*), and W, we might derive Y the way humans do: instead of leaning on big data alone, we could develop our own W and express Y in our own way. Then we would be able to turn over the other side of the coin, which is creativity and thought.
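
As a rough illustration of the Y = W(*)X picture and of what “turning the coin over” might mean, here is a small numpy sketch of my own. A tiny random W stands in for a trained network (the real, brain-like W is exactly the part we cannot yet write down): it first runs the network forwards, then holds W fixed and nudges X by gradient ascent so that a chosen output grows, the same general trick behind DeepDream-style “imagined” images discussed in the talk.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 8))          # the "connectome": 8 inputs -> 3 outputs

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    """Perception: Y = W (*) X, with (*) as a weighted sum plus a nonlinearity."""
    return sigmoid(W @ x)

# Forwards: given an input X, read off the output Y.
x = rng.normal(size=8)
print("Y before:", forward(x))

# "Running it backwards": keep W fixed, adjust X so output unit 0 fires strongly.
target = 0
for _ in range(200):
    y = forward(x)
    grad_x = y[target] * (1.0 - y[target]) * W[target]   # d y[target] / d x
    x += 0.1 * grad_x                                     # gradient ascent on X
print("Y after: ", forward(x))                            # unit 0 grows toward 1
```

The second half is the interesting part: the same W that maps inputs to outputs can be used to shape an input that the network “expects”, which is the closest thing this toy setup has to the glimpse of creativity the talk describes.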
So when will we fully understand the nervous system, advance neuroscience far enough, and be able to interpret a whole set of neurons? Edsger Dijkstra once remarked, “The question of whether machines can think is about as relevant as the question of whether submarines can swim.” Humans built and sailed ships for thousands of years before we learned to build submarines and explore the interior of the ocean, once an unknown world. AI today is still at the stage of building and sailing ships, so I have no doubt that in the future we will build machines that can venture into the uncharted territory of thought, and think.

 

About the author

Blogger

Hello! Welcome to Polyglottist. This blog is for anyone who loves Korean culture, whether it's K-pop, Korean movies, dramas, travel, or anything else. Let's explore and enjoy Korean culture together!
