Artificial intelligence (AI) has overcome the limitations of early rule-based systems and now mimics human learning, taking new approaches to problems as diverse as language translation and object recognition. In this process, AI learns from data and produces efficient, accurate results without requiring humans to define rules. In the future, AI has the potential to make our daily lives more convenient and enriched.
It’s no exaggeration to say that the conveniences of modern civilization have been made possible by engineering, and many engineering techniques were in turn inspired by nature. The way airplanes use lift to fly mimics the way birds fly, and the way submarines use sound waves to explore the ocean floor resembles the way bats use ultrasound to avoid obstacles. Examples of this biomimicry can be found all around us: cleaning robots mimic the collective behavior of ants to clean spaces efficiently, and self-healing concrete was developed to mimic the healing process of human skin. In fact, the survival strategies of living things have converged over hundreds of millions of years of evolution toward the optimal, so borrowing ideas from them is a very effective engineering strategy. In the field of computing, there is an area that mimics one characteristic of life in particular: artificial intelligence, which mimics human intelligence.
In fact, the study of artificial intelligence in the traditional sense goes back to the birth of the computer. After ENIAC appeared in 1946, the rapid development of computing technology led computer scientists to envision a rosy future for artificial intelligence, predicting that within 10 to 20 years thinking machines would be helping humans. Contrary to their hopes, however, as scholars attempted to use computers to solve problems such as natural language processing (handling the languages humans use to communicate, such as Korean and English) and object recognition, they realized that it is nearly impossible to give computers even the basic independent thinking skills of a child. Faced with these limitations, the field of artificial intelligence stagnated without major advances until the 1990s, when it began to develop again, and it has recently shown a number of achievements. The iPhone’s Siri and Google’s language translation system, which we use every day, were made possible by these recent developments in AI.
So, what explains the resurgence of AI after its stagnation? To understand this, it helps to recognize the difference between traditional computing and the approach recent AI takes to problems. A computer is a physical implementation of an abstract machine called a Turing machine, invented in 1936 by the mathematician Alan Turing. The Turing machine was proposed as a machine that, given an input value, processes it step by step according to rules defined in advance by a human, and outputs the corresponding result. From the perspective of a computer as a physical Turing machine, then, artificial intelligence is simply the execution of algorithms: given a problem to solve, the computer does not go through any process of understanding what the problem is, but simply attempts to solve it by performing a stored algorithm verbatim. This approach is known as a rule-based system, and all early AI used it.
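The step-by-step, rule-following machine described above can be made concrete with a tiny simulator. This is only an illustrative sketch: the rule table, state names, and tape symbols below are made up for the example, not drawn from any historical machine.

```python
# Minimal sketch of a Turing machine: the machine blindly follows a
# human-written rule table, one step at a time, with no understanding
# of what the rules "mean". All names here are illustrative.

def run_turing_machine(rules, tape, state="start", head=0, max_steps=1000):
    """Execute transition rules until the 'halt' state is reached.

    rules maps (state, symbol) -> (new_state, write_symbol, move),
    where move is +1 (right) or -1 (left).
    """
    cells = dict(enumerate(tape))  # sparse tape: position -> symbol
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, "_")  # "_" marks a blank cell
        state, write, move = rules[(state, symbol)]
        cells[head] = write
        head += move
    # Read the tape back in order, dropping blank padding.
    return "".join(cells[i] for i in sorted(cells)).strip("_")

# Toy rule table: flip every bit until a blank cell is reached.
flip_bits = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
    ("start", "_"): ("halt", "_", 0),
}

print(run_turing_machine(flip_bits, "1011"))  # prints "0100"
```

Note that the machine never "understands" bits or flipping; the behavior exists only in the rule table a human wrote, which is exactly the point the paragraph makes about early AI.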
However, rule-based systems suffer from two crucial problems. The first is that the computer cannot handle new types of problems that are not covered by its algorithm, so each time a new type of problem is presented, the algorithm must be extended to handle it. This problem is not fatal, however, because the purpose of developing an AI system is to help solve certain predetermined problems (language translation, object recognition, and so on), not to solve all problems the way a human can. It is the second problem that is the major weakness of rule-based systems: even for a problem that is very simple for a human, turning it into a rule-based algorithm requires enumerating every rule relevant to the problem. For example, suppose we want an algorithm that lets a computer look at an object and tell whether it is an apple. First we would list the many features that define an apple (it is red, it is round, it has a stem, it has a specific flavor), and the algorithm would then check each of these features against the object the computer observed, a process that is very time-inefficient and not necessarily accurate.
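The apple example above can be sketched as code. This is a hypothetical illustration, assuming made-up feature names like `color` and `has_stem`; a real object-recognition system would need vastly more rules than this.

```python
# Sketch of the rule-based approach: every defining feature of an
# "apple" must be written down by hand, and the program checks each
# rule one by one. Feature names and values are hypothetical.

APPLE_RULES = [
    lambda obj: obj.get("color") in {"red", "green", "yellow"},
    lambda obj: obj.get("shape") == "round",
    lambda obj: obj.get("has_stem") is True,
    lambda obj: obj.get("taste") == "sweet-tart",
]

def is_apple(obj):
    """Return True only if the object satisfies every hand-written rule."""
    return all(rule(obj) for rule in APPLE_RULES)

apple = {"color": "red", "shape": "round", "has_stem": True, "taste": "sweet-tart"}
orange = {"color": "orange", "shape": "round", "has_stem": False, "taste": "citrus"}

print(is_apple(apple))   # prints True
print(is_apple(orange))  # prints False
```

The brittleness is easy to see: an unusual apple variety that fails any single hand-written rule is misclassified, and fixing that requires a human to add yet another rule, which is exactly the weakness described above.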
On the other hand, current AI systems take a novel approach that addresses the problems of rule-based systems: they mimic the way humans think and reason with their brains. Except in special situations that demand a logical flow, such as mathematics, people do not make everyday judgments by applying rules to everything. To understand how humans think, consider an example from Jeff Hawkins’ book “On Intelligence”. When we see a puppy and recognize it as a dog, our optic nerve first registers a certain pattern; that pattern fires particular brain cells that store an abstract concept of a dog, and we have the thought of a dog. The same brain cells also fire when we hear a dog bark or touch a dog, because the patterns we take in through hearing, smell, sight, and so on are all associated with those neurons. In other words, our thinking process works like this: we are exposed to many patterns from childhood and learn the concepts associated with them, and when we encounter a new pattern, we find the most similar pattern we have already learned and recognize the concept attached to it. This learning process covers not just visual patterns but also emotional responses, perceptions of social situations, and more; the way a child learns emotions by watching a parent’s facial expressions, for example, involves pattern recognition. AI systems built this way can solve problems in far less time than rule-based systems, and they have the advantage that no algorithm needs to be designed by hand: one only has to prepare data to train the computer. Their performance is also much better than that of rule-based systems.
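The "find the most similar pattern you have already learned" idea above can be sketched with one of the simplest learning methods, a nearest-neighbor classifier. This is a minimal illustration, not the method any particular AI product uses; the two-number feature vectors (standing in for, say, redness and roundness) are entirely made up.

```python
# Sketch of the learning-based approach: instead of hand-written rules,
# the program stores labeled example patterns and, for a new pattern,
# recalls the concept attached to the most similar stored pattern
# (a 1-nearest-neighbor classifier). Feature values are hypothetical.

import math

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def train(examples):
    # "Training" here is simply remembering the labeled patterns,
    # the way repeated exposure builds up learned patterns.
    return list(examples)

def classify(model, pattern):
    """Return the label of the most similar stored pattern."""
    _, label = min(model, key=lambda ex: distance(ex[0], pattern))
    return label

# Hypothetical (redness, roundness) patterns "learned" from experience.
model = train([
    ((0.9, 0.8), "apple"),
    ((0.8, 0.9), "apple"),
    ((0.1, 0.2), "banana"),
    ((0.2, 0.1), "banana"),
])

print(classify(model, (0.85, 0.75)))  # prints "apple"
```

Notice that no rule about apples appears anywhere in the code: the "knowledge" lives entirely in the training data, so handling a new kind of object means adding examples rather than rewriting an algorithm, which is the advantage the paragraph describes.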
Around this time last year, a system called “Watson,” created by IBM, made headlines when it beat Ken Jennings and Brad Rutter, two of the all-time champions of the popular American quiz show Jeopardy!. “Watson” is a large rule-based system that the IBM team spent years building, and it does not actually understand any concepts. Even though “Watson” relied on the inefficient rule-based approach described above, it was able to beat humans thanks to the computer’s great weapon: fast processing speed. Artificial intelligence that mimics the human brain is still in its infancy, but what will the future look like when it matures and is combined with that processing speed? AI may be able to monitor a person’s health in real time to prevent illness and suggest personalized treatments. It could also revolutionize education, analyzing students’ learning patterns and providing personalized plans that maximize their learning. Perhaps we will live in an era in which we work closely with computers to solve ever more creative and complex problems.