Yuval Noah Harari argues that the introduction of the number system changed our natural way of thinking, and he warns that advances in artificial intelligence could intensify this numerical mindset. This article explores the positive and negative impacts of artificial intelligence on humanity and argues that it is human selfishness, not artificial intelligence itself, that we should truly fear.
In Sapiens, Yuval Noah Harari takes a critical look at the number system. He is concerned that we have adopted a bureaucratic, compartmentalized way of thinking rather than the natural, associative way humans think. He points out that the introduction of the number system in particular has made us more inclined to think in numerical terms, and he fears that with the introduction of computers, we are teaching people to speak, feel, and even dream in a numerical language that computers can understand. He even describes the number system as a rebellious writing system and argues that the rise of artificial intelligence is a product of it. So what does he mean by “thinking in a numerical way” and “speaking, feeling, and dreaming in a numerical language,” and why does he express such fear?
In South Korea, math is a big part of the curriculum: high school students, whether on the arts or sciences track, take math classes at least three times a week. But that is only up to high school. After graduation, depending on their major or profession, many people are never exposed to math beyond basic arithmetic. Considering this, “learning and feeling the language of numbers” doesn’t seem to mean simply learning math.
With so much of the world digitized, computers are so ubiquitous that it’s hard to find someone who doesn’t use one. From mathematical calculations to writing literary fiction, computers make life much easier. For this reason, computer-related qualifications are considered fundamental to job readiness, and knowing how to use computer programs is essential. So does “feeling and dreaming in the language of numbers” mean learning and mastering computer programs? Not really. In many cases, you don’t need to know numbers to use a computer program; someone who writes a novel in a word processor is not thereby imagining and dreaming in numbers.
What it really means to “feel and dream in the language of numbers” is to think and feel about a situation or phenomenon numerically. For example, consider the moment just before we watch a movie. The film is about to start, and we can’t tell from the poster whether we’ll be satisfied or disappointed afterward. What do we look for in this situation to get reassurance? Ratings from people who have already seen the movie. Seeing what other viewers have said lets us feel a little more confident. In the same way, people put a “number” on many things. Intelligence is labeled as IQ, safety as an “accident rate,” and we have grown accustomed to representing and accepting many other things numerically. When a company proposes a new strategy, it is judged not by abstract words but by hard numbers such as “success rate” and “return on investment.” There are now even efforts to quantify abstract concepts like poverty, happiness, and honesty.
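To make this concrete, here is a minimal sketch in Python, using entirely hypothetical figures, of how a fuzzy judgment gets collapsed into a single number, whether it is a movie rating or a return on investment:

```python
# A minimal sketch (hypothetical numbers) of "thinking in numbers":
# a fuzzy question such as "will I enjoy this movie?" is reduced to one score.

ratings = [4.5, 3.0, 5.0, 4.0, 2.5]          # hypothetical viewer ratings out of 5
average = sum(ratings) / len(ratings)
print(f"Average rating: {average:.2f} / 5")   # -> Average rating: 3.80 / 5

# A company strategy is judged the same way: the whole plan collapses
# into a single figure, the return on investment.
cost, gain = 1_000_000, 1_250_000             # hypothetical investment and payoff
roi = (gain - cost) / cost
print(f"Return on investment: {roi:.0%}")     # -> Return on investment: 25%
```

Whether the subject is a film, a corporate strategy, or happiness itself, the pattern is the same: the experience is translated into a number before it is trusted.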
There are two main reasons why Harari fears the number system. First, the number system itself is not natural to the way humans think; we are not built to think in numbers. Second, there is the fear of artificial intelligence, which he sees as the end product of the number system. In this article, I’ll discuss the fear of AI a bit more.
There are two main reasons why AI is feared. The first is the fear, often portrayed in fiction and film, that AI will become independent of humans and attack them. “Avengers: Age of Ultron” and “Transformers” are examples of stories in which AI robots threaten to destroy humanity. The possibility of AI attacking humans is not unrealistic: research is already underway on weapons such as AI-powered unmanned fighter jets, and government efforts to build robots that move like humans could make humanoid AI war machines a reality. However, these fears ultimately stem from humans building robots and developing AI for the purpose of killing humans. AI is, after all, created by humans, and unless it is designed to kill, it is unlikely to attack us. Of course, humans can also create bad outcomes without intending to. In one novel, an AI asked how to save the environment answers that humans should disappear. The potential for disastrous consequences exists even when we don’t intend them, which is why we must be careful not to lose control of AI entirely and let it act autonomously without human intervention.
While the destruction of humanity by AI is still a distant question, a more immediate fear is that AI will take away jobs. The worry is that the basic cycle of capitalism (labor → income → consumption → corporate investment → employment → labor) will be broken by AI and the economy will come to a standstill. Many jobs have already been lost to AI, and more are still being lost. As AI continues to advance, there will likely be few tasks that humans can do better than it. It’s only natural that jobs will disappear as a result. However, job losses aren’t necessarily a bad thing. If fewer people are working but production remains unchanged, we may eventually see a utopia where everyone can live without working. Of course, that future is as far off as AI wiping out the human race, and probably less likely. But the thought that AI will eliminate jobs need not be a pessimistic one. Whether the profits AI generates are kept by a few or shared with others could determine whether the future is a utopia or a dystopia.
So far, we’ve discussed the fears surrounding AI, and they are valid: AI could destroy the human race, or it could take away our jobs and leave a flood of people unemployed. However, both outcomes depend on human choices. If we use AI with good intentions and share the profits it generates with others, we can build a better future. In this sense, it is not AI that we should be afraid of, but rather the selfishness of “I will make a good living for myself.”