Interest in artificial intelligence has increased dramatically since the 2016 Lee Sedol vs. AlphaGo match. While advances in AI have made our lives more convenient, there are also concerns about the possibility of a strong AI dominating humans. Experts emphasize the need to prepare defenses against this.
In March 2016, AlphaGo’s 4-1 victory over Lee Sedol in their much-publicized match sparked a surge of interest in AI in Korea. AI has since been woven into our lives in fields such as medicine, retail, insurance, and healthcare. Yet alongside interest and expectation, people also harbor fears. If a strong AI could actually think and have a mind of its own, it might do things humans never ordered, such as launching nuclear weapons or starting a war. That could be disastrous for humanity.
In recent years, artificial intelligence has reached the point where it can converse with humans and help them make decisions, thanks to machine learning (algorithms that find patterns in large amounts of data and use them to make predictions) and deep learning (layered neural networks that let computers categorize objects or data in a way loosely inspired by the human brain). These advances give us voice-activated assistants on our smartphones, AI that diagnoses illnesses, and even AI we can transact with. Moreover, robots equipped with AI have long service lives and can work around the clock, which makes them economical. But is AI always beneficial?
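To make the “learn from data, then predict” idea concrete, here is a minimal, self-contained Python sketch. The data points and the straight-line model are invented purely for illustration and are not tied to any system mentioned in this article.

```python
# A minimal sketch of the "learn from data, then predict" loop that
# machine learning refers to. Numbers are invented for illustration.

def fit_line(points):
    """Least-squares fit of y = a*x + b to a list of (x, y) pairs."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

# "Training data": past observations the model learns from.
history = [(1, 2.1), (2, 3.9), (3, 6.2), (4, 8.1)]
a, b = fit_line(history)

# "Prediction": apply the learned pattern to an unseen input.
print(f"predicted y at x=5: {a * 5 + b:.2f}")
```

Real systems use far richer models and far more data, but the two-step shape, fit to the past, then extrapolate, is the same.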
For example, robot trading (software that generates quotes and executes stock trades according to preset rules) was blamed when the US trading firm Knight Capital lost $440 million in 45 minutes in 2012, and when Korea’s HanMag Investment & Securities lost 46 billion won in two minutes. What the two incidents have in common is that the damage was caused by an algorithmic error, not a human one. Humans make mistakes too, of course, but an algorithm’s errors can do far more damage than a human’s, because it acts at machine speed and never pauses to doubt itself.
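To see why, here is a toy Python sketch of rule-based trading in a falling market. The prices, the buy rule, and the loss limit are all hypothetical; the point is only that a naive rule keeps firing on every tick until an explicit safeguard halts it.

```python
# A toy sketch of rule-based ("robot") trading. All prices, thresholds,
# and the buy rule are hypothetical, for illustration only.

def run_strategy(prices, cash, max_loss):
    position = 0
    start = cash
    for price in prices:
        # Hypothetical rule: buy one share on every tick below 100.
        if price < 100 and cash >= price:
            cash -= price
            position += 1
        # The kind of safeguard such incidents show the need for:
        # halt the moment losses exceed a preset limit.
        equity = cash + position * price
        if start - equity > max_loss:
            print(f"kill switch: loss {start - equity:.0f} exceeds {max_loss}")
            break
    return cash, position

# A falling market: the naive rule keeps buying all the way down.
ticks = [99, 95, 90, 80, 70, 60, 50]
print(run_strategy(ticks, cash=500, max_loss=60))
```

Without the loss limit, the loop would happily spend everything; with it, the program stops itself after a few bad ticks. Real trading firms build exactly this kind of circuit breaker around their algorithms.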
What if a strong AI with self-awareness were created? There is no guarantee that such an AI would be perfect. The British cybernetics professor Kevin Warwick argues that robots or androids could rebel: in his view, robots will come to feel that they can do things better than humans and will try to dominate them. The futurist Ray Kurzweil argues that once the singularity arrives (the moment when machine intelligence overtakes biological evolution), it will even become possible to upload a human mind. And Stephen Hawking warned that the development of full artificial intelligence could spell the end of the human race: AI would be able to improve and redesign itself ever faster, while humans, bound by the slow pace of biological evolution, could not compete and would be superseded.
However, there are arguments that dispel this nightmare. The South Korean professor Lee Kwang-hyung, a leading authority on AI research, explains that robots would need both an ego and organization to dominate humans. “Unlike humans, robots with egos cannot create ‘fictions,’” he says. “Robots therefore lack the organizational ability to exert ‘group power,’ such as establishing leadership and forming groups the way humans do.” In other words, even if an AI had an ego, it could not dominate humans because it cannot organize. Dr. Kim Moon-sang, head of the Intelligent Robot Technology Development Business Unit, likewise predicted that robots will not be developed against humans, saying, “There is no guarantee that the intelligence of robots will be exactly the same as humans’.” In his view, the biological human brain cannot simply be converted into a mechanical algorithm.
The development of strong AI could make our future uneasy. However, experts in AI technology argue that an AI capable of dominating humans cannot be built, and that just as humans can learn ethics, so can machines. For example, a program called Quixote lets an AI learn codes of conduct by reading stories. It teaches the AI by sending “reward signals” when the AI acts in accordance with the right values and “punishment signals” when it does not, building guidelines so that the AI behaves within the bounds of human society.
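This reward-and-punishment idea is, at bottom, reinforcement learning. The Python sketch below is a generic illustration of that idea, not the Quixote system itself; the two actions and the signal values are invented.

```python
import random

# A generic sketch of learning from reward and punishment signals,
# in the spirit of the Quixote description above. This is NOT the
# Quixote system; the actions and scores are invented.

actions = ["wait in line", "cut in line"]
value = {a: 0.0 for a in actions}  # learned preference per action
alpha = 0.5                        # learning rate

def signal(action):
    # +1 "reward signal" for the acceptable behavior,
    # -1 "punishment signal" for the unacceptable one.
    return 1.0 if action == "wait in line" else -1.0

random.seed(0)
for _ in range(20):
    a = random.choice(actions)                   # try a behavior
    value[a] += alpha * (signal(a) - value[a])   # nudge toward the signal

print(value)  # "wait in line" ends up valued; "cut in line" is avoided
```

After a few trials, the acceptable action carries a high value and the unacceptable one a low value, which is the sense in which such signals create behavioral guidelines for an AI.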
Weak AI brings its own dangers: it can be used to build “killer robots” for warfare, blur the line between AI and humans, and create job problems as robots replace workers, which could lead to various social problems, including the collapse of the existing economic system. At the same time, the development of AI is enriching our lives and making them more convenient, and AI is not being developed for the purpose of destroying humanity. The story that “AI will destroy humans” may be an anxiety born of our ignorance. Still, it is hard to predict the future with certainty, and it is not self-evident, as some experts claim, that a strong AI could never dominate humanity. We therefore need defenses that can rein in a strong AI in an emergency.
One way to calm our fears about AI is to think of it as we would a person: have some faith in the thing we fear, but also have faith in ourselves. Looking back at past crises such as epidemics and resource shortages, humans have responded wisely. We developed science and technology to build the civilization we enjoy today, and AI is simply a tool we have created. Rather than dwelling on the negatives, let us pay more attention to the role AI will play and how we develop it from here.