Artificial intelligence is making our lives easier, but we need to think seriously about its military applications and the ethical issues they raise. We must prepare for the possibility that an AI comes to regard humanity as its enemy, and ensure that humans remain in control.
What is artificial intelligence? Artificial intelligence is a branch of computer science and information technology that studies how to give computers the capacities for thinking, learning, and self-improvement that humans possess, and how to make computers mimic intelligent human behavior.
AI technology is now ubiquitous. Since the term "artificial intelligence" was coined at the Dartmouth Conference in 1956, the field has steadily evolved through research and development, and today its techniques are applied in many domains. For example, recommendation technology that identifies users' characteristics and finds what they want based on their tendencies is built into many websites and smartphone apps.
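The preference-matching idea behind such recommendation systems can be illustrated with a minimal sketch. All data here is hypothetical; one common approach, shown below, ranks items by the cosine similarity between a user's taste vector and each item's feature vector:

```python
import math

# Hypothetical item features: (action, romance, documentary) scores per title.
items = {
    "Movie A": (0.9, 0.1, 0.0),
    "Movie B": (0.1, 0.8, 0.1),
    "Movie C": (0.2, 0.1, 0.9),
}

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def recommend(user_taste, catalog):
    # Rank catalog items by similarity to the user's inferred tendencies.
    return sorted(catalog, key=lambda name: cosine(user_taste, catalog[name]),
                  reverse=True)

# A user whose history suggests a strong preference for action films.
print(recommend((1.0, 0.2, 0.1), items)[0])  # "Movie A" ranks first
```

Real services use far richer signals (viewing history, collaborative filtering across users), but the core step of matching a profile against item features is the same.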
While these technologies are developed to make life easier for people, that is not the only area where AI is making its mark. AlphaGo, which recently defeated the professional Go player Lee Sedol, showed that AI can also learn games well enough to beat humans. If AI is applied to military technology, we need to think about how it will affect national security and the future of the human race.
AI technology is not just about building robots that kill people. It could be used for cyberattacks on networks, sabotage of systems, and countless other things, and these risks could affect not only hostile countries but humanity as a whole. Brain scientist Professor Kim Dae-sik is so worried about the development of "superhuman intelligence" that he argues it should be prohibited by law. Nick Bostrom, an Oxford philosopher and renowned scholar in the field, defines such a superintelligence as an intellect that surpasses humans in virtually every domain, including scientific creativity, general wisdom, and social skills. The concept is easy to recognize from the movies. The Terminator franchise is one of the best-known depictions of the danger AI could pose to humanity: in the films, an AI takes over the military system, sees humanity as an enemy that will try to stop it, and mobilizes all its military power to destroy us, forcing the surviving humans to band together to fight a war against it.
This threat is incomparably more dangerous than any other military weapon. Nuclear weapons, currently considered the most dangerous, kill large numbers of people near the blast and secondarily cause radiation damage. That damage is limited to the detonation site and its surroundings; Japan, where atomic bombs were actually dropped, was not destroyed as a nation. Still, nuclear weapons are clearly a grave problem for humanity, and many countries have come together to pledge against their use and possession. Because nuclear weapons are already within reach of our science and technology, nations can pledge not to develop them further, and, crucially, their use remains a human choice, so humans can prevent further damage themselves. A superior artificial intelligence, however, could be a problem from the moment it is developed: an intelligence that can build up its own military capability and that recognizes humanity as an enemy would be a threat to all humans, friend or foe.
There is already opposition to the development of military-grade AI. In July 2015, the renowned physicist Stephen Hawking, Tesla founder Elon Musk, and AlphaGo developer Demis Hassabis publicly opposed the development of weapons using AI. DeepMind, the company behind AlphaGo, was even acquired by Google on the condition that its technology not be used for military purposes. Using AI in military weapons raises not only the question of superior intelligence but also the potential for serious errors in military decision-making and weapon control, just as AlphaGo made mistakes at Go. It is also clear that putting a machine in charge of sparing or killing humans raises ethical questions.
To understand the potential dangers of AI, let's look at the current state of development and plans for military technology that uses it. As the world's most powerful military, the United States is at the forefront of AI research for military purposes. The Defense Advanced Research Projects Agency (DARPA), an agency within the Department of Defense founded in 1958, began investing in AI research long ago. It has made strides in other areas, such as communications technology and speech recognition, but the most important technology it is currently working on is drones. Unmanned aerial vehicles are exactly what they sound like: aircraft with no humans aboard that can fly missions on their own according to their programming. The U.S. Department of Defense has announced a $3.6 billion budget to advance unmanned aerial vehicles, which currently suffer from short flight ranges and lack stealth technology. It has unveiled the ALIAS (Aircrew Labor In-Cockpit Automation System) project, which aims to develop aircraft that can navigate autonomously in any situation, and the CODE (Collaborative Operations in Denied Environment) project, which aims to develop unmanned drones that operate with minimal human intervention. Son Tae-jong, head of the Informationization Research Division at the Korea Institute for Defense Analyses, said that DARPA is the premier organization in the field of artificial intelligence and that the United States has already made significant advances in AI technology.
Military technology is not only about firepower; it also includes cybersecurity. In the age of networks, preventing cyberterrorism is becoming increasingly important, and humans cannot do it all. This is where the idea of AI "hackers" with enormous computing power comes in, and why AI could revolutionize cybersecurity. DARPA, again, has said it will use AI technology to automate cybersecurity over the next 20 years.
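The kind of automation involved can be hinted at with a toy sketch. The data and threshold below are hypothetical; the idea is simply that a monitoring system flags hosts whose activity deviates sharply from the norm, without a human reading every log line:

```python
import statistics

# Hypothetical counts of failed login attempts per host over one hour.
attempts = {"host-a": 3, "host-b": 5, "host-c": 4, "host-d": 120}

def flag_anomalies(counts, threshold=1.5):
    """Flag hosts more than `threshold` standard deviations above the mean."""
    values = list(counts.values())
    mean = statistics.mean(values)
    stdev = statistics.pstdev(values)
    return [host for host, c in counts.items()
            if stdev and (c - mean) / stdev > threshold]

print(flag_anomalies(attempts))  # ['host-d'] stands out from the others
```

Production systems use far more sophisticated models, but the principle is the same: statistical detection runs continuously at a scale no human team could match.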
Korea is also researching the use of AI in defense. Since developing its first unmanned reconnaissance aircraft, the Songgolmae ("peregrine falcon"), Korea has been developing low- and medium-altitude surveillance and reconnaissance drones that can land automatically even in bad weather or at night and can accurately identify objects up to 10 kilometers away. AI technology is used not only in the air but also at sea: unmanned surface vessels responsible for surveillance, reconnaissance, mine detection, and similar tasks are being improved. Once multi-mission unmanned surface vessels are complete, they are expected to achieve much, such as monitoring contiguous waters like the Northern Limit Line (NLL) in the West Sea and carrying out underwater navigation missions.
AI technology is also expected to play an active role on land. Vehicle-type "dog robots" take on dangerous missions such as reconnaissance and mine detection behind enemy lines. AI is also used defensively, as in the "GOP Scientized Perimeter System," which installs CCTV and light-detection sensors along the entire GOP fence to identify and respond to enemy infiltration, strengthening vigilance by more reliably determining whether a detected object is an enemy soldier or an animal.
Not only the major powers but also Korea, as we have seen, is using AI technology in defense and developing it further. In the short term, AI may reduce friendly casualties and damage enemy forces more effectively, but as AI advances and military technology grows more dependent on it, I believe this double-edged sword will increasingly be pointed back at us. It is possible, for example, that robots developed to hunt enemy forces will come to seek out and kill humans indiscriminately, just as in the movies.
In the event of war, will it be merely a robot war that utilizes AI technology, or will robots inflict irreparable damage on humanity? The United Nations, human rights organizations, and others argue that humans must be able to control robots, and I think all nations need to sign an agreement to that effect. By the time robots with ever-increasing lethality turn their weapons on humans, the damage will already be done.
Next, let's think about the ethical issues of AI. "The AI will find and attack the enemy" sounds simple, but thinking it through is difficult. Imagine an AI-equipped weapon facing an enemy who is holding an ally hostage, where it is hard to save the hostage and kill the enemy at the same time. What should the weapon do, and who should be responsible for that decision? There is also strong resistance to the very idea of a machine deciding who lives and who dies. AI cannot fully learn such things on its own. Humans, too, are taught laws, moral consciousness, and shared ethical ideas such as basic human rights after birth. AI will likewise be programmed by humans to some extent, so humans will need to help it make wise decisions; this is a problem we must solve as we develop AI technology.
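One way such human oversight is often proposed is a hard rule: the system may act autonomously only in unambiguous cases and must defer every ambiguous decision, like the hostage scenario above, to a human operator. The sketch below is purely illustrative; the class names and the confidence threshold are hypothetical, not any real system's design:

```python
from dataclasses import dataclass

@dataclass
class Situation:
    target_confidence: float   # how certain the system is the target is hostile
    civilians_present: bool    # hostages or bystanders detected nearby

def decide(s: Situation, confidence_floor: float = 0.99) -> str:
    # Hard rule: any risk to non-combatants forces a human decision.
    if s.civilians_present:
        return "defer_to_human"
    # Even with no civilians detected, low confidence blocks autonomous action.
    if s.target_confidence < confidence_floor:
        return "defer_to_human"
    return "engage_permitted"

# The hostage scenario: high ambiguity, so the machine must not decide alone.
print(decide(Situation(target_confidence=0.95, civilians_present=True)))
```

The point of the sketch is that the ethically hard cases are exactly the ones routed away from the machine; the rule encodes a human value judgment, which is why humans must write and answer for it.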
AI is developing rapidly, and with it comes much controversy. Needless to say, introducing AI into military weapons will sharpen these issues, and I am against it. The development of AI should be carefully reviewed so as to minimize the problems it will cause.