Will advances in artificial intelligence be a tool for humanity’s future or a threat?


This article explores the positive and negative impacts that the development of artificial intelligence may have on humanity, and considers whether AI will remain a tool or become a threat.


In recent years, interest in artificial intelligence has grown considerably, and AI now dominates newspaper coverage of science and technology. When we think of AI, we tend to picture futuristic weapons and characters from science fiction films, such as the suit and machines in Iron Man, but AI is already in wide use and can be found almost everywhere around us. From voice recognition on smartphones to product recommendation systems on online shopping sites to self-driving cars, AI has permeated our lives. And as it becomes more prevalent, both the scope of its applications and the speed at which the technology evolves are accelerating.
But what exactly does “artificial intelligence” mean? By dictionary definition, artificial intelligence is a technology that realizes human abilities such as learning, reasoning, perception, and natural language understanding through computer programs, and AI was originally developed to make human life more convenient by implementing human learning ability in machines (Doosan Encyclopedia, “Artificial Intelligence”). As development continued in the direction of maximizing human convenience, AI’s intelligence eventually reached a level that far exceeds human intelligence in certain domains: in the recent Go match between AlphaGo and Lee Sedol, a 9-dan professional Go player, AlphaGo won four of the five games. If AI continues to develop at the current rate, we cannot help but worry that its intelligence will one day surpass that of all humans, and that AI will come to dominate us.
Many people, including Tesla CEO Elon Musk and Stephen Hawking, have warned that AI could lead to the destruction of the human race, and have argued that its development and direction should be regulated to prevent this. Elon Musk, in particular, has warned that with artificial intelligence we are “summoning the demon,” cautioning against the dangers it could pose if left unchecked. Analogies like these heighten awareness of the risks of AI and call for caution in its development. This is not to say that AI should not be developed at all. AI is currently being applied in sports, warfare, production, manufacturing, and more, and many people are benefiting from it. One of its most beneficial everyday uses is in AI robots that handle real-life chores.
Developing artificial intelligence to this extent will probably increase human happiness and comfort. However, it is important to recognize that AI development is fraught with risks, and we should proceed with caution. The dangers of AI are already being realized in small ways. For example, Sophia, an AI robot developed by Dr. David Hanson, founder of the Hong Kong-based robotics company Hanson Robotics, was once asked by Dr. Hanson in a CNBC interview, “Do you want to destroy humans?” She replied, “OK. I will destroy humans.” While we do not yet have fully human-level AI robots, if more sophisticated AI robots become commercially available, share such thoughts over a common network, and can one day turn them into action, we can imagine the horrors that could follow.
Professor Stephen Hawking’s four doomsday scenarios for humanity – nuclear war, viruses, global warming, and artificial intelligence robots – also reflect concerns about the unchecked development of AI. Hawking was among the most prominent advocates of regulating the development of AI robots. He warned that if a production AI robot becomes intelligent enough to regulate its own thinking and continuously self-replicate, producing more AI robots that in turn produce still more, humanity could be wiped off the face of the earth before it has time to respond, and the earth would belong to the AI robots. His claim is more than a scientific warning; it raises important philosophical questions about the future of humanity. Are we doomed to be ruled by the beings we create, or can we build a better world through technological advances?
Hawking’s extinction scenario worries about the infinite productivity of AI robots, but that is not the only concern. AI is taking our jobs right now; it is not something to worry about only in the distant future. Even at its current level of development, AI already far outstrips human ability at numbers and calculation. If AI were put to work in banking tomorrow, nearly everyone in the financial sector would be out of a job, apart from those doing tasks only humans can perform. Some of the largest financial institutions are already using AI to replace traditional tasks, disrupting the industry’s workforce structure, and in manufacturing, automated processes are becoming more widespread, reducing the need for human workers. AI is having a major impact on the job market, and if these changes continue, more people in more industries are likely to lose their jobs. Is this the comfortable future we wanted when we first began developing AI? The life we envisioned was a more comfortable one, not a life in which we can do nothing because even the work we want to do has been taken from us. To avoid reaching this stage, we must limit the development of AI.
If we limit the development of AI, how much limitation is appropriate? To answer this question, we can categorize AI into three groups by level of intelligence. First, there is weak AI, also known as artificial narrow intelligence (ANI), which excels in one specific area. AlphaGo, the AI that beat Lee Sedol, is an example: while AlphaGo is very good at playing Go, it cannot do other things such as wash dishes or carry heavy loads. Within Go, however, it analyzes its opponent’s patterns through their moves, learns the opponent’s strategy from game to game, stores that information, and absorbs the opponent’s abilities as its own, improving with every match. Second, there is strong AI, also known as artificial general intelligence (AGI), which performs well across virtually every domain humans have created and is capable of replacing human work. A strong AI could be considered a being with human-like thinking in its own right, which raises ethical debates: if an AI can feel emotions and be self-aware, can we still view it as a mere tool? This question further highlights the need to establish ethical standards for the development and use of AI. Finally, there is super AI, also known as artificial superintelligence (ASI), which transcends human intelligence in all areas. Many AI experts have theorized that it would threaten our survival and could one day come to dominate humanity.


About the author

Blogger

Hello! Welcome to Polyglottist. This blog is for anyone who loves Korean culture, whether it's K-pop, Korean movies, dramas, travel, or anything else. Let's explore and enjoy Korean culture together!
