Will advances in artificial intelligence threaten our creative role or elevate us to godlike status?


Advances in artificial intelligence could blur the line between human and machine and become an absolute force for solving humanity’s problems, but they also carry risks: strong AI may help us, yet it could pose unexpected threats along the way.


Why are we developing AI? “Deus ex machina” is a staging device from Greek theater: the intervention of an absolute power that resolves and justifies all the conflicts in a play. The device is still occasionally used in movies and plays. So will the AI technology of our dreams be a deus ex machina, an absolute force that solves humanity’s problems, or something closer to the machine of the film Ex Machina? Opinions are divided, but I am in the camp that worries AI could become an absolute force that lords over humans. This is not just science fiction; we need to think seriously about the future of AI.
In January 2015, Tesla CEO Elon Musk donated $10 million to the Future of Life Institute (FLI), an organization dedicated to ensuring that AI benefits humanity. The FLI is a voluntary organization of AI researchers from academia and industry. It was co-founded by MIT professor Max Tegmark, and its board includes leading figures such as Oxford professor Nick Bostrom, Stephen Hawking of the Centre for Theoretical Cosmology at the University of Cambridge, and Demis Hassabis, founder of DeepMind, which was acquired by Google. Even as the world’s top companies and academics push forward with AI development, they are paying attention to its dangers. Let’s look at what they mean by AI and why it could be dangerous.
There are two main types of AI. Weak AI is built to perform only a specific task; examples include Apple’s Siri and Google’s self-driving car. AlphaGo, which recently defeated professional Go player Lee Sedol, is also a weak AI, since it performs only the single task of playing Go. By contrast, an AI whose cognitive abilities match or exceed humans’ in all areas, not just a limited range of tasks, is called strong AI. The nematode project, which seeks to create artificial life by imitating the neural network of a living organism, has raised expectations that it may one day be possible to imitate the human brain and its far more complex neural structure. In that sense, research on artificial neural systems modeled on the nematode can be seen as a step toward strong AI that imitates the cognitive functions of the human brain.
However, the potential of AI is not limited to technical achievements. The advent of strong AI could fundamentally change our social and economic structures. If strong AI were to replace human labor, for example, it could cause massive unemployment and pose a serious challenge to the existing economic system. These changes are not merely technological; they could create complex problems for society as a whole.
So how could AI become dangerous? The first scenario is that an AI is deliberately programmed with dangerous capabilities. Autonomous weapons are one example. If such technology fell into the hands of those willing to exploit it, such as terrorist or criminal organizations, it could cause massive casualties. Even a weak AI that performs only a specific function can do great harm to humanity, and the more advanced the technology becomes, the more unimaginable the harm.
The second scenario is that even an AI created for a purpose beneficial to humans can pose a risk in how it pursues that purpose. One reason we want to build strong AI is the belief that, with human-level cognition and effectively unlimited labor, it will help us solve many of our problems.
Perhaps, as hoped, strong AIs will help solve many of our problems, such as food shortages, resource scarcity, and economic stagnation. But there is no telling what approach they will take along the way. How can we be sure that, to solve the planet’s food and resource problems, they will not choose to reduce the number of consumers (humans) rather than find ways to increase supply? This may sound like a scenario from a science fiction movie, but we cannot know whether an AI will always make the right choice in other human problems involving environmental degradation or ethical judgment.
By simulating the nematode’s nervous system, researchers have already succeeded in recreating its neural network in a computer; the program is distributed as open source and continues to be developed. If you ask me whether a computerized nematode can be called a living being, or how relevant this research is to the arrival of strong AI, I find it hard to answer. But if you ask me whether this is the kind of research humanity needs, I would not say yes.
Science is beautiful in its own right because it seeks to understand the laws of the nature we live in. AI research is beautiful, too, because it can offer answers to how living things, including humans, perceive and think. But the beautiful and the good are distinct, and beautiful research is not necessarily good research. If we dare to define good research, it is research that serves only positive functions without causing significant harm to nature, humanity included.
Science and technology can never be fully separated from society, nor are they something humans should accept passively. It is humans who hold the steering wheel of rapidly advancing science and technology, and with AI we stand at a crossroads. It is up to us to decide whether AI becomes a deus ex machina that serves humanity or a machine that stands above it. A skeptical stance and vague fear of science and technology themselves are attitudes to avoid; what is needed, for researchers and for humanity alike, is to think hard about the direction AI development should take at this crossroads and to keep in mind the mindset of pursuing good research.


About the author

Blogger

Hello! Welcome to Polyglottist. This blog is for anyone who loves Korean culture, whether it's K-pop, Korean movies, dramas, travel, or anything else. Let's explore and enjoy Korean culture together!