Will advances in artificial intelligence lead to positive change or a dystopian future?

While advances in artificial intelligence have the potential to make human life more convenient, there are concerns that strong artificial intelligence, once it emerges, could spiral out of human control, raising ethical issues and the risk of a dystopian future.


Humanity has gone through three industrial revolutions that shaped the world like never before, and we are now preparing for the next one. The first and second industrial revolutions increased productivity through mechanization, powered by steam engines and then electricity, while the third was an IT revolution built on knowledge and information. Now the fourth industrial revolution is underway, built on vastly improved IT, and its core driving technology is artificial intelligence (AI).
AI is technology that uses computers to implement human intellectual abilities such as thinking and learning. AI is generally divided into weak AI and strong AI, a distinction based on whether the AI is conscious. Weak AI is non-self-conscious AI that specializes in a particular field and is used to compensate for human limitations and increase productivity. Strong AI, on the other hand, has a mind of its own and can think freely, like a human.
Many experts predict that strong AI will soon emerge, followed by superintelligence, which will surpass human intelligence. Ray Kurzweil, a futurist and Google’s Director of Engineering, made a bold prediction: because AI technology is growing exponentially, computers will reach human-level intelligence by 2029 (strong AI) and vastly surpass it by 2045 (superintelligence). He calls the point at which AI surpasses human intelligence the “singularity.”
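To see why such near-term dates follow from the exponential-growth premise, here is a back-of-the-envelope sketch. The 18-month doubling period is an assumption chosen purely for illustration (a Moore’s-law-style rate), not a figure taken from Kurzweil:

```python
# Illustrative arithmetic behind "exponential growth" claims: a capability
# that doubles every 18 months grows roughly a thousandfold in 15 years,
# which is why exponential forecasts reach extreme values so quickly.
doubling_period_years = 1.5  # assumed doubling period, for illustration only

for years in (5, 10, 15, 20):
    factor = 2 ** (years / doubling_period_years)
    print(f"after {years:2d} years: ~{factor:,.0f}x growth")
```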
Experts are divided on what superintelligence will bring. While Kurzweil optimistically suggests that “superintelligence could make humans immortal,” Microsoft co-founder Bill Gates warns that “artificial intelligence technology will pose a threat to humanity.”
We don’t know exactly what form superintelligence will take, but if artificial intelligence continues to develop, its arrival is only a matter of time. We therefore need to study carefully what it will mean for humanity: if it will be a great help, we should continue researching it, but if it has the potential to be a disaster, we should stop. Today’s weak AI helps us a great deal, but the strong AI and superintelligence that may emerge in the future could pose a grave danger to human society.
The benefits of AI to humans are clear. Looking at current uses of AI, the biggest benefit is convenience. Self-driving cars are being developed, and once they are commercialized, people will no longer have to deal with the fatigue of driving, and the risk of accidents will fall as long as the algorithms work properly. Another benefit is speed. AI-powered translation can analyze the context and flow of a text, interpret the colloquialisms and jokes people often use, and translate what we want to read far faster than a human could. All of these technologies offer great convenience in daily life and can improve our quality of life.
However, these benefits come only from weak AI that specializes in certain areas. Weak AI cannot think like a human; it can only outperform humans at particular tasks. But what if a strong AI could outperform humans in every area? In my opinion, the development of strong AI will bring more harm and risk than benefit.
First, it will be impossible for humans to control strong AI. Once strong AIs are developed, humans will no longer be able to outthink them. Humans will try to control them, but with greater intelligence they will be able to escape that control.
Of course, preparations for this are underway. The 23 Asilomar AI Principles have been established, building on the Three Laws of Robotics that science fiction author Isaac Asimov introduced in his short story “Runaround.” But the problem is that AI’s applications have already moved beyond these ethical boundaries. Asimov’s First Law states that a robot may not harm a human, yet humans themselves are undermining this principle. Since the wars in Afghanistan and Iraq in the early 2000s, unmanned drones and small robots have been used in armed conflicts, and Boston Dynamics, then owned by Google, developed military robots with support from the U.S. Department of Defense. Such development presupposes that robots may harm humans, so it already disregards the principles, and it shows that the principles of AI development are losing their meaning.
Strong AI will also cause chaos in human society as it becomes impossible to distinguish it from humans. AI researchers today study the human mind and imitate it. In his book How to Create a Mind, Ray Kurzweil explained that one direction of AI research is to mimic the brain’s neocortical system. The neocortex processes the various signals entering the brain into patterns and thinks by building from lower-level patterns up to higher-level ones. If an AI is successfully created this way, it will not only think like a human but also process information far faster, resulting in a strong AI with higher intelligence, or superintelligence.
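As a rough illustration of this lower-to-higher pattern idea (a toy sketch, not Kurzweil’s actual neocortex model), the following Python fragment recognizes letters from invented “stroke” symbols and then words from letters; every pattern in it is made up for the example:

```python
# Toy hierarchical pattern recognition: each level turns sequences of
# lower-level symbols into one higher-level symbol, so recognition flows
# from simple patterns to complex ones. All patterns are invented.

# Level 1: sequences of raw "strokes" are recognized as letters.
LETTERS = {
    ("|", "-", "|"): "H",
    ("|", "_"): "L",
    ("(", ")"): "O",
}

# Level 2: sequences of letters are recognized as words.
WORDS = {
    ("H", "E", "L", "L", "O"): "HELLO",
}

def recognize(sequence, patterns):
    """Scan a symbol sequence, replacing known patterns with their
    higher-level symbol; unknown symbols pass through unchanged."""
    out, i = [], 0
    while i < len(sequence):
        for pattern, symbol in patterns.items():
            if tuple(sequence[i:i + len(pattern)]) == pattern:
                out.append(symbol)
                i += len(pattern)
                break
        else:
            out.append(sequence[i])
            i += 1
    return out

strokes = ["|", "-", "|", "E", "|", "_", "|", "_", "(", ")"]
letters = recognize(strokes, LETTERS)  # -> ['H', 'E', 'L', 'L', 'O']
word = recognize(letters, WORDS)       # -> ['HELLO']
print(letters, word)
```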
So, could we distinguish human speech from AI speech? I don’t think we will be able to. As early as 2014, a chatbot named Eugene Goostman was claimed to have passed the Turing test, and future AIs will be even harder to tell apart from humans. The Turing test, proposed by Alan Turing in 1950, judges whether a machine is intelligent by how convincingly it can converse with humans. If AI advances to this point, the boundary between humans and AI will disappear and human society will be disrupted, because AI will be able to do everything humans do.
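To make the test’s structure concrete, here is a toy sketch of the setup (the “imitation game”), not a real evaluation: a judge questions two hidden respondents, one human and one machine, and the machine passes if the judge cannot reliably tell them apart. The canned bot and hard-coded answers below are invented purely for illustration:

```python
import random

def machine(question):
    # Hypothetical stand-in for a conversational AI; real tests use far
    # more capable systems.
    replies = {
        "Do you have feelings?": "Of course. Doesn't everyone?",
        "What did you dream about last night?": "I rarely remember my dreams.",
    }
    return replies.get(question, "Could you rephrase that?")

def human(question):
    # In a real test a person answers live; hard-coded here to stay runnable.
    replies = {
        "Do you have feelings?": "Yes, though it's hard to put into words.",
        "What did you dream about last night?": "Something about missing a train.",
    }
    return replies.get(question, "Hmm, let me think about that.")

def imitation_game(questions):
    # The judge sees only the anonymous labels A and B, in random order.
    respondents = [("human", human), ("machine", machine)]
    random.shuffle(respondents)
    for q in questions:
        print(f"Judge: {q}")
        for label, (_, respond) in zip("AB", respondents):
            print(f"  {label}: {respond(q)}")
    # The machine "passes" if the judge can do no better than guessing.
    print("Hidden identities:",
          {label: name for label, (name, _) in zip("AB", respondents)})

imitation_game(["Do you have feelings?", "What did you dream about last night?"])
```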
As the boundary between AI and humans disappears, it may become possible to clone a person by scanning their brain and copying it into another body. The existence of another being exactly like us could be deeply disruptive to our sense of identity. Furthermore, cloning humans through superintelligence shows a lack of respect for life and is ethically wrong. This is not a future we want to see.
Finally, there are ethical questions about AI itself. Two scenarios are possible after the emergence of superintelligence. One is a dystopian future: a world where AI wreaks havoc on human society and dominates it until humans can no longer recognize reality, as in the films The Terminator and The Matrix. The other is a utopian future, in which humans and AI coexist peacefully and humans join the “technological singularity” by merging with machines, for example by uploading their brains to the cloud.
I argue that if AI development continues on its current course, the dystopian future will become reality. As the weaponization of AI shows, if developers do not conduct their research morally and ethically, the AI they build will be unethical. If AI is put to military use, as in the robotics projects described above, it would not be surprising for it to attack humans anytime, anywhere.
The bigger question is whether AI can be taught to be ethical; in other words, can humans program ethics into it? This may be possible with weak AI, but it is a different story for a strong AI that has higher intelligence than humans and learns on its own. At that point, humans will no longer be able to program its ethics: the machine will simply decide for itself what is right and wrong and act accordingly. If the decisions such an AI makes are seriously harmful to humans, we will be facing an AI-induced dystopia.
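To make concrete what “programming ethics” can mean for a weak AI, here is a minimal sketch of a hard-coded rule check in the spirit of Asimov’s First Law; the Action type and the harms_human flag are invented for illustration. The essay’s point is precisely that a strong AI that sets its own goals could not be constrained by so simple a filter:

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool  # in practice, predicting harm is the hard part

def ethics_filter(proposed):
    """Veto any action flagged as harming a human; pass the rest through."""
    return [a for a in proposed if not a.harms_human]

plan = [
    Action("deliver the package", harms_human=False),
    Action("clear the path by pushing a pedestrian", harms_human=True),
]
for action in ethics_filter(plan):
    print("allowed:", action.name)  # only the harmless action survives
```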
In conclusion, continued AI development is likely to produce AI that humans cannot control, shrink the human presence, and ultimately bring about an AI-driven dystopia. While some futurists, such as Kurzweil, are optimistic about AI’s future, it is imperative that AI developers design AI morally and ethically to secure a positive one. My concern is that the rise of superintelligence will bring more negative consequences than positive ones. To prevent this, ethical guidelines must be strictly followed throughout development. Such guidelines should not merely exist on paper, as the Asilomar Principles do; each researcher must internalize them so that AI develops in the right direction.
I argue that to develop AI in the right direction, we should focus on weak AI, which is more practical and usable than strong AI. Developing strong AI could be a great danger to humanity, whereas weak AI, though it learns through machine learning and deep learning, is limited in scope and so poses no threat to humans. Current applications of weak AI have already made life easier for many people, and many more areas can still benefit from the technology. Weak AI, therefore, is all the AI development we need.


About the author


Hello! Welcome to Polyglottist. This blog is for anyone who loves Korean culture, whether it's K-pop, Korean movies, dramas, travel, or anything else. Let's explore and enjoy Korean culture together!
