How can we address the negative impact that the development of strong AI will have on our jobs and ethical standards?


The author explains the negative impacts of developing strong AI, including job losses, ethical issues, and social polarization, and emphasizes the need to think about how to address them.


A webtoon titled "Dream Company" has recently become a hot topic. Its subject is artificial intelligence: the story follows a company that develops a powerful AI, which becomes the center of the company and manages it. This AI, together with the company's top management, deceives the public and the employees to reap huge profits. The most advanced AI is shown controlling other AIs, doing the company's work, and managing its people. In short, the webtoon warns of the dangers of future AI development.
Concerns about the development of strong AI are growing. Strong AI refers to AI that can think like a human across domains, unlike weak AI, which can only perform human-like tasks toward specific goals in specific fields. The development of strong AI certainly has benefits. But, as in the webtoon above, strong AI can also cause great harm, and that harm is unpredictable. In this article, I will therefore discuss the harms of strong AI and argue that we should stop its development.
First, as AI advances, strong AI will take over human jobs. A 2016 research report from the Davos Forum predicted that within five years about 7.1 million jobs in developed countries alone would be lost to AI, and a research report from Oxford University in the UK predicted that 47% of jobs in the US could be taken over by machines and AI. In other words, many experts predict job losses due to AI.
AI will replace a high percentage of jobs in blue-collar labor, hospitality, and agriculture and fishing. This means that low-skilled jobs will be replaced, and low-skilled workers are likely to be low- or middle-income earners. As low- and middle-income people lose their jobs, income inequality will increase, exacerbating the problem of polarization. In fact, over the past decade in the U.S., the income of the top 1% grew by 278%, while middle-class income grew by only 35%. Studies also show that the incomes of highly skilled workers have grown faster than those of low-skilled workers. This trend will only intensify as AI develops: advanced or hard-to-replace skills related to AI will become more important, further raising the incomes of those who already have high skills and high earnings.
The decline in jobs will not only increase polarization but also change the employment ecosystem. Currently, the predominant mode of employment is the traditional labor market: people go to work, do their jobs, and come home, keeping work and personal life separate. With the development of AI, however, this system will change dramatically. As the Fourth Industrial Revolution creates a variety of AI-based industrial systems, we will see a proliferation of contractual arrangements that mix wage labor with self-employment, and an increase in telecommuting and remote work, that is, workplaces that blur the line between work and personal life.
The most prominent new form of employment to emerge from AI is the digital platform. Unlike the traditional labor market, on digital platforms companies post tasks on the internet and individuals are paid to complete them. Except for senior executives, companies have little need to hire full-time employees, so most work in this system is done on a contingent basis, which increases employment insecurity. Yet the market for these platforms is growing: research shows that the online platform economy is expanding alongside the development of artificial intelligence. Its rise is expected to reduce the share of full-time jobs and to raise issues of surveillance, security, and privacy in the workplace. In other words, the online platform economy is likely to leave people feeling less secure in their employment than in the past.
On the other hand, AI optimists argue that there will be no net job losses. They contend that even if existing jobs are replaced by AI, new jobs and new fields related to AI and the Fourth Industrial Revolution will create as many jobs as are lost. They also note that similar fears arose after each past industrial revolution, yet lasting job losses did not materialize.
However, this argument has several flaws. First, the Fourth Industrial Revolution differs from previous industrial revolutions. In the past, technological innovation raised productivity, and the resulting increase in demand created more jobs. In the Fourth Industrial Revolution, AI will raise efficiency and productivity, but it will mainly replace the work we already do, so demand will not grow much; and if demand does not change, neither will employment. Furthermore, even counting new jobs created by the Fourth Industrial Revolution, the Davos Forum experts predicted that only 2 million jobs would be created while 7.1 million would be lost. In other words, even with these new jobs, they are not enough.
The use of strong AI will also raise a number of ethical issues. Unlike the weak AI we currently use in daily life, strong AI can make its own judgments and decisions without human help. If such systems are put into practice, the question arises of who should be held accountable for their decisions. For example, suppose AI technology advances enough to create a medical AI. If that AI performs surgery on a patient by itself and a medical accident occurs due to an error, who is responsible: the company that created the AI, the AI itself, or the hospital that deployed it? Likewise, if a driverless car causes an accident while driving itself, who is responsible: the carmaker or the individual who owns the car? These are complex questions.
According to Brian Christian's book, even as artificial intelligence develops, it remains different from humans in important respects. However closely an AI mimics us, its judgment and a human's judgment will never be the same. Therefore, any judgment made by an AI will differ from one made by a human, and an AI cannot be held accountable for its judgments in matters concerning humans. In the example above, the mistake was clearly made by the AI, yet the AI cannot bear responsibility for it. This is the ethical dilemma raised by the use of strong AI.
Lastly, there is the danger that if AI is abused by certain individuals or groups, the damage will fall on the rest of us. Unlike weak AI, strong AI can be actively applied in many fields, and its speed and efficiency will far exceed what humans can match. Therefore, if strong AI is abused by particular groups or individuals, the scale of the damage could be unimaginably large.
For example, the disclosures of Edward Snowden recently revealed the misuse of AI and computer programs in the United States. According to Snowden, the U.S. used AI programs, various computer programs, and big-data processing technologies to monitor the social media, personal emails, and accounts of people around the world. Through these technologies, the US had built a program that could track whom anyone was meeting and what they were doing at any given time. In other words, we were being watched by the US. This is a clear legal violation that restricts and infringes on individual liberties.
In other words, as this example shows, AI can be very harmful when misused by certain groups. In particular, the example raises issues of privacy and personal data protection. Moreover, considering that the AI currently in use is weak AI, the damage could be even worse if strong AI were misused.
Those in favor of developing and actively using AI believe that the harm from misuse can be reduced by establishing and implementing principles such as the 23 Asilomar AI Principles before deploying AI. Proponents of strong AI development also argue that the harms will shrink if each AI company is transparent about its development process and allows the public to monitor it.
However, principles such as the Asilomar AI Principles are not enforceable, so a set of principles may be pointless: those who develop AI can violate them if they choose. In addition, forcing companies to disclose their AI development processes would conflict with patent protections, since the technology is patented by the company or individual. And even if the development process were made transparent in practice, it is unlikely we would detect abuses of AI. In short, as AI development continues, we can suggest the direction it should take, but whether that direction will be realized remains in question.
To summarize, the development of strong AI will work against us. Its use will lead to job losses, increased polarization, and a host of ethical issues, and if it is abused, the damage to our rights and interests will be unimaginable. Under these circumstances, there is no reason to develop stronger AI that could bring even greater risks.

 

About the author

Blogger

Hello! Welcome to Polyglottist. This blog is for anyone who loves Korean culture, whether it's K-pop, Korean movies, dramas, travel, or anything else. Let's explore and enjoy Korean culture together!
