It’s 2024, and artificial intelligence is transforming our lives, reaching into finance, healthcare, shopping, and beyond. While some experts are optimistic about the future, others worry about job losses and the dangers of AI. We need creative education and robust safeguards if we are to coexist with AI.
In 2024, artificial intelligence is deeply embedded in our lives. I first started writing in college, in a literature course; although it began as an assignment for credit, I gradually grew interested in writing and took up a range of topics. Now I write about how artificial intelligence affects our lives.
Looking at the history of AI’s development, the financial sector has used AI since the 1980s, and today it even advises investors. From simple data analysis and forecasting to complex investment strategies and real-time market analysis, AI has made the financial industry markedly more efficient and is helping deliver better returns to investors. But this is only the beginning.
Internet shopping giant Amazon uses the Amazon Echo to analyze your buying patterns and lifestyle and make personalized purchase suggestions. This is more than a convenience; it is changing the way consumers live. While such personalized services offer a better shopping experience, they also raise privacy concerns.
Watson, IBM’s artificial intelligence, has been deployed at Gil Hospital in Incheon, Korea, where it has helped treat more than 100 cancer patients. According to professors at the hospital, patients usually follow the advice of their surgeons, so the fact that they were willing to follow Watson’s treatment recommendations shows how far AI has entered our lives. AI’s growing role in healthcare gives many people hope, but it also raises concerns about misdiagnosis.
Interim addendum
In his book How to Create a Mind, futurist and AI researcher Ray Kurzweil argues that the development of artificial intelligence is inevitable, stating that “we have no alternative but to extend our biological capabilities through information technology in order to more efficiently solve the complex challenges before us.” “We will become one with the intelligent technology we create. Intelligent nanobots in the blood will keep our biological bodies in a healthy state at the cellular and molecular level,” he writes. He is thoroughly optimistic about the future of AI and our lives. Can we really expect such a positive future?
Unlike Ray Kurzweil, others, such as James Barrat, are far less optimistic about the future of AI development. Much of this stems from fear: the media has shown us many examples of AI harming humans, the Terminator series being the classic case. There are two main reasons people fear AI: first, that it will take our jobs as it develops, and second, that it will harm us directly, Terminator-style. Should we stop developing AI because each of these fears is a real threat? Let’s take a closer look.
Technology is already chipping away at jobs. Robots take orders in cafes, and industrial robots are replacing workers on factory floors. The problem is that as artificial intelligence develops, this trend will only accelerate. Consider the projections. A 2013 study by Frey and Osborne warned that within 20 years, about 47% of all jobs in the United States would be at risk from automation driven by advances in artificial intelligence. In particular, they estimated a 90% or higher probability of machine replacement for occupations such as sports referees, restaurant and coffee shop workers, farm workers, delivery drivers, chauffeurs, real estate agents, legal secretaries, tax preparers, insurance adjusters, and administrative assistants. A more recent 2020 report from the World Economic Forum estimated that 85 million jobs could be displaced by automation globally by 2025, while 97 million new jobs would be created over the same period. This is especially true in sectors such as logistics, manufacturing, and food service, where automation is taking over not only traditional, repetitive tasks but also higher-skilled work such as data analytics and customer service. The COVID-19 pandemic has further accelerated the pace of automation by dramatically increasing demand for remote work and contactless services.
It’s important to note that this is a break from the past. Until now, technology mostly replaced “blue-collar” jobs: factory labor, order-taking, and the like. The jobs now threatened by AI are different. Look again at tax preparers, legal secretaries, and administrative assistants: these are so-called “professional” jobs that require multidisciplinary thinking. Even professions that once seemed safe are at risk. In the legal sector, Blackstone Discovery has built artificial intelligence that automates labor-intensive legal document review.
It’s inaccurate to think of this replacement as simply “job loss.” Machines such as calculators and industrial robots took over simple tasks long ago; the roles left to humans were those requiring comprehensive thinking. If these too are replaced by AI, society could polarize. Capitalists would much rather hire an efficient AI that demands no wages than an expensive, less efficient human. Eventually, those who can afford to deploy AI will accumulate more and more wealth, while workers, lacking jobs, will be forced to sell their labor cheaply. And when society polarizes, the whole society stagnates: society cannot be sustained by capitalists alone.
Postscript
While Ray Kurzweil argues that we must develop AI to “more efficiently solve the complex challenges that lie ahead of us,” the opposite seems true: AI is complicating our problems rather than solving them. And the threat is not limited to jobs. In the future we will see not only artificial narrow intelligence (ANI) but also artificial general intelligence (AGI) and artificial superintelligence (ASI). At that point, not just our jobs but our entire lives could be controlled by AI.
Consider a proposition from James Barrat’s book Our Final Invention. “There are two reasons why AI and robots are a topic of discussion: first, that taking over bodies is the best way for AI to increase its knowledge of the world, and second, that AI wants a human-like form so that it can use human infrastructure,” he writes. The logic of the threat follows. Taking over a body is the best way for an AI to increase its knowledge of the world and acquire resources, and human-like machines are better at climbing stairs, putting out fires, cleaning, and handling pots and pans. Likewise, to make effective use of manufacturing plants, buildings, transportation, and tools, an AI will want a human-like form that fits human infrastructure.
There is also a strong possibility of combat robots under AI control. Currently, the largest investor in AI development is the Defense Advanced Research Projects Agency (DARPA), part of the U.S. Department of Defense. DARPA provided most of the funding for Siri’s development and is the main backer of IBM’s SyNAPSE project. DARPA exists to research and develop military technologies, so its investment in AI strongly suggests that AI will be put to military use.
So what should we do about the coming AI future? I propose two solutions. The first is a 180-degree turn in how we educate. Tyler Cowen, a professor at George Mason University, analyzes the rise of AI from an economic perspective. He predicts that the future will divide people into two groups: those who can work with AI and enhance their skills, or whose work machines cannot disturb, and those who cannot interact with machines and will struggle even to enter the labor force. Yet our current education system produces only the latter. We need to move away from rote instruction: from English classes that drill difficult words far removed from everyday conversation, and from math classes centered on calculus drills and the memorization of facts that any calculator or internet search can supply.
Instead, we should focus on areas where humans hold an advantage. In the labor market, the jobs said to be hardest for AI to replace are non-routine, with content that constantly changes. Humans also outperform AI in areas requiring sophisticated communication and persuasion, a comprehensive perspective, a high degree of flexibility, and creativity. We cannot hope to match AI in knowledge, computation, or speed; where we can surpass it is creativity, so we must design and implement education that fosters it. We need to develop the ability to use information, not merely acquire it. To that end, we should abolish memorization-based exams and replace them with performance-based assessments and discussion-centered classes.
To prevent AI from surpassing and controlling humans, we need double and triple safeguards. Some say that instilling Asimov’s Three Laws of Robotics into machines would solve everything, but this is not enough: the Three Laws are not enforceable mechanisms, merely fictional recommendations. In his book Normal Accidents, organization theorist Charles Perrow argues that catastrophic failures are a “normal” feature of systems with complex infrastructures: seemingly unrelated processes and components fail and interact in ways no one predicts. Nuclear accidents are the classic example. We design nuclear plants with layer upon layer of safeguards, yet accidents still erupt in unexpected places.
Extrapolating from the present also suggests what could go wrong. The internet has made many things more convenient, but we have also lost a great deal: hackers steal personal information and sell it to companies, or attack sites to cause financial damage, and the cryptocurrency Bitcoin has long been used for black-market transactions beyond the reach of the police. In a future AI world, what if a scientist intentionally creates an AI hostile to humans, then hacks it to use for his or her own benefit? Defenders will exist, of course, but in hacking the attacker holds an enormous advantage: out of thousands of attempts, only one needs to succeed. To contain this risk, we will need double and triple restrictions on AI, including legislation, defense systems, and mechanisms that can stop an AI at the push of a button.
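To make the “push of a button” idea concrete, here is a minimal sketch in Python of one such layered restriction: a software gate that every action of an automated agent must pass through, and that refuses all further actions once tripped. This is purely illustrative; the names (`KillSwitch`, `run_agent`) are my own, and a real safeguard would also need hardware interlocks and controls outside the system being stopped.

```python
import threading

class KillSwitch:
    """Illustrative emergency-stop gate for an automated agent.

    Every action must pass through checkpoint(); once trip() is
    called (the 'button'), all further actions are refused.
    """
    def __init__(self):
        self._stopped = threading.Event()

    def trip(self):
        # Pressing the button: irreversible within this process.
        self._stopped.set()

    def checkpoint(self):
        # Called before each action; raises once the switch is tripped.
        if self._stopped.is_set():
            raise RuntimeError("kill switch tripped: action refused")

def run_agent(switch, actions):
    """Execute actions only while the switch remains untripped."""
    done = []
    for act in actions:
        try:
            switch.checkpoint()
        except RuntimeError:
            break  # stop immediately; no further actions run
        done.append(act)
    return done
```

The design point is that the gate sits outside the agent’s own logic: the agent cannot reach the switch, only the humans operating it can, which is exactly the asymmetry the “button” argument depends on.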
At the height of the Greek debt crisis, a single trader sold $4.1 billion worth of futures and index-linked funds, and high-frequency trading systems, sensing the price plunge, placed orders to sell almost all of their holdings at the same time. The whole process took mere milliseconds. On that timescale, is there any room for human intervention? Once AI is deployed, we cannot fully control the process. That is why we should not be complacent about a future with AI. As long as open questions remain about jobs and about the safety of the technology, we need to restrain AI research. Ray Kurzweil is quite optimistic about the future of AI, but no such positive future will arrive without thorough preparation. If we fail to prepare and only try to fix things after the fact, it will be too late.