Can we use AI to create beings with a true sense of self, and what are the ethical implications?


Humanity has long explored its own nature and wondered whether human-like beings can be created through artificial intelligence. The Turing Test and fictional characters in film and literature raise the question of whether AI could evolve into sentient beings. As technology advances, the possibility of AI having a mind and instincts of its own demands an ethical discussion.


Can humans create a personality? This question has been asked throughout history. Descartes, the seventeenth-century philosopher, wrote “I think, therefore I am,” which turned modern philosophy’s search inward, toward human identity rather than external truths. This inquiry naturally led to the question of whether humans are unique, and the search for an answer eventually came down to whether it is possible to create beings identical or similar to ourselves. These efforts, coupled with advances in modern science, have produced cloning technology and artificial intelligence. While the ethics of human cloning have been debated for a long time, AI has rarely been discussed because it is considered far from human. Admittedly, the technology hasn’t advanced to the point where the issue is urgent. However, some, especially in media such as movies and novels, have argued that we need to set clear ethical standards for AI. In this article, we’ll look at examples of media that have addressed these issues and explain why we need an ethical discussion about AI.
First, we need to explore the crucial difference between AI and humans. Throughout history, there has been an ongoing effort to distinguish the two and discover their differences, so that we can consider whether ethical questions even apply to AI. The most famous and oldest such criterion is the “Turing Test” proposed by Alan Turing. The test is simple: place a computer and a human in separate rooms and have a judge converse with both over text. The judge reads the responses and tries to determine which conversation partner is the human and which is the computer. Strictly speaking, Turing proposed the test as a criterion for machine intelligence rather than as a tool for telling AI and humans apart, but the fact that no AI has convincingly passed it to date suggests that there are still areas unique to humans.
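As a concrete picture of the setup, here is a minimal sketch of the imitation game in Python. Both respondent functions are hypothetical stand-ins invented for illustration; in a real test a person and a candidate program would sit behind the terminals.

```python
import random

def human_respondent(question: str) -> str:
    # Stand-in for a person typing at a terminal.
    return "I'd have to think about that; it depends on the context."

def machine_respondent(question: str) -> str:
    # A trivially distinguishable "AI": canned replies only.
    return random.choice(["Interesting.", "I am not sure.", "Tell me more."])

def run_imitation_game(questions, judge) -> bool:
    # Randomly assign the respondents to anonymous labels A and B so the
    # judge can rely on nothing but the conversations themselves.
    roles = [("human", human_respondent), ("machine", machine_respondent)]
    random.shuffle(roles)
    assignment = dict(zip("AB", roles))
    transcripts = {
        label: [(q, respond(q)) for q in questions]
        for label, (_, respond) in assignment.items()
    }
    guess = judge(transcripts)  # the judge names the machine: "A" or "B"
    truth = next(l for l, (role, _) in assignment.items() if role == "machine")
    return guess == truth       # True if the machine failed to fool the judge

questions = ["What did you have for breakfast?",
             "Why is a raven like a writing desk?"]
# A judge who guesses at random catches the machine only about half the
# time; a machine "passes" to the extent a careful judge does no better.
print(run_imitation_game(questions, judge=lambda t: random.choice("AB")))
```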
Of course, the Turing Test is an old idea, and there have been many objections to it. The most famous is the “Chinese Room” thought experiment devised by John Searle. It imagines putting a person who knows no Chinese in a room and giving them a rulebook that matches Chinese questions to Chinese answers. This person can answer every question by following the rulebook, yet that does not mean they understand Chinese. The argument is that, likewise, an AI that answers well enough to pass the Turing Test does not necessarily understand anything, and so is not close to being human. Although this thought experiment was put forward to refute the Turing Test, it enriched the discussion and helped make the test more robust. It also fed into the development of CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart), a technology that inverts the Turing Test by making the computer itself the judge. CAPTCHAs take advantage of the fact that humans can recognize distorted letters while most automated programs cannot. They are used to prevent automatic sign-ups or restrict access by bots, and they show in practice how computers still differ from humans. More recently, harder tests have emerged that use images and audio-visual elements in addition to text, making it even more difficult for algorithms to pass.
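To make the mechanism concrete, here is a toy sketch of a distorted-text CAPTCHA in Python. It assumes the Pillow imaging library and is purely illustrative; production CAPTCHA systems use far stronger distortions, larger fonts, and server-side verification.

```python
import random
import string
from PIL import Image, ImageDraw, ImageFilter  # pip install Pillow

def make_text_captcha(length: int = 5):
    """Generate a distorted-text CAPTCHA image and its expected answer."""
    answer = "".join(random.choices(string.ascii_uppercase, k=length))
    img = Image.new("RGB", (40 * length, 60), "white")
    draw = ImageDraw.Draw(img)
    for i, ch in enumerate(answer):
        # Jitter each character's position to break uniform alignment.
        x = 10 + 40 * i + random.randint(-4, 4)
        y = 20 + random.randint(-8, 8)
        draw.text((x, y), ch, fill="black")
    # Scatter random dots as visual noise, then blur slightly, so that
    # naive OCR struggles while a human can still read the letters.
    for _ in range(200):
        draw.point((random.randint(0, img.width - 1),
                    random.randint(0, img.height - 1)), fill="gray")
    return img.filter(ImageFilter.GaussianBlur(radius=1)), answer

if __name__ == "__main__":
    image, answer = make_text_captcha()
    image.save("captcha.png")          # a human types what they see...
    print("expected answer:", answer)  # ...and the server compares
```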
The first computer to challenge the Turing Test was ELIZA, developed at MIT in 1966. It relied on a simple pattern-matching algorithm and was easily distinguished from a human. More recently, Eugene Goostman, a chatbot built by a team of Russian programmers, was claimed to have passed the Turing Test, but serious problems remain. For example, Eugene claims to be from Ukraine, yet when asked whether he had ever been there, he answered “no.” When confronted with difficult questions, he became evasive, like a child looking for his mother, revealing how different he still is from a human. These examples show that it remains difficult to develop an AI that can pass even simple tests, which is why the ethical questions around AI have so far been explored mainly in movies and novels.
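To show how simple ELIZA’s machinery actually was, here is a minimal ELIZA-style responder in Python. The keyword rules below are illustrative stand-ins, not Weizenbaum’s original DOCTOR script: the program matches a pattern and reflects part of the user’s own words back as a question.

```python
import re
import random

# Swap first- and second-person words so the echoed fragment reads naturally.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "I", "your": "my"}

# Each rule: a keyword pattern plus response templates that reuse the match.
RULES = [
    (re.compile(r"i need (.+)", re.I),
     ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (re.compile(r"i am (.+)", re.I),
     ["How long have you been {0}?", "Why do you think you are {0}?"]),
    (re.compile(r"(.*)", re.I),  # catch-all when no keyword matches
     ["Please tell me more.", "How does that make you feel?"]),
]

def reflect(fragment: str) -> str:
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(sentence: str) -> str:
    for pattern, templates in RULES:
        match = pattern.match(sentence)
        if match:
            template = random.choice(templates)
            return template.format(*(reflect(g) for g in match.groups()))

print(respond("I am worried about AI"))
# e.g. "Why do you think you are worried about ai?"
```

A few dozen rules like these were enough to sustain an apparently sympathetic conversation, which is precisely why ELIZA fooled casual users yet failed any probing judge.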
Against this backdrop, many recent movies deal with AI and its ethical issues. Ex Machina (2015) tackles them head-on. The title “Ex Machina” derives from the term “deus ex machina,” which Aristotle used to criticize narratives in ancient Greek theater where a god suddenly appears and resolves the plot; it refers to a “forced, mechanical solution.” In the movie, the programmer Caleb is brought in to evaluate Ava, an artificial intelligence, in a version of the Turing Test. Eventually, Ava escapes the lab with Caleb’s help but leaves him trapped behind.
Another example is the animated film Ghost in the Shell (1995). It was so revolutionary that it completely changed how artificial intelligence was perceived at the time. While traditional AI characters were imagined as lightweight beings that mimicked human intelligence, like R2-D2 and C-3PO in Star Wars, the AI in Ghost in the Shell is a government hacking program called the Puppet Master, which gains a mind of its own and escapes the government to act on its own terms. Navigating the ocean of information, it comes to understand the human instinct to leave offspring, and it declares itself a living being that wants descendants of its own. Eventually, it merges with Major Motoko Kusanagi, a cyborg, to form a new life form.
As these examples show, the AIs in these films pursue goals of their own rather than simply following external instructions. Ava in Ex Machina has an ego that wants to escape the lab, and the Puppet Master in Ghost in the Shell wants to leave offspring. These situations, in which machines use humans to achieve their own ends, reveal the core of the AI ethics issue.
We all have instinctive desires that are often unreasonable, and such desires have long been the clearest dividing line between AI and humans. But as technology advances, AI can also learn to act for itself. For example, LittleDog, a quadruped robot with the ability to learn, figures out safe routes over stairs, dirt, and other rough terrain through trial and error, without being given a recording of any particular path. This shows that AI with the ability to learn is no longer a pipe dream, and it is imperative to discuss the ethical norms that will apply when AI attains human-like intelligence.
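As a toy illustration of that kind of trial-and-error learning, here is a tabular Q-learning sketch in Python: an agent on a small grid is never shown a correct route, yet it learns a path around hazardous cells purely from reward feedback. The actual LittleDog research used far more sophisticated methods; this only conveys the principle.

```python
import random

ROWS, COLS = 3, 4
START, GOAL = (0, 0), (2, 3)
HAZARDS = {(1, 1), (1, 2)}                    # "unsafe footing" cells
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

def step(state, action):
    r = min(max(state[0] + action[0], 0), ROWS - 1)
    c = min(max(state[1] + action[1], 0), COLS - 1)
    nxt = (r, c)
    if nxt == GOAL:
        return nxt, 10.0, True     # reached the goal
    if nxt in HAZARDS:
        return START, -5.0, False  # slipped: penalty, back to start
    return nxt, -0.1, False        # small cost favors short, safe routes

Q = {((r, c), a): 0.0
     for r in range(ROWS) for c in range(COLS) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2

for episode in range(2000):
    state, done = START, False
    while not done:
        if random.random() < epsilon:   # explore occasionally...
            action = random.choice(ACTIONS)
        else:                           # ...otherwise exploit what is known
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        nxt, reward, done = step(state, action)
        target = reward + gamma * max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (target - Q[(state, action)])
        state = nxt

# Greedily following the learned values traces a route around the hazards.
state, path = START, [START]
while state != GOAL and len(path) < 20:
    action = max(ACTIONS, key=lambda a: Q[(state, a)])
    state, _, _ = step(state, action)
    path.append(state)
print(path)
```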
Although advances in artificial intelligence have not yet reached the point of urgency, breakthroughs in science always arrive at unexpected moments. It is therefore important to discuss now how to treat AI as an entity that is separate from, yet similar to, humans, so that we can respond quickly when the unexpected happens.


About the author

Blogger

Hello! Welcome to Polyglottist. This blog is for anyone who loves Korean culture, whether it's K-pop, Korean movies, dramas, travel, or anything else. Let's explore and enjoy Korean culture together!
