As artificial intelligence develops, the question arises of whether it should be granted the same rights as humans. Human dignity stems from uniqueness and transience, whereas AI is replicable, so it may be inappropriate to grant it the same rights as humans.
Do created intelligences have rights?
The term “artificial intelligence” literally means “created intelligence”: an intelligence that is not a living being but thinks and behaves much like one. It is a frequent subject of science fiction films, novels, animation, and other media. In the film RoboCop, a robotic officer who thinks and judges like a human fights criminals on behalf of humans; in the film A.I., a robot boy named David believes he is human and craves the love of his parents; and even in the classic anime Atom (known in English as Astro Boy), a robot with human emotions leads the story.
Before we dive into AI, it is worth looking at intelligence itself and how it has been understood. Philosophers have long pondered the human mind and spent centuries trying to unravel its nature. In modern philosophy, dualism and materialism each tried to explain the nature of the mind: dualism views mind and matter as distinct, while materialism interprets the mind as a physical phenomenon. Today, with the advancement of science and technology, materialist thinking prevails over dualism, and the mind is widely regarded as the result of physical processes.
The development of AI and its philosophical background
The first account of the mind to emerge from materialist thought was behaviorism. Behaviorism attempted to establish a one-to-one correspondence between psychological states and behaviors, but it declined because explaining a given behavior turned out to require appealing to still other psychological concepts, making the analysis circular. Functionalism then emerged, interpreting psychological states as functional roles, and it later gave rise to classical computationalism. Classical computationalism views intelligence as computation: mental states are functional states that manipulate symbols according to rules, and in principle any such process can be carried out by a Turing machine, the abstract model of a general-purpose computer. On this view, creating artificial intelligence amounts to writing the right program.
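To make the computationalist picture concrete, here is a minimal sketch of a Turing machine simulator in Python. The transition table, state names, and the bit-inverting task are invented purely for illustration; the point is only that rule-governed symbol manipulation is enough to carry out a computation.

```python
# A minimal Turing machine: a finite control, a tape, and a transition table.
# The illustrative rules below invert a binary string (0 -> 1, 1 -> 0) and
# halt at the first blank -- a toy picture of the computationalist idea that
# "thinking" could be rule-governed symbol manipulation.

def run_turing_machine(tape, rules, state="start", blank="_"):
    tape = list(tape)
    head = 0
    while state != "halt":
        symbol = tape[head] if head < len(tape) else blank
        new_state, new_symbol, move = rules[(state, symbol)]
        if head < len(tape):
            tape[head] = new_symbol
        else:
            tape.append(new_symbol)
        head += 1 if move == "R" else -1
        state = new_state
    return "".join(tape)

# Transition table: (state, read symbol) -> (next state, write symbol, move)
invert_rules = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}

print(run_turing_machine("01011", invert_rules))  # prints "10100_"
```

Any program, however sophisticated, is in principle reducible to a table of rules like this one; that is what makes the Turing machine a useful model of computation for the computationalist view of mind.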
AI research has evolved alongside these philosophical theories. At first, AI systems could only answer simple questions, but they gradually gained the ability to learn, to the point where they can now respond fluidly in interaction with humans. Today's AI can improve itself through learning algorithms, and if this progress continues, we may one day have an AI that thinks and feels in ways very similar to humans.
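As a hedged illustration of what "improving through a learning algorithm" means, the sketch below fits a one-parameter model to a few made-up data points by gradient descent; the data, learning rate, and variable names are all invented for the example.

```python
# A toy illustration of "learning from data": a one-parameter model y = w * x
# is repeatedly nudged to reduce its error on examples, so its answers improve
# with experience.  Data and learning rate here are made up for illustration.

examples = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # (input, desired output), roughly y = 2x
w = 0.0            # the model's single adjustable parameter
learning_rate = 0.05

for step in range(200):
    for x, y in examples:
        prediction = w * x
        error = prediction - y
        w -= learning_rate * error * x   # gradient step on the squared error

print(f"learned w = {w:.2f}")  # ends up near 2.0: the model has "learned" the pattern
```

The same principle, scaled up to billions of parameters, underlies the self-improvement the paragraph above describes.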
So, if AI is indeed capable of thinking at the same level as humans, should it be granted basic rights such as human rights? This is not just a technical question, but also an ethical and philosophical one.
Should AI be granted rights?
A living being does not have dignity merely because it is alive. The word “cruelty,” for example, refers to inflicting unnecessary suffering on or killing animals, but for humans it covers not only physical pain but also violations of their rights as human beings. Plants, by contrast, are alive yet are not considered subjects of abuse. This suggests that a being's rights are guaranteed only if it has the intelligence to recognize what is being done to it. From this point of view, if an artificial intelligence were self-conscious and could recognize acts committed against it, we would need to discuss whether it should be guaranteed certain rights.
The level of intelligence can also be an important criterion. Just as humans and animals are granted different levels of rights, the level of an AI's intelligence may determine the rights it is guaranteed. If an AI has the same level of thinking ability as a human, should it be respected and protected as an intelligent being?
The most important part of this discussion is understanding the nature of human dignity and rights. Human beings are not dignified simply because they have intelligence. For example, a person with an intellectual disability or a person in a vegetative state still has dignity and enjoys the corresponding human rights. This means that human dignity is a separate issue from intelligence. A human being is dignified because each person is unique: each of us lives only once in this world, and once that life ends, it cannot be restored. Because of this uniqueness and transience, humans are dignified and their rights are protected.
An AI, on the other hand, no matter how highly developed, does not possess this uniqueness. It is a collection of data encoded in electronic signals that can be replicated as many times as needed, transplanted into a new body, or revived from a backup. As such, it can never have the same dignity as a human being, and it is not appropriate to grant a being that lacks uniqueness the same rights as a human.
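To make the replicability point concrete, here is a toy sketch; the data structure and field names are invented for illustration and do not describe any real system. Everything that would constitute an AI's "identity" is just data, so it can be duplicated and restored exactly.

```python
# A toy illustration of the replicability argument: everything that defines an
# AI "individual" -- its parameters and memories -- is just data, so it can be
# copied, saved, and restored bit-for-bit.  Purely illustrative structure.

import copy
import json

ai_state = {
    "parameters": [0.12, -3.4, 7.7],          # stands in for learned model weights
    "memories": ["met user A", "learned chess"],
}

clone = copy.deepcopy(ai_state)    # an exact duplicate, indistinguishable from the original
backup = json.dumps(ai_state)      # "death" is reversible: the state can be stored...
revived = json.loads(backup)       # ...and revived later, identical to before

print(clone == ai_state, revived == ai_state)  # prints: True True
```

Nothing analogous is possible for a living human being, which is precisely the asymmetry the argument rests on.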
Conflicts between AI rights and human rights
If we grant AI the same rights as humans, conflicts with human rights could follow. Once AI reaches a point where it can make its own decisions and exercise its own rights, there may be situations in which the exercise of those rights violates human dignity. Recognizing the rights of AI could thus end up harming humans. Given that the ultimate goal of technological development is to enrich and ease human life, granting excessive rights to AI would defeat that purpose.
As AI develops, its role and status will continue to be debated. However, even the most advanced AI is still a program created by humans, and granting it excessive rights would likely come at the expense of human rights. Discussions about the rights of AI should therefore be approached with caution, always keeping in mind that the original purpose of advancing technology is to benefit humans.