Drawing on Ray Kurzweil’s How to Create a Mind, we discuss a future in which the artificial intelligence cloud could be used to create a “You2” that looks exactly like you and has the same brain, and explore the practical, property-law, criminal-law, and personal issues such a clone would raise.
Have you ever imagined having a face-to-face conversation with someone who looks and acts exactly like you? In his book How to Create a Mind, Ray Kurzweil predicts that in the not-too-distant future we may be able to create a “You2” that is identical not only to your body but also to your brain. You2 would be an exact duplicate of you, down to your every movement, your endocrine system, and every physical detail. You2 would be created through brain uploading, which uses the artificial intelligence cloud to replicate the structure of a person’s brain: your neocortex is reverse-engineered into an artificial brain that looks and functions exactly like your own.
This is the realization of the childhood fantasy of “if only there were another me”. But it is hard to look at the idea of a robot that looks like you, acts like you, and thinks like you in a positive light. I will examine this in terms of the practicality of You2, the ambiguities it creates in property rights and criminal justice, and the personal problems that arise from the perfect sameness of you and You2.
We can see the contradiction in You2’s very existence in Ray Kurzweil’s predictions of a future AI society. His forecast for the development of AI rests on the “law of accelerating returns”: the idea that human technology improves exponentially by building on existing technology. He predicts that it will not take long for AI to go from merely mimicking human speech to being indistinguishable from humans. Kurzweil argues that these advances will soon allow AI robots to stop being human tools and start being treated like humans. But the moment AI robots stand on equal footing with humans, You2’s raison d’être disappears.
If you were to buy an AI robot that looked exactly like you, a big part of your intention would be to have it do things you dislike or find difficult. However, there is something you need to know before you order You2, or politely “ask” it, human to robot, to take on such a task. The You2 robot standing in front of you is literally a second you. Created by scanning your brain, You2 has the same lifestyle, eating habits, strengths and weaknesses, and everything else. You could not expect it to be willing to take on tasks you don’t enjoy or aren’t good at, and even if it agreed to complete a task for you, you could not expect it to do so efficiently. There is no reason to use You2 over other capable AI robots.
So far we have discussed the efficiency and convenience problems that come with introducing You2. While these are valid arguments that You2 is unnecessary, they may not be enough to support the argument that You2 should be banned. There remains the question of whether we identify You2 with you. In How to Create a Mind, Kurzweil argues that if you replace a portion of your brain with a small device containing a computer, you are still the same person. His rationale is that gradual replacement of the brain would produce only imperceptible changes, so the original brain and the implanted brain would be indistinguishable. At the end of chapter 9, he argues that eventually you and You2 are indistinguishable: that there are two yous. Consider, then, a situation in which there is another person with your identity.
For one thing, the boundaries of property rights to assets derived from You2 become unclear. If you view your relationship with You2 as that of an owner and a robot, you can claim that property produced by You2 is yours. However, if You2 and you are recognized as one and the same entity, you may have difficulty claiming You2’s property as your own. The issue becomes even more ambiguous with intellectual property. When You2 produces its own intellectual creations, you might argue that the rights belong to you because its brain is modeled on yours: since you are the source of You2’s brain, your contribution to the creation can be recognized. On Kurzweil’s argument that You2 is simply one more you, this claim poses no problem. However, although You2 starts from a copy of your brain, it goes on to live a different life than you. You and You2 engage in different activities, have different experiences, and your neocortexes take different paths. Given these two diverging histories, it would be impossible to distinguish whether You2’s intellectual property derives from your original brain or from the experiences You2 has had, which could lead to disputes over ownership.
A further problem arises from having the same brain as well as the same appearance: the problem of criminal law. Modern criminal law scholars say that criminal evaluation should start with human behavior. “An act is not criminally culpable unless it is controlled and orchestrated by a human being or involves human intent,” they say. The other element criminal jurists hold necessary for punishment is the capacity for responsibility, the capacity to be held accountable for an act. Responsibility is about whether the actor deserves blame for what they did; an actor is responsible only if he or she can recognize the situation and distinguish between socially defined good and evil.
Under this framework, an ordinary robot’s behavior cannot be considered the intentional act of a human being; that is, the robot lacks criminal capacity. Likewise, on responsibility, a robot without artificial intelligence cannot be punished because it cannot judge its own situation. Now suppose you own a robot called You2. You2 thinks and makes judgments the way a real person does. Therefore, if You2 commits a crime, You2 has what criminal lawyers call criminal capacity, and it also has the capacity for responsibility, because it has the intelligence to recognize the situation. On these two grounds, it seems relatively clear that You2 should be punished for the crime.
Once we conclude that You2 should be criminally punished, the next question is how. It is important to remember that an AI like You2 is not a machine programmed by a single person but a copy of your brain. If a designed AI commits a crime, we can hold its programmers responsible, but it would be unreasonable to transfer that reasoning to You2. It is possible to treat AI robots as equivalent to people, judge You2 as an independent entity, and punish You2 directly. However, executing that punishment harms You2’s owner, who is a third party to the case. The purpose of criminal punishment is to reform offenders and penalize them for their actions, for example by isolating them from society. The perpetrator, You2, will go to jail for a period of time, and you will lose You2, which you paid a certain amount of money for, during that time. Even though you took no part in the crime, you would still lose the use of You2 for a period and suffer a financial loss. This creates the dilemma of punishing someone who had nothing to do with the crime in order to punish its perpetrator. Punishing You2 therefore poses a genuine problem for criminal law.
Beyond these social problems, You2 can also cause problems at home. Since You2 is created with your brain, including your memories, at the moment of creation, it can think of itself as the “real you”. The trouble stems from this “perfect sameness”: another being robs you of your uniqueness. With the rapid advancement of technology, You2 will look indistinguishable from you and will act on the same memories. If You2 believes that it is the “real” person who has always existed, you are in trouble. You2 talks to your family and works at your job as if it were you. You insist to your family and friends that you are the “real” you, but there is no evidence to support the claim. Your position in society, your home, and your family will no longer be yours. By creating a You2, you take it upon yourself to destroy the very identity that makes you unique in the world.
In his book Sapiens, historian Yuval Noah Harari traces the evolution of humans, Homo sapiens, and asks what name we will give to the humans who come after us. Humanity will, of course, continue to evolve, as it always has. In the not-too-distant future, we may be unable to distinguish between AI robots and humans. But that inability need not be a reason to create You2. The AI we consider equal and coexist with does not have to look like us and think like us. We should not be merely optimistic about the emergence of You2 as a side effect of the exponential development of AI. You2 will cause social and moral disruption by its very existence. We need to recognize that disruption, and we need to stop it.