This article explores the evolutionary background of altruistic behavior in humans and discusses whether an AI such as AlphaGo could mimic that behavior. In doing so, it raises the question of whether AI can embody true altruism, or only calculated benefit.
AlphaGo and the possibility of altruistic behavior
Many people would say that the secret behind AlphaGo, the Go-playing artificial intelligence, is the rigorous logic and rational calculation of its program: it computes the most rational move in a given situation and acts accordingly. But could an AlphaGo that always calculates its own interests become a kind AlphaGo that gives up those interests and acts for the good of others? And if AI can act altruistically as humans do, what would explain it?
This question can be approached by looking at how we humans came to be selfless and altruistic. If humans have rational reasons for altruistic behavior, then AlphaGo should, in principle, be able to compute its way to such behavior. But can altruistic behavior simply be calculated? Can an AI truly understand and act on complex human emotions and social contexts? With these questions in mind, this article explores the evolutionary background of altruistic behavior and discusses whether AI could emulate it.
Altruistic behavior in human society
There are many examples of altruistic behavior in the world around us, in which people go out of their way to help others even at the expense of their own interests. One is the long-standing practice of food sharing among the Aché people of Paraguay: members who return from a successful hunt generously share their catch with the rest of the tribe. Scholars have explained this behavior with the repetition-reciprocity hypothesis, which holds that when an interaction between two people is expected to continue, each will behave altruistically, whether out of fear of retaliation or in the expectation that the other will reciprocate next time. Applied to food sharing, the practice would be sustained by the expectation that if you share your catch today, someone else will share theirs with you when your own hunt fails.
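The logic of the repetition-reciprocity hypothesis can be made concrete with a toy simulation. The sketch below, with entirely illustrative parameters (two foragers, a made-up success rate), shows why sharing under repeated interaction is individually rational: it smooths out bad hunting days.

```python
import random

def hungry_days(share, rounds=10_000, p_success=0.5, seed=0):
    """Count the days each of two foragers goes hungry over `rounds` days.

    With share=True, a successful hunter feeds an unsuccessful partner,
    expecting reciprocation on later days (the repetition-reciprocity
    idea). All parameters are illustrative assumptions, not field data.
    """
    rng = random.Random(seed)
    hungry = [0, 0]
    for _ in range(rounds):
        # Each forager independently succeeds or fails at the day's hunt.
        success = [rng.random() < p_success, rng.random() < p_success]
        for i in (0, 1):
            # A forager eats from their own catch, or the partner's if sharing.
            fed = success[i] or (share and success[1 - i])
            if not fed:
                hungry[i] += 1
    return hungry
```

Under sharing, a forager goes hungry only when both hunts fail (roughly p² of days) rather than whenever their own hunt fails (roughly p), so `hungry_days(True)` totals fewer hungry days than `hungry_days(False)`.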
However, it turns out that each individual's rate of hunting success is fairly constant, so the same members end up persistently on the receiving end of others' catches. In other words, the repetition-reciprocity hypothesis cannot explain why the food-sharing practice persists. This is where the costly signaling hypothesis comes in: it explains altruistic behavior as driven by a desire to demonstrate one's abilities. A costly signal is a behavior that demands a level of ability others cannot easily match and is therefore "effort-intensive"; by performing it, the actor sends a signal that naturally demonstrates his or her capabilities.
The evolutionary background of altruistic behavior
So why would we want to demonstrate our capabilities at the expense of our own interests? Because the actor believes that demonstrating those capabilities brings a greater benefit than the immediate benefit being given up. In other words, altruism trades a little present benefit for a future benefit. This logic explains the food-sharing custom of the Aché: by sharing food, the sharer reduces his or her immediate supply, but signals to the tribe that he or she is a competent hunter. Cumulatively, this builds the tribe's trust in the sharer, which yields future benefits such as favorable mate selection and a greater likelihood of being chosen as tribal leader. For these reasons, food sharers have kept sharing even though they receive no direct reciprocation or reward.
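The trade-off described above, giving up a present benefit for a stream of future benefits, is essentially a present-value calculation. The sketch below uses invented numbers purely for illustration: the cost, per-period benefit, and discount rate are assumptions, not measurements.

```python
def signaling_value(cost_now, benefit_per_period, periods, discount=0.95):
    """Net present value of sharing as a costly signal.

    The sharer pays `cost_now` (food given up today) and, through the
    reputation the signal builds, earns `benefit_per_period` (trust,
    mating and leadership opportunities) in each future period,
    discounted by `discount` per period. Illustrative assumptions only.
    """
    future = sum(benefit_per_period * discount ** t
                 for t in range(1, periods + 1))
    return future - cost_now
```

On this toy model, sharing pays only when the relationship horizon is long enough for the accumulated reputation benefit to outweigh the food given up: `signaling_value(10, 1, 40)` is positive, while `signaling_value(10, 1, 5)` is negative.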
Group selection theory, on the other hand, explains that altruistic behavior evolved as a strategy for the survival and prosperity of the group to which an individual belongs. The more cooperation and mutual support there is within a group, the better the group can cope with external threats and, consequently, the more likely it is to survive than competing groups. This theory is useful for explaining why altruistic behavior is so strong within certain societies or cultures. In traditional agricultural societies, for example, cooperation and mutual aid were vital survival strategies that advanced the interests of the group as a whole. In this context, one can ask whether an AI like AlphaGo could ever make decisions that take the good of the group into account.
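The group-selection argument can likewise be sketched as a toy model. In the hypothetical simulation below, a group survives an external threat only if enough of its members cooperate; the group size, threat severity, and altruist fractions are all made-up illustrative values.

```python
import random

def group_survival_rate(altruist_fraction, trials=5_000, group_size=20,
                        threat_severity=12, seed=0):
    """Fraction of simulated groups that survive an external threat.

    A group survives when its number of cooperating (altruistic) members
    exceeds `threat_severity`. Group size, severity, and fractions are
    illustrative assumptions, not empirical values.
    """
    rng = random.Random(seed)
    survived = 0
    for _ in range(trials):
        # Each member independently cooperates with the given probability.
        cooperators = sum(rng.random() < altruist_fraction
                          for _ in range(group_size))
        if cooperators > threat_severity:
            survived += 1
    return survived / trials
```

In this sketch, groups with a high proportion of altruists survive far more often than groups with a low proportion, which is the core of the group-selection claim: selection can favor altruism at the level of competing groups even when it costs individuals.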
AI and altruistic behavior
So far, we have discussed the costly signaling hypothesis, one account of the emergence of altruism, along with an example. Of course, this hypothesis does not explain all altruistic behavior; because it presupposes a future benefit, it cannot account for acts whose benefits are not obvious, such as rescuing a stranger from danger at the risk of one's own life. Despite this limitation, the costly signaling hypothesis remains highly significant. Not only has it resolved cases of altruistic behavior that the repetition-reciprocity hypothesis could not explain, it has also expanded the field of research beyond altruism, helping uncover the motivations behind a wider range of behaviors.
Research like this is crucial to answering whether AI will ever be capable of genuine altruism, or only of mimicking human altruism. If human altruism is not just a survival strategy but the product of evolution within a complex social and cultural context, it will be difficult for AI to fully understand and emulate. Given the speed at which AI is advancing, however, it may come to embody altruistic behavior in ways we have not yet imagined. Such research will play an important role in assessing the ethical and social implications of AI and in developing ways for it to coexist with humans.
In the future, the costly signaling hypothesis holds promise for fleshing out the evolutionary context in which altruism emerged and for explaining behaviors whose motivations have so far remained elusive. The questions raised in this article highlight the need for deeper discussion in the age of artificial intelligence, and they can help us explore the possibilities that will shape our future for the better.