TIT FOR TAT AND RULE FOLLOWING
The axioms of rational choice theory are rules that could be programmed into actors, could exist as behavioral norms reinforced as survival criteria by environmental constraints, or could inform deliberate conduct.
Game theory requires that agents have a set of strategies, based on their evaluation of the expected utility gain from every possible outcome, that tells them how to act in every conceivable circumstance.[688] These rules defining instrumental agency are deemed nonnegotiable. Thus, when game theorists identify Prisoner’s Dilemma scenarios throughout political economy, international relations, and civil society, the message is that agents must defect. Defection in a PD bears directly on survival: although mutual cooperation is better than mutual defection, sole cooperation carries a cost, and unilateral defection a reward. In evolution, natural selection favors behavioral traits conducive to survival. According to rational choice theory, normal behavior among humans must have a similar property: it provides gain to actors.[689] Thus, it is unclear that one can teach a knave or a fool when to be a conditional cooperator, or that the rulebook would, unlike the Golden Rule, have more rules than exceptions.[690] In Axelrod’s words, “Perhaps the most widely accepted moral standard is the Golden Rule: Do unto others as you would have them do unto you. In the context of the Prisoner’s Dilemma, the Golden Rule would seem to imply that you should always cooperate, since cooperation is what you want from the other player.”[691]
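The payoff logic just described can be made concrete. A minimal sketch, using the standard PD payoff labels T (temptation), R (reward), P (punishment), and S (sucker) with illustrative numerical values of my own (the text itself specifies only the ordering), shows why the one-shot message is that agents must defect:

```python
# One-shot Prisoner's Dilemma with illustrative payoffs.
# T = unilateral defection, R = mutual cooperation,
# P = mutual defection, S = sole cooperation.
T, R, P, S = 5, 3, 1, 0
assert T > R > P > S  # the ordering that defines a PD

# Row player's payoff given (own move, opponent's move): 'C' or 'D'.
PAYOFF = {
    ('C', 'C'): R, ('C', 'D'): S,
    ('D', 'C'): T, ('D', 'D'): P,
}

# Whatever the opponent does, defecting pays strictly more --
# defection dominates cooperation in a single round:
for their_move in ('C', 'D'):
    assert PAYOFF[('D', their_move)] > PAYOFF[('C', their_move)]

# Yet mutual cooperation beats mutual defection (R > P), which is
# exactly the tension the Golden Rule runs into here.
```

The assertions pass for any values respecting T > R > P > S, so the point does not depend on the particular numbers chosen.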
Axelrod suggests that unconditional altruism may be the best interpretation of the Golden Rule, but he believes that reciprocal altruism that accepts equity in the distribution of shares is ultimately a sufficient, and even better, approximation.[692]
In biological evolution, organisms are programmed with strategies that, game theorists typically argue, must be individualistically optimizing. In human societies, however, people can adopt strategies at will.
Axelrod’s Evolution of Cooperation makes an unabashedly open case for action predicated on the principle of Tit for Tat. However, in interactions that are defined by a PD payoff matrix reflecting tangible rewards but are neither limited to two individuals nor indefinitely repeated, it is not at all clear how to implement the Tit for Tat rule of conduct. Not only is the individualistic maximizer free to try all strategies, including Probatory Retaliator, but in the end the only safe strategy for avoiding being suckered while accruing the rewards of unilateral defection is to decline to cooperate unless incentivized to do so by institutional design. Neoliberal institutions must therefore attempt to create conditions under which individuals can be punished in later rounds of play for having defected in earlier ones. This takes strong institutional infrastructures with mechanisms for monitoring, keeping dossiers on actors’ earlier moves, and providing sanctioning devices to steer behavior into a mutually cooperative mold. However, the price paid for treating individuals as if strategic rationality were the only logic governing their choices may be to drive out the kind of voluntary cooperation consistent with logics of appropriateness, team reasoning, or other-regarding considerations.[693]
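The Tit for Tat rule itself (cooperate on the first move, then mirror the opponent’s previous move) is simple to state, and a minimal simulation of repeated two-player play makes both its promise and its vulnerability visible. The strategy names and payoff values below are illustrative assumptions, not drawn from the text:

```python
# Iterated Prisoner's Dilemma with illustrative payoffs (T=5, R=3, P=1, S=0).
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(opp_history):
    """Cooperate on the first move, then copy the opponent's last move."""
    return 'C' if not opp_history else opp_history[-1]

def always_defect(opp_history):
    """The unconditional defector favored by one-shot reasoning."""
    return 'D'

def play(strategy_a, strategy_b, rounds=10):
    """Return cumulative scores for two strategies over repeated play."""
    hist_a, hist_b = [], []  # each list records the *opponent's* past moves
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = strategy_a(hist_a), strategy_b(hist_b)
        pa, pb = PAYOFF[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(b)  # A remembers what B did
        hist_b.append(a)  # B remembers what A did
    return score_a, score_b

# Tit for Tat against itself sustains mutual cooperation throughout:
print(play(tit_for_tat, tit_for_tat))    # (30, 30) over 10 rounds
# Against an unconditional defector it is suckered exactly once,
# then settles into mutual defection:
print(play(tit_for_tat, always_defect))  # (9, 14)
```

Note that the mirroring rule presupposes exactly what the paragraph above says neoliberal institutions must supply at scale: an accurate record of the opponent’s prior moves and the ability to direct a sanction back at that same opponent. In many-player settings without such monitoring, the `opp_history` argument has no obvious analogue.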