In The Selfish Gene, Richard Dawkins illustrates his answer to the question of the level at which “the fittest” survive in Darwin’s process of evolution. Of Lorenz’s On Aggression, Ardrey’s The Social Contract, and Eibl-Eibesfeldt’s Love and Hate, he writes that “they got it wrong…they misunderstood how evolution works. They made the erroneous assumption that the important thing in evolution is the good of the species (or the group) rather than the good of the individual (or the gene).” That is, an organism is a “survival machine” that evolved to protect, and serve as a vehicle for the replication of, the “selfish gene”. In the case of humans, we are the first self-aware survival machines. In Chapter 12, entitled “Nice Guys Finish First”, Dawkins argues that even though it is the gene that acts “selfishly” for survival, this fact can explain why “nice” behaviour within a human population will not only be observed but will tend to “win” over “nasty” behaviour. With reference to a set of computer-simulated competitions between strategies for an iterated Prisoner’s Dilemma, conducted by the American political scientist Robert Axelrod, Dawkins illustrates that “nice” and “forgiving” strategies will eventually come to dominate a population, regardless of the initial conditions and other roadblocks that may temporarily put a “nasty” strategy in a dominant position. He then draws an analogy between the computer simulation and the “nice” behaviours observed in animals, plants and genes: “The only conditions are that nature should sometimes set up games of Prisoner’s Dilemma, that the shadow of the future should be long, and that the games should be nonzero sum games. These conditions are certainly met, all around the living kingdoms.”
The crux of Dawkins’ argument is in game theory; in particular, he employs the concept of the iterated Prisoner’s Dilemma. “To qualify as a true Prisoner’s Dilemma, … the payoffs have to follow a particular rank order. Both sides must see mutual cooperation (CC) as preferable to mutual defection. Defection while the other side cooperates (DC) is even better if you can get away with it. Cooperation while the other side defects (CD) is worst of all.” The payoff matrix for this game looks like this:
| Prisoner’s Dilemma Game | What you do: Cooperate | What you do: Defect |
|---|---|---|
| **What I do: Cooperate** | REWARD: Fairly good | SUCKER’S PAYOFF: Very bad |
| **What I do: Defect** | TEMPTATION: Very good | PUNISHMENT: Fairly bad |
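The rank order Dawkins describes (Temptation > Reward > Punishment > Sucker’s payoff) can be sketched concretely. The numeric values below are my own illustrative choice, following Axelrod’s conventional 5/3/1/0 scale; Dawkins himself only ranks the outcomes:

```python
# Illustrative payoff values (an assumption -- Dawkins gives only the
# rank order; 5/3/1/0 is Axelrod's conventional scale).
T, R, P, S = 5, 3, 1, 0  # Temptation, Reward, Punishment, Sucker's payoff

# PAYOFF[(my_move, your_move)] -> (my score, your score);
# 'C' = COOPERATE, 'D' = DEFECT.
PAYOFF = {
    ('C', 'C'): (R, R),  # mutual cooperation: fairly good for both
    ('C', 'D'): (S, T),  # I am the sucker: very bad for me, very good for you
    ('D', 'C'): (T, S),  # temptation: very good for me, very bad for you
    ('D', 'D'): (P, P),  # mutual defection: fairly bad for both
}

# The defining rank order of a true Prisoner's Dilemma:
assert T > R > P > S
```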
In a one-off game of the Prisoner’s Dilemma, the rational outcome is mutual defection: there is no way of guaranteeing trust, and so the best strategy for each player is to DEFECT. “Unlike the simple game, which is rather predictable in that DEFECT is the only rational strategy, the iterated version offers plenty of strategic scope. … Iteration allows lots of conceivable strategies.” Dawkins categorized these strategies by certain characteristics:
- “nice” strategies never defect first.
- “nasty” strategies are those that are not “nice”: they will sometimes defect first.
- “forgiving” strategies, although they may retaliate, do not hold grudges.
- “non-envious” strategies do not strive to score more than their opponent, which is a sensible attitude only because the iterated game is a nonzero sum game.
He also defined the “climate” as the ratio of nice to nasty strategies in the population, and clarified that the game is only a “true” iterated Prisoner’s Dilemma when neither player knows when it will end, i.e. when the “shadow of the future” is long. If the players believe the game is about to end, this will affect their play: they will treat each remaining round not as part of an iterated Prisoner’s Dilemma but as a one-off Prisoner’s Dilemma, in which it is best to defect.
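Why DEFECT is the only rational move in a one-off game can be checked directly: with any payoffs obeying the rank order, defecting pays more than cooperating whatever the other side does. A minimal sketch, again using the illustrative 5/3/1/0 values (Dawkins only gives the rank order):

```python
# Illustrative Axelrod-style payoffs (an assumption; only T > R > P > S
# is required by the definition of the game).
T, R, P, S = 5, 3, 1, 0

def my_payoff(me, other):
    """Score for `me` given both moves; 'C' = COOPERATE, 'D' = DEFECT."""
    table = {('C', 'C'): R, ('C', 'D'): S, ('D', 'C'): T, ('D', 'D'): P}
    return table[(me, other)]

# Whatever the other side does, DEFECT strictly beats COOPERATE:
for other in ('C', 'D'):
    assert my_payoff('D', other) > my_payoff('C', other)

# ...and yet mutual defection (P each) is worse than mutual
# cooperation (R each) -- the dilemma.
assert R > P
```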
Axelrod’s computer-simulated competition between strategies for an iterated Prisoner’s Dilemma highlighted the features of successful strategies. The winning strategies with the largest payouts were “nice”, “forgiving”, and “non-envious”. A prime example of such a strategy was Tit for Tat: it begins by playing COOPERATE on the first move, then copies its opponent’s previous move. The first two tournaments each ran for a fixed number of rounds, and illustrated a very important feature of the iterated Prisoner’s Dilemma: the “climate” of a set of strategies affects how well any particular strategy fares against the others over a given number of rounds. “How can we reduce this arbitrariness (of initial conditions)?” Dawkins asked. The answer lies in a phenomenon called “clustering”.
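Tit for Tat’s rule, and the kind of iterated match Axelrod scored, can be sketched as follows. The 200-round match length and the 5/3/1/0 payoffs are illustrative assumptions, not a reconstruction of Axelrod’s exact tournament parameters:

```python
# Illustrative payoffs and match length (assumptions, not Axelrod's
# exact setup).
T, R, P, S = 5, 3, 1, 0
PAYOFF = {('C', 'C'): (R, R), ('C', 'D'): (S, T),
          ('D', 'C'): (T, S), ('D', 'D'): (P, P)}

def tit_for_tat(opponent_history):
    """COOPERATE on the first move, then copy the opponent's last move."""
    return 'C' if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    """A 'nasty' strategy: DEFECT no matter what."""
    return 'D'

def play(strat_a, strat_b, rounds=200):
    """Total scores for an iterated match between two strategies."""
    hist_a, hist_b = [], []  # each side is shown the *other's* history
    score_a = score_b = 0
    for _ in range(rounds):
        move_a, move_b = strat_a(hist_b), strat_b(hist_a)
        pa, pb = PAYOFF[(move_a, move_b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

# Two Tit for Tats earn the mutual-cooperation payoff every round;
# against Always Defect, Tit for Tat is suckered exactly once and then
# retaliates forever (it is "nice", "forgiving", and "non-envious").
print(play(tit_for_tat, tit_for_tat))    # (600, 600) with these numbers
print(play(tit_for_tat, always_defect))  # (199, 204): loses, but only by one sucker's round
```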
Axelrod’s third run of the computer simulation was “ecological”: the starting ‘climate’ contained equal representations of each strategy, subsequent runs of the game were treated as generations, and winnings were paid out as offspring. As generations passed, some strategies became scarce or went extinct, and the population eventually settled. “Five other nice but provocable strategies ended up nearly as successful (frequent in the population) as Tit for Tat… When all the nasties had been driven extinct, there was no way in which any of the nice strategies could be distinguished from Tit for Tat or from each other, because they all, being nice, simply played COOPERATE against each other.” Tit for Tat cannot be invaded by a nasty strategy because it tends to beat them, but it is indistinguishable within a population of nice strategies because they will all always COOPERATE. Dawkins treated a combination of nice but retaliatory “Tit for Tat-like” strategies as a collectively stable strategy in a given population, but noted that a population can have two collectively stable strategies at the same time, and that which one dominates the other is a matter of luck. This is the arbitrariness he wanted to reduce. His example is Tit for Tat and Always Defect: whichever comes to dominate first will tend to stay dominant. There is a knife-edge; once the population crosses it, one strategy or the other quickly takes over, so the initial conditions (the climate) matter. To reduce this arbitrariness, Dawkins introduced the ability of strategies to “cluster”. In the case of Tit for Tat versus Always Defect, even when Always Defect dominates and Tit for Tat is rare overall, Tit for Tats can be locally common. In those areas they prosper and spread outward. In this way it is possible for Tit for Tat to cross the knife-edge and become dominant. It cannot work the other way, because Always Defect gains nothing from clustering: a cluster of Always Defects simply DEFECT against one another. That is, even when the initial conditions and luck are not favourable to Tit for Tat, its ability to cluster means that it *could* cross the knife-edge. This gives Tit for Tat a “higher-order stability” that Always Defect lacks.
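The clustering argument can be made quantitative with a back-of-the-envelope sketch. Suppose a Tit for Tat plays a fraction `c` of its matches against fellow Tit for Tats and the rest against Always Defect; the payoff values (5/3/1/0) and 200-round matches are illustrative assumptions, and only the qualitative conclusion is Dawkins’ and Axelrod’s:

```python
# Illustrative payoffs and match length (assumptions).
T, R, P, S = 5, 3, 1, 0
N = 200  # rounds per match

# Per-match totals implied by the payoff structure:
tft_vs_tft = R * N            # mutual cooperation throughout
tft_vs_ad  = S + P * (N - 1)  # suckered once, then mutual defection
ad_vs_tft  = T + P * (N - 1)  # one temptation payoff, then mutual defection
ad_vs_ad   = P * N            # mutual defection throughout

def clustered_tft_score(c):
    """Average match score of a Tit for Tat that plays a fraction `c`
    of its matches against fellow Tit for Tats, the rest against
    Always Defect."""
    return c * tft_vs_tft + (1 - c) * tft_vs_ad

# In an Always Defect-dominated world, a defector scores ad_vs_ad per
# match (clustering does not help it: defectors just DEFECT on each
# other).  See how little clustering Tit for Tat needs to out-earn that:
for c in (0.0, 0.001, 0.005, 0.05):
    print(c, clustered_tft_score(c) > ad_vs_ad)
```

With these numbers, a fraction of a percent of local clustering is already enough for Tit for Tat to out-earn the surrounding defectors, while a cluster of Always Defects gains nothing from one another. That asymmetry is the “higher-order stability”.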
Dawkins’ game-theoretical approach to explaining altruistic behaviour gives us a basis to be optimistic about people. Regardless of initial conditions, climate, and the current dominance of nasty strategies, the analogy between the simulation and human behaviour suggests that a nice but retaliatory strategy can cross the “knife-edge” and never go back. Of course there are limitations to applying this model to human populations, but it is a useful framework for investigating patterns of altruistic behaviour.
Dawkins, R. The Selfish Gene. New York: Oxford University Press, 1978.

Richard Dawkins, The Selfish Gene (New York: Oxford University Press, 1978), 2.
Ibid., 229.
Ibid., 226.
Ibid., 208.
Ibid., 225.
Ibid., 215.
Ibid., 216.