

Emergence 2006
Reviews of Relevant Books

The Complexity of Cooperation

Joshua Carp

In The Complexity of Cooperation, political scientist Robert Axelrod presents a collection of academic papers. The research behind them was conducted between 1986 and 1996 and appeared in a broad range of journals (American Political Science Review, Journal of Conflict Resolution, and Management Science, among others). All told, the papers consider abstract representations of conflict (the Prisoner's Dilemma and novel variants), alliances in war and business, and the spread of cultural values. This variegated corpus is nonetheless unified by a common insight: complex collective behaviors can be modeled, often with surprising verisimilitude, by simulating the interactions of their simplest constituents.


That insight, though crucial, is probably not original and certainly not unique, but Axelrod is uncommonly thoughtful about it. Computer simulations of complexity are, at least in some minimal intuitive sense, interesting, but their actual utility in doing science is less clear. Axelrod addresses this matter explicitly, if only briefly. Computer modeling, he writes, is in some ways akin to deductive modes of science: the interactions among agents are specified axiomatically, as a set of explicit assumptions, and the parameters of a model, and often their values, are fixed in advance (sometimes informed by formal or informal empirical study). At the same time, modeling draws on induction: the output of a simulation must be analyzed empirically, since results typically cannot be gotten by reasoning from the initial premises. So Axelrod thinks of these simulations as in some sense distinct from traditional kinds of inquiry; something new emerges from the union of two principles of science. Modeling may be useful, then, precisely because it is a new and qualitatively different tool in our investigative arsenal.


More direct attention is given to the proper epistemological uses of computer modeling. In his earlier papers (with one exception, the papers in this book are presented in chronological order), Axelrod is largely concerned with the iterated Prisoner's Dilemma in one form or another[1]. This is an abstract situation that real social agents are unlikely to encounter, especially not, as in the first paper, in the form of round-robin tournaments held once per generation. It seems to follow that any knowledge gleaned from modeling this kind of game should be considered relevant to the behavior of real agents only in the abstract. In his second paper, Axelrod uses genetic algorithms to evolve strategies for noisy Prisoner's Dilemmas. In this variant, one in ten moves is "implemented incorrectly": the opposite of the player's chosen move is played. In simulations of the noise-free Dilemma, "tit-for-tat" (TFT) has consistently emerged as the best strategy overall; it cooperates on the first move and on each subsequent move repeats whatever its opponent played on the previous move. When noise is added, two modified versions of tit-for-tat perform best: generous TFT, which randomly "forgives" some defections by the opponent (i.e., cooperates following the opponent's defection), and contrite TFT, which "apologizes" for its own unintended defections by cooperating while the opponent retaliates. Axelrod attributes the efficacy of both variants to their "error-correcting" properties: a pair of standard TFT players will cooperate on every play, but a single noisy defection sends them into a costly echo of alternating retaliation. Both generosity and contrition can correct an unintended defection, returning the game to a configuration where net punishment is minimized (the sketch at the end of this discussion makes the recovery concrete).

This is well and good, but the question of the model's practical applicability remains. Axelrod cautions against broad application of his results: they describe simple-minded automata, not genuine social actors capable of calculating complicated decision rules. The most appropriate use of this work, he writes, is in informing social science: reciprocity seems, in a general sense, to be a fruitful strategy for curbing mutually destructive impulses between interacting agents, and generosity and contrition may be of further help under certain conditions. But the values of the payoff matrix are chosen arbitrarily and bear no precise relation to reality; real people and real nations are not truly expected to square off in neat round-robin tourneys; and conflict and cooperation are rarely limited to pairwise interactions. Our best hope is to extract from these models some principle general enough to capture interesting features both of the models and of real life.
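
That recovery dynamic is easy to simulate. The following is a minimal sketch, not Axelrod's own code: the payoff values are the canonical tournament ones (temptation 5, reward 3, punishment 1, sucker 0), the 10% noise rate follows the paper as described above, and the round count, the generosity rate, and the particular "standing" bookkeeping used for contrition are illustrative choices.

    import random

    # Payoff to the row player: T=5, R=3, P=1, S=0.
    PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

    def tit_for_tat(my_hist, their_hist):
        # Cooperate first; thereafter echo the opponent's previous move.
        return their_hist[-1] if their_hist else "C"

    def generous_tft(my_hist, their_hist, generosity=0.1):
        # Like TFT, but randomly forgive some defections (rate is a guess).
        if their_hist and their_hist[-1] == "D":
            return "C" if random.random() < generosity else "D"
        return "C"

    def contrite_tft(my_hist, their_hist):
        # Like TFT, but track "standing": an unjustified defection (e.g.,
        # one caused by noise) puts a player in bad standing; cooperating
        # restores it. Defect only against an opponent in bad standing,
        # and accept punishment for one's own accidents by cooperating.
        me_good = them_good = True
        for mine, theirs in zip(my_hist, their_hist):
            me_good, them_good = (mine == "C" or not them_good,
                                  theirs == "C" or not me_good)
        return "C" if them_good else "D"

    def play(strat_a, strat_b, rounds=200, noise=0.1):
        # One in ten moves is implemented incorrectly, i.e., flipped.
        flip = lambda m: "D" if m == "C" else "C"
        hist_a, hist_b, score_a, score_b = [], [], 0, 0
        for _ in range(rounds):
            a, b = strat_a(hist_a, hist_b), strat_b(hist_b, hist_a)
            if random.random() < noise:
                a = flip(a)
            if random.random() < noise:
                b = flip(b)
            hist_a.append(a)
            hist_b.append(b)
            score_a += PAYOFF[(a, b)]
            score_b += PAYOFF[(b, a)]
        return score_a, score_b

    # Noisy self-play: plain TFT falls into echoes of retaliation, while
    # the generous and contrite variants recover.
    for strat in (tit_for_tat, generous_tft, contrite_tft):
        mean = sum(play(strat, strat)[0] for _ in range(200)) / 200
        print(f"{strat.__name__:>13}: {mean:.1f}")

Under these assumptions, the two variants reliably outscore plain tit-for-tat in noisy self-play, which is just the error-correcting advantage described above.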


Such caution is Axelrod's official stance in his earlier papers. But two later papers belie the principle. Applying an approach he calls landscape theory, Axelrod constructs models of political and economic alliances and then tests them against empirically derived data. His models predict alliances among the major powers in World War II and among firms vying to establish a standard implementation of the Unix operating system in the late 1980s. There is an interesting tension here: computer simulations are meant to model situations in general terms and to inform us about laws of interaction that transcend particulars. What are we to make, then, of simulations with apparently strong, and apparently unintended, predictive power for real events? It may be that simple simulations are more than good abstract analogues for social behavior. It may be that short-sighted actors bound to minimal rulesets are useful models of complex behavior not because they abstract away the putative cognitive sophistication of real actors but because real actors really do behave simply. If so, it is not surprising that empirically informed simulations can predict international politics. Where existing empirical data are adequate to describe the relevant parameters of a simulated situation, prediction seems likely, if not inevitable. (The need for good data should not be neglected, though: there might exist otherwise excellent rulesets that describe interactions in terms of constructs that cannot be measured, now or ever.)


Axelrod further develops his idea of "myopic" social agents in the papers that follow. His models of alliance are of particular use here. Those models assume that each actor has some known and constant propensity to affiliate with each other actor. Further, those propensities are weighted by the sizes of the actors: an agent can tolerate siding against an otherwise highly desirable partner when that potential associate is small. In a given configuration of alliances, where each actor is assigned to one side or the other, each actor has a level of frustration (or energy) representing its pressure to shift allegiance. An actor's frustration with each other actor is defined as the product of its propensity to ally with that actor, that actor's size, and the distance between the two (0 if they are on the same side, 1 otherwise). An actor's total frustration is the sum of its frustrations with every other actor; the frustration of the system is the sum of the frustrations of all its actors. Stable configurations of the system are those at local minima of systemic frustration. Put another way, alliances should cease to change once no actor is willing to change, i.e., when any possible switch increases total frustration. These equilibrium states[2] could conceivably be discovered by hill climbing, but the system of interest is small enough to map the landscape exhaustively, computing every point.
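
A minimal sketch of this calculation, with invented actors, sizes, and propensities (Axelrod populates these from historical measures), shows how systemic frustration is computed and how a small system can be mapped exhaustively for its local minima:

    from itertools import product

    # Invented actors, sizes, and symmetric propensities (positive: wants
    # to ally; negative: wants to oppose). Axelrod derives these from data.
    actors = ["A", "B", "C", "D"]
    size = {"A": 3.0, "B": 2.0, "C": 2.0, "D": 1.0}
    prop = {("A", "B"): 1.0, ("A", "C"): -1.0, ("A", "D"): 0.5,
            ("B", "C"): -0.5, ("B", "D"): -1.0, ("C", "D"): 1.0}

    def propensity(i, j):
        return prop.get((i, j), prop.get((j, i), 0.0))

    def frustration(config):
        # Sum over ordered pairs of propensity * partner size * distance,
        # where distance is 0 for the same side and 1 otherwise.
        total = 0.0
        for i in actors:
            for j in actors:
                if i != j:
                    dist = 0 if config[i] == config[j] else 1
                    total += propensity(i, j) * size[j] * dist
        return total

    # The system is small enough to compute every point of the landscape.
    landscape = {}
    for sides in product((0, 1), repeat=len(actors)):
        landscape[sides] = frustration(dict(zip(actors, sides)))

    def neighbors(sides):
        # Configurations reachable by a single actor switching sides.
        for k in range(len(sides)):
            yield tuple(s ^ 1 if i == k else s for i, s in enumerate(sides))

    # Local minima: no single switch lowers total frustration.
    minima = [s for s, e in landscape.items()
              if all(landscape[n] >= e for n in neighbors(s))]
    for s in sorted(minima, key=landscape.get):
        print(dict(zip(actors, s)), landscape[s])

Each minimum appears twice in the output, of course, since relabeling the two sides leaves every pairwise distance, and hence the total frustration, unchanged.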


When the propensity matrix is populated with historical data on the major actors involved in the Second World War, and when sizes are supplied, two low-frustration local minima emerge. One, the lower, correctly describes the alliances at the beginning of the war, with the exception of assigning Poland to the Axis[3]. The other predicts an entirely different configuration, but one that Axelrod regards, in retrospect, as not wholly implausible. Assuming that this second state was not an artifact of measurement or modeling, the result has some interesting implications for international politics. According to this sort of model, real actors are completely blind to fitness landscapes; they make incremental decisions based on preferences for individual associates, without regard to the system as a whole. Real actors, then, are imagined to reach equilibrium by hill climbing. Given a fitness landscape with a large number of local minima, the final configuration that emerges may be largely a consequence of the initial state of the system; if initial states are random in such a situation, so are final states. Axelrod notes that analyses built on Nash equilibria have the drawback of often describing landscapes with many local minima; perhaps, then, the course of history is not inevitable but stochastic.
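
The myopic picture is easy to sketch as well. The following sketch, which assumes the actors and the frustration() function from the previous sketch, lets a random configuration relax by single-actor switches; repeated runs from different random starting points can settle into different local minima, which is exactly the path dependence at issue here:

    import random

    def hill_climb(actors, energy):
        # Start from a random assignment of actors to sides 0 and 1.
        sides = {a: random.choice((0, 1)) for a in actors}
        improved = True
        while improved:
            improved = False
            for a in sides:
                before = energy(sides)
                sides[a] ^= 1      # one actor tentatively switches sides
                if energy(sides) < before:
                    improved = True    # keep a frustration-lowering switch
                else:
                    sides[a] ^= 1      # otherwise revert
        return sides, energy(sides)

    # e.g., with the previous sketch's definitions in scope:
    # for _ in range(5):
    #     print(hill_climb(actors, frustration))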


The usefulness of The Complexity of Cooperation is threefold. Foremost, it presents a body of research that together represents a powerful application of agent-based modeling to a whole class of social problems. Beyond that, it offers sustained thought (though no final resolution) on the proper purpose of this sort of modeling. Finally, and perhaps most usefully, it grants the reader access to nearly all of the source code behind the research: most of the models presented can be rerun, reanalyzed, and altered to fit whatever interests the reader brings to the material. All in all, the book is a rich resource, with far more depth than can be covered here.

[1] The Prisoner's Dilemma, in its original and simplest formulation, is a two-player game in which each player may choose to cooperate or to defect. Payoffs for each combination of moves vary, but in all cases temptation (the player defects and his partner cooperates) > reward (both cooperate) > punishment (both defect) > sucker (the player cooperates and his partner defects). In the iterated setting it is also standard to require that the reward exceed the average of the temptation and sucker payoffs, so that taking turns exploiting one another does not beat steady cooperation. In its iterated form, play lasts for an indefinite (randomly determined) number of rounds.

[2] Nash equilibria, formally.

[3] Poland had a negative propensity to ally with either the USSR or Germany, the two poles of the alliance configuration. Since the USSR was the larger power, Poland aligned itself with the lesser of its two enemies. Midway through the war, once Axelrod's measure of size rated Germany above the USSR, the model predicts that Poland would switch sides.



