A two-person situation in which (1) each player stands to gain more from the transaction by acting selfishly while the other offers to cooperate, and vice versa; yet (2) the two players together could gain a higher total reward by cooperating than by both acting selfishly. The standard outcome of a Prisoner's Dilemma is that both players behave selfishly, leading to the worst joint outcome.
This occurs because, for each player, offering to cooperate exposes that player to the risk of defection by the other, and in this dilemma being defected against while cooperating is worse than mutual defection. As a result, though the ideal outcome is mutual cooperation, without mechanisms for coordination and trust the equilibrium is mutual defection. (See http://plato.stanford.edu/entries/prisoner-dilemma/ for a survey of the subject.)
This situation, a classic of both game theory and ethics, provides food for thought. Some see it as clear justification for human selfishness (rational self-interest), while others see it as the best mathematical argument for unselfishness. Still others see it as irrelevant to human relations, claiming that it rests on poor assumptions about human nature and selfishness.
The Dilemma
The following table shows one particular form of Prisoner's Dilemma. Each player chooses (not knowing the other player's choice) to "cooperate" or "defect", and then each player gets paid according to the cell they jointly land in; the two numbers in each cell are what P2 gets, then what P1 gets.
              Player 1
              C     D
           +-----+-----+
         C | 3,3 | 0,5 |
 Player 2  +-----+-----+
         D | 5,0 | 1,1 |
           +-----+-----+

In each row (i.e., for each possible choice by P2) P1's best outcome is in the D column. In each column (i.e., for each possible choice by P1) P2's best outcome is in the D row. So defection seems to be rational for each player. And yet they do much better if they both cooperate than if they both defect.
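The dominance argument is mechanical enough to check in code. Here is a minimal Python sketch (our own illustrative code; the names are made up) that recomputes it from the table above:

 # Payoffs from the table above, keyed by (P2's move, P1's move);
 # each value is (P2's payoff, P1's payoff).
 PAYOFFS = {
     ("C", "C"): (3, 3),
     ("C", "D"): (0, 5),
     ("D", "C"): (5, 0),
     ("D", "D"): (1, 1),
 }

 def best_reply_p1(p2_move):
     # P1's payoff-maximizing move, holding P2's move fixed.
     return max("CD", key=lambda m: PAYOFFS[(p2_move, m)][1])

 for p2_move in "CD":
     print("If P2 plays %s, P1's best reply is %s" % (p2_move, best_reply_p1(p2_move)))

 # Prints D both times: defection dominates for P1 and, by symmetry,
 # for P2.  Yet mutual cooperation pays 3+3=6 in total, while the
 # "rational" mutual defection pays only 1+1=2.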
There's an excellent book about this, Robert Axelrod's The Evolution of Cooperation (ISBN 0465021212). In part of the book he describes how he solicited strategies from academics, then ran those strategies against each other in tournaments. His tournaments, and some theoretical analysis, indicate that to do well, a strategy should have four qualities: it should be nice (never the first to defect), retaliatory (answer a defection promptly), forgiving (return to cooperation once the other player does), and clear (simple enough for the other player to recognize what it's doing).
Here are some of the simplest strategies. Each table below gives Me's next move, given Me's own move (rows) and Other's move (columns) in the previous round.
TitForTat (aka An eye for an eye, TFT)
        Other
       C   D
     +---+---+
 M C | C | D |
 e   +---+---+
   D | C | D |
     +---+---+

PavlovStrategy (aka Win Stay Lose Change, PAVLOV)

        Other
       C   D
     +---+---+
 M C | C | D |
 e   +---+---+
   D | D | C |
     +---+---+

AlwaysDefect? (aka ALL-D)

        Other
       C   D
     +---+---+
 M C | D | D |
 e   +---+---+
   D | D | D |
     +---+---+

AlwaysCooperate? (aka ALL-C)

        Other
       C   D
     +---+---+
 M C | C | C |
 e   +---+---+
   D | C | C |
     +---+---+

Playing the PrisonersDilemma more than once is known as the IteratedPrisonersDilemma. One of Axelrod's conclusions is that cooperation is not the best strategy for a one-round PrisonersDilemma, but in an IteratedPrisonersDilemma the concomitant "shadow of the future" (i.e., the expectation that you'll interact with the other person again) encourages cooperation among players. (Loosely speaking, players establish reputations, and how you interact with other players will be based on their reputations.)
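To make the tables and the "shadow of the future" concrete, here is a small Python sketch (our own illustrative code, not Axelrod's tournament software). Each strategy is just its next-move table, keyed by (my last move, other's last move):

 TIT_FOR_TAT = {("C","C"): "C", ("C","D"): "D", ("D","C"): "C", ("D","D"): "D"}
 PAVLOV      = {("C","C"): "C", ("C","D"): "D", ("D","C"): "D", ("D","D"): "C"}
 ALL_D       = {k: "D" for k in TIT_FOR_TAT}
 ALL_C       = {k: "C" for k in TIT_FOR_TAT}

 PAYOFF = {("C","C"): 3, ("C","D"): 0, ("D","C"): 5, ("D","D"): 1}  # my payoff

 def play(strat_a, strat_b, rounds=200):
     # Seed each player as if round zero had been mutual cooperation,
     # so ALL_D opens with D and the others open with C.
     move_a, move_b = strat_a[("C","C")], strat_b[("C","C")]
     score_a = score_b = 0
     for _ in range(rounds):
         score_a += PAYOFF[(move_a, move_b)]
         score_b += PAYOFF[(move_b, move_a)]
         move_a, move_b = strat_a[(move_a, move_b)], strat_b[(move_b, move_a)]
     return score_a, score_b

 print(play(TIT_FOR_TAT, TIT_FOR_TAT))  # (600, 600): steady cooperation
 print(play(TIT_FOR_TAT, ALL_D))        # (199, 204): one sucker payoff, then mutual defection

Head-to-head, TitForTat loses narrowly to ALL-D (199 vs. 204) but earns far more against another cooperator, which is exactly why it thrives once the future casts a long enough shadow.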
Both TitForTat and PavlovStrategy are strong competitors in the IteratedPrisonersDilemma. Each has its strengths and weaknesses, which are explored in TitForTatVsPavlov. Despite the impressive showings of these strategies, Axelrod demonstrates that no single strategy is best against all opponents: which strategy does best always depends on the mix of strategies it faces. See The Evolution of Cooperation for more on this point.
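One such difference is easy to demonstrate (again a toy sketch, our own code, repeating the two relevant next-move tables for convenience, with a single forced misperception in the first round):

 TIT_FOR_TAT = {("C","C"): "C", ("C","D"): "D", ("D","C"): "C", ("D","D"): "D"}
 PAVLOV      = {("C","C"): "C", ("C","D"): "D", ("D","C"): "D", ("D","D"): "C"}

 def echo_test(strat, rounds=8):
     # Two copies of the same strategy; in round 0 player A misreads
     # B's cooperation as a defection.  Returns the joint moves.
     move_a = move_b = "C"
     history = []
     for i in range(rounds):
         history.append(move_a + move_b)
         seen_by_a = "D" if i == 0 else move_b
         move_a, move_b = strat[(move_a, seen_by_a)], strat[(move_b, move_a)]
     return " ".join(history)

 print(echo_test(TIT_FOR_TAT))  # CC DC CD DC CD DC CD DC -- the error echoes forever
 print(echo_test(PAVLOV))       # CC DC DD CC CC CC CC CC -- recovers in two rounds

A single misunderstanding locks two TitForTats into alternating defection, while two Pavlovs resynchronize after one round of mutual punishment.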
Read the variations at <http://www.spectacle.org/995/index.html> to see why a particular strategy may or may not work.
Note that the terms "cooperate" and "defect" may set off alarm bells in those versed in GameTheory. In this case, they're just names for the two different choices each player can make. Despite the name "cooperate", all choices are made independently by each player, with no collusion allowed.
[I can't find a link, but... related to the Sissy Fight strategy below, a program was able to "take over" one of these game theory experiments, beating out the traditional tit-for-tat strategy, by communicating via certain patterns of behavior and operating as a team. As in so many games, the best way to win is to cheat.]
The most interesting example of this that MichaelChermside ever saw was an experiment (I wish I remembered the reference... Scientific American?) in which the standard PrisonersDilemma was modified by introducing communication errors: a certain (small) percentage of the time, a player would be misinformed about the results of the last round. The whole thing was then set up on what was basically an evolutionary basis: a pool of many interacting players with different strategies was created and allowed to play each other. Then a new pool (the "next generation") was created, with the strategies that did better getting a larger share of the "gene pool" in the new generation.
The results were fascinating. The system could be brought to a stable cycle in which the following happened. First, in a population of mixed strategies, TitForTat proliferated until it took over nearly all of the population. Then AlwaysCooperate? began to take root: unlike TitForTat, it had no tendency to fall into alternating defection after a random communication error, so it cooperated with everyone. AlwaysCooperate? grew until it had taken over most of the population, at which point AlwaysDefect? appeared. It quickly took advantage of AlwaysCooperate? and took over nearly all of the population. Then TitForTat began to reappear: it wouldn't allow AlwaysDefect? to take advantage of it, but two TitForTats could cooperate. So TitForTat took over, and the cycle started again.
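The setup is easy to caricature in code. The following Python sketch is our own toy reconstruction, not the original experiment: the noise rate, round counts, and replicator-style update are all made-up parameters, so don't expect it to reproduce the published cycle exactly.

 import random

 PAYOFF = {("C","C"): 3, ("C","D"): 0, ("D","C"): 5, ("D","D"): 1}
 KEYS = list(PAYOFF)

 STRATS = {
     "TFT":   {("C","C"): "C", ("C","D"): "D", ("D","C"): "C", ("D","D"): "D"},
     "ALL-C": {k: "C" for k in KEYS},
     "ALL-D": {k: "D" for k in KEYS},
 }

 NOISE = 0.02  # chance of misreading the other player's last move

 def match(name_a, name_b, rounds=100):
     # Noisy iterated match; returns player A's total score.
     a, b = STRATS[name_a], STRATS[name_b]
     move_a, move_b = a[("C","C")], b[("C","C")]  # seed as after mutual cooperation
     score = 0
     for _ in range(rounds):
         score += PAYOFF[(move_a, move_b)]
         seen_a = move_b if random.random() > NOISE else ("D" if move_b == "C" else "C")
         seen_b = move_a if random.random() > NOISE else ("D" if move_a == "C" else "C")
         move_a, move_b = a[(move_a, seen_a)], b[(move_b, seen_b)]
     return score

 def next_generation(pop):
     # Replicator-style update: each strategy's share of the gene pool
     # grows in proportion to its average score against the current mix.
     score = {x: sum(pop[y] * match(x, y) for y in pop) + 1e-9 for x in pop}
     total = sum(pop[x] * score[x] for x in pop)
     return {x: pop[x] * score[x] / total for x in pop}

 pop = {"TFT": 1/3.0, "ALL-C": 1/3.0, "ALL-D": 1/3.0}
 for gen in range(30):
     pop = next_generation(pop)
     print(gen, {name: round(share, 2) for name, share in pop.items()})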
The most interesting (to me) thing about the GameTheory view of the PrisonersDilemma is that it shows how cooperation can emerge without intelligence. If you aren't forced to assume intelligence, you can draw many more conclusions about the system. Allowing a single combined move forces us to assume intelligence (the two players agreeing to make this move together), and that really doesn't shed any light on cooperation; we've known for thousands of years that intelligent people cooperate. The interesting fact is that this same level of cooperation would arise even if we weren't intelligent creatures. Another interesting fact is that these strategies are stable: it doesn't matter how scheming or devious the other strategies are, or how complicated their hidden agendas, a simple TFT strategy will still hold its own against them in the long run. What happens when you have to interact with someone or something that you don't have control over? Without understanding the game of two independent players, you won't get many insights. -- AnonymousDonor
Please note that the term "cooperate" as used in the context of game theory is probably different from what is intended in the phrase above, "...we've known that intelligent people cooperate...". In game theory, no coordination or collusion is assumed or permitted. There are other significant simplifying assumptions that drive the conclusions of game theory but make applying its results to the real world questionable. Collaboration is a far more productive approach in the real world.
Could this be summed up like this? If the players don't know they're playing, the best 'choice' is to defect. If they do, they're screwed by the circular logic.
No. If they don't know they're playing, the best choice is "defect". If they do, both players can choose "cooperate", and both will make out better than they would if they chose "defect".
The evolutionary basis for DramaticIdentity.
Selfishness is not part of the prisoner's dilemma (or of game theory generally). The supposition is simply that two players make simultaneous decisions without knowledge of each other's choices. Coordination is precluded by the definition; the outcome does not depend on "selfishness."
There is an online game called "Sissy Fight 2000" at http://www.sissyfight.com that appears to be related to the PrisonersDilemma. In the game, you take on the role of a young schoolgirl intent on ruining the other girls' popularity and self-esteem.
It works like this: three to six girls are playing in the schoolyard. You start out with 10 self-esteem points. The goal is to humiliate and abuse the others and reduce their points to zero. Each turn, you choose one of several possible actions.
Perhaps someone with a deeper understanding of game theory can comment on the game's dynamics, or maybe even analyze a winning strategy.
The best way to win at sissy fight is to open an alternate chat channel (i.e. cheat) and/or to go in to the game as a team (i.e. also cheat).
Winning by destruction: Consider the final line of the movie "War Games", spoken by the computer after it has examined all the outcomes of destructive competition (a global nuclear war scenario):
A STRANGE GAME. THE ONLY WINNING MOVE IS NOT TO PLAY. HOW ABOUT A NICE GAME OF CHESS?

Related to the PrisonersDilemma, GameTheory helped determine the strategy of the ColdWar, called MAD or MutuallyAssuredDestruction. Indeed, the only winning move is to take no offensive action.
There was one other option that was also considered at the start of the Cold War, the pre-emptive strike. This is even more frightening in the context of very recent history.
The movie DrStrangelove has a great scene where the Russian diplomat explains about the existence of their automatic doomsday device, and Dr. Strangelove says something along the lines of, "That is the whole idea of this machine, you know. Deterrence is the art of producing in the mind of the enemy... the fear to attack. And so, because of the automated and irrevocable decision making process which rules out human meddling, the doomsday machine is terrifying. It's simple to understand. And completely credible, and convincing. [...] Yes, but the... whole point of the doomsday machine... is lost... IF YOU KEEP IT A SECRET! Why didn't you tell the world, eh?"
That's what happens when you don't LetTheHumanPullTheTrigger.
To be fair, the Russian diplomat explains that they were about to announce it to the world when the trouble with Gen. Ripper started, which in a way is an argument against LetTheHumanPullTheTrigger, or at least, 'let a single human pull the trigger'.