User:Kzollman/Nash equilibrium


Nash equilibrium
A solution concept in game theory
Relationships
Subset of: Rationalizability, Correlated equilibrium
Superset of: Evolutionarily stable strategy, Subgame perfect equilibrium, Perfect Bayesian equilibrium, Trembling hand perfect equilibrium
Significance
Proposed by: John Forbes Nash
Used for: All non-cooperative games
Example: Prisoner's dilemma

In game theory, the Nash equilibrium (named after John Nash, who proposed it) is a solution concept for games involving two or more players: a set of strategies, one for each player, such that no player has anything to gain by unilaterally changing her strategy. If each player has chosen a strategy and no player can benefit by changing her strategy while the other players keep theirs unchanged, then the current set of strategy choices constitutes a Nash equilibrium.

The concept of the Nash equilibrium is not entirely original to Nash; Antoine Augustin Cournot demonstrated a method for finding what we now call the Nash equilibrium of the Cournot duopoly game. However, Nash showed for the first time in his dissertation, Non-cooperative games (1950), that a Nash equilibrium (possibly in mixed strategies) exists for every finite game with any number of players. Until then, this had been proved only for two-player zero-sum games, by John von Neumann and Oskar Morgenstern (1947).


Examples

Prisoner's dilemma

A canonical Prisoner's dilemma
                       Player 2 Cooperates    Player 2 Defects
Player 1 Cooperates    -0.5, -0.5             -10, 0
Player 1 Defects       0, -10                 -2, -2
Main article: Prisoner's dilemma

Two suspects, A and B, are arrested by the police. The police have insufficient evidence for a conviction, and, having separated both prisoners, visit each of them to offer the same deal: if one testifies for the prosecution against the other and the other remains silent, the betrayer goes free and the silent accomplice receives the full 10-year sentence. If both stay silent, the police can sentence both prisoners to only six months in jail for a minor charge. If each betrays the other, each will receive a two-year sentence. Each prisoner must make the choice of whether to betray the other or to remain silent. However, neither prisoner knows for sure what choice the other prisoner will make.

This situation is represented in the payoff matrix to the right. In this game the unique Nash equilibrium is (Defect, Defect). To demonstrate this, two things must be shown: first, that (Defect, Defect) is in fact a Nash equilibrium, and second, that no other strategy profile is.

To demonstrate that (Defect, Defect) is a Nash equilibrium, we consider whether either player would do better by unilaterally deviating. Suppose both players defect. If one switches to cooperation while the other keeps defecting, the switcher does strictly worse (she will go to jail for 10 years instead of 2). It is important to note that it does not matter that both players could do better if they deviated together and cooperated; this does not invalidate (Defect, Defect) as a Nash equilibrium.

Now we consider the other strategy profiles in order to show that they are not Nash equilibria. First, suppose both players cooperate. Player 1 would do better by changing his strategy to Defect (he would go free instead of serving six months in jail). Again, the fact that both would do worse if they both defected is irrelevant; only unilateral deviations matter. Second, suppose one player defects and the other cooperates. Here the cooperating player would do better by choosing Defect (she would receive a 2-year sentence instead of a 10-year one).
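The deviation check described above can be carried out mechanically. The following sketch (in Python, not part of the original article; helper names such as is_pure_nash are hypothetical) enumerates all strategy profiles of the game in the payoff matrix and confirms that (Defect, Defect) is the only pure strategy Nash equilibrium.

    from itertools import product

    # Prisoner's dilemma payoffs keyed by (player 1's action, player 2's action);
    # "C" = cooperate (stay silent), "D" = defect (betray). Values are (payoff 1, payoff 2).
    payoffs = {
        ("C", "C"): (-0.5, -0.5), ("C", "D"): (-10, 0),
        ("D", "C"): (0, -10),     ("D", "D"): (-2, -2),
    }
    actions = ["C", "D"]

    def is_pure_nash(a1, a2):
        """A profile is a Nash equilibrium if neither player gains by deviating alone."""
        u1, u2 = payoffs[(a1, a2)]
        no_gain_1 = all(payoffs[(d, a2)][0] <= u1 for d in actions)
        no_gain_2 = all(payoffs[(a1, d)][1] <= u2 for d in actions)
        return no_gain_1 and no_gain_2

    print([prof for prof in product(actions, actions) if is_pure_nash(*prof)])
    # [('D', 'D')] -- mutual defection is the only pure strategy Nash equilibrium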

A more complex game

A competition game
                        Player 2 chooses '0'   Player 2 chooses '1'   Player 2 chooses '2'   Player 2 chooses '3'
Player 1 chooses '0'    0, 0                   2, -2                  2, -2                  2, -2
Player 1 chooses '1'    -2, 2                  1, 1                   3, -1                  3, -1
Player 1 chooses '2'    -2, 2                  -1, 3                  2, 2                   4, 0
Player 1 chooses '3'    -2, 2                  -1, 3                  0, 4                   3, 3

Consider the following two-player game: both players simultaneously choose a whole number from 0 to 3. Both players then win the minimum of the two numbers in dollars. In addition, if one player chooses a larger number than the other, then he has to pay $2 to the other. This game has a unique Nash equilibrium: both players choosing 0. Any other choice of strategies can be improved if one of the players lowers his number to one less than the other player's number (or matches a choice of 0). For example, starting from the profile (3, 3) in the table above, where each player wins $3, player 1 does better by switching to 2 (winning $4), and player 2 likewise does better by switching to 2. If the game is modified so that the two players win the named amount only if they both choose the same number, and otherwise win nothing, then there are four Nash equilibria: (0,0), (1,1), (2,2), and (3,3).
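As an illustrative aside, the payoffs of this game can be generated directly from the stated rule and the same unilateral-deviation check applied. The short Python sketch below (not part of the original article; the function names are hypothetical) confirms that (0, 0) is the unique pure strategy Nash equilibrium.

    from itertools import product

    def payoff(a1, a2):
        """Payoffs under the stated rule: both win the minimum of the two numbers,
        and whoever names the larger number pays $2 to the other."""
        u1 = u2 = min(a1, a2)
        if a1 > a2:
            u1, u2 = u1 - 2, u2 + 2
        elif a2 > a1:
            u1, u2 = u1 + 2, u2 - 2
        return u1, u2

    choices = range(4)

    def is_pure_nash(a1, a2):
        u1, u2 = payoff(a1, a2)
        return (all(payoff(d, a2)[0] <= u1 for d in choices) and
                all(payoff(a1, d)[1] <= u2 for d in choices))

    print([prof for prof in product(choices, choices) if is_pure_nash(*prof)])
    # [(0, 0)] -- the unique Nash equilibrium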

Mixed strategy Nash equilibria

Matching pennies
         Heads    Tails
Heads    1, -1    -1, 1
Tails    -1, 1    1, -1

In the game matching pennies there is no pure strategy Nash equilibrium. If both players choose the same strategy (both heads or both tails), player 2 does better by switching; if they choose different strategies, player 1 does better by switching. Despite this, matching pennies has a Nash equilibrium in mixed strategies. When a player uses a mixed strategy, instead of choosing a single action she randomizes her choice, assigning positive probability to two or more pure strategies. In matching pennies, the mixed strategy Nash equilibrium has both players choosing each strategy with equal probability. This can be verified by noting that if player 1 plays heads with probability one half, then anything player 2 does is as good as anything else (the best he can do is get 0 on average). The same holds for player 1 when player 2 chooses heads with probability one half.
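A minimal numerical check of this indifference claim, assuming the payoff matrix above (Python, not part of the original article):

    import numpy as np

    # Matching pennies: rows are player 1's actions (Heads, Tails), columns player 2's.
    payoff2 = np.array([[-1, 1],
                        [ 1, -1]])   # player 2's payoffs only

    p1_mix = np.array([0.5, 0.5])    # player 1 plays Heads/Tails with equal probability

    # Player 2's expected payoff for each pure action against that mix:
    print(p1_mix @ payoff2)          # [0. 0.] -- player 2 is indifferent, 0 on average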

This illustrates an odd characteristic of mixed strategy Nash equilibria. In a mixed strategy Nash equilibrium, a player's equilibrium probabilities are not pinned down by her own payoffs: given the other players' strategies, she is indifferent among the pure strategies she mixes over. Instead, each player's mixture is the one that makes the other players indifferent between their strategies. This fact has led to some controversy over the interpretation of Nash equilibrium, which will be discussed below.
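To illustrate this point, one can solve the indifference condition directly for a 2x2 game: player 1's equilibrium mixture is determined entirely by player 2's payoffs. The sketch below (the helper indifference_mix is a hypothetical name, and a fully mixed equilibrium is assumed) recovers the 50/50 mix for matching pennies.

    def indifference_mix(B):
        """Probability p with which player 1 should play her first action so that
        player 2 is indifferent between his two actions, in a 2x2 game.
        B[i][j] is player 2's payoff when player 1 plays i and player 2 plays j.
        Assumes an interior (fully mixed) equilibrium exists."""
        denom = B[0][0] - B[1][0] - B[0][1] + B[1][1]
        return (B[1][1] - B[1][0]) / denom

    # Matching pennies: player 2's payoffs (he wins on a mismatch).
    B = [[-1, 1], [1, -1]]
    print(indifference_mix(B))  # 0.5 -- determined entirely by player 2's payoffs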

Nash equilibria and dominance

Conditions for Nash play

Interpretations of Nash equilibria

Formal definition and existence of Nash equilibria

Let (S, f) be a game with n players, where S = S_1 \times \cdots \times S_n is the set of strategy profiles and f = (f_1, \ldots, f_n) is the profile of payoff functions. When each player i \in \{1, \ldots, n\} chooses strategy x_i \in S_i, the resulting strategy profile is x = (x_1, \ldots, x_n) and player i obtains payoff f_i(x). A strategy profile x^* \in S is a Nash equilibrium (NE) if no deviation in strategy by any single player is profitable for that player, that is, if for every player i and every strategy x_i \in S_i,

f_i(x^*) \geq f_i(x_i, x^*_{-i}).

A game can have a pure strategy NE or an NE in its mixed extension, in which players choose pure strategies stochastically with fixed probabilities. Nash proved that, if mixed strategies are allowed, then every game with finitely many players in which every player can choose from finitely many pure strategies admits at least one Nash equilibrium.
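Because expected payoffs are linear in each player's own mixture, a mixed profile is a Nash equilibrium exactly when no player can gain by deviating to a pure strategy. A minimal sketch of this check for two-player games, assuming NumPy and the hypothetical helper name is_nash (not part of the original article):

    import numpy as np

    def is_nash(A, B, p, q, tol=1e-9):
        """Check whether the mixed profile (p, q) is a Nash equilibrium of the
        bimatrix game (A, B), where A[i, j] / B[i, j] are the payoffs to
        players 1 / 2 when they play pure strategies i and j. By linearity of
        expected payoffs it suffices to compare against pure-strategy deviations."""
        u1 = p @ A @ q          # player 1's expected payoff under (p, q)
        u2 = p @ B @ q          # player 2's expected payoff under (p, q)
        best1 = (A @ q).max()   # best player 1 could get by deviating
        best2 = (p @ B).max()   # best player 2 could get by deviating
        return u1 >= best1 - tol and u2 >= best2 - tol

    # Matching pennies: the 50/50 mix for both players is an equilibrium,
    # while a pure profile is not.
    A = np.array([[1, -1], [-1, 1]]); B = -A
    print(is_nash(A, B, np.array([0.5, 0.5]), np.array([0.5, 0.5])))  # True
    print(is_nash(A, B, np.array([1.0, 0.0]), np.array([1.0, 0.0])))  # False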

Proof sketch

Let \sigma_{-i} be a mixed strategy profile of all players except player i. We can define a best response correspondence for player i, b_i: a correspondence from the set of all probability distributions over the opponents' strategy profiles to the set of player i's mixed strategies, such that each element of b_i(\sigma_{-i}) is a best response to \sigma_{-i}. Define

b(\sigma) = b_1(\sigma_{-1}) \times b_2(\sigma_{-2}) \times \cdots \times b_n(\sigma_{-n}).

One can use the Kakutani fixed point theorem to prove that b has a fixed point: b is non-empty and convex valued (expected payoffs are linear in a player's own mixture, so the set of best responses is convex) and has a closed graph, so the theorem applies. That is, there is a \sigma^* such that \sigma^* \in b(\sigma^*). Since b(\sigma^*) consists of the best responses of all players to \sigma^*, the existence of this fixed point shows that there is some strategy profile which is a best response to itself. No player could do better by unilaterally deviating, and it is therefore a Nash equilibrium.
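The Kakutani argument is non-constructive, but as an illustrative aside one can watch a fixed point of the best-response map emerge numerically: in two-player zero-sum games such as matching pennies, fictitious play (each player repeatedly best-responding to the opponent's empirical mixture) is known to converge, and its empirical frequencies approach the (1/2, 1/2) equilibrium. A rough Python sketch, with hypothetical variable names and not part of the original article:

    import numpy as np

    # Matching pennies payoff matrices (zero-sum): player 1 wins on a match.
    A = np.array([[1, -1], [-1, 1]])
    B = -A

    counts1 = np.array([1.0, 1.0])  # observed play counts of player 1 (uniform prior)
    counts2 = np.array([1.0, 1.0])  # observed play counts of player 2

    for _ in range(100000):
        q = counts2 / counts2.sum()   # player 1's belief about player 2
        p = counts1 / counts1.sum()   # player 2's belief about player 1
        a1 = int(np.argmax(A @ q))    # player 1 best-responds to that belief
        a2 = int(np.argmax(p @ B))    # player 2 best-responds to that belief
        counts1[a1] += 1
        counts2[a2] += 1

    # Empirical frequencies approach the mixed equilibrium [0.5, 0.5] for both players.
    print(counts1 / counts1.sum(), counts2 / counts2.sum())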