Zero-sum

In game theory and economic theory, zero-sum describes a situation in which a participant's gain or loss is exactly balanced by the losses or gains of the other participant(s). If the total gains of the participants are added up and the total losses are subtracted, they will sum to zero. Zero-sum can be thought of more generally as constant-sum, where the benefits and losses to all players sum to the same value. Cutting a cake is zero- or constant-sum because taking a larger piece reduces the amount of cake available for others. In contrast, non-zero-sum describes a situation in which the interacting parties' aggregate gains and losses can be less than or more than zero. Zero-sum games are also called strictly competitive.

Definition

The zero-sum property (if one gains, another loses) means that any result of a zero-sum situation is Pareto optimal. (Generally, any game where all strategies are Pareto optimal is called a conflict game.)[1]

Situations where participants can all gain or suffer together are referred to as non-zero-sum. Thus, a country with an excess of bananas trading with another country for its excess of apples, where both benefit from the transaction, is in a non-zero-sum situation. Other non-zero-sum games are games in which the sum of gains and losses by the players is sometimes more or less than what they began with.

The concept was first developed in game theory, and consequently zero-sum situations are often called zero-sum games, though this does not imply that the concept, or game theory itself, applies only to what are commonly referred to as games.

Solution

For two-player finite zero-sum games, the different game-theoretic solution concepts of Nash equilibrium, minimax, and maximin all give the same solution. In the solution, players play a mixed strategy.

Example

A zero-sum game (row player Red, column player Blue)

              A           B           C
    1      30, -30    -10,  10     20, -20
    2      10, -10     20, -20    -20,  20

A game's payoff matrix is a convenient representation. Consider, for example, the two-player zero-sum game shown above.

The order of play proceeds as follows: The first player (red) chooses in secret one of the two actions 1 or 2; the second player (blue), unaware of the first player's choice, chooses in secret one of the three actions A, B or C. Then, the choices are revealed and each player's points total is affected according to the payoff for those choices.

Example: Red chooses action 2 and Blue chooses action B. When the payoff is allocated, Red gains 20 points and Blue loses 20 points.

Now, in this example game both players know the payoff matrix and attempt to maximize the number of their points. What should they do?

Red could reason as follows: "With action 2, I could lose up to 20 points and can win only 20, while with action 1 I can lose only 10 but can win up to 30, so action 1 looks a lot better." With similar reasoning, Blue would choose action C. If both players take these actions, Red will win 20 points. But what happens if Blue anticipates Red's reasoning and choice of action 1, and deviously goes for action B, so as to win 10 points? Or if Red in turn anticipates this devious trick and goes for action 2, so as to win 20 points after all?
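To make this comparison concrete, the following short Python sketch (an illustration, not part of the original example; the name payoff_to_red is an assumption) lists the worst-case and best-case payoff of each of Red's actions:

    # Payoffs to Red from the matrix above: rows are Red's actions 1 and 2,
    # columns are Blue's actions A, B and C.
    payoff_to_red = [[30, -10, 20],
                     [10,  20, -20]]

    for action, row in zip((1, 2), payoff_to_red):
        print(f"action {action}: worst case {min(row)}, best case {max(row)}")
    # action 1: worst case -10, best case 30
    # action 2: worst case -20, best case 20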

John von Neumann had the fundamental and surprising insight that probability provides a way out of this conundrum. Instead of deciding on a definite action to take, the two players assign probabilities to their respective actions, and then use a random device which, according to these probabilities, chooses an action for them. Each player computes the probabilities so as to minimize the maximum expected point loss independent of the opponent's strategy. This leads to a linear programming problem whose solution gives the optimal strategies for each player. This minimax method can compute provably optimal strategies for all two-player zero-sum games.

For the example given above, it turns out that Red should choose action 1 with probability 4/7 and action 2 with probability 3/7, while Blue should assign the probabilities 0, 4/7 and 3/7 to the three actions A, B and C. Red will then win 20/7 points on average per game.
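These probabilities can be checked directly. The following Python sketch (illustrative only; the names payoff_to_red, red, and blue are assumptions of this example) computes each player's expected result against every pure action of the opponent, using exact fractions:

    from fractions import Fraction as F

    # Payoffs to Red: rows are Red's actions 1 and 2, columns are Blue's A, B, C.
    payoff_to_red = [[30, -10, 20],
                     [10,  20, -20]]

    red  = [F(4, 7), F(3, 7)]        # Red's probabilities for actions 1 and 2
    blue = [F(0), F(4, 7), F(3, 7)]  # Blue's probabilities for actions A, B, C

    # Red's expected gain against each of Blue's pure actions:
    for j in range(3):
        print(sum(red[i] * payoff_to_red[i][j] for i in range(2)))
    # prints 150/7, 20/7, 20/7 - Red's strategy guarantees at least 20/7

    # Red's expected gain, given Blue's mixed strategy, for each of Red's pure actions:
    for i in range(2):
        print(sum(blue[j] * payoff_to_red[i][j] for j in range(3)))
    # prints 20/7 twice - Blue's strategy holds Red to exactly 20/7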

Solving

The Nash equilibrium for a two-player, zero-sum game can be found by solving a linear programming problem. Suppose a zero-sum game has a payoff matrix M where element M_{i,j} is the payoff obtained when the minimizing player chooses pure strategy i and the maximizing player chooses pure strategy j (i.e. the player trying to minimize the payoff chooses the row and the player trying to maximize the payoff chooses the column). Assume every element of M is positive. The game will have at least one Nash equilibrium. The Nash equilibrium can be found by solving the following linear program to find a vector u:

Minimize:
    Σ_i u_i
Subject to the constraints:
    u ≥ 0
    M u ≥ 1

The first constraint says each element of the u vector must be nonnegative, and the second constraint says each element of the M u vector must be at least 1. For the resulting u vector, the inverse of the sum of its elements is the value of the game. Multiplying u by that value gives a probability vector, giving the probability that the maximizing player will choose each of the possible pure strategies.

If the game matrix does not have all positive elements, simply add a constant to every element that is large enough to make them all positive. That will increase the value of the game by that constant and will have no effect on the equilibrium mixed strategies.
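As an illustration only, the procedure above can be sketched with SciPy's general-purpose linear-programming routine scipy.optimize.linprog; the helper name solve_zero_sum is an assumption of this sketch, not a standard function:

    import numpy as np
    from scipy.optimize import linprog

    def solve_zero_sum(M):
        # M[i, j] is the payoff when the minimizing player picks row i and the
        # maximizing player picks column j.  Returns (game value, maximizer's
        # mixed strategy over the columns).
        M = np.asarray(M, dtype=float)
        shift = 1.0 - M.min()        # constant that makes every entry positive
        M_pos = M + shift            # raises the value by `shift`, nothing else
        n_rows, n_cols = M_pos.shape
        # minimize sum(u)  subject to  M_pos @ u >= 1  and  u >= 0
        res = linprog(c=np.ones(n_cols),
                      A_ub=-M_pos, b_ub=-np.ones(n_rows),
                      bounds=[(0, None)] * n_cols,
                      method="highs")
        u = res.x
        shifted_value = 1.0 / u.sum()        # value of the shifted game
        return shifted_value - shift, u * shifted_value

    # The example game: payoff_to_red has Red's actions on the rows, so it is
    # transposed to put Red (the maximizer) on the columns.
    payoff_to_red = np.array([[30, -10, 20],
                              [10,  20, -20]])
    value, red_strategy = solve_zero_sum(payoff_to_red.T)
    print(value)         # about 2.857, i.e. 20/7
    print(red_strategy)  # about [0.571, 0.429], i.e. [4/7, 3/7]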

The equilibrium mixed strategy for the minimizing player can be found by solving the dual of the given linear program. Alternatively, it can be found by applying the above procedure to a modified payoff matrix: the transpose and negation of M (with a constant added so all its elements are positive).
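Continuing the sketch above and reusing the assumed solve_zero_sum helper, the minimizing player's (Blue's) equilibrium strategy for the example game can be recovered from the transposed and negated matrix:

    # Transposing and negating M turns Blue into the column (maximizing)
    # player; since M above was payoff_to_red.T, the modified matrix is
    # simply -payoff_to_red.
    _, blue_strategy = solve_zero_sum(-payoff_to_red)
    print(blue_strategy)   # about [0, 0.571, 0.429], i.e. [0, 4/7, 3/7]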

If all the solutions to the linear program are found, they will constitute all the Nash equilibria for the game. Conversely, any linear program can be converted into a two-player, zero-sum game by using a change of variables that puts it in the form of the above equations. So such games are equivalent to linear programs, in general.

Non-zero-sum

Economics

Many economic situations are not zero-sum, since valuable goods and services can be created, destroyed, or badly allocated, and any of these will create a net gain or loss. Assuming the counterparties are acting rationally, any commercial exchange is a non-zero-sum activity, because each party must consider the goods it receives to be more valuable to it than the goods it delivers. Economic exchanges must benefit both parties enough above the zero-sum level that each party can cover its transaction costs.

Psychology

The most common and simplest example from the subfield of social psychology is the concept of "social traps". In some cases, pursuing individual interests can enhance collective well-being, but in other cases the parties' pursuit of their own ends leads to mutually destructive behavior.

Complexity

Robert Wright theorized in his book Nonzero: The Logic of Human Destiny that society becomes increasingly non-zero-sum as it becomes more complex, specialized, and interdependent. As former US President Bill Clinton stated:

The more complex societies get and the more complex the networks of interdependence within and beyond community and national borders get, the more people are forced in their own interests to find non-zero-sum solutions. That is, win–win solutions instead of win–lose solutions.... Because we find as our interdependence increases that, on the whole, we do better when other people do better as well — so we have to find ways that we can all win, we have to accommodate each other....

    Bill Clinton, Wired interview, December 2000

Extensions

In 1944, John von Neumann and Oskar Morgenstern proved that any zero-sum game involving n players is in fact a generalized form of a zero-sum game for two players, and that any non-zero-sum game for n players can be reduced to a zero-sum game for n + 1 players, the (n + 1)th player representing the global profit or loss.

References

  1. Bowles, Samuel (2004). Microeconomics: Behavior, Institutions, and Evolution. Princeton University Press, pp. 33–36. ISBN 0691091633.
