
Thursday, January 22, 2015

Game theory post #2 of ????: Basic Concepts

This is the second post in an indefinite series on game theory for law professors. In this one, I'll describe some basic concepts---the rudimentary language of game theory, as a vocabulary list. This page, incidentally, has even simpler definitions of some of the concepts described here, as well as a few concrete examples.

Let us begin, however, by fixing an idea of our task in mind. We have at least two players, where a player can be any entity that makes choices and receives payoffs---depending on the level of analysis, players can be individuals, firms, governments, or a combination of them. Each player can make moves: actions that, in conjunction with other players' moves, affect the state of the world (the outcomes experienced by that player as well as others). And each player has a utility function mapping probability-weighted states of the world to a preference ordering. Our goal is to say something intelligent about what the players have incentives to do---often, although not always, with the assumption that they are sufficiently rational that they will do what their incentives point toward, but let us bracket that issue for the time being. That saying of something intelligent is also known as "solving" the game. Also, I will only be discussing non-cooperative game theory; there's a branch of game theory called cooperative game theory too, but I know it less well and never use it. (Those of you who study things like constitution-making and contracts might look into it, though.)

Strategic and extensive form games

There are two classic ways to visualize a game. The first is strategic form (a.k.a. "normal form"), represented as a chart that displays the possible combinations of moves and the payoffs each combination yields for each player. The second is extensive form, which represents the possible moves and payoffs in order, like a tree diagram, with the payoffs at the ends of the branches. This webpage gives a good example of each. Note that these are merely representations: it's possible (and often sensible) to do game theory without using these kinds of pictures, but the pictures are a good way of summarizing the issues in play, and you'll see them in most game theory articles.
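For those who like to see things concretely, here's a minimal sketch (in Python, with made-up payoff numbers) of a two-player strategic-form game written out as a data structure: each cell of the chart maps a pair of moves to a pair of payoffs.

```python
# A 2x2 strategic-form ("normal form") game as a plain data structure.
# The payoff numbers are invented for illustration.
# Each key is a (row_move, column_move) pair; each value is
# (row_player_payoff, column_player_payoff).
game = {
    ("Up", "Left"):    (3, 1),
    ("Up", "Right"):   (0, 0),
    ("Down", "Left"):  (1, 2),
    ("Down", "Right"): (2, 3),
}

for (row_move, col_move), (pay1, pay2) in game.items():
    print(f"Row plays {row_move}, Column plays {col_move}: "
          f"payoffs ({pay1}, {pay2})")
```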

Simultaneous and sequential play

Consider two different kinds of real-world game. First, consider soccer, and a player racing toward the goalie. A good simplified model of the decisions facing the players there is that each chooses (respectively) to kick the ball or leap to one side of the net or the other. Things happen fast enough that they choose simultaneously---by the time the player with the ball kicks, the goalie needs to have already decided which direction to leap, without having observed the kicker's choice. Or, and equivalently for practical purposes in most cases, we might imagine the players having chosen whenever they want, but secretly---the goalie might have decided to leap to a given side six months in advance, but we can model the choices as simultaneous as long as the players don't tell one another. (It's actually the secrecy---more precisely, the lack of information---that really matters for modeling purposes; the traditional language of simultaneity is basically just shorthand.) This is a classic simultaneous game, and is most easily represented in the strategic form.
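For concreteness, here's the soccer interaction sketched as a simultaneous-move game in code, with stylized payoffs I've made up for illustration (1 if the kicker scores, -1 if the goalie saves, and the goalie's payoff the mirror image):

```python
# The kicker/goalie interaction as a simultaneous-move, zero-sum game.
# Payoffs are stylized: the kicker scores (payoff 1) when the goalie
# guesses wrong and is stopped (payoff -1) when the goalie guesses right;
# the goalie's payoff is always the negation of the kicker's.
moves = ["Left", "Right"]

def kicker_payoff(kick, leap):
    return -1 if kick == leap else 1

for kick in moves:
    for leap in moves:
        k = kicker_payoff(kick, leap)
        print(f"Kicker kicks {kick}, goalie leaps {leap}: "
              f"kicker {k:+d}, goalie {-k:+d}")
```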

By contrast, think about chess or go. In such games, players take turns: they can see what the other player did before making their own decision. Typically, we represent such games in the extensive form.
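In code, an extensive-form game is naturally a tree. A minimal sketch, again with invented payoffs: player 1 moves first, player 2 observes that move and responds, and the leaves hold the payoffs.

```python
# A tiny sequential game as a tree (extensive form); payoffs invented.
# Player 1 moves first; Player 2 observes that move and then responds.
# Leaves are (player1_payoff, player2_payoff) tuples.
tree = {
    "A": {"x": (2, 1), "y": (0, 0)},  # Player 2's responses to A
    "B": {"x": (1, 2), "y": (3, 0)},  # Player 2's responses to B
}

# Walk the tree to list every complete path of play.
for move1, responses in tree.items():
    for move2, payoffs in responses.items():
        print(f"P1 plays {move1}, P2 plays {move2}: payoffs {payoffs}")
```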

Note that it's possible to have elements of both kinds of games in a single complex strategic interaction. For example, in litigation, some elements of players' strategies are concealed from one another, like how much to spend on investigation and research before the initiation of suit; others, like which procedural motions are filed, are visible. There are fancier things we can do to represent these, like specifying players' "information sets." We can also imagine a series of simultaneous games played between the same players multiple times, where the players can see what one another did in previous rounds (see below).

As you can see by now, a lot of the action in game theory is in specifying how much information players have about what the other players are doing, their payoffs, exogenously set states of the world, etc. I won't introduce much detail on this here, but might write a future post all about information.

Strategies

A player's strategy is a complete specification of his or her moves across the whole game---that is, at every possible state of behavior from other players. (To be more precise, a complete strategy is a specification of a player's moves at all of his or her information sets, where the notion of an information set treats everything that looks the same to the player as identical. Obviously, a player can't have a strategy that generates different actions depending on states of behavior that the player can't distinguish, given what the player knows. But let's leave that aside for the moment.) A strategy can be simple ("no matter what the other player does, I'll kick to the left") or complex ("at any given point, if the goalie leapt to the left at least three out of the last five times, then I'll kick to the left, otherwise right"). It can also be probabilistic---this is called a "mixed strategy" ("I'll kick to the left with probability .45, and to the right with probability .55").
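Here are those kinds of strategy from the soccer example rendered as code---a minimal sketch; the probabilities and the three-out-of-five rule come straight from the examples above:

```python
import random

# A simple (pure) strategy: ignore everything, always kick left.
def always_left(last_five_leaps):
    return "Left"

# A complex history-dependent strategy: kick left if the goalie leapt
# left at least three of the last five times, otherwise right.
def history_dependent(last_five_leaps):
    return "Left" if last_five_leaps.count("Left") >= 3 else "Right"

# A mixed strategy: a probability distribution over moves---left with
# probability .45, right with probability .55.
def mixed(last_five_leaps):
    return "Left" if random.random() < 0.45 else "Right"

recent = ["Left", "Right", "Left", "Left", "Right"]
print(always_left(recent), history_dependent(recent), mixed(recent))
```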

Dominance

A strategy is dominant if it's always best for the player. The classic case of dominance is in the prisoners' dilemma, with which many of you will be familiar (and which I'll discuss in a later post): "always defect" is a dominant strategy for each player, in that it optimizes that player's payoff no matter what the other player does. Dominance is divided into two categories: strict dominance and weak dominance. Strict dominance means that the strategy always does better than competing strategies. Weak dominance means that the strategy always does at least as well as competing strategies.
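To make strict dominance concrete, here's the prisoners' dilemma in code, using conventional illustrative payoff numbers (mine, not from any particular source), with a check that "defect" does strictly better than "cooperate" against everything the other player might do:

```python
# The prisoners' dilemma with conventional illustrative payoffs
# (higher is better). pd[(my_move, other_move)] = my payoff.
pd = {
    ("Cooperate", "Cooperate"): 3,
    ("Cooperate", "Defect"):    0,
    ("Defect",    "Cooperate"): 5,
    ("Defect",    "Defect"):    1,
}

# "Defect" strictly dominates "Cooperate": it does strictly better
# against every move the other player might make.
for other in ("Cooperate", "Defect"):
    assert pd[("Defect", other)] > pd[("Cooperate", other)]
print("Defect strictly dominates Cooperate.")
```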

One way to solve a game is known as "iterated deletion of strictly dominated strategies." (Deleting weakly dominated strategies is a dicier proposition.) That means what it says it means. Look at the strategies available to the players. If one is strictly dominated by something else, chop it out. Keep doing this until there aren't any strictly dominated strategies left. If there's only one strategy left for each player, there's your solution. Even if there are multiple strategies left, at the very least you know that no rational player will choose one of the strategies you removed, because why would anyone choose a strategy that's strictly dominated by some other strategy? (An easy practical way to do this for simple games is to write out each strategy as an ordered set of payoffs corresponding to possible moves by the other player, then just delete the ones that are lower than anything else left standing.)
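And here's the procedure itself as a short program---a sketch only, which checks domination by other pure strategies, run on a made-up textbook-style game that happens to be solvable all the way down to a single cell:

```python
# Iterated deletion of strictly dominated strategies, for a two-player
# strategic-form game. The payoff numbers are invented for illustration.
# p1[i][j] is Row's payoff when Row plays i and Column plays j;
# p2[i][j] is Column's payoff in the same cell.
row_names = ["Up", "Down"]
col_names = ["Left", "Middle", "Right"]
p1 = [[1, 1, 0],
      [0, 0, 2]]
p2 = [[0, 2, 1],
      [3, 1, 0]]

rows = list(range(len(row_names)))
cols = list(range(len(col_names)))

def dominated_row(i):
    # i is strictly dominated if some other surviving row does strictly
    # better against every surviving column.
    return any(all(p1[k][j] > p1[i][j] for j in cols)
               for k in rows if k != i)

def dominated_col(j):
    return any(all(p2[i][k] > p2[i][j] for i in rows)
               for k in cols if k != j)

changed = True
while changed:
    changed = False
    for i in rows[:]:
        if dominated_row(i):
            rows.remove(i)
            changed = True
    for j in cols[:]:
        if dominated_col(j):
            cols.remove(j)
            changed = True

print("Surviving:", [row_names[i] for i in rows],
      [col_names[j] for j in cols])
# Prints: Surviving: ['Up'] ['Middle']
```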

Nash Equilibrium

Very few interesting games will be solvable by deleting dominated strategies. (You don't need game theory to predict that people won't do obviously stupid things.) However, every game (meeting basic criteria) has at least one Nash equilibrium. It's sort of the ur-solution concept. (There are lots of other solution concepts out there, and I'll discuss a few later, but they are all refinements of Nash equilibrium---every set of strategies that meets one of those other criteria is also a Nash equilibrium.)

A Nash equilibrium is very simple to describe: it's a set of strategies, one for each player, such that for each player i, if nobody else changes his or her strategy, player i can't improve his or her payoff by changing his or her strategy. (Note: most people introduce the notion of Nash equilibrium by way of the idea of a "best response." I don't really see the need to define a separate term for that, but if you care, read this.)
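For small games in strategic form, you can find all the pure-strategy Nash equilibria by brute force, just checking the definition cell by cell. A minimal sketch (invented payoffs; mixed-strategy equilibria require more machinery and won't be found this way):

```python
from itertools import product

# Brute-force search for pure-strategy Nash equilibria in a two-player
# strategic-form game, directly applying the definition above.
# Payoff numbers are invented; p1/p2 follow the convention used earlier.
p1 = [[2, 0],
      [0, 1]]
p2 = [[1, 0],
      [0, 2]]

n_rows, n_cols = len(p1), len(p1[0])

def is_nash(i, j):
    # Holding Column fixed at j, Row must not gain by deviating from i;
    # holding Row fixed at i, Column must not gain by deviating from j.
    row_ok = all(p1[k][j] <= p1[i][j] for k in range(n_rows))
    col_ok = all(p2[i][k] <= p2[i][j] for k in range(n_cols))
    return row_ok and col_ok

equilibria = [(i, j) for i, j in product(range(n_rows), range(n_cols))
              if is_nash(i, j)]
print("Pure-strategy Nash equilibria (row, column):", equilibria)
```

Note that this little coordination-style game has two pure-strategy equilibria, (0, 0) and (1, 1)---a preview of the multiplicity problem below.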

For predictive purposes, the key point about Nash equilibrium is that it would usually be silly to expect rational players to choose sets of strategies that aren't Nash equilibria. If a set of strategies isn't a Nash equilibrium, then at least one player can do better by switching, so why would that player not do so?

Many games have more than one Nash equilibrium. Some games technically have an infinite number. This raises a notoriously difficult problem, that of equilibrium selection, about which game theorists have spilled immense amounts of ink.

Repeated games

A game can have as many steps as you like, but sometimes those steps are repeated. An important class of games is those that consist of a single-round game (like the soccer example) played repeatedly. A game can be repeated a finite and known number of times, a finite and unknown number of times (indefinite repetition), or an infinite number of times. Finitely repeated games are often fairly easy to solve. (In a future post: backward induction and subgame perfect equilibrium, the basic tools for doing so.) Indefinitely and infinitely repeated games are in some sense even easier, and in some sense even harder. They're easier because a mad set of proofs known collectively as the "folk theorem" shows that a huge class of infinitely/indefinitely repeated games have a vast number of potential solutions. Above, where I said some games have an infinite number of equilibria? Here's a good place to find some. And that, of course, is the harder sense: there isn't much that can be reliably predicted, even for rational players.
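As a toy illustration (not a proof of anything), here's a repeated prisoners' dilemma, using the same invented payoffs as the dominance example above. "Grim trigger"---cooperate until the other player defects once, then defect forever---is the kind of history-dependent strategy that folk-theorem constructions use to sustain cooperation in indefinite or infinite play:

```python
# A toy repeated prisoners' dilemma; payoffs as in the dominance example
# above, and the strategies are illustrative only.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def grim_trigger(my_history, their_history):
    # Cooperate until the other player has ever defected.
    return "D" if "D" in their_history else "C"

def always_defect(my_history, their_history):
    return "D"

def play(strategy1, strategy2, rounds):
    h1, h2, totals = [], [], [0, 0]
    for _ in range(rounds):
        m1, m2 = strategy1(h1, h2), strategy2(h2, h1)
        pay1, pay2 = PAYOFF[(m1, m2)]
        totals[0] += pay1
        totals[1] += pay2
        h1.append(m1)
        h2.append(m2)
    return totals

print(play(grim_trigger, grim_trigger, 10))   # [30, 30]: cooperation holds
print(play(grim_trigger, always_defect, 10))  # [9, 14]: defection punished
```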

Posted by Paul Gowder on January 22, 2015 at 11:27 AM in Games | Permalink

Comments

Excellent series. I hope you will continue it.

Posted by: JCT | Jan 22, 2015 3:47:52 PM

Thanks!

Posted by: Paul Gowder | Jan 22, 2015 5:09:32 PM

Wanted to second the request to continue. This is wonderful and helpful stuff.

Posted by: Anon | Jan 22, 2015 10:09:11 PM

I would phrase your point about Nash equilibrium a little more weakly. If there is a stable pattern of play, you would expect it to be a Nash equilibrium, since otherwise someone would have an incentive to change. But it is less clear that, the first time a game is played, play would resemble a Nash equilibrium.

In real-world terms, if attorneys in a litigation generally follow some particular strategy, I would think it is reasonable to expect that it is a Nash equilibrium in some model of the litigation process. It is less clear that game theory has much to contribute if you want to predict how litigants and attorneys will behave in an unusual type of situation.

Posted by: Jr | Jan 23, 2015 6:19:04 AM

Absolutely correct and an important point to keep in mind: especially with more complex games, equilibrium play often tends to be the product of learning and/or selection.

Posted by: Paul Gowder | Jan 23, 2015 7:12:09 AM

"It is less clear that game theory has much to contribute if you want to predict how litigants and attorneys will behave in an unusual type of situation."

Yes, there's learning and selection, but I also think that as player sophistication grows, so do predictions about opponent responses. I've been in plenty of long, long strategy meetings in which responses to unusual situations are parsed and decisions are reached based on responses predicted three or four cycles later.

Posted by: Michael Risch | Jan 23, 2015 8:25:37 AM
