
Thursday, January 29, 2015

Game theory post 6 of N: the anxiety of rationality

The first five posts have pretty much laid out the basics of functional day-to-day game theory. (Well, I still need to do an information sets post.  Don't let me leave without doing one!)  Together, they amount to sort of the “street law” of the game theory world---the stuff a non-specialist actually tends to use on a regular basis. Now it’s time to delve into some worries that have been tabled for a while, plus a little bit of the fancier stuff. Howard has kindly allowed me to linger a little bit past my designated month in order to finish this series, so more to follow soon.

One of the big issues left lingering is the question of rationality. Most game theoretic research is built on the much-loathed “rational actor model,” according to which, roughly, people are treated as if they have stuff they want to achieve, which they weigh up together in some fashion and then pursue in the most direct way, by taking the acts that yield them the best expected goal-satisfaction. Yet there are many people who worry---sometimes rightly, sometimes not---that actual human decision-makers don’t act that way.

Today, I’m going to defend the rational actor model a little bit, by talking about how sometimes, when we criticize it, we misunderstand what “rationality” means.* Onward:

I have to lead this off with one of Hume’s most infamous quotes. This is from the reason-as-slave-of-the-passions bit (danger: casual European Enlightenment racism included). 


Where a passion is neither founded on false suppositions, nor chuses means insufficient for the end, the understanding can neither justify nor condemn it. It is not contrary to reason to prefer the destruction of the whole world to the scratching of my finger. It is not contrary to reason for me to chuse my total ruin, to prevent the least uneasiness of an Indian or person wholly unknown to me. It is as little contrary to reason to prefer even my own acknowledged lesser good to my greater, and have a more ardent affection for the former than the latter. 

What does this mean? The claim Hume is defending here is that rationality is relative to preferences. Judgments of rationality should not be judgments of the goodness or badness of the goals one has (with the possible exception of when they’re internally inconsistent), either for the world or for oneself. Rather, rationality is, in every important sense, means-end rationality. One is rational when one is good at figuring out how to achieve one’s preferences, where we imagine those preferences as exogenously set.

Now, this is actually a non-trivial (by which I mean “controversial”) philosophical view, which goes under the name “instrumentalism.” But---and this is important---the claim is controversial as a matter of philosophy of action, not as a matter of social science. What I mean by that obscure sentence is this: it may make sense to say that we can, philosophically, attribute claims of value to people who carry out intentional acts. However, if we’re actually trying to predict what people will do (which, remember, is primarily what we’re trying to do with this game theoretic enterprise), we ought not to judge their preferences. Instead, we ought to try to figure them out, and when we arrive at our best guess as to what they are, take them as given, and draw our conclusions about whether people are “rational” or not by proceeding from those exogenously set preferences to behavioral predictions.

Thus, if, as practical social scientists, we observe people behaving differently from how our fancy models predict, that might mean that they’re irrational. Or it might mean that their preferences are just different from what we think they are.

Two famous examples. First, the “ultimatum game.” The simplest possible bit of game theory. Two players, a fixed pot of money. Player 1 proposes a split; player 2 then decides whether to accept or reject. If P2 accepts, the split is implemented; if P2 rejects, nobody gets anything. There are two subgame perfect equilibria to this game: a) P1 offers zero, and P2 accepts anything offered; and b) P1 offers the smallest possible nonzero amount, and P2 rejects zero but accepts everything else. (The first of those is only an equilibrium because P2 is indifferent between accepting and rejecting when offered zero; nobody really cares about it.)
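To see where those two equilibria come from, here’s a minimal backward-induction sketch of a discretized version of the game. The $10 pot and whole-dollar offer grid are my illustrative assumptions, not anything from the formal literature:

```python
# Backward induction in a discretized ultimatum game.
# Assumptions (mine, for illustration): a $10 pot, whole-dollar offers.
POT = 10
offers = range(POT + 1)  # P1 can offer P2 anything from 0 to POT

# Equilibrium (a): P2 accepts every offer (accepting zero is still a
# best response, since P2 is indifferent at zero). P1's best reply:
offer_a = max(offers, key=lambda o: POT - o)  # P1 keeps the remainder

# Equilibrium (b): P2 rejects zero and accepts anything else.
# P1's best reply is then the smallest nonzero offer:
offer_b = max(offers, key=lambda o: POT - o if o > 0 else 0)

print(f"Equilibrium (a): P1 offers {offer_a}, keeps {POT - offer_a}")  # offers 0
print(f"Equilibrium (b): P1 offers {offer_b}, keeps {POT - offer_b}")  # offers 1
```

Either way, the money-maximizing prediction is that P1 keeps (almost) everything---which is exactly the prediction the experiments blow up.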

The thing with the ultimatum game is that basically nobody plays "equilibrium strategies," if by "equilibrium strategies" you mean "the equilibria I just mentioned, which are rooted in the totally idiotic assumption that utility is the same thing as money." (They are not!  They are not the same!) P1s in experimental contexts almost always offer more than the bare minimum; P2s almost never accept the bare minimum.

There are two explanations for this failure of prediction: 1) people are dumb, and 2) people care about their pride, fairness, not having to accept insulting offers, etc., more than money. Experimental economists have gone to some lengths to try to distinguish the two, but it’s actually hard to tease out these kinds of fairness motivations. (The paper I just linked, for example, seems seriously confused to me: it tries to eliminate fairness considerations by delinking ultimate payoffs from round-by-round actions, but fails to consider that the fairness consideration might not be about the distribution of ultimate payoffs but about things like not being treated badly in a given round---that is, it ignores the expressive aspect of fairness.) It would be a bad mistake, observing the empirical results of the ultimatum game experiments, to leap to the conclusion “people are irrational, so game theory is useless!” The conclusion “people care about more than just the amount of money they receive” is equally plausible, and matches our experience of things like, well, hell, like trading money for status and self-worth and positive self-and-other impression management all the time. How does Rolex stay in business again? 
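Explanation 2 is easy to make concrete. Here’s a toy utility function for P2 that prices in the insult of a lowball offer; the functional form and the `insult_cost` number are entirely my assumptions, chosen just to show that “rejecting free money” can be a rational best reply once utility isn’t the same thing as money:

```python
# Toy P2 utility: money minus an "insult" cost that grows as the offer
# falls short of an even split. Parameters are illustrative assumptions.
POT = 10

def p2_utility(offer, accept, insult_cost=3.0):
    if not accept:
        return 0.0  # rejection: no money, but no insult swallowed either
    shortfall = max(0.0, POT / 2 - offer)  # distance below an even split
    return offer - insult_cost * (shortfall / (POT / 2))

for offer in (0, 1, 3, 5):
    accepts = p2_utility(offer, True) >= p2_utility(offer, False)
    print(f"offer={offer}: P2 {'accepts' if accepts else 'rejects'}")
    # rejects 0 and 1, accepts 3 and 5
```

With these preferences, rejecting the bare minimum is equilibrium play, not a mistake---which is the whole point about taking preferences as given before declaring anyone irrational.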

Second famous example. Voting. Why do people vote? This is something that political scientists have struggled with for, seriously, decades. (That may say more about political scientists than it does about voting.) On one account, it’s a strategic problem: we have preferences over policy (or over the things we get from policy, like lower taxes and a reduced risk of being thrown in jail/a higher shot at getting the people we don’t like thrown in jail), and voting allows us to influence that policy with some nonzero probability. Basically, this is the probability of being the decisive voter. So, in principle, there’s some equilibrium number of voters, such that those who do not vote would do worse by voting (because the cost of voting, like standing in long lines and taking time off work, is not worth the probability-weighted policy benefit to be gained), and those who do vote would do worse by not voting (for the opposite reason).

The problem with this model of voter motivation is that, given the number of people who actually vote, the probability of being the decisive voter in a given election, in a big country like the U.S., is really really really tiny. (Ok, maybe it’s a bit bigger if you’re voting in the race for town dogcatcher. But who cares?) Yet lots of people vote, even in things like presidential elections. So we’re probably not playing equilibrium strategies based on the model of voting behavior which imagines people motivated by probability-weighted policy outcomes. Are people just stupid?
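Just how tiny? Here’s a rough sketch under a deliberately crude model---the other voters as independent coin flips favoring candidate A with probability p, your vote decisive only on an exact tie. The electorate sizes and the 51/49 split are my illustrative numbers:

```python
# Rough probability of casting the tie-making vote in a two-candidate
# race, modeling the other voters as independent coin flips with
# probability p of favoring candidate A. Numbers are illustrative.
import math

def pivotal_prob(n_others, p=0.5):
    """P(exact tie among the other voters), computed in log space."""
    if n_others % 2:      # an exact tie needs an even count
        n_others -= 1
    k = n_others // 2
    log_prob = (math.lgamma(n_others + 1) - 2 * math.lgamma(k + 1)
                + k * math.log(p) + k * math.log(1 - p))
    return math.exp(log_prob)

print(f"Dogcatcher race, 1,000 voters: {pivotal_prob(1_000):.1e}")
print(f"100M voters, dead-even race:   {pivotal_prob(100_000_000):.1e}")
print(f"100M voters, 51/49 race:       {pivotal_prob(100_000_000, 0.51):.1e}")
```

Even in a perfectly dead-even national race the number is small, and the moment the race tilts even slightly away from 50/50, the probability collapses so far that it underflows ordinary floating point.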

Well, waaay back in 1968, Riker and Ordeshook wrote a famous (or infamous) paper, which, stripping away the huge amount of math, basically says “yo, maybe people derive utility from voting itself.” They expressed this with a term “D” in a utility function, where “D,” in polisci grad seminars, tends to be summarized as standing for “duty,” but which really captures a whole slew of kinds of non-policy-related preference satisfactions that come from voting, like being a good citizen, participating in shared sovereignty, expressing one’s commitments, etc.
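Stripped of the math, the Riker-Ordeshook calculus says: vote if and only if pB + D > C, where p is the probability of being decisive, B the policy benefit of your side winning, C the cost of voting, and D that grab-bag duty term. A two-line sketch (the specific numbers are my assumptions, just to show the mechanics):

```python
# Riker & Ordeshook's calculus of voting: vote iff p*B + D > C.
# p: prob. of being decisive, B: policy benefit, C: cost of voting,
# D: direct utility from the act of voting itself. Numbers illustrative.
def votes(p, B, C, D):
    return p * B + D > C

print(votes(p=1e-8, B=10_000, C=5, D=0))  # False: policy motive alone loses to cost
print(votes(p=1e-8, B=10_000, C=5, D=6))  # True: a modest D flips the decision
```

Because p is minuscule, the pB term is effectively noise; D does all the work, which is both the model’s rescue and, as the next paragraph notes, its problem.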

There are two things we might think about Riker and Ordeshook’s D. The first is: “How pointless! This just kills any ambition of models of voter rationality to tell us anything useful or predictive about the world, because anytime we see someone who votes despite our models predicting the opposite, we just get to conclude that they must have had a bigger D than we expected!” (Although, in fairness, experimentalists get cleverer and cleverer every year at coming up with sneaky ways to tease these things out.)

The second thing we might think is: “Duh! Of course that’s why people vote.”



* Not always. Sometimes we’re wrong to criticize it because we fail to understand ways in which people might actually behave rationally---such as when they operate in an environment, like competitive markets, which selects irrational actors out. Sometimes we’re wrong to criticize it because by “irrationality” we just mean “lack of information.” (I really need to write a big omnibus post about information in game theory, actually. It may happen.) Sometimes we’re just right to criticize it, because there actually is a ton of psychological evidence for “bounded rationality,” a set of results about how people behave in systematically ends-frustrating ways, like “hyperbolic discounting.” I’ll write a post about that soon too.

Posted by Paul Gowder on January 29, 2015 at 12:11 PM in Games | Permalink
