Thursday, January 29, 2015
Game theory post 6 of N: the anxiety of rationality
The first five posts have pretty much laid out the basics of functional day-to-day game theory. (Well, I still need to do an information sets post. Don't let me leave without doing one!) Together, they amount to sort of the “street law” of the game theory world---the stuff a non-specialist actually tends to use on a regular basis. Now it’s time to delve into some worries that have been tabled for a while, plus a little bit of the fancier stuff. Howard has kindly allowed me to linger a little bit past my designated month in order to finish this series, so more to follow soon.
One of the big issues left lingering is the question of rationality. Most game theoretic research is built on the much-loathed “rational actor model,” according to which, roughly, people are treated as if they have stuff they want to achieve, which they weigh up together in some fashion and then pursue in the most direct way, by taking the acts that yield them the best expected goal-satisfaction. Yet there are many people who worry---sometimes rightly, sometimes not---that actual human decision-makers don’t act that way.
Today, I’m going to defend the rational actor model a little bit, by talking about how sometimes, when we criticize it, we misunderstand what “rationality” means.* Onward:
I have to lead this off with one of Hume’s most infamous quotes. This is from the reason-as-slave-of-the-passions bit (danger: casual European Enlightenment racism included).
Where a passion is neither founded on false suppositions, nor chuses means insufficient for the end, the understanding can neither justify nor condemn it. It is not contrary to reason to prefer the destruction of the whole world to the scratching of my finger. It is not contrary to reason for me to chuse my total ruin, to prevent the least uneasiness of an Indian or person wholly unknown to me. It is as little contrary to reason to prefer even my own acknowledged lesser good to my greater, and have a more ardent affection for the former than the latter.
What does this mean? The claim Hume is defending here is that rationality is relative to preferences. Judgments of rationality should not be judgments of the goodness or badness of the goals one has (with the possible exception of when they’re internally inconsistent), either for the world or for oneself. Rather, rationality is, in every important sense, means-end rationality. One is rational when one is good at figuring out how to achieve one’s preferences, where we imagine those preferences as exogenously set.
Now, this is actually a non-trivial (by which I mean “controversial”) philosophical view, which goes under the name “instrumentalism.” But---and this is important---the claim is controversial as a matter of philosophy of action, not as a matter of social science. What I mean by that obscure sentence is this: it may make sense to say that we can, philosophically, attribute claims of value to people who carry out intentional acts. However, if we’re actually trying to predict what people will do (which, remember, is primarily what we’re trying to do with this game theoretic enterprise), we ought not to judge their preferences. Instead, we ought to try to figure them out, and when we arrive at our best guess as to what they are, take them as given, and make our conclusions about whether people are “rational” or not by proceeding from those exogenously set preferences to behavioral predictions.
Thus, if, as practical social scientists, we observe people behaving differently from how our fancy models predict, that might mean that they’re irrational. Or it might mean that their preferences are just different from what we think they are.
Two famous examples. First, the “ultimatum game.” The simplest possible bit of game theory. Two players, a fixed pot of money. Player 1 gets to decide a split, player 2 then gets to decide whether to accept or reject; if P2 accepts, the split is implemented, if P2 rejects, nobody gets anything. There are two subgame perfect equilibria to this game: a) P1 offers zero, P2 accepts anything offered, and b) P1 offers the smallest possible nonzero amount, P2 rejects zero, accepts all else. (The first of those is only an equilibrium because P2 is indifferent between accepting and rejecting when offered zero; nobody really cares about it.)
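Since the game is simple enough to solve by brute force, here is a quick sketch of the backward induction in code, using a discrete pot of 10 units (my own toy numbers) and the money-equals-utility assumption that the next paragraph complains about:

```python
# Backward induction in a discrete ultimatum game with a pot of 10 units.
# A sketch assuming utility == money, which is exactly the assumption the
# experimental results call into question.
POT = 10

def p2_best_response(offer, accepts_zero):
    """P2 nets `offer` by accepting and 0 by rejecting, so she accepts any
    positive offer; at zero she is indifferent, and either answer is a
    best response -- `accepts_zero` picks which one she plays."""
    if offer > 0:
        return True
    return accepts_zero

def p1_best_offer(accepts_zero):
    """P1, anticipating P2's response, offers the least P2 will accept."""
    best_offer, best_payoff = None, -1
    for offer in range(POT + 1):
        payoff = (POT - offer) if p2_best_response(offer, accepts_zero) else 0
        if payoff > best_payoff:
            best_offer, best_payoff = offer, payoff
    return best_offer

print(p1_best_offer(accepts_zero=True))   # 0: equilibrium (a)
print(p1_best_offer(accepts_zero=False))  # 1: equilibrium (b)
```

Both subgame perfect equilibria fall out of P2's indifference at zero: whether she accepts or rejects a zero offer determines whether P1 offers zero or one unit.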
The thing with the ultimatum game is that basically nobody plays "equilibrium strategies," if by "equilibrium strategies" you mean "the equilibria I just mentioned, which are rooted in the totally idiotic assumption that utility is the same thing as money." (They are not! They are not the same!) P1s in experimental contexts almost always offer more than the bare minimum; P2s almost never accept the bare minimum.
There are two explanations for this failure of prediction: 1) people are dumb, and 2) people care about their pride, fairness, not having to accept insulting offers, etc., more than money. Experimental economists have gone to some lengths to tease these explanations apart, but fairness motivations of this kind are genuinely hard to isolate. (The paper I just linked, for example, seems seriously confused to me: it tries to eliminate fairness considerations by delinking ultimate payoffs from round-by-round actions, but fails to consider that the fairness consideration might not be about the distribution of ultimate payoffs but about things like not being treated badly in a given round---that is, it ignores the expressive aspect of fairness.) It would be a bad mistake, observing the empirical results of the ultimatum game experiments, to leap to the conclusion “people are irrational, so game theory is useless!” The conclusion “people care about more than just the amount of money they receive” is equally plausible, and matches our experience of things like, well, hell, like trading money for status and self-worth and positive self-and-other impression management all the time. How does Rolex stay in business again?
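To see how little it takes for rejection to become rational, here is a toy utility function for P2 that adds a "pride" term to the money. The 0.6 insult weight and the linear functional form are invented purely for illustration, not drawn from any experimental paper:

```python
# A toy "money plus pride" utility for P2 in the ultimatum game: she values
# the cash, minus a psychic cost proportional to how lopsided the split is
# against her. The INSULT_WEIGHT and the linear form are made-up numbers.
POT = 10
INSULT_WEIGHT = 0.6

def accept_utility(offer):
    # the max(...) term penalizes only offers below an even split
    return offer - INSULT_WEIGHT * max(POT - 2 * offer, 0)

REJECT_UTILITY = 0.0  # rejecting: no money, but no insult swallowed either

print(accept_utility(1))                   # -3.8: taking 1 out of 10 stings
print(accept_utility(1) > REJECT_UTILITY)  # False: rejection is the rational move
print(accept_utility(5))                   # 5.0: an even split carries no sting
```

With preferences like these, a P2 who turns down a one-unit offer is maximizing perfectly rationally; the model only looked broken because we fed it the wrong utility function.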
Second famous example. Voting. Why do people vote? This is something that political scientists have struggled with for, seriously, decades. (That may say more about political scientists than it does about voting.) On one account, it’s a strategic problem: we have preferences over policy (or over the things we get from policy, like lower taxes and a reduced risk of being thrown in jail/a higher shot at getting the people we don’t like thrown in jail), and voting allows us to influence that policy with some nonzero probability. Basically, this is the probability of being the decisive voter. So, in principle, there’s some equilibrium number of voters, such that those who do not vote would do worse by voting (because the cost of voting, like standing in long lines and taking time off work, is not worth the probability-weighted policy benefit to be gained), and those who do vote would do worse by not voting (for the opposite reason).
The problem with this model of voter motivation is that, given the number of people who actually vote, the probability of being the decisive voter in a given election, in a big country like the U.S., is really really really tiny. (Ok, maybe it’s a bit bigger if you’re voting in the race for town dogcatcher. But who cares?) Yet lots of people vote, even in things like presidential elections. So we’re probably not playing equilibrium strategies based on the model of voting behavior which imagines people motivated by probability-weighted policy outcomes. Are people just stupid?
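To put a number on "really really really tiny," here is a back-of-the-envelope sketch, not a serious electoral model: treat the other voters as independent coin flips and ask how likely you are to break an exact tie. (The computation runs in log space, because the raw binomial terms underflow floating point for large electorates.)

```python
import math

# Probability that your single vote is decisive: n other voters each vote
# for candidate A independently with probability p, and you break an exact
# tie. A caricature for illustration, not a serious model of elections.
def prob_decisive(n, p=0.5):
    if n % 2 == 1:
        return 0.0  # an odd number of other voters can't produce a tie
    k = n // 2
    log_prob = (math.lgamma(n + 1) - 2 * math.lgamma(k + 1)
                + k * math.log(p) + k * math.log(1 - p))
    return math.exp(log_prob)

print(prob_decisive(100))                # ~0.08: town-dogcatcher-scale electorate
print(prob_decisive(100_000_000))        # ~8e-5: presidential scale, knife-edge race
print(prob_decisive(100_000_000, 0.51))  # ~0.0: and the knife-edge was the BEST case
```

Note how fragile even these numbers are: tilt the coin slightly off 50/50 and the probability collapses to effectively zero, which only deepens the paradox.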
Well, waaay back in 1968, Riker and Ordeshook wrote a famous (or infamous) paper, which, stripping away the huge amount of math, basically says “yo, maybe people derive utility from voting itself.” They expressed this with a term “D” in a utility function, where “D,” in polisci grad seminars, tends to be summarized as standing for “duty,” but which really captures a whole slew of kinds of non-policy-related preference satisfactions that come from voting, like being a good citizen, participating in shared sovereignty, expressing one’s commitments, etc.
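Stripped of the math, the Riker and Ordeshook decision rule is usually written R = pB - C + D: vote iff the probability-weighted policy benefit, minus the cost of voting, plus the D term, is positive. A one-line sketch, with dollar figures made up purely for illustration:

```python
# The Riker-Ordeshook calculus of voting: vote iff p*B - C + D > 0, where
# p is the probability of being decisive, B the policy benefit of your
# side winning, C the cost of voting, and D the consumption value of
# voting itself. All the numbers below are invented for illustration.
def votes(p, B, C, D):
    return p * B - C + D > 0

# Policy motive alone: a $10,000 stake, a 1-in-10-million shot, $20 in costs.
print(votes(p=1e-7, B=10_000, C=20, D=0))   # False: p*B is a tenth of a cent
# Add a modest "duty" term and the very same voter shows up at the polls.
print(votes(p=1e-7, B=10_000, C=20, D=25))  # True
```

The punchline is in the second call: D doesn't have to be large to swamp the pB term, because the pB term is microscopic to begin with.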
There are two things we might think about Riker and Ordeshook’s D. The first is: “How pointless! This just kills any ambition of models of voter rationality to tell us anything useful or predictive about the world, because anytime we see someone who votes despite our models predicting the opposite, we just get to conclude that they must have had a bigger D than we expected!” (Although, in fairness, experimentalists get cleverer and cleverer every year at coming up with sneaky ways to tease these things out.)
The second thing we might think is: “Duh! Of course that’s why people vote.”
* Not always. Sometimes we’re wrong to criticize it because we fail to understand ways in which people might actually behave rationally---such as when they operate in an environment, like competitive markets, which selects irrational actors out. Sometimes we’re wrong to criticize it because by “irrationality” we just mean “lack of information.” (I really need to write a big omnibus post about information in game theory, actually. It may happen.) Sometimes we’re just right to criticize it, because there actually is a ton of psychological evidence for “bounded rationality,” a set of results about how people behave in systematically ends-frustrating ways, like “hyperbolic discounting.” I’ll write a post about that soon too.
Monday, January 26, 2015
Game theory post 5 of N: the joy and madness of repeated games
One thing about strategic interactions is that humans tend to repeat them. For example, participants in a market may engage in trades over and over, neighbors may make the same decisions with respect to borders, common resources, etc. over and over, even some litigants in a particularly litigious industry may find themselves facing one another in court over and over (ahem, cough, cough, AppleandGoogleandSamsungandMicrosoftandAllTheRest). Unsurprisingly, game theorists have developed a body of knowledge for dealing with repeated games—that is, games that can be divided into subgames which are played over and over.
There are two categories of repeated games: finitely repeated, and indefinitely or infinitely repeated games. And as it turns out, they behave very differently. Generally speaking, finitely repeated games tend to behave (at least formally) sorta more-or-less like one-shot games; and we would intuitively expect that to be true, for a finitely repeated strategic form game is just the same thing as a longer game written in extensive form. But things go really wild when you move to the indefinite/infinite category.
To illustrate, let’s think about the prisoners’ dilemma again. Here’s one thing you might think about the finitely repeated PD: “hey, wait a minute, maybe now cooperation can be sustained! After all, if the players cooperate in the first round, maybe they’ll learn to trust one another, and continue to cooperate in future rounds—especially if they both understand that this trust will be destroyed if they don’t cooperate, or, equivalently, that if someone stabs the other player in the back, the stabbee can be expected to punish the stabber by defecting in future rounds.” (These kinds of strategies have all kinds of flashy names among game theorists: there’s “tit-for-tat,” cooperating unless your opponent/partner defected in the previous round, in which case you defect this round; there’s “grim trigger,” cooperating unless your opponent/partner has ever defected, in which case you defect forever…)
As it turns out, in the finitely repeated PD, that’s just not true. (Again, people sometimes behave differently in the real world, but we ought to lay out our purely instrumentally rational and strategic starting point before we start worrying about when and why observed reality deviates from it.*) Suppose there are ten rounds to the game, and imagine you’re a player trying to figure out whether this cooperation strategy will work. Here’s how your internal monologue could go:
Ok, there are ten rounds here. If we both cooperate in the first round, then the threat of future defection should keep everyone on the straight and narrow in the future. But what constrains us in round ten? After all, in round 10, there’s no future round in which I can threaten the other player with punishment; accordingly, defection is a strictly dominant strategy in round 10, we should predict it no matter what. If I cooperate in round 10, I’m just a sucker. So we’ll both defect in round 10. But then, wait a minute. If defection is definitely going to happen in round 10, then in round 9 there’s no realistic (credible) threat of punishment either. You can’t threaten someone with an act you’re going to take anyway. So defection is strictly dominant in round 9 too. But then what constrains us in round 8? …
This, of course, is just a more intuitively expressed version of the notion of backward induction, given in the previous post. And we can see that it’s aptly named, for the reasoning process in cases like this actually looks kind of like mathematical induction: if the conclusion at this point compels the same conclusion at the next point in the sequence, then we’re warranted in making inferences all the way down. Unsurprisingly, the only subgame perfect equilibrium of the finitely repeated PD is mutual defection at every round. And this is a general fact about repeated games with unique Nash equilibria in the one-shot version (see proof on pg. 10 of these lecture slides — which also give an excellent math-ier presentation of the stuff I’m describing here): the Nash equilibrium of the one-shot game is, repeated over every round, the subgame perfect equilibrium of the repeated game.
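The induction can be mechanized. A minimal sketch, using the usual textbook PD payoffs (my choice of numbers, nothing canonical): find the stage game's Nash equilibria by brute force, and if there is exactly one, the subgame perfect path of the finitely repeated game is just that equilibrium played every round:

```python
# The unraveling argument in code. Stage game payoffs: (row, col) moves map
# to (row_payoff, col_payoff); this is a standard PD with illustrative numbers.
PAYOFFS = {
    ("C", "C"): (3, 3), ("C", "D"): (0, 5),
    ("D", "C"): (5, 0), ("D", "D"): (1, 1),
}
MOVES = ["C", "D"]

def stage_nash():
    """Enumerate pure Nash equilibria of the one-shot game."""
    eqs = []
    for r in MOVES:
        for c in MOVES:
            row_ok = all(PAYOFFS[(r, c)][0] >= PAYOFFS[(alt, c)][0] for alt in MOVES)
            col_ok = all(PAYOFFS[(r, c)][1] >= PAYOFFS[(r, alt)][1] for alt in MOVES)
            if row_ok and col_ok:
                eqs.append((r, c))
    return eqs

def spe_path(rounds):
    """With a unique stage equilibrium, backward induction forces it every round."""
    eqs = stage_nash()
    assert len(eqs) == 1, "uniqueness is what drives the induction"
    return [eqs[0]] * rounds

print(stage_nash())  # [('D', 'D')]: mutual defection is the only stage equilibrium
print(spe_path(3))   # defection in every one of the three rounds
```

The `assert` inside `spe_path` is the whole theorem in miniature: the induction only goes through because there is nothing else the last round could possibly look like.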
But when we get into infinitely repeated games, then everything goes out the window. We don’t need to get into the mathematics of it, but just think about the same logic in the context of the PD again: all of a sudden, there’s no end point to carry out backward induction from. Because of that small change in the facts, punishment for prior defection (or reward for prior cooperation) is a realistic prospect at every single round: the rounds never stop (or they stop at an unpredictable time), so players always have a threat to make against one another. Conditional strategies like grim trigger and tit for tat suddenly start to be plausible, and the prospect of sustained cooperation again appears on the horizon.
In fact, as it turns out, there is a series of results known collectively as “the folk theorem” suggesting that infinitely repeated games have infinitely many subgame perfect equilibria. Almost anything can be sustained in equilibrium, under two conditions: 1) players can’t discount the future too highly; 2) the strategy set in question has to yield single-round payoffs better than those that can be obtained by the one-shot Nash equilibrium.
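Condition 1) can be made concrete with grim trigger in the PD. Comparing the discounted payoff stream from cooperating forever against the stream from defecting once and eating the eternal punishment gives a threshold discount factor; the payoff numbers below are illustrative:

```python
# When does grim trigger sustain cooperation in the infinitely repeated PD?
# With discount factor d, cooperating forever is worth R/(1-d), while
# defecting once and being punished forever is worth T + d*P/(1-d).
# Equating the two gives the threshold d >= (T-R)/(T-P).
# Payoff labels follow the standard PD convention; numbers are illustrative.
T, R, P = 5, 3, 1  # temptation, reward, punishment

def grim_trigger_sustains(d):
    cooperate_forever = R / (1 - d)
    defect_once = T + d * P / (1 - d)
    return cooperate_forever >= defect_once

threshold = (T - R) / (T - P)
print(threshold)                   # 0.5
print(grim_trigger_sustains(0.6))  # True: patient players can cooperate
print(grim_trigger_sustains(0.3))  # False: impatient players defect
```

This is what "can't discount the future too highly" cashes out to: with these payoffs, a player who values next round at less than half of this round can't be kept honest by any threat.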
On the one hand, this is great. It allows us to explain how things like sustained cooperation can be possible in strategic contexts where there’s an incentive to defect. For example, it can be used to explain how reputation mechanisms work in markets to keep people honest. One of the most influential papers in the economic history of law, Milgrom, North & Weingast 1990, essentially uses a more complex (because multiplayer) version of the indefinitely repeated PD to model how decentralized commercial enforcement institutions work.
However, while the folk theorem is useful in that sense for backward-looking explanation, it’s bad news for prediction: given that there are an infinite number of behavior patterns that are supportable in equilibrium in such situations, how do you predict which ones will show up? It ain’t easy. (Many game theorists just wave their hands around and say “focal points!”—about which more later.) If you’re a Popperian falsificationist about your philosophy of science, of course, then blowing up prediction is also a good way to blow up backward-looking explanation…but you probably shouldn’t be a Popperian falsificationist.
So. Anyway. That’s quite enough of that. I’m writing this post as the blizzard of our nightmares moves into Princeton (where I’m holed up this year), so perhaps soon we’ll see real-life applications of the finitely repeated PD as civilization breaks down, looters descend, hyenas emerge from the woods to drag away the weak, &c. Memo to fellow denizens of the impending weather apocalypse: I have a fixed cooperative disposition! Honest! Please don’t eat me! First.
* Teaser: sometimes players with a fixed disposition to be “irrational,” like to play tit-for-tat or grim trigger, can actually do better when they play with one another. In a context where doing better is selected for, players with such dispositions can prosper. See Axelrod, The Evolution of Cooperation and Skyrms, The Evolution of the Social Contract (previewed in a freely available Tanner Lecture http://tannerlectures.utah.edu/_documents/a-to-z/s/Skyrms_07.pdf ); also see basically all of evolutionary game theory, which I think I’ll probably post about at some point even though it is more advanced material than most of the stuff in this series, just because I find it delightful.
Saturday, January 24, 2015
Game theory post 4 of N: extensive form games, a deep dive
How about some Saturday game theory over brunch?
The one-round strategic form games of the previous post are the simplest possible presentation of some actual game theory. Now I want to put on my political scientist hat and dig into a slightly less simple, but much beloved, game.
We might call this the “punishment game.” It imagines a boss or a dictator or a parent giving commands to a subordinate or a subject or a child, where the boss prefers her commands be obeyed, and the subordinate prefers not to obey; if the subordinate defies the command, the boss has the power to inflict punishment at a personal cost. The following illustration (now with actual numbers, for clarity!) captures the situation, with the subordinate’s payoffs listed first; discussion is after the fold. (Sorry for the ugliness; remember how I said that I’m horrible at graphics?)
Let’s look at obedience here. Remember that a full strategy includes a specification of the moves that will be made at every possible decision point, even if they won’t be reached in equilibrium. This fact will be important in a moment.
So suppose the subordinate plays the strategy “always obey” and the boss plays the strategy “never punish.” It’s easy to see that this isn’t an equilibrium: given that the boss is playing never punish, the subordinate can do better by switching the strategy to “always defy.” By contrast, the strategy pair “always obey, always punish” IS a Nash equilibrium: the subordinate does worse by deviating (getting smacked), and the boss is indifferent because no matter what she does, she gets the 10 payoff in the left-most terminus of the picture.
But there’s a certain unintuitiveness to this equilibrium. Suppose you’re the subordinate. You might reasonably think: “my boss has this strategy of always punishing, but it’s irrational for her to have that strategy: if I defy, she does worse by punishing than she does by just letting me slide. So why shouldn’t I just defy?” In other words, the boss’s threat to punish isn’t credible, because it’s too costly for her to actually carry it out. So, intuitively, we ought not to predict that the players will actually end up in the [obey; punish] equilibrium.
The technique game theorists have come up with to eliminate threats that are not credible from our prediction pool is a refinement to Nash equilibrium called “subgame perfect equilibrium.” A loose description of that solution concept is that a strategy set is subgame perfect if it is a Nash equilibrium of every subgame of the original game. Here, the [obey; punish] strategy set is a Nash equilibrium of the subgame that begins when the subordinate obeys, but is not a Nash equilibrium of the subgame that begins when the subordinate defies. Accordingly, it isn’t a subgame perfect equilibrium. (All subgame perfect equilibria are also Nash equilibria.)
The easy way to find subgame perfect equilibria is a process known as “backward induction.” Essentially, what you do is look at the last decision each player can make in each line of play and figure out what is best; then you count the payoffs from that decision as the payoffs for the choice that leads to it in the prior step, and keep going until you’ve solved the whole thing. (We call these decision points “nodes.”)
That’s a little abstract, but it will become clear when applied to the example. Think of the boss’s decision: if the subordinate has defied, she may either punish or refrain from punishing; her payoff from punishing is -1, and her payoff from refraining is 0. She can be expected not to punish. Given that, we can impute the subordinate’s payoff at that node: if he chooses to defy, he can expect a payoff of 10, based on the boss’s most rational response; this may be compared to his payoff of 0 if he obeys. From this, we can conclude that the only subgame perfect equilibrium is: subordinate always defies, boss never punishes. And that’s the prediction we ought to make.
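Backward induction is easy to write as a recursion over the game tree. The payoffs below follow the ones the text states (0 and 10 for obedience, 10 and 0 for unpunished defiance, -1 for the boss when she punishes); the subordinate's -5 for "getting smacked" is my own stand-in, since the post doesn't pin down that number:

```python
# Backward induction on the punishment game. Tree nodes are either
# ("payoff", (sub, boss)) leaves or (player_to_move, {move: subtree}).
# The subordinate's -5 punishment payoff is an invented placeholder.
TREE = ("subordinate", {
    "obey": ("payoff", (0, 10)),
    "defy": ("boss", {
        "punish":  ("payoff", (-5, -1)),
        "refrain": ("payoff", (10, 0)),
    }),
})
INDEX = {"subordinate": 0, "boss": 1}

def solve(node):
    """Return (payoffs, play) for the subgame rooted at this node."""
    tag, body = node
    if tag == "payoff":
        return body, []
    best = None
    for move, subtree in body.items():
        payoffs, play = solve(subtree)
        # the mover keeps whichever branch maximizes her own payoff
        if best is None or payoffs[INDEX[tag]] > best[0][INDEX[tag]]:
            best = (payoffs, [move] + play)
    return best

print(solve(TREE))  # ((10, 0), ['defy', 'refrain']): defiance, no punishment
```

The recursion is exactly the verbal procedure: solve the boss's node first, then let the subordinate choose against the boss's solved behavior.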
Note how this is a really interesting problem for lawyers, for it suggests that punishment---like the sort that the legal system deploys---can be irrational. The obvious example is consumer contract enforcement: it can easily be irrational to enforce a consumer contract, because the costs of doing so are so high relative to the small payoffs; a mass dealer in consumer goods and services can in principle look down the game tree and breach its contracts with impunity, at least in the absence of something like fee-shifting, a class action mechanism, statutory damages, etc. to give plaintiffs a sufficient incentive to punish them. This model is a concise explanation of those features of our legal institutions. It’s also a favorite model of political scientists, mainly because of its obvious relevance to, e.g., international relations problems of deterrence.
Standard solutions to the problem: 1) Repetition---if the boss deals with the subordinate many times (an indefinite number, actually), we can sometimes find subgame perfect equilibria in which punishment happens thanks to its deterrent effect (more on repeated games later); 2) precommitment---if, for example, the boss can hand over the job of punishing to an independent agent (like, say, a judge!) who does not incur the costs to do so, this might make the threat credible. But there’s lots and lots to say about credible threat models; this is really just a teaser to show why we might want to say some of it.
Friday, January 23, 2015
Game theory post 3 of N: some classic (one-shot, strategic form) games
There are a number of classic textbook games that are highly useful, primarily because if you know them well, you can often see real-world situations that have similar payoff structures; doing so, you have a pretty good initial guess at what will happen in those situations. Accordingly, I'll collect some here. (Behind the fold.)
Prisoners' Dilemma
Of course we have to start here. Everyone knows the PD, so I won't belabor it. If you want more on it, a long and deep discussion is here; a shorter summary is here. The key idea is that both players (its standard presentation is 2 players, but it can be extended to more) have strictly dominant strategies of defecting from cooperative play with one another, such that the only Nash equilibrium is mutual defection, but mutual defection is worse for the players than mutual cooperation.
The following image shows a simple example, in abstract form. (This example assumes the game is symmetrical, that is, that the players each get the same cooperation/defection payoffs; a PD need not be symmetrical, it's just easier to notate if it is. Here, and hereafter, player 1 will choose the row, with payoff listed first; player 2 will choose the column, with payoff listed second.*)
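The "strictly dominant strategy" claim can be checked mechanically. A sketch using the usual textbook numbers (3/3 for mutual cooperation, 1/1 for mutual defection, 5/0 for the temptation/sucker pair), standing in for whatever is in the diagram above:

```python
# A dominance check on a PD payoff matrix. Keys are (row_move, col_move),
# values are (row_payoff, col_payoff); the numbers are the usual textbook ones.
PD = {
    ("C", "C"): (3, 3), ("C", "D"): (0, 5),
    ("D", "C"): (5, 0), ("D", "D"): (1, 1),
}

def strictly_dominant_row_move():
    """Return the row move that strictly beats the alternative against
    EVERY column move, or None if no such move exists."""
    for move in ("C", "D"):
        other = "D" if move == "C" else "C"
        if all(PD[(move, c)][0] > PD[(other, c)][0] for c in ("C", "D")):
            return move
    return None

print(strictly_dominant_row_move())  # 'D': defect, no matter what the other does
```

By the symmetry of the matrix, the same check for the column player also returns defection, which is why mutual defection is the unique equilibrium.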
The PD is so popular because it's an excellent way of representing many situations where there is an incentive to defect from a mutually beneficial cooperative arrangement. For example, there have been many PD models in international relations literature, because an armed standoff can easily be understood as a PD. Imagine it: two countries are facing one another over a tense border, and can demilitarize or retain military readiness. If both demilitarize, they both do well, because they can spend their resources on less wasteful things. If one country demilitarizes and the other does not, the country that keeps its army can quickly conquer the other, and take all the resources. If neither country demilitarizes, they keep all this wasteful tension. Keeping the army is strictly dominant for both parties, that's the only equilibrium, both would be better off if both demilitarized, and you have a model of the Korean Peninsula.
For more legal implications, consider things like property rights (mutually beneficial cooperative relationships where all parties have an incentive to defect, if not adequately enforced), and even judicial corruption.
Also, people don't seem to agree where the apostrophe goes. I say there are two prisoners, who share the same dilemma, thus, prisoners'.
Battle of the Sexes
Some people like to divide the world into two general kinds of games: commitment games and coordination games. The PD is an example of a commitment game, in that the key issue is that players are unable to commit to a beneficial course of conduct. The "battle of the sexes" is a classic example of a coordination game.
The backstory that leads to the sexist name (sorry, but it's the language game theorists use, and will help with literature searches, etc.) is a classic piece of gender essentialism: a husband and a wife want to go out, but the husband prefers the football game, while the wife prefers the opera; they both, however, prefer to be together to being separate. How can we predict what they will decide?
The key takeaway here is that there are two equilibria: both at opera and both at football game. The resources of game theory have trouble explaining which one will be chosen; we often appeal to non-strategic ideas (like "focal points"---social background norms, basically) to make a prediction. Real-world situations that resemble a battle of the sexes are situations where the parties have to come to one of several possible mutually beneficial arrangements, as in contracting or the negotiated drafting of regulations, although such situations can often be better modeled using more complex games in which players have threats to execute against one another, different payoffs from deadlock, etc. in order to shift the surplus from cooperation their way.
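The two equilibria are easy to verify by enumeration. A sketch using the conventional battle-of-the-sexes numbers (2 for your preferred event together, 1 for the other event together, 0 apart), which may differ from any diagram in the literature you consult:

```python
# Enumerating the pure Nash equilibria of battle of the sexes.
# Keys are (husband_move, wife_move); values are (husband_payoff, wife_payoff).
BOS = {
    ("football", "football"): (2, 1), ("football", "opera"): (0, 0),
    ("opera", "football"): (0, 0),    ("opera", "opera"): (1, 2),
}
MOVES = ("football", "opera")

def pure_nash(payoffs):
    """A profile is a pure Nash equilibrium if neither player gains
    by unilaterally deviating to the other move."""
    eqs = []
    for r in MOVES:
        for c in MOVES:
            if (all(payoffs[(r, c)][0] >= payoffs[(a, c)][0] for a in MOVES) and
                    all(payoffs[(r, c)][1] >= payoffs[(r, a)][1] for a in MOVES)):
                eqs.append((r, c))
    return eqs

print(pure_nash(BOS))  # [('football', 'football'), ('opera', 'opera')]
```

The enumeration finds both togetherness outcomes and nothing else, which is the whole problem: game theory alone can't tell you which of the two they will land on.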
Chicken
This is another extremely important classic coordination game, which imagines a pair of hotheaded teenaged drivers playing, as the name suggests, a game of chicken on the roads: any player who swerves loses face, but if nobody swerves, they both die. I say this is a coordination game (contra the otherwise good Wikipedia page, which describes it as an anti-coordination game), for reasons that will hopefully become clear on comparing battle of the sexes and chicken to matching pennies, below. While the players do not want to be doing the same thing, they do want to coordinate their strategies to reach compatible behavioral pairs, just like in battle of the sexes. There are two pure equilibria, in each of which one player swerves and the other does not. There is also one mixed equilibrium, which is problematic, because mixed equilibria (that is, ones in which the players randomize their choices) in the chicken game can lead with positive probability to worse outcomes, namely a crash. The problem of the chicken game, then, for sufficiently bad crash payoffs, can be seen as getting the players to choose one of the pure equilibria---that is, settling on who has to swerve.
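The mixed equilibrium, and why it's problematic, can be computed in a couple of lines. The payoff numbers here are mine, chosen for round arithmetic: 0 for mutual swerving, -1 for swerving while the other goes straight, +1 for the reverse, and a big negative number for the crash:

```python
# The mixed equilibrium of chicken. Each driver goes straight with the
# probability q that leaves the OTHER driver indifferent between moves:
#   payoff(swerve)   = 0*(1-q) + (-1)*q = -q
#   payoff(straight) = 1*(1-q) + CRASH*q
# Setting them equal: -q = 1 - q + CRASH*q, so q = -1/CRASH.
# The payoff numbers are invented for round arithmetic.
CRASH = -10

q = -1 / CRASH
print(q)      # 0.1: each hothead goes straight 10% of the time
print(q * q)  # ~0.01: a real, positive chance they both go straight and crash
```

Notice the grim comparative static: making the crash worse makes each player go straight less often, but for any finite crash payoff the probability of mutual disaster stays strictly positive.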
Another really interesting thing about chicken is the way it gives players an incentive to find ways to precommit to a choice. If one of the hotheaded teenagers can, for example, yank out the steering wheel and throw it out the window, s/he forces the other player (if rational) to swerve. Precommitment was most famously dramatized in the doomsday machine of Dr. Strangelove, although the underlying game there is probably poorly described as chicken (it really makes more sense as a sequential punishment kind of commitment game, perhaps to be described in a later post).
In the legal context, we might model some kinds of settlement negotiations as a game of chicken, where if both parties refuse to back down, they burn up all their utility on expensive litigation.
Matching Pennies
This really is an anti-coordination game. The story is that two players are gambling and each chooses a side of a penny to display; one player wants them to match while the other wants them not to match. This is also a variant of rock paper scissors, and of the soccer goalie problem noted previously. It's also the way that mixed strategy equilibria are traditionally introduced, for in this game there are no pure strategy equilibria, only a mixed strategy equilibrium. In the simplest case, the players weight each option equally, although when we change the payoffs around this changes (how to figure out mixed strategy Nash equilibria is a topic for, perhaps, another post, although I'm not sure that it is sufficiently useful for the audience of these posts to be worth doing). Even this seemingly simple game, however, can be made endlessly complex by a motivated economist...
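Without getting into a full treatment, the matching pennies case takes only the standard indifference condition, so here is a small taste. The payoffs are the conventional plus-or-minus-one stakes, written from the matcher's point of view:

```python
# Solving for the mixed equilibrium of matching pennies. Player 1 (the
# matcher) wins 1 on a match and loses 1 on a mismatch; zero-sum. In the
# mixed equilibrium, player 2's probability q of showing heads must leave
# player 1 indifferent between heads and tails:
#   q*a + (1-q)*b = q*c + (1-q)*d   =>   q = (d - b) / (a - b - c + d)
a, b = 1, -1  # matcher's payoff vs (heads, heads) and (heads, tails)
c, d = -1, 1  # matcher's payoff vs (tails, heads) and (tails, tails)

q = (d - b) / (a - b - c + d)
print(q)  # 0.5: with symmetric stakes, show each side like a fair coin flip

# Skew the stakes -- say matching on heads now pays 3 -- and the mix shifts:
a = 3
print((d - b) / (a - b - c + d))  # ~1/3: the opponent shows heads less often
```

The second computation is the non-obvious part: raising the matcher's payoff on heads makes the *opponent* show heads less, since equilibrium mixing is about keeping the other player indifferent, not yourself.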
Stag Hunt
Today's last game is kind of a hybrid between the PD and battle of the sexes. The story is that two people are hunting, and they may either choose to hunt stag or hare; it takes both of them to successfully hunt stag, while either acting alone may successfully hunt hare. But, of course, one stag is a lot more meat than two hares...
The key is that there are two pure Nash equilibria, in coordination game fashion, and while one is better for both players than the other, each player may get a consolation payoff by choosing the strategy corresponding to the worse equilibrium. So if players trust one another (cooperatively hunt stag), they do well; if they do not trust one another, they do less well (hunt hare). If one person trusts, and the other doesn't, then the poor fool who trusts gets a sucker's payoff, and the untrusting one still does a'ight. In lots of cases, stag hunt is a good alternative model to the PD. Brian Skyrms (who taught me all the evolutionary game theory I know) has written a really good book on the subject, although you can also get the heart of it for free in lecture form. This is a really rich game, and Skyrms's lecture will give you a taste of it, but this post is already too long, so I won't discuss it beyond raising its existence.
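The trust flavor of the game shows up clearly if you ask when hunting stag is worth the risk. A sketch with made-up payoffs: the stag pays 4 when hunted together, hare pays 3 no matter what, and hunting stag alone pays 0:

```python
# The trust threshold in the stag hunt. If you believe your partner hunts
# stag with probability p, stag is only worth it when the expected payoff
# p*4 beats the guaranteed hare payoff of 3. Payoff numbers are invented.
STAG_TOGETHER, HARE, SUCKER = 4, 3, 0

def hunt_stag(p_partner_stag):
    expected_stag = p_partner_stag * STAG_TOGETHER + (1 - p_partner_stag) * SUCKER
    return expected_stag >= HARE

print(hunt_stag(0.9))  # True: enough trust, go for the stag
print(hunt_stag(0.5))  # False: a coin-flip partner isn't worth the risk
```

With these numbers you need 75% confidence in your partner before stag hunting makes sense, which is a neat way of seeing how the good equilibrium can be fragile even though it's better for everyone.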
I don't think I'll be able to keep up this post-a-day pace for much longer, but more to come!
* These diagrams are all intuitive simplifications, and elide some edge cases where the diagrams given might not constitute the game in question. For the PD, for example, there is a more complex condition that might be added in repeated play, as described in Robert Axelrod & William D. Hamilton, The Evolution of Cooperation, 211 Science 1390 (1981). Finally, I am lousy at graphics---just have zero visual intuition---so my apologies if there are any errors or weird typos in the pictures...
Thursday, January 22, 2015
Game theory post #2 of ????: Basic Concepts
This is the second post in an indefinite series of game theory for law professors. In this one, I'll describe some basic concepts---the rudimentary language of game theory as a vocabulary list. This page, incidentally, has even simpler definitions of some of the concepts described here, as well as a few concrete examples.
Let us begin, however, by fixing an idea of our task in mind. We have at least two players, where a player can be any entity that makes choices and receives payoffs; depending on the level of analysis, this can be individuals, firms, governments, or a combination of them. Each player can make moves: actions that, in conjunction with other players' moves, affect the state of the world (the outcomes experienced by that player as well as others). And each player has a utility function mapping probability-weighted states of the world to a preference ordering. Our goal is to say something intelligent about what the players have incentives to do---often, although not always, with the assumption that they are sufficiently rational that they will do what their incentives point toward, but let us bracket that issue for the time being. That saying of something intelligent is also known as "solving" the game. Also, I will only be discussing non-cooperative game theory; there's a branch of game theory called cooperative game theory too, but I know it less well and never use it. (Those of you who study things like constitution-making and contracts might look into it, though.)
Strategic and extensive form games
There are two classic ways to visualize the problem of a game. The first is strategic form (a.k.a. "normal form"), represented as a chart which displays the possible combinations of moves and the payoffs from each to each player. The second is extensive form, which represents the possible combinations of moves in order, like a tree diagram, with the payoffs at the end. This webpage gives a good example of each. Note that these are merely representations: it's possible (and often sensible) to do game theory without using these kinds of pictures, but the pictures are a good way of summarizing the issues in play, and you'll see them in most game theory articles.
Simultaneous and sequential play
Consider two different kinds of real-world game. First, consider soccer, and a player racing toward the goalie. A good simplified model of the decisions facing the players there is that each chooses to kick the ball or leap (respectively) to one side of the net or the other. Things happen fast enough that they choose simultaneously---by the time the player with the ball kicks, the goalie needs to have already decided which direction to leap. Or, and equivalently for practical purposes in most cases, we might imagine the players having chosen whenever they want, but secretly---the goalie might have decided to leap to a given side six months in advance, but we can model it as simultaneous as long as they don't tell one another. (It's actually the secrecy---more precisely, the lack of information---that really matters for modeling purposes; the traditional use of the language of simultaneity is basically just shorthand.) This is a classic simultaneous game, and is most easily represented in the strategic form.
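To make the strategic form concrete, here's a minimal sketch of the kicker/goalie game in Python. The payoff numbers are my own illustrative choices, not anything canonical:

```python
# Strategic form of the kicker/goalie game as a lookup table.
# Payoffs are illustrative: the kicker scores (+1) when the goalie
# guesses wrong, and is stopped (-1) when the goalie guesses right.
payoffs = {
    # (kicker_move, goalie_move): (kicker_payoff, goalie_payoff)
    ("left", "left"): (-1, 1),
    ("left", "right"): (1, -1),
    ("right", "left"): (1, -1),
    ("right", "right"): (-1, 1),
}

def play(kick, leap):
    """Look up the outcome of one simultaneous round."""
    return payoffs[(kick, leap)]

print(play("left", "right"))  # (1, -1): the kicker scores
```

Because the moves are effectively simultaneous, the table is the whole game: there's no tree of turns to draw.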
By contrast, think about chess or go. In such games, players take turns: they can see what the other player did before making their own decision. Typically, we represent such games in the extensive form.
Note that it's possible to have elements of both kinds of games in a single complex strategic interaction. For example, in litigation, some elements of players' strategies are concealed from one another, like how much to spend on investigation and research before the initiation of suit, while others, like what procedural motions are filed, are visible. There are fancier things we can do to represent these, like specifying players' "information sets." We can also imagine a series of simultaneous games played between the same players multiple times, where the players can see what one another did in previous rounds (see below).
As you can see by now, a lot of the action in game theory is in specifying how much information players have about what the other players are doing, their payoffs, exogenously set states of the world, etc. I won't introduce much detail on this here, but might write a future post all about information.
A player's strategy is a complete specification of his or her moves, across the whole game---that is, at every possible state of behavior from other players. (To be more precise, a complete strategy is a specification of a player's moves at all of his/her information sets, where the notion of an information set counts everything that looks the same to the player as identical. Obviously, a player can't have a strategy that generates different actions depending on different states of behavior that the player can't distinguish, given what the player knows. But let's leave that aside for the moment.) A strategy can be simple ("no matter what the other player does, I'll kick to the left") or complex ("at any given point, if the goalie leapt to the left at least three out of the last five times, then I'll kick to the left, otherwise right"). It can also be probabilistic---this is called a "mixed strategy" ("I'll kick to the left with probability .45, and to the right with probability .55").
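Since a strategy is just a complete rule for choosing moves, strategies render naturally as functions. A hedged sketch using the post's own examples (the function names are mine):

```python
import random

# A simple pure strategy: ignore everything, always kick left.
def always_left(goalie_history):
    return "left"

# The history-dependent pure strategy from the text: kick left if the
# goalie leapt left at least 3 of the last 5 rounds, otherwise right.
def history_kick(goalie_history):
    recent = goalie_history[-5:]
    return "left" if recent.count("left") >= 3 else "right"

# The mixed strategy from the text: left with probability .45, right with .55.
def mixed_kick(rng):
    return "left" if rng.random() < 0.45 else "right"

print(history_kick(["left", "left", "right", "left", "right"]))  # left
```

Note that the mixed strategy needs a source of randomness; the pure strategies are deterministic functions of what the player has observed.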
A strategy is dominant if it's always best for the player. The classic case of dominance is in the prisoners' dilemma, with which many of you will be familiar (discussed in a later post): "always defect" is a dominant strategy for each player, in that it optimizes that player's payoff no matter what the other player does. Dominance is divided into two categories: strict dominance and weak dominance. Strict dominance means that the strategy always does better than competing strategies. Weak dominance means that the strategy always does at least as well as competing strategies.
One way to solve a game is known as "iterated deletion of strictly dominated strategies." (Deleting weakly dominated strategies is a dicier proposition.) That means what it says it means. Look at the strategies available to the players. If one is strictly dominated by something else, chop it out. Keep doing this until there aren't any strictly dominated strategies left. If there's only one strategy left, there's your solution. Even if there are multiple strategies left, at the very least you know that no rational player will choose one of the strategies that you removed, because why would anyone choose a strategy that's strictly dominated by some other strategy? (An easy practical way to do this for simple games is to write out each strategy as an ordered set of payoffs corresponding to possible moves by the other player, then just delete the ones that are lower than anything else left standing.)
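Here's a rough Python sketch of iterated deletion for a two-player strategic-form game. The data layout and helper names are my own, and the prisoners' dilemma numbers are the standard illustrative ones:

```python
# Payoff dicts are indexed pay[row_strategy][col_strategy]; row_pay holds
# the row player's payoffs, col_pay the column player's.

def strictly_dominated(pay, s, own, opp):
    """Is pure strategy s strictly dominated by another surviving strategy?"""
    return any(
        all(pay[t][o] > pay[s][o] for o in opp)
        for t in own if t != s
    )

def iterated_deletion(row_pay, col_pay):
    rows = set(row_pay)
    cols = set(next(iter(row_pay.values())))
    changed = True
    while changed:
        changed = False
        for s in list(rows):
            if strictly_dominated(row_pay, s, rows, cols):
                rows.discard(s)
                changed = True
        for s in list(cols):
            # Re-index col_pay from the column player's point of view.
            colview = {c: {r: col_pay[r][c] for r in rows} for c in cols}
            if strictly_dominated(colview, s, cols, rows):
                cols.discard(s)
                changed = True
    return rows, cols

# Prisoners' dilemma: "defect" strictly dominates "cooperate" for both.
row_pay = {"cooperate": {"cooperate": 3, "defect": 0},
           "defect":    {"cooperate": 5, "defect": 1}}
col_pay = {"cooperate": {"cooperate": 3, "defect": 5},
           "defect":    {"cooperate": 0, "defect": 1}}
print(iterated_deletion(row_pay, col_pay))  # ({'defect'}, {'defect'})
```

For the prisoners' dilemma the procedure terminates after one pass per player, leaving "defect" for each---the unique solution the text describes.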
Very few interesting games will be solvable by deleting dominated strategies. (You don't need game theory to predict that people won't do obviously stupid things.) However, every game (meeting basic criteria) has at least one Nash equilibrium. It's sort of the ur-solution concept. (There are lots of other solution concepts out there, and I'll discuss a few later, but they are all subsets of the Nash equilibrium---everything that meets one of these other criteria is also a Nash equilibrium.)
A Nash equilibrium is very simple to describe: it's a set of strategies, one for each player, such that for each player i, if nobody else changes his or her strategy, player i can't improve his or her payoff by changing his or her strategy. (Note: most people introduce the notion of Nash equilibrium by way of the idea of a "best response." I don't really see the need to define a separate term for that, but if you care, read this.)
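The definition translates almost directly into code. A sketch of the Nash check for pure-strategy profiles in a two-player game (the payoff layout and numbers are illustrative, reusing the prisoners' dilemma):

```python
# A pure-strategy profile (r, c) is a Nash equilibrium if neither player
# can improve by unilaterally deviating. Payoff dicts: pay[row][col].

def is_nash(row_pay, col_pay, r, c):
    row_ok = all(row_pay[alt][c] <= row_pay[r][c] for alt in row_pay)
    col_ok = all(col_pay[r][alt] <= col_pay[r][c] for alt in col_pay[r])
    return row_ok and col_ok

row_pay = {"cooperate": {"cooperate": 3, "defect": 0},
           "defect":    {"cooperate": 5, "defect": 1}}
col_pay = {"cooperate": {"cooperate": 3, "defect": 5},
           "defect":    {"cooperate": 0, "defect": 1}}

print(is_nash(row_pay, col_pay, "defect", "defect"))        # True
print(is_nash(row_pay, col_pay, "cooperate", "cooperate"))  # False
```

Mutual cooperation fails the check because each player gains by unilaterally defecting; mutual defection passes, even though both players would prefer the cooperative outcome.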
For predictive purposes, the key point about Nash equilibrium is that it would usually be silly to expect rational players to choose strategy sets that aren't Nash equilibria. If a strategy set isn't a Nash equilibrium, then at least one player can do better by switching, so why would that player not do so?
Many games have more than one Nash equilibrium. Some games technically have an infinite number. This raises a notoriously difficult problem, that of equilibrium selection, about which game theorists have spilled immense amounts of ink.
A game can have as many steps as you like, but sometimes those steps are repeated. An important class of games is those that are just one single-round game (like the soccer example), but repeated. A game can be repeated a finite and known number of times, a finite and unknown number of times (indefinite repetition), or an infinite number of times. Often finitely repeated games are fairly easy to solve. (In a future post: backward induction and subgame perfect equilibrium, the basic tools for doing so.) Indefinitely and infinitely repeated games are in some sense even easier, and in some sense even harder. They're easier because a mad set of proofs known collectively as the "folk theorem" shows that a huge class of infinite/indefinite repeated games have a vast number of potential solutions. Above, where I said some games have infinite equilibria? Here's a good place to find some. And that, of course, is the harder sense: there isn't much that can be reliably predicted even for rational players.
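A toy sketch of repeated play, assuming the standard illustrative prisoners' dilemma payoffs and two made-up strategies, shows how strategies can condition on what happened in earlier rounds:

```python
# One-shot prisoners' dilemma, played round after round. "C" cooperate,
# "D" defect; payoffs are the standard illustrative numbers.
PD = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
      ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(my_hist, their_hist):
    """Cooperate first, then copy the opponent's last move."""
    return "C" if not their_hist else their_hist[-1]

def always_defect(my_hist, their_hist):
    return "D"

def play_repeated(s1, s2, rounds):
    h1, h2, score1, score2 = [], [], 0, 0
    for _ in range(rounds):
        m1, m2 = s1(h1, h2), s2(h2, h1)
        p1, p2 = PD[(m1, m2)]
        h1.append(m1)
        h2.append(m2)
        score1 += p1
        score2 += p2
    return score1, score2

print(play_repeated(tit_for_tat, always_defect, 5))  # (4, 9)
```

Tit-for-tat gets exploited once and then both defect forever; against itself, it sustains cooperation every round. History-dependent strategies like this are exactly what give indefinitely repeated games their enormous space of possible equilibria.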
Wednesday, January 21, 2015
Experimental Game Theory Series #1 of ???
I'd like to try an experiment: methodological propaganda/skillsharing in a series of blog posts. I had originally planned a fairly large number of these and essentially an internet course in basic game theory, but then the 20th of the month snuck up on me, and there's very little chance the whole thing gets out before my blogging residency (such as it is) runs out. So let's get as far as we can, and see how people like these posts; if they prove popular, perhaps they can continue somewhere else. (I'm also totally hijacking the "games" category on the blog for this. Because, obvs.)
With no further ado: an introduction to game theory for lawyers/law professors, post 1 of N: why?
I'm a huge fan of game theory as an intellectual tool. It provides a surprising amount of analytical punch at reasonable, lawyer-level amounts of math---you can make useful models that shed important light on the social world with nothing beyond basic algebra. (It also scales up to math way above my pay grade, including alarming things like linear programming.) And while many involved in the legal enterprise make use of the tool---mostly L&E folks---there are many who could benefit from it but find the jargon and the formalization intimidating, or are put off by the policy agendas associated with many who use the tool ("efficiency!").
So I will spend part of my blawggey bandwidth this month setting out some basics of game theory that, I hope, will be suitable for academic readers with no training in the subject to put to immediate use in their own scholarship (with suitable feedback from specialists, of course), and potentially also to introduce to students in courses that can benefit from it (e.g., anything with negotiation or regulation on the agenda). The intended audience is academics with no formal training in formal modeling beyond the sorts of references that may appear in undergraduate economics (at the introductory level) and political science courses. Readers with training in the subject will find this series of posts terribly uninteresting. Also, the posts will begin quite elementary and become fancier over time; today, I will begin with the absolute basics: what game theory is, and why you might want to use it.
1. What is Game Theory?
Game theory is, in essence, the study of rational strategic preference-optimizing behavior. Its space is best carved out in contrast to its counterpart, decision theory, which has the same subject, minus the "strategic."
Decision theory is just math about how to get what you want. More wonkily, it imagines an actor who views a set of probabilistically arranged states of the world, and can rank-order those states in terms of his or her preferences (i.e., I prefer a .5 chance at state A to a certain chance of state B, and so forth). Thanks to the math underlying Von Neumann-Morgenstern utility, we know that a minimally rational ordering of such states allows us to get numbers on an interval scale, which we can call "utility." (Philosophers, economists, and the like have started a bunch of fights about what the notion of utility actually means; we can mostly leave them aside until such time as we want to use it to talk about things like "efficiency.") Then, essentially, decision theory is a fancy set of tools to flesh out the prediction that people will take the actions that lead them to their highest expected utility---the actions that bring it about that the sum of the utility numbers, weighted by the probability of the states that yield them, is largest.
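That prediction is nearly a one-liner in code. A minimal decision-theory sketch, with made-up actions, probabilities, and utility numbers:

```python
# Decision theory's core prediction: choose the action with the highest
# probability-weighted utility sum. All numbers here are invented.

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one action."""
    return sum(p * u for p, u in outcomes)

choices = {
    "carry umbrella": [(0.3, 5), (0.7, 4)],  # rain / no rain
    "leave it home":  [(0.3, 0), (0.7, 6)],
}

best = max(choices, key=lambda a: expected_utility(choices[a]))
print(best)  # carry umbrella: 0.3*5 + 0.7*4 = 4.3 beats 0.3*0 + 0.7*6 = 4.2
```

Everything interesting in decision theory lives in where those probabilities and utilities come from; the maximization itself is trivial.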
Game theory takes the same actor, with the same properties, and introduces another player---a second agent, also with those properties. It then makes the paths from those agents' actions to the states of the world over which they have preference orderings dependent on one another's behavior. It is this interdependency to which we refer when we say that the subject of game theory is "strategic" action.
To see strategic action, consider the classic tale, "The Gift of the Magi." We all know it: it's the one about the poor husband and wife who give self-sacrificial gifts to one another: the husband sells his watch to buy the wife fancy combs for her hair, and the wife sells her hair to a wigmaker to buy the husband a fancy chain for his watch. In addition to being a heartwarming Christmas love story, yadda yadda, it's also a story of strategic action gone wrong: the outcome of each gift depended on what the other spouse did, but neither took that into account in making his/her own decision. Some game theory could have helped this couple.
The question "why game theory" is really the question "why (formal, sorta-mathematical-but-not-as-mathey-as-those-weirdos-in-econ) modeling?" The short version is that it generates predictions about how people will behave, and those predictions (flawed though they may be; abstractions from reality always are) can be useful to your scholarly enterprise in several different ways:
1. Sometimes you can just take a game off the shelf.
Surprisingly often, having a basic familiarity with the classic games---the prisoners' dilemma, the battle of the sexes (game theory having been invented by a bunch of cold warriors and mathematicians in the 20th century, its legacy includes some unfortunate names with sexist connotations), the stag hunt, chicken---can spark a flash of insight. You might be considering a policy problem, and notice that the people involved have payoffs that resemble those assumed in one of the classic games. Great: you have a pretty good first-pass prediction about what's going to happen, and a pretty good first-pass idea about what might need to be changed to change the outcome. You might also be able to generate more insight by stating the conditions under which the payoffs actually resemble the game in question. [SHAMELESS SELF-PROMOTION ALERT:] My wonderful colleague Maya Steinitz and I have a paper that does just that in the context of corrupt transnational litigation.
2. Intuitions are unreliable.
We all have intuitions about the incentives that a given institutional structure or policy creates for those who interact with it. More than once, however, I've been pretty convinced that system X created outcome Y, tried to prove it formally, and found that I'm unable to formalize the intuition---sometimes because the result the analysis yields actually is the opposite of what I had predicted. This is obviously important, and shows how modeling can provide a rough and ready way of testing our beliefs about the world.
3. Push the intuitions a step further.
Even if your intuitions are reliable, how far out do they go? You may have an intuition about what happens when people interact once, but are the intuitions as strong or as reliable when they interact multiple times? Sometimes, the math can keep going when the intuitions run out.
4. Generate and refine empirically testable hypotheses
Another advantage of formal modeling is that it allows us to see the effects of a variety of candidate causal factors on a given behavioral outcome. By explicitly specifying the utility functions that generate agents' payoffs, we can identify what things to take to our data.
5. Find policy levers.
Back in #1 above, I said that stating the conditions under which the payoffs resemble a given game can add insight to the world. But "insight" is never enough for legal scholarship---we (well, someone---law review editors? tenure committees? John "I hate Kant and Bulgaria" Roberts?) always demand some kind of doctrinal or policy payoff at the end. Here's one way to get it: now that you know the conditions under which the players have an incentive to do X, you have at least one if not several candidates for places where policymakers can intervene in order to bring about/abolish X-doing. And having all the different things that feed into the incentives laid out before you in neat mathematical form (as per the previous item) can allow you to see policy options that may previously have escaped notice. ("Congress will have an incentive to kick puppies so long as the price of tea in China is greater than the number of votes for Scottish independence, so down with the Union!")
So there's my brief for why game theory is worth caring about. The details are for subsequent posts.
Wednesday, October 08, 2014
Zombies Defeat Tort Law
It's always a shame to let a Prawfs guest stint go by without working in zombies. Maybe there's just something in the air. The Walking Dead is returning to my DVR box (any series which once starred a law professor's kid can't be all bad). Maybe it's that I'm still hoping a review copy of Zombie in the Federal Courts will arrive.
So next week, my college's campus gets taken over by a game called "Humans v. Zombies." According to this article in the student newspaper, all campus needs to prepare itself, because hordes of people shooting each other with nerf guns and tagging each other with two hands are about to descend. What could possibly go wrong?
A bit, as the plaintiff learned in Brown v. Ohio State University, 2012 WL 8418566.
Plaintiff attended Parent's Weekend at Ohio State University's Columbus campus. Why not go on a midnight Ghost Tour? Unfortunately, President Obama was on campus that week, so his limo needed an escape route, which obviously meant putting a double layer of plywood on sidewalks (somebody should fire someone from the Secret Service or something). Anyhow, plaintiff tripped on that hazard, broke her arm, and filed suit.
Why didn't she see the plywood so evident on the sidewalk? Because a nearby "game of humans vs. zombies being played by students ... diverted her attention."
Zombies 1, Humans 0
Though of course, having been distracted by the zombies, she was able to avoid the application of the "Open and Obvious" doctrine and escape summary judgment -- genuine issues of material fact existed on "whether attendant circumstances overcome application of the open and obvious doctrine".
Monday, May 28, 2012
Law as Plinko
My last moments in the classroom this past semester were spent engaging in what is likely a familiar exercise for most law professors -- trying to inspire students and leave them with some parting words of wisdom, encouragement, and motivation. I look forward to these moments, and hope that my last-minute ramblings help bring together the general themes of the course and, more broadly, replenish their passion for the law to the extent that specific and more immediate parts of their experience -- such as Socratic conversations, lengthy readings, and concerns about the final examination -- have them questioning why they are in law school and are incurring debt in the process. To quote Michael Scott, I might as well tell my students on the last day of classes to "get as much done as you can... because, afterward, I'm going to have you all in tears."
This semester, I discussed what I attempted to accomplish in the course and apologized to the extent that I fell short of their expectations. I revealed to them what led me to study the law, and why I am continually fulfilled and humbled by my pursuit to understand the law and the law's role in society. In my constitutional law course, I read to my students Neal Katyal's comments after Hamdan, celebrating the rule of law and how it distinguishes us from other political communities. I also asked my students whether anyone has seen The Godfather. Predictably, all hands were raised. When I asked what the first line of the movie is, no hands went up. The first line is, "I believe in America." I explained candidly why I believe in America, and it is specifically because of the structure of the Constitution that they just (hopefully) learned about and also because they will be active participants in that structure, seeking to improve the law and society.
I also, in a rather light portion of my semester-ending remarks, share my fun theory of the law -- that the law is like Plinko. Yes, Plinko. An explanation follows:
It seems to me that the law is similar -- the facts of a case are like the chips, the pegs are established cases that the facts must work through, and the slot where the chip lands is the result that the court eventually hands down (e.g., granting or denying a motion, reversing or affirming a decision). What, I believe, we do in law school is also related -- we attempt to ensure that students understand the pegs (the applicable precedents), how they have evolved or shifted over time, and the critical facts and context that help explain where the pegs are. In general, in a Socratic exercise and on the final examination, students entertain a modified or new fact pattern, and analyze how those facts may "fit" in the existing framework. We give students random fact patterns because it is unlikely that, in practice, they will receive a factual problem that is identical in all respects to an established case. They must have a substantive foundation -- an understanding of the precedents -- and the skills -- how to research, write, and argue -- in order to properly assess how the new facts may work their way through the relevant cases and to then be able to advocate, on behalf of their client, for how those facts should work their way through the prior cases. This is why I refer to cases as guideposts -- they are the pegs that set the general bounds within which certain issues will be examined and resolved.
Further, students, equipped with an understanding of the law and the tools to analyze and advocate, can argue for why the guideposts should and must change. Here is where they can become agents for broad social change -- by removing and reconstructing the guideposts that previously constrained and dictated how certain issues would be reviewed. Again, in order to do this, students need the substantive foundation in the law and the skills with which to dissect cases and propose new legal principles. The study of legal doctrine and professional skills may seem tedious, slow, and boring at times, but is critically necessary if students are to one day be effective representatives of their clients' interests and/or instruments of robust changes in the law and society.
This rather informal way of looking at the law as Plinko seems consistent with Holmes's theory of law as prediction. When a contestant puts that chip down on the board, one does not know where it will land; at best, one can develop some sense as to where it may land given certain data points. Similarly, armed with a set of facts, an attorney can offer only his or her prediction as to how a certain judge will apply certain guideposts, and what the outcome will be.
Law as Plinko also may help one appreciate the different aspects of the legal process. Whereas the top pegs may be akin to standards for the sufficiency of a complaint and jurisdictional issues, later pegs may be akin to guideposts governing whether the facts should survive a motion for summary judgment, and the final pegs akin to the standards on the merits of a legal issue. This theory also emphasizes framework and process, whereas students tend to focus on the result (e.g., who "won" and who "lost").
It doesn't leave them in tears, but students seem nonetheless to enjoy this admittedly nutty way of viewing the law.
Thursday, February 02, 2012
Copyright and the Romantic Video Game Designer
My friend Dave is a game designer in Seattle. He and his friends at Spry Fox made an unusually cute and clever game called Triple Town. It's in the Bejeweled tradition of "match-three" games: put three of the same kind of thing together and they vanish in a burst of points. The twist is that in Triple Town, matching three pieces of grass creates a bush; matching three bushes creates a tree ... and so on up to floating castles. It adds unusual depth to the gameplay, which requires a combination of intuitive spatial reasoning and long-term strategy. And then there are the bears, the ferocious but adorable bears. It's a good game.
Now for the law. Spry Fox is suing a competing game company, 6waves Lolapps, for shamelessly ripping off Triple Town with its own Yeti Town. And it really is a shameless ripoff: even if the screenshots and list of similarities in the complaint aren't convincing, take it from me. I've played them both, and the only difference is that while Triple Town has cute graphics and plays smoothly, Yeti Town has clunky graphics and plays like a wheelbarrow with a dented wheel.
I'd like to come back to the legal merits of the case in a subsequent post. (Or perhaps Bruce Boyden or Greg Lastowka will beat me to it.) For now, I'm going to offer a few thoughts about the policy problems video games raise for intellectual property law. Games have been, if not quite a "negative space" where formal IP protection is unavailable, then perhaps closer to zero than high-IP media like movies and music. They live somewhere ambiguous on the spectrum between "aesthetic" and "functional": we play them for fun, but they're governed by deterministic rules. Copyright claims are sometimes asserted based on the way a game looks and sounds, but only rarely on the way it plays. That leads to two effects, both of which I think are generally good for gamers and gamemakers.
On the one hand, it's well established that literal copying of a game's program is copyright infringement. This protects the market for making and selling games against blatant piracy. Without that, we likely wouldn't have "AAA" titles (like the Grand Theft Auto series), which have Hollywood-scale budgets and sales that put Hollywood to shame. Video games have become a major medium of expression, and it would be hard to say we should subsidize sculpture and music with copyright, but not video games. Spry Fox would have much bigger problems with no copyright at all.
On the other hand, the weak or nonexistent protection for gameplay mechanics means that innovations in gameplay filter through the industry remarkably quickly. Even as the big developers of AAA titles are (mostly) focusing on delivering more of the same with a high level of polish, there's a remarkable, freewheeling indie gaming scene of stunning creativity. (For some random glimpses into it, see, e.g. Rock, Paper, Shotgun, Auntie Pixelante, and the Independent Games Festival.) If someone has a clever new idea for a way to do something cute with jumping, for example, it's a good bet that other designers will quickly find a way to do something, yes, transformative, with the new jumping mechanic. Spry Fox benefited immeasurably from a decade's worth of previous experiments in match-three games.
The hard part is the ground in between, and here be knockoffs. Without a good way to measure nonliteral similarities between games, the industry has developed a dysfunctional culture of copycattery. Zynga (the creator of Farmville and Mafia Wars) isn't just known for its exploitative treatment of players or its exploitative treatment of employees, but also for its imitation-based business model. Game developers who sell through Apple's iOS App Store are regularly subjected to the attack of the clones. In Spry Fox's case, at least, it's easy to tell the classic copyright story. 6waves is reaping where it has not sown, and if Triple Town flops on the iPhone because Yeti Town eats its lunch, at some point Dave and his colleagues won't be able to afford to spend their time writing games any more.
I've been thinking about this aspect of the copyright tradeoff recently. One way of describing copyright's utilitarian function is that it provides "incentives to produce creative works." That summons up an image of crassly commercial authors who scribble for a paycheck. In contrast, we sometimes expect that self-motivated authors, who write for the pure fun of it, will thrive best if copyright takes its boot off their necks. But a better picture, I think, is that there are plenty of authors who are motivated both by their desire to be creative and also by their desire not to be homeless. The extrinsic motivations of a copyright-supported business model provide an "incentive," to be sure, but that incentive takes the form of allowing them to indulge their intrinsic motivations to be creative. In broad outline, at least, that's how we got Triple Town.
I'm not sure where the right place to draw the lines for copyright in video games is. I'm not sure that redrawing the lines wouldn't make things worse for the Daves of the world: giving them greater rights against the 6waves might leave them open to lawsuits from the Zyngas. But I think Triple Town's story captures, in miniature, some of the complexities of modern copyright policy.
Wednesday, February 01, 2012
Puzzles for Lawyers
Every year, for what I at least consider a fun time, I go to MIT for the annual Mystery Hunt, a 48-hour team puzzle competition. There are crosswords, logic puzzles, puns and wordplay, and much, much more. I'd like to explain why a fair number of lawyers (there are four on my team alone) find this stuff fun; as examples, I'll use a pair of puzzles that connect back to the law.
This year, I was part of the group writing the Hunt, so I wanted to sneak in a bit of legal silliness. "Tax ... in ... Space" was the result. It's the puzzle equivalent of a shaggy dog joke: a parody of a tax form with absurdly complicated instructions. The tax "law" is completely made up, of course, but I added a bunch of in-jokes for people who've had at least a basic course in tax. Here's a sample:
(f) The illudium phosdex exploration quasi-credit shall be equal to the sum of wages and tips, Capital Gains, lower-case gains, and income from the sale of bitcoins, less the amount of remote backup withholding, if any, except that if the illudium phosdex exploration quasi-credit so computed exceeds 200,000, the illudium phosdex exploration quasi-credit shall instead be equal to half of twice the Robocop statue construction checkoff.
Last year's Hunt also had a very nice (and quite funny) puzzle called "Unnatural Law." It took the form of a narrative by "HistoryBot-2225121561375435" of how sentient robots overthrew and oppressed humanity. Each paragraph described some awful thing the robots did to their human underlings, e.g.:
The robot overlords greatly disliked allowing their human prisoners to be released while awaiting judgement. Not wanting to destroy all hope immediately--for where was the fun in destroying a human's spirit too quickly?--they instead computed the maximum amount of money a human could obtain and set the release fee at twice that amount. This had the unfortunate effect of increasing the number of humans incarcerated. Initially, the robots addressed this by packing humans five hundred to a cell, but that was insufficient. Next, the robots halved the size of human containment pens, keeping the number of humans in each pen the same. They found that doing this doubled the stress level in the containment pen, which the overlords considered a pleasant side effect.
I'll explain how this particular puzzle worked after the jump, so that anyone who wants to try their hand at it without hints isn't spoiled.
The first "aha" in solving the puzzle was to notice that in each paragraph, the robots violated a Constitutional amendment. In the paragraph above, for example, the robots are running roughshod over the Eighth Amendment by requiring excessive bail (twice the "maximum amount of money a human could obtain"). It turns out that each paragraph refers to a different amendment.
The second "aha" was to notice that each paragraph also refers to a scientific "law." In the one above, the robots discover that keeping a fixed number of humans in a pen of half the size results in double the stress. That's just a disguised version of Boyle's Law: halving the volume in which a fixed amount of a gas is contained doubles the pressure. Again, each paragraph refers to a distinct scientific law or theorem.
Now for the WTF step, the one that doesn't start to seem natural until you've been solving Mystery Hunt-style puzzles for a while. The amendments have unique numbers but not names, which suggests that they might represent some kind of order. The scientific laws have names but not numbers, which suggests that they might be a source of text. And HistoryBot's name is a random-looking collection of digits, which suggests that it could be a bunch of indices into some other text: that is, instructions telling you to take the 2nd letter of the first phrase, then the 2nd letter of the second phrase, and so on until the 5th letter of the last phrase. Putting it all together, then, you have a bunch of phrases (the scientific laws' names), an order for those phrases (by amendment number), and a specific letter to pull out from each (the digits in HistoryBot's name).
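The extraction mechanics can be sketched in a few lines. The phrases and indices below are invented for illustration, not the actual puzzle data:

```python
# Ordered phrases + one index per phrase -> hidden answer.
# These phrases and indices are made up; the real puzzle's data differ.
phrases = ["boyles law", "ohms law", "hookes law"]  # sorted by amendment number, say
indices = [4, 7, 10]                                # digits from the bot's name

# Take the i-th letter (1-indexed) of each phrase, in order.
answer = "".join(phrase[i - 1] for phrase, i in zip(phrases, indices))
print(answer)  # law
```

The whole trick of the puzzle is that nothing tells you to do this; the structure of the data (numbers without names, names without numbers, digits without meaning) has to suggest it.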
Getting to this point in an actual Hunt might take a group of focused solvers an hour or two. Some of that time would be spent staring at the puzzle waiting for the first aha, and probably somewhat more staring at it waiting for the second one. Then some laughs as the first, more recognizable identifications give way, followed by some head-scratching and occasional minor flashes of insight as the rest gradually make sense. And then, as the final answer emerges, a feeling of real satisfaction. It's a great experience, one that draws on some of the mental habits that bring some people to law school, but is also an enjoyable change of pace from it. In other words, yes, this is an event for those crazy people whose favorite part of the LSAT was the logic games.
Thursday, June 16, 2011
Coming soon to a theatre near you ...
"Moneyball" the movie. The moneyball concept gets a lot of play in the realm of academic hiring and performance analysis. Of course, that gets no play in this movie - but if Brad Pitt plays moneyball general manager Billy Beane, then who is Billy Beane in law and what actor plays him in Moneylaw the movie?
Tuesday, June 07, 2011
Is deliberation overrated?
I'm not saying that deliberation is necessarily overrated, but I'm starting to wonder about its relative value. In recent years I've read a number of books and articles on the decision-making processes of groups, such as James Surowiecki's The Wisdom of Crowds (2005) and Cass Sunstein's Infotopia: How Many Minds Produce Knowledge (2008), and found them to be very interesting and insightful. Both of these books at least suggest the possibility that group decision making may not always be better with group deliberation.
Of course, to suggest that something is 'overrated' typically implies that it is somewhat highly rated in the first place. When I look around, I see deliberation everywhere - government decisions, academic committee decisions, tenure decisions, where to eat lunch, jury outcomes, Supreme Court outcomes (ok, only to a degree on that one). I think it's fair to say that deliberation is cherished in this country. But is it all that it's cracked up to be? What are its attributes? How do we evaluate its worth (relative to other systems)?
For a bit of class fun last semester, I tried a class exercise that was suggested by one of my readings on this subject. I divided the class into three groups of equal size: 1) the deliberation group, 2) the secret vote group, and 3) the list vote group. I then held up for the class to see (all had roughly equal views) a glass container of paper clips. They were able to view the container for 30 seconds. I then asked the groups to decide how many paper clips were in the container. The secret vote group was to do just that - each person would make a guess, write it down in private, and their estimates would be averaged. The list group would use a list - the first person to decide would write their estimate at the top of the list and the estimates would go from there (everyone could see the prior estimates) - and they were averaged. The deliberation group deliberated on the best estimate and used a consensus decision rule on the number of paper clips.
The results? The best estimate was by the secret vote group, followed by the list group, and the worst estimate (by far) was by the deliberation group. Of course, this little exercise is hardly ready for scientific peer review and was done primarily for fun and to introduce the class to varying decision methods. However, given the prevalence of deliberation in our society, might it give us pause to think about whether it's 'overrated'? I'm not sure. Certainly there are other considerations at issue (e.g. how the process makes participants feel). But I thought I'd see what Prawfs readers thought.
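For what it's worth, the statistical intuition behind the secret-vote group's win is easy to demonstrate. Here's a toy simulation in Python - not a model of the actual classroom exercise, just an illustration of how averaging many independent, unbiased guesses washes out individual error. The "true" paper-clip count and the error spread are made-up numbers:

```python
# Toy wisdom-of-crowds illustration: each guesser is noisy but unbiased,
# so the average of many independent guesses lands much closer to the
# truth than the typical (let alone the worst) individual guess.
import random

random.seed(42)
TRUE_COUNT = 500   # hypothetical number of paper clips
SPREAD = 150       # hypothetical maximum individual error

def independent_guesses(n):
    # Truth plus a symmetric error, drawn independently for each guesser.
    return [TRUE_COUNT + random.uniform(-SPREAD, SPREAD) for _ in range(n)]

guesses = independent_guesses(30)
avg = sum(guesses) / len(guesses)
group_error = abs(avg - TRUE_COUNT)
worst_individual_error = max(abs(g - TRUE_COUNT) for g in guesses)

print(f"group average error:    {group_error:.1f}")
print(f"worst individual error: {worst_individual_error:.1f}")
```

The catch, and what this sketch can't capture, is that the averaging trick depends on the guesses being independent - which is exactly what the list group's visible estimates, and the deliberation group's consensus process, undermine.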
Posted by Jeff Yates on June 7, 2011 at 11:58 AM in Criminal Law, Deliberation and voices, Games, Judicial Process, Law and Politics, Legal Theory, Life of Law Schools, Science, Teaching Law | Permalink | Comments (3) | TrackBack
Wednesday, April 27, 2011
More Prawfs Lawfs!
It's time for more Prawfs Lawfs riddles!
Why wouldn't the dean let the rookie professor teach Oil & Gas Law?
– She wanted someone with the rank of fuel professor.
Why did students find the tax professor's lectures disgusting?
– She kept talking about gross income.
What did the professor of commercial law say when the student asked him whether his class on Article 4 would be worthwhile?
– “You can bank on it.”
Monday, April 18, 2011
It's Time for Prawfs Lawfs!
It's Prawfs Lawfs! Get ready for hilarious riddles!
Why did the Trusts & Estates professor quit?
– He couldn't find the will to continue.
Why was the Pretrial Advocacy professor fired?
– Her students drafted too many complaints.
What advice was the French law professor given when he started his job at an American law school?
– “It's publish or Paris.”
Sunday, August 29, 2010
PrawfsPuzzler: Law Prawfs Crawsword!
There's only one special thing to note: In keeping with the hide-the-ball and antique-language traditions of law school, no warning is given in the clue when Latin is required.
1. Hypothetical estate
7. For a soft-spoken prof.
8. A grade awarded at some universities for academic dishonesty or lack of attendance
9. Our subject
10. Art. 4 governs bank deposits.
12. HMO protector from 1974
13. Holmes explained that early legal systems emphasized vengeance. An example was the stoning or surrendering of this animal when one did harm.
14. Your laptop-toting student may not have brought one.
16. Burma, Liberia, and the U.S.A. are the holdouts.
17. The first American law school, established 1773, in Connecticut
21. Said of a mind
22. You may be asked for one from a student wanting a fed. clerkship.
23. An association of members, a place where people drink, or something you could be bludgeoned with, but it's not "club."
24. 18 U.S.C. purveyor
26. May inhabit a utility easement in a condominium tower
27. A guardian __ litem is court-appointed to look after the legal interests of another.
28. Found in bankruptcy captions
30. Applicable to about half of American law schools, it's longer now, thanks to the Energy Policy Act of 2005.
1. Key no. in DUI cases
2. Wielded in the assault case in 26 down
3. E.g., apples, oranges
4. Allow students to weigh in without a hand up
5. The thing
6. Is this going to be on the ______?
7. Traditional color of law for regalia
11. There wasn't one in Dougherty v. Salt.
12. Impeachment data on a JPEG
13. An act and an agency, under DOL, that can literally make you CYA.
15. If you've read one trade secret case, it's probably ___ duPont deNemours & Co., Inc. v. Christopher.
18. This means the S.Ct. will hear you. But I'd check your breath anyway.
19. It's what you're supposed to do to think like a lawyer.
20. Home to six law schools that are surrounded by, but not in, the 4th Cir.
23. This kind of nipping didn't justify vague vagrancy ordinances in Papachristou v. City of Jacksonville.
24. ADR practitioners
25. The ABA-approved law school closest to Canada, which is about 750 yards due south.
26. I __ S et Ux. v. W __ S
29. This Meese headed a 1980s report recommending stricter obscenity laws.
Saturday, July 31, 2010
PrawfsPuzzler: Law Trivia Sudoku!
Here's the deal: this sudoku puzzle is quite hard ... unless you correctly answer the questions below about the law. The answers fill in the blanks, helping you solve the sudoku. If you get all the answers, solving the rest of the puzzle should be no problem. Hey, you didn't go to law school because you were good at math!
This works just like a regular sudoku, except that instead of using the digits 1 through 9, as most sudokus do, in this puzzle we'll do it computer-programmer style and start with 0, going up through 8.
Instructions: Fill the blank spots in the grid so that each column, row, and bolded square contains one and only one of the numbers 0, 1, 2, 3, 4, 5, 6, 7, and 8.
A The Magnuson-Moss Warranty Act was enacted in 19__.
B This rule allows the exclusion of relevant evidence on the basis of unfair prejudice.
C This is the most recent amendment to the U.S. Constitution.
D This section of the Securities Exchange Act of 1934 allows recovery of short-swing profits by officers and directors.
E This rule allows motions on the basis of the pleadings.
F This title of the U.S. Code concerns patents.
G This title of the U.S. Code deals with the judiciary.
H The Voting Rights Act of 19__ prohibited the administration of literacy tests.
I This rule permits testimony by experts if scientific, technical, or other specialized knowledge will assist the trier of fact.
J This chapter of the Bankruptcy Code allows individuals to save their homes from foreclosure.
K This title of the U.S. Code sets forth crimes and criminal procedure.
L The Copyright Act of 19__ forms the basic statutory framework for current copyright law.
M This rule defines relevant evidence.
N This rule permits summary judgment.
O This title of the U.S. Code concerns immigration, aliens, and nationality.
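Incidentally, the 0-through-8 rule is easy to state in code. Here's a quick validity checker in Python - the grid format (a 9x9 list of lists of ints) is my own assumption, not part of the puzzle:

```python
# Validity check for the 0-through-8 sudoku variant: every row, column,
# and 3x3 box must contain each of 0..8 exactly once.

DIGITS = set(range(9))  # 0 through 8, computer-programmer style

def is_valid_solution(grid):
    rows = all(set(row) == DIGITS for row in grid)
    cols = all({grid[r][c] for r in range(9)} == DIGITS for c in range(9))
    boxes = all(
        {grid[r][c] for r in range(br, br + 3) for c in range(bc, bc + 3)} == DIGITS
        for br in range(0, 9, 3) for bc in range(0, 9, 3)
    )
    return rows and cols and boxes

# A known-valid grid, built from the standard cyclic-shift pattern.
valid = [[(3 * (r % 3) + r // 3 + c) % 9 for c in range(9)] for r in range(9)]

# Swapping two cells in a row keeps the row valid but breaks a column.
broken = [row[:] for row in valid]
broken[0][0], broken[0][1] = broken[0][1], broken[0][0]

print(is_valid_solution(valid))   # prints True
print(is_valid_solution(broken))  # prints False
```

Nothing about the logic changes from ordinary 1-through-9 sudoku, of course - it's the same puzzle with every entry shifted down by one.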
Monday, July 26, 2010
More Flip-Flop Puzzles - For Fun and a Fabulous Prize!
Here it is! Another set of prawf-themed flip-flop puzzles! (Previous installment here.)
[UPDATE: We have a winner!]
And this time, I'm giving away a prize! The first law professor to e-mail me with the correct answer to each of the three flip-flop puzzles below will receive a box of CHIPPERS - chocolate-covered potato chips made right here in North Dakota at George Widman's candy shop in downtown Grand Forks!!!
Open to all U.S. resident full- or part-time law professors who are willing to have their name and school announced here on PrawfsBlawg. (Puerto Rico welcome. Void where prohibited. Contest ends when a winner is declared, July 31, 2010, or whenever I stop feeling like giving away candy, whichever occurs first. For a full list of rules, draft something, and I'll take a look at it. N.B.: Hand delivered entries will be composted for next year's sugar beet crop.) In case I don't get a winner right away, I will provide new clues each day by revising this post, around midday. If you want to see the answers, come back after I get a winner.
Instructions: Using the clues provided, complete the blanks below to create a chain of words, where the next word in the series is formed by adding, deleting, or changing a single letter from the word before. So, for the clue "a musical floor swab," the answer could be "HIP HOP MOP." For "despise discussion of headwear," the answer could be "HATE HAT CHAT."
a testamentary instrument expiring upon the testator's attainment of substantial stature:
_ _ _ _
_ _ _ _
a contest over illuminated billboards under the First Amendment:
_ _ _ _ _
_ _ _ _ _
what a patent attorney must do for an inventor of a rotating machine part for harvesting bivalves:
_ _ _ _
_ _ _
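If you'd like to check your own chains, the one-letter rule is just "edit distance one" in disguise. Here's a sketch in Python (the function names are mine, not part of the puzzle):

```python
# The flip-flop rule: each word is formed from the previous one by
# adding, deleting, or changing exactly one letter.

def one_edit_apart(a, b):
    """True if b can be formed from a by one insertion, deletion, or
    substitution of a single letter."""
    if abs(len(a) - len(b)) > 1 or a == b:
        return False
    if len(a) > len(b):
        a, b = b, a  # make a the shorter (or equal-length) word
    i = j = edits = 0
    while i < len(a) and j < len(b):
        if a[i] == b[j]:
            i += 1
            j += 1
        else:
            edits += 1
            if edits > 1:
                return False
            if len(a) == len(b):
                i += 1  # substitution: advance both words
            j += 1      # insertion: advance only the longer word
    return True  # any leftover letter in b is the one insertion

def is_flip_flop_chain(words):
    return all(one_edit_apart(w1, w2) for w1, w2 in zip(words, words[1:]))

print(is_flip_flop_chain(["HIP", "HOP", "MOP"]))    # prints True
print(is_flip_flop_chain(["HATE", "HAT", "CHAT"]))  # prints True
```

Generating the chains from a clue, alas, still requires an actual brain.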
Monday, June 21, 2010
PrawfsPuzzler: Flip-Flop Puzzles
Using the clues provided, complete the blanks below to create a chain of words, where the next word in the series is formed by adding, deleting, or changing a single letter from the word before. So, for the clue "a musical floor swab," the answer could be "HIP HOP MOP." For "despise discussion of headwear," the answer could be "HATE HAT CHAT." Got it? Now try these prawf-themed flip-flop puzzles:
an A+ in the spring semester for a 3L:
_ _ _ _
_ _ _ _ _
citing to an overruled case in the footnotes:
_ _ _
_ _ _ _
coastal real estate in fee simple with no easements or covenants, in the eyes of a grumpy property prawf:
_ _ _ _
_ _ _
_ _ _ _
(note: transposing "land" and "sand" is perfectly acceptable)
Sunday, November 01, 2009
Weekend Trivia Challenge: Island Law Schools
Here's another geography-based question for you.
Which ABA-accredited law schools are on islands?
University of Hawai‘i at Mānoa William S. Richardson School of Law
Saturday, October 24, 2009
Weekend Trivia Challenge: The Smallest Law School
Which ABA-accredited law school is the smallest in terms of student population?
Saturday, October 17, 2009
Weekend Trivia Challenge - Next-to-Last State Without an ABA-Approved Law School
A while ago, I asked what state was the only one without a law school. One commenter panned me for coming up with questions that were too easy. Okay, this one should at least be harder than that. But maybe not much ...
Of the 49 states with ABA-accredited law schools, which was the last state to get one? In other words, what was the next-to-last state without an ABA-accredited law school? And in what year was its first (and as yet only) law school approved by the ABA?
Nevada. The William S. Boyd School of Law at the University of Nevada, Las Vegas gained approval by the ABA in 2000 and was granted full ABA accreditation in February 2003. UNLV was granted membership in the Association of American Law Schools in January 2004.
For a new law school, UNLV got good fast. It is currently ranked 75th in the U.S. News rankings.
Saturday, October 10, 2009
Weekend Trivia Challenge - Name this Law School
Saturday, October 03, 2009
Weekend Trivia Challenge: Latitude and Longitude Extremes
Which ABA-accredited law schools are at compass extremes? That is, which law school is the most southern, western, eastern, and northern?
Furthest South (mainland): University of Miami School of Law, in Coral Gables, FL, at 25° 43′ 17.92″ N, 80° 16′ 45.36″ W
Furthest West (mainland): University of Oregon School of Law, in Eugene, OR, at 44° 2′ 34.69″ N, 123° 4′ 9.44″ W
Sunday, April 19, 2009
Weekend Trivia Challenge: The Only State Without a Law School
Which is the only American state without an ABA-accredited law school?
Answer below the fold ...
Here's a bonus question: Which law school publishes the Alaska Law Review, a publication sanctioned by the Alaska Bar Association?
The answer is here.
Sunday, April 05, 2009
Weekend Trivia Challenge: First Regular Professorship of Law for Non-Undergrads
Which U.S. school established the first regular professorship of law for students other than undergraduates?
Answer below the fold ...
The University of Transylvania in Lexington, Kentucky, in 1798.
Today, Transylvania University is a liberal-arts college, with no law-school program. However, in 1865, Transylvania developed a publicly funded land-grant school which was eventually spun off as a separate institution. That spin-off is the University of Kentucky, which does, of course, have a law school.
UK's College of Law was founded in 1908, its heritage tracing back to Transylvania's 18th century professorship.
Monday, May 26, 2008
Weekend Trivia Challenge – The Biggest Law School
Which law school has the largest total enrollment of J.D. students, including both part-time and full-time?
Thomas M. Cooley Law School
in Lansing, Grand Rapids, and Auburn Hills, Michigan
Other law schools aren’t even close. In 2006, Cooley had 3,252 J.D. students. That was the most recent number I could find. According to Cooley’s current statistics, they have 3,723 total students enrolled, which includes LL.M. students in the Intellectual Property and Taxation programs.
The list of the top 10 schools in total J.D. enrollment, as of 2006, can be found here.
Bonus question: What school has the most full-time J.D. students (per U.S. News and World Report)?
Highlight this paragraph to see the hidden white text for the answer: Harvard Law School
Saturday, May 10, 2008
Weekend Trivia Challenge - Name this Law School
Can you name this law school?
Founded in 1834, it is the oldest law school in its state, and one of the oldest in the nation. Originally a private school, in 2000, it merged with a flagship public university. A bitter battle followed over whether the school should remain in its original locale or relocate to its new university's main campus.
In the end, both sides won. The law school retained its old home while building new facilities 80 miles away at the university's main campus. The bifurcated structure is, according to the school, "two completely unified, interconnected campuses".* The two-campus structure does, however, have its skeptics and critics.
First-year students select one campus, and all required courses are available there. In the 2L and 3L years, students can switch campuses to take advantage of unique programs or classes available only at the other campus. Upper-level students can also take classes from the remote location through a "highly sophisticated, suitable and advanced audiovisual telecommunications system."**
If you need another hint, highlight this paragraph to see the hidden white text: Included among this law school's alumni is the first secretary of the U.S. Department of Homeland Security.
So, what law school is this?
The Penn State Dickinson School of Law
The school's original home is in Carlisle, Pa., next to Dickinson College, a liberal-arts institution with which it was once affiliated.
The new location is in University Park, Pa., on the main Penn State campus. The new University Park building, under construction, looks something like a Rubik's Snake. It's quite an impressive structure. If you want to see it, you can watch this 3-D animation video. (WARNING: Contains extremely inspiring music.)
Sunday, April 27, 2008
Weekend Trivia Challenge - The Nation's Oldest Law Review
Which law review is the oldest?
Answer below the fold ...
The University of Pennsylvania Law Review
According to its website, the University of Pennsylvania Law Review is the nation’s oldest, founded in 1852. It was originally published as the American Law Register.
Sunday, April 20, 2008
Weekend Trivia Challenge - Top Cited Law Review Article of All Time
According to a 1996 study by Fred R. Shapiro published in the Chicago-Kent Law Review, which law review article is the most cited of all time in other law review articles?
Answer below the fold ...
"The Problem of Social Cost" by Ronald H. Coase
The Journal of Law & Economics, vol. 3, p. 1 (1960)
Not a big surprise, huh? Below is a list of the top 50. See Fred R. Shapiro, The Most-Cited Law Review Articles Revisited, 71 Chi.-Kent L. Rev. 751 (1996). The full list of the top 100 is in Shapiro's article.
By the way, Shapiro's article itself has done quite well, garnering 127 cites as of today. For comparison, the lowest ranked article to make Shapiro's top 100 had 204 cites.
Ronald H. Coase, The Problem of Social Cost, 3 J.L. & Econ. 1 (1960).
Herbert Wechsler, Toward Neutral Principles of Constitutional Law, 73 Harv. L. Rev. 1 (1959).
Gerald Gunther, The Supreme Court, 1971 Term--Foreword: In Search of Evolving Doctrine on a Changing Court: A Model for a Newer Equal Protection, 86 Harv. L. Rev. 1 (1972).
Charles A. Reich, The New Property, 73 Yale L.J. 733 (1964).
Oliver Wendell Holmes, Jr., The Path of the Law, 10 Harv. L. Rev. 457 (1897).
Abram Chayes, The Role of the Judge in Public Law Litigation, 89 Harv. L. Rev. 1281 (1976).
Robert H. Bork, Neutral Principles and Some First Amendment Problems, 47 Ind. L.J. 1 (1971).
Richard B. Stewart, The Reformation of American Administrative Law, 88 Harv. L. Rev. 1667 (1975).
Samuel D. Warren & Louis D. Brandeis, The Right to Privacy, 4 Harv. L. Rev. 193 (1890).
Duncan Kennedy, Form and Substance in Private Law Adjudication, 89 Harv. L. Rev. 1685 (1976).
Guido Calabresi & A. Douglas Melamed, Property Rules, Liability Rules, and Inalienability: One View of the Cathedral, 85 Harv. L. Rev. 1089 (1972).
Frank I. Michelman, Property, Utility, and Fairness: Comments on the Ethical Foundations of ‘Just Compensation’ Law, 80 Harv. L. Rev. 1165 (1967).
Marc Galanter, Why the ‘Haves' Come Out Ahead: Speculations on the Limits of Legal Change, 9 Law & Soc'y Rev. 95 (1974).
Joseph Tussman & Jacobus tenBroek, The Equal Protection of the Laws, 37 Cal. L. Rev. 341 (1949).
Stewart Macaulay, Non-Contractual Relations in Business: A Preliminary Study, 28 Am. Soc. Rev. 55 (1963).
John Hart Ely, The Wages of Crying Wolf: A Comment on Roe v. Wade, 82 Yale L.J. 920 (1973).
William W. Van Alstyne, The Demise of the Right-Privilege Distinction in Constitutional Law, 81 Harv. L. Rev. 1439 (1968).
Owen M. Fiss, The Supreme Court, 1978 Term--Foreword: The Forms of Justice, 93 Harv. L. Rev. 1 (1979).
Henry G. Manne, Mergers and the Market for Corporate Control, 73 J. Pol. Econ. 110 (1965).
Frank I. Michelman, The Supreme Court, 1968 Term--Foreword: On Protecting the Poor Through the Fourteenth Amendment, 83 Harv. L. Rev. 7 (1969).
William L. Prosser, The Assault Upon the Citadel (Strict Liability to the Consumer), 69 Yale L.J. 1099 (1960).
Anthony G. Amsterdam, Perspectives on the Fourth Amendment, 58 Minn. L. Rev. 349 (1974).
Robert H. Mnookin & Lewis Kornhauser, Bargaining in the Shadow of the Law: The Case of Divorce, 88 Yale L.J. 950 (1979).
Frank H. Easterbrook & Daniel R. Fischel, The Proper Role of a Target's Management in Responding to a Tender Offer, 94 Harv. L. Rev. 1161 (1981).
Henry M. Hart, Jr., The Supreme Court, 1958 Term--Foreword: The Time Chart of the Justices, 73 Harv. L. Rev. 84 (1959).
William J. Brennan, Jr., State Constitutions and the Protection of Individual Rights, 90 Harv. L. Rev. 489 (1977).
Henry M. Hart, Jr., The Power of Congress to Limit the Jurisdiction of Federal Courts: An Exercise in Dialectic, 66 Harv. L. Rev. 1362 (1953).
H.L.A. Hart, Positivism and the Separation of Law and Morals, 71 Harv. L. Rev. 593 (1958).
Laurence H. Tribe, Trial by Mathematics: Precision and Ritual in the Legal Process, 84 Harv. L. Rev. 1329 (1971).
Paul Brest, The Misconceived Quest for the Original Understanding, 60 B.U. L. Rev. 204 (1980).
John Hart Ely, Legislative and Administrative Motivation in Constitutional Law, 79 Yale L.J. 1205 (1970).
Roberto Mangabeira Unger, The Critical Legal Studies Movement, 96 Harv. L. Rev. 561 (1983).
Thomas I. Emerson, Toward a General Theory of the First Amendment, 72 Yale L.J. 877 (1963).
Alexander Meiklejohn, The First Amendment is an Absolute, 1961 Sup. Ct. Rev. 245.
Bruce J. Ennis & Thomas R. Litwack, Psychiatry and the Presumption of Expertise: Flipping Coins in the Courtroom, 62 Cal. L. Rev. 693 (1974).
Lon L. Fuller, Positivism and Fidelity to Law--A Reply to Professor Hart, 71 Harv. L. Rev. 630 (1958).
Henry M. Hart, Jr., The Relations Between State and Federal Law, 54 Colum. L. Rev. 489 (1954).
Cass R. Sunstein, Interest Groups in American Public Law, 38 Stan. L. Rev. 29 (1985).
Richard A. Posner, A Theory of Negligence, 1 J. Legal Stud. 29 (1972).
Joseph L. Sax, Takings and the Police Power, 74 Yale L.J. 36 (1964).
Robert M. Cover, The Supreme Court, 1982 Term, Foreword: Nomos and Narrative, 97 Harv. L. Rev. 4 (1983).
Duncan Kennedy, The Structure of Blackstone's Commentaries, 28 Buff. L. Rev. 205 (1979).
Lon L. Fuller & William R. Perdue, Jr., The Reliance Interest in Contract Damages (pts. 1 & 2), 46 Yale L.J. 52, 373 (1936-37).
Friedrich Kessler, Contracts of Adhesion--Some Thoughts About Freedom of Contract, 43 Colum. L. Rev. 629 (1943).
Harry Kalven, Jr., The New York Times Case: A Note on ‘The Central Meaning of the First Amendment,’ 1964 Sup. Ct. Rev. 191.
Lon L. Fuller, The Forms and Limits of Adjudication, 92 Harv. L. Rev. 353 (1978).
Thomas C. Grey, Do We Have an Unwritten Constitution?, 27 Stan. L. Rev. 703 (1975).
Frank I. Michelman, The Supreme Court, 1985 Term--Foreword: Traces of Self-Government, 100 Harv. L. Rev. 4 (1986).
Richard A. Epstein, A Theory of Strict Liability, 2 J. Legal Stud. 151 (1973).
William L. Cary, Federalism and Corporate Law: Reflection Upon Delaware, 83 Yale L.J. 663 (1974).
Sunday, April 13, 2008
Weekend Trivia Challenge - Highest ranked non-U.S. journal
According to the latest Washington & Lee University rankings of law reviews (2007, combined score), which non-U.S. journal is the highest ranked, and what country is it from?
To give you a fighting chance, we'll do it multiple choice style.
- European Journal of International Law, from the U.K.
- Cork Online Law Review, from Ireland
- Theoretical Inquiries in Law, from Israel
- Oxford Journal of Legal Studies, from the U.K.
- International Review of Law and Economics, from the Netherlands
- The Journal of International Coastal and Maritime Law, from Switzerland
- The Cambridge Law Journal, from the U.K.
- His Royal Highness's Journal of International Casino Law, from Monaco
- McGill Law Journal, from Canada
- Zambia Law Journal, from Zambia
And this bonus question: Which of the above journals do not exist?
Answers below the fold ...
Which journal is the highest ranked?
Theoretical Inquiries in Law, from Israel
It is tied at no. 168 with the Michigan State Law Review; the Stanford Journal of Law, Business & Finance; and Tax Law Review.
The next highest-ranked non-U.S. journals are: the European Journal of International Law, from the U.K., at 187th; the International Journal of Constitutional Law, from the U.K., at 208th; the International and Comparative Law Quarterly, from the U.K., at 268th; and the Journal of International Criminal Justice, from the U.K., at 273rd.
Bonus: Which journals don't exist?
The Journal of International Coastal and Maritime Law, from Switzerland
His Royal Highness's Journal of International Casino Law, from Monaco
Washington & Lee Law School, Law Journals: Submissions and Ranking, 2007 ranking of journals by combined score.
Sunday, April 06, 2008
Weekend Trivia Challenge - Can you name this law professor?
Can you name this law professor?
Her scholarly focus has been on juvenile rights, and she has been called one of the most important scholar-activists of recent decades. In a leading article, she argued that discrimination against juveniles requires justification. In making her argument, she compared children to slaves, wives, and Native Americans, as classes of people who have been historically treated as dependents, not legally competent to speak for themselves.
In a second leading article, she extended her argument, contending that because children reach maturity on a gradual basis, courts should not make the same presumption of incompetence for newborns as they do for 17-year-olds. Instead, she urged, children should be presumed competent by the courts, and evidence of legal incompetence should be evaluated on a case-by-case basis.
In the Yale Law Journal, she wrote: “By and large, the legal profession considers children – when it considers them at all – as objects of domestic relations and inheritance laws or as victims of the cycle of neglect, abuse, and delinquency. Yet the law’s treatment of children is undergoing great challenge and change. Presumptions about children's capacities are being rebutted; the legal rights of children are being expanded. As the structure of family life and the role of children within it evolves, the law is likely to become ever more embroiled in social and psychological disputes about the proper relationship between government and family. The task for lawmakers will be to draw the line between public and private responsibility for children.”
Her husband gained some national media attention in 1987 when there was speculation he might run for president, though he did not enter the race.
Who is she?
The answer is below the fold ...
Hillary Rodham Clinton
She was a law professor at the University of Arkansas in Fayetteville from 1974 to 1976
Hillary Rodham, Children's Policies: Abandonment and Neglect, Yale Law Journal vol. 86, p. 1522 (June 1977)