
Monday, May 04, 2009

Actuarialism, Individualization, and Fairness

It is perhaps not surprising, given my strong support of evidence-based practices, that I am an advocate of actuarial models. So it is easy to imagine my despair (or at least irked annoyance) when I came across the following passage in an appellate opinion from Indiana (which I read while preparing for this):

The use of a standardized scoring model, such as the LSI-R, undercuts the trial court's responsibility to craft an appropriate, individualized sentence. Relying upon a sum of numbers purportedly derived from objective data cannot serve as a substitute for an independent and thoughtful evaluation of the evidence presented for consideration. As our Supreme Court recently noted in discussing the appellate review of sentences, "[a]ny effort to force a sentence to result from some algorithm based on the number and definition of crimes and various consequences removes the ability of the trial judge to ameliorate the inevitable unfairness a mindless formula sometimes produces." Cardwell v. State, 895 N.E.2d 1219, 1224 (Ind.2008). Therefore, it is an abuse of discretion to rely on scoring models to determine a sentence.

Rhodes v. State, 896 N.E.2d 1193 (Ind. Ct. App. 2008). There are at least two closely related, significant errors in the Indiana Supreme Court's logic as cited here.

1. An "individualized" sentence. There is simply no such thing. I think that Frederick Schauer's Profiles, Probabilities and Stereotypes should be mandatory reading for all law students. No decision is made in isolation: there are always background assumptions and probabilities lurking. Take an example from a recent lunch-table debate I was involved in. A witness to a crime tells a police officer that the mugger was a white man, six feet tall, with curly hair and a blue windbreaker. A few minutes later and a few blocks away, the officer stops someone who matches this description, and justifies the stop based on his "individualized suspicion," given the description.

But, of course, there is nothing individualized about his suspicion at all. There are a host of generalities behind this decision to stop. For example, the officer is assuming that the witness got race, height, and clothes roughly correct. If he knew that a majority of witnesses misidentify the race of an assailant, then he would not have "individualized" suspicion. Any sort of so-called individualized assessment is actually a comparison of specific characteristics to generalizations. So to say that an algorithm is in tension with individualization is completely wrong. It is just a question of which algorithm we're going to use: the ones in our heads, or the ones on the computer screen.
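The point can be made concrete with a toy sketch. Both decisions below are algorithms that compare a person's traits to generalizations; the only difference is that one is implicit and one is explicit. (Everything here is hypothetical: the traits, the weights, and the scoring rule are illustrations, not the actual LSI-R instrument.)

```python
def officer_match(suspect, description):
    """The officer's 'individualized' suspicion: an implicit algorithm
    that checks traits against a witness description, silently assuming
    each reported trait is probably accurate."""
    return all(suspect.get(k) == v for k, v in description.items())

def actuarial_score(traits, weights):
    """An explicit actuarial algorithm: a weighted sum of coded traits,
    in the spirit of additive risk instruments (weights are made up)."""
    return sum(weights[k] * traits.get(k, 0) for k in weights)

# The street stop: trait-by-trait comparison against a generalization
# (the description, plus background assumptions about witness accuracy).
description = {"race": "white", "height_ft": 6, "hair": "curly", "jacket": "blue"}
suspect = {"race": "white", "height_ft": 6, "hair": "curly", "jacket": "blue"}
print(officer_match(suspect, description))  # True

# The "mindless formula": the same kind of comparison, written down.
weights = {"prior_arrests": 2, "unemployed": 1, "age_under_25": 1}  # hypothetical
traits = {"prior_arrests": 3, "unemployed": 1, "age_under_25": 0}
print(actuarial_score(traits, weights))  # 7
```

The two functions differ only in transparency: the actuarial one exposes its generalizations to inspection and audit, while the officer's remain in his head.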

Once we accept that there is nothing individual about individualization, the second problem with the court's opinion follows directly:

2. The "unfairness" of generalities. This is a common argument against actuarial models. The model can only take into account the traits it is programmed to consider, so what happens when there is a relevant factor that isn't in the model? This is sometimes referred to as the "broken-leg problem." Assume we have a powerful model that predicts whether someone is going to go to the movies on a Friday night. The model predicts that Joe is going to go, but he doesn't. Why? Because Joe broke his leg on Thursday, and the model didn't have a "broken leg" item. The argument is that we need human judgment to take into account these idiosyncratic factors that the model isn't programmed to recognize.

But this is a variant of the Utopian fallacy: the fact that an actuarial model isn't perfect doesn't mean it isn't better. Sure, in Joe's case the model fails and human judgment could have reached a better outcome. But what about the host of other instances in which the model reaches a better or more accurate conclusion? Models can be mis-specified, but human judgment is biased and flawed. Both have their problems. Individualization is nothing more than a comparison with generalities, and the actuarial turn has made it clear that, on average, well-designed models make these comparisons better. So the occasional broken-leg error is swamped by the run-of-the-mill successes.
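A toy Monte Carlo simulation can illustrate the arithmetic behind this trade-off. All the numbers here are invented for illustration: a model that is blind to a rare "broken leg" factor competes against a human who catches every broken leg but estimates the underlying risk with noise.

```python
import random

random.seed(0)

N = 100_000
model_correct = human_correct = 0
for _ in range(N):
    risk = random.random()                 # true underlying propensity
    broken_leg = random.random() < 0.02    # rare idiosyncratic factor
    goes = (risk > 0.5) and not broken_leg # the broken leg keeps Joe home

    # Model: accurate on risk, but cannot see the broken leg.
    model_pred = risk > 0.5

    # Human: sees the broken leg, but judges risk with heavy noise.
    noisy_risk = risk + random.gauss(0, 0.25)
    human_pred = (noisy_risk > 0.5) and not broken_leg

    model_correct += model_pred == goes
    human_correct += human_pred == goes

# The model errs only on the rare broken-leg cases; the human's noise
# costs far more cases than the broken-leg insight saves.
print(model_correct / N)
print(human_correct / N)
```

With these (hypothetical) parameters, the model's only errors are the roughly 1% of cases where a broken leg flips the outcome, while the human's noisy estimates misclassify a much larger share, so the model wins on average despite being "mindless."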

Thus, the Indiana Supreme Court gets it wrong: it is often the very mindlessness of the model that advances fairness.

It is easy today to make fun of George W. Bush for his infamous "I looked into his eyes" statement about Vladimir Putin, but the fact is that almost everyone is similarly overconfident about his or her ability to make unguided judgments. Courts need to stop making unsupported defenses of "judgment" and start confronting more directly the implications of the actuarial turn for how they should do their jobs.

Posted by John Pfaff on May 4, 2009 at 08:20 AM | Permalink

