Monday, September 15, 2008
CELS 2008 Report
I’m back from Cornell, and happy to report about the conference, which was, in decidedly non-professorial terms, cool upon cool. The paper selection was done extremely well; almost everything I had the good fortune to hear was of very high quality. It was very reassuring to see people do carefully planned and meticulously analyzed social science, devote thought and care to their design, control for a variety of variables, "clean up" their data, and produce work that has enough richness and groundedness to be actually useful in the real world.
This is the first time I've been at CELS, but I'm told it has gone a long way toward being inclusive of a variety of disciplines. Most projects I got to hear about involved psychology, economics (and econometrics), political science, and criminology; methods ranged from survey research, to existing mega-databases, to experiments and quasi-experiments. All the presentations I attended were quantitative; for more qualitative stuff, including ethnographies and in-depth interviews, it's perhaps better to head to the LSA annual meeting. However, the beauty of CELS was in its manageable size. There were no more than 350 people and about seven concurrent panels, and they were crafted in a way that allowed for pursuing a "story arc", such as corporate governance, criminal justice, law and psychology, or others, for those interested in doing so.
I followed the criminal justice panels and got to attend most of them, in addition to a few other papers, and you're welcome to click below and find out what I thought was hot.
The first law and psychology panel opened with Dan Kahan presenting his work with David Hoffman and Don Braman, in which they masterfully constructed and simulated the American venire to counter Scalia's remark on Scott v. Harris. They found great variation in potential jurors' assessment of risk and justification of police action after watching the video. Oy, Justice Scalia, we are a bit more diverse in our opinions of police power than you might think. This was followed by Dan Simon, presenting a nice survey experiment (done with Doug Stenstrom and Steven J. Read) in which they provided their subjects with three investigator roles: a neutral ("inquisitorial") position, and an adversarial position (on behalf of a student caught cheating, or on behalf of the university). The investigators' perceptions of factual details in the case were biased by the position they were told to assume.
I then had to take off, and later got to hear the latest thing from Tom Tyler and Jeffrey Fagan, who conducted a survey in New York to assess factors contributing to compliance and cooperation with the police. Consistent with Tyler's body of work, citizens tend to shape their behavior based on procedural justice, which is impacted by the police's performance, fairness, and demeanor in face-to-face interactions.
Moving on to another room, I got to see John Pfaff's neat data on correctional severity. Assessing the causes for prison overcrowding, Pfaff uses data from the National Corrections Reporting Program on Sentencing Practices to examine variation in release practices, coming to the conclusion that our prisons are not overcrowded due to people staying there for long periods of time, but rather to a large volume of people coming in in the first place. To my surprise, California was not such a negative outlier in release practices.
What followed was an amazing experience. Everyone in the room was allowed to take a peek into the world of econometric studies of the death penalty, and to witness a cross between a genuine debate on the meaning of methodology and replication, and somewhat of an academic three-ring circus. As many readers may know, Ehrlich's work in the 1970s was cited in Gregg v. Georgia, leading to a reinstatement of the death penalty after a four-year moratorium; studies following Ehrlich's work have claimed to discredit his findings. The new generation of feuding parties includes Hashem Dezhbakhsh and Paul Rubin, who argue that their work confirms the deterrence effects of the death penalty, and Justin Wolfers (who was the discussant!), whose replication aims at discrediting the findings. Lots of good points were made. There are legitimate questions of what constitutes a faithful replication of a study; also, there's a respectable debate on the merits of controlling for certain variables and the purpose of including, or excluding, Texas from the analysis. In addition, we all got, for the price of admission, a healthy dose of mudslinging, including critique over who chose to publish in a peer-reviewed publication and who didn't, and public exposure of the email exchange that preceded the conference. Afterwards, the two factions exited the room and went to lunch, leaving me to dig into my grilled veggie wrap and ponder other dimensions of the debate, namely, how we should improve dialogue across disciplinary boundaries, and how I wish someone studied the ideological aspect of all this, namely, whether in this sort of debate (or in the gun control/deterrence debate) methodological disagreements scrupulously follow political party lines.
In the afternoon, my own presentation precluded me from hearing a lot of other people's stuff, but I did get to hear Brian Rowe, who was on my panel, present his careful and methodical analysis of gender bias in traffic ticketing, which included not only the offender's gender, but also the police officer's, as well as their "toughness" vis-a-vis enforcement standards.
For once, the poster session was manageable in size, and I got to see quite a bunch of interesting things, including Shauhin Talesh's analysis of the "holsterization" of rights in the context of the CA "lemon law", as well as Amy Steigerwalt's (with Pamela Corley and Artemus Ward) simple-but-beautiful explanation of SCOTUS unanimous decisions (it's the law, folks; some decisions just lack the legal complexity that makes for dissents and separate opinions). I also had a chat with Alexia Brunet about her work with Ronald Jay Allen on the deterrent effect of towing, in which they notice a spillover effect on a variety of offenses. Don't you just love poster sessions? It's so much fun to hear a person describe his or her work in an intimate, personal, fun setting, and some folks' posters were pretty creative, too.
The next morning brought a new set of interesting stuff. Rob MacCoun and several others have done some work on the "decriminalization" of marijuana, that is, on the limitations on enforcement that several states placed in the 1970s. Do people actually know their state has decriminalized marijuana? Not really, says MacCoun; the effect is strongest right after the legal change and then it goes away. So, there is a modest deterrent effect, unsupported by enforcement. Then, Emily Owens and Shawn Bushway presented a nuanced take on deterrence, which takes into account the difference between expected and actual punishment. They argue that a broad gap between sentence expected and sentence served actually detracts from deterrence, and conclude that "truth in sentencing" laws are good for deterrence (I'm not sure I'd go that far, but it was interesting to pay attention to a differential so far neglected by the literature). Finally, J.J. Prescott examined the deterrent effect of registration and notification laws, carefully examining their enactment and controlling for quite a variety of intervening factors. It seems that the registry size might affect deterrence and recidivism in quite unexpected ways, and that registration and notification do not operate in the same way. Good stuff.
Then, Brandon Garrett showed some data from what I think is a really important work in progress (with J.J. Prescott) on litigation by people who eventually were exonerated. The time that passes from conviction to exoneration is shockingly long, and there are several factors that impact it (correlation, not causality): one piece of practical advice to the wrongly convicted is to insist on one's innocence. Finally, Kuo-Chang Huang provided a fantastic opportunity to examine the effects of legal representation. As it turns out, Taiwan is in the process of phasing out its public defenders and replacing them with legal aid attorneys. In the process, they are randomly assigning public defenders and legal aid attorneys to defendants, providing a fantastic opportunity for a natural experiment (there are always quibbles, but in the world of empirical work it rarely gets more beautiful than this). The bottom line, argue Huang and his coauthors, Kong-Pin Chen and Chang-Ching Lin, is that a public defender is slightly more likely to get you convicted, but also slightly more likely to get you off with a lighter sentence. Their interpretation of this has to do with the public defenders' fee structure or with their social-welfare-influenced role perception, but I suspect it might also have something to do with repeat playing.
A quick word about logistics: this conference was fabulously organized. My personal favorite: the little 2GB chip we all got, with all the conference papers on it.
I've had a terrific time, and came home excited about work. Alas, abundant emails attacked me, and I went into the wrong room to teach my makeup class. But such is the Monday that follows glorious weekends.
Posted by Hadar Aviram on September 15, 2008 at 06:09 PM | Permalink
Thanks for the very useful summary, Hadar. For those who are interested Dave Hoffman had a very interesting discussion of the Dezhbakhsh and Rubin /Wolfers dust-up over at the co-op a few days ago. It's here:
Posted by: Matt | Sep 15, 2008 7:05:05 PM
That's a good account of what went down, Matt. An unavoidable part of the problem is, I think, that econometric analysis has become much more sophisticated than it was in the Ehrlich days. And, indeed, the audience's tools for deciding who won the Battle of the Titans would highly depend on how schooled in methodology in general, and in econometric analysis in particular, we are. At some point Justin Wolfers made the point that it was unfortunate that not everyone could immediately tell the good research from the bad. And someone in the audience mentioned that the channels of dialogue need to be open, and that we can't afford to write off entire disciplines and blame them for being ideologically motivated.
I do think there's nothing wrong with being ideologically motivated, and it makes perfect sense to me that, much as we try to do quality work, our biases will operate when we design our work. That, however, doesn't excuse us from the need to examine our work and make sure we're doing the best we can to marshal convincing evidence.
I also, by the way, happen to think there is nothing wrong with arguing over the death penalty on the moral plane, deterrence arguments aside.
Posted by: Hadar Aviram | Sep 15, 2008 9:48:05 PM
Hadar, I'm not sure that the DP debate is as complicated as you make it out to be in your comments. I think that folks who had methods training could follow the moves in the econometrics debate pretty comfortably, and form a very good sense of who had the better of the exchange. I agree with you, though, that naive realism plays a big role in the intensity of these debates, and that we all need to be very aware of the biases that might be affecting our work with data.
It seems like we went to many of the same panels. I also blogged about MacCoun here a few days back.
Posted by: dave hoffman | Sep 15, 2008 11:33:53 PM
Dave, here's the thing: While I feel that I have a sense of who had the better of the exchange, I can't avoid the nagging doubt that my opinion about the debate is ideologically biased, and that might impact my opinion about questions that, IMHO, don't have one correct answer (such as what should count as proper experiment replication). I confess I care more about the death penalty than about econometrics. :)
Posted by: Hadar Aviram | Sep 16, 2008 12:00:46 AM