
Thursday, April 02, 2009

What is the Future of Empirical Legal Scholarship?

First, thanks to Dan and everyone else at Prawfsblawg for inviting me to post here this month. I'm really looking forward to it.

I found Jonathan Simon's recent posts about the future of empirical legal scholarship quite interesting. As someone in the ELS field, I find his general optimism about its future uplifting, but I'm not sure I share it; my concern is more internal to the field itself than external. I'm going to be writing about this a lot this month, so I thought I'd use my first post to lay out my basic concerns.

Jonathan focuses on trends outside of ELS: cultural and attitudinal shifts within the law as a whole, and more global changes in, say, economic conditions. And at one level I think he is right--the decreased cost of empirical work, the PhD-ification of the law, the general quantitative/actuarial turn we have witnessed over the past few decades all suggest ELS is here to stay. But there are deep internal problems not just with ELS, but with the empirical social sciences more generally, that threaten their future. I will just touch on the major points here, and I will return to all of these issues in the days ahead.

To appreciate the problem, it is first necessary to note a major technological revolution that has taken place over the past three decades. The importance of the rise of the computer cannot be overstated. Empirical work that I can do in ten minutes sitting at my desk would have been breathtakingly hard twenty-five years ago and literally impossible fifty years ago. The advances have come both in hardware (computing power and storage) and in software (user-friendly statistics packages). The result is that anyone can do empirical work today. This is not necessarily a good thing.

So what are the problems we face?

1. An explosion in empirical work. More empirical work is, at some level, a good thing: how can we decide what the best policy is without data? But the explosion in output has been matched by an explosion in the variation in quality (partly because user-friendly software allows people with little training to design empirical projects). The good work has never been better, and the bad work has never been worse. It could very well be that average quality has declined. Some of the bad work comes from honest errors, but some of it comes from cynical manipulation.

2. A bad philosophy of science. Social scientists cling to the idea that we are following in the footsteps of Sir Karl Popper, proposing hypotheses and then rejecting them. We are not. We never have. This is clear in any empirical paper: once the analyst calculates the point estimate, he draws implications from it ("My coefficient of 0.4 implies that a 1% increase in rainfall in Mongolia leads to a 0.4% increase in drug arrests in Brooklyn"). This is not falsification, which only allows him to say "I have rejected 0%." Social science theory cannot produce the type of precise numerical hypothesis that falsification demands. We are trying to estimate the size of an effect, and estimation is an inductive exercise.
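
To make the contrast concrete, here is a minimal sketch in Python of the difference between the two claims, using invented data rather than anything about Mongolian rainfall or Brooklyn arrests; every name and number in it is purely illustrative.

```python
# Hypothetical illustration: the variables and the "true" slope of 0.4 are made up.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.normal(size=200)              # stand-in predictor (say, percent change in rainfall)
y = 0.4 * x + rng.normal(size=200)    # stand-in outcome, generated with a slope of 0.4

res = stats.linregress(x, y)

# The falsificationist statement: we reject the sharp null hypothesis that the slope is 0.
print(f"p-value for H0: slope = 0   ->  {res.pvalue:.4f}")

# The statement papers actually interpret: the estimated size of the effect, an inductive claim.
print(f"estimated slope (std. err.) ->  {res.slope:.3f} ({res.stderr:.3f})")
```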

3. Limited tools for dealing with induction. Induction requires an overview of an entire empirical literature. Fields like medicine and epidemiology have started to develop rigorous methods for drawing these types of inferences. As far as I can tell, there has been no work of any sort in this direction in the social sciences, including ELS. This is partly the result of Problem 2: such overviews would be unnecessary were we actually in a falsificationist world, since all it takes is one black swan to refute the hypothesis that all swans are white.

As a result of these three problems, we produce empirical knowledge quite poorly. To reuse a joke I've made before and will likely make again at least a dozen times this month, Newton's Third Law roughly holds: for every empirical finding there is an opposite (though not necessarily equal) finding. 

With more and more studies coming down the pike, and with little to no work being done to figure out how to separate the wheat from the chaff, ELS could defeat itself. If any result can be found somewhere in the literature, and there is no way to separate what is credible from what is not, empirical research becomes effectively useless. (This only exacerbates the problem identified by Don Braman, Dan Kahan, and others that people choose the results that align with their prior beliefs rather than adjusting those beliefs in light of new data.)

So what is the solution? Empirical research in the social sciences needs to adopt a more evidence-based approach. We need to develop a clear definition of what constitutes "good" and "bad" methodological design, and we have to create objective guidelines to make these assessments. We have to abolish string cites, especially of the "on the one hand, on the other hand" type, and replace them with rigorous systematic reviews of the literature.
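
To give a flavor of what a systematic review adds over a string cite, here is a minimal sketch, with invented numbers, of the inverse-variance pooling step at the heart of a standard fixed-effect meta-analysis. It is only a sketch of the general technique, not a method drawn from any particular ELS literature.

```python
# Hypothetical illustration: four made-up studies estimating the "same" effect.
import numpy as np

estimates  = np.array([0.42, 0.10, 0.55, -0.05])   # each study's effect estimate
std_errors = np.array([0.20, 0.08, 0.30,  0.15])   # each study's standard error

# Fixed-effect (inverse-variance) pooling: more precise studies get more weight.
weights   = 1.0 / std_errors**2
pooled    = np.sum(weights * estimates) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))

print(f"pooled estimate: {pooled:.3f} (SE {pooled_se:.3f})")
# Unlike an "on the one hand, on the other hand" string cite, the pooled estimate
# weights each study by the information it carries and turns disagreement across
# studies into something that can be measured rather than merely gestured at.
```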

Of course, these guidelines and reviews are challenging to develop even for the methodologically straightforward randomized clinical trials that medicine relies on. In the social sciences, which are often forced to use observational data, the challenge will be all the greater. But, as I'll argue later this month, the rewards will be all the greater as well.

The use of systematic reviews is particularly important in the law, for at least two reasons:

1. Inadequate screening. Peer review is no panacea by any means, but it provides a good front line of defense against bad empirical work. We lack that protection in the law. There are some peer reviewed journals, but not many. And the form of peer review that Matt Bodie talked about for law reviews recently isn't enough. The risk of bad work slipping through is great.

The diversity of law school faculties, usually a strength, is here a potential problem. Even theoretical economists have several years of statistical training, so everyone who reads an economics journal has the tools to identify a wide range of errors. But many members of law school faculties have little to no statistical training, making it harder for them to know which studies to dismiss as flawed.

2. Growing importance of empirical evidence. Courts rely on empirical evidence more and more. And while disgust with how complex scientific evidence is used in the courtroom has been with us since the 1700s, if not earlier, the problem is only going to grow substantially worse in the years ahead. Neither Daubert nor Frye is capable of handling the evidentiary demands that courts increasingly face.

Given that my goal here was just to touch on what I want to talk about in the weeks to come, I think I'll stop here. This is an issue I've been thinking about for a while now, and I am looking forward to seeing people's thoughts on it.

Posted by John Pfaff on April 2, 2009 at 11:56 AM in Peer-Reviewed Journals, Research Canons, Science | Permalink

Comments

Very interesting post. I think these are the types of questions that must be considered as the ELS movement matures. One specific thought I would like to echo is the use of the computer, not only for processing data but also for information retrieval.

Over at the Computational Legal Studies Blog, we recently highlighted the Feb 6th issue of Science. Among other things, this issue outlines the rise of computing power and its possibilities for large-N data analysis. I think it echoes many of the sentiments you express here. Anyway, I will be interested in reading your posts in the days to come.

Posted by: Dan Katz | Apr 2, 2009 11:04:04 PM

Thanks for a very thoughtful post -- I'm looking forward to your future posts. Here's a point I'd put forward for consideration: to what extent will the increasing statistical and methodological complexity of empirical work lead non-experts or semi-experts to rely on the top journals and the top experts in the field based solely on credentialing? (And I say "solely" to emphasize the point.) I can imagine that when it comes to hiring empirical scholars, faculty members without experience in these areas will put more of a premium on the placement of the article rather than their own independent evaluation of it. And when law profs seek to bolster their non-empirical law review articles with empirical evidence, they'll be more likely to cite to research in the top journals, rather than research that they find most convincing or rigorous. Is this a good development? What are the dangers?

Posted by: Matt Bodie | Apr 2, 2009 4:17:24 PM

Thank you for your interesting post, John. Like you, I believe it is very important to have a conversation about these issues.

Student-published journals also appear to be interested in the challenges of publishing empirical legal scholarship. For example, the NYU Law Review has posted a nonbinding set of guidelines for novel empirical analysis, perhaps in reaction to what you describe as the "explosion" of bad work.

In addition, I notice that the NYU Law Review has created a data repository for the empirical scholarship that it publishes. This allows scholars like you to separate the wheat from the chaff by analyzing the same data. NYU's repository not only allows scholars to download the data from the website, but allows them to perform fairly sophisticated empirical analysis using a web-integrated program (see the "subsetting" link).

It will be interesting to see if other journals use Harvard's Dataverse software to create their own data repositories. That could be a significant step toward developing "a more evidence-based approach." In addition, it is an approach that is scalable -- as the technology continues to make it easier to conduct empirical analysis, we cannot hope to limit the field to the most knowledgeable or experienced ELS scholars, but must develop ways to separate out what is not credible (as you eloquently noted above). Data repositories maintained by legal journals are a promising part of the solution.

Posted by: ELS enthusiast | Apr 2, 2009 3:20:59 PM

One thing I forgot to mention: I do think there is at least heuristic value in hypothesis generation and testing, but that value is not in any way tied to the viability of the Popperian criterion of falsification. We need not believe in falsifiability as the sine qua non for distinguishing science from pseudo-science, if only because, insofar as hypotheses are related to the empirical "facts" they deign to represent, formal logic is not helpful (and Popper's was a formal criterion).

Posted by: Patrick S. O'Donnell | Apr 2, 2009 2:52:20 PM

An additional problem worth noting is that the most rigorous data analysis will not make up for flaws in the way data was collected and coded. Making empirical claims, for example, in a criminal justice context may require significant criminal law expertise to discern how collecting or coding data in a certain way might omit subtlety or skew the observed phenomenon. To the extent legal academics are now better trained in empirical analysis (e.g., spending necessary time in PhD programs to become experts in statistics), they may be sacrificing some of the practical knowledge necessary to identify flaws in underlying data collection.

Posted by: JB | Apr 2, 2009 2:10:29 PM

Re:

2. Not just "bad" philosophy of science, but little awareness of philosophy of natural and social science generally.

3. I would add here little understanding of the nature of inductive reasoning as such. Cf., for instance, the possible implications that follow from John D. Norton's more-than-plausible argument that "there are no universal inductive inference schemas" (in his paper, 'A Material Theory of Induction'; cf. too his paper, 'A Little Survey of Induction'). One implication is related to your mention of the importance of "evidence":

"Since inductive inference schemas are underwritten by facts, we can assess and control the inductive risk taken in an induction by investigating the warrant for its underwriting facts. In learning more facts, we extend our inductive reach by supplying more localized inductive inference schemes." [We'll leave aside for the moment the interesting fact/value "interrelatedness" questions that will invariably arise here, as discussed by Amartya Sen and Hilary Putnam, for example. I think we would benefit from more systematic investigation into the 'value-ladedness' of empirical research, which is intrinsic to every stage of inquiry.]

Norton's argument suggests the folly of searching for universal formal inference schemes modeled on, say, deductive logic ('all inductive inference is local').

It may be the case that there are no more "rigorous" standards or models forthcoming. And having spent some time last year reading the literature on medicine and epidemiology I don't think there's a lot to be gained in looking there (I'm open to persuasion, just presumptively sceptical): EBM (and RCTs), for example, does not seem to have any direct relevance for the social sciences and, in any case, within its usual domains is beset by a unique set of problems that suggest widespread belief in this gold standard within medicine may be a bit quixotic. (I don't mean to deny its relevance or historical efficacy, but rather to point out the fact that we're increasingly appreciating its limits). Epidemiology has some undeniable relevance, but the field is frequently unable to pinpoint precise causal mechanisms and an examination of current debates and controversies in the field suggests we proceed with much caution before looking to it as providing the requisite methodological discipline we hanker after.

It perhaps goes without saying that one of the more recalcitrant issues here revolves around the belief that the natural sciences are the repository for the kinds of models and standards, the analytical "robustness" and "rigor," that we should imitate in the social sciences. Now we need not draw hard and fast boundaries between these two basic kinds of science (after all, we have sufficient reason to label them both sciences), but I think there are a host of reasons that we should take care not to elide the very real distinctions here between natural and social science.

Posted by: Patrick S. O'Donnell | Apr 2, 2009 2:01:29 PM

Great post. I look forward to future installments and particularly hope you will elaborate on your "bad philosophy of science" point.

Posted by: Sarah L. | Apr 2, 2009 1:29:00 PM

A problem related to (or perhaps a subset of) point 1: misuse of empirical work that is sound in itself to support a point only tenuously connected to the data. Some subjects just aren't fairly susceptible to investigation by empirical study, yet such studies have an allure (viz. "numbers don't lie"), and the upshot is to create an incentive to shoehorn: "how can I massage the problem in such a way that it can be studied empirically?" For example, my recollection is that empirical studies were done (Prof. Ringhand did one, and I think Cass Sunstein did, too) showing that the conservative members of the Rehnquist court were, on average, more likely to vote to strike down statutes than its liberal members. This data (I assume for now that it is accurate) was then used as a foundation for the proposition that the former group were more "judicially activist" with little or no serious attempt to connect the data with the point it was adduced in support of.

Perhaps these are outliers (I don't read a huge number of empirical studies), but the thought came to mind as a potential problem.

Posted by: Simon | Apr 2, 2009 12:18:45 PM

The comments to this entry are closed.