Thursday, April 02, 2009
What is the Future of Empirical Legal Scholarship?
First, thanks to Dan and everyone else at PrawfsBlawg for inviting me to post here this month. I'm really looking forward to it.
Posted by John Pfaff on April 2, 2009 at 11:56 AM in Peer-Reviewed Journals, Research Canons, Science
Comments
Very interesting post. I think these are the types of questions that must be considered as the ELS movement matures. One specific point I would like to echo is the use of computers, not only for processing data but also for information retrieval.
Over at the Computational Legal Studies Blog, we recently highlighted the Feb 6th issue of Science. Among other things, this issue outlines the rise of computing power and its possibilities for Large-N data analysis. I think it echoes many of the sentiments you express here. Anyway, I will be interested in reading your posts in the days to come.
Posted by: Dan Katz | Apr 2, 2009 11:04:04 PM
Thanks for a very thoughtful post -- I'm looking forward to your future posts. Here's a point I'd put forward for consideration: to what extent will the increasing statistical and methodological complexity of empirical work lead non-experts or semi-experts to rely on the top journals and the top experts in the field based solely on credentialing? (And I say "solely" to emphasize the point.) I can imagine that when it comes to hiring empirical scholars, faculty members without experience in these areas will put more of a premium on where the article placed than on their own independent evaluation of it. And when law profs seek to bolster their non-empirical law review articles with empirical evidence, they'll be more likely to cite research in the top journals rather than the research they find most convincing or rigorous. Is this a good development? What are the dangers?
Posted by: Matt Bodie | Apr 2, 2009 4:17:24 PM
Thank you for your interesting post, John. Like you, I believe it is very important to have a conversation about these issues.
Student-published journals also appear to be interested in the challenges of publishing empirical legal scholarship. For example, the NYU Law Review has posted a nonbinding set of guidelines for novel empirical analysis, perhaps in reaction to what you describe as the "explosion" of bad work.
In addition, I notice that the NYU Law Review has created a data repository for the empirical scholarship that it publishes. This allows scholars like you to separate the wheat from the chaff by reanalyzing the same data. NYU's repository not only allows scholars to download the data from the website, but also lets them perform fairly sophisticated empirical analysis through a web-integrated program (see the "subsetting" link).
It will be interesting to see if other journals use Harvard's Dataverse software to create their own data repositories. That could be a significant step toward developing "a more evidence-based approach." In addition, it is an approach that is scalable -- as the technology continues to make it easier to conduct empirical analysis, we cannot hope to limit the field to the most knowledgeable or experienced ELS scholars, but must develop ways to separate out what is not credible (as you eloquently noted above). Data repositories maintained by legal journals are a promising part of the solution.
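To make the repository idea concrete, here is a minimal sketch of the kind of reanalysis such an archive enables -- assuming, hypothetically, a CSV download URL and column names, since the actual NYU Law Review / Dataverse layout may differ:

```python
# Minimal sketch of reanalyzing archived replication data.
# The URL and column names below are hypothetical placeholders,
# not the actual NYU Law Review / Dataverse layout.
import pandas as pd

# Pull the replication dataset straight from the (hypothetical) archive.
df = pd.read_csv("https://example.org/repository/study_replication.csv")

# "Subsetting" in code: restrict to the observations the original
# authors analyzed, then check whether the headline result holds.
subset = df[df["year"] >= 2000]
print(subset.groupby("treatment")["outcome"].mean())
```

Because every reader runs against the same archived file, disagreements shift from arguments over the data to arguments over the model.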
Posted by: ELS enthusiast | Apr 2, 2009 3:20:59 PM
One thing I forgot to mention: I do think there is at least heuristic value in hypothesis generation and testing, but that value is not in any way tied to the viability of the Popperian criterion of falsification. We need not treat falsifiability as the sine qua non for distinguishing science from pseudo-science, if only because, insofar as hypotheses are related to the empirical "facts" they deign to represent, formal logic is not helpful (and Popper's criterion was a formal one).
Posted by: Patrick S. O'Donnell | Apr 2, 2009 2:52:20 PM
An additional problem worth noting is that the most rigorous data analysis will not make up for flaws in the way data was collected and coded. Making empirical claims, for example, in a criminal justice context may require significant criminal law expertise to discern how collecting or coding data in a certain way might omit subtlety or skew the observed phenomenon. To the extent legal academics are now better trained in empirical analysis (e.g., spending necessary time in PhD programs to become experts in statistics), they may be sacrificing some of the practical knowledge necessary to identify flaws in underlying data collection.
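To illustrate with deliberately made-up numbers: the same five case dispositions, coded under two plausible schemes, produce very different "conviction rates" before any statistical analysis even begins.

```python
# Toy example (the dispositions are invented, purely illustrative):
# how a coding choice changes the statistic before any analysis runs.
dispositions = ["guilty plea", "trial conviction", "nolo contendere",
                "deferred adjudication", "acquittal"]

# Scheme A: count only guilty pleas and trial convictions as convictions.
scheme_a = {"guilty plea", "trial conviction"}
# Scheme B: also treat nolo pleas and deferred adjudications as convictions.
scheme_b = scheme_a | {"nolo contendere", "deferred adjudication"}

for name, scheme in [("A", scheme_a), ("B", scheme_b)]:
    rate = sum(d in scheme for d in dispositions) / len(dispositions)
    print(f"Scheme {name}: conviction rate = {rate:.0%}")
```

The two schemes report 40% and 80% from identical underlying events; no downstream rigor can recover the distinction once it has been coded away.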
Posted by: JB | Apr 2, 2009 2:10:29 PM
Re:
2. Not just "bad" philosophy of science, but little awareness of philosophy of natural and social science generally.
3. I would add here little understanding of the nature of inductive reasoning as such. Cf., for instance, the possible implications that follow from John D. Norton's more-than-plausible argument that "there are no universal inductive inference schemas" (in his paper 'A Material Theory of Induction'; cf. too his paper 'A Little Survey of Induction'). One implication is related to your mention of the importance of "evidence":
"Since inductive inference schemas are underwritten by facts, we can assess and control the inductive risk taken in an induction by investigating the warrant for its underwriting facts. In learning more facts, we extend our inductive reach by supplying more localized inductive inference schemes." [We'll leave aside for the moment the interesting fact/value "interrelatedness" questions that will invariably arise here, as discussed by Amartya Sen and Hilary Putnam, for example. I think we would benefit from more systematic investigation into the 'value-ladedness' of empirical research, which is intrinsic to every stage of inquiry.]
Norton's argument suggests the folly of searching for universal formal inference schemes modeled on, say, deductive logic ('all inductive inference is local').
It may be the case that there are no more "rigorous" standards or models forthcoming. And having spent some time last year reading the literature on medicine and epidemiology, I don't think there's a lot to be gained in looking there (I'm open to persuasion, just presumptively sceptical): evidence-based medicine (EBM) and randomized controlled trials (RCTs), for example, do not seem to have any direct relevance for the social sciences and, in any case, within their usual domains are beset by a unique set of problems suggesting that widespread belief in this gold standard within medicine may be a bit quixotic. (I don't mean to deny its relevance or historical efficacy, but rather to point out that we're increasingly appreciating its limits.) Epidemiology has some undeniable relevance, but the field is frequently unable to pinpoint precise causal mechanisms, and an examination of current debates and controversies in the field suggests we proceed with much caution before looking to it for the requisite methodological discipline we hanker after.
It perhaps goes without saying that one of the more recalcitrant issues here revolves around the belief that the natural sciences are the repository for the kinds of models and standards, the analytical "robustness" and "rigor," that we should imitate in the social sciences. Now we need not draw hard and fast boundaries between these two basic kinds of science (after all, we have sufficient reason to label them both sciences), but I think there are a host of reasons that we should take care not to elide the very real distinctions here between natural and social science.
Posted by: Patrick S. O'Donnell | Apr 2, 2009 2:01:29 PM
Great post. I look forward to future installments and particularly hope you will elaborate on your "bad philosophy of science" point.
Posted by: Sarah L. | Apr 2, 2009 1:29:00 PM
A problem related to (or perhaps a subset of) point 1: misuse of empirical work that is sound in itself to support a point only tenuously connected to the data. Some subjects just aren't fairly susceptible to investigation by empirical study, yet such studies have an allure (viz. "numbers don't lie"), and the upshot is to create an incentive to shoehorn: "how can I massage the problem in such a way that it can be studied empirically?" For example, my recollection is that empirical studies were done (Prof. Ringhand did one, and I think Cass Sunstein did, too) showing that the conservative members of the Rehnquist Court were, on average, more likely to vote to strike down statutes than its liberal members. This data (I assume for now that it is accurate) was then used as a foundation for the proposition that the former group was more "judicially activist," with little or no serious attempt to connect the data to the point it was adduced to support.
Perhaps these are outliers; I don't read a huge number of empirical studies. But the thought came to mind as a potential problem.
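As a purely illustrative sketch (the vote records below are invented, not the Ringhand or Sunstein data), the computation behind such a study is trivial; the contested move is the label attached afterward:

```python
# Illustrative only: invented votes, not the actual study data.
# 1 = voted to strike down the statute, 0 = voted to uphold.
votes = {
    "Justice A": [1, 1, 0, 1],
    "Justice B": [0, 1, 0, 0],
}

for justice, record in votes.items():
    rate = sum(record) / len(record)
    print(f"{justice}: struck down {rate:.0%} of statutes reviewed")

# The numbers end here. Calling the higher rate "judicial activism"
# is an interpretive step the data alone cannot supply.
```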
Posted by: Simon | Apr 2, 2009 12:18:45 PM