Friday, August 12, 2005
“Big Tent” Empiricism
Some of the other law blogs have used Tracey George’s new paper as a jumping-off point to discuss the trend in law toward empirical legal scholarship (e.g., here and here). There is indeed such a trend toward quantitative empirical work in the academy; it is powerful, and this added rigor in legal scholarship is worth applauding. Attend the annual meeting of the American Law and Economics Association (probably the most consistently high-quality law conference in the country), and you will encounter dozens of presentations by the new generation of legal empiricists. Many of these papers present genuinely important findings, though the Q & A sessions tend to be much duller than the presentations, as they usually are dominated by questions that begin “Did you control for …?” or “Have you tried using x data set?”
Legal empiricists with graduate training in Economics, Political Science, Sociology, and other fields will continue to be in great demand in the coming years and will make enormous contributions to our understanding of the law. But I worry that this new generation of quantitative empiricism is crowding out qualitative empiricism and what is pejoratively called “casual” empiricism. My sense is that young law scholars doing qualitative empirical projects have been getting hammered on the job market, particularly when those scholars do not sport graduate degrees in disciplines other than law. Moreover, it is my (casual) impression that the papers posted in SSRN’s Experimental and Empirical Legal Studies subject matter journal are almost universally quantitative. There are real problems with qualitative empiricism – perhaps the biggest is the risk that an unsavory scholar will slant his or her observations to tell a more interesting story. Yet so many of the law’s most important insights, hypotheses, and theories have flowed from qualitative empirical work (e.g., Ellickson, Bernstein, Goffman, or, to go back earlier, Tocqueville, Machiavelli, and Burke) that to cut this methodology off at the knees risks impoverishing legal thought. I don’t have the courage of my convictions on this point, and there are no qualitative projects on my research agenda at the moment, but a school looking to “buy low” and “sell high” on junior legal scholars might view the academy’s general hostility to qualitative empiricism as a market opportunity.
Posted by Lior Strahilevitz on August 12, 2005 at 04:26 PM | Permalink
“perhaps the biggest is the risk that an unsavory scholar will slant his or her observations to tell a more interesting story”
I feel compelled to comment on this common misperception. It is very, very hard for a quantitative empiricist to manipulate his results without making his moves obvious to people who understand econometrics. Short of outright fraud with data, covert manipulation of “hard” studies is almost impossible (and even data fraud is increasingly difficult because of the growing demand that authors reveal their coding files). It’s actually a lot easier to manipulate the results of “soft” studies because there are no generally-accepted standards of data analysis in softie armchair empiricism. Which is exactly why more and more people abandon armchair empiricism in favor of “hard” empiricism. Replicability, transparency, and mathematical precision of equations beat chatter and speculation of case studies any day.
And that’s why, by the way, the Q/A sessions of Law/Econ talks are the most exciting part of the program. That’s where you get to find out whether a presenter played tricks with methodology to get favorable results. The “dull” questions about controls and instruments sting without mercy and with no regard to a presenter’s fancy position and prior achievements. You just can’t bullshit your way out of those questions.
Posted by: Kate Litvak | Aug 12, 2005 5:20:06 PM
Either you misread my post or I'm missing your point. The full quote from my post was "There are real problems with QUALITATIVE empiricism – perhaps the biggest is the risk that an unsavory scholar will slant his or her observations to tell a more interesting story." I was making the same point you just made in the comments - that it's easier to fudge qualitative research than quantitative research. But that's not to suggest that qualitative research has no value; it can be very valuable, and some research questions lend themselves to much better investigation via qualitative methods.
I agree that Q & A can be interesting when the questioner is trying to show that the researcher is playing fast and loose with the data. But I don't think that's the tone of the vast majority of questions at ALEA, where the work has usually been vetted reasonably well beforehand. In those cases, I usually find the methodological questions (including the ones I ask) to be pretty dull. But I'm glad someone finds them interesting :)
Posted by: Lior | Aug 12, 2005 5:33:10 PM
Sorry, I did misread it. When I see signs of support for softie empirical junk, I become illiterate.
Posted by: Kate Litvak | Aug 12, 2005 5:44:21 PM
So what exactly is the line between quantitative empiricism and qualitative empiricism?
Posted by: Paul Gowder | Aug 12, 2005 6:06:52 PM
Wow -- by "softie empirical junk," do you mean to include, say, history? That is, of course, a social science that relies upon the case study method. I hadn't realized quite the extent of the glorious revolution that econometrics had wrought -- but then perhaps that just goes to show the chronic "illiteracy" of a soft empirical junkie.
Speaking of history, by the way, there was a small uptick in the hiring of legal historians a few years ago -- though it could have just been the vagaries of a small cohort of Yale JD/PhD grads entering the market around the same time. But otherwise, I think Lior's point is well taken. One of legal academia's strengths, methodological pluralism, is also one of its weaknesses. It can lead to productive critiques and syntheses across methodologies and epistemologies, as well as to rancor and chauvinistic blindness to the strengths of others' approaches and to the weaknesses of one's own. And at least in some law schools, and in many departments in fields with multiple approaches, once a tipping point in a department's hiring is reached, heterodox pluralism gives way to orthodoxy (of all sorts -- including qualitative scholars who see no merit to a particular form of quantitative research like econometrics, or to any forms of it). Orthodoxies in turn can lead to enormous breakthroughs in knowledge, but also to stale, religiously observed paradigms with no sense of history or humility that a generation hence will look like delusional gobbledygook. The goal should be to maximize the collective gains that come from pluralism and broad debate, and to minimize the blind chauvinism -- but then, please remember that the proviso that ended the first paragraph still applies.
Posted by: Mark Fenster | Aug 13, 2005 12:22:09 AM
“Junk” refers not to a field, but to the mismatch between the purpose of a study and its methodology. If a legal historian says, “Here are awfully interesting private letters shedding a new light on the adoption of legislation X,” that’s not junk. If he says, “The adoption of legislation X led to the deterioration of families, redefinition of gender roles, and increased urbanization,” all based on a few private letters, interviews, and law review articles, which in turn cite nothing but another couple of letters, interviews, and more law review articles, which cite a couple of letters… and so forth in endless circles, that’s junk. Call it “methodological orthodoxy” if you wish, but you just can’t make claims of correlation, let alone causation, on the basis of such data. Sadly, way too many legal academics do just that, using other people’s speculations and fantasies as “empirical evidence” for their own speculations and fantasies. That’s true for many fields, not just legal history.
Posted by: Kate Litvak | Aug 13, 2005 2:44:13 AM
As long as we agree that both sides can play this game. Historians have made very powerful critiques, backed by hard data, of some allegedly economics-based work, e.g. Epstein's _Forbidden Grounds_, and Bernstein's book on Lochnerism.
Posted by: Joseph Slater | Aug 13, 2005 12:03:21 PM
Yes, it goes both ways. I've long criticized empirical finance literature for "hardie" junk.
Posted by: Kate Litvak | Aug 13, 2005 12:24:01 PM
So, uh, empiricists... I'm going to naggingly ask the question again because now I'm truly curious. What's the difference between quantitative empiricism and qualitative empiricism? Is it like the difference between positive and normative?
Posted by: Paul Gowder | Aug 14, 2005 2:11:37 AM
Try the Wikipedia definitions, and then follow the links for a general introduction to basic research methods.
The entry on quantitative methods isn't good, but it at least gives you a general sense of the distinctions.
Posted by: Mark Fenster | Aug 14, 2005 9:04:36 AM
Thank you. Heavens, you'd think that people would just use the methodology appropriate for the question at issue and have done with it.
Posted by: Paul Gowder | Aug 14, 2005 11:16:24 AM
But it's not always clear what methodology is appropriate for the question at issue. That certainly comes up in studies of law, with legal scholars (as a gross generalization) pushing more for "internal" explanations of judicial decision-making (following precedent, e.g.) and political scientists and historians (as a gross generalization) tending to rely more on "external" factors like shifts in societal attitudes and politics. See the competing explanations for the Supreme Court upholding mid-to-late New Deal laws.
Posted by: Joseph Slater | Aug 14, 2005 11:51:22 AM
Right, but "internal" and "external" wouldn't necessarily mean "quantitative" or "qualitative" would they?
Posted by: Paul Gowder | Aug 14, 2005 12:07:20 PM
Not necessarily, no. I was just making the broader point that people sometimes have good faith disagreements -- with good arguments on both sides -- about which methodology (broadly construed) is appropriate to answer a certain question about law, its development, and/or impact. And on the other hand, of course there are some questions in which certain tools seem much more likely than others to provide defensible answers.
Posted by: Joseph Slater | Aug 14, 2005 1:58:42 PM
I think people are being too easy on quantitative empirical work. The potential for bad work is quite high -- both from deliberate manipulation and, even more, from simple shoddiness. I recently did a little training in econometrics for the first time since my first year of grad school many years ago, and was simultaneously pleased and appalled by how easy it is to run a regression nowadays. This leads to the following problem. A researcher runs 20 regressions on her computer. 19 of them give uninteresting results, but 1 is interesting. She reports the 1 result, ignoring the rest. The result is reported as statistically significant, but in fact, that significance test is really meaningless. This is of course a classic problem, but the ease of running regressions today makes it much worse. How widespread is this problem? Who knows? We do not ask people to discuss the regressions they choose not to report. There is a real risk that much of the empirical work out there is literally insignificant. This risk is of course even greater in law, where many people doing empirical work are not as well trained as in other fields.
And that's just one of the serious questions about the meaning of quantitative empirical work.
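The multiple-comparisons problem described above is easy to demonstrate with a short simulation: regress pure noise on pure noise 20 times and see how often a "significant" slope appears anyway. This is a minimal illustrative sketch (hypothetical data, assuming numpy and scipy are available), not anyone's actual study.

```python
# Run 20 regressions where x and y are unrelated noise.
# At the 5% level, roughly one spurious "finding" is expected by chance.
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(0)
n_regressions, n_obs = 20, 100

significant = []
for i in range(n_regressions):
    x = rng.normal(size=n_obs)   # noise "predictor"
    y = rng.normal(size=n_obs)   # noise "outcome" -- no true relationship
    result = linregress(x, y)
    if result.pvalue < 0.05:     # the one result that gets reported
        significant.append((i, result.pvalue))

print(f"{len(significant)} of {n_regressions} regressions look significant")
```

Reporting only the significant regression, while staying silent about the other nineteen, is exactly how a nominal 5% significance test stops meaning what it claims to mean.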
Posted by: Brett McDonnell | Aug 15, 2005 10:51:45 AM
1) Kate -- what do you mean by "hardie junk" in empirical finance?
2) Brett -- you raise a good point, one raised in economics by Deirdre McCloskey. George Stigler once said in class that "the worst thing that ever happened to economics was when a regression became a free good."
Posted by: Mark Weinstein | Aug 15, 2005 11:21:20 AM
Brett: no, we do ask people to discuss the regressions they choose not to report. That’s what robustness checks are for. Try to publish a finance paper without showing them to a reviewer. And we also ask for multiple regressions in each table, with different controls and such. And we ask for a correlations table. Things aren’t perfect, but they aren’t nearly as bad as you think.
Mark: "hardie junk" is running fancy regressions without understanding the issue under investigation, without really looking at the data, with spurious instruments, with coding pulled out of a hat... as if mathematical complexity fixes any of those problems. I’ve named names in my papers, so no need to do it here.
Another version of “hardie junk” is much of formal modeling. I recently reviewed a theory paper for a major conference, with a complicated model showing that if the entrepreneur performs better, his chances of being replaced are lower. I am not kidding. And it wasn’t the worst one, either, because it at least had plausible assumptions.
In both cases, authors hide the absence of clear thinking, or extreme ignorance of reality, behind fancy techniques. Similar to the favorite trick of legal crits: throwing lots of multisyllabic words into trivial political manifestos to impress student editors.
Posted by: Kate Litvak | Aug 15, 2005 3:09:52 PM