Tuesday, April 17, 2012
We are all Empiricists Now, so Which Empiricists Should We Hire?
Evidently, we are all empiricists now. Except for me. But even I have a cool randomized field experiment in progress with David Abrams, so I'll become an empiricist in no time, at least by some people's definition. Phase one: Collect data. Phase two: ???? Phase three: Profit.
Anyway, the Brian Leiter thread on empiricists, general frustration at identifying the right criteria for classifying empiricists, and the subsequent comments ("My earlier post cataloguing School X's eight empirical legal scholars neglected to mention my dear friend and colleague, the multi-talented empiricist Slobotnik. Signed, mortified School X booster.") provide an opportunity to ask what sorts of empiricists should be hired in the legal academy. I recognize that the answer some people will provide is "none." I'm not addressing that crowd, though I am raising some issues that might be helpful to people who are skeptical about empiricist hiring in general on law faculties.
Here, then, are a few thoughts about how to hire entry-level quantitative empiricists with PhDs in disciplines like Political Science or Economics, as well as a coda about what many empiricists should be doing as the "field" matures. Hiring qualitative empiricists or experimentalists is a different ball of wax entirely, so I'm not really writing about those sorts of hiring decisions. My views are informed by having been a member of a law school's faculty appointments committee for most of the last decade (with trips to seven of the last ten AALS hiring conferences, for the quantitatively minded). They do not reflect the views of my institution. And my views don't match up perfectly with the way I have voted internally. I'll omit obvious advice like (a) hire smart people, and (b) fill curricular needs:
1. Ignore the findings. The legal academy probably focuses too much attention on the results of the empirical research project, particularly when hiring entry-level scholars. This is an empirically testable claim, but my impression is that entry-level scholars with highly significant results do better on the market than candidates with marginally significant or null results. If this effect exists, it is largely pernicious. It rewards blind luck, it promotes the testing of questions that the empiricist already has strong intuitions about, it encourages entry-level scholars to write tons of papers (with less care) or run countless regressions until they find an interesting result, and it reinforces existing publication biases, which tend to publicize significant results and bury null results. Subject to the caveats below, we should not expect someone who achieved a highly significant result in paper A to be particularly likely to achieve a highly significant result in paper B . . . unless the scholar in question falsified data in paper A and wants to press her luck. But when you're doing entry-level hiring, you really ought to care about papers B, C, and D. Which is why you should (almost) ignore paper A's findings.
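The arithmetic behind the specification-hunting worry is easy to demonstrate. Here is a minimal sketch (Python, purely illustrative numbers) that regresses pure noise on pure noise many times: with a 5% significance threshold, roughly one in twenty null regressions will look "significant" by chance alone.

```python
import math
import random

random.seed(0)
n, specs = 200, 100   # observations per regression; specifications tried

significant = 0
for _ in range(specs):
    x = [random.gauss(0, 1) for _ in range(n)]   # predictor: pure noise
    y = [random.gauss(0, 1) for _ in range(n)]   # outcome: unrelated noise
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    r = sxy / math.sqrt(sxx * syy)               # sample correlation
    t = r * math.sqrt((n - 2) / (1 - r * r))     # t-statistic for the slope
    if abs(t) > 1.96:                            # ~5% two-sided threshold
        significant += 1

print(significant, "of", specs, "pure-noise regressions look significant")
```

Run enough specifications and a "finding" is guaranteed; nothing about the headline result certifies the researcher's skill.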
2. Emphasize the methodology. Now the caveat to suggestion 1. Sometimes what's driving a highly significant result is a methodological breakthrough or the construction of a large new data set. These efforts or achievements should be rewarded. Someone who had a methodological breakthrough in paper A is plausibly more likely to have further breakthroughs in paper B. (Again, this is testable.) Someone who assembled a massive data set is likely displaying the work ethic and care that will serve them well in future projects. The same goes for framing a really interesting question, ideally one where either a null result or a highly significant result is revealing. Now, there are two major problems with emphasizing methodology. First, scholars genuinely making significant methodological breakthroughs are likely to go to Economics or Political Science departments so they can hang around with other researchers who are making methodological breakthroughs. Second, most law faculties don't have enough good empiricists to evaluate the empirical chops of a teched-up entry-level candidate. These faculties tend to lean heavily on references. And most references are relatively unreliable. (Except for me. And you!) The only things less reliable than references are outside letters and amicus briefs.
3. Hire candidates who intend to grab low-hanging fruit. There are important fields in legal scholarship where empirical scholarship has largely saturated the market. Setting aside extremely gifted candidates, these are areas where it is easy to pile up citations and hard to make much of an impact. I think that's become true of Corporate and Securities law, as well as judicial behavior, and the bar may be getting higher for quantitative empiricists writing in these areas. But there are other areas of law where great empirical scholarship is harder to come by: Civil Procedure, Comparative Public Law, Bankruptcy, and Health Law. Ok, you might have caught on to what I did there, having just mentioned the specializations of the last four JD/PhD empiricists hired by Chicago. Of course, these hires happen to be brilliant too; and that doesn't hurt. That's not to say we didn't try to hire a couple empiricists in fields where the low-hanging fruit has been picked. But the trend may be meaningful.
4. Hire empiricists who have really practiced law. This is a hedging strategy. A fair number of empiricists on the market have little evident interest in legal doctrine and seem poised to become middling or worse teachers and colleagues. An empiricist who has actually practiced law at a high level, and whose practice experience informs her research agenda, is a relatively good bet to add value to the institution even if the research winds up only being ok. My understanding is that at least one major law school that launched a JD/PhD program refused to let its JD/PhD candidates participate in on-campus interviewing or otherwise utilize the Career Services office to pursue non-academic jobs . . . [Shakes head].
5. What will we do with all of these empiricists? Some empiricists have become or will become superstar researchers. Most will not. An interesting question going forward is what the latter group should do with their time. I would hope that non-superstar empirical scholars increasingly turn their attention to replicating highly significant work by others upon which policymakers have relied. If my hunch about results-driven hiring is correct, then the temptation for entry-level scholars to falsify data is strong. I worry that some scholars will give in to that temptation. A good faculty workshop can catch all kinds of errors in the data, and many good questions get asked about robustness. But such a workshop is unlikely to unmask intentional falsehoods in the underlying data - that typically takes a lot of time and attention. I suspect that the legal academy is presently at a point where trying to replicate famous empirical results - using new data sets ideally - may represent some of the most socially useful low-hanging fruit, especially in fields that are heavily populated by empiricists.
Such replication is usually not methodologically innovative, so it probably isn't the wisest work for most entry-level scholars to do, given the obsession most faculties have with "high upside" hires. But for established empirical scholars who have largely reached their ceilings, a renewed emphasis on replication would be most welcome. This is an alternative to the "teaching colleges" approach discussed elsewhere. It is probably not wise to ask average-ish tenured JD / PhDs to give up research and focus exclusively on teaching. But it is perhaps more appropriate to ask that they try to maximize the social value of their research, and keeping the profession honest through replication may be the best way to accomplish that end.
Jon Klick offered the following additional thoughts, with which I largely agree:
Ideally, you do want someone who knows the difference between a true null/zero and a statistically imprecise result. Further, to some extent, statistical precision will be endogenous to research design. All other things equal, a better design (or using more appropriate data) is more likely to lead to either identifying a true zero or else a statistically significant result. This suggests that there is some information content about the candidate’s skills included in the finding of a statistically significant result. As for zero/insignificant results, assuming the candidate can speak thoughtfully about whether it is a true zero vs a limitation in the research design and/or inherently noisy data, I agree that we shouldn’t downgrade a candidate on that basis.
There’s another important sense where the results matter. Econometric work (really any statistical work) is as much art as science, so there are times when you do everything right and you come up with some crazy result that is almost certainly wrong. Unsophisticated/immature empirical researchers often present results like these and come up with some post hoc rationalization. This is a very bad sign. A sophisticated/talented empirical researcher knows to either re-think his design or to abandon the research and move onto something else in these cases.
I do worry about the problem of "crazy" results being abandoned and never seeing the light of day. As a Bayesian, I want to know about crazy results, null results, and every other kind of result. I certainly feel that a good empirical scholar ought to caveat the heck out of those crazy results and other scholars citing that work need to understand those caveats to contextualize the results.
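Klick's point that statistical precision is endogenous to research design can be illustrated with a toy simulation (Python, with illustrative numbers of my own choosing): hold the true effect fixed and vary only the residual noise, which stands in here for design quality or data appropriateness. The cleaner design clears the significance bar far more often, which is the sense in which a significant result carries some information about skill.

```python
import math
import random

random.seed(1)

def detect_rate(n, noise_sd, effect=0.3, sims=500):
    """Share of simulated studies in which a fixed true effect clears
    the ~5% two-sided significance threshold."""
    hits = 0
    for _ in range(sims):
        x = [random.gauss(0, 1) for _ in range(n)]
        y = [effect * xi + random.gauss(0, noise_sd) for xi in x]
        mx, my = sum(x) / n, sum(y) / n
        sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sxx = sum((a - mx) ** 2 for a in x)
        syy = sum((b - my) ** 2 for b in y)
        r = sxy / math.sqrt(sxx * syy)
        t = r * math.sqrt((n - 2) / (1 - r * r))
        if abs(t) > 1.96:
            hits += 1
    return hits / sims

# Identical true effect; only the residual noise differs.
noisy = detect_rate(n=100, noise_sd=2.0)  # weaker design: low power
clean = detect_rate(n=100, noise_sd=1.0)  # better design: high power
print(noisy, clean)
```

The flip side, per Buck's comment below the post, is that the inference only runs this direction when there really is an effect to find; a significant result alone can't distinguish good design from specification search.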
This is an excellent post, and very useful.
Lior, I'm curious about your take on a question somewhat related to #4: When looking for empiricists, to what extent should a faculty value what we might call "traditional" law-related credentials such as graduating from a top law school, law school grades, clerkships, etc.? Anecdotally, faculties generally like to see those traditional credentials to gain confidence that the faculty candidate will operate at a high level in teaching and legal analysis outside doing empirical research. To what extent should schools relax those preferences when looking for an empiricist, either by hiring someone without a J.D. or someone with a J.D. from a less highly ranked school (or someone who did less well in law school courses)?
Posted by: Orin Kerr | Apr 17, 2012 10:33:14 AM
Thanks, Orin. It's not clear to me that the right answer to your question differs from the right answer for other sorts of hires. So, while I have intuitions about this question, they are mainly just that. I do think it is a mistake not to look at a law school transcript when doing entry-level hiring, both to see how someone did and to see what classes they took.
Posted by: Lior | Apr 17, 2012 10:52:35 AM
Is it possible to become an "empiricist" without having a PhD? (Obviously, I am trying to do this, or I wouldn't be asking the question.) I already have a tenure track teaching job so I don't have to worry about going on the market. I like math and have been teaching myself statistics (although I have no formal background in either). I have put together a data set and will hopefully produce my first empirical piece this summer. I assume that it will be hard to get people to take me seriously, but if I produce good work I had hoped I could overcome my lack of formal training. What do you think?
Posted by: anon | Apr 17, 2012 11:24:27 AM
P.S. This blog doesn't like Chrome. I always have to switch to Firefox to post. :(
Posted by: anon | Apr 17, 2012 11:25:05 AM
"anon" asks whether it is possible to become an "empiricist" (whatever *that* is) without a PhD. The short answer is to look at people like Dan Kahan at Yale, who does slick, elegant, solid empirical work. Although Dan does not have a PhD, he does better work than many so-called empiricists with PhDs. Some of his success (but, by all means, not all of it) comes from working with smart social scientists like Paul Slovic. Although those in the legal academy tend to produce single-author articles, in other disciplines it is the norm to co-author. My recommendation to a non-PhD who wants to do empirical work: (1) go to workshops (http://lawweb.usc.edu/who/faculty/workshops/legalWorkshop.cfm); and (2) collaborate with a social scientist.
Posted by: Robert Rocklin | Apr 17, 2012 11:43:50 AM
"And most references are relatively unreliable. (Except for me. And you!)"
Posted by: Michael Risch | Apr 17, 2012 3:03:55 PM
I would add category 2a: Emphasize the right methodology. There is a hiring bias in favor of complexity, but complexity can hide weak skills and weaker models. A well-designed study can report only marginals, mean differences, or crosstabs, and do so to devastating effect. If it lacks a regression table, that is probably a sign that the researcher has good judgment, not that they don't know how to "do" empirical research.
In response to anon, I would begin where Lior leaves off. Learn what techniques are important in your field by replicating the work of others. There are lots of publicly available datasets, and many authors will share with you for the price of an e-mail.
Posted by: Joe Doherty | Apr 17, 2012 9:38:01 PM
"The legal academy probably focuses too much attention on the results of the empirical research project, particularly when hiring entry-level scholars. This is an empirically testable claim, but my impression is that entry level scholars with highly significant results do better on the market than candidates with marginally significant or null results."
If that's true, I'd take it as a sign that (some) law faculty don't quite grasp the whole concept of social science. If you want good social science, it is neither necessary nor sufficient to find someone who says, "In testing whatever null hypothesis I happened to choose, I found a p-value of 0.049 or below as to something or other."
Posted by: Stuart Buck | Apr 19, 2012 8:28:55 PM
Also, I disagree with this from Klick:
"All other things equal, a better design (or using more appropriate data) is more likely to lead to either identifying a true zero or else a statistically significant result. This suggests that there is some information content about the candidate’s skills included in the finding of a statistically significant result."
First, I'm not sure what a "true zero" means -- we can fail to reject the null hypothesis, and we can do so with lots of statistical power, but you should never say that you found a "true zero." (In anything that you'd want to measure, there's never going to be a "true zero" anyway -- for example, even if the death penalty doesn't deter, are you ever prepared to say that it doesn't even have an effect size of 0.000000000001 standard deviations? I.e., that the effect really is precisely zero, and not some extremely tiny number that is different from zero?)
Second, research design can, in some cases, increase statistical power and hence the likelihood of finding a significant result (for example, doing cluster randomization in such a way that the intraclass correlation is smaller). But it is logically fallacious to make the opposite inference: that a significant result gives any information about research design or skills.
People can (and do) get significant results by playing around with the model and data in all kinds of ways -- choosing what to do with outliers, what to do with missing data (listwise deletion, any of various imputation procedures), choosing among hundreds of model specifications (adding quadratics, adding interaction terms, etc.).
Posted by: Stuart Buck | Apr 20, 2012 9:32:43 AM
As one who comes from outside of law but is an empirical researcher, here are some comments.
1) No, a Ph.D. is not necessary to do empirical work. The value of a social sciences Ph.D. is what it teaches about a discipline, and how to do solid research in that discipline. One can certainly come to empirical research from the law side, and there are many examples of those who did.
2) Do not, I repeat, do not, take a technique-driven approach. The vast majority of good empirical work uses fairly simple statistical techniques. The more technique-driven you are, the more likely it is that your data does not meet the assumptions of your approach, and thus the more likely you are to be depending on asymptotic results that can lead you astray with small sample sizes. This means that in evaluating empirical researchers you should look for what Arnold Zellner called "sophisticated simplicity." Put a big premium on people who tease out results using fairly simple econometrics. Real effects will still be there.
3) Stay away from people who lack common sense, or who take their findings too seriously.
4) Stay away from people who confuse statistical significance (which you can just about always buy with a large enough sample, see Lindley's paradox) with real import.
Posted by: Mark Weinstein | Apr 20, 2012 5:34:07 PM
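Weinstein's point 4 - that a large enough sample will just about always buy statistical significance, even for an effect of no practical import - can be sketched in a few lines (Python, with illustrative effect sizes and sample sizes of my own choosing). The true difference between the two groups is a trivial 0.01 standard deviations in both runs; only the sample size changes.

```python
import math
import random

random.seed(2)

def t_stat(n, true_diff):
    """Two-sample t statistic: group b's true mean exceeds group a's
    by true_diff standard deviations."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(true_diff, 1) for _ in range(n)]
    ma, mb = sum(a) / n, sum(b) / n
    va = sum((x - ma) ** 2 for x in a) / (n - 1)
    vb = sum((x - mb) ** 2 for x in b) / (n - 1)
    return (mb - ma) / math.sqrt(va / n + vb / n)

# The same trivial true effect (0.01 sd) at two sample sizes:
t_small = t_stat(500, 0.01)        # small n: typically nowhere near 1.96
t_big = t_stat(1_000_000, 0.01)    # huge n: comfortably past 1.96
print(round(t_small, 2), round(t_big, 2))
```

The second t statistic crosses the conventional significance threshold not because the effect matters but because the standard error has been driven toward zero, which is exactly why significance and real import must be kept distinct.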