
Thursday, November 15, 2007

The Potential Pathologies of "Leiter-scores"

I have a modest thesis, provoked by the release of the most recent Leiter rankings:  I suspect deans will now want a "Leiter-analysis" of any potential lateral candidate.  I have anecdotal evidence that this is already going on, and I would guess that as these rankings become even more central to a school's self-conception and its promotional materials, faculty hiring will become more focused on attempts to "game" the rankings by poaching on the basis of "Leiter-scores."  (In theory, even the entry-level market could be affected, though much less so.)

A few caveats.  First, this may not be such a bad thing -- or it may ultimately tend to favor candidates who would otherwise be favored anyway.  If we like and embrace these rankings because we think they tend to measure scholarly influence, the lateral market is already designed -- at least in theory -- to give weight to this criterion.  Thus, assuming the metric is one we accept, Leiter and Leiter-scores (in short, your citation count) merely give us a proxy to quantify potential lateral candidates.  That may be a useful shorthand, and it need not adulterate the process through quantification, so long as the numbers are used sensibly.  Second, there is probably only a relatively small group of schools that are especially "Leiter-sensitive."  At the very top, and among the schools far outside the "top 50," few schools care where they find themselves, I'd guess.  Yale isn't losing much sleep over whether Chicago beats it out for the top slot.  But there is still a sizable group of law schools that probably do care, whether because the school "underperforms" in the US News rankings or "underperforms" in Leiter's rankings, creating incentives to focus intensely on "Leiter-performance."  And given that some critical mass likely does care, one can reasonably ask whether there will be any effect on lateral hiring and whether that effect is a welcome one.

My instinct is that although the effects of "Leiter-fixation" will likely be small all things considered, they will be unwelcome.  First, it could lead schools to hire more people in "high-citation" fields, disserving those who focus on low-citation areas.  If I have one open hire and I am very Leiter-sensitive, I will hire another IP person over any evidence scholar:  the 10th most cited IP scholar beats out the most highly cited evidence scholar.  Obviously a stylized example -- and rarely are lateral searches quite so open-ended -- but it makes the point well enough.
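(A toy illustration of that cross-field arithmetic -- every field, median, and count below is invented for the sketch, not drawn from Leiter's data:)

```python
# Invented numbers only: raw citation counts reward high-citation
# fields, while a within-field comparison tells a different story.
field_median = {"IP": 400, "Evidence": 90}  # hypothetical field medians

candidates = [
    ("10th most cited IP scholar", "IP", 450),
    ("Most cited evidence scholar", "Evidence", 300),
]

for name, field, cites in candidates:
    relative = cites / field_median[field]
    print(f"{name}: {cites} raw cites, {relative:.1f}x the field median")

# Raw counts favor the IP scholar (450 > 300); relative to field,
# the evidence scholar dominates (3.3x vs. 1.1x the median).
```

A Leiter-sensitive committee looking only at raw counts picks the IP scholar; any field-normalized comparison picks the evidence scholar.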

A second effect is the potential to distort perceptions of a scholar's work and promise.  Although no one would reasonably contest the importance of those at the very top of the Leiter rankings (Sunstein, Posner, etc.), much less can be said for using Leiter-scores in the middle of the field.  Yet those on the hiring end with acute Leiter-sensitivity will tend to prefer a candidate with 250 citations over a candidate with a mere 50 citations, even though there is no evidence that the difference is meaningful, especially when the candidates work in high-citation fields like Con Law.

No doubt, Leiter is not responsible for the misuse of his metrics, and he is fully honest about what they do and do not measure.  All the same, given their popularity, it does seem worth thinking about how to respond to deans and appointments committees who show themselves to be too Leiter-sensitive for comfort.

Any thoughts?

Posted by Ethan Leib on November 15, 2007 at 07:25 PM in Life of Law Schools

Comments

Anyone interested in this topic should take a look at James English's The Economy of Prestige. English focuses on the self-undermining nature of things like the Leiter scores, which are primarily designed to provide an objective basis for assessing quality, independent of plain old "prestige of the scholar's location."

English points out how "alternative" awards and rankings tend to produce results that are the same as those of the mainstream ones they aim to supplement, correct, or displace. For example, the "Sundance Awards" become more "Oscar-like" as the former try to be as consequential as the latter.

In Leiter's case, it's easy to see how a similar dynamic of convergence might happen. If a mid-level scholar at a mid-level school wants to get the attention of a high-level scholar at a high-level school, he or she will want to cite the latter. The desire to get to a higher-ranked school is not quality-related, but it does lead to more cites for the higher-ranked schools' scholars.

So the ultimate effect of such "alternative rankings" may well be to reinforce the stale indicators of status they purport to displace. Until law reviews address "letterhead bias," this hierarchy will only become more entrenched.

Posted by: Troubled | Nov 16, 2007 6:36:16 PM

I teach at a school that is very "Leiter-sensitive," as Ethan puts it. This sensitivity, in addition to having the effects that Ethan suggests, has a couple of others as well. One is a pressure to blog, as an (indirect) way of increasing downloads and citations. The second is a pressure to cite the work of others on the faculty, even if that work is not directly on point. That pressure has come in the form of suggestions from the dean to do exactly these things. As Ethan says, Brian in no way encourages this kind of thing, but it happens.

Posted by: Anonymous | Nov 16, 2007 2:18:37 PM

My hope is that Brian will give me a fighting chance for citation-rankings fame, and create a new category for "bald, late-30s professors at mid-western law schools who write about law-and-religion". Brian -- what do you say? =-)

Posted by: Rick Garnett | Nov 16, 2007 9:24:17 AM

Chemerinsky is currently teaching at Duke, and will be for the same amount of time as I will. (If we want to be technical, I'm not actually teaching at Texas, since I'm on leave!) And Young is currently teaching at Texas. But in the quite foreseeable future, they will both be gone, and given their prominence and distinctive contributions, someone might be interested in that fact. Individuals do not, in fact, move around very much, even if, in any given year, most top schools hire a couple of scholars laterally.

Posted by: Brian Leiter | Nov 16, 2007 8:04:14 AM

Thanks to Brian for the clarification. I merely intended to highlight the fact that a ranking of institutions by which "top scholars" are on the faculty raises the issue of affiliation, which is not nearly as important if one is looking merely at the scholars themselves. I do stand by my "logic," however. As a prospective student, I certainly looked at the faculty currently teaching at a school, not at blogs announcing who had accepted an offer for the academic year following the one in which I enrolled. Scholars often jump around among the top schools; it seems perfectly logical, especially in this context, to look at where a scholar presently teaches rather than at where an announced move will eventually take him. Professor Leiter's situation is easily distinguishable from Chemerinsky's. Leiter currently teaches at Texas, and will for some time. But I see Leiter's point as well.

Posted by: anon | Nov 15, 2007 11:14:26 PM

I have a technical point. Here's the description of the methodology for the studies:

"Names were searched as, Brian /2 Leiter, except where multiple middle initials or similar factors made necessary a wider scope."

Are schools using this approach in collecting their lateral data? If so, they would pick up a fair number of star-footnote citations for most folks, which may or may not be what the study is intended to count. How, if at all, are schools handling this?
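(To see why: a Westlaw-style "/2" connector matches the two names within two words of each other, in either order, so a star-footnote acknowledgment like "thanks to Brian Leiter" matches just as well as a citation does. A rough approximation in Python -- the helper below is hypothetical, not the study's actual tooling:)

```python
import re

def within_n_words(text, term_a, term_b, n=2):
    """Rough stand-in for a Westlaw-style '/n' proximity connector:
    True if term_a and term_b occur within n words of each other,
    in either order. Hypothetical helper, not the study's tool."""
    words = re.findall(r"[\w.'-]+", text.lower())
    pos_a = [i for i, w in enumerate(words) if w == term_a.lower()]
    pos_b = [i for i, w in enumerate(words) if w == term_b.lower()]
    return any(abs(i - j) <= n for i in pos_a for j in pos_b)

# A citation and a star-footnote acknowledgment both match:
print(within_n_words("See Brian R. Leiter, Nietzsche on Morality", "Brian", "Leiter"))      # True
print(within_n_words("* Thanks to Brian Leiter for helpful comments.", "Brian", "Leiter"))  # True
```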

Posted by: Matt Bodie | Nov 15, 2007 10:53:28 PM

Thanks, Ethan, for your reply. I suppose time will tell to what extent this kind of information influences, perversely or constructively, hiring practices.

A brief reply to the brave "anon" right before you. Every ranking I have ever done, in law or philosophy, has been forward-looking, because I have always taken the primary audience to be prospective students. My UT colleague Ernie Young was credited to Duke in the "overall" per capita impact study because he had accepted an offer there before (about a week before!) we completed that study. Chemerinsky and Fisk are now listed as with Irvine. What could possibly be the relevance of where "citations" were accumulated? By that "logic," most of Chemerinsky's citations should be credited to USC, even though he hasn't taught there for several years now.

Posted by: Brian Leiter | Nov 15, 2007 10:48:00 PM

Brian,

I think I was assuming that, indeed, the "rational" and "average" Leiter-sensitive schools will care much more about "overall per capita impact" than about their performance on any field-specific ranking. That is for a few reasons (which I might have totally wrong, mind you): 1. Most schools that care about these things will not really be capable, without a fair bit of luck, of landing someone on a top-10 list anyway, so appearing on a list can hardly be the goal of lateral recruitment (unless a school is extremely optimistic, lucky, or rich). 2. Except in rare cases, one would expect promotional materials to focus on overall quality, not field-specific quality. Of course, there are exceptions. I can imagine our trying to show off about our "domination" of the Evidence category (no other school appears twice!) or our having the second most cited Legal Ethics/Legal Profession professor. But someone in PR would likely counsel us away from such claims.

Still, I take your counterpoint. Your making the field-specific rankings available might serve the instructive purposes you are suggesting. I do, nevertheless, fear that citation accumulation is the next fixation -- after spending per student, faculty-student ratio, and the like, of course....

Posted by: Ethan Leib | Nov 15, 2007 10:21:51 PM

It is interesting that Brian Leiter affiliated himself with the University of Chicago for the purposes of these rankings. My understanding is that he is still at Texas and, moreover, that the citations accumulated while he was at Texas. This is not incredibly meaningful, except that he chooses to rank institutions, and his inclusion in the University of Chicago category affects the number, and ultimately the percentage, in the rankings -- and has the effect of placing Chicago above Harvard for the purposes of those rankings.
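(To make the arithmetic concrete -- all numbers below are invented, not Leiter's actual figures: in a per capita measure, crediting one heavily cited scholar to a school moves that school's average noticeably.)

```python
# Invented numbers only: how one scholar's affiliation moves a
# per capita citation figure.
def per_capita(total_citations, faculty_size):
    return total_citations / faculty_size

# Hypothetical school: 40 faculty with 8,000 total citations,
# plus one scholar carrying 600 citations of his own.
without_scholar = per_capita(8000, 40)         # 200.0 per professor
with_scholar = per_capita(8000 + 600, 40 + 1)  # ~209.8 per professor

print(without_scholar, with_scholar)
```

Any scholar cited more than a school's existing average pulls the per capita number up, so where a single prominent name is credited can indeed reorder schools that sit close together.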

Posted by: anon | Nov 15, 2007 10:14:32 PM

You're exactly right, Ethan. Quantitative metrics are very tempting for busy hiring committees and deans trying to sell their schools to busy people. But they can't substitute for the intangible qualities that ultimately make a scholar's work good. I think John Keats makes, in poetry, the point you are making, Ethan. Substitute "rankings" or "quantitativeness" for "philosophy":

... Do not all charms fly
At the mere touch of cold philosophy?
There was an awful rainbow once in heaven:
We know her woof, her texture; she is given
In the dull catalogue of common things.
Philosophy will clip an Angel's wings,
Conquer all mysteries by rule and line,
Empty the haunted air, and gnomed mine --
Unweave a rainbow, as it erewhile made
The tender-person'd Lamia melt into a shade.

Posted by: Anon | Nov 15, 2007 9:53:35 PM

There is always a danger that those being evaluated will adjust their behavior with an eye to future evaluations, though as adjustments go, trying to hire "high impact" scholars is not nearly as bad as, e.g., sending out fee waivers to hopeless applicants in order to drive down one's acceptance rate.

Anyway, I agree with you that there are real dangers here, but I find one of your examples puzzling. You write: "If I have one open hire and I am very Leiter-sensitive, I will hire another IP person over any evidence scholar: the 10th most cited IP scholar beats out the most highly cited evidence scholar." I would have thought the effect of specialty listings is the opposite of what you describe: namely, to make clear that scholarly impact as measured by citations is sensitive to area, so that it would be an error to compare impact across fields. Perhaps if there exists a law school that thinks only in terms of overall per capita impact, someone will make the calculation you suppose. But the rankings just released should help counteract that tendency by making field-specific impact clear.

As I note, the ordinal listing is not meaningful, but the "top ten" and "top twenty" lists do a pretty good job (ignoring the ordinal aspect) of capturing 80-90% of the leading and most influential senior people in the various fields. If they didn't, I would stop collecting this data. There may come a time when that's the case, of course.

Posted by: Brian Leiter | Nov 15, 2007 8:18:22 PM
