
Thursday, July 21, 2011

Law Review Rankings

Maybe it’s the hundred-degree heat talking, but I think law review rankings are a little bit useful.
As a reader and researcher, I do make some use of an article's placement as a screen for how close an initial read to devote to it. When I look at the c.v.'s of two scholars whose work I've never read, I'm probably inclined to look more attentively at the work of the one with the fancy cites. Yeah, I said it. Put away the pitchforks, dear readers: I don't think I'm alone. Satisficing is not going away. And, by the way, perceived prestige is an important motivator for the nonprofit labor force.

It would be nice, then, if there were reliable guides to the signaling value of a given journal placement.
U.S. News gives us a decent if limited signal: because most authors agree that, at the pinnacle at least, its rankings are roughly meaningful, slots in the top journals are scarce. So we can assume that journals at the top are more selective than others. Whether they make good decisions when picking the few from the many, we don't know. And in the end, using selectivity as a measure of quality leads us, um, to this. Is there a better way to rank journals?

An under-appreciated difficulty here is that this throws us back into the problem of defining what counts as good legal scholarship. Given that journal editors are likely to respond to the incentives of an explicit ranking system, some care has to go into constructing it. An approximation of a value-neutral approach might be simply to rank publications based on the use other scholars make of them. (For a thoughtful review of why that method works and what its problems are, see Russell Korobkin, 26 FSU L. Rev. 851, and Ronen Perry.) Korobkin argues that, basically, citation counts create the least bad set of incentives; usefulness to others seems like a decent result even if it's somewhat distorting of the real scholarly mission (which, of course, is to be completely useless).

Well, the Washington & Lee Law Library, as many readers will know, offers a ranking of law journals based on total citations and “impact factor,” or IF. IF in the larger scholarly world is a widely used metric of the quality of journal editors’ judgment; it represents the mean number of citations per article per year for the journal. It’s not actually a great measure, since it tells us nothing about the quality of the citing articles, and reputation probably produces IF as much as the other way around.

As weak as IF is in general, W&L’s implementation is particularly problematic.  

If you probe the W&L description of their methods closely, you find that they aren’t really calculating IF. What they’re doing instead is counting, for each journal, the number of articles that cite it at least once. That method tends to shrink the distance between top journals and the rest (and, probably, to underweight specialty journals), because it gives a journal no credit for being cited more than once in the same article. A real IF would count every citation to each published article, add them up, and then divide by the number of articles the journal publishes per year.
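
To make the difference concrete, here’s a toy sketch in Python. All the numbers are invented, and the function names are mine, not W&L’s; it just shows how ignoring repeat citations compresses the counts:

    def wl_style_count(cites_per_citing_article):
        # W&L's apparent method: each citing article counts once,
        # no matter how many times it cites the journal.
        return sum(1 for c in cites_per_citing_article if c > 0)

    def true_impact_factor(cites_per_citing_article, articles_published):
        # Textbook IF: every citation counts, divided by the journal's output.
        return sum(cites_per_citing_article) / articles_published

    # Hypothetical data: six articles cite Journal X this many times each.
    cites = [3, 0, 1, 5, 0, 2]
    print(wl_style_count(cites))          # 4 -- repeat citations ignored
    print(true_impact_factor(cites, 40))  # 0.275 -- every citation counts

A journal whose fans cite it five times per article looks no better, on W&L’s method, than one cited once per article.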

Also, there’s gaming, as some have noted around here recently. Thomson Reuters, which compiles IF rankings for non-law subjects and sells the results to journals for their advertising purposes, reports self-citations for each journal. Users can then make up their own minds about whether they care.

Finally, to be parochial, W&L only uses Westlaw to generate its citation counts, and Westlaw doesn’t include Tax Notes, a major publication for us tax types. (This is also our gripe with Leiter). So tax articles are (sniff) even more under-appreciated.

None of this is to pick on W&L. It’s wonderful that someone has taken on the task of generating information that’s useful to all of us. But hopefully they are open to reform. Another path forward would be for Thomson Reuters to enter the law market.

Either way, what I'd particularly like to see is some kind of quality-weighted influence measure, along the lines of Google PageRank, as described here.
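
To gesture at what that could look like: a back-of-the-envelope PageRank-style computation over a journal citation graph. The matrix below is invented for illustration; entry [i][j] is how often journal i cites journal j:

    import numpy as np

    # Hypothetical journal-to-journal citation counts.
    cites = np.array([[0, 10, 2],
                      [8,  0, 1],
                      [4,  3, 0]], dtype=float)

    # Each journal distributes one unit of influence across those it cites.
    transition = cites / cites.sum(axis=1, keepdims=True)

    damping = 0.85
    n = cites.shape[0]
    rank = np.full(n, 1.0 / n)
    for _ in range(100):  # power iteration; converges quickly at this size
        rank = (1 - damping) / n + damping * transition.T @ rank

    print(rank)  # a citation from a highly ranked journal counts for more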

Posted by BDG on July 21, 2011 at 12:58 PM in Law Review Review | Permalink


Comments

@Andrew: a variation of the idea that "a citation from a crackpot author isn't as valuable as a citation from a widely respected author" is implemented in S.J. Liebowitz & J.P. Palmer, "Assessing the Relative Impacts of Economics Journals," 22 Journal of Economic Literature 77-88 (1984) (see especially page 82).

Posted by: Ronen Perry | Aug 25, 2011 7:31:50 AM

You might test the value of any of these ranking systems (and of ranking systems generally) by convincing a stat geek with time on his or her hands to compare the results produced by those systems with law professors' judgments about the best work in their fields. In environmental and land use law, for example, two professors have for years put together annual compilations of top articles, and the selections are made by polling a large group of faculty. Perhaps other sub-fields do the same thing. Those selections aren't perfect measures of quality, but they're probably much better than any readily available alternative, and they create databases that could be used to do some empirical analysis.
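
The comparison itself would be simple once the data were assembled; for instance, a rank correlation between a system's rankings and the poll results. A sketch with invented numbers:

    from scipy.stats import spearmanr

    # Hypothetical: ten articles' journal ranks under some system, and the
    # number of faculty votes each got in a field-specific "best of" poll.
    journal_rank  = [1, 3, 2, 8, 5, 12, 7, 20, 15, 9]
    faculty_votes = [30, 25, 28, 10, 18, 6, 12, 2, 4, 9]

    rho, p = spearmanr(journal_rank, faculty_votes)
    print(rho, p)  # strongly negative rho: better ranks track more votes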

Posted by: Dave Owen | Jul 22, 2011 10:51:25 AM

"Westlaw doesn’t include Tax Notes, a major publication for us tax types. (This is also our gripe with Leiter). So tax articles are (sniff) even more under-appreciated."

I assume Westlaw also does not include the National Tax Journal, another important journal in the tax world (it's mostly an economics journal, but some high-quality articles from legal scholars appear every year).

Posted by: GU | Jul 21, 2011 7:43:21 PM

I'm trying to work through this idea in my head, but might it be possible to do some sort of recursive citation checking? The logic being, of course, that a citation from a crackpot author isn't as valuable as a citation from a widely respected author.

Of course, this would require picking some arbitrary cutoffs. But maybe a system could work like this:

1. Pick a date range that we care about to check the journal's quality. (5 years back seems to make sense to me. Who cares how good the journal was 10 years ago, after all? But maybe a few more years would be good.)

2. Take all of the articles the journal has published in that timeframe, and collect all articles that cite to those articles. The count of those articles (allowing multiplicities if one article cites more than one article) is the "first order citation count."

3. Find all articles that cite any articles in the first order citation group. Again, multiplicities count. The number of articles in that group is the "second order citation count."

The second level seems like a reasonable place to cut it off. If we're talking about a limited number of years, there probably won't be that many more levels of citation anyway (and counting all of the levels would favor the older articles over the new ones, which wouldn't make much sense).

This would of course tend to skew the rankings in favor of those journals which are publishing on current and hot issues, though I'm not sure that's a bad thing. And it wouldn't be difficult to calculate using a computer system.
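
For concreteness, here's a rough Python sketch of steps 2-3, assuming the citation graph is already in hand as a dict from each article to the articles it cites (all the data here is made up):

    from collections import defaultdict

    def order_counts(journal_articles, cites_of):
        # Invert the graph once: cited article -> list of citing articles.
        cited_by = defaultdict(list)
        for citing, cited_list in cites_of.items():
            for cited in cited_list:
                cited_by[cited].append(citing)

        # Step 2: first-order count, multiplicities allowed.
        first_order = [citer for art in journal_articles
                       for citer in cited_by[art]]
        # Step 3: second-order count, again with multiplicities.
        second_order = [citer for art in first_order
                        for citer in cited_by[art]]
        return len(first_order), len(second_order)

    # Toy graph: journal J published a1 and a2; b1 cites both, b2 cites a1,
    # and c1 cites b1.
    cites_of = {"b1": ["a1", "a2"], "b2": ["a1"], "c1": ["b1"]}
    print(order_counts(["a1", "a2"], cites_of))  # (3, 2)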

Posted by: Andrew MacKie-Mason | Jul 21, 2011 4:16:20 PM

Given the drawbacks to citation counts as currently implemented (e.g., being limited to the Westlaw database, not capturing the breadth of a scholar's productivity), I would think Google Scholar's new citation database would be the way of the future: http://googlescholar.blogspot.com/2011/07/google-scholar-citations.html. Google is still toying with the right metrics, and misses quite a few citations, but it has the resources to invest in getting this closer to right than Leiter, W&L, or SSRN.

Posted by: anon | Jul 21, 2011 2:00:09 PM

I think the W&L rankings are a joke. The only time I hear someone mention them is when someone is trying to justify what would otherwise be considered a bad placement, i.e., "Sure, the school is in the 3rd tier, but their journal is ranked #56 on W&L!"

Posted by: LawProf | Jul 21, 2011 1:43:24 PM

Well-played, Zoom.

Posted by: Dan Markel | Jul 21, 2011 1:43:19 PM
