
Thursday, February 22, 2018

Meta-Ranking of Flagship US Law Reviews

Two years ago, Prawfs guest blogger Bryce Newell (now at the University of Kentucky) created a meta-ranking of the top US law reviews. On his personal blog, Bryce has updated the ranking (in sortable format) for 2018. Worth a look when contemplating where to submit and publish in the new submission cycle.

Posted by Howard Wasserman on February 22, 2018 at 11:44 AM in Teaching Law

Comments

Thanks for the link. It's worth noting that some of the rankings are out of sync with widely shared perspectives on the relative prestige of law reviews, and that a meta-ranking of such rankings will include that noise. For example, any ranking of law reviews that has the University of Chicago Law Review ranked 14th, below both UCLA and Cornell, has some serious problems.

Posted by: Orin Kerr | Feb 22, 2018 1:18:36 PM

Orin, as the author of the ranking in question, I certainly agree that the meta-ranking may be subject to accumulating (and amplifying) noise from the rankings that feed into it. It's also clear that all of these rankings suffer from methodological problems and are subject to fair critique. For those reasons, I haven't tried to push the meta-ranking as anything more than another source of information. If anything, and even if you ignore the meta-ranking, I hope the sortable rankings from US News, W&L, and Google Scholar are useful or interesting to some.
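For those curious about the mechanics, a meta-ranking of this sort can be as simple as averaging each journal's rank across the source rankings. Here is a minimal sketch of that kind of aggregation (illustrative data only; not necessarily the exact inputs or weighting I used):

```python
# Minimal sketch of rank aggregation: average each journal's position across
# several source rankings, then sort by that average. Data are hypothetical,
# not the actual meta-ranking inputs.
source_ranks = {
    # journal: (US News peer rank, W&L rank, Google Scholar rank)
    "Journal A": (5, 9, 7),
    "Journal B": (8, 4, 6),
    "Journal C": (2, 3, 1),
}

def avg_rank(journal):
    """Arithmetic mean of a journal's ranks across all sources."""
    ranks = source_ranks[journal]
    return sum(ranks) / len(ranks)

for i, journal in enumerate(sorted(source_ranks, key=avg_rank), start=1):
    print(f"{i}. {journal} (average source rank {avg_rank(journal):.2f})")
```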

On the other hand, a serious question: should we take anything away from the fact that U. Chi. L. Rev. (the example you cite) consistently underperforms, relative to its US News ranking, on the available rankings based at least in part on some form of impact/citations (W&L and Google Scholar)? This gets us back to the basic question of whether all we (should) care about, for purposes of publishing, is really whatever metric of prestige is generally accepted across the community (e.g., US News peer reputation scores for a law school), regardless of the association between that ranking and the journal itself, the articles the journal publishes, and how much impact those pieces have on future scholarship. This seems bizarre... yet it seems that is what the discipline does.

Posted by: Bryce Newell | Feb 22, 2018 1:34:01 PM

One clarification: I didn't mean to imply that the University of Chicago Law Review necessarily SHOULD be placed at #14, below, e.g., Cornell and UCLA. It may well be that it deserves to be placed more in sync with the US News reputation rankings... but then what evidence/method/etc. should we use to evaluate journals?

Another metric:
U.Chi.L.Rev.'s 2016 Journal Citation Reports Impact Factor is 2.284 (1.889 in 2015), with a 2016 5-year IF of 2.248.
Cornell's is 2.150 (3.066 in 2015), with a 2016 5-year IF of 2.639.
UCLA's is 2.177 (2.648 in 2015), with a 2016 5-year IF of 2.75.
These suggest both Cornell and UCLA compete quite well with Chicago.

Posted by: Bryce Newell | Feb 22, 2018 2:19:35 PM

Robert Bork went to the University of Chicago and never even made it onto the Supreme Court. So I don't think U of C is that prestigious. I don't think U of C has put any more people on the Supreme Court than UCLA or Cornell, so I think they all rank the same.

Posted by: The Bork Chicago School | Feb 22, 2018 2:59:23 PM

Some quick calculations, based on JCR Impact Factor numbers:

19 of the top 20 flagship law journals (as ranked by average US News Peer Reputation rankings from 2010-2018 -- the prRank column in my MetaRank table) have a JCR-computed IF (Washington U. L. Rev. does not appear).

By 2016 IF: Chicago ranks at #11 (2.284), slightly above Cornell (#14; 2.150) and UCLA (#13; 2.177).

By 2016 five-year IF: Chicago ranks at #13 (2.248), below Cornell (#10; 2.639) and UCLA (#8; 2.755).

If we average annual Impact Factors over the past 5 years: Chicago ranks at #15 (1.883), below Cornell (#12; 2.265) and UCLA (#11; 2.451).
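(Mechanically, that last comparison is just the arithmetic mean of each journal's annual JCR Impact Factors across the five years. A minimal sketch, using hypothetical values rather than the actual JCR series:)

```python
# Minimal sketch of the five-year averaging described above: a plain
# arithmetic mean of annual JCR Impact Factors. Values are hypothetical,
# not actual JCR data.
annual_ifs = {
    "Journal X": [1.6, 1.8, 1.9, 1.9, 2.3],  # hypothetical 2012-2016 IFs
    "Journal Y": [2.0, 2.2, 2.4, 2.3, 2.4],
}
for journal, ifs in annual_ifs.items():
    print(journal, round(sum(ifs) / len(ifs), 3))
```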

Again, I don't mean to pick on Chicago (I would be over the moon if they ever accepted one of my papers! It was just the comparison made above), but I do mean to suggest that we should base our judgments about the relative importance of law reviews on something more than US News scores or the mere fact that the perception that journal X is better than journal Y is widely shared.

But, I suppose, in evaluating publication offers/tenure cases/etc., maybe perception is more important than these other considerations? Are all of these impact-related metrics so full of methodological flaws that they are worse than the alternative?

Posted by: Bryce Newell | Feb 22, 2018 3:16:27 PM

My sense is that Chicago Law Review has traditionally given more preference to its own profs than the average top law review does -- but I might be wrong about that. If true, this could explain things.

Posted by: a non | Feb 22, 2018 3:27:27 PM

Bryce asks: "This gets us back to the basic question of whether all we (should) care about, for purposes of publishing, is really whatever metric of prestige is generally accepted across the community (e.g., US News peer reputation scores for a law school), regardless of the association between that ranking and the journal itself, the articles the journal publishes, and how much impact those pieces have on future scholarship. This seems bizarre..."

It doesn't seem bizarre to me, I confess. It seems to me that law reviews offer two basic things: (1) Editing and cite-checking, of somewhat varying quality, designed to improve and polish the article; and (2) A highly imperfect measure of prestige based on the perceived intensity of the competition for a slot in that journal. It's hard to measure (1), as it depends on who the editors are, and different authors care to varying degrees about how much (1) matters. But it's relatively straightforward to measure (2), as there are widely shared attitudes about that among members of the legal academy.

If I understand him correctly, Bryce suggests that journals do something else: Perhaps the identity of the journal influences the impact of the scholarship beyond the journal's prestige. But I'm somewhat skeptical of that, and I would want to know more: What's the thinking behind how that works? I can imagine it happening in some specific cases. For example, the Harvard Journal of Law & Public Policy is distributed in paper form to all Federalist Society members, which means that articles in it may reach an audience of judges that other journals presumably don't ordinarily reach. But that's a pretty unusual situation.

Posted by: Orin Kerr | Feb 22, 2018 4:09:13 PM

Also, a question: Do the Washington & Lee rankings measure the absolute number of citations for a journal, or the number of citations per article? Even if you think citations matter, that would make a big difference. For example, Chicago has only 4 issues a year; Cornell has 6 issues a year. That might explain why Chicago has fewer total citations. (I would check myself, but the W&L site seems to be down.)

Posted by: Orin Kerr | Feb 22, 2018 4:15:53 PM

I guess this is a good time for me to repost this link on this very subject from last year.

http://witnesseth.typepad.com/blog/2016/07/google-scholar-releases-2016-journal-rankings-controversy-ensues.html

Posted by: Rob Anderson | Feb 22, 2018 5:07:17 PM

Regarding Orin Kerr's first comment, I'm not sure how widely shared "widely shared perspectives" really are. If you're an elite law professor who clerked on a federal district/circuit/supreme court, researches public law, follows comparably situated colleagues on Twitter, reads the same SCOTUS cases, and attends the same conferences, that's one aggregation of opinion among law professors. But I think you might be surprised how much that aggregate is NOT actually "widely shared" among ALL law professors. So while the tippy-top is pretty much uncontested -- Harvard and Yale will always be the top -- it can get much more diffuse much more quickly beyond that, even within the community of American law professors.

Posted by: Sam | Feb 22, 2018 9:41:56 PM

Sam, interesting comment. I agree that there are specialty areas that have their own shared sense of rankings, often based on specialty journals. For example, tax law scholarship is often published in tax law journals, and outsiders to tax law often can't tell what kind of prestige signals different placements might be sending. (If you're a non-tax law prof on an appointments committee looking to fill a tax law slot, for example, you might have to call a tax law professor friend to evaluate that.) But is your sense that there is a lot of disagreement as to the relative prestige of student-edited law reviews? For example, Bryce's meta-ranking ranks the University of Chicago's law review below UCLA's law review. Is your sense that there's a significant subset of U.S. law professors who share that view, and that, if they had offers from both journals, they would pick UCLA over Chicago?

Posted by: Orin Kerr | Feb 23, 2018 2:15:39 AM

I think this document provides a much better ranking guide -- the included Q&A explains why. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3026293

Posted by: AnonProf | Feb 23, 2018 8:22:02 AM

Orin: To answer your question, I think the more accurate statement is that there just isn't a firm notion that Chicago is better (in the sense of prestige) than UCLA, such that a ranking putting Chicago below UCLA would stand out as odd. Instead, it would seem roughly uncontroversial or, within parameters, arbitrary. They are both lumped together in a dense clump of around 20 schools clearly below Harvard, Yale, Stanford, Columbia, and maybe one or two others.

Think of it this way. When your students inquire about clerkships on the federal district courts, outside SDNY, DC, LA, and a few other geographically desirable regions, the "prestige" of many of the rest doesn't fall into an ordinal system. There are clumps, and within the clumps the ordering isn't particularly sharp, once we set aside personal preferences. Just as students treat the high-prestige clerkships in the SDNY etc. as great to get but not expected, so too with law professors and law review placements: the ordering of the second clump just isn't as sharp. It may appear that way if you take an especial interest in rankings generally -- of law schools, law reviews, clerkships, etc. -- but for a great many of us, beyond the first few, the rest just lump together.

Posted by: Sam | Feb 23, 2018 10:23:12 AM

Putting aside bigger questions about the significance or validity of the rankings, I wonder how meaningful it is to focus on ordinal ranking rather than, say, tranches. Orin, to your question, I don't think of UCLA Law Review as better than Chicago's, but I also don't think of it as worse. Part of that is because I see large numbers of journals as indistinguishable. To me, some group of elite law reviews is largely indistinguishable: wherever they're ranked respectively, I view UCLA, Chicago, Michigan, Cal, etc. as about the same. All great, but with no clear difference in prestige, etc. Same w/ law schools and law reviews across the board. There certainly are differences, but I think those are associated more with classes/groups of schools/law reviews. Even in our status-obsessed field, I wonder how many people would feel strongly about the distinction among schools ranked 24-29 in the USNWR.

Posted by: anonjrprof | Feb 23, 2018 10:29:23 AM

I haven't had time to jump back in here with a full response. Orin, I appreciate your thoughtful response and explanation. I've heard explanations like it before from law professors, but I hadn't quite understood. I think it's fair to say that academic disciplines outside law approach evaluating journals very differently. My focus on measures of impact in my comments reflects that way of thinking: journals gain importance in their field (and are thought of as more prestigious or beneficial to publish in) at least in good part because of their impact (and thus the potential that publishing in that journal will increase the impact of the article itself), often measured by citations (we can debate the best metrics to use for this, of course), and because of their selectivity. Selectivity in exclusive-submission journals and mass-submission venues (e.g., law) is not comparable, so that concept doesn't seem to mean the same thing w/r/t legal scholarship. I am an advocate for qualitative review (rather than solely relying on quantitative metrics for evaluating scholarship), especially in reviewing tenure cases, for example, but I still think these metrics have a valuable place.

On the other hand, if I understand your argument correctly, law journals may serve a very different function than, say, scientific or social science journals. Whereas peer review in Sci/SS journals is intended to serve as a check on publishing poor work by putting it through review by external subject-matter experts (again, we can argue this), law journals serve to publish most anything an author believes is worth publishing, but offer cite-checking and copy-editing services and (potentially) some amount of prestige based on the scarcity of places in journals at top schools compared to the number of authors vying to have those "better" students cite-check and copy-edit their papers... Is that a fair reading of what you've said? Perhaps the "peer review" then comes post-publication, as others cite and discuss/critique a paper (or ignore it)... (?)

But, if the author of a potential law review article cares more about publishing in a journal that generally has greater "impact", however measured, than about having a particular class of students copyedit their paper, then these impact-related metrics seemingly become fairly important...

Posted by: Bryce Newell | Feb 23, 2018 3:32:46 PM

I find Bryce's meta-ranking extremely helpful because it synthesizes data from a number of other studies that use distinct factors. This methodology tends to offset the flaws of each of the separate studies. I agree with Bryce that prestige is a matter of influence, and that influence should be measured at least in part by frequency of citation. I think some journals (e.g., San Diego) are ranked too low, but then again no study is entirely intuitive.

To Orin's earlier question, my understanding is that on W&L's page the citation figure is the raw total number, while impact factor is the average number of citations per published article. That is, IF corrects for the citation disparity that inevitably arises from varying numbers of published volumes. Is my understanding correct? Also, on the W&L site, do the citation count and/or IF include student works, or only articles?

Posted by: Alexander Tsesis | Feb 23, 2018 4:10:04 PM

Sam, anonjrprof, that's very interesting. My sense of things is different, FWIW. In my experience, the prestige ranking of student-edited law reviews goes 1) Harvard, 2) Yale, 3) Stanford, 4) Columbia, 5) Chicago, 6) NYU, and then a tie at 7) between Michigan and Virginia (and maybe Cal, but Cal may be 9th). The UCLA Law Review is considered less prestigious than any of the T14s. If you're an entry-level candidate trying to impress appointments chairs, for example, you would be nuts to take a UCLA Law Review offer over a University of Chicago Law Review offer.

Bryce, I think there is indeed a big difference between law reviews and journals in the sciences. My vague sense, at least from my grad school days, is that in the sciences there is usually a handful of subject-matter journals in each field that publish work only in that field. As an academic, you publish only in journals covering your narrow specialty; you submit to only one journal at a time; and each journal is peer-reviewed. In that environment, the journals that are considered best are the ones that have traditionally had the biggest impact in the field by publishing the most important papers. It's really different in law because the student-edited journals are almost all generalist journals, run by student editors who serve for only a year, and multiple submissions are the norm. In law, hundreds or thousands of professors submit to all the same journals at the same time; the student editors (with faculty support in some cases) pick the articles that they think are best; and the norm is to expedite from the first offer to see how high up the chain one's article can go. The selection process, with the norm of expedites, produces a rough and imperfect (very rough and imperfect!) indicator of quality that leads to some sense of prestige.

Posted by: Orin Kerr | Feb 24, 2018 2:18:02 AM

I’d probably take Penn over NYU, Michigan, Virginia, and California.

Posted by: Anon | Feb 24, 2018 6:08:23 PM

The W&L site appears to be back up at a different address, and it looks like the site is being redesigned. The methodology for the Combined Score ranking (which seems to be what most people refer to) is here: https://managementtools4.wlu.edu/LawJournals/Default4.aspx In short, the default is 1/3 IF and 2/3 total cites, but users can alter this weighting.
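To make that weighting concrete, here is a minimal sketch of a combined score computed that way. Normalizing each metric against the top journal's value is my assumption about how two differently scaled metrics get combined; W&L's actual computation may differ, and the numbers are hypothetical:

```python
# Minimal sketch of a W&L-style combined score (default weights: 1/3 impact
# factor, 2/3 total cites). Normalizing each metric to the maximum value is
# an assumption; W&L's actual computation may differ. Data are hypothetical.

def combined_score(jif, cites, max_if, max_cites, w_if=1/3, w_cites=2/3):
    """Weighted average of the two metrics, each scaled to its maximum."""
    return w_if * (jif / max_if) + w_cites * (cites / max_cites)

journals = {
    "Journal A": (2.8, 1500),  # (impact factor, total cites)
    "Journal B": (2.2, 2100),
}
max_if = max(jif for jif, _ in journals.values())
max_cites = max(cites for _, cites in journals.values())

scores = {name: combined_score(jif, cites, max_if, max_cites)
          for name, (jif, cites) in journals.items()}
for name in sorted(scores, key=scores.get, reverse=True):
    print(name, round(scores[name], 3))
```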

The W&L site has this to say about the IF and Combined Score rankings:

"Impact-factor rankings should be used cautiously as they are biased against journals that publish a larger number of shorter articles, such as book reviews. Nevertheless, if two legal journals have a similar composition of articles, notes, and book reviews, then from an author's viewpoint it's reasonable to compare the impact-factor of each to see which is a better journal with which to publish. The implication of a similar ranking by total citations, but a dissimilar ranking by impact-factor is that the journal ranked lower by impact-factor is publishing some articles of lesser quality, or of less general interest. It's suggested that in preference to using impact-factor, the combined-score ranking (a weighting of both impact-factor and total cites) offers a more balanced view of journal ranking." (see https://managementtools4.wlu.edu/LawJournals/Default5.aspx)

Posted by: Bryce Newell | Feb 26, 2018 4:16:14 PM

It does appear that some of the W&L data may have been corrupted in the past week or two. A commenter on the Angsting Thread noted that Washington Law Review is listed as 912th overall (or 223rd among General law journals), while the data I relied on for the meta-ranking listed it at 51st. The rankings themselves haven't been updated, so I assume it must be a data corruption issue as they update their site (or something similar).

Posted by: Bryce Newell | Feb 27, 2018 11:05:39 AM

FYI: the W&L rankings are being corrected (Washington Law Review's data has now been fixed), and the new site address is: http://go.wlu.edu/lawjournals

Posted by: Bryce Newell | Feb 27, 2018 1:43:25 PM

It strikes me that if every objective measure (combined score, impact factor, and journal citations) ranks the University of Chicago Law Review in the 20s or below, then it must be that the academy subjectively over-values that law review. I agree with the poster above who pointed to the persistent pattern at the U. of C. L. Rev., over at least three decades and probably much longer, of favoring its own faculty.

I want to stress that I think the Law Review is certainly better than the 20s, so like any other metric the W&L one is imperfect, but I also don't think it objectively fits into the top ten.

The Virginia L. Rev. has precisely the same problem: elitism in selecting profs teaching at top-25 schools or higher, plus a strong preference for UVA faculty, plays into that law review being objectively less often cited, i.e., having less impact, than would be expected from the law school's quality.

Posted by: anon | Feb 28, 2018 10:45:07 AM
