
Monday, September 17, 2018

Reconstructed Ranking for Law Journals Using Adjusted Impact Factor

I would like to thank everyone for their comments, and especially USForeignProf, who added an important perspective. The main motivation of our study was to expose the risks of blindly relying on rankings as a method for evaluating research. While we do not have data about the impact of metrics on the evaluation of research in law, we suspect that law schools will not be insulated from what has become a significant global trend. Our study highlights two unique features of the law review universe which suggest that global rankings such as the Web of Science JCR may produce an inaccurate image of the law journal landscape: (1) the average number of references in SE articles is much higher than in articles published in PR journals; and (2) citations are not equally distributed across categories. In our study we tried to quantitatively capture the effect of these two features (what USForeignProf has characterized as the dilution of foreign journals' metrics) on the ranking structure.

To demonstrate the dilution effect on the Web of Science ranking, we examined what happens to the impact factor of the journals in our sample if we reduce the “value” of a citation received from SE articles from 1 to 0.4. We used the value of 0.4 because the mean number of references in SE journals is about 2.5 times greater than the mean number of references in PR journals (in our sample), and 1/2.5 = 0.4. For the sake of the experiment, we defined an adjusted impact factor, in which a citation from the SE journals in our sample counts as 0.4, and a citation from all other journals as 1. I want to emphasize that we do not argue that this adjusted ranking constitutes in itself a satisfactory solution to the ranking dilemma. We think that a better solution would also need to take into account other dimensions, such as journal prestige (measured by some variant of the PageRank algorithm), and possibly also a revision of the composition of the journal sample on which the WOS ranking is based (which is currently determined - for all disciplines - by WOS staff). However, this exercise is useful in demonstrating the dilution effect numerically. The change in the ranking is striking: PR journals are now positioned consistently higher. The mean reduction in impact factor for PR journals is 8.3%, compared with 46.1% for SE journals. The table below reports the results of our analysis for the top 50 journals in our 90-journal sample (data for 2015) (the complete adjusted ranking can be found here). The order reflects the adjusted impact factor (the number in parentheses reflects the un-adjusted ranking). In my next post I will offer some reflections on potential policy responses.

  1. Regulation and Governance (10)
  2. Law and Human Behavior (13)
  3. Stanford Law Review (1)
  4. Harvard Law Review (2)
  5. Psychology, Public Policy, and Law (18)
  6. Yale Law Journal (3)
  7. Texas Law Review (4)
  8. Common Market Law Review (22)
  9. Columbia Law Review (5)
  10. The Journal of Law, Medicine & Ethics (29)
  11. University of Pennsylvania Law Review (8)
  12. Journal of Legal Studies (15)
  13. Harvard Environmental Law Review (14)
  14. California Law Review (6)
  15. American Journal of International Law (19)
  16. Cornell Law Review (7)
  17. Michigan Law Review (9)
  18. UCLA Law Review (12)
  19. American Journal of Law & Medicine (36)
  20. Georgetown Law Journal (11)
  21. International Environmental Agreements-Politics Law and Economics (41)
  22. American Journal of Comparative Law (25)
  23. Journal of Law, Economics, & Organization (37)
  24. Journal of Law and Economics (35)
  25. International Journal of Transitional Justice (42)
  26. Law & Policy (44)
  27. Harvard International Law Journal (26)
  28. Chinese Journal of International Law (47)
  29. Journal of International Economic Law (48)
  30. Law and Society Review (46)
  31. Antitrust Law Journal (27)
  32. Indiana Law Journal (24)
  33. Behavioral Sciences & the Law (51)
  34. Virginia Law Review (16)
  35. New York University Law Review (17)
  36. Journal of Empirical Legal Studies (39)
  37. Leiden Journal of International Law (54)
  38. University of Chicago Law Review (20)
  39. Social & Legal Studies (58)
  40. World Trade Review (61)
  41. Vanderbilt Law Review (23)
  42. Harvard Civil Rights-Civil Liberties Law Review (32)
  43. Modern Law Review (63)
  44. Annual Review of Law and Social Science (49)
  45. European Constitutional Law Review (64)
  46. Oxford Journal of Legal Studies (59)
  47. Journal of Environmental Law (65)
  48. European Journal of International Law (57)
  49. Law & Social Inquiry (62)
  50. George Washington Law Review (31)
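For readers who want to see the arithmetic, here is a minimal sketch of the adjusted-impact-factor calculation described above. The journal names and citation counts are invented for illustration; only the 0.4 weight for citations from SE journals comes from our definition.

```python
# Sketch of the adjusted impact factor: a citation from an SE journal
# counts 0.4, a citation from any other journal counts 1.
# All counts below are hypothetical.

SE_WEIGHT = 0.4  # mean references in SE articles ~2.5x PR, so 1/2.5 = 0.4

def adjusted_impact_factor(cites_from_se, cites_from_other, citable_items):
    """Impact factor with SE-sourced citations down-weighted to 0.4."""
    weighted_cites = SE_WEIGHT * cites_from_se + cites_from_other
    return weighted_cites / citable_items

# A hypothetical PR journal cited mostly by other PR journals...
pr_if = adjusted_impact_factor(cites_from_se=20, cites_from_other=180,
                               citable_items=100)   # (0.4*20 + 180)/100 = 1.88
# ...versus a hypothetical SE journal cited mostly by other SE journals.
se_if = adjusted_impact_factor(cites_from_se=180, cites_from_other=20,
                               citable_items=100)   # (0.4*180 + 20)/100 = 0.92
```

Both journals receive 200 raw citations, yet the adjustment cuts the SE-heavy journal's impact factor far more sharply, which is the dilution effect the table illustrates.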

Posted by Oren Perez on September 17, 2018 at 02:53 AM in Article Spotlight, Howard Wasserman, Information and Technology, Law Review Review, Peer-Reviewed Journals | Permalink


Dear Anon3
You raise a valid point - but ultimately I think that our position - that the divergence in citation is due to institutional-cultural (cartel-like) biases and not to valid epistemic reasons - is more plausible. We discuss it in detail in the paper (and see my previous post (http://prawfsblawg.blogs.com/prawfsblawg/2018/09/tacit-citation-cartel-between-us-law-review-considering-the-evidence.html)). We rely, among other things, on the persistent critique of law reviews by various U.S. law profs. For a recent and very persuasive critique see: Friedman, Barry, Fixing Law Reviews (July 1, 2017), Duke Law Journal, https://ssrn.com/abstract=3011602.
I accept your point that US law schools do not use the WOS ranking. However, other schools around the world might, and this could unjustifiably dilute the value of PR journals. It is possible that we overestimate the current significance of such metrics for law, but as I already mentioned, we also rely here on general global trends, which may penetrate law schools in the future.

Posted by: Oren Perez | Sep 19, 2018 12:59:09 PM


I see. So your argument would be that if a SE article gets 2.5x the citations of a PR article *only* because it is 2.5x longer, and not because it's 2.5x better, your adjustment compensates for that. In full, your argument is that SE articles get 2.5x the cites exactly and entirely because of some combination of length and cultural differences in citation practices.

The problem is, you haven't done the work to demonstrate that this is true. It is entirely possible that length differences and citation practice differences would, all else equal, cause SE publications to get 2x the cites, but they get 2.5x the cites because they are *also* epistemically more valuable on the whole. If that is true, making a 2.5x adjustment unfairly devalues the influence of SE articles. It is likewise easy to imagine, and entirely consistent with your observations, that, all else equal, differences in length and citation practices mean that SE publications should get 3x the cites of PR articles. But because they are epistemically lower quality, they only get 2.5x the cites. In that case, your 2.5x adjustment doesn't go far enough.

My point is that there are a number of possible hypotheses explaining the mean difference in citations between SE articles and PR articles *that you acknowledge* -- length, differences in citation practices, and epistemic merit. You can't simply assume that differences in epistemic merit between SE articles and PR articles don't contribute to differences in mean citations, especially given that your entire study is premised on comparing differences in epistemic merit between SE articles and PR articles. At the very least, ranking based on citations-per-word (or per-1,000 words, if that yields a prettier number) takes length--likely a significant factor in explaining some of the difference--out of the equation. Then you'd be left with an adjusted combined ranking with differences likely only explainable by epistemic merit and citation practices. I'm still not sure that gets you far enough, as I don't think you can justifiably infer how much of an impact either epistemic merit or differences in citation practices alone has. But at least you'd be closer to where you want to be.
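A per-1,000-words rate of the kind suggested above is trivial to compute; in this sketch the article lengths and citation counts are invented for illustration:

```python
# Hypothetical illustration of a cites-per-1,000-words metric;
# the word counts and citation counts below are invented.

def cites_per_1000_words(total_cites, total_words):
    """Citation rate normalized by article length."""
    return total_cites / (total_words / 1000)

# A 30,000-word article with 100 cites and a 12,000-word article with
# 40 cites have identical per-1,000-word rates (10/3 each), so a
# length-normalized ranking would treat them the same despite the
# 2.5x gap in raw citation counts.
se_rate = cites_per_1000_words(100, 30_000)
pr_rate = cites_per_1000_words(40, 12_000)
```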

Finally, if SE articles cite almost exclusively to SE articles and PR articles cite almost exclusively to PR articles, then why should we expect differences in mean citation count to reflect anything meaningful? Even assuming that you could somehow properly adjust the above rankings to account for differences in length and in citation practices (and isolate epistemic merit), you're left with a comparison between apples and oranges. If scholars publishing in SE journals are, relatively speaking, worse scholars, the value of each of their length- and practice-adjusted citations should carry relatively less weight: it is less valuable as a signal of quality. Put another way, even when we adjust Major League Baseball and minor league (AAA) stats to account for the number of games played, park advantages, etc., a player with 3.0 WAR in the majors is not directly comparable to a player with 3.0 WAR (or whatever the equivalent would be) in AAA. Your research has shown that SE publications and PR publications operate as different leagues. But your research hasn't told us if one is the American League and the other is the National League (which are almost, but not perfectly, comparable) or if one is MLB and the other is AAA (which are less comparable) -- or if one is baseball and the other is softball (which are closely related but ultimately different sports). And I don't think this issue can be easily dismissed by noting that we are all talking about the law, or by noting that there are some combined rankings -- from my experience, the differences seem to me to be more on the level of baseball-softball differences than of American League-National League differences. These articles are written with fairly different purposes, in fairly different styles, by fairly different scholars (who, among other things, have fairly different training).
Even if you were able to somehow isolate epistemic merit as a factor, I'm still not sure that the merit of being cited by a UK LLB or an Australian PhD is equivalent (for better and for worse) to the merit of being cited by a US JD.

Along these lines, you've repeatedly justified your argument by noting that most rankings are combined and that combined rankings are widely used. Who uses combined rankings? At least in the US, I've never heard of anyone seriously using anything besides USNWR to evaluate article placement -- which is not a combined ranking of law reviews. So few legal scholars publish in PR journals that it is rare for a hiring or tenure decision to turn on where the Modern Law Review ranks vs the Vanderbilt Law Review; and, in every instance in which such a comparison might matter, I've only ever seen decisionmakers look at the piece itself.

Posted by: anon3 | Sep 19, 2018 12:07:52 PM

Here are some responses to the comments above.

I think the comments from Rebecca and Joey completely misread the whole purpose of our project. We start with the observation that research evaluation is increasingly influenced by metrics. This is a global trend not unique to law, but we believe law is increasingly influenced by it. While I appreciate Rebecca's point that she herself does not judge papers by metrics, it does not change the big picture. I'm also somewhat surprised to receive such comments from U.S. law profs, given that the US News ranking has such a deep impact there and that US law reviews use various heuristics in their publication decisions which are far from relying on considerations of quality alone (e.g., relying on authors’ CVs, their institutional affiliation and their past publication record) (see, e.g., Albert H. Yoon, ‘Editorial Bias in Legal Academia’ (2013); Jonathan Gingerich, ‘A Call for Blind Review: Student Edited Law Reviews and Bias’ (2009)). Given the influence of metrics, our paper seeks to offer an empirically driven critique of one important metric, and we caution against using this (and other metrics) blindly, without a good understanding of their methodology.
There was also a repeated comment that PR and SE journals should be evaluated separately. I think that this claim is far from obvious. First, all the major rankings lump the two categories together. Second, in terms of their subject matter, PR and SE journals do not represent different scientific domains, and so ranking them together does make some sense. And finally, while the distinction between the two categories is obviously very well known to law profs (less so to people outside law), the citation pattern we found has not been studied before, and our paper (to the best of our knowledge) is the first to expose it. While there have been several studies looking at citation practices in law reviews (a pioneering study is Olavi Maru, ‘Measuring the Impact of Legal Periodicals’ (1976) 1 Law & Social Inquiry: 227), they all focus, to the best of our knowledge, on U.S. law reviews and do not explore the interaction between the two classes.
To anon3: our findings are based on citations per article. Specifically, we found that the mean number of references in SE articles is 2.5 times higher than in articles published in PR journals. The reason for that difference is also well known to the audience of this blog: first, articles in SE journals tend to be much longer than those in PR journals; second, SE journals place special emphasis on corroborating every statement with references. See, e.g., Arthur Austin, ‘Footnote Skulduggery and Other Bad Habits’ (1989) 44 U. Miami L. Rev. 1009.

Posted by: Oren Perez | Sep 19, 2018 6:03:27 AM

Oren --

Are your rankings based on citations per article or citations per volume (or citations per page number or something else entirely)?

Posted by: anon3 | Sep 18, 2018 6:24:49 PM

Well, I give the author some credit for being willing to continue to post about this paper/topic in the face of much criticism from comments on the blog posts. That said, at the end of the day my reaction is this: I am sure glad that I do not actually work in a field where everyone's prestige and even institutional evaluations are yoked to some highly imperfect (and inevitably often wildly wrong) mathematical formula.

We have plenty of problems in the legal academy. But I am glad that we don't have the particular ridiculous set of problems that we would have if we (and our law schools) tried to evaluate our work using lists and measures like the one in this blog post.

Posted by: Joey Fishkin | Sep 18, 2018 4:25:19 PM

wow. I don't mean to be rude but I can't imagine caring about this. This whole project seems like counting angels on the heads of a pin in order to convince people that it somehow matters whether there are 15 or 9000. If I want to judge the quality of someone's scholarly work, I read their work. I don't just check and see where it was published.

Posted by: Rebecca Bratspies | Sep 18, 2018 12:29:37 PM

Our argument starts with the problem that rankings are used as a proxy for evaluating research. Further, all the major law-review rankings lump together PR and SE journals (Clarivate Analytics Web of Science Journal Citation Reports (JCR), CiteScore from Elsevier, Scimago and Washington and Lee). Given these two assumptions, we also show that the fact that the mean number of references in SE journals is about 2.5 times greater than the mean number of references in PR journals, together with the exclusive way in which PR journals distribute their citations, influences their relative JCR ranking in a significant way. This dilution of the ranking of PR journals is thus at least partially a product of cultural-institutional structures and not of epistemic merit. If rankings were not used to evaluate research, none of this would matter. But the fact is that rankings are used for that purpose (and increasingly so), and hence we think it is worth questioning and analyzing the hidden assumptions on which they are based.

Posted by: Oren Perez | Sep 17, 2018 1:31:24 PM

Yes, I find it strange that under the adjustment, PR psychology journals apparently have a greater impact than, say, the Yale Law Journal. One would be hard-pressed to find anyone who would take this seriously, at least to the extent we are discussing law-related publications.

I still think the answer is that SE and PR journals are different and serve different markets. This study seems to me to be akin to arguing that there is a strawberry cartel that disadvantages coconuts. Strawberries are sold in tubs of 20 or so, while no one ever buys 20 coconuts at a time. This isn’t because of a strawberry cartel. It’s because strawberries and coconuts are different and serve different purposes. We could make strawberries count “less,” which will reduce the strawberry/coconut purchase divide, but that just seems bananas.

Posted by: GH | Sep 17, 2018 1:28:08 PM

Striking that the psychology journals do so well. Perhaps their citation counts should also be subject to a multiplier (for their cartel-like behavior)?

Posted by: Prof X | Sep 17, 2018 1:06:30 PM

If SE pieces are longer and make more claims than PR pieces, then they should use and generate more cites (and, as a result, any ranking that discounts SE cites will misleadingly understate the impact of SE pieces). Put another way, we should expect a piece that is 30,000 words and makes 100 distinct claims (original and un-original) to use approximately 2.5x as many cites as a piece that is 12,000 words and makes 40 distinct claims (original and un-original).

Mean number of references is a poor criterion on which to make an adjustment like this. At a minimum, you need to also account for the length of these respective articles. But even so, that could be misleading if PR articles tend to be much more singularly focused than SE articles.

What really matters here is "cites per distinct claim" or something of that nature -- and not "cites per article."

Posted by: anon3 | Sep 17, 2018 11:56:44 AM

How do we know that law reviews “overcite”? Says who? Maybe peer review journals undercite? I don’t see how we can determine either to be the case. Yes, law reviews can be very footnote-heavy, but does that mean we should adjust downward the value placed on every law review cite? That seems...weird.

Posted by: GH | Sep 17, 2018 11:38:00 AM

It makes sense to me. Law reviews overcite, so a cite in a law review should count less.

It is striking to me to see the specialty journals jump up from their normal ranking on W&L. Of course, these types of rankings don't reflect the prestige of placement in US legal academia, but they are still interesting.

Posted by: AnonProf | Sep 17, 2018 11:27:29 AM

I am confused about what this achieves. You slashed the impact value of an SE journal cite to 0.4, compared them to PR cites with an assigned impact value of 1.0, and then assembled it all into a new ranking.

Doesn’t the fact that SE journals yield more citations suggest they have a higher impact, on balance? How does cutting the impact value of a cite to an SE journal to 0.4 and then comparing them to a PR impact score of 1.0 per cite amount to anything more than stacking the deck to achieve the result you want? This isn’t so much an “adjustment” as a revaluing of the data to “fix” what the data reveals. And more concerning, based on your overall theory, can’t we fairly accuse PR journals, which I suspect on balance cite more often to PR journals than to SE journals, of operating their own citation “cartel” within their own market? Why can’t there be effectively two markets (SE and PR) that occasionally intersect?

Posted by: Anonomon | Sep 17, 2018 11:17:17 AM
