Wednesday, September 05, 2018
Tacit Citation Cartel Between U.S. Law Reviews
In my previous post I discussed the various metrics that are being used to measure law schools and legal journals. One difficulty with these metrics is the perverse incentives they may create for authors, research institutions, and journals to use various manipulation techniques to elevate their scores. Examples of such strategies include the publication of editorials with many journal self-citations, coercive journal self-citation, and citation cartels (Phil Davis, ‘The Emergence of a Citation Cartel’ (2012)). There have been several conspicuous cases of citation cartels, which have been widely discussed in the literature. Particularly notorious was the case of several Brazilian journals that published articles containing hundreds of references to papers in each other’s journals in order to raise their impact factors (Richard Van Noorden, ‘Brazilian Citation Scheme Outed’ (2013)). In the paper we distinguish between explicit citation cartels, in which the cross-citations are the product of an explicit agreement between editors or scholars, and tacit citation cartels, in which the citation dynamics may be a product of tacit cultural and institutional habits. Both tacit and explicit citation cartels should be distinguished from epistemically driven scientific communities. Although tacit citation cartels do not carry the same immoral connotations as explicit ones, they have similar adverse effects, especially given the increasing influence of the impact factor in the evaluation of research quality. By (artificially) elevating the scores of some journals and disciplines over others, they may distort the publication choices of scientists and consequently impede the creation of ideas.
The challenge for the metrics industry, then, is to develop ways to detect and respond to both tacit and explicit citation cartels. In our paper ‘The Network of Law Reviews: Citation Cartels, Scientific Communities, and Journal Rankings’ (Modern Law Review) (with Judit Bar-Ilan, Reuven Cohen and Nir Schreiber) we examined the ranking of law journals in Journal Citation Reports (JCR), focusing on whether tacit citation cartels exist in law. We studied a sample of 90 journals included in the category of Law in the JCR: 45 U.S. student-edited (SE) and 45 peer-reviewed (PR) journals. The sample, which amounts to 60% of all legal journals in the JCR, included the most prestigious PR and SE journals (e.g., Harvard Law Review, Yale Law Journal, Columbia Law Review, Journal of Legal Studies, Oxford Journal of Legal Studies, Modern Law Review). The numbers of papers published by the SE and PR journals in our sample are nearly identical (47.8% of the articles were published in PR vs. 52.2% in SE journals). There are huge differences, however, in the total number of references and in the number of references per article: in 2015 the SE journals produced three times as many references overall as the PR journals, and the mean number of references per SE article is 2.5 times higher.
Using both statistical analysis and network analysis, we found that PR and SE journals are more inclined to cite members of their own class, forming two separate communities. You can find the citation graph here. Closer analysis revealed that this phenomenon is more pronounced in SE journals, especially generalist ones: SE generalist journals direct and receive most of their citations to and from other SE journals. This tendency reflects, we argue, a tacit cartelistic behavior, which is a product of deeply entrenched institutional and cultural structures within U.S. legal academia. Because the mean number of references in SE articles is 2.5 times higher than in articles published in PR journals, the fact that their citations are directed almost exclusively to SE journals elevates their ranking in the Journal Citation Reports in a way that distorts the structure of the ranking. In the next post I will demonstrate the implications of this finding for the journal ranking in the JCR. In further posts I will also consider some potential explanations and counter-arguments associated with this result.
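The kind of two-community split described above can be detected with standard modularity-based community detection on a weighted citation graph. The sketch below is purely illustrative: the journal names and citation counts are toy data I have invented, not the study's dataset, and the paper's actual method may differ.

```python
# Illustrative sketch: finding a community split in a citation network.
# The edges below are hypothetical (citing journal, cited journal, count)
# triples with heavy within-class citation and sparse cross-class citation.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

G = nx.DiGraph()
edges = [
    # Heavy SE <-> SE citation (toy numbers).
    ("HarvLRev", "YaleLJ", 120), ("YaleLJ", "HarvLRev", 110),
    ("ColumLRev", "HarvLRev", 90), ("HarvLRev", "ColumLRev", 85),
    # Heavy PR <-> PR citation (toy numbers).
    ("JLS", "OJLS", 40), ("OJLS", "JLS", 35),
    ("MLR", "OJLS", 30), ("OJLS", "MLR", 25),
    # Sparse cross-class citation.
    ("HarvLRev", "JLS", 5), ("JLS", "ColumLRev", 4),
]
G.add_weighted_edges_from(edges)

# Community detection works on the undirected projection; reciprocal
# edges collapse to one, which is acceptable for this rough sketch.
communities = greedy_modularity_communities(G.to_undirected(), weight="weight")
for i, c in enumerate(communities):
    print(f"community {i}: {sorted(c)}")
```

With data shaped like this, modularity maximization recovers the SE journals and PR journals as separate communities, mirroring the two-class structure the post describes.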
Posted by Oren Perez on September 5, 2018 at 01:35 AM in Article Spotlight, Information and Technology, Life of Law Schools | Permalink
Comments
So much for peer review.
Posted by: Prof X | Sep 9, 2018 9:09:20 PM
I think using the term “journal cartel” is an exaggeration. Jurists themselves are the ones who choose citations; journals do not impose any kind of specific citation. This is true for student-run journals and peer-reviewed journals alike. The difference between these types of journals is the nature of their citations: in my experience, student-run journals tend to be citation-extensive. Bashar H. Malkawi
Posted by: Bashar H. Malkawi | Sep 9, 2018 2:01:29 PM
I've written for both peer reviewed journals and law reviews. Each category of journal has its own style of papers, exploring different questions and using different methods. It makes sense that most peer reviewed journals cite other peer reviewed pieces, because that's the literature peer reviewed papers are generally advancing. Likewise with law reviews - a law review style paper can benefit from citing peer reviewed pieces, but the types of papers people tend to write for law reviews are similar to the papers other people have written in the past in law reviews.
To call this a citation cartel is a bit off-mark, I think.
Posted by: Jared Ellias | Sep 6, 2018 8:20:20 PM
OJLS, MLR, Cantab LJ and other leading PRs:
-word limit 12k-15k;
-Bibliographic footnotes actively discouraged by editors;
Major US law reviews:
-word limit 25k
-Bibliographic footnotes encouraged by editors
Citation cartel??!?! C'mon man...
Posted by: Prof-Hopeful | Sep 6, 2018 12:32:02 PM
The other commenters have already highlighted the most significant concern with the authors' argument--namely, that it posits a very particular explanation for a very general trend in the data. There seem to be alternative (and in my view more plausible) explanations for the empirical findings. For example, perhaps law review scholarship is simply more insular than other fields due to some distinguishing institutional characteristics (such as its comparatively tepid appetite for technical methodologies). Or perhaps the distinctive features of the student review system create private incentives among authors to simply insert a glut of law review citations alongside the citations that really matter, thereby skewing the distribution of citations for largely artificial reasons.
I would add to this that the history of antitrust provides an excellent case study on the risks of imputing agreement (tacit or express) to mere parallel conduct. When a group of large competitors sets similar prices, it is tempting to conclude that they must be fixing prices, at least tacitly. While that *could* be true, there is no prima facie basis for identifying this as the most likely explanation. (On the contrary, even in highly competitive markets, the relevant firms are often setting similar prices.) The firms may behave in parallel because they operate in the same market, face substantially the same incentives, and none of them wants to have its own price significantly higher than the others'. But the broader lesson here is that, when similarly situated actors make similar decisions, this does not support an inference of agreement. A more likely explanation is that all of the actors have some common incentives, and are responding to them in a common and rational way.
I would also add that most tacit (or otherwise secretive) cartels include a relatively small number of participants, unlike the one theorized here. Large cartels tend to be well-organized, with settled rules and frequent communication (and thus not tacit). For instance, the NCAA (which enjoys certain antitrust exceptions, but otherwise has the basic attributes of a cartel) has a large body of rules and penalties used to maintain parity and amateurism among its many participating schools. But how are the many student-edited journals--whose editors stick around for two years at most--orchestrating such a large scheme? This strikes me as highly implausible.
Posted by: CompetitionLawProf | Sep 5, 2018 3:54:50 PM
An underlying assumption here seems to be that journals are the entities making determinations of whether to cite SE or PR, perhaps explicitly ("cite this source"). I don't think this reflects reality, at least for US law journals. Authors make citation decisions, not journals.
My own experience as a US Law Prof is that 99% of the citation recommendations are made by me, the author, not by the journal (and I approve 100% of the citation choices). So talking in terms of "journal citations" isn't accurate. Even when the journal requests a citation for a particular point, it's almost always in the form of "citation needed" (which I then need to add myself). In the rare instance where the journal editor makes a recommendation, it's almost always based on convenience for the editor (which means these days it's more often than not a blog post or Slate article). The more plausible explanation is that what you're observing is a pattern among authors, not among journals.
In addition, as another commenter noted, a major norm in US law journals is that virtually every proposition needs a supporting citation -- even (sometimes especially) if it's commonly known. My experience is that because SE journals are much more accessible--they're easier to search using the high-powered search tools of Westlaw and Lexis; they're not hidden behind paywalls; and the norm is to permit authors to widely distribute their published articles to peers (in contrast with many PR journals, which explicitly prohibit free dissemination of the published work by authors)--the result is that I'm far more likely to go to a SE source for a common proposition than a PR one. I only cite a PR if I have it already open on my cognitive desk or if it was the only source for a particular insight or piece of empirical data.
There remains the possibility that journals are implicitly making determinations about SE/PR citations by only accepting articles that exclusively (or preferentially) cite their type. I see this as very far fetched. There are just too many other drivers of journal citations to make this very high on the list of reasons why a journal would accept an article.
In the end, my reaction is that when you're a hammer, everything looks like a nail. I'm especially concerned about the dilution of the concept of a "cartel" to the point where it becomes so diffuse that it is effectively meaningless.
Posted by: Midwest Law Prof | Sep 5, 2018 12:12:39 PM
Orin (not Oren) has provided an answer to my rant on Facebook yesterday about the nature of student editing versus peer editing. I've edited the rant slightly here to incorporate his answer. I'm not sure that this is precisely on the point of "cartels" but ... whatever. It is about citation practice and the editing thereof.
* * *
I have had the following subjected to peer review as opposed to student-edited law review placement - my own book, two book chapters, and now four articles. With one of the articles (Law, Culture & Humanities), I was assigned a peer editor AFTER the blind reviews came in and approved the piece. That editor suggested (correctly) that I had misstated something about Kant’s philosophy and I duly revised the piece to accord with his comment. For one of the book chapters - something of a festschrift - the editor of the volume politely asked if I could make the critique of the honoree slightly less in-your-face, but never suggested that I not make the substance of the critique.
I have also been asked to respond to peer review comments that are, shall we say, constructive. There the editor’s final decision to go ahead rather than “revise and resubmit” or “reject” (this occurred with the book proposal) depended on my willingness to make some changes along the lines of the peer review comments.
What I have NEVER had in the peer-review experience is a raft of editors poring over every one of my footnote citations. That appears to be an exclusive attribute of student-edited journals. Where did the practice arise? I assume Harvard where everything arose. And I assume that, back then, most of the citations were to cases and statutes. My neighbor, a Harvard biologist, confirmed that, in biology publishing, any substantive commentary occurred at the accept, revise, or reject stage, and not during the editing process.
As I think about it, the practice of cite-checking strikes me as of a piece with the general paternalism of law school - seating charts, cold-calling, taking attendance, and not relying on the professionalism of scholars to stand behind their sources.
* * *
Orin's explanation makes perfect sense in two respects. The first is about editing practices. Law review articles used to be pretty much exclusively about legal doctrine. If you were a judge's clerk, you might well "cite-check" the advocate's reference to a case or statute because the outcome ought to turn on the reliability of the reference to the case at hand. Now in a manner far more akin to ritual practice than utility, student editors cite-check everything.
The second is about citation practice in legal academic writing, and it's an empirical question. Do we cite more and use more footnotes than other disciplines? I think we do, for the same reason Orin offers, although I'm not sure as between writing and editing which is the chicken and which is the egg. That's a practice into which we've been socialized even if we, as writers, complain about how many MORE citations the student editors want. (An example occurred to me this morning. In my recent piece, I used the term "technological singularity" without citing something for its meaning. There's no way a student editor would let that pass! Would a professional or peer editor?)
Posted by: Jeff Lipshaw | Sep 5, 2018 12:00:17 PM
Am I missing something? Wouldn't this just reflect the fact that one of our academic norms is to cite other scholarship (and, indeed, often to perform literature reviews)? For most mainstream law professors, the other relevant scholarship is in mainstream (SE) law reviews. And this is a function at least in part of tenure and promotion standards.
Posted by: E | Sep 5, 2018 11:43:15 AM
So, to be diplomatic, I would say this episode is an excellent example of how detailed on-the-ground institutional knowledge is essential to any good empirical work. I would strongly urge the authors to recruit a U.S. legal scholar to their team, and to ask that person if their hypotheses make any sense (to me, as to most other commenters here, they don't, but perhaps a long conversation would reveal some sensible parts that aren't yet apparent).
Posted by: BDG | Sep 5, 2018 10:06:25 AM
It would be helpful if your subsequent posts addressed the significance of this supposed tendency if (as I think is nearly the case) impact factor lacks virtually any significance for an author's submission and placement decisions -- versus, say, more vaguely perceived prestige and exclusivity. And the value of dressing this up as tacit collusion.
Also, you might consider addressing explicitly the casual impression that this is navel gazing about statistically significant tendencies in navel gazing. I'm sure that's unfair, and that you have an answer, but it might be worth spelling it out.
Posted by: Ed | Sep 5, 2018 8:41:46 AM
I would like to know what the null hypothesis looks like for this research. That is, what would one have to have observed in order to have concluded that there was _not_ a "citation cartel"?
I was curious so I, too, glanced at the paper's list of journals and like Orin I have not heard of quite a lot of the peer reviewed journals, maybe most of them. That doesn't mean I would dismiss a piece of research appearing in one. It's just that I have no particular priors about whether the journal is any good or any kind of signal about the quality of the work.
I also have one other stating-the-obvious type of observation, which is that a lot of the peer reviewed journals appear to be more international than almost all the U.S. student-edited law reviews. Perhaps in a future post the author will discuss the question of whether the observed "cartel" actually is basically an observation of the relative parochialism of United States law professors, who tend to mostly cite (and be most interested in) domestic U.S. topics and domestic U.S.-based scholarship.
Posted by: Joey Fishkin | Sep 5, 2018 8:25:48 AM
I'm surprised to learn that anyone takes seriously a citation ranking that mixes student-edited law reviews and peer-reviewed journals. Extending Orin's original point 2, the norm at student-edited reviews is that every factual or legal statement that is not new requires a cite. That's a lot of cites. Law professors have complained about this for years but go along to get along.
Posted by: Michael Risch | Sep 5, 2018 7:26:35 AM
Thanks for the reply.
1) Oren writes: "Whether or not US law profs are familiar with the ranking does not matter for that issue because I'm sure most US law profs are familiar with the PR journals in our sample." Maybe I'm just missing something obvious, but can you say a bit about why you are sure of that, and why having heard of the journals is the key? For what it's worth, I looked over the list of PR journals in your sample, and I would say I have heard of about half of them and not heard of the other half.
2) Oren writes: "then this raises doubts about the validity of the ranking. And when this ranking starts to influence the evaluation of research then we have a problem." On an intuitive level, I certainly get that it's mixing apples and oranges to treat citations in student-edited law reviews as the same as those in other academic journals. The citation practices are really different.
Posted by: Orin Kerr | Sep 5, 2018 6:40:11 AM
Orin thanks for your comments.
On the first point, the Web of Science and the JCR are probably the most important rankings of academic journals across disciplines. It is possible that U.S. law professors are more influenced by the U.S. News ranking. However, this is not relevant to our point - we used the database of the JCR to analyze the citation network and find the pattern we report. Whether or not US law profs are familiar with the ranking does not matter for that issue because I'm sure most US law profs are familiar with the PR journals in our sample.
On the second point, this may be the case, but if you lump together two journal categories, where one category produces many more citations than the other but directs all its citations to itself (without convincing epistemic justifications in support of this exclusivity), then this raises doubts about the validity of the ranking. And when this ranking starts to influence the evaluation of research, then we have a problem.
Posted by: Oren Perez | Sep 5, 2018 6:06:58 AM
Oren, thanks for the post. Two thoughts:
1) I've been a law professor for 17 years, and an editor of a student-edited law review for a few years before that, and I have never heard of "Journal Citation Reports." I'd be surprised if other U.S. law professors have heard of it, either. If no one has heard of a citation report, how could the practices of people or journals be part of a "citation cartel" designed to influence that thing they don't know exists?
2) The citation practices of U.S student-edited law reviews are based on the citation practices of U.S. legal briefs. The practice in legal briefs is to cite absolutely everything, as many times as you say anything or reference anything. It's a citation practice from the practice of law, not academia. Isn't that the very likely reason why U.S. student-edited law reviews have more citations and citations per article than peer-reviewed journals that are based on an academic citation model?
Posted by: Orin Kerr | Sep 5, 2018 2:50:23 AM
The comments to this entry are closed.