
Monday, July 25, 2016

Google Scholar Law Review Rankings - 2016

Google has published its 2016 Google Scholar Metrics, just in time for the fall law review submissions angsting season to begin (I see that in response to folks already calling for a new Angsting Thread, Sarah has just posted the Fall 2016 Angsting Thread slightly ahead of schedule). I've placed a table with the 2016 Google Scholar Rankings for flagship/general law reviews below the break (with comparisons to the 2015 ranking). I started tracking these Google Rankings as part of the Meta-Ranking of Flagship Law Reviews that I first proposed here at Prawfs in April (combining USN, W&L, and Google scores into a single ranking). And, as both Google and W&L have updated their rankings/metrics since that time, I'm also working on an updated meta-ranking in time for the opening of the fall submissions period (just for fun).

I realize most people probably don't make submission decisions based on the Google Rankings, and the methodology does have its limitations. One startling change in the 2016 data is that the North Carolina Law Review, ranked #21 in 2015, doesn't show up in Google's metrics at all this year - perhaps its article repository no longer meets Google's inclusion criteria. Still, I think Google provides an interesting metric for measuring law journal impact, alongside the W&L rankings, particularly for someone like me who publishes in both law reviews and peer-reviewed journals in other disciplines. I like that Google Metrics can give some idea of how a particular range of law reviews measures up to a social science journal - and vice versa - in terms of scholarly impact. The W&L ranking doesn't provide much of that information, since it is generally limited to law reviews; the US News rankings don't apply to journals at all; and Thomson Reuters' Journal Citation Reports don't have very good coverage of legal journals.

However, with Google's metrics I can see, for example, how the social science journals I've published in (or am thinking about submitting to) stack up against law reviews. I can see that Government Information Quarterly has a slightly higher average Google Metrics score (63; h5-index of 51, h5-median of 75) than the Harvard Law Review (61; 40/82), that The Information Society (26.5; 21/32) ties with the UC Davis Law Review (26.5; 20/33) and the Ohio State Law Journal (26.5; 18/35), and that Surveillance & Society (21; 18/24) ties with the Houston Law Review (21; 16/26). I think this can be helpful for gauging where to submit research that crosses disciplinary boundaries, though I see how it might not be so useful for someone who only wants (or needs) to publish in law journals. I'm curious whether any readers find the Google metrics useful for comparing law and non-law journals, or for thinking about (law) journal submissions generally.

2016 Google Scholar Law Review Rankings

Includes only flagship/general law reviews at ABA-accredited schools (I think I've captured almost all of these, but let me know if I've missed any). Rankings are calculated based on the average of Google's two scores (h5-index and h5-median), as proposed here by Robert Anderson. The final column shows how much a journal's rank has changed relative to last year's ranking: 0 indicates no change, a positive number indicates the journal moved up in 2016, and a negative number indicates it dropped in 2016.
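For anyone who wants to reproduce the scoring, here is a minimal sketch (in Python) of the calculation just described. The handful of journals included are only sample rows copied from the table below, not the full dataset:

```python
# Minimal sketch of the scoring behind the table: average of h5-index and
# h5-median, competition-style ranking, and the rank-change column.
journals_2016 = {
    # journal: (h5-index, h5-median) -- sample rows only
    "Harvard Law Review": (40, 82),
    "The Yale Law Journal": (41, 61),
    "Columbia Law Review": (36, 61),
    "U. Pennsylvania Law Review": (35, 61),
    "Stanford Law Review": (33, 54),
}
ranks_2015 = {  # last year's ranks, used for the final column
    "Harvard Law Review": 1,
    "The Yale Law Journal": 2,
    "Columbia Law Review": 3,
    "U. Pennsylvania Law Review": 5,
    "Stanford Law Review": 4,
}

# Average score = mean of the two Google numbers (Anderson's proposal).
scores = {j: (h5 + h5m) / 2 for j, (h5, h5m) in journals_2016.items()}

# Ties share a rank and the next rank is skipped (as in the table: 6, 6, 8, ...).
ranks_2016 = {j: 1 + sum(1 for k in scores if scores[k] > scores[j]) for j in scores}

for j in sorted(scores, key=scores.get, reverse=True):
    change = ranks_2015[j] - ranks_2016[j]  # positive = moved up in 2016
    print(f"{j}: rank {ranks_2016[j]}, score {scores[j]}, change {change:+d}")
```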

Journal Rank (2016) h5-index h5-median Average Score Rank (2015) Rank Change
Harvard Law Review  1 40 82 61 1 0
The Yale Law Journal  2 41 61 51 2 0
Columbia Law Review  3 36 61 48.5 3 0
U. Pennsylvania Law Review 4 35 61 48 5 1
Stanford Law Review  5 33 54 43.5 4 -1
The Georgetown Law Journal  6 33 52 42.5 6 0
Texas Law Review  6 35 50 42.5 8 2
New York U. Law Review 8 28 53 40.5 11 3
Cornell Law Review 8 31 50 40.5 13 5
California Law Review  10 31 46 38.5 9 -1
Virginia Law Review  10 32 45 38.5 10 0
Michigan Law Review  12 30 44 37 6 -6
Minnesota Law Review  13 29 44 36.5 12 -1
U. Chicago Law Review 14 29 43 36 16 2
UCLA Law Review  14 29 43 36 15 1
Vanderbilt Law Review  16 30 36 33 16 0
Fordham Law Review  16 28 38 33 22 6
Notre Dame Law Review  18 26 39 32.5 18 0
Indiana Law Journal  18 26 39 32.5 26 8
Duke Law Journal  20 26 38 32 13 -7
Northwestern U. Law Review 20 26 38 32 22 2
Boston U. Law Review  20 28 36 32 26 6
William and Mary Law Review  20 26 38 32 19 -1
Iowa Law Review  24 27 36 31.5 20 -4
Boston College Law Review  25 25 35 30 26 1
Florida Law Review  25 22 38 30 22 -3
The George Washington L. Rev. 27 25 34 29.5 31 4
Emory Law Journal 28 19 39 29 30 2
U. Illinois Law Review 29 22 34 28 29 0
Hastings Law Journal  29 20 36 28 32 3
U.C. Davis Law Review  31 20 33 26.5 43 12
Ohio State Law Journal 31 18 35 26.5 43 12
Arizona Law Review  33 19 33 26 35 2
Maryland Law Review  33 22 30 26 45 12
Southern California Law Review 35 22 29 25.5 37 2
Washington and Lee Law Review 35 21 30 25.5 47 12
Seattle U. Law Review 37 18 32 25 38 1
Cardozo Law Review  38 21 28 24.5 33 -5
Washington U. Law Review 39 20 28 24 35 -4
Wake Forest Law Review  39 18 30 24 38 -1
Wisconsin Law Review 41 20 27 23.5 22 -19
Washington Law Review  41 19 28 23.5 49 8
American U. Law Review  43 19 27 23 40 -3
Connecticut Law Review  44 19 25 22 40 -4
George Mason Law Review  45 18 25 21.5 49 4
Houston Law Review 46 16 26 21 58 12
Alabama Law Review 47 17 24 20.5 49 2
Seton Hall Law Review 47 14 27 20.5 52 5
South Carolina Law Review 47 16 25 20.5 68 21
Brigham Young U. Law Review 50 17 23 20 52 2
Penn State Law Review 50 17 23 20 58 8
Colorado Law Rev.  52 15 24 19.5 47 -5
Pepperdine Law Review 52 15 24 19.5 52 0
Oregon Law Review 52 14 25 19.5 72 20
UC Irvine L. Rev. 55 16 22 19 84 29
Lewis & Clark Law Review  55 16 22 19 33 -22
Santa Clara Law Review 55 17 21 19 64 9
Howard Law Journal 55 14 24 19 55 0
New York Law School Law Review 55 15 23 19 58 3
Georgia Law Review 60 14 23 18.5 55 -5
Tulane Law Review  60 14 23 18.5 64 4
Arizona State L. Journal 62 16 20 18 93 31
U. Miami Law Review 62 14 22 18 77 15
Case Western Reserve Law Review 62 15 21 18 81 19
Georgia State U. Law Review 62 15 21 18 72 10
U. Kansas Law Review 66 13 22 17.5 68 2
U. Richmond Law Review 66 14 21 17.5 77 11
Utah Law Review  68 14 20 17 72 4
Temple Law Review 68 14 20 17 95 27
San Diego Law Review 68 14 20 17 86 18
Loyola U. Chicago Law Journal 68 16 18 17 81 13
Marquette Law Review 68 14 20 17 95 27
Buffalo Law Review 73 13 20 16.5 58 -15
Nevada Law Journal 73 13 20 16.5 86 13
Louisiana Law Review 73 13 20 16.5 64 -9
Mitchell Hamline Law Review 73 14 19 16.5 95 22
Florida State U. Law Review 77 14 18 16 68 -9
Loyola of Los Angeles Law Review 77 11 21 16 46 -31
Missouri Law Review 77 12 20 16 55 -22
DePaul Law Review 77 14 18 16 81 4
Brooklyn Law Review 81 14 17 15.5 77 -4
U. Cincinnati Law Review 81 14 17 15.5 68 -13
Chicago-Kent Law Review 81 13 18 15.5 58 -23
Michigan State Law Review 81 14 17 15.5 118 37
Mississippi Law Journal 81 11 20 15.5 95 14
New England Law Review 81 13 18 15.5 95 14
Pace Law Review 87 11 19 15 86 -1
Washburn Law Journal 87 11 19 15 84 -3
Duquesne Law Review 87 11 19 15 95 8
SMU Law Review  90 10 19 14.5 95 5
Saint Louis U. Law Journal 90 12 17 14.5 95 5
Vermont Law Review 90 12 17 14.5 40 -50
Capital U. Law Review 90 13 16 14.5 113 23
Denver U. Law Review 94 12 16 14 64 -30
Indiana Law Review 94 12 16 14 72 -22
Nebraska Law Review 94 12 16 14 113 19
Hofstra Law Review 94 12 16 14 104 10
West Virginia Law Review 94 12 16 14 123 29
Albany Law Review 94 12 16 14 58 -36
Creighton Law Review 94 11 17 14 86 -8
U. St. Thomas Law Journal 94 12 16 14 113 19
Tennessee Law Review 102 11 16 13.5 93 -9
Texas Tech Law Review 102 12 15 13.5 104 2
Suffolk U. Law Review 102 12 15 13.5 109 7
Valparaiso U. Law Review 102 12 15 13.5 122 20
Catholic U. Law Review 106 10 16 13 113 7
U. Pacific Law Review 106 10 16 13 118 12
Southwestern Law Review 106 10 16 13 109 3
Villanova Law Review 109 11 14 12.5 86 -23
UMKC Law Review 109 10 15 12.5 86 -23
Mercer Law Review 109 10 15 12.5 126 17
Cleveland State Law Review 109 11 14 12.5 123 14
John Marshall Law Review 109 9 16 12.5 118 9
Touro Law Review 109 10 15 12.5 128 19
Rutgers U. Law Review 115 10 14 12 86 -29
Akron Law Review 115 11 13 12 72 -43
Drake Law Review 115 10 14 12 95 -20
Kentucky Law Journal 118 9 14 11.5 118 0
Syracuse Law Review 118 9 14 11.5 104 -14
Maine Law Review 118 10 13 11.5 104 -14
Quinnipiac Law Review 118 9 14 11.5   No 2015 Rank 
Idaho Law Review 118 8 15 11.5 104 -14
Wyoming Law Review 118 9 14 11.5 128 10
Chapman Law Review 118 9 14 11.5 109 -9
Ohio Northern U. Law Review 118 8 15 11.5 113 -5
Southern Illinois U. Law Journal 126 8 14 11 131 5
Northern Kentucky Law Review 126 9 13 11 131 5
Oklahoma Law Review 128 10 11 10.5 137 9
U. Toledo Law Review 128 10 11 10.5 123 -5
Arkansas Law Review 130 9 11 10 77 -53
Loyola Law Review 130 9 11 10 131 1
U. Arkansas Little Rock Law Review 130 9 11 10 142 12
St. John’s Law Review 133 8 11 9.5 137 4
The Wayne Law Review 133 8 11 9.5 142 9
South Dakota Law Review 133 7 12 9.5 135 2
U. Memphis Law Review 136 8 10 9 126 -10
Campbell Law Review 136 7 11 9 131 -5
St. Mary's Law Journal 136 8 10 9   No 2015 Rank 
Roger Williams U. Law Review 136 8 10 9 142 6
Baylor Law Review 140 7 10 8.5 147 7
Willamette Law Review 140 8 9 8.5   No 2015 Rank 
Widener Law Journal 140 8 9 8.5 137 -3
Arizona Summit [Phoenix] Law Review 140 7 10 8.5 147 7
FIU Law Review 144 7 8 7.5 147 3
Tulsa Law Review 145 6 8 7 142 -3
Montana Law Review 145 6 8 7 150 5
North Dakota Law Review 145 5 9 7 153 8
Stetson Law Review 148 5 8 6.5 137 -11
Texas A&M Law Review 148 6 7 6.5 137 -11
South Texas Law Review 148 6 7 6.5 150 2
Thurgood Marshall Law Review 148 6 7 6.5 150 2
Oklahoma City U. Law Review 152 5 7 6 109 -43
U. Hawaii Law Review 153 5 6 5.5 153 0
North Carolina Law Review          21 No 2016 Rank 
Mississippi College Law Review         128 No 2016 Rank 
U. Louisville Law Review         135 No 2016 Rank 
Nova Law Review         142 No 2016 Rank 
U. Detroit Mercy Law Review         153 No 2016 Rank 
U. Pittsburgh Law Review           Not Ranked
U. San Francisco Law Review           Not Ranked
New Mexico Law Review           Not Ranked
Gonzaga Law Review           Not Ranked
Drexel Law Review           Not Ranked
U. Baltimore Law Review           Not Ranked
Northeastern U. Law Journal           Not Ranked
U. New Hampshire Law Review           Not Ranked
Charleston Law Review           Not Ranked
CUNY Law Review           Not Ranked
Cumberland Law Review           Not Ranked
U. Dayton Law Review           Not Ranked
California Western Law Review           Not Ranked
St. Thomas Law Review           Not Ranked
Widener Law Review           Not Ranked
Northern Illinois U. Law Review           Not Ranked
Regent U. Law Review           Not Ranked
Western New England Law Review           Not Ranked
Golden Gate U. Law Review           Not Ranked
Florida Coastal Law Review           Not Ranked
Barry Law Review           Not Ranked
Whittier Law Review           Not Ranked
Thomas Jefferson Law Review           Not Ranked
John Marshall Law Journal           Not Ranked
Southern U. Law Review           Not Ranked
Elon Law Review           Not Ranked
North Carolina Central Law Review           Not Ranked
Appalachian Journal of Law           Not Ranked
U. District of Columbia Law Review           Not Ranked
Western State U. Law Review           Not Ranked
Ave Maria Law Review           Not Ranked
Thomas M. Cooley Law Review           Not Ranked
Liberty U. Law Review           Not Ranked
Florida A & M U. Law Review           Not Ranked
Faulkner Law Review           Not Ranked
Charlotte Law Review           Not Ranked

Posted by Bryce C. Newell on July 25, 2016 at 12:00 PM in Law Review Review | Permalink

Comments

Anonprof's response was just called to my attention. First, anonprof: grow up. "Names" have descriptive and referential content; that's why we use them (so do you, e.g., accusing me of "statistical malpractice" [I confess I don't understand what you're talking about in that instance]). If it's really too upsetting that I referred to some partisans of the Google Scholar nonsense ranking as "naïve enthusiasts," then you must be an extremely fragile person. All this is especially surprising given that you deem my reasons for my substantive point "clearly correct." (I did not, however, criticize Google Scholar for picking up self-citations. As in our earlier exchange on this thread, you should read more carefully, both the Sisk methodology and my criticisms of Google Scholar. I'm sorry you didn't take me up on the offer to correspond via e-mail about the Sisk rankings.)

If Prof. Newell tries to correct for the volume-of-publication problem, I'll be interested to see the results. It won't be as dramatic as with the philosophy journals, I suspect. Prof. Newell should write to Prof. Devitt for an explanation of why she carried out the adjustment as she did.

Posted by: Brian | Aug 8, 2016 9:06:48 AM

Prof. Newell, I don't think anyone disputes that Google Scholar Metrics are noisy. Prof. Leiter's substantive points in his post are clearly correct and are points made by many others about Google Scholar and other citation methods, including on this thread. For example, Prof. Leiter points out that Google Scholar counts self-citations. Clearly this is a drawback, but (insofar as I understand Prof. Leiter to argue that his own methodology is superior) it doesn't distinguish Google Scholar from Leiter/Sisk. The latter counts any reference to a faculty member's name, including articles they have written (whether or not they contain self-citation) or thanks in vanity footnotes. This surely creates more noise than self-citation, as the set of "erroneous" cites is much larger.

More generally, Prof. Leiter's tone is more appropriate to a schoolyard bully than to a scholarly blog. That Google Scholar is noisy and does not answer every question we would want to know the answer to is obvious. It does not follow that the Google Scholar rankings are "nonsense," or that those who think they have value are "naive." Many of our colleagues in law quite reasonably do not work with statistical data. It therefore falls to those of us who do to call out egregious misstatements about the kinds of claims one can, and which authors and institutions do, make based on available data. Prof. Leiter's repeated suggestion that Google Scholar is measuring average article impact is approaching the level of statistical malpractice. As you say, different data are useful for answering different questions. It would be sufficient for Prof. Leiter simply to note that Google Scholar does not answer the question that he is interested in.

Posted by: anonprof | Aug 4, 2016 1:56:46 PM

First, I want to note that I do think Brian's new comments about noise in the Google metrics are quite appropriate - and they do demonstrate a limitation of using the Google numbers.

Second, I also agree with anonprof that there are at least two levels of measurement at play here (impact by journals as a whole vs. impact of a journal on a per-article or per-volume basis, etc.) that, to me, both appear to hold some value - but, of course, the usefulness of one metric over another depends on what it is you want to measure (and on the inherent limitations of the method). It appears Brian wants us to focus on weighting by publication frequency, rejecting any value in measuring gross journal impact. In his new post, he also points back to an earlier Google Scholar ranking of philosophy journals by Kate Devitt "adjusted for volume of publication". In her own words [with my own annotations in brackets], Kate "took the total number of citable documents 2010-2012 [from SJR]" per journal, and then "divided by 3 [per year, I assume], then divided this number by 5 to weight it against the Google Scholar data." Brian called this a "reasonable way" to adjust for publication volume. We could certainly do this with the Google law journal data as well to make Brian happier with this whole exercise, but I'm still a little confused about part of Kate's methodology.

Can anyone tell me why Kate divided by 5 to weight against the GS data? Is this a hypothetical number of volumes per year? Why not weight on a per-article basis (for example, by finding the average publication volume across the population of journals and then normalizing the Google Scholar h5-index and h5-median for each journal against that average)? Would averaging these normalized scores (as I did in the ranking above, but without normalization) then produce "reasonable" results? Or should we prioritize one of the metrics (h5-median or h5-index) over the other, or normalize the data differently?
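For concreteness, here's a rough sketch (in Python) of the per-article normalization I have in mind. The numbers are placeholders, not real publication counts:

```python
# Rough sketch of normalizing Google's scores by publication volume.
# All numbers here are hypothetical placeholders, not real journal data.
journals = {
    # journal: (h5-index, h5-median, articles published per year)
    "Journal A": (40, 82, 90),
    "Journal B": (26, 39, 45),
    "Journal C": (16, 26, 30),
}

avg_volume = sum(v for _, _, v in journals.values()) / len(journals)

weighted = {}
for name, (h5, h5_median, volume) in journals.items():
    weight = avg_volume / volume  # >1 boosts low-volume journals, <1 discounts high-volume ones
    weighted[name] = ((h5 * weight) + (h5_median * weight)) / 2

for name, score in sorted(weighted.items(), key=lambda kv: -kv[1]):
    print(f"{name}: volume-weighted average score {score:.1f}")
```

Whether that is the "reasonable" kind of adjustment Brian has in mind is exactly what I'm asking about.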

I plan to produce a weighted ranking out of pure curiosity about what will happen, and I'm open to methodological suggestions...

Posted by: Bryce C. Newell | Aug 4, 2016 11:37:01 AM

Prof. Leiter now has another post up about Google rankings on his blog. He writes: "The other day I remarked on what should have been obvious, namely, that Google Scholar rankings of law reviews by impact are nonsense, providing prospective authors with no meaningful information about the relative impact of publishing an article in comparable law reviews." He goes on to call those of us who think Google Scholar rankings have value various names. I just pause to note that Google Scholar does not purport to measure what Prof. Leiter says it measures. Google Scholar's impact rankings operate at the level of the journal, not the average impact of an article published in the journal. Although Prof. Leiter appears to feel quite passionately that the former statistic has no value, my sense is that many people (most certainly including anyone involved in publishing journals) do not agree. Obviously, Prof. Leiter is correct that the second number is also interesting, and to many of us might be more interesting. I am uncertain, though, why Prof. Leiter feels the need to call those who favor more data, with appropriate caveats, names.

Posted by: anonprof | Aug 4, 2016 11:06:30 AM

I wish I had written it as well as anonprof 11:36! Very well explained.

Posted by: Robert Anderson | Jul 28, 2016 1:49:04 PM

Prof. Anderson's point is that the h5 index, as an empirical matter, does not in fact move very much based on the number of articles published. Prof. Leiter's critique that Google Metrics are influenced by output is therefore factually mistaken. The reason has to do with the way in which h5 indexes are constructed. From Google, "The h-index of a publication is the largest number h such that at least h articles in that publication were cited at least h times each. For example, a publication with five articles cited by, respectively, 17, 9, 6, 3, and 2, has the h-index of 3." The h5 index (which is just the h index for articles published by a journal in the last five years) is the core number used to calculate Google Scholar Metrics. Volume can affect this number, but can't do so to any great extent.

For example, adding one additional article cited 100 times to the hypothetical set of articles above -- so that the set was now 6 articles cited 100, 17, 9, 6, 3, and 2 -- would only cause the h-index to rise from 3 to 4 (there are now 4 articles cited at least 4 times). Individually highly cited articles thus do not significantly affect the h-index. Similarly, lots of articles with small numbers of citations will not really affect an h-index. Imagine a journal that publishes 5 articles a year, so that in the 5-year period Google uses it has 25 articles. Imagine that the set of 25 articles has an h-index of 15, meaning that at least 15 of the articles published in the last five years are cited at least 15 times each. Imagine that the journal chooses to try to increase its h-index by doubling its publications to 10 per year, or 50 in five years. Just as with admitting a law school class, we typically think increasing quantity results in decreasing quality. To keep it simple, imagine that of the extra 25 articles, 20 are cited 15 times or fewer. Those additional 20 articles have no effect on the h-index at all. There is still not a 16th article with 16 citations, which is what is necessary to move the h-index up to 16. The other 5 articles could move the index up, but in no event could they move it above 20, no matter how highly cited they are. Depending on the distribution of citations among those articles with more than 15 citations, the h-index might not even move that much.
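For anyone who wants to check these toy examples, here is a minimal sketch (in Python) of the h-index calculation exactly as Google defines it above; the citation counts are just the hypothetical ones used in this comment:

```python
def h_index(citations):
    """Largest h such that at least h articles have at least h citations each."""
    h = 0
    for i, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= i:
            h = i
        else:
            break
    return h

print(h_index([17, 9, 6, 3, 2]))       # 3 -- Google's own example
print(h_index([100, 17, 9, 6, 3, 2]))  # 4 -- one very highly cited addition adds only 1
print(h_index([17, 9, 6, 4, 3, 2]))    # 4 -- a sixth article with just 4 cites also adds 1
```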

As Prof. Anderson notes on TaxProf Blog, this feature of the h-index is the reason it was created, and it is widely understood in the hard and social sciences. The influence of Prof. Leiter's methodology--which is highly susceptible to the level of output because it is just total citation counts--on the use of citation studies in law unfortunately means that the h-index is not yet widely used in law. As Prof. Leiter notes in his exchange with Prof. Anderson, there are good reasons to want to use aggregate citation counts when the unit of analysis is a faculty member. As Prof. Leiter writes, if Cass Sunstein is more influential partly because he is brilliant and partly because he is productive, that is surely something we care about. Less persuasively, Prof. Leiter argues that "In the case of faculty citation counts, no one is interested in the average impact of a faculty member's article." As the various responses to Prof. Leiter's critique suggest, this very strong claim is incorrect. Rather, different data are useful for answering different kinds of questions, and people are interested in both the data provided by the Leiter/Sisk studies and those provided by rankings based on the h-index.

A final point about h-indexes as applied to faculty members, rather than journals. When a faculty member has few articles with significant numbers of citations, the h index does indeed reward productivity. For example, in the original 5 article set above -- 17, 9, 6, 3, and 2 -- a sixth article cited only 4 times raises the h index to 4. If that sixth article gets a 5th cite, a seventh article cited only 5 times would then raise the h index to 5, and so forth. But once the h index gets high (as with senior faculty or journals), the returns to additional productivity decline. When used on faculty members, as Google Scholar does on its author pages, the h-index thus strikes a balance between rewarding productivity and the quality of output -- the h-index puts less weight on productivity as the overall quality of the work (as measured by the h index) increases.

Posted by: anonprof | Jul 28, 2016 11:36:04 AM

As I understand it, Leiter's objection was that the Google study is useless because the results favor journals that publish more articles. Anderson's July 27, 9:01:45 reply is that the h-index is in fact adjusted for volume of output.

But I'm a philosopher, not a mathematician, so perhaps someone can correct me if I'm misunderstanding.

Posted by: Anon | Jul 28, 2016 10:49:39 AM

I don't understand how anything in 10:04 is responsive to the original objections. Can someone translate it?

Posted by: anonagain | Jul 28, 2016 10:34:04 AM

I found the following reply of Robert Anderson to Brian Leiter to be convincing, demonstrating the value of the Google Scholar law review rankings. I paste it from Taxprof, http://taxprof.typepad.com/taxprof_blog/2016/07/are-the-google-law-review-rankings-worthless.html#comments:

"Professor Leiter, my point is exactly that the h-index numbers effectively *are* adjusted for the article volume output. When I say that the volume does not significantly affect the h-index, that is equivalent to saying that the h-index is effectively adjusted for the volume of article output. It's just that it's not adjusted by simply dividing one number by the other. The ranking produced by the h5-index is very highly correlated with the ranking produced by cites per article (impact factor), and the ranking produced by the h5-median is virtually identical to the ranking produced by cites per article. So if you think cites per article (impact factor) is not worthless, then you must think the h-index is not worthless, and you certainly must think the h5-median is not worthless. For these purposes, they are effectively the same thing. There is a vast literature on this that readers can easily find (on Google Scholar), if they are interested.

Posted by: Robert Anderson | Jul 27, 2016 9:01:45 PM"

Posted by: Anon | Jul 28, 2016 10:04:40 AM

anonprof, e-mail me at [email protected] if you want to pursue this.

Posted by: Brian | Jul 27, 2016 5:55:10 PM

Prof. Leiter: Can you clarify which of my remarks are inapt? The 2015 Sisk study uses "total citations in law reviews over the last five years" for each faculty member. There is no effort to control for output. The unit of analysis is the faculty member's body of work, no matter how large, just as Google Scholar's unit of analysis is a journal's total set of publications, no matter how large. To be sure, the Sisk study contains a very thoughtful defense of both the use of citations to measure impact (as one among many metrics to which we might look), as well as a defense of this particular methodology, both drawing on your own prior and thoughtful work on the subject. But I understand Sisk to concede the basic point I make and that I think you make about the Google Scholar ranks: that "[t]o describe this natural phenomenon [that those senior scholars who have published more are cited more] as a 'bias against younger scholars,' however, strikes us as a mistaken characterization of the tendency of scholars with greater experience and a larger body of published work to have a greater influence in the legal academy."

You could substitute journals for scholars in the quote from Sisk and it would be equally true: those journals that publish more articles receive more citations, but to call that a bias against journals with fewer articles is a mistaken characterization of the tendency of journals with a larger body of published work to have a greater influence in the legal academy. What the Leiter/Sisk studies and the Google Scholar metrics measure is aggregate impact at the level of a faculty member/journal, and they are useful for that purpose. If, though, we are interested in the average quality of articles produced by X journal or Y faculty member, we need to control for output. Neither Sisk/Leiter nor Google Scholar does this.

If I have misunderstood the Leiter/Sisk methodology, I would be grateful if you could point me to the sections of the article that explain either 1) how the study controls for the amount of scholarly output in the way you say Google Scholar should, or 2) Sisk's argument (or yours) as to why we should infer anything about the quality of articles produced by a faculty member (rather than the impact of a faculty member's corpus) from total citation counts that do not control for output.

Posted by: anonprof | Jul 27, 2016 4:35:01 PM

Wow, I really didn't expect this post to turn into such a vigorous discussion. I appreciate the varying responses to this exercise in applying Google's ranking to law reviews. I do agree (and noted in the original post) that Google's methodology has some limitations, and I think that Brian's concerns should be carefully considered. I wouldn't advocate that anyone use this as the perfect metric. However, I also agree with Robert and anonprof that the ranking is not meaningless or "bullsh*t", as Brian suggests it is, but that it can provide some potentially useful information (if read with an understanding of the methodology behind it and its inherent limitations, of course). I can't imagine it's any less defensible as a metric of law journal impact than the US News peer reputation ranking of the journals' host schools.

I also appreciate Ken's insightful point. Of course, even with a project that could be relevant to law or another discipline, the benefit of peer review by colleagues who understand your methodology is very important. And, of course, a paper written for a social science journal should usually also take a very different form than one written for a law review, as there are vastly different expectations in terms of format, length, and style.

But, back to the original point: Brian's concerns have gotten me interested in figuring out how much the number of articles published actually biases Google rank, and in running some analyses of this and related questions. I haven't had time to do much with this yet, but I hope to do more soon. As a very rough entry point into this analysis, I've calculated the number of pieces published in each of the flagship journals for the 50 law schools with the highest peer reputation scores in US News's rankings (averaged over the past 8 years of US News data). This is obviously noisy, as I have not taken the type of publication (editorial, comment/note, essay, article, etc.) into account. (I think this is particularly noisy with regard to HLR, which Westlaw separates into many entries based on the case reviews, book reviews, and comments that HLR publishes. In fact, HLR published 643 pieces in 2011-2015, or about 178 pieces more than the next journal (Fordham), which itself has a number quite a bit higher than the norm.) I then computed an average number of published pieces per year for 2011-2015 (Google's frame), and plugged this into SPSS. Running a quick Pearson's correlation test, we see that:

Articles per year x the 2016 Google Scholar Rank gives us a significant Pearson correlation (at the 0.05 level; two-tailed) of -0.294. This suggests a fairly weak negative correlation between the two variables, meaning that as the number of published pieces goes up there is a statistically significant but fairly weak improvement in Google rank. This is what Brian's argument would suggest. However, if we remove HLR (which is an obvious outlier when we plot the data visually), the correlation is no longer significant at the 0.05 level and, in fact, the correlation statistic becomes positive, at about 0.214. This suggests that, without the Harvard Law Review in the mix (and the high number of citations to HLR articles probably has a number of other inputs as well, besides the sheer number of published pieces), we do not find any statistical support for the claim that the number of published pieces correlates with an improved position in the Google rankings. (I've tried to note some of the limitations of my analysis above, however, so this shouldn't be taken as conclusive evidence at this point.)
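For anyone who wants to re-run this check without SPSS, the equivalent test is a few lines of Python; the numbers below are placeholders rather than my actual per-journal counts:

```python
# Sketch of the correlation test described above. The inputs are placeholders;
# the real data are articles-per-year and 2016 Google rank for ~50 journals.
from scipy.stats import pearsonr

articles_per_year = [128.6, 93.0, 60.2, 55.4, 48.8]  # hypothetical values
google_rank_2016 = [1, 16, 3, 24, 8]                 # hypothetical values

r, p = pearsonr(articles_per_year, google_rank_2016)
print(f"Pearson r = {r:.3f}, two-tailed p = {p:.3f}")

# Dropping an outlier (e.g., HLR) just means re-running on the reduced lists:
r2, p2 = pearsonr(articles_per_year[1:], google_rank_2016[1:])
print(f"Without the outlier: r = {r2:.3f}, p = {p2:.3f}")
```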

I also hope to present a Google Rank normalized to journal articles published, but to do this (and the above analysis correctly), I will need to control for type of published pieces, not just the gross number pulled from Westlaw, which may prove to be too much work to be worth it at the moment - unless anyone else wants to jump in and help run some analysis! If so, let me know, and I'll be happy to share the data files.

Posted by: Bryce C. Newell | Jul 27, 2016 4:30:36 PM

Oh come now, "another anon," you can do better than an actual ad hominem, right? Prof. Anderson at least tried to address the reasoning.

anonprof: please take a closer look at the Sisk methodology; many of your claims are inapt with respect to what was actually done.

Posted by: Brian | Jul 27, 2016 2:33:29 PM

Leiter trashes it because Chicago doesn't perform as well as he'd like. That much should be obvious by now.

Posted by: another_anon | Jul 27, 2016 12:48:09 PM

Whether one should control for the number of articles published depends on the question one is answering. Faculty are often interested in the average quality of articles published in a journal, because they are interested in the signal sent by placing in that journal. To assess quality in this way, you need to control for the number of articles, as Professor Leiter suggests. Likewise, as anon notes, using total citations to measure the quality of articles published by individual faculty members clearly also requires controlling at least for the number of years teaching. Even better would be controlling for the number of articles published. Prof. Leiter's studies only look at citations within the last five years, but that controls for something different. It gives us a sense of the current relevance of the entire body of a faculty member's work, but tells us nothing about the average quality of the articles published by that faculty member.

Publishers of journals, on the other hand, are interested in the overall impact of their journals. In general, from the gravitational pull of celestial bodies to the overall impact of law faculties, things that are bigger, all else equal, have more influence. In the case of journals, they can provide wider coverage of topics and points of view, participate in more debates, and publish more articles that are likely to be influential. Senior faculty are likely more influential for the same reason. They have had more time to write about more topics and more opportunities to submit to the law review lottery and win the HLR (or other top law review) golden ticket that, on Prof. Leiter's view, leads to citations irrespective of merit (I take this to be his point about how anything published in HLR will be cited). Deans might therefore prefer senior hires to junior ones for the same reason publishers would, all else equal, want to publish more articles rather than fewer.

It seems to me that both of these data sets provide useful information, albeit to different people in response to different questions. As anyone who has ever worked with data knows, you have to ask whether the data you have actually answer the question you are asking. Unfortunately, sometimes they seem to at first glance, but in fact do not. Aggregate citation data, without controlling for output (be it of faculty members or journals), tell us about the influence of the entire body of work, but tell us nothing about average quality. Like anon, I am puzzled by Prof. Leiter's decision to trash the Google Scholar rankings, which he has done with some profanity on his own blog. By his reasoning, his own citation studies are "BS," to use his terminology for the Google Scholar citations. I personally think his studies are quite useful for the same reasons I think the Google Scholar citations are useful. They don't answer all questions, but careful thinkers are aware of their limitations and can be appropriately skeptical about using only one metric, or about using individual metrics to answer inappropriate questions.

Posted by: anonprof | Jul 27, 2016 12:44:13 PM

Perhaps Prof. Leiter's "Most Cited Faculty" rankings should also be updated to control for the number of articles a scholar publishes to eliminate the bias caused by indiscriminate publishing.

Posted by: anon | Jul 27, 2016 10:09:24 AM

In reply to Prof. Anderson:

Life is short, so I'm going to keep this short:

1. If a journal publishes more articles, it has more chances of publishing a highly cited article (this is especially true with law reviews, where almost anything published, e.g., in the Harvard Law Review will be read and cited subsequently). That drives up the h5-index.
2. That latter fact will affect the h5-median.

This ain't rocket science! Everyone in academic philosophy now knows this, everyone in academic law should figure it out too!

Posted by: Brian | Jul 27, 2016 8:33:27 AM

If you care about the credibility of your social science work, you need to go to a good peer-reviewed journal. Student law review editors can be damn good at editing law. Experimental design, statistics, other aspects of data analysis, drawing of inferences, etc., not so much.
If you are a faculty member with a colleague up for tenure who has published social science work, you need an outside reviewer who can review all the relevant social science stuff.
If you are a member of a faculty which demands publications in law reviews even for social science work, you should get the faculty to change its policy.

Posted by: Ken Gallant | Jul 26, 2016 10:50:54 PM

Like other rankings, the Google Scholar ranking has limitations but it certainly is not "meaningless." My response here: http://witnesseth.typepad.com/blog/2016/07/google-scholar-releases-2016-journal-rankings-controversy-ensues.html

Posted by: Robert Anderson | Jul 26, 2016 10:48:40 PM

This is neither helpful nor useful; it's meaningless because there is no control for the number of issues or pages published by each journal. Journals that churn out more pages fare better than their peers that do not.

Posted by: Brian | Jul 26, 2016 1:03:47 PM

Thanks. This is quite helpful and useful, when used in concert with the U.S. News peer assessments and the Wash. & Lee citation markers.

Posted by: Alexander Tsesis | Jul 26, 2016 12:59:20 PM
