Wednesday, April 08, 2015

Productivity Metrics for Legal Scholarship

As I wrote last week, some universities are using Academic Analytics to assess the academic productivity and excellence of their various departments.  As promised, this post will offer a few metrics that would assess legal scholarship more effectively than the metrics currently used for other disciplines.

Before I set out those metrics, I want to offer a few qualifications.  First, Academic Analytics claims only to quantify faculty scholarship.  Law faculty are usually assessed based not only on their scholarship, but also on their teaching and service.  So while this post will focus only on metrics for assessing legal scholarship, we should also think about how to quantify faculty’s teaching and service contributions.

Second, it is worth asking what these metrics are supposed to capture.  Put differently, why are university administrators seeking this data?  I don’t know the answer to this question.  I suspect, however, that they are, at a minimum, looking to do the following: (a) ensure that the faculty in all of their departments are meeting a minimum level of productivity; (b) determine which of their departments are performing well as compared to other departments across the country; (c) determine which departments are underperforming; (d) make marketing, funding, and organizational decisions that reward departments in category b and reform (or perhaps punish) departments in category c.

Third, while university administrators may choose to use this data to assess their departments, law school administrators may wish to use this data to assess their individual faculty members.  While the university compares its departments to departments at other universities, law schools could use the data to compare faculty members either to other faculty at the same school or to faculty at peer institutions.

Finally, I am personally ambivalent about quantitative assessments of faculty. Quantitative assessments give us some concrete way to measure scholarship, but I don’t think that these quantitative metrics can serve as a substitute for a qualitative assessment. 

Now some proposed metrics.

In my mind, a quantitative assessment should seek to measure both productivity and impact.  This post will focus on productivity, and my next post will address impact.

Productivity seems as though it should be straightforward. After all, determining how much a faculty member publishes should be a simple matter of counting.  But what are we counting?  If we look only at the number of pieces that a faculty member publishes in a year, then we would not distinguish between one faculty member who publishes a three-page commentary in a local bar journal and another who publishes a monograph with a prestigious university press.  But even if we agree that the monograph represents more productivity than the three-page commentary, that does not tell us how to compare one to the other.

We can avoid some of these questions by counting different types of publications independently rather than trying to determine how one type of publication might compare to another.  So, for example, rather than deciding whether a monograph is “worth” the same as a law review article, I would simply have separate counts for monographs and law review articles.  Off the top of my head, I would include the following categories (a rough counting sketch in code follows the list):

  • Law review articles, essays, and book chapters (at least 25 pages in length)
  • Shorter publications (between 5 and 25 pages)
  • Book reviews
  • Monographs
  • Edited volumes
  • Textbooks and treatises (perhaps separating out new editions?)
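
To make the counting concrete, here is a minimal sketch of how these independent tallies might be computed.  It is only an illustration: the field names, the category labels, and the choice to leave pieces under the five-page floor uncounted are all assumptions rather than settled decisions.

    from collections import Counter

    BOOK_TYPES = {"monograph", "edited_volume", "book_review",
                  "textbook", "treatise"}

    def categorize(pub):
        # Books and book reviews are counted by type, not by length.
        if pub["type"] in BOOK_TYPES:
            return pub["type"]
        # Articles, essays, and chapters are split by length alone.
        if pub["pages"] >= 25:
            return "article_essay_or_chapter"
        if pub["pages"] >= 5:
            return "shorter_publication"
        return "uncounted"  # below the five-page floor

    def productivity_counts(pubs):
        # Independent counts per category; no attempt to weigh one
        # type of publication against another.
        return Counter(categorize(p) for p in pubs)

    productivity_counts([
        {"type": "article", "pages": 40},
        {"type": "article", "pages": 12},
        {"type": "monograph", "pages": 250},
    ])
    # Counter({'article_essay_or_chapter': 1, 'shorter_publication': 1,
    #          'monograph': 1})

The point of the sketch is simply that each category keeps its own tally; nothing converts a monograph into some number of articles.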

There are, of course, other types of publications – such as editorials and white papers – but those strike me as outside the core of what is generally considered legal scholarship.  Are there other categories that I am missing?

The second major challenge for measuring productivity is deciding whether to include only those publications that meet some minimum threshold for quality.  So, for example, in other disciplines only peer-reviewed publications count towards productivity.  Similarly, other disciplines sort journals into different categories, and those categories are well known and well defined.  These qualitative limitations and distinctions could be imported into law.  For example, only publications in top 50 journals could count towards productivity.  Alternatively, a placement in a top 10 journal could “count” for more than other publications.

I’d be interested to hear what others think about qualitative limitations and distinctions.  My instinct is to exclude them.  For one thing, deciding which journals qualify as top 50 or top 10 would engender its own controversy.  For another, such limitations and distinctions would muddy the waters, as they are measures of quality rather than of productivity.

More to come . . .

Posted by Carissa Byrne Hessick on April 8, 2015 at 11:00 PM in Life of Law Schools | Permalink

Comments

Why would anyone want either word count or page count? Is the idea to give law professors an incentive to pad their articles to even greater lengths?

Posted by: Stuart Buck | Apr 10, 2015 10:49:40 AM

There's no perfect way to quantify productivity. Different people write different kinds of works in different formats at varying lengths. No one measure can be fair to everyone, short of actually reading through each person's written product and making a qualitative assessment of productivity. If that's not feasible -- as the saying goes, Deans can count but they can't read -- I think you just need to draw some plausible lines and do your best.

On the broader project of assessing faculties, I think it's helpful to look at lots of different ways of measuring productivity or impact rather than just a few. It's kind of like the old story about the blind men and the elephant. Any one measurement is just one part of the elephant.

Posted by: Orin Kerr | Apr 10, 2015 1:05:55 AM

It strikes me that quantitative measures could be of some help in providing guidance for junior scholars, particularly when tenure and promotion standards are fairly opaque. And to that end, it might be a good idea for a faculty to ask itself from time to time, "Why do we have so little regard for book reviews?" [or treatises, co-authored articles, or what have you]. That all strikes me as very healthy.

I can also see how these measures might help the law school keep track of its productivity relative to other law schools, which is valuable information, although I am not sure how much more valuable than the information already available. Perhaps the more useful metric would be how well the law school has performed relative to itself over time. Sudden dips (or improvements) in productivity, however, might say more about the school's recent forays into the entry-level hiring market (or lateral gains or losses) than about the overall productivity of the faculty.

Where I have the least amount of faith is in how well these measures would serve law schools in distinguishing faculty members' productivity. Perhaps you could use something like this to put people in large categories ("really productive," "sort of productive," "not productive at all"), but I assume deans do that kind of thing already just by eyeballing summer funding applications and similar reports. I wouldn't harbor too much faith in anything more granular than that, but maybe I lack vision. In any event, I look forward to your future posts on measuring impact ....

Posted by: Miriam Baer | Apr 9, 2015 8:10:22 PM

Will is totally correct about Marty Lederman. I can't find anything by him on Westlaw that mentions Hobby Lobby, and his blog posts are extremely long and substantive. They clearly required substantial research and reflection.

Hmmm . . .

Posted by: carissa | Apr 9, 2015 5:43:15 PM

Carissa, you are right--you didn't equate short articles with blog posts. I got wrapped up in some of the comments. But you did distinguish short articles from book reviews, and those could be of similar lengths, so I assume the principle underlying that division must be qualitative.

I'm afraid I'm partly to blame for getting you off the quantitative track. It's just because I am anxious to get beyond that (less important) metric and to what I see as a much more important metric of quality, and I'm trying to push back against the development of categories based on productivity first. I'll try to exhibit more restraint.

Posted by: Scott Dodson | Apr 9, 2015 5:26:59 PM

One counter-example to the three major types of substantive blog posts that Carissa lists would be something like the collected work of Marty Lederman on Hobby Lobby, http://balkin.blogspot.com/2014/02/compendium-of-posts-on-hobby-lobby-and.html. It seems to me that these posts are the result of a great deal of reflection and research but so far as I know aren't and won't be part of a different scholarly format. But it may well be that these exceptions are rare, I don't know.

Posted by: William Baude | Apr 9, 2015 5:25:06 PM

Hmmm . . . it seems as though the discussion is veering off in the direction of impact. Maybe the categories I proposed pushed us there.
But, in any event, I'm wondering if we can't get some rough consensus on quantitative measures for productivity first.

Paul -- when you say "mean number of words per year" in your various categories, what do you mean? Is it just the word count a particular professor produced in that category in a given year? And is the word "mean" meant to give an average across years?
Academic analytics is often conducted on an annual basis, and I could see how we might want to make sure that we aren't unfairly judging a faculty member who had an uncharacteristically unproductive year. Is that what you are getting at? And if so, should we just look at the average over a discrete period of time? Say 3 years?
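
To make my own question concrete, here is a minimal sketch of the trailing average I have in mind.  The window length and the annual word totals are purely hypothetical:

    # Hypothetical annual word counts for one professor.
    words_by_year = {2012: 18000, 2013: 2500, 2014: 31000}

    def mean_words(words_by_year, end_year, window=3):
        # Average over a trailing window so that one uncharacteristically
        # slow year does not dominate the assessment.
        years = range(end_year - window + 1, end_year + 1)
        return sum(words_by_year.get(y, 0) for y in years) / window

    mean_words(words_by_year, 2014)  # (18000 + 2500 + 31000) / 3, about 17167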

Scott --- you are obviously correct that word counts are proxies for productivity. And you are also correct that they are often poor proxies. That's why I wanted to pick ranges with an eye towards establishing a consensus. I agree that a 30K-word article and a 20K-word article will generally require roughly the same amount of work; frankly, she who wrote the 20K-word article may have spent more time and effort because it takes more time and effort to say things well in fewer words.
25 pages (or roughly 15,000 words) seems like a reasonable place to divide categories because it is significantly less than what most profs currently submit as a full-length law review article.
Of course, some brilliant legal scholarship is shorter than 25 pages. But that's why we should have separate measures for impact.
And, to be clear, in creating these separate categories we aren't saying that the longer articles are "worth" more. We haven't assigned different publications different points, or anything like that. Of course, someone might *assume* that the shorter articles represent less productivity/time/effort. And most (though not all) of the time, they'd likely be correct.

Scott makes another point that I am more concerned about: whether these quantitative measures are making too many impact/qualitative judgments. I take this point very seriously because I'd like to save impact questions for other metrics. But I'm actually not sure that I agree with the specific example. I don't see these categories as "distinguishing between short journal articles and equivalent-length blog posts," but that is because I don't think many blog posts are more than 5 pages/2,500 words, which is the quantitative cut-off for articles. Of course, there are doubtless blog posts that run longer than 2,500 words, but I've already mentioned what I perceive as the three major types of substantive blog posts in my previous comment, and explained why I'm not concerned about whether they get counted in a quantitative metric.

I'm willing to be convinced that I am wrong . . .

Posted by: carissa | Apr 9, 2015 4:40:27 PM

Of course, word counts are only proxies for productivity, and they are often misleading ones. The creation of a unique data set for a relatively short empirical paper represents a lot more time, effort, and value than a 30,000-word doctrinal paper of which 20,000 words are devoted to a non-novel recitation of background material.

And how do edited volumes and textbook/hornbook updates fit into a word-count category?

I'm not sure quantity/productivity can be measured any more reliably than quality.

Posted by: Scott Dodson | Apr 9, 2015 4:14:49 PM

Please don't try to rank student-edited law journals. Is there any non-endogenous (that is, other than citation counts) evidence whatsoever that any measure of journal "rank" is associated with quality?

Here's a thought. For "productivity," four bands (a rough tallying sketch in code follows the list):
1) mean number of words per year in recognized academic publication outlets (academic journals, whether peer reviewed or student edited, and books);
2) mean number of words per year made available to the public in recognized pre-publication academic outlets (SSRN, arXiv);
3) number of workshop, conference, or other academic presentations;
4) mean number of words per year in non-academic publication outlets related to law, the academy, or the faculty member's particular teaching and research areas (bar journals, op-eds, blogging).
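
A rough sketch, in code, of how bands 1, 2, and 4 might be computed as per-year means, with band 3 as a straight count.  The records below are hypothetical:

    def band_report(pubs_by_year, presentations_per_year):
        # pubs_by_year maps year -> [(band, words), ...] for bands 1, 2, 4.
        n_years = len(pubs_by_year)
        report = {}
        for band in (1, 2, 4):
            total = sum(words
                        for pubs in pubs_by_year.values()
                        for b, words in pubs
                        if b == band)
            report[band] = total / n_years  # mean words per year
        report[3] = sum(presentations_per_year.values())  # straight count
        return report

    band_report(
        {2013: [(1, 15000), (4, 800)], 2014: [(1, 22000), (2, 9000)]},
        {2013: 4, 2014: 6},
    )
    # {1: 18500.0, 2: 4500.0, 4: 400.0, 3: 10}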

Full stop.

Then quality can be assessed differently. For some people, that will be about academic impact, for some it will be about influence on practice, for some it will be about their personal judgments about goodness of research, etc. But there is no consensus about what quality even means, between the obvious extremes on either end of the spectrum (plagiarism on the low end, Lon Fuller on the high). Quantitative measures should be limited to those things we can get some reasonable degree of consensus in evaluating.

Posted by: Paul Gowder | Apr 9, 2015 3:59:37 PM

These categories blend quantitative productivity with qualitative impact. For example, I think that by distinguishing between short journal articles and equivalent-length blog posts, you are making an impact judgment. By contrast, by distinguishing a 24-page article from a 26-page article, I think you are making a quantitative judgment.

I happen to think qualitative impact is more valuable than quantitative productivity. And I think impact is something that is greater than the sum of its parts. So I wouldn't want to measure a scholar's impact on an article-by-article basis. Instead, I would measure it more holistically and over a longer period of time. (That doesn't mean you can't use article-specific indices like citations or peer assessments; it just means that you blend article-specific indices together with more general assessments of impact and reputation.)

I'm not opposed to having a quantitative component to assessing a person's scholarship, but I think it should be a secondary consideration. I would shudder at giving "The Right to Privacy" less weight just because it is under 10,000 words. Still, we might want a quantitative component to ensure continuation of scholarship because we can't predict what the future value of some scholarship will be. We may also want to value productivity for productivity's sake--to reward those who publish often but whose fields are underprivileged or more difficult to make an impact in. So I have no problem with page/word counts or textbook/monograph distinctions generating quantitative categories. But I would hope that those categories play a much smaller role than the qualitative assessments in the overall evaluation of a scholar.

Posted by: Scott Dodson | Apr 9, 2015 2:20:46 PM

Just to clarify, when I'm talking about analytical statutory annotations I'm not referring to the list of cases. I apologize if I was not clear about that.

I'm referring to the "Practice Commentaries" or other short works explaining the provision, providing its history, harmonizing the cases that deal with it, and/or suggesting what the appropriate interpretation should be in given situations. While that's doctrinal scholarship at a pretty pure level, and it tends to be cited by practitioners and courts rather than by law review articles, I would still count it as scholarship. And there are professors who work on those pieces. Connors at Albany Law School and Alexander at St. John's spring to mind.

Regarding blogs and bar journal articles, I agree to an extent. I think there is also a (4) where the author is an expert in a particular area of law and there is a new, but narrow or small development in that area which merits some discussion but not an entire law review article's worth.

Posted by: Patrick Woods | Apr 9, 2015 1:17:44 PM

Very interesting comments so far.

James Grimmelmann makes a good point about word counts vs. page limits. So I guess I'd need to figure out what the rough word count equivalent is for the page ranges I've listed.

As for whether folks should get credit when they get paid for their publications, I think that might have some unintended consequences. The monograph by Leo Katz that Derek Tokaz mentions is a great example. I'm sure Katz is getting a much smaller royalty check from U Chicago Press than, for example, Josh Dressler is getting from West for his Criminal Law textbook. But both are still getting paid.

As for the thoughts by Patrick Woods ---
(a) Are there law profs out there who are doing statutory annotations? And even if there are, I don't see the analytical contribution being made by the annotations in McKinney's; it just collects major cases interpreting a particular statutory section.
(b) Do other people agree re blog posts and very short publications (say, fewer than 2,000 words)?

My sense is that, when a prof writes something for a blog or a bar journal, many of those publications can be divided into three categories: (1) it is a summary of a larger scholarship project or agenda, (2) it will become a larger scholarship project or agenda, or (3) it is not the result of very much research/reflection. At first blush, I don't think any of these three categories should "count." The first is publicity, the second is like an early draft or outline, and the third lacks the important hallmarks of scholarship.

Is there a better case to be made for these very short contributions?

Posted by: carissa | Apr 9, 2015 12:47:53 PM

You should consider not including textbooks and treatises (and just regular books, which I didn't see listed, like Leo Katz's excellent Why the Law is So Perverse). Yes, such works are a sign of great productivity, but they may not be a good metric of productivity *as a professor*.

If professors are earning a royalty or other third-party payment, this should be treated as being very productive in a side job, but not in their professor job. This would be akin to a professor taking a consulting job on the side. The professor may get credit for bringing prestige to the university, but the school would also recognize that this is a second job, which may create a conflict when it comes to the professor's time.

"deciding which journals qualify as top 50 or top 10 would engender its own controversy"

Yes and no. Creating a national ranking of journals, a la US News's law school rankings, would create plenty of controversy and rightfully so. However, if each individual school had its own list, I think that would eliminate much of the problem.

"For example, only publications in top 50 journals could count towards productivity."

If you were to rank them, either on a national level or with each school creating its own list, I don't think the all-or-nothing approach makes sense. It's not as though publishing in the 75th-ranked journal is worth nothing. Perhaps the A List gets full credit, the B List 80%, the C List 50%, and the D List 10%.
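
In code, that sliding scale might look something like this.  The list assignments are hypothetical, and each school would maintain its own:

    # Credit weights per list, plus one school's (hypothetical) journal lists.
    CREDIT = {"A": 1.0, "B": 0.8, "C": 0.5, "D": 0.1}
    SCHOOL_LIST = {"Harvard Law Review": "A", "Some Regional L.J.": "C"}

    def placement_credit(journal):
        # Unlisted journals fall to the D list rather than to zero,
        # avoiding the all-or-nothing problem.
        return CREDIT[SCHOOL_LIST.get(journal, "D")]

    placement_credit("Harvard Law Review")  # 1.0
    placement_credit("Unlisted Journal")    # 0.1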

As for other factors, it would make sense to include the work of peer reviewing, but only if it is actually done in a rigorous manner. For instance, a study of a peer-reviewed article ought not to find that 98% of its citations are fluff.

Finally, what about (non-paid) speaking engagements and conference panels? Such events can be a great way to refine one's work before sending it out for publication. This would provide professors with a bit of a stopgap in the years between publications and help to emphasize quality over quantity. The school may need to distinguish, though, between pre-publication and post-publication talks (conference talks about work you're already done with aren't terribly productive). Perhaps the pre-publication conference would work like an advance against your publication credit: take part of the credit now, and the rest when the piece is published.

Posted by: Derek Tokaz | Apr 9, 2015 9:38:06 AM

Regarding categories, a system might want to consider:

(1) Analytical statutory annotations. A number of the most influential legal writings in terms of practice are the commentaries to the frequently used consolidated statutes (e.g. McKinney's in New York).

(2) Substantive blogging, by which I mean a large swath of original content rather than a re-posted link with a quote. If the page floor in the "shorter publications" category is dropped, you could include blogs in that group.

I'd say the floor should be dropped anyway, and James Grimmelmann is right that if a length cut-off has to exist it should be based on word count. More generally, though, I think short but substantive pieces placed in publications like the New York Law Journal or various bar journals, pretty much all of which are now primarily accessed online anyway, should count. How much a really short piece should count for is a separate question, but it's more than nothing.

As for making qualitative distinctions between what "counts" and what doesn't within a particular category, that's an area that needs a lot of serious consideration and is particularly tricky. Each category has its problems in line-drawing, as placements sometimes have less to do with the quality of the piece than with the author's reputation or professional network. Should someone who writes a textbook with three coauthors, each of whom assigns the text in their own classes, get the same credit as someone whose textbook is widely used? Should a book author using a vanity press get the same credit as someone picked up by a major publisher?

Regarding law reviews specifically, I think using the ranking of the journal is highly problematic. As you point out, not only is deciding which journals "make the cut" an issue (USN rank of the parent school? W&L? Google Scholar? Something else?), but considerations other than rank can also drive an author's placement decision (e.g., readership). Should an author's article not count because he or she purposefully placed it in a specialty journal read by the relevant experts but not ranked in the top 50 overall? Should an author lose credit for placing a state law piece in a regional journal that is known to be read by members of the relevant state's supreme court but which otherwise lacks national cachet?

There's also a time factor to be considered. What happens if a journal was barely in the top 50 (however determined) when a piece was published but then slips to 60? What about the reverse situation?

I think I agree with you that it's probably not worth the attempt to make these kinds of distinctions, at least not using semi-static categorical cut-off points. Using citations or altmetrics to establish what "counts" is a possibility, but those approaches have their own problems that are too much to get into, even in an overly long comment like this one.

Of course, my views are all "outside looking in" (and solely my own). Take them with the appropriate amount of salt.

Posted by: Patrick Woods | Apr 9, 2015 9:31:32 AM

If you're going to count, don't count pages, since page formatting varies extensively. Count words.

Posted by: James Grimmelmann | Apr 9, 2015 9:11:56 AM

What about separate categories for peer-reviewed law journals and peer-reviewed non-law journals?

Posted by: Daniel Sokol | Apr 9, 2015 1:40:19 AM
