Thursday, September 11, 2008
Faculty Productivity
As you may have seen over on Brian Leiter's blog, there's a new study of faculty productivity produced by Professor Yelnosky at Roger Williams Law School. The study measures the scholarly productivity of professors at law schools ranked outside the top 50 of USNews. To my delight, Florida State ranks third, trailing a bit behind USanDiego and Cardozo. Richmond is not far behind FSU. San Diego warrants mention for being just a point and a half behind Harvard. (NB: When Brian did a similar study of the "top" schools, FSU ranked 31st nationally; Cardozo and USanDiego tied at 22d. I'd be very curious to see what the numbers look like today.) The study's methodology is pretty interesting and probably somewhat controversial insofar as it measures productivity by focusing on how much scholarship gets selected for publication in the "top" journals. Yelnosky explains that 67 journals were deemed "top":
We included the general law reviews published by the 54 schools receiving the highest peer assessment scores in the 2008 U.S. NEWS RANKINGS (47 schools had a peer assessment score of 2.9 or higher; 7 had a score of 2.8) and an additional 13 journals that appear in the top 50 of the Washington & Lee Law Journal Combined Rankings. An alphabetical listing of those journals can be found on this website, as can the U.S. NEWS & WORLD REPORT RANKINGS and Washington & Lee Law Journal Combined Rankings on which that list of 67 journals is based.
For those of you making decisions about where to send your stuff, your deans will be especially happy if you place in these journals rather than in others you might be tempted by. This study, consequently, might create certain feedback loops. Another important aspect of the methodology: points are discounted for articles published in a journal at one's home institution, and shorter articles earn fewer points:
For each qualifying article, we used Professor Leiter’s system: 0 points for articles under 6 pages; 1 point for articles 6-20 pages in length; 2 points for articles 21-50 pages in length; and 3 points for articles exceeding 50 pages. For articles appearing in a journal published by the faculty member’s home institution, the points assigned were reduced by one-half. The total number of points for all members of a faculty was divided by the number of faculty, yielding the institution’s per capita score.
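For the sake of concreteness, here is a rough sketch of how I read the scoring (my own illustration, not the study's code; the function names and example numbers are mine, but the page cutoffs and the home-journal discount are as quoted above):

def article_points(pages, in_home_journal):
    # Leiter-style length scoring, as described in the study
    if pages < 6:
        points = 0.0
    elif pages <= 20:
        points = 1.0
    elif pages <= 50:
        points = 2.0
    else:
        points = 3.0
    # Half credit for placing in a journal published by one's own school
    if in_home_journal:
        points /= 2
    return points

def per_capita_score(articles, faculty_size):
    # articles: (page_count, in_home_journal) pairs for every qualifying
    # placement by any member of the faculty
    return sum(article_points(p, home) for p, home in articles) / faculty_size

# Example (numbers invented): two 60-page articles, one of them in the home
# journal, plus a 15-page essay, on a 40-person faculty:
# (3 + 1.5 + 1) / 40 = 0.1375
print(per_capita_score([(60, False), (60, True), (15, False)], 40))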
Given the limited ambition of what the study purports to measure, I'm not sure I have too many quibbles with its design. I can imagine that faculties with lots of legal historians (UNC?) or other specialties might suffer under this metric. And if I had my druthers, I'd probably deduct all (not just half the) points for publication in a journal belonging to one's home institution.
By the way, Brian remarks: "For those on the law teaching market, this study is not a bad tool for gauging which more regional law schools have serious scholarly culture." Indeed, but the point cuts deeper: the study also raises the question of which of the top 50 law schools don't have a "serious scholarly culture," at least comparatively and based on this metric. Unfortunately, we don't have the data for that determination; in fact, I'm a bit surprised that Yelnosky didn't undertake it. In light of the substantial work already involved, I wonder how much more it would have been to include the rest. In any event, Yelnosky deserves thanks for putting this together, and if your school was not measured by Yelnosky but you've done a self-study to mimic it, please feel free to share that info in the comments. Of course, if you're on the market this year, you may want to ask the schools you're meeting with how they fare, or at least what they think of the study--but probably best to do so after you get an offer!
Posted by Administrators on September 11, 2008 at 09:01 AM in Funky FSU, Life of Law Schools | Permalink
Comments
Dan,
I agree with your point that the study measures whether a particular school plays the placement game. But I'm not convinced that it offers much evidence about a school's scholarly culture. For example, I have heard that some schools offer bonuses for top tier placements. If I am a professor at such a school, I might be reluctant to put decent ideas in symposium pieces at journals that are outside of that top tier or to produce work on specialized, but important, legal issues out of concern that such articles won't get accepted in a top tier journal. A school that pushes its faculty to publish in top tier journals might do well in the Roger Williams study, but I don't think it would be promoting a particularly healthy scholarly culture.
Echoing another comment, let's also be careful about calling it a measure of productivity. The study measures placements, not a faculty's productivity, impact, or scholarly reputation. The latter three are much more accurately measured by citation counts, not placements. Indeed, I suspect that's why Leiter himself uses citation counts as his primary method for measuring a faculty's scholarly contributions.
So if we think of the Roger Williams study as another data point, one that simply measures where a school places articles, there's nothing wrong with it. But let's be careful not to think of it as a study that measures a faculty's culture, productivity, impact, or reputation. It doesn't.
Posted by: Andrew Perlman | Sep 11, 2008 11:28:38 AM
Discounting shorter pieces introduces a bias against empirical and formal work, and peer-reviewed work more generally. JLS is the 38th-ranked journal in this study, but only one of the eight articles in the most recent issue would have received the full three points--and the authors needed a twelve-page appendix of proofs to get there.
Posted by: Greg Goelzhauser | Sep 11, 2008 10:59:20 AM
These comments are helpful for putting the contribution of the study in context. That said, no one is suggesting that this study should be the sole metric by which one makes assessments. It is just another set of data, to be weighed against other such rankings.
On the merits of one of Andrew's points: if you're trying to get a sense of a school's scholarly culture, then in addition to citation studies, I'd also like to know how many people on the faculty are "in the game" of submitting papers competitively to the top journals, however defined. Your point about George Mason bears consideration, but it is possibly washed out by the fact that some faculty choose placements based on W/L rankings rather than USNews ones, or alternatively, based on where they would rather teach -- on the weak assumption that where you publish may mildly affect the likelihood of that school offering you a job -- so I wouldn't say the choice of the academic reputation metric is so far off. But if you ran the numbers differently, I'd be more than happy to share them here. Let a thousand flowers bloom.
Michael's point about misnomers is also worth attention. I don't think the study purports to measure faculty productivity as such, but rather faculty productivity as reflected in placements in "top" journals. That's all it is. It's not worth large extrapolations beyond what it purports to do...
Posted by: Dan Markel | Sep 11, 2008 10:57:03 AM
I agree with anon(1) at least as to the selection of "top" journals. I think it's fine to call this a study of faculty placement, but to call it faculty "productivity" is a misnomer. There is plenty of fine scholarship that appears in the journals of tier II - IV schools, as well as specialty journals of all schools.
Case in point: I have one article in a listed journal. It has a few cites, but no one has mentioned it to me. I have another article in a specialty journal of a school ranked ~100. It has been cited numerous times, has led to speaking engagements and book chapter invitations, and has more than twice the downloads on SSRN. Is the first article true "scholarship" but the second not?
To arbitrarily limit the data set to a small subset of all journals that exist and then pronounce that the study can aid in the evaluation of "serious scholarly culture" seems a bit of a reach. That said, if you are evaluating which schools have fared better in placement, this study seems perfectly fine.
On a side note, why not include all journals?
Posted by: Michael Risch | Sep 11, 2008 10:53:13 AM
For what it's worth, here are some slightly edited comments that I shared with Professor Yelnosky after his study came out last year:
My disagreement relates to your methodology. It seems to me that the real measure of a scholar (and scholarship) is not where work gets placed, but what impact it has. A good measure of that impact is citations. Obviously, citation counts have their own flaws, but citation counts get a lot closer to measuring a faculty's scholarly reputation than mere placements. I suspect that you went with the latter for self-serving reasons (i.e. your school does better by the placement measure than by the citations measure), and I can't blame you for doing that. But I think a citation ranking (like Leiter's) would be a much better methodology.
Assuming you want to focus on placements, I think your choices of journals are a bit odd. I don't think people weigh competing offers using the U.S. News academic reputation ranking; they use the *overall* U.S. News ranking [or the Washington & Lee ranking]. For example, I think it is uncommon for someone to turn down a law review at a higher ranked school (i.e., U.S. News overall ranking) in favor of another journal that happens to be published at a school with a lower (say 15 slots lower) U.S. News ranking but that happens to have a slightly higher academic reputation score.
To take a rather self-serving example, I have published two pieces in the George Mason Law Review. George Mason has an overall rank of 34 in U.S. News [in the 2008 edition], but its law review doesn't count in your methodology because the school's academic reputation score is .1 too low to qualify. I was swayed by George Mason's overall rank, not just by its academic reputation score. Indeed, I got an offer from a general law review that IS on your list, but turned it down for George Mason because George Mason is ranked higher, by a fairly good margin, on the overall U.S. News list than the school whose journal made me the offer.
Put another way, if you want to use placement as a methodology, you need to select journals using the same method that actual authors use to select journals. After all, you're trying to measure good placements, so you should use the same methodology that authors use when determining what is a good placement. I understand you have to draw the line somewhere, but I think that these difficulties just emphasize my first point about citation counts being the preferred method of measuring a school's scholarly heft.
Posted by: Andrew Perlman | Sep 11, 2008 10:42:04 AM
I notice one potential flaw in the study. From what I can tell, it does not appear to discount for coauthoring (I don't think Leiter did either). That is not a flaw per se, but because the study aggregates points across faculty members, if multiple faculty at the same institution coauthor together, the article counts separately for each author at the full score. This multiplies the impact of coauthoring with colleagues at your own law school -- in some cases the same article will count two, three, or even four times -- relative to coauthoring with others. For example, three coauthors at the same school will each separately be allocated 3 points for publishing an article, attributing a score of 9 to the institution for a single article. A law school that wants to rise in these rankings could move up simply by having the dean encourage faculty to add several colleagues as coauthors to each article, regardless of whether they actually did the work. Am I misunderstanding this? Might this account for the relatively high rankings of some schools whose faculty frequently coauthor together? It would be nice to see the data.
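To make the arithmetic concrete, here is a rough sketch (my own illustration, assuming the study simply sums each listed author's points with no coauthor discount):

# Assumes, as the study appears to, that each in-house coauthor earns the
# full article score, so the institution's total scales with the number
# of colleagues listed on the piece.
def institution_total(article_points, in_house_coauthors):
    return article_points * in_house_coauthors

print(institution_total(3, 1))  # 3 points: a 60-plus-page article, solo author
print(institution_total(3, 3))  # 9 points: the same article with three in-house coauthors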
Posted by: anon | Sep 11, 2008 10:26:55 AM
Oh great, another ranking system designed to minimize the importance of schools (in this case, journals) that aren't already at the top, with the side benefit of encouraging law professors to write at greater length, whether they need the space or not. (The brevity of law professors was certainly one of the great scholarly problems I noticed when reading journal submissions.) I'm sure that will benefit everyone with no negative side effects whatsoever.
Posted by: Anon | Sep 11, 2008 10:00:22 AM