Friday, May 27, 2011
Anecdotal Evidence of Letterhead Bias
Dave Fagundes' very interesting interview with current Minnesota Law Review Articles Editor Carl Engstrom, and the comments thereto, prompted me to relate an anecdote about letterhead bias in law review article selection. Everyone has a sense that letterhead bias exists, but no one can prove it. As far as I know, there is no empirical study on how often professors at third- or fourth-tier schools are published in, say, top-10 or top-25 journals. Minna Kotkin's work, though it focuses on gender imbalance, does touch on letterhead bias (see pp. 27-28), but (1) does not come to any hard-and-fast conclusions and (2) lumps together tiers 2, 3, and 4, whereas my sense is that it is far more difficult for professors at fourth-tier schools to get published in top journals than it is for those at second-tier schools. Moreover, even if there were empirical evidence that professors at third- and fourth-tier schools are rarely published in the better-regarded law journals, there is the question of causality: are they rejected by Articles Editors at these journals because they teach at third- and fourth-tier schools, or do they teach at third- and fourth-tier schools because they do not write law review articles as well as professors at more highly ranked schools? Of course, it may be a little of each, but as someone who teaches at a fourth-tier school, I'd like to think that it is more the former than the latter.
Which leads me to my anecdote. A few years ago (summers of 2008 and 2009, to be exact), I labored over a nice little piece -- or what I thought was a nice little piece -- on the premeditation-deliberation formula, which is often used to separate second- from first-degree murder. I typically write on constitutional criminal procedure, and this was my first attempt at a substantive criminal law piece, but I was fascinated by the challenge of justifying, or at least explaining, this artifact of the law that had drawn all but universal condemnation. Having spent the better part of two summers engrossed in the works of Beccaria and Montesquieu, I sent the piece out in August 2009 to the law reviews at the top 100 schools according to the U.S. News rankings, as well as some specialty journals. I anxiously waited for a bite. And waited. And waited.
I got zero offers. I was despondent. This had never happened before. My prior work had placed in second-tier journals and some specialty journals of first-tier schools. The placements, frankly, had never been as good as I thought they should be -- are they for anyone? -- but at least I got something. I questioned whether spending two summers writing about something new had been a dreadful mistake, rather than sticking to something right in my wheelhouse. Constitutional criminal procedure, after all, is on the "sexier" side of things; eighteenth-century penological theory, apparently, is not, at least for someone writing from the fourth tier.
Then, in December 2009, I got an e-mail from David Harris, then-Chair of the AALS Section on Criminal Justice, informing me that my paper had been selected as the winner of the Section's Junior Scholar Paper Award. I was quite stunned, both because of the impressive talent I see in my colleagues in the Criminal Justice Section, whose achievements as detailed in the Section newsletter always leave me feeling somewhat intimidated, and the poor showing my piece had had in the fall submission season. The difference, of course, is that the evaluation of papers for the award was done anonymously.
Fast-forward to March 2010. I sent the article -- the exact same article, mind you -- again to the same journals to which I had sent it in August. The difference was an additional sentence, in bold-face type, in the first footnote of the paper and the same sentence in the cover letter: "This paper was the winner of the 2010 AALS Criminal Justice Section Junior Scholar Paper Award." In the letter, I also helpfully dropped a footnote citing articles that had previously won the same award; as Engstrom says in the Fagundes interview: "[I]f an article . . . has received any . . . honors worth mentioning, authors need to explain that to us so we have a sense of why it matters." Suffice it to say, this time I got some offers, and the article just came out in the Indiana Law Journal, my best placement yet.
Of course, this is anecdotal evidence, so take it for what it is worth. And, of course, there is anecdotal evidence going in the other direction as well: both Mark Godsey and Emily Houh, who preceded me at NKU-Chase College of Law, published in top-15 journals in their time here, and John Stinneford published a fantastic article in Northwestern University Law Review while at a fourth-tier school (all three are now at second-tier schools). Yet my nagging concern: what about the runner-up for the award that I won? Doubtless, that unknown person is writing terrific stuff. But if he or she is at a third- or fourth-tier school, it may well be that no one is reading it.
Posted by Michael J.Z. Mannheimer on May 27, 2011 at 01:22 PM | Permalink
Comments
This may be off topic (such as the topic seems to have evolved), but I wonder what folks have to say about noting in the first footnote of the paper the fact that a paper was in an SSRN top-10 list. Is this normal, or does it seem too much like, for lack of a better phrase, blatant proxy baiting (not that I have a problem with this, since editors so obviously take the bait)? I noted a top-10 SSRN ranking in the cover letter that accompanied a recent submission, but it appears that meant little, as cover letters can get ignored during the submission onslaught. Should I just load that first footnote with pretty much anything positive I have to say about myself and the paper?
Posted by: Anotheranon | May 31, 2011 3:34:58 PM
@John Moser:
I have comparative experience with both peer review in science (ecology journals) and student-run review in law reviews. And I have a lot of friends and colleagues who have published in both. And yes, from my personal experience and that of those friends and colleagues, the comparison indicates to me that there are strengths and weaknesses in both systems.
The gate-keeping role is (as you indicate) crucial for knowledge production, but whether it occurs at the stage of publication or later is a different question. Just because something is published does not mean (in *any* discipline) that it is necessarily considered "true" or adopted as the consensus view. In fact, peer review often misses egregious examples of fraud, non-novel information production, and even plagiarism. Many of these problems are caught not in publication but later on, after the fact, when studies are replicated, questioned, or debunked among the majority of thinkers within a particular discipline. So the decision about whether to do more of your gate-keeping before or after publication has pros and cons. Again, you might have stricter gate-keeping up front in terms of having more expert reviewers, but with the costs I described. And yes, law students do form some level of gate-keeping: preemption checks by law students often find articles that are non-novel and result in excluding them from publication.
Delay is a cost of the peer-reviewed, exclusive single-submission model -- perhaps you could combine multiple submission with peer review (the folks at PRISM are trying that) to get the benefits of peer review with less of the cost of delay.
As for the assertion that the system must not work because no one else does it -- well, on that reasoning, you would never adopt any innovation, and any unique organizational structure would have to be deemed unsatisfactory.
Finally, as for the proliferation of journals, this is just as true in other disciplines as in law. The key difference is that in other disciplines, proliferation has occurred more through specialization (i.e., the creation of new disciplines and subdisciplines in which new journals are established), while in law it has come more through the increase in general law journals (in part because every new law school creates its own law journal), though there has been growth in the specialized journals as well. If you really want to publish something in *any* discipline, just as in law, you can find a journal that will publish it. It just takes more time because of the exclusive submission process. The other primary difference (as I indicated above) is that the sorting of journals within that hierarchy is done by peer review as opposed to student review, again with the pros and cons that I indicated.
Posted by: Eric Biber | May 31, 2011 1:06:34 AM
@JC:
Jonah Gelbach: What about anonprof's proposal to look at placements while on visits? If one looked at visitors from lower ranked to higher ranked schools who did not then lateral to them, wouldn't we see the letterhead effect, while accounting for the possibility that visitors were actually improving?
There's no perfect solution to these sorts of problems (economists often get tenure by making incremental progress on them). But, that's a nice idea; if still problematic, nonetheless better. I can think of some problems, though. Like, consider those cases for which the reason the visitor doesn't stay is that s/he doesn't get anything accepted that year at a highly enough ranked journal. Then we have reverse causality: bad paper placement causes bad job placement. But still, a nice idea.
Here's another idea. Why don't we get ExpressO to randomly assign titles, authors, and institutions to papers and then see what happens? Obviously one would have to undo the randomization post results, and it would involve some logistical machinations (need to have emails sent to the right people rather than the wrong ones, need people to agree to have their names used as random elements, need to use only papers not previously posted on the web). But if you really really really want to know the answer to this question, rather than drawing on law professors' experience and common sense, I think that would be the best way to do it.
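For concreteness, here is a minimal sketch of that randomization in Python -- purely illustrative, with made-up paper and author names, and no pretense that ExpressO actually exposes anything like this:

    import random

    # Hypothetical papers and (title, author, institution) identities;
    # nothing here uses any real ExpressO interface.
    papers = ["paper_A", "paper_B", "paper_C", "paper_D"]
    identities = [
        ("Title 1", "Prof. W", "Top-10 School"),
        ("Title 2", "Prof. X", "Tier-2 School"),
        ("Title 3", "Prof. Y", "Tier-4 School"),
        ("Title 4", "Prof. Z", "Top-10 School"),
    ]

    random.seed(2011)          # reproducible assignment
    shuffled = identities[:]   # copy, then randomize
    random.shuffle(shuffled)

    # The blinding key: which identity each paper was sent out under.
    assignment = dict(zip(papers, shuffled))
    for paper, (title, author, school) in assignment.items():
        print(f"{paper} goes out as {title!r} by {author} ({school})")

    # After decisions come back, invert the mapping to undo the
    # randomization and analyze outcomes by assigned letterhead.
    unblind = {identity: paper for paper, identity in assignment.items()}

The assigned letterhead is the treatment; keeping the blinding key is what lets you "undo the randomization post results."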
jg
ps Dear another_social_scientist: I am sorry that you got punked by the good-natured back-and-forth between me and JC. I am also flattered by your kind words!
Posted by: jonah gelbach | May 30, 2011 4:23:57 PM
I see history and cross-disciplinary and cross-national comparison not as products of individual experience, but suffice it to say parallax can be destiny. Especially when law reviews can be considered valuable training by some. I am sure many students would do them without the credentialing effect, but of course not vice versa.
And I am not that John Moser.
Posted by: John Moser | May 30, 2011 3:01:52 PM
Jonah Gelbach: What about anonprof's proposal to look at placements while on visits? If one looked at visitors from lower ranked to higher ranked schools who did not then lateral to them, wouldn't we see the letterhead effect, while accounting for the possibility that visitors were actually improving?
And TJ's point seems important, which I will re-state as: there are lots of great (or attractive) articles out there. If it is free to take a fine paper from a celebrity or someone from a fancy institution (rather than an equally good one from someone with no reputation), why would an editorial board not do it?
To John Moser, as you well know (http://www.libertyguide.com/download/Law_School_and_Beyond_2d_ed.pdf) law reviews are in large part for the students. If they did not provide valuable training and credentialization, students would not staff them. Their proliferation is not entirely for faculty.
And in terms of law prof workload, we teach many more students than profs in some other disciplines. It is not clear to me that we are systematically comparatively underworked.
Posted by: jc | May 30, 2011 2:07:45 PM
John,
I continue to see things really differently than you do: I think we've had two very different experiences in legal academia.
I may be wrong, but am I right in assuming you're the John Moser who used to work at IHS and is now a history professor and blogger at "No Left Turns"?
Posted by: Orin Kerr | May 30, 2011 1:00:35 PM
The idea that law reviews developed to assist the local legal profession is pure mythology. Langdellian postgraduate education spread in the US as part of the general nationalization of the profession in the early 20th century, and the law review spread as a status marker along with it. The plaintive cries of irrelevance in legal scholarship have been cyclically repeated ever since as a result. Read a random lower-tier school's law review from the 1930s and see how local its content is.
Again, more law professors writing in comparison to what? The number of business or English professors? Ratio it out. The number of law reviews is astronomically out of proportion to the number of law professors, compared both to other disciplines and to the legal disciplines abroad.
The prevalence of RA abuse, I have to admit, is harder to quantify and document outside of my own longitudinal perspective. But when you have smart, ambitious students with no vested interest in their intellectual production (compared to other graduate students), and professors generally untrained as scholars and with low levels of normative reflexivity in their work, you have a bad institutional environment.
Posted by: John Moser | May 30, 2011 12:25:12 PM
John Moser,
I don't recognize the world of legal academia that you describe, and your attempted explanations for the status quo don't ring true to me. For example, I don't see how the prevalence of generalist journals has anything to do with "Langdellian legal education." Rather, it would seem to me to reflect the historical development of law reviews, which were intended to assist local members of the legal profession, who tended to be generalists and more focused on state/regional law. Also, the "deluge" of submissions reflects the number of law professors who are writing, not the number of students who can edit the professors' articles. And as for reliance on RAs, I don't use them much at all, and I don't know of other professors who use them as you describe, except for some professors at Harvard who were caught up in some public scandals. Perhaps others over-rely on RAs, but I don't know how prevalent that is.
Posted by: Orin Kerr | May 30, 2011 11:49:33 AM
I always need hugs. I'm middle-aged and tenured. So I get to be grumpy about the things that bother me with fewer than normal consequences.
But to Ani: Not only are student RA's -- often in the plural -- common in our neck of the woods, but they are used to do citation and especially lit-review work in a way that is not only inappropriate but demeaning to the scholarly process. Plus, the outright amount of writing law students do in exchange for recommendations is ridiculous. Compare when other disciplines have plagiarism scandals vs. us. There are many retorts, but none of them are along the lines of "my RA's didn't cite properly the parts of my article/book they wrote."
Posted by: John Moser | May 30, 2011 9:49:53 AM
Sounds like somebody here needs a hug!
Posted by: Bruce Boyden | May 29, 2011 10:35:39 PM
to "another_social_scientist": JC was just kidding around. He and JG are bestest pals...
Posted by: Dan Markel | May 29, 2011 9:39:21 PM
To jc: While Jonah may be a first year law student, he is also a very well regarded economist. http://econ.arizona.edu/faculty/gelbach.asp
Posted by: another_social_scientist | May 29, 2011 6:35:35 PM
John Moser: everything else aside, you cite "almost criminal over-reliance on student RA's." What?
Posted by: Ani | May 29, 2011 5:13:54 PM
You have identified a problem in the production of legal scholarship derived from the system as is -- not a basis for comparison. You can have such a deluge because of the geometrically greater number of students than faculty available to staff such journals. The prevalence of generalist law reviews is again part of the historical pattern of the growth of Langdellian legal education and the lack of student competence to run specialty journals (not that this stops them when they do exist). No other discipline has such low thresholds for scholarship, structurally tied to heavy proxy reliance in superficial tenure-promotion procedures and an again self-justifying disdain for the production of books.
The ugly truth is that this is the perpetuation of a convenient system for our coddled corner of the academy. Given that we already teach less than almost every comparable discipline, on top of consistently generous admin resources and an almost criminal over-reliance on student RA's, some justification for this anomaly in the production of knowledge is required, rather than comparison based on the anomaly itself.
Posted by: John Moser | May 29, 2011 5:07:50 PM
Orin,
It is hard to compare the two systems because, in other disciplines, you send your piece out to one journal at a time and wait 2-6 months for a decision. On the other hand, each journal's acceptance rate is much higher: something like 5 to 40%, depending on the perceived prestige of the journal.
Posted by: Michael J.Z. Mannheimer | May 29, 2011 2:13:02 PM
John Moser,
I'm curious, what other academic fields have the same number of submissions and journals? In law, there are several thousand submissions every year and several hundred journals. Because the great majority of the journals are generalist journals rather than specialty journals, the sorting process has to fill thousands of journal spots all at once.
Can you point to the academic disciplines that use peer review in which there are thousands of submissions every year and hundreds of journals that can publish them? Put another way, are there peer-reviewed journals that peer review (say) 2,000 submissions, and that compete with (say) 300 other peer-reviewed journals, all for submissions from the same pool of authors? If each journal peer reviews each submission, you could potentially have hundreds of thousands of peer reviews each year in each academic field. Do other fields really do that?
Posted by: Orin Kerr | May 29, 2011 1:22:05 PM
@jc:
Thanks for your evidentiary citations, which I found interesting, if not really responsive to the question I posed above. I grant that I am just a "*first year law student*" [emphasis in original]. And of course, many people might use the fact of being a "*first year law student*" as a proxy for "not knowing much about how to do causal empirical research." So, feel free to regard what I write below with whatever degree of skepticism you feel is appropriate under the circumstances.
1. Having just skimmed it quickly, I think the Yamamoto paper is interesting. But, like the Lindgren example, it involves just one paper, so at best one learns the "treatment effect" of different letterhead for only that paper (by the author/authors in question). And, it also involves a small sample of target journals (25 in each letterhead group). With samples of such magnitude, one wouldn't expect to have much statistical power. So absence of statistical significance is neither very surprising nor really very informative about the underlying relationship at issue.
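To put a rough number on the power point, here is a quick simulation in Python under made-up offer rates -- the 8% and 24% figures are invented purely for illustration -- showing that with 25 journals per letterhead arm, even a tripling of the offer rate is detected at the 5% level only a minority of the time:

    import numpy as np

    # Simulated power of a two-proportion z-test with 25 journals per
    # letterhead arm; the 8% and 24% offer rates are hypothetical.
    rng = np.random.default_rng(0)
    n, p_low, p_high, reps = 25, 0.08, 0.24, 20_000

    hits = 0
    for _ in range(reps):
        a = rng.binomial(n, p_low)    # offers under letterhead 1
        b = rng.binomial(n, p_high)   # offers under letterhead 2
        p_pool = (a + b) / (2 * n)
        se = (2 * p_pool * (1 - p_pool) / n) ** 0.5
        if se > 0 and abs(b - a) / n > 1.96 * se:
            hits += 1

    print(f"simulated power: {hits / reps:.2f}")  # well under 0.8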
2. Leaving aside point 1 above, let's suppose the same-submission-on-different-letterhead research design is compelling. There's still the problem, with respect to my question for BDG, that this research design is unimplementable with a general review of actually published law review articles. So, I object, JC, on grounds of nonresponsiveness.
Anyway, have a margarita at Poca Cosa for me before you leave town. Stay cool--it's hot out there.
Posted by: jonah gelbach | May 29, 2011 12:29:56 PM
It is easy to identify problems in the abstract without actually gauging their severity or comparative prevalence. Defenses of student-run journals are completely self-directed conversations within legal academia to defend one of the many, many amateurish and non-scholarly elements we continue to cling to. Peer review is not perfect, but no one with any serious take on the production of knowledge thinks that gate-keeping is avoidable in any evolving epistemic system. And compared to what? The gate-keeping of a student who almost by definition knows only in cursory fashion the most dominant threads in a field? And delay -- the comparative "efficiency" of the student-run model is why every law school has not only a general law review but several specialty journals, leading to massive over-publication.
If this were really just a judgment call about comparative trade-offs, then you would see the student-run model somewhere else. But you don't. In any discipline. Anywhere in the world. And, why not, anywhere in the world in any discipline throughout scholarly history. This goes for disciplines whose students are in fact training to be scholars and are far better equipped than law students to evaluate scholarship.
Please.
Posted by: John Moser | May 29, 2011 11:58:29 AM
Shine,
Peer review has advantages and disadvantages compared to the student-run system in law, but having heard plenty of horror stories from social and natural scientists about the process, I don't think that I would call it "amazingly fair and very transparent."

Just to take a couple of examples: While the process may be double-blind on the surface, for many research topics there is a very small subset of scholars who are expert enough to serve as peer reviewers; moreover, those same experts are likely to comprise the vast majority of submitters in the field. Thus, you have two problems. First, by looking at the citations, the type of work being done, and the questions being asked (and even sometimes writing style!), an experienced reviewer can usually tell who wrote the article under review. Second, as the leading researchers in the field, reviewers often have a lot at stake in the review process: If nothing else, they may be hyper-critical of work that is critical of their own work, looking for reasons to dismiss it. Since any piece necessarily has flaws, a reviewer can emphasize those flaws and downplay strengths, and use their gatekeeper role to keep out work that is threatening to their own. This means that work that challenges conventional wisdom or pursues new research possibilities or even critiques flawed work by senior people may face a steeper hill to be accepted. (You might well want something of a steeper hill for cutting-edge work, of course, but the question is how much is too much.) Of course, you could broaden the pool of reviewers to reduce this problem, but then the expertise of the reviewers begins declining (and it starts looking more like student-run reviewing).

As another problem, because the system depends on very busy academics who are not (usually) paid to do reviews, the system can break down, resulting in significant delays for reviews and publication (months or even years is not uncommon in some fields).
Again, this is not to say that peer-review does not have strengths. It's just to say that it is not necessarily always better than student-run reviews. You have to decide whether getting the relevant expertise is worth possible costs in gatekeeping and delays.
Posted by: Eric Biber | May 29, 2011 11:38:54 AM
Jonah Gelbach's comment, perhaps not surprisingly coming from a *first year law student*, overlooks many available means of empirically testing the letterhead effect. There is some evidence already out there (e.g., http://taxprof.typepad.com/taxprof_blog/files/Yamamoto.pdf, and the famous James Lindgren experiment referred to here: http://prawfsblawg.blogs.com/prawfsblawg/2009/12/judging-scholarship-or-would-you-kill-for-blind-review.html). A rigorous experiment would not be difficult to devise, although Gelbach may be correct that merely looking at existing individual outcomes would be problematic. In any event, our business is hardly the only one relying on proxies to evaluate the quality of writing. http://www.museumofhoaxes.com/hoax/archive/permalink/the_steps_experiment/
Posted by: jc | May 28, 2011 10:10:44 PM
Shine,
The system you describe -- double blind peer review -- is the norm, I believe, in every discipline except one: law. Therefore, I doubt that the "high stakes" hypothesis is the correct one, for many fields have even lower stakes than ours.
Posted by: Michael J.Z. Mannheimer | May 28, 2011 9:50:19 PM
I am starting as a new professor at a non-first tier law school. Looking at some of these posts, I am already concerned about the publication process.
I come from a science background, so that is all I have to compare this publication process against. Almost every journal in the science field is peer-reviewed. The process is amazingly fair and very transparent. (Just one example of the transparency of the system is that many editors do not give out the names of the authors on the article so that there is less referee bias). Also helping the process is that the editors who screen the articles are paid professionals, so at least there is some continuity / consistency in place.
Admittedly, the stakes for scientific publications are slightly different. Not only are publications important for tenure decisions, but also for grant funding decisions (I think a common NIH grant ends up at $250,000/year). Furthermore, publications (especially in highly ranked journals) are a proxy for the ability to execute a difficult research agenda.
With this in mind, why are there not more peer-reviewed law journals out there? Is it that the pool of law professors is simply much smaller than that of scientists, and thus it is not economically feasible to establish a robust peer-review system? Is it that the stakes are high enough in science that the average science professor demands a higher standard of transparency and fairness?
I apologize if this is a naive question...I'm just trying to wrap my head around this system.
Posted by: Shine Tu | May 28, 2011 8:29:57 PM
@Orin: Agree completely with the statement "we're all playing a proxy game to an extent." I just wanted to resist the thrust of earlier comments that all law review editors or committee members do is look to proxies, or that their decisionmaking is dominated by proxies at the expense of substance. I'm very confident neither of those is right.
A related point, of course, is whether a given proxy is a valid signal of quality. Proxies are only bad if they're inaccurate. And in any event, is there a general sense that what's published in top journals is unimpressive, and that great work is constantly getting passed over? Unless there's some reason to think journals aren't publishing quality work, and/or that quality work is consistently going unpublished, then I'm not sure why people are so critical of the process (especially in the absence of a plausible alternative).
Posted by: Dave | May 28, 2011 5:46:41 PM
Dave,
I take Brad's point to be that professors rely heavily on proxies, too, which in my experience is very true. For example, while your colleagues may not be so influenced by placement of journal articles, they are probably influenced by the rank of the school that the candidate attended, the prestige of the candidate's clerkship or fellowship, and the like. Those are all proxies for something else -- for the most part, a candidate's intelligence and scholarly ability.
The lesson, I think, is that we're all playing the proxy game to an extent. It's an almost inevitable move given that no one is an expert in everything and we're all picking a small number of "top" items from a large pool. We all have to rely on signals to narrow down the pool to some reasonable number and to help us get a sense of who is good when making decisions amidst considerable uncertainty.
Posted by: Orin Kerr | May 28, 2011 5:12:49 PM
Brad: "If tenure and hiring committees can't, or won't, judge scholarship on its intrinsic merit then it is absurd to expect unpaid, overworked students with to do so."
I'm baffled at how anyone could make this assertion based on these two threads, or based on any real knowledge of either the hiring or article-selection process. The interview with Carl suggests that if anything, the opposite of this is true, and that intrinsic merit is, as it should be, the touchstone of the article selection process. That was certainly my experience as an articles editor.
Much the same has been true of my several years' experience on a hiring committee. Placement is somewhat of an initial signal of quality, as it should be, but the merit of articles invariably swamps that weak proxy. I remember numerous discussions in which the committee expressed surprise at a prestigious placement after concluding that the article was weak, as well as the committee crediting the merit of an article that was unpublished or was published in a not very well known journal.
Posted by: Dave | May 28, 2011 4:04:11 PM
Thanks, Carl. Very interesting.
Posted by: Orin Kerr | May 28, 2011 3:46:40 PM
Prof. Kerr, the emphasis on the likelihood of a piece being cited seems pretty strong, and it is definitely tied to the W&L rankings. And how many citations an author has garnered goes into the equation of how likely it is that a piece will be cited. Rankings certainly can have a negative effect on the behavior of institutions--USNWR rankings seem to have inspired a lot of borderline unethical behavior among law schools.
At the same time, I think most editors believe that an article on an important topic with original insights is the most likely to generate citations. During this past cycle we rejected a number of articles from professors listed in Leiter's Top 10/20 citation rankings.
Brad, you are correct in the sense that there is no accountability for actions taken by a journal member. But I don't think most journal members are wired in such a way that accountability and negative consequences are what drives their behavior. I believe most of us are driven by academic interest and personal pride. That line will be on my resume regardless. But it will give me immense personal satisfaction if my journal discovers and publishes the "best" article out there.
Anonprof: The reason that Prof. Fagundes wanted to publish our interview was to establish a dialogue, not for me to act as any kind of oracle. The process is absolutely arbitrary to some extent, and that arbitrariness led many of the professors who contribute to or read this blog to discuss, to the tune of 300+ posts, the submission process. I was simply offering my 2 cents worth of insight. Apparently you feel I was overpaid in that regard. But if you are going to make personal attacks, I think posting anonymously is childish.
Posted by: Carl Engstrom | May 28, 2011 3:03:57 PM
Grimmelmann's second-to-last point gets to the heart of the matter. If tenure and hiring committees can't, or won't, judge scholarship on its intrinsic merit then it is absurd to expect unpaid, overworked students to do so.
I'd take it even further -- what are the incentives for an Articles Editor to select the 'best' article? Not too strong, as far as I can tell. The key payoff from being a law review editor is a line on your resume. No one, and I mean that literally, is ever going to pull the issues you worked on to see what you selected. The general reputation of the school you went to can have an effect -- but such things move painfully slowly and in any event have the most impact at the very beginning of your career.
Posted by: brad | May 28, 2011 2:12:46 PM
i find it comical that professors are asking a third year student the tricks of the trade when every submission cycle we look to each other for help and most of us acknowledge that the system changes from year to year and is unpredictable. it's comforting to know that so many people think current articles editors are in the know when those of us who have been submitting works for two decades and were articles editors in law school are still in the blind.
the use of "the author's reputation and how many citations the author's work tends to generate" in selecting a particular piece of scholarship is preposterous (i suppose the good news of hearing this is that i can think of many pieces that were published without these proxies). reputation is linked to past publications, school reputation rather than an individual's personal merits, and sometimes little more than connectedness. such a proxy selection method is bound to perpetuate the publications failures of some people and keep a revolving door open for others, with little regard for quality and originality.
the notion that third year students can select the best articles is almost as ridiculous as having a college senior who read Diderot for the first time last year selecting articles on enlightenment philosophy. there aint no way you're gonna get the best results that way.
one clear indication of letterhead bias is that there are many instances of people who visit at higher ranked law schools and use those schools' letterheads publishing in significantly higher journals than those they had been publishing in using their home schools' letterheads. in the lateral hiring process, i often reflect on the timing of a highly placed article relative to the candidate's visits.
Posted by: anonprof | May 28, 2011 7:14:28 AM
Also skeptical that the "market for lemons" analogy works. In Akerlof's model, the asymmetry is that sellers have all relevant info, buyers only proxies. In law review submissions, buyers/editors have plenty of info, though they're affected by proxies. I was an articles editor once upon a time, and we reviewed submissions totally blind at the first stage. Even so, it was very very easy to dismiss many, probably even most, of the submissions as clearly subpar.
Posted by: Dave | May 27, 2011 11:38:03 PM
Carl, can you clarify what you mean by "the author's reputation"? I can imagine two different meanings. The first is whether the author has a reputation for dishonesty or academic misconduct. The second is whether the author is a "big name." I can perfectly agree that the first is relevant. I am much more dubious about the second.
Because if you use the "big name" as a proxy even when the article gets to committee, that is not really different than using school rank. Holding the intrinsic quality of the article constant, I concede that a big name author is likely to get more cites (and I assume, albeit under protest, that citations rather than intrinsic quality is what matters to the law review editor). But that is true of just about every proxy that you might use. An author who is teaching at a highly ranked law school is going to get more cites, holding article quality constant, because that author is likely to become a "big name" in the future and then people will retrospectively dig up his older articles.
Posted by: TJ | May 27, 2011 9:25:05 PM
Carl, you're right. A better way of putting it is that the initial screen stage is a market for lemons. This is completely consistent with your points about what you look at in a submission.
Cover letter? Pffft. That's cheap talk -- unless it has some hard data, like SSRN downloads, in it.
A CV with a publication record? That you look at, because getting published is a costly signal.
Posted by: James Grimmelmann | May 27, 2011 7:53:52 PM
Oh, and just to be clear, when I say that "This is a new focus, I hypothesize, because the W&L rankings either didn't exist or weren't known until a few years ago," I mean to say that I am hypothesizing the reason *why* this is a new focus, not the fact that it *is* a new focus. My sense is that this is really something new, and the question is why is it new.
Posted by: Orin Kerr | May 27, 2011 6:37:28 PM
Carl, one thing that I'm really interested in is the editors' focus on the likely number of citations an article will get. My sense is that this focus is relatively new; articles editors 10-15 years ago didn't really think about it. Can you give us a sense of why the likelihood of high future citation counts is so important?
For what it's worth, my assumption has been that it's because the W&L rankings use citations to rank journals. Editors are trying to help the prestige of their journals by publishing high-citation articles, which help them in the W&L rankings, bolstering the reputation of the school and helping the journal attract better articles in the future. (This is a new focus, I hypothesize, because the W&L rankings either didn't exist or weren't known until a few years ago.) Can you give us your sense of whether that is true?
Posted by: Orin Kerr | May 27, 2011 6:34:45 PM
I agree that use of proxies occurs throughout the submission process, but I think categorizing it as a "market for lemons" is an overstatement.
While it's certainly true that proxies are used in the article selection process, they are used far more at certain stages than at others. At the "filter stage," where a large number of submissions are filtered down to a smaller number and then reviewed by a larger committee, proxy use is largely inevitable.
But once an article reaches the committee that extends offers, the use of proxies is generally limited to things that should matter, such as the author's reputation and how many citations the author's work tends to generate. The final decision is based largely on these editors' opinions regarding the quality, originality, and potential significance of the article. These decisions certainly may be wrong, but law review editors are confident enough in themselves to believe that they can make such an assessment.
Posted by: Carl Engstrom | May 27, 2011 6:10:21 PM
All of this is only a problem to the extent that professors rely on journal placement as a proxy for quality. There are enough law reviews that almost anything can be published somewhere. If we really were paying attention only to the scholarship itself, which journal published an article wouldn't matter. Complaints that students don't look at articles closely enough ring a little hollow when our complaints themselves assume that we professors don't look at articles closely enough, either.
Posted by: James Grimmelmann | May 27, 2011 6:00:43 PM
I agree with Orin that this evidences, at most, the use of proxies -- just not letterhead bias, as the post suggests. Where Orin and I might part company is whether it's a "problem." Here, an award from the relevant AALS section might (depending on the competition, criteria, etc.) be a very good indication of the quality of the paper itself, like a reviewer's report, not just an indication about the author.
A less probative proxy was also used by the writer, who cited in the same footnote previous papers that had won awards. Unless he thought the students were going to go out and read those papers, he was banking on the idea that something associational -- probably the kinds of reviews those papers ended up in, perhaps the authors' affiliations -- would be reflected back, through his same award, on his paper. Weak sauce, it seems to me.
Posted by: Ani | May 27, 2011 5:23:01 PM
BDG
What's the point of the Law Review Review data gathering? How do you avoid the omitted variables bias problem that job placement and article quality may be positively correlated? (Subtext: there are lots of uninformative "econometric" studies!)
Jonah
Posted by: jonah gelbach | May 27, 2011 3:31:15 PM
I had something similar happen to me not long ago. (I'm at the same 4th-tier school as Mike). I sent out an article but forgot to attach my c.v. indicating prior publication of a slew of articles, a respectable number of casebooks, etc. For 3 weeks: nothing. I then sent the article to an equal number of different journals, this time with a copy of my c.v. This time: 3 offers from mid-level journals within 48 hours. My take: Orin's proxy theory is spot-on.
Posted by: Rick Bales | May 27, 2011 3:26:20 PM
By the way, the Law Review Review is doing an econometric study on this now. Data gathering is under way.
Posted by: BDG | May 27, 2011 3:16:25 PM
I'm with Ani. In what way could this possibly constitute evidence of letterhead bias? If anything, it shows the bias exists LESS than you (and others) have theorized since you (a law professor at a 4th tier law school) were able to place your piece at a top 30 law review. If, however, the changed variable was you visiting at a 1st or 2nd tier school, then this would support your theory ... but the changed variables are instead spring/summer & no award/your award. What gives?
Posted by: anon vap | May 27, 2011 2:50:32 PM
Theory: law review editors aren't confident that they can properly evaluate article quality themselves. In addition to their own lack of subject-specific expertise, they're operating under severe time constraints. Instead, they must rely on letterhead, expedite requests, and other external certifications.
If so, then the law review market is a market for lemons. Articles have an intrinsic quality, which is unobservable by the buyers at the law reviews. Author/sellers, therefore, look for ways to signal quality.
I'm not sure it's true, but it would explain a lot.
Posted by: James Grimmelmann | May 27, 2011 2:20:24 PM
Interesting anecdote. Just to broaden it out, the general problem is reliance on proxies. It's a classic problem in selecting a meritorious winner from a large set. In the end, students read the article and pick what they think is best. But along the way, they often rely on lots of signals and proxies to narrow the pool down to something manageable that they can read and pick from. Those proxies might be letterhead, prior publication record, academic pedigree, name recognition, prior citation counts, SSRN downloads, a personal faculty recommendation, expedites from other journals, or, in your case, an award from the AALS Criminal Justice Section.
Posted by: Orin Kerr | May 27, 2011 1:34:45 PM
At least one variable changed according to your account: the cycle in which it was submitted (spring vs. summer). And you focus on a different change, the paper receiving an award, that has nothing to do with your affiliation. Anecdotal evidence of what, exactly?
Posted by: Ani | May 27, 2011 1:31:30 PM