Thursday, August 11, 2011
Peer Review at Student-Edited Journals: Best Practices?
Last week brought news, via Bainbridge, that Chicago is joining Harvard, Yale, and Stanford in regularly using some form of official peer review. (I say “official” because many journals also informally solicit faculty input.) Bainbridge is displeased, especially about the short turnaround. I can’t sign onto the whole rant, but there are a few points about the implementation of peer review he highlights that are worth some more discussion.
First, some stylized facts about current practices. I have a survey out now to journals that have used official peer review in the past, and so maybe soon we’ll have real facts. (Chicago editors should feel free to e-mail LawReviewReview ~at~ gmail.com for a survey of their own.) For now, though, my impressions are that: 1. journals often disregard or weight lightly the advice they get from outside reviewers; 2. reviewer comments are not shared with authors; 3. authors cannot respond to reviewer comments; 4. reviewers are anonymous but free to reveal themselves; 5. reviewers don’t know author identity (except perhaps in the new case of Chicago, which does not use blind review). I think 1 & 2 are significant problems, 3 sucks but is probably hard to fix, 4 needs some tweaks and 5 seems a'ight. After the jump: why.
As Bainbridge points out, giving no presumptive weight to your reviewers is a pretty lousy way to motivate good reviews. You’re asking me to drop everything, simply for the good of mankind, and I’m only going to get a couple of days to collect my thoughts. If my views don’t actually matter that much, why would I bother? There’s a strong norm in other disciplines that editors must make an offer if reviewers recommend publication, and at a minimum I think journals should bind themselves to that position. The same is not necessarily true of reject/revise recommendations, since scholars are by nature a disputatious and suspicious bunch, and there may be legitimate reasons for thinking a piece is publishable despite some skepticism by (let’s say) the author’s intellectual foes.
Next, it’s a major problem that reviewer comments are rarely shared. One EIC was nice enough to pass along some reviewer comments to me once, but I haven’t heard of anyone else who’s ever gotten any (of course, student editors rarely respond to any post-rejection communication of any kind). Again, this is demotivating for reviewers (not to mention aggravating for authors). What is the point of thinking deeply about the issues you’re reviewing, if there is a good chance no one will ever benefit from your thoughts? You could contact the author directly, but generally the better norm is that the author shouldn’t know that you were the reviewer (more on that in a minute). And, incentives aside, scholarship would be better overall if authors did get the opportunity to benefit from reviewer advice, especially junior authors with senior reviewers.
In most peer-review processes, providing reviewer feedback to the author goes hand-in-hand with allowing authors to revise to account for, or at least respond to, the reviewer. Some reviews are just wrong, or miss a key point of the argument (especially those that have to be completed in 5 days...), or are, shall we say, “motivated.” If reviewers have make-or-break power, it’s fair to let authors point out possible flaws in the review, or acknowledge its wisdom and make the necessary adjustments. And, as Bainbridge says, circulating responses back to the reviewer gives the desirable incentive to care what the reviewer says. The timing and volume of the student-edited process probably makes this process impractical, especially revisions. But I could see offering an author the opportunity to respond as long as she was willing to commit to waiting long enough for the editors to digest her response.
Lastly, I don’t have deep thoughts about author anonymity, but I do think reviewer anonymity is probably a good idea. And not just formal anonymity, but also an expectation that reviewers not reveal themselves. In fact, letting reviewers reveal themselves if they want is probably the worst outcome, since it gives asymmetric incentives. One doesn’t want reviewers who are motivated by the rewards of the author’s appreciation. But one also doesn’t want reviewers who are motivated by the opportunity to bury an anonymous hatchet in the work of a rival. Revealing everyone eliminates the second but exacerbates the first; shrouding everyone eliminates the first but exacerbates the second; and giving reviewers the option to reveal exacerbates both.
On balance, I’d guess shrouding everyone is the best choice IF it’s combined with giving authors the opportunity to respond, which of course mitigates the hatchet-job dangers. I can’t think of any comparable way of mitigating the problem of over-enthusiastic reviewers, other than just discounting everyone’s views. But then we’d be back to the “why bother?” problem. So, anon + share with author + responses it is.
What do you think?
Posted by BDG on August 11, 2011 at 02:25 PM in Law Review Review | Permalink
Comments
Dear PrawfsBlawg,
I am intrigued by BDG’s suggestions for legal peer review. As you may know, there is a consortium of thirteen student-edited legal journals, the Peer Reviewed Scholarship Marketplace (“PRSM”), that has already put in place and is using a system similar to the one BDG suggests in this post.
PRSM currently uses a double-blind peer review process. PRSM accepts submissions from authors who are interested in having their manuscripts reviewed and published in one of our member journals. PRSM requires that manuscripts be submitted exclusively to the consortium to ensure that reviewers’ work will not be put to waste by having an article selected by another journal before the reviews are complete. The PRSM Administrator redacts all identifying information from a submission and finds at least two expert reviewers who agree to review each submission. Reviewers are given six weeks to complete their reviews. The author is then given an opportunity to see the reviews and to prepare a short response. Finally, the submission, the reviews, and any response from the author are sent to the member journals, who are free to extend offers to the author directly based on the reviews and their own assessment of the submission.
In this post, BDG identified some problems with current peer review practices, which include: (1) reviewer comments are not shared with authors; (2) authors cannot respond to reviewer comments; and (3) reviewers are anonymous but free to reveal themselves.
The way in which PRSM was set up solves all of these problems. (1) Reviews are shared with authors before a submission is sent to the member journals, (2) authors have an opportunity to prepare a response to the reviews (including an indication of willingness to make suggested changes), and (3) reviewers are required to be anonymous (only the PRSM Administrator knows the identities of both the author and the reviewers).
PRSM’s website has more information about our consortium and the peer review process that we use. Also, Harvard Law School Library’s “Et Seq.” blog ran a post recently about the progress that PRSM has made since its creation.
If you are curious about PRSM, please don’t hesitate to check out PRSM's website (www.legalpeerreview.org) or contact PRSM directly at [email protected].
Regards,
Mark Ingram
Peer Reviewed Scholarship Marketplace Administrator
Peer Review Editor
South Carolina Law Review, Volume 63
Posted by: PRSM Admin | Aug 31, 2011 11:39:55 PM
Will:
To be clear, I agree with the final sentence of your Aug. 12, 12:24 comment, that a different system would require a significant commitment by law professors. I don't think a system in which student editors still have a large editorial role but a smaller selection role would make submission to publication years longer, as insufficient editorial staff is one of the key problems in peer-reviewed journals. But that's an empirical question. The problem is both that all law faculty gain something from the current system (by not having to perform external reviews except for tenure and promotion reviews), and that law faculty at top schools make out even better, both because of letterhead bias to other schools' reviews and because of the willingness of some law reviews to publish their own faculty (and of faculty, quite scandalously I think, to have their own students select their pieces for publication, a tradition that dates back a century). So while I don't necessarily agree with your implication that peer review would be an awful burden (since faculty in analogous disciplines review willingly and still seem to be productive), I sadly agree that it's unlikely to occur. A movement to change the system would have to begin at top schools and journals, and as I suggest above, that's just not going to happen.
And meanwhile, I have found in the legal academic fields that I follow that some pieces in top journals well deserve their placement, and others don't -- not because they're bad, but because they're relatively uninteresting or not new, or slick and banal in one of the other ways academic work can be. Meanwhile, the anarchy of the mad scrum, especially for those who need expedites to get the attention of top reviews, can mean that some top journals pay little or no attention to "good" pieces.
To be clear, again: The system isn't bad but could stand at least some significant reforms to ease some of its irrationality. But a better one could be built. The peer review system as it exists in other academic fields that I have followed (both from an earlier academic career and from friends in History and English and PoliSci) isn't great and frequently leads to incredibly frustrating and unfair results as well. It too could stand at least some significant reforms (but unlike in law, I think reforms could be sufficient). From the outside (and having been on the inside, albeit not as an Articles Editor at a Top Journal), as both an author and one-time "peer reviewer," I don't see peer review of whatever form, standing alone, as accomplishing much if anything. I sense that we're not far apart; it's possible that the source of our disagreement has to do with our different views of the law reviews as they currently exist.
Posted by: Mark Fenster | Aug 12, 2011 7:49:33 PM
Mark,
I should therefore clarify that I'm not necessarily saying that "quick look" review "makes up for" the current system-- if indeed the current system is so flawed as to need making-up-for. I'm saying that "quick look" review is one of the few forms of peer review that's consistent with the current system.
The current system, of course, involves the students doing most of the work, involves a huge number of article slots because there are so many journals with so much free labor, involves allowing simultaneous submission to those many journals, and involves quick turn-around time. If law professors, as a group, want to organize themselves to 1, spend a whole bunch of their time working on journals, while 2, contracting their ability to publish their work, and 3, making total submission-to-publication time years longer, THEY SHOULD FEEL FREE.
Posted by: WPB | Aug 12, 2011 12:24:29 PM
There's an important premise behind the way the top journals currently use peer review: that most errors in the selection process at top journals *as a whole* are false positives rather than false negatives. Thanks to the power of simultaneous submission (which the student-run system makes possible), a good article is highly likely to be selected by at least one top journal. The problem is that it only takes one top journal to be hoodwinked by a terrible or unoriginal piece.
IF that premise is right, then the quick-look system makes sense. Peer review is most necessary to keep the uneducated kids at a top journal from being hoodwinked by a lousy piece. Good pieces are likely to find purchase at at least one such journal, even without law professors telling the kids that they're good.
I happen to think that this premise is basically true, so I am basically a fan of what I understand to be the current YLJ model. But it's definitely not perfect. For example, there may be particular classes of piece, such as BDG's "methodologically complex pieces" (and probably some brilliant interdisciplinary scholarship), for which the premise does not hold true. For those pieces, students are just ill-equipped to understand what makes them great, and those pieces may be more likely to end up in peer-reviewed interdisciplinary journals or some of the few peer-reviewed law journals that are springing up.
Also, if you disagree with this premise-- if you think that there are a lot of good pieces that no top journal is ever interested in (and that there aren't enough interdisciplinary and peer-reviewed journals to make up for this problem)-- then it is probably impossible to fix the current system of student-run law reviews.
Posted by: WPB | Aug 12, 2011 12:21:29 PM
I understand the idea of the "quick look," but I guess I'm still not quite seeing how it makes up for some of the basic flaws in the current system, unless one wants to argue that YLJ (and a handful of others) are different, and so all those brilliant student editors need is a fast and loose review by "experts." You still have a process in which only a small number of pieces, selected by students, make it to that stage of review; and then you have one or two reviewers, in a very short time frame and creating only a brief written record, if anything, informing decisions, though presumably only with the power to give an advisory veto. Even if I can be persuaded that adding that step is better than nothing, it's hard for me to see it doing much more than protecting against a disastrous offer. Which, again, is something, but it doesn't address the really major issues in the process.
And just to clarify, if anyone is listening (hi, editors!): I think student editors do a great job under incredibly difficult circumstances. I'm just not persuaded that the scope of their job, and the circumstances under which they work, make much sense, and the scope and circumstances are the result of historically contingent (or path-dependent, if you prefer, depending on your inter-discipline of choice) decisions that have been made or not made over the past century or so. They can and should be unmade -- reforms that can only begin with administrators, faculty, and students at the top schools, who have the most to lose from such reforms.
Posted by: Mark Fenster | Aug 12, 2011 12:07:41 PM
Hmm, I'm feeling kinda persuaded by Will. But I do think if journals explicitly put more weight on reviewer opinion then the reviews would also merit more weight. Whether that would improve the process or, as Mark says, just put an improbable bandage on it is a harder question. For me, the frustration in getting students to understand relatively methodologically complex pieces makes me want more experienced voices somewhere in the process. But maybe it's a hollow hope.
Posted by: BDG | Aug 12, 2011 9:08:41 AM
As a recent YLJ Articles Editor who used a form of the peer review you criticize, I have to disagree. At least at YLJ, at least when I was there, the expectation was that the reviewer comments be more on the order of a "quick look" than anything like the kinds of review done in peer review in other disciplines. So it makes sense that it isn't necessarily something you would send to an author, not necessarily something you would weigh heavily, etc.
And those kinds of quick-look comments make sense in light of the reasons for adopting this model. Law review editors are regularly criticized (on this blog, among others) for publishing pieces that are OBVIOUSLY unoriginal or OBVIOUSLY uninsightful or OBVIOUSLY flawed -- to anybody who knows the field. (I have never heard the complaint that student law reviews are bad because they publish too many things with subtle flaws.) So reaching out to a few knowledgeable people in the field about pieces we liked allowed us to ask, "Are we missing something?" It is not, and is not trying to be, actual academic peer review.
Maybe the practice has changed a lot since I sat on the articles committee.
Posted by: WPB | Aug 11, 2011 10:15:17 PM
I think trying to add a peer-review component to the submission review process is like using a splint to repair a wound. Anonymous peer review works fairly well as part of a controlled, single- or double-blind process. It provides a measure (quite imperfect, mind you) of both fairness and rigor to the intellectual process. When peers are both reviewing and overseeing the process, the peer review process makes sense, even if it has its own flaws and randomness in practice.
But there are so many systemic differences between the student-edited review process and the peer-edited review process that trying to graft one onto the other doesn't really add much if anything. The overwhelming number of submissions, the role of students in making at least the first pass on reviewing submissions, the effects of the known author's identity and institutional home on the student editors who make the ultimate decisions to publish, and the anarchic speed of the "window" are just the four strange components of current law review submission and review process that come to mind first.
A review process that assumes deliberation, anonymity, and iterative submissions, like the academic peer-review process, probably doesn't speak to some of the more significant downsides of student-edited law reviews. To return to the stupid metaphor, the splint probably won't do the wound any harm, but it's hard to see how it's going to help the wound heal. It's a curious fix for law reviews and for a process that -- like the legal academy itself these days -- seems to feel quite anxious about its legitimacy.
Posted by: Mark Fenster | Aug 11, 2011 9:35:59 PM
A couple of points --
1. You ask: where's the incentive? I don't think we have a responsibility to sweeten the pot, especially since professors -- by their very job description -- have a duty to help elevate the standards of scholarship. Currently, most professors aren't working as peer reviewers, and until they are (i.e., until they start their own journals), they ought to play by the constraints of those that are actually doing the hard work. Basically, you want professors to have all the editorial discretion but do none of the actual logistical work. That's not fair to AEs.
2. You ask: why can't comments be shared? I think faculty reviewers would want to stay discreet. Disclosure may chill the candid substance of the comments and provide a disincentive to review.
3. You ask: why can't professors respond? First, given the volume of submissions and quick deadlines, this is not feasible. We cannot set up a bureaucratic tribunal for every submission. Second, professors should circulate their work for review before submitting to journals. They can easily uncover the flaws in their work if they simply ask their peers.
Posted by: Former AE | Aug 11, 2011 4:20:37 PM