
Sunday, March 29, 2015

Why isn't PRSM more popular?

Following the angsting thread this season and reading Dave's thread about professors breaching law review contracts has made me start thinking again about the law review submission process. Everyone, it seems, agrees that the process creates perverse incentives: professors submit to dozens of journals, so that student editors must make decisions on thousands of articles; student editors are forced to make quick decisions in competition with other journals, and so rely on proxies of dubious merit to decide what to read; students at higher-ranked journals rely on the work of students at lower-ranked journals to screen articles.

What strikes me, though, is that the Peer Reviewed Scholarship Marketplace seemed to solve all of these problems when it was created in 2009. It incorporates peer review from subject matter experts (and provides this feedback for authors to strengthen the piece, whether or not they accept a given offer). It takes away the time pressure of the compressed submission season. It protects the freedom of choice for both professors and for student journals; students still decide which pieces to make offers for (after seeing the peer review evaluations), and professors can feel free to decline offers--they are not obligated to take an offer from a journal they don't wish to publish with.

When PRSM was created in 2009, I thought it would quickly become the predominant way that law journals select articles. Why hasn't it? Do more journals need to start using it so that authors will submit to it? It seems like they have a pretty good cross-section already, as there are 20 journals listed as members, about half of which are ranked in the top 50 law journals, and some in the top 30. Do more authors need to use it, so that journals will sign on? Or is there something I'm missing--some benefit of the current practice that PRSM fails to replicate?

Posted by Cassandra Burke Robertson on March 29, 2015 at 07:05 PM in Law Review Review, Life of Law Schools, Peer-Reviewed Journals | Permalink


I submitted once to PRSM because I liked the concept a lot. I had several senior colleagues read it and provide feedback (and incorporated that feedback) before submitting it. The comments I got back from the peer reviewers were pretty dickish (basically along the lines of "I don't agree, so it sucks."). One even gave me a 2 of 5 (below average) on everything (including quality of writing ... seriously?) but no substantive comments. My take at the time was -- with the shroud of anonymity -- law profs are, perhaps, ill suited to peer review as we are a territorial lot and tend to dismiss ideas that are not in line with our own ... or we have a serious need to "win" any argument so we belittle other ideas ... or we are simply assholes. So even the T4 law reviews passed via PRSM. Easily placed it via the regular submission cycle in a T100 flagship (not my best placement, but I needed it to get a grant) and it has been cited by courts, in a SCOTUS amicus brief, in other trial and appellate briefs, and in other LR articles.

Posted by: another jr prof | Mar 30, 2015 9:13:16 PM

I participated as a peer reviewer for PRSM a few years ago. Things were a little different back then. Fewer journals were members (though Stanford Law Review was one of them). And I believe there was no exclusivity; the peer-reviewed process generated comments and recommendations that the author and journals could credit or reject.

I see no reason why I can’t explain a little about the process from my end. I’ve also been a peer reviewer for true peer-reviewed journals, and I think the PRSM process was fairly similar. The PRSM process was double-blind. I was given a questionnaire and asked to numerically rank, on a 5-point scale, various attributes, such as insight, timeliness, analytic quality, etc. I was asked to offer comments (I wrote about four pages of comments), which I believe were shared with the author. And I was asked whether I would recommend accept, accept with major changes, or reject.

Compared to the “soft” peer reviews I've done for general-interest student law journals, the PRSM process was more formal and, since comments were shared, probably more useful as well. In short, I thought favorably of the process, though I'm not sure how authors felt about it. And I'm not sure if anything at PRSM has changed since I was involved.

Posted by: Scott Dodson | Mar 30, 2015 1:22:42 PM

The trend lately seems to be away from ExpressO and toward Scholastica (at least among tier one journals). My impression from the outside is that the best journals are looking for an application process that connotes exclusivity and quality. In effect, they appear to be marketing themselves as a luxury good. If they see themselves this way, then they probably won't want to join PRSM. They want to make the application process difficult and opaque. That's part of what creates their "brand."

Posted by: intlanon | Mar 30, 2015 10:40:09 AM

I seem to vaguely recall that Stanford Law Review was once a member of PRSM. If I am right, I wonder why they no longer are. Like a commenter above, I think a big problem is that no T20 journals participate (well, that combined with the exclusivity component).

Posted by: Anonprof | Mar 30, 2015 12:20:33 AM

@ prof jr., not sure how I would assess your idea on the substance--there are things I like and dislike intuitively, but it certainly requires a much more fundamental shift in academic publishing than what is being asked and promoted by PRSM.

@ dave, there isn't much info on the reviewers, but I noticed a bigger problem, which is this: After clicking through the most prominent journals they list (based on my highly suspect and unpublished metrics!), I found that none of them even refer to their participation in the program. Host and initiator South Carolina does mention it, but even they refer to it AFTER their regular process, which signals "we strongly prefer submissions through Scholastica."

I think this answers why it is not more popular. If nobody is buying and nobody is selling and nobody is marketing, then whither PRSM? If they could get a high enough profile (but not so high profile that the system implodes) journal to commit to using this as their primary selection method, or even to commit to one PRSM article per non-symposium issue, I think it would become a real player. (Again, I am assuming the peer review is done well, which requires the gatekeeping about which Dave is presumably concerned.)

Posted by: waves? | Mar 29, 2015 10:16:15 PM

This is quite fascinating. I don't think this is the right model -- but I do think it's an important step in the right direction.

I've been playing around for a while with the idea of developing a collaborative editing/feedback platform. The idea has three components. First, authors could post papers, which readers could both read and edit/comment on (only the author would be able to see/approve edits & comments). Think a version of "track changes" open to all readers. Second, there would be a karma-based "points" system, to help provide rough peer-assessment of the papers.

And third, and most relevant to this thread, all papers would be considered "in submission" to all journals. So, if a law review editor happens to read an article she likes, the journal can make a publication offer. Or, if a journal is looking to fill a slot, it could look for subject-specific articles. There are several issues and details to sort out for how this would actually be implemented: for instance, whether there is an expectation that such offers be accepted (I would say no, though there might be a public indicator if authors have declined offers), and whether papers could be directly submitted to journals (I would say yes).

No idea if this idea will ever go anywhere, which is why I find PRSM so interesting. It's really great to see others playing with alternative submission models. And I think there is something particularly compelling about a "flipped" model, in which journals seek out papers -- especially if combined with a constructive feedback mechanism to indicate quality to both authors and journals.

Posted by: prof jr | Mar 29, 2015 9:46:41 PM

An additional problem is that they don't appear to be transparent about who the peer reviewers are. Like the first two commenters, I hadn't heard of it, and would be reluctant to trust an article to a system that seems to have flown a bit beneath the radar to date.

Posted by: dave hoffman | Mar 29, 2015 9:33:22 PM

Assuming there is actual and good peer review, which my one experience indicates is pretty awesome when done right, my concern with this--of which I was also unaware--is only the dynamic between this process and the submission season. For example, do I need to submit six weeks (or maybe four weeks) earlier than I otherwise would, in order to get the benefit of the peer review but also coincide with when the participating and nonparticipating journals have openings? If I submit, say, today, will any of the listed journals have openings in six weeks, and, if not, are they going to jump to their August slots (assuming they have any) for this?

I feel like they would be well-served by giving this sort of information or tips on the website, so one could take advantage without meaningfully losing other opportunities.

I also wonder about their "exclusivity." It is not really exclusive at all, right? I am under no obligation to accept an offer, if given. What they really have is a no-expedite system. What would stop an author from submitting and getting peer review while already holding an offer elsewhere? (Not saying that is actually a bad thing, but it undermines the system.)

Posted by: waves? | Mar 29, 2015 9:01:09 PM

I haven't heard of this before now, but it seems like a great idea.

Personally I would be very reluctant to submit to it at the moment for a few reasons. (1) It wants exclusive submission, but it doesn't have enough journals to have a critical mass. (2) It also doesn't have very many top journals. If I usually published in top-20 journals, say, it would be a step down, and even if I didn't, I might want the lottery ticket of a top placement. (3) There's no information about how many submissions they get and how many slots are available. So it might be significantly easier to get a placement this way, or it might be significantly harder.

Posted by: junior prof | Mar 29, 2015 7:25:49 PM
