
Friday, April 20, 2012

Underneath the Law Review Submission Process: Part VI Interviews with Those who Reject Us

In this next post on the law review submission process (see the intro, part I, part II on the timing of submissions, and the part III, part IV, and part V interviews if you are interested in interviews with Stanford and Vanderbilt editors), I interviewed two articles editors, Joseph Ballstaedt (JGB) and Ryan Merriman (RM), and the editor-in-chief, Joe Orien (JAO), for 2012-13.

One quick note before the interview. I was really impressed with something that the BYU Law Review articles editors did this year before giving an offer to an author. They did citation counts for authors to see how their previous work has been received by the academy, as an indication of how important their future work will be. This isn't necessarily helpful for junior scholars who may not have had time for their work to be cited, but I think it can be helpful for more established professors who may not teach at fancy schools but have written important pieces. And as long as this isn't the sole or primary criterion, I think this could be a good objective measure by which to judge authors and articles.

1. Can you briefly describe how many articles you received in this winter submission cycle, when you received the most submissions (if there were any such peaks in the cycle), and generally how you weeded through these submissions?

JGB: We received about 1000 submissions, I believe. We received articles for about a month and a half, beginning in early February and ending in late March. It seemed like the flow of article submissions was pretty steady, but did pick up some toward the end of March. At least that is my memory. We had to make quick decisions at times to find articles worth reviewing further, and many articles were quickly cut based on being too short, not having a clear thesis or engaging introduction, addressing a strange topic (like an article promoting incest), or sometimes based on not having a great publication history. However, we often gave great consideration and even offers to young lawyers and professors if their articles impressed us after a quick read.

RM: Assuming the other four Articles Editors reviewed roughly the same number of submissions this semester as I did, we received somewhere between 900 and 1000 articles. I personally reviewed 189 between late January and about the middle of March. It seems like we were flooded with articles towards the end of February and early March. Given the sheer volume of submissions, it's impossible to thoroughly read through each article. The only time I ever reject an article based on the CV alone is when the author is not a law professor, but I always take a closer look at the article when the author has a strong CV. I'll read the introduction, the conclusion, and skim through the rest if it's interesting. My primary goal is always to pass on articles that will generate citations to our law review. To that end, I look for articles that are not too narrow, make a theoretical/descriptive contribution to the literature, or propose a change to some area of the law. A concise, well-written introduction that clearly states why the article is an important contribution always catches my attention.

2. How many levels of review do you have, and do you have a vote on each article? If so, is it a majority or supermajority vote?

JAO: We have three levels of review: an initial prescreen stage, a reading stage, and a final review. A single editor reviews each article at the prescreen. To reach the final review, each article must have been read and accepted by two articles editors. At the final review stage, I first read the article and solicit advice from faculty members. The article is then presented to all of the articles editors for discussion and a final decision. All of our decisions this year were unanimous.

3. How do you determine whether an article should be accepted for publication?  What factors are most important to you?  Article topic? Author's credibility?

JGB: It was always nice when the author addressed a topic we were familiar with or had interest in; however, we often had to make judgments on articles addressing topics well outside our expertise. In these situations, an indication that we should make an offer or further investigate an article was how well we understood the topic after reading the article. If an author can successfully introduce me to a new topic and a unique proposal within that topic, I am satisfied. To do this, he or she must clearly and simply explain the foundation (the basics) of the topic while still engaging a new aspect of the material in a clear way. If an author cannot do this, it is much more difficult to have confidence in the author's article. Great minds and writers, in my opinion, can explain complex matters in a simple way.

JAO: In my review at the last stage of the process, I focused almost exclusively on the quality of the article and tended to only look at the author's credentials in close cases where we needed to rely on the author's credibility in explaining a complex topic. In judging the quality of the articles, I focused on each article's utility (target audience, relevance of thesis, scope), strength of arguments, writing and organization, and research quality. The best articles obviously excelled in all four areas. For articles deficient in one area or another, my decision rested on whether or not the deficiency could be improved through the editing process (e.g., it's hard to improve an article with a poorly conceived thesis).

RM: In descending order, the most important factors to me are (1) the substantive quality of the article (again, does it make a novel descriptive/theoretical contribution to an area of the law?), (2) technical quality, and (3) the author's prestige. If I read an article with great ideas and polished prose, and it looks like it's already been Bluebooked, I always pass it on to another editor even if the author is relatively obscure. In fact, because we lose so many articles from professors at T14 schools to other law reviews, I really make a special effort to look for young scholars who've written excellent articles. Additionally, I typically reject articles that seem too narrow or seem to belong in a niche journal (technical articles on tax or patents that don't seem broad enough for a general audience; pieces that read more like an econ/polisci/international relations piece than a traditional law review article).

 4. Tell me about the cover letter.  What is the relative value of the cover letter as opposed to the CV?  What were the most effective cover letters you saw?

JGB: I did not put much weight on cover letters. In fact, I only remember reading one or two. This was because we were trained not to read the cover letters and found more use in reading the introduction of the article. I would first review the CV to see whether the author was publishing successfully. Then I would consider the substance of the article, skipping the cover letter and going to the introduction--what any future reader would use to quickly assess the utility of an article.

RM: I'll be honest--after the first 20 or 30 submissions, I stopped reading cover letters. Most of them are generic and sound exactly the same. The cover letters that I thought were effective briefly described why the article is important in relation to prior scholarship, but an effective introduction in the article does that anyway. For that reason, I think the CV is definitely more important than the cover letter.


5. Describe (each of you) the top two articles you saw this submission cycle and why you believed they were the best articles.

JGB: My favorite submission was very clean. It did what every law review article does (or tries to do), but did it better and more concisely. It did not develop any tangents or dwell too long on any aspect of the topic or area of law. Rather, it gave a clear introduction that told me what the article contained, a concise but adequate background of the law concerning the topic, and a clear idea of the author's proposal and addition to this area of law. Essentially, it was your stereotypical law review article. It wasn't fancy. I would have been able to read this article as a beginner to the topic (which I was not) and also as a student of the subject interested in the author's proposal (which I was). I didn't have to read the whole article to find the useful parts because it was organized well. The author did not hide the ball or use long-winded explanations. Other authors usually went on too long, so I was grateful to this author for taking the time to slim his article down.

My next favorite article(s) was any article that did the same. In sum, any article that clearly (and sometimes creatively) made its point and made it quickly.

JAO: My favorite two articles both had excellent writing and organization. When an article has clear organization, road maps, transitions, topic sentences, summaries and conclusions, and signposts, it makes a world of difference in how I perceive it. Both of my favorite articles did this. They were easy to read, and I didn't have to re-read paragraphs several times to figure out what the author was trying to illustrate. And I don't think they were easier to read because of the subject matter (one of them, in particular, dealt with a rather complex topic).

RM: We extended an offer (that was ultimately not accepted) on an article that examined the relationship between tort reform and economic activity. While we are sometimes wary of pieces that involve econometrics (because frankly we're not qualified to evaluate complex empirical work--we're barely qualified to evaluate traditional law review pieces), the paper used a unique data set to evaluate untested empirical claims surrounding a highly contentious, high-profile issue. Organization was clear, writing was crisp, and technical quality looked ready to publish.

The other article that really impressed me proposed a thought-provoking solution to collective action problems that did not rely on the threat of sanctions or the prospect of special benefits to participants. It used some game theory, but in a straightforward, intuitive way I thought was accessible to a general audience. The author applied her theoretical insights to some areas of the law and proposed some substantive reforms. The organization and writing were easy to follow, and the citations were already in good shape, so even though the publication history was pretty sparse, I felt comfortable recommending it.
 
6. What kinds of trends did you identify that we can tell law professors about? For instance, did any authors do anything interesting this year that you wanted to pass on?

RM: I noticed a lot of empirical pieces. Some of them were fairly sophisticated (logit/probit regressions, difference-in-difference estimation, instrumental variables) and others were more straightforward (OLS, simple cross tabs, etc.). Personally, I loved seeing so many empirical papers (full disclosure: I was an econ/polisci undergrad). However, I think in general law students feel a little skittish about accepting complex empirical papers because most have no idea how to evaluate the methodology and results. The best articles evaluated controversial, salient legal issues or challenged long-standing assumptions in the literature. They also focused mostly on presenting the results and discussing their implications and left most of the technical explanation in an appendix.

7. How effective is it when authors are extremely communicative with you with emails and updates?  Is this nice or annoying?  Do you prefer eager authors who may tell you that they are willing to accept an offer if you give one without expediting or does that not help?

RM: I don't think I'd communicate much before you've been contacted by the law review, other than to let us know you've received an offer. Given how much we don't know about many areas of the law, we're always looking for indirect evidence that an author produces influential scholarship that will be cited--publication history, the quantity/quality of citations to the author's prior work, prior work experience indicative of expertise, and of course, offers from other journals. If you let us know that you're willing to publish with us no matter what, that might lead us to speculate that the article isn't important enough to generate interest from other schools.
 
JGB: It is certainly tempting to give more weight to an author who is willing to accept an offer if we make him or her an offer. We as article editors have a lot of work to do, and we would love to have solid articles as quickly as possible. It is somewhat discouraging to know that many of the great articles that we like find offers elsewhere. I am tempted to make an offer to an author whose article might not be quite as amazing but will be accepted rather than 3 or 4 offers to authors with stronger articles but who might not accept an offer.

8.  Do you ignore articles that do not come from expedited reviews or try to balance expedited reviews from reviews of regular articles?

JGB: I tried to give equal weight to all articles, and I generally read them in the order I received them. But as time went on, I did start to favor expedited articles in an attempt to find better articles more quickly. It is certainly hard not to read an expedited article with a presumption that it will be good. We read most articles with a presumption to reject, but an expedite automatically changes that. And they are generally better anyway.
 
RM: I will read expedited articles first, but they do not receive more substantive attention than other submissions. In my experience, most expedited articles that we extend offers on end up publishing at a higher-ranked journal anyway.
 
9. Do you try to obtain a balance in article topics that you are publishing?  First year topics v. non-first year topics?  Public vs. private law?

RM: Not really. If we already have two articles on the same topic in an issue, we might be less inclined to extend a third offer on the same subject. But generally we're most concerned with filling up our publication calendar, as are most similarly situated law reviews. That probably changes as a journal's prestige increases.
 
JGB: I didn't favor any kind of topic, at least not consciously. Rather, I tried to find articles that would be read and would be cited. I wanted to add to current legal discussion, wherever it might be. Sometimes a topic that I found very interesting was not likely to receive attention, so I didn't give it as much weight, despite my own interest in it.

JAO: I tended to favor articles with broad applicability. Although a broad thesis can quickly become unmanageable, I tried to look for foundational articles that would lend themselves easily to further discussion in academic circles. I think a thesis can be too narrow in any area, so I'm not sure how much the subject matter affected my thinking.

10. You have the floor here to advise law professors on their articles. What are some tips that you would give professors to improve their articles?

JGB: If you co-write an article with a less prominent author, it is still your work. I feel that a few great authors let co-authors use their names, and the resulting articles weren't very impressive. Also, good Bluebooking and removing simple typos can make a great difference, for two reasons. 1) We don't want to have to get your article ready to publish--that means a lot of work for us. We want it to come ready to publish so we can polish it off. 2) Poor Bluebooking and editing make your article lose credibility. Maybe we as law students put too much emphasis on Bluebooking because of the many edits we do, but it is still something that we value (unfortunately). And grammar errors and other typos just make an article look unprofessional. That goes without saying.

JAO: I'm sure most professors already recognize this, but having student-edited journals (as opposed to peer-edited) means that we are frequently unfamiliar with the law underlying each article. Unless we've taken a course on the material covered in the article, we often rely on the article to explain it for us. So if the article doesn't explain, even briefly, the underlying law, we will naturally find it more difficult to understand than an article that builds the blocks necessary to understanding the analysis. In fact, as I'm sure is the case with most editors, the enjoyable part of screening articles is learning about various areas of the law. I think we will often be drawn (perhaps unconsciously) to those articles that attempt to educate the reader generally.

RM: So many factors that persuade me to recommend an article are out of an author's control by the time they submit an article. Does it matter if you went to Harvard or Yale or clerked at the D.C. Circuit? It certainly doesn't hurt. But setting those types of factors aside, the bottom line is to make the article as ready for publication as you possibly can prior to submission. So many authors see law review editors as an army of (free) research assistants. If something in your article seems like a pain to edit, we're not particularly excited about doing it either. If an author is clearly an established expert in his/her field with an impressive CV, or if the substance of the piece is particularly compelling, we're more willing to take on a difficult technical project. But if a young scholar with a short publication history submits a technically deficient piece, it's harder to justify taking on the extra work.
 
11. Feel free to add anything else that you think may be helpful.

JGB: One of the things that I loved to see was an article in the standard (if there is a standard) law review print format. Rather than a double-spaced, normally formatted manuscript, an article in law review publication format looked more like a publishable article. Though this probably shouldn't matter, this format made me feel like the article was law-review bound--rather than just another double-spaced research project that we as students have written and read thousands of times.

RM: I'll briefly echo what my colleagues have mentioned regarding the importance of laying a little groundwork before launching into your analysis. As second- and third-year law students, there's a lot we don't know about many areas of the law we're required to evaluate. We rely a lot on the article to establish a foundation. The most effective pieces, in my view, do several things: (1) provide a brief background that orients me to the prior scholarship and existing legal precedent, (2) identify a gap/problem/misconception in that area of the law, and (3) explain briefly how the article responds to that gap/problem/misconception. The best pieces lay out all three things in the introduction.

Posted by Shima Baradaran on April 20, 2012 at 11:20 AM in Law Review Review | Permalink


Comments


So, there are some 800 law reviews. As the editors here say, they only consider articles from law professors. So how hard is it to get published?

Posted by: Steve | Apr 20, 2012 2:23:50 PM

I'm fascinated by the idea of using prior-author citation counts as a factor in publication decisions. If any of the editors are available, I'd love to know a little more about how this works-- How much of a factor is it? Are judicial citations or academic citations more important? How much effort do you put into standardizing the counts? (Do you discount by the number of years a piece has been out? Do you find and discard self-citations?)

Posted by: William Baude | Apr 20, 2012 3:09:12 PM

Not all law reviews refuse to publish non-professors. I'm two-for-two the last couple of submission seasons, both at top 100 law reviews, and one at a better-ranked school than BYU.

Posted by: Joe (not that one) | Apr 20, 2012 3:09:29 PM

William Baude,

I am an Articles Editor for the BYU Law Review, and we weigh an author's prior publication history and citation counts quite heavily. Westlaw makes this easy to do. The reasoning is that professors whose articles are cited hundreds of times and who have very strong CVs are likely to be cited heavily in the future as well, thus bringing more citations to our law review (the single most important factor for law review rankings). We also do this in an effort to gauge the likelihood of acceptance, since many of these same professors are unlikely to accept an offer from schools similarly situated to BYU. This does not mean that newer professors who have only recently published will not get reviewed; when their publication history is scant or they are cited only a few times, we take an even closer look at the piece itself. We have discussed and recognize that pieces from up-and-coming professors will generate many citations as their work gains recognition.

From my experience, law review citations are more important than judicial citations, but our Westlaw searches do cover both. There is not an efficient, uniquely scientific way to approach citation counts, and thus we do not standardize the counts or discard self-citations. That said, it is important to remember that we consider citation counts to be an important factor, but not the most important one. We remain committed to finding articles that are relevant, well-researched, and likely to generate discussion in the academic community and beyond.

Posted by: Spencer Driscoll | Apr 20, 2012 9:36:33 PM

Why is your goal attaining the best possible "ranking" for your law review through maximizing citation counts?

Posted by: Mark | Apr 21, 2012 8:07:12 AM

No offense to the staff at BYU or prior law reviews interviewed (all of whom have been extremely forthcoming), but this series is just reinforcing my pre-existing feelings about why the American law review system is so deeply flawed. Just to take a few examples from this interview, an article might be rejected because it: is too short, uses an atypical structure, is on a subject the editors are unfamiliar with, is not expected to receive a certain number of future citations, uses empirical methodologies not traditionally common in law review articles, was written by anyone other than a law professor, or does not have a long section laying out the prior law. On the other hand, an article might be accepted if: the author went to Harvard or Yale, the author clerked on the DC Circuit, the article is typeset to look like a law review article instead of a manuscript, the author's prior articles were heavily cited, it was properly Bluebooked, or it was on a subject familiar to the editors.

In aggregate, you see a system built with huge biases in favor of articles that literally *look* like law review articles, or that have been written by authors already screened for by others. All of the editors interviewed have said they are constantly looking out for the next big author, but that is not really consistent with the criteria used for selection (unless the "next big author" went to Harvard or Yale and did an elite clerkship). Indeed, the criteria seem almost designed to prefer individuals already preferred by the system—when hiring decisions are often based off of (1) where one got her JD and (2) what/where one has published, it is deeply frustrating if (2) is also dependent on where one got her JD.

Posted by: Charles Paul Hoffman | Apr 21, 2012 8:20:14 AM

Charles Hoffman,

I understand your frustration. Unfortunately, given the hundreds of submissions, it is impossible to give each article a full review, and much easier to rely on heuristics, as BYU does.

Still, I've noticed plenty of authors at non-elite institutions (even from T4 schools) published in elite law journals. I think it's an uphill struggle, but it's possible with a quality piece.

Posted by: Former Articles Editor | Apr 21, 2012 11:03:49 AM

Hoffman> Yes, agreed.

What we have here are law review editors openly admitting that what matters in terms of getting an offer is the author's prior publication record and law professor status.

For crying out loud, law reviews are the only area of academic publishing where authors are encouraged (required?) to submit their CVs! For what purpose? To ascertain whether the author was on law review himself? To determine the author's prior publication record?

This is just downright shameful. As I follow Paul Campos's blog, I'm really starting to wonder whether law professors have any shame at all. I'm so glad I moved to academic medicine.

Posted by: Me | Apr 21, 2012 11:14:45 AM

There is a reason to view an author's publication record before extending an offer to publish: to ensure that the current article isn't preempted by that author's prior work. It's not uncommon for authors to submit an article that largely rehashes points made in a prior article, and being able to quickly sift through their publications to see if that's occurring is valuable.

Using citation counts as a basis for acceptance, though, serves, in my view, as an impermissible entry barrier that gives authors with high citation counts an incentive to shirk in future submissions. I'm glad BYU is open about their practice, but it's a practice worth condemning.

Posted by: Former AE | Apr 21, 2012 1:13:25 PM

Why on earth would a law review cite be more important than a judicial cite? Why would a cite by a peer be more important than a cite that demonstrates that the article actually had some influence on the law?

Posted by: Anon | Apr 21, 2012 1:38:02 PM

"There is reason to view an author's publication record before extending an offer to publish.."

Funny how every other field of academia doesn't need to do this.

Posted by: anon | Apr 21, 2012 2:32:28 PM

anon,

Most fields of academia have only a few main journals, so checking for preemption is relatively easy without author identification.

Of course, only one editor would need to know the author's identity to run a preemption check, so the publication record still shouldn't even unintentionally affect the votes of the rest of the editors.

Posted by: Former AE | Apr 21, 2012 2:51:42 PM

I think many of you are being unfair to the editors. You must understand that the journals receive many hundreds of submissions (1,000 or more) in the Feb-March cycle. Do you have any idea how many of these pieces are garbage? Do you realize how this burns an editor out? Of course the editors should, and indeed must, take into account other factors such as prior publications. And there is nothing wrong with looking at the number of citations. I think the editors are taking a balanced approach--it is more difficult for young or less-cited authors, but it is not impossible. That is just the way it is. And by the way, no, the system is not perfect, but my "beef" is that many journals do not bother to check the history of the submission. I don't appreciate receiving a rejection 3 months after I expedited. Obviously the submission was accepted elsewhere, so why don't the journals check their ExpressO before sending out a rejection?

Posted by: ex-dist.ct.law.clerk | Apr 21, 2012 3:00:12 PM

I think many of you have misinterpreted the role citation counts play in our acceptance process. It is one of many factors in a holistic evaluation, and no single factor we identified above is determinative. Most of the pieces we're publishing this year are from young authors with sparse publication histories. To be clear, the most important component to us is the quality of the piece. I have never rejected a well-written article because the author had a low cite count in Westlaw or did not graduate from a T14 law school.

Posted by: Ryan | Apr 21, 2012 3:40:54 PM

This is a great series of posts.

My view is that for all of the considerations not directly related to the merits of the piece, good articles will get picked up somewhere good because there are so many good outlets. Even an author at a 4th tier school will get fairly considered at the 100th-ranked law review, and an offer there starts the expedite process.

Posted by: Jack | Apr 21, 2012 3:43:57 PM

That citation counts are one factor in a holistic process, and that articles editors need a way to separate wheat from chaff, doesn't provide a reason why citation counts are a relevant or appropriate factor to consider.

Posted by: Mark | Apr 21, 2012 4:03:47 PM

I appreciate all of the comments and questions here on my interview.

On the citation count issue--I just want to make one point. In all three of the interviews I conducted, with Stanford, Vanderbilt, and BYU, the editors noted that expedited reviews made a difference. So, where a person had previously received an offer made a difference to the current editors reviewing the piece. And ExpressO specifically instructs authors to submit their CV, as that is a very important factor for articles editors considering a piece.

Given that information, why is looking at citation counts any more objectionable than looking at where the author went to school or where they have published in the past or what another law review thought of their work? I can certainly understand (and agree with) the criticism that none of these proxies should be used at all to determine what makes good scholarship--and every piece should be reviewed blindly, but I certainly don't see why citation counts are any more objectionable than looking at a person's CV--or what another law review thought about the piece. After all, law reviews (like legal academics) seek to be relevant. And publishing people who are likely to be cited is a logical way to achieve that.

Posted by: Shima Baradaran | Apr 21, 2012 4:27:37 PM

Ok, say citation count might have some merit as a factor. That being said, when assessing citations, I still don't see why cites from the judiciary are not more important than cites from another professor. Does anyone have any insight on this?

Posted by: Anon | Apr 21, 2012 5:42:49 PM

The obsessive status consciousness law reviews display when considering authors is but the mirror image of the obsessive status consciousness authors display when considering law reviews. If we didn't use journal rank as a proxy for article quality, it wouldn't matter that editors also look to proxies.

Scholar, teach thyself.

Posted by: James Grimmelmann | Apr 21, 2012 7:48:33 PM

Former AE—I see your point, but in my mind, peer review is much better at meeting these goals than the current system. Other scholars familiar with your specialty area will be much better than an articles editor at knowing whether an article says something new or is just repeating what's been said before. That said, if a journal is seriously concerned about this, the two-stage review process used at most peer reviewed journals could solve this problem (for those unfamiliar, typically an article is first reviewed by one of the main editors for the journal, who makes a decision as to whether it is an appropriate fit; it is then sent out to double-blind peer reviewers—there is no reason why that initial editor, who knows the author's identity, couldn't do a quick check to verify that the same thing hasn't effectively been published before by the author).

Anyway, I acknowledge my complaint here is really about the entire structure of the American legal publishing system—I would much rather see more specialty journals, rather than so many generalist journals, as well as more peer reviewed, rather than student-edited, journals. I doubt it will happen quickly, both because students are likely to be opposed to ceding power and professors are likely to be opposed to taking on additional responsibility (peer review is a lot more work than the current system, and exclusive submissions are much riskier). But I see a lot of subject areas moving into specialty journals, as they become so complicated that 3Ls are simply incapable of making reasonable publication decisions (you can already see this in a number of areas, such as my own (legal history)). I honestly don't know what it will take to change things in the generalist reviews, but I suspect it will have to be led by the top schools/law reviews.

Posted by: Charles Paul Hoffman | Apr 21, 2012 9:25:50 PM

It seems to me that one way to reduce the need for student-edited law reviews to rely on proxies for quality (publication history, letterhead, etc.) would be to stretch out the submission season. I have never understood why the system has developed to allow only two windows for submissions. I realize that finals season will by necessity be a slow period for reviewing articles, but why, for instance, don't more reviews open up for business in January, thus alleviating the crush of articles in the February and March time period? It seems to me that if a critical mass of law reviews announced that they are considering submissions on a rolling basis throughout the school year, with the exception of 2-3 weeks during finals, they wouldn't be as overworked as they are in the current system, which would give them a better chance to assess the quality of each piece, rather than being forced to rely on proxies.

Similarly, I understand why many reviews shut down for the summer, but also know that others do not (Chicago, Minnesota). However, if they could find a way to run over the summer, even remotely, this would again help alleviate the crush of the February-March season.

More modestly, even simply opening up a 10-week submission period during each semester would seem to dramatically improve the situation on both sides (students and professors), while not impacting the students' exam periods or summer.

Unfortunately, we inexplicably have the worst possible system for all involved.

Posted by: Anonymoose | Apr 22, 2012 9:19:34 AM

Charles,

Your points are well-taken, and many of the top (student-edited) law reviews are indeed moving to a hybrid system: Stanford, Chicago, Yale, and Harvard all employ peer review.

I think it would be unwise to rely on peer review as the sole source of preemption checking since 1) peer reviewers, even specialist ones, aren't always reliable in knowing the latest scholarship in the field and 2) peer review is a scarce resource, and most law reviews would rather not waste it on an article that's obviously preempted by an author's own prior work.

I agree, though, that a two-tiered system is a good way for a law review to go about checking for preemption without revealing the author's identity to all of the editors.

Anonymoose,

You're right. The fact that most articles are submitted in two blocks is the single biggest reason articles editors feel pressured to resort to proxies. Most law reviews are open at other times of the year--not just during these two periods--so, while we're clearly in a bad equilibrium, I think the solution is not obvious. Law reviews can say what they want about being open all year, but risk-averse professors seem keen on still submitting when everyone else is submitting.

Posted by: Former AE | Apr 22, 2012 12:07:37 PM

So long as an appropriate weighting for "maturity" of the author and the citation practices of a field are considered, I think the citation counts can be a useful addition to the decision-making process. But I might suggest using Google Scholar in addition to (or maybe even instead of) Westlaw, as that will pick up more peer-reviewed journals and books, while still getting law reviews. Influence on a broad range of fields should be valued, but will often be missed by Westlaw.

For reasons nicely stated by Dave Hoffman here (and in the linked items therein), I think that "preemption" is given far too much weight by law review editors, and should be scaled back as a consideration.

http://www.concurringopinions.com/archives/2012/04/preemption-checks.html

Not because it's good to publish the same exact thought twice, but rather because it's really quite rare that an article is "preempted" in a way that ought to matter.

Posted by: Matt | Apr 22, 2012 12:27:25 PM

I believe that most law reviews open for submissions as soon as they select a new slate of articles editors. And I think most law reviews close over the summer because students are unlikely to be in the same place, and group dynamics change dramatically when meetings can't be held face-to-face. I think there's still some room to expand the duration of the articles season, but one shouldn't expect a really dramatic change without a bigger shake-up to the journal system.

Posted by: William Baude | Apr 22, 2012 1:51:42 PM

I agree that professors share a good portion of the blame by overwhelmingly submitting articles during the current submission windows. However, speaking personally, the reason for that is a fear that law reviews won't actively consider my pieces outside of those windows. If a critical mass of law reviews explicitly stated that they would consider all pieces equally over the first 10 weeks of each semester, and hold a few spaces for the end of those time spans, I wouldn't hesitate to submit off-peak.

Posted by: Anonymoose | Apr 22, 2012 3:06:59 PM

The two-tiered systems at Stanford, Chicago, Yale, and Harvard are not anonymous. That is a myth. At a minimum, one of the articles editors knows the author's identity and, at that submission stage, screens articles for status and affiliation. (This is in fact the case with peer review as well.) Neither of those screening tools, in my opinion, has aught to do with article quality. Based on my many conversations with articles editors over the years, the initial screener often passes that information on to the board.

Posted by: AnonProf | Apr 22, 2012 10:22:33 PM

While I agree the current system has flaws, I'm not sure peer review is the best solution. One problem is the potential for self-interested review. The people asked to be reviewers often have articles under submission at the same journals at the same time--and they'd rather their article be published than the one they're reviewing. I'm also skeptical that review will reliably be blind. Good articles are often circulated to academics in the field, posted on SSRN, presented at workshops, and discussed on blogs in the months ahead of submission. Prawfs ubiquitously talk about works in progress in their field. By the time you get to peer review, it's not unlikely that reviewers will recognize the work and know its author. Finally, law is not a science. You can objectively critique descriptive legal analysis, but efforts to develop doctrine beyond its current state are not readily susceptible to objective verification. Some of the best articles challenge prevailing wisdom and make provocative normative claims. Peer review may prevent the publication (or prestigious placement) of good articles that are disfavored for political or ideological reasons.

My point is not that peer review is necessarily a bad idea. I just think we should be cautious in assuming it is the perfect solution to the admittedly flawed system we have now.

Posted by: pleepleus | Apr 23, 2012 10:17:54 AM

While the current system appears strange on its face, it makes a lot more sense when we admit that the primary function of many journals is to better train law students, not necessarily to publish significant scholarship. The submission process is just a necessary step in setting up all the other important tasks law review members perform, including drafting their own work to publish alongside professors. Overall, serving on a law journal is one of the most rigorous research and writing opportunities a law student will ever experience. No other part of law school even compares. Those who get the opportunity are forever substantively improved in their writing and research. The quality of student articles that can be found at most any journal is a testament to this.

Because our profession spends a disproportionate amount of time writing and our students get so much out of journals, we have far more journals and articles than "professors" need. I suspect we have far more than most other disciplines. For students, I see no downside to creating more journals, but the enormous number of journals creates an artificial aggregate demand for articles. Professors respond by creating additional scholarship to fill these journals and, as the articles editors suggest, much of it is of low value (and I don't exclude some of mine from this category).

Because most professors submit every one of their articles to the top 50 or 100 journals, editors do not have the luxury of picking the best articles from a stack of good articles. Rather, they must sift through a lot of poor articles to find the good ones. The waste of time involved in carefully reading every article would be enormous and, under current staffing, probably impossible.

One answer might be to expand the size of all law review staffs so there are more students to do the work (although this would not reduce the inefficiency). Another would be to consolidate journals at individual schools. This would shrink available publication slots while maintaining student opportunity. But because part of the bonus of being on law review is its exclusivity, I doubt the flagships would be interested.

We often blog about the need for professors to participate more heavily, but professors only make sense on the tail end of the decision process. If they participated in the screening process, they might spend even more time than students reviewing articles. Surely professors' time is more valuable than students'. Unless deans lightened professors' loads in other areas, which I would not recommend, I don't think most of us could keep up with or complete the law journal's work.

There must be other solutions, but my general sense is that we tolerate the current flaws in the system that disserve some law professors, practitioners, and scholarship because the current system serves the overall interests of better educating law students. I am not sure there is much "good" legal scholarship that goes unpublished. Our real gripe is about whether it makes its way into the top journals. The costs of correcting that problem likely outweigh the benefits.

The only serious dangers in the current system are bad habits students might develop during the submission process, but I find most editors thoughtful and professional enough to protect themselves and each other on that score.

Posted by: Derek Black | Apr 23, 2012 10:43:46 AM

I agree with pleepleus--blind peer review is difficult for heavily workshopped pieces. Why not have a system whereby every law review publishes only its own faculty (based on first-named author) and alumni (if the author is not a law professor)? In other words, why not eliminate the student-run law review article placement market altogether? That way, the law journals associated with a particular institution can serve as a means of chronicling that institution's scholarly output, rather than as a dubious signal of quality under the current system.

Posted by: anon | Apr 23, 2012 6:50:57 PM