Tuesday, March 17, 2009
Faculty Influence on Article Selection at the Law Reviews
I wanted to build on Dan's comments about the fascinating thread over at Brian Leiter's place, but then take them in a different direction. What I found most interesting were the comments about faculty influence on the selection process. Here are a few examples:
- from Elisabeth: "HLR and YLJ get multiple faculty reviews before publishing anything."
- from Brian Leiter: "[O]ne thing that is unclear is how much of a role in article selection faculty play at the elite law schools. They clearly play more now than even fifteen years ago, but is it really the case that any article published in Yale or Harvard or Chicago or Columbia is really a 'peer' refereed article at this point?"
- from L&E Scholar: "[S]omething interesting appears to be happening at Stanford. They sent me an econometric article they were considering (& I'm not in-house). Also, they asked me if I would agree to be on a standing referee list/advisory board for this kind of work. My respect for Stanford went up a lot after this since I see it as a push toward a real refereeing system . . . ."
- from Jeffrey Kessler: "I can confirm what is going on at Stanford. As of a few months ago, Stanford Law Review's new policy is to have all articles peer reviewed before they are accepted. (We had to make one exception because another school gave the author a one hour exploding offer.) Since this policy was put in place, most articles we selected have been reviewed by more than one professor, and we've made a special effort to reach out to experts at other schools. Faculty from across the country have been very gracious in giving us thoughtful, incredibly insightful commentary."
- from Frank Cross: "I think it's good what Stanford and other schools are doing, but it's not peer review. I was one of those 'faculty from across the country' who commented to Stanford's law review on a submission. It wasn't at all the same as a peer review, in part because of time constraints."
I would love to get both descriptive and normative commentary on these developments. On the descriptive side: How many law reviews are doing this? Is it a formal or informal process? Are written comments asked for? Is the author notified of the process and/or the comments? How influential are the comments on the law review? Has any review gone forward in the face of a "no" from a faculty member? And are there other, more informal methods of faculty involvement, such as when a faculty member from the home school sends over a submitted article with a positive note?
On the normative side: Is this faculty influence a good idea? If so, what is the best method of implementing it? How strong should the role of faculty be in the process? How transparent? Can "walking an article down" be considered a form of peer review?
Perhaps this is a form of self-preservation for student-edited journals and the current, much-bemoaned law review system. It responds to one (although certainly not the only) major criticism--that 2L/3L students are not capable of truly evaluating legal scholarship--by adding a more knowledgeable reviewer into the selection process, while maintaining the current regime. I would argue that the best way to implement it would be something like appellate review of the student-board decision on a clearly erroneous standard. No walk-downs and no pre-selection input; give the students their say and have faculty review as a check.
Posted by: Howard Wasserman | Mar 17, 2009 2:23:38 PM
As a fairly recent editorial board member of a law review, I question the assumption you and many others seem to have that student law reviews are invested in protecting the current system of legal scholarship publication above all else. As a student editor, I saw my job as helping authors with good articles get them into print with as few errors as possible so that they could be read by others. This job consisted of screening articles and then of checking the citations for accuracy in format and content. It took up many hours of my day that I would have rather spent studying or socializing, but I enjoyed doing it because I felt like I was helping out the authors both by getting their articles to print in a timely way and by double-checking the content of their citations for accuracy. I knew then that I wanted to publish articles myself some day, so I felt like I was putting in my time learning some of the technical details in the hope of improving my own work down the road.
I think many student editors, me included, would have been just as happy to turn the entire process over to peer-reviewed, faculty-run journals. But for all that everyone agrees these would be superior, the problem is that I don't see many faculty jumping up and down volunteering to run them. From my vantage point, the job seems to fall to students mostly because no one else is willing to do it.
And I'm absolutely sure that my law review and every other made mistakes--both by publishing things that weren't very good and by rejecting things that were. But at the end of the day, does it really matter so much? There are a lot of journals, and almost every article will get published somewhere. And isn't getting it into print so that it can be read by others the ultimate objective? Does it matter so much that an article doesn't get accepted by Harvard and instead ends up in a lower-ranked law review? I hear professors say sometimes that placement matters because it can weigh into things like tenure decisions. But if this is the case, then perhaps the frustration and efforts for reform would most effectively be directed at the faculties who are making these important decisions on the basis of 2L/3L student editor opinions rather than at the student editors themselves, who are really mostly just doing the best job they can in a task no one else seems eager to take on.
I guess I would like to know why it matters where an article ends up or why it is urgent that journals make better decisions about what to publish or not to publish. I guess if someone believes that institution quality is a good proxy for student quality and believes that "smarter" students will make better editors, it makes sense that that person might want to publish in a journal housed at a more highly ranked school for the sake of getting better editing. But I've never heard anyone make that argument. In fact, the argument I always hear is that professors don't think student editors add much value anyway. So I'm puzzled about why all of this matters so much.
As a former law review member and aspiring law faculty member, I would love to hear people's opinions on this. Why is it important for law reviews to do a better job of screening for quality?
Posted by: Christopher | Mar 17, 2009 4:34:35 PM
Accurate law review rankings are valuable for roughly the same reason as accurate FDA ratings of meat. Every consumer could do the evaluation themselves, but the transaction costs are a lot lower if there's only one rater, and that rater is accurate.
But note that there are some consumers for whom an accurate grade is sufficiently important that assessments should not be outsourced (e.g., hiring and tenure committees, or four-star restaurants).
By the way, to answer Matt's question, I've seen a top journal ignore the (negative) advice of my very learned tax colleague after asking his opinion, and on the flip side, been told that another journal ignored the advice of an outside reader to publish my work. While I wouldn't say that editors should follow every piece of advice they get, it seems to me that ignoring advice is a poor tactic for getting more of it from the same source.
Posted by: BDG | Mar 17, 2009 5:23:40 PM
What do you suggest a law review do when it receives one positive response and one negative one? We usually asked a couple of different people to review our articles. I'm hesitant to say which law review it was because it was a few years ago, and I do not know if its policy remains the same. When we sent an article to two faculty members, my recollection is that their opinions were unanimous, either for or against, only about half the time.
Posted by: Christopher | Mar 17, 2009 8:09:53 PM
BDG--I don't understand--who is the consumer of scholarship (or scholars) who needs the accurate rating, if hiring and T/P committees are supposed to read the work themselves? Why in the world would anyone need a proxy for determining the quality of scholarship?
Posted by: Confused Prof | Mar 17, 2009 11:10:35 PM
It's not time for my next guest-blog stint, but I'll say this about the usefulness of article placement as proxy. There are lots of times consumers of legal scholarship have to make decisions about what to read or (especially) what to cite, other than when they are hiring or promoting other scholars. For readers who may not know the field well, placement is a useful heuristic in sorting through which articles deserve more careful attention. That's all I'm claiming. Accurate? Nope. Time-saving. That is all. But, on the other hand, that's a lot. Reducing transaction costs, after all, is why we have corporations. Also, if you buy Coase, why we have government.
For the journal with the rare luxury of two outside opinions that conflict, probably the best diplomatic option is to tell the outside reader whose views you will not follow of that fact.
Posted by: BDG | Mar 18, 2009 9:16:45 AM
I think the experience from outside law (and I'd be shocked, given my experience and conversations, if the same isn't currently true inside law to some degree) is that article placement serves as a proxy in hiring and promotion as well. If nothing else, it gets people noticed for more careful reading and helps make arguments about who is "important" and "moving" in the field.
I've been asked to review article manuscripts for law reviews. Sometimes it was before a publication decision had been made. Sometimes after the article had already been accepted. I'm not sure my advice ever made much difference on the outcome, and it certainly did not provide much feedback to improve the article. Nothing like a real peer review process. BTW, for a top tier peer review journal, if you have multiple good reviewers and they disagree -- then you reject the manuscript unless you as the editor are in a position to evaluate the quality of the reviews and override a poorly thought out negative review.
Posted by: kw | Mar 18, 2009 10:11:58 AM
Christopher, if all that mattered was getting articles "in print," SSRN would suffice. The fact of the matter is that we in the rest of the academy (I'm an engineer) want our journals to signal quality. We have peer review in an attempt (not always successful) to evaluate the merit of authors' articles so that we only publish those articles that are novel, accurate, and otherwise praiseworthy.
I recognize that in some kinds of scholarship it's harder to judge the quality since the material essentially only expresses the opinion of the author rather than reporting the results of physical experiments and/or making testable predictions, but that's a relatively minor difference between the legal and engineering academies given that basically every other humanities discipline uses a peer-review process.
Given that the audience for law journals is the legal academy, it seems to me that law profs would want to be heavily involved in the process. Why they are not is beyond me.
Finally, not to be disrespectful, but I doubt that your primary reason for being on law review was to help articles get published. I'd guess that getting a law prof's job had something to do with it, too. If students didn't select articles and only did cite checking, I doubt being on law review would be nearly as prestigious. Thus, I would bet that students are highly motivated to maintain the status quo.
Posted by: billb | Mar 18, 2009 10:32:10 AM
BDG, I agree that telling the recommender whose opinion was not followed would be a good idea, but it would also have violated the policy of confidentiality with respect to outside reviews. KW, your suggestion sounds much easier in principle than in practice. Often one faculty member, an expert in one field, would give negative feedback on an aspect of the article outside his or her own area of expertise, while the second, positive review came from someone working squarely in the field the first reviewer found problematic. It also happened that one professor would point us to an existing article, believing the point had already been made there; when we read that article, the assessment didn't seem all that accurate, and a second professor whom we asked specifically about it--often because of the first professor's feedback--felt the submission was not preempted.
I hate to be overly critical, but it seems to me from the comments here and elsewhere that people touting the value of faculty reviews have often not read very many. Faculty reviews are helpful and do have value, and we were very deferential to them in most cases. But they are not the golden ticket of article quality assessment that many people seem to think they are. Professors sometimes get things wrong or don't read the article carefully or have an ideological axe to grind with the article and can't separate their personal opinions on a subject from their assessment of someone else's theory. My guess is that if all selection were done by professors, there would still be many, many complaints about bad publication decisions. Fewer, yes, but not none or even close to none.
Posted by: Christopher | Mar 18, 2009 10:55:49 AM
Since my primary appointment is outside law, I've both read and written a ton of real peer reviews. It sounds like Christopher's problem was partly a bad selection of faculty reviewers (why pick faculty who don't have expertise?), and partly a willingness to second-guess the expert advice they were given. Perhaps the reviews weren't that good (which wouldn't be shocking, given how law review editors have asked me to review pieces and the limited experience law faculty have with the peer review process), and perhaps the students knew better than the faculty which articles were good and should be published (but color me skeptical). As I said, in a real peer review process, good selection of reviewers at a top tier journal creates a default assumption that disagreement among reviewers = rejection. The default can be overridden if the editor is in a position to discount a review or independently evaluate the merits of the manuscript. Of course, in a real peer review process you can go through revise and resubmit, so I don't know what to make of these informal advisory efforts by law reviews.
And yeah, peer review doesn't come close to getting rid of complaints about bad publication decisions. Just changes the nature of the complaints.
Posted by: kw | Mar 18, 2009 12:09:26 PM
This is a very interesting discussion, but one of the factors that hasn't been mentioned is time. To get good feedback on an article, you have to give the referee plenty of time to read it. A week is unlikely to be sufficient. I have no idea how much time these faculty members are being given, but one of the reasons why law professors like student-edited reviews, and most other professors find the whole thing astonishing, is that the author can get into print very quickly, precisely because very limited time was devoted to gatekeeping. Of course student editors do a valuable service, and can do that during the editorial process as well as during the acceptance process. But most academic journals do not accept articles and then try to persuade the author to change things--they often accept articles contingent on further revision.
Law professors are often outraged by this suggestion. On a law journal where I was an editor, we tried a revise and resubmit once, and the response was unbelievable -- which is to say, believable coming from someone unused to the idea that actual revision might be a condition of publication, unlike suggestions that could safely be ignored once the piece had been accepted.
Faculty input is a good idea, but the suggestions that someone can make if given only a week or two to respond are not likely to be terrific, unless the query happens to arrive at a convenient moment. Often, the virtues and faults of an argument become apparent only after time for thought. And most professors submitting articles to law reviews are used to quick answers.
Posted by: Peter | Mar 20, 2009 12:15:20 AM
In case more evidence was needed for that last comment:
Hence I doubt that "faculty influence" in these cases comes very close to peer review.
Posted by: Peter | Mar 27, 2009 5:53:54 PM