Wednesday, April 19, 2006
Indefensible Things About Law Teaching (and suggestions), Part I
I love my job and I respect most folks in this profession, but there are some things about this business that are hard to defend. In this installment, I’ll talk about the elitism of law review publishing, and how we might address it.
We all know the criticisms of law review publishing: the simultaneous submissions that result in editors having to sort through thousands of articles; the general lack of peer review; the general lack of anonymous submissions. And we’ve heard stories about articles being selected on the basis of the rank of the school of the sender (“wait ’til you visit at that higher-ranked school to send out your publication”); the reputation of the author (“it’s by Professor Famous, so it doesn’t matter if the draft he sent is a mess”); the trendiness of the subject among law students (better to write about employment discrimination than labor law, and god have mercy on tax profs); the inability of law students to recognize good interdisciplinary work (history from archives, economics with hard math); and other issues not going to the merits of the piece. I know that law review editors (whom I really, really respect, especially those who will be making decisions about acceptances come fall) work hard and try to be responsible. I also know that this system of publishing is widely viewed with incredulity in much of the rest of academe.
I’m not suggesting that any of this is going to change significantly anytime soon. What can be done, however, is for law schools to stop greatly over-stressing the placement of articles.
One advantage of law review publishing is that lots of stuff gets published. A tremendous amount of elitism exists, however, regarding placement: in hiring law faculty, in votes on tenure and promotion, and in salary adjustments. Placement prestige is greatly over-stressed in these processes. The vagaries of law review publishing are such that we simply cannot use the fact that an article was published in a “top 30” journal as a proxy for its being superior in quality to an article published in a 50-80 journal, or in a specialty journal, or even “lower” in the pecking order. I’ve read lots of great stuff in second- and third-tier law journals and specialty journals. Duncan Kennedy may not have gotten everything right in “Legal Education As Training For ...,” but there absolutely is irrational hierarchy in law review publishing.
I stress that this does not mean that any law review publication is as good as any other. Evaluators simply have to come up with better methods of evaluating quality. Get more folks on your own faculty to read the dang things, get more/better outside evaluations by peers, check number of citations, etc. The time to do this can come in part from not putting as much effort into trying to bid articles up the law review food chain. Just don’t rely so much on the name of the journal.
And yes, it’s possible that SSRN downloads, blogs, and other semi-democratizing forces, or more publishing of peer-reviewed university press books (buy mine here!) will make law review publications less important at some point. But for folks trying to get entry level or new teaching jobs, or trying to be viewed as a success at their current job now, the prestige factor is indefensibly overrated.
I agree that prestige is overrated as an intrinsic gauge of the value of an article or of a faculty member. However, I wonder whether this is a futile concern to raise because it is not attacking the root cause of the emphasis on journal placement in the academy. Empirically, placement does make a difference in the citation patterns for articles. If citations are any measure of the value of a published article, shouldn't placement also make a difference for legal research? I wonder how far you will go with your concerns: would you extend them to legal research as well as the production of scholarship and evaluation of faculty? If so, are there any measures of scholarly relevance for legal scholarship, or must everything be qualitative? Might the concerns better be addressed by attacking legal scholarship itself and the entrenched citation patterns we all seem to embrace that reinforce the hierarchies you are attacking?
It seems to follow from your concerns that scholars might also cite less to articles in elite journals, or that they have an obligation to read everything published and weigh it equally on its own terms, regardless of which of the hundreds of journals it appears in, and cite to it based purely on some subjective notion of quality. I must admit that I am more likely to read and cite to the article on a recent Supreme Court case that appears in a top journal than one that might appear in a more obscure journal. I won't completely ignore the latter, of course, but I am less likely to see it and I will not give it the same weight in deciding where to allocate my scarce reading time. There are sometimes more than a hundred articles on a case and I cannot read them all, but maybe I am not as fast a reader as you are. I think that we need to pay attention to this practice, which I believe many legal scholars engage in, as much as to why faculty value where their colleagues place articles.
As many studies have shown, an article in Harvard is, on average, cited more than an article in Texas, and an article in Texas is, on average, cited more than an article in Alabama, and so on. In other disciplines, such as the social sciences, department chairs, provosts, and hiring committees pay attention to the ISI Web of Science, which actually attaches a value to publications in different journals based on impact factor. Should law be any different and, if so, why? What implications would this have for legal research? And, if it does, will this make legal research even less relevant vis-a-vis other social sciences? Or would you extend the same hierarchy critique to the entire impact-factor system that influences tenure and hiring at the modern research university? How should we measure quality, and if we don't rely on the perceived quality of the journal as measured in citation patterns, what should the researcher/evaluator rely on?
Posted by: anon | Apr 19, 2006 11:39:36 AM
Joe, my first reaction to your suggestion to have more faculty input on the publication was: YES! Other than the fact that it offloads work from faculty, I am not sure what, if any, sacred cows exist around the primacy not so much of student editorship, but of student selection. Is it a given that multiple submission and faculty review are mutually exclusive? What would happen if a 50-100 law review set out something like the following protocol as a way of differentiating itself competitively?
1. We will not get back to an author in fewer than X days, but the author must understand that every piece we publish is submitted to a faculty committee in a blind review.
2. We ask our student editors to screen the articles submitted for faculty review, but we also ask that X percent of the articles come from (a) junior faculty, and (b) faculty at non-top 50 schools. (That is, we ask the law review editors to make sure the faculty is seeing what the students think is the best work of the lesser-known lights.)
3. We will try to accommodate but will not prostitute the process to expedited review requests.
The core dilemma is whether this quickly competes itself into a situation in which you have the worst of both worlds (for the reviewing faculty at least): simultaneous submission and the faculty's investment of time in blind peer review. But would it? I am considering the following hypothetical. Suppose it were widely known that the Minnesota State Law Review (school ranked 95 in USNWR) is now peer reviewed, and Northern Oregon (school ranked 43 in USNWR) is not. First, from the standpoint of the author, I submit to both schools, among others, on ExpressO.
(a) Ten days later I get an offer from Northern Oregon but have not heard from Minnesota State. Do I take it and withdraw from Minnesota State because the rankings are still all that matters? Do I ask Minnesota State to expedite because I value the peer review? Do I ask Northern Oregon for an extension?
(b) Thirty days later I get an offer from Minnesota State but have not heard from Northern Oregon. Same questions as above.
Second, in view of the foregoing, and in light of everything else on one's plate, would you as a faculty member invest time in reviewing the piece knowing that the author's calculus as described is still capable of going on?
Third, would hiring and tenure committees value a decision to go with Minnesota State?
I'm assuming that the mega-elites are going to be immune to most of this anyway.
Posted by: Jeff Lipshaw | Apr 19, 2006 11:59:37 AM
Obviously it would be wonderful if more people actually read your papers. But if we're talking about making law faculties more like the rest of academia, I don't think anyone else reads their colleagues' papers either. There are, I think, two reasons for this: 1) lack of time; 2) lack of ability to discern whether an argument outside one's narrow specialty is really insightful, hackneyed, or poorly supported.
So realistically, I think the pressure for law review reform will have to come from some other source. I put my faith in an online, *high-status*, peer-reviewed, non-specialty journal. The high-status bit, obviously, being the hardest part to achieve. But all it will take is one, and I predict a cascade will occur.
Posted by: Bruce | Apr 19, 2006 4:01:13 PM
What's wrong with BE Press? Are you saying that Chicago, Harvard, Yale or Columbia needs to do it? Berkeley is obviously too low-brow!
Posted by: anon | Apr 19, 2006 6:05:14 PM
These are great comments. Anon, you make a very good point that being cited may often be linked to the prestige of the publication in which one publishes in the first place. And there are other reasons not to use citations as an overwhelmingly important factor: some articles, just because of the subject, are less likely to be cited than others, certainly by courts, and even by other scholars. You're also right that if we discount prestige and citations, we need to lean more on "qualitative" assessments, which admittedly can be mushier. But just because something appears to have mathematical rigor (56 citations is better than 30, a law review ranked 30th is better than a law review ranked 56th) doesn't mean it's a sensible measure.
As to your question about the rest of the university, I'm on shakier ground because of lack of experience. But generally, I'm more likely to credit "prestige" of publications in fields where decisions on whether to accept articles are made by peers, in a blind review process in which the reviewers have time to read and think about the piece.
Jeff: if I understand you correctly, you and I are talking about somewhat different things. You're talking about more peer-reviewed law publications. That would be terrific if it happened, but I worry that it's not going to happen to any great extent precisely because of the reasons you list. So I was trying to play the ball where it lies, and suggest that schools use more faculty evaluations of articles that have already been published by student-edited reviews.
I know what you're saying, and I know a decent chunk of law profs might react the way you predict. But I'm very unsympathetic to the idea that law faculty are somehow more pressed for time than are faculty in the broader university. We teach fewer classes, we generally have to publish less for tenure, and we're much better paid for it. As to understanding things outside one's own specialty, first, I would want more outside reviews from folks in the same specialty. Second, whatever happened to the idea that "oh, it's all law, so Jill first-year-teacher can be assigned to teach Crim Law or Property or whatever, even though she never practiced in that area"?
Posted by: Joseph Slater | Apr 20, 2006 10:19:24 AM
I want to address anon, who asks:
"In other disciplines, such as the social sciences, department chairs, provosts and hiring committee pay attention to the ISI web of science, which actually attaches a value to publications in different journals based on impact factor. Should law be any different and, if so, why? What implications would this have for legal research? And, if it does, will this make legal research even less relevant vis-a-vis other social sciences?"
This question presumes that social sciences (such as law) should be modeled on the physical sciences, where a community of inquirers gradually converges on universally acceptable truths. I think that is incorrect because of the inevitably contested assumptions that drive legal scholarship (be they assumptions about interpretive methodology, the validity of cost-benefit analysis, metaethics, inter alia).
Admittedly, economists' adoption of rigid mathematical methodology has conferred a patina of objectivity on their policy prescriptions. Perhaps if legal scholars "got our act together" and arranged for a comparable system of ranking or certifying "respectable work" we could share in this authority of expertise. But we should also observe the contemporary disintegration of economic methodology into subfields--e.g., environmental economics, information economics, health economics--and the rise of empiricists in that field generally. These trends suggest that small communities of expertise, not grand rankings and orchestrated research, are driving progress in economics.
As long as I have the support of a few colleagues who understand the phenomena I'm writing about and think my work clarifies the role of law in them, I'll feel like I'm doing valuable work. I think that most of us can only ever expect that type of discursive community. To aspire to be properly ranked in the universe of legal scholars is to indulge in a fantasy of meritocracy best left for sports stars and stockbrokers.
Furthermore, pervasive ranking threatens to coarsen culture by encouraging a false sense of commensurability. Consider Richard Posner’s recent effort to “rank” public intellectuals, a laborious project made ever easier by the powerful computing systems driving search engines. There is no way to make a ranking process "fair" unless we subscribe to the bizarre idea that the quality of the work of economists, historians, philosophers, scientists, et al. can be ordinally ranked with some commensurating metric. The danger is particularly acute in the social sciences and literature, where quality is hard to measure and notoriety often becomes a substitute for it. Only positivistic delusion could lead us to ignore the contestability of the basic assumptions and aspirations of authors in such fields--for example, how closely tied they are to visions of the good society and ideological commitments. Yet the temptation to assess the importance and even quality of a person’s intellectual work by reference to the number of “hits” they generate on a search engine database is sure to advance as these services play a greater role in our lives.
Posted by: Frank Pasquale | Apr 20, 2006 10:20:31 AM