Wednesday, October 12, 2022

Advice from an "Other Other Legal Academy" Tenure Committee Chair

Back in 2017, I found myself appointed to our Tenure Committee, the thirteen-person group that does the detailed work of tracking the teaching, scholarship, and service of pre-tenure professors on our now "unified" faculty (i.e., all "doctrinal," clinical, and legal practice skills professors).  The Tenure Committee makes recommendations to the full tenured faculty, which has the final faculty say on tenure decisions. I had to miss my first meeting because of a conference commitment and was horrified to find that, in my absence, the committee had elected me the chair.  That job ended this past June 30, when I went on phase-out, removing myself from the ranks of the tenured, and ending my eligibility to serve.

I was just going through some old computer files and found some bullet points that I must have written in 2018 as content for an internal program on pre-tenure scholarship we never formally conducted.  I'm sure I said all of these things informally to somebody at some point.  In the spirit of Jeremy Telman's views on scholarship in the "Other [non-elite] Legal Academy" and my response (to the effect that I never felt that level of distance from my perch in a law school ranked somewhere outside of the US News top 100), I offer them in their almost unexpurgated original form.  They are one person's view; your mileage (and that of equivalent committees or faculty members at your particular school) may vary:

  1. Don’t get hung up on rankings when placing articles.  Yes, if you are on the faculty at a top 50 school, the placements may make a difference.  For everybody [else], except at the extremes, there are no significant pluses or minuses.  Yes, a placement in a T17 flagship will get you lots of points, and a T50 placement significant points, and a placement in a specialty journal in the unranked 4th tier will get some head scratching, but in between it doesn’t make a lot of difference.  The key thing is to be good and to be productive.  See histogram in the blog post.*  (I don’t like many of the heuristics, but the idea of placing articles in law reviews at schools ranked higher than your own doesn’t offend me.)
  2. Aim for one traditional law review behemoth a year.  But don’t overlook short pieces - reactions, brief essays, and so on.  The online supplements are nice for this.  You read a piece and have 3,000 to 5,000 words (or fewer) to say about it.  Do it!
  3. With the shorter pieces, take a shot at a peer reviewed journal.  It takes longer, but it really is a professional affirmation.  Steel yourself for evil reviewer #2, however, who hates your piece, your school, and you.  (Most peer reviewed journals have a word limit - usually 10,000.)
  4. People react far more to the gestalt of your CV than to individual items.  Hence, a lengthy list of long and short pieces has a nice visceral impact to the point of “productive scholar.”
  5. “Law and ....” is good.  So is borrowing from disciplines outside law.  But it is a two-edged sword.  If you are a tort specialist borrowing from Nietzsche, show the piece to somebody with Nietzsche chops and then put that person’s name in the starred footnote.  Disingenuousness is not your friend.
  6. When you submit, you certainly can play the expedite game, but my personal view is that it’s moderately unethical to submit to law reviews for which you would not accept an offer if it were the only one you got.  
  7. Network in your area.  If you read somebody’s article and like it, send the person a note with this in the subject line “Loved your piece....”.  Be a commenter on others’ work.
  8. Blog.  PrawfsBlawg was founded as a forum for new (i.e. “raw”) professors.   Again, it’s a two-edged sword.  If your stuff is good, it helps.  If not, it doesn’t.  When I was unsure of a blog post, I would send it to a friend first.

* That is exactly what my notes say, and I've linked the PrawfsBlawg post from 2018 to which I was referring that included the histogram.

Posted by Jeff Lipshaw on October 12, 2022 at 01:08 PM in Blogging, Life of Law Schools, Lipshaw, Peer-Reviewed Journals | Permalink | Comments (0)

Monday, July 22, 2019

Interdisciplinary Publishing

Hi folks. This is my second post in a series on being a junior, interdisciplinary, multi-subject prawf—you can find the first post here.

In my last post I said that “[i]f all I focused on during the next 2–4 years were projects that fit easily within the format and timeline of law review publishing it would be virtually guaranteed that much of what I bring to the table would fade away.” Recognizing that is one thing; figuring out how to avoid it via, among other things, one’s publications is another entirely. (I should also note that the law review cycle & format don't represent logistical challenges for all interdisciplinary legal scholars who want to remain interdisciplinary.)

After the jump, I’ll outline a few things that are beginning to make themselves clear to me. I'd love to hear from other interdisciplinary folks about their goals and their approaches to realizing those goals via publication strategy.

First, for me, being an interdisciplinary scholar means speaking to two very broad audiences (in my case, law and law & society) using both my disciplines (law and anthropology), and I can’t really do that if I don’t go to where those audiences are. This means I want to publish in both student-edited law reviews and peer-reviewed law & social science journals. I also eventually want to produce books, which are increasingly a valid scholarly currency among legal academics and are necessary signals of credibility among social scientists. This isn’t the only way to be interdisciplinary—you might be primarily or solely interested in bringing discipline “A” into conversation with discipline “B,” in which case you would likely emphasize B’s publications, conferences, and internal debates to see where you can make an intervention using A. But it’s the approach I find compelling. 

Second, I recognize that publication outcomes may not be evenly distributed between both types of journals or between presses that specialize in one field or the other. There will almost definitely be an imbalance in any given year but there is also quite likely to be an imbalance overall, especially pre-tenure. I don’t really think this is cause for regret: I feel I owe something special to my primary affiliation (how successful I am at delivering on it is a different issue). But I don’t think—and I’ve had some pushback on this from fellow junior legal anthros—that the imbalance needs to be great or that having any kind of imbalance indicates some level of failure.

Third, at least as far as my own work is concerned, timing or spacing publications for each type of venue does not pose as much of a challenge as one might imagine. Because I am trying to maintain a commitment to both law as a topic and field research as a method, I don’t construct specific articles for peer review versus law review journals. Instead, I tend to think of “projects” that have life cycles of several years and that are centered on some kind of fieldwork in the expectation that these projects will generate publications suitable to each type of venue. (I would say I’ve gone through two such “projects” so far, but they are proving to have long after-lives and aren’t really over yet.)

Over the course of any project’s life cycle, the pieces that somewhat de-emphasize field research tend to appear at the beginning and at the end. This means that, for the bulk of the project, I’m working with at least one long-horizon writing task and one that has a short- to mid-range horizon, and it means that I can’t work sequentially on publications as I would probably prefer to do. But it also means that I can realistically hope to speak to multiple audiences over a limited period of time.

As that last point suggests, there are costs to making this particular type of attempt at interdisciplinarity. For me, the most immediately obvious cost has been the way my writing instincts now occupy a kind of no-man’s land. For instance, I find it very hard to write a paper that’s over ~ 35 pages in Word, but I also find it very hard to write a paper that’s not heavily footnoted (and when you’re writing an 8,000–10,000 word peer review article—not “essay”!—every footnote or bibliographic entry represents serious opportunity cost). There are also now at least two different sets of canned phrases and sentence structures that irk me. My introductions and abstracts are too short for law review editors and on the long side for peer review editors. And I have lost the ability to instinctively format footnotes in either Bluebook or Chicago Manual of Style; these days I have to sit down with a citation guide before I send off any manuscript whereas at various points in the past I could write in either format from memory. (I know I should probably use citation software but I’ve never gotten into it and the initial learning curve always puts me off.)

Much of this is a problem of imperfect code-switching, inasmuch as we think of bodies of scholarship as extended conversations one has with an audience of peers and with oneself. I’m hopeful that the ability to move between writing styles develops in the same way as the ability to move between languages or linguistic registers: with the kind of fluency that comes from time and practice. In the meantime… I have a book chapter, a law review article, and a peer review article to finish. 

Posted by Deepa Das Acevedo on July 22, 2019 at 12:05 PM in Jr. Law Prawfs FAQ, Peer-Reviewed Journals | Permalink | Comments (2)

Monday, September 17, 2018

Reconstructed Ranking for Law Journals Using Adjusted Impact Factor

I would like to thank everyone for their comments and especially USForeignProf who added an important perspective. The main motivation of our study was to expose the risks of blindly relying on rankings as a method for evaluating research. While we do not have data about the impact of metrics on the evaluation of research in law, we suspect that law schools will not be insulated from what has become a significant global trend. Our study highlights two unique features of the law review universe, which suggest that global rankings such as the Web of Science JCR may produce an inaccurate image of the law journal network: (1) the fact that the average number of references in SE articles is much higher than in articles published in PR journals; and (2) the fact that citations are not equally distributed across categories. In our study we tried to quantitatively capture the effect of these two features (what USForeignProf has characterized as the dilution of foreign journals' metrics) on the ranking structure.

To demonstrate the dilution effect on the Web of Science ranking, we examined what happens to the impact factor of the journals in our sample if we reduce the “value” of a citation received from SE articles from 1 to 0.4.  We used the value of 0.4 because the mean number of references in SE journals is about 2.5 times greater than the mean number of references in PR journals (in our sample).  For the sake of the experiment, we defined an adjusted impact factor, in which a citation from the SE journals in our sample counts as 0.4, and a citation from all other journals as 1.  I want to emphasize that we do not argue that this adjusted ranking constitutes in itself a satisfactory solution to the ranking dilemma.  We think that a better solution would also need to take into account other dimensions such as journal prestige (measured by some variant of the page-rank algorithm) and possibly also a revision of the composition of the journals sample on which the WOS ranking is based (which is currently determined - for all disciplines - by WOS staff).  However, this exercise is useful in demonstrating the dilution effect numerically.  The change in the ranking is striking: PR journals are now positioned consistently higher.  The mean reduction in impact factor for PR journals is 8.3%, compared with 46.1% for SE journals.  The table below reports the results of our analysis for the top 50 journals in our 90-journal sample (data for 2015) (the complete adjusted ranking can be found here).  The order reflects the adjusted impact factor (the number in parentheses reflects the unadjusted ranking).  In my next post I will offer some reflections on potential policy responses.

  1. Regulation and Governance (10)
  2. Law and Human Behavior (13)
  3. Stanford Law Review (1)
  4. Harvard Law Review (2)
  5. Psychology, Public Policy, and Law (18)
  6. Yale Law Journal (3)
  7. Texas Law Review (4)
  8. Common Market Law Review (22)
  9. Columbia Law Review (5)
  10. The Journal of Law, Medicine & Ethics (29)
  11. University of Pennsylvania Law Review (8)
  12. Journal of Legal Studies (15)
  13. Harvard Environmental Law Review (14)
  14. California Law Review (6)
  15. American Journal of International Law (19)
  16. Cornell Law Review (7)
  17. Michigan Law Review (9)
  18. UCLA Law Review (12)
  19. American Journal of Law & Medicine (36)
  20. Georgetown Law Journal (11)
  21. International Environmental Agreements-Politics Law and Economics (41)
  22. American Journal of Comparative Law (25)
  23. Journal of Law, Economics, & Organization (37)
  24. Journal of Law and Economics (35)
  25. International Journal of Transitional Justice (42)
  26. Law & Policy (44)
  27. Harvard International Law Journal (26)
  28. Chinese Journal of International Law (47)
  29. Journal of International Economic Law (48)
  30. Law and Society Review (46)
  31. Antitrust Law Journal (27)
  32. Indiana Law Journal (24)
  33. Behavioral Sciences & the Law (51)
  34. Virginia Law Review (16)
  35. New York University Law Review (17)
  36. Journal of Empirical Legal Studies (39)
  37. Leiden Journal of International Law (54)
  38. University of Chicago Law Review (20)
  39. Social & Legal Studies (58)
  40. World Trade Review (61)
  41. Vanderbilt Law Review (23)
  42. Harvard Civil Rights-Civil Liberties Law Review (32)
  43. Modern Law Review (63)
  44. Annual Review of Law and Social Science (49)
  45. European Constitutional Law Review (64)
  46. Oxford Journal of Legal Studies (59)
  47. Journal of Environmental Law (65)
  48. European Journal of International Law (57)
  49. Law & Social Inquiry (62)
  50. George Washington Law Review (31)
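The arithmetic of the adjustment can be sketched in a few lines of Python. This is a hypothetical illustration with invented citation counts, not the authors' actual code; the function name and data layout are my own. The only substantive input from the study is the 0.4 weight for citations arriving from student-edited (SE) journals:

```python
# Weight for a citation from a student-edited (SE) journal, reflecting
# that SE articles carry ~2.5x as many references as PR articles (1/2.5 = 0.4).
SE_WEIGHT = 0.4

def adjusted_impact_factor(citations, citable_items):
    """Compute an adjusted impact factor.

    citations: list of (source_type, count) pairs, where source_type is
               "SE" (student-edited) or "PR" (peer-reviewed).
    citable_items: number of citable items in the counting window.
    """
    weighted = sum(count * (SE_WEIGHT if src == "SE" else 1.0)
                   for src, count in citations)
    return weighted / citable_items

# Toy example: a journal cited 50 times by SE articles and 30 times by
# PR articles, with 40 citable items.
# Unadjusted IF:  (50 + 30) / 40        = 2.0
# Adjusted IF:    (50 * 0.4 + 30) / 40  = 1.25
aif = adjusted_impact_factor([("SE", 50), ("PR", 30)], 40)
```

Because SE-heavy citation profiles are discounted while PR citations keep full weight, journals cited mostly by PR articles lose little under the adjustment, which is exactly the reranking pattern the table shows.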

Posted by Oren Perez on September 17, 2018 at 02:53 AM in Article Spotlight, Howard Wasserman, Information and Technology, Law Review Review, Peer-Reviewed Journals | Permalink | Comments (13)

Tuesday, July 05, 2016

The ABF and the Legal Academy

Happy 4th of July!  Thanks again to Howard, Sarah, and the rest of the Prawfs community for allowing me to be a guest blogger during the month of June.  I’ve been a longtime admirer of PrawfsBlawg, and I had the honor a couple of years ago to participate in a PrawfsBlawg book club on my book, Making the Modern American Fiscal State (thanks to Matt Bodie for helping organize that online discussion).  It was a real privilege this time around to share with you some background about the ABF and a few of our research highlights.

In my last post before I depart, I thought I’d discuss how the ABF connects to the legal academy.

Well, first and foremost, the ABF is an empirical and interdisciplinary research institute that studies law, legal institutions, and legal processes.  In this way, all of our projects should be of some interest to legal academics.  Since our research focuses on long-term, rigorous, empirical projects, we frequently publish our findings in peer-reviewed social science journals and university press books, rather than law reviews.  But as many other observers have noticed, the world of legal academic publishing is changing dramatically.  Thus, we hope that many Prawf readers are, and will continue to be, consumers of our published research.

Another way in which the ABF is linked to the legal academy is through our honorary organization: The Fellows of the American Bar Foundation.  Like other law-related honorary associations, the Fellows is composed of leading legal professionals who have made a significant contribution to the profession or legal scholarship.  Technically, we often describe the Fellows as an organization of legal professionals (attorneys, judges, law faculty, and legal scholars) “whose public and private careers have demonstrated outstanding dedication to the welfare of their communities and to the highest principles of the legal profession.”  Of course, for the honor and privilege of becoming a Fellow, we provide members an opportunity to help support the ABF’s research and programming with their tax-deductible, charitable contributions.  And, perhaps more importantly, Fellows assist the ABF through their intellectual engagement with our research and programming.

The Fellows, in this way, act as an important bridge between the research community and the practicing bar and bench.  For example, at this year’s ABA annual meeting in San Francisco, we’ll be hosting several Fellows events, including a CLE research seminar on “Civil Rights Advocacy: Past, Present and Future.”   This panel discussion will bring together a number of prominent legal scholars and civil rights lawyers, and will be moderated by our former ABF colleague Dylan Penningroth (Berkeley Law & History).

So, how does one become an ABF Fellow?  Nominations for the Fellows are culled by chairs from each of the fifty states and from an international contingent.  The state chairs solicit names from other Fellows, and look to participation in all the usual places where legal leaders reside, such as the American Law Institute, ABA section leadership, and the deans and leading scholars/teachers of law schools.  The nominations go through a fairly rigorous review process and all nominees are ultimately approved by the ABF Board of Directors.  Although the Fellows membership has been expanding in recent years (and thankfully become more diverse in the process), being nominated to be an ABF Fellow remains a huge honor for legal professionals of all types.  Thus, if you receive a nomination letter from the ABF, we hope you’ll give it some serious consideration.

Legal scholars often wonder if the ABF can support their research in some way.  Despite our name, however, we are not a foundation that provides financial support for faculty outside of our own.  Many of our faculty members collaborate with legal scholars and social scientists throughout the world, as my previous posts have mentioned.  And as I’ve written, much of our work is funded in part by external sources, such as the National Science Foundation and the Public Welfare Foundation.  Ultimately, the legal academy is a consumer of our research.  We hope in the future, though, to integrate our Fellows who are legal scholars into our research community, perhaps by having them help us disseminate ABF research and programming – yet another reason to think about becoming an ABF Fellow.

Finally, in addition to our research, we also have a number of programs that should be of interest to law professors who are working with grad students or undergrads.  Our Montgomery Summer Diversity Research Fellowship is designed to bring talented college students to the ABF for a summer to work with our faculty as research assistants and to learn more about scholarship at the intersection of law and the social sciences.  Thanks to a variety of funders, this program has been in existence for nearly three decades and has produced a number of leading lawyers, academics, and now a California Supreme Court Justice.  We’re rightfully quite proud of this program.  We hope that Prawf readers who work with excellent college students will think about encouraging them to apply to this program.

For grad students, we also have a doctoral and postdoctoral fellowship program that has been around almost as long as our undergraduate program.  These residency-based fellowships allow grad students finishing up their dissertations or those who have recently completed their dissertations to join our research faculty for a short period of time (generally two years) before they embark on their academic careers.  This program has also been a leading incubator for many socio-legal and other interdisciplinary legal scholars (including yours truly).  Grad student readers of this blog and their mentors should definitely keep this fellowship in mind.

For those scholars who reside in the Chicago-land area, the ABF also hosts a weekly Wednesday research seminar.  Like most law school workshops, this seminar brings visitors to the ABF for a day to share their works-in-progress and receive feedback from a truly interdisciplinary group of leading social scientists and legal scholars.  Readers can learn more about our workshop series on our webpage, or they can follow our institutional Twitter handle (@ABFResearch) or my own tweets about our many ABF events (@AjayKMehrotra).

I hope readers found this series of posts about the ABF interesting and helpful.  Thanks again to Sarah and Howard for inviting me to be a guest blogger.  I hope to see some of you soon at an ABF event.  Thanks.

Posted by Ajay K. Mehrotra on July 5, 2016 at 12:09 AM in Books, Peer-Reviewed Journals | Permalink | Comments (2)

Friday, June 03, 2016

ABF Research on Display at LSA

In my previous post, I provided a broad overview of what the ABF is, namely, a research institute dedicated to the empirical and interdisciplinary study of law, legal institutions, and legal processes. In this post, I was planning to describe some of the ABF’s hallmark research and current projects. But, for those who are attending the Law & Society Association’s annual conference in NOLA, an even better way to learn about ABF research is to attend one of the many panels and events that include ABF scholars. Let me mention a few.

Since its founding in 1952, the ABF has had scholars studying nearly all aspects of the legal profession. Among the most well-known early studies, the work on Chicago lawyers led by Jack Heinz has become a canonical part of the socio-legal literature on the legal profession. The ABF dedicated an issue of our 2012 newsletter “Researching Law” to Jack’s scholarship.

In recent years, we’ve followed this research tradition with one of our best known projects: “After the JD” (AJD). This bold and ambitious project has been following the career trajectories of a cohort of roughly 5,000 lawyers from the class of 2000. This longitudinal study has gathered a tremendous amount of quantitative and qualitative data on legal careers, with some fascinating findings about career satisfaction and the stubborn persistence of inequalities within the profession. For those who are attending the LSA conference, you can learn more about the AJD study at a panel on Sat. morning (8:15 in Marriott Salon H-G, 3rd Floor) on “Longitudinal Studies of Lawyers’ Careers.”

Throughout its existence the ABF has had strong ties to the LSA and the broader socio-legal community. That connection is also on display at this year’s conference, as we celebrate the 50th anniversary of the LSA’s flagship journal, Law & Society Review. There are two panels on the journal, one with past editors which includes Shari Diamond (Northwestern Law/ABF) held on Friday afternoon (2:45 in Marriott 41st Floor, St. Charles Room), and a second panel immediately afterwards on the shifting field that the journal represents with ABF Research Professor Terry Halliday, and ABF Faculty Fellow Sida Liu.

Last but not least, the Life of the Law podcast will be hosting what is sure to be an entertaining and riveting session on “LIVE LAW New Orleans - Living My Scholarship” (ticketed event on Friday night) that is chock-full of speakers with ABF connections.

Next week, I’ll circle back to other ABF research that isn’t on display at LSA but that should be of great interest to PrawfsBlawg readers.

Posted by Ajay K. Mehrotra on June 3, 2016 at 12:20 PM in Life of Law Schools, Peer-Reviewed Journals, Research Canons | Permalink | Comments (1)

Friday, April 10, 2015

Except for All the Others

Except for Fenway Park, there is no green grass in New England right now.  Still, I'm sympathetic to those who skim the law review submissions angsting thread, close their browser window in embarrassment when a colleague happens by, and then think to themselves, "There's got to be another way."  

In that spirit, I thought it might be useful to our reform conversation to report my experiences with peer-reviewed econ and l&e journals.  I've had half a dozen or so, of which one was constructive, pretty fast, and what I expected of a process run by fellow professionals.  The others...well, some are still ongoing.  Suffice it to say that it's a lot like sitting in a busy dentist's office, only for 8 to 12 months and without any good magazines.  

I don't think it's unreformable.  Indeed, I think a good starting place for a conversation about where to go with legal scholarship would be to talk more about which system's flaws are easier to mitigate.  

So, for example, it's possible that the peer-review market could function a lot better with better information.  There's almost no reliable information about how long each journal takes, on average, to complete reviews.  (In fact, it's a little bizarre that a profession whose central premise is the efficiency of well-informed markets would tolerate such an opaque system.)  Mandatory compilation and disclosure of that information would probably create at least some competitive pressure to bring those times down, which might eliminate at least the worst instances of needless delay.  There is a site for griping about long waits, but it is surely not a representative sample.  A laudable exception is AER, which reports a cumulative distribution table (see p.623) of wait times.

At a minimum, journals published by professional associations, such as ALER and JELS, should lead by example on this front.  Board members, are you reading?   

Posted by BDG on April 10, 2015 at 03:18 PM in Law Review Review, Peer-Reviewed Journals | Permalink | Comments (4)

Sunday, March 29, 2015

Why isn't PRSM more popular?

Following the angsting thread this season and reading Dave's thread about professors breaching law review contracts has made me start thinking again about the law review submission process. Everyone, it seems, agrees that the process creates perverse incentives: professors submit to dozens of journals, so that student editors must make decisions on thousands of articles; student editors are forced to make quick decisions in competition with other journals, and so rely on proxies of dubious merit to decide what to read; students at higher-ranked journals rely on the work of students at lower-ranked journals to screen articles.

What strikes me, though, is that the Peer Reviewed Scholarship Marketplace seemed to solve all of these problems when it was created in 2009. It incorporates peer review from subject-matter experts (and provides this feedback for authors to strengthen the piece, whether or not they accept a given offer). It takes away the time pressure of the compressed submission season. It protects the freedom of choice for both professors and student journals; students still decide which pieces to make offers for (after seeing the peer review evaluations), and professors can feel free to decline offers--they are not obligated to take an offer from a journal they don't wish to publish with.

When PRSM was created in 2009, I thought it would quickly become the predominant way that law journals select articles.  Why hasn't it? Do more journals need to start using it so that authors will submit to it? It seems like they have a pretty good cross-section already, as there are 20 journals listed as members, about half of which are ranked in the top 50 law journals, and some in the top 30. Do more authors need to use it, so that journals will sign on? Or is there something I'm missing--some benefit of the current practice that PRSM fails to replicate?

Posted by Cassandra Burke Robertson on March 29, 2015 at 07:05 PM in Law Review Review, Life of Law Schools, Peer-Reviewed Journals | Permalink | Comments (9)

Sunday, March 01, 2015

Why Do Peer Review?

A recent post by Steve Bainbridge raises a nice issue: how should we think about peer review? Traditional peer-edited legal journals have established procedures (JELS pays honoraria and blinds; JLS pays but doesn’t; JLEO has fantastic peer comments, etc). But in the last five years, most of the top student-edited journals have moved to some kind of peer system – and many of us are now routinely asked, after a student-led process, to review for publication. That peer review is never paid, and very often professors are asked to review for journals that have never accepted them. *cough. Yale Law Journal I love and hate you. cough*  That can frustrate even non-curmudgeons.  Why do it?

  1. For institutional credit. I’m aware of no school that gives formal credit for these student-edited peer reviews. Are you? If so, what does it look like?
  2. For Law Review credit. One explanation I’ve heard for doing a review for, say, Harvard Law Review, is to motivate them to feel that they owe you at least a rejection on your own work, instead of a magnificent silence. In my experience, there’s some truth in this: doing peer review gives you the email of an AE, and credit with that person. I routinely have succeeded at being at least read by a journal I’d just done peer review with. I haven't yet moved from a read to an acceptance.  But I did get a personalized email from HLR once.  It mentioned that they had an unusual number of great articles that cycle, which meant that they couldn't publish even good work like mine.  I thought that was nifty! Of course, the credit isn't merely transactional: being a peer reviewer means you are an “expert” in the field, which should provide your article some kind of halo effect. Unfortunately, this status is a quickly depreciating asset, and never rolls over from year to year. Use it or lose it!
  3. For the love of the game: For those of us who think that student journals should move exclusively to double-blind review, with faculty participation as a veto, participating is a price we should gladly pay. The problem is that the system isn’t perfectly constructed. Law journals should insist that peer comments will be conveyed to authors – this makes the comments much less likely to be petty (“cite me!”) and more likely to be constructive.

Bainbridge argues against mixed peer review systems, but none of his objections strike me as particularly relevant if the process is "student-screen, peer-veto." That is how I understand the system to work at SLR, YLJ and HLR. I don’t know about Chicago – I would’ve thought their selection involves a maximizing formula and ended with a number. 

Posted by Dave Hoffman on March 1, 2015 at 12:36 PM in Peer-Reviewed Journals | Permalink | Comments (5)

Wednesday, October 23, 2013

Law Review Publication Agreements

It might be useful for folks to have access to law reviews' publication agreements, whether to help with negotiations, compare copyright provisions, or whatever. I've begun a spreadsheet with links to such agreements that are available on the web. If you are aware of other such links, please add them in the comments to this post or email me directly, slawsky *at* law *dot* uci *dot* edu, and I will add them to the spreadsheet.

If this is duplicative of another such effort, please let me know, and I will (gleefully) ditch my spreadsheet and add a link to the other resource.

I am interested in links to any law review publication agreements, whether main journal, secondary journal, peer-reviewed, or student reviewed. 

The spreadsheet so far is here:



Update: I included a link in the spreadsheet to the Miami law wiki page on Copyright Experiences. This is a very helpful resource that includes links to information about the copyright policies of a large number of law journals. 

Update 2: I have now gone through the Miami law wiki and added to the spreadsheet links to the full text of journal agreements presented on the Wiki (as opposed to descriptions of copyright policies). I have indicated which links these are by marking a column in the spreadsheet "Miami Wiki." I will now attempt to augment this list as well as replace, where possible, the Miami Wiki links with links to the publication's web page. Three cheers for the Miami Wiki!

Posted by Sarah Lawsky on October 23, 2013 at 02:03 PM in Law Review Review, Peer-Reviewed Journals | Permalink | Comments (1) | TrackBack

Tuesday, May 21, 2013

Sperm Donation, Anonymity, and Compensation: An Empirical Legal Study

In the United States, most sperm donations* are anonymous. By contrast, many developed nations require sperm donors to be identified, typically requiring new sperm (and egg) donors to put identifying information into a registry that is made available to a donor-conceived child once they reach the age of 18. Recently, advocates have pressed U.S. states to adopt these registries as well, and state legislatures have indicated openness to the idea.

In a series of prior papers I have explained why I believe the arguments offered by advocates of these registries fail. Nevertheless, I like to think of myself as somewhat open-minded, so in another set of projects I have undertaken to empirically test what might happen if the U.S. adopted such a system. In particular, I wanted to look at the intersection of anonymity and compensation, something that cannot be done in many of these other countries where compensation for sperm and egg donors is prohibited.

Today I posted online (downloadable here) the first published paper from this project, Can You Buy Sperm Donor Identification? An Experiment, co-authored with Travis Coan and forthcoming in December 2013 in Vol. 10, Issue 4, of the Journal of Empirical Legal Studies.

This study relies on a self-selected convenience sample to experimentally examine the economic implications of adopting a mandatory sperm donor identification regime in the U.S. Our results support the hypothesis that subjects in the treatment (non-anonymity) condition need to be paid significantly more, on average, to donate their sperm. When restricting our attention to only those subjects who would ever actually consider donating sperm, we find that individuals in the control condition are willing to accept an average of $43 to donate, while individuals in the treatment group are willing to accept an average of $74. These estimates suggest that it would cost roughly $31 more per sperm donation, at least in our sample, to require donors to be identified. This price differential roughly corresponds, in terms of what donors are paid, to that of a major U.S. sperm bank that operates both anonymous and identity-release programs.
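
The arithmetic behind those headline numbers is just a difference in group means. As a rough sketch – using synthetic data whose means are set to the $43/$74 figures above, not the study's actual data:

```python
import random

random.seed(0)

# Synthetic willingness-to-accept (WTA) responses, constructed so the group
# means echo the figures reported above ($43 control, $74 treatment). These
# are illustrative numbers only; the real study used a convenience sample.
control = [random.gauss(43, 15) for _ in range(200)]    # anonymous condition
treatment = [random.gauss(74, 15) for _ in range(200)]  # identified condition

mean_control = sum(control) / len(control)
mean_treatment = sum(treatment) / len(treatment)

# The estimated "price of identification" is the difference in mean WTA.
premium = mean_treatment - mean_control
print(f"Control mean WTA:   ${mean_control:.2f}")
print(f"Treatment mean WTA: ${mean_treatment:.2f}")
print(f"Estimated premium:  ${premium:.2f}")  # close to $31 by construction
```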

We are currently running a companion study on actual U.S. sperm donors and hope soon to expand our research to egg donors, so comments and ideas are very welcome online or offline.

* I will follow the common parlance of using the term "donation" here, while recognizing that the fact that compensation is offered in most cases gives a good reason to think the term is a misnomer.

- I. Glenn Cohen


Posted by Ivan Cohen on May 21, 2013 at 01:53 PM in Article Spotlight, Culture, Current Affairs, Peer-Reviewed Journals, Science | Permalink | Comments (5) | TrackBack

Wednesday, May 15, 2013

Rationing Legal Services

In the last few years, at both the federal and state level, there have been deep cuts to funding for legal assistance to the poor.  This only makes more pressing and manifest a sad reality: there is, and always will be, persistent scarcity in the availability of both criminal and civil legal assistance. Given this persistent scarcity, my new article, Rationing Legal Services, just published in the peer-reviewed Journal of Legal Analysis, examines how existing Legal Service Providers (LSPs), both civil and criminal, should ration their services when they cannot help everyone.

To illustrate the difficulty these issues involve, consider two types of LSPs that I discuss in greater depth in the paper: the Public Defender Service (PDS) and Connecticut Legal Services (CLS). Should the Public Defender Service favor offenders under the age of twenty-five over those older than fifty-five? Should other public defender offices, in jurisdictions with death-eligible offenses, favor those facing the death penalty over those facing life sentences? Should providers favor clients they think can make actual-innocence claims over those who cannot? How should CLS prioritize its civil cases and clients? Should it favor clients with cases better suited for impact litigation over those that fall in the direct-service category? Should either institution prioritize those with the most need? Or should they allocate by lottery?

I begin by looking at how three real-world LSPs (PDS, CLS, and the Harvard Legal Aid Bureau) currently ration. Then, in trying to answer these questions, I draw on a developing literature in bioethics on the rationing of medical goods (organs, ICU beds, vaccine doses, etc.) and show how the analogy can help us develop better rationing systems. I discuss six possible families of ‘simple’ rationing principles – first-come-first-served, lottery, priority to the worst-off, age-weighting, best outcomes, and instrumental forms of allocation – and the ethical complexities with several variants of each. While I ultimately tip my hand on my views of each of these sub-principles, my primary aim is to enrich the discourse on rationing legal services by showing LSPs and legal scholars that they must make a decision as to each of these issues, even if it is not the decision I would reach.
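
Two of those ‘simple’ rationing principles are easy to state precisely. Here is a toy sketch of a lottery and of priority-to-the-worst-off; the client list and the "need" scores are entirely hypothetical, and a real LSP intake process would of course turn on far richer criteria:

```python
import random

# Hypothetical intake queue; "need" is a stand-in for whatever assessment
# of hardship an LSP might use.
clients = [
    {"name": "A", "need": 3},
    {"name": "B", "need": 9},
    {"name": "C", "need": 6},
    {"name": "D", "need": 8},
]

def allocate_by_lottery(clients, capacity, seed=0):
    """Every eligible client gets an equal chance at a slot."""
    rng = random.Random(seed)
    return rng.sample(clients, k=min(capacity, len(clients)))

def allocate_worst_off_first(clients, capacity):
    """Slots go to those with the greatest assessed need."""
    ranked = sorted(clients, key=lambda c: c["need"], reverse=True)
    return ranked[:capacity]

print([c["name"] for c in allocate_by_lottery(clients, capacity=2)])
print([c["name"] for c in allocate_worst_off_first(clients, capacity=2)])  # ['B', 'D']
```

Even this toy version makes the paper's point concrete: the two rules pick different clients, and choosing between them is an ethical decision, not a technical one.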

I also examine places where the analogy potentially breaks down. First, I examine how bringing in dignitary or participatory values complicates the allocation decision, drawing in particular on Jerry Mashaw’s work on Due Process values. Second, I ask whether it makes a difference that, in some cases, individuals who receive legal assistance will end up succeeding in cases where they do not “deserve” to win. I also examine whether the nature of legal services as “adversarial goods”, the allocation of which increases costs for those on the other side of the “v.”, should make a difference. Third, I relax the assumption that funding streams and lawyer satisfaction are independent of the rationing principles selected, and examine how that changes the picture. Finally, I respond to a potential objection that I have not left sufficient room for LSP institutional self-definition.

The end of the paper, entitled “Some Realism about Rationing,” takes a step back to look for the sweet spot where theory meets practice. I use the foregoing analysis to recommend eight very tangible steps LSPs might take, within their administrability constraints, to implement more ethical rationing.

While this paper is now done, I am hoping to do significant further work on these issues, and possibly to pursue a book project on them, so comments on- or offline are very welcome. I am also collaborating with my wonderful and indefatigable colleague Jim Greiner and a colleague in the LSP world to do further work concerning experimentation in the delivery of legal services and the research-ethics and research-design issues it raises.

- I. Glenn Cohen

Posted by Ivan Cohen on May 15, 2013 at 02:57 PM in Article Spotlight, Civil Procedure, Law and Politics, Legal Theory, Life of Law Schools, Peer-Reviewed Journals | Permalink | Comments (2) | TrackBack

Wednesday, May 08, 2013

“Why is a big gift from the federal government a matter of coercion? ... It’s just a boatload of federal money for you to take and spend on poor people’s health care” or the mysterious coercion theory in the ACA case

At oral argument in NFIB v. Sebelius, the Affordable Care Act (ACA) case, Justice Kagan asked Paul Clement:

“Why is a big gift from the federal government a matter of coercion? It’s just a boatload of federal money for you to take and spend on poor people’s health care. It doesn’t sound coercive to me, I have to tell you.”

The exchange is all the more curious because, despite her skepticism, Kagan signed on to the Court’s holding that the Medicaid expansion in the ACA was coercive, as did all but two of the Justices (Ginsburg and Sotomayor). What happened? I try to answer this question, suggesting that the Court misunderstood what makes an offer coercive, in this article, published as part of a symposium on philosophical analysis of the decision in the peer-reviewed journal Ethical Perspectives.

First, a little bit of background, since some readers may not be as familiar with the Medicaid-expansion part of the ACA and Sebelius: The ACA purported to expand the scope of Medicaid and increase the number of individuals the States must cover, most importantly by requiring States to provide Medicaid coverage to adults with incomes up to 133 percent of the federal poverty level. At the time the ACA was passed, most States covered adults with children only if their income was much lower, and did not cover childless adults. Under the ACA reforms, the federal government would have increased federal funding to cover the States’ costs for several years in the future, with States picking up only a small part of the tab. However, a State that did not comply with the new ACA coverage requirements could lose not only the federal funding for the expansion, but all of its Medicaid funding.

In Sebelius, for the first time in its history, the Court found such unconstitutional ‘compulsion’ in the deal offered to States to expand Medicaid under the ACA. In finding the Medicaid expansion unconstitutional, the Court contrasted the ACA case with the facts of Dole, wherein Congress “had threatened to withhold five percent of a State’s federal highway funds if the State did not raise its drinking age to 21.” In discussing Dole, the Sebelius Court determined “that the inducement was not impermissibly coercive, because Congress was offering only ‘relatively mild encouragement to the States’,” and the Court noted that it was “less than half of one percent of South Dakota’s budget at the time,” such that “[w]hether to accept the drinking age change ‘remain[ed] the prerogative of the States not merely in theory but in fact’.”

By contrast, when evaluating the Medicaid expansion under the ACA, the Sebelius Court held that the

financial “inducement” Congress has chosen is much more than “relatively mild encouragement” – it is a gun to the head [...] A State that opts out of the Affordable Care Act’s expansion in health care coverage thus stands to lose not merely “a relatively small percentage” of its existing Medicaid funding, but all of it. Medicaid spending accounts for over 20 percent of the average State’s total budget, with federal funds covering 50 to 83 percent of those costs [...] The threatened loss of over 10 percent of a State’s overall budget, in contrast [to Dole], is economic dragooning that leaves the States with no real option but to acquiesce in the Medicaid expansion.
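
The Court's “over 10 percent” figure follows directly from the shares it quotes, as a quick back-of-the-envelope check (using only the Court's own numbers) shows:

```python
# Quick check of the arithmetic behind the Court's "over 10 percent" figure:
# Medicaid is over 20% of the average state's budget, and federal funds cover
# 50% to 83% of Medicaid costs. The threatened loss is the product.
medicaid_share_of_budget = 0.20          # lower bound ("over 20 percent")
federal_share_low, federal_share_high = 0.50, 0.83

threatened_loss_low = medicaid_share_of_budget * federal_share_low
threatened_loss_high = medicaid_share_of_budget * federal_share_high

print(f"Threatened loss: {threatened_loss_low:.0%} to {threatened_loss_high:.0%} of the state budget")
# → Threatened loss: 10% to 17% of the state budget
```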

I argue that this analysis is fundamentally misguided, and (if I may say so) I have some fun doing it! As I summarize the argument's structure: if the new terms offered by the Medicaid expansion were not coercive, the old terms were not coercive, and the change in terms was not coercive, then I find it hard to understand how seven Supreme Court Justices could have concluded that coercion was afoot; the only plausible explanation is that these seven Justices in Sebelius fundamentally misunderstood coercion. This misunderstanding becomes only more manifest when we ask exactly ‘who’ has been coerced, and see the way in which personifying the States as an answer obfuscates rather than clarifies matters.

The paper is out, but I will be doing a book chapter adapting it, so comments are still very much appreciated.

- I. Glenn Cohen

Posted by Ivan Cohen on May 8, 2013 at 12:01 PM in Article Spotlight, Constitutional thoughts, Current Affairs, Legal Theory, Peer-Reviewed Journals | Permalink | Comments (11) | TrackBack

Friday, August 10, 2012

The Angsting Thread (Law Review Edition, Autumn 2012)

Friends, the time has come when Redyip is visible.  You know what that means. Feel free to use the comments to share your information (and gripes or praise) about which law reviews have turned over, which ones haven't yet, and where you've heard from, and where you've not, and what you'd like Santa to bring you this coming Xmas, etc. It's the semi-annual angsting thread for the law review submission season. Have at it. And do it reasonably nicely, pretty please.

Update: Here is a link to the last page of comments.


Posted by Administrators on August 10, 2012 at 04:08 PM in Blogging, Law Review Review, Life of Law Schools, Peer-Reviewed Journals | Permalink | Comments (877) | TrackBack

Tuesday, June 12, 2012

Are All Citations Good Citations?

There’s a saying in the public relations field that “all press is good press.” The main premise is that, regardless of positive or negative attention, the ultimate goal is to be in the public eye. Does this same concept extend to legal academia? When our work is cited, but somehow questioned for its accuracy, merit, or value, is that better than not being cited at all?

Posted by Kelly Anders on June 12, 2012 at 12:03 PM in Deliberation and voices, Legal Theory, Peer-Reviewed Journals | Permalink | Comments (4) | TrackBack

Wednesday, May 02, 2012

Bias in ABA judicial ratings

I would like to highlight a forthcoming article in Political Research Quarterly entitled "Bias and the Bar: Evaluating the ABA Ratings of Federal Judicial Nominees." In the article, Susan Smelcer of Emory, Amy Steigerwalt of Georgia State, and Rich Vining of the University of Georgia (a Georgia trifecta) find systematic liberal bias in the ABA's evaluations of judicial nominees.  The research was featured in a New York Times article a few years ago, and it has now gone through the peer-review process. 

Of course it is controversial, and it is already the subject of a rebuttal in the January 2012 issue of Judicature. Take a look and decide.

Posted by Robert Howard on May 2, 2012 at 11:41 AM in Article Spotlight, Law and Politics, Peer-Reviewed Journals | Permalink | Comments (1) | TrackBack

Saturday, January 02, 2010

Data collection, the pursuit of knowledge, and intellectual property rights

First, I'd like to thank Dan and the rest of the Prawfs gang for inviting me to guest blog here – it is truly an honor. My posts are usually relegated to a blog with a much smaller following that I run with co-editor Andy Whitford. As Dan noted, I am a professor of political science at Binghamton University. However, before I went to graduate school and began my second career as a social scientist, I was an attorney. It is this intersection of law and social science that has always intrigued me, and my question for this post has a lot to do with both topics. 

In conducting social science research I perform a lot, and I mean a lot, of data collection and coding. This is a process that is annoyingly both mundane and challenging (at least at times). Since most of you have at least some experience with this process I will not bore you with the challenges and pitfalls of performing quality data collection and coding. What I do want to stress though is that it is work. It requires time, expertise (this varies of course by context), and effort (i.e. it's not fun). Finally, this work adds value. 

In political science there are very strong professional mores to share data with other researchers. In fact, it is usually expected immediately after publication of your first article using the data, if not before that time (e.g., after presenting a working paper at a conference). I profess some ignorance of the social mores on data sharing in empirical legal studies, but from my few conversations on this point, I think they might be somewhat different. In political science this norm of sharing your data is usually rationalized along the lines of "don't you want to aid the pursuit of knowledge?" or "surely you support the advancement of science, right?" or a similar call to a higher good. Now, don't get me wrong, I have asked people for data and they have shared it with me (and vice versa) – this process is all fine and good. But isn't this generalized rationalization a bit simplistic? If people begin losing incentives to spend significant time collecting data, then doesn't that inhibit the pursuit of knowledge? It seems that within this norm there is a tacit winner (data analyzers) and an implicit, well, loser-chump (data collectors). Isn't a more nuanced discussion in order? Can't we find some way to satisfactorily compensate/protect data collection efforts? I'd be very interested to hear what intellectual property scholars think about this situation. Okay, a few observations to clarify my discussion here:

- Of course, I am not talking about data collected with the help or aid of a funding entity such as the National Science Foundation - clearly such data should be shared and my understanding is that it is part of the grant agreement. 

- I am also not talking about a situation in which a researcher has access to information that other researchers do not have. The usual situation is publicly available data that has been collected, organized, and put into a spreadsheet - all with some toil and sweat. 

- I am not suggesting that data be kept private forever – just that we might think about a protection period, or about establishing guidelines that adequately compensate the investment of time that data collection warrants, and perhaps about some uniformity in the process and an enforcement mechanism.

- Re the duty to further the pursuit of knowledge – aren't there competing duties to the entities providing the opportunity to collect the data? I'm wondering: if your university pays you summer money to collect data (or pays for a research assistant to do so), doesn't the institution have a proprietary interest in that data, and shouldn't it be compensated when the effort it funded is used? Isn't this what happens when science professors get patents on things they discover or develop on university-funded projects? It is my understanding that the entities producing PACER and TRAC data charge users for their products.

- One defense of requiring "quick" data sharing is that we must do this to make sure that the researcher has competently collected and analyzed the data. Really? I'm pretty sure that a journal editor could require that authors submit data for peer review with an agreement that the person performing the robustness/accuracy testing does not publish with the data or release it - simple enough. 

- Another defense is that data collectors are adequately compensated by the social capital and/or citations (acclaim) that they receive by making their data available to other researchers - curiously, we don't apply this rationale (at least to the same degree) to music, writing, trademarks, or product design.

I'll shut up for now, but I would be interested in hearing other people's thoughts on these matters. As I said before, I've been on both sides of the data-sharing situation, so I actually have somewhat mixed feelings on the subject. It just seems to me that this is a very underexplored area of IP, especially given the increase in recent years in the importance of data-driven processes in the private sector. A (very) brief search on Lexis revealed some law review articles on data and IP, but not as many as I had expected. 


Posted by on January 2, 2010 at 02:50 PM in Blogging, Law and Politics, Peer-Reviewed Journals, Science | Permalink | Comments (6) | TrackBack

Thursday, April 02, 2009

What is the Future of Empirical Legal Scholarship?

First, thanks to Dan and everyone else at Prawsblog for inviting me to post here this month. I'm really looking forward to it.

I found Jonathan Simon's recent posts about the future of empirical legal scholarship quite interesting. On the one hand, as someone in the ELS field, I found his general optimism about its future uplifting. But I'm not sure I share it, although my concern is more internal to the field itself than external. I'm going to be writing about this a lot this month, so I thought I'd use my first post to just lay out my basic concerns.

Jonathan focuses on trends outside of ELS: cultural and attitudinal shifts within the law as a whole, and more global changes in, say, economic conditions. And at one level I think he is right--the decreased cost of empirical work, the PhD-ification of the law, the general quantitative/actuarial turn we have witnessed over the past few decades all suggest ELS is here to stay. But there are deep internal problems not just with ELS, but with the empirical social sciences more generally, that threaten their future. I will just touch on the major points here, and I will return to all of these issues in the days ahead.

To appreciate the problem, it is first necessary to note a major technological revolution that has taken place during the past three decades. The rise of the computer cannot be overstated. Empirical work that I can do in ten minutes sitting at my desk would have been breathtakingly hard to do twenty-five years ago and literally impossible fifty years ago. The advances have been both in terms of hardware (computing power and storage) and software (user-friendly statistics packages). The result is that anyone can do empirical work today. This is not necessarily a good thing.

So what are the problems we face?

1. An explosion in empirical work. More empirical work is, at some level, a good thing: how can we decide what the best policy is without data? But the explosion in output has been matched by an explosion in the variation in quality (partly because user-friendly software allows people with little training to design empirical projects). The good work has never been better, and the bad work has never been worse. It could very well be that average quality has declined. Some of the bad work comes from honest errors, but some of it comes from cynical manipulation.

2. A bad philosophy of science. Social scientists cling to the idea that we are following in the footsteps of Sir Karl Popper, proposing hypotheses and then rejecting them. We are not. We never have been. This is clear in any empirical paper: once the analyst calculates the point estimate, he draws implications from its magnitude ("My coefficient of 0.4 implies that a 1% increase in rainfall in Mongolia leads to a 0.4% increase in drug arrests in Brooklyn"). This is not falsification, which would only allow him to say "I have rejected 0%." Social science theory cannot produce the type of precise numerical hypothesis that falsification demands. We are trying to estimate an effect, which is an inductive exercise.
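
To make the distinction concrete, here is a minimal sketch – with simulated data, borrowing the post's tongue-in-cheek rainfall/arrests framing – of what the analyst actually does: estimate a slope and interpret its magnitude, rather than merely reject zero.

```python
import random

random.seed(1)

# Simulated data with a true elasticity of 0.4, echoing the example above.
# x: (standardized) rainfall; y: (standardized) arrests plus noise.
n = 500
x = [random.gauss(0, 1) for _ in range(n)]
y = [0.4 * xi + random.gauss(0, 1) for xi in x]

# OLS slope for a one-regressor model: cov(x, y) / var(x)
mean_x = sum(x) / n
mean_y = sum(y) / n
cov_xy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y)) / n
var_x = sum((xi - mean_x) ** 2 for xi in x) / n
beta_hat = cov_xy / var_x

print(f"Estimated coefficient: {beta_hat:.2f}")
# A strict falsificationist could only report "zero is rejected"; the
# inductive move is to read beta_hat as the size of the effect itself.
```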

3. Limited tools for dealing with induction. Induction requires an overview of an entire empirical literature. Fields like medicine and epidemiology have started to develop rigorous methods for drawing these types of inferences. As far as I can tell, there has been no work of any sort in this direction in the social sciences, including ELS. This is partly the result of Problem 2: such overviews would be unnecessary were we actually in a falsificationist world, since all it takes is one black swan to refute the hypothesis that all swans are white.
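
For a flavor of what those rigorous methods look like, here is a minimal fixed-effect meta-analysis via inverse-variance weighting, the workhorse pooling technique in medicine and epidemiology; the three studies' effect sizes and standard errors below are hypothetical:

```python
import math

# Minimal fixed-effect meta-analysis: each study's effect is weighted by the
# inverse of its variance, so more precise studies count for more. The
# numbers are hypothetical stand-ins for a literature's reported estimates.
studies = [
    (0.30, 0.10),   # (estimated effect, standard error)
    (0.55, 0.20),
    (0.42, 0.15),
]

weights = [1.0 / se**2 for _, se in studies]
pooled = sum(w * eff for (eff, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))

print(f"Pooled effect: {pooled:.3f} (SE {pooled_se:.3f})")
# → Pooled effect: 0.368 (SE 0.077)
```

Nothing here is conceptually hard; the point is that it is an explicitly inductive summary of a literature, not a falsification exercise.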

As a result of these three problems, we produce empirical knowledge quite poorly. To reuse a joke I've made before and will likely make again at least a dozen times this month, Newton's Third Law roughly holds: for every empirical finding there is an opposite (though not necessarily equal) finding. 

With more and more studies coming down the pike, and with little to no work being done to figure out how to separate the wheat from the chaff, ELS could defeat itself. If it is possible to find any result in the literature and no way to separate out what is credible and what is not, empirical research becomes effectively useless. (This only exacerbates the problem identified by Don Braman, Dan Kahan and others that people choose the results that align with their prior beliefs rather than adjusting these beliefs in light of new data.)

So what is the solution? Empirical research in the social sciences needs to adopt a more evidence-based approach. We need to develop a clear definition of what constitutes "good" and "bad" methodological design, and we have to create objective guidelines to make these assessments. We have to abolish string cites, especially of the "on the one hand, on the other hand" type, and replace them with rigorous systematic reviews of the literature.

Of course, these guidelines and reviews are challenging to develop even for the methodologically straightforward randomized clinical trial that medicine relies on. In the social sciences, which are often forced to use observational data, the challenge will be all the greater. But, as I'll argue later this month, the rewards will be all the greater as well.

The use of systematic reviews is particularly important in the law, for at least two reasons:

1. Inadequate screening. Peer review is no panacea by any means, but it provides a good front line of defense against bad empirical work. We lack that protection in the law. There are some peer reviewed journals, but not many. And the form of peer review that Matt Bodie talked about for law reviews recently isn't enough. The risk of bad work slipping through is great.

The diversity of law school faculties, usually a strength, is here a potential problem. Even theoretical economists have several years of statistics, so everyone who reads an economics journal has the tools to identify a wide range of errors. But many members of law school faculties have little to no statistical training, making it harder for them to know which studies to dismiss as flawed.

2. Growing importance of empirical evidence. Courts rely on empirical evidence more and more. And while disgust with how complex scientific evidence is used in the courtroom has been with us since the 1700s, if not earlier, the problem is only going to grow substantially worse in the years ahead. Neither Daubert nor Frye is capable of handling the evidentiary demands that courts increasingly face.

Given that my goal here was just to touch on what I want to talk about in the weeks to come, I think I'll stop here. This is an issue I've been thinking about for a while now, and I am looking forward to seeing people's thoughts on this.

Posted by John Pfaff on April 2, 2009 at 11:56 AM in Peer-Reviewed Journals, Research Canons, Science | Permalink | Comments (9) | TrackBack

Tuesday, May 20, 2008

Justice Scalia's One-Way Ratchet: Congress and Federal Habeas Jurisdiction

I've just posted to SSRN a draft of a new (and short) essay that is to be published later this summer by The Green Bag. The essay, titled "The Riddle of the One-Way Ratchet: Habeas Corpus and the District of Columbia," tries to shed light on a missing piece of the ever-ongoing debate concerning Congress's power over the habeas corpus jurisdiction of the Article III courts.

To spoil some of the fun, the essay's central argument is that statutes such as the Military Commissions Act are constitutionally problematic entirely because there are no other courts in which detainees may otherwise bring habeas petitions. Ex parte Bollman arguably prevents detainees from going straight to the Supreme Court, and Tarble's Case, for better or worse, prevents detainees from pressing their claims in state courts. None of that, of course, is new.

But it's an obscure provision of the D.C. Code (section 16-1901(b)), and not a Supreme Court decision, that closes off the last possible escape valve -- the D.C. Superior Court. As the essay explains, the D.C. Superior Court would otherwise likely have the authority to issue a common-law writ of habeas corpus against a federal officer, which would vitiate any Suspension Clause-based challenges to statutes such as the MCA. In other words, what makes the constitutional question so tricky when Congress attempts to constrain the habeas jurisdiction of the Article III courts is that Congress already has constrained the jurisdiction of the one court that would otherwise be open...

In his dissent in St. Cyr, Justice Scalia suggested that Congress's power over federal habeas jurisdiction was necessarily plenary: "If . . . the writ could not be suspended within the meaning of the Suspension Clause until Congress affirmatively provided for habeas by statute, then surely Congress may subsequently alter what it had initially provided for, lest the Clause become a one-way ratchet.” As the essay concludes, such a statement is absolutely true, but thoroughly incomplete. The ratchet results only from the fact that Congress has itself already precluded access to the common-law writ in the D.C. local courts.

Posted by Steve Vladeck on May 20, 2008 at 03:21 PM in Article Spotlight, Constitutional thoughts, Current Affairs, Peer-Reviewed Journals, Steve Vladeck | Permalink | Comments (0) | TrackBack

Thursday, July 13, 2006

On SSRN and Peer Review

I had a thought today on the perennially interesting subjects of SSRN and peer review.  I got to thinking about it because I recently sent an article out for peer review – a paper I posted last month on SSRN.

Despite the anonymity the journal tries to preserve, I wonder whether referees for economics journals – and, to a lesser extent, political science journals (because political scientists don't upload to SSRN as often, I think) – check to see if a paper has already been posted on SSRN to get author information and a download count (that moronic register of quality).  This issue might also arise with certain peer-reviewed law journals, since law professors (especially those in the ELS and L & E world) routinely upload circulation drafts.  Could SSRN be undermining double-blind peer review?

Of course, not that many would admit to the practice.  But feel free to offer anonymous comments if you've got good info. 

I should note that I don't mean to single out SSRN as the only problem – one could just as easily google a title to get author info if the paper has been blogged about or workshopped.  But I would imagine that many, many more papers get uploaded to SSRN and then go into the publication submission cycle than get blogged about or workshopped.  Thoughts?

Posted by Ethan Leib on July 13, 2006 at 01:13 AM in Peer-Reviewed Journals | Permalink | Comments (1) | TrackBack

Monday, February 27, 2006

Publication Venue as a Lagging Indicator of Quality

Over at the ELS Blog, Max Schanzenbach weighs in on the pros and cons of peer-reviewed versus student-edited law journals; Max essentially ends in equipoise.  I recently posted on the same issue.  I am wary of perpetual navel-gazing (especially my own, which ain't pretty), but this seems like an issue that is on the minds of many young academics.  I think some of this unease is traceable to a period of transition affecting both the law and social science academies.

Specifically, law is drifting toward the peer-review model because (a) empirical legal studies are in high demand, because they supply hard facts in an academic discipline awash in normative polemics; (b) barriers to data collection and analysis have fallen rapidly, which means there is an abundance of important and tractable projects awaiting capable researchers; and (c) student editors have no special ability to assess the rigor of this work, and the lack of technical sophistication among most law faculty has sent them reeling for some external assurance of quality, which peer-reviewed journals provide.

Re the tumult in the social sciences, or at least economics and its various applied branches, the interminable delay of the peer-reviewed, single-submission process has been one of the primary drivers of the growth of SSRN (and the crossover from economics and finance explains why biz law professors dominate the SSRN law rankings).  A scholar can lay claim to an idea by posting a polished working paper; if it is any good, it will be downloaded, discussed, referenced, and presented at conferences before it appears in a journal.  A dossier of unpublished papers on SSRN is a good strategy for a junior social scientist (or law professor) to build his or her career while he or she waits for journal acceptance and publication, i.e., validation of quality.

But for both law and the social sciences, peer review now operates on two levels: (1) established scholars, after a lengthy editorial process, accept an article for publication; or (2) established scholars working in the same field gravitate toward the best work in pre-publication form.  Because of changes in the cost and flow of information, reputations are now being formed via route #2.  In other words, publication venue is a lagging indicator of quality.

Posted by Bill Henderson on February 27, 2006 at 11:23 PM in Peer-Reviewed Journals

Thursday, February 23, 2006

The peer-review conundrum

Michael Heise over at the ELS blog asks a provocative question that I encourage other readers to answer: “[Do law professors] see (or perceive) shifts in overall law school culture or tenure expectations germane to scholarly publication options[?].  If so, when it comes to decisions about where to place scholarship how are folks assessing such options as traditional student-edited law reviews, faculty-edited peer-reviewed journals (in law or other fields), and university press books?”

It would be interesting to know the answer to this question.  I suspect the norms vary pretty dramatically, even among elite schools.  For example, publishing in Top 10 law journals generally helps one's career.  But schools like Cornell, Northwestern, and Texas are trying to build empirical powerhouses.  Thus, insofar as peer review becomes the preferred outlet at those law schools, I suspect that their expectations will influence the publication norms of the wider legal academy, especially for empirical work.

I had this same question on my mind several months ago when I asked a prominent empiricist (an eminent social scientist who is now on the faculty of a Top 15 law school) whether it was better to publish empirical work in peer-reviewed journals rather than law journals.  [I am not naming names because the question was not asked in the context of posting the answer on a blog.]  I fully expected an answer in the affirmative.  But much to my surprise, she/he said that extensive workshopping was as good as or better than the peer-review process for quality control.  So to his/her mind, student-edited journals were just fine.  The burden of quality control was on the author.

I think the issue here comes down to a signaling heuristic:  If you get published in JELS or JLS or ALER, then distinguished people in the field said it was good enough to publish.  From the perspective of a junior faculty person, that is pretty attractive.  There will be a presumption of quality that a negative review in your tenure file is less likely to rebut. That said, it is another matter entirely to believe that the peer-review process catches all significant mistakes.  The economics of refereeing a journal article are replete with agency costs, and I have collected anecdotes on this topic over the years.  But that is the topic for a separate post.

By the way, PrawfsBlawg alumna Joelle Moreno had some interesting thoughts on this topic a few months ago.

Posted by Bill Henderson on February 23, 2006 at 04:43 PM in Peer-Reviewed Journals