Wednesday, October 23, 2013

Law Review Publication Agreements

 

It might be useful for folks to have access to law reviews' publication agreements, whether to help with negotiations, compare copyright provisions, or whatever. I've begun a spreadsheet with links to such agreements that are available on the web. If you are aware of other such links, please add them in the comments to this post or email me directly, slawsky *at* law *dot* uci *dot* edu, and I will add them to the spreadsheet.

If this is duplicative of another such effort, please let me know, and I will (gleefully) ditch my spreadsheet and add a link to the other resource.

I am interested in links to any law review publication agreements, whether main journal, secondary journal, peer-reviewed, or student-edited.

The spreadsheet so far is here:

 

 

Update: I included a link in the spreadsheet to the Miami law wiki page on Copyright Experiences. This is a very helpful resource that includes links to information about the copyright policies of a large number of law journals. 

Update 2: I have now gone through the Miami law wiki and added to the spreadsheet links to the full text of journal agreements presented on the Wiki (as opposed to descriptions of copyright policies). I have indicated which links these are by marking a column in the spreadsheet "Miami Wiki." I will now attempt to augment this list as well as replace, where possible, the Miami Wiki links with links to the publication's web page. Three cheers for the Miami Wiki!

Posted by Sarah Lawsky on October 23, 2013 at 02:03 PM in Law Review Review, Peer-Reviewed Journals | Permalink | Comments (1) | TrackBack

Tuesday, May 21, 2013

Sperm Donation, Anonymity, and Compensation: An Empirical Legal Study

In the United States, most sperm donations* are anonymous. By contrast, many developed nations require sperm donors to be identified, typically requiring new sperm (and egg) donors to put identifying information into a registry that is made available to a donor-conceived child once they reach the age of 18. Recently, advocates have pressed U.S. states to adopt these registries as well, and state legislatures have indicated openness to the idea.

In a series of prior papers I have explained why I believe the arguments offered by advocates of these registries fail. Nevertheless, I like to think of myself as somewhat open-minded, so in another set of projects I have undertaken to empirically test what might happen if the U.S. adopted such a system. In particular, I wanted to look at the intersection of anonymity and compensation, something that cannot be done in many of these other countries where compensation for sperm and egg donors is prohibited.

Today I posted online (downloadable here) the first published paper from this project, Can You Buy Sperm Donor Identification? An Experiment, co-authored with Travis Coan and forthcoming in December 2013 in Vol. 10, Issue 4, of the Journal of Empirical Legal Studies.

This study relies on a self-selected convenience sample to experimentally examine the economic implications of adopting a mandatory sperm donor identification regime in the U.S. Our results support the hypothesis that subjects in the treatment (non-anonymity) condition need to be paid significantly more, on average, to donate their sperm. When restricting our attention to only those subjects who would ever actually consider donating sperm, we find that individuals in the control condition are willing to accept an average of $43 to donate, while individuals in the treatment group are willing to accept an average of $74. These estimates suggest that it would cost roughly $31 more per sperm donation, at least in our sample, to require donors to be identified. This price differential roughly corresponds to the difference in what a major U.S. sperm bank that operates both anonymous and identity-release programs pays its donors.
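The arithmetic behind the headline numbers is simple (a sketch using the averages reported above; the variable names are mine):

```python
# Average willingness-to-accept (WTA) figures reported in the study
wta_anonymous = 43    # control condition: donors stay anonymous
wta_identified = 74   # treatment condition: donors must be identified

# Implied per-donation cost of requiring donor identification
premium = wta_identified - wta_anonymous
print(premium)  # 31
```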

We are currently running a companion study on actual U.S. sperm donors and hope soon to expand our research to egg donors, so comments and ideas are very welcome online or offline.

* I will follow the common parlance of using the term "donation" here, while recognizing that the fact that compensation is offered in most cases gives a good reason to think the term is a misnomer.

- I. Glenn Cohen

 

Posted by Ivan Cohen on May 21, 2013 at 01:53 PM in Article Spotlight, Culture, Current Affairs, Peer-Reviewed Journals, Science | Permalink | Comments (5) | TrackBack

Wednesday, May 15, 2013

Rationing Legal Services

In the last few years, at both the federal and state level, there have been deep cuts in funding for legal assistance to the poor. This only makes more pressing and manifest a sad reality: there is and always will be persistent scarcity in the availability of both criminal and civil legal assistance. Given this persistent scarcity, my new article, Rationing Legal Services, just published in the peer-reviewed Journal of Legal Analysis, examines how existing Legal Service Providers (LSPs), both civil and criminal, should ration their services when they cannot help everyone.

To illustrate the difficulty these issues involve, consider two types of LSPs, the Public Defender Service and Connecticut Legal Services (CLS), that I discuss in greater depth in the paper. Should the Public Defender Service favor offenders under the age of twenty-five over those older than fifty-five? Should other public defender offices in jurisdictions with death-eligible offenses favor those facing the death penalty over those facing life sentences? Should providers favor clients they think can make actual innocence claims over those who cannot? How should CLS prioritize its civil cases and clients? Should it favor clients with cases better suited for impact litigation over those that fall in the direct service category? Should either institution prioritize those with the most need? Or should they allocate by lottery?

I begin by looking at how three real-world LSPs (PDS, CLS, and the Harvard Legal Aid Bureau) currently ration. Then, in trying to answer these questions, I draw on a developing literature in bioethics on the rationing of medical goods (organs, ICU beds, vaccine doses, etc.) and show how the analogy can help us develop better rationing systems. I discuss six possible families of 'simple' rationing principles (first-come-first-served, lottery, priority to the worst off, age-weighting, best outcomes, and instrumental forms of allocation) and the ethical complexities of several variants of each. While I ultimately tip my hand on my views of each of these sub-principles, my primary aim is to enrich the discourse on rationing legal services by showing LSPs and legal scholars that they must make a decision as to each of these issues, even if it is not the decision I would reach.

I also examine places where the analogy potentially breaks down. First, I examine how bringing in dignitary or participatory values complicates the allocation decision, drawing in particular on Jerry Mashaw’s work on Due Process values. Second, I ask whether it makes a difference that, in some cases, individuals who receive legal assistance will end up succeeding in cases where they do not “deserve” to win. I also examine whether the nature of legal services as “adversarial goods”, the allocation of which increases costs for those on the other side of the “v.”, should make a difference. Third, I relax the assumption that funding streams and lawyer satisfaction are independent of the rationing principles selected, and examine how that changes the picture. Finally, I respond to a potential objection that I have not left sufficient room for LSP institutional self-definition.

The end of the paper, entitled "Some Realism about Rationing," takes a step back to look for the sweet spot where theory meets practice. I use the foregoing analysis to recommend eight very tangible steps LSPs might take, within their administrability constraints, to implement more ethical rationing.

While this paper is now done, I am hoping to do significant further work on these issues and possibly pursue a book project on them, so comments on- or offline are very welcome. I am also collaborating with my wonderful and indefatigable colleague Jim Greiner and a colleague in the LSP world on further work concerning experimentation in the delivery of legal services and the research ethics and research design issues it raises.

- I. Glenn Cohen

Posted by Ivan Cohen on May 15, 2013 at 02:57 PM in Article Spotlight, Civil Procedure, Law and Politics, Legal Theory, Life of Law Schools, Peer-Reviewed Journals | Permalink | Comments (2) | TrackBack

Wednesday, May 08, 2013

“Why is a big gift from the federal government a matter of coercion? ... It’s just a boatload of federal money for you to take and spend on poor people’s health care” or the mysterious coercion theory in the ACA case

At oral argument in NFIB v. Sebelius, the Affordable Care Act (ACA) case, Justice Kagan asked Paul Clement:

“Why is a big gift from the federal government a matter of coercion? It’s just a boatload of federal money for you to take and spend on poor people’s health care. It doesn’t sound coercive to me, I have to tell you.”

The exchange is all the more curious because, despite her skepticism, Kagan signed on to the Court's holding that the Medicaid expansion in the ACA was coercive, as did all but two of the Justices (Ginsburg and Sotomayor). What happened? I try to answer this question, suggesting that the Court misunderstood what makes an offer coercive, in this article published as part of a symposium on philosophical analysis of the decision in the peer-reviewed journal Ethical Perspectives.

First a little bit of background since some readers may not be as familiar with the Medicaid expansion part of the ACA and Sebelius: The ACA purported to expand the scope of Medicaid and increase the number of individuals the States must cover, most importantly by requiring States to provide Medicaid coverage to adults with incomes up to 133 percent of the federal poverty level. At the time the ACA was passed, most States covered adults with children only if their income was much lower, and did not cover childless adults. Under the ACA reforms, the federal government would have increased federal funding to cover the States’ costs for several years in the future, with States picking up only a small part of the tab. However, a State that did not comply with the new ACA coverage requirements could lose not only the federal funding for the expansion, but all of its Medicaid funding.

In Sebelius, for the first time in its history, the Court found such unconstitutional 'compulsion' in the deal offered to States to expand Medicaid under the ACA. In finding the Medicaid expansion unconstitutional, the Court contrasted the ACA case with the facts of the Dole case, wherein Congress “had threatened to withhold five percent of a State’s federal highway funds if the State did not raise its drinking age to 21.” In discussing Dole, the Sebelius Court determined that “the inducement was not impermissibly coercive, because Congress was offering only ‘relatively mild encouragement to the States’,” and the Court noted that it was “less than half of one percent of South Dakota’s budget at the time,” such that “[w]hether to accept the drinking age change ‘remain[ed] the prerogative of the States not merely in theory but in fact’.”

By contrast, when evaluating the Medicaid expansion under the ACA, the Sebelius Court held that the

financial “inducement” Congress has chosen is much more than “relatively mild encouragement” – it is a gun to the head [...] A State that opts out of the Affordable Care Act’s expansion in health care coverage thus stands to lose not merely “a relatively small percentage” of its existing Medicaid funding, but all of it. Medicaid spending accounts for over 20 percent of the average State’s total budget, with federal funds covering 50 to 83 percent of those costs [...] The threatened loss of over 10 percent of a State’s overall budget, in contrast [to Dole], is economic dragooning that leaves the States with no real option but to acquiesce in the Medicaid expansion.

I argue that this analysis is fundamentally misguided, and (if I may say so) I have some fun doing it! As I summarize the argument structure: if the new terms offered by the Medicaid expansion were not coercive, the old terms were not coercive, and the change in terms was not coercive, then I find it hard to understand how seven Supreme Court Justices could have concluded that coercion was afoot; the only plausible explanation is that these seven Justices in Sebelius fundamentally misunderstood coercion. This misunderstanding becomes only more manifest when we ask exactly ‘who’ has been coerced, and see the way in which personifying the States as an answer obfuscates rather than clarifies matters.

The paper is out, but I will be doing a book chapter adapting it, so comments are still very much appreciated.

- I. Glenn Cohen

Posted by Ivan Cohen on May 8, 2013 at 12:01 PM in Article Spotlight, Constitutional thoughts, Current Affairs, Legal Theory, Peer-Reviewed Journals | Permalink | Comments (11) | TrackBack

Friday, August 10, 2012

The Angsting Thread (Law Review Edition, Autumn 2012)

Friends, the time has come when Redyip is visible.  You know what that means. Feel free to use the comments to share your information (and gripes or praise) about which law reviews have turned over, which ones haven't yet, and where you've heard from, and where you've not, and what you'd like Santa to bring you this coming Xmas, etc. It's the semi-annual angsting thread for the law review submission season. Have at it. And do it reasonably nicely, pretty please.

Update: Here is a link to the last page of comments.

 

Posted by Dan Markel on August 10, 2012 at 04:08 PM in Blogging, Law Review Review, Life of Law Schools, Peer-Reviewed Journals | Permalink | Comments (877) | TrackBack

Tuesday, June 12, 2012

Are All Citations Good Citations?

There’s a saying in the public relations field that “all press is good press.” The main premise is that, regardless of positive or negative attention, the ultimate goal is to be in the public eye. Does this same concept extend to legal academia? When our work is cited, but somehow questioned for its accuracy, merit, or value, is that better than not being cited at all?

Posted by Kelly Anders on June 12, 2012 at 12:03 PM in Deliberation and voices, Legal Theory, Peer-Reviewed Journals | Permalink | Comments (4) | TrackBack

Wednesday, May 02, 2012

Bias in ABA judicial ratings

I would like to highlight a forthcoming article in Political Research Quarterly entitled "Bias and the Bar: Evaluating the ABA Ratings of Federal Judicial Nominees." In the article, Susan Smelcer of Emory, Amy Steigerwalt of Georgia State, and Rich Vining of the University of Georgia (a Georgia trifecta) find systematic liberal bias in the ABA's evaluations of judicial nominees. The research was featured in a New York Times article a few years ago, and it has now gone through the peer review process.

Of course it is controversial, and it is already the subject of a rebuttal in the January 2012 issue of Judicature. Take a look and decide.

Posted by Robert Howard on May 2, 2012 at 11:41 AM in Article Spotlight, Law and Politics, Peer-Reviewed Journals | Permalink | Comments (1) | TrackBack

Saturday, January 02, 2010

Data collection, the pursuit of knowledge, and intellectual property rights

First, I'd like to thank Dan and the rest of the Prawfs gang for inviting me to guest blog here - it is truly an honor. My posts are usually relegated to a blog with a much smaller following that I run with co-editor Andy Whitford. As Dan noted I am a professor of political science at Binghamton University. However, before I went to graduate school and began my second career as a social scientist, I was an attorney. It is this intersection of law and social science that has always intrigued me and my question for this post has a lot to do with both topics. 

In conducting social science research I perform a lot, and I mean a lot, of data collection and coding. This is a process that is annoyingly both mundane and challenging (at least at times). Since most of you have at least some experience with this process I will not bore you with the challenges and pitfalls of performing quality data collection and coding. What I do want to stress though is that it is work. It requires time, expertise (this varies of course by context), and effort (i.e. it's not fun). Finally, this work adds value. 

In political science there are very strong professional mores to share data with other researchers. In fact, sharing is usually expected immediately after publication of your first article using the data, if not before that time (e.g., after presenting a working paper at a conference). I profess some ignorance of the social mores on data sharing in empirical legal studies, but from my few conversations on this point, I think that they might be somewhat different. In political science this norm of sharing your data is usually rationalized along the lines of "don't you want to aid the pursuit of knowledge?" or "surely you support the advancement of science, right?" or a similar call to a higher good. Now, don't get me wrong, I have asked people for data and they have shared it with me (and vice versa) - this process is all fine and good. But isn't this generalized rationalization a bit simplistic? If people begin losing incentives to spend significant time collecting data, then doesn't that inhibit the pursuit of knowledge? It seems that within this norm there is a tacit winner (data analyzers) and an implicit, well, loser-chump (data collectors). Isn't a more nuanced discussion in order? Can't we find some way to satisfactorily compensate/protect data collection efforts? I'd be very interested to hear what intellectual property scholars think about this situation. Okay, a few observations to clarify my discussion here:

- Of course, I am not talking about data collected with the help or aid of a funding entity such as the National Science Foundation - clearly such data should be shared and my understanding is that it is part of the grant agreement. 

- I am also not talking about a situation in which a researcher has access to information that other researchers do not have. The usual situation is publicly available data that has been collected, organized, and put into a spreadsheet - all with some toil and sweat. 

- I am not suggesting that data be kept forever - just that we might think about a protection period, or establish guidelines that adequately compensate the investment of time that data collection warrants, and perhaps that we have some uniformity in the process and an enforcement mechanism.

- Re the duty to further the pursuit of knowledge - aren't there competing duties to the entities providing the opportunity to collect the data? I'm wondering: if your university pays you summer money to collect data (or for a research assistant to do so), then doesn't the institution have a proprietary interest in that data, and shouldn't it be compensated when the effort it funded is used? Isn't this what happens when science professors get patents on things they discover or develop on university-funded projects? It is my understanding that the entities producing Pacer and Trac data charge users for their products.

- One defense of requiring "quick" data sharing is that we must do this to make sure that the researcher has competently collected and analyzed the data. Really? I'm pretty sure that a journal editor could require that authors submit data for peer review with an agreement that the person performing the robustness/accuracy testing does not publish with the data or release it - simple enough. 

- Another defense is that data collectors are adequately compensated by the social capital and/or citations (acclaim) that they receive by making their data available to other researchers - curiously, we don't apply this rationale (at least to the same degree) to music, writing, trademarks, or product design.

I'll shut up for now, but I would be interested in hearing other people's thoughts on these matters. As I said before, I've been on both sides of the data sharing situation, so I actually have somewhat mixed feelings on the subject. It just seems to me that this is a very underexplored area of IP, especially given the increased importance of data-driven processes in the private sector in recent years. A (very) brief search on Lexis revealed some law review articles on data and IP, but not as many as I had expected.

Jeff

Posted by Lawandcourts.wordpress.com on January 2, 2010 at 02:50 PM in Blogging, Law and Politics, Peer-Reviewed Journals, Science | Permalink | Comments (6) | TrackBack

Thursday, April 02, 2009

What is the Future of Empirical Legal Scholarship?

First, thanks to Dan and everyone else at Prawfsblawg for inviting me to post here this month. I'm really looking forward to it.

I found Jonathan Simon's recent posts about the future of empirical legal scholarship quite interesting. On the one hand, as someone in the ELS field, I found his general optimism about its future uplifting. But I'm not sure I share it, although my concern is more internal to the field itself than external. I'm going to be writing about this a lot this month, so I thought I'd use my first post to just lay out my basic concerns.

Jonathan focuses on trends outside of ELS: cultural and attitudinal shifts within the law as a whole, and more global changes in, say, economic conditions. And at one level I think he is right--the decreased cost of empirical work, the PhD-ification of the law, the general quantitative/actuarial turn we have witnessed over the past few decades all suggest ELS is here to stay. But there are deep internal problems not just with ELS, but with the empirical social sciences more generally, that threaten their future. I will just touch on the major points here, and I will return to all of these issues in the days ahead.
To appreciate the problem, it is first necessary to note a major technological revolution that has taken place during the past three decades. The impact of the rise of the computer cannot be overstated. Empirical work that I can do in ten minutes sitting at my desk would have been breathtakingly hard to do twenty-five years ago and literally impossible fifty years ago. The advances have been both in terms of hardware (computing power and storage) and software (user-friendly statistics packages). The result is that anyone can do empirical work today. This is not necessarily a good thing.

So what are the problems we face?

1. An explosion in empirical work. More empirical work is, at some level, a good thing: how can we decide what the best policy is without data? But the explosion in output has been matched by an explosion in the variation in quality (partly because user-friendly software allows people with little training to design empirical projects). The good work has never been better, and the bad work has never been worse. It could very well be that average quality has declined. Some of the bad work comes from honest errors, but some of it comes from cynical manipulation.

2. A bad philosophy of science. Social scientists cling to the idea that we are following in the footsteps of Sir Karl Popper, proposing hypotheses and then rejecting them. We are not. We never have. This is clear in any empirical paper: once the analyst calculates the point estimate, he draws implications from it ("My coefficient of 0.4 implies that a 1% increase in rainfall in Mongolia leads to a 0.4% increase in drug arrests in Brooklyn"). This is not falsification, which only allows him to say "I have rejected 0%." Social science theory cannot produce the type of precise numerical hypothesis that falsification demands. We are trying to estimate an effect, which is inductive.
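To make the contrast concrete, here is a minimal sketch of the estimation mindset, using entirely hypothetical data and a hand-rolled least-squares slope (the numbers and variable names are mine, not drawn from any actual study): the analyst reports how big the effect appears to be, rather than a yes/no falsification verdict.

```python
import random

# Hypothetical data: true slope is 0.4, Gaussian noise with sd 0.5
random.seed(0)
x = [i / 10 for i in range(100)]
y = [0.4 * xi + random.gauss(0, 0.5) for xi in x]

# Hand-rolled least-squares estimate of the slope
n = len(x)
mx, my = sum(x) / n, sum(y) / n
slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sum(
    (xi - mx) ** 2 for xi in x
)

# The estimation mindset reports the size of the effect (near 0.4 here);
# strict falsification would only let us say "I have rejected zero."
print(round(slope, 2))
```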

3. Limited tools for dealing with induction. Induction requires an overview of an entire empirical literature. Fields like medicine and epidemiology have started to develop rigorous methods for drawing these types of inferences. As far as I can tell, there has been no work of any sort in this direction in the social sciences, including ELS. This is partly the result of Problem 2: such overviews would be unnecessary were we actually in a falsificationist world, since all it takes is one black swan to refute the hypothesis that all swans are white.
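One standard tool from the medical and epidemiological literature is the systematic review with inverse-variance (fixed-effect) pooling, which combines a literature's estimates into one, weighting each study by the precision of its estimate. A minimal sketch with made-up study results (all numbers below are for illustration only):

```python
# Each study contributes (effect estimate, standard error); weights are
# 1 / variance, so precise studies count for more in the pooled estimate.
studies = [
    (0.30, 0.10),
    (0.55, 0.20),
    (0.42, 0.15),
]

weights = [1 / se ** 2 for _, se in studies]
pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5

print(round(pooled, 3), round(pooled_se, 3))  # 0.368 0.077
```

The pooled estimate lands nearest the most precise study, which is the point: a rigorous overview is not a string cite, it is a weighted synthesis.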

As a result of these three problems, we produce empirical knowledge quite poorly. To reuse a joke I've made before and will likely make again at least a dozen times this month, Newton's Third Law roughly holds: for every empirical finding there is an opposite (though not necessarily equal) finding. 

With more and more studies coming down the pike, and with little to no work being done to figure out how to separate the wheat from the chaff, ELS could defeat itself. If it is possible to find any result in the literature and no way to separate out what is credible and what is not, empirical research becomes effectively useless. (This only exacerbates the problem identified by Don Braman, Dan Kahan and others that people choose the results that align with their prior beliefs rather than adjusting these beliefs in light of new data.)

So what is the solution? Empirical research in the social sciences needs to adopt a more evidence-based approach. We need to develop a clear definition of what constitutes "good" and "bad" methodological design, and we have to create objective guidelines to make these assessments. We have to abolish string cites, especially of the "on the one hand, on the other hand" type, and replace them with rigorous systematic reviews of the literature.

Of course, these guidelines and reviews are challenging to develop even for the methodologically straightforward randomized clinical trial that medicine relies on. In the social sciences, which are often forced to use observational data, the challenge will be all the greater. But, as I'll argue later this month, the rewards will be all the greater as well.

The use of systematic reviews is particularly important in the law, for at least two reasons:

1. Inadequate screening. Peer review is no panacea by any means, but it provides a good front line of defense against bad empirical work. We lack that protection in the law. There are some peer reviewed journals, but not many. And the form of peer review that Matt Bodie talked about for law reviews recently isn't enough. The risk of bad work slipping through is great.

The diversity of law school faculties, usually a strength, is here a potential problem. Even theoretical economists have several years of statistics, so everyone who reads an economics journal has the tools to identify a wide range of errors. But many members of law school faculties have little to no statistical training, making it harder for them to know which studies to dismiss as flawed.

2. Growing importance of empirical evidence. Courts rely on empirical evidence more and more. And while disgust with how complex scientific evidence is used in the courtroom has been with us since the 1700s, if not earlier, the problem is only going to grow substantially worse in the years ahead. Neither Daubert nor Frye is capable of handling the evidentiary demands that courts increasingly face.

Given that my goal here was just to touch on what I want to talk about in the weeks to come, I think I'll stop here. This is an issue I've been thinking about for a while now, and I am looking forward to seeing people's thoughts on this.

Posted by John Pfaff on April 2, 2009 at 11:56 AM in Peer-Reviewed Journals, Research Canons, Science | Permalink | Comments (8) | TrackBack

Tuesday, May 20, 2008

Justice Scalia's One-Way Ratchet: Congress and Federal Habeas Jurisdiction

I've just posted to SSRN a draft of a new (and short) essay that is to be published later this summer by The Green Bag. The essay, titled "The Riddle of the One-Way Ratchet: Habeas Corpus and the District of Columbia," tries to shed light on a missing piece of the ever-ongoing debate concerning Congress's power over the habeas corpus jurisdiction of the Article III courts.

To spoil some of the fun, the essay's central argument is that statutes such as the Military Commissions Act are constitutionally problematic entirely because there are no other courts in which detainees may otherwise bring habeas petitions. Ex parte Bollman arguably prevents detainees from going straight to the Supreme Court, and Tarble's Case, for better or worse, prevents detainees from pressing their claims in state courts. None of that, of course, is new.

But it's an obscure provision of the D.C. Code (section 16-1901(b)), and not a Supreme Court decision, that closes off the last possible escape valve -- the D.C. Superior Court. As the essay explains, the D.C. Superior Court would otherwise likely have the authority to issue a common-law writ of habeas corpus against a federal officer, which would vitiate any Suspension Clause-based challenges to statutes such as the MCA. In other words, what makes the constitutional question so tricky when Congress attempts to constrain the habeas jurisdiction of the Article III courts is that Congress already has constrained the jurisdiction of the one court that would otherwise be open...

In his dissent in St. Cyr, Justice Scalia suggested that Congress's power over federal habeas jurisdiction was necessarily plenary: "If . . . the writ could not be suspended within the meaning of the Suspension Clause until Congress affirmatively provided for habeas by statute, then surely Congress may subsequently alter what it had initially provided for, lest the Clause become a one-way ratchet.” As the essay concludes, such a statement is absolutely true, but thoroughly incomplete. The ratchet results only from the fact that Congress has itself already precluded access to the common-law writ in the D.C. local courts.

Posted by Steve Vladeck on May 20, 2008 at 03:21 PM in Article Spotlight, Constitutional thoughts, Current Affairs, Peer-Reviewed Journals, Steve Vladeck | Permalink | Comments (0) | TrackBack

Thursday, July 13, 2006

On SSRN and Peer Review

I had a thought today on the perennially interesting subjects of SSRN and peer review.  I got to thinking about it because I recently sent an article out for peer review -- a paper I posted last month on SSRN.

Despite the anonymity the journal tries to preserve, I wonder whether referees for economics journals -- and, to a lesser extent, political science journals (because political scientists don't upload to SSRN as often, I think) -- check to see if a paper has already been posted on SSRN to get author information and a download count (that moronic register of quality).  This issue might also arise with certain peer-reviewed law journals, since law professors (especially those in the ELS and L & E worlds) routinely upload circulation drafts.  Could SSRN be undermining double-blind peer review?

Of course, not that many would admit to the practice.  But feel free to offer anonymous comments if you've got good info. 

I should note that I don't mean to single out SSRN as the only problem -- one could just as easily google a title to get author info if the paper has been blogged about or workshopped.  But I would imagine that many, many more papers get uploaded to SSRN and then go into the publication submissions cycle than get blogged about or workshopped.  Thoughts?

Posted by Ethan Leib on July 13, 2006 at 01:13 AM in Peer-Reviewed Journals | Permalink | Comments (1) | TrackBack

Monday, February 27, 2006

Publication Venue as a Lagging Indicator of Quality

Over at the ELS, Max Schanzenbach weighs in on the pros and cons of peer-reviewed versus student-edited law journals; Max essentially ends in equipoise.  I recently posted on the same issue.  I am wary of perpetual navel-gazing (especially my own, which ain't pretty), but this seems like an issue that is on the minds of many young academics.   I think some of this unease is traceable to a period of transition affecting both the law and social science academies.

Specifically, law is drifting toward the peer-review model because (a) empirical legal studies are in high demand because they supply hard facts in an academic discipline awash in normative polemics; (b) barriers to data collection and analysis have fallen rapidly, which means there is an abundance of important and tractable projects awaiting capable researchers; and (c) student editors have no special ability to assess the rigor of this work, and the lack of technical sophistication among most law faculty has sent them reeling for some external assurance of quality, which peer-reviewed journals provide.

Re the tumult in the social sciences, or at least economics and its various applied branches, the interminable delay of the peer-reviewed single-submission process has been one of the primary drivers for the growth of SSRN (and the crossover with economics and finance explains why biz law professors dominate the SSRN law rankings).  A scholar can lay claim to an idea by posting a polished working paper; if it is any good, it will be downloaded, discussed, referenced, and presented at conferences before it appears in a journal.  A dossier of unpublished papers on SSRN is a good strategy for a junior social scientist (or law professor) to build his or her career while waiting for journal acceptance and publication, i.e., validation of quality.

But for both law and the social sciences, peer review now operates on two levels: (1) established scholars, after a lengthy editorial process, accept an article for publication; or (2) established scholars working in the same field gravitate toward the best work in pre-publication form.  Because of changes in the cost and flow of information, reputations are now being formed based on route #2.  In other words, publication venue is a lagging indicator of quality.

Posted by Bill Henderson on February 27, 2006 at 11:23 PM in Peer-Reviewed Journals | Permalink | Comments (1) | TrackBack

Thursday, February 23, 2006

The peer-review conundrum

Michael Heise over at the ELS blog asks a provocative question that I encourage other readers to answer: “[Do law professors] see (or perceive) shifts in overall law school culture or tenure expectations germane to scholarly publication options[?].  If so, when it comes to decisions about where to place scholarship how are folks assessing such options as traditional student-edited law reviews, faculty-edited peer-reviewed journals (in law or other fields), and university press books?”

It would be interesting to know the answer to this question. I suspect the norms vary pretty dramatically, even among elite schools.  For example, publishing in Top 10 law journals generally helps one's career.  But schools like Cornell, Northwestern, and Texas are trying to build empirical powerhouses.  Thus, insofar as peer review becomes the preferred outlet at those law schools, I suspect that those expectations will influence the publication norms of the wider legal academy, especially for empirical work. 

I had this same question on my mind several months ago when I asked a prominent empiricist (an eminent social scientist who is now on the faculty of a Top 15 law school) whether it was better to publish empirical work in peer-reviewed journals rather than law journals.  [I am not naming names because the question was not asked in the context of posting the answer on a blog.]  I fully expected to get an answer to the affirmative.  But much to my surprise, she/he said that extensive workshopping was as good or better for quality control than the peer-review process.  So to his/her mind, student-edited journals were just fine.  The burden of quality control was on the author.

I think the issue here comes down to a signaling heuristic:  If you get published in JELS or JLS or ALER, then distinguished people in the field said it was good enough to publish.  From the perspective of a junior faculty person, that is pretty attractive.  There will be a presumption of quality that a negative review in your tenure file is less likely to rebut. That said, it is another matter entirely to believe that the peer-review process catches all significant mistakes.  The economics of refereeing a journal article are replete with agency costs, and I have collected anecdotes on this topic over the years.  But that is the topic for a separate post.

By the way, PrawfsBlawg alumna Joelle Moreno had some interesting thoughts on this topic a few months ago.

Posted by Bill Henderson on February 23, 2006 at 04:43 PM in Peer-Reviewed Journals | Permalink | Comments (6) | TrackBack