Friday, August 09, 2019

Lawyering Somewhere Between Computation and the Will to Act: The Last Outtake

I've now posted my summer project on SSRN (it's my contribution to the "Lawyering in the Digital Age" conference I mentioned earlier). The title has changed since I first posted a week or so ago - and that turns out to be one of the last outtakes.  It's now Lawyering Somewhere Between Computation and the Will to Act: A Digital Age Reflection, with the following abstract:

This is a reflection on machine and human contributions to lawyering in the digital age. Increasingly capable machines can already unleash massive processing power on vast stores of discovery and research data to assess relevancies and, at times, to predict legal outcomes. At the same time, there is wide acceptance, at least among legal academics, of the conclusions from behavioral psychology that slow, deliberative “System 2” thinking (perhaps replicated computationally) needs to control the heuristics and biases to which fast, intuitive “System 1” thinking is prone. Together, those trends portend computational deliberation – artificial intelligence or machine learning – substituting for human thinking in more and more of a lawyer’s professional functions.

Yet, unlike machines, human lawyers are self-reproducing automata. They can perceive purposes and have a will to act that cannot be reduced to mere third-party scientific explanation. For all its power, computational intelligence is unlikely to evolve intuition, insight, creativity, and the will to change the objective world, characteristics as human as System 1 thinking’s heuristics and biases. We therefore need to be circumspect about the extent to which we privilege System 2-like deliberation (particularly that which can be replicated computationally) over uniquely human contributions to lawyering: those mixed blessings like persistence, passion, and the occasional compulsiveness.

The deleted title (before the colon) was Unsure at Any Speed, a bit of just-a-tad-too-clever wordplay on my part.

As you can see, the piece is an exploration of the upsides and downsides of, in Daniel Kahneman's coinage and book title, Thinking, Fast and Slow.  My little joke was/is:
Over a forty-year professional career, in Kahneman’s lexicon, my thinking has been both fast and slow. What that really means is that often I was unsure at any speed. At the same time, I made binary “go/nogo” decisions in the face of complexity and uncertainty.

What I thought was really clever was the play on Ralph Nader's Unsafe at Any Speed, his classic 1965 takedown of the Chevy Corvair. One of my reader/editor/commenter/friends, clearly far too young to catch the allusion, tagged it with a big question mark.  A good reason to have a reader/editor/commenter/friend, because her suggestion that I perform a pre-colon-oscopy on the title was well-taken.

The ultimate outtake.

Posted by Jeff Lipshaw on August 9, 2019 at 10:33 AM in Article Spotlight, Legal Theory, Lipshaw, Web/Tech | Permalink | Comments (0)

Friday, August 02, 2019

Confusion of the Inverse??

At JOTWELL, Omri Ben-Shahar has a review of a forthcoming article in the Stanford Law Review claiming to have shown in a study that consumers are cowed by a consumer contract's fine print even if they believe they have been defrauded by the seller - i.e., they have been expressly guaranteed A and learn later that (i) they aren't getting A, and (ii) the fine print says they have no legal right to A. (The reviewed piece is Meirav Furth-Matzkin & Roseanna Sommers, Consumer Psychology and the Problem of Fine Print Fraud, 72 Stan. L. Rev. ___ (2020)).

I've been blogging with outtakes from the not-quite-ready-for-prime-time Unsure at Any Speed. Here the outtake intersects with another subject in which I have recently gotten involved: how to deal with the spread of detailed and unread consumer contract fine print, particularly given the ease with which it can appear to be made binding via internet click-throughs.

The question is not whether the conclusions Furth-Matzkin and Sommers draw from their laboratory experiments are correct.  First, I don't know enough about experimental research methods to assess their hypotheticals and the questions they put to test subjects. Second, from what I can tell, they have given enough detail about the methodology to allow the tests to be repeated and, if wrong, falsified. So I accept them for what they seem to say: people seem to take the fine print seriously even when they know they have gotten screwed.

My question is rather about the empirical statements that underlie the study to begin with. Is it the case that widespread non-readership of fine print leaves consumers open to exploitation by unscrupulous firms? Is it true that sellers can outright lie about their products and services and then contradict the lie in the fine print?  The Stanford article takes the answer "yes" to those questions as a given, and then proceeds to assess the impact of fine print, given that there was fraud.  I cannot find, however, at least in the footnotes on the first six pages of the article, anything other than a couple of anecdotes in support of the proposition that unscrupulous firms are a widespread problem.  I'm not saying they aren't; I just don't see any evidence one way or the other.

Is this an example of "confusion of the inverse," the subject of my outtake?

What I mean by "confusion of the inverse"

I cut from Unsure a detailed explanation of the "confusion of the inverse." It is, along with things like the availability heuristic, the law of small numbers, hindsight bias, and confirmation bias, an example of the predictable divergences from actual probabilities to which Kahneman, Tversky, and others demonstrated humans are prone. My particular heuristic/bias peeve has to do with academic assumptions about the morality and competence of corporate oversight (Caremark doctrine for you governance nerds), exacerbated perhaps when, my having recently been a corporate executive, a colleague blithely characterized corporate executives as "turnips" at a workshop shortly after I joined the faculty.

Here is the confusion of the inverse applied to my peeve.  Bayes' theorem answers the following question: given the probability that A is true (P(A)), the probability that B is true (P(B)), and the probability of A given B (P(A/B)), what is the probability of B given A (P(B/A))?  The formula for deriving the answer is:

P(B/A) = [P(A/B) x P(B)]/P(A)

What we are trying to derive is the probability that we have a corrupt/incompetent board (CIB) given that we have observed material corporate wrongdoing (MW) - that is, P(CIB/MW).

The probability of MW among the set of all corporations is P(A).

The probability of MW given CIB is P(A/B).

The probability of CIB is P(B).  Note that you can have a CIB even if you don't have MW, and you can have MW even if you don't have CIB.

Our formula now looks like this: P(CIB/MW) = [P(MW/CIB) x P(CIB)]/P(MW)

So...

Let's assume the following.  It turns out a CIB among all corporations is very rare.  Say P(CIB) = .01 (one in a hundred).

The probability of material wrongdoing, however, is very high IF you have a corrupt/incompetent board.  Say P(MW/CIB) = .95.  And say the probability of MW without a corrupt/incompetent board is low: P(MW/no CIB) = .05.

The formula gives us the following numerator:  .95 (the probability of MW given that we have a CIB) x .01 (the probability we have a CIB).

But remember you can have a CIB even if you don't have MW, and you can have MW even if you don't have CIB.  So the denominator P(MW) has to take all possibilities into account.

Hence, P(MW) = [the probability of MW given CIB times the probability of CIB] plus [the probability of MW given no CIB times the probability of no CIB].

So... P(CIB/MW) = (.95 x .01) /[(.95 x .01) + (.05 x .99)]

P(CIB/MW) = .16

So given that you observe material wrongdoing, the probability of also encountering a corrupt or incompetent board P(CIB/MW) is .16.  The confusion of the inverse is to believe P(CIB/MW) is .95.  It is not to say that you can't have corrupt or incompetent boards. It is to say instead that it is wrong to assume board members are turnips just because you observed material wrongdoing.
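
For readers who want to check the arithmetic, here is a minimal sketch of the calculation (my addition, not part of the article); the variable names are just labels for the assumptions above.

```python
# Confusion of the inverse, worked numerically.  Illustrative assumptions:
# corrupt/incompetent boards (CIB) are rare, material wrongdoing (MW) is very
# likely given a CIB, and much less likely without one.

def posterior_cib_given_mw(p_cib, p_mw_given_cib, p_mw_given_no_cib):
    """P(CIB/MW) via Bayes' theorem, with P(MW) from the law of total probability."""
    p_mw = p_mw_given_cib * p_cib + p_mw_given_no_cib * (1 - p_cib)
    return p_mw_given_cib * p_cib / p_mw

print(round(posterior_cib_given_mw(0.01, 0.95, 0.05), 2))  # 0.16, not 0.95
```

The same little function, fed the relevant base rates, generates the examples in the next paragraph.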

There are even more malignant examples of the confusion of the inverse.  When a police officer pulls over a car, what is the probability that there are drugs in the car, given that the driver is African-American?  When TSA does a search, what is the probability that the individual is a terrorist, given that he/she appears to be Middle Eastern?  When you are tested for a rare disease, what is the probability you have it, given that the test is positive?

Confusion of the inverse and contract fine print issues

As I said, I express no view on the study in the Stanford Law Review article.  I just don't see any evidence about the prevalence of out-and-out fraud. My intuition is there is probably less of it than the article seems to suggest.

That isn't to say there aren't real fairness issues with fine print. I have engaged with Rob Kar on his Harvard Law Review article with Margaret Radin, the thesis of which is to ground an attack on over-reaching boilerplate on a demarcation of the "true" agreement between the contract drafter and the consumer by way of Grice's "conversational maxims" and an actual shared meaning.  (Theirs is Pseudo-Contract and Shared Meaning Analysis; my response, just published in the Australasian Journal of Legal Philosophy (Vol. 43, pp. 90-105) is Conversation, Cooperation, or Convention? A Response to Kar and Radin.)

What I take from the Stanford Law Review study is that consumers aren't completely led down the primrose path by the fact of "fine print" - they expect there to be terms and conditions even if they don't read them.  The study seems to bear that out, even in the extreme case where the consumer really does believe he/she/they got screwed. The real question is to what extent the fine print should be binding.  I agree with Omri that disclosure is not likely to be helpful - oy, more fine print disclaiming the fine print. Nor do I think trying to find the actual agreement or shared meaning is going to be fruitful.  Rather, there is a convention about what is and is not fair, and that probably ought to be reflected in regulation.

Posted by Jeff Lipshaw on August 2, 2019 at 11:45 AM in Article Spotlight, Corporate, Culture, Law Review Review, Legal Theory, Lipshaw | Permalink | Comments (2)

Monday, July 29, 2019

Blogging with Outtakes - Existentialists, Asymptotes, and Parachutes

The bad news is that I missed the start of the guest blogging I promised Howard by a full month.  The good news is that I had two excuses: (a) our first grandchild was born on July 3 and I seem to waste inordinate amounts of time curating baby pictures, and (b) I was finishing this summer's project.  The upshot of (b) is that by the process of some fairly brutal self-editing I have the drafting equivalent of a portfolio of outtakes.

The piece isn't quite ready for prime time via SSRN, but its title is Unsure at Any Speed: Lawyering Somewhere Between Algorithms and Ends.  It's a contemplation of how we'll reconcile the capabilities of digital lawyering and human lawyering. That means I thought a lot about the differences between what it means to have a brain composed of flip-flops and P/N junctions, on one hand, and neurons, on the other.  And as it's where science melts into philosophy, it's just made for metaphors that live for a time between drafts 1.2 and, say, 1.9.  Alas, they ultimately have to be sacrificed in the interest of the reader's patience with the filigrees of my cranial neurons.

The risk of metaphor overload is highest when you are wrestling with the very concepts of complementarity, irreconcilability, paradox, and irreducibility. Those are at the core of what I think is the difference between not just thinking like a human versus a machine, but also being like a human versus a machine. Hence, my existentialist turn. I am more than the physical or social properties a third person could observe about me. What makes me "me" is that I am capable of having an attitude about my own objective existence, that I am engaged practically in the world, that I am a subjective agent capable of action by way of my own will.  Give that one a try, ROSS.  Unless a human like me programs you otherwise, you are doomed to be the two-handed lawyer ("on the one hand; on the other hand") that business people despise.

So I'm fascinated with the ways we can try metaphorically to capture the complementarity of just thinking or even deciding, on one hand, and acting, on the other. Think about that moment after you've clicked "Start New Submission" on SSRN, uploaded the draft and the abstract, chosen your journals, and are about to submit. If you are like me, that is the equivalent in academia to stepping out of the airplane in sky diving. No amount of thinking about it substitutes for the act itself.

I wrote and never used, much less edited out, a metaphor from mathematics.  "Discrete and continuous" is another irreconcilable complementarity. In mathematics, every real number is something of an illusion.  The simplest numbers to understand are “natural” or “counting” numbers like 1, 2, or 154.  They are discrete.  You could use your and other peoples’ fingers and toes to represent them.  Rational numbers are slightly more abstract: they are numbers that can be expressed as a ratio of two integers.  A fraction like 1/9 is rational, even though its decimal representation is an infinite string of ones to the right of the decimal point.  Irrational numbers are those that cannot be expressed as such a ratio; examples are the square root of 2, pi, and e, the base of the natural logarithm.  Real numbers are the continuum of all numbers that are not imaginary, i.e., any number you could think of that is rational or irrational or sits somewhere between any two rational or irrational numbers.

But that is the very point of the illusion of continuity.  The mathematician Richard Dedekind showed that a real number is a cut or a slice – in the jargon of calculus, a limit or asymptote – that separates all the numbers below it from all the numbers above it.  In the case of a real number that is not rational, the set of all rational numbers below it does not have a greatest element; it merely converges on the real number. It is, paradoxically, both a spot on the continuum of all numbers and not a spot in the sense that you can ever actually reach it.
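
For anyone who wants the picture in symbols, here is the standard textbook formulation of the cut for the square root of 2 (my gloss, not anything from the draft):

```latex
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
% A Dedekind cut: partition the rationals into a lower set A with no greatest
% element and an upper set B with no least element; the cut "is" the square
% root of 2, even though neither set ever reaches it.
\[
A = \{\, q \in \mathbb{Q} : q \le 0 \ \text{or}\ q^2 < 2 \,\}, \qquad
B = \{\, q \in \mathbb{Q} : q > 0 \ \text{and}\ q^2 > 2 \,\}
\]
\[
\sqrt{2} = \sup A, \quad \text{yet } \sup A \notin A \ \text{and} \ \inf B \notin B.
\]
\end{document}
```

The lower set creeps up on the square root of 2 without ever containing it - the "spot that is not a spot" in the paragraph above.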

I wanted to say in the article that one's passage through time and the actions one takes at any moment µ in that passage create a similar illusion of discrete and continuous. A single moment µ in which we act separates the set of all past moments from the set of all the future moments. All past events converge on µ, a moment which is not a member of the set of all past moments. And in that moment µ randomness, luck, or will may operate.  Yet we are inclined to see past and future moments as one continuous set, mostly because we cannot re-experience µ.  By the time we are considering µ at moment β, µ is merely a member of the set of all moments preceding β.

I didn't say it then.  Now I will.  "Status: Publish Now."  "Publish."  Click.  Oh no. I hope the parachute opens.

Posted by Jeff Lipshaw on July 29, 2019 at 05:05 PM in Blogging, Legal Theory, Lipshaw | Permalink | Comments (2)

Wednesday, September 12, 2018

Tacit Citation Cartel Between U.S. Law Reviews: Considering the Evidence

In my previous posts, which draw on my co-authored paper ‘The Network of Law Reviews: Citation Cartels, Scientific Communities, and Journal Rankings’ (Modern Law Review) (with Judit Bar-Ilan, Reuven Cohen and Nir Schreiber), I described how the metrics tide is penetrating the legal domain and presented the findings of our analysis of the Web of Science Journal Citation Reports for law reviews. We studied a sample of 90 journals, 45 U.S. student-edited (SE) and 45 peer-reviewed (PR) journals, and found that SE generalist journals direct and receive most of their citations to and from other SE journals. We argued that this citation pattern is the product of a tacit citation cartel between U.S. SE law reviews. Most of the comments focused on the following valid point: how can we distinguish between a tacit citation cartel and an epistemically driven scientific community (generated by common scientific interests)? We argue, generally, that in tacit citation cartels the observed clustering should extend beyond what can be explained by epistemic considerations, reflecting deep-seated cultural and institutional biases.

In the paper we provide several arguments (both quantitative and qualitative) in support of our tacit cartel thesis. While none of them is conclusive in itself, we think that jointly they provide robust support for our thesis. First, we considered whether the clustering of U.S. SE journals could be explained by geographic proximity. Our sample included 57 U.S. journals, consisting of all 45 SE journals and 12 PR ones. Statistical analysis reveals, however, that U.S. PR journals do not receive more citations than non-U.S. ones. Second, we also analyzed separately the sub-sample of generalist (PR & SE) journals, but the citation pattern remained the same. Third, we considered the hypothesis that U.S. SE journals constitute a separate epistemic field – perhaps due to their emphasis on U.S. law. We rejected this explanation on qualitative grounds, primarily because U.S. SE journals have become increasingly theoretical and interdisciplinary over the past few years (Harry T. Edwards, ‘Another Look at Professor Rodell's "Goodbye to Law Reviews"’; George L. Priest, ‘The Growth of Interdisciplinary Research and the Industrial Structure of the Production of Legal Ideas’). This trend should make PR journals very relevant to U.S. legal scholarship. Fourth, one may try to explain the citation pattern by assuming a deep difference in the quality of the papers published in the two journal groups. We do not think this argument stands up to scrutiny.  For one thing, the selection practices of SE journals have been subject to strong critique (e.g., Richard A. Posner, ‘The Future of the Student-Edited Law Review’ (1995)). This critique casts doubt on the thesis that there is a strong and systematic difference in the quality of papers published in the two categories. We also examined this claim empirically by looking into the citations received by the 10 top-cited articles published in PR journals in our dataset. We found that even these highly cited papers received only a small percentage of their citations from SE journals.

Finally, we also considered the accessibility of PR journals in Lexis, Westlaw and Hein. We found indeed that these databases only offer access to approximately half of the PR journals (see Table F, technical appendix). However, we do not think that this fact provides a convincing explanation of the phenomenon we observed. We believe that most U.S. law schools have access to digital repositories that include the PR journals in our sample. A quick search of three U.S. law library sites demonstrates as much (https://www.law.pitt.edu/research-scholarly-journals; https://library.columbia.edu/find/eresources.html; http://moritzlaw.osu.libguides.com/legalresearchdatabases). Rather than providing an explanation of the citation pattern we found, this claim is itself a manifestation of the institutional culture that facilitates the citation bias we identify. The comment we received from an AnonymousLawLibrarian (suggesting that U.S. legal academics, unlike their counterparts in the social science disciplines, rely only on Westlaw/Lexis/Hein or in-discipline journal research) seems to support our interpretation.

We think that this citation pattern is epistemically problematic because it hinders the flow of ideas. Further (and independently of the question of whether or not we are right in describing it as a tacit cartel), it can also influence the journals’ rankings. I will discuss this latter question in my next post.

Posted by Oren Perez on September 12, 2018 at 02:10 PM in Article Spotlight, Howard Wasserman, Law Review Review, Legal Theory | Permalink | Comments (7)

Thursday, July 19, 2018

Now (or soon to be) in Paperback: Beyond Legal Reasoning: A Critique of Pure Lawyering

A brief pause for a semi-commercial announcement.  Actually, if we consider the royalties to which I am entitled from Routledge after deducting the cost of a professional indexer, there's very little commercial about it from my standpoint.

Beyond Legal Reasoning: A Critique of Pure Lawyering first takes a granular look at "thinking like a lawyer" - its logic and theory-making - and then at the perils of succumbing to it when one is not in the traditional "lawyer as warrior" mode.  My original title, Unlearning How to Think Like A Lawyer, still lingers in various descriptions.

Apparently the law library market is price inelastic and the publisher waits eighteen months before putting out a paperback edition.  That is now available for pre-order (release date: Aug. 24) at a fraction of the hard cover price.

But ... most of us write to be read, not for the several hundred dollars of royalties that an academic book generates for the author (translating into cents per hour for the time creating it).  If you are interested in a free taste, the preface is available on SSRN.   Or the entire book is available for free at any of these fine libraries.

Or, after the break, you can watch the presentation from last April at the Harvard Law School's Center for the Legal Profession:

Posted by Jeff Lipshaw on July 19, 2018 at 06:16 AM in Books, Deliberation and voices, Legal Theory, Lipshaw, Teaching Law | Permalink | Comments (0)

Monday, July 09, 2018

Coase and Fireworks

In my continuing effort to demonstrate what the mundane world looks like through the eyes of a nerdy law professor, today we will talk about Ronald Coase, recipient of the Nobel Prize in economics, and fireworks.

Before we had dogs, I liked fireworks, at least the professionally staged kind.  Up here in Charlevoix, Michigan, every year in late July the town has a week-long event called Venetian Festival.  The highlight on Friday night is a spectacular fireworks show out over the lake for which our deck is effectively a front row seat.  For the last seventeen years or so, however, I have not been out on the deck nor have I seen the fireworks.  No, I am back in a closet with the door closed, comforting our dog(s) who is/are going batshit crazy.

With the professionally staged fireworks, at least I know when to go into the closet and when I can come out.  It's the private ones that really drive me crazy.  In Massachusetts, where we live nine months of the year, I don't have to worry.  Private fireworks are illegal, end of story.

Here in Michigan, however, we have to deal with one aspect of the state legislature's Year of Living Stupidly.  In 2011, the same year it passed the law eliminating the requirement that motorcyclists wear helmets, Michigan first permitted the sale of consumer fireworks in the state.  In 2013, it amended the law to permit local units of government to ban the use of consumer fireworks, but not on national holidays or the day before or the day after a national holiday.  (It also allows any city in the state with a population greater than 750,000 - there is only one - to ban them between midnight and 8 a.m. on such holidays, and only between 1 a.m. and 8 a.m. on New Year's Day.)

The reasons for my sitting on the beach and, like a complete dork, reading Ronald Coase's The Problem of Social Cost follow the break. If he had the house next door, and had the same issues I do, what might he say about it?

Our local unit of government, the City of Charlevoix, and the surrounding Charlevoix Township each enacted ordinances banning the private use of consumer fireworks to the extent permitted by the Michigan statute.  Thus, for three of the days we are here during the summer (July 3-5), we have to deal with the possibility that some *)&(*^*^&$ is going to be responsible for random and unexpected fireworks activity that turns our dogs' brains into petroleum jelly and causes them to (a) howl madly, and (b) scurry around the house wildly under beds, couches, and other areas of perceived safety.  

The rest of the summer we can be fairly sure that our nearby neighbors won't be using consumer fireworks because of the local ordinance.  If they did so out of a misunderstanding of the law, and they were to ignore our friendly suggestion that they obey the law, we would be within our rights to call out Charlevoix's Finest.

Here's the problem.  If you happened by my earlier discussion of riparian rights, you saw this Google Earth picture. It so happens that I took the above picture just about at the tip of the red arrow.  The city proper is largely to the left (west) of the tip of the arrow.  The township pretty much ends at the other end of the arrow.  Everything else to the right, including that peninsula (known as Pine Point) that looks sort of like India, is in Hayes Township.  Hayes Township has never passed an ordinance banning fireworks.  So just after it gets dark, for much of the summer, we are treated to a fireworks display that carries very nicely, sound and otherwise, across the mile or so to our house.

Where our dogs, having dog-like senses of hearing and smell, proceed to have their brains turned into petroleum jelly and thereupon to (a) howl madly, and (b) scurry around the house wildly under beds, couches, and other areas of perceived safety.

Now, I know that all of this fireworks activity under the current legal regime is the result not of, as Coase might hypothesize, a railroad needing to run trains even if sparks cause crops to catch fire, or industries needing to burn fuel even if it causes air pollution nearby.  It is the product of market activity in which the total value of production exceeds the cost of such production, and consumer activity in which the utility engendered by playing with toys that make loud booms and bright flashes exceeds the cost of such activity, at least for those engaged in it.

The social cost occurs across the lake at my house, where I am contemplating the purchase of doggy Xanax.

The popular takeaway - the "Coase Theorem" - applied to my situation is this.  In a world of zero transaction costs, the total net social welfare of setting off fireworks, on one hand, and my distress in dealing with the dogs, on the other, does not depend upon the initial allocation of rights.  Assuming that we valued noise and peace in the appropriate ranges, either the celebrants would pay me for the right to have the rockets' red glare or I would pay them to cease and desist.

It works like this. Let's assume that the pricing system works costlessly and the only actors are A across the lake, who wants to use fireworks, and me.  The cost to me of insulating my house against fireworks noise is $100.  If the default rule is that the fireworks can't be used without my consent, and the value to A of his (and it's always a "he") activity is more than $100, then A ought to be willing to pay me up to $100 to shoot off fireworks (the cap being $100 because for that amount he could simply pay to insulate my house).  If there is no regulation against fireworks, and I value silence at more than $100, I ought to be willing to pay A up to $100 to have him stop.  In short, with a smooth and costless pricing system, you get the same result regardless of the initial legal entitlement. But, of course, the idealized world of zero transaction costs doesn't exist, and so even if the world only consisted of A and me, and the transaction costs of paying off A create a total cost to me that exceeds the value of silence, I won't do it, even if without transaction costs it would have been the more efficient result.  And it's not just A and me.  It's many of the good citizens of Hayes Township and many of the good citizens of Charlevoix.
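
Here is a toy sketch of that logic (my illustration, not Coase's and not anything in the article; the names and dollar figures are made up). Like the example above, it assumes silence is worth more than $100 to me, so one way or another I will secure it.

```python
# A toy model of the bargain over fireworks noise.  Illustrative assumptions:
# A values shooting off fireworks at `value_to_a`; insulating my house (which
# buys me silence either way) costs `insulation_cost`; striking any bargain
# with A costs `deal_cost` in transaction costs.

def fireworks_happen(value_to_a, insulation_cost, deal_cost, a_has_right):
    """Do the fireworks go forward under a simple Coase-style bargain?"""
    if a_has_right:
        # I either insulate, or pay A (his valuation plus the deal cost) to
        # stop, whichever is cheaper.  Fireworks stop only if the payoff is cheaper.
        return not (value_to_a + deal_cost < insulation_cost)
    # A needs my consent; he pays me enough to cover the insulation (plus the
    # deal cost) only if the fireworks are worth that much to him.
    return value_to_a > insulation_cost + deal_cost

for deal_cost in (0, 75):
    outcomes = [fireworks_happen(150, 100, deal_cost, r) for r in (True, False)]
    print(f"deal_cost={deal_cost}: A has right -> {outcomes[0]}, I have right -> {outcomes[1]}")
```

With zero transaction costs the outcome is the same under either entitlement (the fireworks go forward and $100 gets spent on insulation); with a $75 deal cost, who holds the initial right starts to drive the result, which is the point of the paragraph above.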

Is there a market solution to my problem?!!?  It turns out that Coase didn't articulate a theorem (or at least that wasn't his object in the article).  There were no helpful hints on how to articulate a default rule so as to minimize transaction costs with the aim of an optimal allocation of resources.  In fact, he never used the word "theorem" or the term "transaction costs."

I recommend Pierre Schlag's critique of the morphing of what Coase said in Social Cost into neo-classical law and economics.  At the beach the other day, I confirmed Pierre's statement that you can get the entire basis for what others now call the Coase Theorem by page 8 of Coase's original 1960 article and skip the remaining 36 pages (actually there's a piece of it at pages 15-16 as well).  Pierre's critique is not of Coase's article. His point was that the popular takeaways - mainly Chicago Law and Economics - have transformed Coase's point into something else entirely. It wasn't Coase who developed the L&E focus on using neo-classical economics to justify legal rules, or to focus on the reduction of transaction costs in pursuit of an idealized efficient solution.  Moreover, in a different piece, Pierre observed that the L&E approach to transaction costs itself is neither theoretically intelligible nor operationally applicable.

To the contrary, according to Schlag (and, by my reading of Coase, he is right), Coase had a far different goal in Social Cost. Coase wanted neo-classical economics to take account of the real world, in particular the effect of law and legal institutions on resource allocation.  Coase's main object was to criticize the prevailing acceptance among neo-classical economists of the idea of Pigouvian taxes.  He wanted to demonstrate the problem with Pigou's approach to externalities - namely, to impose taxes or bounties to the extent that the social cost of an activity exceeded the private cost to the actor.  

Coase was skeptical of Pigou's entire approach.  The bounties or taxes were likely to be overbroad.  Indeed, the focus on making an actor's private costs equal to the total social cost of the activity was misplaced.  In the foregoing example, suppose the social cost of fireworks noise is $200 to me.  Coase criticized the knee-jerk remedy of merely taxing the activity in the amount of $200, because it is possible, in an appropriately free market, that it would only cost $100 to achieve an optimal allocation of resources. In short, the appropriate way to judge externalities (Coase didn't use that term either) was to assess the total effect on social costs both for the actors and those affected by the actors, and not simply to add costs to deter the unwanted activity.

But, wait. If the market is not going to work, am I out of luck?  I don't think so.

If Professor Coase lived next door and I were to walk over there and find him, like me, huddled in a closet with his batshit crazy dogs, I don't think, based at least on what he said in The Problem of Social Cost, that he'd rule out the idea of having government rather than the market decide how resources are to be allocated. Firms get organized when there are opportunities for value-enhancing transactions, but only under a scheme where less expensive intra-firm administrative costs substitute for higher costs of market transactions. And then there is the case of something like fireworks noise, "which may affect a vast number of people engaged in a wide variety of activities" and so "the administrative costs might well be so high as to make any attempt to deal with the problem within the confines of a single firm impossible.  An alternative solution is direct Government regulation."  Here, Coase observed that "[t]he government is, in a sense, a super-firm (but of a very special kind) since it is able to influence the use of factors of production by administrative decision."  Coase pointed out that the "government is able, if it wishes, to avoid the market altogether, which a firm can never do."

That is an interesting point up here along the lake. Yes, government regulation can be overbroad and inefficient. 

But equally there is no reason why, on occasion, such governmental administrative regulation should not lead to an improvement in economic efficiency. This would seem particularly likely when, as is normally the case with the smoke nuisance, a large number of people are involved and in which therefore the costs of handling the problem through the market or the firm may be high.

But you have to get down to cases and not deal in abstractions. Coase thought economists and policy-makers over-estimate the advantages of government regulation, but all that does is suggest that government regulation should be curtailed. "It does not tell us where the boundary line should be drawn. This, it seems to me, has to come from a detailed investigation of the actual results of handling the problem in different ways."  The problem even with local government regulation is that it doesn't fully account for all of the social costs, because the board of supervisors in Hayes Township has not enacted the same ordinances as Charlevoix and Charlevoix Township, and parts of Hayes Township are closer to my living room than parts of my own city.

So, here I am, 1,778 words into this blog post, and discovering that, if Ronald Coase were my neighbor, I might well get him to join me in an effort to get the county or maybe the state government to understand there is a social cost to fireworks.  Not everything needs to be dealt with in terms of markets.

In this article, the analysis has been confined, as is usual in this part of economics, to comparisons of the value of production, as measured by the market. But it is, of course, desirable that the choice between different social arrangements for the solution of economic problems should be carried out in broader terms than this and that the total effect of these arrangements in all spheres of life should be taken into account. As Frank H. Knight has so often emphasized, problems of welfare economics must ultimately dissolve into a study of aesthetics and morals.

I suspect he'd have agreed with me that, for fireworks, as elsewhere, "[in] devising and choosing between social arrangements we should have regard for the total effect." We could gather up the dogs and all those suffering from PTSD and march on township hall to tell them just that.

Or maybe he would tell me that I had over-thought the issue and suggest reading more appropriate for the beach.

Posted by Jeff Lipshaw on July 9, 2018 at 09:54 AM in Deliberation and voices, Law and Politics, Legal Theory, Lipshaw, Property | Permalink | Comments (5)

Thursday, June 21, 2018

SCOTUS Term: Finding the Law, Abroad and at Home

Thanks to Howard for the invitation to blog! Amid the morning’s excitement over new opinions, I’d like to add a few thoughts to Cassandra Burke Robertson’s excellent post last week on Animal Science Products v. Hebei Welcome Pharmaceutical. Animal Science is a sleepy case in a mostly sleepy Term, but it brings up some deep issues, much deeper than the Supreme Court usually faces: what is the law, and how do judges find it?

Animal Science involved a price-fixing claim about Chinese exports of Vitamin C. The defendants said they’d been legally required to fix their prices, and China’s Ministry of Commerce agreed. To the Second Circuit, this was enough: so long as the Ministry’s position was reasonable, it was conclusive. (How could an American court instruct China’s government about Chinese law?) But to a unanimous Court, per Justice Ginsburg, the Ministry’s statement deserved only “respectful consideration”: it wasn’t binding, and U.S. courts would have to make their own judgments.

That all makes sense on the surface, but it raises at least three more fundamental concerns. Are legal questions like these all that different from ordinary questions of fact? Who do we trust to answer them? And what actually makes the answers right? When it comes to foreign law, issues like these aren’t always obvious—suggesting that the answers may not be so easy closer to home.


1. Legal questions and questions of fact. As the Court points out in Animal Science, foreign laws used to be treated as facts—they had to be pleaded and “proved as facts,” subject to rules of evidence and based on expert testimony or authenticated documents. As it turns out, these same rules applied to U.S. states—which were just as foreign to one another, except when the Constitution or Congress intervened, and which therefore needed proof of each other’s laws. (As I’ve argued before, the Full Faith and Credit Clause was mostly about these evidentiary questions: it helped establish what a particular state had said, and left it up to Congress to decide when other states should listen.) Sometimes even a state’s own laws got the factual treatment: courts could take judicial notice of public laws, but private bills were again matters for pleading and proof, as Chief Justice Marshall described:

“The public laws of a state may without question be read in this court; and the exercise of any authority which they contain, may be deduced historically from them: but private laws, and special proceedings of the character spoken of, are governed by a different rule. They are matters of fact, to be proved as such in the ordinary manner.”

Today we do things very differently. Federal and state courts take judicial notice of all kinds of American laws, and FRCP 44.1 and various state equivalents let them do the same for foreign ones. But we haven’t eliminated the basic problem of proving the law. Knowing that judges should answer these questions on their own—without simply outsourcing to juries, rules of evidence, or Ministry statements—doesn’t help us find any particular answers. If we need to know, say, whether French law allows extrinsic evidence of the contracting parties’ intent, should we look to translations of the statute book? To treatises and journal articles? To testimony by experts? And which translations, treatises, or experts should we trust?

2. Who do we trust? Giving only “respectful consideration” to the Ministry suggests that we should be sparing with our trust—making an all-things-considered judgment, looking at all the potential legal sources at once. But according to the Court, at least one kind of source gets special treatment. When a U.S. state court rules on an issue of state law, that ruling doesn’t just get “respectful consideration”; it’s considered as “binding on the federal courts.”

Why so? It’s easy to explain why federal courts might defer to Ohio courts on Ohio law, just as the Second Circuit would usually defer to the Sixth—they see more Ohio cases, so they probably know what they’re doing. But that doesn’t explain why the decisions would be binding, as opposed to just getting  extra-respectful consideration.

Maybe there’s something special about common-law courts. Maybe we might say, with Hale, that the decisions of our courts might be “less than a Law, yet they are a greater Evidence thereof than the Opinion of any private Persons, as such, whatsoever.” (When it came to the construction of “local statutes or local usages,” Justice Story in Swift v. Tyson would have agreed.) But that’s very different from claiming, as Justice Holmes later did, that whenever a state creates a supreme court it’s really creating a junior-varsity legislature, “as clearly as if it had said it in express words.” Some states might want their courts to establish the law of the state, but others might not. Georgia might want its courts to do general common law; Louisiana might want its courts to do its own civil-law thing; Canada, were it admitted as a state (as the Articles of Confederation once offered), might have its own apologetically polite take on the separation of powers. And if a legal system turns out to be very different from ours—say, with a complex network of informal councils and regional magistracies—we might have no idea which entities even count as its courts, let alone how much “respectful consideration” they’re supposed to be getting.

As I note in a draft paper on Finding Law, that’s one of the core problems with the Court’s notorious decision in Erie Railroad Co. v. Tompkins. Instead of looking to a state’s law to learn about its courts, Justice Brandeis did precisely the opposite—assuming, for bad theoretical reasons, that the law of a state is what the state courts say it is, because that’s just what courts get to do. But American courts don’t establish Chinese law when they decide cases like Animal Science. And they don’t necessarily establish American law when they decide their other cases, either. The powers of courts aren’t facts of nature, but society-specific questions on which different legal systems can disagree.

3. What makes the answers right? If courts can sometimes get the law wrong, what does it mean to get it right? How can we disbelieve the Chinese government about Chinese law, if Chinese law is just whatever the Chinese government actually does?

As Asher Steinberg points out in the comments, in some societies (like Venezuela or the former Soviet Union), government officials don’t always adhere to formal legal sources. Maybe these particular defendants’ hands were forced by Chinese law; but maybe the Ministry officials just issued them orders, the statute-books be damned. If that’s what the officials did, and if law depends on what officials do, then maybe their secret commands really were the law. (Here Steinberg invokes a great paper by Mikołaj Barczentewicz, to which Will Baude and I are currently at work on a reply.)

But law is more than what legal officials do. If the defense in the case were just ordinary duress, it wouldn’t matter whether the threats were backed by legal force (or whether, say, Al Capone had told them to fix prices for Vitamin C). Instead, the defense cited “principles of international comity,” which we usually extend to foreign governments as they’re legally constituted, and not to rogue officials on a frolic of their own. If the officials were supposed to be able to order price-fixing, under some applicable statute or common-law doctrine, then it wouldn’t matter so much if their order were secret or open. But if not—if the officials were departing from what everyone else in the Chinese system (judges, experts, law schools, and so on) would describe as Chinese law—then it’s hard to say that what they were doing was really lawful. That’s why we speak of places like the USSR as having had problems with the rule of law: because in those societies, the law wasn’t always what ruled. As far as diplomacy goes, we might want to respect official actions merely under color of law, just to avoid annoying the officials with whose governments we negotiate. Yet we still shouldn’t confuse official actions with the law—either abroad or at home.

Posted by Stephen Sachs on June 21, 2018 at 12:38 PM in 2018 End of Term, Civil Procedure, International Law, Legal Theory | Permalink | Comments (4)

Wednesday, May 09, 2018

Prejudice, Legal Realism, and the Right/Remedy Relationship

Last week, I sketched the contours of a criminal procedure puzzle that’s been on my mind lately. To briefly recap, the puzzle I’m exploring has to do with the unusual way in which courts conceptualize prejudice in two of criminal procedure’s most important doctrinal areas: (1) the Brady rule, which requires prosecutors to disclose (some) exculpatory evidence to the defense as a matter of Due Process, and (2) the Sixth Amendment right to effective assistance of counsel. For both of these rules, the Supreme Court has held that prejudice is an element of the defendant’s constitutional entitlement, which means that if no prejudice ensues from a prosecutor’s failure to disclose exculpatory evidence or from ineffective assistance of counsel (“IAC”), then no constitutional error occurs. By contrast, in most other areas of criminal procedure, courts consider prejudice only in specific remedial contexts—typically as part of harmless error review in appellate or postconviction proceedings—and do not characterize it as an element that restricts the scope of the underlying procedural rights.

Does this distinction make any practical difference? In The Path of the Law, Holmes famously defined law as “prophecies of what the courts will do in fact, and nothing more pretentious.” Inspired by this conception of law, one might dismiss the distinction I’ve identified as unintelligible or, at best, unimportant. After all, when applying any of the doctrines discussed here—Brady, IAC, and harmless error—appellate and postconviction courts will deny a remedy for alleged criminal procedure errors that are not prejudicial. Because our “prophecies” about how these courts will act do not vary across all three doctrines, it is tempting to conclude—as does Dan Epps in a provocative forthcoming article—that they are “functionally indistinguishable” from one another.

I respectfully disagree—with Holmes as to the nature of the right/remedy relationship, and with Epps regarding prejudice law. The grounds for my disagreement with each of them are intertwined. My concern with Holmes’ theory of rights and remedies—at least when applied to constitutional law (as Daryl Levinson and others have done)—is that it is unduly court-centric. By reducing the import of law to remedies supplied by courts, Holmesian legal theory obscures the fact that nonjudicial actors often make important contributions to rights enforcement. Likewise, I worry that Epps overlooks or underestimates the value of criminal procedure enforcement by nonjudicial actors when he equates the denial of appellate and postconviction remedies for nonprejudicial errors (via harmless error review) with the idea, reflected in Brady and IAC law, that nonprejudicial “errors” are not true legal errors at all. Relatedly, Epps also neglects the fact that trial judges often enforce rights that—unlike Brady and IAC, but like most criminal procedure rules—lack a prejudice element even when nonenforcement of those rights at the trial level would not prejudice the defendant and thus would not result in a remedy on appeal.

That’s my theory, anyway—what does the evidence show? In future posts I will show that, for Brady and IAC, (1) there are a number of potentially valuable enforcement mechanisms besides appellate and postconviction remedies, but (2) the prejudice element that the Supreme Court built into the definition of both rights has compromised the efficacy of these alternative enforcement strategies. Specifically, the built-in prejudice rule for Brady undermines, either directly or indirectly, (1) the scope of pretrial disclosure required of prosecutors by the Constitution, (2) the scope of disclosure required by professional ethics rules for prosecutors, and (3) efforts by trial judges to order prosecutors to fully disclose all exculpatory evidence without regard to prejudice. And for IAC, the Supreme Court’s prejudice requirement stands in the way of (1) prospective actions challenging chronically underfunded indigent defense systems through class actions or other devices and (2) attorney malpractice suits by criminal defendants.

Stay tuned as I build my case for these claims in later posts. In the meantime, please send your comments if you think I might have missed other potential lines of argument or would otherwise like to share your thoughts. And thanks to those of you who previously commented on the first installment!

Posted by Justin Murray on May 9, 2018 at 06:25 PM in Constitutional thoughts, Criminal Law, Legal Theory | Permalink | Comments (4)

Wednesday, May 02, 2018

Prejudice Rules and Criminal Procedure Enforcement

Hello! As Howard mentioned, I’ll be contributing to the blog this month as a guest. Thanks to Howard and Richard (Re) for the opportunity.

By way of introduction, my research focuses mainly on constitutional remedies and other mechanisms for enforcing constitutional rights. As a former public defender, I’m especially interested in constitutional criminal procedure and the various regulatory systems it has produced to bring about compliance with its strictures. These regulatory systems have failed in many different domains of criminal procedure. But few have failed as spectacularly as those pertaining to prosecutors’ evidentiary disclosure obligations under Brady and the right to counsel, as recent work by Jason Kreag, Eve Primus, and others has shown. Through a series of posts over the course of the month, I will ask why these two enforcement regimes have fared so badly, how we can make them better, and what broader implications this analysis may have for constitutional law and theory.

In particular, I’d like to explore the possibility that the failure of these regimes stems in part from an anomalous legal premise that the Supreme Court has embraced in relation to Brady and the right to counsel but that courts have rejected in virtually every other area of criminal procedure. In its cases involving Brady and the right to counsel (more specifically, the right to effective assistance of counsel), the Supreme Court has held that no constitutional violation occurs unless the defendant proves that the alleged error prejudiced the defendant in the sense that it may have altered the outcome of the proceeding. Simply put, for these two rights the Court has held that no harm means no foul—no matter how extensively the prosecutor suppressed exculpatory evidence or how egregiously defense counsel performed in representing the defendant. No other significant area of constitutional criminal procedure works this way. To be sure, appellate and postconviction courts generally can (and routinely do) consider prejudice when applying the harmless error doctrine to decide whether criminal procedure errors justify setting aside the defendant’s conviction or sentence. But the harmless error doctrine presupposes that an error occurred regardless of whether that error caused prejudice. By contrast, no prejudice means no error under the Supreme Court’s Brady and effective assistance precedents.

Is this a distinction without a difference? If the defendant is going to lose on appeal anyhow, due to her inability to show prejudice, does it really matter whether the court rejects the defendant’s claim on the theory that the lack of prejudice (1) means that no constitutional error occurred (as the Brady and effective assistance doctrines hold) or (2) disentitles the defendant to the remedy of reversal (as the harmless error doctrine holds)?

I think it matters a great deal, for reasons I’ll describe in future posts. I will also touch on some larger theoretical implications—regarding the nature of the right/remedy relationship, departmentalism, and other topics—that I hope will interest readers who do not ordinarily follow doctrinal debates in criminal procedure. Please share your initial thoughts in the comments section. And stay tuned!

(Note: this post was edited on 5/7/2018 to fix the URL for the last source cited.)

Posted by Justin Murray on May 2, 2018 at 11:54 AM in Constitutional thoughts, Criminal Law, Legal Theory | Permalink | Comments (6)

Monday, March 19, 2018

Writing is Architecture First, Interior Design is Secondary: On Trains, Houses & Pyramids

That's a variation on Hemingway, again. I posted a few days ago a fun, though a bit random, list of quotes about writing (oh the Internet, where curating quotes has become the soul-less pastime of too many who've never actually read those they quote. May we always quote soulfully is my wish to us prawfs and writers at large...). Hemingway said prose is architecture, not interior design, and that the Baroque is over. I think he meant that the substance of what you want to say needs to guide the writing and that you need to write in a punchy, concise way, avoiding fluff for merely decorative purposes. Say what you mean and mean what you say and get rid of all the garnish. I like garnish, and I think interior design is important too. I'd even argue for bringing a bit of Baroque back (Bach!), carefully. But I completely agree that the structure is first and foremost in writing a good article or book. The bare bones are the piece of the writing puzzle that needs to be done right.

Today I spoke with my seminar students about their research projects, and I thought I'd offer here, as a second installment of posts about writing, the metaphors I use with my students to help us think about structure. One of my favorite teachers in law school, who later became one of my doctoral advisors, was Martha Minow. I remember her telling us in a seminar on law and social justice, similar to the one I teach today, that you can write a house or a train. I think she said houses are what books look like and trains are articles. I don't agree with that division; I think both articles and books can be houses or trains. But the visual is one I've always found useful in thinking about what I am doing and how to build my project. If you are building a house, you take the reader with you through a pathway into a place where you have a nice entrance, a main hall and some public spaces, and then doors and windows into rooms, each holding an interesting set of ideas about a related topic. Together the house makes sense, but each room also stands on its own. If you are building a train, you think linearly about your project. It could be chronological, or it could be a problem in search of a solution, where the solution unfolds as you present and analyze layers of evidence, perhaps empirical data, theoretical arguments, policy claims. To the houses and trains I added today in class the visuals of pyramids and reverse pyramids. In every discipline, a good portion of research involves lumping or splitting. In legal scholarship, insights often come from taking a broad issue, the broad base of a pyramid, and classifying and regrouping the issues to show how we actually have separate questions emerging from different subcategories, and these should be addressed distinctly. We also often have insights when we look sideways, from a reverse pyramid's narrow tip into horizontal fields, related topics that offer new insights. Research is often an import-export business.

I don’t know if these visuals are useful only to me or to others as well, but I’ve found that sketching my next writing project - actually drawing stuff, not just outlining - gets me into better architectural shape, and only then can I begin to think about the décor.

Posted by Orly Lobel on March 19, 2018 at 05:22 PM in Blogging, Legal Theory, Life of Law Schools, Odd World, Teaching Law | Permalink | Comments (1)

Wednesday, July 05, 2017

SCOTUS OT16 Symposium: How to Argue About Personal Jurisdiction

Cassandra’s post below strikes me as basically right: after a long drought, the Court is paying serious attention to personal jurisdiction. So it’s worth looking at the state of the field.

The personal-jurisdiction debates I’ve seen—on blogs or Facebook posts, in email chains or in briefs and opinions—invoke a wide variety of different arguments. What’s striking, at least to me, is a lack of substantial attention to determining what counts as a good argument—what makes particular claims about personal jurisdiction either true or false. (As noted below, this is part of a broader failing in constitutional scholarship, effectively discussed in Chris Green’s work-in-progress on constitutional truthmakers.) In other words, a great many personal-jurisdiction arguments seem to be largely talking past each other, rather than joining issue on something we can resolve.

 

For example, many arguments I’ve seen are openly prudential. They argue that upholding (or denying) jurisdiction in such-and-such a case would be a good policy idea, that it would make the legal system better rather than worse, that it would open courthouse doors to sympathetic plaintiffs or lift heavy burdens from sympathetic defendants. But the law does lots of things that are terrible policy ideas, in all sorts of ways: just think of the tax code. So it’s not clear why we should feel confident that any particular good idea would be the right answer on the law—or that any given bad idea is therefore the wrong answer on the law.

Other arguments root themselves in judicial doctrine: personal jurisdiction is present or not because the courts have so held, or because the best reconciliation of their past decisions would so hold, or (to be more Holmesian) because that’s what they’re most likely to hold in the future. On the most extreme account, personal jurisdiction is whatever the courts say it is, so it’s impossible for the courts to be wrong. But many people who deploy these arguments seem to use them to criticize judicial decisions—as if the courts have somehow made mistakes in predicting their own rulings. And even paying due respect to accumulated doctrine, what the courts seem to be saying here is that personal jurisdiction isn’t whatever they say it is: they keep rooting their jurisdictional holdings in other legal rules, with sources external to judicial doctrine alone.

Usually courts root their holdings in the Due Process Clause, ostensibly as generous here as elsewhere (“Turn it over, and turn it over, for all is therein”). But here, too, there’s little effort spent on identifying what counts as a good due-process argument—on what makes claims about jurisdiction-being-consistent-with-due-process true or false. It might involve the defendant’s burden, or the state’s legitimate interests, or fundamental fairness, or a political-theory concept like sovereignty, or history-and-tradition, or some complicated weighted sum of the above. (And over all of these looms the ghost of Pennoyer, which still casts its dark shadow over the U.S. Reports no matter how often academics declare that it was killed off, once and for all, by Insurance Corp. of Ireland or by International Shoe.)

Put another way, the same inattention to truthmakers that we see in con law debates shows up in personal jurisdiction too. This makes some sense, because personal jurisdiction is all about the scope of the powers exercised by various state or federal officials; that’s a topic in small-c constitutional law, whether or not it’s actually resolved by the contents of the U.S. Constitution. But it also explains some of the pathologies of personal-jurisdiction scholarship, because members of different schools will insist loudly on particular priors—the role of interstate federalism, the needs of plaintiffs, the apparently prophetic authority of von Mehren and Trautman—without trying to explain why other people ought to be convinced of them too, on grounds that they might share. Civil procedure folks, who often imagine their field to be more rigorous and determinate than that of their con-law colleagues down the hall, have no escape from stating and defending their constitutional commitments.

The best way to understand the current confusion is probably to see where it came from. On my reading of the history, the phrase “due process of law” wasn’t supposed to enact substantive standards for jurisdiction—as opposed to a means of enforcing standards supplied by other sources, such as general and international law. Trying to squeeze detailed jurisdictional rules out of those four words is like trying to squeeze blood from a stone. So it shouldn’t surprise us that, after nearly a century of misattributing complex general- and international-law rules to a single phrase in the Constitution, we’d find our underlying jurisdictional principles hard to state or explain—much less to apply to new circumstances, or to ground in more general understandings of the law.

Likewise, it’s not surprising that standards derived from older doctrines of general and international law might prove somewhat awkward, from a policy perspective, in an era with more extensive cross-border activity. That’s why jurisdiction might be an area most properly addressed by statute. Looking to some future decision of the Court to sort everything out for us is a false hope: nine Justices and their clerks don’t have enough time to work out good policy solutions for all of America, and they also lack the legal authority to try. Congress may have the right to make certain kinds of arbitrary compromises, in pursuit of rough justice, that courts in our system don’t. Failing that, the courts will continue to muddle through. I wouldn’t call this pessimism, so much as appropriate caution about what judges and courts can properly achieve.

But it would help, in the meantime, if we who think and write about the subject were better about clarifying our terms, and about trying to argue with rather than against one another. If we think a result is bad policy, we should say that it’s bad policy. If we think that a holding is inconsistent with the deep principles of International Shoe, we should say that instead, and defend why those principles should matter to those who view them with indifference. And if we think that a particular decision is wrong on the law, we should be clear about what we mean by that, and on the sources of the legal rules that we invoke. Doing all this may not lead to consensus or agreement, at least not right away; but at least we’ll be talking about the same thing, which is the first step to understanding it.

Posted by Stephen Sachs on July 5, 2017 at 11:43 AM in 2018 End of Term, Civil Procedure, Constitutional thoughts, Legal Theory | Permalink | Comments (3)

Thursday, August 11, 2016

Copyright Doctrine: IPSC2016

IPSC - Breakout Session II - Copyright Doctrine

Summaries and discussion below the break. If I didn't know the questioner, I didn't guess. If you asked a question and I missed you, feel free to identify yourself in the comments.

Copyright State of Mind – Edward Lee

Reforming Infringement – Abraham Bell & Gideon Parchomovsky

Authorship and Audience Appeal – Tim McFarlin

Free as the Heir?: Contextualizing the Role of Copyright Successors – Eva Subotnik

 Leveraging Death: IP Estates and Shared Mourning – Andrew Gilden

 

Copyright State of Mind – Edward Lee

Offering a descriptive taxonomy about how state of mind is used in copyright law.

2d Circuit in Cariou v. Prince: transformative use, the first factor in the fair use test: objective state of mind

9th Circuit in Lenz v. Universal: DMCA 512(f) violation: subjective state of mind

State of mind re: copyright liability - it is often said that copyright infringement is strict liability. This differs from criminal law, where mens rea (criminal intent) typically matters.

If we look beyond liability, state of mind figures prominently in many different copyright doctrines. For example, authorship, including the intent to be joint authors (both objective indicia and subjective intent). We haven't traditionally considered the intent behind the lawsuit (are we protecting copyright or privacy, for example?), but Judge McKeown on the Ninth Circuit recently argued we should. For ISPs, we have the red flag cases, which have both subjective and objective elements.

Dave Fagundes: Property also deals with intent. Adverse possession and first possession have a whole mess of intent-related doctrines. Perhaps the ownership intent doctrines might help conceptualize these issues.

Pam Samuelson: Think about remedies as well. Innocent infringement, as well as willful infringement. It can play out also in relation to injunctive relief. Plaintiff's state of mind might matter with regard to obtaining injunctive relief. See also the new Kirtsaeng attorneys' fee case.

Ed Lee: Perhaps I should also look at the Supreme Court's patent cases.

Matthew Sag: If there is a universal theory about what state of mind should be for any of these doctrines, is there a logic that connects us to why we have copyright in the first place?

Ed Lee: I'm skeptical of a uniform theory. See, for instance, DMCA which is a negotiation between stakeholders.

Dmitry Karshtedt: My understanding is that civil liability is more objective than subjective, while for criminal liability, intent is more subjective. Should we see the same play out in copyright?

 

Reforming Infringement – Abraham Bell & Gideon Parchomovsky

We have an immodest goal of reforming remedies in copyright, more systematically including culpability in the analysis. Under the reformed regime, we would treat inadvertent infringement (where the infringer was unaware and couldn't reasonably have become aware) and willful infringement (blatant disregard of copyright law) differently from standard infringement (with a reasonable risk assumption).

The close cases are in the middle category of standard infringement. The default is standard infringement. Compensatory damages should be awarded in every case. Injunctions would be rare, and no restitution for lost profits would be awarded, in the inadvertent cases. We are trying to preserve statutory damages only for cases where it is difficult to prove actual damages. So the defendant in the standard infringement case could argue that statutory damages exceed actual damages.

Why bring it in? 1) Information forcing - incentivize owners of copyright to clarify ownership and terms of licenses. 2) Avoid overdeterrence of follow-on creation. 3) Increase fairness.

Ted Sichelman: In the patent context, we worry about transaction / licensing costs. It may matter for copyright as well. For example, if the work is an orphan work, why should I face huge potential liability?

Abraham: The inquiry should account for the difficulty of finding the copyright owner.

Ian Ayres: Does any kind of negligence go to willfulness because there is no reasonable basis for non-infringement?

Abraham: It's not clear how we would calculate such a thing: what counts as a reasonable evaluation of the legal risk? We're treating standard infringement as a residual category. But we are still arguing about this point.

Pam Samuelson: Have you been thinking about remedies re: secondary liability? The framework appears to deal with direct liability, but secondary liability cases may be the more complicated ones, where we wonder how culpable the platform is. The statute tries to grapple with this through 512.

Abraham: We didn't think about secondary liability until we talked with Lisa Ramsey last week.

Pam: Secondary liability is the area that needs the most reform!

Abraham: We'll have to bracket this right now. Secondary seems to follow primary, and we don't have a better model right now.

Shyam Balganesh: How much of your proposal unravels other parts of the system? Are you accounting for systemic effects? For example, if information forcing matters, why not deal with that through a heightened notice requirement? Do you think infringement is independently problematic, or is it the best place for achieving information forcing goals?

Abraham: Unlike information forcing, overdeterrence is harder to fix with levers in other places. This isn't the only way to accomplish these goals, and we don't claim that, or that it's the best way.

Jerry Liu: Is it necessary, from an overdeterrence standpoint, to distinguish between willful and standard infringement? Google Books was arguably willful infringement, but it was also efficient infringement.

Abraham: I think Google probably was a standard infringer, from a culpability standpoint. They took a fair use gamble, and they won.

Jerry: How about the MP3.com case?

Abraham: You can make an argument that format change / transferring medium is fair use, so standard.

 

Authorship and Audience Appeal – Tim McFarlin

Recent projects have looked at disputes between Chuck Berry and his piano player, and Orson Welles and a script-writer. In both cases, questions of audience appeal have been nagging at me, and I want to explore that further.

Can we better use audience appeal in the infringement context than the authorship context?

Audience appeal, from the Aalmuhammed v. Lee case (9th Cir. 2000), is an important factor. Audience appeal turns on both contributions (by potential coauthors), but "the share of each in the success cannot be appraised," citing Learned Hand. If that's right, and we can't evaluate audience appeal in the authorship context, is it a junk factor? If we can, how do we do it? And if we can, should we?

What do courts do with audience appeal? Mentioned in 21 cases, but 9 ignored it in reaching the decision. 9 found it weighed in favor of joint authorship, and 3 found it weighed against joint authorship.

How do we appraise it? If we find evidence of audience appeal from both contributions, at what point is the smaller contribution too small? 60/40?

Might audience appeal help with questions of infringement, for example in the Taurus / Led Zeppelin case? Might we consider the appeal of Stairway to Heaven v. the appeal of Spirit's Taurus as a reason for the public interest to weigh against injunctive relief? See Abend v. MCA (9th Cir. 1988).

Jake Linford: Perhaps talk to Paul Heald about his research on how musicians copy from each other. There is some potential danger in using audience appeal to decide infringement, injunctive relief, or damages, because that leads to a copyright regime where the party who is best-placed to take advantage of the works gets to use and make money with it, even if that party doesn't pay.

Peter DiCola: You are right to challenge Learned Hand. Audience appeal can be appraised. The question is whether it can be appraised convincingly. The part about where, in general, audience appeal matters may be too broad, and may not be at the heart of your paper.

Pam Samuelson: Some works have audience appeal, some don't, and it might not be relevant for unconventional expressive works. For example, the internal design of a computer program is not appealing. You may need to unpack works where appeal matters and where it doesn't.

Jani McCutcheon: Watch where trademark and copyright protection overlap on this issue.

 

Free as the Heir?: Contextualizing the Role of Copyright Successors – Eva Subotnik

This paper is inspired by two recent controversies surrounding Harper Lee and To Kill a Mockingbird: the appearance of Go Set a Watchman, and the decision by her estate to pull the student-priced paperback from the marketplace. Both of these stories are murky. Lee may not have been in her right mind when Go Set a Watchman was released, and the announcement from Hachette about the student-priced paperback suggests that both the estate and Lee wanted the low-priced version discontinued.

Should the motivations of the author or the heir matter for copyright decisions? Eva argues that they should. The law should be tougher on post-death copyright successors. We should treat them more like stewards, and require some duties on their part. If copyright ownership limits post-mortem access, heirs should be encouraged to take care.

What might stewardship mean? It has its origins in theology, traditionally applied to land. It's taken on a secular cast today. Stewardship suggests that the owner has duties as well as rights. Stewardship has something in common with the arguments of commons advocates: copyright should be forward looking, and concerned about future generations. Bobbi Kwall has argued that authors are stewards, and I think the idea should be applied to heirs as well. Unlike authors, publishers, and distributors, who did work with the work, successors step in as recipients of a gift, and perhaps they should step into some duties.

Application: Eva doesn't argue for a statutory change, and it's not clear stewardship would change the analysis of the Harper Lee issues, but stewardship could change the fair use analysis, for example with biographers and scholars. When the heir owns the sole copy of a work, stewardship could matter [JL: unclear to me how]. Perhaps stewardship could also allow authors to better shape the handling of their legacy. [JL: Doesn't the termination provision already exclude wills?]

Brad Greenberg: A potential disconnect between assignments and statutory heirs of termination rights. What if the author's assignee is a good steward, and the children are poor heirs, from a stewardship standpoint? Is Stewart v. Abend's analysis of the renewal right a problem for your analysis? Should we also apply stewardship duties to non-author copyright owners?

Eva: To my mind, a post-death successor gains enhanced prominence in managing the copyright after death, and I'm trying to say something specific to that group of copyright owners.

Dave Fagundes: I like the idea of stewardship, but it's still inchoate, and I can't tell to whom the steward is responsible. The work? The public? The author's intent? What if authors wanted their families to be taken care of?

Eva: You could also add the author's legacy, which may differ from author's intent. [JL: This reminds me of Mira Sundara Rajan's project from the first breakout session.] 

Ed Lee: Perhaps the literature on the moral right of integrity could also be helpful; it is more about legacy than children.

Giancarlo Frosio: A 2007 French case might be helpful. See also Kant.

 

Leveraging Death: IP Estates and Shared Mourning – Andrew Gilden

Scholars seem to distrust claims by estates and heirs, but they tend to succeed in advocating for statutory change and in winning cases before the courts. But I have found some recent claims that sound in mourning and grief that perhaps we shouldn't discount in copyright and right of publicity cases.

IP Narratives that are traditionally invoked:

1) Anti-exploitation. Randy California was badgered for years to sue Jimmy Page, but his heirs stepped in to claim some recognition for him.

2) Family privacy. James Joyce / J.D. Salinger estates

3) Purity narratives. Limit downstream uses, especially those that raise concerns about sexual purity.

4) Inheritance. It's all that the author left to the family.

5) Custody (like child custody). Children as caretakers of the work.

Copyright scholarship tends to ignore these types of claims, but we see them invoked successfully in areas like family businesses, bodily disposition, organs and genetic information, digital assets like email, and succession laws dealing with omitted family members.

What would happen if IP took these interests seriously? Perhaps there is a desire for shared mourning and grief, both by authors' heirs and by fans. Fans circulate and disseminate works broadly as part of public mourning, but mourning families look inward, seek silence, and try to achieve some semblance of privacy. These interests might not be as irrational as we might think.

One solution might be to bring issues of estate planning more to the fore. Marvin Gaye and Frank Sinatra created a family business when they secured copyright, whether they meant to or not.

Rebecca Curtin: You've made a very sympathetic case, and you've repeatedly spoken about family. Do you mean family, or could you include designated heirs, like the Ray Charles foundation? What might that mean?

Andrew: We may need to think differently about those who inherit intestate and those who don't.

Brad Greenberg: The incentive theory of inheritance suggests that authors will create in part to benefit their children. But there could be a labor theory of inheritance: the work was the author's, like the children, and so it goes to the children. In addition, is this really about IP, or just copyright?

Andrew: Copyright and right of publicity. My take is more of the labor than the incentive theory.

Q: Why does the right of publicity survive death?

Andrew: Jennifer Rothman has a very good paper on this. Right of publicity is labelled as property, and property descends, so in some states it descends.

Peter DiCola: I enjoyed the presentation, and I don't ask this to upset the applecart, but what might the First Amendment tell us about these arguments about the importance of controlling meaning?

Andrew: I don't think these insights should change fair use outcomes, but my claim is that heirs' motivations are okay, especially in light of how they work in other areas of law. The emotional appeals are not inherently problematic. (Although I have some problems with the purity rationale.)

Jake Linford: Is this project normative as well as descriptive?

Andrew: It started out more descriptive, but normatively, I see no problem. Prescriptively, perhaps we could ask authors to be more clear about their intent at registration / protection, for example.

Giancarlo: Is there space for a moral rights style argument here? 

Andrew: Perhaps attribution is the best moral rights claim.

Giancarlo: Is there a mechanism if the composers of Blurred Lines had said no? Can you make the heirs grant a license?

Andrew: Blurred Lines is a declaratory judgment action - the derivative authors brought the case to foreclose liability.

Tim: The estate's emotional appeal in the Taurus complaint may have been somewhat strategic, trying to deal with the perception of greedy, rent-seeking heirs by promising to give money to sick children.

 

 

Posted by Jake Linford on August 11, 2016 at 06:37 PM in Blogging, Criminal Law, Information and Technology, Intellectual Property, Legal Theory, Property, Torts, Web/Tech | Permalink | Comments (0)

Tuesday, July 19, 2016

Black and Blue in Baltimore

Was it worth it? A judge, after a bench trial, just acquitted the third and highest ranking of the Baltimore police officers charged with killing Freddie Gray. So far there have been no convictions. Should the Baltimore State's Attorney prosecute the others? More generally, is there a duty to prosecute public officials, even if there is only a remote chance of success on the merits?

I think the work of Antony Duff might prove helpful here. He believes wrongdoers are a specific category of people identified by a duty that they are under: to answer to those they have wronged for their unjustified and harmful act. The duty to answer is, so Duff thinks, a feature of responsibility: wronging someone puts the wrongdoer in a relationship with their victim. The victim has the duty (not just the right, but—Duff believes—the duty) to call the wrongdoer to account; and the wrongdoer owes the victim a response: the wrongdoer has a duty to account for her wrongdoing by giving reasons to justify, excuse, or accept the blame for her wrongdoing, and then take action to expiate her wrong. Owing a response places the onus on the wrongdoer to come forward with her account; morally, she cannot just stand pat and hope no-one notices the wrong, or her responsibility for it.

Duff draws a line between ordinary moral wrongs and extraordinary criminal wrongs. What makes criminal wrongs so extraordinary, he thinks, is that they are wrongs that the public ought to take an interest in. Failing to buy a beer when it is your round is a wrong, but unless I’m one of the folks you are drinking beer with, it’s none of my business that you are stingy and selfish. Engaging in an act of domestic violence is a wrong, but even though it may occur in a private place, it is a wrong that affects the community as a whole, and which the public has an interest in seeing prosecuted. Moreover, the community enacts criminal laws to express the fact that it is the public’s business. People whose wrongs affect the community are not just ordinary wrongdoers; they are criminal offenders and have a duty to come forward to answer the community, to whom they are accountable, in a public forum, such as a trial.

Duff’s special significance as a theorist of punishment and criminal responsibility is (as Malcolm Thorburn points out) in identifying the trial (rather than the punishment) as the focal point of the criminal justice system. The trial is the centerpiece of accountability because it is a communicative forum. It is there, in public, that the offender answers to the community and (if the law provides) suffers public censure. Responsibility for wrongdoing demands (for Duff) that the offender answer to someone; responsibility for criminal activity requires that an offender answer to the public through the trial process. The result of the trial (conviction or acquittal) is secondary to calling the offender to account.

Duff’s view suggests that whenever the community plausibly suspects that someone is a wrongdoer, then both the community and the wrongdoer have a positive duty to discuss it: to demand and provide a rational accounting of the wrong. Where the wrong is one that touches the community as a whole, then the proper forum for such an accounting is the criminal trial.

Duff’s argument about communities and the criminal law is quite compelling. At the very least, it provides an important moral basis for criminal law: that it is the moral law of the public, the community; not just a set of wrongs that the politicians decide to sanction with an especially harsh or significant punishment. The wrongs of the criminal law are extraordinary ones which affect the community as a community. And when the wrongs are those engaged in by public officials, then the community and the state have an especial interest in ensuring that the official publicly accounts for those wrongs. (Duff has some radical and interesting things to say on this, which would take too much time here. See his Punishment, Communication, and Community at 183-17; see also Ekow Yankah, Legal Vices and Civic Virtues.) [As a side note, Duff, Yankah, and Thorburn are not just theorists of criminal law; what they have to say about criminal procedure, and in particular its relation to political theory, deserves much more attention in the world of mainstream American criminal procedure than it currently receives.]

So trying the other Baltimore officers involved in the Freddie Gray killing is not a waste of time: it is an important way to treat the community as wronged and the officers as responsible—as individuals who are capable of being held responsible and so have a duty to answer in a public forum. It is not enough: if there was a wrong, then the officers in addition deserve public censure and should make some form of reconciliatory act to the public and the victims—the Freddie Gray family. If the court fails to acknowledge the officers’ wrong, they still remain on the hook as wrongdoers if not as offenders. But now the legal system too is on the hook, for failing to provide an adequate forum, not only for accountability, but also for censure and expiation. Without these further possibilities, the community—the public, the people—are inadequately valued by the state, and will continue to feel that they have been denied the justice they deserve as equal members of the polity.

One final thought: in her excellent book, Prosecuting Domestic Violence, Michelle Madden Dempsey also discusses the role of the prosecutor in constituting the community. While she and Duff have important differences, Dempsey's discussion of the ways in which the prosecutor constitutes the community on behalf of the state, and so the prosecutor's duties to the community as a public official, is essential reading for anyone interested in this topic. I hope to say a little more about Dempsey's work in a later post.

Posted by Eric Miller on July 19, 2016 at 12:54 PM in Criminal Law, Deliberation and voices, Law and Politics, Legal Theory | Permalink | Comments (13)

Friday, February 05, 2016

The Rule of Law in the Real World.

This round of prawfsblawgging comes at an exciting and terrifying time for me: my first book, The Rule of Law in the Real World, comes out in a few days, courtesy of Cambridge University Press. It's an attempt to reconcile the philosophical, legal, and empirical literature on the ideal of "the rule of law," and show its symbiotic relationship with genuine legal equality. I think the official release date is February 11, although at least one person has already gotten her hands on a copy (before me!). Pre-orders are open (Cambridge, Amazon). I've also put up a website at rulelaw.net, mainly as a home for some cool interactive data visualizations---but I also hope to make it a live, ongoing thing, collecting other rule of law scholarship, data, and knowledge in general.

So the exciting part is obvious, but why terrifying? Well, I think that all of us academics are subject to quite a bit of imposter syndrome, and none more than those of us doing interdisciplinary work. No matter how good you are, even if you're Richard Posner Himself, you can't produce high-quality scholarly work in every discipline at once.  So anyone who publishes an extremely interdisciplinary book---and this book is that, in spades, delving into political philosophy, classics, game theory, empirical analysis, and other areas---surely must live in terror of opening up the journals or getting a Google Scholar alert to see his or her book get shredded by someone who actually is good at one of the disciplines the book has invaded.  And while there are treatments for this condition---serious cross-training, showing your work to people who know more than you before rather than after publishing it---there is no certain cure. 

Yet some research topics really can only be handled by using methods from every field at once. The rule of law is definitely one of those: it has such a long historical provenance, has been the object of so many conflicting interpretations from lawyers, philosophers, historians, economists, political scientists, and others (Waldron once called it an "essentially contested concept"), and has such growing policy relevance in a world where hundreds of millions of dollars are spent promoting it (or the promoters' conception of what it might be) in places like Afghanistan, that the only way to really get any traction and make any progress is to try to bring something together from those disparate domains.  This is, I think, why Brian Tamanaha's wonderful rule of law work has become so influential: he really made the first big attempt to listen to all the diverse conversations on the subject.

So hopefully the terror of the review pages will prove unfounded, and it'll turn out that I'm really not faking competence in all those things.  The next half a year or so will tell.  In the meantime, I'll be blogging about The Rule of Law in the Real World throughout the month, along with whatever other crazy topics happen to cross my mind.  Onward!

Posted by Paul Gowder on February 5, 2016 at 06:29 PM in Books, Legal Theory | Permalink | Comments (2)

Wednesday, January 20, 2016

How Being a Struggling Student of Talmud Made Me a Better Professor of Law

My mother passed away last March. With my dad’s passing six years earlier, my brother and I suddenly found ourselves parentless while still in our 30s. Dealing with the grief has been difficult enough. Equally difficult in many ways has been the challenge of administering my mom’s estate—working through the modern morass of medical forms, bills, taxes, mail and magazine subscriptions, bank accounts, and credit cards is essentially a second full-time job. It turns out that dying in the twenty-first century involves a tremendous amount of paperwork.

The silver lining to all this, I suppose, is that acting as personal representative of my mom’s estate has allowed (forced?) me to employ several long-dormant aspects of my legal education. I have reviewed more contracts, communicated with more federal and state agencies, and spent more time at the probate court clerk’s office in the last year than at any time since I left full-time practice (and maybe ever). Like working an underused muscle for the first time in a long time, doing this kind of legal work is simultaneously invigorating, exhausting, and humbling. I am despondent about the circumstances, but grateful for the experience.

The circumstances have created another unexpected educational benefit: I have been reintroduced to the awesome challenge of Talmud study. In a year when many things have been cloudy and overwhelming, a weekly dip into Talmudic debates has sharpened my mind and changed some of my perspective on teaching.

The Talmud is a compilation of commentaries surrounding Judaism’s Oral Law (that is, the law said to be provided directly to Moses and orally transmitted through the generations, before the teachings were compiled in written form around 200 CE). Serious Talmud scholars intensely focus on a single page of text each day (Daf Yomi). A statement of law or practice in the center of the page is accompanied (literally surrounded) by a variety of rabbinic debates on the meaning and application of the statement, or offering proof for the statement. Commentaries build upon commentaries, and pull in citations from a variety of other textual sources. For a very rough sense of what it feels like, imagine a treatise on the First Amendment written by a squabbling committee of brilliant academics over the course of several centuries, and referencing a dizzying array of cases, law review articles, statutes, regulations, and local practices.

My entry into the Talmudic waters has been far less intense than daily study, but still offers plenty to digest. I meet with a small group of adult learners once a week shortly before evening minyan (the service that permits me to say Kaddish, the obligatory mourning prayer said daily for eleven months after a parent’s death). We have an excellent instructor, who is both prepared and patient. I dutifully bring my book, puzzle over the debates with the others around the table, and try to understand each strand of argument line by line, paragraph by paragraph.

In some ways, my legal training has been immensely helpful for this kind of work. I can easily recognize and appreciate some of the tools of argumentation: reasoning by analogy, reasoning from history, reasoning by custom, etc. It’s Cardozo, 1500 years before Cardozo. In other ways, my American legal training is virtually useless: because the debates in the Talmud operate in a closed environment in which text, history, and practice are of divine origin, the policy arguments that animate difficult legal questions in our time are noticeably absent. You cannot just say, “Why does any of this matter? “ One must take it as a given that it matters—even when the debate is about something as arcane as when to celebrate the New Year for Vegetables. (Yes. Really.) Nor can one simply dismiss a purported proof text as wrong; since the point of the exercise is to explain the law rather than develop or discover it, rejection of one proof requires the submission of an alternative proof. Once you accept these parameters, it’s a wonderful stretching exercise for the logical mind.

More strikingly, my journey into Talmud study has been humbling. If you were to ask me at the end of each study session whether I understood what we covered, the answer would be an unequivocal yes—and an unequivocal no. I understand the scope of the debate as presented in the limited form we discussed, but at the same time I realize how little I understand of how it fits into the larger discussion. So I get it—and I don’t. And it occurs to me that only years of consistent and rigorous study will truly make some of it clear (or more accurately, clearer).

This realization has had effects on the way I teach civil procedure. My own experience suggests to me that student silence (especially among 1Ls) almost certainly does not have a uniform meaning. Some students may be quiet because they are unprepared and cannot follow the discussion in a meaningful way. Others may think they understand, but need time to process the discussion and rearticulate it in their own words. They are not ready to ask questions or jump in. Still others may understand the terms of the specific discussion we are engaged in at the moment, but (like me at Talmud study) don’t know enough (or don’t feel comfortable enough) trying to tie it together to other topics in the course. I have to try to reach all of these groups in different ways—through classroom discussion, formative assessment methods, and one-on-one meetings.

So I will stick with Talmud study, even when my other executor duties are complete. I think my mom would approve.

I would be curious to hear from others who had the simultaneous experience of being a teacher in one discipline and a student in another. How did your experience in one area influence your approach to the other?

Posted by Jordan Singer on January 20, 2016 at 10:35 AM in Culture, Legal Theory, Religion, Teaching Law | Permalink | Comments (1)

Thursday, December 17, 2015

Subotnik on Copyright and T&E

In a post last week, I emphasized the need for a better grasp on what motivates intellectual property estates.  Well, hot off the SSRN press is Eva Subotnik's excellent new article, Copyright and the Living Dead? Succession Law and the Postmortem Term, forthcoming in Harvard J. L. & Tech.  Here's the abstract:

A number of commentators have recently objected to the existence of any postmortem period of copyright protection. Absent from the contemporary debate over this issue, however, is a systematic study of how longstanding succession law theories and doctrines, which govern the at-death transmission of other forms of property, bear on the justifications for, and scope of, postmortem copyrights. This Article takes up that task. It applies the justifications for, and incidents of, the generally robust principle of testamentary freedom to the particular case of copyrights.

The comparative analysis undertaken here suggests two principal lessons. First, succession law principles do provide discrete, though qualified, support for a postmortem term that, in addition to property theories more generally, should be considered in any rigorous debate over copyright duration. Second, more precision should be used in categorizing the costs associated with postmortem protection. In particular, in many instances the costs should be conceptualized as resulting from suboptimal stewardship by the living rather than from dead-hand control. This is not merely a matter of semantics. Distilling the most pressing costs is key to identifying the most appropriate means of addressing them, such as the shortening of the postmortem term, the reining in of dead-hand control where it does exist, and/or the instantiation of better stewardship practices among the living.

From my perspective, Subotnik's article makes at least two important contributions to the literature:

First, she brings copyright law more explicitly into conversation with trusts & estates theory and scholarship.  The basic term of copyright is the author's life plus an additional 70 years, meaning that succession law issues are baked deeply into the structure and day-to-day practice of copyright.  Yet although copyright scholars have looked to a variety of other fields in/outside the law (e.g. property law, economics, psychology, literary theory) to unravel some of the difficult questions at the core of copyright (e.g. author incentives, dissemination of creative works, and cultural "progress"), only a small body of scholarship has grappled with how wills, trusts, and intestacy laws can help mediate competing claims to valuable resources.  Subotnik provides some useful new ways of using succession law to think about the very long postmortem copyright term, and her article more broadly reads as a blueprint for some fruitful conversations between and among copyright and T&E scholars.  

Second, Subotnik's article begins the useful task of disaggregating the initial "life" term from the "plus 70."  Often in debates around the lengthy copyright term, the life+70 term is treated as one continuous time period, in which the marginal incentive to an author of each additional year of exclusivity declines into essentially nothing.  However, recognizing the discontinuities between the life and postmortem terms can shed light on questions of both author incentives and cultural stewardship.  As Subotnik observes, succession laws generally recognize the strong desire for individuals to provide for their loved ones, the sentimental attachment to particular items, and an interest in preserving legacy.  Structuring copyright around a postmortem term might accordingly provide a qualitatively different set of incentives than the financial incentives typically acknowledged in the case law.  On the cultural stewardship issues, authors and their heirs are often differently situated with respect to downstream uses of copyrighted work--e.g. Subotnik mentions that Kurt Vonnegut gave permission to a biographer to publish portions of his private letters, only to have that permission revoked by his children after his death.  She usefully suggests that problems with a postmortem copyright term should not be thought of solely in terms of the author's dead-hand control of the living but in terms of suboptimal stewardship by the living.  Definitely worth a read!

Posted by Andrew Gilden on December 17, 2015 at 02:27 PM in Intellectual Property, Legal Theory | Permalink | Comments (0)

Wednesday, November 11, 2015

The Fungibility of Intentional and Unintentional Punishment

In my prior post, I argued that punishment theorists often speak of punishment in a narrow sense that only applies to intentional inflictions, while people more generally tend to think of punishment in a broader sense that includes not only intentional inflictions but others that are foreseen (and maybe even just foreseeable). Much is at stake here because if retributivists only attempt to justify intentional inflictions, they will fail to justify anything like our actual punishment practices, which include lots of harms that are foreseen but are arguably not intended as punishment (e.g., the harms to offenders and their families from being deprived of each other; the reduction of First Amendment rights while imprisoned; the emotional distress of confinement; etc.).

Alec Walen, in his helpful and interesting entry on retributivism in the Stanford Encyclopedia of Philosophy, tries to fill the gap. He offers what I think of as a non-punishment shadow theory to justify aspects of our punishment practices not directly addressed by the retributivist justification of punishment. To my claim that retributivists fail to justify punishment to the extent that they fail to justify the varied emotional suffering prisoners experience, Walen writes:

[E]ven unintended differences in suffering are morally significant. But they can justifiably be caused if (a) the punishment that leads to them is itself deserved, (b) the importance of giving wrongdoers what they deserve is sufficiently high, and (c) the problems with eliminating the unintended differences in experienced suffering are too great to be overcome.

Kudos to Walen for acknowledging that (non-accidental) unintended inflictions of harm require justification. That's a point I've been emphasizing for a while. Some retributivists (see p. 24 here) would like to say that they simply need not justify side effects of punishment because these side effects are not punishment. But all of our actual punishment practices involve both intentional and unintentional harms. Carving off intentional inflictions of punishment from the broad notion of punishment leaves retributivist discussion cut off from real-world punishment practices.

I'm afraid, though, that Walen says too little to defend his three-part test. He simply asserts it (perhaps confident in its double-effect style reasoning). His (a) and (b) essentially say that if the value of retributive punishment is high enough, then it justifies punishment side effects. But this is exactly the claim I've been calling on retributivists to justify and explain in more detail. Here it's just an assertion and not clear that the condition is ever satisfied. Moreover, it provides no affirmative reason to inflict side effect harms. On Walen's view, retributivism offers no justification for side-effect harms except to the extent that they are needed to inflict intentional harms. And these are serious harms. Imagine if we put school children in an environment with a high risk of sexual assault. Outrageous! Yet that's what we do with prisoners. So the justification of the side-effect harm has to be quite strong. It can't be "used up" by the fact that we've already relied on desert to justify delivering proportional punishment. Moreover, can we not imagine non-incarcerative methods of punishment with fewer side effect harms? (To the extent retributivists support incarceration, it's awfully convenient for them that this method of giving people what they deserve also happens to incapacitate the dangerous.) 

Two further points: First, Walen's view seems to accord with my own claims that we need to measure the subjective experience of punishment, at least in some respects. How can we be confident that the value of retributive punishment exceeds the side effect harms if we don't measure those harms?

Second, Walen is saying that we have affirmative reasons to impose the intentional inflictions of punishment and permission to impose side effects. I wonder, however, why we don't have to adjust the purposeful inflictions to accommodate the side effects. If A and B are equally blameworthy but A will experience his confinement much more severely, why incarcerate A and B for the same period of time if there is an easy method of making their total harm more equal?

My point is easiest to understand when the units of intentional infliction of harm are the same as the units of side-effect harms. Imagine a futuristic method of punishment. Rather than incarcerating offenders, we spray them with "gravitons" that limit their liberty by slowing them down. Future retributivists have solved problems of proportionality and simply look up an offense, say 100 units of crime seriousness, and then set their guns to 100 gravitons so that the intentional infliction of punishment precisely matches offense seriousness.

There is a catch, however. Graviton guns fire 15 extra units 98% of the time. So setting the gun for 100 units will typically spray 115 units. If the value of retribution is significantly high in some case, Walen seems committed to the view that you can set the gun to 100 and fire away, almost certainly leading to someone receiving 115 gravitons total. I think most of us would say, and perhaps Walen would agree, that you have to set the gun to 85 units to achieve the ultimate 100 units. But notice that doing so falls short of the goal of intentionally inflicting 100 units of punishment. (One might quibble about what your intentions really are if you set the gun to 100, given that it fires in excess so frequently. But note this is not an unrealistic assumption. We sentence people to deprivations of liberty in prison knowing that they will suffer side effect harms with probability greater than 98%.)
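To make the arithmetic explicit, here is a minimal sketch of the graviton example, using only the numbers stipulated above (a deserved sentence of 100 units and a gun that adds 15 extra units 98% of the time); the little calculator is purely my own illustration of the thought experiment, not anything drawn from Walen's entry.

```python
# Minimal sketch of the graviton-gun arithmetic (illustrative only; the numbers
# are the ones stipulated in the thought experiment above).

DESERVED = 100       # units of punishment proportional to the offense
SIDE_EFFECT = 15     # extra units the gun sprays as a foreseen side effect
PROBABILITY = 0.98   # how often the side effect occurs

def typical_total(setting):
    """Total punishment in the typical (98%) case, when the side effect occurs."""
    return setting + SIDE_EFFECT

def expected_total(setting):
    """Punishment averaged over both cases."""
    return setting + PROBABILITY * SIDE_EFFECT

print(typical_total(100))   # 115 -- setting the gun to the deserved amount overshoots
print(typical_total(85))    # 100 -- titrating the setting down hits the deserved amount
print(expected_total(85))   # 99.7 -- and comes very close in expectation
```

The point of the sketch is only that the intentional portion (the gun setting) and the foreseen side effect sum to a single total, which is why the intentional portion can be titrated down to accommodate the side effect.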

In any event, if you agree that it would be better to set the gun to 85 units, then don't we have to shorten prison sentences to accommodate harms that we inflict as side effects? This is where standard double-effect reasoning may break down. The intentional portion of sentences can be titrated up and down to make up for foreseen side effects. So the crux of the debate may turn on how fungible the intentional and unintentional harms of punishment are and how strong the obligation is to avoid side-effect harms. Walen offers some comments in his piece suggesting that side-effect harms are not fungible with intentional inflictions, a topic I'll discuss in an upcoming post.

I should add that one cannot fault Walen for his brief discussion of how he would justify the side-effect harms of punishment. He's writing, after all, in an encyclopedia entry. But to the extent he claims to cite a flaw in my reasoning, I see insufficient discussion to back up his claim. (Adapted from work in progress.)

Posted by Adam Kolber on November 11, 2015 at 12:56 PM in Criminal Law, Legal Theory | Permalink | Comments (1)

Tuesday, November 10, 2015

Broad and Narrow Punishment

H.L.A. Hart famously claimed that a central feature of punishment is that it is "intentionally administered” to an “offender for his offence.” Many punishment theorists share Hart's view that punishment essentially concerns an intentional infliction. I will emphasize, however, how extraordinarily narrow this view of punishment is:

(1) Credit for time served inconsistent with the narrow view: When people are detained before trial, we typically don't say they are being punished. After all, they haven't been convicted of any crime. So, consistent with Hart so far, we are not intentionally administering painful or unpleasant consequences on pretrial detainees for an offense. However, when pretrial detainees are subsequently convicted, they almost always receive credit against their punishments for each day they were detained. We reduce the punishment they must serve by the supposed non-punishment of detention. We do the reduction, I believe, because we think the harms of detention (even though not intended as punishment) are essentially the same as punishment, such that we reduce punishment day-for-day with detention. Hence, contra Hart, amounts of punishment depend on more than just those inflictions that are intentionally administered as punishment.  (See here for more.)

 (2) Our intuitions of punishment severity extend beyond the narrow view: Suppose Judge A in State A sentences a defendant to four years in prison for a particular crime. The judge, and if you'd like, the citizens and legislators in his state have a very vivid idea of what goes on in prison and it is the purpose of the judge (and the citizens and legislators) that the defendant undergo many hardships in prison (worse food; separation from family; loss of sex life, etc.). By contrast, Judge B in State B sentences a different defendant to four years in prison for a crime of equal seriousness. Both defendants, let us assume, are equally worthy of blame. In State B, however, the judge and the citizens and legislators have only vague notions of prison life. It's their purpose that B be deprived of liberty for four years, but they don't think about the side effects, such as the food being bad or the harms of separating from one's relatives. We could say that the judge et al. know about these things, but it's not their purpose that the defendant undergo these hardships.

So A and B are otherwise alike in all pertinent respects; they are incarcerated in identical conditions for four years and they experience confinement in exactly the same way. Wouldn't we say that their punishment severity is the same? That is, even though A and B differ dramatically in the amount of "narrow" punishment they receive, when asked whether their punishment severity is the same, we're inclined to say yes. That is, we are inclined to focus on punishment in the broad sense. We probably don't care precisely which hardships of prison are purposeful and which are merely foreseen when assessing amounts of punishment. (See here for more.)

Why does all of this matter? Those punishment theorists who only purport to justify intentional inflictions (see p. 24 here) cannot justify real-world punishments (or must offer a non-punishment shadow theory to justify these other aspects of punishment), for all punishments include unintended side effects. I'll also use the distinction between broad and narrow punishment in my next post to reply to Alec Walen's comments on my work in his entry on retributivism in the Stanford Encyclopedia of Philosophy.

Posted by Adam Kolber on November 10, 2015 at 04:47 AM in Criminal Law, Legal Theory | Permalink | Comments (5)

Wednesday, August 12, 2015

Introduction and Dedication

Hello Prawfs! It is already August 12, and I am posting my first post to Prawfs this month. For that, I apologize. But I will make up for it in the coming weeks.

First, some introductions. My name is Ari Ezra Waldman. I'm on the faculty at New York Law School, where, in addition to teaching intellectual property, internet law, privacy, and torts, I run our academic center focused on law, technology, and society. My research and writing focus on privacy, the bridge between privacy and intellectual property, and cyberharassment. You can find some of my publications on SSRN, although I have a handful in the works or under submission at the moment. More on that later. My partner and I are the human parents to a wonderful dog named Scholar. She's a dachshund-beagle mix.

Second, I would like to dedicate all my posts this month to Dan. I didn't know Dan as well as some others, but in the short time I knew him, he was a friend and mentor.

Now on to substance. In my short time at Prawfs, I would like to use several posts to talk about teaching and some other posts to tell one story, hoping to flesh out ideas about an ongoing project about information diffusion, privacy, and intellectual property. I start with identifying a theoretical problem.

In an important and oft-cited essay, Professor Jonathan Zittrain came to the profound conclusion that intellectual property owners and personal data owners want the same thing: “control over information.” That control was being eroded by the early internet: “perfect, cheap, anonymous, and quick copying of data” endangered copyright owners’ ability to control dissemination of their content and threatened to make private personal data a market commodity. Using the illustrative case studies of copyrighted music and patient health data, Zittrain suggested that privacy advocates could learn from content owners’ use of technological systems that prevented the unlawful mass distribution of copyrighted data.

Professor Zittrain’s view that copyright owners and patients both shared the same fear of loss of control over data makes a great deal of sense: it appeals to an intuitive and dominant understanding of privacy as control over information and reflects centuries of legal thought, from British common law to Samuel Warren’s and Louis Brandeis’s groundbreaking article, The Right to Privacy, that saw the overlap between privacy and intellectual property. But recognizing that the fields share the same “deep problem” of loss of control is only a first step. We all want to maintain control over the dissemination of our data, whether it’s Taylor Swift removing her music from Spotify or internet users opting out of the use of cookies. But musicians also want many people to buy and listen to their songs, and individuals need at least some other people to have access to their data. Loss of control and, thus, loss of legal protection, has to happen sometime later, after some other publicity trigger. This suggests that the word “control” does not fully capture the problem; rather, it is about the social process that transforms information from under control to out of control.

This correlative inquiry is important. Control is an empty concept without knowing what it means to lose it, and the conceptual vacuum has contributed to haphazard and, at times, harsh, unjust results. Often, courts conclude that personal information and intellectual property are out of an individual’s control if even just a few other people know or have access to it. At other times, decisions are more nuanced. But they all ask the same question: When is information, already known by some, sufficiently out of the owner’s control such that it can be deemed public? Conceptualizing the problem of privacy and intellectual property merely as loss of control does not give us the tools to answer this question.

In subsequent posts, I will lay out a proposed answer to this second inquiry. In short, I argue that loss/retention of control has everything to do with information diffusion, social networks, and trust. 

 

Posted by Ari Ezra Waldman on August 12, 2015 at 01:07 PM in Dan Markel, Information and Technology, Intellectual Property, Legal Theory | Permalink | Comments (1)

Thursday, July 23, 2015

God Doesn't Play Dice, Spooky Action at a Distance, If You Have a Hammer, Everything Looks Like a Nail, Ships Passing in the Night, and Other Metaphors For Belief and Debate

This is a reflection about disciplines and theory, in particular, law and economics.  I preface it by saying that I think economics is a fascinating subject, I took a lot of econ classes in college (mostly macro), and I was an antitrust lawyer for a long time, which meant that I had to have some handle on micro as well.  What provokes this particular reaction is a new piece by Bob Scott (Columbia), a far more distinguished contract theorist than I, on the same subject, contract interpretation, on which I've been writing and blogging this summer.  Bob and I aren't just ships passing in the night. (If we were, he'd be the aircraft carrier in the photo at left.) We are sailing in different oceans. I have been thinking the last few days about why. (I should say that Bob and his frequent co-author, Alan Schwartz, have acknowledged my previous critiques in print. The sailing metaphor is about our concepts, not the fact of the dialogue!)

I'll come back to the specifics later. What I want to consider first is those circumstances in which reasoned discussion is or is not even possible. A couple years back I read a fascinating article by a philosopher named Brian Ribeiro, in which he assessed truly hard cases of conflicting belief, i.e., those instances in which the interlocutors disagree but are not ignorant of critical facts, are sufficiently educated, and are under no cognitive disabilities. A perfectly good example is religious belief. If you are a Mormon or a Catholic, you are going to believe things about which no amount of reasoned argument will change my belief. Rather, a change has to be the result of a conversion.  To quote Ribeiro, "If reconciliation is to occur, then one of us must forsake reason-giving (non-rationally) reject our old rule, and (non-rationally) accept a new rule, thereby ending the dispute."

It's pretty easy to see that issue in the case of religion, but my contention here is that it happens all the time in academia, i.e., we are ships passing in the night because we begin with an affective set of foundational beliefs upon which we base our sense-making of experience, and the affect is simply not amenable to anything but a conversion experience if there is to be a change.  The first part of the title is a reference to Einstein's famous quip about quantum mechanics, and has to do with something very fundamental about how you believe one event causes another (like particles influencing each other simultaneously at distances greater than light could travel in that instant - the issue of "entanglement" that Einstein called "spooky action at a distance").

I'm not saying that one can't be converted. I suspect there would be some experiment that could have brought Einstein around, just like Arthur Eddington's experiment brought Newtonians around to Einstein's general relativity. The issue arises at a meta level, when you don't believe that there can be evidence that would change your belief. Sorry, but I don't think even my believing Christian friends whose intellects I  respect beyond question are going to get me to believe in the divinity of Jesus Christ.

I'm pretty sure that there's no bright line that cabins off the meta issue of belief solely to matters of religion, however. My friend and next door neighbor, David Haig, is an esteemed evolutionary biologist at Harvard. He and I occasionally partake of a bottle of wine on a Saturday or Sunday afternoon, and come around at some point to the "hard question of consciousness." This is the unresolved scientific and philosophical question of the phenomenon of consciousness. At this point, the debate is not so much about whether there is a reductive explanation, but whether there can ever be one (that's why it's still as much a philosophical as scientific debate). David and I pretty much agree to disagree on this, but my point is that reasoned discussion morphs into belief and conversion at some point.  That is, if presented with a theory of consciousness that comports with the evidence, I'd be pretty stupid not to be converted (just as if Jesus showed up with Elijah at our next Passover Seder and took over reading the Haggadah). But for now, he believes what he believes and I believe what I believe. (There's a philosophical problem of induction buried in there, because usually the basis of the belief that we'll solve the problem is our past experience of solving heretofore unresolvable problems.)

How this ties back to something as mundane as contract law after the break.

First, I owe it to Bob to plug his forthcoming Marquette Law Review article, Contract Design and the Shading Problem, the abstract of which is as follows:

Despite recent advances in our understanding of contracting behavior, economic contract theory has yet to identify the principal causes and effects of contract breach. In this Essay, I argue that opportunism is a primary explanation for why commercial parties deliberately breach their contracts. I develop a novel variation on opportunism that I identify as “shading;” a behavior that more accurately describes the vexing problems courts face in rooting out strategic behavior in contract litigation. I provide some empirical support for the claim that shading behavior is both pervasive in litigation over contract breach and extremely difficult for generalist courts to detect, and I offer an explanation for why this is so. In contrast to courts of equity in pre-industrial England, generalist courts today are tasked with the challenge of interpreting contracts in a heterogeneous global economy. This has left generalist courts incapable of identifying with any degree of accuracy which of the litigants is behaving strategically. I advance the claim that ex ante design by commercial parties is more effective in deterring opportunism in litigation than ex post evaluation of the contractual context by generalist courts. I illustrate this claim by focusing on the critical roles of uncertainty and scale in determining how legally sophisticated parties, both individually and collectively, design their contracts. By deploying sophisticated design strategies tailored to particular environments, parties are able both to reduce the risk of shading and to cabin the role of the decision maker tasked with policing this difficult to verify behavior. I conclude that judges and contract theorists must attend to the unique characteristics of the contracts currently being designed by sophisticated parties because it is the parties, and not the courts, that reduce the risks of opportunistic shading in contract adjudication. 
What Bob is wrestling with is how to fit the problem of contract language into the law and economics of contracts.  "Theory" would predict that contracts are a check on opportunism, and therefore we ought to see a reduction in opportunistic behavior, particularly as between sophisticated parties who write complex agreements. But we see LOTS of opportunistic behavior and so how do we explain it? Well, it must be because somebody is acting opportunistically, and pushing an ex post interpretation of the language that couldn't realistically have been what it meant when the parties agreed to it ex ante.
 
Economic theory of contract law - i.e., the relationship of contracting behavior to the reduction of opportunism - demands a causal relationship between the act of making a contract and the application of that contract to resolve a dispute that occurs later in time.  Moreover, if the contracting parties are rational, they ought to try to make their contracts as "complete" as possible, that is, to anticipate as many "state contingencies" as they can. To quote Bob Scott: "Faced with this wide gap between theory and reality, the answers to a critical empirical question remain elusive: how do sophisticated parties adjust ex ante to the prospect of breach ex post?"
 
Bob and I don't disagree that the world is rife with opportunistic behavior, and it occurs as much in the case of sophisticated market actors as with anybody else.  Why we are ships passing in the night has to do with our respective orientation to theory and causation. I'm being presumptuous here, but I think for an economist to delink the ex ante contracting behavior from the ex post opportunism is, like Einstein, to accept spooky action at a distance. The theory is the hammer and, if you have it, the problem looks like a nail.
 
As I've written (ad nauseam, but at least here and here), I have a completely different view of the causal connection (or, to put it more bluntly, the lack of one) between the creation of ex ante contract text and ex post contract opportunism.  All law and economics scholars would (I think) agree that "complete contracts" - i.e. contracts that can in theory anticipate every state contingency - don't and will never exist in the real world. I think the concept, as a matter of fundamental belief, is so ephemeral and fantastical that I can't accept it even as the basis from which to begin an argument. Similarly, I believe the phrase "mutual intention of the parties" is right up there with "the present King of France" in terms of nominally coherent descriptions of non-existent things.  On the other hand, I can understand if an economist would look at my view as saying, in essence, God plays dice with the world, or as contending that I've reduced the behavior to something like spooky action at a distance.
 
What's interesting about all of this is my suspicion (confirmed by my exchanges with Bob offline) that we'd probably face practical problems as pragmatic lawyers in very similar ways. The dialogue is really about fundamental orientations to making sense of the world.

Posted by Jeff Lipshaw on July 23, 2015 at 10:21 AM in Article Spotlight, Legal Theory, Lipshaw | Permalink | Comments (2)

Sunday, July 12, 2015

"No Contracts"

For all that lawyers and law professors traffic in language, sometimes I think language is to lawyers as water must be to fish. That is, if you live in it, it's kind of hard to step back and realize the universe could be constituted out of some other medium.

Up here, the cable provider is Charter, and it runs a lot of commercials. The actor in the commercial for its business services trumpeted yesterday that one of the benefits of subscribing was "no contracts!"  Well, you and I both know that there HAS to be a contract. God knows Charter will be disclaiming SOMETHING - like, for example, the potential for consequential damages to a business if the internet connection goes down.

What we all know is that "no contracts" actually means something other than its literal meaning.  "No contracts" means only that the subscriber won't be held to a fixed term, and will be able to cancel its service without much notice to Charter. OMG, the plain meaning is precisely the opposite of the plain meaning!

The particular conceit of the smartest people in our profession - and I mean both practitioners and professors - is that words and sentences are capable, with the right skills, of exactitude that approaches an asymptotic limit. Within a certain school of contract law theorists, this gets expressed as the idea of an "incomplete contract," as though the idea of a complete contract, one that contemplates EVERY possible state contingency, is something any more conceivable than the Kabbalists' notion of God (the Ayn Sof - "there is no end"). I put the term "complete contract" in the same conceptual category as I do non-words like "gruntled," "dain," and "combobulated." 

Below the break, I fulminate on this idea - that plain meaning is like Schrödinger's cat, existing and not existing at the same time - in the context of statutes (i.e. King v. Burwell) and contracts. (Full disclosure: I'm the guy who, when any student in my contracts class says the words "mutual intention of the parties," starts making "woo-woo" noises and acting out the Vulcan mind-meld.)

I don't usually wade into the great issues of the day, but I thought I ought to read the King v. Burwell opinions.  If you put aside the politics, Chief Justice Roberts's opinion is a pretty well-trod exercise in the interpretation of a text: what does it mean for a health care exchange to be "established by the state"? Does that mean the state itself has to put the exchange in place under its law, or does it also mean an exchange that the federal government has established for the state as the default?

For contracts professors, it's not too surprising.  If you read Justice Traynor's opinion in Pacific Gas & Electric Co. v. G.W. Thomas Drayage & Rigging Co., a seminal case in the law of interpretation, it's the same "literal reading" versus "contextual reading" of an indemnity clause. Indeed, if you look at the language in PG&E, it's the equivalent of Charter's "no contracts," and the court says, "Oh no, it can't possibly mean that!"

Two implications come to mind.

First, whether language ever really maps even an individual purpose or intention, much less the elusive "mutual intention of the parties" in a contract or "congressional intent," is the subject of the piece I posted on SSRN several weeks ago: Lexical Opportunism and the Limits of Contract Theory. My point there is that the elusiveness of language as map undercuts attempts to make broad economic or moral theoretical statements about contract law; I suspect it's the same for statutory interpretation. The text is the text and, in any hard case about its application, we are all opportunists.

Second, it's also almost impossible to state a rule for when you ought to abide by the plain textual meaning or look at the context. Sometimes "no contracts" could really mean "no contracts." There are some documents whose very value is in their formalism - letters of credit, negotiable instruments, promissory notes - and you really do do a disservice by allowing a contextual reading of the language.  Hence, Judge Kozinski's criticism of the PG&E rule in the Trident case: it "casts a long shadow of uncertainty over all transactions negotiated and executed under the law."

Personally, I don't know what the hell "established by the State" was supposed to mean, and was relieved to have the ACA once again upheld because I think it's good policy (or better than the non-policy that existed  before). 

But in terms of the language issue, I can't help hearing the debate as though I'm listening to two fish argue how wet the water is.

Posted by Jeff Lipshaw on July 12, 2015 at 08:04 AM in Article Spotlight, Legal Theory, Lipshaw | Permalink | Comments (1)

Monday, April 27, 2015

Natural Rights and the "Human Right" to Intellectual Property

I am picking up from where I left off in my prior post on human rights and intellectual property. My concern with embracing a human right to intellectual property arises from the possibility that it will lead to more expansive intellectual property protections. I would tend to agree, therefore, with the report by the United Nations Special Rapporteur in the field of cultural rights (mentioned by Lea Shaver in her comment), which characterizes copyright as distinct from the human right to authorship.

Human rights are generally understood to be natural rights. If one accepts this proposition, how does treating intellectual property protection as a human right relate to the natural rights intellectual property scholarship? The intellectual property and human rights conversation is primarily an international intellectual property conversation. However, the natural rights framing of intellectual property rights is primarily a domestic intellectual property conversation. Both of these frameworks are based on natural rights theories, yet they appear to reach opposite conclusions. With some exceptions, proponents of natural rights justifications for intellectual property tend to support more expansive intellectual property protections. On the other hand, proponents of a human right to intellectual property speak of “balance” and of using human rights frameworks to respond to excessive intellectual property rights.

One might be inclined to dismiss the theoretical foundations for intellectual property as irrelevant to the practical aspects of intellectual property law. However, the framing of intellectual property rights can affect the way individuals, including judges and policy makers, view intellectual property protection and infringement. Gregory Mandel’s study on the public perception of intellectual property rights, for instance, found that individuals who view intellectual property rights as natural rights tend to support more expansive intellectual property protection. This is consistent with legal scholarship that takes a natural rights approach to intellectual property. My inclination, then, is that distinguishing between copyright protection and the human right to the moral and material interests arising from one’s literary or artistic production is a step in the right direction.

Posted by Jan OseiTutu on April 27, 2015 at 03:03 PM in Culture, Intellectual Property, International Law, Legal Theory | Permalink | Comments (0)

Sunday, March 29, 2015

The Significant Decline in Null Hypothesis Significance Testing?

(Cross-posted at Co-Op.)

Prompted by Dan Kahan, I've been thinking a great deal about whether null hypothesis significance testing (NHST, marked by p values) is a misleading approach to many empirical problems.  The basic argument against p-values (and in favor of robust descriptive statistics, including effect sizes and/or Bayesian data analysis) is fairly intuitive, and can be found here and here and here and here.  In a working paper on situation sense, judging, and motivated cognition, Dan, I, and other co-authors explain a competing Bayesian approach:

In Bayesian hypothesis testing . . .  the probability of obtaining the effect observed in the experiment is calculated for two or more competing hypotheses. The relative magnitude of those probabilities is the equivalent of a Bayesian “likelihood ratio.” For example, one might say that it would be 5—or 500 or 0.2 or 0.002, etc.—times as likely that one would observe the results generated by the experiment if one hypothesis is true than if a rival one is.

Under Bayes’ Theorem, the likelihood ratio is not the “probability” of a hypothesis being true but rather the factor by which one should update one’s prior assessment of the probability of the truth of a hypothesis or proposition. In an experimental setting, it can be treated as an index of the weight with which the evidence supports one hypothesis in relation to another.

Under Bayes’ Theorem, the strength of new evidence (the likelihood ratio) is, of course, analytically independent of one’s prior assessment of the probability of the hypothesis in question. Because neither the validity nor the weight of our study results depends on holding any particular prior about the [question of interest], we report only the indicated likelihood ratios and leave it to readers to adjust their own beliefs accordingly.
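To see what that arithmetic looks like in practice, here is a minimal sketch in Python; the experiment, the two hypothesized response rates, and the counts are entirely made up for illustration and are not drawn from our paper:

from scipy.stats import binom

# Hypothetical experiment: 62 of 100 subjects respond in the predicted direction.
k, n = 62, 100

# Two competing (purely illustrative) point hypotheses about the true rate:
# H1: the manipulation works and the rate is 0.65; H0: no effect, the rate is 0.50.
p_data_given_h1 = binom.pmf(k, n, 0.65)
p_data_given_h0 = binom.pmf(k, n, 0.50)

# The likelihood ratio: how many times more probable the observed data are under
# H1 than under H0. This is the factor by which a reader updates whatever prior
# odds she brings to the question (posterior odds = prior odds x likelihood ratio).
likelihood_ratio = p_data_given_h1 / p_data_given_h0
print(round(likelihood_ratio, 1))

On these invented numbers the ratio comes out to roughly 15, meaning the observed data are about fifteen times more probable under the first hypothesis than under the second. A reader starting at even prior odds ends at roughly 15:1; a skeptic starting at 1:10 ends at roughly 1.5:1. No p-value appears anywhere in the report.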

To be frank, I've been resisting Dan's hectoring entreaties, er, arguments to abandon NHST. One obvious reason is fear: I understand the virtues and vices of significance testing well. It has provided me a convenient heuristic to know when I've "finished" the experimental part of my research, and am ready to write the over-promising introduction and under-delivering normative sections of the paper. Moreover, p-values are widely used by courts (as Jason Bent is exploring). Or to put it differently, I'm well aware that the least positive thing one can say about a legal argument is that it is novel. Who wants to jump first into deep(er) waters?

At this year's CELS, I didn't see a single paper without p-values. So even if NHST is in decline, the barbarians are far from the capital.  But, given what's happening in cognate disciplines, it might be time for law professors to get comfortable with a new way of evaluating empirical work.

Posted by Dave Hoffman on March 29, 2015 at 03:13 PM in Dave Hoffman, Legal Theory | Permalink | Comments (2)

Sunday, November 23, 2014

Judicial Elections and Historical Irony

Last week I was privileged to participate in a conference in New Mexico on the judiciary.  The debates and assigned readings focused especially on judicial elections (a new issue-area for me).   There, I learned that a little historical context can radically change the aspect of many current debates about the choice between an elected or appointed judiciary (and the many variants in between, including systems of merit selection and appointment with retention election).  

“Judicial independence” is the rallying cry today for those who want to eliminate or at least tame judicial elections in the states.  This “judicial independence” variously refers to judges’ freedom or willingness to take unpopular stances on policy and constitutional interpretation (think of same-sex marriage in Iowa), or judges’ impartiality and freedom from undue influence in particular disputes (think of business complaints that judges have become too thick with the plaintiffs’ bar, or of corporate efforts to use campaign contributions to buy case outcomes as suggested in Caperton v. Massey Coal).  

 With many judicial elections now under the shock of increasing party polarization, interest-group mobilization, and campaign spending, it seems likely that these calls to end judicial elections for the sake of judicial independence will only intensify.  Yet one of the historical ironies I learned from the conference readings is that “judicial independence” was also the primary value that was put forward as the rationale for creating elected judges in the first place.  

In the mid-nineteenth-century campaigns for an elected judiciary, however, the sort of judicial dependence that was especially targeted by reformers was judges’ dependence on state legislatures and associated party machines that had become corrupt or spendthrift (especially in economic development projects).  It was hoped that a switch to elected judges would empower judges to rein in discredited legislatures, policing them for their fidelity to the state constitutions (“the people’s law”) while keeping judges accountable to the people through elections (and later, recalls).

 The longer history of elected judges in the United States offers many other enlightening contrasts with today’s premises. (The stance of the professional bar towards the desirability of elected judges flipped over time.  The dominant presumption about whether appointed or elected judges are the ones more likely to lean conservative or liberal also flipped over time…)  For now, however, I only want to ask one question of this rich history—whether it makes plausible the possibility that, in some states, contemporary reform movements to eliminate elected judges will have unintended adverse consequences for democratic responsiveness and the separation (or balance) of powers between the judiciary and other branches of government.

 My question is prompted--not by a preference for elective over appointive judiciaries--but by the historical scholarship that shows that the nineteenth-century push for elected judges was often packaged with—and used as a justification for—very substantial expansions of judicial power and very substantial curtailments of legislative power.  Making state judges electorally accountable was supposed to make it safe to greatly expand the role of judicial review of legislation, and to give judges much more independence from the other branches in the terms and conditions of their appointments.  

 This new form of judicial accountability to the electorate even justified a judicial role in which judges were tasked to police procedural constraints on the legislatures, including rules that had previously been considered essentially internal to the legislature (perhaps—I wonder—starting to unravel some of the Anglo-American tradition of legislative autonomy and privileges that had taken centuries to develop).  Meanwhile, this change in the role of judges may also have coincided with the decline of juries.

 If much of the nineteenth-century judicial empowerment and legislative disempowerment was enacted on the premise of it being bundled with judicial elections, then I ask—if some states now revert to appointed judiciaries without also considering the larger package—do they risk an institutional imbalance or loss of democratic accountability in the legislature and executive?  (Perhaps this question is already asked and answered somewhere in current policy debates or scholarship?)

 It would be nice to think these structural matters of constitutional development tend towards equilibrium in some organic fashion.  At the least, we can expect that state legislatures and executives will long retain the cruder sorts of tools for reining in abuses of appointed judges.  Depending on the particular state, these might include decisions about judicial budgets, impeachment or removal of a judge upon legislative address, jurisdiction-stripping, court packing, or informal control of judges through the influence of political parties and the professional bar.  Nonetheless, I find it just as easy to imagine that judicial empowerment at the expense of legislatures might be ‘sticky’, if never a one-way ratchet.  Here I am influenced by the social science accounts that suggest that, around the world today, judicial power has been much expanding at the expense of legislatures.  I am also thinking about the possibility that there may be institutional biases in some states against structural adjustments (like ’single subject rules’).

In theory, the public should have the capacity to ensure that one branch of government never gets too big or unaccountable.  In the many states that are characterized by constitutions relatively easy to amend, constitutional change is, after all, supposed to occur more through formal amendment processes than through judicial interpretation.  Even so, query whether such large structural questions lend themselves to retrospective scrutiny and popular oversight.  (This is a real, not rhetorical, question for someone who has a lot more knowledge about the states and judicial reform movements than I now have.)

 John J. Dinan, The American State Constitutional Tradition (Univ. Press of Kansas, 2006)

 John Ferejohn, “Judicializing politics, politicizing law,” Law and Contemporary Problems 65 (3): 41–68 (2002).

 Jack P. Greene, The Quest for Power: The Lower House of Assembly in the Southern Royal Colonies (Norton, 1972)

 Jed Handelsman Shugerman, The People’s Courts: Pursuing Judicial Independence in America (Harvard Univ. Press 2012)

 G. Alan Tarr, Without Fear or Favor: Judicial Independence and Judicial Accountability in the States (Stanford Univ. Press 2012)

Posted by Kirsten Nussbaumer on November 23, 2014 at 10:34 PM in Constitutional thoughts, Current Affairs, Judicial Process, Law and Politics, Legal Theory | Permalink | Comments (7)

Tuesday, November 04, 2014

Election law as contextual: a universal truth? (And, happy election day to U.S. readers!)

I am grateful to Dan Markel for this chance to spend another month in conversation at Prawfsblawg.  As with my last go-around, my focus is on U.S. election law.  This time, however, I get to talk about election laws on an election day. 

 When the voting and vote counting unfold, we’re bound to see election laws and administrative practices in the news.  Even if the odds-makers are proven correct in their forecast of an election day that is characterized by relatively low voter turn-out and relatively few close contests, there will be questions or controversies about the effects of heightened voter identification requirements, the counting of provisional ballots, the scheduling and ballot design for a gubernatorial run-off, and the like.  Those of us who follow politics have come to instinctively associate some of these contested laws and practices with a particular effect (a tendency to expand or narrow the electorate), and with a particular political valence (a tendency to disenfranchise or dilute the votes of one or another party or racial or socioeconomic group).  

Of course, election rules, such as the new voter identification requirements in Texas, will, at times, have their strongest bite in the lives of individuals (see, e.g., Eric Kennie’s story at http://www.theguardian.com/us-news/2014/oct/27/texas-vote-id-proof-certificate-minority-law). But politicos and scholars usually train their attention more on election rules as they might tip a contest for a particular candidate or party.  To be sure, different political camps tend to have different empirical and normative premises about election rules’ operations.  Voter i.d. requirements are about culling the poor, the disabled, and racial minorities from the electorate.  They are a procedural tool for disenfranchising eligible voters.  Or, no, these requirements are about screening out fraud and low-information voters.  They are about protecting the eligible and informed voters from vote dilution.  All sides, however, can instinctively agree on a rule’s expected effect and valence:  Strict voter i.d. rules contract rather than expand the electorate, and they can be expected to do so to the benefit of Republicans.

 I now want to take many steps back from the immediacy of these voter i.d. rules and today’s election.  (It’s not like you have any election results to follow!)  I want to consider whether perceived regularities in the consequences of elections laws (large and small) may hold true across many different contexts.  

 Political scientists (one of my tribes) have often assumed that the answer is “yes”, and they have precisely defined their scholarly enterprise to be a search for the generalizations that will not be context-bound.  The successes of this research program have been real.  We have learned that election rules can exhibit regularities, sometimes ones that operate behind the backs of the political actors.  A particularly successful example is Duverger’s Law which states that legislative elections by single-member-district and ‘first-past-the-post’ rules (such as in the U.S., Canada, and Great Britain) are correlated with two-party systems while proportional-representation rules are correlated with multiparty systems.  

 This generalization is powerfully universal.  Except when it isn’t.  Many times, political scientists have found the need to qualify it.  It fails to hold true in a country where there is no widely shared information or expectations about the different parties’ electoral prospects, or in a political culture where voters do not mind ‘wasting’ their votes on a third-party candidate who can’t win (Powell 2013).  It fails to hold true in a federal system at the national level if the national parties are really sectional parties (Chhibber and Kollman 2004.)  And so on.

 If even Duverger’s Law is highly context-bound, then we may suspect that there are few, if any, (non-trivial) regularities in the consequences of election rules that are not similarly context-bound.  And in fact, G. Bingham Powell has used this example to make a (to me) compelling case that the proper study of the scientific ‘laws’ of election law can’t be (or, at least, it can’t be restricted to) a search for big universals.  Even when generalizations are prized over local knowledge, election laws need to be studied closer to the ground in order to unearth the local and temporal conditions that may limit an otherwise robust pattern, or that may set in motion a new one.  

 Duverger himself recognized that the consequences of election rules are mediated by context, and he classified some of these contextual factors as (1) “the mechanical” (the interaction between votes and election rules if the latter are properly administered—conditions that may depend on the strength of a country’s tradition of rule of law and technical competence)  and (2) “the strategic” (the effects of citizen or elite anticipations of these mechanical operations).  

We might think about recent voter identification laws in a similar fashion:  Under current conditions, heightened documentation requirements can be expected, at least at the margins, to disproportionately shave the vote totals for some Democratic-leaning constituencies.  This effect may seem almost mechanical.  Yet, as we have apparently witnessed in recent years, some election reforms that raise the costs of voting for particular classes of voters (such as proof of citizenship requirements, or cut-backs in early voting days like ‘Souls to the Polls’) can occasionally result in an increase in the vote totals through the mechanism of ‘backlash’ mobilization against the reality or perception that the reform was an intentional form of disenfranchisement.  (On such backlash, see, e.g., Rick Hasen’s Voting Wars).  My (perhaps, not so social-scientific) spin on this example:  human agency and innovation matter.

 Powell offers his insights about the contextual nature of election law for the sake of a positive research program into election laws’ consequences.  I, however, want to use these insights to conclude with two simple points that are more normative in nature.  

 First, as citizens or election reformers, the contextual nature of election rules means that we should be wary of categorical judgments about particular election rules.  Changes in the environment, human behavior, or the law's internal design may flip expected realities.  (Just as, at one time, the secret ballot served to free humble tenant voters from the pressure of their landlords, so at another time and place, it worked to disenfranchise the humble illiterate…)  Voter documentation requirements, for example—if they are the responsibility of government, and not voters themselves—may have an entirely different effect and valence than what we’ve come to expect in the U.S.  

 To judge from the experience in some countries at least, it seems possible that voter documentation can operate to expand, not contract, the electorate, and that it can operate without benefit to a particular party (other than the ‘partisan’ benefit that is likely to accrue from fully documenting an eligible electorate).  If this is right, then—yes, of course—government-controlled voter i.d. will run into other objections (such as those of the civil libertarians worried about runaway uses of national i.d.).  But the point stands that our political (politicized?) instincts about the natural effect and valence of voter id would no longer hold.

 Second, if the consequences of most or all election rules are highly context-bound—meaning that an election law that is benign in one context can be malign in the next—then the quality of our processes and institutions for evaluating and changing election rules may be far more important than the static quality of any particular election rule.  I’ll say more about this latter point at another time.

 Now back to the immediacy of election results and (perhaps) election administration debacles.

 Sources:

 Pradeep Chhibber and Ken Kollman, The Formation of National Party Systems: Federalism and Party Competition in Canada, Great Britain, India, and the United States. Princeton: Princeton University Press, 2004.

 Maurice Duverger, Political Parties: Their Organization and Activity in the Modern State. New York: John Wiley, 1954.

 Richard L. Hasen, The Voting Wars: From Florida 2000 to the Next Election Meltdown. New Haven: Yale, 2012.

 G. Bingham Powell, Jr., “Representation in Context: Election Laws and Ideological Congruence Between Citizens and  Governments,” Perspectives on Politics, Vol. 11/No. 1, March 2013.

Posted by Kirsten Nussbaumer on November 4, 2014 at 04:22 PM in Constitutional thoughts, Law and Politics, Legal Theory | Permalink | Comments (0)

Tuesday, October 14, 2014

SEALS

Think about proposing programming for the annual meeting, or participating in a junior scholars workshop. And if you are ever interested in serving on a committee, let Russ Weaver (the executive director) know. The appointments usually happen in the summer, but he keeps track of volunteers all year long.

Posted by Marcia L. McCormick on October 14, 2014 at 11:00 AM in Civil Procedure, Corporate, Criminal Law, Employment and Labor Law, First Amendment, Gender, Immigration, Information and Technology, Intellectual Property, International Law, Judicial Process, Law and Politics, Legal Theory, Life of Law Schools, Property, Religion, Tax, Teaching Law, Torts, Travel, Workplace Law | Permalink | Comments (0)

Sunday, March 02, 2014

Legrand and Werro on the Doctrine Wars

The following guest post is a contribution to the conversation continued by Rob Howse here earlier.

Professor Pierre Legrand teaches at the Sorbonne and has been visiting at the University of San Diego Law School and at Northwestern University Law School. Professor Franz Werro teaches at the Université de Fribourg and at the Georgetown University Law Center.

When It Would Have Been Better Not To Talk About a Better Model

So, the German Wissenschaftsrat — a government body concerned with the promotion of academic research (broadly understood) — suggests that legal scholarship should become more interdisciplinary and international. And the American Bar Association — a non-government body devoted to the service of the legal profession — opines that legal education should become more practical and experiential. These pro domo pleas featuring their own interesting history and having generated much debate already, we want specifically to address Professor Ralf Michaels’s reaction.

In his post on “Verfassungsblog” dated 19 February 2014, Professor Michaels claims that “the contrast [between the two reports] points to two problems of the US law school model — and thereby highlights two attractive traits of German education”. According to Professor Michaels, the first difficulty faced by US law schools is that “they are largely financed privately”, which means that “it becomes harder and harder to justify spending significant resources on anything other than the recruitment of better students and on their ability to land well-paying jobs”. The second complication for US law schools that Professor Michaels identifies is related. For him, “[t]he consumer model of legal education requires, ultimately, that law students are taught nothing other than skills”. His reasoning is as follows: “[I]nterdisciplinary scholarship may decline, but doctrinal scholarship cannot take its place because academic understanding of doctrine has been thoroughly discarded”, ergo, “scholarship of any kind may be viewed as useless” and “[l]aw schools may, finally, turn into pure trade schools”. But, in Professor Michaels’s words, “in Germany, this is unlikely to happen”. Professor Michaels’s two-prong explanation is that, on the one hand, “[p]ublic financing of law schools guarantees that the public good aspect of legal education can be maintained” and, on the other, that “the continued acceptance of doctrine as a subject worthy of scholarly attention means not only that scholars will continue to be able to produce scholarship; it also means that the quality of this scholarship will remain at its high level”. To emphasize his claim on the subject of legal doctrine, Professor Michaels writes that “German doctrinal scholarship will always be superior to that of other countries”. He also refers to “the historic advantage [that German law schools] have in excelling at legal doctrine”.

After Professor Robert Howse had replied on “PrawfsBlawg”, Professor Michaels wrote a rejoinder, again on “Verfassungsblog”, with a view to clarifying his initial comments though in effect changing his argument. Professor Michaels’s revised version of his initial assertion is that “the basic claim that German legal scholarship excels more in doctrine while American legal scholarship excels more in interdisciplinarity […] has become almost a truism in comparative law”. Still in his second post, Professor Michaels notes that there are “real institutional differences that perpetuate cultural differences” and that these “cannot simply be wished away”. He adds that “[t]o recognize such cultural differences is our daily job as comparative lawyers”. With specific reference to the statement in his first post that “German doctrinal scholarship will always be superior to that of other countries”, Professor Michaels writes that his “intent” was “quite the opposite [of] claim[ing] superiority of one tradition over the other”. Rather, he says, “[he] tried to make a point about relative incommensurability”. Still in his second post, Professor Michaels insists that “[l]egal education and legal scholarship in different countries are not culturally determined. Nor are they immune to change. At the same time, they exist within the constraints of cultural and institutional traditions, and they respond to these constraints in idiosyncratic ways”. He adds as follows: “[T]he idea that excellence will look similar, at some point, in all systems of the world, appears to me not only unrealistic, but also undesirable”. In his own words, Professor Michaels seeks to “encourage German scholars to keep playing to their strength” while “the US should play to [its] strengths” also. The conversation spurred by Professor Michaels’s intervention has since continued both on “Verfassungsblog” and on “PrawfsBlawg” — and presumably elsewhere also.

In the way senders of hasty e-mails have been writing to take them back, Professor Michaels has wanted to reclaim his statement that “German doctrinal scholarship will always be superior to that of other countries”. Professor Michaels must, of course, be allowed his afterthoughts. But there is a clear sense in which once words have been released in writing, whether in a hasty e-mail or otherwise, any attempt at reconsideration can appear unconvincing. To suggest, as Professor Michaels did after Professor Howse’s initial reply, that he was only advocating that both German and US legal scholarship should be “playing to their strength[s]” strikes us as being indeed unconvincing. After all, elsewhere in his two posts Professor Michaels mentions how German legal scholarship is destined to “remain at its high level”, how it enjoys a “historic advantage”, and, in sum, how it “excel[s] at legal doctrine”. While we are not in a position to divine Professor Michaels’s intent, his many iterations seem difficult to reconcile with anything other than a genuine belief in the German scholarly advantage. Needless to say, Professor Michaels is welcome to his faith. But we think it behooves a seasoned comparativist carefully to distinguish between an expression of preference and an allegedly scholarly formulation whose language may fairly be taken to suggest that a model — one’s “home” model, of all models! — can act as some sort of universal referent (in line with a metric which remains unspecified).

The fundamental point here is that it cannot do to defend the idea that German legal scholarship would be excellent as such. Indeed, Professor Michaels’s assertion is as implausible as if he maintained that “French literary criticism will always be superior to that of other countries” or that “Japanese aesthetics will always be superior to that of other countries” or for that matter that “the Spanish language will always be superior to that of other countries”. The ascertainable fact is that German legal scholarship, French literary criticism, Japanese aesthetics, or the Spanish language — to the extent that such entities can be persuasively delineated — are cultural formations. They are made, fabricated, constructed by women and men interacting in a certain place and at a certain time. They are artefacts. It is not then that there would be something like “cultural excellence” an sich, for all to see. Rather, the quality of excellence is ascribed by an ascertainable constituency of individuals who appreciate “excellence” according to local criteria. For example, the matter of “excellence” in legal scholarship will be attributed by a group of jurists who have been trained to deem certain scholarly forms to be “excellent”, that is, who have been inducted into appreciating certain scholarly practices and socialized into favoring certain scholarly values. To be sure, German scholarly undertakings will often, perhaps typically, adopt a conceptual form and eschew the candid policy concerns that are familiar to US academics. And the reader of German legal scholarship can therefore expect more on systemics and less on patriarchy, more on categories and less on externalities, more on subsumption and less on critical race theory. But none of these German predilections is intrinsically “excellent” or “superior” to prevailing perspectives in other countries. In other words, scholarly excellence very much lies in the eye of the beholder. In the end, there is neither more nor less to be said for or against the “excellence” of German legal scholarship — which, if we are willing to assume such a configuration, illustrates but one way among others to approach the study of law, no matter how influential. Lest influence be confused with rightness or truthfulness, let us emphasize that it is not because German legal scholarship enjoys a substantial and longstanding following that it can claim any particular entitlement to being right or true. Nor is it the case that the tiresome repetition on the part of so many German jurists that their scholarly model is best can, in time, somehow elevate it to the exalted status of universal yardstick by which other forms of scholarship would be assessed. Needless to add, precisely the same reservations must be entered as regards United States legal scholarship, which must also confine any claim to excellence it may wish to hold to a specifiable horizon.

As regards scholarship “US style”, Professor Michaels, while asserting its successful approach to interdisciplinarity, claims to be in a position to identify various and serious deficits. In this respect, we are moved to make two points and two points only (there would be more to say, for instance as regards the distinction Professor Michaels appears to be drawing between what he calls “the public good aspect of legal education” and the teaching of “skills” or with respect to his assumption that doctrinal writing would have fallen into discredit in the United States after US academics had realized that it could not be “sufficiently exact” or indeed as concerns his basic postulate about the absence of doctrinal work on the US academic scene).

First, even if Professor Michaels’s argumentum in terrorem were to be vindicated and even if at some point in future US “law students [were to be] taught nothing other than skills”, it would not follow that US law schools would “turn into pure trade schools”. There is at least one reason why Professor Michaels’s conclusion comes across as a non sequitur, and it is that for the most part scholars in US law schools do not pursue their scholarship to fit their teaching. It is not, of course, that scholarship does not inform teaching. It does, and it must. But scholarship is not beholden to teaching such that whatever happens to make teaching more practical or experiential will ipso facto disincentivize scholarship. (In fact, one can imagine that a number of law teachers being invited to teach more practically or experientially would take to scholarship with renewed vigour.) In other words, even if Professor Michaels is right and, concessio non dato, the class on anticipatory breach of contract were somehow to become strictly doctrinal or skills-oriented, there is nothing in this development that would inevitably discourage contract law professors from continuing to research Max Weber’s sociological understanding of contractual relationships or to pursue an investigation into the economics of early termination of contracts. To suggest, as Professor Michaels does, that “legal scholarship ends up as subordinate to legal teaching” is an overstatement. Rather, US legal scholarship can be expected to resist the commodification of teaching in significant ways — as, indeed, it demonstrably does at present. If anything, the key issue lies elsewhere — and it is one that Professor Michaels apparently misses although it is currently being fiercely debated in the United States. What if law teachers in US law schools were made to teach more than is the case at present and found themselves having less time to research and write as a result? Arguably, scholarship would then be detrimentally affected, at least quantitatively (though one could claim that such a market correction is long overdue).

Secondly, Professor Michaels’s assumption that students are narrowly focused on obtaining gainful employment and that they will therefore enrol only in courses featuring strictly practical and measurable benefits strikes us as painting an unduly philistine picture of the student body (not to mention the law school’s curriculum committee). We both regularly teach comparative law in US law schools, and we both find that despite real financial pressures and legitimate concerns with life after law school, a significant group of law students — often some of the best ones — remains interested in “enrichment” courses ranging beyond the bar examination. Year after year, our offerings on comparative law continue to attract a critical mass of students, a number of those being sincerely committed to the issues and genuinely interested in the materials. We do not doubt that our experience is also that of many of our colleagues teaching, let us say, “non-mainstream” subjects — and we suspect our experience may well tally with that of Professor Michaels himself. In sum, we take the view that the US law school runs little risk of being visited by Professor Michaels’s dire predictions.

It remains for us to salute how in the two posts of his that we have addressed, though mostly in his second one, Professor Michaels emphasizes the cultural character of legal scholarship (and how he mentions that culture is neither immutable nor determined), how he insists that scholarly cultural response is singular (he calls it “idiosyncratic”), how he argues that the matter of cultural difference cannot be eliminated at will, and how he indicates that the idea that legal scholarship would be the same across legal traditions “appears […] not only unrealistic, but also undesirable”. As Professor Michaels insightfully articulates the matter, in the end variations in legal scholarship pertain to “incommensurability”. In our view, Professor Michaels does well to contend that given incommensurability, “[t]o recognize […] cultural differences is our daily job as comparative lawyers”. We can only hope that this heterodox claim will find a devoted following — not least in Germany where, as all comparativists know, comparative research, largely made in Hamburg, has sought to implement an alternative set of assumptions focusing at once on the ascertainment of similarities across laws and on the identification of the better law.

Posted by Administrators on March 2, 2014 at 09:56 PM in Article Spotlight, Legal Theory | Permalink | Comments (0) | TrackBack

Saturday, March 01, 2014

Waldron v. Seidman, and the obligations of officials and the rest of us

"Never Mind the Constitution." That's the awesome title of this characteristically sharp and learned essay by Jeremy Waldron, reviewing in the HLR Mike Seidman's new book, On Constitutional Disobedience.  Seidman's got a cheeky and funny short reply to Waldron, entitled, appropriately enough, "Why Jeremy Waldron Really Agrees With Me."  I wonder if Seidman's Response will continue the apparent trend of the personal title for scholarship, e.g., Why Jack Balkin is Disgusting. If Susan Crawford's Response in the Harv. L. Rev. Forum to the review of her book by Chris Yoo is any indication, I suspect at most we can use these few data points only to identify a trend in favor of the  "meta" title and not make broader generalizations just yet.

Moving past the title to something like the merits, I'll confess I'm pretty skeptical toward the general thrust of Seidman's argument (as characterized by Waldron and as evidenced in his NYT op-ed from last year). He is, as Waldron notes, basically a philosophical anarchist, and that's a position I find largely untenable under particular conditions of a reasonably well-working liberal democracy. (Importantly, some of Waldron's work on political obligation was what led me down that path, but little of Waldron's work on that subject figures into his review of Seidman.) One last mildly interesting thing to note is that Seidman's embrace of philosophical anarchism and his export of it to constitutional theory basically coincides with the thrust of Abner Greene's recent book, Against Obligation. There are differences between them, some of which are discussed here (review of Seidman by Greene) and here (review of Greene by Seidman). For those interested in these overlapping and important projects, the BU Law Review published a symposium on these two books last year, and you can find the contributions here, which I'm looking forward to exploring further (since, full disclosure, I am writing, er, dreaming up something inspired by these various works on the moral and political obligations of prison or other corrections officials as a distinct class of officials).

 

Posted by Administrators on March 1, 2014 at 04:19 PM in Article Spotlight, Blogging, Books, Constitutional thoughts, Dan Markel, Legal Theory | Permalink | Comments (13) | TrackBack

Tuesday, February 25, 2014

Banning home plate collisions: An exercise in statutory interpretation

Major League Baseball yesterday announced an experimental rule banning, or at least limiting, home-plate collisions. The rule is intended to protect players, as home-plate collisions are a common cause of concussions and other injuries to catchers. Whether it does or not provides an interesting exercise in statutory interpretation.

New Rule 7.13 provides:

A runner attempting to score may not deviate from his direct pathway to the plate in order to initiate contact with the catcher (or other player covering home plate). If, in the judgment of the umpire, a runner attempting to score initiates contact with the catcher (or other player covering home plate) in such a manner, the umpire shall declare the runner out (even if the player covering home plate loses possession of the ball). In such circumstances, the umpire shall call the ball dead, and all other baserunners shall return to the last base touched at the time of the collision.

An interpretive comment adds:

The failure by the runner to make an effort to touch the plate, the runner's lowering of the shoulder, or the runner's pushing through with his hands, elbows or arms, would support a determination that the runner deviated from the pathway in order to initiate contact with the catcher in violation of Rule 7.13. If the runner slides into the plate in an appropriate manner, he shall not be adjudged to have violated Rule 7.13. A slide shall be deemed appropriate, in the case of a feet first slide, if the runner's buttocks and legs should hit the ground before contact with the catcher. In the case of a head first slide, a runner shall be deemed to have slid appropriately if his body should hit the ground before contact with the catcher.

Unless the catcher is in possession of the ball, the catcher cannot block the pathway of the runner as he is attempting to score. If, in the judgment of the umpire, the catcher without possession of the ball blocks the pathway of the runner, the umpire shall call or signal the runner safe. Notwithstanding the above, it shall not be considered a violation of this Rule 7.13 if the catcher blocks the pathway of the runner in order to field a throw, and the umpire determines that the catcher could not have fielded the ball without blocking the pathway of the runner and that contact with the runner was unavoidable.

The rule reportedly reflects a compromise between MLB, which had wanted a must-slide-can't-block rule that would have eliminated all collisions and thus done the most for player safety, and the MLBPA, which did not want to make such a major change so close to the season, fearing the players would not have time to adjust.

The basic rule prohibits a runner from deviating from the direct path home to initiate contact with the catcher (or whoever is covering the plate)--that is, from going out of his way to make contact rather than running directly for the plate. But the rule does not prohibit collisions where the runner runs directly into the catcher in trying to score. So, reading only the text, it is not clear the new rule eliminates most collisions, since most collisions come when runner, catcher, and ball all converge at the plate and running through the catcher is the most direct route to scoring. It thus is not clear that it provides the safety benefits it is intended to provide.

The solution may come in the interpretive comments and a more purposivist approach. An umpire may find that the runner deviated if the runner fails to make an effort to touch the plate, lowers his shoulder, or pushes with his hands, elbows, or arms. On the other hand, a runner does not violate the rule if he slides into the plate in an "appropriate manner," meaning his body hits the ground before making contact with the catcher. The upshot of the comments is to grant the umpires discretion to judge when the runner has "deviated" from the path, and thereby to apply the rule so as to further its purpose. The comment incentivizes runners to slide in most cases, since a proper slide per se will not violate the rule, while running through the catcher might be deemed deviating, subject to how the umpire exercises his discretion in viewing the play (whether the runner lowered his shoulder or raised his arms, etc.).
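To make the interaction between the basic rule and the interpretive comments concrete, here is a toy sketch in Python of the call on the runner. The boolean inputs are stand-ins for judgment calls the rule and comment leave to the umpire, and the function is my gloss on the text, not anything MLB has issued:

def runner_out_under_rule_713(slid_appropriately,
                              made_effort_to_touch_plate,
                              lowered_shoulder_or_pushed,
                              deviated_to_initiate_contact):
    # An appropriate slide is safe per se under the interpretive comment.
    if slid_appropriately:
        return False
    # The comment treats a failure to play for the plate, a lowered shoulder, or
    # pushing with hands, elbows, or arms as support for finding a deviation
    # undertaken to initiate contact; treating that support as decisive here is
    # a simplification of the umpire's discretion.
    if (not made_effort_to_touch_plate) or lowered_shoulder_or_pushed:
        return True
    # The basic rule: only a deviation in order to initiate contact makes the
    # runner out; a straight-line collision, without more, does not.
    return deviated_to_initiate_contact

# A runner who barrels straight in, plays for the plate, and neither lowers a
# shoulder nor pushes is not out on this reading, which is the textual gap
# noted above.
print(runner_out_under_rule_713(False, True, False, False))  # False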

The rule seems unnecessarily complicated, given the player-safety goals involved, especially since MLB simply could have modeled this rule after the rules that apply at the other three bases. But the sense seems to be that this is experimental, designed to be revisited during and after the upcoming seasons and to function as a first step to get players used to this new way of playing. Think of it as the legislature phasing in new rules so as to also phase in new, preferred behavior.

Posted by Howard Wasserman on February 25, 2014 at 12:09 AM in Howard Wasserman, Legal Theory, Sports | Permalink | Comments (0) | TrackBack

Monday, February 24, 2014

American legal scholarship and legal education misconceived

Duke's Ralf Michaels has undertaken to celebrate German superiority in legal scholarship.  This is a peculiar venture, one that Rob Howse has skewered elsewhere on this blog, focusing on the comparative aspects of the project.  This seems to me a good enough skewering, although I would have to leave it to the experts in comparative law and the German elements to speak knowledgeably about Michaels' perspectives on this subject.  

Let me just say a few things about the depiction of contemporary American legal scholarship. 

Here, says Michaels, "faith in legal doctrine as a sufficiently exact tool to deal with social issues has been destroyed."  ???!!!  I suppose one can say that everything is embedded in the meaning of "sufficiently exact."  Here, as elsewhere, law in action is seen as a necessary supplement to law in books.  Legal doctrine doesn't enforce itself; the social elements of doctrine in framing, at the very least, fundamental matters of implementation and administration of public policy are well understood.  Nor is this insight unique to the "here," after all.  Max Weber understood this.  So did William Blackstone.  So, who does Michaels imagine believes that doctrine is sufficient or exact?

The notion that American legal scholarship does not include an earnest focus on doctrine, its content and shape, is naive.  The work of the American Law Institute, on whose council I am proud and privileged to serve, illustrates powerfully the enduring contributions of essentially doctrinal work.  And the connection between doctrinal exegesis and analysis and social advancement has been embedded in the work of the ALI for decades.  Such work thrives in American law schools as well, as does interdisciplinary work of the highest order.

But here is where Michaels' essay takes a peculiar turn.  Here is what he says by way of framing the current critique of American legal education:

"The consumer model of legal education requires, ultimately, that law students are taught nothing other than skills. Doctrine itself has only instrumental value for students, but importantly, “mere doctrine” has no scholarly value for academics. The consequence for scholarship may be dire: interdisciplinary scholarship may decline, but doctrinal scholarship cannot take its place because academic understanding of doctrine has been thoroughly discarded."

The dots Michaels wants to connect are these:  American legal education is attacked because it is insufficiently skill-centered; law schools cannot advance skills-training under extant economic models; they have, as the only alternative, relentless interdisciplinary scholarship; attention to doctrine is impossible because it has been "discredited"; Germans have figured this out and thus the future of German law schools is comparatively rosy.

This narrative is highly problematic.  Skills training is largely a product of American legal educators, especially clinicians, who have developed curricula and deployed resources to the salutary aim of improving the practical skills of (post-graduate) law students.  To be sure, this development is resource intensive and is challenging in the current environment in which costs of legal education loom large.  But the notion that this can be recast as a struggle between public and private modalities of financing education is seriously flawed.  With the public subsidy of European law schools, where is the attention to the sort of skills training and public service initiatives within law schools that would, presumably, advance salutary public purposes?

Moreover, the notion that American law schools will move further away from "discredited" doctrine in order to maintain their death grip on interdisciplinarity as an educational luxury in trying times seems patently absurd.  American law schools, highly imperfect and under serious strain, could be expected to adapt to currents of both legal pedagogy and legal scholarship, currents which see doctrine as a coherent and necessary element of advanced legal education and advancing professional competence.  Interdisciplinary legal scholarship need not and will not be abandoned in this quest.  Indeed, the building of bridges between law and other disciplines is a result (and not uniquely an American one) of an appreciation for the interconnectedness of academic explorations and the imperatives of solving society's central problems through combined, intersecting modalities of scholarship and knowledge.  I would have thought that Ralf Michaels, surely a scholar who understands the German contributions to the origins of the modern University, would appreciate this especially.

Michaels concludes:

"[T]he ABA report suggests that our culture of scholarship and education is untenable and must be, essentially, discarded. I hope they are wrong."

Two things are wrong with this penultimate statement:  First, the so-called "culture of scholarship and education" is here misunderstood.  American law schools pursue scholarship in order to advance key purposes: elucidating doctrine, bringing to bear insights and expertise from other disciplines to illuminate legal issues and ground public policy, and advocating on behalf of central societal goals and initiatives.  Moreover, the best evidence -- along with a century-plus worth of experience -- suggests that American legal education, for all its flaws, does an admirable job at these ambitious ends.  Second, there is precious little reason to believe that Ralf Michaels "hope[s] they are wrong."  His essay advocates for a contrast that does not exist and an appeal for German superiority that is misguided.  Whatever the essay's merits as a depiction of contemporary German legal scholarship, it is deeply flawed as it pertains to American legal scholarship and the nexus between such scholarship and trends in contemporary legal education in the U.S.

 

Posted by Dan Rodriguez on February 24, 2014 at 04:15 PM in Legal Theory, Life of Law Schools | Permalink | Comments (0) | TrackBack

Wednesday, February 19, 2014

The myth of the trial penalty?

Every now and then, I like to spotlight some articles that unsettle the conventional wisdom, particularly in criminal law. Add this one to the file. Almost every teacher of criminal procedure is aware of the idea of the "trial penalty," which conveys the sense that defendants who exercise their right to a trial will invariably get a worse result if convicted than if they plea bargain. The leverage prosecutors have in exploiting the trial penalty dynamic was described by my friend Rich Oppel in a front page NYT story he wrote a few years back.

Comes now (or relatively recently at least) David Abrams from Penn with an article that slays the sacred cow of the trial penalty by providing, you know, data. And the data is the best kind of data because inasmuch as it's true, it is SURPRISING data. Specifically, Abrams argues that based on the study he performed (which originally appeared in JELS and now appears in a more accessible form in Duquesne Law Review), the data supports the view that in fact there's a trial discount not a trial penalty. Fascinating stuff. Abrams offers some suggestions for what might explain this surprise: possibly a salience/availability bias on the part of the lawyers who remember the long penalties imposed after dramatic trials. Regardless of what explains the conventional wisdom, the competing claims should be ventilated in virtually every crim pro adjudication course.

Since this empirical stuff is far outside my bailiwick, I wonder if those who are in the know have a view about how Abrams' research intersects with the Anderson and Heaton study in the YLJ, which argued that public defenders get better results in murder cases than court-appointed defense counsel, or Bellin's critique of that YLJ study here.  Anderson and Heaton basically argue that public defenders get better results because they get their clients to plea bargain more frequently than court-appointed counsel, and that explains the outcome. As I recall dimly, that conclusion may have been true for the murder cases, but the study didn't purport to make the claim that PDs were better across the board, and maybe that's consistent with Abrams' views too. It would be odd (wouldn't it?) if the comparatively few murder cases involve a trial penalty while the many other cases do not and in fact show a trial discount. Granted, these studies took place in different cities, etc., so I am also wondering if the various studies can be reconciled. Thoughts?

Posted by Administrators on February 19, 2014 at 11:30 AM in Blogging, Criminal Law, Dan Markel, Legal Theory | Permalink | Comments (14) | TrackBack

Monday, January 13, 2014

A couple reading suggestions for students in criminal law and the Spring 2014 schedule for the NYU Crim Theory Colloquium

N.B. This post is a revised version of an earlier post and is basically for crimprofs and those interested in crim theory.

This week marks the onset of classes for many law schools across the country, and that means  the first criminal law class is here or around the corner for some 1L's.  As many crim law profs lament,  first-year criminal law casebooks generally have pretty crummy offerings with respect to the state of the field in punishment theory. (The new 9th edition of Kadish Schulhofer Steiker Barkow, however, is better than most in this respect.) Most first year casebooks give a little smattering of Kant and Bentham, maybe a gesture to Stephen and, for a contemporary flourish, a nod to Jeffrie Murphy or Michael Moore or Herb Morris.

Murphy, Morris, and Moore deserve huge kudos for reviving the field in the 1970's and since.  Fortunately, the field of punishment theory is very fertile today, and not just with respect to retributive justice.  But for those of you looking to give your students something more meaty and nourishing than Kantian hand-waving to fiat iustitia, et pereat mundus, you might want to check out and possibly assign either Michael Cahill's Punishment Pluralism piece or a reasonably short piece of mine, What Might Retributive Justice Be?, a 20-pager or so that tries to give a concise statement of the animating principles and limits of communicative retributivism.  Both pieces, which come from the same book, are the sort that law students and non-specialists should be able to digest without too much complication.  Also, if you're teaching the significance of the presumption of innocence to your 1L's, you might find this oped I did with Eric Miller to be helpful as a fun supplement; it concerns the quiet scandal of punitive release conditions.

Speaking of Cahill (the object of my enduring bromance), Mike and I are continuing to run a crim law theory colloquium for faculty based in NYC at NYU. On the heels of AALS, we had Francois Tanguay-Renaud and Jenny Carroll present last week, and the schedule for the balance of the semester is this:

February 25: Stuart Green (Rutgers) and Joshua Kleinfeld (Northwestern)

March 31: Amy Sepinwall (Wharton Legal Studies) and Alec Walen (Rutgers)

April 28: Corey Brettschneider (Brown/NYU) and Jennifer Daskal (American)

As you can see, the schedule tries to imperfectly bring together crim theorists of different generations and perspectives. This is now the seventh semester of the colloquium and we are grateful to our hosts at NYU and Brooklyn Law School who have made it possible. If you're a crimprof and interested in joining us occasionally, let me know and I'll put you on our email list for the papers.

Posted by Administrators on January 13, 2014 at 04:44 PM in Article Spotlight, Criminal Law, Dan Markel, Legal Theory | Permalink | Comments (9) | TrackBack

Friday, November 22, 2013

Making Law Sex Positive

It has been a good decade for sexual freedom. The Supreme Court issued opinions protecting the rights of gay individuals to engage in sexual relationships and striking down a ban on the federal recognition of same-sex marriages. Two gay teen characters were portrayed as having a positive sexual relationship (leading to a marriage proposal) on network television. Sexual practices formerly viewed as perverse, such as role playing and sado-masochism, seem almost provincial now that there is a copy of Fifty Shades of Grey on every great-aunt’s bookshelf.

But, in an op-ed published in the Washington Post this weekend, I argue that even amid this legal and pop culture sexual revolution, much of our law remains curiously silent, squeamish, or disapproving on the topic of sexual pleasure itself. Indeed, several areas of the law rely on the counterintuitive assumption that sexual pleasure has negligible or negative value and that we sacrifice nothing of importance when we curtail it. This phenomenon extends even to legal realms that regulate behaviors central to the experience of sexual pleasure.

The assumption that sexual pleasure has negligible or negative value is simply unfounded, and unfounded assumptions create bad laws and policies. Legal regulation generally sacrifices our freedom to engage in certain activities because the activities result in harm or because regulation generates benefits. Devaluing sexual pleasure distorts this calculus. In truth, sexual pleasure is actually a very good thing, simply because it is pleasurable.

Truly progressive legal reform would recognize the inherent value of sexual pleasure. This would have significant implications for several areas of law, ranging from obscenity to rape law. The op-ed out this weekend is part of a larger project challenging the sex-negativity of law and envisioning how simply valuing sexual pleasure in itself would require us to rethink different areas of law.

Obscenity law, for example, relies on the assumption that offensive speech that is intended merely to arouse is entitled to less constitutional protection than any other type of offensive speech. The Miller test allows states to freely ban any material that depicts sexual activity “in a patently offensive way” and “appeals to the prurient interest.” The First Amendment only protects this material if it has some serious literary, artistic, political, or scientific value to redeem it. In contrast, states may not ban other types of offensive material unless they can show it is likely to cause some harm. If sexual pleasure in itself is valuable, then we can’t justify banning offensive prurient material more freely simply because its primary purpose is to arouse people. Instead, we have to think more carefully about how (and whether) states should be able to regulate any offensive materials.

Recognizing sexual pleasure would also require state courts and legislatures to rethink the criminalization of sado-masochistic sexual activities (or “BDSM”). BDSM has become so prevalent in popular culture that it seems almost quaint. But even some consensual spanking can lead to an assault or battery charge in most states. In contrast, the law permits violent sports, cosmetic surgery, tattooing, and skin piercing, in large part because courts and legislatures accept their value. We can’t justify this distinction if we acknowledge that sexual pleasure has as much value as the pleasure derived from a boxing match or cheek implants.

Recognizing the value of sexual pleasure doesn’t mean we have to value it above everything else. We regulate the things that bring people pleasure all the time. We value the pleasure we experience from music, but I may not kidnap Beyoncé and force her to join me on a song-filled road trip, no matter how magical the experience would be for me. Sexual pleasure is no different—we can acknowledge it is important and still regulate it.

But valuing sexual pleasure does require us to regulate more honestly. It allows a more complete and well-reasoned discussion of what we choose to regulate, what we fail to regulate, and our justifications for those choices.

The op-ed, “The Joyless Law of Sex,” is available here. “Sex-Positive Law” will appear in the 87th volume of the NYU Law Review in April.

Posted by Margo Kaplan on November 22, 2013 at 05:12 PM in Criminal Law, Culture, First Amendment, Legal Theory | Permalink | Comments (14) | TrackBack

Friday, May 31, 2013

Non-State Law Beyond Enforcement II

With grading finally behind me, I wanted to post again about non-state law "beyond enforcement."   The question I've been exploring is in what ways various forms of non-state law (such as international law and religious law) function as law even when they lack the ability to enforce their legal rules.

In my last post, I mentioned a forthcoming book by Chaim Saiman, which conceptualizes Jewish Law as "studied law" as opposed to enforced law.  In making this point, Saiman highlights some Jewish legal doctrines that the Talmud explicitly notes are not meant to be applied in the public square, but simply dissected in the study hall.  In this way, Saiman disaggregates the very concept of Jewish law from the enforcement of Jewish law.

Now there is a tendency to think that religious law - as opposed to other forms of non-state law - is particularly susceptible to manifesting law-like characteristics outside the context of enforcement.  Religious law, at its core, is intended to connect individuals to something outside of this world and so it is not surprising that certain facets of religious law might be directed not to practical this-world enforcement, but to achieving some other-worldly religious value.      

While I think this sentiment is true, over-emphasizing the point would lead us to miss the ways in which other forms of non-state law exhibit law-like features even in the absence of enforcement.  At the symposium I ran a few weeks back on "The Rise of Non-State Law," Harlan Cohen (Georgia) presented a great paper titled "Precedent, Audience and Authority."  The paper grappled with the following question: why is it that, even though international law denies international precedent any doctrinal force, precedent is cited constantly as authority in any number of international law fields?

To answer the question, Cohen emphasizes the way in which law - and in particular international law - is a practice with its own (often unspoken) interpretive rules and norms.  On this account, Cohen focuses on how precedent speaks to the members of the international law community - the ways in which using precedent generates legitimacy for international law in the eyes of those within the international law community.  

One of the striking features of Cohen's analysis - at least striking to me - is the persistence of precedent in the eyes of consumers of law even absent an actual doctrinal basis.  It is almost as if, at least in certain legal communities, law resists any interpretive method that discounts precedent.  All of this struck me as a bit Dworkinian, capturing another important way in which non-state law can function as law outside the context of enforcement.  Put differently, certain legal systems can be identified as being systems of law not simply based upon the extent to which the law is enforced, but based upon certain methods of interpretation endemic to law.  

In this way, Cohen's notion of international law as a practice parallels Saiman's formulation of Jewish law as studied law.  In both instances, we find important ways in which non-state law functions internally as law based upon the way in which the law is interpreted and analyzed.  On this account, non-state law can function as law irrespective of whether it is enforced.  

Posted by Michael Helfand on May 31, 2013 at 02:34 PM in International Law, Legal Theory, Religion | Permalink | Comments (0) | TrackBack

Friday, May 24, 2013

Non-State Law Beyond Enforcement

So I've been a bit behind in posting as I slowly drag myself toward the grading finish line (aside: thanks to all my Prawfs' Facebook friends who have been regularly taunting me by noting how long ago they finished grading.  I get it - I'm slow).  But today I wanted to post again about non-state law, focusing on what it might mean to be law even when the law in question is not enforced.

As an example of this dynamic, I've been reading some advanced chapters of Chaim Saiman's forthcoming book Halakhah: The Rabbinic Idea of Law (Princeton U. Press).  One of the key questions Saiman tussles with in the book - and also addressed in his public Gruss Lecture in Talmudic Law - is why there are multiple Jewish legal doctrines which the Talmud expressly states are not intended to be enforced in any circumstance.  As examples, Saiman notes how regarding doctrines like the "rebellious son" and the "rebellious city," the Talmud states the "law never did, nor ever will apply."  In response to questions as to why there exist laws that are not intended to be enforced, the Talmud simply responds "To study and receive reward."

Saiman's book interrogates this response, exploring what it means to have "studied law" as opposed to "enforced law" - and by extension what it means to be unenforced law.  Much of his analysis revolves around contrasting philosophical inquiry and legal inquiry, with the latter funneling the reader into concrete application of core values (in ways that abstract philosophical inquiry often does not) and requiring the reader to inhabit a particular religious world that can more effectively convey principles and values.  

In this way, his project is a quintessential example of how the discursive practice of law - and not merely the enforcement of law - serves a unique legal purpose.  It is the concrete and detailed method of legal analysis that pulls the reader into the legal text - much like a novel pulls the reader into a narrative - that captures a key facet of how Jewish Law functions as law (one hears strong elements of Robert Cover in Saiman's analysis).  Moreover, it also provides important guidance for thinking about the internal elements (as opposed to external manifestations) of law and legal practice - a topic which I hope to explore a bit further in my next post.

Posted by Michael Helfand on May 24, 2013 at 02:04 PM in Legal Theory, Religion | Permalink | Comments (4) | TrackBack

Friday, May 17, 2013

Non-State Law and Enforcement

As I mentioned in my last post, I've been doing some thinking about what it means to be non-state law and looking to different types of non-state law - such as international law or religious law - to consider some common dynamics that consistently arise.  

One theme that regularly emerges - and is often discussed - in the context of non-state law is the problem of enforcement.  Put simply, without the enforcement power of a nation-state, non-state law must typically find alternative mechanisms in order to ensure compliance with its rules and norms.  This hurdle has long figured into debates over whether one can properly conceptualize international law as law.

But the focus on enforcement is problematic for a couple of reasons.  First of all, the challenge of enforcement for non-state law is in many ways overstated.  For example, in a 2011 article titled Outcasting: Enforcement in Domestic and International Law, Oona Hathaway and Scott Shapiro explored this issue, emphasizing - especially in the context of international law - how certain forms of nonviolent sanctions, such as denying the disobedient the benefits of social cooperation and membership, can be deployed as a form of non-state law enforcement.  Indeed, the use of outcasting has long been prominent in other areas of non-state law, for example as a method of enforcing religious law within religious communities.  

There's, of course, much more to be said on the relationship between non-state law and enforcement (something I may explore in a subsequent post).  But too heavy an emphasis on this piece of the non-state law puzzle is problematic for a second reason - it too often obscures other important ways in which non-state law functions as law.  In my next couple of posts what I'd like to do is consider other ways in which various forms of non-state law function as law by focusing more directly on the internal practice of law within the relevant communities.

Posted by Michael Helfand on May 17, 2013 at 04:46 PM in International Law, Legal Theory, Religion | Permalink | Comments (2) | TrackBack

Wednesday, May 15, 2013

Rationing Legal Services

In the last few years, there have been deep cuts at both the federal and state level in funding for legal assistance to the poor.  This only makes more pressing and manifest a sad reality: there is and always will be persistent scarcity in the availability of both criminal and civil legal assistance. Given this persistent scarcity, my new article, Rationing Legal Services, just published in the peer-reviewed Journal of Legal Analysis, examines how existing Legal Service Providers (LSPs), both civil and criminal, should ration their services when they cannot help everyone.

To illustrate the difficulty these issues involve, consider two types of LSPs, the Public Defender Service and Connecticut Legal Services (CLS), that I discuss in greater depth in the paper. Should the Public Defender Service favor offenders under the age of twenty-five years instead of those older than fifty-five years? Should other public defender offices handling death-eligible offenses favor those facing the death penalty over those facing life sentences? Should providers favor clients they think can make actual innocence claims over those who cannot? How should CLS prioritize its civil cases and clients? Should it favor clients with cases better suited for impact litigation over those that fall in the direct service category? Should either institution prioritize those with the most need? Or, should they allocate by lottery?

I begin by looking at how three real-world LSPs (PDS, CLS, and the Harvard Legal Aid Bureau) currently ration. Then, in trying to answer these questions, I draw on a developing literature in bioethics on the rationing of medical goods (organs, ICU beds, vaccine doses, etc.) and show how the analogy can help us develop better rationing systems. I discuss six possible families of ‘simple’ rationing principles: first-come-first-serve, lottery, priority to the worst-off, age-weighting, best outcomes, and instrumental forms of allocation, and the ethical complexities of several variants of each. While I ultimately tip my hand on my views of each of these sub-principles, my primary aim is to enrich the discourse on rationing legal services by showing LSPs and legal scholars that they must make a decision as to each of these issues, even if it is not the decision I would reach.

I also examine places where the analogy potentially breaks down. First, I examine how bringing in dignitary or participatory values complicates the allocation decision, drawing in particular on Jerry Mashaw’s work on Due Process values. Second, I ask whether it makes a difference that, in some cases, individuals who receive legal assistance will end up succeeding in cases where they do not “deserve” to win. I also examine whether the nature of legal services as “adversarial goods”, the allocation of which increases costs for those on the other side of the “v.”, should make a difference. Third, I relax the assumption that funding streams and lawyer satisfaction are independent of the rationing principles selected, and examine how that changes the picture. Finally, I respond to a potential objection that I have not left sufficient room for LSP institutional self-definition.

The end of the paper, entitled “Some Realism about Rationing,” takes a step back to look for the sweet spot where theory meets practice. I use the foregoing analysis to recommend eight very tangible steps LSPs might take, within their administrability constraints, to implement more ethical rationing.

While this paper is now done, I am hoping to do significant further work on these issues and possibly pursue a book project on it, so comments, on- or offline, are very welcome. I am also collaborating with my wonderful and indefatigable colleague Jim Greiner and a colleague in the LSP world to do further work concerning experimentation in the delivery of legal services and the research ethics and research design issues it raises.

- I. Glenn Cohen

Posted by Ivan Cohen on May 15, 2013 at 02:57 PM in Article Spotlight, Civil Procedure, Law and Politics, Legal Theory, Life of Law Schools, Peer-Reviewed Journals | Permalink | Comments (2) | TrackBack

Wednesday, May 08, 2013

“Why is a big gift from the federal government a matter of coercion? ... It’s just a boatload of federal money for you to take and spend on poor people’s health care” or the mysterious coercion theory in the ACA case

At oral argument in NFIB v. Sebelius, the Affordable Care Act (ACA) case, Justice Kagan asked Paul Clement:

“Why is a big gift from the federal government a matter of coercion? It’s just a boatload of federal money for you to take and spend on poor people’s health care. It doesn’t sound coercive to me, I have to tell you.”

The exchange is all the more curious because, despite her skepticism, Kagan signed on to the Court’s holding that the Medicaid expansion in the ACA was coercive, as did all but two of the Justices (Ginsburg and Sotomayor). What happened? I try to answer this question, suggesting the Court misunderstood what makes an offer coercive, in this article published as part of a symposium on philosophical analysis of the decision by the peer-reviewed journal Ethical Perspectives.

First a little bit of background since some readers may not be as familiar with the Medicaid expansion part of the ACA and Sebelius: The ACA purported to expand the scope of Medicaid and increase the number of individuals the States must cover, most importantly by requiring States to provide Medicaid coverage to adults with incomes up to 133 percent of the federal poverty level. At the time the ACA was passed, most States covered adults with children only if their income was much lower, and did not cover childless adults. Under the ACA reforms, the federal government would have increased federal funding to cover the States’ costs for several years in the future, with States picking up only a small part of the tab. However, a State that did not comply with the new ACA coverage requirements could lose not only the federal funding for the expansion, but all of its Medicaid funding.

In Sebelius, for the first time in its history, the Court found such unconstitutional ‘compulsion’ in the deal offered to States in order to expand Medicaid under the ACA. In finding the Medicaid expansion unconstitutional, the Court contrasted the ACA case with the facts of the Dole case, wherein Congress “had threatened to withhold five percent of a State’s federal highway funds if the State did not raise its drinking age to 21.” In discussing Dole, the Sebelius Court determined “that the inducement was not impermissibly coercive, because Congress was offering only ‘relatively mild encouragement to the States’,” and the Court noted that it was “less than half of one percent of South Dakota’s budget at the time” such that “[w]hether to accept the drinking age change ‘remain[ed] the prerogative of the States not merely in theory but in fact’.”

By contrast, when evaluating the Medicaid expansion under the ACA, the Sebelius Court held that the

financial “inducement” Congress has chosen is much more than “relatively mild encouragement” – it is a gun to the head [...] A State that opts out of the Affordable Care Act’s expansion in health care coverage thus stands to lose not merely “a relatively small percentage” of its existing Medicaid funding, but all of it. Medicaid spending accounts for over 20 percent of the average State’s total budget, with federal funds covering 50 to 83 percent of those costs [...] The threatened loss of over 10 percent of a State’s overall budget, in contrast [to Dole], is economic dragooning that leaves the States with no real option but to acquiesce in the Medicaid expansion.

I argue that this analysis is fundamentally misguided, and (if I may say so) I have some fun doing it! As I summarize the argument structure: If the new terms offered by the Medicaid expansion were not coercive, the old terms were not coercive, and the change in terms was not coercive, I find it hard to understand how seven Supreme Court Justices could have concluded that coercion was afoot; the only plausible explanation is that these seven Justices in Sebelius fundamentally misunderstood coercion. This misunderstanding becomes only more manifest when we ask exactly ‘who’ has been coerced, and see the way in which personifying the States as the answer obfuscates rather than clarifies matters.

The paper is out, but I will be doing a book chapter adapting it, so comments are still very much appreciated.

- I. Glenn Cohen

Posted by Ivan Cohen on May 8, 2013 at 12:01 PM in Article Spotlight, Constitutional thoughts, Current Affairs, Legal Theory, Peer-Reviewed Journals | Permalink | Comments (11) | TrackBack

Tuesday, May 07, 2013

Non-State Law

Back in 2011, I attended a symposium on Legal Positivism in International Legal Theory: Hart’s Legacy.  The conference was a bit outside the range of topics I usually write about (e.g. religion meets private law).  But presenting at the symposium drove home the point to me that international law and religious law scholars are contending with similar inquiries, many of which flow from one core question: what does it mean to be non-state law?   

When I talk about non-state law, I'm thinking collectively of various forms of law - from religious law to transnational law to international law.  Of course, thinking about these forms of law outside of the law of the nation-state has long been at the center of the legal pluralism project.  But what is often missed is that lessons from international law are  instructive for religious law - and vice versa.

This often overlooked opportunity was largely the motivation behind the "Rise of Non-State Law" symposium I organized last week.  To my mind, the papers, presentations and discussion at the symposium were extremely productive and got me thinking even more about the overlap between various forms of non-state law.  In my next couple of posts, I hope to say a little bit about non-state law, building on some of the insights from the symposium. 

Posted by Michael Helfand on May 7, 2013 at 03:41 PM in International Law, Legal Theory, Religion | Permalink | Comments (0) | TrackBack

Thursday, May 02, 2013

Great to be back and greetings from Washington!

It's great to be back at Prawfs for another guest-blogging stint.   I'm looking forward to spending the month talking a bit about some of my favorite topics such as co-religionist commerce, religious arbitration, and non-state law.  

My growing interest in non-state law largely traces to my sense that conversations in international law, transnational law, and religious law share much in common (e.g. discussions of what is law, can there be law without enforcement, how should the state treat competing legal norms, etc.).  To further this interest, I'm running a symposium in Washington, D.C. today sponsored by Pepperdine Law School and the American Society for International Law titled "The Rise of Non-State Law."  The symposium is part of a series run by ASIL's International Legal Theory Interest Group and the papers from today's symposium will eventually become part of a volume published by Cambridge University Press.  

I must say the papers submitted (and being presented) by the participants are truly fantastic and have led today to some great conversation and debate.  For those who share the interest, here's the full schedule for the day:

Symposium Schedule

8:30 a.m. Breakfast (Tillar House)

8:45 Introduction (Michael Helfand (Pepperdine), John Linarelli (Swansea))

9:00 Panel 1—Global Legal Pluralism: Trends and Challenges

10:45 Coffee

11:00  Panel 2—Non-State Law and Non-State Institutions

1:00 p.m. Lunch

2:00 Panel 3—The Role of Religion and Culture in Non-State Law

3:45 Coffee

4:00 Open Forum

5:00 Closing Comments

Posted by Michael Helfand on May 2, 2013 at 12:11 PM in Culture, International Law, Legal Theory, Religion | Permalink | Comments (0) | TrackBack

Tuesday, February 05, 2013

What Mainstream Criminal Procedure Overlooks (and Why)

In the words of a friend of mine, who worked for years at a very prominent public interest law firm in the South, "everyone is overlooking everything."  By this, I mean that the adjudication portion of the criminal procedure syllabus for the most part leaves students with no idea what goes on in the sorts of low-level criminal courts so nicely described by Amy Bach in her book "Ordinary Injustice," which might be thought of as a journalistic follow-up to Malcolm Feeley's pathbreaking work, "The Process is the Punishment." 

I'm going to hazard the thesis that the reason we have no idea what goes on in the courts that process the bulk of our criminal cases is an "elite" focus on doctrine.  First, these courts are largely invisible to "doctrine."  They do not produce many opinions, their other operations are hard to access from the comfort of a law-school office or library, and so there is a paucity of materials readily at hand produced by the courts. Because of our reliance on "well reasoned opinions" (or at least pedagogically-useful-badly-reasoned ones), the gold standard for teaching criminal procedure is either the elite federal court system, or the differently elite state appellate court system, which do produce opinions that are readily accessible from a computer or library. 

Second, state trial and (especially) municipal courts are often bereft of "doctrine."  There is little doctrine in municipal court, where lawyering depends upon interpersonal interactions between members of the court "workgroup" (as the sociologists put it).  In these courts, appeals to doctrine may actually be counterproductive: a nuclear option utilized only when workgroup relationships break down or do not yet exist. 

Third, in order to access the operation of these low-level courts we depend upon either anecdotal data or social science data.  The first is unreliable but emphasizes "practice-based knowledge" of the sort that is currently popular; the latter is much more reliable and useful, but emphasizes a discipline that is generally held in disregard by law faculties in the United States (but not, intriguingly, in Europe or the British Commonwealth countries). 

Fourth and finally, (as Alexandra Natapoff compellingly argues) we tend to prioritize felonies over misdemeanors, on some scale of seriousness, despite the fact that for many individuals the impact of a misdemeanor may be as severe as some felonies.  Accordingly, we have little or no knowledge about what happens to the 13 million people who cycle through the misdemeanor system and who are afforded a rough and ready sort of justice. 

While I don't think this is the whole story, I think it is a start.  [I do think that another part of the story is who is writing the scholarship: primarily scholars employed in clinical programs, low-level judges, and criminologists and sociologists working through the data.  My sense—though anecdotal—is that there is a little bit of snobbery about the producers of this scholarship, though I’d be happy to be wrong about that.  I’ll discuss this part of politics of scholarly production and recognition in a subsequent post.]

Problem-solving courts afford one window into this type of court, albeit a specialized version of the system.  What they reveal is a system of justice that is marginal, political, and administrative, dominated by the judge as much as the prosecutor, and in which the Sixth Amendment notion of rights to counsel and adversarial testing is largely absent.  Furthermore, the ideal of an administrative system of justice based on legal-rational decision-making is largely absent: the decisions are made through a mixture of conflict and collaboration that is often actively non-bureaucratic (as Feeley first argued). 

Over the next few days I’ll engage a little with some of the great scholarship out there that has yet to make its way into the traditional course.  But one central point worth making is that the study of low-level criminal courts, given the non-doctrinal nature of the process and the sorts of issues raised, must, if it is to be descriptively accurate and normatively productive, be both interdisciplinary and practice-oriented.  The sort of interdisciplinarity I have in mind looks at how practice happens on the ground, and how political institutions, like courts, operate.  One nice example of the latter is Lisa Miller’s book, The Perils of Federalism, which looks at crime, politics, and criminal justice at the community level in Philadelphia.

It ought to be the sort of thing that the various theories of punishment—sociological, criminological and philosophical—attend to.  Often, however, these are top-down theories, primarily concerned with the policies (actuarialism, control, risk) and officials (legislators, perhaps prosecutors, appellate judges) that are perceived as having wide political influence over the criminal justice system: but certainly not low-level judges.  What I am proposing, then, is a bottom-up look at the criminal justice system for the sorts of institutional resistances to legislation that (as criminologist Pat O’Malley argues) are often invisible from the top down perspective of governance.   Problem-solving courts offer a neat example of this sort of institution.  

Posted by Eric Miller on February 5, 2013 at 10:22 AM in Criminal Law, Legal Theory | Permalink | Comments (4) | TrackBack

Wednesday, January 30, 2013

Book Club: Justifying IP -- Putting the Horse Before Descartes (Response to Duffy)

In this, my final response to the many interesting posts about my book, I want to traverse some comments that John Duffy made. To the other authors of posts, especially those who wrote reactions to my responses -- we will have to continue offline. I have taken too much space already. And the many readers of Prawfsblawg who care nothing for IP are, I am sure, tired of all this.

I am going to skip over the blush-inducing praise in John's post, and get right to his main point. He says:

" [I]f we are frustrated with the complexities of economic theories and are searching for a more solid foundation for justifying the rules of intellectual property, is Kant (or Locke or Rawls or Nozick) really going to help lead us out of the wilderness?"

John says no. He says further that just as Descartes' doubts drove him to embrace foundations that were thoroughly unhelpful when it came to elucidating actual physical reality, such as planetary motion, so my doubt-induced search for solid foundations will lead nowhere (at best), and maybe to some very bad places (at worst).

This argument may be seen to resolve to a simple point, one often made in legal theory circles: "It takes a theory to beat a theory." (Lawrence Solum has an excellent entry on this topic in his Legal Theory Lexicon, posted on his Legal Theory Blog some time back.) The idea here is that utilitarian theory is a true theory, because it is capable of proof or refutation and because it guides inquiry in ways that could lead to better predictions about the real world. By this criterion, deontic theories are not real theories because they cannot be either proven or refuted. Einstein's famous quip comes to mind; after a presentation by another scientist, Einstein supposedly said "Well, he wasn't right. But what's worse is, he wasn't even wrong."

My response starts with some stark facts. We do not know whether IP law is net social welfare positive. Yet many of us feel strongly that this body of law, this social and legal institution, has a place in a well-functioning society. Now, we can say the data are not all in yet, but we nevertheless should maintain our IP system on the hope that someday we will have adequate data to justify it. The problem with this approach is, where does that leave us in the interim? We could say that we will adhere to utilitarian theory because it stands the best chance of justifying our field at some future date -- when adequate data are in hand. But meanwhile, what is our status? We are adhering, we say, to a theory that may someday prove true. By its own criteria it is not true today, not to the level of certainty we require of it (and that it in some sense requires of itself). But because it will be "more true" than other theories on that magic day when convincing data finally arrive, we should stick to it.

My approach was to turn this all upside down. I started with the fact that the data are not adequate at this time. And I admitted that I nevertheless felt strongly that IP makes sense as a field; that it seems warranted and even necessary as a social institution. So it was on account of these facts that I began my search for a better theoretical foundation for IP law.

If you have followed me so far, you will not be surprised when I say that for me, Locke, Kant and Rawls better account for the facts as I find them than other theories -- including utilitarianism. Deontic considerations explain, to me at least, why we have an IP system in the absence of convincing empirical evidence regarding net social welfare. Put simply: We have IP, regardless of its (proven) effect on social welfare -- so maybe (I said to myself) *it's not ultimately about social welfare*.

This is the sense in which, to me, deontic theory provides a "better" theory of IP law. It fits the facts in hand today, including the inconvenient fact of the absence of facts. Of course, we may learn in years to come that the utilitarian case can be made convincingly. I explicitly provide for this in JIP, when I say that there is "room at the bottom," at the foundational level, for different ultimate foundations and even new ultimate foundations. It's just that for me, given the current data, I cannot today make that case convincingly. And it would be a strange empirically-based theory that asks me to ignore this key piece of factual information in adopting foundations for the field. To those who say deontic theories cannot be either proven or disproven, I offer the aforementioned facts, and say in effect that an amalgam of deontic theory does a better job explaining why we have IP law than other theories. And therefore that it is in this sense "more true" than utilitarian theory. Again, it fits the facts that (1) we do not have adequate data about net social welfare; and (2) we nevertheless feel IP is an important social institution in our society and perhaps any society that claims to believe in individual autonomy, rewards for deserving effort, and basic fairness.

One final point: to connect Kant with Hegel with Marx, as John does, is a legitimate move philosophically. But I have to add that for many interpreters of Marx, he is the ultimate utilitarian. What is materialism, as in Marxist historical materialism, but a system that makes radically egalitarian economic outcomes the paramount concern of the state? The famous suppression of individual differences and individual rights under much of applied Marxist theory represents the full working out of the utilitarian program, under which all individuals can be reduced to their economic needs, and all government to a mechanistic system for meeting those needs (as equally as possible). If we are going to worry about where our preferred theories might lead if they get into the wrong hands, I'll take Locke and Kant and Rawls any day. In at least one form, radical utilitarian-materialism has already caused enough trouble.

This is hardly all there is to say, but it is all I have time to say. So I will keep plodding along, like a steady plow horse, trying not only to sort out the foundational issues, but also to engage in policy discussion and doctrinal analysis. And with this image I close, having once again put the (plow) horse before Descartes in the world of IP theory.

 

 

Posted by Rob Merges on January 30, 2013 at 08:50 PM in Books, Intellectual Property, Legal Theory | Permalink | Comments (2) | TrackBack

Book Club: Even More on Midlevel Principles in IP Law - Response to Bracha

In a previous post I explained the concept of midlevel principles in IP law. In this post I respond to a couple of detailed points made in a very insightful post on this topic by Oren Bracha. Oren has a number of interesting things to say, but his critique has two main points: (1) the conservative bias of midlevel principles; and (2) the fuzzy nature of midlevel principles, a product of their origin in a (hypothetical) consensus-building procedure.

(1) The conservative bias: I think there are two senses of "conservative." In my view, what are conserved are meta-themes that derive from but transcend specific practices. These themes do not uniformly point to results that are "conservative" in the other sense -- tending to preserve the status quo; continuing with trends currently in place. Let me illustrate with two specific examples. When Wendy Gordon introduced the idea of "fair use as market failure," she tied together a number of emerging themes in copyright law and connected them with a large body of thought (including caselaw) that came before. But her ideas -- based largely on what I would call the efficiency principle, though surely infused also with considerations of proportionality, nonremoval (public domain), and perhaps even dignity -- were not conservative with respect to outcomes. In fact they created a revolution in consumer or user rights, by shifting the focus from the copyright owner's interests, the amount copied, etc., to higher-level issues such as transaction costs and the nature of markets for IP-protected works. 

A second example is eBay.  The majority opinion, based on traditional equity doctrine (as codified in the Patent Act), was conservative in the sense that it deployed well-known rules.  The Kennedy concurrence had a richer policy discussion, which centered (in my view) on the proportionality principle.  The basic idea was that sometimes the automatic injunction rule gives patent owners "undue leverage" in negotiations; and that equity was flexible enough to take this into account.  I see this as the embodiment of a very general principle, one that finds expression in many areas of IP law, from the rules of patent scope (enablement, written description, claim interpretation, etc.) to substantial similarity in copyright law, and so on.  Again the discussion "conserved" on meta-principles by deploying a familiar theme from the body of IP law.  But the outcome was not therefore necessarily conservative in the sense of preserving the status quo.  The status quo heading into the case was the automatic injunction rule.  And that was rejected in favor of a more flexible approach.

(2) The fuzz factor: Oren's second point is that the midlevel principles just do not seem to have the requisite level of granularity to resolve difficult problems in IP policy. This leads him to conclude that the only way to gain true resolution is to engage each other at the (admittedly contentious) level of our foundational commitments.

Here I would advert to the master for some guidance. John Rawls, in A Theory of Justice, describes a detailed multi-stage procedure by which fair institutions can be established. In the course of the discussion he says this about the problem of fuzziness:

"[O]n many questions of social and economic policy we must fall back upon a notion of quasi-pure procedural justice: laws and policies are just provided that they lie within the allowed range, and the legislature, in ways authorized by a just constitution, has in fact enacted them. This indeterminacy in the theory of justice is not in itself a defect. It is what we should expect. Justice as fairness will prove a worthwhile theory if it defines the range of justice more in accordance with our considered judgments than do existing theories, and if it singles out with greater sharpness the graver wrongs a society should avoid." (A Theory of Justice, sec. 31, pp. 200-201).

So foundational consensus will inevitably be general. But that does not mean that citizens cannot engage each other in contentious argument at more operational, implementation-oriented stages. The way I see things, the midlevel principles are expansive enough to cut through the generality required to agree on them. (Note that this pluralistic sensibility is a product not of the early Rawls of A Theory of Justice but of the later Rawls of Political Liberalism.) These principles admit of sharper disagreement and a deeper level of engagement than Oren seems to believe. Perhaps they require greater elaboration than my brief treatment made possible. But they are not in my view fatally vague as a vocabulary of policy debate.

I should add one additional point. Oren notes my emphasis in JIP on the complete independence of foundational commitments and midlevel principles. I have begun to rethink that a bit, based in large part on a thoughtful critique of this aspect of the book by David H. Blankfein-Tabachnick of Penn State Law School. His critique and my response are both still in process and are forthcoming in the California Law Review, so I do not want to say too much. But suffice it to say that I have rethought the "complete independence" thesis a little bit. I can see that in a few rare instances, where policy issues are in equipoise, resort to one's ultimate commitments -- the foundations of the field as one sees them -- may be useful and even necessary. So, to close with Oren's wonderful imagery, after the flash of white light on the road to Damascus, the rider surely does remount and head on down the road. But he or she is changed utterly at some level -- and that change is bound to peek out, now and then, in the clinch.

Posted by Rob Merges on January 30, 2013 at 01:52 PM in Intellectual Property, Legal Theory, Property | Permalink | Comments (4) | TrackBack

Book Club: Justifying IP -- Midlevel Principles: Response to Jonathan Masur

In this post I respond to some comments on my book (abbreviated "JIP") by Jonathan Masur. It is not surprising to me that Jonathan takes aim at Part II of JIP, in which I introduce and explain what I call the midlevel principles of IP law. It seems whenever the book is addressed in depth (most notably at a full-day conference at Notre Dame organized by Mark McKenna; and a number of discussions at a conference on the Philosophy of IP rights at San Diego convened by Larry Alexander), this is the topic that seems to stir up the greatest interest.

Before I turn to Jonathan's specific points, let me say a word about what I mean by midlevel principles. Basically, these are meta-themes in IP law that mediate between pluralist foundational commitments and detailed doctrines and case outcomes. They are meant to serve as the equivalent of shared basic commitments in the “public” and “political” sphere as described by Rawls in his book Political Liberalism (2005). That is, midlevel principles supply a shared language, a set of conceptual categories, that are consistent with multiple diverse foundational commitments. They are more abstract, operate at a higher level, than specific doctrines and case outcomes; but they are pitched in a language that is distinct from that of foundational commitments. They create, as I say in JIP, a shared public space in which abstract (non-case-specific) policy discussions can take place. The payoff is this: a committed Kantian can conduct a sophisticated policy argument with a firm believer in the Talmudic (or Muslim, or utilitarian) basis of IP law about the proper scope of fair use in copyright, or the proper length of the term for patent protection, or what should be required to prove that a trademark has been abandoned. The argument can proceed without the Muslim needing to convert the Kantian or utilitarian to a religious worldview, and without the Kantian talking others out of the view that religious texts provide a set of workable guiding principles for right behavior. Diverse people can – and indeed, often do! – speak in terms of an appropriate public domain (i.e., the nonremoval principle); a fair reward for creators (the proportionality principle); the importance of moral rights (the dignity principle); or the cheapest way to offer legal protection at the lowest net social cost (the efficiency principle). All without the conversation devolving into fights over ultimate commitments.

Jonathan Masur recognizes the versatility of the midlevel principles. And he acknowledges that although these principles are fully consistent with utilitarian foundations, the IP system as a whole has failed to fully implement the policies that a thorough commitment to those foundations would call for. As he puts it:

"The problem, as Merges correctly describes it, is that IP doctrine, as implemented by courts and other parties, has failed to advance the economic aims that it set out. This is an empirical judgment, and quite possibly a correct one."

As Masur notes, I have come to believe that utilitarian foundations are inadequate in the IP field. The data required by a comprehensive utilitarian perspective are simply not in evidence in this field -- at least not yet. Put simply, I do not think we can say with the requisite degree of certainty that IP systems create net positive social welfare. Yet I still had the intuition that IP rights are a valuable social institution. Which is what led me to search for alternate foundations. Hence Part I of JIP, in which I describe foundational commitments growing out of the ideas of Locke, Kant and Rawls. These deontic conceptions provide a better set of foundational commitments for the IP field, in my view. Others of course disagree, which is why the midlevel principles are so important as a shared policy language for those with divergent foundational commitments.

Masur notes the lack of empirical support for utilitarian IP foundations, but says in effect that deontic foundations do not provide much of an alternative. As he puts it,

"But what is the comparable standard by which a deontic conception of IP is to be judged? What would it mean for IP doctrine in practice not to have properly advanced Lockean or Kantian ethics? How could anyone tell? The problem—or, more accurately, the advantage for Kant and Locke—is that those approaches are purely theoretical and do not generate testable predictions. Economic theory has foundered on a set of tests that cannot be applied to the alternatives Merges proposes."

The way I see things, Jonathan has conflated two separate issues here. The first is whether IP can be justified at all. The second is how well any particular IP system is performing, given that there is a basic consensus that there should be such a system in the first place. The first issue is where foundational commitments come in. The second is operational; it is a question more of "how" or "how well" as opposed to "whether." (I address this in more detail in an article forthcoming in the San Diego Law Review, "The Relationship Between Foundations and Principles in IP Law.")

Seen in this light, there is no need for empirical tests to prove the viability of Lockean, Kantian, and/or Rawlsian foundations for the field. The only question that needs to be answered is whether a body of IP law can be envisioned that is consistent with these systems of philosophical thought. If so, the foundational question has been successfully answered. Then it's on to the operational level -- designing actual institutions and rules to implement a workable IP system. In my view this is where the efficiency principle comes into play: one important design principle for IP law is and should be getting from our IP system the greatest social benefit at the lowest net cost (as best we can estimate these values). Efficiency is an operational (midlevel) principle, in other words. It does not (and in my view cannot) justify the existence of the field. But it can serve us well in crafting the detailed operations of the field -- once we decide, consistent with ultimate commitments, that it makes sense to have such a field in the first place.

Posted by Rob Merges on January 30, 2013 at 12:14 PM in Intellectual Property, Legal Theory, Property | Permalink | Comments (0) | TrackBack

Tuesday, January 29, 2013

Merges on Gordon on Rawls and IP

Wendy Gordon, as might be expected, gets right to the heart of the most difficult issues in her post on the Rawls chapter in my book, Justifying Intellectual Property ("JIP"). In this post I want to give some quick context and then point the interested reader to the fuller discussion that addresses the issues Wendy raises.

Chapter 4 of JIP is on "Distributive Justice and IP Rights." It comes after an introductory chapter that lays out the architecture of the book, and then two chapters on foundational figures in the philosophy of property rights, Locke and Kant. While Locke and Kant are both sophisticated enough to include "other-regarding" features in their accounts of property, I wanted to include a more thorough, systematic, and comprehensive account of distributive justice issues in my discussion of IP rights. So naturally I turned to Rawls. Rawls himself, especially the early Rawls of A Theory of Justice, is fairly lukewarm on private property. But there is a good bit of subsequent literature that extends and adapts Rawls's framework in various ways that reflect more contemporary concerns. And of course since the 1970s there has been a huge upswelling of interest in property theory and philosophical discussions of private property. (Think Jeremy Waldron, The Right to Private Property; Stephen Munzer, A Theory of Property; Richard Epstein's Takings book and subsequent writings; Henry Smith, Lee Anne Fennell, Carol Rose, Greg Alexander, etc. etc. And in IP law, Peggy Radin, Wendy herself (indispensable on Locke), and others.) So it was in this spirit of updating and adapting that I tried to defend IP rights as consistent with a comprehensive Rawlsian account of distributive justice.

I began, reasonably enough I think, with Rawls's two principles of justice. Principle 1 says that all persons have an equal right to the most extensive system of basic liberties that is consistent with the liberty of others (the "liberty principle"). The balancing of individual ownership with the interests and rights of the community is a major theme of contemporary property theory -- arguably *the* major theme. So it was relatively easy to draw on the property rights literature for a defense of property (and particularly IP rights) under the liberty principle. I will spare you the details here; but I would add that for me Kant's emphasis on property as a way to facilitate personal autonomy factors heavily into my description of IP as a true, basic individual right.

Rawls aficionados will recognize that I could have stopped there. Under his "lexical priority" approach, if a right is demanded by the liberty principle it need not be justified in terms of the second principle. Because I was not sure everyone would buy my defense of IP under the first principle, and more importantly because I could not resist the challenge, I also tried to defend IP under Rawls's second principle. The second principle is the famous "difference principle." A deviation from strictly equal resource allocation can be justified only if it results in the greatest benefit to the least advantaged members of society. My argument here is based on the fact that industries reliant on IP rights contribute significantly to the quality of life of the poorest members of society.

Popular culture (including much TV programming); technological improvements such as air conditioning; low-cost long-distance communication and transportation (especially important for immigrants); and cost-saving innovations of all kinds (mobile phones, hypertension medicines, etc.) are, the data show, highly valued by low-income members of our society. These data are surely what Wendy Gordon has in mind when she says that I have not persuasively defended IP under Rawls's second principle. She makes a good point. IP rights stand behind a number of personal fortunes that in themselves represent wildly extravagant deviations from a pure egalitarian distribution (think Bill Cosby, Bill Gates, Jay-Z, George Lucas, Oprah Winfrey). Consumer enjoyment, particularly for the least advantaged, must be factored into a discussion of these fortunes and the institutions (including IP rights) that make them possible.

I would note, incidentally, an interesting feature of my list of IP-backed fortunes. Did you notice that 3 of the 5 people mentioned are African Americans? While not strictly relevant to the second principle, I think it is interesting that so many prominent fortunes in the African American community have been enabled by IP rights. This may not be justifiable unless the poorest of our citizens somehow benefit from the conditions that make these fortunes possible, but it is surely an interesting point from the general perspective of distributional concerns in our socio-economic system. (Incidentally, Justin Hughes and I have undertaken some joint work to pursue this idea in more depth.)

Nevertheless, as I acknowledge in JIP, what I provide is really not much more than a sketch of a full-blown defense of IP under the difference principle. A fuller defense would have to accept the higher marginal prices brought about by IP, and balance these against the consumer surplus created even for the poorest members of society by IP-based entertainment and technology products. My defense gestures in this direction but falls far short of being truly comprehensive. On the other hand, at least I have tried to integrate a comprehensive account of distributive justice into the discussion of IP rights. It may be less than a full feast. But perhaps it's also more than chopped liver.

Posted by Rob Merges on January 29, 2013 at 02:23 PM in Intellectual Property, Legal Theory | Permalink | Comments (0) | TrackBack

Saturday, November 10, 2012

Score 1 for Quants, but Score 5 for Pollsters

There's been a lot of talk after the election about how one big winner (after Obama, I imagine) is Nate Silver, of the FiveThirtyEight blog. He had come under fire in the days/weeks leading up to the election for his refusal to call the race a "toss up" even when Obama had only a narrow lead in national polls. He even prompted a couple of posts here (in his defense). Turns out that Silver called the election right - all fifty states - down to Florida being a virtual tie.

But that's old news. I want to focus on something that may be as important, or even more so: the underlying polling. We take it for granted that the pollsters did the right thing, but their methodology, too, was under attack. Even now, there are people - quants, even - who were shocked that Romney lost because their methodology going into the election was just plain wrong.

So, that's where I want to focus this post after the jump - not just on "math" but on principled methodology.

It's easy to take the pollster methodology for granted. After all, they've been doing it for many, many years. That, plus the methodology is mostly transparent, and past polls can be measured against outcomes. Taking all of this methodology information into account is where Silver bettered his peers who simply "averaged" polls (and how Silver accurately forecasted a winner with some confidence months ago). Everybody was doing the math, but unless that math incorporated quality methodology in a reasonable way, the results suffered. 
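To make the contrast concrete, here is a minimal sketch of the difference between naively averaging polls and weighting them by methodology. Everything in it is invented - the firms, the numbers, the "house effect" adjustments - and it is emphatically not Silver's actual model; it only illustrates why two aggregators working from the same polls can reach different answers.

    # Hypothetical polls: (pollster, candidate share, sample size, house effect).
    # A positive house effect means the firm has historically leaned toward
    # the candidate; all figures here are made up for illustration.
    polls = [
        ("Firm A", 0.50, 600,  +0.01),
        ("Firm B", 0.52, 1200,  0.00),
        ("Firm C", 0.48, 400,  -0.02),
    ]

    # Naive aggregation: treat every poll identically.
    naive = sum(share for _, share, _, _ in polls) / len(polls)

    # Methodology-aware aggregation: correct each poll for its house effect,
    # then weight by sample size, since larger samples carry more information.
    adjusted = [(share - house, n) for _, share, n, house in polls]
    weighted = sum(s * n for s, n in adjusted) / sum(n for _, n in adjusted)

    print(f"naive average:    {naive:.3f}")   # 0.500
    print(f"weighted average: {weighted:.3f}") # about 0.508

The point is not the particular adjustment; it is that the weights and corrections embody a methodology, and a methodology is something that can itself be checked against past outcomes.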

It didn't have to be that way, though. As Silver himself noted in a final pre-election post:

As any poker player knows, those 8 percent chances [of Romney winning] do come up once in a while. If it happens this year, then a lot of polling firms will have to re-examine their assumptions — and we will have to re-examine ours about how trustworthy the polls are.

This is the point of my title. Yes, Silver got it right, and did some really great work. The pollsters, however, used (for the most part) methodologies with the right assumptions, which provided accurate data from which to reach the right answers. [11/11 addition: Silver just added his listing of poll result accuracy and methodology discussion here.]

The importance of methodology to quantitative analysis is not limited to polling, of course. Legal and economic scholarship is replete with empirical work based on faulty methodology. The numbers add up correctly, but the underlying theory and data collection might be problematic or the conclusions drawn might not be supported by those calculations.

I live in a glass house, so I won't be throwing any stones by giving examples. My primary point, especially for those who are amazed by the math but not so great at it themselves, is that you have to do more than calculate.  You have to have methods, and those methods have to be grounded in sound scientific practice.  Evaluation of someone else's results should demand as much.

Posted by Michael Risch on November 10, 2012 at 12:51 PM in Law and Politics, Legal Theory | Permalink | Comments (5) | TrackBack

Tuesday, August 21, 2012

A couple reading suggestions and the schedule for the NYU Crim Theory Colloquium

N.B. This post is basically for crimprofs and those interested in crim theory.

Apropos Rick's recent mention that he assigned an old favorite of mine, the Speluncean Explorers, for his first crim law class, I thought I'd share some (self-serving) recommendations, since this week marks the start of classes at many law schools across the country, and that means the first criminal law class is here or around the corner for some 1Ls.  (After the jump, I also share the schedule for the crim law theory colloquium at NYU this coming year.)

As many crim law profs lament, first-year criminal law casebooks generally have pretty crummy offerings with respect to the state of the field in punishment theory. (The new 9th edition of Kadish, Schulhofer, Steiker & Barkow, however, is better than most in this respect.) Most casebooks give a little smattering of Kant and Bentham, maybe a gesture to Stephen and, for a contemporary flourish, a nod to Jeff Murphy or Michael Moore or Herb Morris. Murphy, Morris, and Moore deserve huge kudos for revivifying the field in the 1970s and since.  Fortunately, the field of punishment theory is very fertile today, and not just with respect to retributive justice.  

For those of you looking to give your students something more meaty and nourishing than Kantian references to fiat iustitia, et pereat mundus, you might want to check out either Michael Cahill's Punishment Pluralism piece or a reasonably short piece of mine, What Might Retributive Justice Be?, a 20-pager or so that tries to give a concise statement of the animating principles and limits of communicative retributivism.  Both pieces, which come from the same book, are the sort that law students and non-specialists should be able to digest without too much complication.  Also, if you're teaching the significance of the presumption of innocence to your 1L's, you might find this oped I did with Eric Miller to be helpful as a fun supplement; it concerns the quiet scandal of punitive release conditions.

Speaking of Cahill (the object of my enduring bromance), Mike and I are continuing to run a crim law theory colloquium for faculty based in NYC at NYU. The goal for this coming year is to workshop papers on and by:

September 10: Re'em Segev (Hebrew U, visiting fellow at NYU); James Stewart (UBC, visiting fellow at NYU)

October 29: Amanda Pustilnik (U Maryland); Joshua Kleinfeld (Northwestern)

November 26: Dan Markel (FSU); Rick Bierschbach and Stephanos Bibas (Cardozo/Penn)

January 28: Rachel Barkow (NYU) and Eric Johnson (Illinois)

February 25: Miriam Baer (BLS) and Michael Cahill (BLS)

March 18: Josh Bowers (UVA) and Michelle Dempsey (Villanova)

April 29: Daryl Brown (UVA) and Larry Alexander (USanDiego)

As you can see, the schedule tries, however imperfectly, to bring together crim theorists of different generations and perspectives. These will be the fourth and fifth semesters of these colloquia. Let me know if you'd like to be on our email list for the papers.

Posted by Administrators on August 21, 2012 at 03:07 PM in Article Spotlight, Criminal Law, Legal Theory | Permalink | Comments (2) | TrackBack

Tuesday, June 12, 2012

Are All Citations Good Citations?

There’s a saying in the public relations field that “all press is good press.” The main premise is that, regardless of positive or negative attention, the ultimate goal is to be in the public eye. Does this same concept extend to legal academia? When our work is cited, but somehow questioned for its accuracy, merit, or value, is that better than not being cited at all?

Posted by Kelly Anders on June 12, 2012 at 12:03 PM in Deliberation and voices, Legal Theory, Peer-Reviewed Journals | Permalink | Comments (4) | TrackBack

Wednesday, June 06, 2012

The Doctrine of Efficient Breach: Faculty/Student Edition

Dan has graciously allowed me to extend my May guest stint in order to explore a couple of topics that I had intended to -- but did not -- cover in May.  I hope these remaining two posts are interesting to readers and do not lead Dan to regret his order granting me late check-out. 

Perhaps the most talked about post on this site from the month of May was Rick’s discussion of a student’s decision to back out on a commitment to serve as his research assistant.  The student, it seems, secured another opportunity that was more consistent with his or her professional aspirations.  Rick addressed the frustrating position the decision left him in, and also the sufficiency of the manner in which the decision was communicated.  First, I think Rick deserves a lot of credit for sharing his very thoughtful reflections triggered by the significant -- and largely critical -- reaction to his initial post.  I do not want to focus at all on the contents of the initial post or the subsequent reflective post.  Rather, what struck me was a comment to the initial post, suggesting that the student’s decision is akin to an efficient breach, and that defenders of the doctrine should have no problem with the student’s decision.

While the analogy isn’t perfect, I think the commenter was on to something.  In both situations, performance is expected and the non-breaching party is compelled to scramble to find someone to take the place of the breaching party.  The breaching party now has an alternative that he or she values more, and thus social welfare is supposed to be enhanced in this respect, assuming there is some compensation to the non-breaching party. 
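For readers who have not seen the textbook version, here is the stylized arithmetic behind the efficiency claim. The numbers are invented, and the calculation simply assumes away the very things I question below: that expectation damages truly leave the non-breaching party indifferent, and that the whole transaction is costless.

    # Stylized "efficient breach" arithmetic (hypothetical numbers).
    cost = 90            # seller's cost of performing
    price = 100          # contract price agreed with Buyer 1
    buyer1_value = 120   # what performance is worth to Buyer 1
    buyer2_value = 150   # what performance is worth to a late-arriving Buyer 2

    # Perform: total surplus is the seller's profit plus Buyer 1's surplus.
    surplus_if_perform = (price - cost) + (buyer1_value - price)      # 10 + 20 = 30

    # Breach: sell to Buyer 2 at her valuation and pay Buyer 1 expectation
    # damages equal to Buyer 1's lost surplus, which supposedly makes
    # Buyer 1 "indifferent."
    damages = buyer1_value - price                                     # 20
    surplus_if_breach = (buyer2_value - cost - damages) + damages      # 40 + 20 = 60

    print(surplus_if_perform, surplus_if_breach)   # 30 60 -- hence "efficient"

On this account the breach looks welfare-enhancing; the point pressed below is that the tidy "damages equal indifference" line does most of the work, and it rarely holds in real life.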

As a matter of full disclosure, I have my doubts about the doctrine of efficient breach.  The doctrine was the subject of my first law review article.  In it, I argued that efficient breaches are morally problematic because they degrade contracts, which are an instrument of social cooperation and mutual trust, and are not in fact "efficient," both because compensatory damages do not place the non-breaching party in the position he or she would have occupied absent the breach and because such breaches lead to a discounting of that which is exchanged in the market, given the possibility that a contracting party may not perform.  A student's decision to renege on a commitment to work for a faculty member, to the extent it has any relationship to the doctrine, seems to cut in favor of arguments against the doctrine.  Indeed, an efficient breach is said to work only if the non-breaching party is made "indifferent" to the breach through the receipt of compensation.  But I am not sure that a faculty member could be made truly "indifferent" in this situation.  While an efficient breach is very difficult to find in real life, a question that I examine in a more recent law review article is whether we, as professors, should be engaged in an active effort to promote the doctrine.  My sense is no, because its theoretical benefits are almost a practical impossibility in real life, and because, on balance, the supposed benefits are outweighed by costs to contracting as a reliable form of social cooperation and by costs that are not compensated for and are thus borne by the non-breaching party. 

I invite readers to explore this link between a decision by a student to refuse to perform an agreed-upon and voluntarily assumed obligation and an efficient breach.  I suspect that, as faculty members, we may have encountered a situation similar to Rick’s and can appreciate how much it would, for lack of a better term, “suck.”  This more accessible and relatable situation may provide us with a helpful lens through which to view and assess the value of an efficient breach in society and the propriety of its open promotion by law faculty.

I hope readers will excuse me for any typos or errors in this post, which was written rather quickly as I am attending the Law and Society conference.  I hope to meet readers attending the conference at the AALS Law & the Social Sciences Section Happy Hour (tonight at Tropics from 4-6pm) and/or the Faculty Lounge-Prawfs Happy Hour (tonight, at the Tapa Bar starting at 9pm and ending as soon as Hilton wisely places us on "double secret probation").

Posted by Dawinder "Dave" S. Sidhu on June 6, 2012 at 07:26 PM in Legal Theory | Permalink | Comments (5) | TrackBack

Thursday, May 31, 2012

A Coasean Look at Commercial Skipping...

Readers may have seen that DISH has sued the networks for declaratory relief (and was promptly cross-sued) over some new digital video recorder (DVR) functionality. The full set of issues is complex, so I want to focus on a single issue: commercial skipping. The new DVR automatically removes commercials when playing back some recorded programs. Another company tried this many years ago, but was brow-beaten into submission by content owners. Not so for DISH. In this post, I will try to take a look at the dispute from a fresh angle.

Many think that commercial skipping implicates derivative work rights (that is, transformation of a copyrighted work). I don't think so. The content is created separately from the commercials, and different commercials are broadcast in different parts of the country. The whole package is probably a compilation of several works, but that compilation is unlikely to be registered with the Copyright Office as a single work. Also, copying the work of only one author in the compilation is just copying of the subset, not creating a derivative work of the whole.

So, if it is not a derivative work, what rights are at stake? I believe that it is the right to copy in the first place in a stored DVR file. This activity is so ubiquitous that we might not think of it as copying, but it is. The Copyright Act says that the content author has the right to decide whether you store a copy on your disk drive, absent some exception.

And there is an exception - namely fair use. In the famous Sony v. Universal Studios case, the Court held that "time shifting" is a fair use by viewers, and thus sellers of the VCR were not helping users infringe. Had the Court held otherwise, the VCR would have been enjoined as an agent of infringement, just like Grokster was.

I realize that this result is hard to imagine, but Sony was 5-4, and the initial vote had been in favor of finding infringement. Folks can debate whether Sony intended to include commercial skipping or not. At the time, remote controls were rare, so skipping a recorded commercial meant getting off the couch. It wasn't much of an issue. Even now, advertisers tolerate the fact that people usually fast forward through commercials, and viewers have always left the TV to go to the bathroom or kitchen (hopefully not at the same time!). 

But commercial skipping is potentially different, because there is zero chance that someone will stop to watch a catchy commercial or see the name of a movie in the black bar above the trailer as it zooms by. I don't intend to resolve that debate here. A primary reason I am skipping the debate is that fair use tends to be a circular enterprise. Whether a use is fair depends on whether it reduces the market possibilities for the owner. The problem is, the owner only has market possibilities if we say they do. For some things, we may not want them to have a market because we want to preserve free use. Thus, we allow copying via a DVR and VCR, even if content owners say they would like to charge for that right.

Knowing when we should allow the content owner to exploit the market and when we should allow users to take away a market in the name of fair use is the hard part. For this reason, I want to look at the issue through the lens of the Coase Theorem. Coase's idea, at its simplest, is that if parties can bargain (which I'll discuss below), then it does not matter with whom we vest the initial rights. The parties will eventually get to the outcome that makes each person best off given the options, and the only difference is who pays.

One example is smoking in the dorm room. Let's say that one person smokes and the other does not. Regardless of which roommate you give the right to, you will get the same amount of smoking in the room. The only difference will be who pays. If the smoker has the right to smoke, then the non-smoker will either pay the smoker to stop or will leave during smoking (or will negotiate a schedule). If you give the non-smoker the right to a smoke-free room, then the smoker will pay to smoke in the room, will smoke elsewhere, or the parties will negotiate a schedule. Assuming non-strategic bargaining (no hold-ups) and adequate resources, the same result will ensue because the parties will get to the level where the combination of their activities and their money makes them happiest. The key is to separate the analysis from normative views about smoking to determine who pays.
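A toy calculation may help fix the intuition. The valuations below are hypothetical, and the sketch assumes zero transaction costs and no strategic bargaining - exactly the conditions the rest of the post goes on to relax.

    def coase_outcome(smoker_value, nonsmoker_value):
        """The activity level follows the higher valuation, whoever holds the right."""
        return "smoking" if smoker_value > nonsmoker_value else "no smoking"

    # Say smoking in the room is worth 40 to the smoker and a smoke-free
    # room is worth 60 to the non-smoker (made-up numbers).
    print(coase_outcome(40, 60))   # "no smoking" under either entitlement

    # If the smoker holds the right, the non-smoker buys it out at a price
    # between 40 and 60; if the non-smoker holds the right, the smoker simply
    # abstains, since paying 60 or more is not worth 40 to him.
    # Same outcome either way; only the identity of the payer changes.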

Now, let's apply this to the DVR context. If we give the right to skip commercials to the user, then several things might happen. Advertisers will advertise less or pay less for advertising slots. Indeed, I suspect that one reason why ads for the Super Bowl are so expensive, even in a down economy, is not only that there are a lot of viewers, but that those viewers are watching live and are not able to skip commercials. In response, broadcasters will create less content, create cheaper content, or figure out other ways to make money (e.g., charging more for video on demand or DVDs). Refusing to broadcast unless users pay a fee is unlikely based on current laws. In short, if users want more and better content, they will have to go elsewhere to get it - paying for more channels on cable or satellite, paying for video on demand, etc. Or, they will just have less to watch.

If we give the right to stop commercial skipping to the broadcaster, then we would expect broadcasters to broadcast the same mix they have in the past. Viewers will pay for the right to skip commercials. This can be done as it is now, through video on demand services like Netflix, but that's not the only model. Many broadcasters allow for downloading via the satellite or cable provider, which allows the content owner to disable fast forwarding. Fewer commercials, but you have to watch them. Or, in the future, users could pay a higher fee to the broadcaster for the right to skip commercials, and this fee would be passed on to content owners.

These two scenarios illustrate a key limit to the Coase Theorem. To get to the single efficient solution, transactions costs must be low. This means that the parties must be able to bargain cheaply, and there must be no costs or benefits that are being left out of the transaction (what we call externalities). Transactions costs are why we have to be careful about allocating pollution rights. The factory could pay a neighborhood for the right to pollute, but there are costs imposed on those not party to the transaction. Similarly, a neighborhood could pay a factory not to pollute, but difficulty coordinating many people is a transaction cost that keeps such deals from happening.

I think that transactions costs are high in one direction in the commercial skipping scenario, but not as much in the other. If the network has the right to stop skipping, there are low cost ways that content aggregators (satellite and cable) can facilitate user rights to commercial skip - through video on demand, surcharges, and whatnot. This apparatus is already largely in place, and there is at least some competition among content owners (some get DVDs out soon, some don't for example).

If, on the other hand, we vest the skipping right with users, then the ability of content owners to pay users (essentially sharing their advertising revenues) is lower if they want to enter into such a transaction. Such a payment could be achieved, though, through reduced user fees for those who disable commercial skipping. Even there, though, dividing the payments among all content owners might be difficult.

Normatively, this feels a bit yucky. It seems wrong that consumers should pay more to content providers for the right to automate something they already have the right to do - skip commercials. However, we have to separate the normative from the transactional analysis - for this mind experiment, at least.

Commercials are a key part of how shows get made, and good shows really do go away if there aren't enough eyeballs on the commercials. Thus, we want there to be an efficient transaction that allows for metered advertising and content in a way that both users and networks get the benefit of whatever bargain they are willing to make.

There are a couple of other relevant factors that imply to me that the most efficient allocation of this right is with the network:

1. DISH only allows skipping after 1AM on the day the show is recorded. This no doubt militates in favor of fair use, because most people watch shows on the day they are recorded (or so I've read, I could be wrong). However, it also shows that the time at which the function kicks in can be moved, and thus negotiated and even differentiated among customers that pay different amounts. Some might want free viewing with no skipping, some might pay a large premium for immediate skipping. If we give the user the right to skip whenever, it is unlikely that broadcasters can pay users not to skip, and this means they are stuck in a world with maximum skipping - which kills negotiation to an efficient middle.

2. The skipping is only available for broadcast TV primetime recordings - not for recordings on "cable" channels, where providers must pay for content.  Thus, there appears to already be a payment structure in practice - DISH is allowing for skipping on some networks and not others, which implies that the structure for efficient payments is already in place. If, for example, DISH skipped commercials on TNT, then TNT would charge DISH more to carry content. The networks may not have that option due to "must carry" rules. I suspect this is precisely why DISH skips for broadcasters - because it can without paying.  In order to allow for bargaining, however, given that networks can't charge more for DISH to carry their content, the answer is to vest the right with the networks and let the market take over.

These are my gut thoughts from an efficiency standpoint. Others may think of ways to allow for bargaining to happen by vesting rights with users. As a user, I would be happy to hear such ideas.

This is my last post for the month - time flies! Thanks to Prawfs again for having me, and I look forward to guest blogging in the future. As a reminder, I regularly blog at Madisonian.

Posted by Michael Risch on May 31, 2012 at 08:05 PM in Information and Technology, Intellectual Property, Legal Theory, Television, Web/Tech | Permalink | Comments (7) | TrackBack