Wednesday, April 09, 2014
A Typology of Authorship in Highly Collaborative Works
To paraphrase Anna Karenina for the kajillionth time, all copyright scholars think Garcia was wrongly decided,* but every copyright scholar thinks so in their own way. When the Ninth Circuit held a couple months back that an actress has a “copyright interest” in the film in which she briefly performed, the (understandably) apoplectic reaction was as entertaining as the decision was mysterious. I’m on board with the general reception that the Garcia opinion was the copyright equivalent of sitting on a whoopee cushion, so instead of beating that long-deceased equine, I will instead explore a related issue raised by the case.
Copyright’s notion of authorship works great when we’re dealing with the classic, solo Romantic author: Some genius artist sits alone in a room painting a masterpiece all of her own invention, and—boom—thanks to section 201(a), the copyright in that work vests in her, making her the author of the work for the duration of the copyright, and the owner of the work until she transfers her copyright.
But a much harder question arises when we complicate the story of authorship to include multiple collaborators on a project. The solo writer or painter is clearly the author of their work, but when we imagine a fashion photograph involving a photographer, model, makeup people, and numerous technicians, the notion of authorship becomes far murkier. This is, then, one of the major issues raised by Garcia: how do we allocate authorship when many people make expressive contributions to a final creative product?
So this post seeks neither to praise Garcia (obv.) nor to bury it (that’s been done amply and adequately already). Instead, below the fold, I want to develop a typology of the different kinds of creative contributions people make to works, and how these different kinds of contributions might give rise to what we call copyright authorship. Importantly, this is not a normative claim that all of the contributors in these classes are or should be entitled to joint or freestanding copyrights; it is merely an attempt to organize and make sense of the different kinds of contributions to works that could plausibly be understood to be the result of creative authorship.
First is what I will call visionaries. This is a grandiose term because I can’t at present think of a less pretentious one, but I mean it simply to refer to the person who is in charge of the overall vision of a highly collaborative work of authorship—the director of a film, the producer of a sound recording, and perhaps the photographer of a sophisticated, artistic photograph (hence there will be no rehashing of the Ellen’s-selfie debacle here).
The visionary comes closest to the person who fits the Romantic notion of authorship of a work. The director of a film, for example, typically has the initial vision of and the most creative control over the content of the entire film. Hence courts have tended to conclude that (presuming we are to regard works as unitary rather than comprised of many different subworks by many different artists, which Garcia surprisingly called into question) the person exercising this visionary function is the presumptive author of a highly collaborative work. E.g., Burrow-Giles v. Sarony (U.S. 1884) (holding that Napoleon Sarony was the author of a famous photograph of Oscar Wilde because Sarony determined the setting, lighting, subject placement, and other features of the work).
Second, consider performers: actors in films, models in photographs, singers and session musicians in sound recordings. It was the Garcia court’s willingness to consider performers as authors of works that was so jarring to settled understandings of copyright (and also to the Copyright Office, which had rejected Garcia’s application for a copyright in the same performance that the Ninth Circuit held was protected).
I share the intuition that something seems very wrong about extending to Garcia a copyright in her performance. But what complicates this is that I don’t have that same intuition in the context of sound recordings.** It does not seem obviously wrong to me that singers and musicians should be the owners of the sound recordings they create at a studio. Their performances vivify the otherwise highly abstracted musical works on which they are based, and comprise the substance of the recorded sounds themselves. The seeming plausibility that musical performers might have a copyright in their sound recordings makes it a little harder to reject out of hand the notion that dramatic performers can never have a copyright interest in the audiovisual works to which they contribute.
The third category is the technician. This is the person who actually causes sounds or images to be fixed in the tangible medium of expression that is required for federal copyrightability—the cinematographer in film, the sound engineer in a recording studio, or the person taking a photograph (modernly, this is usually the visionary as well, but this was not always the case—Napoleon Sarony, for example, never touched a camera in his life).
A colleague once pointed out to me a formalist argument for why such technicians should have authorial status. The work in photographic works, audiovisual works, and sound recordings is pretty much indistinguishable from the fixation. So for a sound recording, the work is the actual sounds fixed in the studio’s digital audio tape. By this logic, then, the person who is actually creating the work is the person who is actually fixing the sounds (or in the case of other works, fixing the images).
This argument works well when the technician also makes crucial creative decisions about the work. The best example is the photographer. Eddie Adams and Manny Garcia (no relation to the “Innocence of Muslims” actress—as far as I know, anyway) are both the visionaries who imagine their photos (to the extent possible with photojournalism, which typically requires spontaneous creation) and the technicians who execute the fixation of their creative vision. Sound recordings are a harder case. Some sound engineers make creative contributions, while others act at the direction and discretion of producers. And the case where this makes the least sense is the cinematographer, who exercises great technical skill to operate the camera but who typically acts in the service of realizing the director’s creative vision (again, there are exceptions—Spielberg, for example, takes a relatively greater technical role in his films than most Hollywood directors).
The fourth and final category is the writer. This category will be populated only where the highly collaborative work is derivative of some other work—a screenplay, a musical work—and so would exclude works like a painstakingly posed photograph. And it is beyond obvious that in order to create the film or sound recording at all, the creator of the derivative must get a license, either through bargaining (in the case of a film) or through section 115’s compulsory license provisions (in the case of a sound recording). But the fact of acquiring a license does not diminish the central role that the writer’s contribution plays in the creative impact of films or sound recordings. It just means that here, unlike with the other categories, the copyright ownership issues are reasonably well demarcated and understood.
These categories—not meant to be exhaustive, but just illustrative—comprise four different ways that one might contribute to a highly collaborative work in a creative way that approaches copyright’s notion of authorship. One could contribute an overall guiding vision, provide an original and electric performance, supply the work’s underlying narrative structure, or contribute technical expertise in a thoughtful way that adds to the aesthetic success of the final creative product.
The problem with acknowledging this multiplicity of forms of creative contribution for the purposes of law, though, is that copyright is ill-suited to manage the descriptive reality of authorship in highly collaborative works.*** This may suggest that Garcia is flawed pragmatically more than doctrinally. There may be some plausibility to the idea that a performance could be copyrighted, but the practical implications of going down that rabbit hole are just too messy to contemplate. So while the Romantic notion of locating authorship of all works in a single individual—visionary, technician, or whoever—may not reflect the descriptive reality of collaborative creation, it does square with the need to have a manageable notion of authorship (and, relatedly, ownership). Hence this may be one rare instance in which Romanticism and pragmatism are on the same page.
*In all fairness, there were apparently a handful of Garcia supporters (other than members of industry groups benefited by the decision’s outcome).
**Based solely on casual empiricism, I think others share this intuition. I always ask my class (before we get into what law actually says about these things) who they think the author of a movie should be, and most people answer "director." But when I ask them who the author of a sound recording should be, the most common instinctive response is "the vocalist." No love for the producer, I guess.
***This may be a problem endemic to all property, actually. Real property law does ok with the idea of limited co-ownership, but once the owners of a given plot become too numerous, management problems and devaluation kick in. This is a particular problem for familial or tribal holdings over time.
Wednesday, April 02, 2014
A salience-bias defense of marginal law reforms
Hey y’all. It’s always good to be back guesting at Prawfs. I’m looking forward to sharing thoughts about property—physical, intellectual, and otherwise—over the course of the next month. I’ll kick it off with a news item that caught my eye today: The UK just announced a forthcoming reform to its copyright law. Among other things, British citizens and subjects are now free to—wait for it—make personal copies of legally acquired copies of digital media (e.g., eBooks, CDs) for format-shifting or backup purposes.
This aspect of the British copyright reform strikes me as a perfectly good and sensible idea (as did its other features, like broadening the UK notion of fair use), but response to it sounded more in the register of “meh” or “so what?” than “hallelujah.” After all, this part of the revision legalized conduct that most people assumed was already legal (and may indeed be legal in other countries with broader notions of users’ rights), was certainly widely underenforced (because it doesn’t make a lot of sense to spend resources breaking into people’s homes to see if they’ve made a nefarious illicit backup CD copy of, say, Fartbarf’s “Dirty Power”*), and was, in any event, largely a moot point thanks to the increasing marginality of the relevant technologies (because, as my students helpfully point out to me when I refer to this medium for experiencing music, who uses CDs anymore, Grandpa?).
And yet I think there is something interesting about the UK’s move, not so much for the substantive impact on copyright law or user practices, but about a strategy for how and why we may want to reform laws generally. I explore this notion below the fold.
The major justification for these reforms (which grew out of the very thoughtful Hargreaves Report, which, for what it’s worth, could be a model for US copyright reform, in the vanishingly unlikely event that any congressfolks are reading this) is simply that it makes sense to update law to reflect actual practices. By one estimate, 85% of people in the UK assumed that making personal use copies was already legal, and the practice is already widespread. On this explanation, the personal-use element of the UK's copyright reform is well-taken but inconsequential, like fixing a spelling error that didn’t really confuse anyone about the meaning of a sentence.
But there’s another, broader reason why this reform might be good even—perhaps especially—for the kind of copyright industries that were likely to resist it. This kind of conspicuous gap between social norms and practices on one hand and regulation on the other can be an embarrassment to the law that exacts outsized costs in terms of credibility. The reason that law/norm disjunctures can be especially problematic is that non-specialists may generalize about the entire law based on one conspicuously silly or outdated provision. This is a species of salience bias or the availability heuristic: observing one particularly notable feature of a place or, say, a body of law can falsely lead us to believe we have a true sense of its overall character.
The UK group Consumer Focus made just such a leap in this setting, pointing out that the illegality of innocuous conduct like making personal backup copies had caused the credibility of all “UK copyright law to fall through the floor.” This move—deriving the character of an entire body of law from its worst provisions—is not limited to copyright. A roughly analogous phenomenon is the tendency of laypeople to assume, when one (purportedly) guilty man goes free, that the criminal law system is generally very lenient—despite the overwhelming rates of conviction for accused criminals.
This is sort of like synecdoche in law—using a part, and especially a flawed or discordant part, to represent the whole. And what it means for law reform, and in particular the reform of statutes like the Copyright Act, is that law/norm disjunctures may be more problematic than is usually appreciated. We generally tend to think that these kinds of disparities between law on the books and actual practices are bad for the people they unwittingly regulate: out-of-date laws can impose sanctions for conduct that has become widespread, saddling unsuspecting people with outsized penalties for trivial violations. But the UK example reminds us that the law/norm gap may be a major problem for law itself, especially in light of the tendency of lay observers to infer from a single out-of-step provision that an entire regulatory structure is flawed.
*Yes I used the name of this band in this illustration for amusement (mainly my own). But also yes, there actually is a band called Fartbarf, and perhaps more surprisingly, they actually have appeal once the juvenile humor value of their name fades, assuming that you’re into 80s-inflected synth-pop performed by a bunch of guys in gorilla masks. And hey, isn’t everyone?
Sunday, February 09, 2014
Misunderstanding of fair use? Shrewd marketing move? Or both?
Friday, June 14, 2013
The Fine Details of Molecular Biology
So the most anticipated of yesterday's decisions is obviously Myriad Genetics, the gene-patenting case. (The less said about my embarrassingly wrong prediction in Tarrant Regional Water District, the better!) The Court's decision seems to be pretty much what everybody expected after oral argument. But after straining to follow all of the majority opinion, I enjoyed Justice Scalia's brief concurrence:
I join the judgment of the Court, and all of its opinion except Part I–A and some portions of the rest of the opinion going into fine details of molecular biology. I am unable to affirm those details on my own knowledge or even my own belief. It suffices for me to affirm, having studied the opinions below and the expert briefs presented here, that the portion of DNA isolated from its natural state sought to be patented is identical to that portion of the DNA in its natural state; and that complementary DNA (cDNA) is a synthetic creation not normally present in nature.
Some people have called this bizarre or mocked it as anti-evolution. Others defend it as intellectual humility. I have to say, my sympathies are with Justice Scalia. Whenever I read a long, complicated fact section in an opinion I cringe. (Which of these facts are really relevant? And if really relevant, how confident are we that they are correct?)
Indeed, Justice Scalia puts me in mind of the work of Allison Orr Larsen, who's written several interesting articles that are skeptical of the Supreme Court's treatment of questions of legislative fact. (I think that the molecular biology in Myriad would qualify as a legislative fact rather than an adjudicative fact, but I am not 100% sure I always understand the distinction.) Given that it is not clear that this is something the Court does well, it may be better for it to do less of it.
I also appreciate Scalia's candor, and wonder if it reflects something about the Court's attitude in its relatively large recent patent docket. Perhaps it is not a coincidence that the concurrence appears in a case where the Court seemed particularly eager to seize a middle position proposed by the government. The Court now has several cases (Mayo, Bilski) where it seeks to intervene in the Federal Circuit's patent jurisprudence without necessarily having a super clear idea what it wants to replace it with. Of course, the lack of a fully developed legal theory and the lack of a fully developed understanding of the "fine details" of the facts need not be connected -- but maybe they are. Maybe part of the reason it is hard for the Court to do patent law is because it is hard to understand the underlying science in any of the disputes it actually wants to resolve.
Tuesday, May 14, 2013
Is a broadcast to everyone private under the Copyright Act?
For the final post in my extended visit here, I want to focus on another example in my series of discussions about formalism vs. policy in copyright. Today’s case is WNET v. Aereo, which allowed continued operation of a creative television streaming service. As I’ll discuss below, the decision pretty clearly follows the statutory scheme, much to the relief of those who believe content is overprotected and that new digital distribution methods should be allowed. This time, the policy opposition is best demonstrated by Judge Chin’s dissent in the case.
In the end, though, the case shows what all of the cases I’ve discussed show: copyright was not really developed with digital content storage and streaming in mind. While some rules fit nicely, others seem like creaky old constructs that can barely hold the weight of the future. The result is a set of highly formalistic rules that lead to services purposely designed inefficiently to either follow or avoid the letter of the law. This problem is not going to get any better with time, though my own guess, and hope, is that the pressure will cause providers to create better solutions that leave everyone better off.
Here are the basic facts. Aereo runs a system with thousands of dime-sized antennas. Each of these antennas can capture over-the-air broadcast television, but not cable or satellite signals. OTA signals are “free” – viewers don’t have to pay for access to them the way they do for cable.
Aereo then runs what is essentially a remote digital video recorder for each subscriber. That is, when a user wants to watch or record a program, the Aereo system tunes one of the antennas to the appropriate channel at the appropriate time, saves the resulting TV signal (a show) to disk, and then either streams it to the user over the internet or stores it for the user for later viewing.
Aereo does this for every single subscriber; if 10,000 people want to record a show, then 10,000 antennas store 10,000 copies of the program. Why, you ask, would it do something so ridiculously costly and redundant? Because it’s the law, of course. A prior case, Cartoon Network, stands for this proposition. Here’s the logic: a) a user can use a DVR to store recordings at home (relatively well-settled law since the Supreme Court’s 1984 decision not to hold VCR makers liable); b) a cable operator can store those DVRs at the cable site, because where a customer’s DVR is located does not change the nature of its use; but c) the cable operator must preserve the DVR model for each customer, meaning that the customer chooses what to record and that a separate copy must be maintained for each customer.
The question in Aereo, then, is whether this basic framework changes if the “cable provider” is now an “antenna farm” provider. There are some differences. The cable subscriber is paying a fee that allows for the rebroadcast of content from the cable operator to the subscriber. Without such a fee/license, such rebroadcast would be infringement. Aereo has no such license, and thus its service could be considered a rebroadcast, which is a no-no. Just ask the folks who tried to rebroadcast NFL games into Canada.
The Aereo Court agreed with the rationale in Cartoon Network, however; the license was not relevant. Instead, the individualized copies were simply not “public” performances. They were private: selected by the user, recorded in the user’s disk quota, and shown in that form only to the user. As the court noted, it was as if the user had a private antenna, DVR, and Slingbox located at Aereo’s facility, and the fact that Aereo owned it and charged for the service was irrelevant.
Judge Chin dissented from the opinion and took the opposite view, best captured in the dissent’s own words:
Aereo's "technology platform" is, however, a sham. The system employs thousands of individual dime-sized antennas, but there is no technologically sound reason to use a multitude of tiny individual antennas rather than one central antenna; indeed, the system is a Rube Goldberg-like contrivance, over-engineered in an attempt to avoid the reach of the Copyright Act and to take advantage of a perceived loophole in the law.
Judge Chin’s dissent goes on to argue that the formalistic reading of the statute fails, and that we should see Aereo’s acts for what they are: a transmission of content to members of the public, which thus constitutes public performance.
This disagreement is a great ending illustration of the cases I’ve blogged about this month. The tension between formalistic statutory reading and policy based glosses is palpable. In my last post, I made clear that I favor following the statute unless convinced otherwise.
But that doesn’t answer the fundamental question, which is: what do we make of all this? Sure, the case was rightly decided under the statute. Perhaps this might even lead to the formation of an efficient, licensed broadcast network streaming service that costs users less than Aereo because it is less resource intensive.
I’m not sure the Aereo ruling is the right one in the long run. One of the thorny issues with broadcast television is range. Broadcasters in different markets are not supposed to overlap. Ordinarily, this is no issue because radio waves only travel so far. When a provider sends the broadcast by other means, however, overlap is possible, and the provider keeps the overlap from happening. DirecTV, for example, only allows a broadcast package based on location.
Aereo is not so limited, however. Presumably, one can record broadcast shows from every market. Why should this matter? Imagine the Aereo “Sunday Ticket” package, whereby Aereo records local NFL games from every market and allows subscribers to stream them. Presumably this is completely legal, but something seems off about it. While Aereo’s operation seems fine for a single market, this use is a bit thornier. I’m reasonably certain that Congress will close that loophole if any service actually tries it.
Thus, dealing with what should be clearly legal under the statute is thornier than it appears at first. While I believe that more and cheaper streaming options would be a good thing, I wonder whether the disruption to local broadcast markets is the right way to get there. One thing is clear: copyright law is ill-equipped to answer the question. Thanks again to Prawfs for having me, and I'll see you next time around (and in the meantime at madisonian.net).
Thursday, May 09, 2013
Teaching and Testing Law Students
I'm glad to be back for another rotation here at PrawfsBlawg. Like many of you, I've just finished up spring semester, and I'm grading exams while I think about new projects, line up my research and writing for the summer, and think about what I'd like to do differently the next time I teach. In this post, and some future posts, I'll share some things I did differently this year, and my thoughts on whether or not they were a success. I hope you'll share your ideas in the comments: I'm always on the lookout for better ways to teach my students.
This spring, in both Contracts and Copyright, I added a graded, mid-semester memo to the course requirements. In case you don't know, the typical law school class bases the entire grade on one exam at the end of the semester, so this is a departure from the norm, although I'm not the first person to try it. In fact, I shamelessly lifted the idea (and my implementation of it) from Michael Madison at Pittsburgh. In copyright, I put together my own closed universe of materials and wrote a problem for the students to analyze. I asked them to pitch the memo at two different levels: give the client what she needs to understand what you think she should do and why she should do it, and provide the partner with a grounding in the case law and a suggestion for whether and how to litigate the case.
I tried something similar for Contracts, although I gave the students one "shadow" graded memo as a warm-up. I graded it for them, so they could see how I approached the memo, and what I was looking for. We followed it up with a graded memo a few weeks later. For both memos, I took my material from Doug Leslie's CaseFile Method assignments for contract law. I like the CaseFile method problem sets for this purpose because they provide a narrow issue, with a closed universe of reading materials.
In both cases, my hope was that the memo would help me assess how the students comprehend and synthesize the law, without worrying that they failed the assignment because they didn't find something they should have. I'm not downplaying the importance of research skills for the practicing attorney, but I feel that research is a skill better handled in a course structured toward developing it.
The students in Contracts really rose to the challenge. The graded memo dealt with UCC 2-207 and the "Battle of the Forms." It's tricky stuff, and I feel confident that they mastered the material better than they would have after a day in class, although there were plenty of missteps in the memos themselves.
The memos written for the Copyright class collectively underwhelmed me. It's possible the problem I constructed, which asked roughly the same question that was posed in the recent litigation over custom Batmobiles, was somehow off, but they didn't come at the problem with as much energy and care as the Contracts students. Perhaps it's a difference between 1Ls and more experienced students. It's also possible that they needed the warm-up like the one I provided my Contracts students.
Despite my concerns, I feel like the memo assignment in both classes provided a unique opportunity for students to dig into a substantive area of the law and get feedback from a scholar who has developed some expertise in that area. I'm certainly not the best "legal writing" instructor that these students could have, but my perception is that the end result is nevertheless worth the effort, both for me and for the students.
Wednesday, April 24, 2013
On Policy and Plain Meaning in Copyright Law
As noted in my last post, there have been several important copyright decisions in the last couple months. I want to focus on two of them here: Viacom v. YouTube and UMG v. Escape Media. Both relate to the DMCA safe harbors for online providers who receive copyrighted material from their users - Section 512 of the Copyright Act. Their opposing outcomes illustrate the key point I want to make: separating interpretation from policy is hard, and I tend to favor following the statute rather than rewriting it when I don't like the policy outcome. This is not an earthshattering observation - Solum and Chiang make a similar argument in their article on patent claim interpretation. Nevertheless, I think it bears some discussion with respect to the safe harbors.
For the uninitiated, 17 U.S.C. 512 states that "service providers" shall not be liable for "infringement of copyright" so long as they meet some hurdles. A primary safe harbor is 512(c), which exempts providers from liability for "storage at the direction of a user of material that resides on a system" of the service provider.
To qualify, the provider must not know that the material is infringing, must not be aware of facts and circumstances from which infringing activity is apparent, and must remove the material if it obtains this knowledge or becomes aware of the facts or circumstances. Further, if the copyright owner sends notice to the provider, the provider loses protection if it does not remove the material. Finally, the provider might be liable if it has the right and ability to control the user activity, and obtains a direct financial benefit from it.
But even if the provider fails to meet the safe harbor, it might still evade liability. The copyright owner must still prove contributory infringement, and the defendant might have defenses, such as fair use. Of course, all of that litigation is far more costly than a simple safe harbor, so there is a lot of positioning by parties about what does and does not constitute safe activity.
This brings us to our two cases:
Viacom v. YouTube
This is an old case, from back when YouTube was starting. The district court recently issued a ruling once again finding that YouTube is protected by the 512(c) safe harbor. A prior appellate ruling remanded for district court determination of whether Viacom had any evidence that YouTube knew or had reason to know that infringing clips had been posted on the site. Viacom admitted that it had no such evidence, but instead argued that YouTube was "willfully blind" to the fact of such infringement, because its emails talked about leaving other infringing clips on the site - just not any that Viacom was alleging. The court rejected this argument, saying that it was not enough to show willful blindness as to Viacom's particular clips.
The ruling is a sensible, straightforward reading of 512 that favors the service provider.
UMG v. Escape Media
We now turn to UMG v. Escape Media. In a shocking ruling yesterday, the appellate division of the NY Supreme Court (yeah, they kind of name things backward there) held that sound recordings made prior to 1972 were not part of the Section 512 safe harbors. Prior to 1972, such recordings were not protected by federal copyright. Thus, if one copies them, any liability falls under state statute or common law, often referred to as "common law copyright." As a result, service providers could be sued under any applicable state law that protected such sound recordings.
Escape Media argued that immunity for "infringement of copyright" meant common law copyright as well, thus preempting any state law liability if the safe harbors were met.
The court disagreed, ruling that a) "copyright" meant copyright under the act, and b) reading the statute to provide safe harbors for common law copyright would negate Section 301(c), which states that "any rights or remedies under the common law or statutes of any State shall not be annulled or limited by this title until February 15, 2067." The court reasoned that the safe harbor is a limitation of the common law, and thus not allowed if not explicit.
If this ruling stands, then the entire notice and takedown scheme that everyone relies on will go away for pre-1972 sound recordings, and providers may potentially be liable under 50 different state laws. Of course, there are still potential defenses under the common law, but providing these services just got a whole lot more expensive and risky. So, while the sky has not fallen, as a friend aptly commented about this case yesterday, it is definitely in a rapidly decaying orbit.
Policy and Plain Meaning
This leads to the key point I want to make here, about how we read the copyright act and discuss it. Let's start with YouTube. The court faithfully applied the straightforward language of the safe harbors, and let YouTube off the hook. The statute is clear that there is no duty to monitor, and YouTube chose not to monitor, aggressively so.
And, yet, I can't help but think that YouTube did something wrong. Just reading the emails from that time period shows that the executives were playing fast and loose with copyright, leaving material up in order to get viewers. (By the way, maybe they had fair use arguments, but those don't really enter the mix.) Indeed, they had a study done that showed a large amount of infringement on the site. I wonder whether anyone at YouTube asked to see the underlying data to see what was infringing so it could be taken down. I doubt it.
I would bet that 95% of my IP academic colleagues would say, so what? YouTube is a good thing, as are online services for user generated content. Thus, we read the statute strictly, and provide the safe harbor.
This brings us to UMG v. Escape Media. Here, there was a colossal screw-up. It is quite likely that no one in Congress thought about pre-1972 sound recordings. As such, the statute was written with the copyright act in mind, and the only reasonable reading of Section 512 is that it applies to "infringement of copyright" under the Act. I think the plain meaning of the section leads to this conclusion. First, Section 512 refers to many defined terms, such as "copyright owner," which is defined as an owner of one of the exclusive rights under the copyright act. Second, the copyright act never uses "copyright" to refer to pre-1972 sound recordings that are protected by common law copyright. Third, expanding "copyright" elsewhere in the act to include "common law copyright" would be a disaster. Fourth, state statutes and common law did not always refer to such protection as "common law copyright," sometimes providing protection under unfair competition laws instead. Should those be part of the safe harbor? How would we know, if the only word used is "copyright"?
That said, I think the court's reliance on 301(c) is misplaced; I don't think that a reading of 512 that safe harbored pre-1972 recordings would limit state law. I just don't think that's what the statute says, unfortunately.
Just to be clear, this ruling is a bad thing, a disaster even. I am not convinced that it will increase any liability, but it will surely increase costs and uncertainty. If I had to write the statute differently, I would. I'm sure others would as well.
But the question of the day is whether policy should trump plain meaning when we apply a statute. The ReDigi case and the UMG case both involve statutes whose drafters did not foresee the downstream policy implications. Perhaps many might say yes, we should read the statute differently.
I'm pretty sure I disagree. For whatever reason - maybe the computer programmer in me - I have always favored reading the statute as it is and dealing with the bugs through fixes or workarounds. As I've argued with patentable subject matter, the law becomes a mess if you attempt to do otherwise. ReDigi and UMG are examples of bugs. We need to fix or work around them. It irritates me to no end that Congress won't do so, but I have a hard time saying that the statutes should somehow mean something different than they say simply because it would be a better policy if they did. Perhaps that's why I prefer standards to rules - the rules are good, until they aren't.
This is not to say I'm inflexible or unpragmatic. I'm happy to tweak a standard to meet policy needs. I've blogged before about how I think courts have misinterpreted the plain meaning of the CFAA, but I am nevertheless glad that they have done so to rein it in. I'm also often persuaded that my reading of a statute is wrong (or even crazy) even when I initially thought it was clear. I'd be happy for someone to find some argument that fixes the UMG case in a principled way. I know some of my colleagues look to the common law, for example, to solve the ReDigi problem. Maybe there is a common law solution to UMG. But until then, for me at least, plain meaning trumps policy.
Tuesday, April 23, 2013
Impact of the “Lander Brief” in the Myriad (Gene Patent) Case – and an Answer to Justice Alito’s Question
The Supreme Court heard oral arguments on April 15 in Association of Molecular Pathology et al. v. Myriad, concerning whether human genes are patent-eligible subject matter. The case focused on Myriad’s patents on two genes, BRCA1 and BRCA2, involved in early-onset breast cancer.
Surprisingly, many of the Court’s questions for Myriad’s counsel focused on what Justice Breyer dubbed the “Lander Brief” – an amicus brief filed on behalf of neither party by one of the country’s leading scientists, Dr. Eric Lander. (Lander was one of the leaders of the Human Genome Project and co-chairs the President's Council of Advisors on Science and Technology.) [Full Disclosure: I am one of the authors of this brief.] Justices Breyer, Ginsburg, and Alito referred to the brief by name, and several other Justices were clearly influenced by the information in the brief.
I believe that the “Lander brief” was a hot topic of conversation because the Justices realized that it was central to applying the Court’s product-of-nature doctrine to DNA. Importantly, the brief demolished the scientific foundation of the Federal Circuit decision on appeal. The Federal Circuit panel held that human chromosomes are not patent-eligible because they are products of nature, but a majority found that “isolated DNA” fragments of human chromosomes (such as pieces of the breast cancer genes) are patent-eligible. The Federal Circuit’s distinction rested on its assumption that (unlike whole chromosomes) isolated DNA fragments do not themselves occur in nature, but instead only exist by virtue of the hand of man.
The Federal Circuit cited no scientific support for its crucial assumption – neither in the record below, nor in any scientific literature.
Embarrassingly, the Federal Circuit’s assumption turned out to be flat-out wrong. The Lander brief summarized 30 years of scientific literature showing that natural processes in the human body routinely cleave chromosomal DNA into isolated fragments. Isolated DNA fragments turn out to be abundant outside of cells – including in cell-free blood, urine, and stool. They are so common that they can be used for genetic diagnostics of inherited diseases and cancers. In fact, they are so prevalent that several scientific groups have shown that it is possible to determine the entire genome sequence of a fetus by analyzing the isolated DNA fragments found in a teaspoon's worth of its mother’s blood.
Justice Breyer relentlessly pushed Myriad’s counsel to declare whether he agreed or disagreed with the Lander Brief. When the counsel finally declared that he disagreed, Justice Breyer demanded:
JUSTICE BREYER: Okay. Very well. If you are saying it is wrong, as a matter of science, since neither of us are scientists, I would like you to tell me what I should read that will, from a scientist, tell me that it's wrong.
The only reply that Myriad’s counsel could muster was to point to a declaration that had been filed (by Dr. Mark Kay) in the District Court case in 2009. (In fact, Dr. Kay’s declaration said nothing whatsoever about whether isolated DNA fragments occur in Nature. It concerned how to construe terms in Myriad’s patent.)
A few minutes later, Justice Ginsburg returned to the point:
JUSTICE GINSBURG: Do you concede at least that the decision in the Federal Circuit, that Judge Lourie did make an incorrect assumption, or is the Lander brief inaccurate with respect to that, too? That is, Judge Lourie thought that isolated DNA fragments did not exist in the human body and Dr. Lander says that --
MR. CASTANIAS: No, what -- I think Justice -- Judge Lourie was exactly correct to say that there is nothing in this record that says that isolated DNA fragments of BRCA1 exist in the body. Neither does Dr. Lander's brief, for that matter. And for that matter, those isolated fragments that are discussed in Dr. Lander's brief again are -- are what are known not -- not in any way as isolated DNA, but as pseudogenes. They're typically things that have been killed off or mutated by a virus, but they do not –
Here, Myriad’s counsel proved to be confused. Contrary to Mr. Castanias’s statement, the Lander brief (on page 16) explicitly stated that isolated DNA fragments were found covering the entire BRCA1 and BRCA2 genes. Also, “pseudogenes” had nothing to do with Lander’s brief; they arose in the ACLU’s brief for Petitioners and in Myriad’s reply. (“Pseudogenes” are sequences in the human genome that arise on the rare occasions when RNA is reverse transcribed into DNA; they are relevant to the patentability of cDNA but are unrelated to the patentability of genomic DNA.)
Justice Alito then jumped in, offering the only glimmer of hope for Myriad’s counsel:
JUSTICE ALITO: But isn't this just a question of probability? To get back to your baseball bat example, which at least I -- I can understand better than perhaps some of this biochemistry, I suppose that in, you know, I don't know how many millions of years trees have been around, but in all of that time possibly someplace a branch has fallen off a tree and it's fallen into the ocean and it's been manipulated by the waves, and then something's been washed up on the shore, and what do you know, it's a baseball bat.
In other words, Justice Alito asked whether isolated DNA fragments of the BRCA genes might be freakishly rare. Neither opposing counsel nor the Solicitor General had an opportunity to address Justice Alito’s question, because they had already spoken.
The answer to Justice Alito’s question turns out to be: VERY common. A typical person contains roughly one billion isolated DNA fragments of the BRCA genes circulating in his or her blood.
The Lander Brief (in footnote 23) cites several papers showing that, in 1 milliliter of blood (1/4000th of total circulation), each nucleotide in the human genome was covered by about 250 fragments on average. In total circulation, this corresponds to about 1 million fragments (= 4000 x 250) covering each individual base. Across the length of the BRCA genes, this translates to about 1 billion fragments.
More explicitly, footnote 25 points to a web site published by Stanford Professor Stephen Quake (the author of one of the studies), in which he specifically reported the coverage of the BRCA genes in the blood stream. Dr. Quake’s data directly showed that a typical person carries roughly 945 million fragments of isolated DNA from the BRCA1 and BRCA2 genes.
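The arithmetic behind these figures can be checked in a few lines. The per-milliliter coverage and total-circulation numbers come from the post itself; the combined genomic span of the BRCA genes and the typical cell-free fragment length are rough assumptions added here for illustration, which is why the result lands near (not exactly at) the 945 million figure Dr. Quake reported:

```python
# Back-of-the-envelope check of the Lander brief's fragment estimate.
# From the post: ~250 fragments cover each base per 1 mL of blood,
# and 1 mL is about 1/4000th of total circulation.
coverage_per_ml = 250
total_blood_ml = 4000

# Assumptions added for illustration (not stated in the post):
brca_span_bp = 165_000    # approximate combined BRCA1 + BRCA2 genomic span
fragment_len_bp = 170     # typical cell-free DNA fragment length

# Coverage of each base across total circulation: 4000 x 250 = 1,000,000
coverage_total = coverage_per_ml * total_blood_ml

# Tiling the genes once takes span/length fragments; multiply by the
# per-base coverage to estimate total distinct fragments in circulation.
fragments_total = coverage_total * brca_span_bp // fragment_len_bp

print(f"{coverage_total:,} fragments cover each base")        # 1,000,000
print(f"~{fragments_total:,} BRCA fragments in circulation")  # ~970 million
```

Under these assumptions the estimate comes out just under one billion, consistent with both the brief's "about 1 billion" and Quake's 945 million.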
I was very happy that the Lander brief got this much attention, since I think that once the Court understands the fundamental mistake made by the Federal Circuit (and apparently Myriad’s counsel), as several of the Justices' questions at oral argument suggested they did, the outcome of the case becomes clear. The Court can actually sidestep a number of more difficult questions in patent law (about the precise meaning of the standard under Diamond v. Chakrabarty for when a molecule is “markedly different” from a product of nature), because isolated DNA fragments of the human genome are precisely products of nature themselves.
Saturday, April 20, 2013
The Securitization of Patents
[I cross-posted this at Patently-O last week, but thought it might be of interest to a more general audience]
My forthcoming article in Duke Law Journal, The Securitization of Patents, argues that the best way to create patent markets might be to start treating portfolios as securities. A full draft is accessible at this SSRN page. The article makes four basic points:
- Aggregation and trading are not limited to non-practicing entities – everyone is doing it.
- Companies are trading aggregated patent portfolios as they do other financial instruments, either through sale or licensing.
- Aggregation is beneficial, even critical, for efficiency; this is directly contrary to the conventional wisdom.
- Based on the above, markets might be improved by applying securities treatment to patent portfolios.
[NB: I focus on portfolios, not individual patents. I also focus on sale and licensing of patents, not the initial patent grant. The paper explains why in more space than I have here.]
When I first wrote Patent Troll Myths, there was very little empirical data about NPEs. Since then, such research has exploded, with new data seemingly every week counting the number of NPEs and their cases. This data, though helpful, leaves a lot to be desired, I think. First, there is rarely a real apples-to-apples comparison with the activity of product companies (and when there is, the comparison is not very granular). To that end, I’ve been developing a matched data set for my Patent Troll Myths data so we can test what real differences in quality and quantity, if any, exist. Second, the data largely ignores licensing practices, which can be quite similar. To be fair, licensing data is difficult to come by, but without it, normative determinations are difficult. Third, studies like mine, which look at the provenance of NPE patents, are rare.
These issues lead to my first point: aggregation is not just for trolls anymore, if it ever was. The public is becoming a bit more aware of this with new focus on privateering, the outsourcing of patent enforcement by product companies to licensing and assertion specialists. The idea that aggregation is just fine when a product company does it, but suddenly evil when those same patents transferred to a third party has never sat well with me. And regardless of moral considerations, the fact of the matter is that patent aggregation is everywhere.
My second point follows from the first: aggregated portfolios are being used as assets, and traded as such by all sorts of companies. This is nothing new; people have been writing about patents as a new asset class for a while now. Transactions are getting bigger, however, and they are hitting the news. Perhaps no transaction better illustrates my point than the recent Kodak patent auction. First, Kodak offered its patents for sale as a financing strategy in bankruptcy. Second, the eventual buyer was a consortium including, among others, Microsoft (a product company); Intellectual Ventures (a licensing company, but also one that litigates, and one that aggregates defensively); and RPX (a defensive aggregator). This one transaction is my argument in a nutshell: everyone is aggregating, and they are doing so in buy/sell transactions for financial purposes.
My third point is that such aggregation is not always (or necessarily often) a bad thing. This is decidedly against the conventional wisdom. Companies with large portfolios surely have the ability to cause “royalty stacking,” but in practice this is less likely than if many separate parties enforced those same patents. Litigation looks much the same; regardless of the size of the portfolio, courts are just not going to hear a case asserting 1000 patents. Only a few (at most 5 or 10) patents will be at issue, and then the aggregator looks like anyone else. Similarly, in negotiations, the parties usually haggle over a few “lead” patents. This is little different from negotiating with the owner of a few patents – with one big exception. When you come to terms with the aggregator, you can settle and license hundreds or maybe thousands of patents at once. Not so with single-patent owners. These folks line up one after another, asserting a few patents at a time. The biggest NPEs will often assert patents obtained by individual inventors; would product makers really rather that the inventors assert their own patents separately? Maybe before people figured out a viable mechanism for funding patent assertion, but now that individuals can seek funding to enforce their own patents, a single aggregator must surely be a better option than many inventor plaintiffs.
There is one difference with aggregated portfolios, of course. When the parties are done haggling over the lead patents, the portfolio owner always has more to discuss while the small patent holder has none. But rather than being the greatest cost of the portfolio, a seemingly bottomless portfolio is its greatest benefit.
And that is my fourth point: when parties are trading portfolios, the haggling should be over price instead of quality and infringement. In a large enough portfolio holding patents directly related to a particular product, there will surely be some number of patents that are both valid and infringed. The question is how many, and how much it will cost to find them. A central thesis of my article is that treating portfolios as securities will help lower transactions costs in a variety of ways by limiting the litigation costs of finding those infringing patents and instead better pricing patents in the market. For you legal sticklers, I didn’t just make this up: the paper looks at portfolios under the Supreme Court’s famous Howey test and concludes that such treatment is at least plausible under the law.
How might securities laws benefit markets? Not in the traditional “public offering” way. I suspect that most transactions would be excluded from the registration requirements. However, such transactions might be regulated as dark pools, and require clearinghouse treatment that makes such transactions public. Further, stock fraud laws might require the disclosure of information that might affect portfolio value. For example, patent holders who know of anticipatory prior art might be required to disclose it rather than keep it secret. Perhaps most important, accepting that portfolio trades are simply financial transactions might drive efforts to develop objective portfolio pricing. The goal of such pricing schemes is to determine a portfolio’s price even though the parties cannot agree on the price of any particular patent in the portfolio. I examine several pricing strategies that might work (and several destined to fail) in the paper.
There is obviously much more in this paper than I can write here. I detail my arguments in the full paper.
Tuesday, April 16, 2013
Solving the Digital Resale Problem
As Bruce Willis's alleged complaints about not being able to leave his vast music collection to his children upon his death illustrate, modern digital media has created difficulties in secondary and resale markets. (I say alleged because the reports were denied. Side note: if news breaks on Daily Mail, be skeptical. And it's sad that Cracked had to inform Americans of this...).
This post describes a recent attempt to create such a market, and proposes potential solutions.
In the good old days, when you wanted to sell your old music, books, or movies, you did just that. You sold your CD, your paperback, or your DVD. This was explicitly legalized in the Copyright Act: 17 USC Section 109 says that: “...the owner of a particular copy or phonorecord lawfully made under this title, or any person authorized by such owner, is entitled, without the authority of the copyright owner, to sell or otherwise dispose of the possession of that copy or phonorecord.” As we'll see later, a phonorecord is the material object that holds a sound recording, like a CD or MP3 player.
But we don't live in the good old days. In many ways, we live in the better new days. We can buy music, books, and DVDs over the internet, delivered directly to a playback device, and often to multiple playback devices in the same household. While new format and delivery options are great, they create problems for content developers, because new media formats are easily copied. In the bad sort-of-old days, providers used digital rights management (or DRM) to control how content was distributed. DRM was so poorly implemented that it is now a dirty word, so much so that it was largely abandoned by Apple; it is, however, still used by other services, like Amazon Kindle eBooks. Providers also use contracts to limit distribution - much to Bruce Willis's chagrin. Indeed, Section 109(d) is clear that a contract can opt-out of the disposal right: “[Disposal rights] do not, unless authorized by the copyright owner, extend to any person who has acquired possession of the copy or phonorecord from the copyright owner, by rental, lease, loan, or otherwise, without acquiring ownership of it.”
But DRM is easily avoided if you simply transfer the entire device to another party. And contracts are not necessarily as broad as people think. For example, I have scoured the iTunes terms of service and I cannot find any limitation on the transfer of a purchased song. There are limitations on apps that make software a license and limit transfers, but the music and video downloads are described as purchases unless they are "rentals," and all of the “use” limitations are actually improvements in that they allow for multiple copies rather than just one. Indeed, the contract makes clear that if Apple kills off cloud storage, you are stuck with your one copy, so you had better not lose it. If someone can point me to a contract term where Apple says you have not “purchased” the music and cannot sell it, I would like to see it.
Enter ReDigi and the lawsuit against it. ReDigi attempted to set up a secondary market for digital works. The plaintiff was Capitol Records, so there was no contract privity, so this is a pure “purchase and disposal” case. A description from the case explains how it worked (in edited form here):
To sell music on ReDigi's website, a user must first download ReDigi's “Media Manager” to his computer. Once installed, Media Manager analyzes the user's computer to build a list of digital music files eligible for sale. A file is eligible only if it was purchased on iTunes or from another ReDigi user; music downloaded from a CD or other file-sharing website is ineligible for sale. After this validation process, Media Manager continually runs on the user's computer and attached devices to ensure that the user has not retained music that has been sold or uploaded for sale. However, Media Manager cannot detect copies stored in other locations. If a copy is detected, Media Manager prompts the user to delete the file. The file is not deleted automatically or involuntarily, though ReDigi's policy is to suspend the accounts of users who refuse to comply.
After the list is built, a user may upload any of his eligible files to ReDigi's “Cloud Locker,” an ethereal moniker for what is, in fact, merely a remote server in Arizona. ReDigi's upload process is a source of contention between the parties. ReDigi asserts that the process involves “migrating” a user's file, packet by packet — “analogous to a train” — from the user's computer to the Cloud Locker so that data does not exist in two places at any one time. Capitol asserts that, semantics aside, ReDigi's upload process “necessarily involves copying” a file from the user's computer to the Cloud Locker. Regardless, at the end of the process, the digital music file is located in the Cloud Locker and not on the user's computer. Moreover, Media Manager deletes any additional copies of the file on the user's computer and connected devices.
Once uploaded, a digital music file undergoes a second analysis to verify eligibility. If ReDigi determines that the file has not been tampered with or offered for sale by another user, the file is stored in the Cloud Locker, and the user is given the option of simply storing and streaming the file for personal use or offering it for sale in ReDigi's marketplace. If a user chooses to sell his digital music file, his access to the file is terminated and transferred to the new owner at the time of purchase. Thereafter, the new owner can store the file in the Cloud Locker, stream it, sell it, or download it to her computer and other devices. No money changes hands in these transactions. Instead, users buy music with credits they either purchased from ReDigi or acquired from other sales. ReDigi credits, once acquired, cannot be exchanged for money. Instead, they can only be used to purchase additional music.
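The "migration" process the parties fought over can be sketched in a few lines. This is a hypothetical illustration, not ReDigi's actual code: it shows the move-then-delete idea ReDigi described, in which no chunk of the file exists in two places at once. Whether this nonetheless makes a "copy" was the crux of the legal dispute.

```python
# Hypothetical sketch of ReDigi's claimed packet-by-packet "migration":
# each chunk is removed from the source before it lands at the
# destination, so the complete file never exists on both sides at once.
def migrate(src_chunks, dst_chunks):
    """Move chunks from src_chunks to dst_chunks, one at a time."""
    while src_chunks:
        chunk = src_chunks.pop(0)  # delete from the source first...
        dst_chunks.append(chunk)   # ...then write to the destination
    return dst_chunks

user_computer = [b"pkt1", b"pkt2", b"pkt3"]
cloud_locker = migrate(user_computer, [])
print(user_computer)  # [] - nothing remains on the user's machine
print(cloud_locker)   # [b'pkt1', b'pkt2', b'pkt3']
```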
ReDigi claimed that it was protected by 17 USC 109. After all, according to the description, it was transferring the work (the song) from the owner to ReDigi, and then to the new owner. Not so, said the court. As the court notes, Section 109 protects only the disposition of particular copies (phonorecords, really) of the work. And uploading a file and deleting the original is not transferring a phonorecord, because the statute defines a “phonorecord” as the physical medium in which the work exists. Transfer from one phonorecord to another is not the same as transferring a particular phonorecord. So, ReDigi could be a secondary market for iPods filled with songs, but not for songs disembodied from the storage media.
As much as I want the court to be wrong, I think it is right here, at least on the narrow, literal statutory interpretation. The words say what they say. Even the court notes that this is an uncomfortable ruling: “[W]hile technological change may have rendered Section 109(a) unsatisfactory to many contemporary observers and consumers, it has not rendered it ambiguous.”
Once the court finds that transferring the song to ReDigi is an infringing reproduction, it's all downhill, and not in a good way. The court notably finds that there is no fair use. I think it is here that the court gets it wrong. Unlike the analysis of Section 109, the fair use analysis is short, unsophisticated, and devoid of any real factual analysis. I think this is ReDigi's best bet on appeal.
Despite my misgivings, ReDigi's position is not a slam dunk. After all, how can it truly know that a backup copy has not been made? Or that the file has not been copied to other devices? Or that the file won't simply be downloaded from cloud storage or even iTunes after it has been uploaded to ReDigi?
If ReDigi, which seemed to try to do a good job ensuring no residual copies, cannot form a secondary market, then what hope do we have? We certainly aren't going to get there with the statute we have, unless courts are much more willing to read a fair use into transfers. The real problem is that the statute works fine when the digital work (software, music, whatever) is stored in a single use digital product. When we start separating the “work” from the container, so that containers can hold many different works and one work might be shared on several containers all used by the same owner, all of the historical rules break down.
So, what do we do if we can't get the statute amended? I suspect people will hate my answer: a return to the dreaded DRM. A kinder, gentler DRM. I think that DRM that allows content providers to recall content at will (or upon business closure) must go -- whether legislatively or regulatorily. It is possible, of course, for sophisticated parties to negotiate for such use restrictions (for example, access to databases), and to set pricing for differing levels of use based on those negotiations. That's what iTunes does with its "rentals."
But companies should not be allowed to offer content "for sale" if delivery and use is tied to a contract or DRM that renders that content licensed and not in control of buyers. This is simply false advertising that takes advantage of settled expectations of users, and well within the powers of the FTC, I believe.
But DRM can and should be used to limit copying and transferability. If transferability is allowed, then the DRM can ensure that the old user does not maintain copies. Indeed, if content outlets embraced this model, they might even create their own secondary markets to increase competition in the secondary market. In short, the solution to the problem, I believe, is going to be a technical one, and that might be a good thing for users who can now credibly show that they won't copy.
And DRM is what we are seeing right now. Apparently, ReDigi has reimplemented its service so that iTunes purchases are directly copied to a central location where they stay forever. From there, copies are downloaded to particular user devices pursuant to the iTunes agreement. This way, ReDigi acts as the digital rights manager. When a user sells a song, ReDigi cuts off access to the song for the selling user, and allows the buying user access without making a new copy of the song on its server. I presume that its media manager also attempts to delete all copies from the seller's devices.
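The key design idea in a service like this is that a "sale" only reassigns an ownership record; the stored file is never duplicated. A minimal sketch of that access-transfer model, with all names invented for illustration (this is not ReDigi's actual implementation):

```python
# Hypothetical access-transfer model: the file bytes are stored once,
# and selling a song reassigns the owner record rather than copying data.
class CloudLocker:
    def __init__(self):
        self.files = {}  # file_id -> stored bytes (written once)
        self.owner = {}  # file_id -> user id of the current owner

    def upload(self, user_id, file_id, data):
        self.files[file_id] = data
        self.owner[file_id] = user_id

    def sell(self, file_id, seller_id, buyer_id):
        # Transfer access, never the bytes: the stored copy is untouched.
        if self.owner.get(file_id) != seller_id:
            raise PermissionError("seller does not own this file")
        self.owner[file_id] = buyer_id

    def download(self, user_id, file_id):
        if self.owner.get(file_id) != user_id:
            raise PermissionError("no access")
        return self.files[file_id]

locker = CloudLocker()
locker.upload("alice", "song1", b"...audio...")
locker.sell("song1", "alice", "bob")
print(locker.owner["song1"])  # "bob" - alice no longer has access
```

After the sale, a download attempt by the seller fails while the buyer retrieves the same stored bytes, which is the sense in which no new copy is made on the server.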
Of course, this might mean that content, or at least transferring it, is a little more expensive than before. But let's not kid ourselves - the good old days weren't that good. You had to buy the whole CD, or maybe a single if one was available, but you could not pick and choose any song on any album. Books are heavy and bulky; you couldn't carry thousands of them around. And DVDs require a DVD player, which has several limitations compared to video files.
DRM may just be the price we pay for convenience and choice. We don't have to pay that price. Indeed, I buy most of my music on CD. And I get to put the songs where I want, and I suppose sell the CD if I want, though I never do. As singles start costing $1.50, it may make sense to buy the whole CD. Alas, these pricing issues are incredibly complex, which may take another post in the future.
Wednesday, April 03, 2013
...it's good to be here, and I'm glad to be back. I plan to blog about a few different IP/Internet topics this month, including the CFAA and some recent copyright cases.
I'm traveling this week, so likely won't jump in for a couple days. In the meantime, a few plugs.
Here are some links to recent blog posts I've written that might be of interest to readers here:
My thoughts on the SHIELD Act (one way fee shifting against patent trolls)
My Wired Op-Ed on NPEs
I'm also speaking at two events in April, including this Friday. Please come if you can make it! (You have to register for the design patent conference.)
And I've finally joined Twitter.
That's it for the shameless self promotion for now. I look forward to the rest of the month.
Thursday, January 31, 2013
Wrap-Up for Book Club on "Justifying Intellectual Property"
- Introductory Post
- Gordon: Thoughts on Justifying Intellectual Property
- Masur: The New Institutional Philosophy of Rob Merges
- Merges: Merges on Gordon on Rawls and IP
- Gordon: Replying to Rob Merges, Justifying Intellectual Property
- Bracha: What Good are Midlevel Principles in IP? [Thoughts on Justifying IP]
- Merges: Midlevel Principles: Response to Jonathan Masur
- Merges: Even More on Midlevel Principles in IP Law - Response to Bracha
- Duffy: Merges and Descartes
- Hughes: More on Rawls and Intellectual Property
- Bracha: Still on Midlevel Principles in IP: A Reply to Rob Merges
- Masur: Masur on Merges on Masur on Merges
- Merges: Justifying IP: Putting the Horse Before Descartes (Response to Duffy)
Wednesday, January 30, 2013
Post Book Club: Justifying IP -- Putting the Horse Before Descartes (Response to Duffy)
In this, my final response to the many interesting posts about my book, I want to traverse some comments that John Duffy made. To the other authors of posts, especially those who wrote reactions to my responses -- we will have to continue offline. I have taken too much space already. And the many readers of Prawfsblawg who care nothing for IP are, I am sure, tired of all this.
I am going to skip over the blush-inducing praise in John's post, and get right to his main point. He says:
" [I]f we are frustrated with the complexities of economic theories and are searching for a more solid foundation for justifying the rules of intellectual property, is Kant (or Locke or Rawls or Nozick) really going to help lead us out of the wilderness?"
John says no. He says further that just as Descartes' doubts drove him to embrace foundations that were thoroughly unhelpful when it came to elucidating actual physical reality, such as planetary motion, so my doubt-induced search for solid foundations will lead nowhere (at best), and maybe to some very bad places (at worst).
This argument may be seen to resolve to a simple point, one often made in legal theory circles: "It takes a theory to beat a theory." (Lawrence Solum has an excellent entry on this topic in his Legal Theory Lexicon, posted on his Legal Theory Blog some time back.) The idea here is that utilitarian theory is a true theory, because it is capable of proof or refutation and because it guides inquiry in ways that could lead to better predictions about the real world. By this criterion, deontic theories are not real theories because they cannot be either proven or refuted. Einstein's famous quip comes to mind; after a presentation by another scientist, Einstein supposedly said "Well, he wasn't right. But what's worse is, he wasn't even wrong."
My response starts with some stark facts. We do not know whether IP law is net social welfare positive. Yet many of us feel strongly that this body of law, this social and legal institution, has a place in a well-functioning society. Now, we can say the data are not all in yet, but we nevertheless should maintain our IP system on the hope that someday we will have adequate data to justify it. The problem with this approach is, where does that leave us in the interim? We could say that we will adhere to utilitarian theory because it stands the best chance of justifying our field at some future date -- when adequate data are in hand. But meanwhile, what is our status? We are adhering, we say, to a theory that may someday prove true. By its own criteria it is not true today, not to the level of certainty we require of it (and that it in some sense requires of itself). But because it will be "more true" than other theories on that magic day when convincing data finally arrive, we should stick to it.
My approach was to turn this all upside down. I started with the fact that the data are not adequate at this time. And I admitted that I nevertheless felt strongly that IP makes sense as a field; that it seems warranted and even necessary as a social institution. So it was on account of these facts that I began my search for a better theoretical foundation for IP law.
If you have followed me so far, you will not be surprised when I say that for me, Locke, Kant and Rawls better account for the facts as I find them than other theories -- including utilitarianism. Deontic considerations explain, to me at least, why we have an IP system in the absence of convincing empirical evidence regarding net social welfare. Put simply: We have IP, regardless of its (proven) effect on social welfare -- so maybe (I said to myself) *it's not ultimately about social welfare*.
This is the sense in which, to me, deontic theory provides a "better" theory of IP law. It fits the facts in hand today, including the inconvenient fact of the absence of facts. Of course, we may learn in years to come that the utilitarian case can be made convincingly. I explicitly provide for this in JIP, when I say that there is "room at the bottom," at the foundational level, for different ultimate foundations and even new ultimate foundations. It's just that for me, given the current data, I cannot today make that case convincingly. And it would be a strange empirically-based theory that asks me to ignore this key piece of factual information in adopting foundations for the field. To those who say deontic theories cannot be either proven or disproven, I offer the aforementioned facts, and say in effect that an amalgam of deontic theory does a better job explaining why we have IP law than other theories. And therefore that it is in this sense "more true" than utilitarian theory. Again, it fits the facts that (1) we do not have adequate data about net social welfare; and (2) we nevertheless feel IP is an important social institution in our society and perhaps any society that claims to believe in individual autonomy, rewards for deserving effort, and basic fairness.
One final point: to connect Kant with Hegel with Marx, as John does, is a legitimate move philosophically. But I have to add that for many interpreters of Marx, he is the ultimate utilitarian. What is materialism, as in Marxist historical materialism, but a system that makes radically egalitarian economic outcomes the paramount concern of the state? The famous suppression of individual differences and individual rights under much of applied Marxist theory represents the full working out of the utilitarian program, under which all individuals can be reduced to their economic needs, and all government can be reduced to a mechanistic system for meeting those needs (as equally as possible). If we are going to worry about where our preferred theories might lead if they get into the wrong hands, I'll take Locke and Kant and Rawls any day. In at least one form, radical utilitarian-materialism has already caused enough trouble.
This is hardly all there is to say, but it is all I have time to say. So I will keep plodding along, like a steady plow horse, trying not only to sort out the foundational issues, but also to engage in policy discussion and doctrinal analysis. And with this image I close, having once again put the (plow) horse before Descartes in the world of IP theory.
Masur on Merges on Masur on Merges
I greatly appreciate Rob Merges' generosity in taking the time to respond to my original post. His response is, characteristically for Rob, incisive and thoughtful. I am not sure, in the end, how much we really disagree. But I will take a shot at briefly disentangling and clarifying a few points with the goal of identifying whether or not disagreement actually exists.
Rob is absolutely correct that there are two separate questions: 1) whether an IP system can be justified at all; and 2) how well a particular system is performing. Rob argues that, with respect to question #1, the IP system cannot be justified on economic (by which we mean utilitarian or welfarist) grounds. Why would this be? One possibility is that utilitarianism or welfarism or consequentialism (which is what we mean when we talk about an "economic" foundation) cannot provide a morally satisfactory basis for intellectual property rights. There is a short section in the book (pages 151-153) that could be read as developing this argument, but that section is better understood as a critique of a completely unfettered free market, a point with which few economists would disagree. As a general matter, the book does not appear to be making this point, and indeed it would be a mammoth undertaking to do so (even for Rob Merges and this book) given the extensive arguments that scholars have been making for centuries about welfarism as a moral foundation. Rob will correct me if I am wrong, but I do not understand this to be his main argument.
A second possibility is that economics (read: utilitarianism or welfarism) cannot generate the midlevel principles that operate in intellectual property. But as I pointed out in my previous post, it can generate them -- or at least the ones that are really central to the American IP system.
The third possibility, and the one I understand Rob to be advancing, is that the IP system, as it is currently constituted, does not actually promote the utilitarian ends that an economic approach would demand. That is: as an empirical matter, IP doctrines as they operate today do not actually increase social welfare. As Rob wrote in his post:
"The data required by a comprehensive utilitarian perspective are simply not in evidence in this field -- at least not yet. Put simply, I do not think we can say with the requisite degree of certainty that IP systems create net positive social welfare."
That seems exactly right to me, and this is why I believe that Rob and I are actually in violent agreement as to most of the important issues. But this means that economics fails in response to Rob's question #2 -- how well is the system actually performing? -- rather than question #1, which is how the IP system can be justified on a theoretical basis. That is why I wrote that economics has failed an empirical test, while Rob's deontic theory has passed a theoretical test. This touches upon an excellent point made by a commenter to my first post. This is not a reason to abandon deontic theory; rather, the point is simply that when we evaluate different types of theories, we should do so on comparable grounds.
Nor do I mean at all to say that Rob's deontic theory is not correct, or compelling, or even superior to economics. It is certainly the first two and maybe the third as well. It is just that I do not believe a utilitarian economic theory can be ruled out on the theoretical grounds used to evaluate Lockean and Kantian deontic theories. Economics is part of the overlapping consensus as well.
Book Club: Even More on Midlevel Principles in IP Law - Response to Bracha
In a previous post I explained the concept of midlevel principles in IP law. In this post I respond to a couple of detailed points made in a very insightful post on this topic by Oren Bracha. Oren has a number of interesting things to say, but his critique has two main points: (1) the conservative bias of midlevel principles; and (2) the fuzzy nature of midlevel principles, a product of their origin in a (hypothetical) consensus-building procedure.
(1) The conservative bias: I think there are two senses of "conservative." In my view, what are conserved are meta-themes that derive from but transcend specific practices. These themes do not uniformly point to results that are "conservative" in the other sense -- tending to preserve the status quo; continuing with trends currently in place. Let me illustrate with two specific examples. When Wendy Gordon introduced the idea of "fair use as market failure," she tied together a number of emerging themes in copyright law and connected them with a large body of thought (including caselaw) that came before. But her ideas -- based largely on what I would call the efficiency principle, though surely infused also with considerations of proportionality, nonremoval (public domain), and perhaps even dignity -- were not conservative with respect to outcomes. In fact they created a revolution in consumer or user rights, by shifting the focus from the copyright owner's interests, the amount copied, etc., to higher-level issues such as transaction costs and the nature of markets for IP-protected works.
A second example is eBay. The majority opinion, based on traditional equity doctrine (as codified in the Patent Act), was conservative in the sense that it deployed well-known rules. The Kennedy concurrence had a richer policy discussion, which centered (in my view) on the proportionality principle. The basic idea was that sometimes the automatic injunction rule gives patent owners "undue leverage" in negotiations; and that equity was flexible enough to take this into account. I see this as the embodiment of a very general principle, one that finds expression in many areas of IP law, from the rules of patent scope (enablement, written description, claim interpretation, etc.) to substantial similarity in copyright law, and so on. Again the discussion "conserved" on meta-principles by deploying a familiar theme from the body of IP law. But the outcome was not therefore necessarily conservative in the sense of preserving the status quo. The status quo heading into the case was the automatic injunction rule. And that was rejected in favor of a more flexible approach.
(2) The fuzz factor: Oren's second point is that the midlevel principles just do not seem to have the requisite level of granularity to resolve difficult problems in IP policy. This leads him to conclude that the only way to gain true resolution is to engage each other at the (admittedly contentious) level of our foundational commitments.
Here I would advert to the master for some guidance. John Rawls, in A Theory of Justice, describes a detailed multi-stage procedure by which fair institutions can be established. In the course of the discussion he says this about the problem of fuzziness:
"[O]n many questions of social and economic policy we must fall back upon a notion of quasi-pure procedural justice: laws and policies are just provided that they lie within the allowed range, and the legislature, in ways authorized by a just constitution, has in fact enacted them. This indeterminacy in the theory of justice is not in itself a defect. It is what we should expect. Justice as fairness will prove a worthwhile theory if it defines the range of justice more in accordance with our considered judgments than do existing theories, and if it singles out with greater sharpness the graver wrongs a society should avoid." (A Theory of Justice, sec. 31, pp. 200-201).
So foundational consensus will inevitably be general. But that does not mean that citizens cannot engage each other in contentious argument at more operational, implementation-oriented stages. The way I see things, the midlevel principles are expansive enough to cut through the generality required to agree on them. (Note that this pluralistic sensibility is a product not of the early Rawls of A Theory of Justice but of the later Rawls of Political Liberalism.) These principles admit of sharper disagreement and a deeper level of engagement than Oren seems to believe. Perhaps they require greater elaboration than my brief treatment made possible. But they are not in my view fatally vague as a vocabulary of policy debate.
I should add one additional point. Oren notes my emphasis in JIP on the complete independence of foundational commitments and midlevel principles. I have begun to rethink that a bit, based in large part on a thoughtful critique of this aspect of the book by David H. Blankfein-Tabachnick of Penn State Law School. His critique and my response are both still in process and are forthcoming in the California Law Review, so I do not want to say too much. But suffice it to say that I have rethought the "complete independence" thesis a little bit. I can see that in a few rare instances, where policy issues are in equipoise, resort to one's ultimate commitments -- the foundations of the field as one sees them -- may be useful and even necessary. So, to close with Oren's wonderful imagery, after the flash of white light on the road to Damascus, the rider surely does remount and head on down the road. But he or she is changed utterly at some level -- and that change is bound to peek out, now and then, in the clinch.
Book Club: Justifying IP -- Midlevel Principles: Response to Jonathan Masur
In this post I respond to some comments on my book (abbreviated "JIP") by Jonathan Masur. It is not surprising to me that Jonathan takes aim at Part II of JIP, in which I introduce and explain what I call the midlevel principles of IP law. It seems whenever the book is addressed in depth (most notably at a full-day conference at Notre Dame organized by Mark McKenna; and a number of discussions at a conference on the Philosophy of IP rights at San Diego convened by Larry Alexander), this is the topic that seems to stir up the greatest interest.
Before I turn to Jonathan's specific points, let me say a word about what I mean by midlevel principles. Basically, these are meta-themes in IP law that mediate between pluralist foundational commitments and detailed doctrines and case outcomes. They are meant to serve as the equivalent of shared basic commitments in the “public” and “political” sphere as described by Rawls in his book Political Liberalism (2005). That is, midlevel principles supply a shared language, a set of conceptual categories, that are consistent with multiple diverse foundational commitments. They are more abstract, operating at a higher level than specific doctrines and case outcomes; but they are pitched in a language that is distinct from that of foundational commitments. They create, as I say in JIP, a shared public space in which abstract (non-case-specific) policy discussions can take place. The payoff is this: a committed Kantian can conduct a sophisticated policy argument with a firm believer in the Talmudic (or Muslim, or utilitarian) basis of IP law about the proper scope of fair use in copyright, or the proper length of the term for patent protection, or what should be required to prove that a trademark has been abandoned. The argument can proceed without the Muslim needing to convert the Kantian or utilitarian to a religious worldview, and without the Kantian talking others out of the view that religious texts provide a set of workable guiding principles for right behavior. Diverse people can – and indeed, often do! – speak in terms of an appropriate public domain (i.e., the nonremoval principle); a fair reward for creators (the proportionality principle); the importance of moral rights (the dignity principle); or the cheapest way to offer legal protection at the lowest net social cost (the efficiency principle). All without the conversation devolving into fights over ultimate commitments.
Jonathan Masur recognizes the versatility of the midlevel principles. And he acknowledges that although these principles are fully consistent with utilitarian foundations, the IP system as a whole has failed to fully implement the policies called for by those with a thorough commitment to utilitarian foundations. As he puts it:
"The problem, as Merges correctly describes it, is that IP doctrine, as implemented by courts and other parties, has failed to advance the economic aims that it set out. This is an empirical judgment, and quite possibly a correct one."
As Masur notes, I have come to believe that utilitarian foundations are inadequate in the IP field. The data required by a comprehensive utilitarian perspective are simply not in evidence in this field -- at least not yet. Put simply, I do not think we can say with the requisite degree of certainty that IP systems create net positive social welfare. Yet I still had the intuition that IP rights are a valuable social institution. Which is what led me to search for alternate foundations. Hence Part I of JIP, in which I describe foundational commitments growing out of the ideas of Locke, Kant and Rawls. These deontic conceptions provide a better set of foundational commitments for the IP field, in my view. Others of course disagree, which is why the midlevel principles are so important as a shared policy language for those with divergent foundational commitments.
Masur notes the lack of empirical support for utilitarian IP foundations, but says in effect that deontic foundations do not provide much of an alternative. As he puts it,
"But what is the comparable standard by which a deontic conception of IP is to be judged? What would it mean for IP doctrine in practice not to have properly advanced Lockean or Kantian ethics? How could anyone tell? The problem—or, more accurately, the advantage for Kant and Locke—is that those approaches are purely theoretical and do not generate testable predictions. Economic theory has foundered on a set of tests that cannot be applied to the alternatives Merges proposes."
The way I see things, Jonathan has conflated two separate issues here. The first is whether IP can be justified at all. The second is how well any particular IP system is performing, given that there is a basic consensus that there should be such a system in the first place. The first issue is where foundational commitments come in. The second is operational; it is a question more of "how" or "how well" as opposed to "whether." (I address this in more detail in an article forthcoming in the San Diego Law Review, "The Relationship Between Foundations and Principles in IP Law.")
Seen in this light, there is no need for empirical tests to prove the viability of Lockean, Kantian, and/or Rawlsian foundations for the field. The only question that needs to be answered is whether a body of IP law can be envisioned that is consistent with these systems of philosophical thought. If so, the foundational question has been successfully answered. Then it's on to the operational level -- designing actual institutions and rules to implement a workable IP system. In my view this is where the efficiency principle comes into play: one important design principle for IP law is and should be getting from our IP system the greatest social benefit at the lowest net cost (as best we can estimate these values). Efficiency is an operational (midlevel) principle, in other words. It does not (and in my view cannot) justify the existence of the field. But it can serve us well in crafting the detailed operations of the field -- once we decide, consistent with ultimate commitments, that it makes sense to have such a field in the first place.
Tuesday, January 29, 2013
Merges on Gordon on Rawls and IP
Wendy Gordon, as might be expected, gets right to the heart of the most difficult issues in her post on the Rawls chapter in my book, Justifying Intellectual Property ("JIP"). In this post I want to give some quick context and then point the interested reader to the fuller discussion that addresses the issues Wendy raises. Chapter 4 of JIP is on "Distributive Justice and IP Rights." It comes after an introductory chapter that lays out the architecture of the book, and then two chapters on foundational figures in the philosophy of property rights, Locke and Kant. While Locke and Kant are both sophisticated enough to include "other-regarding" features in their accounts of property, I wanted to include a more thorough, systematic, and comprehensive account of distributive justice issues in my discussion of IP rights. So naturally I turned to Rawls. Rawls himself, especially the early Rawls of A Theory of Justice, is fairly lukewarm on private property. But there is a good bit of subsequent literature that extends and adapts Rawls's framework in various ways that reflect more contemporary concerns. And of course since the 1970s there has been a huge upswelling of interest in property theory and philosophical discussions of private property. (Think Jeremy Waldron, The Right to Private Property; Stephen Munzer, A Theory of Property; Richard Epstein's Takings book and subsequent writings; Henry Smith, Lee Anne Fennell, Carol Rose, Greg Alexander, etc. etc. And in IP law, Peggy Radin, Wendy herself (indispensable on Locke) and others.) So it was in this spirit of updating and adapting that I tried to defend IP rights as consistent with a comprehensive Rawlsian account of distributive justice. I began, reasonably enough I think, with Rawls's two principles of justice. Principle 1 says that all persons have an equal right to the most extensive system of basic liberties that is consistent with the liberty of others (the "liberty principle").
The balancing of individual ownership with the interests and rights of the community is a major theme of contemporary property theory -- arguably *the* major theme. So it was relatively easy to draw on the property rights literature for a defense of property (and particularly IP rights) under the liberty principle. I will spare you the details here; but I would add that for me Kant's emphasis on property as a way to facilitate personal autonomy factors heavily into my description of IP as a true, basic individual right. Rawls aficionados will recognize that I could have stopped there. Under his "lexical priority" approach, if a right is demanded by the liberty principle it need not be justified in terms of the second principle. Because I was not sure everyone would buy my defense of IP under the first principle, and more importantly because I could not resist the challenge, I also tried to defend IP under Rawls's second principle. The second principle is the famous "difference principle." A deviation from strictly equal resource allocation can be justified only if it results in the greatest benefit to the least advantaged members of society. My argument here is based on the fact that industries reliant on IP rights contribute significantly to the quality of life of the poorest members of society. Popular culture (including much TV programming); technological improvements such as air conditioning; low-cost long-distance communication and transportation (especially important for immigrants); and cost-saving innovations of all kinds (mobile phones, hypertension medicines, etc.) are, the data show, highly valued by low-income members of our society. These data are surely what Wendy Gordon has in mind when she says that I have not persuasively defended IP under Rawls's second principle. She makes a good point. 
IP rights stand behind a number of personal fortunes that in themselves represent wildly extravagant deviations from a pure egalitarian distribution (think Bill Cosby, Bill Gates, Jay-Z, George Lucas, Oprah Winfrey). Consumer enjoyment, particularly for the least advantaged, must be factored into a discussion of these fortunes and the institutions (including IP rights) that make them possible. I would note, incidentally, an interesting feature of my list of IP-backed fortunes. Did you notice that 3 of the 5 people mentioned are African Americans? While not strictly relevant to the second principle, I think it is interesting that so many prominent fortunes in the African American community have been enabled by IP rights. This may not be justifiable unless the poorest of our citizens somehow benefit from the conditions that make these fortunes possible, but it is surely an interesting point from the general perspective of distributional concerns in our socio-economic system. (Incidentally, Justin Hughes and I have undertaken some joint work to pursue this idea in more depth). Nevertheless, as I acknowledge in JIP, what I provide is really not much more than a sketch of a full-blown defense of IP under the difference principle. A fuller defense would have to accept the higher marginal prices brought about by IP, and balance these against the consumer surplus created even for the poorest members of society by IP-based entertainment and technology products. My defense gestures in this direction but falls far short of being truly comprehensive. On the other hand, at least I have tried to integrate a comprehensive account of distributive justice into the discussion of IP rights. It may be less than a full feast. But perhaps it's also more than chopped liver.
Thoughts on "Justifying Intellectual Property" from Wendy Gordon
Here is an initial post for the book club from Wendy Gordon, William Fairfield Warren Distinguished Professor, Boston University, and Professor of Law, BU School of Law:
Rob Merges’s new book is an immense achievement. Intellectually it is stunning, plus Rob is an amazing and appealing writer.
Not since Peter Drahos’s 1996 book, A Philosophy of Intellectual Property, has someone attempted to bring together a plethora of philosophic perspectives on IP. Rob adds to this panoptic philosophic view a sharp knowledge of economics, and he puts at the center an acute recognition of how much we need, and lack, crucial empirical evidence about the effects of IP.
Ironically, it’s Rob’s valuable focus on the need for better facts that fails him in the chapter on Rawls. Rob argues that broad IP rights are consistent with giving Rawlsian priority to the worst off in society. But the Rawls chapter is riddled with factual assumptions which, if empirically investigated, might well prove the opposite.
One could quibble on philosophic grounds with Rob’s interpretation of Rawls (details of quibble available on request), but even on Rob’s own terms it’s far from clear that the worst-off benefit from the restraints that patent and copyright impose on the use of inventions and works of authorship.
Book Club on "Justifying Intellectual Property"
Welcome to the Book Club on Robert Merges' Justifying Intellectual Property. Joining us for the club will be:
- Oren Bracha, University of Texas School of Law
- John Duffy, University of Virginia School of Law
- Wendy Gordon, Boston University School of Law
- Justin Hughes, Benjamin N. Cardozo School of Law
- Jonathan Masur, University of Chicago Law School
- and our author, Robert Merges, University of California Berkeley School of Law
Tuesday, December 11, 2012
Incentive Granularity and Software Patenting
For my last post, I would like to address a comment repeatedly seen on my prior post: “Show me an invention that would not have happened for the entire patent term, and maybe then we can discuss whether the patent system does any good.”
I’m not convinced this is the right level of granularity. But first, a couple caveats:
- I tend to think the patent term is too long for the speed at which technology develops today, especially computer software. This may not be true for pharmaceuticals, which leads to tension in the system.
- Of course we should look at whether individual patents were incentivized by the patent grant. It would be a bad system indeed if we protected everything that would have occurred anyway. Note that I think the “inducement” standard proposed by Duffy & Abramowicz and discussed in my previous post has some real merit.
But even with these two caveats, that’s not the question we should be starting with. The goal of the patent system is to promote progress of the useful arts. That might happen by encouraging investment in start-ups. That might happen by encouraging research & development funding. That might happen by inventions that come earlier than they would have, even if they would have otherwise come within 20 years. That might happen by allowing inventors some breathing room to invest in commercialization and dissemination of the invention. That might even happen by ending duplicative (wasteful) races carried out in secret. And all of these things might create costs, perhaps tremendous costs for some who come later.
To be sure, there is great (and I do mean tons of) study and debate about whether any of these benefits actually materialize and outweigh the costs. The analysis, though, takes place at a higher level than whether each and every invention would have come about within 20 years. That analysis – or something like it – certainly has its place, but not when assessing the system as a whole. And that's all I have to say about that.
Thanks again to Prawfsblawg for having me back. I enjoyed my stint, in what may be the most active commenting I’ve received (which may not be a good thing!).
Tuesday, November 27, 2012
Two Worlds of Software Patents
I recently participated in Santa Clara Law School's great conference on "Solutions to the Software Patent Problem." The presentations were interesting and thoughtful, and...short! A total of 34 presentations in one day, including some Q&A from the audience. Op-Eds from the conference are continuing to appear at Wired Magazine's blog, and Groklaw has a fairly thorough article summarizing the presentations.
I want to focus this post on an epiphany I had at the conference, one that is alluded to at the end of the Groklaw article. In short, there appear to be at least two world views of software patenting (there is probably a third view, relating to natural rights and property, but I'm going to put that one to the side). More after the jump.
On the one hand, you have the utilitarians, who believe that the costs of patenting might be worth the benefits of patenting. Or maybe they aren't, but that's the important question to them: to what extent does allowing software patents drive innovation? The Groklaw article implies that this group is primarily large corporate interests, but I think that's too restrictive. For example, I'm unabashedly a member of this world view, and my affinity is toward start-ups.
On the other hand, you have what I'll call the friends of free software (more fully, FOSS - Free and Open-Source Software). These individuals believe that software is thought, and math, and that no one can own it. I've found that some take this view to the extreme - they have no problem with a circuit that performs the same function as software, so long as it is implemented in hardware. Members of this group believe that software should be unpatentable as a matter of principle, and that allowing any kind of software patenting will lead to bad things for individual programmers, for free software, and for the world generally. As further evidence that the divide is not just about large corporate interests, there are plenty of people who subscribe to this world view who have started large, successful companies.
Now, here is the epiphany - I believe that bridging these two worlds is impossible if one believes that any software patent should issue. (If you agree that software patents can never satisfy utilitarian ends, then you can bridge the worlds. Benson Revisited by Pamela Samuelson is a great example of such a bridge.)
Believe me, I tried to make the leap. I wrote a lengthy post at Groklaw that garnered more than 1300 comments where I tried to better understand the free software view and they tried to understand mine.
Surely, I thought, they might see that there are some lines that can be drawn that would allow for inventive software innovations. Surely, I thought, we can discuss some tweaks that would help alleviate the deleterious effects of low quality patents but save the system for one good software patent.
Surely, they thought, I would see how software patents are a bane to society, and must just go. Surely, they thought, I would see that there is no such thing as a good software patent.
The problem is that the goals of each world view are just too different. The following exchange from the Santa Clara conference between John Duffy and Richard Stallman drives the point home. I'm paraphrasing the statements, of course:
[Stallman's keynote]: Companies don't need software patents to innovate - just look at the rise of Google. [later] My proposal is that we can enforce software patents in standalone devices but not in general purpose computers.
[Duffy's talk]: I'm glad Stallman points out that software companies don't need patents - I think we agree on a solution. My proposal is that if an inventor is not induced to invent because of the prospect of a patent, then the invention is obvious and no patent should issue. [later] Stallman's proposal, though, is a kludge - a patch on the system rather than an elegant solution like redefining obviousness.
[Stallman in response to Duffy]: It doesn't matter if the patent induced the invention, it is still a bad patent. It may actually be worse, because now it can't be invalidated. My solution is not a kludge, because it handles the very real problem of software patents and eliminates it.
[Duffy]: But you have to look at the ex ante incentive to invent. If we don't allow patent enforcement, inventions might not happen that would have happened with the patent system.
[Stallman]: It's OK if we don't get those inventions. Maybe they will be developed, maybe they won't, maybe they will take longer, but the harm to any future software programmer/company is never justified by encouraging that investment with a patent.
And there you have the core of the problem. Utilitarians like Duffy (and me) believe that the ex ante incentive to innovate is worth preserving, while honing the system to minimize collateral damage. Free software folks like Stallman (and probably 99% of Groklaw readers) believe that the ex ante incentive never justifies the collateral damage in any practical way.
You can see the core of these arguments in the debate about whose solution is elegant and whose is a kludge. Duffy believes that tweaking inducement to invent is elegant because that's what utilitarianism is all about. Merely barring patents on general purpose computers is a patch, because there might be valuable innovations in the use of general purpose computers that are worth encouraging. Investment in standalone software might decline if there is no general purpose application at the end of the rainbow, especially in the age of smartphones.
On the other hand, Stallman believes that barring enforcement on general purpose computers is elegant, because it eliminates the most harmful effects to programmers. He believes that changing obviousness is a kludge, because it refuses to acknowledge that even the patents that come from the new rules will be bad for society. As Stallman commented to me after the conference: "There may be weak patents, and there may be strong patents, but they are all bad patents."
So, where does that leave us? I don't know, but I have to think it is helpful to understand why we can't seem to understand each other. I'm not sure where it leaves the utilitarians. They seem to be winning in policy circles, as this recent speech by PTO director David Kappos shows, but utilitarians can't even seem to agree among themselves on the best course of action with software patents. Perhaps this recognition will aid those with the free software view in honing their arguments in a way that will get more policy traction - by making the same important points, but somehow framing them in a language utilitarians will hear. Samuelson's Benson Revisited article is a good example.
UPDATE: Thank you all for the thoughtful comments. It's really the only way to know anyone is reading at all. Because there are some common themes in the comments, I thought I would respond here rather than in a long comment.
Theme 1: Software is just ideas and math; the debate isn't utilitarian because you never get to patenting in the first place. I would submit that this is evidence of a separate world view (and one widely shared - by calling it separate, I don't mean to disparage it). However, it also reveals an important definitional divide - one I thought about putting into the main post, but then decided against as it ran too far afield of my point. Maybe I was wrong about that. The question is: what is software? One comment below essentially says, "Well, of course circuits are patentable and software isn't. Software is just abstract math." The problem is that most patents don't claim just the abstract math part. They claim "the steps of making A happen by doing X, Y, and Z." Once you view a patent that way, a circuit and software are equivalently infringing if they do X, Y, and Z - they are the same - a means to some other patented end. This is another bridging difficulty. FOSS folks may think utilitarians are not hearing their points about math, but reading some of these comments (asking me to "wake up," for example) sure makes me feel like my points about how process patents work are not being understood.
Theme 2: Software patents harm innovation. This might be true, but you have to look at the benefits on the front end. It's an ex post v. ex ante thing. Utilitarians will accept harm to innovation on the back end if they can get the benefits on the front end. This leads to...
Theme 3: Software patents don't help innovation on the front end. This is facially a utilitarian argument, I will admit. Commenters ask, "show me the evidence of such benefits." And that is what utilitarians debate about - whether the evidence is there.
But here's the thing - and the reason why I put in the Duffy/Stallman exchange that was so eye-opening for me. Duffy was quite clear: If there is no software patent that would have been induced by the patent system, then fine, there should be no software patents. He thought there was agreement. But Stallman was quite clear that no, even if there was such a patent that withstood that test, that survived the evidence, it would still be bad and should still be unenforceable. That was the point of my post. For all those people who say there is no evidence, I ask you: what if that evidence came? What then? Would you change your mind? I suspect most would say no.
Thursday, November 15, 2012
Sowing Self-Replicating Seeds of Change
The Supreme Court will soon be hearing oral arguments for another case pitting agricultural giant Monsanto against a farmer, Vernon Bowman, whom Monsanto accuses of infringing its genetic seed patents. First some background, then some patent law, then a handful of questions...
At the beginning of any given soybean season, Bowman plants a first soybean crop using expensive patented seeds containing a genetic modification that makes the resulting plants resistant to Roundup® (a common herbicide not coincidentally also manufactured by Monsanto). Bowman harvests the soybeans and sells them to a local grain elevator. Later in the growing season, Bowman plants a riskier (because it’s so late) second soybean crop, but instead of using expensive patented seeds purchased from Monsanto’s licensees, Bowman buys cheap commodity seeds from a grain elevator (maybe even the same ones he sold the year before; commodity seeds are saved from the previous year’s growing season). Because so many farmers use Roundup Ready® brand seeds from Monsanto, about 94% of the undifferentiated seeds available for purchase from the grain elevator display Roundup Ready® traits.
Bowman (and anyone who purchases patented seeds from Monsanto’s licensees) must enter into a technology agreement with Monsanto that forbids him from replanting saved seeds but does not forbid him from selling his harvested crop to a grain elevator (for obvious reasons). Bowman is not violating his license agreement. The problem? The harvested crop consists of seeds ready for replanting (soybeans, by their nature, self-replicate). Bowman and his fellow farmers avoid buying expensive seeds for that risky second season: if everyone sells their seeds back to the grain elevator in season 1, then when the seeds are bought in season 2, they will be predominantly Roundup Ready® at a fraction of the price. Monsanto sued, claiming that Bowman infringes regardless of being square with his agreement because he is growing Roundup Ready® plants without permission.

Patent law can be tricky in this area. The gist is that once the patent owner makes an authorized sale of a patented good, the good can move in commerce (it can be used, sold or offered for sale) without liability to the patent owner for infringement. This is a pretty unremarkable doctrine. If I buy a ballpoint pen covered by a patent, I can resell it on eBay or write a letter with it without paying more to the patentee. Where it gets tricky: determining what “authorized sale” means in a world of licenses and post-sale restrictions. For a long time, courts have refused to enforce post-sale restrictions as to price and territory—we hate those because they restrain the alienation of chattels, or they give us antitrust heebie-jeebies, or both (for example, if my pen had a label that said I could only use it in Louisiana or that I could only resell it for $2). But we’ve held onto field of use licenses (e.g., a radio manufacturer may be licensed only to manufacture radios for home use) such that when the license is violated (the licensee sells for commercial use in violation of the license), we find infringement.
Post-sale restrictions not involving price or territory must be reasonable and not outside the scope of the patent in order to prevent an authorized sale (and thus exhaustion of patent rights) from occurring. For example, when I stamp my patented pre-filled syringe with “no refills, single use only” and sell it to you, provided I’m not violating antitrust laws and provided my patent claims are reasonably related to this condition, when you refill it and sell it to someone else, you are infringing—it’s not an authorized sale because, according to the Federal Circuit in Mallinckrodt, it’s a sale conditioned by the post-sale restriction. These questions were supposed to be resolved by the Court’s 2008 case, Quanta v. LG, but the Court was able to punt them by resolving that case narrowly on the terms of the license to the manufacturer (which was broad enough to give the licensee all of the rights necessary to exhaust the patent with respect to sales to downstream purchasers).
How should the Court decide Bowman then? Monsanto conditionally licenses to the seed companies and the seed companies conditionally sell to the farmers. The Court could hold that the sales are conditioned, so there can be no exhaustion as to farmers who purchase the seed; Monsanto wins. (The Federal Circuit might like this view, as it comports with Mallinckrodt and its other seed cases, Scruggs and McFarland.) Or it could hold that because none of these conditions are violated, the patent rights to use and sell the patented technology (the seeds) are exhausted—I’m not infringing when I buy the seeds from the grain elevator, sell them on eBay, or use them as jewelry. But that still leaves the patentee’s right to exclude others from making the patented technology. When Bowman buys the undifferentiated commodity seeds from the grain elevator, plants them during his later season and grows plants that produce second-generation seed, is he making the second-generation seeds in violation of the patent right to make? And when he sells these second-generation seeds, is he running afoul of Monsanto’s right to exclude others from using or selling the technology, creating a perpetual loop of growing and making unauthorized sales back to the grain elevator pool?
The Supreme Court has to figure out whether to clarify (or shutter) Mallinckrodt’s conditional sale doctrine and reconcile Quanta and early twentieth century case law born during a time of great suspicions regarding patent monopolies, or create a rule of law specifically addressing how this all works for self-replicating technology. The term “make” in this context makes very little sense—the licensee seed companies insert the modification into the genome of their seed lines and then produce seeds, the only true manufacturing done in the case. Those seeds carry genetic material that dictates traits, and sexually reproduce to create copies of the genetically modified seeds. Notably, the Plant Variety Protection Act of 1970, a special (and limited) patent right directed toward sexually propagating plant varietals, never mentions the word “make”, just the using and selling of the patented invention. Will the Court recognize, as in the PVPA, that authorizing what you do with the seed (sell to neighbors, keep for yourself, sell to a grain elevator as a commodity, etc.) generates the value for the patentee rather than authorizing creation?
More importantly, as a matter of patent policy, is it better to scrap Mallinckrodt and state more clearly that no post-sale restrictions are enforceable as a matter of both patent and contract law, or would it be better to acknowledge the ill fit of self-replicating technology (seeds in this case, but also nanotechnology and other things on the horizon) in our exhaustion law and create a carve out that leaves post-sale restrictions, whatever may be left after Quanta, intact? Or some third way?
Software Patents and the Smartphone
I will be speaking at Santa Clara Law School's outstanding conference about Solutions to the Software Patent Problem tomorrow. It promises to be a great event, with academics, public interest advocates, and government officials all weighing in.
As a lead-in to the conference, I want to discuss an oft-repeated statistic: that there are 250,000 patents that might be infringed by any given smartphone. I'm going to assume that number is accurate; I have no reason to doubt it. This number, many argue, is a key reason why we must have wholesale reform - no piecemeal action will solve the problem.
Here are my thoughts on the subject:
1. Not all of these patents are in force. Surely, many of them expired due to lack of maintenance fee payments.
2. Not all of the remaining patents are asserted. After all, we don't see every smartphone manufacturer being sued 250,000 times.
3. Many of these patents are related to each other or are otherwise aggregated together. Thus, there are opportunities for global settlements.
4. Even if you think that 250,000 is a huge number of patents (and it is, really - there's no disputing that), it is unclear to me why anyone is surprised by the number when you consider what's in a smartphone. More specifically:
- A general purpose computer and all that comes with it (CPU, RAM, I/O interface, operating system, etc.).
- Active matrix display
- Touch screen display
- Cellular voice technology
- 1x data networking
- 3G data networking
- 4G data networking
- Wi-Fi data networking
- Bluetooth data networking
- GPS technology (and associated navigation)
- Accelerometer technology
- Digital camera (including lens and image processing)
- Audio recording and playback
- Battery technology
- Force feedback technology (phone vibration and haptic feedback)
- Design patents
The areas above are by and large "traditional" patent areas - they aren't software for the most part. And there are thousands of patents in each category, before we even get to the potential applications of the smartphone that might be patented (and these are of greater debate, of course).
So, yes, there are many, many patents associated with the smartphone, but what else would you expect when you cram all of these features into a single device? Perhaps smartphones are the focus of the software patent problem because, well, they do everything, and so they might infringe everything. I'm not convinced that this should drive a wholesale reform of the system. Maybe it just means that smartphones are underpriced given what they include. Not that I'm complaining.
Tuesday, November 13, 2012
A couple of years ago, Chief Justice Roberts presided over our College of Law’s moot court competition and our faculty held a reception in his honor. After I was introduced to the Chief Justice as our faculty’s one and only patent person, he turned to me and asked, “Are we getting our patent cases about right?” Flustered, I answered, “I think so, sir. They're difficult cases and your opinions are very thoughtful.” I’ve been thinking about this very brief exchange quite a bit these days. The Court has four more intellectual property cases on its docket this term: Kirtsaeng v. John Wiley & Sons (whether copyright exhaustion applies to works purchased legally overseas and imported), Already, LLC v. Nike, Inc. (whether a covenant not to sue for trademark infringement moots a declaratory judgment counterclaim for invalidity), Bowman v. Monsanto Co. (whether the first sale doctrine precludes infringement liability for using seeds produced by GMO plants purchased under a limited license) and Gunn v. Minton (whether federal courts and the Federal Circuit have jurisdiction over a legal malpractice claim involving an underlying patent case). The Court has already heard arguments in Kirtsaeng and Already; the transcripts can be found here and here. These cases (copyright and trademark, respectively) likely will have implications for patent law, and Bowman and Gunn address two patent issues that have been percolating in the Federal Circuit and lower courts for some time.
Will the Court get these cases “about right”? We’ll soon find out.
(The Chief Justice was very gracious, by the way. It was a memorable exchange.)
Thursday, November 08, 2012
Cease and Desist
For nearly 10 years, scholars, commentators, and disappointed downloaders have criticized the now-abandoned campaign of the Recording Industry Association of America (RIAA) to threaten litigation against, and in some cases sue, downloaders of unauthorized music. The criticisms follow two main themes. First, demand letters, which mention statutory damages of up to $150,000 per infringed work (if the infringement is willful), often lead to settlements of $2,000 - $3,000. A back-of-the-envelope cost-benefit analysis would suggest this is a reasonable response from the recipient if $150,000 is a credible threat, but for those who conclude that information is free and someone must challenge these cases, the result is frustrating.
Second, it has been argued that the statutory damages regime itself is unconstitutional, at least as applied to downloaders, because it is completely divorced from any actual harm suffered by the record labels. The constitutional critique has been advanced by scholars like Pam Samuelson and Tara Wheatland, accepted by a district court judge in the Tenenbaum case, dodged on appeal by the First Circuit, but rejected outright by the Eighth Circuit. My intuition is that the Supreme Court would hold that Congress has the authority to craft statutory damages sufficiently high to deter infringement, and that there's sufficient evidence that Congress thought its last increase in statutory damages would accomplish that goal.
We could debate that, but I have something much more controversial in mind. I hope to convince you that the typical $3,000 settlement is the right result, at least in file-sharing cases.
The Copy Culture survey indicates that the majority of respondents who support a penalty favor fines for unauthorized downloading of a song or movie. Of those who support fines, 32% support a fine of $10 or less, 43% support fines of up to $100, 14% support fines of up to $1,000, 5% support higher fines, 3% think fines should be context sensitive, and 3% are unsure. The weighted average of the maximum fines for the top three groups is about $209. Let's cut it in half, to roughly $100, because roughly half of survey respondents were opposed to any penalty.
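If you want to check the $209 figure, it falls out of a quick weighted average over the three largest pro-fine groups; a sketch (using each bracket's upper bound - $10, $100, $1,000 - as that group's supported fine):

```python
# Copy Culture survey: share of pro-fine respondents paired with the
# maximum fine that group supports (each bracket's upper bound).
groups = [(0.32, 10), (0.43, 100), (0.14, 1000)]

total_share = sum(share for share, _ in groups)             # 0.89
weighted_sum = sum(share * fine for share, fine in groups)  # 186.2
avg_max_fine = weighted_sum / total_share

print(round(avg_max_fine))      # 209
print(round(avg_max_fine / 2))  # 105, i.e., roughly $100 after halving
```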
How big is the typical library of "illegally" downloaded files? 10 songs? 100 songs? 1,000? The Copy Culture study reports the following from survey respondents who own digital files, by age group:
18-29: 406 files downloaded for free
30-49: 130 files downloaded for free
50-64: 60 files downloaded for free
65+: 51 files downloaded for free
In the two cases that the RIAA actually took to trial, the labels argued that the defendants had each downloaded over 1,000 songs, but sued over 30 downloads in one case, and 24 downloads in the other. As I see it, if you're downloading enough to catch a cease and desist letter, chances are good that you've got at least 30 "hot" files on your hard drive.
You can see where I'm going here. If the average target of a cease and desist letter has 30 unauthorized files, and public consensus centers around $100 per unauthorized file, then a settlement offer of $3,000 is just about right.
Four caveats. First, maybe the Copy Culture survey is not representative of public opinion, and the number should be far lower than $100. Second, misfires happen with cease and desist letters: sometimes, individuals are mistargeted. One off-the-cuff response is to have the RIAA pay $3,000 to every non-computer user and the estate of every dead grandma who gets one of these letters.
Third, this doesn't take fair use into account, and thus might not be a fair proxy for many other cases. For example, the Righthaven litigation seems entirely different to me - reproducing a news story online seems different than illegally downloading a song instead of paying $1, in part because the news story is closer to copyright's idea line, where more of the content is likely unprotectable, and because the redistribution of news is more likely to be fair use.
Fourth, it doesn't really deal with the potentially unconstitutional / arguably stupid possibility that some college student could be ordered to pay $150,000 per download, if a jury determines he downloaded willfully. I'd actually be happy with a rule that tells the record labels they can only threaten a maximum damage award equal to the average from the four jury determinations in the Tenenbaum and Thomas-Rasset cases. That's still $43,562.50 per song. Round it down to the non-willful statutory cap, $30,000, and I still think that a $3,000 settlement is just about perfect.
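The $43,562.50 average back-computes from the four verdicts; the per-song figures below are my own reconstruction from the reported awards ($222,000, $1.92 million, and $1.5 million for 24 songs in the three Thomas-Rasset trials, and $675,000 for 30 songs in Tenenbaum):

```python
# Per-song damages implied by each of the four jury verdicts
# (reported award divided by the number of songs at issue).
per_song = [
    222_000 / 24,    # Thomas-Rasset, first jury:  $9,250 per song
    1_920_000 / 24,  # Thomas-Rasset, second jury: $80,000 per song
    1_500_000 / 24,  # Thomas-Rasset, third jury:  $62,500 per song
    675_000 / 30,    # Tenenbaum:                  $22,500 per song
]

average = sum(per_song) / len(per_song)
print(average)  # 43562.5
```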
Now tell me why I'm crazy.
Saturday, November 03, 2012
"The past is never dead. It's not even past."
Hello all and a tremendous thank you to Dan and the PrawfsBlawg crew for having me this month! I'm usually thinking about patent law, but today I've got a short note on other IP...
Last week, the estate of William Faulkner filed two lawsuits over quotes from Faulkner works – one against Sony for the movie Midnight in Paris, featuring a misquote of the above quotation from Faulkner's novel Requiem for a Nun, and the other against the Washington Post and Northrop Grumman for an ad featuring a quote from a Harper’s piece on civil rights. The complaints can be found here and here. Both suits allege three causes of action: copyright infringement, trademark infringement and misappropriation for commercial advantage of Faulkner’s likeness and image. I’m guessing that the suits spark a little surprise and outrage in most folks, folks who feel like this shouldn’t be actionable copyright or trademark infringement because the use seems like a fair (and quite common) one. And I believe these mildly outraged folks would be right—both copyright and trademark fair use doctrines appear to protect this sort of use. We might give Faulkner his due on the misappropriation count, however—after all, the movie and ad are clearly commercial endeavors. Yet without delving too deeply into right of publicity torts, it seems reasonable that an incidental use in a work of entertainment like a movie does not trigger liability. The ad may be a more difficult case because it’s not an entertaining work of fiction.
This is really interesting to Faulkner (and Woody Allen) fans. But an IP fan might have a couple more questions. If everyone but the Faulkner estate thinks this case is a slam-dunk loser, why file it? And why pick these as your first suits ever in defending the estate’s intellectual property? One hypothesis: These suits are a shot across the bow to moviemakers, ad men and other creative types who want to quote (or misquote) Faulkner without getting the estate's permission (a suggestion made by BC's Dave Olson here).
Another hypothesis: Assume the misappropriation claim with respect to the Northrop Grumman ad has a chance (however slim) of winning for the estate. Perhaps buttressing the state tort claim with two federal infringement claims frames the case as one of intellectual property rights and, in doing so, legitimizes and strengthens the state claim. Having a conversation about the quotes as protected expression or as protected marks (however weak those claims may be) may set up the misappropriation conversation more favorably for Faulkner than if it stood alone. The movie case (and others in the future) has to be filed to keep the momentum going—building a case for respect for the estate's IP rights, whatever they may be.
As an aside, the estate appears recently to have licensed a quote to the television show Modern Family. This raises a question I usually ask my students: Should the fact that some people seek and pay for permission inform our decision on whether those who do not seek permission and/or do not pay for similar uses are using fairly? For example, Weird Al usually gets permission for his songs even though they seem like fair uses after Campbell v. Acuff-Rose, and when Lady Gaga would not grant permission, he created a parody of her song anyway, relying on fair use.
Looking forward to a great month here!
Friday, October 26, 2012
The Future of Books at NYLS
I'm at a wonderful conference, In re Books, hosted by New York Law School and Internet / copyright guru James Grimmelmann, and you can attend, at least virtually: http://www.nyls.edu/centers/harlan_scholar_centers/institute_for_information_law_and_policy/events/upcoming_conferences/in_re_books/webcast
Thursday, October 25, 2012
Copyright's Serenity Prayer
I recently discovered an article by Carissa Hessick, where she argues that the relative ease of tracking child pornography online may lead legislators and law enforcement to err in two ways. First, law enforcement may pursue the more easily detected possession of child pornography at the expense of pursuing actual abuse, which often happens in secret and is difficult to detect. Second, legislators may be swayed to think that catching child porn possessors is as good as catching abusers, because the former either have abused, or will abuse in the future. Thus, sentences for possession often mirror sentences for abuse, and we see a potential perversion of the structure of enforcement that gives a false sense of security about how much we are doing to combat the problem.
With the caveat that I know preventing child abuse is much, much more important than preventing copyright infringement, I think the ease of detecting unauthorized Internet music traffic may also have troubling perverse effects.
When I was a young man, copying my uncle's LP collection so I could take home a library of David Bowie cassette tapes, there was no way Bowie or his record label would ever know. The same is true today, even though they now make turntables that will plug right into my computer and give me digital files that any self-respecting hipster would still disdain, but that at least require me to flip a vinyl disc as my cost of copying.
On the other hand, it's much easier to trace free-riding that occurs online. That was part of what led to the record industry's highly unpopular campaign against individual infringers. Once you can locate the individual infringer, you can pursue infringement that used to be "under the radar." The centralized, searchable nature of the Internet also made plausible Righthaven's disastrous campaign against websites copying news stories, and the attempt by attorney Blake Field to catch Google infringing his copyright by posting material on his website for Google's automated crawlers to copy.
What if copyright owners are chasing the wrong harm? For example, one leaked RIAA study suggests that while a noticeable chunk of copyright infringement occurs via p2p sharing, it's not the largest chunk. The RIAA noted that in 2011, 6% of unauthorized sharing (4% of total consumption) happened in locker services like Megaupload, and 23% (15%) via p2p, while 42% (27%) of unauthorized acquisition was done by burning and ripping CDs from others, and another 29% (19%) happened through face-to-face hard drive trading. Offline file sharing is apparently more prevalent than the online variety, but it is much more difficult to chase. So it may be that copyright holders chase the infringement they can find, rather than the infringement that most severely affects the bottom line.
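Summing the study's buckets makes the detectability gap explicit (shares of unauthorized acquisition, from the figures quoted above):

```python
# Leaked 2011 RIAA study: shares of unauthorized music acquisition,
# grouped by whether the channel is traceable online or offline.
online = {"locker services (e.g., Megaupload)": 6, "p2p sharing": 23}
offline = {"burning/ripping from others": 42, "hard drive trading": 29}

print(sum(online.values()))   # 29 -> the relatively traceable share
print(sum(offline.values()))  # 71 -> the hard-to-detect share
```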
In a way, leaning on the infringement they can detect is reminiscent of the oft-repeated "Serenity Prayer," modified here for your contemplation:
God, grant me the serenity to accept the infringement I cannot find,
The courage to crush the infringement I can,
And the wisdom to know the difference.
All this brings me back to the friends and family question. The study on Copy Culture in the U.S. reports that roughly 80% of adults owning music files think it's okay to share with family, and 60% think it's okay to share with friends. In addition, the Copyright Act specifically insulates friends and family sharing in the context of performing or displaying copyrighted works to family and close friends in a private home (17 USC s. 101, "publicly"). Thus, there is some danger in going after that friends and family sharing. If the family and friends line is the right line, can we at least feel more comfortable that someone to whom I'm willing to grant physical access to my CD library is more of a "real" friend than my collection of Facebook friends and acquaintances, some of whom will never get their hands on my vinyl phonograph of Blues and Roots?
Tuesday, October 23, 2012
The New Normal
Two news items from across the pond highlight the adaptability of musicians, but also a shift from music as a good to music as an experience, necessitated by the ubiquity of file sharing.
+, the debut album by British singer and producer Ed Sheeran, has apparently been downloaded illegally more than any album in the U.K. this year. Sheeran is sanguine about the whole thing, gushing on Twitter about purchasers and free-riders alike, because he concludes that both types of fans are buying tickets, and as Sheeran puts it, "I'm still selling albums, but I'm selling tickets at the same time. My gig tickets are like £18, and my albums £8, so ... it's all relative."
Venerable British pop stars Squeeze are also moving to a more DIY, performance-based financial model this year. Fans who attend concerts can choose to purchase a download of the show at a "pop-up" shop after each performance, and meet the band as well. To date, this is the only way for fans to get their hands on Squeeze's first new songs in 14 years...at least until they are posted online. Squeeze founder Glenn Tilbrook is also excited about this brave new world. Tilbrook states, "I love the opportunities and surprises thrown up by the digital age and the fading away of the major labels. Being able to innovate and take control of our own destiny is something I could only have dreamt of back then." And for bands like Squeeze, the old label-centric business model may well have passed them by. As Tilbrook notes, "With the traditional record label no longer relevant for us, our relationship with the merchandisers is increasingly important in order to help us deliver quality products for our fans."
As I postulated a few months ago, with regard to comic books offered online, I can't help but wonder whether the end result will be less professionally crafted music because the system will support fewer professional craftspeople, or whether we'll just get more artists who are comfortable with a DIY aesthetic, and fewer who rely on big machinery or well-placed intermediaries to make things happen.
It may be that the most important thing a new artist can do is leverage networks and relationships. Here's an example: I'm a huge Josh Ritter fan. Chris Thile's band, Punch Brothers, recently covered a Ritter song, and offered a free download of it for fans who purchased the new Punch Brothers EP. How did I find out? I follow Ritter on Twitter, and he let me know. I wouldn't have otherwise purchased the Punch Brothers EP, but was excited about this opportunity. Once upon a time, you could rely on certain labels for a certain aesthetic in their recorded offerings. Relationships between artists might in the future do some of that same work.
Tuesday, October 16, 2012
Chickens and Eggs in Music Consumption
As I blogged about in my previous post, the Copy Culture survey, which looks at music consumption habits in the U.S. and Germany, has leaked in bits and pieces. Another interesting tidbit found its way to TorrentFreak yesterday. It turns out that p2p file sharers, both in the U.S. and Germany, have bigger music collections than non-p2p file sharers. Perhaps more importantly, the file sharers buy more music than their counterparts. U.S. file sharers bought 30% more music, and German file sharers almost 300% more music, than non-p2p luddites. [TorrentFreak has a nice chart that breaks this down for you.] The tone of TorrentFreak's summary suggests that this means file sharers are the best friends the music industry could have, because they love music. I am a bit skeptical, because there's another way to cut the same data, which I share after the break.
File sharers bought more music than non file sharers, but they also obtained more music without paying for it. U.S. file sharers paid for only 38% of the items in their collection, while non file sharers paid for 47% of their music. And the difference was starker for German consumers. German p2p users paid for only 26% of their music, while non-p2p users paid for 60% of their music, although their overall purchases were significantly smaller.
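The two findings are easy to reconcile arithmetically. If U.S. file sharers bought 30% more music in absolute terms while paying for only 38% of their collections (versus 47% for non-sharers), then their collections must be roughly 60% larger overall. A rough sketch (the percentages are from the survey excerpts above; the collection-size ratio is my own inference, not a reported figure):

```python
# U.S. figures quoted above: sharers paid for 38% of their collections,
# non-sharers for 47%, and sharers *bought* 30% more music in absolute terms.
paid_frac_sharer, paid_frac_nonsharer = 0.38, 0.47
purchase_ratio = 1.30  # sharers' purchases / non-sharers' purchases

# purchases = paid_fraction * collection_size, so:
#   collection_sharer / collection_nonsharer = 1.30 * 0.47 / 0.38
collection_ratio = purchase_ratio * paid_frac_nonsharer / paid_frac_sharer
print(round(collection_ratio, 2))  # ~1.61: sharers' collections ~60% bigger
```

In other words, both readings of the data are true at once: file sharers buy more music and pay for a smaller share of what they own.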
Do non-p2p users buy less music than their counterparts because they aren't exposed to as much music as p2p users? Or do p2p users pay for a smaller proportion of their music consumption because they elect to use more unauthorized avenues to acquire music? I don't have an answer. I'm slightly more sympathetic to the latter interpretation than the former, but I'm really interested to see what the full report looks like, when it's released.
Wednesday, October 10, 2012
Friends
Hello all. Glad to be back at Prawfsblawg for another round of blogging. I'm looking forward to sharing some thoughts about entertainment contracts, the orphan works problem in copyright, and the new settlement between Google and several publishers over Google Books. Today, I want to talk a bit about file-sharing and friendship. A recent study asked U.S. and German citizens whether they thought it was "reasonable" to share unauthorized, copyrighted files with family, with friends, and in several different online contexts. Perhaps unsurprisingly, respondents in the 18-29 range responded more favorably to file sharing than older respondents in every context. What interests me is that respondents in every context see a sharp difference between sharing files with friends and posting a file on Facebook. We call our Facebook contacts "friends," but I'm curious why the respondents to this study made the distinction between sharing with friends and sharing on Facebook. I have a few inchoate thoughts, and I'd love to hear what you think.
Megan Carpenter wrote an interesting article about the expressive and personal dimension of making mix tapes. I grew up in the mix tape era as well, and remember well the emotional sweat that I poured into collections of love songs made for teenage paramours in the hopes of sustaining doomed long-distance romances. Carpenter correctly argues that there is something personal about that act, and it seems reasonable that it would fall outside the reach of the Copyright Act. I also remember copying my uncle's entire collection of David Bowie LPs onto cassette tapes when I was in junior high. In that instance, music moved through family connections, and in my small town in Wyoming, there were no cassettes from the Bowie back catalog on the shelves of the local music store. But the only effort involved in making those cassettes was turning the LP at the end of a side. Less expressive, but within a fairly tight social network.
A properly functioning copyright system might reasonably allow for these uses, and still sanction a decision to post my entire Bowie collection on Facebook, or through a torrent. I'm skeptical of any definition of "friends and family" so capacious that it would include Facebook friends, and I suspect that many people realize now, if they didn't then, that what constitutes a face-to-face friend is different than what constitutes a Facebook friend, but you may have a different impression. I hope you'll share it here, whatever it is.
Sunday, July 01, 2012
Trading away the internet
[The following is by my FIU colleague Hannibal Travis (links now have been corrected)]:
Last week, the office of U.S. Trade Representative Ron Kirk denied a request by Darrell Issa, chairman of the House Oversight and Government Reform Committee, that he and his staff be allowed to participate in the next round of negotiations of a new treaty that Issa believes will have an "immense impact" on the U.S. economy. According to the American Civil Liberties Union, the treaty, known as the Trans-Pacific Partnership (TPP) pact, would set the stage for "worldwide crackdowns on Internet activity by a coordinated authority that could work at cross-purposes with the laws and policies of the participating countries." In this, it resembles the Anti-Counterfeiting Trade Agreement (ACTA) and the Stop Online Piracy Act. Expected as early as next year, the TPP would, according to a U.S. proposal, impose civil liability for the removal of copyright terms or other “rights management information” from a copy of a copyrighted work as part of the commission or facilitation of copyright infringement. A draft of another such treaty, being negotiated under the auspices of the World Intellectual Property Organization, provides for civil liability for negligently removing rights management information from an audiovisual performance in order to communicate the performance or make it available to others without permission.
Both the TPP and the WIPO Audiovisual Treaty threaten the Internet by potentially outlawing remix culture and fair use of existing content. Many YouTube videos are mashups of news, entertainment, or public affairs videos with additional commentary or montage. There are already precedents for using the removal of “rights management information” from a remix in an attempt to censor artists and other creators of fair use works. For example, the video search engine Veoh was sued out of existence for storing users’ videos that allegedly contained infringing clips of copyrighted music, even though the Ninth Circuit upheld a lower court ruling that the site complied with the “safe harbor” for Web sites set forth in the Digital Millennium Copyright Act of 1998. The Associated Press sued the artist Shepard Fairey, who made the Obama “Hope” poster, for removing from his stylized version of President Obama’s visage the copyright management information attached to the image of Obama after the Associated Press fixed it in a photograph. It also sued a news aggregator site for providing excerpts of news articles available to subscribers without the original rights management data.
The TPP and the WIPO Audiovisual Treaty go beyond existing law, which prohibits only the intentional removal of rights management information for purposes of infringement (as opposed to fair use): they potentially reach the negligent removal of such information to commit infringement or to encourage or facilitate infringement by another person, or perhaps even the negligent removal of such information to make a fair use. Article 10 of the TPP requires parties to criminalize the intentional removal of copyright management information from copyrighted work en route to infringement, and to impose civil liability even for the negligent removal of the information. Article 16 of the WIPO Audiovisual Treaty requires parties to provide civil remedies against those who negligently facilitate the distribution, importation for distribution, communication or making available to the public, “performances or copies of performances fixed in audiovisual fixations knowing that electronic rights management information has been removed or altered without authority.” This would appear to prohibit, for example, the use of clips of news, films, or television shows with the copyright notices, credits, or contractual use terms intentionally omitted, even when the clips are used in transformative works such as documentary films, news reports, parodies, lip-synching videos, etc. Existing U.S. law has a copyright management information provision (17 U.S.C. s. 1202(a)), but it requires intentional removal or alteration of the information for purposes of infringement, not mere negligence.
Article 5 of the WIPO Audiovisual Treaty states that independently of any economic rights, and even after the transfer of them, a performer has a right as to fixations of live performances “to object to any distortion, mutilation or other modification of his performances that would be prejudicial to his reputation, taking due account of the nature of audiovisual fixations.” This of course could lead to endless litigation concerning mashups and the like, redolent of the attempts to restrict the fair use of songs, music videos, and video games in Lenz v. Universal, Lewis Galoob v. Nintendo, and Campbell v. Acuff-Rose Music. Article 6 seems to grant a broad new right to restrict the "communication" of unfixed performances not already broadcast, exceptions to which “may” but do not need to be granted. Goodbye to YouTube access to rare concert footage?
Even worse, Article 17 of the WIPO Audiovisual Treaty forbids any formalities in the audiovisual performance ownership right, which threatens to prevent formalities from playing the many important roles they serve in U.S. copyright law. As Jason Mazzone has argued: “The U.S. Copyright Office registers copyrighted works, but there is no official registry for works belonging to the public. As a result, publishers and the owners of physical copies of works plaster copyright notices on everything. These publishers and owners also restrict copying and extract payment from individuals who do not know better or find it preferable not to risk a lawsuit. These circumstances have produced fraud on an untold scale....” Imagine the scale of the ownership-related misrepresentations that will occur as audiovisual performances are protected but formalities such as registration are done away with. John Bergmayer of Public Knowledge has argued that: “Creating new kinds of ‘middleman rights’ could increase the complexity of dealing with content exponentially. It could give broadcasters the right to prevent recording shows for later viewing, or even effectively remove works from the public domain.”
Although Article 13 of the WIPO Audiovisual Treaty provides that parties may provide similar exceptions and limitations to the audiovisual protection right as they do for copyright, paragraph 2 of that article states that such exceptions and limitations must not injure the normal licensing expectations of the owner of any right in the audiovisual performance. Thus, the USA could be brought up on World Trade Organization claims (or threatened claims) for allowing fair use of audiovisual clips on YouTube, and be pressured to reform its copyright law in response; there are precedents for this from the 1989 copyright reform and the 2011 patent reform.
The threat to the Internet is compounded by the process by which the TPP or the WIPO Audiovisual Treaty may be adopted. The TPP is apparently being characterized as a “sole executive agreement,” which will not be ratified by the Senate or passed by the Congress in the form of legislation. This removes many of the checks and balances that prevented the Clinton administration’s proposals to impose crippling copyright liability on Internet pioneers from becoming law, yielding instead the safe harbors of section 512 of the Digital Millennium Copyright Act of 1998. Unlike most international criminal laws, as well as the North American Free Trade Agreement and the WIPO Copyright Treaty, the TPP and the WIPO Audiovisual Treaty may not be submitted for the advice and consent of the Senate. This prompts the ACLU to complain that the treaty “negotiations are being conducted behind closed doors with details shared only with ... top executives from [such corporations as] AT&T, Verizon, the RIAA, the pharmaceutical lobby, and Cisco.” Most sole executive agreements have not been criminal law reforms, intellectual property expansions, or vast new trade pacts, but rather have governed the discretion of the executive in military, nuclear, aviation, scientific, postal, and international financial affairs. A defender of the TPP or WIPO Audiovisual Treaty might respond that Congress would have to act before Americans could face civil liability or criminal charges for their YouTube videos or other remixes, but the federal government has been seizing Web sites that do not themselves infringe copyrights. The fear is that the many vague clauses in treaties such as the TPP, ACTA, or WIPO Audiovisual Treaty will lead to the end of the Internet as we know it, as Internet companies are forced to edit out the quotation of copyrighted material, and all Internet traffic is inspected by the state.
Thursday, June 07, 2012
The Virtual Honesty Box
As a fan of comic book art, I'm often thrilled to encounter areas where copyright or trademark law and comic books intersect. As is the case in other media, the current business models of comic book publishers and creators have been threatened by the ability of consumers to access their work online without paying for it. Many comic publishers are worried about easy migration of content from paying digital consumers to non-paying digital consumers. Of course, scans of comics have been making their way around the internet on, or sometimes before, a given comic's on-sale date for some time now. As in other industries, publishers have dabbled with DRM, and publishers have embraced different (and somewhat incompatible) methods for providing consumers with authorized content. Publishers' choices sometimes lead to problems with vendors and customers, as I discuss a bit below.
While services like Comixology offer a wide selection of content from most major comics publishers, they are missing chunks of both the DC Comics and Marvel Comics catalogues. DC entered a deal to distribute 100 of its graphic novels (think multi-issue collections of comic books) exclusively via Kindle. Marvel Comics subsequently struck a deal to offer "the largest selection of Marvel graphic novels on any device" to users of the Nook.
Sometimes exclusive deals leave a bad taste in the mouths of other intermediaries. DC's graphic novels were pulled from Barnes & Noble shelves because the purveyor of the Nook was miffed. Independent publisher Top Shelf is an outlier, offering its books through every interface and intermediary it can. But to date, most publishers are trying to make digital work as a complement to, and not a replacement for, print.
Consumers are sometimes frustrated by a content-owner's choice to restrict access, so much so that they feel justified engaging in "piracy." (Here I define "piracy" as acquiring content through unauthorized channels, which will almost always mean without paying the content owner.) Some comics providers respond with completely open access. Mark Waid, for example, started Thrillbent Comics with the idea of embracing digital as digital, and in a manner similar to Cory Doctorow, embracing "piracy" as something that could drive consumers back to his authorized site, even if they didn't pay for the content originally.
I recently ran across another approach from comic creators Leah Moore and John Reppion. Like Mark Waid, Moore and Reppion have accepted, if not embraced, the fact that they cannot control the flow of their work through unauthorized channels, but they still assert a hope, if not a right, that they can make money from the sales of their work. To that end, they introduced a virtual "honesty box," named after the clever means of collecting cash from customers without monitoring the transaction. In essence, Moore and Reppion invite fans who may have consumed their work without paying for it to even up the karmic scales. This response strikes me as both clever and disheartening.
I'll admit my attraction to perhaps outmoded content-delivery systems -- I also have unduly fond memories of the 8-track tape -- but I'm disheartened to hear that Moore and Reppion could have made roughly $5,500 more working minimum wage jobs last year. Perhaps this means that they should be doing something else, if they can't figure out a better way to monetize their creativity in this new environment. Eric Johnson, for one, has argued that we likely don't need legal or technological interventions for authors like Moore and Reppion in part because there are enough creative amateurs to fill the gap. The money in comics today may not be in comics at all, but in licensing movies derived from those comics. See, e.g., Avengers, the.
I hope Mark Waid is right, and that "piracy" is simply another form of marketing that will eventually pay greater dividends for authors than fighting piracy. And perhaps Moore and Reppion should embrace "piracy" and hope that the popularity of their work leads to a development deal from a major film studio. Personally, I might miss the days when comics were something other than a transparent attempt to land a movie deal.
As for the honesty box itself? Radiohead abandoned the idea with its most recent release, The King of Limbs, after the name-your-price model adopted for the release of In Rainbows had arguably disappointing results: according to one report, 60% of consumers paid nothing for the album. I can't see Moore and Reppion doing much better, but maybe if 40% of "pirates" kick a little something into the virtual honesty box, that will be enough to keep Moore and Reppion from taking some minimum wage job where their talents may go to waste.
Friday, June 01, 2012
Oracle v. Google - The Other Shoe Drops
For those of you following the Oracle v. Google case, as I predicted here, the court has held that the APIs that Google copied are not copyrightable - at least not in the form that they were used. The case is basically dismissed with no remedy to Oracle.
Thursday, May 31, 2012
A Coasean Look at Commercial Skipping...
Readers may have seen that DISH has sued the networks for declaratory relief (and was promptly cross-sued) over some new digital video recorder (DVR) functionality. The full set of issues is complex, so I want to focus on a single issue: commercial skipping. The new DVR automatically removes commercials when playing back some recorded programs. Another company tried this many years ago, but was brow-beaten into submission by content owners. Not so for DISH. In this post, I will try to take a look at the dispute from a fresh angle.
Many think that commercial skipping implicates derivative work rights (that is, transformation of a copyrighted work). I don't think so. The content is created separately from the commercials, and different commercials are broadcast in different parts of the country. The whole package is probably a compilation of several works, but that compilation is unlikely to be registered with the copyright office as a single work. Also, copying the work of only one author in the compilation is just copying of the subset, not creating a derivative work of the whole.
So, if it is not a derivative work, what rights are at stake? I believe that it is the right to copy in the first place in a stored DVR file. This activity is so ubiquitous that we might not think of it as copying, but it is. The Copyright Act says that the content author has the right to decide whether you store a copy on your disk drive, absent some exception.
And there is an exception - namely fair use. In the famous Sony v. Universal Studios case, the Court held that "time shifting" is a fair use by viewers, and thus sellers of the VCR were not helping users infringe. Had the Court held otherwise, the VCR would have been enjoined as an agent of infringement, just like Grokster was.
I realize that this result is hard to imagine, but Sony was 5-4, and the initial vote had been in favor of finding infringement. Folks can debate whether Sony intended to include commercial skipping or not. At the time, remote controls were rare, so skipping a recorded commercial meant getting off the couch. It wasn't much of an issue. Even now, advertisers tolerate the fact that people usually fast forward through commercials, and viewers have always left the TV to go to the bathroom or kitchen (hopefully not at the same time!).
But commercial skipping is potentially different, because there is zero chance that someone will stop to watch a catchy commercial or see the name of a movie in the black bar above the trailer as it zooms by. I don't intend to resolve that debate here. A primary reason I am skipping the debate is that fair use tends to be a circular enterprise. Whether a use is fair depends on whether it reduces the market possibilities for the owner. The problem is, the owner only has market possibilities if we say they do. For some things, we may not want them to have a market because we want to preserve free use. Thus, we allow copying via a DVR and VCR, even if content owners say they would like to charge for that right.
Knowing when we should allow the content owner to exploit the market and when we should allow users to take away a market in the name of fair use is the hard part. For this reason, I want to look at the issue through the lens of the Coase Theorem. Coase's idea, at its simplest, is that if parties can bargain (which I'll discuss below), then it does not matter with whom we vest the initial rights. The parties will eventually get to the outcome that makes each person best off given the options, and the only difference is who pays.
One example is smoking in the dorm room. Let's say that one person smokes and the other does not. Regardless of which roommate you give the right to, you will get the same amount of smoking in the room. The only difference will be who pays. If the smoker has the right to smoke, then the non-smoker will either pay the smoker to stop or will leave during smoking (or will negotiate a schedule). If you give the non-smoker the right to a smoke-free room, then the smoker will pay to smoke in the room, will smoke elsewhere, or the parties will negotiate a schedule. Assuming no strategic bargaining (hold-ups) and adequate resources, the same result will ensue because the parties will get to the level where the combination of their activities and their money make them the happiest. The key is to separate the analysis from normative views about smoking to determine who pays.
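The invariance can be made concrete with a toy numeric version of the dorm-room example (the dollar valuations here are hypothetical, chosen only to illustrate the theorem; they are not in the original):

```python
# Hypothetical valuations: the smoker values smoking in the room at $50/week;
# the non-smoker values a smoke-free room at $30/week.
SMOKER_VALUE, NONSMOKER_VALUE = 50, 30

def outcome(right_holder: str) -> tuple[bool, str]:
    """With costless bargaining, smoking happens iff the smoker values it more;
    the initial entitlement only determines the direction of payment."""
    smoking = SMOKER_VALUE > NONSMOKER_VALUE  # efficient outcome, either way
    if right_holder == "smoker":
        payment = "non-smoker pays nothing (won't spend $50+ to avoid a $30 harm)"
    else:
        payment = "smoker pays the non-smoker between $30 and $50 to smoke"
    return smoking, payment

print(outcome("smoker"))
print(outcome("non-smoker"))
# Both allocations yield smoking; only who compensates whom differs.
```

Flip the valuations ($30 smoker, $50 non-smoker) and the room stays smoke-free under either entitlement, again with only the payment direction changing.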
Now, let's apply this to the DVR context. If we give the right to skip commercials to the user, then several things might happen. Advertisers will advertise less or pay less for advertising slots. Indeed, I suspect that one reason why ads for the Super Bowl are so expensive, even in a down economy, is that not only are there a lot of viewers, but that those viewers are watching live and not able to skip commercials. In response, broadcasters will create less content, create cheaper content, or figure out other ways to make money (e.g. charging more for video on demand or DVDs). Refusing to broadcast unless users pay a fee is unlikely based on current laws. In short, if users want more and better content, they will have to go elsewhere to get it - paying for more channels on cable or satellite, paying for video on demand, etc. Or, they will just have less to watch.
If we give the right to stop commercial skipping to the broadcaster, then we would expect broadcasters to broadcast the mix they have broadcast in the past. Viewers will pay for the right to commercial skip. This can be done as it is now, through video on demand services like Netflix, but that's not the only model. Many broadcasters allow for downloading via the satellite or cable provider, which allows the content owner to disable fast forwarding. Fewer commercials, but you have to watch them. Or, in the future, users could pay a higher fee to the broadcaster for the right to skip commercials, and this fee would be passed on to content owners.
These two scenarios illustrate a key limit to the Coase Theorem. To get to the single efficient solution, transactions costs must be low. This means that the parties must be able to bargain cheaply, and there must be no costs or benefits that are being left out of the transaction (what we call externalities). Transactions costs are why we have to be careful about allocating pollution rights. The factory could pay a neighborhood for the right to pollute, but there are costs imposed on those not party to the transaction. Similarly, a neighborhood could pay a factory not to pollute, but difficulty coordinating many people is a transaction cost that keeps such deals from happening.
I think that transactions costs are high in one direction in the commercial skipping scenario, but not as much in the other. If the network has the right to stop skipping, there are low cost ways that content aggregators (satellite and cable) can facilitate user rights to commercial skip - through video on demand, surcharges, and whatnot. This apparatus is already largely in place, and there is at least some competition among content owners (some get DVDs out soon, some don't for example).
If, on the other hand, we vest the skipping right with users, then content owners' ability to pay users (essentially sharing their advertising revenues) is lower if they want to enter into such a transaction. Such a payment could be achieved, though, through reduced user fees for those who disable commercial skipping. Even there, though, dividing among all content owners might be difficult.
Normatively, this feels a bit yucky. It seems wrong that consumers should pay more to content providers for the right to automate something they already have the right to do - skip commercials. However, we have to separate the normative from the transactional analysis - for this mind experiment, at least.
Commercials are a key part of how shows get made, and good shows really do go away if there aren't enough eyeballs on the commercials. Thus, we want there to be an efficient transaction that allows for metered advertising and content in a way that both users and networks get the benefit of whatever bargain they are willing to make.
There are a couple of other relevant factors that imply to me that the most efficient allocation of this right is with the network:
1. DISH only allows skipping after 1AM on the day the show is recorded. This no doubt militates in favor of fair use, because most people watch shows on the day they are recorded (or so I've read, I could be wrong). However, it also shows that the time at which the function kicks in can be moved, and thus negotiated and even differentiated among customers that pay different amounts. Some might want free viewing with no skipping, some might pay a large premium for immediate skipping. If we give the user the right to skip whenever, it is unlikely that broadcasters can pay users not to skip, and this means they are stuck in a world with maximum skipping - which kills negotiation to an efficient middle.
2. The skipping is only available for broadcast tv primetime recordings - not for recordings on "cable" channels, where providers must pay for content. Thus, there appears to already be a payment structure in practice - DISH is allowing for skipping on some networks and not others, which implies that the structure for efficient payments is already in place. If, for example, DISH skipped commercials on TNT, then TNT would charge DISH more to carry content. The networks may not have that option due to "must carry" rules. I suspect this is precisely why DISH skips for broadcasters - because it can without paying. The way to allow for bargaining, however, given that networks can't charge DISH more to carry content, is to vest the right with the networks and let the market take over.
These are my gut thoughts from an efficiency standpoint. Others may think of ways to allow for bargaining to happen by vesting rights with users. As a user, I would be happy to hear such ideas.
This is my last post for the month - time flies! Thanks to Prawfs again for having me, and I look forward to guest blogging in the future. As a reminder, I regularly blog at Madisonian.
Wednesday, May 30, 2012
America's First Patents
My post today is a pointer to my guest post at the Patently-O blog called America's First Patents. Here is the first paragraph:
My forthcoming Florida Law Review article, America’s First Patents, examines every available patent issued during the first 50 years of patenting in the United States. A full draft is accessible at this SSRN page. The article reaches three conclusions:
- Our patentable subject matter jurisprudence with respect to methods can, in part, blame its current unclarity on early decisions by a few important judges to import British law into the new patent system.
- Early patenting trends suggest that Congress never intended that new subject areas be off-limits until Congress explicitly allowed them.
- The machine-or-transformation test, which allows a method patent only if the process involves a machine or transforms matter, has no basis in historic patenting practices.
Tuesday, May 29, 2012
School of Rock
I had a unique experience last Friday, teaching some copyright law basics to music students at a local high school. The instructor invited me to present to the class in part because he wanted a better understanding of his own potential liability for arranging songs for performances, and in part because he suspected his students were, by and large, frequently downloading music and movies without the permission of copyright owners, and he thought they should understand the legal implications of that behavior. The students were far more interested in the inconsistencies they perceived in the current copyright system. I'll discuss a few of those after the break.
First, the Copyright Act grants the author of a musical work the exclusive right to publicly perform the work, or authorize such a performance, but it grants no general public performance right to the author or owner of a sound recording. See 17 U.S.C. § 114. In other words, Rod Temperton, the author of the song "Thriller," has the right to collect money paid to secure permission to publicly perform the song, but neither Michael Jackson's estate nor Epic Records holds any such right, although it's hard to discount the creative choices of Michael Jackson, Quincy Jones, and their collaborators in making much of what the public values about that recording. To those who had tried their hands at writing songs, however, the disparity made a lot of sense: "Thriller" should be Temperton's song because of his creative labors.
Second, the Copyright Act makes specific allowance for what I call "faithful" cover tunes, but not beat sampling or mashups. If a song (the musical work) has been commercially released, another artist can make a cover of the song and sell recordings of it without securing the permission of the copyright owner, so long as the cover artist provides notice, pays a compulsory license fee (currently $0.091 per physical or digital recording) and doesn't change the song too much. See 17 U.S.C. § 115. If the cover artist makes a change in "the basic melody or fundamental character of the work," then the compulsory license is unavailable, and the cover artist must get permission and pay what the copyright owner asks. In addition, the compulsory license does not cover the sound recording, so there is no compulsory "sampling right." Thus, Van Halen can make a cover of "Oh, Pretty Woman" without Roy Orbison's permission, but Two Live Crew cannot (unless the rap version ends up qualifying for the fair use privilege).
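To make the compulsory-license math concrete, here's a quick sketch using the flat per-copy rate quoted above. (Caveat: the real statutory rate also has a per-minute component for longer songs, which this ignores.)

```python
# Sketch of the Section 115 compulsory mechanical royalty math, using
# the per-copy rate quoted in the post ($0.091). The per-minute rate
# for songs over five minutes is omitted for simplicity.

RATE_PER_COPY = 0.091  # dollars per physical or digital recording

def mechanical_royalty(copies_sold: int) -> float:
    """Royalty owed to the musical-work owner for a faithful cover."""
    return round(copies_sold * RATE_PER_COPY, 2)

# A cover that sells 100,000 downloads owes the songwriter's publisher:
print(mechanical_royalty(100_000))  # 9100.0
```

So a modestly successful cover generates a real, but hardly ruinous, payment to the songwriter - which is the bargain the compulsory license strikes.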
It was also interesting to me that at least one student in each class was of the opinion that once the owner of a copyrighted work put the work on the Internet, the owner was ceding control of the work, and should expect people to download it for free. It's an observation consistent with my own analysis about why copyright owners should have a strong, if not absolute, right to decide if and when to release a work online.
On a personal level, I confirmed a suspicion about my own teaching: if I try to teach the same subject six different times on the same day, it is guaranteed to come out six different ways, and indeed, it is likely there will be significant differences in what I cover in each class. This is in part because I have way more material at my fingertips than I can cram into any 45 minute class, and so I can be somewhat flexible about what I present, and in what order. I like that, because it allows me to teach in a manner more responsive to student questions. On the other hand, it may expose a failure to determine what are the 20-30 minutes of critical material I need to cover in an introduction to copyright law.
Thursday, May 24, 2012
Oracle v. Google - Round II Jury Verdict (patent infringement)
Earlier this month, I wrote about the first part of the trial between Oracle and Google. I predicted that the Court would eventually rule that the elements of Java that were copied were functional, and thus not infringing. There's been no ruling on that point, but the show went on, with a trial on patents that Oracle alleged Google had infringed. Once again, I thank the folks at Groklaw for the great coverage of the case.
Yesterday, the jury ruled that there was no patent infringement of the two patents asserted. I must say that this surprised me - a lot. A finding of non-infringement of a couple narrow patents is not all that surprising. What surprised me was that these were all the patents asserted. I believed that - if Google was really trying to mimic the functionality of Java - surely there was an infringed claim of at least one patent in the portfolio.
I guess not.
How did the parties get here? I would say that it was a combination of a great aggressive strategy by Google and some strategic decisions by Oracle. First, many of the patents were re-examined at the Patent and Trademark Office. Re-exam is a method whereby the PTO gets another try to determine whether a patent is invalid, usually with more historical data (prior art) than was available the first time around. Note also that the PTO and courts have become more hostile to software patents over the years. Just this week, the Supreme Court granted cert, vacated, and remanded a software patent case back to the Federal Circuit.
The PTO had issued "final" rulings on most of the patents invalidating all the relevant claims, though Oracle could have kept fighting or appealed the rulings. Instead Oracle made the strategic decision to proceed on fewer patents (only two). It must have been pretty confident, but it lost the jury at some point, and these two patents were not infringed. I was also surprised at how short the trial was, but I guess a lot of background came out in the copyright portion.
I think we can generalize a few things from this outcome, some of which (surprise) support the conclusion in my article "Patent Troll Myths." First, it's not all about trolls; we should look at the patents rather than the person asserting them to decide whether there is merit to the case. Second, no matter how big your portfolio is, you are at risk of losing your key patents. It makes sense, then, to time actions after reexamination, and to attempt to bulletproof the patent before filing suit. Maybe Oracle couldn't wait here. Third, this was a victory for the system without knocking out software patents wholesale. There were some valid claims, and they were not infringed, and others were found invalid. I believe this is a better outcome than removing the patent incentive altogether. Sure, this was an expensive trial, but it only lasted a few days in front of the jury. My former firm tried cases with this many patents for a lot less than this one cost. Thus, the final point is that perhaps more cases should be tried by smaller firms for less money - something I doubt big companies are willing to do.
Wednesday, May 16, 2012
Fair Use and Electronic Reserves
For several years, Georgia State was involved in litigation over the fair use doctrine. Specifically, a consortium of publishers backed by Oxford, Cambridge, and Sage sued Georgia State over copyright violations by many of the faculty. Many of my colleagues in the department were specifically named in the suit. A decision has now been rendered. You can read about the decision here, and you can read the decision here.
The Court backed Georgia State in almost every instance, finding no copyright violation. However, the Court did lay down some rules - in particular, you can use no more than 10% of a book or one chapter, whichever is shorter.
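For concreteness, here's a toy calculator for the 10%-or-one-chapter rule as I've summarized it. The page counts are hypothetical; the real decision applies the rule work-by-work.

```python
# Toy calculator for the 10%-or-one-chapter rule announced by the court,
# as summarized above. All page counts here are invented examples.

def fair_use_page_limit(total_pages: int, chapter_pages: int) -> int:
    """Pages usable without permission: 10% of the book or one chapter,
    whichever is shorter."""
    return min(total_pages // 10, chapter_pages)

# A 400-page book with 30-page chapters: 10% is 40 pages, one chapter
# is 30 pages, so the shorter allowance (30 pages) controls.
print(fair_use_page_limit(400, 30))  # 30
```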
Oh, and my colleagues were all found not to have violated copyright law. For two of them, the Court found that the plaintiffs could not even prove they held a copyright.
Wednesday, May 09, 2012
Oracle v. Google: Digging Deeper
This follows my recent post about Oracle v. Google. At the behest of commenters, both online and offline, I decided to dig a bit deeper to see exactly what level of abstraction is at issue in this case. The reason is simple: I made some assumptions in the last post about what the jury must have found, and it turns out that the assumption was wrong. Before anyone accuses me of changing my mind, I want to note that in my last post I made a guess, and that guess was wrong once I read the actual evidence. My view of the law hasn't changed. More after the jump.
For the masochistic, Groklaw has compiled the expert reports in an accessible fashion here and here. Why do I look at the reports, and not the briefs? It turns out that lawyers will make all sorts of arguments about what the evidence will say, but what is really relevant is the evidence actually presented. The expert reports, submitted before trial, are the broadest form of evidence that can be admitted - the court can whittle down what the jury hears, but typically experts are not allowed to go much beyond their reports.
These reports represent the best evidentiary presentation the parties have on the technical merits. It turns out that as a factual matter, both reports overlap quite a bit, and neither seems "wrong" as a matter of technical fact. I would sure hope so - these are pretty well respected professors and, quite frankly, the issues in this case are just not that complicated from a coding standpoint. (Note: for those wondering what gives me the authority to say that, I could say a lot, but I'll just note that in a prior life I wrote a book about software programming for an electronic mail API).
What level of abstraction was presented and argued to the jury? As far as I can tell from the reports, other than two or three routines that were directly copied, Oracle's expert found little or no similarity of structure or sequence in the main body of the source code - the part that actually does the work. The only similarity - and it was nearly identical - was in the structure, sequence and organization of the grouping of function names, and the "packages" or files they were located in.
For computer nerds, also identical were function names, parameter orders, and variable structures passed in as parameters. In other words, the header files were essentially identical. And they would have to be, if the goal is to have a compatible system. The inputs (the function names and parameters) and the outputs need to be the same. The only way you can disallow this usage of the API is to say that you cannot create an independent software program (even one of this size) that mimics the inputs and outputs of the original program.
To say that would be bad policy, and as I discuss below, probably not in accordance with precedent. This is why the experts are both right. Oracle's expert says they are identical, and Google copied because that was the best way to lure application developers - by providing compatibility (and the jury agreed, as to the copying part). Google's expert says, so what? The only thing copied was functional, and that's legal. It's this last part that a) led to the hung jury, and b) the court will have to rule on.
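A toy sketch of the point, with made-up function names: compatibility forces the "header" - the name, the parameter order, and what each parameter means - to match the original exactly, while leaving the body free to be written independently.

```python
# Hypothetical sketch: for a replacement library to be compatible, the
# interface (name, parameters, return behavior) must match the original
# exactly, even though the body is written from scratch.

# Pretend this is the original library's routine:
def substring(s, start, length):
    return s[start:start + length]

# An independent reimplementation: identical interface, different internals.
def substring_clone(s, start, length):
    out = []
    for i in range(start, min(start + length, len(s))):
        out.append(s[i])
    return "".join(out)

# Existing callers can't tell the difference - same inputs, same outputs:
assert substring("Hello World", 6, 5) == substring_clone("Hello World", 6, 5)
```

That interface identity is exactly what the experts agree on; the dispute is whether it is protectable.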
In my last post, I assumed that the level of abstraction must have been at a deeper level than just the names of the methods. Why did I do that?
First, the court's jury instructions make clear that function names are not at issue. But I guess the court left it to the jury whether the collection could be infringed.
Second, the idea that an API could be infringed is usually something courts decide well in advance of trial, and it's a question that doesn't usually make it to trial.
Third, based on media accounts, it appeared that there was more testimony about deeper similarities in the code. The copied functions, I argued in my prior post, supported that view. Except that there were no other similarities. I think it is a testament to Oracle's lawyers (and experts) that this misperception of a dirty clean room shone through in media reports, because the actual evidence belies the media accounts.
This is why I decided to dig deeper, and why one should not rely on second hand reports of important evidence. Based on my reading of the reports (and I admit that I could be missing something - I wasn't in the courtroom), I think that the court will have no choice but to hold that the collection of API names is uncopyrightable - at least at this level of abstraction and claimed infringement.
To the extent that there are bits of non-functional code, I would say that's probably fair use as a matter of law to implement a compatible system. I made a very similar argument in an article I wrote 12 years ago - long before I went into academia.
Prof. Boyden asked in a comment to my prior post whether there was any law that supported the copying of API structure and header files. I think there is: Lotus v. Borland. That case is famous for allowing Borland to mimic the Lotus structure, but there was also an API of sorts. Lotus macros were based on the menu structure, and to provide program compatibility with Lotus, Borland implemented the same structure. So, for example, in Lotus, a user would hit "/" to bring up the menus, "F" to bring up the file menu, and "O" to bring up the open menu. As a result, the macro "/FO" would mimic this, to bring up the open menu.
Borland's product would "read" macro programs written for Lotus, and perform the same operation. No underlying similarity of the computer code, but an identical API that took the same inputs to create the same output the user expected.
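Here's a toy sketch of that kind of keystroke-macro compatibility. The "/FO" macro comes from the example above; the dispatch table and operation names are invented for illustration.

```python
# Toy sketch of Lotus/Borland-style macro compatibility: the clone maps
# the original program's macro strings onto its own operations. The
# "/FO" binding is from the post; "/FS" and the operation names are
# hypothetical.

OPERATIONS = {
    "/FO": "open-file",
    "/FS": "save-file",  # hypothetical additional binding
}

def run_macro(macro: str) -> str:
    """Map a Lotus-style macro string onto this program's operation."""
    return OPERATIONS.get(macro, "unknown-macro")

# A macro written for the original program works unchanged on the clone:
print(run_macro("/FO"))  # open-file
```

Same inputs, same outputs, no shared code underneath - which is the whole point of the compatibility defense.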
Like the lower court here, the lower court there found infringement of the structure, sequence, and organization of the menu structure. Like the lower court here, the court there found it irrelevant that Borland got the menu structure from third-party books rather than Lotus's own product. (Here, Google asserts that it got the APIs from Apache Harmony, a compatible Java system, rather than the Java documents themselves). There is some dispute about whether Sun sanctioned the Apache project, and what effect that should have on the case. I think that Harmony is a red herring. The reality is that it does not matter either way - a copy is a copy is a copy - if the copy is illicit, that is.
In Lotus, the lower court found the API creative and copyrightable, the very question facing the court here. On appeal, however, the First Circuit ruled that the API was a method of operation, likening it to the buttons on a VCR. I think that's a bit simplistic, but it was definitely the right ruling. The case went up to the Supreme Court, and it was a blockbuster case, expected to -- once and for all -- put this question to rest.
Alas, the Supreme Court affirmed without opinion by an evenly divided court. And the circuit court ruling stood. And it still stands - the court never took another case, and the gist of Lotus v. Borland has been applied over and over, but rarely as directly as it might apply here.
Wholesale, direct compatibility copying of APIs just doesn't happen very often, and certainly not on the scale and with the stakes of that at issue here. Perhaps that is why there is no definitive case holding that an entire API structure is uncopyrightable. You would think we would have one by 2012, but nope. Lotus comes close, but it is not identical. In Lotus, the menu structure was much smaller, and the names and structure were far less creative. Further, the concern was macro programming written by users for internal use that would not allow them to switch to a new spreadsheet program. Java programs, on the other hand, are designed to be distributed to the public in most cases.
Then again, the core issue is the same: the ability to switch the underlying program while maintaining compatibility of programs that have already been written. Based on this similarity, my prediction is that Judge Alsup will say that the collection of names is not copyrightable, or at the very least usage of the API in this manner is fair use as a matter of law. We'll see if I'm right, and whether an appeals court affirms it.
Monday, May 07, 2012
Oracle v. Google - Round I jury verdict (or not)
The jury came back today with its verdict in round one of the epic trial between two giants: Oracle v. Google. This first phase was for copyright infringement. In many ways, this was a run of the mill case, but the stakes are something we haven't seen in a technology copyright trial in quite some time.
Here's the short story of what happened, as far as I can gather.
1. Google needed an application platform for its Android phones. This platform allows software developers to write programs (or "apps" in mobile device lingo) that will run on the phone.
2. Google decided that Sun's (now Oracle's) Java was the best way to go.
3. Google didn't want to pay Sun for a license to a "virtual machine" that would run on Android phones.
4. Google developed its own virtual machine that is compatible with the Java programming language. To do so, Google had to make "APIs" that were compatible with Java. These APIs are essentially modules that provide functionality on the phone based on keywords (instructions) from a Java language computer program. For example, if I want to display "Hello World" on the phone screen, I need only call print("Hello World"). The API module has a bunch of hidden functionality that takes "Hello World" and sends it out to the display on the screen - manipulating memory, manipulating the display, etc.
5. The key dispute is just how much of the Java source code was copied, if any, to create the Google version.
The jury today held the following:
1. One small routine (9 lines) was copied directly - line for line. The court said no damages for this, but this finding will be relevant later.
2. Google copied the "structure, sequence, and organization" of 37 Java API modules. I'll discuss what this means later.
3. There was no finding on whether the copying was fair use - the jury deadlocked.
4. Google did not copy any "documentation" including comments in the source code.
5. Google was not fooled into thinking it had a license from Sun.
To understand any of this, one must understand the levels of abstraction in computer code. Some options are as follows:
A. Line by line copying of the entire source code.
B. Line by line paraphrasing of the source code (changing variable names, for example, but otherwise identical lines).
C. Copying of the structure, sequence and organization of the source code - deciding what functions to include or not, creative ways to implement them, creative ways to solve problems, creative ways to name and structure variables, etc. (The creativity can't be based on functionality)
D. Copying of the functionality, but not the structure, sequence and organization - you usually find this with reverse engineering or independent development
E. Copying of just the names of functions with similar functionality - the structure and sequence is the same, but only as far as the names go (like print, save, etc.). The Court ruled already that this is not protected.
F. Completely different functionality, including different structure, sequence, organization, names, and functionality.
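A toy illustration of two of these levels, using a made-up original function:

```python
# Toy illustration of abstraction levels B and E above. The "original"
# function and both variants are invented for this example.

# The "original":
def area(width, height):
    result = width * height
    return result

# Level B - line-by-line paraphrase: variables renamed, but the lines,
# their order, and the logic track the original exactly.
def area_paraphrased(w, h):
    r = w * h
    return r

# Level E - only the name and interface match; the body shares nothing
# with the original beyond what the function must do to be compatible.
def area_independent(width, height):
    total = 0
    for _ in range(height):
        total += width
    return total

assert area(3, 4) == area_paraphrased(3, 4) == area_independent(3, 4) == 12
```

Level B is copying dressed up in new names; level E is the compatibility play - and the legal stakes ride on telling the two apart.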
Obviously F was out if Google wanted to maintain compatibility with the Java programming language (which is not copyrightable).
So, Google set up what is often called a "cleanroom." The idea is not new - AMD famously set up a cleanroom to develop copyrighted aspects of its x86 compatible microprocessors back in the early 1990's. Like Google now (according to the jury), AMD famously failed to keep its cleanroom clean.
Here's how a cleanroom works. One group develops a specification of functionality for each of the API function names (which are, remember, not protected - people are allowed to make compatible programs using the same names, like print and save). Ideally, you do this through reverse engineering, but arguably it can be done by reading copyrighted specifications/manuals, and extracting the functionality. Quite frankly, you could probably use the original documentation as well, but it does not appear as "clean" when you do so.
Then, a second group takes the "pure functionality" description, and writes its own implementation. If it is done properly, you find no overlapping source code or comments, and no overlapping structure, sequence and organization. If there happens to be similar structure, sequence and organization, then the cleanroom still wins, because that similarity must have been dictated by functionality. After all, the whole point of the cleanroom is that the people writing the software could not copy because they did not have the original to copy from.
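Here's a hypothetical sketch of that workflow in miniature: a behavior-only spec from the first group, and an independent implementation written against it by the second. All of the names and examples are invented.

```python
# Hypothetical sketch of the cleanroom workflow described above. Group
# one distills a pure-functionality spec; group two, which never sees
# the original code, implements against the spec alone.

# Group one's spec: behavior and examples only, no implementation details.
SPEC = {
    "name": "to_upper",
    "behavior": "returns the input string with lowercase letters uppercased",
    "examples": [("java", "JAVA"), ("Api 1.0", "API 1.0")],
}

# Group two's independent implementation, written from the spec alone.
def to_upper(s: str) -> str:
    return "".join(
        chr(ord(c) - 32) if "a" <= c <= "z" else c
        for c in s
    )

# Verifying the implementation against the spec's examples:
for given, expected in SPEC["examples"]:
    assert to_upper(given) == expected
```

If group two never saw the original, any surviving similarity must be dictated by function - which is the whole evidentiary value of keeping the room clean.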
So, where did it all go wrong? There were a few smoking guns that the jury might have latched on to:
1. Google had some emails early on that said there was no way to duplicate the functionality, and thus Google should just take a license.
2. Some of the code (specifically, the 9 lines) were copied directly. While not big in itself, it makes one wonder how clean the team was.
3. The head of development noted in an email that it was a problem for the cleanroom people to have had Sun experience, but some apparently did.
4. Oracle's expert testified (I believe) that some of the similarities were not based on functionality, or were so close as to have been copied. Google's expert, of course, said the opposite, and the jury made its choice. It probably didn't help Google that Oracle's expert came from hometown Stanford, while Google's came from far-away Duke.
So, the jury may have just discounted the Google cleanroom story, and believed Oracle's. And that's what it found. As someone who litigated many copyright cases between competing companies, I don't find this a shocking outcome. This case will no doubt bring the copyright v. functionality issue to the forefront (as it did in Lotus v. Borland and Intel v. AMD), but this stuff is bread and butter for most technology copyright lawyers. It's almost always factually determined. Only the scope of this case is different in my book - everything else looks like many cases I've litigated (and a couple that I've tried).
So, what happens now in the copyright phase? (A trial on patent infringement started today.) Judge Alsup has two important decisions to make.
First, the court has to decide what to do with the fair use ruling. Many say that a mistrial is warranted since fair use is a question of fact and the jury deadlocked. I'm not so sure. The facts on fair use are not really disputed here - only the legal interpretation of them; my experience is that courts are more than willing to make a ruling one way or the other when copying is clear (as the jury now says it is). I don't know what the court will do, but my gut says no fair use here. My experience is that failed cleanrooms fail fair use - it means that what was copied was more than pure functionality, and it is for commercial use with market substitution. The only real basis for fair use is that the material copied was pure functionality, and that's the next inquiry.
Second, the court must determine whether the structure, sequence, and organization of these APIs can be copyrightable, or whether they are pure functionality. I don't know the answer to that question. It will depend in large part on:
a. whether the structure, etc., copied was at a high level (e.g. structure of functions) or at a low level (e.g. line by line and function by function);
b. the volume of copying (something like 11,000 lines is at issue);
c. the credibility of the experts in testifying to how much of the similar structure is functionally based. On a related note, the folks over at Groklaw for the most part think this is not copyrightable. They have had tremendous coverage of this case.
I've been on both sides of this argument, and I've seen it go both ways, so I don't have any predictions. I do look forward to seeing the outcome, though. It has been a while since I've written about copyright law and computer software; this case makes me want to rejoin the fray.
Tuesday, May 01, 2012
Who Are You Wearing? Part 3: The Reveal
At the start of my stint here on Prawfs, I noted the high stakes of intellectual property enforcement in the luxury goods market. A bit later, I returned to the subject by identifying the form of intellectual property law that regulates this market--the trademark doctrine of post-sale confusion. Again, this doctrine imposes infringement liability based on the possibility that someone might see the purchaser of a knock-off luxury good--who knew full well that they were buying a knock-off--and mistakenly believe they were actually carrying a genuine luxury good. In my last post on the subject, I asked what social or moral ill is threatened in such a circumstance. Courts have given two different answers to this question, one of which I find plausible in theory but problematic in practice, and the other of which I find morally and perhaps even constitutionally repugnant. I'll review each (incorporating shameless plugs for my prior writings on the subject) after the jump.
The first justification for imposing liability in these circumstances is what I have called in previous work the “bystander confusion” theory. This is the situation in which a defendant sells a knock-off product to a non-confused purchaser; observers who see the non-confused purchaser using the knock-off product mistake it for the genuine product; and those observers draw conclusions from their observations about the quality of the genuine product that influence their future purchasing decisions. In theory, this sounds like a real problem. One could understand why, say, Christian Louboutin might be worried that someone who sees a woman fall and shatter her ankle when the heel breaks off her red-soled stiletto pump might blame his fashion house for shoddy workmanship and decide never to purchase his shoes again. So bystander confusion theory has some intuitive appeal.
The problem is that bystander confusion is, in practice, a hopelessly speculative theory of liability. Direct evidence of such mistaken attributions of quality based on second-hand observation is essentially non-existent; the theory is often little more than a just-so story. This is especially so in the luxury goods market, where the purchasers of genuine luxury goods are typically highly sophisticated consumers who are aware of the wide availability of knock-offs. That isn't to say that courts don't invoke bystander confusion theory; it's just that in doing so they often end up shifting the burden of proof on the question of trademark infringement from the plaintiff to the defendant, in essence demanding that the defendant prove two negatives: that potential customers of the plaintiff will not observe knock-offs being consumed, and that even if they do observe such consumption these customers will not attribute the poor quality of the observed goods to the plaintiff.
This might seem bad enough as a matter of basic civil litigation principles, but the real problem with the law of luxury goods is that it seldom turns on bystander confusion at all. Rather, courts in knock-off luxury goods cases tend to rely on a theory I've referred to as "status confusion." In the clearest statement of the theory, the Second Circuit in Hermès International v. Lederer de Paris Fifth Avenue, Inc. stated that an injury "to the public" occurs "when a sophisticated buyer purchases a knockoff and passes it off to the public as the genuine article, thereby confusing the viewing public and achieving the status of owning the genuine article at a knockoff price."
What is interesting to me about this explanation of status confusion theory is that it has nothing at all to do with the quality assurance function that trademarks are usually thought to provide, or indeed with products in any sense. Rather, it is all about the effect of consumption of trademarked goods on social relations: about the level of social status afforded the surreptitious consumer of knock-off goods who has not paid an appropriate price for that status. The real knock-off, in the Second Circuit's analysis, isn't the cheap handbag, it's the woman carrying it. My own view, laid out more fully in my recent article in the Minnesota Law Review, is that policing this kind of social hierarchy ought not to be the business of the federal courts.
Of course, some type of social comparison based on consumption is natural and perhaps inevitable--as Thorstein Veblen documented over a century ago. And the Second Circuit is clearly right that modern brands serve more functions than merely indicating source or guaranteeing quality of products. Brands are increasingly freighted with social meaning--what you consume sends a message about who you are. As I've written in the NYU Journal of IP and Entertainment Law, the social dimension of even everyday brands has given rise to a fragile symbiosis between the brand owner, its customers, and the social audience, and law is increasingly being called on to mediate this web of relationships. But because brands are increasingly used to construct social meaning, and because social meaning can only be constructed through exchange among individuals and groups, the consumption of branded goods as social signals has not only a commercial dimension, but also an expressive one. And when it comes to expression, our legal system has a thumb on the scale in favor of speakers and against those who would suppress their chosen expression. Socially competitive consumption may be inevitable, but that does not mean the government should referee the competition.
Once we see consumption as a form of social expression, giving brand owners control over how we use their brands to convey and understand social identities and affiliations is troubling enough. Giving them the right to commandeer the federal courts into enforcing social hierarchies based on something as arbitrary as wealth--as status confusion doctrine does--is something else entirely. I would go so far as to say it is fundamentally anti-democratic--the kind of use of state power that the Founders fought a revolution to prevent. That this power is exercised in the context of an intellectual property claim should not obscure the profound state interference in the process of social identification, affiliation, and differentiation that post-sale confusion doctrine represents.
Nor is the troubling entanglement of courts in policing social expression limited to trademark law. Indeed, the Supreme Court this very term is considering the extent to which the government may constitutionally proscribe even a patently false factual statement made in an effort to win social acclaim. As TJ Chiang noted in an earlier post here at Prawfs, and as the Justices themselves seem at least dimly aware, the conceptual connections between the Stolen Valor Act and trademark law, between bogus boasting and modern branding, run very deep; and they all implicate core First Amendment values. It may be a cliché to observe that we all wear masks, that even misleading claims about who we are implicate universal human processes of self-expression and self-definition. But cliché or no, it bears remembering as we consider how law intervenes in those processes, and whether we are content with such intervention, or would rather defend some broader sphere of freedom to form social identities and bonds without paying a licensing fee or getting a government seal of approval.
It's the First of May
Glad to be back in the Blogosphere. Liz Phair's "Cinco de Mayo" has been in my mind nonstop today. You may ask yourself whether there is also a "First of May" song. It turns out there are at least two.
One, by the BeeGees, is a song about lost love and lost connections. The other, by geek rocker Jonathan Coulton, is about <ahem> making intimate connections in the great outdoors (and is explicit about such connections in a way that is probably NSFW).
Could the Bee Gees go after JoCo for the use of the same song title? (Answer after the break.) Probably not. Duplicate song titles happen all the time, and are almost never protectable under copyright law because they are too short and not sufficiently expressive. Every once in a while, though, we do see cases that recognize protectable trademark rights in song titles. See, for example, EMI Catalogue Partnership v. Hill, Holliday, Connors, Cosmopulos, Inc.
EMI asserted trademark rights in the title of the Benny Goodman hit "Sing, Sing, Sing (with a Swing)." The Second Circuit reversed the district court's grant of summary judgment in favor of defendant who used the phrase "Swing, Swing, Swing," in a commercial for golf clubs, accompanied by a swing tune which may or may not have been similar to the plaintiff's song.
The Second Circuit didn't resolve the defendant's fair use argument, and it's fairly solid, at least at first blush: why shouldn't an advertisement for golf clubs be able to use the phrase "swing, swing, swing"? That's what you do with a golf club. The court reversed because it felt the district court too quickly discounted the defendant's selection of a "Benny Goodman-type song like 'Swing Swing Swing.'" In fact, the advertisement in question was originally going to use the Goodman song, but the client didn't want (or couldn't afford) to pay the licensing fee. Thus, the court concluded "there are sufficient facts upon which a reasonable jury could conclude that defendants intended, in bad faith, to trade on EMI's good will in the title of the song by using the phrase 'Swing Swing Swing' in the final commercial."
The result here reminds me of the Bette Midler and Tom Waits right of publicity cases, where the respective artists turned down an invitation to sing their hit for a commercial jingle, and in both instances, the ad agency went out and hired a soundalike. As I see it, all three cases went against the defendant because of arguably bad faith attempts to either circumvent a licensing fee or circumvent the artists' desire not to be associated with the client's product. You may disagree on whether EMI, Midler, or Waits should have the right to say "yes, but" or "no, never," but once a court is persuaded that such a right exists, the workaround seems troubling at best.
So it's the first of May, and a wonderful time to blog about the intersection of intellectual property and music, among other things. I hope you'll chime in as you have the time.
Monday, April 30, 2012
This post is cross-posted on the Patently-O blog.
Self-replicating technologies, once the subject of theory and fantasy, are now upon us. The original self-replicating machine—the living organism—has already been harnessed by biotechnology engineers and, more to the point, their lawyers. The next wave of self-replicating technologies, be they nanomedical robots or organic computers, are not far behind. Rather than triggering a “grey goo” apocalypse, these technologies are, at present, raising far more prosaic issues of intellectual property and antitrust law.
Those issues have now apparently caught the attention of the Supreme Court. A few weeks ago, the Court called for the views of the solicitor general on the certiorari petition in the case of Bowman v. Monsanto. This is the latest in a series of cases in which the Federal Circuit has addressed the application of the doctrine of patent exhaustion to the genetic engineering technology embodied in Monsanto's "Roundup-Ready" herbicide-resistant seeds. Seeds are the prototypical self-replicating technology, and a number of similar herbicide-resistant crops are in the pipeline of the largest agribusiness concerns. In each of the Roundup-Ready cases, a farmer has argued that Monsanto's patent rights do not extend to the second generation of soybeans grown from a patented first-generation seed. In each case, the Federal Circuit found for Monsanto and against the farmers.
Patent exhaustion (or "first sale") doctrine serves as a limit on patent rights, and provides that once a patentee has made an authorized sale of an embodiment of its patented invention, its patent rights with respect to that embodiment are exhausted, and the purchaser is free to use or re-sell the embodiment as it sees fit. Like analogous doctrines in copyright and trademark, it is motivated by competition concerns. Its aim is to enable the creation of downstream or secondary markets in patented articles, and to prevent patentees from using their intellectual property rights to gain market power in markets other than the market for the patented technology. When the Supreme Court last spoke on the issue, it rebuked the Federal Circuit for giving these pro-competitive policies insufficient weight. It seems to be considering an encore in the Roundup-Ready cases. For reasons I'll explain after the jump, I think that would be a mistake.

The Federal Circuit's analysis of patent exhaustion in the Roundup-Ready cases is admittedly not a model of the judicial craft. Framing the issue as a formal question whether a second-generation soybean is a different "article" than the first-generation seed from which it grew, the court's main justification for its result was the bare assertion that any alternative result would "eviscerate" Monsanto's patent. But this is a question-begging explanation, and there are other, better reasons why a patentee's sale of a single embodiment of its self-replicating technology ought not to exhaust patent rights with respect to the second, third, or nth generation of the technology that is propagated from that first embodiment. Moreover, these reasons are consistent not only with the reasons for granting patent rights in the first place, but with the pro-competitive principles that justify limiting those rights through exhaustion doctrine.
To get at these reasons, I propose a thought exercise. Let's imagine that the Roundup-Ready cases came out the other way--that purchasers of Roundup-Ready seed from Monsanto were free, as a matter of patent law, to use all subsequent generations of soybeans grown from those first purchased seeds however they saw fit. What would we expect the Monsantos of the world to do? How do we believe their behavior might be influenced by this new legal framework?
One possible answer to this question is: not at all. It may be that the additional revenues to be derived from selling additional embodiments of a self-replicating technology to the same customer are trivial (perhaps due to the structure of demand), and that the prospect of any one customer re-selling a subsequent generation of the technology to another potential customer of the patentee is remote. Nanomedicine, particularly personalized nanomedicine, may one day prove to be such a case. But in the agriculture context, it strikes me as unlikely.
Where the technology at issue is an input for the production of a commodity, and the demand for that technology is broad and essentially undifferentiated, I would expect that the possibility of re-sale of nth generation seeds by the patentee's customers would significantly eat into the patentee's revenue stream, potentially making it impossible for the patentee to recoup the investment in research and development required to develop the technology in the first place. This is the classic free-rider problem that patent law is supposed to prevent: we preserve the incentive to engage in costly research and development by giving the inventor a limited-time monopoly. Other scholars have noted that this free-rider rationale is particularly salient for inherently self-disclosing inventions (inventions that are easy to copy once they have been introduced to the public). I would add that self-replication exacerbates the problem of self-disclosure: the patentee selling an embodiment of its invention would not only be teaching competitors how to practice the invention, it would in essence be building their factories as well.
So there are sound justifications grounded in the innovation policies underlying patent law for the Federal Circuit's rulings in the Roundup-Ready cases. But of course, patent exhaustion doctrine is concerned not only with innovation policy, but also with competition policy. This brings me back to my earlier question: how would we expect the Monsantos of the world to react to the free-rider problem if patent law did not protect them against competition from nth generation copies of their own first-generation products? I can imagine two possible strategies a technologist might pursue to circumvent the free-rider problem: contract and secrecy. And I think both of these alternatives are inferior to the patent solution crafted by the Federal Circuit on competition grounds.
Take the contract approach, which has been explicitly advocated by Yee Wah Chin, one of the attorneys representing the interests of Monsanto's farmer customers. To avoid the free-rider problem, Monsanto might, for example, restrict sales of its seeds to customers who sign a license agreement in which the customers undertake to monitor the uses of nth generation embodiments. So, a farmer might have to agree to sell his soybean crop only to buyers who have their own license agreement with Monsanto, or to Monsanto itself. Or Monsanto could include field-of-use restrictions in its licenses, as Ms. Chin proposes: "Monsanto could have licensed seedmakers to sell seed embodying Monsanto technology on condition that the second-generation seed be either consumed or sold to buyers who agree to either consume the seed or isolate that seed from other seed and sell the seed only for consumption."
This does not strike me as a pro-competitive result, for a few reasons. First, it incentivizes Monsanto to extend its influence into downstream markets--such as the market for commodity soybeans and their derivative products--in ways it would have little incentive to pursue under the Federal Circuit's approach. This downstream market creep is precisely the type of expansion of patent rights that exhaustion doctrine is supposed to prevent, out of fear that the patentee's interests are not likely to be consistent with the efficient functioning of those downstream markets. Second, and perhaps more importantly, forcing Monsanto to look to contract rights to protect its investment in research and development shifts the costs of monitoring and enforcing the Roundup-Ready patents from Monsanto itself onto its customers, who are likely to face higher monitoring costs.
We must remember that Monsanto's customers are largely farmers, who lack Monsanto's economies of scale, its greater expertise with its own technology, and its understanding of the functioning of the markets for that technology. Moreover, shifting enforcement responsibility from the patentee to its customers is likely to create agency costs where they would not otherwise exist. A farmer who is paying Monsanto a premium for Roundup-Ready seeds probably has far weaker incentives to vigorously monitor for violation of Monsanto's license terms than does Monsanto itself, which is reaping the premium. Finally, in the event that a customer breaches these monitoring obligations, either maliciously or negligently, Monsanto's technology could fall into the hands of a competitor who is not in privity of contract with Monsanto and thus (absent any unfair competition type of claim) would be free to use the nth generation seed (in which Monsanto's patent rights are exhausted) to compete with Monsanto. An individual farmer is likely to be judgment-proof in the face of the claims Monsanto might make should such a competitive threat emerge outside the reach of its licensing provisions, which once again leads us to the original problem: how would we expect Monsanto to respond to this risk of free-riding?
This brings me to the last alternative to the Federal Circuit's solution in the Roundup-Ready cases: secrecy. Monsanto might seek to prevent free-riding by refusing to release its technology to public view, and relying on trade secret protection to protect against free-riding. But in order to preserve its secret (a prerequisite of trade secret protection), Monsanto would have to ensure that nothing it released into the market disclosed its genetic technology. As I noted above, self-replication can be seen as a heightened form of self-disclosure, and so this type of secrecy would be fairly hard to maintain. Indeed, I think the only plausible way of doing so would be to pursue a course of comprehensive vertical integration. Monsanto would not only have to be in the business of propagating seeds, but also in the business of cultivating and harvesting soybeans, and processing them into useful products (oil, animal feed, industrial adhesives, tofu, you name it) that do not reveal the genetic material at the core of Monsanto's invention. Even if this were technically possible (a big if), the effect on all sorts of markets, both for inputs and outputs of the soybean market, is likely to be catastrophically anti-competitive. Where the alternative is such drastic shocks to competition in the market for, e.g., miso paste, soy-fed livestock, and arable land, the Federal Circuit's decisions in the Roundup-Ready cases start to look surprisingly pro-competitive.
The big question in my mind, then, is not whether the Federal Circuit reached the right result in the Roundup-Ready cases. Given the factual setting of those cases, I think the answer to that question is a relatively uncontroversial yes. The real question, to me, is whether the same holds true for self-replicating technologies other than seeds for agricultural commodities. I already noted above one type of self-replicating technology--personalized nanomedicine--that may not present the same incentives for patentees, their customers, and their competitors, as do herbicide-resistant soybeans. Given how little we can presume to know about the future development of other self-replicating technologies, it is likely unwise to try to set a rule today to govern the rights of downstream users for all such technologies that may arise tomorrow. And for this reason alone, it may be worth getting some discussion of the issue from the Supreme Court, which seems particularly sensitive (almost to a fault) to the hazards of establishing brittle legal rules to govern the unknown future of technology. If the analysis that emerges is more substantive and functionally-minded than the under-argued, formalist analysis of the Federal Circuit (admittedly, another big if), I would be happy to see the Court take the case, if only to put the type of issues I've discussed in this post on the table.
Friday, April 27, 2012
In IP3, Madhavi Sunder considered the cultural impact of intellectual property rights on those in need. Her piece refers to "compassionate uses" of patented pharmaceuticals to distribute to those unable to afford them. As she describes, such uses "would permit countries where urgently needed medicines are unaffordable at market prices to temporarily distribute these medicines at cost for 'compassionate use.'"
This morning's New York Times describes infringement of an entirely different kind. There, a 92-year-old copyist known as "Big Hy" likely spent $30,000 of his own funds to ship bootlegged DVDs to military service personnel overseas. According to the piece, "in black grandpa shoes and blue suspenders that hoisted his trousers up to his sternum," Hy ripped bootleg films, placed them in boxes, and shipped at least some of them to an Army chaplain, because chaplains are (apparently) part of an effective distribution system. Once received, the troops would watch them, sometimes at the same time that the films were being released in theaters here.
A spokesperson for the Motion Picture Association of America appeared to acknowledge that "[the films] we produce can bring some enjoyment to them while they are away from home." This rather unaggressive stance is unusual for that organization, which is known to advocate strong copyright enforcement. Whether this response arises from compassion or a sophisticated understanding of press relations, it is good to see the organization acknowledge uses beyond those categorically permitted by the law.
Thursday, April 26, 2012
Who Are You Wearing? Part 2: The Law
In an earlier post, I flagged the high stakes surrounding intellectual property disputes over luxury goods, but questioned the rationale for making a federal case out of, say, a fake purse. In this post, I'll be examining the legal regime that allows such a case to be made.
That regime, in the United States at least, comprises a particular sub-field of federal trademark law. Section 43(a) of the Lanham Act provides the primary statutory authority for the federal law of trademark infringement and unfair competition. It imposes civil liability against any person who uses a trademark in commerce that "is likely to cause confusion, or to cause mistake, or to deceive as to the affiliation, connection, or association of such person with another person, or as to the origin, sponsorship, or approval of his or her goods...." But if you know the market for knock-off luxury goods, you know that the people who buy them almost always know full well that they're buying fakes. Nobody thinks the Rolex he bought for $10 in Times Square has any actual relationship to the Rolex company, nor does anybody think the vinyl Kelly Bag she bought for $20 on Canal Street has any relationship to the house of Hermès. So what is the "confusion, or... mistake, or... dece[ption]" that provides the basis for trademark liability against the makers and sellers of such knock-offs?
The answer that courts have come up with has come to be known as "post-sale confusion." Luxury knock-offs do not infringe the luxury house's trademark because of their effect on the purchaser of the knock-offs, but because of their effect on people who observe that purchaser consuming the product after it has been purchased. Such observers, the theory goes, will see the non-confused purchaser consuming the defendant's product, but mistake it for the plaintiff's product due to the similarity of the products' trademarks or overall designs. This "mistake" is the hook on which trademark liability hangs in the luxury knock-off arena, and the question I'm interested in is why this type of mistake is something the federal government ought to concern itself with.
What is the social or moral ill that results if I mistakenly believe that a woman walking down Fifth Avenue is carrying an authentic Louis Vuitton purse when in fact she is carrying a cheap imitation? Trademark law is often thought to be designed to prevent producers from misleading consumers as to unobservable product qualities, either to lower consumers' search costs (as Judge Posner and Professor Landes have famously argued) or out of respect for consumers' autonomy (as I argue in a forthcoming piece in the Stanford Law Review). But, again, the purchasers of luxury knock-offs know exactly what they're buying; they aren't being deceived at all. So what gives? Why should we make post-sale confusion actionable, let alone criminal?
Once again, I'll throw this open to commenters before revealing my own thoughts in a future post; those who can't wait for the reveal can read my take in the latest issue of the Minnesota Law Review.
Tuesday, April 24, 2012
I'm a big fan of Dropbox. With a full schedule of professional travel, a need to work at home and on the go, and a less-than-perfectly-reliable university-issued computer, I've learned the hard way that I need dependable, easy-to-use cloud-based storage for my important data. But Dropbox has always targeted the casual data-sharer as much as the power user, and yesterday the company unveiled a new feature of its software that allows users to share files on their computers with anyone via an HTTP link to a copy of the file stored on Dropbox's cloud-storage servers. The thing about this service, as some tech commentators have pointed out, is that it implements essentially the same technology that led to the federal government's recent criminal indictment of file-sharing juggernaut MegaUpload and its eccentric founder, Kim Dotcom.
So does Dropbox have a date with the feds in its future? I think most would agree the answer is no, but getting to that answer reveals the problems we've created in trying to manage the social, legal, and technological issues that surround the exchange of information. More after the jump...
The big story in copyright law for the past two or three decades has been the ongoing battle between the forces of "content" and "distribution"--between the owners of intellectual property rights in information and the sellers of technology that makes the distribution of that information cheaper, easier, and broader. This is nothing particularly new; those who make their living off of the creation and sale of new information have always been wary of technological progress. But mass adoption of digital technology and high-speed data networks have significantly raised the stakes.
In Section 512(c) of the Copyright Act (the so-called "DMCA safe-harbor") and in the case of Sony Corp. v. Universal City Studios, Congress and the Supreme Court, respectively, attempted to strike what turns out to be an uneasy balance between these competing interests. Section 512(c) immunizes the sellers of technology that facilitates the distribution of copyrighted information from liability for infringing uses of their services by customers, provided the technologists meet certain conditions. In Sony, the Court announced that technology itself is not a copyright outlaw so long as it is capable of substantial non-infringing uses. But of course, individuals and institutions may well use such technology for infringing purposes, and such uses remain actionable. We thus have a distinction set up within copyright law itself between the power of a technology in itself and the use of that technology by real people in real social settings. While we may hold individuals responsible for uses of technology that infringe a copyright, we do not hold the technology itself responsible for such uses.
This leads to the odd situation in which we now find ourselves, where the viability of entire segments of the digital economy, and of some of the largest and fastest-growing businesses in the world, comes to turn on the thorniest and most contentious questions of fact the legal system can ever grapple with--questions of intent. In MGM v. Grokster, for example, the defendant companies were denied summary judgment on grounds that there was sufficient evidence that they intended to induce third parties to infringe the plaintiff's copyrights using their peer-to-peer file sharing services. But of course, intent is not a fact that can be proven by prying open the skull of a defendant and looking inside. Intent must always be proven circumstantially. In Grokster, the most important category of circumstantial evidence cited by the Court as sufficient to create a triable issue of fact (and likely sufficient to award summary judgment to the plaintiff--which was eventually granted) was evidence tending to show that the defendants targeted the cast-off customers of adjudged secondary infringer Napster. But "complement[ing]" that evidence, the Court said, was the defendants' failure to impose filtering systems on their services that Section 512(c) arguably makes legally unnecessary, as well as evidence that the defendants--gasp!--were interested in growing their user base to maximize advertising revenues.
This is what Larry Lessig once referred to as "the monster Grokster created": the inquiry into a particular defendant's state of mind is now part and parcel of the legal battle between content and distribution. And because evidence of intent is necessarily circumstantial, these cases are likely to turn on a factfinder's response to the overall story woven by the parties' lawyers--a gut reaction as to whether the defendant is a good guy or a bad guy. Facts that might otherwise seem innocuous can be cited as circumstantial evidence of intent to commit secondary infringement if the factfinder just doesn't trust the defendant.
Which brings me back to Dropbox and its new link-to-share service. Dropbox, it seems, is not maintaining a searchable index of the files its customers share via link--the type of activity that got Napster in trouble. One might think that this fact suggests the company has no interest in attracting customers who are interested in using its services to locate and freely download copyrighted content. But take a look at Paragraph 10 of the MegaUpload indictment, which alleges that MegaUpload did not maintain a searchable index of content on its servers in order to "conceal the scope of its [copyright] infringement." That paragraph also notes that MegaUpload provided financial incentives to customers whose uploaded files increased traffic on MegaUpload's website, thereby increasing the company's revenue base. Dropbox, in turn, provides existing customers with additional free cloud storage for referring new customers to the service. If, as Grokster suggests, a desire to broaden one's customer base is circumstantial evidence of an intent to induce infringement, should we expect the refer-a-friend program to be cited in a federal indictment or a civil complaint in the near future?
I don't think so, but I can't be sure, and that is ultimately the point. The social dynamics of information exchange that new technologies like Dropbox (and, frankly, MegaUpload) make possible are unpredictable and often out of the direct control of the service providers themselves. Such exchanges can be public or private, shared or hidden, broadcast or narrowcast, and everywhere in between. Section 512(c) attempts to account for this, for example by making knowledge of specific infringing activity a prerequisite for secondary liability. But like intent, knowledge is a thorny factual issue that courts continue to disagree about, often based on differing views of the inferences that can be drawn from a particular mix of circumstantial evidence.
For my part, I look at all this as a lawyer who, in a former life, was sometimes called on to give clients guidance as to whether a course of action they were considering for their business would be likely to generate legal liability. I have to admit, I'd have a hard time giving a client like Dropbox useful advice today. And it strikes me that a legal regime that doesn't allow a segment of our economic and social lives as fundamental as the information we exchange with one another to be planned with some degree of certainty isn't doing its job very well.
Monday, April 23, 2012
Recently I learned that I'll be teaching Copyright law for the first time, a circumstance that launched my search for a casebook. One of the books I considered was Brauneis and Schechter's Copyright: A Contemporary Approach, an interactive casebook just published by West. The book is released in a paper format, along with a one-year subscription to an electronic version of the book. Prawfs using a West/Westlaw password can obtain access to the electronic version.
The authors make good use of the electronic format. I liked the links to the subject matter of the cases, such as clips of songs, images, and the like. For example, one link allowed me to play the video game that was the subject of Williams Electronics v. Artic Int'l. The links to the statutory text were particularly useful.
Although I ultimately didn't end up going with this one (at least this year), I found the format helpful and intriguing, particularly for courses where there are strong visual components. If you've used any of the interactive casebooks in your courses, your feedback about your experience would be very helpful.
Sunday, April 22, 2012
80,000? That's a Lot of Patents
I just saw this Mercedes ad, intended to celebrate the innovation of the company's engineers. As a patent prawf, I was struck by the image of patents protecting the car.
80,000 patents can be a signal of serious advances. As Clarissa Long has observed, "patents can serve as a signal of firm quality." Or, it might just be indicative of a lot of patenting.
Wednesday, April 04, 2012
Who Are You Wearing? Part 1: The Stakes
It's a pleasure to be making my first appearance on PrawfsBlawg, where I have long turned for thoughtful commentary on weighty issues such as the ACA, religious liberty, workplace discrimination, the politics of judicial review, and the crisis in legal education. I hope to do my own part to uphold this tradition by talking about luxury handbags.
My own scholarship focuses on intellectual property, and mainly trademark law. These days, most of the highest-profile trademark disputes involve luxury goods: the red-soled Louboutin, the Louis Vuitton monogram, or Tiffany's blue box. We might dismiss the legal wrangling over such baubles as frivolous, but there are, quite literally, billions of dollars at stake: the premier luxury conglomerate LVMH reported revenues of over 23 billion euros last year, 22% of which came from the United States. And that's just for sales of genuine products; Congress has found that trademark counterfeiting saps our national economy of $200 billion annually, losing us "millions of dollars in tax revenue and tens of thousands of jobs" (though there are many who cast doubt on this claim, notably including the GAO). So perhaps it's not surprising that so many lawyers (and their clients) are ready to make a federal case out of a fake purse.
The question I've been investigating recently is whether we ought to allow such a federal case to be made. I think we can all intuitively appreciate the desire to police the stream of commerce for knock-off pharmaceuticals, baby formula, or brake pads--there's a public safety issue at stake. But what is the public interest in knock-off watches and open-toe pumps? This turns out to be a complicated question, and I'll be fleshing out my own view in the coming days. But before I give my take, I'm curious what the Prawfs readership thinks. Feel free to give your views in the comments.