Tuesday, June 17, 2014

IRS: "sorry, can't produce" or a bad example of hiding the ball?

Last week, the IRS stated that it lost numerous emails from Lois Lerner concerning the targeting of conservative groups applying for tax-exempt status because her computer crashed.  And this week, the IRS is revealing that it has lost numerous additional emails from key IRS officials.  Politics aside, it is interesting to consider how this discovery issue involving electronically stored information (ESI) would be addressed in a federal court under the Federal Rules of Civil Procedure (FRCP).

The facts surrounding this issue almost read like a law school exam hypothetical.  The IRS received a subpoena to produce emails between key IRS officials and other government agents that might suggest targeting.  The IRS knew months ago, in February, that it could not produce the emails, but failed to inform Congress that they were lost until just the last few days.  The IRS has taken the position that the emails were lost during a computer crash in 2011, but that it made a "good faith" effort to find them, having spent $10 million (of taxpayer money) to deal with the investigation, including the cost of piecing together what could be found.  The IRS does not deny that the recipients, other government officials, may still be in possession of the emails.  The IRS, however, maintains that because the subpoena was directed only at the IRS, not at other government agencies, the non-IRS recipients of the emails are not required to produce them.

If this issue arose in federal court, under FRCP 26, parties are required at the outset to submit a "discovery plan" that includes how ESI will be retained and exchanged in order to prevent unnecessary expense and waste.  The FRCP requires the parties to take reasonable steps to preserve relevant ESI (a litigation hold) or face possible sanctions.  Under Rule 37's so-called safe harbor provision, however, "absent exceptional circumstances, a court may not impose sanctions ... for failing to provide electronically stored information lost as a result of the routine, good-faith operation of an electronic information system."  The IRS is hanging its hat on this safe harbor rule by arguing that, despite a good-faith effort, the emails were lost.  Did the IRS, in fact, make a good faith effort?

While there is confusion among the courts on how to apply the good faith standard, there is precedent for a court to impose monetary sanctions on the IRS if the court found that the IRS acted negligently when it lost the emails.  The court would also have the authority to issue an adverse inference instruction (inferring that the lost evidence would have negatively impacted the IRS's position) if it determined that the IRS acted with gross negligence or willfulness.

An important fact that will probably be discussed during the next few hearings is whether the IRS violated its own electronic-information retention policy.  The IRS was put on notice of the investigation last year, and so had a duty to place a litigation hold on the emails at that time (the very essence of what "good faith" means).  It seems that the general IRS retention period for ESI was six months (although it is now longer), but emails of "official record" had to have a hard copy, which would never be deleted.  Whether these emails constituted an "official record" is hard to determine since Lerner won't testify to their content. 

Even assuming the emails were lost before a litigation hold could be placed (or despite a litigation hold being in place), at the very minimum it seems "good faith" means that the IRS should have notified Congress in February that it lost the emails.  Rule 26 would have required the IRS to do so.  Indeed, such notice would have brought this issue to the forefront and could have saved a lot of money - the money it apparently has already cost to piece together some of the emails, and the money it will cost as the parties argue over whether the IRS negligently or willfully destroyed evidence.  If the IRS had been upfront from the beginning, then subpoenas could have been issued months ago to other agencies that, as employers of the recipients of the lost emails, might have copies.

If this discovery issue had arisen in federal court, the IRS would have likely been subject to monetary sanctions and possibly an adverse inference instruction.  Shouldn't the IRS be held to these standards?

 

Posted by Naomi Goodno on June 17, 2014 at 06:03 PM in Civil Procedure, Current Affairs, Information and Technology, Law and Politics, Tax | Permalink | Comments (7)

Monday, June 16, 2014

Looks like President O got an early start on that coconut

After the next inauguration, President Obama quipped in a hipster Tumblr interview today, he'll "be on the beach somewhere, drinking out of a coconut . . ."  Maybe sooner than that, as the president proclaims at the beginning of the interview:  "We have enough lawyers, although it's a fine profession.  I can say that because I'm a lawyer."

So "don't go to law school" is the message he wants to get across.  A larger debate, of course.  But let's see what he says right afterward.  Study STEM fields, he insists, in order to get a job after graduation.  STEM study, yes indeed.  But STEM-trained grads often look beyond an early career as a bench scientist, an IT staffer, or the like; that is, STEM-trained young people look to leverage these skills to pursue significant positions in corporate or entrepreneurial settings.  Hence, they look for additional training in business school, in non-science master's programs, and, yes, even in law schools.

Tumblr promises #realtalk, so here is some real talk:  Significant progress in developing innovative projects and bringing inventions to market requires a complement of STEM, business, and legal skills.  These skills are necessary to negotiate and navigate an increasingly complex regulatory environment and to interact with lawyers and C-suite executives as they develop and implement business strategy.  Perhaps there are too many lawyers, but not too many lawyers who are adept at the law-business-technology interface.  "Technology is going to continue to drive innovation," wisely insists President Obama.  But it is not only technology that is this driver; it is also the work done by folks with a complement of interdisciplinary skills and ambition.

Posted by Dan Rodriguez on June 16, 2014 at 07:29 PM in Information and Technology, Science, Web/Tech | Permalink | Comments (11)

Monday, June 09, 2014

Decline of Lawyers? Law schools quo vadis?

My Northwestern colleague, John McGinnis, has written a fascinating essay in City Journal on "Machines v. Lawyers."  An essential claim in the article is that the decline of traditional lawyers will impact the business model of law schools -- and, indeed, will put largely out of business those schools that aspire to become junior-varsity Yales, that is, schools that don't prepare their students for a marketplace in which machine learning and big data push traditional legal services to the curb and, with them, thousands of newly minted lawyers.

Bracketing the enormously complex predictions about the restructuring of the legal market in the shadow of Moore's Law and the rise of computational power, let's focus on the connection between these developments and the modern law school.

The matter of what law schools will do raises equally complex -- and intriguing -- questions.  Here is just one:  What sorts of students will be attracted to these new and improved law schools?  Under John's description of our techno-centered future, the answer is this: students who possess an eager appreciation for the prevalence and impact of technology and big data on modern legal practice.  This would presumably include, but not be limited to, students whose pre-law experience gives them solid grounding in quantitative skills.  In addition, these students will have an entrepreneurial cast of mind and, with it, some real-world experience -- ideally, experience in sectors of the economy that are already being impacted by this computational revolution.  Finally, these will be students who have the capacity and resolve to use their legal curriculum (whether in two or three years, depending upon what the future brings) to define the right questions, to make an informed assessment of risk and reward in a world of complex regulatory and structural systems, and, in short, to add value for folks who are looking principally at the business or engineering components of the problem.

Law remains ubiquitous even in a world in which traditional lawyering may be on the wane.  That is, to me, the central paradox of the "machines v. lawyers" dichotomy that John draws.  He makes an interesting, subtle point that one consequence of the impact of machine learning may be a downward pressure on the overall scope of the legal system and a greater commitment to limited government.  However, the relentless movement by entrepreneurs and inventors that has ushered in this brave new big data world has taken place with and in the shadow of government regulation and wide, deep clusters of law.  The patent system is just one example; the limited liability corporation is a second; non-compete clauses in Silicon Valley employment contracts are a third.  And, more broadly, there is the architecture of state and local government and the ways in which it has incentivized local cohorts to develop fruitful networks of innovation, as the literature on agglomeration economics shows (see, e.g., Edward Glaeser and David Schleicher for terrific analyses of this phenomenon).  This is not a paean to big government, to be sure.  It is just to note that the decline of (traditional) lawyers need not bring with it the decline of law, which, ceteris paribus, makes the careful training of new lawyers an essential project.

And this brings me to a small point in John's essay, but one that ought not escape our attention.  He notes the possibilities that may emerge from the shift in focus from training lawyers to training non-lawyers (especially scientists and engineers) in law.  I agree completely and take judicial notice of the developments in American law schools, including my own, to focus on modalities of such training.  John says, almost as an aside, that business schools may prove more adept at such training, given their traditional emphasis on quantitative skills.  I believe that this is overstated both as to business schools (whose curriculum has not, in any profound way, concentrated on computational impacts on the new legal economy) and as to law schools.  Law schools, when rightly configured, will have a comparative advantage at educating students in substantive and procedural law on the one hand and in the deployment of legal skills and legal reasoning to identify and solve problems on the other.  So long as law and legal structures remain ubiquitous and complex, law schools will have an edge in this regard. 

Posted by Dan Rodriguez on June 9, 2014 at 10:19 AM in Information and Technology, Life of Law Schools, Science | Permalink | Comments (2)

Saturday, May 31, 2014

How do we know that the version of any case, statute, or regulation we read is an accurate one?

The recent kerfuffle about Supreme Court Justices changing the text of already released opinions raises the larger question of how we can ever know whether the version of any statute, case, or regulation we are reading is the "final" one.  It also highlights the problem of link rot, which is likewise affecting the reliability of judicial opinions.
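Link rot of the kind mentioned above can at least be detected mechanically. Here is a rough, hypothetical sketch of a link checker (the URLs and the "status 200 means alive" assumption are mine, not drawn from any court's practice; real checkers would also follow redirects and catch soft-404 pages):

```python
# A rough link-rot checker sketch. Assumptions (mine, not from the post):
# a URL "survives" only if it returns HTTP 200. The fetch function can be
# injected so the logic is testable without network access.
from urllib.request import urlopen
from urllib.error import URLError

def check_links(urls, fetch=None):
    """Map each URL to True (still resolves) or False (rotted)."""
    if fetch is None:
        def fetch(url):
            try:
                return urlopen(url, timeout=10).status == 200
            except URLError:
                return False
    return {url: fetch(url) for url in urls}

# A stub fetcher stands in for the network; the URLs are hypothetical:
alive = {"https://www.govinfo.gov/"}
results = check_links(
    ["https://www.govinfo.gov/", "http://example.invalid/dead-cite"],
    fetch=lambda url: url in alive,
)
print(results)  # {'https://www.govinfo.gov/': True, 'http://example.invalid/dead-cite': False}
```

Injecting the fetch function is what makes the checker exercisable offline; in real use you would omit the `fetch` argument and let it hit the network.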

Given how important a problem it can be if the text we rely on is wrong, it's interesting that authenticating information plays no role in the legal curriculum.  I never gave it a thought until one of my dissertation advisors asked me to write a methodology section that explained to lay readers "where statutes and opinions come from" and "how do we know they are reliable."  Here's a highly abbreviated version with some helpful links (reliable as of posting, May 31, 2014).

For statutes, all roads lead to the National Archives and the Government Printing Office (GPO), which operates FDsys.  The National Archives operates the Office of the Federal Register (OFR), which receives laws directly from the White House after they are signed by the President.  The accuracy of these texts is assured by “[t]he secure transfer of files to GPO from the AOUSC [that] maintains the chain of custody, allowing GPO to authenticate the files with digital signatures.”

The GPO assures us that it “uses a digital certificate to apply digital signatures to PDF documents. In order for users to validate the certificate that was used by GPO to apply a digital signature to document, a chain of certificates or a certification path between the certificate and an established point of trust must be established, and every certificate within that path must be checked."  Good news.
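The "certification path" the GPO describes can be illustrated with a toy model. This is a simplification I am supplying, not GPO's actual implementation: real validation also checks cryptographic signatures, validity dates, and revocation, and the certificate names below are hypothetical.

```python
# Toy illustration of a certification path: each certificate must be
# issued by the next one in the chain, and the final issuer must be an
# established point of trust. Real validation would also verify the
# cryptographic signatures, expiration dates, and revocation status.

TRUSTED_ROOTS = {"US Government Root CA"}  # hypothetical trust anchor

def validate_chain(chain):
    """Return True if the certification path reaches a trusted root."""
    for cert, issuer_cert in zip(chain, chain[1:]):
        if cert["issuer"] != issuer_cert["subject"]:
            return False  # broken link in the certification path
    return chain[-1]["issuer"] in TRUSTED_ROOTS

# A two-certificate path from a document signer up to the trust anchor:
chain = [
    {"subject": "GPO Document Signer", "issuer": "GPO Intermediate CA"},
    {"subject": "GPO Intermediate CA", "issuer": "US Government Root CA"},
]
print(validate_chain(chain))  # True
```

The point of the exercise: a signature on a PDF is only as trustworthy as the unbroken path of issuers behind it, which is exactly what the GPO's "chain of certificates" language means.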

The GPO has developed a system of “Validation Icons,” explained further on the Authentication FAQ page.

Editors at the OFR then prepare a document called a “slip law,” which “is an official publication of the law and is admissible as ‘legal evidence.’”  It is the OFR that assigns the permanent law number and legal statutory citation of each law and prepares marginal notes, citations, and the legislative history (a brief description of the Congressional action taken on each public bill), which also contains dates of related Presidential remarks or statements.  Slip laws are made available to the public by the GPO online.

The system is more complicated when it comes to judicial opinions.  Each of the thirteen federal courts of appeals issues its own opinions; this, for example, is the website of the Fifth Circuit Court of Appeals.  The GPO has joined with the Administrative Office of the United States Courts (AOUSC) “to provide public access to opinions from selected United States appellate, district, and bankruptcy” courts through the United States Courts Opinions (USCOURTS) collection.  Currently the collection has cases only as far back as 2004, and, as indicated by the term “selected,” it contains only some of the federal courts.

The official source for the opinions of the U.S. Supreme Court is the Court itself.  Pursuant to 28 U.S.C. § 673(c), an employee of the Court is designated the “Reporter of Decisions,” and he or she is responsible for working with the U.S. Government Printing Office (GPO) to publish official opinions “in a set of case books called the United States Reports.”

According to the Court, “[p]age proofs prepared by the Court’s Publications Unit are reproduced, printed, and bound by private firms under contract with the U.S. Government Printing Office (GPO). The Court’s Publications Officer acts as liaison between the Court and the GPO.”  Moreover, “the pagination of these reports is the official pagination of the case.”  There are four official publishers of the U.S. Reports, but the Court warns on its website that “[i]n the case of any variance between versions of opinions published in the official United States Reports and any other source, whether print or electronic, the United States Reports controls.”

To some extent, this latest information suggesting that there may be different versions of opinions at different times fits well with the history of the Court.  As most of us know, the Supreme Court did not have an official reporter until the mid-nineteenth century and did not produce a written opinion for every decision.  Moreover, it has only been recording oral arguments since 1955, and although it now issues same-day transcripts, that was hardly always the case.  Also available now are the remarks the Justices make when reading their opinions.  But (no link is missing here; I don't have one) when I heard Nina Totenberg give a keynote presentation at ALI in 2012 about her days at the Court, she pointed out that when she began covering the Court this was not available, and that it was not unusual for notes to differ on exactly what the Justices said.

For those interested, here and here are some helpful resources for doing research on Supreme Court opinions.

Posted by Jennifer Bard on May 31, 2014 at 10:32 PM in Culture, Current Affairs, Information and Technology | Permalink | Comments (0)

Thursday, May 01, 2014

UF Law's (and My) New MOOC: The Global Student's Introduction to US Law

I am now officially part of a MOOC, which went online today. It has been a learning experience (!!), with the biggest lesson being that it is nowhere near as easy as you might think to put one of these courses together. I plan to blog about the experience at length when I get a chance. For now, though, you might be interested in viewing the University of Florida Law School's foray into the great MOOC experiment: The Global Student's Introduction to US Law.

The course description is as follows: 

In this course, students will learn basic concepts and terminology about the U.S. legal system and about selected topics in the fields of constitutional law, criminal law, and contract law. A team of outstanding teachers and scholars from the University of Florida faculty introduces these subjects in an accessible and engaging format that incorporates examples from legal systems around the world, highlighting similarities to and differences from the U.S. system.  Students seeking an advanced certificate study additional topics and complete assignments involving legal research that are optional for basic level students. The course may be of interest both to U.S. students contemplating law school and to global students considering further study of the U.S. legal system.

My Senior Associate Dean Alyson Flournoy spearheaded the project, and we had excellent technical assistance, which was crucial, from Billy Wildberger. My colleagues Pedro Malavet, Jeff Harrison, Claire Germain, Loren Turner, Jennifer Wondracek, and Sharon Rush all provided lectures, and our research assistant Christy Lopez is providing support with the discussion forums. 

Posted by Lyrissa Lidsky on May 1, 2014 at 09:49 AM in Culture, Information and Technology, International Law, Life of Law Schools, Lyrissa Lidsky, Teaching Law, Web/Tech | Permalink | Comments (1)

Wednesday, April 30, 2014

Of (Courtney) Love and Malice

Today Seattle Police released a note found on Kurt Cobain at his death excoriating his wife, Courtney Love. Based on her subsequent behavior, Love cannot have been an easy person to be married to. I've been researching Love lately for an article on social media libel that I'm writing with RonNell Andersen Jones.  Love is not only the first person in the US to be sued for Twitter libel; she's also Twibel's only repeat player thus far. According to news reports, Love has been sued for Twitter libel twice, and recently she was sued for Pinterest libel as well. 

Love's Twitter libel trial raises interesting issues, one of which is how courts and juries should determine the existence of "actual malice" in libel cases involving tweets or Facebook posts by "non-media" defendants. As you probably recall, the US Supreme Court has held that the First Amendment requires public figures and public officials to prove actual malice--i.e., knowledge or reckless disregard of falsity--before they can recover for defamation. And even private figure defamation plaintiffs involved in matters of public concern must prove actual malice if they wish to receive presumed or punitive damages.  However, US Supreme Court jurisprudence elucidating the concept of actual malice predominantly involves “media defendants”—members of the institutional press—and the Court’s examples of actual malice reflect the investigative practices of the institutional press. Thus, the Court has stated that in order for a plaintiff to establish actual malice, “[t]here must be sufficient evidence to permit the conclusion that the defendant in fact entertained serious doubts as to the truth of his publication." [St. Amant v. Thompson] Actual malice, for example, exists if a defendant invents a story, bases it on “an unverified anonymous telephone call,” publishes statements “so inherently improbable that only a reckless man would have put them in circulation,” or publishes despite “obvious reasons to doubt the veracity of [an] informant or the accuracy of his reports." Id.

These examples have little resonance for “publishers” in a social media context, many of whom, like Love, post information spontaneously with little verification other than perhaps a perusal of other social media sources. The typical social media libel defendant is less likely than her traditional media counterpart to rely on informants strategically placed within government or corporate hierarchies or to carefully analyze primary sources before publishing. Moreover, the typical social media defendant has no fact-checker, editor, or legal counsel and is less likely than institutional media publishers to have special training in gauging the credibility of sources or to profess to follow a code of ethics that prizes accuracy over speed. 

The issue Courtney Love's libel trial appears to have raised is whether it constitutes reckless disregard of falsity if a defendant irrationally believes her defamatory accusation to be true. I say "appears," because one can only glean the issue from media accounts of Love's libel trial--the first full jury trial for Twitter libel in the US. The jury found that Love lacked actual malice when she tweeted in 2010 that her former attorney had been "bought off." Specifically, Love tweeted: “I was f—— devestated when Rhonda J. Holmes esq. of san diego was bought off @FairNewsSpears perhaps you can get a quote[sic].” Holmes sued Love in California state court for $8 million, arguing that the tweet accused Holmes of bribery. Love contended that her tweet was merely hyperbole. News accounts of the jury verdict in Love’s favor, however, indicate that the jury found that Love did not post her tweet with “actual malice." The jury deliberated for three hours at the end of the seven-day trial before concluding that the plaintiff had not proved by clear and convincing evidence that Love knew her statements were false or doubted their truth.

The Love case doesn't set any precedents, but it raises interesting issues for future cases. According to court documents and news accounts, Love consulted a psychiatrist for an “addiction” to social media. Certainly Love’s actions in the series of defamation cases she has generated do not seem entirely rational, but there is no “insanity defense” to a libel claim. Yet the determination of whether a defendant had “actual malice” is a subjective one, meaning that it is relevant whether the defendant suffered from a mental illness that caused her to have irrational, or even delusional, beliefs about the truth of a statement she posted on social media. It seems problematic, however, for the law to give no recourse to the victims of mentally disordered defamers pursuing social media vendettas based on fantasies they have concocted. As a practical matter, this problem is likely to be solved by the skepticism of juries, who will rarely accept a defendant’s argument that she truly believed her delusional and defamatory statements. Or at least I hope so. 

And in case you wondered . . . Love's first social media libel case involved her postings on Twitter, MySpace and Etsy calling  a fashion designer known as the "Boudoir Queen" a "nasty lying hosebag thief" and alleging that the Queen dealt cocaine, lost custody of her child, and committed assault and burglary. Love apparently settled that case for $430,000. Love's third social media libel case involves further statements about the Queen that Love made on the Howard Stern show and posted on Pinterest. Some people, it seems, are slow learners.

Posted by Lyrissa Lidsky on April 30, 2014 at 06:30 PM in Blogging, Constitutional thoughts, Culture, Current Affairs, First Amendment, Information and Technology, Lyrissa Lidsky, Torts, Web/Tech, Weblogs | Permalink | Comments (0)

Tuesday, June 18, 2013

Libel Law, Linking, and "Scam"

Although I'm a little late to the party in writing about Redmond v. Gawker Media, I thought I'd highlight it here because, though lamentably unpublished, the decision has interesting implications for online libel cases, even though the court that decided it seems to have misunderstood the Supreme Court's decision in Milkovich v. Lorain Journal.

Redmond involved claims against "new media" company Gawker Media based on an article on its tech blog Gizmodo titled Smoke and Mirrors: The Greatest Scam in Tech. The article criticized a new tech "startup," calling it "just the latest in a string of seemingly failed tech startups that spans back about two decades, all conceived, helmed and seemingly driven into the ground by one man: Scott Redmond." The article further suggested that Redmond, the CEO of the new company, used “technobabble” to promote products that were not “technologically feasible” and that his “ventures rarely—if ever—work.”  In other words, the article implied, and the title of the blog post stated explicitly, that Redmond’s business model was a “scam.” Redmond complained to Gizmodo in a lengthy and detailed email, and Gizmodo posted Redmond's email on the site. Regardless, Redmond sued Gawker and the authors of the post for libel and false light. Defendants filed a motion to strike under California’s anti-SLAPP statute. The trial court granted the motion, and the California appellate court affirmed.

Unsurprisingly, the appellate court found that the Gizmodo article concerned an “issue of public interest,” as defined by the anti-SLAPP statute, because Redmond actively sought publicity for his company. The court described “the Gizmodo article [as] a warning to a segment of the public—consumers and investors in the tech company—that [Redmond's] claims about his latest technology were not credible.” This part of the decision is entirely non-controversial, and the court's interpretation of "public interest" is consistent with the goal of anti-SLAPP laws to prevent libel suits from being used to chill speech on matters of significant public interest.

More controversial is the court's determination that Gizmodo's use of the term “scam” was not defamatory (and thus Redmond could not show a probability of prevailing). The court noted that “’scam’ means different things to different people and is used to describe a wide range of conduct;” while the court's assertion is correct, surely at least one of the "different things" that "scam" can mean is defamatory. [For a similar statement, see McCabe v. Rattiner, 814 F.2d 839, 842 (1st Cir. 1987)]. While the term "scam" is usually hyperbole or name-calling, in some contexts the term acts as an accusation of criminal fraud, especially when accompanied by assertions of deliberate deception for personal gain. However, the court found that "scam" was not defamatory as used in the Gizmodo article, relying heavily on the fact that the authors gave links to “evidence” about the fates of Redmond's prior companies and his method of marketing his new one.  The court concluded that the statement that Redmond's company was a “scam” was “incapable of being proven true or false.”

It is clear that the court's categorization of the statements about Redmond as “opinion rather than fact” relied on online context--both the conventions of the blog and its linguistic style. The court asserted that the article contained only statements of opinion because it was “completely transparent,” revealing all the “sources upon which the authors rel[ied] for their conclusions” and containing “active links to many of the original sources.” Technology-enabled transparency, according to the court,  “put [readers] in a position to draw their own conclusions about [the CEO] and his ventures.” The court also stressed the blog's  “casual first-person style." The authors of the article, according to the court, made “little pretense of objectivity,” thereby putting “reasonable reader[s]” on notice that they were reading “subjective opinions.”

As attractive as this reasoning is, especially to free speech advocates and technophiles, one should read the Redmond decision with caution because it almost certainly overgeneralizes about the types of "opinion" that are constitutionally protected. The Supreme Court's 1990 decision in Milkovich v. Lorain Journal clearly and forcefully indicates that a statement is not constitutionally protected simply because a reader would understand it to reflect the author's subjective point of view.  Instead, the Milkovich Court held that a purported "opinion"  can harm reputation just as much as explicit factual assertions, at least when it implies the existence of defamatory objective facts. Hence, the Court declared that the statement "In my opinion Jones is a liar" can be just as damaging to the reputation of Jones as the statement "Jones is a liar," because readers may assume unstated defamatory facts underlie the supposedly "subjective" opinion. Moreover, even if the author states the underlying facts on which the conclusion is based, the statement can still be defamatory  if the underlying facts are incorrect or incomplete, or if the author draws erroneous conclusions from them. The Court therefore rejected the proposition that defamatory statements should be protected as long as it is clear they reflect the authors' point of view, or as long as they accurately state the facts on which they are based.  [This analysis is freely borrowed from  this article at pp. 924-25, full citations are included there.]

 

Posted by Lyrissa Lidsky on June 18, 2013 at 03:24 PM in Blogging, Constitutional thoughts, First Amendment, Information and Technology, Lyrissa Lidsky, Torts, Web/Tech, Weblogs | Permalink | Comments (2) | TrackBack

Wednesday, June 05, 2013

More on MOOCs

Glenn Cohen beat me to the punch in blogging about MOOCs, but I thought I might build on what he's written by giving a different perspective: describing my own (admittedly limited) on-the-ground experience with MOOCs.

Taking a MOOC, or at least signing up for one, is extraordinarily easy and painless. A MOOC--Massive Open Online Course--is a course that is open to anyone and everyone and requires no tuition or fee, but also carries no actual academic credit. There are at least three major providers of MOOCs--Coursera, Udacity, and EdX--and signing up is as easy as entering your name and email address.

For the sheer fun of it, I suppose, I signed up for a literature course through Coursera and a statistics course through Udacity. I am just starting both. Some very brief and mostly practical observations, aimed primarily at those of us who may be doing some online teaching in the future:

1. Udacity and Coursera have radically different styles, or at least the courses I'm taking do. The Coursera course, offered through Brown, is rather sparse and staid and feels more like a traditional lecture. The Udacity course, offered through San Jose State, is flashy and interactive and self-consciously entertaining. The Udacity lecture segments are short, and they are spoken not by the professors themselves but rather by someone who appears to have been hired by Udacity for the purpose of presenting the material in an appealing way (read: an attractive young woman with a pleasant voice). Moreover, Udacity seems to be totally asynchronous, whereas Coursera requires you to follow an overall week-by-week schedule. In other words, there are a lot of choices that can be made about presentation style in the online format, and the above are just a few examples.

2.  It is exceedingly hard to pay close attention to a lecture on a video, even an engaging one, even for the brief 10-minute segments that Coursera offers. In real life, I have found that I can have difficulty focusing on live lectures for more than about 20 minutes or so too, unless the speaker is unusually entertaining. But with the computer format, it is even harder, because you are at an additional remove from the speaker, and because it is just too easy to start surfing the web, checking email, checking your bank account, etc. while still convincing yourself you are "listening" to the lecture in the background.

3.  Because thousands of people can (and do) take these MOOCs, the discussion threads are extremely lengthy. Though I suppose they are meant to give the student a feeling of interactivity, I find them rather overwhelming and not worth the time -- especially since many of the comments are relatively devoid of useful content.

4. It is really fun, but weirdly intimidating, to be a student again.


Posted by Jessie Hill on June 5, 2013 at 10:04 PM in Information and Technology, Life of Law Schools, Teaching Law | Permalink | Comments (3) | TrackBack

Wednesday, April 24, 2013

On Policy and Plain Meaning in Copyright Law

As noted in my last post, there have been several important copyright decisions in the last couple months. I want to focus on two of them here: Viacom v. YouTube and UMG v. Escape Media. Both relate to the DMCA safe harbors for online providers who receive copyrighted material from their users - Section 512 of the Copyright Act. Their opposing outcomes illustrate the key point I want to make: separating interpretation from policy is hard, and I tend to favor following the statute rather than rewriting it when I don't like the policy outcome. This is not an earth-shattering observation - Solum and Chiang make a similar argument in their article on patent claim interpretation. Nevertheless, I think it bears some discussion with respect to the safe harbors.

For the uninitiated, 17 U.S.C. 512 states that "service providers" shall not be liable for "infringement of copyright" so long as they meet some hurdles. A primary safe harbor is in 512(c), which exempts providers from liability for "storage at the direction of a user of material that resides on a system" of the service provider.

To qualify, the provider must not know that the material is infringing, must not be aware of facts and circumstances from which infringing activity is apparent, and must remove the material if it obtains this knowledge or becomes aware of the facts or circumstances. Further, if the copyright owner sends notice to the provider, the provider loses protection if it does not remove the material. Finally, the provider might be liable if it has the right and ability to control the user activity, and obtains a direct financial benefit from it.

But even if the provider fails to meet the safe harbor, it might still evade liability. The copyright owner must still prove contributory infringement, and the defendant might have defenses, such as fair use. Of course, all of that litigation is far more costly than a simple safe harbor, so there is a lot of positioning by parties about what does and does not constitute safe activity.

This brings us to our two cases:

Viacom v. YouTube

This is an old case, from back when YouTube was starting. The district court recently issued a ruling once again finding that YouTube is protected by the 512(c) safe harbor. A prior appellate ruling remanded for district court determination of whether Viacom had any evidence that YouTube knew or had reason to know that infringing clips had been posted on the site. Viacom admitted that it had no such evidence, but instead argued that YouTube was "willfully blind" to the fact of such infringement, because its emails talked about leaving other infringing clips on the site - just not any that Viacom was alleging. The court rejected this argument, saying that it was not enough to show willful blindness as to Viacom's particular clips.

The ruling is a sensible, straightforward reading of 512 that favors the service provider.

UMG v. Escape Media

We now turn to UMG v. Escape Media. In a shocking ruling yesterday, the appellate division of the NY Supreme Court (yeah, they kind of name things backward there) held that sound recordings made prior to 1972 were not part of the Section 512 safe harbors. Prior to 1972, such recordings were not protected by federal copyright. Thus, if one copies them, any liability falls under state statute or common law, often referred to as "common law copyright."  Thus, service providers could be sued under any applicable state law that protected such sound recordings.

Escape Media argued that immunity for "infringement of copyright" meant common law copyright as well, thus preempting any state law liability if the safe harbors were met.

The court disagreed, ruling that a) "copyright" meant copyright under the act, and b) reading the statute to provide safe harbors for common law copyright would negate Section 301(c), which states that "any rights or remedies under the common law or statutes of any State shall not be annulled or limited by this title until February 15, 2067." The court reasoned that the safe harbor is a limitation of the common law, and thus not allowed if not explicit.

If this ruling stands, then the entire notice and takedown scheme that everyone relies on will go away for pre-1972 sound recordings, and providers may potentially be liable under 50 different state laws. Of course, there are still potential defenses under the common law, but doing business just got a whole lot more expensive and risky to provide services. So, while the sky has not fallen, as a friend aptly commented about this case yesterday, it is definitely in a rapidly decaying orbit.

Policy and Plain Meaning

This leads to the key point I want to make here, about how we read the copyright act and discuss it. Let's start with YouTube. The court faithfully applied the straightforward language of the safe harbors, and let YouTube off the hook. The statute is clear that there is no duty to monitor, and YouTube chose not to monitor, aggressively so.

And, yet, I can't help but think that YouTube did something wrong. Just reading the emails from that time period shows that the executives were playing fast and loose with copyright, leaving material up in order to get viewers. (By the way, maybe they had fair use arguments, but those don't really enter the mix.) Indeed, they had a study done that showed a large amount of infringement on the site. I wonder whether anyone at YouTube asked to see the underlying data to see what was infringing so it could be taken down. I doubt it.

I would bet that 95% of my IP academic colleagues would say, so what? YouTube is a good thing, as are online services for user generated content. Thus, we read the statute strictly, and provide the safe harbor.

This brings us to UMG v. Escape Media. Here, there was a colossal screw-up. It is quite likely that no one in Congress thought about pre-1972 sound recordings. As such, the statute was written with the copyright act in mind, and the only reasonable reading of Section 512 is that it applies to "infringement of copyright" under the Act. I think the plain meaning of the section leads to this conclusion. First, Section 512 refers to many defined terms, such as "copyright owner," which is defined as an owner of one of the exclusive rights under the copyright act. Second, the copyright act never uses "copyright" to refer to pre-1972 sound recordings that are protected by common law copyright. Third, expanding "copyright" elsewhere in the act to include "common law copyright" would be a disaster. Fourth, state statutes and common laws did not always refer to such protection as "common law copyright," instead covering protection under unfair competition laws. Should those be part of the safe harbor? How would we know if the only word used is copyright?

That said, I think the court's reliance on 301(c) is misplaced; I don't think that a reading of 512 that safe harbored pre-1972 recordings would limit state law. I just don't think that's what the statute says, unfortunately.

Just to be clear, this ruling is a bad thing, a disaster even. I am not convinced that it will increase any liability, but it will surely increase costs and uncertainty. If I had to write the statute differently, I would. I'm sure others would as well.

But the question of the day is whether policy should trump plain meaning when we apply a statute. The ReDigi case and the UMG case both involve statutes whose drafters did not foresee the policy implications downstream. Perhaps many might say yes, we should read the statute differently.

I'm pretty sure I disagree. For whatever reason - maybe the computer programmer in me - I have always favored reading the statute as it is and dealing with the bugs through fixes or workarounds. As I've argued with patentable subject matter, the law becomes a mess if you attempt to do otherwise.  ReDigi and UMG are examples of bugs. We need to fix or work around them. It irritates me to no end that Congress won't do so, but I have a hard time saying that the statutes should somehow mean something different than they say simply because it would be a better policy if they did. Perhaps that's why I prefer standards to rules - the rules are good, until they aren't. 

This is not to say I'm inflexible or unpragmatic. I'm happy to tweak a standard to meet policy needs. I've blogged before about how I think courts have misinterpreted the plain meaning of the CFAA, but I am nevertheless glad that they have done so to rein it in. I'm also often persuaded that my reading of a statute is wrong (or even crazy) even when I initially thought it was clear. I'd be happy for someone to find some argument that fixes the UMG case in a principled way. I know some of my colleagues look to the common law, for example, to solve the ReDigi problem. Maybe there is a common law solution to UMG. But until then, for me at least, plain meaning trumps policy.


Posted by Michael Risch on April 24, 2013 at 04:12 PM in Information and Technology, Intellectual Property, Web/Tech | Permalink | Comments (3) | TrackBack

Tuesday, April 16, 2013

Solving the Digital Resale Problem

As Bruce Willis's alleged complaints about not being able to leave his vast music collection to his children upon his death illustrate, modern digital media has created difficulties in secondary and resale markets. (I say alleged because the reports were denied. Side note: if news breaks on Daily Mail, be skeptical. And it's sad that Cracked had to inform Americans of this...).

This post describes a recent attempt to create such a market, and proposes potential solutions.

In the good old days, when you wanted to sell your old music, books, or movies, you did just that. You sold your CD, your paperback, or your DVD. This was explicitly legalized in the Copyright Act: 17 USC Section 109 says that: “...the owner of a particular copy or phonorecord lawfully made under this title, or any person authorized by such owner, is entitled, without the authority of the copyright owner, to sell or otherwise dispose of the possession of that copy or phonorecord.” As we'll see later, a phonorecord is the material object that holds a sound recording, like a CD or MP3 player.

But we don't live in the good old days. In many ways, we live in the better new days. We can buy music, books, and DVDs over the internet, delivered directly to a playback device, and often to multiple playback devices in the same household. While new format and delivery options are great, they create problems for content developers, because new media formats are easily copied. In the bad sort-of-old days, providers used digital rights management (or DRM) to control how content was distributed. DRM was so poorly implemented that it is now a dirty word, so much so that it was largely abandoned by Apple; it is, however, still used by other services, like Amazon Kindle eBooks. Providers also use contracts to limit distribution - much to Bruce Willis's chagrin. Indeed, Section 109(d) is clear that a contract can opt-out of the disposal right: “[Disposal rights] do not, unless authorized by the copyright owner, extend to any person who has acquired possession of the copy or phonorecord from the copyright owner, by rental, lease, loan, or otherwise, without acquiring ownership of it.”

But DRM is easily avoided if you simply transfer the entire device to another party. And contracts are not necessarily as broad as people think. For example, I have scoured the iTunes terms of service and I cannot find any limitation on the transfer of a purchased song. There are limitations on apps that make software a license and limit transfers, but the music and video downloads are described as purchases unless they are "rentals," and all of the “use” limitations are actually improvements in that they allow for multiple copies rather than just one. Indeed, the contract makes clear that if Apple kills off cloud storage, you are stuck with your one copy, so you had better not lose it. If someone can point me to a contract term where Apple says you have not “purchased” the music and cannot sell it, I would like to see that.

Enter ReDigi and the lawsuit against it. ReDigi attempted to set up a secondary market for digital works. The plaintiff was Capitol Records, so there was no contract privity, so this is a pure “purchase and disposal” case. A description from the case explains how it worked (in edited form here):

To sell music on ReDigi's website, a user must first download ReDigi's “Media Manager” to his computer. Once installed, Media Manager analyzes the user's computer to build a list of digital music files eligible for sale. A file is eligible only if it was purchased on iTunes or from another ReDigi user; music downloaded from a CD or other file-sharing website is ineligible for sale. After this validation process, Media Manager continually runs on the user's computer and attached devices to ensure that the user has not retained music that has been sold or uploaded for sale. However, Media Manager cannot detect copies stored in other locations. If a copy is detected, Media Manager prompts the user to delete the file. The file is not deleted automatically or involuntarily, though ReDigi's policy is to suspend the accounts of users who refuse to comply.

After the list is built, a user may upload any of his eligible files to ReDigi's “Cloud Locker,” an ethereal moniker for what is, in fact, merely a remote server in Arizona. ReDigi's upload process is a source of contention between the parties. ReDigi asserts that the process involves “migrating” a user's file, packet by packet — “analogous to a train” — from the user's computer to the Cloud Locker so that data does not exist in two places at any one time. Capitol asserts that, semantics aside, ReDigi's upload process “necessarily involves copying” a file from the user's computer to the Cloud Locker. Regardless, at the end of the process, the digital music file is located in the Cloud Locker and not on the user's computer. Moreover, Media Manager deletes any additional copies of the file on the user's computer and connected devices.

Once uploaded, a digital music file undergoes a second analysis to verify eligibility. If ReDigi determines that the file has not been tampered with or offered for sale by another user, the file is stored in the Cloud Locker, and the user is given the option of simply storing and streaming the file for personal use or offering it for sale in ReDigi's marketplace. If a user chooses to sell his digital music file, his access to the file is terminated and transferred to the new owner at the time of purchase. Thereafter, the new owner can store the file in the Cloud Locker, stream it, sell it, or download it to her computer and other devices. No money changes hands in these transactions. Instead, users buy music with credits they either purchased from ReDigi or acquired from other sales. ReDigi credits, once acquired, cannot be exchanged for money. Instead, they can only be used to purchase additional music.

ReDigi claimed that it was protected by 17 USC 109. After all, according to the description, it was transferring the work (the song) from the owner to ReDigi, and then to the new owner. Not so, said the court. As the court notes, Section 109 protects only the disposition of particular copies (phonorecords, really) of the work. And uploading a file and deleting the original is not transferring a phonorecord, because the statute defines a “phonorecord” as the physical medium in which the work exists. Transfer from one phonorecord to another is not the same as transferring a particular phonorecord. So, ReDigi could be a secondary market for iPods filled with songs, but not the songs disembodied from the storage media.
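The distinction the court draws can be made concrete in code. A digital "transfer" is, mechanically, a reproduction followed by a deletion; nothing like handing over a physical phonorecord ever occurs. Here is a minimal sketch of that point; the file names and contents are invented for illustration:

```python
import os
import tempfile

# A digital "transfer" dissected: the destination gets a brand-new copy of
# the bytes, and the source copy is then deleted. No single object moves.
workdir = tempfile.mkdtemp()
src = os.path.join(workdir, "song_original.mp3")
dst = os.path.join(workdir, "song_transferred.mp3")

original_bytes = b"fake audio data"
with open(src, "wb") as f:
    f.write(original_bytes)

# Step 1: reproduce the work in a new "phonorecord" (a new file).
with open(src, "rb") as f_in, open(dst, "wb") as f_out:
    f_out.write(f_in.read())

# Step 2: delete the original phonorecord.
os.remove(src)

with open(dst, "rb") as f:
    transferred_bytes = f.read()
```

The bytes at the destination are identical, but on the court's reading a new phonorecord was created in step 1 -- a reproduction that Section 109 does not excuse.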

As much as I want the court to be wrong, I think it is right here, at least on the narrow, literal statutory interpretation. The words say what they say. Even the court notes that this is an uncomfortable ruling: “[W]hile technological change may have rendered Section 109(a) unsatisfactory to many contemporary observers and consumers, it has not rendered it ambiguous.”

Once the court finds that transferring the song to ReDigi is an infringing reproduction, it's all downhill, and not in a good way. The court notably finds that there is no fair use. I think it is here that the court gets it wrong. Unlike the analysis of Section 109, the fair use analysis is short, unsophisticated, and devoid of any real factual analysis. I think this is ReDigi's best bet on appeal.

Despite my misgivings, ReDigi's position is not a slam dunk. After all, how can it truly know that a backup copy has not been made? Or that the file has not been copied to other devices? Or that the file won't simply be downloaded from cloud storage or even iTunes after it has been uploaded to ReDigi?

If ReDigi, which seemed to try to do a good job ensuring no residual copies, cannot form a secondary market, then what hope do we have? We certainly aren't going to get there with the statute we have, unless courts are much more willing to read a fair use into transfers. The real problem is that the statute works fine when the digital work (software, music, whatever) is stored in a single use digital product. When we start separating the “work” from the container, so that containers can hold many different works and one work might be shared on several containers all used by the same owner, all of the historical rules break down.

So, what do we do if we can't get the statute amended? I suspect people will hate my answer: a return to the dreaded DRM. A kinder, gentler, DRM. I think that DRM that allows content providers to recall content at will (or upon business closure) must go -- whether legislatively or regulatorily. It is possible, of course, for sophisticated parties to negotiate for such use restrictions (for example, access to databases), and to set pricing for differing levels of use based on those negotiations. That's what iTunes does with its "rentals."

But companies should not be allowed to offer content "for sale" if delivery and use is tied to a contract or DRM that renders that content licensed and not in control of buyers. This is simply false advertising that takes advantage of settled expectations of users, and well within the powers of the FTC, I believe.

But DRM can and should be used to limit copying and transferability. If transferability is allowed, then the DRM can ensure that the old user does not maintain copies. Indeed, if content outlets embraced this model, they might even create their own secondary markets to increase competition in the secondary market. In short, the solution to the problem, I believe, is going to be a technical one, and that might be a good thing for users who can now credibly show that they won't copy.

And DRM is what we are seeing right now. Apparently, ReDigi has reimplemented its service so that iTunes purchases are directly copied to a central location where they stay forever. From there, copies are downloaded to particular user devices pursuant to the iTunes agreement. This way, ReDigi acts as the digital rights manager. When a user sells a song, ReDigi cuts off access to the song for the selling user, and allows the buying user access without making a new copy of the song on its server. I presume that its media manager also attempts to delete all copies from the seller's devices.

Of course, this might mean that content, or at least transferring it, is a little more expensive than before. But let's not kid ourselves - the good old days weren't that good. You had to buy the whole CD, or maybe a single if one was available, but you could not pick and choose any song on any album. Books are heavy and bulky; you couldn't carry thousands of them around. And DVDs require a DVD player, which has several limitations compared to video files.

DRM may just be the price we pay for convenience and choice. We don't have to pay that price. Indeed, I buy most of my music on CD. And I get to put the songs where I want, and I suppose sell the CD if I want, though I never do. As singles start costing $1.50, it may make sense to buy the whole CD. Alas, these pricing issues are incredibly complex, which may take another post in the future.

Posted by Michael Risch on April 16, 2013 at 07:00 AM in Information and Technology, Intellectual Property, Web/Tech | Permalink | Comments (5) | TrackBack

Tuesday, April 09, 2013

Academics Go To Jail – CFAA Edition

Though the Aaron Swartz tragedy has brought some much needed attention to the CFAA, I want to focus on a more recent CFAA event—one that has received much less attention but might actually touch many more people than the case against Swartz.

Andrew “Weev” Auernheimer (whom I will call AA for short) was recently convicted under the CFAA and sentenced to 41 months and $73K restitution. Orin Kerr is representing him before the Third Circuit. I am seriously considering filing an amicus brief on behalf of all academics. In short, this case scares me in a much more personal way than the cases discussed in my prior CFAA posts. More after the jump.

Here’s the basic story, as described by Orin Kerr:

When iPads were first released, iPad owners could sign up for Internet access using AT&T. When they signed up, they gave AT&T their e-mail addresses. AT&T decided to configure their webservers to “pre load” those e-mail addresses when it recognized the registered iPads that visited its website. When an iPad owner would visit the AT&T website, the browser would automatically visit a specific URL associated with its own ID number; when that URL was visited, the webserver would open a pop-up window that was preloaded with the e-mail address associated with that iPad. The basic idea was to make it easier for users to log in to AT&T’s website: The user’s e-mail address would automatically appear in the pop-up window, so users only needed to enter in their passwords to access their account. But this practice effectively published the e-mail addresses on the web. You just needed to visit the right publicly-available URL to see a particular user’s e-mail address. Spitler [AA’s alleged co-conspirator] realized this, and he wrote a script to visit AT&T’s website with the different URLs and thereby collect lots of different e-mail addresses of iPad owners. And they ended up collecting a lot of e-mail addresses — around 114,000 different addresses — that they then disclosed to a reporter. Importantly, however, only e-mail addresses were obtained. No names or passwords were obtained, and no accounts were actually accessed.

Let me paraphrase this: AA went to a publicly accessible website, using publicly accessible URLs, and saved the results that AT&T sent back in response to that URL. In other words, AA did what you do every time you load up a web page. The only difference is that AA did it for multiple URLs, using sequential guesses at what those URLs would be.  There was no robots.txt file that I’m aware of (this file tells search engines which URLs should not be searched by spiders). There was no user notice or agreement that barred use of the web page in this manner. Note that I’m not saying such things should make the conduct illegal, but only that such things didn’t even exist here. It was just two people loading data from a website. Note that a commenter on my prior post asked this exact same question--whether "link guessing" was illegal--and I was noncommittal. I guess now we have our answer.
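For readers unfamiliar with the mechanics, the two technical points here -- sequential URL guessing and robots.txt -- are easy to sketch. The toy example below generates "guessed" URLs and checks them against a robots.txt policy without making any actual requests; the domain, paths, and policy are all hypothetical:

```python
from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt policy -- the kind of file that, per the post,
# did not exist on AT&T's site.
robots_txt = """
User-agent: *
Disallow: /accounts/
"""
rules = RobotFileParser()
rules.parse(robots_txt.splitlines())

# Sequential URL guessing, as in the AT&T incident: just incrementing an
# ID in a publicly accessible URL. No requests are made here.
guessed_urls = [f"https://example.com/accounts/{i}" for i in range(1000, 1005)]

# A polite crawler would skip anything the policy disallows.
fetchable = [u for u in guessed_urls if rules.can_fetch("ResearchBot", u)]
```

Under this hypothetical policy every guessed URL is disallowed; on AT&T's site, by contrast, there was no policy at all to check against.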

The government’s indictment makes the activity sound far more nefarious, of course. It claims that AA “impersonated” an iPad. This allegation is a bit odd: the script impersonated an iPad in the same way that you might impersonate a cell phone by loading http://m.facebook.com to load the mobile version of Facebook. Go ahead, try it and you’ll see – Facebook will think you are a cell phone. Should you go to jail?
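The "impersonation" at issue amounts to nothing more than what every HTTP client does: sending headers of its own choosing. A minimal sketch, using only the standard library and sending no actual request; the User-Agent string is an invented example, not one copied from any real device:

```python
from urllib.request import Request

# "Impersonating" a device is just declaring a User-Agent header. The
# string below is illustrative only.
mobile_ua = "Mozilla/5.0 (iPhone; CPU iPhone OS 7_0 like Mac OS X) Mobile"
req = Request("http://m.facebook.com/", headers={"User-Agent": mobile_ua})

# The server sees only whatever headers the client chose to send.
# (urllib normalizes stored header names to capitalized form.)
claimed_agent = req.headers.get("User-agent")
```

A server that tailors its response to the User-Agent, as AT&T's did, is simply trusting a self-reported label that any client can set.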

So, readers might say, what’s the problem here? AA should not have done what he did – he should have known that AT&T did not want him downloading those emails. Yeah, he probably did know that. But consider this: AA did not share the information with the world, as he could have. I am reasonably certain that if his intent was to harm users, we would never know that he did this – he would have obtained the addresses over an encrypted VPN and absconded. Instead, AA shared this flaw with the world. AT&T set up this ridiculously insecure system that allowed random web users to tie Apple IDs to email addresses through ignorance at best or hubris at worst. I don’t know if AA attempted to inform AT&T of the issue, but consider how far you got last time you contacted tech support with a problem on an ISP website. AA got AT&T’s attention, and the problem got fixed with no (known) divulgence of the records.

Before I get to academia, let me add one more point. To the extent that AA should have known AT&T didn’t desire this particular access, the issue is one of degree not of kind. And that is the real problem with the statute. There is nothing in the statute, absolutely nothing, that would help AA know whether he violated the law by testing this URL with one, five, ten, or ten thousand IDs.  Here’s one to try: click here for a link to a concert web page deep link using a URL with a numerical code. Surely Ticketmaster can’t object to such deep linking, right? Well, it did, and sued Tickets.com over such behavior. It claimed, among other things, that each and every URL was copyrighted and thus infringed if linked to by another. It lost that argument, but today it could just say that such access was unwanted.  For example, maybe Ticketmaster doesn’t like me pointing out its ridiculous argument in the Tickets.com case, making my link unauthorized. Or maybe I should have known because the Ticketmaster terms of service say that an express condition of my authorization to view the site is that I will not "Link to any portion of the Site other than the URL assigned to the home page of our site." That's right, Ticketmaster still thinks deep linking is unauthorized, and I suppose that means I risk criminal prosecution for linking to it. Imagine if I actually saved some of the data!

This is where academics come in. Many, many academics scrape. (Don’t stop reading here – I’ll get to non-scrapers below.) First, scraping is a key way to get data from online databases that are not easily downloadable. This includes, for example, scraping of the US Patent & Trademark Office site; although data is now available for mass download, that data is cumbersome, and scraper use is still common. That the PTO's data is public does not help matters. In fact, it might make it worse, since “unauthorized” access to government servers might receive enhanced penalties!

Academics (and non-academics) in other disciplines scrape websites for research as well. How are these academics to know that such scraping is disallowed? What if there is no agreement barring them from doing so? What if there is a web-wrap notice as broad as Ticketmaster's, purporting to bar such activities but with no consent by the user? The CFAA could send any academic to jail for ignoring such warnings—or worse—not seeing them in the first place. Such a prosecution would be preposterous, skeptics might say. I hope the skeptics are right, but I'm not hopeful. Though I can't find the original source, I recall Orin Kerr recounting how his prosecutor colleagues said the same thing 10 years ago when he argued the CFAA might apply to those who breach contracts, and now such prosecutions are commonplace.

Finally, non-scrapers are surely safe, right? Maybe it depends on whether they use Zotero. Thousands of people use it. How does Zotero get information about publications when the web site does not provide standardized citation data? You guessed it: a scraper. Indeed, a primary reason I don’t use Zotero is that the Lexis and Westlaw scrapers don’t work. But the PubMed importer scrapes. What if PubMed decided that it considers scraping of information unauthorized? Surely people should know this, right? If PubMed wanted people to have this data, it would provide it in Zotero-readable format. The fact that the information on those pages is publicly available is irrelevant; the statute makes no distinction. And if one does a lot of research, for example, checking 20 documents, downloading each, and scraping each page, the difference from AA is in degree only, not in kind.
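For the curious, this kind of citation scraping is mechanically trivial. The sketch below pulls citation fields out of an HTML page using only the standard library; the page itself and its meta-tag names are made up for illustration (though they resemble the Highwire-style tags many scholarly sites expose):

```python
from html.parser import HTMLParser

# A made-up sample of the kind of page a citation tool might scrape.
sample_page = """
<html><head>
<meta name="citation_title" content="An Example Article">
<meta name="citation_author" content="A. Scholar">
</head><body>...</body></html>
"""

class CitationScraper(HTMLParser):
    """Collects citation_* meta tags, the way reference managers do."""

    def __init__(self):
        super().__init__()
        self.metadata = {}

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        attrs = dict(attrs)
        name = attrs.get("name", "")
        if name.startswith("citation_"):
            self.metadata[name] = attrs.get("content")

scraper = CitationScraper()
scraper.feed(sample_page)
```

Nothing here is exotic: it is parsing a page the site already serves to every visitor, which is exactly why the "authorization" line is so hard to locate.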

The irony of this case is that the core conviction is only tangentially a problem with the statute (there are some ancillary issues that are a problem with the statute). “Unauthorized access” and even “exceeds authorized access” should never have been interpreted to apply to publicly accessible data on publicly accessible web sites. Since they have, then I am convinced that the statute is impermissibly broad, and must be struck down. At the very least it must be rewritten. 

Posted by Michael Risch on April 9, 2013 at 10:21 PM in Information and Technology, Web/Tech | Permalink | Comments (15) | TrackBack

Tuesday, March 05, 2013

The iPhone, not the eye, is the window into the soul


It is great to be back at Prawfsblawg this year.  Thanks to Dan and the gang for having me back.  For my first post this month, I wanted to point everyone to the most important privacy research of 2012.  The same paper qualifies as the most ignored privacy research of 2012, at least within legal circles.  It is a short paper that everyone should read.

The paper in question, Mining Large Scale Smart-Phone Data for Personality Studies, is by Gokul Chittaranjan, Jan Blom, and Daniel Gatica-Perez. Chittaranjan and co-authors brilliantly show that it is straightforward to mine data from smart-phones in an automated way so as to identify particular "Five Factor" personality types in a large population of users.  They did so by administering personality tests to 117 smartphone users, and then following the smartphone activities of those users for seventeen months, identifying the patterns that emerged.  The result was that each of the "Big Five" personality dimensions was associated with particular patterns of phone usage.  For example, extraverts communicated with more people and spent more time on the phone, highly conscientious people sent more email messages from their smartphones, and users of non-standard ring-tones tended to be those whom psychologists would categorize as open to new experiences.

There is a voluminous psychology literature linking scores on particular Big Five factors to observed behavior in the real world, like voting, excelling in workplaces, and charitable giving.  Some of the literature is discussed in much more detail here.  But the Chittaranjan et al. study provides a powerful indication of precisely why data-mining can be so powerful.  Data mining concerning individuals' use of machines is picking up personality traits, and personality predicts future behavior.  

The regularities observed via the analysis of Big Data demonstrate that you can aggregate something seemingly banal like smartphone data to administer surreptitious personality tests to very large numbers of people.  Indeed, it is plausible that studying observed behavior from smartphones is a more reliable way of identifying particular personality traits than existing personality tests themselves.  After all, it is basically costless for an individual to give false answers to a personality questionnaire. It is costly for an extravert to stop calling friends.  
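The statistical machinery behind such findings is not exotic. At its simplest, it is a correlation between a usage feature and a personality score, as in this toy sketch; the numbers are fabricated purely for illustration and do not come from the study:

```python
# Fabricated data: a usage feature (calls per day) paired with an
# extraversion score for five hypothetical users.
calls_per_day = [2, 5, 8, 11, 14]
extraversion = [1.0, 2.0, 3.0, 4.0, 5.0]

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from first principles."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

r = pearson(calls_per_day, extraversion)  # perfectly linear toy data
```

The real study uses richer features and more careful models, but the underlying point survives the simplification: mundane usage logs carry personality signal.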

Privacy law has focused its attention on protecting the contents of communications or the identities of the people with whom an individual is communicating.  The new research suggests that -- to the extent that individuals have a privacy interest in the nature of their personalities -- an enormous gap exists in the present privacy framework, and cell phone providers and manufacturers are sitting on (or perhaps already using) an information gold mine.  

It's very unlikely that the phenomenon that Chittaranjan et al. identify is limited to phones.  I expect that similar patterns could be identified from analyzing peoples' use of their computers, their automobiles, and their television sets.  The Chittaranjan et al. study is a fascinating, tantalizing, and perhaps horrifying early peek at life in a Big Data world.

Posted by Lior Strahilevitz on March 5, 2013 at 09:03 AM in Article Spotlight, Information and Technology, Web/Tech | Permalink | Comments (0) | TrackBack

Wednesday, January 30, 2013

Does Not Translate?: How to Present Your Work to Real People

Recently I've agreed to give talks on social media law issues to "real" people. For example, one of the breakfast talks I've been asked to give is aimed at "judges, city and county commissioners, business leaders and UF administrators and deans." Later, I'm giving a panel presentation on the topic to prominent women alumni of UF. My dilemma is that I want to strike just the right tone and present information at just the right level for these audiences. But I'm agonizing over some basic questions. Can I assume that every educated person has at least an idea of how social media work? What segment of the information that I know about Social Media Law and free speech would be the most interesting to these audiences, and should I just skip a rock over the surface of the most interesting cases and incidents, accompanied by catchy images?  How concerned should I be about the offensive potential of talking about the real facts of disturbing cases for a general but educated audience? As a Media Law scholar and teacher, I'm perfectly comfortable talking about the "Fuck the Draft" case or presenting slides related to the heart-wrenching cyberbullying case of Amanda Todd that contain the words "Flash titties, bitch." But can I talk about this at breakfast? If I can, do I need to give a disclaimer first? And for a general audience, do I want to emphasize the disruptive potential of social media speech, or do I have an obligation to balance that segment of the presentation with the positive aspects for free speech? And do any of you agonize over such things every time you speak to a new audience?

Anyway, translation advice is appreciated. I gave our graduation address in December, and I ended up feeling as if I'd hit the right note by orienting the address around a memorable story from history that related to the challenges of law grads today. But the days and even the minutes preceding the speech involved significant agonizing, which you'd think someone whose job involves public speaking on a daily basis wouldn't experience.

Posted by Lyrissa Lidsky on January 30, 2013 at 10:07 AM in Current Affairs, First Amendment, Information and Technology, Lyrissa Lidsky, Teaching Law | Permalink | Comments (3) | TrackBack

Monday, December 10, 2012

Big Data, Privacy, and Insurers: Forget the web, Flo’s the one to watch.

    At least within the corner of the web that I frequent, it seems that I cannot go more than a few pages without running into articles discussing the never-ending growth of the Big Data industry, the death of online privacy, and how long it will be until we are all subject to 1984-esque surveillance.  These issues have been particularly interesting to me, given that, like many of us, I maintain a presence on a number of social media sites.  If at all possible, I would prefer to control who has access to the embarrassing high school yearbook photos that were posted to my Facebook wall, my Amazon.com browsing history, and the contents of the Christmas list I sent to my family.  Even when I have given my consent to certain entities to access this information, I'd like to restrict how they use this data, limit its transferability, and have some type of assurance that adequate security measures have been put into place to protect my data.  While I recognize that the dissemination of this information would, in most cases, have little to no detrimental impact on my life, the ease with which third parties could aggregate data about me makes me quite uneasy. The public uproar that results every time Facebook changes its privacy settings establishes that my feelings are widely shared.  It is no surprise that the law’s regulation of web-based information has become one of the hotter topics in politics and legal academia (I've particularly enjoyed a forthcoming piece written by one of my colleagues: Prof. Bedi’s Facebook and Interpersonal Privacy).

    While there are good reasons that the data privacy discussion has centered on the Internet, I have found myself wondering whether this focus has diverted attention away from the rampant expansion of offline data collection.  Given my scholarly interests, it is unsurprising that the best example of this phenomenon that I can point to comes from the insurance industry.

     Insurance companies, by their very nature, have an insatiable appetite for data.  The more information they collect about their customers, the better they can estimate the odds of having to pay out on their policies and set their rates accordingly.  While insurers have always been hungry for information, their data collection efforts (particularly in casualty lines) have traditionally been limited to what the applicant discloses in the insurance application and public records.

    Recent developments in the auto insurance industry may (at least in my mind) herald the beginning of a new era of aggressive approaches to data collection.  Over the past two years, Progressive has increasingly offered consumers the opportunity to reduce their premiums if they agree to allow Progressive to monitor their driving habits via wireless technology (the “Snapshot” discount).  While Progressive’s observation period is limited in both duration and amount of data collected (e.g., braking habits are recorded, GPS data is not), it is easy to see how market incentives will push auto insurers to try to collect increasing amounts of data about—or continuously monitor—their policyholders.  Further, if such programs are widely adopted throughout the industry, consent to monitoring could become a market-imposed mandatory condition for obtaining coverage.  Finally, there do not appear to be any reasons why this type of data collection would not spread to other lines of casualty insurance.

     While there are factors that will limit the expansion of this trend (collection and processing costs, state insurance regulations, social pressures), I anticipate that we have only seen the tip of the iceberg when it comes to insurers' taking an active approach towards data.  I will save my thoughts on why this type of data collection is particularly worrisome (as well as its potential upside) for another post.

Posted by Max Helveston on December 10, 2012 at 12:52 AM in Information and Technology | Permalink | Comments (4) | TrackBack

Thursday, November 08, 2012

Cease and Desist

For nearly 10 years, scholars, commentators, and disappointed downloaders have criticized the now-abandoned campaign of the Recording Industry Association of America (RIAA) to threaten litigation against, and in some cases sue, downloaders of unauthorized music. The criticisms follow two main themes. First, demand letters, which mention statutory damages of up to $150,000 per infringed work (if the infringement is willful), often lead to settlements of $2,000 - $3,000. A back-of-the-envelope cost-benefit analysis would suggest this is a reasonable response from the recipient if $150,000 is a credible threat, but for those who conclude that information is free and someone must challenge these cases, the result is frustrating.

Second, it has been argued that the statutory damages themselves are unconstitutional, at least as applied to downloaders, because they are completely divorced from any actual harm suffered by the record labels. The constitutional critique has been advanced by scholars like Pam Samuelson and Tara Wheatland, accepted by a district court judge in the Tenenbaum case, dodged on appeal by the First Circuit, but rejected outright by the Eighth Circuit. My intuition is that the Supreme Court would hold that Congress has the authority to craft statutory damages sufficiently high to deter infringement, and that there's sufficient evidence that Congress thought its last increase in statutory damages would accomplish that goal.

We could debate that, but I have something much more controversial in mind. I hope to convince you that the typical $3,000 settlement is the right result, at least in file-sharing cases.

The Copy Culture survey indicates that the majority of respondents who support a penalty support fines for unauthorized downloading of a song or movie. Of those who support fines, 32% support a fine of $10 or less, 43% support fines of up to $100, 14% support fines of up to $1,000, 5% support higher fines, 3% think fines should be context sensitive, and 3% are unsure. The average max fine for the top three groups is $209. Let's cut it in half, to $100, because roughly half of survey respondents were opposed to any penalty.
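The $209 figure can be checked directly. A quick sketch of the arithmetic, using only the percentages quoted above (the variable names are mine):

```python
# Copy Culture survey: % of fine-supporters backing each maximum fine,
# as quoted above (32% up to $10, 43% up to $100, 14% up to $1,000).
supporters = {10: 32, 100: 43, 1000: 14}  # max fine ($) -> % support

total_pct = sum(supporters.values())  # the top three groups cover 89%
avg_max_fine = sum(fine * pct for fine, pct in supporters.items()) / total_pct
print(round(avg_max_fine))  # 209
```

Halving that, on the theory that roughly half of all respondents opposed any penalty, lands at about $100 per file.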

How big is the typical library of "illegally" downloaded files? 10 songs? 100 songs? 1,000? The Copy Culture study reports the following from survey respondents who own digital files, by age group:

18-29: 406 files downloaded for free

30-49: 130 files downloaded for free

50-64: 60 files downloaded for free

65+: 51 files downloaded for free

In the two cases that the RIAA actually took to trial, the labels argued that the defendants had each downloaded over 1,000 songs, but sued over 30 downloads in one case, and 24 downloads in the other. As I see it, if you're downloading enough to catch a cease and desist letter, chances are good that you've got at least 30 "hot" files on your hard drive.

You can see where I'm going here. If the average target of a cease and desist letter has 30 unauthorized files, and public consensus centers around $100 per unauthorized file, then a settlement offer of $3,000 is just about right.

Four caveats. First, maybe the Copy Culture survey is not representative of public opinion, and that number should be far lower than $100. Second, misfires happen with cease and desist letters: sometimes, individuals are mistargeted. One off-the-cuff response is to have the RIAA pay $3,000 to every non-computer user and the estate of every dead grandma who gets one of these letters.

Third, this doesn't take fair use into account, and thus might not be a fair proxy for many other cases. For example, the Righthaven litigation seems entirely different to me - reproducing a news story online seems different than illegally downloading a song instead of paying $1, in part because the news story is closer to copyright's idea line, where more of the content is likely unprotectable, and because the redistribution of news is more likely to be fair use.

Fourth, it doesn't really deal with the potentially unconstitutional / arguably stupid possibility that some college student could be ordered to pay $150,000 per download, if a jury determines he downloaded willfully. I'd actually be happy with a rule that tells the record labels they can only threaten a maximum damage award equal to the average from the four jury determinations in the Tenenbaum and Thomas-Rasset cases. That's still $43,562.50 per song. Round it down to the non-willful statutory cap, $30,000, and I still think that a $3,000 settlement is just about perfect.
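The $43,562.50 average can be reconstructed from the four verdicts. The per-song figures below are my own back-calculation from the publicly reported awards (Tenenbaum: $675,000 over 30 songs; the three Thomas-Rasset trials: $222,000, $1,920,000, and $1,500,000, each over 24 songs), not numbers stated in this post:

```python
# Per-song awards from the four file-sharing jury verdicts (my reconstruction):
per_song = [
    675_000 / 30,    # Tenenbaum: $22,500 per song
    222_000 / 24,    # Thomas-Rasset I: $9,250 per song
    1_920_000 / 24,  # Thomas-Rasset II: $80,000 per song
    1_500_000 / 24,  # Thomas-Rasset III: $62,500 per song
]
average_award = sum(per_song) / len(per_song)
print(average_award)  # 43562.5
```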

Now tell me why I'm crazy. 

Posted by Jake Linford on November 8, 2012 at 09:30 AM in Information and Technology, Intellectual Property, Music, Web/Tech | Permalink | Comments (1) | TrackBack

Thursday, October 25, 2012

Copyright's Serenity Prayer

I recently discovered an article by Carissa Hessick, where she argues that the relative ease of tracking child pornography online may lead legislators and law enforcement to err in two ways. First, law enforcement may pursue the more easily detected possession of child pornography at the expense of pursuing actual abuse, which often happens in secret and is difficult to detect. Second, legislators may be swayed to think that catching child porn possessors is as good as catching abusers, because the former either have abused, or will abuse in the future. Thus, sentences for possession often mirror sentences for abuse, and we see a potential perversion of the structure of enforcement that gives a false sense of security about how much we are doing to combat the problem.

With the caveat that I know preventing child abuse is much, much more important than preventing copyright infringement, I think the ease of detecting unauthorized Internet music traffic may also have troubling perverse effects.

When I was a young man, copying my uncle's LP collection so I could take home a library of David Bowie cassette tapes, there was no way Bowie or his record label would ever know. The same is true today, even though they now make turntables that will plug right into my computer and give me digital files that any self-respecting hipster would still disdain, but at least require me to flip a vinyl disc as my cost of copying.

On the other hand, it's much easier to trace free-riding that occurs online. That was part of what led to the record industry's highly unpopular campaign against individual infringers. Once you can locate the individual infringer, you can pursue infringement that used to be "under the radar." The centralized, searchable nature of the Internet also made plausible Righthaven's disastrous campaign against websites copying news stories, and the attempt by attorney Blake Field to catch Google infringing his copyright in posted material by crawling his website with automated data gathering programs.

What if copyright owners are chasing the wrong harm? For example, one leaked RIAA study suggests that while a noticeable chunk of copyright infringement occurs via p2p sharing, it's not the largest chunk. While the RIAA noted that in 2011, 6% of unauthorized sharing (4% of total consumption) happened in locker services like Megaupload, and 23% (15%) happened via p2p, 42% (27%) of unauthorized acquisition was done by burning and ripping CDs from others, and another 29% (19%) happened through face-to-face hard drive trading. Offline file sharing is apparently more prevalent than the online variety, but it is much more difficult to chase. So it may be that copyright holders chase the infringement they can find, rather than the infringement that most severely affects the bottom line.
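Summing the leaked figures makes the asymmetry concrete. A minimal sketch, using only the shares of unauthorized sharing quoted above (the category labels are mine):

```python
# Shares of unauthorized music acquisition in 2011, per the leaked
# RIAA figures quoted above (percent of unauthorized sharing).
online = {"locker services": 6, "p2p": 23}                        # traceable
offline = {"burning/ripping CDs": 42, "hard-drive trading": 29}   # hard to detect

print(sum(online.values()))   # 29 -> the infringement copyright owners can find
print(sum(offline.values()))  # 71 -> the larger share they largely cannot
```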

In a way, leaning on the infringement they can detect is reminiscent of the oft-repeated "Serenity Prayer," modified here for your contemplation:

God, grant me the serenity to accept the infringement I cannot find,
The courage to crush the infringement I can,
And the wisdom to know the difference.

All this brings me back to the friends and family question. The study on Copy Culture in the U.S. reports that roughly 80% of the adults owning music files think it's okay to share with family, and 60% think it's okay to share with friends. In addition, the Copyright Act specifically insulates friends-and-family sharing in the context of performing or displaying copyrighted works to family and close friends in a private home (17 USC s. 101, "publicly"). Thus, there is some danger in going after friends-and-family sharing. If the friends-and-family line is the right line, can we at least feel more comfortable that someone to whom I'm willing to grant physical access to my CD library is a "real" friend, unlike my collection of Facebook friends and acquaintances, some of whom will never get their hands on my vinyl copy of Blues and Roots?

Posted by Jake Linford on October 25, 2012 at 10:30 AM in Information and Technology, Intellectual Property, Music, Web/Tech | Permalink | Comments (4) | TrackBack

Wednesday, October 10, 2012

Friends

Hello all. Glad to be back at Prawfsblawg for another round of blogging. I'm looking forward to sharing some thoughts about entertainment contracts, the orphan works problem in copyright, and the new settlement between Google and several publishers over Google Books.

Today, I want to talk a bit about file-sharing and friendship. A recent study asked U.S. and German citizens whether they thought it was "reasonable" to share unauthorized, copyrighted files with family, with friends, and in several different online contexts. Perhaps unsurprisingly, respondents in the 18-29 range responded more favorably to file sharing than older respondents in every context. What interests me is that respondents of every age see a sharp difference between sharing files with friends and posting a file on Facebook. We call our Facebook contacts "friends," but I'm curious why the respondents to this study made the distinction between sharing with friends and sharing on Facebook. I have a few inchoate thoughts, and I'd love to hear what you think.

Megan Carpenter wrote an interesting article about the expressive and personal dimension of making mix tapes. I grew up in the mix tape era as well, and remember well the emotional sweat that I poured into collections of love songs made for teenage paramours in the hopes of sustaining doomed long-distance romances. Carpenter correctly argues that there is something personal about that act, and it seems reasonable that it would fall outside the reach of the Copyright Act.

I also remember copying my uncle's entire collection of David Bowie LPs onto cassette tapes when I was in junior high. In that instance, music moved through family connections, and in my small town in Wyoming, there were no cassettes from the Bowie back catalog on the shelves of the local music store. But the only effort involved in making those cassettes was turning the LP at the end of a side. Less expressive, but within a fairly tight social network.

A properly functioning copyright system might reasonably allow for these uses, and still sanction a decision to post my entire Bowie collection on Facebook, or through a torrent. I'm skeptical of any definition of "friends and family" so capacious that it would include Facebook friends, and I suspect that many people realize now, if they didn't then, that what constitutes a face-to-face friend is different than what constitutes a Facebook friend, but you may have a different impression. I hope you'll share it here, whatever it is.

Posted by Jake Linford on October 10, 2012 at 12:30 PM in Information and Technology, Intellectual Property, Music | Permalink | Comments (4) | TrackBack

Thursday, October 04, 2012

TPRC Celebrates 40 Years of Research in Telecom

Two weeks ago the Telecommunications Policy Research Conference (TPRC) had a great event to celebrate its 40th year of delving into communications, information and Internet policy issues (I'm a member of the program committee so, yes, this is a shameless plug).  What I enjoy most about TPRC is that it is truly interdisciplinary; that should come as a relief to anyone who's been in a room filled only with lawyers--bless our hearts.  The conference brings together scholars from all fields as well as policy makers and private and nonprofit practitioners.  There were many outstanding sessions, including a Friday evening panel (soon available on video) about The Next Digital Frontier with speakers straight out of the "who's who" of telecom: Eli Noam (Columbia), David Clark (MIT), Gigi Sohn (Public Knowledge) and Thomas Hazlett (GMU).

There is much more work of note; I'll single out a few articles after the jump, and I encourage you to look at the TPRC Program files for additional articles of interest.  Also, around March keep your eyes open for next year's call for papers.  I will still be on the program committee so, in case you're interested, you should know I'm highly motivated by gifts of chocolate (dark preferred).

As mentioned, the TPRC website has the full program of presented articles so be sure to check it out.  I particularly enjoyed the work of the legal and economic scholars--and not just because they made the math easier than the engineers did, but that didn't hurt.  Three pieces that come to mind are Payment Innovation at the Content/Carriage Interface by James Speta, American Media Concentration Trends in Global Context: A Comparative Analysis by Eli Noam and Political Drivers and Signaling in Independent Agency Voting: Evidence from the FCC by Adam Candeub and Eric Hunnicutt.

First, if you haven't exhausted your interest in net neutrality issues, take a look at Speta's article that considers payment innovation at the customer level as a means by which congestion may be resolved in a content neutral manner.  This is a highly topical piece as current net neutrality regulation is arguably on shaky jurisdictional ground.  Second, my friend Eli Noam, who never fails to intrigue, shared some counterintuitive observations from a multi-year, 30 country research project that tracks concentration levels in 13 communications industries.  And third, Candeub and Hunnicutt make a welcome, empirical entry in a largely qualitative arena by quantifying the effects that party affiliation (of FCC Commissioners, Congress and the Executive) has on agency decision making.  It's really a must read for anyone interested in the areas of communications, administrative law and political economy (and who isn't!).

Finally, a shout out to my fellow blogger Rob Howse who recently wrote on our need to be more patient with each other when we accidentally hit "Reply to All."  The conference also featured some innovation demonstrations and, Rob, I have just the plugin for you!  The product is "Privicons" and as self-described (because I could not make this up):

Unlike more technical privacy solutions like tools that use code to lock down emails, Privicons relies on an iconographic vocabulary informed by norms-based social signals to influence users' choices about privacy.

In other words, with this plugin you can send a graphic reminder to email readers that they should "act nice."  I think I'll send some Privicons to my students right around evaluation time.

Posted by Babette Boliek on October 4, 2012 at 09:41 AM in Information and Technology | Permalink | Comments (0) | TrackBack

Wednesday, July 18, 2012

Legal Education in the Digital Age

With the latest news of U-Va. joining a consortium of schools  promoting online education, it seems only a matter of time before law schools will have to confront the possibility of much larger chunks of the educational experience moving into the virtual world.  Along with Law 2.0 by David I.C. Thomson, there is now Legal Education in the Digital Age, edited by Ed Rubin at Vanderbilt.  The book is primarily about the development of digital course materials for law school classes, with chapters by Ed Rubin, John Palfrey, Peggy Cooper Davis, and Larry Cunningham, among others.  The book comes out of a conference hosted by Ron Collins and David Skover at Seattle U.  My contribution follows up on my thoughts about the open source production of course materials, which I have previously written about here and here.  You can get the book from Cambridge UP here, or at Amazon in hardcover or on Kindle.

One question from the conference was: innovation is coming, but where will it come from?  Some possibilities:

  • Law professors
  • Law schools and universities
  • Legal publishers
  • Outside publishers
  • Tech companies such as Amazon or Apple
  • SSRN and BePress
  • Some combination(s) of these

I think we all agree that significant change is coming down the pike.  But what it ultimately will look like is still very much up in the air.  What role will law professors play?

Posted by Matt Bodie on July 18, 2012 at 05:24 PM in Books, Information and Technology, Life of Law Schools, Web/Tech | Permalink | Comments (8) | TrackBack

Tuesday, July 03, 2012

How Not to Criminalize Cyberbullying

My co-author Andrea Pinzon Garcia and I just posted our essay, How Not to Criminalize Cyberbullying, on SSRN.  In our essay, we provide a sustained constitutional critique of the growing body of laws criminalizing cyberbullying. These laws typically proceed by either modernizing existing harassment and stalking laws or crafting new criminal offenses. Both paths are beset with First Amendment perils, which our essay illustrates through 'case studies' of selected legislative efforts. Though sympathetic to the aims of these new laws, we contend that reflexive criminalization in response to tragic cyberbullying incidents has led lawmakers to conflate cyberbullying as a social problem with cyberbullying as a criminal problem, leading to pernicious consequences. The legislative zeal to eradicate cyberbullying potentially produces disproportionate punishment of common childhood wrongdoing. Furthermore, statutes criminalizing cyberbullying are especially prone to overreaching in ways that offend the First Amendment, resulting in suppression of constitutionally protected speech, misdirection of prosecutorial resources, misallocation of taxpayer funds to pass and defend such laws, and the blocking of more effective legal reforms. Our essay attempts to give legislators the First Amendment guidance they need to distinguish the types of cyberbullying that must be addressed by education, socialization, and stigmatization from those that can be remedied with censorship and criminalization.
To see the abstract or paper, please click here or here.

Posted by Lyrissa Lidsky on July 3, 2012 at 03:44 PM in Article Spotlight, Constitutional thoughts, Criminal Law, Current Affairs, First Amendment, Information and Technology, Lyrissa Lidsky, Web/Tech | Permalink | Comments (0) | TrackBack

Thursday, June 07, 2012

The Virtual Honesty Box

As a fan of comic book art, I'm often thrilled to encounter areas where copyright or trademark law and comic books intersect. As is the case in other media, the current business models of comic book publishers and creators have been threatened by the ability of consumers to access their work online without paying for it. Many comic publishers are worried about easy migration of content from paying digital consumers to non-paying digital consumers. Of course, scans of comics have been making their way around the internet on, or sometimes before, a given comic's on-sale date for some time now. As in other industries, publishers have dabbled with DRM, and publishers have embraced different (and somewhat incompatible) methods for providing consumers with authorized content. Publishers' choices sometimes lead to problems with vendors and customers, as I discuss a bit below.

While services like Comixology offer a wide selection of content from most major comics publishers, they are missing chunks of both the DC Comics and Marvel Comics catalogues. DC entered a deal to distribute 100 of its graphic novels (think multi-issue collections of comic books) exclusively via Kindle. Marvel Comics subsequently struck a deal to offer "the largest selection of Marvel graphic novels on any device" to users of the Nook. 

Sometimes exclusive deals leave a bad taste in the mouths of other intermediaries. DC's graphic novels were pulled from Barnes & Noble shelves because the purveyor of the Nook was miffed. Independent publisher Top Shelf is an outlier, offering its books through every interface and intermediary it can. But to date, most publishers are trying to make digital work as a complement to, and not a replacement for, print.

Consumers are sometimes frustrated by a content-owner's choice to restrict access, so much so that they feel justified engaging in "piracy." (Here I define "piracy" as acquiring content through unauthorized channels, which will almost always mean without paying the content owner.) Some comics providers respond with completely open access. Mark Waid, for example, started Thrillbent Comics with the idea of embracing digital as digital, and in a manner similar to Cory Doctorow, embracing "piracy" as something that could drive consumers back to his authorized site, even if they didn't pay for the content originally.

I recently ran across another approach from comic creators Leah Moore and John Reppion. Like Mark Waid, Moore and Reppion have accepted, if not embraced, the fact that they cannot control the flow of their work through unauthorized channels, but they still assert a hope, if not a right, that they can make money from the sales of their work. To that end, they introduced a virtual "honesty box," named after the clever means of collecting cash from customers without monitoring the transaction. In essence, Moore and Reppion invite fans who may have consumed their work without paying for it to even up the karmic scales. This response strikes me as both clever and disheartening.

I'll admit my attraction to perhaps outmoded content-delivery systems -- I also have unduly fond memories of the 8-track cassette -- but I'm disheartened to hear that Moore and Reppion could have made roughly $5,500 more working minimum wage jobs last year. Perhaps this means that they should be doing something else, if they can't figure out a better way to monetize their creativity in this new environment. Eric Johnson, for one, has argued that we likely don't need legal or technological interventions for authors like Moore and Reppion in part because there are enough creative amateurs to fill the gap. The money in comics today may not be in comics at all, but in licensing movies derived from those comics. See, e.g., Avengers, the.

I hope Mark Waid is right, and that "piracy" is simply another form of marketing that will eventually pay greater dividends for authors than fighting piracy. And perhaps Moore and Reppion should embrace "piracy" and hope that the popularity of their work leads to a development deal from a major film studio. Personally, I might miss the days when comics were something other than a transparent attempt to land a movie deal.

As for the honesty box itself? Radiohead abandoned the idea with its most recent release, The King of Limbs, after the name-your-price model adopted for the release of In Rainbows had arguably disappointing results: according to one report, 60% of consumers paid nothing for the album. I can't see Moore and Reppion doing much better, but maybe if 40% of "pirates" kick a little something into the virtual honesty box, that will be enough to keep Moore and Reppion from taking some minimum wage job where their talents may go to waste.

Posted by Jake Linford on June 7, 2012 at 09:00 AM in Books, Film, First Amendment, Information and Technology, Intellectual Property, Music, Property, Web/Tech | Permalink | Comments (3) | TrackBack

Friday, June 01, 2012

Oracle v. Google - The Other Shoe Drops

For those of you following the Oracle v. Google case, as I predicted here, the court has ruled that the APIs that Google copied are not copyrightable - at least not in the form that they were used. The case is basically dismissed with no remedy to Oracle.

Posted by Michael Risch on June 1, 2012 at 03:24 PM in Information and Technology, Intellectual Property | Permalink | Comments (0) | TrackBack

Thursday, May 31, 2012

A Coasean Look at Commercial Skipping...

Readers may have seen that DISH has sued the networks for declaratory relief (and was promptly cross-sued) over some new digital video recorder (DVR) functionality. The full set of issues is complex, so I want to focus on a single issue: commercial skipping. The new DVR automatically removes commercials when playing back some recorded programs. Another company tried this many years ago, but was brow-beaten into submission by content owners. Not so for DISH. In this post, I will try to take a look at the dispute from a fresh angle.

Many think that commercial skipping implicates derivative work rights (that is, transformation of a copyrighted work). I don't think so. The content is created separately from the commercials, and different commercials are broadcast in different parts of the country. The whole package is probably a compilation of several works, but that compilation is unlikely to be registered with the Copyright Office as a single work. Also, copying the work of only one author in the compilation is just copying of the subset, not creating a derivative work of the whole.

So, if it is not a derivative work, what rights are at stake? I believe that it is the right to copy in the first place in a stored DVR file. This activity is so ubiquitous that we might not think of it as copying, but it is. The Copyright Act says that the content author has the right to decide whether you store a copy on your disk drive, absent some exception.

And there is an exception - namely fair use. In the famous Sony v. Universal Studios case, the Court held that "time shifting" is a fair use by viewers, and thus sellers of the VCR were not helping users infringe. Had the Court held otherwise, the VCR would have been enjoined as an agent of infringement, just like Grokster was.

I realize that this result is hard to imagine, but Sony was 5-4, and the initial vote had been in favor of finding infringement. Folks can debate whether Sony intended to include commercial skipping or not. At the time, remote controls were rare, so skipping a recorded commercial meant getting off the couch. It wasn't much of an issue. Even now, advertisers tolerate the fact that people usually fast forward through commercials, and viewers have always left the TV to go to the bathroom or kitchen (hopefully not at the same time!). 

But commercial skipping is potentially different, because there is zero chance that someone will stop to watch a catchy commercial or see the name of a movie in the black bar above the trailer as it zooms by. I don't intend to resolve that debate here. A primary reason I am skipping the debate is that fair use tends to be a circular enterprise. Whether a use is fair depends on whether it reduces the market possibilities for the owner. The problem is, the owner only has market possibilities if we say they do. For some things, we may not want them to have a market because we want to preserve free use. Thus, we allow copying via a DVR and VCR, even if content owners say they would like to charge for that right.

Knowing when we should allow the content owner to exploit the market and when we should allow users to take away a market in the name of fair use is the hard part. For this reason, I want to look at the issue through the lens of the Coase Theorem. Coase's idea, at its simplest, is that if parties can bargain (which I'll discuss below), then it does not matter with whom we vest the initial rights. The parties will eventually get to the outcome that makes each person best off given the options, and the only difference is who pays.

One example is smoking in the dorm room. Let's say that one roommate smokes and the other does not. Regardless of which roommate you give the right to, you will get the same amount of smoking in the room. The only difference will be who pays. If the smoker has the right to smoke, then the non-smoker will either pay the smoker to stop or will leave during smoking (or will negotiate a schedule). If you give the non-smoker the right to a smoke-free room, then the smoker will pay to smoke in the room, will smoke elsewhere, or the parties will negotiate a schedule. Assuming non-strategic bargaining (no hold-ups) and adequate resources, the same result will ensue because the parties will get to the level where the combination of their activities and their money makes them the happiest. The key is to separate the analysis from normative views about smoking to determine who pays.
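The roommate example can be reduced to a tiny computation. The valuations below are my own illustration (not from any source): whatever numbers you plug in, the amount of smoking comes out the same under either entitlement, and only the direction of payment changes.

```python
def coase_outcome(smoker_value, clean_air_value, right_holder):
    """Frictionless-bargaining outcome: (does smoking happen, who pays).

    The activity occurs iff it is worth more to the smoker than clean
    air is worth to the roommate -- regardless of who holds the right.
    """
    smoking = smoker_value > clean_air_value
    if smoking:
        # Smoking happens; the smoker pays only if the roommate held the right.
        payer = "smoker" if right_holder == "non-smoker" else "nobody"
    else:
        # No smoking; the roommate pays only if the smoker held the right.
        payer = "non-smoker" if right_holder == "smoker" else "nobody"
    return smoking, payer

# Same smoking level either way; only the payment direction flips.
assert coase_outcome(100, 60, "smoker") == (True, "nobody")
assert coase_outcome(100, 60, "non-smoker") == (True, "smoker")
```

With the valuations reversed (say, 40 versus 60), no smoking occurs under either allocation, and it is the non-smoker who pays when the smoker holds the right.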

Now, let's apply this to the DVR context. If we give the right to skip commercials to the user, then several things might happen. Advertisers will advertise less or pay less for advertising slots. Indeed, I suspect that one reason why ads for the Super Bowl are so expensive, even in a down economy, is that not only are there a lot of viewers, but that those viewers are watching live and not able to skip commercials. In response, broadcasters will create less content, create cheaper content, or figure out other ways to make money (e.g. charging more for view on demand or DVDs). Refusing to broadcast unless users pay a fee is unlikely based on current laws. In short, if users want more and better content, they will have to go elsewhere to get it - paying for more channels on cable or satellite, paying for video on demand, etc. Or, they will just have less to watch.

If we give the right to stop commercial skipping to the broadcaster, then we would expect broadcasters to broadcast the same mix they have in the past. Viewers will pay for the right to skip commercials. This can be done as it is now, through video on demand services like Netflix, but that's not the only model. Many broadcasters allow for downloading via the satellite or cable provider, which allows the content owner to disable fast forwarding. Fewer commercials, but you have to watch them. Or, in the future, users could pay a higher fee to the broadcaster for the right to skip commercials, and this fee would be passed on to content owners.

These two scenarios illustrate a key limit to the Coase Theorem. To get to the single efficient solution, transactions costs must be low. This means that the parties must be able to bargain cheaply, and there must be no costs or benefits that are being left out of the transaction (what we call externalities). Transactions costs are why we have to be careful about allocating pollution rights. The factory could pay a neighborhood for the right to pollute, but there are costs imposed on those not party to the transaction. Similarly, a neighborhood could pay a factory not to pollute, but difficulty coordinating many people is a transaction cost that keeps such deals from happening.

I think that transactions costs are high in one direction in the commercial skipping scenario, but not as much in the other. If the network has the right to stop skipping, there are low cost ways that content aggregators (satellite and cable) can facilitate user rights to commercial skip - through video on demand, surcharges, and whatnot. This apparatus is already largely in place, and there is at least some competition among content owners (some get DVDs out soon, some don't for example).

If, on the other hand, we vest the skipping right with users, then it is harder for content owners to pay users (essentially, to share advertising revenue with them) if they want to enter into such a transaction. Such a payment could be achieved, though, through reduced user fees for those who disable commercial skipping. Even there, though, dividing the payments among all content owners might be difficult.

Normatively, this feels a bit yucky. It seems wrong that consumers should pay more to content providers for the right to automate something they already have the right to do - skip commercials. However, we have to separate the normative from the transactional analysis - for this mind experiment, at least.

Commercials are a key part of how shows get made, and good shows really do go away if there aren't enough eyeballs on the commercials. Thus, we want there to be an efficient transaction that allows for metered advertising and content in a way that both users and networks get the benefit of whatever bargain they are willing to make.

There are a couple of other relevant factors that imply to me that the most efficient allocation of this right is with the network:

1. DISH only allows skipping after 1AM on the day the show is recorded. This no doubt militates in favor of fair use, because most people watch shows on the day they are recorded (or so I've read, I could be wrong). However, it also shows that the time at which the function kicks in can be moved, and thus negotiated and even differentiated among customers that pay different amounts. Some might want free viewing with no skipping, some might pay a large premium for immediate skipping. If we give the user the right to skip whenever, it is unlikely that broadcasters can pay users not to skip, and this means they are stuck in a world with maximum skipping - which kills negotiation to an efficient middle.

2. The skipping is only available for broadcast tv primetime recordings - not for recordings on "cable" channels, where providers must pay for content.  Thus, there appears to already be a payment structure in practice - DISH allows skipping on some networks and not others, which implies that the structure for efficient payments is already in place. If, for example, DISH skipped commercials on TNT, then TNT would charge DISH more to carry content. The networks may not have that option due to "must carry" rules. I suspect this is precisely why DISH skips for broadcasters - because it can without paying. The way to allow for bargaining, then, given that networks can't charge DISH more to carry content, is to vest the right with the networks and let the market take over.

These are my gut thoughts from an efficiency standpoint. Others may think of ways to allow for bargaining to happen by vesting rights with users. As a user, I would be happy to hear such ideas.

This is my last post for the month - time flies! Thanks to Prawfs again for having me, and I look forward to guest blogging in the future. As a reminder, I regularly blog at Madisonian.

Posted by Michael Risch on May 31, 2012 at 08:05 PM in Information and Technology, Intellectual Property, Legal Theory, Television, Web/Tech | Permalink | Comments (7) | TrackBack

Tuesday, May 29, 2012

School of Rock

I had a unique experience last Friday, teaching some copyright law basics to music students at a local high school. The instructor invited me to present to the class in part because he wanted a better understanding of his own potential liability for arranging songs for performances, and in part because he suspected his students were, by and large, frequently downloading music and movies without the permission of copyright owners, and he thought they should understand the legal implications of that behavior. The students were far more interested in the inconsistencies they perceived in the current copyright system. I'll discuss a few of those after the break.

First, the Copyright Act grants the exclusive right to publicly perform a musical work, or authorize such a performance, to the author of the work, but no general public performance right is granted to the author or owner of a sound recording. See 17 U.S.C. § 114. In other words, Rod Temperton, the author of the song "Thriller," has the right to collect money paid to secure permission to publicly perform the song, but neither Michael Jackson's estate nor Epic Records holds any such right, although it's hard to discount the creative choices of Michael Jackson, Quincy Jones and their collaborators in making much of what the public values about that recording. To those who had tried their hands at writing songs, however, the disparity made a lot of sense: "Thriller" should be Temperton's song because of his creative labors.

Second, the Copyright Act makes specific allowance for what I call "faithful" cover tunes, but not beat sampling or mashups. If a song (the musical work) has been commercially released, another artist can make a cover of the song and sell recordings of it without securing the permission of the copyright owner, so long as the cover artist provides notice, pays a compulsory license (currently $0.091 per physical or digital recording) and doesn't change the song too much. See 17 U.S.C. § 115. If the cover artist makes a change in "the basic melody or fundamental character of the work," then the compulsory license is unavailable, and the cover artist must get permission and pay what the copyright owner asks. In addition, the compulsory license does not cover the sound recording, so there is no compulsory license for a "sampling right." Thus, Van Halen can make a cover of "Oh, Pretty Woman," without Roy Orbison's permission, but 2 Live Crew cannot (unless the rap version ends up qualifying for the fair use privilege).
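The mechanics of the compulsory license are simple arithmetic. A quick sketch using the $0.091 statutory rate quoted above; the sales figure is a made-up illustration, not data from the post.

```python
RATE_PER_COPY = 0.091  # statutory mechanical rate cited above (USD per copy)

def mechanical_royalty(copies_sold, rate=RATE_PER_COPY):
    """Total compulsory-license payment owed to the musical work's owner."""
    return round(copies_sold * rate, 2)

# A hypothetical cover that sells 50,000 downloads owes the songwriter's
# side of the ledger $4,550 -- no matter what the recording earns.
assert mechanical_royalty(50_000) == 4550.0
```

Note that this payment runs only to the musical work; the cover artist owes the owner of the original sound recording nothing, because the cover is a new recording.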

It was also interesting to me that at least one student in each class was of the opinion that once the owner of a copyrighted work put the work on the Internet, the owner was ceding control of the work, and should expect people to download it for free. It's an observation consistent with my own analysis about why copyright owners should have a strong, if not absolute, right to decide if and when to release a work online. 

On a personal level, I confirmed a suspicion about my own teaching: if I try to teach the same subject six different times on the same day, it is guaranteed to come out six different ways, and indeed, it is likely there will be significant differences in what I cover in each class. This is in part because I have way more material at my fingertips than I can cram into any 45 minute class, and so I can be somewhat flexible about what I present, and in what order. I like that, because it allows me to teach in a manner more responsive to student questions. On the other hand, it may expose a failure to determine what are the 20-30 minutes of critical material I need to cover in an introduction to copyright law.

 

Posted by Jake Linford on May 29, 2012 at 09:00 AM in First Amendment, Information and Technology, Intellectual Property, Music, Teaching Law | Permalink | Comments (0) | TrackBack

Friday, May 25, 2012

Using empirical methods to analyze the effectiveness of persuasive techniques

Slate Magazine has a story detailing the Obama campaign's embrace of empirical methods to assess the relative effectiveness of political advertisements. 

To those familiar with the campaign’s operations, such irregular efforts at paid communication are indicators of an experimental revolution underway at Obama’s Chicago headquarters. They reflect a commitment to using randomized trials, the result of a flowering partnership between Obama’s team and the Analyst Institute, a secret society of Democratic researchers committed to the practice, according to several people with knowledge of the arrangement. ...

The Obama campaign’s “experiment-informed programs”—known as EIP in the lefty tactical circles where they’ve become the vogue in recent years—are designed to track the impact of campaign messages as voters process them in the real world, instead of relying solely on artificial environments like focus groups and surveys. The method combines the two most exciting developments in electioneering practice over the last decade: the use of randomized, controlled experiments able to isolate cause and effect in political activity and the microtargeting statistical models that can calculate the probability a voter will hold a particular view based on hundreds of variables.

Curiously, this story comes on the heels of a New York Times op-ed questioning the utility and reliability of social science approaches to policy concerns and a movement in Congress to defund the political science studies program at NSF.

Jeff

 

Posted by Dingo_Pug on May 25, 2012 at 09:13 AM in Current Affairs, Information and Technology, Science | Permalink | Comments (1) | TrackBack

Wednesday, May 16, 2012

Contrarian Statutory Interpretation Continued (CDA Edition)

Following my contrarian post about how to read the Computer Fraud and Abuse Act, I thought I would write about the Communications Decency Act. I've written about the CDA before (hard to believe it has been almost 3 years!), but I'll give a brief summary here.

The CDA immunizes online providers from liability for content provided by their users. For example, if a user posts defamatory content in a comment, a blog need not remove the comment to remain immune, even if the blog receives notice that the content is defamatory, and even if the blog knows the content is defamatory.

I agree with most of my colleagues who believe this statute is a good thing for the internet. Where I part ways from most of my colleagues is how broadly to read the statute.

Since this is a post about statutory interpretation, I'll include the statute:

Section 230(c)(1) of the CDA states that:

No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.

In turn, an interactive computer service is:

any information service, system, or access software provider that provides or enables computer access by multiple users to a computer server, including specifically a service or system that provides access to the Internet and such systems operated or services offered by libraries or educational institutions.

Further, an information content provider is:

any person or entity that is responsible, in whole or in part, for the creation or development of information provided through the Internet or any other interactive computer service.

So, where do I clash with others on this? The primary area is when the operators of the computer service make decisions to publish (or republish) content.  I'll give three examples that courts have determined are immune, but that I think do not fall within the statute:

  1. Web Site A pays Web Site B to republish all of B's content on Site A. Site A is immune.
  2. Web Site A selectively republishes some or all of a story from Web Site B on Site A. Site A is immune.
  3. Web Site A publishes an email received from a reader on Site A. Site A is immune.

These three examples share a common thread: Site A is immune, despite selectively seeking out and publishing content in a manner that has nothing to do with the computerized processes of the provider. In other words, it is the operator, not the service, that is making publication determinations.

To address these issues, cases have focused on "development" of the information. One case, for example, defines development as a site that "contributes materially to the alleged illegality of the conduct." Here, I agree with my colleagues that development is being defined too broadly to limit immunity. Development should mean that the provider actually creates the content that is displayed. For that reason, I agree with the Roommates.com decision, which held that Roommates developed content by providing pre-filled dropdown lists that allegedly violated the Fair Housing Act. It turns out that the roommate postings were protected speech, but that is a matter of substance, and not immunity. The fact that underlying content is eventually vindicated does not mean that immunity should be expanded. To the extent some think that the development standard is limited only to development of illegal content (something implied by the text of the Roommates.com decision), I believe that is too limiting. The question is the source of the information, not the illegality of it.

The burning issue is why plaintiffs continue to rely on "development" despite its relatively narrow application. The answer is that this is all they currently have to argue, and that is where I disagree with my colleagues. I believe the word "interactive" in the definition must mean something. It means that the receipt of content must be tied to the interactivity of the provider. In other words, receipt of the offending content must be automated or otherwise interactive to be considered for immunity.

Why do I think that this is the right reading? First, there's the word "interactive." It was chosen for a reason. Second, the definition of "information content provider" identifies information "provided through the Internet or any other interactive computer service." (emphasis added). This implies that the provision of information should be based on interactivity or automation.

There is support in the statute for only immunizing information directly provided through interactivity. Section 230(d), for example, requires interactive service providers to notify their users about content filtering tools. This implies that the information being provided is through the interactive service. Sections 230(a) and (b) set out the findings and policy of Congress, which frame interactive services as new ways for users to control information and to exchange ideas freely.

I think one can read the statute more broadly than I am here. But I also believe that there is no reason to do so. The primary benefit of Section 230 is as a cost-savings mechanism. There is no way many service providers can screen all the content on their websites for potentially tortious activity. There's just no filter for that.

Allowing immunity for individualized editorial decisions like paying for syndicated content, picking and choosing among emails, and republishing stories from other web sites runs directly counter to this cost-saving purpose.  Complaining that it costs too much to filter interactive user content is a far cry from complaining that it costs too much to determine whether an email is true before making a noninteractive decision to republish it. We should want our service providers to expend some effort before republishing.

Posted by Michael Risch on May 16, 2012 at 04:01 PM in Blogging, Information and Technology | Permalink | Comments (4) | TrackBack

Fair Use and Electronic Reserves

For several years Georgia State was involved in litigation over the fair use doctrine. Specifically, a consortium of publishers backed by Oxford, Cambridge and Sage sued Georgia State over copyright violations by many of the faculty. Many of my colleagues in the department were specifically named in the suit. A decision has now been rendered. You can read about the decision here, and you can read the decision here.

The Court backed Georgia State in almost every instance, finding no copyright violation. However, the Court did lay down some rules - in particular you can use no more than 10% or one chapter, whichever is shorter, of any book.

Oh, and my colleagues were all found to have not violated copyright laws. For two of them, the Court found that the plaintiffs could not even prove a copyright.

Posted by Robert Howard on May 16, 2012 at 09:23 AM in Information and Technology, Intellectual Property, Things You Oughta Know if You Teach X | Permalink | Comments (0) | TrackBack

Friday, May 11, 2012

App Enables Users to File Complaints of Airport Profiling

Following the terrorist attacks of September 11, 2001, Muslims and those perceived to be Muslim in the United States have been subjected to public and private acts of discrimination and hate violence.  Sikhs -- members of a distinct monotheistic religion founded in 15th century India -- have suffered the "disproportionate brunt" of this post-9/11 backlash.  There generally are two reasons for this.  The first concerns appearance: Sikh males wear turbans and beards, and this visual similarity to Osama bin Laden and his associates made Sikhs an accessible and superficial target for post-9/11 emotion and scrutiny.  The second relates to ignorance: many Americans are unaware of Sikhism and of Sikh identity in particular. 

Accordingly, after 9/11, Sikhs in the United States have been murdered, stabbed, assaulted, and harassed; they also have faced discrimination in various contexts, including airports, the physical space where post-9/11 sensitivities are likely and understandably most acute.  The Sikh Coalition, an organization founded in the hours after 9/11 to advocate on behalf of Sikh-Americans, reported that 64% of Sikh-Americans felt that they had been singled-out for additional screening in airports and, at one major airport (San Francisco International), nearly 100% of turbaned Sikhs received additional screening. (A t-shirt, modeled here by Sikh actor Waris Ahluwalia and created by a Sikh-owned company, makes light of this phenomenon.)

In response to such "airport profiling," the Sikh Coalition announced the launch of a new app (Apple, Android), which "allows users to report instances of airport profiling [to the Transportation Security Administration (TSA)] in real time."  The Coalition states that the app, called "FlyRights," is the "first mobile app to combat racial profiling."  The TSA has indicated that grievances sent to the agency by way of the app will be treated as official complaints.

News of the app's release has generated significant press coverage.  For example, the New York Times, ABC, Washington Post, and CNN picked up the app's announcement.  (Unfortunately, multiple outlets could not resist the predictable line, 'Profiled at the airport? There’s an app for that.')  Wade Henderson, president and CEO of The Leadership Conference on Civil and Human Rights and The Leadership Conference Education Fund, tweeted, "#FlyRights is a vanguard in civil and human rights."

It will be interesting to see whether this app will increase TSA accountability, quell profiling in the airport setting, and, more broadly, trigger other technological advances in the civil rights arena.

 

Posted by Dawinder "Dave" S. Sidhu on May 11, 2012 at 08:32 AM in Information and Technology, Religion, Travel, Web/Tech | Permalink | Comments (0) | TrackBack

Wednesday, May 09, 2012

Oracle v. Google: Digging Deeper

This follows my recent post about Oracle v. Google. At the behest of commenters, both online and offline, I decided to dig a bit deeper to see exactly what level of abstraction is at issue in this case. The reason is simple: I made some assumptions in the last post about what the jury must have found, and it turns out that the assumption was wrong. Before anyone accuses me of changing my mind, I want to note that in my last post I made a guess, and that guess was wrong once I read the actual evidence. My view of the law hasn't changed. More after the jump.

For the masochistic, Groklaw has compiled the expert reports in an accessible fashion here and here. Why do I look at the reports, and not the briefs? It turns out that lawyers will make all sorts of arguments about what the evidence will say, but what is really relevant is the evidence actually presented. The expert reports, submitted before trial, are the broadest form of evidence that can be admitted - the court can whittle down what the jury hears, but typically experts are not allowed to go much beyond their reports.

These reports represent the best evidentiary presentation the parties have on the technical merits. It turns out that as a factual matter, both reports overlap quite a bit, and neither seems "wrong" as a matter of technical fact. I would sure hope so - these are pretty well respected professors and, quite frankly, the issues in this case are just not that complicated from a coding standpoint. (Note: for those wondering what gives me the authority to say that, I could say a lot, but I'll just note that in a prior life I wrote a book about software programming for an electronic mail API).

What level of abstraction was presented and argued to the jury? As far as I can tell from the reports, other than two or three routines that were directly copied, Oracle's expert found little or no similar structure or sequence in the main body of the source code - the part that actually does the work. The only similarity - and it was nearly identical - was in the structure, sequence and organization of the grouping of function names, and the "packages" or files in which they were located.

For computer nerds, also identical were function names, parameter orders, and variable structures passed in as parameters. In other words, the header files were essentially identical. And they would have to be, if the goal is to have a compatible system. The inputs (the function names and parameters) and the outputs need to be the same. The only way you can disallow this usage of the API is to say that you cannot create an independent software program (even one of this size) that mimics the inputs and outputs of the original program.
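To make the header/body distinction concrete, here is a hypothetical sketch (in Python rather than Java, purely for brevity): two implementations that share nothing but the "header" -- the function name and parameter order -- yet remain fully compatible because the inputs and outputs match. The function name and both bodies are my invention, not code from the case.

```python
def index_of(haystack, needle, from_index):
    """'Original library' version: delegates to a built-in."""
    return haystack.find(needle, from_index)

def index_of_reimpl(haystack, needle, from_index):
    """Independent reimplementation: same name, same parameter order,
    same return convention (-1 on failure), entirely different body."""
    for i in range(max(from_index, 0), len(haystack) - len(needle) + 1):
        if haystack[i:i + len(needle)] == needle:
            return i
    return -1

# Identical inputs produce identical outputs, with no shared body code.
assert index_of("banana", "na", 0) == index_of_reimpl("banana", "na", 0) == 2
```

An application written against the first function runs unmodified against the second; that is all "compatibility" requires, and it is exactly why the headers must be identical.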

To say that would be bad policy, and as I discuss below, probably not in accordance with precedent. This is why the experts are both right. Oracle's expert says they are identical, and Google copied because that was the best way to lure application developers - by providing compatibility (and the jury agreed, as to the copying part). Google's expert says, so what? The only thing copied was functional, and that's legal. It's this last part that a) led to the hung jury, and b) the court will have to rule on.

In my last post, I assumed that the level of abstraction must have been at a deeper level than just the names of the methods. Why did I do that?

First, the court's jury instructions make clear that function names are not at issue. But I guess the court left it to the jury whether the collection could be infringed.

Second, the idea that an API could be infringed is usually something courts decide well in advance of trial, and it's a question that doesn't usually make it to trial.

Third, based on media accounts, it appeared that there was more testimony about deeper similarities in the code. The copied functions, I argued in my prior post, supported that view. Except that there were no other similarities. I think it is a testament to Oracle's lawyers (and experts) that this misperception of a dirty clean room shone through in media reports, because the actual evidence belies the media accounts.

This is why I decided to dig deeper, and why one should not rely on second hand reports of important evidence. Based on my reading of the reports (and I admit that I could be missing something - I wasn't in the courtroom), I think that the court will have no choice but to hold that the collection of API names is uncopyrightable - at least at this level of abstraction and claimed infringement.

To the extent that there are bits of non-functional code, I would say that's probably fair use as a matter of law to implement a compatible system. I made a very similar argument in an article I wrote 12 years ago - long before I went into academia.

Prof. Boyden asked in a comment to my prior post whether there was any law that supported the copying of API structure and header files. I think there is: Lotus v. Borland. That case is famous for allowing Borland to mimic the Lotus structure, but there was also an API of sorts. Lotus macros were based on the menu structure, and to provide program compatibility with Lotus, Borland implemented the same structure. So, for example, in Lotus, a user would hit "/" to bring up the menus, "F" to bring up the file menu, and "O" to bring up the open menu. As a result, the macro "/FO" would mimic this, to bring up the open menu.

Borland's product would "read" macro programs written for Lotus, and perform the same operation. No underlying similarity of the computer code, but an identical API that took the same inputs to create the same output the user expected.
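A hypothetical sketch of how such a macro interpreter might work: a macro like "/FO" is just a walk down the menu tree. The menu entries beyond "/FO" are invented for illustration; only the "/", "F", "O" sequence comes from the case.

```python
# Toy menu tree: "/" opens the menus, then each letter descends one level.
# Only the File > Open path mirrors the Lotus example; the rest is made up.
MENU_TREE = {
    "F": {"O": "file-open", "S": "file-save"},
    "W": {"E": "worksheet-erase"},
}

def run_macro(macro):
    """Replay a keystroke macro against the menu tree, returning the command."""
    assert macro.startswith("/"), "macros begin with the menu key"
    node = MENU_TREE
    for key in macro[1:]:
        node = node[key]  # descend one menu level per keystroke
    return node

assert run_macro("/FO") == "file-open"
```

Two programs sharing this tree will accept the same macros and produce the same commands even if their internals share no code -- which is the compatibility Borland needed.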

Like the lower court here, the lower court there found infringement of the structure, sequence, and organization of the menu structure. Like the lower court here, the court there found it irrelevant that Borland got the menu structure from third-party books rather than Lotus's own product. (Here, Google asserts that it got the APIs from Apache Harmony, a compatible Java system, rather than the Java documents themselves). There is some dispute about whether Sun sanctioned the Apache project, and what effect that should have on the case. I think that Harmony is a red herring. The reality is that it does not matter either way - a copy is a copy is a copy - if the copy is illicit, that is.

In Lotus, the lower court found the API creative and copyrightable, the very question facing the court here. On appeal, however, the First Circuit ruled that the API was a method of operation, likening it to the buttons on a VCR. I think that's a bit simplistic, but it was definitely the right ruling. The case went up to the Supreme Court, and it was a blockbuster case, expected to -- once and for all -- put this question to rest.

Alas, the Supreme Court affirmed without opinion by an evenly divided court. And the circuit court ruling stood. And it still stands - the court never took another case, and the gist of Lotus v. Borland has been applied over and over, but rarely as directly as it might apply here.

Wholesale, direct compatibility copying of APIs just doesn't happen very often, and certainly not on the scale and with the stakes of what is at issue here. Perhaps that is why there is no definitive case holding that an entire API structure is uncopyrightable. You would think we would have one by 2012, but nope. Lotus comes close, but it is not identical. In Lotus, the menu structure was much smaller, and the names and structure were far less creative. Further, the concern was macro programming written by users for internal use that would not allow them to switch to a new spreadsheet program. Java programs, on the other hand, are designed to be distributed to the public in most cases.

Then again, the core issue is the same: the ability to switch the underlying program while maintaining compatibility of programs that have already been written. Based on this similarity, my prediction is that Judge Alsup will say that the collection of names is not copyrightable, or at the very least usage of the API in this manner is fair use as a matter of law. We'll see if I'm right, and whether an appeals court affirms it.

Posted by Michael Risch on May 9, 2012 at 10:40 AM in Information and Technology, Intellectual Property | Permalink | Comments (0) | TrackBack

Monday, May 07, 2012

Oracle v. Google - Round I jury verdict (or not)

The jury came back today with its verdict in round one of the epic trial between two giants: Oracle v. Google. This first phase was for copyright infringement. In many ways, this was a run of the mill case, but the stakes are something we haven't seen in a technology copyright trial in quite some time.

Here's the short story of what happened, as far as I can gather.

1. Google needed an application platform for its Android phones. This platform allows software developers to write programs (or "apps" in mobile device lingo) that will run on the phone.

2. Google decided that Sun's (now Oracle's) Java was the best way to go.

3. Google didn't want to pay Sun for a license to a "virtual machine" that would run on Android phones.

4. Google developed its own virtual machine that is compatible with the Java programming language. To do so, Google had to make "APIs" that were compatible with Java. These APIs are essentially modules that provide functionality on the phone based on keywords (instructions) from a Java language computer program. For example, if I want to display "Hello World" on the phone screen, I need only call print("Hello World"). The API module has a bunch of hidden functionality that takes "Hello World" and sends it out to the display on the screen - manipulating memory, manipulating the display, etc.
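The split between the API "surface" and the hidden machinery can be sketched in a few lines (a hedged illustration with invented names, not Android's actual API): callers depend only on the declaration - the method name and parameters - while everything behind it can be rewritten entirely.

```java
// Hypothetical sketch: the declaration is the API surface a Java-language
// program calls; the body is the implementation a vendor is free to replace.
public class DisplayApi {
    // The "API surface": name and parameters that callers depend on.
    static String print(String text) {
        // Hidden implementation: real memory and display manipulation would
        // happen here; this stand-in just tags the text for the screen.
        return "[screen] " + text;
    }

    public static void main(String[] args) {
        System.out.println(print("Hello World")); // prints "[screen] Hello World"
    }
}
```

A compatible virtual machine needs the same surface so existing Java programs keep working; the dispute is over how much beyond that surface was taken.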

5. The key dispute is just how much of the Java source code, if any, was copied to create the Google version.

The jury today held the following:

1. One small routine (9 lines) was copied directly - line for line. The court said no damages for this, but the finding will be relevant later.

2. Google copied the "structure, sequence, and organization" of 37 Java API modules. I'll discuss what this means later.

3. There was no finding on whether the copying was fair use - the jury deadlocked.

4. Google did not copy any "documentation" including comments in the source code.

5. Google was not fooled into thinking it had a license from Sun.

To understand any of this, one must understand the levels of abstraction in computer code. Some options are as follows:

A. Line by line copying of the entire source code. 

B. Line by line paraphrasing of the source code (changing variable names, for example, but otherwise identical lines).

C. Copying of the structure, sequence and organization of the source code - deciding what functions to include or not, creative ways to implement them, creative ways to solve problems, creative ways to name and structure variables, etc.  (The creativity can't be based on functionality)

D. Copying of the functionality, but not the structure, sequence and organization - you usually find this with reverse engineering or independent development.

E. Copying of just the names of functions with similar functionality - the structure and sequence is the same, but only as far as the names go (like print, save, etc.). The Court ruled already that this is not protected.

F. Completely different functionality, including different structure, sequence, organization, names, and functionality.

Obviously F was out if Google wanted to maintain compatibility with the Java programming language (which is not copyrightable). 
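The difference between levels B and D above can be made concrete with a toy example (hypothetical code I made up, nothing from the case). The paraphrase renames variables but keeps the line-by-line structure; the independent version reaches the same result by a different route.

```java
// Illustrative contrast between abstraction levels B and D (invented code).
public class AbstractionLevels {
    // An imagined "original" implementation.
    static int maxOfThree(int a, int b, int c) {
        int m = a;
        if (b > m) m = b;
        if (c > m) m = c;
        return m;
    }

    // Level B: line-by-line paraphrase - only the names change.
    static int maxOfThreeParaphrase(int x, int y, int z) {
        int best = x;
        if (y > best) best = y;
        if (z > best) best = z;
        return best;
    }

    // Level D: same functionality, independently developed structure.
    static int maxOfThreeIndependent(int a, int b, int c) {
        return Math.max(a, Math.max(b, c));
    }

    public static void main(String[] args) {
        System.out.println(maxOfThree(1, 5, 3));            // prints 5
        System.out.println(maxOfThreeParaphrase(1, 5, 3));  // prints 5
        System.out.println(maxOfThreeIndependent(1, 5, 3)); // prints 5
    }
}
```

All three behave identically to a user, which is why the copyright question turns on structure and creativity rather than output.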

So, Google set up what is often called a "cleanroom." The idea is not new - AMD famously set up a cleanroom to independently develop the copyrighted aspects of its x86-compatible microprocessors back in the early 1990s. Like Google now (according to the jury), AMD famously failed to keep its cleanroom clean.

Here's how a cleanroom works. One group develops a specification of functionality for each of the API function names (which are, remember, not protected - people are allowed to make compatible programs using the same names, like print and save). Ideally, you do this through reverse engineering, but arguably it can be done by reading copyrighted specifications/manuals, and extracting the functionality. Quite frankly, you could probably use the original documentation as well, but it does not appear as "clean" when you do so.

Then, a second group takes the "pure functionality" description, and writes its own implementation. If it is done properly, you find no overlapping source code or comments, and no overlapping structure, sequence and organization. If there happens to be similar structure, sequence and organization, then the cleanroom still wins, because that similarity must have been dictated by functionality. After all, the whole point of the cleanroom is that the people writing the software could not copy because they did not have the original to copy from.
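The two-team handoff described above can be sketched as follows (a minimal illustration with invented names; no claim that this resembles Google's actual process). Team one distills a pure-functionality spec; team two, which never sees the original source, implements it however it likes.

```java
// Hypothetical cleanroom handoff (names invented for illustration).
public class Cleanroom {
    // Spec from team one: "rangeSum(lo, hi) returns the sum of the integers
    // from lo through hi inclusive, or 0 if lo > hi."
    static long rangeSum(int lo, int hi) {
        // Team two's independent implementation: a closed form, rather than
        // whatever loop the original program may have used. Any structural
        // similarity that remains must be dictated by the spec itself.
        if (lo > hi) return 0;
        long n = (long) hi - lo + 1;
        return n * (lo + hi) / 2;
    }

    public static void main(String[] args) {
        System.out.println(rangeSum(1, 100)); // prints 5050
    }
}
```

If the resulting code still matches the original line for line, something other than the spec leaked through - which is essentially what the jury concluded here.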

So, where did it all go wrong? There were a few smoking guns that the jury might have latched on to:

1. Google had some emails early on that said there was no way to duplicate the functionality, and thus Google should just take a license.

2. Some of the code (specifically, the 9 lines) was copied directly. While not big in itself, it makes one wonder how clean the team was.

3. The head of development noted in an email that it was a problem for the cleanroom people to have had Sun experience, but some apparently did.

4.  Oracle's expert testified (I believe) that some of the similarities were not based on functionality, or were so close as to have been copied. Google's expert, of course, said the opposite, and the jury made its choice. It probably didn't help Google that Oracle's expert came from hometown Stanford, while Google's came from far-away Duke.

So, the jury may have just discounted Google's cleanroom story and believed Oracle's. And that's what it found. As someone who litigated many copyright cases between competing companies, I don't find this a shocking outcome. While this case will no doubt bring the copyright v. functionality issue to the forefront (as it did in Lotus v. Borland and Intel v. AMD), this stuff is bread and butter for most technology copyright lawyers. It's almost always factually determined. Only the scope of this case is different in my book - everything else looks like many cases I've litigated (and a couple that I've tried).

So, what happens now in the copyright phase?  (A trial on patent infringement started today.) Judge Alsup has two important decisions to make.

First, the court has to decide what to do with the fair use ruling. Many say that a mistrial is warranted since fair use is a question of fact and the jury deadlocked. I'm not so sure. The facts on fair use are not really disputed here - only the legal interpretation of them; my experience is that courts are more than willing to make a ruling one way or the other when copying is clear (as the jury now says it is). I don't know what the court will do, but my gut says no fair use here.  My experience is that failed cleanrooms fail fair use - it means that what was copied was more than pure functionality, and it is for commercial use with market substitution. The only real basis for fair use is that the material copied was pure functionality, and that's the next inquiry.

Second, the court must determine whether the structure, sequence, and organization of these APIs can be copyrightable, or whether they are pure functionality. I don't know the answer to that question. It will depend in large part on:

a. whether the structure, etc., copied was at a high level (e.g. structure of functions) or at a low level (e.g. line by line and function by function);

b. the volume of material copied (something like 11,000 lines is at issue);

c. the credibility of the experts in testifying to how much of the similar structure is functionally based. On a related note, the folks over at Groklaw for the most part think this is not copyrightable. They have had tremendous coverage of this case.

I've been on both sides of this argument, and I've seen it go both ways, so I don't have any predictions. I do look forward to seeing the outcome, though. It has been a while since I've written about copyright law and computer software; this case makes me want to rejoin the fray.

Posted by Michael Risch on May 7, 2012 at 08:07 PM in Information and Technology, Intellectual Property, Web/Tech | Permalink | Comments (1) | TrackBack

Thursday, May 03, 2012

When a Good Interpretation is the Wrong One (CFAA Edition)

Hi, and thanks again to Prawfs for having me back.  In my first post, I want to revisit the CFAA and the Nosal case. I wrote about this case back in April 2011 (when the initial panel decision was issued), and again in December (when en banc review was granted). It's hard to believe that it has been more than a year!

I discuss the case in detail in the other posts, but for the busy and uninitiated, here is the issue: what does it mean to "exceed authorized access" to a computer?  In Nosal, the wrongful act was essentially trade secret misappropriation where the "exceeded authorization" was violation of a clear "don't use our information except for company benefit" type of policy. Otherwise, the employees had access to the database from which they obtained information as part of their daily work.

Back in April, I argued that the panel basically got the interpretation of the statute right, but that the interpretation was so broad as to be scary. Orin Kerr, who has written a lot about this, noted in the comments that such a broad interpretation would be void for vagueness because it would ensnare too much everyday, non-wrongful activity.  Though I'm not convinced that the law supports his view, it wouldn't break my heart if that were the outcome. But that's not the end of the story.

Last month, the Ninth Circuit finally issued the en banc opinion in the Nosal case. The court noted all the scary aspects of a broad interpretation, trotting out the parade of horribles showing innocuous conduct that would violate the broadest reading of the statute. As the court notes: "Ubiquitous, seldom-prosecuted crimes invite arbitrary and discriminatory enforcement." We all agree on that.

The solution for the court was to narrowly interpret what "exceeds authorized access" means: "we hold that  'exceeds authorized access' in the CFAA is limited to violations of restrictions on access to information, and not restrictions on its use." (emphasis in original).

On the one hand, this is a normatively "good" interpretation. The court applies the rule of lenity to not outlaw all sorts of behavior that shouldn't be outlawed and that was likely never intended to be outlawed. So, I'm not complaining about the final outcome. 

On the other hand, I can't get over the fact that the interpretation is just plain wrong as a matter of statutory interpretation. Here are some of the reasons why:

1. The term "exceeds authorized access" is defined in the statute:  "'exceeds authorized access' means to access a computer with authorization and to use such access to obtain or alter information in the computer that the accesser is not entitled so to obtain or alter." The statute on its face makes clear that exceeding access is not about violating an access restriction, but instead about using access to obtain information that one is not so entitled to obtain. To say that a use restriction cannot be part of the statute simply rewrites the definition.

2. The key section of the statute is not about use of information at all. Section 1030(a)(2) outlaws access to a computer, where such access leads to obtaining (including viewing) of information. So, of course exceeding authorized access should deal with an access restriction, but what is to stop everyone from rewriting their agreements conditionally: "Your access to this server is expressly conditioned on your intent at the time of access. If your intent is to use the information for nefarious purposes, then your access right is revoked." A sound statutory interpretation shouldn't be so easily manipulated, but this one appears to be.

3. Even if you accept the court's reading as in line with the statute, it still leaves much uncertainty in practice. For example, the court points to Google's former terms of service that disallowed minors from using Google: "You may not use the Services and may not accept the Terms if . . . you are not of legal age to form a binding contract with Google . . . ." I agree that it makes little sense for all minors who use Google to be juvenile delinquents. But read the terms carefully - they are not about use of information; they are about permission to access the services. If you are a minor, you may not use our services (that is, access our server). I suppose this is a use restriction because the court used it as an example, but that's not so clear to me.

4. The court states that Congress couldn't have meant exceeds authorized access to be about trade secret misappropriation and really only about hacking. 1030(a)(1)(a) belies that reading. That section outlaws exceeding authorized access to obtain national secrets and causing them "to be communicated, delivered, or transmitted, or attempt[ing] to communicate, deliver, transmit or cause to be communicated, delivered, or transmitted the same to any person not entitled to receive it." That sounds a lot like misappropriation to me, and I bet Congress had a situation like Nosal in mind. 

5. In fact, trade secrets appear to be exactly what Congress had in mind. The section that would ensnare most unsuspecting web users, 1030(a)(2) (which bars "obtaining" information by exceeding authorized access), was added in the same public law as the Economic Espionage Act of 1996 - the federal trade secret statute. The Senate reports for the EEA and the change to 1030 were issued on the same day. As S. Rep. 104-357 makes clear, the addition was to protect the privacy of information on civilian computers. Of course, this helps aid a narrower reading - if information is not private on the web, then perhaps we should not be so concerned about it.

6. On a related note, the court's treatment of the legislative history is misleading. The definition of "exceeds authorized access" was changed in 1986. As the court notes in a footnote: 

[T]he government claims that the legislative history supports its interpretation. It points to an earlier version of the statute, which defined “exceeds authorized access” as “having accessed a computer with authorization, uses the opportunity such access provides for purposes to which such authorization does not extend.” But that language was removed and replaced by the current phrase and definition.

So far, so good. In fact, this change alone seems to support the court's view, and I would have stopped there. But the court goes on to state:

And Senators Mathias and Leahy—members of the Senate Judiciary Committee—explained that the purpose of replacing the original broader language was to “remove[] from the sweep of the statute one of the murkier grounds of liability, under which a[n] . . . employee’s access to computerized data might be legitimate in some circumstances, but criminal in other (not clearly distinguishable) circumstances.”

This reading is just not accurate in content or spirit. I reproduce below sections of S. Rep. 99-472, the legislative history cited by the court:

 [On replacing "knowing" access with "intentional" access] This is particularly true in those cases where an individual is authorized to sign onto and use a particular computer, but subsequently exceeds his authorized access by mistakenly entering another computer file or data that happens to be accessible from the same terminal. Because the user had ‘knowingly’ signed onto that terminal in the first place, the danger exists that he might incur liability for his mistaken access to another file. ... The substitution of an ‘intentional’ standard is designed to focus Federal criminal prosecutions on those whose conduct evinces a clear intent to enter, without proper authorization, computer files or data belonging to another.

. . .
[Note: (a)(3) was about access to Federal computers by employees. Access to private computers was not added for another 10 years. At the time (a)(2) covered financial information.] The Committee wishes to be very precise about who may be prosecuted under the new subsection (a)(3). The Committee was concerned that a Federal computer crime statute not be so broad as to create a risk that government employees and others who are authorized to use a Federal Government computer would face prosecution for acts of computer access and use that, while technically wrong, should not rise to the level of criminal conduct. At the same time, the Committee was required to balance its concern for Federal employees and other authorized users against the legitimate need to protect Government computers against abuse by ‘outsiders.’ The Committee struck that balance in the following manner.
In the first place, the Committee has declined to criminalize acts in which the offending employee merely ‘exceeds authorized access' to computers in his own department ... It is not difficult to envision an employee or other individual who, while authorized to use a particular computer in one department, briefly exceeds his authorized access and peruses data belonging to the department that he is not supposed to look at. This is especially true where the department in question lacks a clear method of delineating which individuals are authorized to access certain of its data. The Committee believes that administrative sanctions are more appropriate than criminal punishment in such a case. The Committee wishes to avoid the danger that every time an employee exceeds his authorized access to his department's computers—no matter how slightly—he could be prosecuted under this subsection. That danger will be prevented by not including ‘exceeds authorized access' as part of this subsection's offense. [emphasis added]
Section 2(c) substitutes the phrase ‘exceeds authorized access' for the more cumbersome phrase in present 18 U.S.C. 1030(a)(1) and (a)(2), ‘or having accessed a computer with authorization, uses the opportunity such access provides for purposes to which such authorization does not extend’. The Committee intends this change to simplify the language in 18 U.S.C. 1030(a)(1) and (2)... [note: not to change the meaning, though obviously it does]

[And finally, the quote in the Nosal case, which were "additional" comments in the report, not the report of the committee itself]: [1030(a)(3)] would eliminate coverage for authorized access that aims at ‘purposes to which such authorization does not extend.’  This removes from the sweep of the statute one of the murkier grounds of liability, under which a Federal employee's access to computerized data might be legitimate in some circumstances, but criminal in other (not clearly distinguishable) circumstances that might be held to exceed his authorization.
This collection of history implies three things (to me, at least):
a. The committee well understood that employees could have authorized access to a computer, but could easily, "technically," and "slightly" exceed that authorization by accessing another file on the same computer - and that it was not all about hacking.
b. The committee understood that it was problematic to hold people liable for this.
c. As a result, the committee removed "exceeds authorized access" for federal employee liability, but left it in (a)(1) (use of U.S. secrets) and (a)(2) (gaining access to financial information). The legislative history quoted by the court merely affirms that the "murkiness" was solved by removing the phrase altogether, and not by narrowing the scope in other subsections.
The problem is that the worries the committee had about how "exceeds authorized access" might apply to federal employees never went away, but Congress extended liability to everyone when it expanded (a)(2) in 1996. What Congress should have done in 1996 (or anytime since) was consider the problems facing federal employees when it imposed restrictions on everyone.
A second problem is that Congress likely did not envision widespread computer servers with open access to information, whereby the only "authorization" limitations would be contractual rather than technologically based.
This leads me, again, to my conclusion above. The court's reading of the statute, while "good," is not quite right. But the panel's original reading was not quite right either.
I return to the suggestions I made in prior posts, bolstered by the legislative history here: we should look to the terms of authorization of access to see whether they have been exceeded. This means that if you are an employee who intentionally accesses information for a purpose you know is not authorized, then you are exceeding authorization.
It also means that if the terms of service on a website explicitly make truthfulness about your age a condition of authorization to access the site, then lying about your age exceeds that authorization. And that’s not always an unreasonable access limitation. If there were a kids-only website that excluded adults, I might well want to criminalize access obtained by people lying about their age. That doesn’t mean all access terms are reasonable, but I’m not troubled by that from a statutory interpretation standpoint.
I’m sure one can attack this as vague – it won’t always be clear when a term is tied to authorization. But then again, if it is not a clear term of authorization, the state shouldn’t be able to prove that authorization was exceeded. It also means that if the authorization terms are buried or unread, then there may not be an intentional access that exceeds authorization.

Posted by Michael Risch on May 3, 2012 at 01:03 PM in Information and Technology | Permalink | Comments (7) | TrackBack

Tuesday, April 17, 2012

“Breaking and Entering” Through Open Doors: Website Scripting Attacks and the Computer Fraud and Abuse Act, Part 2

Two notes: 1) Apologies to Prawfs readers for the delay in this post. It took my student and me longer than anticipated to complete some of the technical work behind this idea. 2) This post is a little longer than originally planned, because last week the Ninth Circuit en banc reversed a panel decision in United States v. Nosal which addressed whether the CFAA extends to violations of (terms of) use restrictions. In reversing the panel decision, the Ninth Circuit found the CFAA did *not* extend to such restrictions.


The idea for this post originally arose when I noticed I was able to include a hyperlink in a comment I made on a Prawfs' post. One of my students (Nick Carey) had just finished a paper discussing the applicability of the Computer Fraud and Abuse Act (CFAA) to certain types of cyberattacks that would exploit the ability to hyperlink blog comments, so I contacted Dan and offered to see if Prawfs was at risk, as it dovetailed nicely with a larger project I'm working on regarding regulating cybersecurity through criminal law.

The good news: it's actually hard to "hack" Prawfs. As best we can tell the obvious vulnerabilities are patched. It got me thinking, though, that as we start to clear away the low-hanging fruit in cybersecurity through regulatory action, focus is likely to shift to criminal investigations to address more sophisticated attackers.

Sophisticated attackers often use social engineering as a key part of their attacks. Social engineering vulnerabilities generally arise when there is a process in place to facilitate some legitimate activity, and when that process can be corrupted -- by manipulating the actors who use it -- to effect an outcome not predicted (and probably not desired). Most readers of this blog likely encounter such attacks on a regular basis, but have (hopefully!) been trained or learned how to recognize such attacks. One common example is the email, purportedly from a friend, business, or other contact, that invites you to click on a link. Once clicked on, this link in fact does not lead to the "exciting website" your friend advertised, but rather harvests the username and password for your email account and uses those for a variety of evil things.

I describe this example, which hopefully resonates with some readers (if not, be thankful for your great spam filters!), because it resembles the vulnerability we *did* find in Prawfs. This vulnerability, which perhaps is better called a design choice, highlights the tension in legal solutions to cybercrime I discuss here. Allowing commenters to hyperlink is a choice -- one that forms the basis for the "open doors" component of this question: should a user be held criminally liable under federal cybercrime law for using a website "feature" in a way other than that intended (or perhaps desired) by the operators of a website, but in a way that is otherwise not unlawful?

Prawfs uses TypePad, a well-known blogging software platform that handles (most of) the security work. And, in fact, it does quite a good job -- as mentioned above, most of the common vulnerabilities are closed off. The one we found remaining is quite interesting. It stems from the fact that commenters are permitted to use basic HTML (the "core" language in which web pages are written) in writing their comments. The danger in this approach is that it allows an attacker to include malicious "code" in their comments, such as the type of link described above. Since the setup of TypePad allows commenters to provide their own name, it is also quite easy for an attacker to "pretend" to be someone else and use that person's "authority" to entice readers to click on the dangerous link. The final comment of Part 1 provides an example, here.

A simple solution -- one to which many security professionals rush -- is just to disable the ability to include HTML in comments. (Security professionals often tend to rush to disable entirely features that create risk.) Herein lies the problem: there is a very legitimate reason for allowing HTML in comments; it allows legitimate commenters to include clickable links to resources they cite. As we've seen in many other posts, this can be a very useful thing to do, particularly when citing opinions or other blog posts. Interestingly, as an aside, I've often found this tension curiously to resemble that found in debates about restricting speech on the basis of national security concerns. But that is a separate post.
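The "disable HTML" fix amounts to escaping every markup character before rendering a comment. A minimal sketch (invented method names; this is not TypePad's actual filter) shows both why it is safe and why it destroys the legitimate use case -- an escaped link is just inert text:

```java
// Hedged sketch of the tradeoff: escaping all markup neutralizes malicious
// links, but it neutralizes legitimate clickable citations just as well.
public class CommentFilter {
    // Replace the HTML-significant characters with entity references.
    // Order matters: '&' must be escaped first to avoid double-escaping.
    static String escapeAll(String comment) {
        return comment.replace("&", "&amp;")
                      .replace("<", "&lt;")
                      .replace(">", "&gt;");
    }

    public static void main(String[] args) {
        // Malicious or legitimate, the link is rendered as plain text.
        System.out.println(escapeAll("<a href=\"http://evil.example\">click</a>"));
    }
}
```

A middle path -- whitelisting a few tags like `<a>` while stripping the rest -- preserves links but reopens exactly the social-engineering avenue described above, which is the tension this post is about.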

Cybercrime clearly is a substantial problem. Tradeoffs like the one discussed here present one of the core reasons the problem cannot be solved through technology alone. Turning to law -- particularly regulating certain undesired behaviors through criminalization -- is a logical and perhaps necessary step in addressing cybersecurity problems. As I have begun to study this problem, however, I have reached the conclusion that legal solutions face a structurally similar set of tradeoffs as do technical solutions.

The CFAA is the primary federal law criminalizing certain cybercrime and "hacking" activities. The critical threshold in many CFAA cases is whether a user has "exceeded authorized access" (18 U.S.C. § 1030(a)) on a computer system. But who defines "authorized access?" Historically, this was done by a system administrator, who set rules and policies for how individuals could use computers within an organization. The usernames and passwords we all have at our respective academic institutions, and the resources those credentials allow us to access, are an example of this classic model.

What about a website like Prawfs? Most readers don't use a login and password to read or comment, but do for posting entries. Like most websites, Prawfs has a policy addressing (some of) the aspects of acceptable use. That policy, however, can change at any time and without notice. (There are good reasons this is the case, the simplest being that it is not practical to notify every person who ever visits the website of any change to the policy in advance of such changes taking effect.) What if a policy changes, however, in a way that makes an activity -- one previously allowed -- now impermissible? Under a broad interpretation of the CFAA, the user continuing to engage in the now impermissible activity would be exceeding their authorized access, and thereby possibly running afoul of the CFAA (specifically (a)(2)(C)).

Some courts have rejected this broad interpretation, perhaps most famously in United States v. Lori Drew, colloquially known as the "MySpace Mom" case. Other courts have accepted a broader view, as discussed by Michael Risch here and here. I find the Drew result correct, if frustrating, and the (original) Nosal result scary and incorrect. Last week, the Ninth Circuit en banc reversed itself and adopted a more Drew-like view of the CFAA. I am particularly relieved by the majority's understanding of the CFAA overbreadth problem:

The government’s construction of the statute would expand its scope far beyond computer hacking to criminalize any unauthorized use of information obtained from a computer. This would make criminals of large groups of people who would have little reason to suspect they are committing a federal crime. While ignorance of the law is no excuse, we can properly be skeptical as to whether Congress, in 1984, meant to criminalize conduct beyond that which is inherently wrongful, such as breaking into a computer.

(United States v. Nosal, No. 10-10038 (9th Cir. Apr. 10, 2012) at 3864.)

I think the court recognizes here that an overbroad interpretation of the CFAA is similar to extending a breaking and entering statute to just walking in an open door. The Ninth Circuit appears to adopt similar thinking, noting that Congress' original intent was to address the issue of hackers breaking into computer systems, not innocent actors who either don't (can't?) understand the implications of their actions or don't intend to "hack" a system when they find the system allows them to access a file or use a certain function:

While the CFAA is susceptible to the government’s broad interpretation, we find Nosal’s narrower one more plausible. Congress enacted the CFAA in 1984 primarily to address the growing problem of computer hacking, recognizing that, “[i]n intentionally trespassing into someone else’s computer files, the offender obtains at the very least information as to how to break into that computer system.” S. Rep. No. 99-432, at 9 (1986) (Conf. Rep.).

(Nosal at 3863.)

Obviously the Ninth Circuit is far from the last word on this issue, and the dissent notes differences in how other Circuits have viewed the CFAA. I suspect at some point, unless Congress first acts, the Supreme Court will end up weighing in on the issue. Before that, I hope to produce some useful thoughts on the issue, and eagerly solicit feedback from Prawfs readers. I've constructed a couple of examples below to illustrate this in the context of the Blawg.

Consider, for example, a change in a blog's rules restricting what commenters may link to in their comments. Let's assume that, like Prawfs, there are currently no specific posted restrictions. Let's say a blog decided it had a serious problem with spam (thankfully we don't here at Prawfs), and wanted to address this by adjusting the acceptable use policy for the blog to prohibit linking to any commercial product or service. We probably wouldn't feel much empathy for the unrelated spam advertisers who filled the comments with useless information about low-cost, prescriptionless, mail-order pharmaceuticals. We certainly wouldn't for the advance-fee fraud advertisers. But what about the practitioner who is an active participant in the blog, contributes to substantive discussions, and occasionally may want to reference or link to their practice in order to raise awareness?

Technically, all three categories of activity would violate (the broad interpretation of) (a)(2)(C). Note that the intent requirement -- or lack thereof -- in (a)(2)(C) is a key element of why these are treated similarly: the only "intent" required for violation is intent to access. (a)(2)(C) does not distinguish among actors' intent beyond this. As I have commented elsewhere (scroll down), one can easily construct scenarios under a "scary" reading of the CFAA where criminal law might be unable to distinguish between innocent actors lacking any reasonable element of what we traditionally consider mens rea, and malicious actors trying to take over or bring down information systems. At the moment, I tend to think there's a more difficult problem discerning intent in the "gray area" examples I constructed here, particularly the Facebook examples when a username/password is involved. But I wonder what some of the criminal law folks think about whether intent really *is* harder, or if we could solve that problem with better statutory construction of the CFAA.

Finally, I've added one last comment to the original post (Part 1) that highlights both how easy it is to engage in such hacking (i.e., this isn't purely hypothetical) and how difficult it is to address the problem with technical solutions (i.e., those solutions would have meant none of this post -- or of my comments on the Facebook passwords post -- could have contained clickable links). I also hope it adds a little bit of "impact factor." The text of the comment explains how it works, and also provides an example of how it could be socially engineered.

In sum, the lack of clarity in the CFAA, and the resulting "criminalization overbreadth," is what concerns me -- and, thankfully, apparently the Ninth Circuit. In the process of examining whether Prawfs/TypePad had any common vulnerabilities, it occurred to me that in the rush to defend against genuine cybercriminals, there may develop significant political pressure to over-criminalize other activities that are not proper for regulation through the criminal law. We have already seen this happen with child pornography laws and sexting. I am extremely interested in others' thoughts on this subject, and hope I have depicted the problem in a way digestible to non-technical readers!

Posted by David Thaw on April 17, 2012 at 07:07 PM in Criminal Law, Information and Technology | Permalink | Comments (0) | TrackBack

Thursday, March 22, 2012

Wired, and Threatened

I have a short op-ed on how technology provides both power and peril for journalists over at JURIST. Here's the lede:
Journalists have never been more empowered, or more threatened. Information technology offers journalists potent tools to gather, report and disseminate information — from satellite phones to pocket video cameras to social networks. Technological advances have democratized reporting... Technology creates risks along with capabilities however... [and] The arms race of information technology is not one-sided.

Posted by Derek Bambauer on March 22, 2012 at 02:11 PM in Current Affairs, First Amendment, Information and Technology, International Law, Web/Tech | Permalink | Comments (0) | TrackBack

Wednesday, February 22, 2012

“Breaking and Entering” Through Open Doors: Website Scripting Attacks and the Computer Fraud and Abuse Act, Part 1


IMPORTANT: clicking through to the main body of this post will cause unusual behaviors in your web browser.
Seriously. Please read more below before clicking through to the post!

Thank you Dan, Sarah, and the other Prawfs hosts for giving me the opportunity to guest Blawg! I will be writing about a project I am currently working on with one of my students (Nick Carey), examining common website cybersecurity vulnerabilities in the context of cybercrime law.

The purpose of this post is to examine these (potential) cybersecurity vulnerabilities in PrawfsBlawg. It is the first of what I hope will be a few posts examining how current federal cybercrime law (the Computer Fraud and Abuse Act, or CFAA) applies to certain Internet activities that straddle the line between aggressive business practices and criminal intent.

While it is certainly possible to analyze these issues without a public post, making the post public provides more opportunity to showcase these vulnerabilities in a way that brings the debate to life without the "risk" of engaging attackers set on causing damage.

As other scholars have observed, judicial references to the CFAA notably increased over the past decade. Part 2 of this post, which will be forthcoming after we identify which vulnerabilities are (and are not) present in the Blawg, will provide a more substantive treatment of the legal issues involved and a (better) place for discussion.



Posted by David Thaw on February 22, 2012 at 02:57 PM in Criminal Law, Information and Technology | Permalink | Comments (3)

Wednesday, February 15, 2012

Coasean Positioning System

Ronald Coase's theory of reciprocal causation is alive, well, and interfering with GPS. Yesterday, the FCC pulled the plug on a plan by LightSquared to build a new national wireless network that combines cell towers and satellite coverage. The FCC went along with a report from the NTIA that LightSquared's network would cause many GPS systems to stop working, including the ones used by airplanes and regulated closely by the FAA. Since there's no immediately feasible way to retrofit the millions of GPS devices out in the field, LightSquared had to die so that GPS could live.

LightSquared's "harmful interference" makes this sound like a simple case of electromagnetic trespass. But not so fast. LightSquared has had FCC permission to use the spectrum between 1525 and 1559 megahertz, in the "mobile-satellite spectrum" band. That's not where GPS signals are: they're in the next band up, the "radionavigation satellite service" band, which runs from 1559 to 1610 megahertz. According to LightSquared, its systems would be transmitting only in its assigned bandwidth--so if there's interference, it's because GPS devices are listening to signals in a part of the spectrum not allocated to them. Why, LightSquared plausibly asks, should it have a duty to make its own electromagnetic real estate safe for trespassers?
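The band arithmetic is easy to verify. A few lines of Python, using only the frequencies quoted above, confirm that the two allocations touch at 1559 MHz but never overlap:

```python
# A quick check of the band allocations described above. LightSquared's
# assigned band and the GPS band are adjacent, not overlapping -- so any
# "interference" arises on the receiver side, not from out-of-band emission.
lightsquared = (1525, 1559)   # mobile-satellite service band, MHz
gps = (1559, 1610)            # radionavigation satellite service band, MHz

def overlap_mhz(a, b):
    """Width of the overlap between two (low, high) bands; zero if disjoint."""
    return max(0, min(a[1], b[1]) - max(a[0], b[0]))

print(overlap_mhz(lightsquared, gps))  # 0 -- the bands merely touch at 1559 MHz
```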

The underlying problem here is that "spectrum" is an abstraction for talking about radio signals, but real-life uses of the airwaves don't neatly sort themselves out according to its categories. In his 1959 article The Federal Communications Commission, Coase explained:

What does not seem to have been understood is that what is being allocated by the Federal Communications Commission, or, if there were a market, what would be sold, is the right to use a piece of equipment to transmit signals in a particular way. Once the question is looked at in this way, it is unnecessary to think in terms of ownership of frequencies or the ether.

Now add to this point Coase's observation about nuisance: that the problem can be solved either by the polluter or the pollutee altering its activities, and so in a sense should be regarded as being caused equally by both of them. So here. "Interference" is a property of both transmitters and receivers; one man's noise is another man's signal. GPS devices could have been designed with different filters from the start, filters that were more aggressive in rejecting signals from the mobile-satellite band. But those filters would have added to the cost of a GPS unit, and worse, they'd have degraded the quality of GPS reception, because they would have thrown out some of the signals from the radionavigation-satellite band. (The only way to build a completely perfect filter is to make it capable of traveling back in time. No kidding!) Since the mobile-satellite band wasn't at the time being used anywhere close to as intensively as LightSquared now proposes to use it, it made good sense to build GPS devices that were sensitive rather than robust.
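To make the filter tradeoff concrete, here is a toy back-of-the-envelope calculation. It uses a standard DSP rule of thumb (fred harris's estimate) for the length of an FIR filter; the sample rate and attenuation figures are invented stand-ins for illustration, not real GPS front-end specifications:

```python
# Toy illustration (not an RF design): sharper filters cost more.
# fred harris's rule of thumb estimates the FIR filter length needed to
# achieve a given stopband attenuation over a given transition band:
#     N ~= atten_dB / (22 * (transition_Hz / sample_rate_Hz))
# All numbers below are hypothetical, chosen only to show the tradeoff.

def fir_taps_estimate(atten_db: float, transition_hz: float, fs_hz: float) -> int:
    """Estimate FIR length for a given attenuation and transition width."""
    return max(1, round(atten_db / (22 * (transition_hz / fs_hz))))

fs = 50e6     # assumed front-end sample rate, Hz
atten = 60    # assumed required rejection of the neighboring band, dB

# A receiver that tolerates some energy from the mobile-satellite band can
# get away with a lazy filter that rolls off over a wide transition region...
lazy = fir_taps_estimate(atten, transition_hz=10e6, fs_hz=fs)

# ...while one that must reject signals right up to the 1559 MHz band edge
# needs a far longer (costlier, higher-latency) filter.
strict = fir_taps_estimate(atten, transition_hz=0.5e6, fs_hz=fs)

print(lazy, strict)  # prints 14 273
```

Under these assumed numbers the strict filter is roughly twenty times longer, which translates into cost, power, and delay, and it still shaves off some energy near the band edge, degrading GPS reception just as described above.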

There are multiple very good articles on property, tort, and regulation lurking in this story. There's one on the question Coase was concerned with: regulation versus ownership as means of choosing between competing uses (like GPS and wireless broadband). There's another on the difficulty of even defining property rights to transmit, given the failure of the "spectrum" abstraction to draw simple bright lines that avoid conflicting uses. There's one on the power of incumbents to gain "possession" over spectrum not formally assigned to them. There's another on investment costs and regulatory uncertainty: LightSquared has already launched a billion-dollar satellite. And there's one on technical expertise and its role in regulatory policy. Utterly fascinating.

Posted by James Grimmelmann on February 15, 2012 at 10:12 AM in Information and Technology | Permalink | Comments (1)

Wednesday, February 08, 2012

Criminalizing Cyberbullying and the Problem of CyberOverbreadth

In the past few years, reports have attributed at least fourteen teen suicides to cyberbullying. Phoebe Prince of Massachusetts, Jamey Rodemeyer of New York, Megan Meier of Missouri, and Seth Walsh of California are just some of the children who have taken their own lives after being harassed online and off.

These tragic stories are a testament to the serious psychological harm that sometimes results from cyberbullying, defined by the National Conference of State Legislatures as the "willful and repeated use of cell phones, computers, and other electronic communications devices to harass and threaten others." Even when victims survive cyberbullying, they can suffer psychological harms that last a lifetime. Moreover, an emerging consensus suggests that cyberbullying is reaching epidemic proportions, though reliable statistics on the phenomenon are hard to come by. Who, then, could contest that the social problem of cyberbullying merits a legal response?

In fact, a majority of states already have legislation addressing electronic harassment in some form, and fourteen have legislation that explicitly uses the term cyberbullying. (Source: here.) What's more, cyber-bullying legislation has been introduced in six more states: Georgia, Illinois, Kentucky, Maine, Nebraska, and New York. A key problem with much of this legislation, however, is that legislators have often conflated the legal definition of cyberbullying with the social definition. Though understandable, this tendency may ultimately produce legislation that is unconstitutional and therefore ineffective at remedying the real harms of cyberbullying.

Consider, for instance, a new law proposed just last month by New York State Senator Jeff Klein (D-Bronx) and Assemblyman Bill Scarborough. Like previous cyberbullying proposals, the New York bill was triggered by tragedy. The proposed legislation cites its justification as the death of 14-year-old Jamey Rodemeyer, who committed suicide after being bullied about his sexuality. Newspaper accounts also attribute the impetus for the legislation to the death of Amanda Cummings, a 15-year-old New York teen who committed suicide by stepping in front of a bus after she was allegedly bullied at school and online. In light of these terrible tragedies, it is easy to see why New York legislators would want to take a symbolic stand against cyberbullying and join the ranks of states taking action against it.

The proposed legislation (S6132-2011) begins modestly enough by "modernizing" pre-existing New York law criminalizing stalking and harassment. Specifically, the new law amends various statutes to make clear that harassment and stalking can be committed by electronic as well as physical means. More ambitiously, the new law increases penalties for cyberbullying of "children under the age of 21," and broadly defines the activity that qualifies for criminalization under the act. The law links cyberbullying with stalking, stating that "a person is guilty of stalking in the third degree when he or she intentionally, and for no legitimate purpose, engages in a course of conduct directing electronic communication at a child [ ], and knows or reasonably should know that such conduct: (a) causes reasonable fear of material harm to the physical health, safety or property of such child; or (b) causes material harm to the physical health, emotional health, safety or property of such child." (emphasis mine) Even a single communication to multiple recipients about (and not necessarily to) a child can constitute a "course of conduct" under the statute.

Like the sponsors of this legislation, I deplore cyber-viciousness of all varieties, but I also condemn the tendency of legislators to offer well-intentioned but sloppily drafted and constitutionally suspect proposals to solve pressing social problems. In this instance, the legislation opts for a broad definition of cyberbullying based on legislators' desires to appear responsive to the cyberbullying problem. The broad statutory definition (and perhaps the resort to criminalization rather than other remedies) creates positive publicity for legislators, but broad legal definitions that encompass speech and expressive activities are almost always constitutionally overbroad under the First Amendment.

Again, consider the New York proposal. The mens rea element of the offense requires only that a defendant "reasonably should know" that "material harm to the . . . emotional health" of his target will result, and it is not even clear what constitutes "material harm." Seemingly, therefore, the proposed statute could be used to prosecute teen girls gossiping electronically from their bedrooms about another teen's attire or appearance. Likewise, the statute could arguably criminalize a Facebook posting by a 20-year-old college student casting aspersions on his ex-girlfriend. In both instances, the target of the speech almost certainly would be "materially" hurt and offended upon learning of it, and the speakers likely should reasonably know such harm would occur. Just as clearly, however, criminal punishment of "adolescent cruelty," which was a stated justification of the legislation, is an unconstitutional infringement on freedom of expression.

Certainly the drafters of the legislation may be correct in asserting that "[w]ith the use of cell phones and social networking sites, adolescent cruelty has been amplified and shifted from school yards and hallways to the Internet, where a nasty, profanity-laced comment, complete with an embarrassing photo, can be viewed by a potentially limited [sic] number of people, both known and unknown." They may also be correct to assert that prosecutors need new tools to deal with a "new breed of bully." Neither assertion, however, justifies ignoring the constraints of First Amendment law in drafting a legislative response. To do so potentially misdirects prosecutorial resources, misallocates taxpayer money that must be devoted to passing and later defending an unconstitutional law, and blocks the path toward legal reforms that would address cyberbullying more effectively.

With regard to criminal law, a meaningful response to cyberbullying--one that furthers the objectives of deterrence and punishment of wrongful behavior--would be precise and specific in defining the targeted conduct. A meaningful response would carefully navigate the shoals of the First Amendment's protection of speech, acknowledging that some terrible behavior committed through speech must be curtailed through educating, socializing, and stigmatizing perpetrators rather than criminalizing and censoring their speech.

Legislators may find it difficult to address all the First Amendment ramifications of criminalizing cyberbullying, partly because the term itself potentially obscures analysis. Cyberbullying is an umbrella term that covers a wide variety of behaviors, including threats, stalking, harassment, eavesdropping, spoofing (impersonation), libel, invasion of privacy, fighting words, rumor-mongering, name-calling, and social exclusion. The First Amendment constraints on criminalizing the speech behavior involved in cyberbullying depend on which category of speech behavior is involved. Some of these behaviors, such as issuing "true threats" to harm another person or taunting them with "fighting words," lie outside the protection of the First Amendment. (See Virginia v. Black and Chaplinsky v. New Hampshire; but see R.A.V. and my extended analysis here.). Some other behaviors that may cause deep emotional harm, such as name-calling, are just as clearly protected by the First Amendment in most contexts. (Compare, e.g., Cohen v. California with FCC v. Pacifica).

But context matters profoundly in determining the scope of First Amendment protection of speech. Speech in schools and workplaces can be regulated in ways that speech in public spaces cannot (See, e.g., Bethel School Dist. No. 403 v. Fraser). Even within schools, the speech of younger minors can be regulated in ways that speech of older minors cannot (Compare Hazelwood with Joyner v. Whiting (4th Cir.)), and speech that is part of the school curriculum can be regulated in ways that political speech cannot. (Compare, e.g., Tinker with Hazelwood). Outside the school setting, speech on matters of public concern receives far more First Amendment protection than speech dealing with other matters, even when such speech causes tremendous emotional upset. (See Snyder v. Phelps). But speech targeted at children likely can be regulated in ways that speech targeted at adults cannot, given the high and possibly compelling state interest in protecting the well-being of at least younger minors. (But see Brown v. Ent. Merchants Ass'n). Finally, even though a single instance of offensive speech may be protected by the First Amendment, the same speech repeated enough times might become conduct subject to criminalization without exceeding constitutional constraints. (See Pacifica and the lower court cases cited here).

Any attempt to use criminal law to address the social phenomenon should probably start with the jurisprudential question of which aspects of cyberbullying are best addressed by criminal law, which are best addressed by other bodies of law, and which are best left to non-legal control. Once that question is answered, criminalization of cyberbullying should proceed by identifying the various forms cyberbullying can take and then researching the specific First Amendment constraints, if any, on criminalizing that form of behavior or speech. This approach should lead legislators to criminalize only particularly problematic forms of narrowly defined cyberbullying, such as true threats and fighting words. While introducing narrow legislation of this sort may not be as satisfying as criminalizing "adolescent cruelty," it is far more likely to withstand constitutional scrutiny and become a meaningful tool to combat serious harms.

Proposals to criminalize cyberbullying often seem to proceed from the notion that we will know it when we see it. In fact, most of us probably will: we all recognize the social problem of cyberbullying, defined as engaging in electronic communication that transgresses social norms and inflicts emotional distress on its targets. But criminal law cannot be used to punish every social transgression, especially when many of those transgressions are committed through speech, a substantial portion of which may be protected by the First Amendment.

[FYI: This blog post is the underpinning of a talk I'm giving at the Missouri Law Review's Symposium on Cyberbullying later in the week, and a greatly expanded and probably significantly changed version will ultimately appear in the Missouri Law Review, so I'd particularly appreciate comments. In the article, I expect to create a more detailed First Amendment guide for conscientious lawmakers seeking to regulate cyberbullying. I am especially excited about the symposium because it includes mental health researchers and experts as well as law professors. Participants include Barry McDonald (Pepperdine), Ari Waldman (Cal. Western), John Palfrey (Berkman Center at HLS), Melissa Holt (B.U.), Mark Small (Clemson), Philip Rodkin (U. Ill.), Susan P. Limber (Clemson), Daniel Weddle (UMKC), and Joe Laramie (consultant/former director of Missouri A.G. Internet Crimes Against Children Taskforce).]

Posted by Lyrissa Lidsky on February 8, 2012 at 08:37 AM in Constitutional thoughts, Criminal Law, Current Affairs, First Amendment, Information and Technology, Lyrissa Lidsky, Web/Tech | Permalink | Comments (8) | TrackBack

Friday, February 03, 2012

The Used CD Store Goes Online

On Monday, Judge Sullivan of the Southern District of New York will hear argument on a preliminary injunction motion in Capitol Records v. ReDigi, a copyright case that could be one of the sleeper hits of the season. ReDigi is engaged in the seemingly oxymoronic business of "pre-owned digital music" sales: it lets its customers sell their music files to each other. Capitol Records, unamused, thinks the whole thing is blatantly infringing and wants it shut down, NOW.

There are oodles of meaty copyright issues in the case -- including many that one would not think would still be unresolved at this late date. ReDigi is arguing that what it's doing is protected by first sale: just as with physical CDs, resale of legally purchased copies is legal. Capitol's counter is that no physical "copy" changes hands when a ReDigi user uploads a file and another user downloads it. This disagreement cuts to the heart of what first sale means and is for in this digital age. ReDigi is also making a quiver's worth of arguments about fair use (when users upload files that they then stream back to themselves), public performance (too painfully technical to get into on a general-interest blog), and the responsibility of intermediaries for infringements initiated by users.

I'd like to dwell briefly on one particular argument that ReDigi is making: that what it is doing is fully protected under section 117 of the Copyright Act. That rarely-used section says it's not an infringement to make a copy of a "computer program" as "an essential step in the utilization of the computer program." In ReDigi's view, the "mp3" files that its users download from iTunes and then sell through ReDigi are "computer programs" that qualify for this defense. Capitol responds that in the ontology of the Copyright Act, MP3s are data ("sound recordings," to be precise), not programs.

I winced when I read these portions of the briefs.

In the first place, none of the files being transferred through ReDigi are MP3s. ReDigi only works with files downloaded from the iTunes Store, and the only format that iTunes sells in is AAC (Advanced Audio Coding), not MP3. It's a small detail, but the parties' agreement to a false "fact" virtually guarantees that their error will be enshrined in a judicial opinion, leading future lawyers and courts to think that any digital music file is an "MP3."

Worse still, the distinction that divides ReDigi and Capitol -- between programs and data -- is untenable. Even before there were actual computers, Alan Turing proved that there is no difference between program and data. In a brilliant 1936 paper, he showed that any computer program can be treated as the data input to another program. We could think of an MP3 as a bunch of "data" that is used as an input to a music player. Or we could think of the MP3 as a "program" that, when run correctly, produces sound as an output. Both views are correct -- which is to say, that to the extent that the Copyright Act distinguishes a "program" from any other information stored in a computer, it rests on a distinction that collapses if you push too hard on it. Whether ReDigi should be able to use this "essential step" defense, therefore, has to rest on a policy judgment that cannot be derived solely from the technical facts of what AAC files are and how they work. But again, since the parties agree that there is a technical distinction and that it matters, we can only hope that the court realizes they're both blowing smoke.
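Turing's point is easy to demonstrate in a couple of lines of Python (a sketch of my own, not anything drawn from the parties' briefs). The same string is "data" when we measure it and a "program" when we hand it to an interpreter; nothing intrinsic to the bytes decides which:

```python
# The same bytes, viewed two ways. Whether this string is "data" or a
# "program" depends entirely on what we do with it -- which is Turing's point.
blob = "sum(range(10))"

# Viewed as data: a sequence of characters we can measure, copy, or store.
length = len(blob)    # 14 characters

# Viewed as a program: input handed to an interpreter, which evaluates it.
result = eval(blob)   # 45

print(length, result)
```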

Posted by James Grimmelmann on February 3, 2012 at 11:59 PM in Information and Technology, Intellectual Property | Permalink | Comments (16)

Monday, December 19, 2011

Breaking the Net

Mark Lemley, David Post, and Dave Levine have an excellent article in the Stanford Law Review Online, Don't Break the Internet. It explains why proposed legislation, such as SOPA and PROTECT IP, is so badly designed and pernicious. It's not quite clear what is happening with SOPA, but it appears to be scheduled for mark-up this week. SOPA has, ironically, generated some highly thoughtful writing and commentary - I recently read pieces by Marvin Ammori, Zach Carter, Rebecca MacKinnon / Ivan Sigal, and Rob Fischer.

There are two additional, disturbing developments. First, the public choice problems that Jessica Litman identifies with copyright legislation more generally are manifestly evident in SOPA: Rep. Lamar Smith, the SOPA sponsor, gets more campaign donations from the TV / movie / music industries than any other source. He's not the only one. These bills are rent-seeking by politically powerful industries; those campaign donations are hardly altruistic. The 99% - the people who use the Internet - don't get a seat at the bargaining table when these bills are drafted, negotiated, and pushed forward. 

Second, representatives such as Mel Watt and Maxine Waters have not only admitted to ignorance about how the Internet works, but have been proud of that fact. They've been dismissive of technical experts such as Vint Cerf - he's only the father of TCP/IP - and folks such as Steve King of Iowa can't even be bothered to pay attention to debate over the bill. I don't mind that our Congresspeople are not knowledgeable about every subject they must consider - there are simply too many - but I am both concerned and offended that legislators like Watt and Waters are proud of being fools. This is what breeds inattention to serious cybersecurity problems while lawmakers freak out over terrorists on Twitter. (If I could have one wish for Christmas, it would be that every terrorist would use Twitter. The number of Navy SEALs following them would be... sizeable.) It is worrisome when our lawmakers not only don't know how their proposals will affect the most important communications platform in human history, but overtly don't care. Ignorance is not bliss, it is embarrassment.

Cross-posted at Info/Law.

Posted by Derek Bambauer on December 19, 2011 at 01:49 PM in Blogging, Constitutional thoughts, Corporate, Current Affairs, Film, First Amendment, Information and Technology, Intellectual Property, Law and Politics, Music, Property, Television, Web/Tech | Permalink | Comments (1) | TrackBack

Wednesday, December 14, 2011

Six Things Wrong with SOPA

America is moving to censor the Internet. The PROTECT IP and Stop Online Piracy Acts have received considerable attention in the legal and tech world; SOPA's markup in the House occurs tomorrow. I'm not opposed to blacklisting Internet sites on principle; however, I think that thoughtful procedural protections are vital to doing so in a legitimate way. Let me offer six things that are wrong with SOPA and PROTECT IP: they harm cybersecurity, are wildly overbroad and vague, enable unconstitutional prior restraint, undercut American credibility on Internet freedom, damage a well-working system for online infringement, and lack any empirical justification whatsoever. And, let me address briefly Floyd Abrams's letter in support of PROTECT IP, as it is frequently adverted to by supporters of the legislation. (The one-word summary: "sellout." The longer summary: The PROTECT IP letter will be to Abrams' career what the Transformers movie was to that of Orson Welles.)

  1. Cybersecurity - the bills make cybersecurity worse. The most significant risk is that they impede - in fact, they'd prevent - the deployment of DNSSEC, which is vitally important to reducing phishing, man-in-the-middle attacks, and similar threats. Technical experts are unanimous on this - see, for example, Sandia National Laboratories, Steve Crocker, or Paul Vixie / Dan Kaminsky et al. Idiots, like the MPAA's Michael O'Leary, disagree, and simply assert that "the codes change." (This is what I call "magic elf" thinking: we can just get magic elves to change the Internet to solve all of our problems. Congress does this, too, as when it includes imaginary age-verifying technologies in Internet legislation.) Both bills would mandate that ISPs redirect users away from targeted sites, to government warning notices such as those employed in domain name seizure cases. But, this is exactly what DNSSEC seeks to prevent - it ensures that the only content returned in response to a request for a Web site is that authorized by the site's owner. There are similar problems with IP-based redirection, as Pakistan's inadvertent hijacking of YouTube demonstrated. It is ironic that at a time when the Obama administration has designated cybersecurity as a major priority, Congress is prepared to adopt legislation that makes the Net markedly less secure.
  2. Wildly overbroad and vague - the legislation (particularly SOPA) is a blunderbuss, not a scalpel. Sites eligible for censoring include those:
      • primarily designed or operated for copyright infringement, trademark infringement, or DMCA § 1201 infringement
      • with a limited purpose or use other than such infringement
      • that facilitate or enable such infringement
      • that promote their use to engage in infringement
      • that take deliberate actions to avoid confirming a high probability of such use

    If Flickr, Dropbox, and YouTube were located overseas, they would plainly qualify. Targeting sites that "facilitate or enable" infringement is particularly worrisome - this charge can be brought against a huge range of sites, such as proxy services or anonymizers. User-generated content sites are clearly dead. And the vagueness inherent in these terms means two things: a wave of litigation as courts try to sort out what the terminology means, and a chilling of innovation by tech startups.

  3. Unconstitutional prior restraint - the legislation engages in unconstitutional prior restraint. On filing an action, the Attorney General can obtain an injunction that mandates blocking of a site, or the cutoff of advertising and financial services to it - before the site's owner has had a chance to answer, or even appear. This is exactly backwards: the Constitution teaches that the government cannot censor speech until it has made the necessary showing, in an adversarial proceeding - typically under strict scrutiny. Even under the more relaxed, intermediate scrutiny that characterizes review of IP law, censorship based solely on the government's say-so is forbidden. The prior restraint problem is worsened as the bills target the entire site via its domain name, rather than focusing on individualized infringing content, as the DMCA does. Finally, SOPA's mandatory notice-and-takedown procedure is entirely one-sided: it requires intermediaries to cease doing business with alleged infringers, but does not create any counter-notification akin to Section 512(g) of the DMCA. The bills tilt the table towards censorship. They're unconstitutional, although it may well take long and expensive litigation to demonstrate that.
  4. Undercuts America's moral legitimacy - there is an irreconcilable tension between these bills and the position of the Obama administration - especially Secretary of State Hillary Clinton - on Internet freedom. States such as Iran also mandate blocking of unlawful content; that's why Iran blocked our "virtual embassy" there. America surrenders the rhetorical and moral advantage when it, too, censors on-line content with minimal process. SOPA goes one step further: it permits injunctions against technologies that circumvent blocking - such as those funded by the State Department. This is fine with SOPA adherents; the MPAA's Chris Dodd is a fan of Chinese-style censorship. But it ought to worry the rest of us, who have a stake in uncensored Internet communication.
  5. Undercuts DMCA - the notice-and-takedown provisions of the DMCA work reasonably well. They're predictable, they scale for both discovering infringing content and removing it, and they enable innovation - both YouTube itself and YouTube's system of monetizing potentially infringing content. The bills shift the burden of enforcement from IP owners - which is where it has traditionally rested, and where it belongs - onto intermediaries. SOPA in particular increases the burden, since sites must respond within 5 days of a notification of claimed infringement, with no exception for holidays or weekends. The content industries do not like the DMCA. That is no evidence at all that it is not functioning well.
  6. No empirical evidence - put simply, there is no empirical data suggesting these bills are necessary. The content industries routinely throw around made-up numbers, but they have been frequently debunked. How important are losses from foreign sites that are beyond the reach of standard infringement litigation, versus losses from domestic P2P networks, physical infringement, and the like? Data from places like Switzerland suggests that losses are, at best, minimal. If Hollywood wants America to censor the Internet, it needs to make a convincing case based on actual data, and not moronic analogies to stealing things off trucks. The bills, at their core, are rent-seeking: they would rewrite the law and fundamentally alter Internet free expression to benefit relatively small yet politically powerful industries. (It's no shock that two key Congressional aides who worked on the legislation have taken jobs in Hollywood - they're just following Mitch Glazier, Dan Glickman, and Chris Dodd through the revolving door.) The bills are likely to impede innovation by the far larger information technology industry, and indeed to drive some economic activity in IT offshore.

The bills are bad policy and bad law. And yet I expect one of them to pass and be signed into law. Lastly, the Abrams letter: Noted First Amendment attorney Floyd Abrams wrote a letter in favor of PROTECT IP. Abrams's letter is long, but surprisingly thin on substantive legal analysis of PROTECT IP's provisions. It looks like advocacy, but in reality, it is Abrams selling his (fading) reputation as a First Amendment defender to Hollywood. The letter rehearses standard copyright and First Amendment doctrine, and then tries to portray PROTECT IP as a bill firmly in line with First Amendment jurisprudence. It isn't, as Marvin Ammori and Larry Tribe note, and Abrams embarrasses himself by pretending otherwise. Having the government target Internet sites for pre-emptive censorship, and permitting them to do so before a hearing on the merits, is extraordinary. It is error-prone - look at Dajaz1 and mooo.com. And it runs afoul of not only traditional First Amendment doctrine, but in particular the current Court's heightened protection of speech in a wave of cases last term. Injunctions affecting speech are different in character than injunctions affecting other things, such as conduct, and even the cases that Abrams cites (such as Universal City Studios v. Corley) acknowledge this. According to Abrams, the constitutionality of PROTECT IP is an easy call. That's only true if you're Hollywood's sockpuppet. Thoughtful analysis is far harder.

Cross-posted at Info/Law.

Posted by Derek Bambauer on December 14, 2011 at 09:07 PM in Constitutional thoughts, Culture, Current Affairs, Film, First Amendment, Information and Technology, Intellectual Property, Law and Politics, Music, Property, Web/Tech | Permalink | Comments (1) | TrackBack

On the Move

Jane Yakowitz and I have accepted offers from the University of Arizona James E. Rogers College of Law. We're excited to join such a talented group! But, we'll miss our Brooklyn friends. Come visit us in Tucson!

Posted by Derek Bambauer on December 14, 2011 at 05:39 PM in Current Affairs, Getting a Job on the Law Teaching Market, Housekeeping, Information and Technology, Intellectual Property, Life of Law Schools, Teaching Law, Travel | Permalink | Comments (2) | TrackBack

Saturday, December 10, 2011

Copyright and Your Face

The Federal Trade Commission recently held a workshop on facial recognition technology, such as Facebook's much-hated system, and its privacy implications. The FTC has promised to come down hard on companies who abuse these capabilities, but privacy advocates are seeking even stronger protections. One proposal raised was to provide people with copyright in their faceprints or facial features. This idea has two demerits: it is unconstitutional, and it is insane. Otherwise, it seems fine.

Let's start with the idea's constitutional flaws. There are relatively few constitutional limits on Congressional power to regulate copyright: you cannot, for example, have perpetual copyright. And yet, this proposal runs afoul of two of them. First, imagine that I take a photo of you, and upload it to Facebook. Congress is free to establish a copyright system that protects that photo, with one key limitation: I am the only person who can obtain copyright initially. That's because the IP Clause of the Constitution says that Congress may "secur[e] for limited Times to Authors... the exclusive Right to their respective Writings." I'm the author: I took the photograph (copyright nerds would say that I "fixed" it in my camera's memory). The drafters of the Constitution had good reason to limit grants of copyright to authors: England spent almost 150 years operating under a copyright-like monopoly system that awarded entitlements to a distributor, the Stationer's Company. The British crown had an excellent reason for giving the Company a monopoly - the Stationer's Company implemented censorship. Having a single distributor with exclusive rights gives a government but one choke point to control. This is all to say that Congress can only give copyright to the author of a work, and the author is the person who creates / fixes it (here, the photographer). It's unconstitutional to award it to anyone else.

Second, Congress cannot permit facts to be copyrighted. That's partly for policy reasons - we don't want one person locking up facts for life plus seventy years (the duration of copyright) - and partly for definitional ones. Copyright applies only to works of creative expression, and facts don't qualify. They aren't created - they're already extant. Your face is a fact: it's naturally occurring, and you haven't created it. (A fun question, though, is whether a good plastic surgeon might be able to copyright the appearance of your surgically altered nose. Scholars disagree on this one.) So, attempting to work around the author problem by giving you copyright protection over the configuration of your face is also out. So, the proposal is unconstitutional.

It's also stupid: fixing privacy with copyright is like fixing alcoholism with heroin. Copyright infringement is ubiquitous in a world of digital networked computers. Similarly, if we get copyright in our facial features, every bystander who inadvertently snaps our picture with her iPhone becomes an infringer - subject to statutory damages of between $750 and $30,000. Even if few people sue, those who do have a powerful weapon on their side. Courts would inevitably try to mitigate the harsh effects of this regime, probably by finding most such incidents to be fair use. But fair use is an equitable doctrine - it invites courts to inject their normative views into the analysis - and administering it case by case imposes extraordinarily high costs. It's already expensive for filmmakers, for example, to clear all trademarked and copyrighted items from the zones they film (which is why they have errors and omissions insurance). Now, multiply that permissions problem by every single person captured in a film or photograph. It becomes costly even to do the right thing - and leads to strategic behavior by people who see a potential defendant with deep pockets.

Finally, we already have an IP doctrine that covers this area: the right of publicity (which is based in state tort law). The right of publicity at least has some built-in doctrinal elements that deal with the problems outlined above, such as exceptions when one's likeness is used in a newsworthy fashion. It's not as absolute as copyright, and it lacks the hammer of statutory damages, which is probably why advocates aren't turning to it. But those are features, not bugs.

Privacy problems on social networks are real. But we need to address them with thoughtful, tailored solutions, not by slapping copyright on the problem and calling it done.

Cross-posted at Info/Law.

Posted by Derek Bambauer on December 10, 2011 at 06:03 PM in Constitutional thoughts, Corporate, Culture, Current Affairs, Film, First Amendment, Information and Technology, Intellectual Property, Property, Torts | Permalink | Comments (4) | TrackBack

Tuesday, December 06, 2011

Cry Baby Cry

The project to crowdsource a Tighter White Album (hereinafter TWA) is done, and we’ve come up with a list of 15 songs that might have made a better end product than the original. Today I want to discuss whether I've done something wrong, legally or morally. 

I am no expert on European law, or its protection of the moral rights of the author, but I was reminded by Howard Knopf that my hypothetical exercise could generate litigation, as the author has rights against the distortion or mutilation of the work, separate from copyright protection.  The current copyright act in the UK bars derogatory "treatments" of the work. A treatment can include "deletion from" the original, and the TWA is just that -- 15 songs were trimmed from the original White Album, ostensibly to make something "better than" the original. To the extent the remaining Beatles and their heirs can agree on anything, it might be the sanctity of the existing discography in its extant form, at least as it encapsulates the end product stemming from the individual proclivities of the Beatles at the time. But see Free as a Bird. Fans and critics reacted strongly to Danger Mouse's recent splice of Jay-Z's Black Album and the Beatles' White Album, with one critic describing it as "an insult to the legacy of the Beatles (though ironically, probably intended as a tribute)". Could the TWA implicate the moral rights of the Beatles?

 

On one level, I and my (perhaps unwitting) co-conspirators are doing nothing more than music fans have done for generations: debating which songs of an artist's body of work merit approval and which merit opprobrium. Coffee houses and bars are often filled with these discussions. Rolling Stone has made a cottage industry of ranking and reranking the top songs and albums of the recent past. This project is no different.

On the other hand, I am suggesting, by having the audacity to conduct this survey and publish the results, that the lads from Liverpool did it wrong, were too indulgent, etc., in releasing the White Album in its official form. That's different from saying "Revolution #9" is "not as good" as "Back in the U.S.S.R." (or vice versa). But to my eyes, it falls short of distortion.

Moral rights in sound recordings and musical compositions are not explicitly protected under the Copyright Act. In one case predating the effective date of the current Act, the Monty Python troupe was granted an injunction against the broadcast of its skits in heavily edited form on U.S. television, but that case was grounded more in contract law (ABC having exceeded the scope of its license) and a right not to have the hack job attributed to the Pythons under the Lanham Act.*  The TWA doesn't edit individual songs, and while the Monty Python case protected 30-minute Python episodes as a cohesive whole, it is difficult to argue that the copyright owners of the White Album are necessarily committed to the same cohesive view of the White Album, to the extent they sell individual songs online. One can buy individual Beatles songs, even from the White Album. Once you can buy individual tracks, can there really be moral rights implications in posting my preferred version of the album in a format that allows you to go and buy it?

On to the standard rights protected under U.S. copyright law. Yesterday, I talked about the possibility that the list itself might be a compilation, with protectable creativity in the selection. Might the TWA also be an unauthorized derivative work, exposing me to copyright liability? A derivative work is one "based on" a preexisting work, in which the original is "recast, transformed or adapted." That's similar to the language used to describe a treatment under UK law. Owners of sound recordings often release new versions, with songs added, outtakes included, and bonus art, ostensibly to sell copies to consumers who already purchased them. I certainly didn't ask the Beatles (or more precisely, the copyright owner of the White Album) for permission to propose a shortened album, but what I have done looks like an abridgement of the sort that might fall into traditional notions of fair use.

Once upon a time, I might have made a mixtape and distributed it to my dearest friends (although when I was young, the 45-minute tape was optimal, so I might have been forced to cut another song or two). Committing my findings to vinyl, compact disc, or mp3, using the original recordings, technically violates 17 USC 106(1)'s prohibition on unauthorized reproduction. If I give an unauthorized copy to someone else, I violate the exclusive right to distribute under section 106(3). Unlike the public performance and display rights, there is no express carve out for "private" copying and/or distribution, although it was historically hard to detect. The mixtape in its analog form seems like the type of private use that should be permitted under any reasonable interpretation of fair use, if not insulated by statute.

If I send my digital mixtape to all of my Facebook friends, that seems a bridge too far. However, Megan Carpenter has suggested that by failing to make room for the mix tape in the digital environment, copyright law "breeds contempt." 11 Nev. L.J. 44, 79-80 (2010).  Jessica Litman, Joseph Liu, Glynn Lunney and Rebecca Tushnet, among others, have argued that space for personal consumption is as important in the digital realm as it was in the good old days when everything was analog.

If I instead use social networking tools like Spotify Social** to share my playlist, I probably don't infringe the public performance rights under sections 106(4) and 106(6). Because I use authorized channels, any streaming you do to preview my playlist is likely authorized. And if I post the playlist on iTunes, you can go and buy it as constituted. That seems somewhat closer to an unauthorized copy, but it's not actually unauthorized. The Beatles sell individual singles through iTunes, so it seems problematic to conclude that consumers are not authorized to buy only those songs they prefer.

So all in all, given that I'm not running a CD burner in my office, I think I'm in the clear. What do you think?

*A recent Supreme Court decision puts in doubt the Lanham Act portion of the Monty Python holding

**The Spotify Social example is complicated by the fact that the Beatles aren't included, although I have found reasonable covers of all the songs included on the TWA. The copyright act explicitly provides for a compulsory license to make cover tunes, so long as the cover doesn't deviate too drastically from the original. 17 USC § 115(a). If the license was paid, and the copyright owner notified, those songs are authorized. My repackaging of them in a virtual mixtape, however, is not. 17 U.S.C. § 114(b).

 

Posted by Jake Linford on December 6, 2011 at 07:31 PM in Information and Technology, Intellectual Property, Music | Permalink | Comments (1) | TrackBack

Revisiting the Scary CFAA

Last April, I blogged about the Nosal case, which led to the scary result that just about any breach of contract on the internet can potentially be a criminal access to a protected computer. I discuss the case in extensive detail in that post, so I won't repeat it here. The gist is that employees who had access to a server in their ordinary course of work were held to have exceeded their authorization when they accessed that same server with the intent of funneling information out to an ex-employee turned competitor. The scary extension is that anyone breaching a contract with a web provider might then be considered to be accessing the web server in excess of authorization, and therefore committing a crime.

I'm happy to report that Nosal is now being reheard in the Ninth Circuit. I'm hopeful that the court will do something to rein in the case.

I think most of my colleagues agree with me that the broad interpretation of the statute is a scary one. Where some depart, though, is on the interpretive question. As you'll see in the comments to my last post, there is some disagreement about how to interpret the statute and whether it is void for vagueness. I want to address some of the continuing disagreement after the jump.

I think there are three ways to look at Nosal:

    1. The ruling was right, and the extension to all web users is fine (ouch);

    2. The ruling was right as to the Nosal parties, but should not be extended to all web users; and

    3. The ruling was not right as to the Nosal parties, and also wrong as to all web users.

I believe where I diverge from many of my cyberlaw colleagues is that I fall into group two. I hope to explain why, and perhaps suggest a way forward. Note that I'm not a con law guy, and I'm not a crim law guy, but I am an internet statute guy, so I call the statutory interpretation like I see it.

I want to focus on the notion of authorization. The statute at issue, the Computer Fraud and Abuse Act (or CFAA), outlaws obtaining information from networked computers if one "intentionally accesses a computer without authorization or exceeds authorized access."

Orin Kerr, a leader in this area, wrote a great post yesterday that did two things. First, it rejected tort-based trespass rules like implied consent as too vague for a criminal statute. On this, I agree. Second, it defined "authorization" with respect to other criminal law treatment of consent. In short, the idea is that if you consent to access in the first place, then doing bad things in violation of the promises made does not mean lack of consent to access. On this, I agree as well.

But here's the rub: the statute says "without authorization or exceeds authorized access." And this second phrase has to mean something. The goal, for me at least, is that it covers the Nosal case but not the broad range of activity on the internet. Professor Kerr, I suspect, would say that the only way to do that is for it to be vague, and if so, then the statute must be vague.

I'm OK with the court going that way, but here's my problem with the argument. The statute isn't necessarily vague. Let's say that the scary broad interpretation from Nosal means that every breach of contract is now a criminal act on the web. That's not vague. Breach a contract, then you're liable; there's no wondering whether you have committed a crime or not.

Of course, the contract might be vague, but that's a factual issue that can be litigated. It is not unheard of to have a crime based on failure to live up to an agreement to do something. A dispute about what the agreement was is not the same as being vague. Does that mean I like it? No. Does that mean it's crazy overbroad? Yes. Does that mean everyone's at risk and someone should do something about this nutty statute? Absolutely.

Now, here is where some vagueness comes in - only some breaches lead to exceeded access, and some don't. How are we to decide which is which? The argument Professor Kerr takes on is tying it to trespass, and I agree that doesn't work.

So, I return to my suggestion from several months ago - we should look to the terms of authorization of access to see whether they have been exceeded. This means that if you are an employee who accesses information for a purpose you know is not authorized, then you are exceeding authorization. It also means that if the terms of service on a website say explicitly that you must be truthful about your age or you are not authorized to access the site, then you are unauthorized. And that's not always an unreasonable access limitation. If there were a kids-only website that excluded adults, I might well want to criminalize access obtained by people lying about their age. That doesn't mean all access terms are reasonable, but I'm not troubled by that from a statutory interpretation standpoint.

I'm sure one can attack this as vague - it won't always be clear when a term is tied to authorization. But then again, if it is not a clear term of authorization, the state shouldn't be able to prove that authorization was exceeded. This does mean that snoops everywhere, and people who don't read web site terms (me included), are at risk for violating terms of access we never saw or agreed to. I don't like that part of the law, and it should be changed. I'm fine with making it more limiting in ways that Professor Kerr and others have suggested.

But I don't know that it is invalid as vague - there are lots of things that may be illegal that people don't even know are on the books. Terms of service, at least, people have some chance of knowing - they simply choose not to read them. That doesn't mean the statute isn't scary, because I don't see behavior (including my own) changing anytime soon.

Posted by Michael Risch on December 6, 2011 at 05:18 PM in Information and Technology, Web/Tech | Permalink | Comments (8) | TrackBack

Monday, December 05, 2011

While My (Favorite Beatles Song) Gently Weeps

The voting is done and the world has (or 264 entities voting in unique user sessions have) selected the songs for "The Tighter" White Album (hereinafter TWA). The survey invited voters to make pairwise comparisons between two Beatles songs, under the premise that one could be kept, and one would be cut.
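For the curious, the mechanics of tallying a pairwise survey like this are easy to sketch. The snippet below is a minimal illustration, not the actual survey code, and the ballots shown are hypothetical examples: it ranks songs by the share of pairwise matchups each one won, so a song shown in more pairs isn't favored just for appearing more often.

```python
from collections import Counter

def rank_by_pairwise_votes(votes):
    """Tally pairwise keep/cut ballots into a ranking.

    votes: iterable of (kept_song, cut_song) pairs, one per ballot.
    Returns songs ordered from most-kept to least-kept.
    """
    wins = Counter()
    appearances = Counter()
    for kept, cut in votes:
        wins[kept] += 1
        appearances[kept] += 1
        appearances[cut] += 1
    # Rank by win share, not raw wins, so exposure doesn't skew the result.
    return sorted(appearances, key=lambda s: wins[s] / appearances[s], reverse=True)

# Hypothetical ballots, not real survey data:
ballots = [
    ("Blackbird", "Revolution 9"),
    ("Blackbird", "Honey Pie"),
    ("Helter Skelter", "Revolution 9"),
    ("Revolution 9", "Honey Pie"),
]
print(rank_by_pairwise_votes(ballots))
# → ['Blackbird', 'Helter Skelter', 'Revolution 9', 'Honey Pie']
```

A real survey would also need to balance how often each song appears across sessions; this sketch only shows the tallying step.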

There are several copyright-related implications of my experiment, and I wanted to unpack a few of them. Today, my thoughts on the potential authorship and ownership of the list itself. Tomorrow, a few thoughts on moral rights, whether I’ve done something wrong, and whether what I've done is actionable. [Edited to add hyperlink to Part II]

But first, the results -- An album's worth of music (two sides no longer than 24:25 each, the length of Side Four of the original), ranked from strongest to weakest:

SIDE ONE:

While My Guitar Gently Weeps

Blackbird

Back in the USSR

Happiness is a Warm Gun

Dear Prudence

Revolution 1

Ob-la-di, Ob-la-Da

SIDE TWO:

Helter Skelter

I'm So Tired

I Will

Julia

Rocky Raccoon

Mother Nature's Son

Cry Baby Cry

Sexy Sadie

How did the voters do? Very well, by my estimation. I was pleasantly surprised by the balance. McCartney and Lennon each sang (which by this point in their career was a strong signal of primary authorship) 12 of the 30 tracks, and each had 7 selections on the TWA. (John also wrote "Good Night," which was sung by Ringo and overproduced at Paul's behest, so I think it can be safely cabined.) Only one of George Harrison's four compositions, "While My Guitar Gently Weeps," made the cut, but it was the strongest finisher overall. Ringo's "Don't Pass Me By," no critical darling, did poorly in the final assessment.*

It's possible, although highly unlikely in this instance, that the list of songs is copyrightable expression. As a matter of black letter law, one who compiles other copyrighted works may secure copyright protection in the

collection and assembling of preexisting materials or of data that are selected, coordinated, or arranged in such a way that the resulting work as a whole constitutes an original work of authorship.

Protection only extends to the material contributed by the author. The Second Circuit has found copyrightable expression in the exercise of judgment as expressed in a prediction about the price of used cars over the next six months, even where the prediction was striving to map as close as possible to the actual value of cars in those markets. Other Second Circuit cases recognize copyright protection in the selection of terms of venery -- labels for groups of animals (e.g., a pride of lions) and in the selection of nine pitching statistics from among scores of potential stats. In each of these cases, there was some judgment exercised about what to include or what not to include.

In this case, I proposed the question, put together the survey, monitored the queue, and recruited respondents through various channels. The voting, however, was actually done by multiple individuals selecting between pairs of songs. It's difficult to paint that as a "work of authorship" in any traditional sense of the phrase. I set up the experiment and then cut it loose. I could have made my own list (and have, but I won't bore you with that), and that list would have been my own work of authorship. This seems like something different, because I'm not making any independent judgment (other than the decision to limit the length of the TWA to twice the length of the longest side of the White Album).

Let's assume for a moment that there is protectable expression, even though I crowdsourced the selection process. Could it be that all 246 voters are joint authors with me in this work? It seems unlikely. The black letter test asks (1) whether we all intended our independent, copyrightable contributions to merge into an inseparable whole, and (2) whether we intended everyone to be a co-author. It's hard to call an individual vote between two songs a separately copyrightable contribution, even with the prompt: "The Beatles' White Album might have been stronger with fewer songs. Which song would you keep?" By atomizing the decision, I might be insulated from claims that individual voters are co-authors of the final list, although I suggested that there was something cooperative about this event in my description of the vote:

We’re crowdsourcing a “Tighter White Album.” Some say the White Album would have been better if it was shorter, which requires cutting some songs. Together, we can work it out. For each pair, vote for the song you would keep. Vote early and often, and share this with your friends. The voting will go through the end of November.

Still, to the extent they took seriously my admonitions, the readers were endeavoring to decide which of the two songs presented belonged on the TWA, whatever the factors that played into the decision. Might that choice also be protected in individual opinions sorted in a certain fashion? This really only matters if I make money from the proposed TWA. I would then need to make an accounting to my joint authors. And even if the vote itself was copyrightable expression, the voter likely granted me an implied license to include it in my final tally.

Should I have copyright protection in this list? Copyright protection is arguably granted to give authors (term of art) the incentive to create expressive works. I didn't need copyright protection as an incentive: I ran the survey so that I could talk about the results (and to satisfy my own curiosity). And my purposes are served if others take the results and run with them (although I would prefer to be attributed). Maybe no one else needs copyright protection, either, as lists ranking Beatles songs abound on the internet. Rolling Stone magazine has built a cottage industry on ranking and reranking the popular music output of the last 60 years, but uses its archives of rankings as an incentive to pay for a subscription. If the rankings didn't sell, magazines would likely do something else.

As an alternative, Rolling Stone might also arguably benefit from common law protection against the misappropriation of hot news, granted by the Supreme Court in INS v. AP, which would provide narrow injunctive relief to allow it to sell its news before others can copy without permission. The magazine might have trouble with recent precedent from the Second Circuit, which held that making the news does not qualify for hot news protection, although reporting the news might. So if I reproduce Rolling Stone's list (breaking news: Rolling Stone prefers Sonic Youth to Britney Spears), that might fall outside of hot news misappropriation, although perhaps not outside of copyright protection itself.

 

*Two personal reflections: (1) I am astounded that Honey Pie didn't make the cut. Perhaps voters confused it with Wild Honey Pie, which probably deserved its lowest ranking. (2) I sing Good Night to my five-year old each night as a lullaby, and my world would be different without it. That is the inherent danger in a project like mine, and those who criticize the very idea that the White Album would have been better had it been shorter can marshal my own anecdotal evidence in support of their skepticism.


Posted by Jake Linford on December 5, 2011 at 03:35 PM in Information and Technology, Intellectual Property, Music | Permalink | Comments (1) | TrackBack

Sunday, November 27, 2011

Threading the Needle

Imagine that Ron Wyden fails: either PROTECT IP or SOPA / E-PARASITE passes and is signed into law by President Obama. Advocacy groups such as the EFF would launch an immediate constitutional challenge to the bill’s censorship mandates. I believe the outcome of such litigation is far less certain than either side believes. American censorship legislation would keep lots of lawyers employed (always a good thing in a down economy), and might generate some useful First Amendment jurisprudence. Let me sketch three areas of uncertainty that the courts would have to resolve, and that improve the odds that such a bill would survive.

First, how high is the constitutional barrier to the legislation? Both bills look like systems of prior restraint, which loads the government with a “heavy presumption” against their constitutionality. The Supreme Court’s jurisprudence in the two most relevant prior cases, Reno v. ACLU and Ashcroft v. ACLU, applied strict scrutiny: laws must serve a compelling government interest, and be narrowly tailored to that interest. This looks bad for the state, but wait: we’re dealing with laws regulating intellectual property, and such laws draw intermediate scrutiny at most. This is what I call the IP loophole in the First Amendment. Copyright law, for example, enjoys more lenient treatment under free speech examination because the law has built-in safeguards such as fair use, the idea-expression dichotomy, and the (ever-lengthening) limited term of rights.

Moreover, it’s not certain that the bills even regulate speech. Here, I mean “speech” in its First Amendment sense, not the colloquial one. Burning one’s draft card at a protest seems like speech to most of us – the anti-war message is embodied within the act – but the Supreme Court views it as conduct. And conduct can be regulated so long as the government meets the minimal strictures of rational review. The two bills focus on domain name filtering – they impede users from reaching certain on-line material, but formally limit only the conversion of domain name to IP address by an Internet service provider. (I’m skipping over the requirement that search engines de-list such sites, which is a much clearer case of regulating speech.) DNS lookups seem akin to conduct, although the Court’s precedent in this area is hardly a model of lucidity. (Burning the American flag = speech; burning a draft card = conduct. QED.) Other courts have struggled, most notably in the context of the anti-circumvention provisions of the Digital Millennium Copyright Act, to categorize domain names as speech or not-speech, and thus far have found a kind of Hegelian duality to them. That suggests an intermediate level of scrutiny, which would resonate with the IP loophole analysis above.
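To make the conduct-versus-speech point concrete, here is a toy model of what domain name filtering does, with entirely hypothetical domains and addresses: the resolver refuses the name-to-IP conversion for a blocked name, but the site itself, and its IP address, are untouched.

```python
# Toy model of DNS filtering. All names and addresses are hypothetical
# (drawn from reserved documentation ranges), not a real DNS client.

DNS_TABLE = {
    "example-infringer.com": "203.0.113.7",
    "example-news.org": "198.51.100.4",
}
BLOCKLIST = {"example-infringer.com"}  # the court-ordered filter, in this sketch

def resolve(domain):
    """Return the IP for a domain, or None if the resolver must filter it."""
    if domain in BLOCKLIST:
        return None  # the name-to-IP conversion fails; nothing else changes
    return DNS_TABLE.get(domain)

print(resolve("example-news.org"))         # resolves normally
print(resolve("example-infringer.com"))    # filtered: None
print("203.0.113.7" in DNS_TABLE.values()) # the host itself still exists
```

The sketch shows why filtering is arguably conduct regulation: no content is removed, and a user who knows the raw IP address (or uses an unfiltered resolver) can still reach the site.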

Second, who has standing? It seems that our plaintiffs would need to find a site that conceded it met the definition of a site “dedicated to the theft of U.S. property.” That seems hard to do until filtering begins – at which point whatever ills the legislation creates will have materialized. (It might also expose the site to suits from affected IP owners.) Perhaps Internet service providers could bring a challenge based on either third-party standing (on behalf of their users, if we think users’ rights are implicated, or the foreign sites) or their own speech interests. However, I think it’s unlikely that users would have standing, particularly given the somewhat dilute harm of being unable to reach material on allegedly infringing sites. And, as described above, it’s not clear that ISPs have a speech interest at all: domain name services simply may be conduct. 

Finally, how can we distinguish E-PARASITE or PROTECT IP from similar legislation that passes constitutional muster? Section 1201 of the DMCA, for example, permits liability to be imposed not only on those who make tools for circumventing access controls available, but even on those who knowingly link to such tools on-line. The government can limit distribution of encryption technology – at least as object code – overseas, by treating it as a munition. And thus far, the federal government has been able to seize domain names under civil forfeiture provisions, with nary a quibble from the federal courts.

To be plain: I think both bills are terrible legislation. They’re certain to damage America’s technology innovation industries, which are the crown jewels of our economy and our future competitiveness. They turn over censorship decisions to private actors with no interest whatsoever in countervailing values such as free expression or, indeed, anything other than their own profit margins. And their procedural protections are utterly inadequate – in my view. But I think it is possible that these bills may thread the constitutional needle, particularly given the one-way ratchet of copyright protection before the federal courts. The decision in Ashcroft, for instance, found that end user filtering was a less restrictive alternative to the Child Online Protection Act. But end user filtering doesn’t work when the person installing the software is not a parent concerned about on-line filth, but one eager to download infringing movies. And that means that legislation may escape narrowness analysis as well. As I wrote in Orwell’s Armchair:

focusing only on content that is clearly unlawful – such as child pornography, obscenity, or intellectual property infringement – has constitutional benefits that can help a statute survive. These categories of material do not count as “speech” for First Amendment analysis, and hence the government need not satisfy strict scrutiny in attacking them. Recent bills seem to show that legislators have learned this lesson – the PROTECT IP Act, for example, targets only those Web sites with “no significant use other than engaging in, enabling, or facilitating” IP infringement. Banning only unprotected material could move censorial legislation past overbreadth objections.

So: the outcome of any litigation is not only highly uncertain, but more uncertain than free speech advocates believe. Please paint a more hopeful story for me, and tell me why I’m wrong.

Cross-posted at Info/Law.

 

Posted by Derek Bambauer on November 27, 2011 at 08:37 PM in Civil Procedure, Constitutional thoughts, Current Affairs, First Amendment, Information and Technology, Intellectual Property, Law and Politics, Web/Tech | Permalink | Comments (0) | TrackBack

Monday, November 21, 2011

How Not To Secure the Net

In the wake of credible allegations of hacking of a water utility, including physical damage, attention has turned to software security weaknesses. One might think that we'd want independent experts - call them whistleblowers, busticati, or hackers - out there testing, and reporting, important software bugs. But it turns out that overblown cease-and-desist letters still rule the day for software companies. Fortunately, when software vendor Carrier IQ attempted to misstate IP law to silence security researcher Trevor Eckhart, the EFF took up his cause. But this brings to mind three problems.

First, unfortunately, EFF doesn't scale. We need a larger-scale effort to represent threatened researchers. I've been thinking about how we might accomplish this, and would invite comments on the topic.

Second, IP law's strict liability, significant penalties, and increasing criminalization can create significant chilling effects for valuable security research. This is why Oliver Day and I propose a shield against IP claims for researchers who follow the responsible disclosure model.

Finally, vendors really need to have their general counsel run these efforts past outside counsel who know IP. Carrier IQ's C&D reads like a high school student did some basic Wikipedia research on copyright law and then ran the resulting letter through Google Translate (English to Lawyer). If this is the aptitude that Carrier IQ brings to IP, they'd better not be counting on their IP portfolio for their market cap.

When IP law suppresses valuable research, it demonstrates, in Oliver's words, that lawyers have hacked East Coast Code in a way it was not designed for. Props to EFF for hacking back.

Cross-posted at Info/Law.

Posted by Derek Bambauer on November 21, 2011 at 09:33 PM in Corporate, Current Affairs, First Amendment, Information and Technology, Intellectual Property, Science, Web/Tech | Permalink | Comments (2) | TrackBack

Friday, November 18, 2011

A Soap Impression of His Wife

As I previewed earlier this week, I want to talk about the copyright implications for 3D printers. A 3D printer is a device that can reproduce a 3-dimensional object by spraying layers of plastic, metal, or ceramic into a given shape. (I imagine the process smelling like those Mold-a-Rama plastic souvenir vending machines prevalent in many museums, a thought simultaneously nostalgic and sickening). Apparently, early adopters are already purchasing the first generation of 3D printers, and there are websites like Thingiverse where you can find plans for items you can print in your home, like these Tardis salt shakers.*

[Image: Tardis salt shakers]

Perhaps unsurprisingly, there can be copyright implications. A recent NY Times blog post correctly notes that the 3D printer is primarily suited to reproduce what § 101 of the Copyright Act calls "useful articles," physical objects that have "an intrinsic utilitarian function," and which, by definition, receive no copyright protection...except when they do. 

A useful article can include elements that are protectable as a "pictorial, graphic, [or] sculptural work." The elements are protectable to the extent "the pictorial, graphic, or sculptural features...can be identified separately from, and are capable of existing independently of, the utilitarian aspects of the article." There are half a dozen tests courts have employed to determine whether protectable features can be separated from utilitarian aspects. Courts have rejected copyright protection for mannequin torsos and the ubiquitous ribbon bike rack, but granted it for belt buckles with ornamental elements that were not a necessary part of a functioning belt.

[Image: Vaquero belt buckle]


Print out a "functional" mannequin torso (or post your plans for it on the internet) and you should have no trouble. Post a schematic for the Vaquero belt buckle, and you may well be violating the copyright protection in the sculptural elements. But even that can be convoluted. The case law is mixed on how to think about 2D works derived from 3D works, and vice versa. A substantially similar 3D work can infringe a 2D graphic or pictorial work (Ideal Toy Corp. v. Kenner Prods. Div., 443 F. Supp. 291 (S.D.N.Y. 1977)), but constructing a building without permission from protectable architectural plans was not infringement, prior to a recent revision to the Copyright Act. Likewise, a drawing of a utilitarian item might be protectable as a drawing, but does not grant the copyright holder the right to control the manufacture of the item.

And if consumers are infringing, there is a significant risk that the manufacturer of the 3D printer could be vicariously or contributorily liable for that infringement. The famous Sony decision, which insulated the distribution of devices capable of commercially significant noninfringing uses, even if they could also be used for copyright infringement, has been narrowed both by the recent Grokster filesharing decision and by the DMCA anticircumvention provisions. The easy, but unsatisfying, takeaway is that 3D printers will keep copyright lawyers employed for years to come.

Back to the Tardis shakers, for a moment: the individual who posted them to the Thingiverse noted that the shaker "is derivative of thingiverse.com/thing:1528 and thingiverse.com/thing:12278", a Tardis sculpture and the lid of a bottle, respectively. I found this striking for two reasons. First, it suggests a custom of attribution on thingiverse, but I don't yet have a sense for whether it's widespread. Second, if either of those things is protectable as a copyrighted work (which seems more likely for the Tardis sculpture, and less so for the lid), then the Tardis salt shaker may be an unauthorized, and infringing, derivative work, and the decision to offer attribution perhaps unwise in retrospect.

* The TARDIS is the preferred means of locomotion of Doctor Who, the titular character of the long-running BBC science fiction program. It's a time machine / space ship disguised as a 1960s-era London police call box. The shape of the TARDIS, in its distinctive blue color, is protected by three registered trademarks in the UK.

 

Posted by Jake Linford on November 18, 2011 at 09:00 AM in Information and Technology, Intellectual Property, Television, Web/Tech | Permalink | Comments (0) | TrackBack

Thursday, November 17, 2011

Choosing Censorship

Yesterday, the House of Representatives held hearings on the Stop Online Piracy Act (it's being called SOPA, but I like E-PARASITE tons better). There's been a lot of good coverage in the media and on the blogs. Jason Mazzone had a great piece in TorrentFreak about SOPA, and see also stories about how the bill would re-write the DMCA, about Google's perspective, and about the Global Network Initiative's perspective.

My interest is in the public choice aspect of the hearings, and indeed the legislation. The tech sector dwarfs the movie and music industries economically - heck, the video game industry is bigger. Why, then, do we propose to censor the Internet to protect Hollywood's business model? I think there are two answers. First, these particular content industries are politically astute. They've effectively lobbied Congress for decades; Larry Lessig and Bill Patry among others have documented Jack Valenti's persuasive powers. They have more lobbyists and donate more money than companies like Google, Yahoo, and Facebook, which are neophytes at this game. 

Second, they have a simpler story: property rights good, theft bad. The AFL-CIO representative who testified said that "the First Amendment does not protect stealing goods off trucks." That is perfectly true, and of course perfectly irrelevant. (More accurately: it is idiotic, but the AFL-CIO is a useful idiot for pro-SOPA forces.) The anti-SOPA forces can wheel to a simple argument themselves - censorship is bad - but that's somewhat misleading, too. The more complicated, and accurate, arguments are that SOPA lacks sufficient procedural safeguards; that it will break DNSSEC, one of the most important cybersecurity moves in a decade; that it fatally undermines our ability to advocate credibly for Internet freedom in countries like China and Burma; and that IP infringement is not always harmful and not always undesirable. But those arguments don't fit on a bumper sticker or the lede in a news story.

I am interested in how we decide on censorship because I'm not an absolutist: I believe that censorship - prior restraint - can have a legitimate role in a democracy. But everything depends on the processes by which we arrive at decisions about what to censor, and how. Jessica Litman powerfully documents the tilted table of IP legislation in Digital Copyright. Her story is being replayed now with the debates over SOPA and PROTECT IP: we're rushing into decisions about censoring the most important and innovative medium in history to protect a few small, politically powerful interest groups. That's unwise. And the irony is that a completely undemocratic move - Ron Wyden's hold, and threatened filibuster, in the Senate - is the only thing that may force us into fuller consideration of this measure. I am having to think hard about my confidence in process as legitimating censorship.

Cross-posted at Info/Law.

Posted by Derek Bambauer on November 17, 2011 at 09:15 PM in Constitutional thoughts, Corporate, Culture, Current Affairs, Deliberation and voices, First Amendment, Information and Technology, Intellectual Property, Music, Property, Web/Tech | Permalink | Comments (9) | TrackBack

Tuesday, November 15, 2011

You Say You Want a Revolution

Two potentially revolutionary "disruptive technologies" were back in the news this week. The first is ReDigi, a marketplace for the sale of used "legally downloaded digital music." For over 100 years, copyright law has included a first sale doctrine, which says I can transfer a "lawfully made" copy* (a material object in which a copyrighted work is fixed) by sale or other means, without permission of the copyright owner. The case law is codified at 17 U.S.C. § 109.

ReDigi says its marketplace falls squarely within the first sale limitation on the copyright owner's right to distribute, because it verifies that copies are "from a legitimate source," and it deletes the original from all the seller's devices. The Recording Industry Association of America has objected to ReDigi's characterization of the first sale claim on two primary grounds,** as seen in this cease and desist letter.

First, as ReDigi describes its technology, it makes a copy for the buyer, and deletes the original copy from the computer of the seller. The RIAA finds fault with the copying. Section 109 insulates against liability for unauthorized redistribution of a work, but not for making an unauthorized copy of a work. Second, the RIAA is unpersuaded that ReDigi can guarantee that sellers are selling "lawfully made" digital files. ReDigi's initial response can be found here.

At a first cut, ReDigi might find it difficult to ever satisfy the RIAA that it was only allowing the resale of lawfully made digital files. Whether it can satisfy a court is another matter. It might be easier for an authorized vendor, like iTunes or Kindle, to mark legitimate copies going forward, but probably not to detect prior infringement.

Still, verifying legitimate copies may be easier than shoehorning the "copy and delete" business model into the language of § 109. Deleting the original and moving a copy seems in line with the spirit of the law, but not its letter. Should that matter? ReDigi attempts to position itself as close as technologically possible to the framework spelled out in the statute, but that's a framework designed to handle the sale of physical objects that embody copyrightable works.
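The spirit-versus-letter tension is easier to see schematically. Below is a hypothetical sketch of a "copy and delete" transfer as the post describes ReDigi's model; it is an illustration, not ReDigi's actual implementation.

```python
# Hypothetical "copy and delete" transfer, per the post's description
# of ReDigi's model (not its actual implementation).

def transfer(track, seller_library, buyer_library):
    """Move a verified track from the seller's library to the buyer's."""
    if track not in seller_library:
        raise ValueError("seller holds no verified copy of this track")
    buyer_library.add(track)       # step 1: a new copy comes into being
    seller_library.discard(track)  # step 2: the seller's original is deleted

seller, buyer = {"song.mp3"}, set()
transfer("song.mp3", seller, buyer)
print(sorted(buyer), sorted(seller))  # ['song.mp3'] []
```

The RIAA's objection maps onto step 1: § 109 speaks to distributing an existing lawful copy, and deleting the original in step 2 does not undo the fact that a new copy was made along the way.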

This is not the only area where complying with statutory requirements can tie businesses in knots. Courts have consistently struggled with how to think about digital files. In London-Sire Records v. Does, the court had to puzzle out whether a digital file can be a material object and thus a copy* distributed in violation of § 106(3). The policy question is easy to articulate, if reasonable minds still differ about the answer: is the sale and distribution of digital files something we want the copyright owner to control or not?

As a statutory matter, the court in London-Sire concluded that "material" didn't mean material in its sense as "a tangible object with a certain heft," but instead "as a medium in which a copyrighted work can be 'fixed.'" This definition is, of course, driven by the statute: copyright subsists once an original work of authorship is fixed in a tangible medium of expression from which it can be reproduced, and the Second Circuit has recently held in the Cablevision case that a work must also be fixed -- embodied in a copy or phonorecord for a period of more than transitory duration -- for infringement to occur. Policy intuitions may be clear, but fitting the solution into the statutory language sometimes is not. And a business model designed to fit existing statutory safe harbors might do things that appear otherwise nonsensical, like Cablevision's decision to keep individual copies of digital videos recorded by consumers on its servers, to avoid copyright liability.

Potentially even more disruptive is the 3D printer, prototypes of which already exist in the wild, and which I will talk more about tomorrow.

* Technically, a digital audio file is a phonorecord, and not a copy, but that's a distinction without a difference here.

** The RIAA also claims that ReDigi violates the exclusive right of public performance by playing 30 second samples of members' songs on its website, but that's not a first sale issue.

Posted by Jake Linford on November 15, 2011 at 04:22 PM in Information and Technology, Intellectual Property, Music, Web/Tech | Permalink | Comments (1) | TrackBack

Thursday, November 10, 2011

Cyber-Terror: Still Nothing to See Here

Cybersecurity is a hot policy / legal topic at the moment: the SEC recently issued guidance on cybersecurity reporting, defense contractors suffered a spear-phishing attack, the Office of the National Counterintelligence Executive issued a report on cyber-espionage, and Brazilian ISPs fell victim to DNS poisoning. (The last highlights a problem with E-PARASITE and PROTECT IP: if they inadvertently encourage Americans to use foreign DNS providers, they may worsen cybersecurity problems.) Cybersecurity is a moniker that covers a host of problems, from identity theft to denial of service attacks to theft of trade secrets. The challenges are real, and there are many of them. That's why it is disheartening to see otherwise knowledgeable experts focusing on chimerical targets.

For example, Eugene Kaspersky stated at the London Cyber Conference that "we are close, very close, to cyber terrorism. Perhaps already the criminals have sold their skills to the terrorists - and then...oh, God." FBI executive assistant director Shawn Henry said that attacks could "paralyze cities" and that "ultimately, people could die." Do these claims hold up? What, exactly, is it that cyber-terrorists are going to do? Engage in identity theft? Steal U.S. intellectual property? Those are somewhat worrisome, but where is the "terror" part? Terrorists support malevolent activities with all sorts of crimes. But that's "support," not "terror." Hysterics like Richard Clarke spout nonsense about shutting down air traffic control systems or blowing up power plants, but there is precisely zero evidence that even nation-states can do this sort of thing, let alone small, non-state actors. The "oh, God" part of Kaspersky's comment is a standard rhetorical trope in the apocalyptic discussions of cybersecurity. (I knock these down in Conundrum, coming out shortly in Minnesota Law Review.) And paralyzing a city isn't too hard: snowstorms do it routinely. The question is how likely such threats are to materialize, and whether the proposed answers (Henry thinks we should build a new, more secure Internet) make any sense.

There are at least two plausible reasons why otherwise rational people spout lurid doomsday scenarios instead of focusing on the mundane, technical, and challenging problems of networked information stores. First, and most cynically, they can make money from doing so. Kaspersky runs an Internet security company; Clarke is a cybersecurity consultant; former NSA director Mike McConnell works for a law firm that sells cybersecurity services to the government. I think there's something to this, but I'm not ready to accuse these people of being venal. I think a more likely explanation flows from Paul Ohm's Myth of the Superuser: many of these experts have seen what truly talented hackers can do, given sufficient time, resources, and information. They then extrapolate to a world where such skills are commonplace, and unrestrained by ethics, social pressures, or sheer rational actor deterrence. Combine that with the chance to peddle one's own wares, or books, to address the problems, and you get the sum of all fears. Cognitive bias matters.

The sky, though, is not falling. Melodrama won't help - in fact, it distracts us from the things we need to do: to create redundancy, to test recovery scenarios, to deploy more secure software, and to encourage a culture of testing (the classic "hacking"). We are not going to deploy a new Internet. We are not going to force everyone to get an Internet driver's license. Most cybersecurity improvements are going to be gradual and unremarkable, rather than involving Bruce Willis and an F-35. Or, to quote Frank Drebin, "Nothing to see here, please disperse!" Cross-posted at Info/Law.

Posted by Derek Bambauer on November 10, 2011 at 03:53 PM in Criminal Law, Current Affairs, Information and Technology, International Law, Web/Tech | Permalink | Comments (1) | TrackBack