Tuesday, April 09, 2013
Academics Go To Jail – CFAA Edition
Though the Aaron Swartz tragedy has brought some much needed attention to the CFAA, I want to focus on a more recent CFAA event—one that has received much less attention but might actually touch many more people than the case against Swartz.
Andrew “Weev” Auernheimer (whom I will call AA for short) was recently convicted under the CFAA and sentenced to 41 months and $73K in restitution. Orin Kerr is representing him before the Third Circuit. I am seriously considering filing an amicus brief on behalf of all academics. In short, this case scares me in a much more personal way than the cases discussed in my prior CFAA posts. More after the jump.
Here’s the basic story, as described by Orin Kerr:
When iPads were first released, iPad owners could sign up for Internet access using AT&T. When they signed up, they gave AT&T their e-mail addresses. AT&T decided to configure their webservers to “pre load” those e-mail addresses when it recognized the registered iPads that visited its website. When an iPad owner would visit the AT&T website, the browser would automatically visit a specific URL associated with its own ID number; when that URL was visited, the webserver would open a pop-up window that was preloaded with the e-mail address associated with that iPad. The basic idea was to make it easier for users to log in to AT&T’s website: The user’s e-mail address would automatically appear in the pop-up window, so users only needed to enter in their passwords to access their account. But this practice effectively published the e-mail addresses on the web. You just needed to visit the right publicly-available URL to see a particular user’s e-mail address. Spitler [AA’s alleged co-conspirator] realized this, and he wrote a script to visit AT&T’s website with the different URLs and thereby collect lots of different e-mail addresses of iPad owners. And they ended up collecting a lot of e-mail addresses — around 114,000 different addresses — that they then disclosed to a reporter. Importantly, however, only e-mail addresses were obtained. No names or passwords were obtained, and no accounts were actually accessed.
Let me paraphrase this: AA went to a publicly accessible website, using publicly accessible URLs, and saved the results that AT&T sent back in response to each URL. In other words, AA did what you do every time you load a web page. The only difference is that AA did it for multiple URLs, using sequential guesses at what those URLs would be. There was no robots.txt file that I'm aware of (this file tells search engines which URLs should not be visited by spiders). There was no user notice or agreement that barred use of the web page in this manner. Note that I'm not saying such things should make the conduct illegal, only that such things didn't even exist here. It was just two people loading data from a website. A commenter on my prior post asked this exact question--whether "link guessing" was illegal--and I was noncommittal. I guess now we have our answer.
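To make concrete how mundane the conduct was, here is a minimal sketch of the kind of script at issue. Everything specific in it is invented: the endpoint, the parameter name, and the ID format are hypothetical stand-ins, not AT&T's actual URLs.

```python
# Hypothetical sketch: iterate over sequential device IDs appended to a
# public URL and save whatever the server returns. The endpoint and
# parameter name below are invented for illustration.
from urllib.request import urlopen
from urllib.error import URLError

BASE = "https://example.com/account?device_id={}"  # hypothetical endpoint

def candidate_url(device_id):
    """Build the public URL for a given sequential ID."""
    return BASE.format(device_id)

def harvest(start_id, count):
    """Fetch each candidate URL; skip IDs the server rejects."""
    results = {}
    for device_id in range(start_id, start_id + count):
        try:
            with urlopen(candidate_url(device_id), timeout=5) as resp:
                results[device_id] = resp.read().decode("utf-8", "replace")
        except URLError:
            continue  # unregistered IDs simply fail; move on
    return results
```

Each request is an ordinary HTTP GET, indistinguishable in kind from loading the page in a browser; only the number of requests differs.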
The government’s indictment makes the activity sound far more nefarious, of course. It claims that AA “impersonated” an iPad. This allegation is a bit odd: the script impersonated an iPad in the same way that you might impersonate a cell phone by loading http://m.facebook.com to load the mobile version of Facebook. Go ahead, try it and you’ll see – Facebook will think you are a cell phone. Should you go to jail?
So, readers might say, what’s the problem here? AA should not have done what he did – he should have known that AT&T did not want him downloading those emails. Yeah, he probably did know that. But consider this: AA did not share the information with the world, as he could have. I am reasonably certain that if his intent was to harm users, we would never know that he did this – he would have obtained the addresses over an encrypted VPN and absconded. Instead, AA shared this flaw with the world. AT&T set up this ridiculously insecure system that allowed random web users to tie Apple IDs to email addresses through ignorance at best or hubris at worst. I don’t know if AA attempted to inform AT&T of the issue, but consider how far you got last time you contacted tech support with a problem on an ISP website. AA got AT&T’s attention, and the problem got fixed with no (known) divulgence of the records.
Before I get to academia, let me add one more point. To the extent that AA should have known AT&T didn't desire this particular access, the issue is one of degree, not of kind. And that is the real problem with the statute. There is nothing in the statute, absolutely nothing, that would help AA know whether he violated the law by testing this URL with one, five, ten, or ten thousand IDs. Here's one to try: follow a deep link to a concert web page using a URL with a numerical code. Surely Ticketmaster can't object to such deep linking, right? Well, it did, and sued Tickets.com over such behavior. It claimed, among other things, that each and every URL was copyrighted and thus infringed if linked to by another. It lost that argument, but today it could just say that such access was unwanted. For example, maybe Ticketmaster doesn't like me pointing out its ridiculous argument in the Tickets.com case, making my link unauthorized. Or maybe I should have known because the Ticketmaster terms of service say that an express condition of my authorization to view the site is that I will not "Link to any portion of the Site other than the URL assigned to the home page of our site." That's right, Ticketmaster still thinks deep linking is unauthorized, and I suppose that means I risk criminal prosecution for linking it. Imagine if I actually saved some of the data!
This is where academics come in. Many, many academics scrape. (Don't stop reading here – I'll get to non-scrapers below.) First, scraping is a key way to get data from online databases that are not easily downloadable. This includes, for example, scraping of the US Patent & Trademark Office site; although the data is now available for mass download, that data is cumbersome, and scraper use is still common. That PTO data is public does not help matters. In fact, it might make things worse, since "unauthorized" access to government servers might receive enhanced penalties!
Academics (and non-academics) in other disciplines scrape websites for research as well. How are these academics to know that such scraping is disallowed? What if there is no agreement barring them from doing so? What if there is a web-wrap notice as broad as Ticketmaster's, purporting to bar such activities but with no consent by the user? The CFAA could send any academic to jail for ignoring such warnings—or worse, for not seeing them in the first place. Such a prosecution would be preposterous, skeptics might say. I hope the skeptics are right, but I'm not hopeful. Though I can't find the original source, I recall Orin Kerr recounting how his prosecutor colleagues said the same thing 10 years ago when he argued the CFAA might apply to those who breach contracts, and now such prosecutions are commonplace.
Finally, non-scrapers are surely safe, right? Maybe it depends on whether they use Zotero. Thousands of people use it. How does Zotero get information about publications when a web site does not provide standardized citation data? You guessed it: a scraper. Indeed, a primary reason I don't use Zotero is that the Lexis and Westlaw scrapers don't work. But the PubMed importer scrapes. What if PubMed decided that it considered such scraping unauthorized? Surely people should know this, right? If it wanted people to have this data, it would provide it in a Zotero-readable format. The fact that the information on those pages is publicly available is irrelevant; the statute makes no distinction. And if one does a lot of research--for example, checking 20 documents, downloading each, and scraping each page--the difference from AA is one of degree only, not of kind.
The irony of this case is that the core conviction is only tangentially a problem with the statute (there are some ancillary issues that are a problem with the statute). “Unauthorized access” and even “exceeds authorized access” should never have been interpreted to apply to publicly accessible data on publicly accessible web sites. Since they have been, I am convinced that the statute is impermissibly broad and must be struck down. At the very least it must be rewritten.
Tuesday, March 05, 2013
The iPhone, not the eye, is the window into the soul
It is great to be back at Prawfsblawg this year. Thanks to Dan and the gang for having me back. For my first post this month, I wanted to point everyone to the most important privacy research of 2012. The same paper qualifies as the most ignored privacy research of 2012, at least within legal circles. It is a short paper that everyone should read.
The paper in question, Mining Large Scale Smart-Phone Data for Personality Studies, is by Gokul Chittaranjan, Jan Blom, and Daniel Gatica-Perez. Chittaranjan and co-authors brilliantly show that it is straightforward to mine data from smartphones in an automated way so as to identify particular "Five Factor" personality types in a large population of users. They did so by administering personality tests to 117 smartphone users, then following the smartphone activities of those users for seventeen months and identifying the patterns that emerged. The result was that each of the "Big Five" personality dimensions was associated with particular patterns of phone usage. For example, extraverts communicated with more people and spent more time on the phone, highly conscientious people sent more email messages from their smartphones, and users of non-standard ring-tones tended to be those whom psychologists would categorize as open to new experiences.
There is a voluminous psychology literature linking scores on particular Big Five factors to observed behavior in the real world, like voting, excelling in workplaces, and charitable giving. Some of the literature is discussed in much more detail here. But the Chittaranjan et al. study provides a powerful indication of precisely why data-mining can be so powerful. Data mining concerning individuals' use of machines is picking up personality traits, and personality predicts future behavior.
The regularities observed via the analysis of Big Data demonstrate that you can aggregate something seemingly banal like smartphone data to administer surreptitious personality tests to very large numbers of people. Indeed, it is plausible that studying observed behavior from smartphones is a more reliable way of identifying particular personality traits than existing personality tests themselves. After all, it is basically costless for an individual to give false answers to a personality questionnaire. It is costly for an extravert to stop calling friends.
Privacy law has focused its attention on protecting the contents of communications or the identities of the people with whom an individual is communicating. The new research suggests that -- to the extent that individuals have a privacy interest in the nature of their personalities -- an enormous gap exists in the present privacy framework, and cell phone providers and manufacturers are sitting on (or perhaps already using) an information gold mine.
It's very unlikely that the phenomenon that Chittaranjan et al. identify is limited to phones. I expect that similar patterns could be identified from analyzing peoples' use of their computers, their automobiles, and their television sets. The Chittaranjan et al. study is a fascinating, tantalizing, and perhaps horrifying early peek at life in a Big Data world.
Wednesday, January 30, 2013
Does Not Translate?: How to Present Your Work to Real People
Recently I've agreed to give talks on social media law issues to "real" people. For example, one of the breakfast talks I've been asked to give is aimed at "judges, city and county commissioners, business leaders and UF administrators and deans." Later, I'm giving a panel presentation on the topic to prominent women alumni of UF. My dilemma is that I want to strike just the right tone and present information at just the right level for these audiences. But I'm agonizing over some basic questions. Can I assume that every educated person has at least an idea of how social media work? What segment of the information that I know about Social Media Law and free speech would be the most interesting to these audiences, and should I just skip a rock over the surface of the most interesting cases and incidents, accompanied by catchy images? How concerned should I be about the offensive potential of talking about the real facts of disturbing cases for a general but educated audience? As a Media Law scholar and teacher, I'm perfectly comfortable talking about the "Fuck the Draft" case or presenting slides related to the heart-wrenching cyberbullying case of Amanda Todd that contain the words "Flash titties, bitch." But can I talk about this at breakfast? If I can, do I need to give a disclaimer first? And for a general audience, do I want to emphasize the disruptive potential of social media speech, or do I have an obligation to balance that segment of the presentation with the positive aspects for free speech? And do any of you agonize over such things every time you speak to a new audience?
Anyway, translation advice is appreciated. I gave our graduation address in December, and I ended up feeling as if I'd hit the right note by orienting the address around a memorable story from history that related to the challenges of law grads today. But the days and even the minutes preceding the speech involved significant agonizing, which you'd think someone whose job involves public speaking on a daily basis wouldn't experience.
Monday, December 10, 2012
Big Data, Privacy, and Insurers: Forget the web, Flo’s the one to watch.
At least within the corner of the web that I frequent, it seems that I cannot go more than a few pages without running into articles discussing the never-ending growth of the Big Data industry, the death of online privacy, and how long it will be until we are all subject to 1984-esque surveillance. These issues have been particularly interesting to me, given that, like many of us, I maintain a presence on a number of social media sites. If at all possible, I would prefer to control who has access to the embarrassing high school yearbook photos that were posted to my Facebook wall, my Amazon.com browsing history, and the contents of the Christmas list I sent to my family. Even when I have given my consent to certain entities to access this information, I'd like to restrict how they use this data, limit its transferability, and have some type of assurance that adequate security measures have been put into place to protect my data. While I recognize that the dissemination of this information would, in most cases, have little to no detrimental impact on my life, the ease with which third parties could aggregate data about me makes me quite uneasy. The public uproar that results every time Facebook changes its privacy settings establishes that my feelings are widely shared. It is no surprise that the law’s regulation of web-based information has become one of the hotter topics in politics and legal academia (I've particularly enjoyed a forthcoming piece written by one of my colleagues: Prof. Bedi’s Facebook and Interpersonal Privacy).
While there are good reasons that the data privacy discussion has centered on the Internet, I have found myself wondering whether this focus has diverted attention away from the rampant expansion of offline data collection. Given my scholarly interests, it is unsurprising that the best example of this phenomenon that I can point to comes from the insurance industry.
Recent developments in the auto insurance industry may (at least in my mind) herald the beginning of a new era of aggressive approaches to data collection. Over the past two years, Progressive has increasingly offered consumers the opportunity to reduce their premiums if they agree to allow Progressive to monitor their driving habits via wireless technology (the “Snapshot” discount). While Progressive’s observation period is limited in both duration and amount of data collected (e.g., braking habits are recorded, GPS data is not), it is easy to see how market incentives will push auto insurers to try and collect increasing amounts of data about—or continuously monitor—their policyholders. Further, if such programs are widely adopted throughout the industry, consent to monitoring could become a market-imposed mandatory condition for obtaining coverage. Finally, there do not appear to be any reasons why this type of data collection would not spread to other lines of casualty insurance.
While there are factors that will limit the expansion of this trend (collection and processing costs, state insurance regulations, social pressures), I anticipate that we have only seen the tip of the iceberg when it comes to insurers' taking an active approach towards data. I will save my thoughts on why this type of data collection is particularly worrisome (as well as its potential upside) for another post.
Thursday, November 08, 2012
Cease and Desist
For nearly 10 years, scholars, commentators, and disappointed downloaders have criticized the now-abandoned campaign of the Recording Industry Association of America (RIAA) to threaten litigation against, and in some cases sue, downloaders of unauthorized music. The criticisms follow two main themes. First, demand letters, which mention statutory damages of up to $150,000 per infringed work (if the infringement is willful), often lead to settlements of $2,000 - $3,000. A back-of-the-envelope cost-benefit analysis would suggest this is a reasonable response from the recipient if $150,000 is a credible threat, but for those who conclude that information is free and someone must challenge these cases, the result is frustrating.
Second, it has been argued that the statutory damage itself is unconstitutional, at least as applied to downloaders, because it is completely divorced from any actual harm suffered by the record labels. The constitutional critique has been advanced by scholars like Pam Samuelson and Tara Wheatland, accepted by a district court judge in the Tenenbaum case, dodged on appeal by the First Circuit, but rejected outright by the Eighth Circuit. My intuition is that the Supreme Court would hold that Congress has the authority to craft statutory damages sufficiently high to deter infringement, and that there's sufficient evidence that Congress thought its last increase in statutory damages would accomplish that goal.
We could debate that, but I have something much more controversial in mind. I hope to convince you that the typical $3,000 settlement is the right result, at least in file-sharing cases.
The Copy Culture survey indicates that the majority of respondents who support a penalty support fines for unauthorized downloading of a song or movie. Of those who support fines, 32% support a fine of $10 or less, 43% support fines of up to $100, 14% support fines of up to $1,000, 5% support higher fines, 3% think fines should be context sensitive, and 3% are unsure. The average max fine for the top three groups is $209. Let's cut it in half, to $100, because roughly half of survey respondents were opposed to any penalty.
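The $209 figure is just the support-weighted average of those three groups' maximum fines; a quick back-of-the-envelope sketch, using only the survey percentages quoted above:

```python
# Support-weighted average of the maximum fine across the three largest
# groups of fine-supporting respondents (percentages from the Copy
# Culture survey as quoted above).
groups = [(32, 10), (43, 100), (14, 1000)]  # (percent of respondents, max fine in $)
weighted_avg = sum(pct * fine for pct, fine in groups) / sum(pct for pct, _ in groups)
print(round(weighted_avg))  # prints 209
```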
How big is the typical library of "illegally" downloaded files? 10 songs? 100 songs? 1,000? The Copy Culture study reports the following from survey respondents who own digital files, by age group:
18-29: 406 files downloaded for free
30-49: 130 files downloaded for free
50-64: 60 files downloaded for free
65+: 51 files downloaded for free
In the two cases that the RIAA actually took to trial, the labels argued that the defendants had each downloaded over 1,000 songs, but sued over 30 downloads in one case, and 24 downloads in the other. As I see it, if you're downloading enough to catch a cease and desist letter, chances are good that you've got at least 30 "hot" files on your hard drive.
You can see where I'm going here. If the average target of a cease and desist letter has 30 unauthorized files, and public consensus centers around $100 per unauthorized file, then a settlement offer of $3,000 is just about right.
Four caveats. First, maybe the Copy Culture survey is not representative of public opinion, and that number should be far lower than $100. Second, misfires happen with cease and desist letters: sometimes, individuals are mistargeted. One off-the-cuff response is to have the RIAA pay $3,000 to every non-computer user and the estate of every dead grandmother who gets one of these letters.
Third, this doesn't take fair use into account, and thus might not be a fair proxy for many other cases. For example, the Righthaven litigation seems entirely different to me - reproducing a news story online seems different than illegally downloading a song instead of paying $1, in part because the news story is closer to copyright's idea line, where more of the content is likely unprotectable, and because the redistribution of news is more likely to be fair use.
Fourth, it doesn't really deal with the potentially unconstitutional / arguably stupid possibility that some college student could be ordered to pay $150,000 per download, if a jury determines he downloaded willfully. I'd actually be happy with a rule that tells the record labels they can only threaten a maximum damage award equal to the average from the four jury determinations in the Tenenbaum and Thomas-Rasset cases. That's still $43,562.50 per song. Round it down to the non-willful statutory cap, $30,000, and I still think that a $3,000 settlement is just about perfect.
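If I have the verdicts right (awards of $222,000, $1.92 million, and $1.5 million over 24 songs in the three Thomas-Rasset trials, and $675,000 over 30 songs in Tenenbaum), the per-song average works out as follows:

```python
# Per-song awards from the four file-sharing jury verdicts: the three
# Thomas-Rasset trials (24 songs each) and Tenenbaum (30 songs).
awards = [(222_000, 24), (1_920_000, 24), (1_500_000, 24), (675_000, 30)]
per_song = [total / songs for total, songs in awards]
average = sum(per_song) / len(per_song)
print(average)  # prints 43562.5
```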
Now tell me why I'm crazy.
Thursday, October 25, 2012
Copyright's Serenity Prayer
I recently discovered an article by Carissa Hessick, where she argues that the relative ease of tracking child pornography online may lead legislators and law enforcement to err in two ways. First, law enforcement may pursue the more easily detected possession of child pornography at the expense of pursuing actual abuse, which often happens in secret and is difficult to detect. Second, legislators may be swayed to think that catching child porn possessors is as good as catching abusers, because the former either have abused, or will abuse in the future. Thus, sentences for possession often mirror sentences for abuse, and we see a potential perversion of the structure of enforcement that gives a false sense of security about how much we are doing to combat the problem.
With the caveat that I know preventing child abuse is much, much more important than preventing copyright infringement, I think the ease of detecting unauthorized Internet music traffic may also have troubling perverse effects.
When I was a young man, copying my uncle's LP collection so I could take home a library of David Bowie cassette tapes, there was no way Bowie or his record label would ever know. The same is true today, even though they now make turntables that will plug right into my computer and give me digital files that any self-respecting hipster would still disdain, but at least require me to flip a vinyl disc as my cost of copying.
On the other hand, it's much easier to trace free-riding that occurs online. That was part of what led to the record industry's highly unpopular campaign against individual infringers. Once you can locate the individual infringer, you can pursue infringement that used to be "under the radar." The centralized, searchable nature of the Internet also made plausible Righthaven's disastrous campaign against websites copying news stories, and the attempt by attorney Blake Field to catch Google infringing his copyright in posted material by crawling his website with automated data gathering programs.
What if copyright owners are chasing the wrong harm? For example, one leaked RIAA study suggests that while a noticeable chunk of copyright infringement occurs via p2p sharing, it's not the largest chunk. While the RIAA noted that in 2011, 6% of unauthorized sharing (4% of total consumption) happened in locker services like Megaupload, and 23% (15%) happened via p2p, 42% (27%) of unauthorized acquisition was done by burning and ripping CDs from others, and another 29% (19%) happened through face-to-face hard drive trading. Offline file sharing is apparently more prevalent than the online variety, but it is much more difficult to chase. So it may be that copyright holders chase the infringement they can find, rather than the infringement that most severely affects the bottom line.
In a way, leaning on the infringement they can detect is reminiscent of the oft-repeated "Serenity Prayer," modified here for your contemplation:
God, grant me the serenity to accept the infringement I cannot find,
The courage to crush the infringement I can,
And the wisdom to know the difference.
All this brings me back to the friends and family question. The study on Copy Culture in the U.S. reports that roughly 80% of the adults owning music files think it's okay to share with family, and 60% think it's okay to share with friends. In addition, the Copyright Act specifically insulates friends-and-family sharing in the context of performing or displaying copyrighted works to family and close friends in a private home (17 USC s. 101, "publicly"). Thus, there is some danger in going after that friends-and-family sharing. If the family-and-friends line is the right line, can we at least feel more comfortable that someone to whom I'm willing to grant physical access to my CD library is more of a "real" friend than my collection of Facebook friends and acquaintances, some of whom will never get their hands on my vinyl copy of Blues and Roots?
Wednesday, October 10, 2012
Friends
Hello all. Glad to be back at Prawfsblawg for another round of blogging. I'm looking forward to sharing some thoughts about entertainment contracts, the orphan works problem in copyright, and the new settlement between Google and several publishers over Google Books. Today, I want to talk a bit about file-sharing and friendship. A recent study asked U.S. and German citizens whether they thought it was "reasonable" to share unauthorized, copyrighted files with family, with friends, and in several different online contexts. Perhaps unsurprisingly, respondents in the 18-29 range responded more favorably to file sharing than older respondents in every context. What interests me is that respondents in every context see a sharp difference between sharing files with friends, and posting a file on Facebook. We call our Facebook contacts "friends," but I'm curious why the respondents to this study made the distinction between sharing with friends and sharing on Facebook. I have a few inchoate thoughts, and I'd love to hear what you think. Megan Carpenter wrote an interesting article about the expressive and personal dimension of making mix tapes. I grew up in the mix tape era as well, and remember well the emotional sweat that I poured into collections of love songs made for teenage paramours in the hopes of sustaining doomed long-distance romances. Carpenter correctly argues that there is something personal about that act, and it seems reasonable that it would fall outside the reach of the Copyright Act. I also remember copying my uncle's entire collection of David Bowie LPs onto cassette tapes when I was in junior high. In that instance, music moved through family connections, and in my small town in Wyoming, there were no cassettes from the Bowie back catalog on the shelves of the local music store. But the only effort involved in making those cassettes was turning the LP at the end of a side. Less expressive, but within a fairly tight social network.
A properly functioning copyright system might reasonably allow for these uses, and still sanction a decision to post my entire Bowie collection on Facebook, or through a torrent. I'm skeptical of any definition of "friends and family" so capacious that it would include Facebook friends, and I suspect that many people realize now, if they didn't then, that what constitutes a face-to-face friend is different than what constitutes a Facebook friend, but you may have a different impression. I hope you'll share it here, whatever it is.
Thursday, October 04, 2012
TPRC Celebrates 40 Years of Research in Telecom
Two weeks ago the Telecommunications Policy Research Conference (TPRC) had a great event to celebrate its 40th year of delving into communications, information, and Internet policy issues (I'm a member of the program committee so, yes, this is a shameless plug). What I enjoy most about TPRC is that it is truly interdisciplinary; that should come as a relief to anyone who's been in a room filled only with lawyers--bless our hearts. The conference brings together scholars from all fields as well as policy makers and private and nonprofit practitioners. There were many outstanding sessions, including a Friday evening panel (soon available on video) about The Next Digital Frontier with speakers straight out of the "who's who" of telecom: Eli Noam (Columbia), David Clark (MIT), Gigi Sohn (Public Knowledge) and Thomas Hazlett (GMU).
There is much more work of note; I'll single out a few articles after the jump, and I encourage you to look at the TPRC Program files for additional articles of interest. Also, around March keep your eyes open for next year's call for papers. I will still be on the program committee so, in case you're interested, you should know I'm highly motivated by gifts of chocolate (dark preferred). As mentioned, the TPRC website has the full program of presented articles, so be sure to check it out. I particularly enjoyed the work of the legal and economic scholars--and not just because they made the math easier than the engineers did, but that didn't hurt. Three pieces that come to mind are Payment Innovation at the Content/Carriage Interface by James Speta, American Media Concentration Trends in Global Context: A Comparative Analysis by Eli Noam and Political Drivers and Signaling in Independent Agency Voting: Evidence from the FCC by Adam Candeub and Eric Hunnicutt.
First, if you haven't exhausted your interest in net neutrality issues, take a look at Speta's article, which considers payment innovation at the customer level as a means by which congestion may be resolved in a content-neutral manner. This is a highly topical piece, as current net neutrality regulation is arguably on shaky jurisdictional ground. Second, my friend Eli Noam, who never fails to intrigue, shared some counterintuitive observations from a multi-year, 30-country research project that tracks concentration levels in 13 communications industries. And third, Candeub and Hunnicutt make a welcome, empirical entry in a largely qualitative arena by quantifying the effects that party affiliation (of FCC Commissioners, Congress and the Executive) has on agency decision making. It's really a must-read for anyone interested in the areas of communications, administrative law and political economy (and who isn't!).
Finally, a shout out to my fellow blogger Rob Howse, who recently wrote on our need to be more patient with each other when we accidentally hit "Reply to All." The conference also featured some innovation demonstrations and, Rob, I have just the plugin for you! The product is "Privicons" and as self-described (because I could not make this up):
Unlike more technical privacy solutions like tools that use code to lock down emails, Privicons relies on an iconographic vocabulary informed by norms-based social signals to influence users' choices about privacy.
In other words, with this plugin you can send a graphic reminder to email readers that they should "act nice." I think I'll send some Privicons to my students right around evaluation time.
Wednesday, July 18, 2012
Legal Education in the Digital Age
With the latest news of U-Va. joining a consortium of schools promoting online education, it seems only a matter of time before law schools will have to confront the possibility of much larger chunks of the educational experience moving into the virtual world. Along with Law 2.0 by David I.C. Thomson, there is now Legal Education in the Digital Age, edited by Ed Rubin at Vanderbilt. The book is primarily about the development of digital course materials for law school classes, with chapters by Ed Rubin, John Palfrey, Peggy Cooper Davis, and Larry Cunningham, among others. The book comes out of a conference hosted by Ron Collins and David Skover at Seattle U. My contribution follows up on my thoughts about the open source production of course materials, which I have previously written about here and here. You can get the book from Cambridge UP here, or at Amazon in hardcover or on Kindle.
One question from the conference was: innovation is coming, but where will it come from? Some possibilities:
- Law professors
- Law schools and universities
- Legal publishers
- Outside publishers
- Tech companies such as Amazon or Apple
- SSRN and BePress
- Some combination(s) of these
I think we all agree that significant change is coming down the pike. But what it ultimately will look like is still very much up in the air. What role will law professors play?
Tuesday, July 03, 2012
How Not to Criminalize Cyberbullying
My co-author Andrea Pinzon Garcia and I just posted our essay, How Not to Criminalize Cyberbullying, on SSRN. In our essay, we provide a sustained constitutional critique of the growing body of laws criminalizing cyberbullying. These laws typically proceed by either modernizing existing harassment and stalking laws or crafting new criminal offenses. Both paths are beset with First Amendment perils, which our essay illustrates through 'case studies' of selected legislative efforts. Though sympathetic to the aims of these new laws, we contend that reflexive criminalization in response to tragic cyberbullying incidents has led lawmakers to conflate cyberbullying as a social problem with cyberbullying as a criminal problem, leading to pernicious consequences. The legislative zeal to eradicate cyberbullying potentially produces disproportionate punishment of common childhood wrongdoing. Furthermore, statutes criminalizing cyberbullying are especially prone to overreaching in ways that offend the First Amendment, resulting in suppression of constitutionally protected speech, misdirection of prosecutorial resources, misallocation of taxpayer funds to pass and defend such laws, and the blocking of more effective legal reforms. Our essay attempts to give legislators the First Amendment guidance they need to distinguish the types of cyberbullying that must be addressed by education, socialization, and stigmatization from those that can be remedied with censorship and criminalization. To see the abstract or paper, please click here or here.
Posted by Lyrissa Lidsky on July 3, 2012 at 03:44 PM in Article Spotlight, Constitutional thoughts, Criminal Law, Current Affairs, First Amendment, Information and Technology, Lyrissa Lidsky, Web/Tech | Permalink | Comments (0) | TrackBack
Thursday, June 07, 2012
The Virtual Honesty Box
As a fan of comic book art, I'm often thrilled to encounter areas where copyright or trademark law and comic books intersect. As is the case in other media, the current business models of comic book publishers and creators have been threatened by the ability of consumers to access their work online without paying for it. Many comic publishers are worried about easy migration of content from paying digital consumers to non-paying digital consumers. Of course, scans of comics have been making their way around the internet on, or sometimes before, a given comic's on-sale date for some time now. As in other industries, publishers have dabbled with DRM, and publishers have embraced different (and somewhat incompatible) methods for providing consumers with authorized content. Publishers' choices sometimes lead to problems with vendors and customers, as I discuss a bit below.
While services like Comixology offer a wide selection of content from most major comics publishers, they are missing chunks of both the DC Comics and Marvel Comics catalogues. DC entered a deal to distribute 100 of its graphic novels (think multi-issue collections of comic books) exclusively via Kindle. Marvel Comics subsequently struck a deal to offer "the largest selection of Marvel graphic novels on any device" to users of the Nook.
Sometimes exclusive deals leave a bad taste in the mouths of other intermediaries. DC's graphic novels were pulled from Barnes & Noble shelves because the purveyor of the Nook was miffed. Independent publisher Top Shelf is an outlier, offering its books through every interface and intermediary it can. But to date, most publishers are trying to make digital work as a complement to, and not a replacement for, print.
Consumers are sometimes frustrated by a content-owner's choice to restrict access, so much so that they feel justified engaging in "piracy." (Here I define "piracy" as acquiring content through unauthorized channels, which will almost always mean without paying the content owner.) Some comics providers respond with completely open access. Mark Waid, for example, started Thrillbent Comics with the idea of embracing digital as digital, and in a manner similar to Cory Doctorow, embracing "piracy" as something that could drive consumers back to his authorized site, even if they didn't pay for the content originally.
I recently ran across another approach from comic creators Leah Moore and John Reppion. Like Mark Waid, Moore and Reppion have accepted, if not embraced, the fact that they cannot control the flow of their work through unauthorized channels, but they still assert a hope, if not a right, that they can make money from the sales of their work. To that end, they introduced a virtual "honesty box," named after the clever means of collecting cash from customers without monitoring the transaction. In essence, Moore and Reppion invite fans who may have consumed their work without paying for it to even up the karmic scales. This response strikes me as both clever and disheartening.
I'll admit my attraction to perhaps outmoded content-delivery systems -- I also have unduly fond memories of the 8-track cassette -- but I'm disheartened to hear that Moore and Reppion could have made roughly $5,500 more working minimum wage jobs last year. Perhaps this means that they should be doing something else, if they can't figure out a better way to monetize their creativity in this new environment. Eric Johnson, for one, has argued that we likely don't need legal or technological interventions for authors like Moore and Reppion in part because there are enough creative amateurs to fill the gap. The money in comics today may not be in comics at all, but in licensing movies derived from those comics. See, e.g., Avengers, the.
I hope Mark Waid is right, and that "piracy" is simply another form of marketing that will eventually pay greater dividends for authors than fighting piracy. And perhaps Moore and Reppion should embrace "piracy" and hope that the popularity of their work leads to a development deal from a major film studio. Personally, I might miss the days when comics were something other than a transparent attempt to land a movie deal.
As for the honesty box itself? Radiohead abandoned the idea with its most recent release, King of Limbs, after the name-your-price model adopted for the release of In Rainbows had arguably disappointing results: according to one report, 60% of consumers paid nothing for the album. I can't see Moore and Reppion doing much better, but maybe if 40% of "pirates" kick a little something into the virtual honesty box, that will be enough to keep Moore and Reppion from taking some minimum wage job where their talents may go to waste.
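For scale, it's worth running the back-of-the-envelope arithmetic on what an honesty box would have to collect. A minimal sketch in Python; the contribution rate and average amount below are invented assumptions, and the ~$5,500 figure is the gap mentioned earlier in the post:

```python
import math

# Back-of-the-envelope: how many non-paying readers must an honesty box
# reach to close a revenue gap, given an assumed contribution rate and an
# assumed average contribution? The rate and amount are illustrative only.

def readers_needed(gap, pay_rate, avg_contribution):
    return math.ceil(gap / (pay_rate * avg_contribution))

# Closing a ~$5,500 gap if 40% of "pirate" readers chip in $2 each:
print(readers_needed(5500, 0.40, 2.00))  # 6875
```

Nearly seven thousand contributing-or-not readers is a large audience for an independent comic, which is one way to see why the honesty box is as disheartening as it is clever.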
Friday, June 01, 2012
Oracle v. Google - The Other Shoe Drops
For those of you following the Oracle v. Google case, as I predicted here, the court has ruled that the APIs that Google copied are not copyrightable - at least not in the form that they were used. The case is basically dismissed with no remedy to Oracle.
Thursday, May 31, 2012
A Coasean Look at Commercial Skipping...
Readers may have seen that DISH has sued the networks for declaratory relief (and was promptly cross-sued) over some new digital video recorder (DVR) functionality. The full set of issues is complex, so I want to focus on a single issue: commercial skipping. The new DVR automatically removes commercials when playing back some recorded programs. Another company tried this many years ago, but was brow-beaten into submission by content owners. Not so for DISH. In this post, I will try to take a look at the dispute from a fresh angle.
Many think that commercial skipping implicates derivative work rights (that is, transformation of a copyrighted work). I don't think so. The content is created separately from the commercials, and different commercials are broadcast in different parts of the country. The whole package is probably a compilation of several works, but that compilation is unlikely to be registered with the Copyright Office as a single work. Also, copying the work of only one author in the compilation is just copying of the subset, not creating a derivative work of the whole.
So, if it is not a derivative work, what rights are at stake? I believe that it is the right to copy in the first place in a stored DVR file. This activity is so ubiquitous that we might not think of it as copying, but it is. The Copyright Act says that the content author has the right to decide whether you store a copy on your disk drive, absent some exception.
And there is an exception - namely fair use. In the famous Sony v. Universal Studios case, the Court held that "time shifting" is a fair use by viewers, and thus sellers of the VCR were not helping users infringe. Had the Court held otherwise, the VCR would have been enjoined as an agent of infringement, just like Grokster was.
I realize that this result is hard to imagine, but Sony was 5-4, and the initial vote had been in favor of finding infringement. Folks can debate whether Sony intended to include commercial skipping or not. At the time, remote controls were rare, so skipping a recorded commercial meant getting off the couch. It wasn't much of an issue. Even now, advertisers tolerate the fact that people usually fast forward through commercials, and viewers have always left the TV to go to the bathroom or kitchen (hopefully not at the same time!).
But commercial skipping is potentially different, because there is zero chance that someone will stop to watch a catchy commercial or see the name of a movie in the black bar above the trailer as it zooms by. I don't intend to resolve that debate here. A primary reason I am skipping the debate is that fair use tends to be a circular enterprise. Whether a use is fair depends on whether it reduces the market possibilities for the owner. The problem is, the owner only has market possibilities if we say they do. For some things, we may not want them to have a market because we want to preserve free use. Thus, we allow copying via a DVR and VCR, even if content owners say they would like to charge for that right.
Knowing when we should allow the content owner to exploit the market and when we should allow users to take away a market in the name of fair use is the hard part. For this reason, I want to look at the issue through the lens of the Coase Theorem. Coase's idea, at its simplest, is that if parties can bargain (which I'll discuss below), then it does not matter with whom we vest the initial rights. The parties will eventually get to the outcome that makes each person best off given the options, and the only difference is who pays.
One example is smoking in the dorm room. Let's say that one person smokes and the other does not. Regardless of which roommate you give the right to, you will get the same amount of smoking in the room. The only difference will be who pays. If the smoker has the right to smoke, then the non-smoker will either pay the smoker to stop or will leave during smoking (or will negotiate a schedule). If you give the non-smoker the right to a smoke-free room, then the smoker will pay to smoke in the room, will smoke elsewhere, or the parties will negotiate a schedule. Assuming non-strategic bargaining (no hold-ups) and adequate resources, the same result will ensue because the parties will get to the level where the combination of their activities and their money make them the happiest. The key is to separate the analysis from normative views about smoking to determine who pays.
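The smoking hypothetical can be put in code. A minimal sketch in Python, with invented dollar valuations; the `bargain` function and its numbers are illustrative assumptions, not anything from the post:

```python
# Toy Coase Theorem sketch: with frictionless bargaining, the smoking
# outcome depends only on the parties' valuations, not on who holds the
# initial right. Only the direction of payment changes.

def bargain(smoker_value, nonsmoker_value, right_holder):
    """Return (smokes, transfer) after costless bargaining.

    smoker_value:    dollar value the smoker places on smoking in the room
    nonsmoker_value: dollar value the non-smoker places on a smoke-free room
    right_holder:    "smoker" or "nonsmoker" -- initial allocation of the right
    """
    smokes = smoker_value > nonsmoker_value  # the efficient outcome either way
    if right_holder == "smoker" and not smokes:
        transfer = "non-smoker pays smoker to stop"
    elif right_holder == "nonsmoker" and smokes:
        transfer = "smoker pays non-smoker for permission"
    else:
        transfer = "no payment needed"
    return smokes, transfer

# Same valuations, opposite initial allocations -- same amount of smoking:
print(bargain(10, 4, "smoker"))     # (True, 'no payment needed')
print(bargain(10, 4, "nonsmoker"))  # (True, 'smoker pays non-smoker for permission')
```

Only the transfer changes between the two calls; the level of smoking is identical, which is the point of separating the allocation question from the normative one.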
Now, let's apply this to the DVR context. If we give the right to skip commercials to the user, then several things might happen. Advertisers will advertise less or pay less for advertising slots. Indeed, I suspect that one reason why ads for the Super Bowl are so expensive, even in a down economy, is that not only are there a lot of viewers, but that those viewers are watching live and not able to skip commercials. In response, broadcasters will create less content, create cheaper content, or figure out other ways to make money (e.g. charging more for view on demand or DVDs). Refusing to broadcast unless users pay a fee is unlikely based on current laws. In short, if users want more and better content, they will have to go elsewhere to get it - paying for more channels on cable or satellite, paying for video on demand, etc. Or, they will just have less to watch.
If we give the right to stop commercial skipping to the broadcaster, then we would expect broadcasters will broadcast the mix they have in the past. Viewers will pay for the right to commercial skip. This can be done as it is now, through video on demand services like Netflix, but that's not the only model. Many broadcasters allow for downloading via the satellite or cable provider, which allows the content owner to disable fast forwarding. Fewer commercials, but you have to watch them. Or, in the future, users could pay a higher fee to the broadcaster for the right to skip commercials, and this fee would be passed on to content owners.
These two scenarios illustrate a key limit to the Coase Theorem. To get to the single efficient solution, transactions costs must be low. This means that the parties must be able to bargain cheaply, and there must be no costs or benefits that are being left out of the transaction (what we call externalities). Transactions costs are why we have to be careful about allocating pollution rights. The factory could pay a neighborhood for the right to pollute, but there are costs imposed on those not party to the transaction. Similarly, a neighborhood could pay a factory not to pollute, but difficulty coordinating many people is a transaction cost that keeps such deals from happening.
I think that transactions costs are high in one direction in the commercial skipping scenario, but not as much in the other. If the network has the right to stop skipping, there are low cost ways that content aggregators (satellite and cable) can facilitate user rights to commercial skip - through video on demand, surcharges, and whatnot. This apparatus is already largely in place, and there is at least some competition among content owners (some get DVDs out soon, some don't for example).
If, on the other hand, we vest the skipping right with users, then it is harder for content owners to pay users (essentially sharing their advertising revenues) should they want to enter into such a transaction. Such a payment could be achieved, though, through reduced user fees for those who disable commercial skipping. Even there, though, dividing the payment among all content owners might be difficult.
Normatively, this feels a bit yucky. It seems wrong that consumers should pay more to content providers for the right to automate something they already have the right to do - skip commercials. However, we have to separate the normative from the transactional analysis - for this mind experiment, at least.
Commercials are a key part of how shows get made, and good shows really do go away if there aren't enough eyeballs on the commercials. Thus, we want there to be an efficient transaction that allows for metered advertising and content in a way that both users and networks get the benefit of whatever bargain they are willing to make.
There are a couple of other relevant factors that imply to me that the most efficient allocation of this right is with the network:
1. DISH only allows skipping starting at 1 AM on the day after the show is recorded. This no doubt militates in favor of fair use, because most people watch shows on the day they are recorded (or so I've read, I could be wrong). However, it also shows that the time at which the function kicks in can be moved, and thus negotiated and even differentiated among customers that pay different amounts. Some might want free viewing with no skipping, some might pay a large premium for immediate skipping. If we give the user the right to skip whenever, it is unlikely that broadcasters can pay users not to skip, and this means they are stuck in a world with maximum skipping - which kills negotiation to an efficient middle.
2. The skipping is only available for broadcast TV primetime recordings - not for recordings on "cable" channels, where providers must pay for content. Thus, there appears to already be a payment structure in practice - DISH is allowing for skipping on some networks and not others, which implies that the structure for efficient payments is already in place. If, for example, DISH skipped commercials on TNT, then TNT would charge DISH more to carry content. The networks may not have that option due to "must carry" rules. I suspect this is precisely why DISH skips for broadcasters - because it can without paying. The way to allow for bargaining, then, given that networks can't charge DISH more to carry their content, is to vest the right with the networks and let the market take over.
These are my gut thoughts from an efficiency standpoint. Others may think of ways to allow for bargaining to happen by vesting rights with users. As a user, I would be happy to hear such ideas.
This is my last post for the month - time flies! Thanks to Prawfs again for having me, and I look forward to guest blogging in the future. As a reminder, I regularly blog at Madisonian.
Tuesday, May 29, 2012
School of Rock
I had a unique experience last Friday, teaching some copyright law basics to music students at a local high school. The instructor invited me to present to the class in part because he wanted a better understanding of his own potential liability for arranging song for performances, and in part because he suspected his students were, by and large, frequently downloading music and movies without the permission of copyright owners, and he thought they should understand the legal implications of that behavior. The students were far more interested in the inconsistencies they perceived in the current copyright system. I'll discuss a few of those after the break.
First, the Copyright Act grants the exclusive right to publicly perform a musical work, or authorize such a performance, to the author of the work, but there is no public performance right granted to the author or owner of a sound recording. See 17 U.S.C. § 114. In other words, Rod Temperton, the author of the song "Thriller," has the right to collect money paid to secure permission to publicly perform the song, but neither Michael Jackson's estate nor Epic Records holds any such right, although it's hard to discount the creative choices of Michael Jackson, Quincy Jones, and their collaborators in making much of what the public values about that recording. To those who had tried their hands at writing songs, however, the disparity made a lot of sense: "Thriller" should be Temperton's song because of his creative labors.
Second, the Copyright Act makes specific allowance for what I call "faithful" cover tunes, but not beat sampling or mashups. If a song (the musical work) has been commercially released, another artist can make a cover of the song and sell recordings of it without securing the permission of the copyright owner, so long as the cover artist provides notice, pays a compulsory license fee (currently $0.091 per physical or digital recording) and doesn't change the song too much. See 17 U.S.C. § 115. If the cover artist makes a change in "the basic melody or fundamental character of the work," then the compulsory license is unavailable, and the cover artist must get permission and pay what the copyright owner asks. In addition, the compulsory license does not cover the sound recording, so there is no compulsory license for a "sampling right." Thus, Van Halen can make a cover of "Oh, Pretty Woman" without Roy Orbison's permission, but 2 Live Crew cannot (unless the rap version ends up qualifying for the fair use privilege).
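The Section 115 rate structure is easy to work through with a quick calculation. A sketch, assuming the statutory rates in effect when this was written (9.1 cents per copy, or 1.75 cents per minute of playing time or fraction thereof, whichever is larger); the function name is mine, and current rates should be checked before relying on any figure:

```python
import math

# Sketch of the Section 115 compulsory mechanical royalty: the larger of
# 9.1 cents per copy or 1.75 cents per minute (or fraction thereof) of
# playing time. These are the statutory rates as of this post; verify the
# current schedule before using this for anything real.

def mechanical_royalty(copies, song_minutes):
    per_copy = max(0.091, 0.0175 * math.ceil(song_minutes))
    return round(copies * per_copy, 2)

# 10,000 copies of a 4-minute cover: the 9.1-cent minimum governs
print(mechanical_royalty(10_000, 4))  # 910.0
# 10,000 copies of an 8-minute cover: 1.75 cents x 8 minutes = 14 cents/copy
print(mechanical_royalty(10_000, 8))  # 1400.0
```

The per-minute rate only matters for long songs; for anything five minutes or under, the flat 9.1-cent rate is always the larger figure.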
It was also interesting to me that at least one student in each class was of the opinion that once the owner of a copyrighted work put the work on the Internet, the owner was ceding control of the work, and should expect people to download it for free. It's an observation consistent with my own analysis about why copyright owners should have a strong, if not absolute, right to decide if and when to release a work online.
On a personal level, I confirmed a suspicion about my own teaching: if I try to teach the same subject six different times on the same day, it is guaranteed to come out six different ways, and indeed, it is likely there will be significant differences in what I cover in each class. This is in part because I have way more material at my fingertips than I can cram into any 45 minute class, and so I can be somewhat flexible about what I present, and in what order. I like that, because it allows me to teach in a manner more responsive to student questions. On the other hand, it may expose a failure to determine what are the 20-30 minutes of critical material I need to cover in an introduction to copyright law.
Friday, May 25, 2012
Using empirical methods to analyze the effectiveness of persuasive techniques
Slate Magazine has a story detailing the Obama campaign's embrace of empirical methods to assess the relative effectiveness of political advertisements.
To those familiar with the campaign’s operations, such irregular efforts at paid communication are indicators of an experimental revolution underway at Obama’s Chicago headquarters. They reflect a commitment to using randomized trials, the result of a flowering partnership between Obama’s team and the Analyst Institute, a secret society of Democratic researchers committed to the practice, according to several people with knowledge of the arrangement. ...
The Obama campaign’s “experiment-informed programs”—known as EIP in the lefty tactical circles where they’ve become the vogue in recent years—are designed to track the impact of campaign messages as voters process them in the real world, instead of relying solely on artificial environments like focus groups and surveys. The method combines the two most exciting developments in electioneering practice over the last decade: the use of randomized, controlled experiments able to isolate cause and effect in political activity and the microtargeting statistical models that can calculate the probability a voter will hold a particular view based on hundreds of variables.
Curiously, this story comes on the heels of a New York Times op-ed questioning the utility and reliability of social science approaches to policy concerns and a movement in Congress to defund the political science studies program at NSF.
Wednesday, May 16, 2012
Contrarian Statutory Interpretation Continued (CDA Edition)
Following my contrarian post about how to read the Computer Fraud and Abuse Act, I thought I would write about the Communications Decency Act. I've written about the CDA before (hard to believe it has been almost 3 years!), but I'll give a brief summary here.
The CDA provides online providers with immunity for the acts of their users. For example, if a user provides defamatory content in a comment, a blog need not remove the comment to be immune, even if the blog receives notice that the content is defamatory, and even if the blog knows the content is defamatory.
I agree with most of my colleagues who believe this statute is a good thing for the internet. Where I part ways from most of my colleagues is how broadly to read the statute.
Since this is a post about statutory interpretation, I'll include the statute:
Section 230(c)(1) of the CDA states that:
No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.
In turn, an interactive computer service is:
any information service, system, or access software provider that provides or enables computer access by multiple users to a computer server, including specifically a service or system that provides access to the Internet and such systems operated or services offered by libraries or educational institutions.
Further, an information content provider is:
any person or entity that is responsible, in whole or in part, for the creation or development of information provided through the Internet or any other interactive computer service.
So, where do I clash with others on this? The primary area is when the operators of the computer service make decisions to publish (or republish) content. I'll give three examples that courts have determined are immune, but that I think do not fall within the statute:
- Web Site A pays Web Site B to republish all of B's content on Site A. Site A is immune.
- Web Site A selectively republishes some or all of a story from Web Site B on Site A. Site A is immune.
- Web Site A publishes an electronic mail received by a reader on Site A. Site A is immune.
These three examples share a common thread: Site A is immune, despite selectively seeking out and publishing content in a manner that has nothing to do with the computerized processes of the provider. In other words, it is the operator, not the service, that is making publication determinations.
To address these issues, cases have focused on "development" of the information. One case, for example, defines development as a site that "contributes materially to the alleged illegality of the conduct." Here, I agree with my colleagues that development is being defined too broadly to limit immunity. Development should mean that the provider actually creates the content that is displayed. For that reason, I agree with the Roommates.com decision, which held that Roommates developed content by providing pre-filled dropdown lists that allegedly violated the Fair Housing Act. It turns out that the roommate postings were protected speech, but that is a matter of substance, and not immunity. The fact that underlying content is eventually vindicated does not mean that immunity should be expanded. To the extent some think that the development standard is limited only to development of illegal content (something implied by the text of the Roommates.com decision), I believe that is too limiting. The question is the source of the information, not the illegality of it.
The burning issue is why plaintiffs continue to rely on "development" despite its relatively narrow application. The answer is that this is all they currently have to argue, and that is where I disagree with my colleagues. I believe the word "interactive" in the definition must mean something. It means that the receipt of content must be tied to the interactivity of the provider. In other words, receipt of the offending content must be automated or otherwise interactive to be considered for immunity.
Why do I think that this is the right reading? First, there's the word "interactive." It was chosen for a reason. Second, the definition of "information content provider" identifies information "provided through the Internet or any other interactive computer service." (emphasis added). This implies that the provision of information should be based on interactivity or automation.
There is support in the statute for only immunizing information directly provided through interactivity. Section 230(d), for example, requires interactive service providers to notify their users about content filtering tools. This implies that the information being provided is through the interactive service. Sections 230(a) and (b) describe the findings and policy of Congress, which describe interactive services as new ways for users to control information and for free exchange of ideas.
I think one can read the statute more broadly than I do here. But I also believe that there is no reason to do so. The primary benefit of Section 230 is as a cost-savings mechanism. There is no way many service providers can screen all the content on their websites for potentially tortious activity. There's just no filter for that.
Allowing immunity for individualized editorial decisions like paying for syndicated content, picking and choosing among emails, and republishing stories from other web sites runs directly counter to this cost-saving purpose. Complaining that it costs too much to filter interactive user content is a far cry from complaining that it costs too much to determine whether an email is true before making a noninteractive decision to republish it. We should want our service providers to expend some effort before republishing.
Fair Use and Electronic Reserves
For several years Georgia State was involved in litigation over the fair use doctrine. Specifically, a consortium of publishers backed by Oxford, Cambridge, and Sage sued Georgia State over copyright violations by many of the faculty. Many of my colleagues in the department were specifically named in the suit. A decision has now been rendered. You can read about the decision here, and you can read the decision here.
The Court backed Georgia State in almost every instance, finding no copyright violation. However, the Court did lay down some rules - in particular you can use no more than 10% or one chapter, whichever is shorter, of any book.
Oh, and my colleagues were all found to have not violated copyright laws. For two of them, the Court found that the plaintiffs could not even prove a copyright.
Friday, May 11, 2012
App Enables Users to File Complaints of Airport Profiling
Following the terrorist attacks of September 11, 2001, Muslims and those perceived to be Muslim in the United States have been subjected to public and private acts of discrimination and hate violence. Sikhs -- members of a distinct monotheistic religion founded in 15th century India -- have suffered the "disproportionate brunt" of this post-9/11 backlash. There generally are two reasons for this. The first concerns appearance: Sikh males wear turbans and beards, and this visual similarity to Osama bin Laden and his associates made Sikhs an accessible and superficial target for post-9/11 emotion and scrutiny. The second relates to ignorance: many Americans are unaware of Sikhism and of Sikh identity in particular.
Accordingly, after 9/11, Sikhs in the United States have been murdered, stabbed, assaulted, and harassed; they also have faced discrimination in various contexts, including airports, the physical space where post-9/11 sensitivities are likely and understandably most acute. The Sikh Coalition, an organization founded in the hours after 9/11 to advocate on behalf of Sikh-Americans, reported that 64% of Sikh-Americans felt that they had been singled out for additional screening in airports and, at one major airport (San Francisco International), nearly 100% of turbaned Sikhs received additional screening. (A t-shirt, modeled here by Sikh actor Waris Ahluwalia and created by a Sikh-owned company, makes light of this phenomenon.)
In response to such "airport profiling," the Sikh Coalition announced the launch of a new app (Apple, Android), which "allows users to report instances of airport profiling [to the Transportation Security Administration (TSA)] in real time." The Coalition states that the app, called "FlyRights," is the "first mobile app to combat racial profiling." The TSA has indicated that grievances sent to the agency by way of the app will be treated as official complaints. News of the app's release has generated significant press coverage. For example, the New York Times, ABC, Washington Post, and CNN picked up the app's announcement. (Unfortunately, multiple outlets could not resist the predictable line, 'Profiled at the airport? There’s an app for that.') Wade Henderson, president and CEO of The Leadership Conference on Civil and Human Rights and The Leadership Conference Education Fund, tweeted, "#FlyRights is a vanguard in civil and human rights."
It will be interesting to see whether this app will increase TSA accountability, quell profiling in the airport setting, and, more broadly, trigger other technological advances in the civil rights arena.
Wednesday, May 09, 2012
Oracle v. Google: Digging Deeper
This follows my recent post about Oracle v. Google. At the behest of commenters, both online and offline, I decided to dig a bit deeper to see exactly what level of abstraction is at issue in this case. The reason is simple: I made an assumption in the last post about what the jury must have found, and it turns out that assumption was wrong. Before anyone accuses me of changing my mind, I want to note that my last post was a guess, and that guess proved wrong once I read the actual evidence. My view of the law hasn't changed. More after the jump.
For the masochistic, Groklaw has compiled the expert reports in an accessible fashion here and here. Why do I look at the reports, and not the briefs? It turns out that lawyers will make all sorts of arguments about what the evidence will say, but what is really relevant is the evidence actually presented. The expert reports, submitted before trial, are the broadest form of evidence that can be admitted - the court can whittle down what the jury hears, but typically experts are not allowed to go much beyond their reports.
These reports represent the best evidentiary presentation the parties have on the technical merits. It turns out that as a factual matter, both reports overlap quite a bit, and neither seems "wrong" as a matter of technical fact. I would sure hope so - these are pretty well respected professors and, quite frankly, the issues in this case are just not that complicated from a coding standpoint. (Note: for those wondering what gives me the authority to say that, I could say a lot, but I'll just note that in a prior life I wrote a book about software programming for an electronic mail API).
What level of abstraction was presented and argued to the jury? As far as I can tell from the reports, other than two or three routines that were directly copied, Oracle's expert found little or no similar structures or sequences in the main body source code - the part that actually does the work. The only similarity - and it was nearly identical - was in the structure, sequence and organization of the grouping of function names, and the "packages" or files that they were located in.
For computer nerds, also identical were function names, parameter orders, and variable structures passed in as parameters. In other words, the header files were essentially identical. And they would have to be, if the goal is to have a compatible system. The inputs (the function names and parameters) and the outputs need to be the same. The only way you can disallow this usage of the API is to say that you cannot create an independent software program (even one of this size) that mimics the inputs and outputs of the original program.
To say that would be bad policy, and as I discuss below, probably not in accordance with precedent. This is why the experts are both right. Oracle's expert says they are identical, and Google copied because that was the best way to lure application developers - by providing compatibility (and the jury agreed, as to the copying part). Google's expert says, so what? The only thing copied was functional, and that's legal. It's this last part that a) led to the hung jury, and b) the court will have to rule on.
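To make the compatibility point concrete, here is a minimal sketch. The class and method names below are my own invention, not anything from the actual Java or Android source: the idea is simply that two independently written libraries can expose an identical "header" (the same function name, parameter order, and return type) while the code underneath differs completely.

```java
// Hypothetical illustration: these names are invented for this post,
// not taken from the actual Java or Android class libraries.

// "Original" library: the public signature is max(int, int).
class OriginalMathLib {
    // Returns the larger of two ints.
    static int max(int a, int b) {
        return (a >= b) ? a : b;
    }
}

// Independent, compatible library: the signature (name, parameter order,
// return type) must match exactly, or programs already written against
// max(x, y) will not work with it. The body is written differently.
class CompatibleMathLib {
    static int max(int a, int b) {
        if (a < b) {
            return b;
        }
        return a;
    }
}

public class SignatureDemo {
    public static void main(String[] args) {
        // A caller sees identical inputs and outputs from either library.
        System.out.println(OriginalMathLib.max(3, 7));   // 7
        System.out.println(CompatibleMathLib.max(3, 7)); // 7
    }
}
```

The signatures are forced to be identical by the goal of compatibility; only the bodies reflect independent creative choices.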
In my last post, I assumed that the level of abstraction must have been at a deeper level than just the names of the methods. Why did I do that?
First, the court's jury instructions make clear that function names are not at issue. But I guess the court left it to the jury whether the collection could be infringed.
Second, the idea that an API could be infringed is usually something courts decide well in advance of trial, and it's a question that doesn't usually make it to trial.
Third, based on media accounts, it appeared that there was more testimony about deeper similarities in the code. The copied functions, I argued in my prior post, supported that view. Except that there were no other similarities. I think it is a testament to Oracle's lawyers (and experts) that this misperception of a dirty clean room shone through in media reports, because the actual evidence belies the media accounts.
This is why I decided to dig deeper, and why one should not rely on second hand reports of important evidence. Based on my reading of the reports (and I admit that I could be missing something - I wasn't in the courtroom), I think that the court will have no choice but to hold that the collection of API names is uncopyrightable - at least at this level of abstraction and claimed infringement.
To the extent that there are bits of non-functional code, I would say that's probably fair use as a matter of law to implement a compatible system. I made a very similar argument in an article I wrote 12 years ago - long before I went into academia.
Prof. Boyden asked in a comment to my prior post whether there was any law that supported the copying of API structure and header files. I think there is: Lotus v. Borland. That case is famous for allowing Borland to mimic the Lotus structure, but there was also an API of sorts. Lotus macros were based on the menu structure, and to provide program compatibility with Lotus, Borland implemented the same structure. So, for example, in Lotus, a user would hit "/" to bring up the menus, "F" to bring up the file menu, and "O" to bring up the open menu. As a result, the macro "/FO" would mimic this, to bring up the open menu.
Borland's product would "read" macro programs written for Lotus, and perform the same operation. No underlying similarity of the computer code, but an identical API that took the same inputs to create the same output the user expected.
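To sketch what "reading" a macro means here (this is my own toy illustration, not Borland's actual code), a compatible program only needs to map the same keystroke sequences to the same user-visible operations:

```java
import java.util.HashMap;
import java.util.Map;

// Toy dispatcher mapping Lotus-style keystroke macros to operations.
// Borland's real implementation was of course far richer; the point is
// that compatibility requires the same input ("/FO") to produce the
// same user-visible result (the open-file menu), regardless of how the
// underlying code is written.
public class MacroDemo {
    static final Map<String, String> MACROS = new HashMap<>();
    static {
        MACROS.put("/F", "file menu");
        MACROS.put("/FO", "open-file menu");
        MACROS.put("/FS", "save-file menu");
    }

    static String run(String macro) {
        return MACROS.getOrDefault(macro, "unknown macro");
    }

    public static void main(String[] args) {
        System.out.println(run("/FO")); // open-file menu
    }
}
```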
Like the lower court here, the lower court there found infringement of the structure, sequence, and organization of the menu structure. Like the lower court here, the court there found it irrelevant that Borland got the menu structure from third-party books rather than Lotus's own product. (Here, Google asserts that it got the APIs from Apache Harmony, a compatible Java system, rather than the Java documents themselves). There is some dispute about whether Sun sanctioned the Apache project, and what effect that should have on the case. I think that Harmony is a red herring. The reality is that it does not matter either way - a copy is a copy is a copy - if the copy is illicit, that is.
In Lotus, the lower court found the API creative and copyrightable, the very question facing the court here. On appeal, however, the First Circuit ruled that the API was a method of operation, likening it to the buttons on a VCR. I think that's a bit simplistic, but it was definitely the right ruling. The case went up to the Supreme Court, and it was a blockbuster case, expected to -- once and for all -- put this question to rest.
Alas, the Supreme Court affirmed without opinion by an evenly divided court. And the circuit court ruling stood. And it still stands - the court never took another case, and the gist of Lotus v. Borland has been applied over and over, but rarely as directly as it might apply here.
Wholesale, direct compatibility copying of APIs just doesn't happen very often, and certainly not on the scale and with the stakes of that at issue here. Perhaps that is why there is no definitive case holding that an entire API structure is uncopyrightable. You would think we would have one by 2012, but nope. Lotus comes close, but it is not identical. In Lotus, the menu structure was much smaller, and the names and structure were far less creative. Further, the concern was macro programming written by users for internal use that would not allow them to switch to a new spreadsheet program. Java programs, on the other hand, are designed to be distributed to the public in most cases.
Then again, the core issue is the same: the ability to switch the underlying program while maintaining compatibility of programs that have already been written. Based on this similarity, my prediction is that Judge Alsup will say that the collection of names is not copyrightable, or at the very least usage of the API in this manner is fair use as a matter of law. We'll see if I'm right, and whether an appeals court affirms it.
Monday, May 07, 2012
Oracle v. Google - Round I jury verdict (or not)
The jury came back today with its verdict in round one of the epic trial between two giants: Oracle v. Google. This first phase was for copyright infringement. In many ways, this was a run of the mill case, but the stakes are something we haven't seen in a technology copyright trial in quite some time.
Here's the short story of what happened, as far as I can gather.
1. Google needed an application platform for its Android phones. This platform allows software developers to write programs (or "apps" in mobile device lingo) that will run on the phone.
2. Google decided that Sun's (now Oracle's) Java was the best way to go.
3. Google didn't want to pay Sun for a license to a "virtual machine" that would run on Android phones.
4. Google developed its own virtual machine that is compatible with the Java programming language. To do so, Google had to make "APIs" that were compatible with Java. These APIs are essentially modules that provide functionality on the phone based on keywords (instructions) from a Java language computer program. For example, if I want to display "Hello World" on the phone screen, I need only call print("Hello World"). The API module has a bunch of hidden functionality that takes "Hello World" and sends it out to the display on the screen - manipulating memory, manipulating the display, etc.
5. The key dispute is just how much of the Java source code, if any, was copied to create the Google version.
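The API call described in step 4 can be sketched roughly as follows. The names here are invented for illustration (the real Java class libraries are organized quite differently); the point is that the app developer writes a single call, and the API module does the hidden work.

```java
// Hypothetical sketch of the API idea described above (names invented).
class Display {
    // Stand-in for the phone's actual screen hardware.
    static StringBuilder screen = new StringBuilder();

    // The "API": a simple name and parameter list the app developer calls.
    static void print(String text) {
        // Hidden functionality: on a real phone this would manage memory,
        // talk to the display driver, handle fonts, and so on.
        screen.append(text).append('\n');
    }
}

public class HelloDemo {
    public static void main(String[] args) {
        Display.print("Hello World");     // all the app developer writes
        System.out.print(Display.screen); // show what reached the "screen"
    }
}
```

The developer's program depends only on the name print and its parameter; everything behind that signature is the module's own implementation.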
The jury today held the following:
1. One small routine (9 lines) was copied directly - line for line. The court said no damages for this, but this finding will be relevant later.
2. Google copied the "structure, sequence, and organization" of 37 Java API modules. I'll discuss what this means later.
3. There was no finding on whether the copying was fair use - the jury deadlocked.
4. Google did not copy any "documentation" including comments in the source code.
5. Google was not fooled into thinking it had a license from Sun.
To understand any of this, one must understand the levels of abstraction in computer code. Some options are as follows:
A. Line by line copying of the entire source code.
B. Line by line paraphrasing of the source code (changing variable names, for example, but otherwise identical lines).
C. Copying of the structure, sequence and organization of the source code - deciding what functions to include or not, creative ways to implement them, creative ways to solve problems, creative ways to name and structure variables, etc. (The creativity can't be based on functionality)
D. Copying of the functionality, but not the structure, sequence and organization - you usually find this with reverse engineering or independent development
E. Copying of just the names of functions with similar functionality - the structure and sequence is the same, but only as far as the names go (like print, save, etc.). The Court ruled already that this is not protected.
F. Completely different functionality, including different structure, sequence, organization, names, and functionality.
Obviously F was out if Google wanted to maintain compatibility with the Java programming language (which is not copyrightable).
So, Google set up what is often called a "cleanroom." The idea is not new - AMD famously set up a cleanroom to independently develop the copyrighted aspects of its x86-compatible microprocessors back in the early 1990s. Like Google now (according to the jury), AMD famously failed to keep its cleanroom clean.
Here's how a cleanroom works. One group develops a specification of functionality for each of the API function names (which are, remember, not protected - people are allowed to make compatible programs using the same names, like print and save). Ideally, you do this through reverse engineering, but arguably it can be done by reading copyrighted specifications/manuals, and extracting the functionality. Quite frankly, you could probably use the original documentation as well, but it does not appear as "clean" when you do so.
Then, a second group takes the "pure functionality" description, and writes its own implementation. If it is done properly, you find no overlapping source code or comments, and no overlapping structure, sequence and organization. If there happens to be similar structure, sequence and organization, then the cleanroom still wins, because that similarity must have been dictated by functionality. After all, the whole point of the cleanroom is that the people writing the software could not copy because they did not have the original to copy from.
So, where did it all go wrong? There were a few smoking guns that the jury might have latched on to:
1. Google had some emails early on that said there was no way to duplicate the functionality, and thus Google should just take a license.
2. Some of the code (specifically, the 9 lines) were copied directly. While not big in itself, it makes one wonder how clean the team was.
3. The head of development noted in an email that it was a problem for the cleanroom people to have had Sun experience, but some apparently did.
4. Oracle's expert testified (I believe) that some of the similarities were not based on functionality, or were so close as to have been copied. Google's expert, of course, said the opposite, and the jury made its choice. It probably didn't help Google that Oracle's expert came from hometown Stanford, while Google's came from far-away Duke.
So, the jury may have just discounted the Google cleanroom story, and believed Oracle's. And that's what it found. As someone who litigated many copyright cases between competing companies, this is not a shocking outcome. Though this case will no doubt bring the copyright v. functionality issue to the forefront (as it did in Lotus v. Borland and Intel v. AMD), this stuff is bread and butter for most technology copyright lawyers. It's almost always factually determined. Only the scope of this case is different in my book - everything else looks like many cases I've litigated (and a couple that I've tried).
So, what happens now in the copyright phase? (A trial on patent infringement started today.) Judge Alsup has two important decisions to make.
First, the court has to decide what to do with the fair use ruling. Many say that a mistrial is warranted since fair use is a question of fact and the jury deadlocked. I'm not so sure. The facts on fair use are not really disputed here - only the legal interpretation of them; my experience is that courts are more than willing to make a ruling one way or the other when copying is clear (as the jury now says it is). I don't know what the court will do, but my gut says no fair use here. My experience is that failed cleanrooms fail fair use - it means that what was copied was more than pure functionality, and it is for commercial use with market substitution. The only real basis for fair use is that the material copied was pure functionality, and that's the next inquiry.
Second, the court must determine whether the structure, sequence, and organization of these APIs can be copyrightable, or whether they are pure functionality. I don't know the answer to that question. It will depend in large part on:
a. whether the structure, etc., copied was at a high level (e.g. structure of functions) or at a low level (e.g. line by line and function by function);
b. the volume of material copied (something like 11,000 lines is at issue);
c. the credibility of the experts in testifying to how much of the similar structure is functionally based. On a related note, the folks over at Groklaw for the most part think this is not copyrightable. They have had tremendous coverage of this case.
I've been on both sides of this argument, and I've seen it go both ways, so I don't have any predictions. I do look forward to seeing the outcome, though. It has been a while since I've written about copyright law and computer software; this case makes me want to rejoin the fray.
Thursday, May 03, 2012
When a Good Interpretation is the Wrong One (CFAA Edition)
Hi, and thanks again to Prawfs for having me back. In my first post, I want to revisit the CFAA and the Nosal case. I wrote about this case back in April 2011 (when the initial panel decision was issued), and again in December (when en banc review was granted). It's hard to believe that it has been more than a year!
I discuss the case in detail in the other posts, but for the busy and uninitiated, here is the issue: what does it mean to "exceed authorized access" to a computer? In Nosal, the wrongful act was essentially trade secret misappropriation where the "exceeded authorization" was violation of a clear "don't use our information except for company benefit" type of policy. Otherwise, the employees had access to the database from which they obtained information as part of their daily work.
Back in April, I argued that the panel basically got the interpretation of the statute right, but that the interpretation was so broad as to be scary. Orin Kerr, who has written a lot about this, noted in the comments that such a broad interpretation would be void for vagueness because it would ensnare too much everyday, non-wrongful activity. Though I'm not convinced that the law supports his view, it wouldn't break my heart if that were the outcome. But that's not the end of the story.
Last month, the Ninth Circuit finally issued the en banc opinion in the Nosal case. The court noted all the scary aspects of a broad interpretation, trotting out the parade of horribles showing innocuous conduct that would violate the broadest reading of the statute. As the court notes: "Ubiquitous, seldom-prosecuted crimes invite arbitrary and discriminatory enforcement." We all agree on that.
The solution for the court was to narrowly interpret what "exceeds authorized access" means: "we hold that 'exceeds authorized access' in the CFAA is limited to violations of restrictions on access to information, and not restrictions on its use." (emphasis in original).
On the one hand, this is a normatively "good" interpretation. The court applies the rule of lenity to not outlaw all sorts of behavior that shouldn't be outlawed and that was likely never intended to be outlawed. So, I'm not complaining about the final outcome.
On the other hand, I can't get over the fact that the interpretation is just plain wrong as a matter of statutory interpretation. Here are some of the reasons why:
1. The term "exceeds authorized access" is defined in the statute: "'exceeds authorized access' means to access a computer with authorization and to use such access to obtain or alter information in the computer that the accesser is not entitled so to obtain or alter." The statute on its face makes clear that exceeding access is not about violating an access restriction, but instead about using access to obtain information that one is not so entitled to obtain. To say that a use restriction cannot be part of the statute simply rewrites the definition.
2. The key section of the statute is not about use of information at all. Section 1030(a)(2) outlaws access to a computer, where such access leads to obtaining (including viewing) of information. So, of course exceeding authorized access should deal with an access restriction, but what is to stop everyone from rewriting their agreements conditionally: "Your access to this server is expressly conditioned on your intent at the time of access. If your intent is to use the information for nefarious purposes, then your access right is revoked." The statutory interpretation can't be so easily manipulated, but it appears to be.
3. Even if you accept the court's reading as in line with the statute, it still leaves much uncertainty in practice. For example, the court points to Google's former terms of service that disallowed minors from using Google: “You may not use the Services and may not accept the Terms if . . . you are not of legal age to form a binding contract with Google . . . .” I agree that it makes little sense for all minors who use Google to be juvenile delinquents. But read the terms carefully - they are not about use of information; they are about permission to access the services. If you are a minor, you may not use our services (that is, access our server). I suppose this is a use restriction because the court used it as an example, but that's not so clear to me.
4. The court states that Congress couldn't have meant exceeds authorized access to be about trade secret misappropriation and really only about hacking. 1030(a)(1)(a) belies that reading. That section outlaws exceeding authorized access to obtain national secrets and causing them "to be communicated, delivered, or transmitted, or attempt[ing] to communicate, deliver, transmit or cause to be communicated, delivered, or transmitted the same to any person not entitled to receive it." That sounds a lot like misappropriation to me, and I bet Congress had a situation like Nosal in mind.
5. In fact, trade secrets appear to be exactly what Congress had in mind. The section that would ensnare most unsuspecting web users, 1030(a)(2) (which bars "obtaining" information by exceeding authorized access), was added in the same public law as the Economic Espionage Act of 1996 - the federal trade secret statute. The senate reports for the EEA and the change to 1030 were issued on the same day. As S. Rep. 104-357 makes clear, the addition was to protect the privacy of information on civilian computers. Of course, this helps aid a narrower reading - if information is not private on the web, then perhaps we should not be so concerned about it.
6. On a related note, the court's treatment of the legislative history is misleading. The definition of "exceeds authorized access" was changed in 1986. As the court notes in a footnote:
[T]he government claims that the legislative history supports its interpretation. It points to an earlier version of the statute, which defined “exceeds authorized access” as “having accessed a computer with authorization, uses the opportunity such access provides for purposes to which such authorization does not extend.” But that language was removed and replaced by the current phrase and definition.
So far, so good. In fact, this change alone seems to support the court's view, and I would have stopped there. But the court goes on to state:
And Senators Mathias and Leahy—members of the Senate Judiciary Committee—explained that the purpose of replacing the original broader language was to “remove from the sweep of the statute one of the murkier grounds of liability, under which a[n] . . . employee’s access to computerized data might be legitimate in some circumstances, but criminal in other (not clearly distinguishable) circumstances.”
This reading is just not accurate in content or spirit. I reproduce below sections of S. Rep. 99-472, the legislative history cited by the court:
[On replacing "knowing" access with "intentional" access] This is particularly true in those cases where an individual is authorized to sign onto and use a particular computer, but subsequently exceeds his authorized access by mistakenly entering another computer file or data that happens to be accessible from the same terminal. Because the user had ‘knowingly’ signed onto that terminal in the first place, the danger exists that he might incur liability for his mistaken access to another file. ... The substitution of an ‘intentional’ standard is designed to focus Federal criminal prosecutions on those whose conduct evinces a clear intent to enter, without proper authorization, computer files or data belonging to another. . . . [Note: (a)(3) was about access to Federal computers by employees. Access to private computers was not added for another 10 years. At the time (a)(2) covered financial information.] The Committee wishes to be very precise about who may be prosecuted under the new subsection (a)(3). The Committee was concerned that a Federal computer crime statute not be so broad as to create a risk that government employees and others who are authorized to use a Federal Government computer would face prosecution for acts of computer access and use that, while technically wrong, should not rise to the level of criminal conduct. At the same time, the Committee was required to balance its concern for Federal employees and other authorized users against the legitimate need to protect Government computers against abuse by ‘outsiders.’ The Committee struck that balance in the following manner. In the first place, the Committee has declined to criminalize acts in which the offending employee merely ‘exceeds authorized access' to computers in his own department ... 
It is not difficult to envision an employee or other individual who, while authorized to use a particular computer in one department, briefly exceeds his authorized access and peruses data belonging to the department that he is not supposed to look at. This is especially true where the department in question lacks a clear method of delineating which individuals are authorized to access certain of its data. The Committee believes that administrative sanctions are more appropriate than criminal punishment in such a case. The Committee wishes to avoid the danger that every time an employee exceeds his authorized access to his department's computers—no matter how slightly—he could be prosecuted under this subsection. That danger will be prevented by not including ‘exceeds authorized access' as part of this subsection's offense. [emphasis added] Section 2(c) substitutes the phrase ‘exceeds authorized access' for the more cumbersome phrase in present 18 U.S.C. 1030(a)(1) and (a)(2), ‘or having accessed a computer with authorization, uses the opportunity such access provides for purposes to which such authorization does not extend’. The Committee intends this change to simplify the language in 18 U.S.C. 1030(a)(1) and (2)... [note: not to change the meaning, though obviously it does]
[And finally, the quote in the Nosal case, which were "additional" comments in the report, not the report of the committee itself]: [1030(a)(3)] would eliminate coverage for authorized access that aims at ‘purposes to which such authorization does not extend.’ This removes from the sweep of the statute one of the murkier grounds of liability, under which a Federal employee's access to computerized data might be legitimate in some circumstances, but criminal in other (not clearly distinguishable) circumstances that might be held to exceed his authorization.
Tuesday, April 17, 2012
“Breaking and Entering” Through Open Doors: Website Scripting Attacks and the Computer Fraud and Abuse Act, Part 2
Two notes: 1) Apologies to Prawfs readers for the delay in this post. It took my student and me longer than anticipated to complete some of the technical work behind this idea. 2) This post is a little longer than originally planned, because last week the Ninth Circuit en banc reversed a panel decision in United States v. Nosal which addressed whether the CFAA extends to violations of (terms of) use restrictions. In reversing the panel decision, the Ninth Circuit found the CFAA did *not* extend to such restrictions.
The idea for this post originally arose when I noticed I was able to include a hyperlink in a comment I made on a Prawfs' post. One of my students (Nick Carey) had just finished a paper discussing the applicability of the Computer Fraud and Abuse Act (CFAA) to certain types of cyberattacks that would exploit the ability to hyperlink blog comments, so I contacted Dan and offered to see if Prawfs was at risk, as it dovetailed nicely with a larger project I'm working on regarding regulating cybersecurity through criminal law.
The good news: it's actually hard to "hack" Prawfs. As best we can tell the obvious vulnerabilities are patched. It got me thinking, though, that as we start to clear away the low-hanging fruit in cybersecurity through regulatory action, focus is likely to shift to criminal investigations to address more sophisticated attackers.
Sophisticated attackers often use social engineering as a key part of their attacks. Social engineering vulnerabilities generally arise when there is a process in place to facilitate some legitimate activity, and when that process can be corrupted -- by manipulating the actors who use it -- to effect an outcome not predicted (and probably not desired). Most readers of this blog likely encounter such attacks on a regular basis, but have (hopefully!) been trained or learned how to recognize such attacks. One common example is the email, purportedly from a friend, business, or other contact, that invites you to click on a link. Once clicked on, this link in fact does not lead to the "exciting website" your friend advertised, but rather harvests the username and password for your email account and uses those for a variety of evil things.
I describe this example, which hopefully resonates with some readers (if not, be thankful for your great spam filters!), because it resembles the vulnerability we *did* find in Prawfs. This vulnerability, which perhaps is better called a design choice, highlights the tension in legal solutions to cybercrime I discuss here. Allowing commenters to hyperlink is a choice -- one that forms the basis for the "open doors" component of this question: should a user be held criminally liable under federal cybercrime law for using a website "feature" in a way other than that intended (or perhaps desired) by the operators of a website, but in a way that is otherwise not unlawful?
Prawfs uses TypePad, a well-known blogging software platform that handles most of the security work. And, in fact, it does quite a good job -- as mentioned above, most of the common vulnerabilities are closed off. The one we found remaining is quite interesting. It stems from the fact that commenters are permitted to use basic HTML (the "core" language in which web pages are written) in writing their comments. The danger in this approach is that it allows an attacker to include malicious "code" in their comments, such as the type of link described above. Since the setup of TypePad allows for commenters to provide their own name, it is also quite easy for an attacker to "pretend" to be someone else and use that person's "authority" to entice readers to click on the dangerous link. The final comment of Part 1 provides an example, here.
A simple solution -- one to which many security professionals rush -- is just to disable the ability to include HTML in comments. (Security professionals often tend to rush to disable entirely features that create risk.) Herein lies the problem: there is a very legitimate reason for allowing HTML in comments; it allows legitimate commenters to include clickable links to resources they cite. As we've seen in many other posts, this can be a very useful thing to do, particularly when citing opinions or other blog posts. As an aside, I've often found this tension curiously similar to that found in debates about restricting speech on the basis of national security concerns. But that is a separate post.
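The tradeoff can be sketched in a few lines. This is my own illustration, not TypePad's actual filtering code: escaping all HTML markup neutralizes a malicious comment, but it would equally turn a commenter's legitimate clickable link into dead text.

```java
// Illustrative sketch (my own, not TypePad's code) of the "disable HTML"
// option: escape every HTML special character so the browser renders
// markup as inert text instead of executing it.
public class CommentFilter {
    // Escape everything. Safe, but a legitimate <a href="..."> link
    // becomes plain, unclickable text too -- that's the tradeoff.
    static String escapeAll(String comment) {
        return comment
            .replace("&", "&amp;")   // must come first, or we re-escape
            .replace("<", "&lt;")
            .replace(">", "&gt;")
            .replace("\"", "&quot;");
    }

    public static void main(String[] args) {
        String malicious = "<script>stealPassword()</script>";
        // Rendered by a browser as visible text, not executed as code.
        System.out.println(escapeAll(malicious));
    }
}
```

A middle path, which many platforms attempt, is an allowlist that escapes everything except a few vetted tags; that preserves links but reopens the question of exactly which markup is safe to let through.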
Cybercrime clearly is a substantial problem. Tradeoffs like the one discussed here are among the core reasons the problem cannot be solved through technology alone. Turning to law -- particularly regulating certain undesired behaviors through criminalization -- is a logical and perhaps necessary step in addressing cybersecurity problems. As I have begun to study this problem, however, I have reached the conclusion that legal solutions face a set of tradeoffs structurally similar to those facing technical solutions.
The CFAA is the primary federal law criminalizing certain cybercrime and "hacking" activities. The critical threshold in many CFAA cases is whether a user has "exceeded authorized access" (18 U.S.C. § 1030(a)) on a computer system. But who defines "authorized access?" Historically, this was done by a system administrator, who set rules and policies for how individuals could use computers within an organization. The usernames and passwords we all have at our respective academic institutions, and the resources those credentials allow us to access, are an example of this classic model.
What about a website like Prawfs? Most readers don't use a login and password to read or comment, though authors do for posting entries. Like most websites, Prawfs has a policy addressing (some of) the aspects of acceptable use. That policy, however, can change at any time and without notice. (There are good reasons this is the case, the simplest being that it is not practical to notify every person who ever visits the website of any change to the policy in advance of such changes taking effect.) What if a policy changes, however, in a way that makes an activity -- one previously allowed -- now impermissible? Under a broad interpretation of the CFAA, a user continuing to engage in the now impermissible activity would be exceeding their authorized access, and thereby possibly running afoul of the CFAA (specifically (a)(2)(C)).
Some courts have rejected this broad interpretation, perhaps most famously in United States v. Lori Drew, colloquially known as the "MySpace Mom" case. Other courts have accepted a broader view, as discussed by Michael Risch here and here. I find the Drew result correct, if frustrating, and the (original) Nosal result scary and incorrect. Last week, the Ninth Circuit en banc reversed itself and adopted a more Drew-like view of the CFAA. I am particularly relieved by the majority's understanding of the CFAA overbreadth problem:
The government’s construction of the statute would expand its scope far beyond computer hacking to criminalize any unauthorized use of information obtained from a computer. This would make criminals of large groups of people who would have little reason to suspect they are committing a federal crime. While ignorance of the law is no excuse, we can properly be skeptical as to whether Congress, in 1984, meant to criminalize conduct beyond that which is inherently wrongful, such as breaking into a computer.
(United States v. Nosal, No. 10-10038 (9th Cir. Apr. 10, 2012) at 3864.)
I think the court recognizes here that an overbroad interpretation of the CFAA is akin to extending a breaking-and-entering statute to cover merely walking through an open door. The Ninth Circuit appears to adopt similar thinking, noting that Congress's original intent was to address the issue of hackers breaking into computer systems, not innocent actors who either don't (can't?) understand the implications of their actions or don't intend to "hack" a system when they find the system allows them to access a file or use a certain function:
While the CFAA is susceptible to the government’s broad interpretation, we find Nosal’s narrower one more plausible. Congress enacted the CFAA in 1984 primarily to address the growing problem of computer hacking, recognizing that, “[i]n intentionally trespassing into someone else’s computer files, the offender obtains at the very least information as to how to break into that computer system.” S. Rep. No. 99-432, at 9 (1986) (Conf. Rep.).
(Nosal at 3863.)
Obviously the Ninth Circuit is far from the last word on this issue, and the dissent notes differences in how other Circuits have viewed the CFAA. I suspect at some point, unless Congress first acts, the Supreme Court will end up weighing in on the issue. Before that, I hope to produce some useful thoughts on the issue, and eagerly solicit feedback from Prawfs readers. I've constructed a couple of examples below to illustrate this in the context of the Blawg.
Consider, for example, a change in a blog's rules restricting what commenters may link to in their comments. Let's assume that, like Prawfs, currently there are no specific posted restrictions. Let's say a blog decided it had a serious problem with spam (thankfully we don't here at Prawfs), and wanted to address this by adjusting the acceptable use policy for the blog to prohibit linking to any commercial product or service. We probably wouldn't feel much empathy for the unrelated spam advertisers who filled the comments with useless information about low-cost, prescriptionless, mail-order pharmaceuticals. We definitely wouldn't for the advance-fee fraud advertisers. But what about the practitioner who is an active participant in the blog, contributes to substantive discussions, and occasionally may want to reference or link to their practice in order to raise awareness?
Technically, all three categories of activity would violate (the broad interpretation of) (a)(2)(C). Note that the intent requirement -- or lack thereof -- in (a)(2)(C) is a key element of why these are treated similarly: the only "intent" required for violation is intent to access. (a)(2)(C) does not distinguish among actors' intent beyond this. As I have commented elsewhere (scroll down), one can easily construct scenarios under a "scary" reading of the CFAA where criminal law might be unable to distinguish between innocent actors lacking any reasonable element of what we traditionally consider mens rea, and malicious actors trying to take over or bring down information systems. At the moment, I tend to think there's a more difficult problem discerning intent in the "gray area" examples I constructed here, particularly the Facebook examples when a username/password is involved. But I wonder what some of the criminal law folks think about whether intent really *is* harder, or if we could solve that problem with better statutory construction of the CFAA.
Finally, I've added one last comment to the original post (Part 1) that highlights both how easy it is to engage in such hacking (i.e., this isn't purely hypothetical) and how difficult it is to address the problem with technical solutions (i.e., those solutions would have meant none of this post -- or of my comments on the Facebook passwords post -- could have contained clickable links). I also hope it adds a little bit of "impact factor." The text of the comment explains how it works, and also provides an example of how it could be socially engineered.
In sum, the lack of clarity in the CFAA, and the resulting "criminalization overbreadth," is what concerns me -- and, thankfully, apparently the Ninth Circuit. In the process of examining whether Prawfs/TypePad had any common vulnerabilities, it occurred to me that in the rush to defend against legitimate cybercriminals, there may develop significant political pressure to over-criminalize other activities which are not proper for regulation through the criminal law. We have already seen this happen with child pornography laws and sexting. I am extremely interested in others' thoughts on this subject, and hope I have depicted the problem in a way digestible to non-technical readers!
Thursday, March 22, 2012
Wired, and Threatened
I have a short op-ed on how technology provides both power and peril for journalists over at JURIST. Here's the lede:
Journalists have never been more empowered, or more threatened. Information technology offers journalists potent tools to gather, report and disseminate information — from satellite phones to pocket video cameras to social networks. Technological advances have democratized reporting... Technology creates risks along with capabilities however... [and] The arms race of information technology is not one-sided.
Wednesday, February 22, 2012
“Breaking and Entering” Through Open Doors: Website Scripting Attacks and the Computer Fraud and Abuse Act, Part 1
IMPORTANT: clicking through to the main body of this post
Seriously. Please read more below before clicking through to the post!
Thank you Dan, Sarah, and the other Prawfs hosts for giving me the opportunity to guest Blawg! I will be writing about a project I am currently working on with one of my students (Nick Carey), examining common website cybersecurity vulnerabilities in the context of cybercrime law.
The purpose of this post is to examine these (potential) cybersecurity vulnerabilities in PrawfsBlawg. It is the first of what I hope will be a few posts examining how current federal cybercrime law (the Computer Fraud and Abuse Act, or CFAA) applies to certain Internet activities that straddle the line between aggressive business practices and criminal intent.
While it is certainly possible to analyze these without a public post, making the post public provides more opportunity to showcase these vulnerabilities in a way that brings the debate to life without the "risk" of engaging attackers set on causing damage.
As other scholars have observed, judicial references to the CFAA notably increased over the past decade. Part 2 of this post, which will be forthcoming after we identify which vulnerabilities are (and are not) present in the Blawg, will provide a more substantive treatment of the legal issues involved and a (better) place for discussion.
Wednesday, February 15, 2012
Coasean Positioning System
Ronald Coase's theory of reciprocal causation is alive, well, and interfering with GPS. Yesterday, the FCC pulled the plug on a plan by LightSquared to build a new national wireless network that combines cell towers and satellite coverage. The FCC went along with a report from the NTIA that LightSquared's network would cause many GPS systems to stop working, including the ones used by airplanes and regulated closely by the FAA. Since there's no immediately feasible way to retrofit the millions of GPS devices out in the field, LightSquared had to die so that GPS could live.
LightSquared's "harmful interference" makes this sound like a simple case of electromagnetic trespass. But not so fast. LightSquared has had FCC permission to use the spectrum between 1525 and 1559 megahertz, in the "mobile-satellite spectrum" band. That's not where GPS signals are: they're in the next band up, the "radionavigation satellite service" band, which runs from 1559 to 1610 megahertz. According to LightSquared, its systems would be transmitting only in its assigned bandwidth--so if there's interference, it's because GPS devices are listening to signals in a part of the spectrum not allocated to them. Why, LightSquared plausibly asks, should it have a duty to make its own electromagnetic real estate safe for trespassers?
The underlying problem here is that "spectrum" is an abstraction for talking about radio signals, but real-life uses of the airwaves don't neatly sort themselves out according to its categories. In his 1959 article The Federal Communications Commission, Coase explained:
What does not seem to have been understood is that what is being allocated by the Federal Communications Commission, or, if there were a market, what would be sold, is the right to use a piece of equipment to transmit signals in a particular way. Once the question is looked at in this way, it is unnecessary to think in terms of ownership of frequencies or the ether.
Now add to this point Coase's observation about nuisance: that the problem can be solved either by the polluter or the pollutee altering its activities, and so in a sense should be regarded as being caused equally by both of them. So here. "Interference" is a property of both transmitters and receivers; one man's noise is another man's signal. GPS devices could have been designed with different filters from the start, filters that were more aggressive in rejecting signals from the mobile-satellite band. But those filters would have added to the cost of a GPS unit, and worse, they'd have degraded the quality of GPS reception, because they would have thrown out some of the signals from the radionavigation-satellite band. (The only way to build a completely perfect filter is to make it capable of traveling back in time. No kidding!) Since the mobile-satellite band wasn't at the time being used anywhere close to as intensively as LightSquared now proposes to use it, it made good sense to build GPS devices that were sensitive rather than robust.
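The time-travel parenthetical is standard signal-processing lore, and it can be made concrete with a short sketch (my own illustration, not anything from the LightSquared record): the impulse response of an ideal "brick-wall" lowpass filter is a sinc function that is nonzero *before* the impulse arrives, so a real, causal filter can only ever approximate it -- and the approximation either leaks neighboring-band signal or throws away wanted signal.

```python
# Illustration: an ideal brick-wall lowpass filter has impulse response
# h(t) = 2*fc*sinc(2*fc*t), which is nonzero for t < 0 -- the filter
# would have to respond *before* the impulse arrives (i.e., see the future).
import math

def ideal_lowpass_impulse_response(t, cutoff_hz):
    x = 2.0 * cutoff_hz * t
    if x == 0:
        return 2.0 * cutoff_hz
    return 2.0 * cutoff_hz * math.sin(math.pi * x) / (math.pi * x)

# A 1 kHz brick-wall filter, evaluated a quarter-millisecond BEFORE the
# impulse: the response is already well away from zero.
h_before = ideal_lowpass_impulse_response(-0.25e-3, cutoff_hz=1000.0)
```

This is why the GPS makers' choice was a genuine tradeoff rather than sloppiness: sharper rejection of the mobile-satellite band necessarily costs some of the radionavigation-band signal they wanted.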
There are multiple very good articles on property, tort, and regulation lurking in this story. There's one on the question Coase was concerned with: regulation versus ownership as means of choosing between competing uses (like GPS and wireless broadband). There's another on the difficulty of even defining property rights to transmit, given the failure of the "spectrum" abstraction to draw simple bright lines that avoid conflicting uses. There's one on the power of incumbents to gain "possession" over spectrum not formally assigned to them. There's another on investment costs and regulatory uncertainty: LightSquared has already launched a billion-dollar satellite. And there's one on technical expertise and its role in regulatory policy. Utterly fascinating.
Wednesday, February 08, 2012
Criminalizing Cyberbullying and the Problem of CyberOverbreadth
In the past few years, reports have attributed at least fourteen teen suicides to cyberbullying. Phoebe Prince of Massachusetts, Jamey Rodemeyer of New York, Megan Meier of Missouri, and Seth Walsh of California are just some of the children who have taken their own lives after being harassed online and off.
These tragic stories are a testament to the serious psychological harm that sometimes results from cyberbullying, defined by the National Conference of State Legislatures as the "willful and repeated use of cell phones, computers, and other electronic communications devices to harass and threaten others." Even when victims survive cyberbullying, they can suffer psychological harms that last a lifetime. Moreover, an emerging consensus suggests that cyberbullying is reaching epidemic proportions, though reliable statistics on the phenomenon are hard to come by. Who, then, could contest that the social problem of cyberbullying merits a legal response?
In fact, a majority of states already have legislation addressing electronic harassment in some form, and fourteen have legislation that explicitly uses the term cyberbullying. (Source: here.) What's more, cyberbullying legislation has been introduced in six more states: Georgia, Illinois, Kentucky, Maine, Nebraska, and New York. A key problem with much of this legislation, however, is that legislators have often conflated the legal definition of cyberbullying with the social definition. Though understandable, this tendency may ultimately produce legislation that is unconstitutional and therefore ineffective at remedying the real harms of cyberbullying.
Consider, for instance, a new law proposed just last month by New York State Senator Jeff Klein (D- Bronx) and Congressman Bill Scarborough. Like previous cyberbullying proposals, the New York bill was triggered by tragedy. The proposed legislation cites its justification as the death of 14-year-old Jamey Rodemeyer, who committed suicide after being bullied about his sexuality. Newspaper accounts also attribute the impetus for the legislation to the death of Amanda Cummings, a 15 year old New York teen who committed suicide by stepping in front of a bus after she was allegedly bullied at school and online. In light of these terrible tragedies, it is easy to see why New York legislators would want to take a symbolic stand against cyberbullying and join the ranks of states taking action against it.
The proposed legislation (S6132-2011) begins modestly enough by "modernizing" pre-existing New York law criminalizing stalking and harassment. Specifically, the new law amends various statutes to make clear that harassment and stalking can be committed by electronic as well as physical means. More ambitiously, the new law increases penalties for cyberbullying of "children under the age of 21," and broadly defines the activity that qualifies for criminalization under the act. The law links cyberbullying with stalking, stating that "a person is guilty of stalking in the third degree when he or she intentionally, and for no legitimate purpose, engages in a course of conduct directing electronic communication at a child [ ], and knows or reasonably should know that such conduct: (a) causes reasonable fear of material harm to the physical health, safety or property of such child; or (b) causes material harm to the physical health, emotional health, safety or property of such child." (emphasis mine) Even a single communication to multiple recipients about (and not necessarily to) a child can constitute a "course of conduct" under the statute.
Like the sponsors of this legislation, I deplore cyber-viciousness of all varieties, but I also condemn the tendency of legislators to offer well-intentioned but sloppily drafted and constitutionally suspect proposals to solve pressing social problems. In this instance, the legislation opts for a broad definition of cyberbullying based on legislators' desires to appear responsive to the cyberbullying problem. The broad statutory definition (and perhaps resorting to criminalization rather than other remedies) creates positive publicity for legislators, but broad legal definitions that encompass speech and expressive activities are almost always constitutionally overbroad under the First Amendment.
Again, consider the New York proposal. The mens rea element of the offense requires only that a defendant "reasonably should know" that "material harm to the . . . emotional health" of his target will result, and it is not even clear what constitutes "material harm." Seemingly, therefore, the proposed statute could be used to prosecute teen girls gossiping electronically from their bedrooms about another teen's attire or appearance. Likewise, the statute could arguably criminalize a Facebook posting by a 20-year-old college student casting aspersions on his ex-girlfriend. In both instances, the target of the speech almost certainly would be "materially" hurt and offended upon learning of it, and the speakers likely should reasonably know such harm would occur. Just as clearly, however, criminal punishment of "adolescent cruelty," which was a stated justification of the legislation, is an unconstitutional infringement on freedom of expression.
Certainly the drafters of the legislation may be correct in asserting that "[w]ith the use of cell phones and social networking sites, adolescent cruelty has been amplified and shifted from school yards and hallways to the Internet, where a nasty, profanity-laced comment, complete with an embarrassing photo, can be viewed by a potentially limited [sic] number of people, both known and unknown." They may also be correct to assert that prosecutors need new tools to deal with a "new breed of bully." Neither assertion, however, justifies ignoring the constraints of First Amendment law in drafting a legislative response. To do so potentially misdirects prosecutorial resources, misallocates taxpayer money that must be devoted to passing and later defending an unconstitutional law, and blocks the path toward legal reforms that would address cyberbullying more effectively.
With regard to criminal law, a meaningful response to cyberbullying--one that furthers the objectives of deterrence and punishment of wrongful behavior--would be precise and specific in defining the targeted conduct. A meaningful response would carefully navigate the shoals of the First Amendment's protection of speech, acknowledging that some terrible behavior committed through speech must be curtailed through educating, socializing, and stigmatizing perpetrators rather than criminalizing and censoring their speech.
Legislators may find it difficult to address all the First Amendment ramifications of criminalizing cyberbullying, partly because the term itself potentially obscures analysis. Cyberbullying is an umbrella term that covers a wide variety of behaviors, including threats, stalking, harassment, eavesdropping, spoofing (impersonation), libel, invasion of privacy, fighting words, rumor-mongering, name-calling, and social exclusion. The First Amendment constraints on criminalizing the speech behavior involved in cyberbullying depend on which category of speech behavior is involved. Some of these behaviors, such as issuing "true threats" to harm another person or taunting them with "fighting words," lie outside the protection of the First Amendment. (See Virginia v. Black and Chaplinsky v. New Hampshire; but see R.A.V. and my extended analysis here.) Some other behaviors that may cause deep emotional harm, such as name-calling, are just as clearly protected by the First Amendment in most contexts. (Compare, e.g., Cohen v. California with FCC v. Pacifica.)
But context matters profoundly in determining the scope of First Amendment protection of speech. Speech in schools and workplaces can be regulated in ways that speech in public spaces cannot (See, e.g., Bethel School Dist. No. 403 v. Fraser). Even within schools, the speech of younger minors can be regulated in ways that speech of older minors cannot (Cf. Hazelwood with Joyner v. Whiting (4th Cir)) , and speech that is part of the school curriculum can be regulated in ways that political speech cannot. (Compare, e.g., Tinker with Hazelwood). Outside the school setting, speech on matters of public concern receives far more First Amendment protection than speech dealing with other matters, even when such speech causes tremendous emotional upset. (See Snyder v. Phelps). But speech targeted at children likely can be regulated in ways that speech targeted at adults cannot, given the high and possibly compelling state interest in protecting the well-being of at least younger minors. (But see Brown v. Ent. Merchants Ass'n). Finally, even though a single instance of offensive speech may be protected by the First Amendment, the same speech repeated enough times might become conduct subject to criminalization without exceeding constitutional constraints. (See Pacifica and the lower court cases cited here).
Any attempt to use criminal law to address the social phenomenon should probably start with the jurisprudential question of which aspects of cyberbullying are best addressed by criminal law, which are best addressed by other bodies of law, and which are best left to non-legal control. Once that question is answered, criminalization of cyberbullying should proceed by identifying the various forms cyberbullying can take and then researching the specific First Amendment constraints, if any, on criminalizing that form of behavior or speech. This approach should lead legislators to criminalize only particularly problematic forms of narrowly defined cyberbullying. While introducing narrow legislation of this sort may not be as satisfying as criminalizing "adolescent cruelty," it is far more likely to withstand constitutional scrutiny and become a meaningful tool to combat serious harms.
Proposals to criminalize cyberbullying often seem to proceed from the notion that we will know it when we see it. In fact, most of us probably will: we all recognize the social problem of cyberbullying, defined as engaging in electronic communication that transgresses social norms and inflicts emotional distress on its targets. But criminal law cannot be used to punish every social transgression, especially when many of those transgressions are committed through speech, a substantial portion of which may be protected by the First Amendment.
[FYI: This blog post is the underpinning of a talk I'm giving at the Missouri Law Review's Symposium on Cyberbullying later in the week, and a greatly expanded and probably significantly changed version will ultimately appear in the Missouri Law Review, so I'd particularly appreciate comments. In the article, I expect to create a more detailed First Amendment guide for conscientious lawmakers seeking to regulate cyberbullying. I am especially excited about the symposium because it includes mental health researchers and experts as well as law professors. Participants include Barry McDonald (Pepperdine), Ari Waldman (Cal. Western), John Palfrey (Berkman Center at HLS), Melissa Holt (B.U.), Mark Small (Clemson), Philip Rodkin (U. Ill.), Susan P. Limber (Clemson), Daniel Weddle (UMKC), and Joe Laramie (consultant/former director of the Missouri A.G. Internet Crimes Against Children Taskforce).]
Posted by Lyrissa Lidsky on February 8, 2012 at 08:37 AM in Constitutional thoughts, Criminal Law, Current Affairs, First Amendment, Information and Technology, Lyrissa Lidsky, Web/Tech | Permalink | Comments (8) | TrackBack
Friday, February 03, 2012
The Used CD Store Goes Online
On Monday, Judge Sullivan of the Southern District of New York will hear argument on a preliminary injunction motion in Capitol Records v. ReDigi, a copyright case that could be one of the sleeper hits of the season. ReDigi is engaged in the seemingly oxymoronic business of "pre-owned digital music" sales: it lets its customers sell their music files to each other. Capitol Records, unamused, thinks the whole thing is blatantly infringing and wants it shut down, NOW.
There are oodles of meaty copyright issues in the case -- including many that one would not think would still be unresolved at this late date. ReDigi is arguing that what it's doing is protected by first sale: just as with physical CDs, resale of legally purchased copies is legal. Capitol's counter is that no physical "copy" changes hands when a ReDigi user uploads a file and another user downloads it. This disagreement cuts to the heart of what first sale means and is for in this digital age. ReDigi is also making a quiver's worth of arguments about fair use (when users upload files that they then stream back to themselves), public performance (too painfully technical to get into on a general-interest blog), and the responsibility of intermediaries for infringements initiated by users.
I'd like to dwell briefly on one particular argument that ReDigi is making: that what it is doing is fully protected under section 117 of the Copyright Act. That rarely-used section says it's not an infringement to make a copy of a "computer program" as "an essential step in the utilization of the computer program." In ReDigi's view, the "mp3" files that its users download from iTunes and then sell through ReDigi are "computer programs" that qualify for this defense. Capitol responds that in the ontology of the Copyright Act, MP3s are data ("sound recordings," to be precise), not programs.
I winced when I read these portions of the briefs.
In the first place, none of the files being transferred through ReDigi are MP3s. ReDigi only works with files downloaded from the iTunes Store, and the only format that iTunes sells in is AAC (Advanced Audio Coding), not MP3. It's a small detail, but the parties' agreement to a false "fact" virtually guarantees that their error will be enshrined in a judicial opinion, leading future lawyers and courts to think that any digital music file is an "MP3."
Worse still, the distinction that divides ReDigi and Capitol -- between programs and data -- is untenable. Even before there were actual computers, Alan Turing proved that there is no difference between program and data. In a brilliant 1936 paper, he showed that any computer program can be treated as the data input to another program. We could think of an MP3 as a bunch of "data" that is used as an input to a music player. Or we could think of the MP3 as a "program" that, when run correctly, produces sound as an output. Both views are correct -- which is to say, that to the extent that the Copyright Act distinguishes a "program" from any other information stored in a computer, it rests on a distinction that collapses if you push too hard on it. Whether ReDigi should be able to use this "essential step" defense, therefore, has to rest on a policy judgment that cannot be derived solely from the technical facts of what AAC files are and how they work. But again, since the parties agree that there is a technical distinction and that it matters, we can only hope that the court realizes they're both blowing smoke.
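Turing's point is easy to demonstrate in miniature (a toy of my own construction, not anything from the parties' briefs): the very same bytes can be measured as inert data or handed to another program -- here, the Python interpreter -- and run.

```python
# The same string, viewed two ways: as data and as a program.
payload = "sum(range(10))"

# As data: a sequence of characters with measurable properties.
length = len(payload)   # 14 characters

# As a program: input to another program (the interpreter), producing output.
result = eval(payload)  # 45
```

An AAC file is no different in principle: "data" to a music player, or a "program" whose correct execution by the player produces sound. Which label applies is a choice of perspective, not a fact about the file.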
Monday, December 19, 2011
Breaking the Net
Mark Lemley, David Post, and Dave Levine have an excellent article in the Stanford Law Review Online, Don't Break the Internet. It explains why proposed legislation, such as SOPA and PROTECT IP, is so badly-designed and pernicious. It's not quite clear what is happening with SOPA, but it appears to be scheduled for mark-up this week. SOPA has, ironically, generated some highly thoughtful writing and commentary - I recently read pieces by Marvin Ammori, Zach Carter, Rebecca MacKinnon / Ivan Sigal, and Rob Fischer.
There are two additional, disturbing developments. First, the public choice problems that Jessica Litman identifies with copyright legislation more generally are manifestly evident in SOPA: Rep. Lamar Smith, the SOPA sponsor, gets more campaign donations from the TV / movie / music industries than any other source. He's not the only one. These bills are rent-seeking by politically powerful industries; those campaign donations are hardly altruistic. The 99% - the people who use the Internet - don't get a seat at the bargaining table when these bills are drafted, negotiated, and pushed forward.
Second, representatives such as Mel Watt and Maxine Waters have not only admitted to ignorance about how the Internet works, but have been proud of that fact. They've been dismissive of technical experts such as Vint Cerf - he's only the father of TCP/IP - and folks such as Steve King of Iowa can't even be bothered to pay attention to debate over the bill. I don't mind that our Congresspeople are not knowledgeable about every subject they must consider - there are simply too many - but I am both concerned and offended that legislators like Watt and Waters are proud of being fools. This is what breeds inattention to serious cybersecurity problems while lawmakers freak out over terrorists on Twitter. (If I could have one wish for Christmas, it would be that every terrorist would use Twitter. The number of Navy SEALs following them would be... sizeable.) It is worrisome when our lawmakers not only don't know how their proposals will affect the most important communications platform in human history, but overtly don't care. Ignorance is not bliss, it is embarrassment.
Cross-posted at Info/Law.
Posted by Derek Bambauer on December 19, 2011 at 01:49 PM in Blogging, Constitutional thoughts, Corporate, Current Affairs, Film, First Amendment, Information and Technology, Intellectual Property, Law and Politics, Music, Property, Television, Web/Tech | Permalink | Comments (1) | TrackBack
Wednesday, December 14, 2011
Six Things Wrong with SOPA
America is moving to censor the Internet. The PROTECT IP and Stop Online Piracy Acts have received considerable attention in the legal and tech world; SOPA's markup in the House occurs tomorrow. I'm not opposed to blacklisting Internet sites on principle; however, I think that thoughtful procedural protections are vital to doing so in a legitimate way. Let me offer six things that are wrong with SOPA and PROTECT IP: they harm cybersecurity, are wildly overbroad and vague, enable unconstitutional prior restraint, undercut American credibility on Internet freedom, damage a well-working system for online infringement, and lack any empirical justification whatsoever. And, let me address briefly Floyd Abrams's letter in support of PROTECT IP, as it is frequently adverted to by supporters of the legislation. (The one-word summary: "sellout." The longer summary: The PROTECT IP letter will be to Abrams's career what the Transformers movie was to that of Orson Welles.)
- Cybersecurity - the bills make cybersecurity worse. The most significant risk is that they impede - in fact, they'd prevent - the deployment of DNSSEC, which is vitally important to reducing phishing, man-in-the-middle attacks, and similar threats. Technical experts are unanimous on this - see, for example, Sandia National Laboratories, or Steve Crocker / Paul Vixie / Dan Kaminsky et al. Idiots, like the MPAA's Michael O'Leary, disagree, and simply assert that "the codes change." (This is what I call "magic elf" thinking: we can just get magic elves to change the Internet to solve all of our problems. Congress does this, too, as when it includes imaginary age-verifying technologies in Internet legislation.) Both bills would mandate that ISPs redirect users away from targeted sites, to government warning notices such as those employed in domain name seizure cases. But, this is exactly what DNSSEC seeks to prevent - it ensures that the only content returned in response to a request for a Web site is that authorized by the site's owner. There are similar problems with IP-based redirection, as Pakistan's inadvertent hijacking of YouTube demonstrated. It is ironic that at a time when the Obama administration has designated cybersecurity as a major priority, Congress is prepared to adopt legislation that makes the Net markedly less secure.
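The DNSSEC conflict can be made concrete with a toy model. To be clear, this is not real DNSSEC, which uses RRSIG records and a chain of trust rooted in signed keys rather than a shared secret; the zone key and addresses below are invented. It is only a minimal sketch of the core idea: a validating resolver rejects any answer the zone owner did not sign, including a mandated redirect to a government warning page.

```python
# Toy model of why DNS blocking-by-redirection conflicts with
# DNSSEC-style validation. A resolver that checks signatures will
# reject any answer the zone owner did not sign.
import hashlib
import hmac

ZONE_KEY = b"example-zone-signing-key"  # stands in for the zone's private key

def sign(name: str, ip: str) -> bytes:
    """The zone owner signs each name/address record it publishes."""
    return hmac.new(ZONE_KEY, f"{name}={ip}".encode(), hashlib.sha256).digest()

def validate(name: str, ip: str, sig: bytes) -> bool:
    """A validating resolver accepts only answers with a correct signature."""
    return hmac.compare_digest(sig, sign(name, ip))

# The site owner publishes a signed record.
record_sig = sign("example.com", "93.184.216.34")

# The authentic answer verifies.
assert validate("example.com", "93.184.216.34", record_sig)

# An ISP complying with a blocking order substitutes a warning-page
# address, but it cannot forge the owner's signature, so the
# redirected answer fails validation rather than being accepted.
assert not validate("example.com", "10.0.0.1", record_sig)
```

In other words, the bills would order intermediaries to do exactly what DNSSEC is engineered to treat as an attack.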
- Wildly overbroad and vague - the legislation (particularly SOPA) is a blunderbuss, not a scalpel. Sites eligible for censoring include those:
- primarily designed or operated for copyright infringement, trademark infringement, or DMCA § 1201 infringement
- with a limited purpose or use other than such infringement
- that facilitate or enable such infringement
- that promote their use to engage in infringement
- that take deliberate actions to avoid confirming high probability of such use
If Flickr, Dropbox, and YouTube were located overseas, they would plainly qualify. Targeting sites that "facilitate or enable" infringement is particularly worrisome - this charge can be brought against a huge range of sites, such as proxy services or anonymizers. User-generated content sites are clearly dead. And the vagueness inherent in these terms means two things: a wave of litigation as courts try to sort out what the terminology means, and a chilling of innovation by tech startups.
- Unconstitutional prior restraint - the legislation engages in unconstitutional prior restraint. On filing an action, the Attorney General can obtain an injunction that mandates blocking of a site, or the cutoff of advertising and financial services to it - before the site's owner has had a chance to answer, or even appear. This is exactly backwards: the Constitution teaches that the government cannot censor speech until it has made the necessary showing, in an adversarial proceeding - typically under strict scrutiny. Even under the more relaxed, intermediate scrutiny that characterizes review of IP law, censorship based solely on the government's say-so is forbidden. The prior restraint problem is worsened as the bills target the entire site via its domain name, rather than focusing on individualized infringing content, as the DMCA does. Finally, SOPA's mandatory notice-and-takedown procedure is entirely one-sided: it requires intermediaries to cease doing business with alleged infringers, but does not create any counter-notification akin to Section 512(g) of the DMCA. The bills tilt the table towards censorship. They're unconstitutional, although it may well take long and expensive litigation to demonstrate that.
- Undercuts America's moral legitimacy - there is an irreconcilable tension between these bills and the position of the Obama administration - especially Secretary of State Hillary Clinton - on Internet freedom. States such as Iran also mandate blocking of unlawful content; that's why Iran blocked our "virtual embassy" there. America surrenders the rhetorical and moral advantage when it, too, censors on-line content with minimal process. SOPA goes one step farther: it permits injunctions against technologies that circumvent blocking - such as those funded by the State Department. This is fine with SOPA adherents; the MPAA's Chris Dodd is a fan of Chinese-style censorship. But it ought to worry the rest of us, who have a stake in uncensored Internet communication.
- Undercuts DMCA - the notice-and-takedown provisions of the DMCA work reasonably well. They're predictable, they scale for both discovering infringing content and removing it, and they enable innovation, such as both YouTube itself and YouTube's system of monetizing potentially infringing content. The bills shift the burden of enforcement from IP owners - which is where it has traditionally rested, and where it belongs - onto intermediaries. SOPA in particular increases the burden, since sites must respond within 5 days of a notification of claimed infringement, with no exception for holidays or weekends. The content industries do not like the DMCA. That is no evidence at all that it is not functioning well.
- No empirical evidence - put simply, there is no empirical data suggesting these bills are necessary. The content industries routinely throw around made-up numbers, but they have been frequently debunked. How important are losses from foreign sites that are beyond the reach of standard infringement litigation, versus losses from domestic P2P networks, physical infringement, and the like? Data from places like Switzerland suggests that losses are, at best, minimal. If Hollywood wants America to censor the Internet, it needs to make a convincing case based on actual data, and not moronic analogies to stealing things off trucks. The bills, at their core, are rent-seeking: they would rewrite the law and fundamentally alter Internet free expression to benefit relatively small yet politically powerful industries. (It's no shock that two key Congressional aides who worked on the legislation have taken jobs in Hollywood - they're just following Mitch Glazier, Dan Glickman, and Chris Dodd through the revolving door.) The bills are likely to impede innovation by the far larger information technology industry, and indeed to drive some economic activity in IT offshore.
The bills are bad policy and bad law. And yet I expect one of them to pass and be signed into law. Lastly, the Abrams letter: Noted First Amendment attorney Floyd Abrams wrote a letter in favor of PROTECT IP. Abrams's letter is long, but surprisingly thin on substantive legal analysis of PROTECT IP's provisions. It looks like advocacy, but in reality, it is Abrams selling his (fading) reputation as a First Amendment defender to Hollywood. The letter rehearses standard copyright and First Amendment doctrine, and then tries to portray PROTECT IP as a bill firmly in line with First Amendment jurisprudence. It isn't, as Marvin Ammori and Larry Tribe note, and Abrams embarrasses himself by pretending otherwise. Having the government target Internet sites for pre-emptive censorship, and permitting it to do so before a hearing on the merits, is extraordinary. It is error-prone - look at Dajaz1 and mooo.com. And it runs afoul of not only traditional First Amendment doctrine, but in particular the current Court's heightened protection of speech in a wave of cases last term. Injunctions affecting speech are different in character from injunctions affecting other things, such as conduct, and even the cases that Abrams cites (such as Universal City Studios v. Corley) acknowledge this. According to Abrams, the constitutionality of PROTECT IP is an easy call. That's only true if you're Hollywood's sockpuppet. Thoughtful analysis is far harder.
Cross-posted at Info/Law.
Posted by Derek Bambauer on December 14, 2011 at 09:07 PM in Constitutional thoughts, Culture, Current Affairs, Film, First Amendment, Information and Technology, Intellectual Property, Law and Politics, Music, Property, Web/Tech
On the Move
Jane Yakowitz and I have accepted offers from the University of Arizona James E. Rogers College of Law. We're excited to join such a talented group! But, we'll miss our Brooklyn friends. Come visit us in Tucson!
Posted by Derek Bambauer on December 14, 2011 at 05:39 PM in Current Affairs, Getting a Job on the Law Teaching Market, Housekeeping, Information and Technology, Intellectual Property, Life of Law Schools, Teaching Law, Travel
Saturday, December 10, 2011
Copyright and Your Face
The Federal Trade Commission recently held a workshop on facial recognition technology, such as Facebook's much-hated system, and its privacy implications. The FTC has promised to come down hard on companies who abuse these capabilities, but privacy advocates are seeking even stronger protections. One proposal raised was to provide people with copyright in their faceprints or facial features. This idea has two demerits: it is unconstitutional, and it is insane. Otherwise, it seems fine.
Let's start with the idea's constitutional flaws. There are relatively few constitutional limits on Congressional power to regulate copyright: you cannot, for example, have perpetual copyright. And yet, this proposal runs afoul of two of them. First, imagine that I take a photo of you, and upload it to Facebook. Congress is free to establish a copyright system that protects that photo, with one key limitation: I am the only person who can obtain copyright initially. That's because the IP Clause of the Constitution says that Congress may "secur[e] for limited Times to Authors... the exclusive Right to their respective Writings." I'm the author: I took the photograph (copyright nerds would say that I "fixed" it in my camera's memory). The drafters of the Constitution had good reason to limit grants of copyright to authors: England spent almost 150 years operating under a copyright-like monopoly system that awarded entitlements to a distributor, the Stationers' Company. The British crown had an excellent reason for giving the Company a monopoly - the Stationers' Company implemented censorship. Having a single distributor with exclusive rights gives a government but one choke point to control. This is all to say that Congress can only give copyright to the author of a work, and the author is the person who creates / fixes it (here, the photographer). It's unconstitutional to award it to anyone else.
Second, Congress cannot permit facts to be copyrighted. That's partly for policy reasons - we don't want one person locking up facts for life plus seventy years (the duration of copyright) - and partly for definitional ones. Copyright applies only to works of creative expression, and facts don't qualify. They aren't created - they're already extant. Your face is a fact: it's naturally occurring, and you haven't created it. (A fun question, though, is whether a good plastic surgeon might be able to copyright the appearance of your surgically altered nose. Scholars disagree on this one.) So, attempting to work around the author problem by giving you copyright protection over the configuration of your face is also out. In short, the proposal is unconstitutional.
It's also stupid: fixing privacy with copyright is like fixing alcoholism with heroin. Copyright infringement is ubiquitous in a world of digital networked computers. Similarly, if we get copyright in our facial features, every bystander who inadvertently snaps our picture with her iPhone becomes an infringer - subject to statutory damages of between $750 and $30,000. Even if few people sue, those who do have a powerful weapon on their side. Courts would inevitably try to mitigate the harsh effects of this regime, probably by finding most such incidents to be fair use. But that imposes high administrative costs, and fair use is an equitable doctrine - it invites courts to inject their normative views into the analysis. It's already expensive for filmmakers, for example, to clear all trademarked and copyrighted items from the zones they film (which is why they have errors and omissions insurance). Now, multiply that permissions problem by every single person captured in a film or photograph. It becomes costly even to do the right thing - and leads to strategic behavior by people who see a potential defendant with deep pockets.
Finally, we already have an IP doctrine that covers this area: the right of publicity (which is based in state tort law). The right of publicity at least has some built-in doctrinal elements that deal with the problems outlined above, such as exceptions when one's likeness is used in a newsworthy fashion. It's not as absolute as copyright, and it lacks the hammer of statutory damages, which is probably why advocates aren't turning to it. But those are features, not bugs.
Privacy problems on social networks are real. But we need to address them with thoughtful, tailored solutions, not by slapping copyright on the problem and calling it done.
Cross-posted at Info/Law.
Posted by Derek Bambauer on December 10, 2011 at 06:03 PM in Constitutional thoughts, Corporate, Culture, Current Affairs, Film, First Amendment, Information and Technology, Intellectual Property, Property, Torts
Tuesday, December 06, 2011
Cry Baby Cry
The project to crowdsource a Tighter White Album (hereinafter TWA) is done, and we’ve come up with a list of 15 songs that might have made a better end product than the original. Today I want to discuss whether I've done something wrong, legally or morally.
I am no expert on European law, or its protection of the moral rights of the author, but I was reminded by Howard Knopf that my hypothetical exercise could generate litigation, as the author has rights against the distortion or mutilation of the work, separate from copyright protection. The current copyright act in the UK bars derogatory "treatments" of the work. A treatment can include "deletion from" the original, and the TWA is just that -- 15 songs were trimmed from the original White Album, ostensibly to make something "better than" the original. To the extent the remaining Beatles and their heirs can agree on anything, it might be the sanctity of the existing discography in its extant form, at least as it encapsulates the end product stemming from the individual proclivities of the Beatles at the time. But see Free as a Bird. Fans and critics reacted strongly to Danger Mouse's recent splice of Jay-Z's Black Album and the Beatles' White Album, with one critic describing it as "an insult to the legacy of the Beatles (though ironically, probably intended as a tribute)". Could the TWA implicate the moral rights of the Beatles?
On one level, I and my (perhaps unwitting) co-conspirators are doing nothing more than music fans have done for generations: debating which songs of an artist's body of work merit approval and which merit opprobrium. Coffee houses and bars are often filled with these discussions. Rolling Stone has made a cottage industry of ranking and reranking the top songs and albums of the recent past. This project is no different.
On the other hand, I am suggesting, by having the audacity to conduct this survey and publish the results, that the lads from Liverpool did it wrong, were too indulgent, etc., in releasing the White Album in its official form. That's different from saying "Revolution #9" is "not as good" as "Back in the U.S.S.R." (or vice versa). But to my eyes, it falls short of distortion.
Moral rights in sound recordings and musical compositions are not explicitly protected under the Copyright Act. In one case predating the effective date of the current Act, the Monty Python troupe was granted an injunction against the broadcast of its skits in heavily edited form on U.S. television, but that case was grounded more in contract law (ABC having exceeded the scope of its license) and a right not to have the hack job attributed to the Pythons under the Lanham Act.* The TWA doesn't edit individual songs, and while the Monty Python case protected 30-minute Python episodes as a cohesive whole, it is difficult to argue that the copyright owners of the White Album are necessarily committed to the same cohesive view of the White Album, to the extent they sell individual songs online. One can buy individual Beatles songs, even from the White Album. Once you can buy individual tracks, can there really be moral rights implications in posting my preferred version of the album in a format that allows you to go and buy it?
On to the standard rights protected under U.S. copyright law. Yesterday, I talked about the possibility that the list itself might be a compilation, with protectable creativity in the selection. Might the TWA also be an unauthorized derivative work, exposing me to copyright liability? A derivative work is one "based on" a preexisting work, in which the original is "recast, transformed or adapted." That's similar to the language used to describe a treatment under UK law. Owners of sound recordings often release new versions, with songs added, outtakes included, and bonus art, ostensibly to sell copies to consumers who already purchased them. I certainly didn't ask the Beatles (or more precisely, the copyright owner of the White Album) for permission to propose a shortened album, but what I have done looks like an abridgement of the sort that might fall into traditional notions of fair use.
Once upon a time, I might have made a mixtape and distributed it to my dearest friends (although when I was young, the 45-minute tape was optimal, so I might have been forced to cut another song or two). Committing my findings to vinyl, compact disc, or mp3, using the original recordings, technically violates 17 USC 106(1)'s prohibition on unauthorized reproduction. If I give an unauthorized copy to someone else, I violate the exclusive right to distribute under section 106(3). Unlike the public performance and display rights, there is no express carve-out for "private" copying and/or distribution, although it was historically hard to detect. The mixtape in its analog form seems like the type of private use that should be permitted under any reasonable interpretation of fair use, if not insulated by statute.
If I send my digital mixtape to all of my Facebook friends, that seems a bridge too far. However, Megan Carpenter has suggested that by failing to make room for the mix tape in the digital environment, copyright law "breeds contempt." 11 Nev. L.J. 44, 79-80 (2010). Jessica Litman, Joseph Liu, Glynn Lunney and Rebecca Tushnet, among others, have argued that space for personal consumption is as important in the digital realm as it was in the good old days when everything was analog.
If I instead use social networking tools like Spotify Social** to share my playlist, I probably don't infringe the public performance rights under §§ 106(4) and 106(6). Because I use authorized channels, any streaming you do to preview my playlist is likely authorized. And if I post the playlist on iTunes, you can go and buy it as constituted. That seems somewhat closer to an unauthorized copy, but it's not actually unauthorized. The Beatles sell individual singles through iTunes, so it seems problematic to conclude that consumers are not authorized to buy only those songs they prefer.
So all in all, given that I'm not running a CD burner in my office, I think I'm in the clear. What do you think?
*A recent Supreme Court decision puts in doubt the Lanham Act portion of the Monty Python holding.
**The Spotify Social example is complicated by the fact that the Beatles aren't included, although I have found reasonable covers of all the songs included on the TWA. The copyright act explicitly provides for a compulsory license to make cover tunes, so long as the cover doesn't deviate too drastically from the original. 17 USC § 115(a). If the license was paid, and the copyright owner notified, those songs are authorized. My repackaging of them in a virtual mixtape, however, is not. 17 U.S.C. § 114(b).
Revisiting the Scary CFAA
Last April, I blogged about the Nosal case, which led to the scary result that just about any breach of contract on the internet can potentially be a criminal access to a protected computer. I discuss the case in extensive detail in that post, so I won't repeat it here. The gist is that employees who had access to a server in their ordinary course of work were held to have exceeded their authorization when they accessed that same server with the intent of funneling information out to an ex-employee who had become a competitor. The scary extension is that anyone breaching a contract with a web provider might then be considered to be accessing the web server in excess of authorization, and therefore committing a crime.
I'm happy to report that Nosal is now being reheard in the Ninth Circuit. I'm hopeful that the court will do something to rein in the case.
I think most of my colleagues agree with me that the broad interpretation of the statute is a scary one. Where some depart, though, is on the interpretive question. As you'll see in the comments to my last post, there is some disagreement about how to interpret the statute and whether it is void for vagueness. I want to address some of the continuing disagreement after the jump.
I think there are three ways to look at Nosal:
1. The ruling was right, and the extension to all web users is fine (ouch);
2. The ruling was right as to the Nosal parties, but should not be extended to all web users; and
3. The ruling was not right as to the Nosal parties, and also wrong as to all web users.
I believe where I diverge from many of my cyberlaw colleagues is that I fall into group two. I hope to explain why, and perhaps suggest a way forward. Note that I'm not a con law guy, and I'm not a crim law guy, but I am an internet statute guy, so I call the statutory interpretation like I see it.
I want to focus on the notion of authorization. The statute at issue, the Computer Fraud and Abuse Act (or CFAA) outlaws obtaining information from networked computers if one "intentionally accesses a computer without authorization or exceeds authorized access."
Orin Kerr, a leader in this area, wrote a great post yesterday that did two things. First, it rejected tort-based trespass rules like implied consent as too vague for a criminal statute. On this, I agree. Second, it defined "authorization" with respect to other criminal law treatment of consent. In short, the idea is that if you consent to access in the first place, then doing bad things in violation of the promises made does not mean a lack of consent to access. On this, I agree as well.
But here's the rub: the statute says "without authorization or exceeds authorized access." And this second phrase has to mean something. The goal, for me at least, is that it covers the Nosal case but not the broad range of activity on the internet. Professor Kerr, I suspect, would say that the only way to do that is for it to be vague, and if so, then the statute must be vague.
I'm OK with the court going that way, but here's my problem with the argument. The statute isn't necessarily vague. Let's say that the scary broad interpretation from Nosal means that every breach of contract is now a criminal act on the web. That's not vague. Breach a contract, then you're liable; there's no wondering whether you have committed a crime or not.
Of course, the contract might be vague, but that's a factual issue that can be litigated. It is not unheard of to have a crime based on failure to live up to an agreement to do something. A dispute about what the agreement was is not the same as being vague. Does that mean I like it? No. Does that mean it's crazy overbroad? Yes. Does that mean everyone's at risk and someone should do something about this nutty statute? Absolutely.
Now, here is where some vagueness comes in - only some breaches lead to exceeded access, and some don't. How are we to decide which is which? The argument Professor Kerr takes on is tying it to trespass, and I agree that doesn't work.
So, I return to my suggestion from several months ago - we should look to the terms of authorization of access to see whether they have been exceeded. This means that if you are an employee who accesses information for a purpose you know is not authorized, then you are exceeding authorization. It also means that if the terms of service on a website say explicitly that you must be truthful about your age or you are not authorized to access the site, then you are unauthorized. And that's not always an unreasonable access limitation. If there were a kids only website that excluded adults, I might well want to criminalize access obtained by people lying about their age. That doesn't mean all access terms are reasonable, but I'm not troubled by that from a statutory interpretation standpoint.
I'm sure one can attack this as vague - it won't always be clear when a term is tied to authorization. But then again, if it is not a clear term of authorization, the state shouldn't be able to prove that authorization was exceeded. This does mean that snoops everywhere, and people who don't read website terms (me included), are at risk of violating terms of access we never saw or agreed to. I don't like that part of the law, and it should be changed. I'm fine with making it more limiting in ways that Professor Kerr and others have suggested.
But I don't know that it is invalid as vague - there are lots of things that may be illegal that people don't even know are on the books. Terms of service, at least, people have some chance of knowing, even if they choose not to read them. That doesn't mean it isn't scary, because I don't see behavior (including my own) changing anytime soon.
Monday, December 05, 2011
While My (Favorite Beatles Song) Gently Weeps
The voting is done and the world has (or 264 entities voting in unique user sessions have) selected the songs for "The Tighter" White Album (hereinafter TWA). The survey invited voters to make pairwise comparisons between two Beatles songs, under the premise that one could be kept, and one would be cut.
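For the curious, the aggregation step can be sketched in a few lines of code. The ballots below are invented for illustration (they are not the survey's actual votes), and a simple win-rate tally stands in for whatever algorithm the survey software actually used:

```python
# Turn pairwise "keep one, cut one" ballots into a ranking by win rate.
from collections import defaultdict

ballots = [  # (kept, cut) pairs -- illustrative data only
    ("While My Guitar Gently Weeps", "Wild Honey Pie"),
    ("Back in the USSR", "Don't Pass Me By"),
    ("While My Guitar Gently Weeps", "Don't Pass Me By"),
    ("Wild Honey Pie", "Don't Pass Me By"),
]

wins, games = defaultdict(int), defaultdict(int)
for winner, loser in ballots:
    wins[winner] += 1        # credit the song the voter kept
    games[winner] += 1       # both songs appeared in the matchup
    games[loser] += 1

# Rank every song that appeared by its share of matchups won.
ranking = sorted(games, key=lambda song: wins[song] / games[song], reverse=True)
```

With these made-up ballots, "While My Guitar Gently Weeps" tops the list and "Don't Pass Me By" finishes last; the real survey presumably used a more sophisticated pairwise method, but the principle is the same.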
There are several copyright-related implications of my experiment, and I wanted to unpack a few of them. Today, my thoughts on the potential authorship and ownership of the list itself. Tomorrow, a few thoughts on moral rights, whether I’ve done something wrong, and whether what I've done is actionable. [Edited to add hyperlink to Part II]
But first, the results -- An album's worth of music (two sides no longer than 24:25 each, the length of Side Four of the original), ranked from strongest to weakest:
While My Guitar Gently Weeps
Back in the USSR
Happiness is a Warm Gun
I'm So Tired
Mother Nature's Son
Cry Baby Cry
How did the voters do? Very well, by my estimation. I was pleasantly surprised by the balance. McCartney and Lennon each sang (which by this point in their career was a strong signal of primary authorship) 12 of the 30 tracks, and each had 7 selections on the TWA. (John also wrote "Good Night," which was sung by Ringo and overproduced at Paul's behest, so I think it can be safely cabined.) Only one of George Harrison's four compositions, "While My Guitar Gently Weeps," made the cut, but it was the strongest finalist. Ringo's "Don't Pass Me By," no critical darling, did poorly in the final assessment.*
It's possible, although highly unlikely in this instance, that the list of songs is copyrightable expression. As a matter of black letter law, one who compiles other copyrighted works may secure copyright protection in the
collection and assembling of preexisting materials or of data that are selected, coordinated, or arranged in such a way that the resulting work as a whole constitutes an original work of authorship.
Protection only extends to the material contributed by the author. The Second Circuit has found copyrightable expression in the exercise of judgment as expressed in a prediction about the price of used cars over the next six months, even where the prediction was striving to map as close as possible to the actual value of cars in those markets. Other Second Circuit cases recognize copyright protection in the selection of terms of venery -- labels for groups of animals (e.g., a pride of lions) and in the selection of nine pitching statistics from among scores of potential stats. In each of these cases, there was some judgment exercised about what to include or what not to include.
In this case, I proposed the question, put together the survey, monitored the queue, and recruited respondents through various channels. The voting, however, was actually done by multiple individuals selecting between pairs of songs. It's difficult to paint that as a "work of authorship" in any traditional sense of the phrase. I set up the experiment and then cut it loose. I could have made my own list (and have, but I won't bore you with that), and that list would have been my own work of authorship. This seems like something different, because I'm not making any independent judgment (other than the decision to limit the length of the TWA to twice the length of the longest side of the White Album).
Let's assume for a moment that there is protectable expression, even though I crowdsourced the selection process. Could it be that all 246 voters are joint authors with me in this work? It seems unlikely. The black letter test asks (1) whether we all intended our independent, copyrightable contributions to merge into an inseparable whole, and (2) whether we intended everyone to be a co-author. It's hard to call an individual vote between two songs a separately copyrightable contribution, even with the prompt: "The Beatles' White Album might have been stronger with fewer songs. Which song would you keep?" By atomizing the decision, I might be insulated from claims that individual voters are co-authors of the final list, although I suggested that there was something cooperative about this event in my description of the vote:
We’re crowdsourcing a “Tighter White Album.” Some say the White Album would have been better if it was shorter, which requires cutting some songs. Together, we can work it out. For each pair, vote for the song you would keep. Vote early and often, and share this with your friends. The voting will go through the end of November.
Still, to the extent they took seriously my admonitions, the readers were endeavoring to decide which of the two songs presented belonged on the TWA, whatever the factors that played into the decision. Might that choice also be protected in individual opinions sorted in a certain fashion? This really only matters if I make money from the proposed TWA. I would then need to make an accounting to my joint authors. And even if the vote itself was copyrightable expression, the voter likely granted me an implied license to include it in my final tally.
Should I have copyright protection in this list? Copyright protection is arguably granted to give authors (term of art) the incentive to create expressive works. I didn't need copyright protection as an incentive: I ran the survey so that I could talk about the results (and to satisfy my own curiosity). And my purposes are served if others take the results and run with them (although I would prefer to be attributed). Maybe no one else needs copyright protection, either, as lists ranking Beatles songs abound on the internet. Rolling Stone magazine has built a cottage industry on ranking and reranking the popular music output of the last 60 years, but uses its archives of rankings as an incentive to pay for a subscription. If the rankings didn't sell, magazines would likely do something else.
As an alternative, Rolling Stone might also arguably benefit from common law protection against the misappropriation of hot news, granted by the Supreme Court in INS v. AP, which would provide narrow injunctive relief to allow it to sell its news before others can copy without permission. The magazine might have trouble with recent precedent from the Second Circuit, which held that making the news does not qualify for hot news protection, although reporting the news might. So if I reproduce Rolling Stone's list (breaking news: Rolling Stone prefers Sonic Youth to Britney Spears), that might fall outside of hot news misappropriation, although perhaps not outside of copyright protection itself.
*Two personal reflections: (1) I am astounded that Honey Pie didn't make the cut. Perhaps voters confused it with Wild Honey Pie, which probably deserved its lowest ranking. (2) I sing Good Night to my five-year-old each night as a lullaby, and my world would be different without it. That is the inherent danger in a project like mine, and those who criticize the very idea that the White Album would have been better had it been shorter can marshal my own anecdotal evidence in support of their skepticism.
Sunday, November 27, 2011
Threading the Needle
Imagine that Ron Wyden fails: either PROTECT IP or SOPA / E-PARASITE passes and is signed into law by President Obama. Advocacy groups such as the EFF would launch an immediate constitutional challenge to the bill's censorship mandates. I believe the outcome of such litigation is far less certain than either side believes. American censorship legislation would keep lots of lawyers employed (always a good thing in a down economy), and might generate some useful First Amendment jurisprudence. Let me sketch three areas of uncertainty that the courts would have to resolve, and that improve the odds that such a bill would survive.
First, how high is the constitutional barrier to the legislation? Both bills look like systems of prior restraint, which saddles the government with a "heavy presumption" against their constitutionality. The Supreme Court's jurisprudence in the two most relevant prior cases, Reno v. ACLU and Ashcroft v. ACLU, applied strict scrutiny: laws must serve a compelling government interest, and be narrowly tailored to that interest. This looks bad for the state, but wait: we're dealing with laws regulating intellectual property, and such laws draw intermediate scrutiny at most. This is what I call the IP loophole in the First Amendment. Copyright law, for example, enjoys more lenient treatment under free speech examination because the law has built-in safeguards such as fair use, the idea-expression dichotomy, and the (ever-lengthening) limited term of rights.
Moreover, it’s not certain that the bills even regulate speech. Here, I mean “speech” in its First Amendment sense, not the colloquial one. Burning one’s draft card at a protest seems like speech to most of us – the anti-war message is embodied within the act – but the Supreme Court views it as conduct. And conduct can be regulated so long as the government meets the minimal strictures of rational basis review. The two bills focus on domain name filtering – they impede users from reaching certain on-line material, but formally limit only the conversion of domain name to IP address by an Internet service provider. (I’m skipping over the requirement that search engines de-list such sites, which is a much clearer case of regulating speech.) DNS lookups seem akin to conduct, although the Court’s precedent in this area is hardly a model of lucidity. (Burning the American flag = speech; burning a draft card = conduct. QED.) Other courts have struggled, most notably in the context of the anti-circumvention provisions of the Digital Millennium Copyright Act, to categorize domain names as speech or not-speech, and thus far have found a kind of Hegelian duality to them. That suggests an intermediate level of scrutiny, which would resonate with the IP loophole analysis above.
Second, who has standing? It seems that our plaintiffs would need to find a site that conceded it met the definition of a site “dedicated to the theft of U.S. property.” That seems hard to do until filtering begins – at which point whatever ills the legislation creates will have materialized. (It might also expose the site to suits from affected IP owners.) Perhaps Internet service providers could bring a challenge based on either third-party standing (on behalf of their users, if we think users’ rights are implicated, or the foreign sites) or their own speech interests. However, I think it’s unlikely that users would have standing, particularly given the somewhat dilute harm of being unable to reach material on allegedly infringing sites. And, as described above, it’s not clear that ISPs have a speech interest at all: domain name services simply may be conduct.
Finally, how can we distinguish E-PARASITE or PROTECT IP from similar legislation that passes constitutional muster? Section 1201 of the DMCA, for example, permits liability to be imposed not only on those who make tools for circumventing access controls available, but even on those who knowingly link to such tools on-line. The government can limit distribution of encryption technology – at least as object code – overseas, by treating it as a munition. And thus far, the federal government has been able to seize domain names under civil forfeiture provisions, with nary a quibble from the federal courts.
To be plain: I think both bills are terrible legislation. They’re certain to damage America’s technology innovation industries, which are the crown jewels of our economy and our future competitiveness. They turn over censorship decisions to private actors with no interest whatsoever in countervailing values such as free expression or, indeed, anything other than their own profit margins. And their procedural protections are utterly inadequate – in my view. But I think it is possible that these bills may thread the constitutional needle, particularly given the one-way ratchet of copyright protection before the federal courts. The decision in Ashcroft, for instance, found that end user filtering was a narrower alternative than the Child Online Protection Act. But end user filtering doesn’t work when the person installing the software is not a parent concerned about on-line filth, but one eager to download infringing movies. And that means that legislation may escape narrowness analysis as well. As I wrote in Orwell’s Armchair:
focusing only on content that is clearly unlawful – such as child pornography, obscenity, or intellectual property infringement – has constitutional benefits that can help a statute survive. These categories of material do not count as “speech” for First Amendment analysis, and hence the government need not satisfy strict scrutiny in attacking them. Recent bills seem to show that legislators have learned this lesson – the PROTECT IP Act, for example, targets only those Web sites with “no significant use other than engaging in, enabling, or facilitating” IP infringement. Banning only unprotected material could move censorial legislation past overbreadth objections.
So: the outcome of any litigation is not only highly uncertain, but more uncertain than free speech advocates believe. Please paint a more hopeful story for me, and tell me why I’m wrong.
Cross-posted at Info/Law.
Posted by Derek Bambauer on November 27, 2011 at 08:37 PM in Civil Procedure, Constitutional thoughts, Current Affairs, First Amendment, Information and Technology, Intellectual Property, Law and Politics, Web/Tech | Permalink | Comments (0) | TrackBack
Monday, November 21, 2011
How Not To Secure the Net
In the wake of credible allegations of hacking of a water utility, including physical damage, attention has turned to software security weaknesses. One might think that we'd want independent experts - call them whistleblowers, busticati, or hackers - out there testing, and reporting, important software bugs. But it turns out that overblown cease-and-desist letters still rule the day for software companies. Fortunately, when software vendor Carrier IQ attempted to misstate IP law to silence security researcher Trevor Eckhart, the EFF took up his cause. But this brings to mind three problems.
First, unfortunately, EFF doesn't scale. We need a larger-scale effort to represent threatened researchers. I've been thinking about how we might accomplish this, and would invite comments on the topic.
Second, IP law's strict liability, significant penalties, and increasing criminalization can create significant chilling effects for valuable security research. This is why Oliver Day and I propose a shield against IP claims for researchers who follow the responsible disclosure model.
Finally, vendors really need to have their general counsel run these efforts past outside counsel who know IP. Carrier IQ's C&D reads like a high school student did some basic Wikipedia research on copyright law and then ran the resulting letter through Google Translate (English to Lawyer). If this is the aptitude that Carrier IQ brings to IP, they'd better not be counting on their IP portfolio for their market cap.
When IP law suppresses valuable research, it demonstrates, in Oliver's words, that lawyers have hacked East Coast Code in a way it was not designed for. Props to EFF for hacking back.
Cross-posted at Info/Law.
Posted by Derek Bambauer on November 21, 2011 at 09:33 PM in Corporate, Current Affairs, First Amendment, Information and Technology, Intellectual Property, Science, Web/Tech | Permalink | Comments (2) | TrackBack
Friday, November 18, 2011
A Soap Impression of His Wife
As I previewed earlier this week, I want to talk about the copyright implications for 3D printers. A 3D printer is a device that can reproduce a 3-dimensional object by spraying layers of plastic, metal, or ceramic into a given shape. (I imagine the process smelling like those Mold-a-Rama plastic souvenir vending machines prevalent in many museums, a thought simultaneously nostalgic and sickening). Apparently, early adopters are already purchasing the first generation of 3D printers, and there are websites like Thingiverse where you can find plans for items you can print in your home, like these Tardis salt shakers.*
Perhaps unsurprisingly, there can be copyright implications. A recent NY Times blog post correctly notes that the 3D printer is primarily suited to reproduce what § 101 of the Copyright Act calls "useful articles," physical objects that have "an intrinsic utilitarian function," and which, by definition, receive no copyright protection...except when they do. A useful article can include elements that are protectable as a "pictorial, graphic, [or] sculptural work." The elements are protectable to the extent "the pictorial, graphic, or sculptural features...can be identified separately from, and are capable of existing independently of, the utilitarian aspects of the article." There are half a dozen tests courts have employed to determine whether protectable features can be separated from utilitarian aspects. Courts have rejected copyright protection for mannequin torsos and the ubiquitous ribbon bike rack, but granted it for belt buckles with ornamental elements that were not a necessary part of a functioning belt.
Print out a "functional" mannequin torso (or post your plans for it on the internet) and you should have no trouble. Post a schematic for the Vaquero belt buckle, and you may well be violating the copyright protection in the sculptural elements. But even that can be convoluted. The case law is mixed on how to think about 2D works derived from 3D works, and vice versa. A substantially similar 3D work can infringe a 2D graphic or pictorial work (Ideal Toy Corp. v. Kenner Prods. Div., 443 F. Supp. 291 (S.D.N.Y. 1977)), but constructing a building without permission from protectable architectural plans was not infringement, prior to a recent revision to the Copyright Act. Likewise, a drawing of a utilitarian item might be protectable as a drawing, but does not grant the copyright holder the right to control the manufacture of the item.
And if consumers are infringing, there is a significant risk that the manufacturer of the 3D printer could be vicariously or contributorily liable for that infringement. The famous Sony decision, which insulated the distribution of devices capable of commercially significant noninfringing uses, even if they could also be used for copyright infringement, has been narrowed both by the recent Grokster filesharing decision and by the DMCA anticircumvention provisions. The easy, but unsatisfying, takeaway is that 3D printers will keep copyright lawyers employed for years to come.
Back to the Tardis shakers, for a moment: the individual who posted them to Thingiverse noted that the shaker "is derivative of thingiverse.com/thing:1528 and thingiverse.com/thing:12278", a Tardis sculpture and the lid of a bottle, respectively. I found this striking for two reasons. First, it suggests a custom of attribution on Thingiverse, but I don't yet have a sense for whether it's widespread. Second, if either of those things is protectable as a copyrighted work (which seems more likely for the Tardis sculpture, and less so for the lid), then the Tardis salt shaker may be an unauthorized, and infringing, derivative work, and the decision to offer attribution perhaps unwise in retrospect.
* The TARDIS is the preferred means of locomotion of Doctor Who, the titular character of the long-running BBC science fiction program. It's a time machine / space ship disguised as a 1960s-era London police call box. The shape of the TARDIS, in its distinctive blue color, is protected by three registered trademarks in the UK.
Thursday, November 17, 2011
Yesterday, the House of Representatives held hearings on the Stop Online Piracy Act (it's being called SOPA, but I like E-PARASITE tons better). There's been a lot of good coverage in the media and on the blogs. Jason Mazzone had a great piece in TorrentFreak about SOPA, and see also stories about how the bill would re-write the DMCA, about Google's perspective, and about the Global Network Initiative's perspective.
My interest is in the public choice aspect of the hearings, and indeed the legislation. The tech sector dwarfs the movie and music industries economically - heck, the video game industry is bigger. Why, then, do we propose to censor the Internet to protect Hollywood's business model? I think there are two answers. First, these particular content industries are politically astute. They've effectively lobbied Congress for decades; Larry Lessig and Bill Patry among others have documented Jack Valenti's persuasive powers. They have more lobbyists and donate more money than companies like Google, Yahoo, and Facebook, which are neophytes at this game.
Second, they have a simpler story: property rights good, theft bad. The AFL-CIO representative who testified said that "the First Amendment does not protect stealing goods off trucks." That is perfectly true, and of course perfectly irrelevant. (More accurately: it is idiotic, but the AFL-CIO is a useful idiot for pro-SOPA forces.) The anti-SOPA forces can wheel to a simple argument themselves - censorship is bad - but that's somewhat misleading, too. The more complicated, and accurate, arguments are that SOPA lacks sufficient procedural safeguards; that it will break DNSSEC, one of the most important cybersecurity moves in a decade; that it fatally undermines our ability to advocate credibly for Internet freedom in countries like China and Burma; and that IP infringement is not always harmful and not always undesirable. But those arguments don't fit on a bumper sticker or the lede in a news story.
I am interested in how we decide on censorship because I'm not an absolutist: I believe that censorship - prior restraint - can have a legitimate role in a democracy. But everything depends on the processes by which we arrive at decisions about what to censor, and how. Jessica Litman powerfully documents the tilted table of IP legislation in Digital Copyright. Her story is being replayed now with the debates over SOPA and PROTECT IP: we're rushing into decisions about censoring the most important and innovative medium in history to protect a few small, politically powerful interest groups. That's unwise. And the irony is that a completely undemocratic move - Ron Wyden's hold, and threatened filibuster, in the Senate - is the only thing that may force us into fuller consideration of this measure. I am having to think hard about my confidence in process as legitimating censorship.
Cross-posted at Info/Law.
Posted by Derek Bambauer on November 17, 2011 at 09:15 PM in Constitutional thoughts, Corporate, Culture, Current Affairs, Deliberation and voices, First Amendment, Information and Technology, Intellectual Property, Music, Property, Web/Tech | Permalink | Comments (9) | TrackBack
Tuesday, November 15, 2011
You Say You Want a Revolution
Two potentially revolutionary "disruptive technologies" were back in the news this week. The first is ReDigi, a marketplace for the sale of used "legally downloaded digital music." For over 100 years, copyright law has included a first sale doctrine, which says I can transfer a "lawfully made" copy* (a material object in which a copyrighted work is fixed) by sale or other means, without permission of the copyright owner. The doctrine is codified at 17 U.S.C. § 109.
ReDigi says its marketplace falls squarely within the first sale limitation on the copyright owner's right to distribute, because it verifies that copies are "from a legitimate source," and it deletes the original from all the seller's devices. The Recording Industry Association of America has objected to ReDigi's characterization of the first sale claim on two primary grounds,** as seen in this cease and desist letter. First, as ReDigi describes its technology, it makes a copy for the buyer, and deletes the original copy from the computer of the seller. The RIAA finds fault with the copying. Section 109 insulates against liability for unauthorized redistribution of a work, but not for making an unauthorized copy of a work. Second, the RIAA is unpersuaded that ReDigi can guarantee that sellers are selling "lawfully made" digital files. ReDigi's initial response can be found here.
At a first cut, ReDigi might find it difficult to ever satisfy the RIAA that it was only allowing the resale of lawfully made digital files. Whether it can satisfy a court is another matter. It might be easier for an authorized vendor, like iTunes or Kindle, to mark legitimate copies going forward, but probably not to detect prior infringement.
Still, verifying legitimate copies may be easier than shoehorning the "copy and delete" business model into the language of § 109. Deleting the original and moving a copy seems in line with the spirit of the law, but not its letter. Should that matter? ReDigi attempts to position itself as close as technologically possible to the framework spelled out in the statute, but that's a framework designed to handle the sale of physical objects that embody copyrightable works.
This is not the only area where complying with statutory requirements can tie businesses in knots. Courts have consistently struggled with how to think about digital files. In London-Sire Records v. Does, the court had to puzzle out whether a digital file can be a material object and thus a copy* distributed in violation of § 106(3). The policy question is easy to articulate, if reasonable minds still differ about the answer: is the sale and distribution of digital files something we want the copyright owner to control or not?
As a statutory matter, the court in London-Sire concluded that material didn't mean material in its sense as "a tangible object with a certain heft," but instead "as a medium in which a copyrighted work can be 'fixed.'" This definition is, of course, driven by the statute: copyright subsists once an original work of authorship is fixed in a tangible medium of expression from which it can be reproduced, and the Second Circuit has recently held in the Cablevision case that a work must also be fixed -- embodied in a copy or phonorecord for a period of more than transitory duration -- for infringement to occur. Policy intuitions may be clear, but fitting the solution in the statutory language sometimes is not. And a business model designed to fit existing statutory safe harbors might do things that appear otherwise nonsensical, like Cablevision's decision to keep individual copies of digital videos recorded by consumers on its servers, to avoid copyright liability.
Potentially even more disruptive is the 3D printer, prototypes of which already exist in the wild, and which I will talk more about tomorrow.
* Technically, a digital audio file is a phonorecord, and not a copy, but that's a distinction without a difference here.
** The RIAA also claims that ReDigi violates the exclusive right of public performance by playing 30 second samples of members' songs on its website, but that's not a first sale issue.
Thursday, November 10, 2011
Cyber-Terror: Still Nothing to See Here
Cybersecurity is a hot policy / legal topic at the moment: the SEC recently issued guidance on cybersecurity reporting, defense contractors suffered a spear-phishing attack, the Office of the National Counterintelligence Executive issued a report on cyber-espionage, and Brazilian ISPs fell victim to DNS poisoning. (The last highlights a problem with E-PARASITE and PROTECT IP: if they inadvertently encourage Americans to use foreign DNS providers, they may worsen cybersecurity problems.) Cybersecurity is a moniker that covers a host of problems, from identity theft to denial of service attacks to theft of trade secrets. The challenges are real, and there are many of them. That's why it is disheartening to see otherwise knowledgeable experts focusing on chimerical targets.
For example, Eugene Kaspersky stated at the London Cyber Conference that "we are close, very close, to cyber terrorism. Perhaps already the criminals have sold their skills to the terrorists - and then...oh, God." FBI executive assistant director Shawn Henry said that attacks could "paralyze cities" and that "ultimately, people could die." Do these claims hold up? What, exactly, is it that cyber-terrorists are going to do? Engage in identity theft? Steal U.S. intellectual property? Those are somewhat worrisome, but where is the "terror" part? Terrorists support malevolent activities with all sorts of crimes. But that's "support," not "terror." Hysterics like Richard Clarke spout nonsense about shutting down air traffic control systems or blowing up power plants, but there is precisely zero evidence that even nation-states can do this sort of thing, let alone small, non-state actors. The "oh, God" part of Kaspersky's comment is a standard rhetorical trope in the apocalyptic discussions of cybersecurity. (I knock these down in Conundrum, coming out shortly in Minnesota Law Review.) And paralyzing a city isn't too hard: snowstorms do it routinely. The question is how likely such threats are to materialize, and whether the proposed answers (Henry thinks we should build a new, more secure Internet) make any sense.
There are at least two plausible reasons why otherwise rational people spout lurid doomsday scenarios instead of focusing on the mundane, technical, and challenging problems of networked information stores. First, and most cynically, they can make money from doing so. Kaspersky runs an Internet security company; Clarke is a cybersecurity consultant; former NSA director Mike McConnell works for a law firm that sells cybersecurity services to the government. I think there's something to this, but I'm not ready to accuse these people of being venal. I think a more likely explanation flows from Paul Ohm's Myth of the Superuser: many of these experts have seen what truly talented hackers can do, given sufficient time, resources, and information. They then extrapolate to a world where such skills are commonplace, and unrestrained by ethics, social pressures, or sheer rational actor deterrence. Combine that with the chance to peddle one's own wares, or books, to address the problems, and you get the sum of all fears. Cognitive bias matters.
The sky, though, is not falling. Melodrama won't help - in fact, it distracts us from the things we need to do: to create redundancy, to test recovery scenarios, to deploy more secure software, and to encourage a culture of testing (the classic "hacking"). We are not going to deploy a new Internet. We are not going to force everyone to get an Internet driver's license. Most cybersecurity improvements are going to be gradual and unremarkable, rather than involving Bruce Willis and an F-35. Or, to quote Frank Drebin, "Nothing to see here, please disperse!"
Cross-posted at Info/Law.
Saturday, November 05, 2011
The House of Representatives is considering the disturbingly-named E-PARASITE Act. The bill, which is intended to curb copyright infringement on-line, is similar to the Senate's PROTECT IP Act, but much much worse. It's as though George Lucas came out with the director's cut of "The Phantom Menace," but added in another half-hour of Jar Jar Binks.
As with PROTECT IP, the provisions allowing the Attorney General to obtain a court order to block sites that engage in criminal copyright violations are, in theory, less objectionable. But they're quite problematic in their particulars. Let me give three examples.
First, the orders not only block access through ISPs, but also require search engines to de-list objectionable sites. That not only places a burden on Google, Bing, and other search sites, but it "vaporizes" (to use George Orwell's term) the targeted sites until they can prove they're licit. That has things exactly backwards: the government must prove that material is unlawful before restraining it. This aspect of the order is likely constitutionally infirm.
Second, the bill attacks circumvention as well: MAFIAAFire and its ilk become unlawful immediately. Filtering creep is inevitable: you have to target circumvention, and the scope of circumvention targeted widens with time. Proxy services like Anonymizer are likely next.
Finally, commentators have noted that the bill relies on DNS blocking, but they're actually underestimating its impact. The legislation says ISPs must take "technically feasible and reasonable measures designed to prevent access by its subscribers located within the United States" to Web sites targeted under the bill, "including measures designed to prevent the domain name of the foreign infringing site (or portion thereof) from resolving to that domain name's Internet protocol address." The definitional section of the bill says that "including" does not mean "limited to." In other words, if an ISP can engage in technically feasible, reasonable IP address blocking or URL blocking - which is increasingly possible with providers who employ deep packet inspection - it must do so. The bill, in other words, targets more than the DNS.
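To make the mechanics concrete, here is a toy sketch (all domain names and IP addresses are hypothetical, and no ISP implements filtering this simply) of why blocking only name resolution is leaky: the lookup fails at the ISP's resolver, but the site itself is untouched.

```python
# Toy illustration of DNS-only blocking. Names and addresses are made up.

BLOCKLIST = {"infringing-site.example"}   # names covered by a court order

DNS_TABLE = {                             # toy zone data; real resolution
    "infringing-site.example": "203.0.113.7",   # is recursive and cached
    "lawful-site.example": "198.51.100.4",
}

def resolve(domain):
    """Return the IP for a domain, or None if this resolver filters the name."""
    if domain in BLOCKLIST:
        return None               # lookup fails at this resolver...
    return DNS_TABLE.get(domain)

print(resolve("lawful-site.example"))       # 198.51.100.4
print(resolve("infringing-site.example"))   # None: blocked name
# ...but a user who learns the IP address (203.0.113.7), or who points her
# machine at a foreign resolver, connects anyway. That gap is what the
# bill's "including" language sweeps in: IP-address and URL blocking too.
```

The sketch also shows why circumvention tools like MAFIAAFire are trivial to build: they just substitute an unfiltered lookup for the filtered one.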
On the plus side, the bill does provide notice to users (the AG must specify text to display when users try to access the site), and it allows for amended orders to deal with the whack-a-mole problem of illegal content evading restrictions by changing domain names or Web hosting providers.
The private action section of the bill is extremely problematic. Under its provisions, YouTube is clearly unlawful, and neither advertising nor payment providers would be able to transact business with it. The content industry doesn't like YouTube - see the Viacom litigation - but it's plainly a powerful and important innovation. This part of E-PARASITE targets sites "dedicated to the theft of U.S. property." (Side note: sorry, it's not theft. This is a rhetorical trope in the IP wars, but IP infringement simply is not the same as theft. Theft deals with rivalrous goods. In addition, physical property rights do not expire with time. If this is theft, why aren't copyright and patent expirations a regulatory taking? Why not just call it "property terrorism"?)
So, what defines such a site? It is:
- "primarily designed or operated for the purpose of, has only limited purpose or use other than, or is marketed by its operator or another acting in concert with that operator for use in, offering goods or services in a manner that engages in, enables, or facilitates" violations of the Copyright Act, Title I of the Digital Millennium Copyright Act, or anti-counterfeiting laws; or,
- "is taking, or has taken, deliberate actions to avoid confirming a high probability of the use of the U.S.-directed site to carry out the acts that constitute a violation" of those laws; or,
- the owner "operates the U.S.-directed site with the object of promoting, or has promoted, its use to carry out acts that constitute a violation" of those laws.
That is an extraordinarily broad ambit. Would buying keywords, for example, that mention a popular brand constitute a violation? And how do we know what a site is "primarily designed for"? Under this definition, YouTube arguably has "only limited purpose or use other than" facilitating copyright infringement. Heck, if the VCR were a Web site, it'd be unlawful, too.
The bill purports to establish a DMCA-like regime for such sites: the IP owner provides notice, and the site's owner can challenge via counter-notification. But the defaults matter here, a lot: payment providers and advertisers must cease doing business with such sites unless the site owner counter-notifies, and even then, the IP owner can obtain an injunction to the same effect. Moreover, to counter-notify, a site owner must concede jurisdiction, which foreign sites will undoubtedly be reluctant to do. (Litigating in the U.S. is expensive, and the courts tend to be friendly towards local IP owners. See, for example, Judge Crotty's slipshod opinion in the Rojadirecta case.)
I've argued in a new paper that using direct, open, and transparent methods to censor the Internet is preferable to our current system of "soft" censorship via domain name seizures and backdoor arm-twisting of private firms, but E-PARASITE shows that it's entirely possible for hard censorship to be badly designed. The major problem is that it outsources censorship decisions to private companies. Prior restraint is an incredibly powerful tool, and we need the accountability that derives from having elected officials make these decisions. Private firms have one-sided incentives, as we've seen with DMCA take-downs.
In short, the private action measures make it remarkably easy for IP owners to cut off funding for sites to which they object. These include Torrent tracker sites, on-line video sites, sites that host mash-ups, and so forth. The procedural provisions tilt the table strongly towards IP owners, including by establishing very short time periods by which advertisers and payment providers have to comply. Money matters: WikiLeaks is going under because of exactly these sort of tactics.
America is getting into the Internet censorship business. We started down this path to deal with pornographic and obscene content; our focus has shifted to intellectual property. I've argued that this is because IP legislation draws lower First Amendment scrutiny than other speech restrictions, and interest groups are taking advantage of that loophole. It's strange to me that Congress would damage innovation on the Internet - only the most powerful communications medium since words on paper - to protect movies and music, which are relatively small-scale in the U.S. economy. But, as always with IP, the political economy matters.
I predict that a bill like PROTECT IP or E-PARASITE will become law. Then, we'll fight out again what the First Amendment means on the Internet, and then the myth of America's free speech exceptionalism on-line will likely be dead.
Cross-posted at Info/Law.
Posted by Derek Bambauer on November 5, 2011 at 05:06 PM in Civil Procedure, Constitutional thoughts, Culture, Current Affairs, First Amendment, Information and Technology, Intellectual Property, Law and Politics, Music, Property, Web/Tech | Permalink | Comments (2) | TrackBack
Thursday, November 03, 2011
Why Don't We Do It In the Road?
In that White Album gem, "Why Don't We Do It In the Road?", Paul McCartney insinuated that whatever "it" was wouldn't matter because "no one will be watching us." The feeling of being watched can change the way in which one engages in an activity. Often, perceiving one's own behavior clearly is an essential step in changing that behavior.
I've thought about this lately as I've tried to become more productive in my writing, and I'm drawn to resources that help me externalize my monitoring process. There are various commitment mechanisms out there, which I've lumped roughly into three groups. Some are designed to make me more conscious of my own obligation to write. Others are designed to bring outsiders on board, inviting/forcing me to give an account of my productivity or lack thereof to others. And some, like StickK, combine the second with the means to penalize me if I fail to perform.
Should I need tricks to write? Perhaps not, but even with the best of intentions, it's easy to get waylaid by the administrative and educational requirements of the job. Commitment mechanisms help me remember why I want to fill those precious moments of downtime with writing. Below the fold I'll discuss some methods I've tried in that first category, and problems that make them less than optimal for my purposes. Feel free to include your suggestions and experiences here, as well. Also note that over at Concurring Opinions, Kaimipono Wenger has started the first annual National Article Finishing Month, a commitment mechanism I have not yet tried (but just might). In subsequent posts, I'll tackle socializing techniques and my love / hate relationship with StickK.
Perhaps like many of you, I find the Internet to be a two-edged sword. While I can be more productive because so many resources are at my fingertips, I also waste too much time surfing the web. I've tried commitment mechanisms that shut down the Internet, but have so far found them lacking. I've tried Freedom, which kills the entire Internet connection for a designated period of time. That's helpful to an extent, but I store my documents on Dropbox, so my work moves with me from home to office without the need to keep track of a USB drive. While Dropbox should automatically sync once Freedom stops running, I've found that the process hasn't been as smooth as I hoped. This in turn makes me hesitant to rely on Freedom.
What makes me even more hesitant to use Freedom is that, every other time I use it, I have to reboot my computer to get back on the Internet. If you are not saving your work to the cloud, you may see that as a feature and not a bug.
I turned next to StayFocusd, a Chrome extension that allows me to pick and choose websites to block. StayFocusd reminds me when I'm out of free browsing time with a pop-up that dominates the screen and delivers a mild scold, like "Shouldn't you be working?" If you are the type to use multiple browsers for different purposes, however, StayFocusd is only a Firefox window away from being relatively ineffectual.
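The core mechanic here is simple enough to sketch. Below is a minimal, hypothetical Python model of a daily free-browsing budget; the class name, the one-hour default, and the nag string are my illustrative assumptions, not StayFocusd's actual implementation:

```python
class BrowsingBudget:
    """Sketch of a StayFocusd-style mechanic (my simplification):
    time spent on blocked sites draws down a daily allowance."""

    def __init__(self, daily_limit_seconds=3600):
        self.remaining = daily_limit_seconds
        self.blocked_sites = set()

    def block(self, site):
        # Sites the user has chosen to ration.
        self.blocked_sites.add(site)

    def visit(self, site, seconds):
        """Return a nag message once the budget runs out, else None."""
        if site not in self.blocked_sites:
            return None          # un-rationed sites are always free
        self.remaining -= seconds
        if self.remaining <= 0:
            return "Shouldn't you be working?"
        return None
```

As the post notes, the obvious weakness of any per-browser budget is that it only sees traffic in the browser it runs in.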
The self-monitoring tool I've liked the best so far is Write or Die. You set the amount of time you want to write and the number of words you propose to generate, and you start writing. As I have it set, if I stop typing for more than 10 seconds, the program makes irritating noises (babies crying, horns honking, etc.) until I start typing again. Write or Die is great for plowing through a first draft quickly, but it is less effective if the goal is to refine text. This is in part because the interface gives you bare-bones text. I'm too cheap to download the product, which has more bells and whistles than the free online version (like the ability to italicize text). In addition, in the time it takes to think about the line I'm rewriting, the babies begin to howl again.
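The Write or Die mechanic described above - reset a grace period on every keystroke, nag once it lapses - can be sketched in a few lines. This is a hedged illustration only: the class name and the injectable clock are mine, and the ten-second threshold mirrors my settings, not the product's code:

```python
import time


class WriteOrDieTimer:
    """Minimal sketch of the Write or Die mechanic: nag after a pause.

    The grace period and class design are illustrative assumptions;
    the real product's thresholds and sounds differ."""

    GRACE_SECONDS = 10

    def __init__(self, clock=time.monotonic):
        self._clock = clock                  # injectable for testing
        self._last_keystroke = clock()

    def keystroke(self):
        # Any typing resets the grace period.
        self._last_keystroke = self._clock()

    def should_nag(self):
        # True once the writer has been idle past the grace period -
        # in the real tool, cue the crying babies and honking horns.
        return self._clock() - self._last_keystroke > self.GRACE_SECONDS
```

The design explains the revision problem in the post: any pause longer than the grace period - including time spent thinking about a sentence - triggers the nag.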
So, what commitment mechanisms do you use when you don't feel like writing?
Wednesday, October 26, 2011
How Baseball Made Me a Pirate
Major League Baseball has made me a pirate, with no regrets. Nick Ross, on Australia's ABC, makes "The Case for Piracy." His article argues that piracy often results, essentially, from market failure: customers are willing to pay content owners for access to material, and the content owners refuse - because they can't be bothered to serve that market or geography, because they are trying to force consumers onto another platform, or because they are trying to leverage interest in, say, Premier League matches as a means of getting cable customers to buy the Golf Network. The music industry made exactly these mistakes before the combination of Napster and iTunes forced it into better behavior: MusicNow and Pressplay were expensive disasters, loaded with DRM restrictions and focused on preventing any possible re-use of content rather than delivering actual value. TV content owners are now making the same mistake.
Take, for example, MLB. I tried to purchase a plan to watch the baseball playoffs on mlb.com - I don't own a TV, and it's a bit awkward to hang out at the local pub for 3 hours. MLB didn't make it obvious how to do this. Eventually, I clicked a plan that indicated it would allow me to watch the entire postseason for $19.99, and gladly put in my credit card number. My mistake. It turns out that option is apparently for non-U.S. customers only. I learned this the hard way when I tried to watch an ALDS game, only to get... nothing. No content, except an ad that tried to get me to buy an additional plan. That's right: for my $19.99, I received literally nothing of value. When I e-mailed MLB Customer Service to try to get a refund, here's the answer I received: "Dear Valued Subscriber: Your request for a refund in connection with your 2011 MLB.TV Postseason Package subscription has been denied in accordance with the terms of your purchase." Apparently the terms allow fraud.
Naturally, I'm going to dispute the charge with my credit card company. But here's the thing: I love baseball. I would gladly pay MLB to watch the postseason on-line. And yet there's no way to do so, legally. In fact, apparently the only people who can are folks outside the U.S. And if you try to give them your money anyway, they'll take it, and then tell you how valued you are. But you're not. So, I'm finding ways to watch MLB anyway. If you have suggestions or tips, offer 'em in the comments - there must be a Rojadirecta for baseball. And next season, when I want to watch the Red Sox, that's the medium I'll use - not MLB's Extra Innings. MLB has turned me into a pirate, with no regrets.
Cross-posted at Info/Law.
Posted by Derek Bambauer on October 26, 2011 at 07:48 PM in Criminal Law, Culture, Information and Technology, Intellectual Property, International Law, Music, Odd World, Sports, Television, Web/Tech | Permalink | Comments (34) | TrackBack
Thursday, October 20, 2011
Policing Copyright Infringement on the Net
Mark Lemley has a smart editorial up at Law.com on the hearings at the Second Circuit Court of Appeals in Viacom v. YouTube. The question is, formally, one of interpreting Title II of the Digital Millennium Copyright Act (17 U.S.C. 512), and determining whether YouTube meets the statutory requirements for immunity from liability. But this is really a fight about how much on-line service providers must do to police, or protect against, copyright infringement. Mark, and the district court in the case, think that Congress answered this question rather clearly: services such as YouTube need to respond promptly to notifications of claimed infringement, and to avoid business models where they profit directly from infringement. The fact that a site attracts infringing content (which YouTube indubitably does) can't wipe out the safe harbor, because then the DMCA would be a nullity. It may be that the burden of policing copyrights should fall more heavily on services such as YouTube than it currently does. But, if that's the case, Viacom should be lobbying Congress, not the Second Circuit. I predict a clean win for YouTube.
Monday, October 17, 2011
The Myth of Cyberterror
UPI's article on cyberterrorism helpfully states the obvious: there's no such thing. This is in sharp contrast to the rhetoric in cybersecurity discussions, which highlights purported threats from terrorists to the power grid, the transportation system, and even the ability to play Space Invaders using the lights of skyscrapers. It's all quite entertaining, except for two problems: 1) perception frequently drives policy, and 2) all of these risks are chimerical. Yes, non-state actors are capable of defacing Web sites and even launching denial of service attacks, but that's a far cry from train bombings or shootings in hotels.
The response from some quarters is that, while terrorists do not currently have the capability to execute devastating cyberattacks, they will at some point, and so we should act now. I find this unsatisfying. Law rarely imposes large current costs, such as changing how the Internet's core protocols run, to address remote risks of uncertain (but low) incidence and uncertain magnitude. In 2009, nearly 31,000 people died in highway car crashes, but we don't require people to drive tanks. (And few people choose to do so, except for Hummer owners.)
Why, then, the continued focus on cyberterror? I think there are four reasons. First, terror is the policy issue of the moment: connecting to it both focuses people's attention and draws funding. Second, we're in an age of rapid and constant technological change, which always produces some level of associated fear. Few of us understand how BGP works, or why its lack of built-in authentication creates risk, and we are afraid of the unknown. Third, terror attacks are like shark attacks. We are afraid of dying in highly gory or horrific fashion, rather than basing our worries on actual incidence of harm (compare our fear of terrorists versus our fear of bad drivers, and then look at the underlying number of fatalities in each category). Lastly, cybersecurity is a battleground not merely for machines but for money. Federal agencies, defense contractors, and software companies all hold a stake in concentrating attention on cyber-risks and offering their wares as a means of remediating them.
So what should we do at this point? For cyberterror, the answer is "nothing," or at least nothing that we wouldn't do anyway. Preventing cyberattacks by terrorists, nation-states, and spies involves the same measures, as I argue in Conundrum. But this approach gets called "naive" with some regularity, so I'd be interested in your take...
Posted by Derek Bambauer on October 17, 2011 at 04:43 PM in Criminal Law, Current Affairs, Information and Technology, International Law, Law and Politics, Science, Web/Tech | Permalink | Comments (7) | TrackBack
Friday, October 14, 2011
Behind the Scenes of Six Strikes
Wired has a story on the cozy relationship between content industries and the Obama administration, which resulted in the deployment of the new "six strikes" plan to combat on-line copyright infringement. Internet security and privacy researcher Chris Soghoian obtained e-mail communication between administration officials and industry via a Freedom of Information Act (FOIA) request. (Disclosure: Jonathan Askin and I represent Chris in his appeal regarding this FOIA request.) The e-mails demonstrate vividly what everyone suspected: Hollywood - in the form of the music and movie industries - has an administration eager to be helpful, including by pressuring ISPs. Stay tuned.
Posted by Derek Bambauer on October 14, 2011 at 11:10 AM in Blogging, Culture, Current Affairs, Film, Information and Technology, Intellectual Property, Judicial Process, Law and Politics, Music, Web/Tech | Permalink | Comments (0) | TrackBack
Thursday, October 13, 2011
The Pirates' Code
There have been a number of attempts to alter consumer norms about copyright infringement (especially those of teenagers). The MPAA has its campaigns; the BSA has its ferret; and now New York City has a crowdsourced initiative to design a new public service announcement. At first blush, the plan looks smart: rather than have studio executives try to figure out what will appeal to kids (Sorcerer's Apprentice, anyone?), leave it to the kids themselves.
On further inspection, though, the plan seems a bit shaky. First, it's not actually a NYC campaign: the Bloomberg administration is sockpuppeting for NBC Universal. Second, why is the City even spending scarce taxpayer funds on this? Copyright enforcement is primarily private, although the Obama administration is lending a helping hand. Third, is this the most effective tactic? It seems more efficient to go after the street vendors who sell bootleg DVDs, for example - I can buy a Blockbuster Video store's worth of movies just by walking out the front door of my office.
Yogi Berra (or was it Niels Bohr?) said that the hardest thing to predict is the future. And the hardest thing about norms is changing them. Larry Lessig's New Chicago framework not only points to the power of norms regulation (along the lines of Bob Ellickson), but suggests that norms are effectively free - no one has to pay to enforce them. This makes them attractive as a means of regulation. The problem, though, is that norms tend to be resistant to overt efforts to shift them. Think of how long it took to change norms around smoking - a practice proven to kill you - and you'll appreciate the scope of the challenge. The Bloomberg administration should save its resources for moving snow this winter...
Monday, October 10, 2011
Spying, Skynet, and Cybersecurity
The drones used by the U.S. Air Force have been infected by malware - reportedly, a program that logs the commands transmitted from the pilots' computers at a base in Nevada to the drones flying over Iraq and Afghanistan. This has led to comparisons to Skynet, particularly since the Terminators' network was supposed to become self-aware in April. While I think we don't yet need to stock up on robot-sniffing dogs, the malware situation is worrisome, for three reasons.
First, the military is aware of the virus's presence, but is reportedly unable to prevent it from re-installing itself even after they clean off the computers' drives. Wired reports that re-building the computers is time-consuming. That's undoubtedly true, but cyber-threats are an increasing part of warfare, and they'll soon be ubiquitous. I've argued that resilience is a critical component of cybersecurity. The Department of Defense needs to assume that their systems will be compromised - because they will - and to plan for recovery. Prevention is impossible; remediation is vital.
Second, the malware took hold despite the air gap between the drones' network and the public Internet. The idea of separate, isolated networks is a very attractive one in security, but it's false comfort. In a world where flash drives are ubiquitous, where iPods can store files, and where one can download sensitive data onto a Lady Gaga CD, information will inevitably cross the gap. Separation may be sensible as one security measure, but it is not a panacea.
Lastly, the Air Force is the branch of the armed forces currently in the lead in terms of cyberspace and cybersecurity initiatives. If they can't solve this problem, do we want them taking the lead on this new dimension of the battlefield?
It's not clear how seriously the drones' network has been compromised - security breaches have occurred before. But cybersecurity is difficult. We saw the first true cyberweapon in Stuxnet, which damaged Iran's nuclear centrifuges and set back its uranium enrichment program. That program too looked benign, on first inspection. Let's hope the program here is closer to Kyle Reese than a T-1000.
Tuesday, October 04, 2011
America Censors the Internet
If you're an on-line poker player, a fan of the Premier League, or someone who'd like to visit Cuba, you probably already know this. Most people, though, aren't aware that America censors the Internet. Lawyers tend to believe that a pair of Supreme Court cases, Reno v. ACLU (1997) and Ashcroft v. ACLU (2004), permanently interred government censorship of the Net in the U.S. Not so.
In a new paper, Orwell's Armchair (forthcoming in the University of Chicago Law Review), I argue that government censors retain a potent set of tools to block disfavored on-line content, from using unrelated laws (like civil forfeiture statutes) as a pretext to paying intermediaries to filter to pressuring private actors into blocking. These methods are not only indirect, they are less legitimate than overt, transparent regulation of Internet content. In the piece, I analyze the constraints that exist to check such soft censorship, and find that they are weak at best. So, I argue, if we're going to censor the Internet, let's be clear about it: the paper concludes by proposing elements of a prior restraint statute for on-line content that could both operate legitimately and survive constitutional scrutiny.
Jerry Brito of George Mason University's Mercatus Center kindly interviewed me about the issues the article raises for his Surprisingly Free podcast. It's worth a listen, even though my voice is surprisingly annoying.
Cross-posted at Info/Law.
Posted by Derek Bambauer on October 4, 2011 at 06:14 PM in Civil Procedure, Constitutional thoughts, Current Affairs, First Amendment, Information and Technology, Intellectual Property, Law and Politics, Web/Tech | Permalink | Comments (3) | TrackBack
Sunday, October 02, 2011
What Commons Have in Common
Thanks to Dan and the Prawfs crew for having me! Blogging here is a nice distraction from the Red Sox late-season collapse.
Last week, NYU Law School hosted Convening Cultural Commons, a two-day workshop intended to accelerate the work on information commons begun by Carol Rose, Elinor Ostrom, and Mike Madison / Kathy Strandburg / Brett Frischmann. All four of the above were presented as case studies (by Dave Fagundes, Sonali Shah, Charles Schweik, and Mike Madison, respectively). Elinor Ostrom gave the keynote address, and sat in on most of the presentations. It's exciting stuff: Mike, Kathy, and Brett have worked hard to adapt Ostrom's Institutional Analysis and Development framework to analysis of information commons such as Wikipedia, the Associated Press, and jambands. Yet, there was one looming issue that the conferees couldn't resolve: what, exactly, is a commons?
The short answer is: no one knows. Ostrom's work counsels a bottom-up, accretive way to answer this question. Over time, with enough case studies, the boundaries of what constitutes a "commons" become clear. So, the conventional answer, and one supported by a lot of folks at the NYU conference, is to go forth and, in the spirit of Clifford Geertz, engage in collection and thick description of things that look like, or might be, commons.
As an outsider to the field, I think that's a mistake. What commons research in law (and allied disciplines) needs are theories of the middle range. There is no Platonic or canonical commons out there. Instead, there are a number of dimensions along which a particular set of information can be measured, and which make it more or less "commons-like." Let me suggest a few as food for thought:
- Barriers to access - some information, like Wikipedia, is available to all comers; other data, like pooled patents, are only available to members of the club. The lower the barriers to access, the more commons-like a resource is.
- State role in management - government may be involved in managing resources directly (for example, data in the National Practitioner Data Bank), indirectly (for example, via intellectual property laws), or not at all. I think a resource is more commons-like as it is less managed by the state.
- Ability to privatize - information resources are more and less subject to privatization. Information in the public domain, such as Shakespeare's plays, cannot be privatized - no one can assert rights over them (at least, not under American copyright law). Some information commons protected by IP law cannot be privatized, such as software developed under the GPL, and some can be, such as software developed under the Apache License. The greater the ability to privatize, I'd argue, the less commons-like.
- Depletability - classic commons resources (such as fisheries or grazing land) are subject to depletion. Information resources can be depleted, though depletion here may come more in the form of congestion, as Yochai Benkler argues. Internet infrastructure is somewhat subject to depletion, while ideas or prices are not. The greater the risk of depletion, the less commons-like.
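To make the dimensions above concrete, here is a hypothetical sketch that scores a resource on each axis. The 0-to-1 scales, the equal weighting, and the example numbers are entirely my assumptions - real commons analysis would not reduce to a single number - but the sketch shows how all four dimensions cut in the same direction:

```python
from dataclasses import dataclass


@dataclass
class CommonsProfile:
    """Hypothetical scoring of the four dimensions discussed above.

    All scales and weights are illustrative assumptions, not part of
    the Ostrom or Madison/Strandburg/Frischmann frameworks."""

    access_barriers: float    # 0 = open to all, 1 = members-only club
    state_management: float   # 0 = no state role, 1 = direct management
    privatizability: float    # 0 = cannot be privatized, 1 = fully can
    depletability: float      # 0 = non-rival (ideas), 1 = easily depleted

    def commons_likeness(self) -> float:
        # Higher score = more commons-like; each dimension is a penalty.
        penalties = (self.access_barriers + self.state_management +
                     self.privatizability + self.depletability)
        return 1.0 - penalties / 4.0
```

On these made-up numbers, an open resource like Wikipedia would score near 1, while a members-only patent pool would land much lower - which is all the sketch is meant to show.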
Finally, why do we care about the commons? I think that commons studies are a reaction to the IP wars: they are a form of resistance to IP maximalism. By showing that information commons are not only ubiquitous, but vital to innovation and even a market economy, legal scholars can offer a principled means of arguing against ever-increasing IP rights. That makes studying these resources - and, hopefully, putting forward testable theories about what are and are not attributes of a commons - vital to enlightened policymaking.
(Cross-posted to Info/Law.)