Friday, May 25, 2012

Using empirical methods to analyze the effectiveness of persuasive techniques

Slate Magazine has a story detailing the Obama campaign's embrace of empirical methods to assess the relative effectiveness of political advertisements. 

To those familiar with the campaign’s operations, such irregular efforts at paid communication are indicators of an experimental revolution underway at Obama’s Chicago headquarters. They reflect a commitment to using randomized trials, the result of a flowering partnership between Obama’s team and the Analyst Institute, a secret society of Democratic researchers committed to the practice, according to several people with knowledge of the arrangement. ...

The Obama campaign’s “experiment-informed programs”—known as EIP in the lefty tactical circles where they’ve become the vogue in recent years—are designed to track the impact of campaign messages as voters process them in the real world, instead of relying solely on artificial environments like focus groups and surveys. The method combines the two most exciting developments in electioneering practice over the last decade: the use of randomized, controlled experiments able to isolate cause and effect in political activity and the microtargeting statistical models that can calculate the probability a voter will hold a particular view based on hundreds of variables.

Curiously, this story comes on the heels of a New York Times op-ed questioning the utility and reliability of social science approaches to policy concerns and a movement in Congress to defund the political science studies program at NSF.

Jeff

 

Posted by Dingo_Pug on May 25, 2012 at 09:13 AM in Current Affairs, Information and Technology, Science | Permalink | Comments (1) | TrackBack

Wednesday, May 16, 2012

Contrarian Statutory Interpretation Continued (CDA Edition)

Following my contrarian post about how to read the Computer Fraud and Abuse Act, I thought I would write about the Communications Decency Act. I've written about the CDA before (hard to believe it has been almost 3 years!), but I'll give a brief summary here.

The CDA provides online providers with immunity for the acts of their users. For example, if a user posts defamatory content in a comment, a blog need not remove the comment to remain immune, even if the blog receives notice that the content is defamatory, and even if the blog knows the content is defamatory.

I agree with most of my colleagues, who believe this statute is a good thing for the internet. Where I part ways with most of my colleagues is in how broadly to read the statute.

Since this is a post about statutory interpretation, I'll include the statute:

Section 230(c)(1) of the CDA states that:

No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.

In turn, an interactive computer service is:

any information service, system, or access software provider that provides or enables computer access by multiple users to a computer server, including specifically a service or system that provides access to the Internet and such systems operated or services offered by libraries or educational institutions.

Further, an information content provider is:

any person or entity that is responsible, in whole or in part, for the creation or development of information provided through the Internet or any other interactive computer service.

So, where do I clash with others on this? The primary area is when the operators of the computer service make decisions to publish (or republish) content.  I'll give three examples that courts have determined are immune, but that I think do not fall within the statute:

  1. Web Site A pays Web Site B to republish all of B's content on Site A. Site A is immune.
  2. Web Site A selectively republishes some or all of a story from Web Site B on Site A. Site A is immune.
  3. Web Site A publishes an electronic mail received from a reader on Site A. Site A is immune.

These three examples share a common thread: Site A is immune, despite selectively seeking out and publishing content in a manner that has nothing to do with the computerized processes of the provider. In other words, it is the operator, not the service, that is making publication determinations.

To address these issues, cases have focused on "development" of the information. One case, for example, defines development as a site that "contributes materially to the alleged illegality of the conduct." Here, I agree with my colleagues that development is being defined too broadly to limit immunity. Development should mean that the provider actually creates the content that is displayed. For that reason, I agree with the Roommates.com decision, which held that Roommates developed content by providing pre-filled dropdown lists that allegedly violated the Fair Housing Act. It turns out that the roommate postings were protected speech, but that is a matter of substance, and not immunity. The fact that underlying content is eventually vindicated does not mean that immunity should be expanded. To the extent some think that the development standard is limited only to development of illegal content (something implied by the text of the Roommates.com decision), I believe that is too limiting. The question is the source of the information, not the illegality of it.

The burning issue is why plaintiffs continue to rely on "development" despite its relatively narrow application. The answer is that this is all they currently have to argue, and that is where I disagree with my colleagues. I believe the word "interactive" in the definition must mean something. It means that the receipt of content must be tied to the interactivity of the provider. In other words, receipt of the offending content must be automated or otherwise interactive to be considered for immunity.

Why do I think that this is the right reading? First, there's the word "interactive." It was chosen for a reason. Second, the definition of "information content provider" identifies information "provided through the Internet or any other interactive computer service." (emphasis added). This implies that the provision of information should be based on interactivity or automation.

There is support in the statute for only immunizing information directly provided through interactivity. Section 230(d), for example, requires interactive service providers to notify their users about content filtering tools. This implies that the information being provided is through the interactive service. Sections 230(a) and (b) describe the findings and policy of Congress, which describe interactive services as new ways for users to control information and for the free exchange of ideas.

I think one can read the statute more broadly than I am here. But I also believe that there is no reason to do so. The primary benefit of Section 230 is as a cost-savings mechanism. There is no way many service providers can screen all the content on their websites for potentially tortious activity. There's just no filter for that.

Allowing immunity for individualized editorial decisions like paying for syndicated content, picking and choosing among emails, and republishing stories from other web sites runs directly counter to this cost-saving purpose. Complaining that it costs too much to filter interactive user content is a far cry from complaining that it costs too much to determine whether an email is true before making a noninteractive decision to republish it. We should want our service providers to expend some effort before republishing.

Posted by Michael Risch on May 16, 2012 at 04:01 PM in Blogging, Information and Technology | Permalink | Comments (4) | TrackBack

Fair Use and Electronic Reserves

For several years Georgia State was involved in litigation over the fair use doctrine. Specifically, a consortium of publishers backed by Oxford, Cambridge, and Sage sued Georgia State over copyright violations by many of the faculty. Many of my colleagues in the department were specifically named in the suit. A decision has now been rendered. You can read about the decision here, and you can read the decision here.

The Court backed Georgia State in almost every instance, finding no copyright violation. However, the Court did lay down some rules - in particular you can use no more than 10% or one chapter, whichever is shorter, of any book.

Oh, and my colleagues were all found to have not violated copyright laws. For two of them the Court found that the plaintiffs could not even prove a copyright.

Posted by Robert Howard on May 16, 2012 at 09:23 AM in Information and Technology, Intellectual Property, Things You Oughta Know if You Teach X | Permalink | Comments (0) | TrackBack

Friday, May 11, 2012

App Enables Users to File Complaints of Airport Profiling

Following the terrorist attacks of September 11, 2001, Muslims and those perceived to be Muslim in the United States have been subjected to public and private acts of discrimination and hate violence.  Sikhs -- members of a distinct monotheistic religion founded in 15th century India -- have suffered the "disproportionate brunt" of this post-9/11 backlash.  There generally are two reasons for this.  The first concerns appearance: Sikh males wear turbans and beards, and this visual similarity to Osama bin Laden and his associates made Sikhs an accessible and superficial target for post-9/11 emotion and scrutiny.  The second relates to ignorance: many Americans are unaware of Sikhism and of Sikh identity in particular. 

Accordingly, after 9/11, Sikhs in the United States have been murdered, stabbed, assaulted, and harassed; they also have faced discrimination in various contexts, including airports, the physical space where post-9/11 sensitivities are likely and understandably most acute.  The Sikh Coalition, an organization founded in the hours after 9/11 to advocate on behalf of Sikh-Americans, reported that 64% of Sikh-Americans felt that they had been singled-out for additional screening in airports and, at one major airport (San Francisco International), nearly 100% of turbaned Sikhs received additional screening. (A t-shirt, modeled here by Sikh actor Waris Ahluwalia and created by a Sikh-owned company, makes light of this phenomenon.)

In response to such "airport profiling," the Sikh Coalition announced the launch of a new app (Apple, Android), which "allows users to report instances of airport profiling [to the Transportation Security Administration (TSA)] in real time."  The Coalition states that the app, called "FlyRights," is the "first mobile app to combat racial profiling."  The TSA has indicated that grievances sent to the agency by way of the app will be treated as official complaints.

News of the app's release has generated significant press coverage.  For example, the New York Times, ABC, Washington Post, and CNN picked up the app's announcement.  (Unfortunately, multiple outlets could not resist the predictable line, 'Profiled at the airport? There’s an app for that.')  Wade Henderson, president and CEO of The Leadership Conference on Civil and Human Rights and The Leadership Conference Education Fund, tweeted, "#FlyRights is a vanguard in civil and human rights."

It will be interesting to see whether this app will increase TSA accountability, quell profiling in the airport setting, and, more broadly, trigger other technological advances in the civil rights arena.

 

Posted by Dawinder "Dave" S. Sidhu on May 11, 2012 at 08:32 AM in Information and Technology, Religion, Travel, Web/Tech | Permalink | Comments (0) | TrackBack

Wednesday, May 09, 2012

Oracle v. Google: Digging Deeper

This follows my recent post about Oracle v. Google. At the behest of commenters, both online and offline, I decided to dig a bit deeper to see exactly what level of abstraction is at issue in this case. The reason is simple: I made some assumptions in the last post about what the jury must have found, and it turns out that those assumptions were wrong. Before anyone accuses me of changing my mind, I want to note that in my last post I made a guess, and that guess was wrong once I read the actual evidence. My view of the law hasn't changed. More after the jump.

For the masochistic, Groklaw has compiled the expert reports in an accessible fashion here and here. Why do I look at the reports, and not the briefs? It turns out that lawyers will make all sorts of arguments about what the evidence will say, but what is really relevant is the evidence actually presented. The expert reports, submitted before trial, are the broadest form of evidence that can be admitted - the court can whittle down what the jury hears, but typically experts are not allowed to go much beyond their reports.

These reports represent the best evidentiary presentation the parties have on the technical merits. It turns out that as a factual matter, both reports overlap quite a bit, and neither seems "wrong" as a matter of technical fact. I would sure hope so - these are pretty well respected professors and, quite frankly, the issues in this case are just not that complicated from a coding standpoint. (Note: for those wondering what gives me the authority to say that, I could say a lot, but I'll just note that in a prior life I wrote a book about software programming for an electronic mail API.)

What level of abstraction was presented and argued to the jury? As far as I can tell from the reports, other than two or three routines that were directly copied, Oracle's expert found few or no similar structures or sequences in the main body of the source code - the part that actually does the work. The only similarity - and it was nearly identical - was in the structure, sequence, and organization of the grouping of function names, and the "packages" or files that they were located in.

For computer nerds, also identical were function names, parameter orders, and variable structures passed in as parameters. In other words, the header files were essentially identical. And they would have to be, if the goal is to have a compatible system. The inputs (the function names and parameters) and the outputs need to be the same. The only way you can disallow this usage of the API is to say that you cannot create an independent software program (even one of this size) that mimics the inputs and outputs of the original program.
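To make that concrete, here is a minimal sketch (the class and method names are my own invention, not anything from the case) of what it means for two programs to share an identical API surface - name, parameter order, and return type - while the working code underneath is written independently:

```java
// Hypothetical sketch (names invented for illustration): two libraries
// exposing the *identical* API surface -- method name, parameter order,
// and return type -- with independently written bodies.
class OriginalLib {
    // Stand-in for the original implementation.
    public static int max(int a, int b) {
        if (a >= b) {
            return a;
        }
        return b;
    }
}

class CompatibleLib {
    // Independent reimplementation. Callers see the same inputs and
    // outputs; only the internal logic differs.
    public static int max(int a, int b) {
        return b > a ? b : a;
    }
}

public class ApiSurfaceDemo {
    public static void main(String[] args) {
        // Any caller written against one library runs unchanged on the other.
        System.out.println(OriginalLib.max(3, 7) == CompatibleLib.max(3, 7));
    }
}
```

If the signature alone is protectable, the second library cannot exist at all, no matter how independent its body is.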

To say that would be bad policy, and as I discuss below, probably not in accordance with precedent. This is why the experts are both right. Oracle's expert says they are identical, and Google copied because that was the best way to lure application developers - by providing compatibility (and the jury agreed, as to the copying part). Google's expert says, so what? The only thing copied was functional, and that's legal. It's this last part that a) led to the hung jury, and b) the court will have to rule on.

In my last post, I assumed that the level of abstraction must have been at a deeper level than just the names of the methods. Why did I do that?

First, the court's jury instructions make clear that function names are not at issue. But I guess the court left it to the jury whether the collection could be infringed.

Second, whether an API can be infringed is usually something courts decide well in advance of trial; the question doesn't often make it to a jury.

Third, based on media accounts, it appeared that there was more testimony about deeper similarities in the code. The copied functions, I argued in my prior post, supported that view. Except that there were no other similarities. I think it is a testament to Oracle's lawyers (and experts) that this misperception of a dirty clean room shone through in media reports, because the actual evidence belies the media accounts.

This is why I decided to dig deeper, and why one should not rely on second hand reports of important evidence. Based on my reading of the reports (and I admit that I could be missing something - I wasn't in the courtroom), I think that the court will have no choice but to hold that the collection of API names is uncopyrightable - at least at this level of abstraction and claimed infringement.

To the extent that there are bits of non-functional code, I would say that's probably fair use as a matter of law to implement a compatible system. I made a very similar argument in an article I wrote 12 years ago - long before I went into academia.

Prof. Boyden asked in a comment to my prior post whether there was any law that supported the copying of API structure and header files. I think there is: Lotus v. Borland. That case is famous for allowing Borland to mimic the Lotus structure, but there was also an API of sorts. Lotus macros were based on the menu structure, and to provide program compatibility with Lotus, Borland implemented the same structure. So, for example, in Lotus, a user would hit "/" to bring up the menus, "F" to bring up the file menu, and "O" to bring up the open menu. As a result, the macro "/FO" would mimic this, to bring up the open menu.

Borland's product would "read" macro programs written for Lotus, and perform the same operation. No underlying similarity of the computer code, but an identical API that took the same inputs to create the same output the user expected.
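For the curious, here is a rough sketch (in Java, with invented names; the keystroke-to-operation mappings are only illustrative) of what a Borland-style compatibility layer has to do - honor the same macro inputs users already wrote, while supplying its own code behind them:

```java
import java.util.Map;

// Hypothetical sketch of the compatibility problem: a program reads
// Lotus-style macro keystrokes ("/FO" = slash menu, File, Open) and maps
// them to its own operations. The keystroke "API" is fixed by the users'
// existing macros; the code behind it belongs to the new program.
public class MacroDemo {
    private static final Map<String, String> OPERATIONS = Map.of(
        "/FO", "file-open",     // illustrative mappings, not the
        "/FS", "file-save",     // actual Lotus command set
        "/WE", "worksheet-erase"
    );

    // Given a macro string, return the operation a compatible
    // spreadsheet must perform to honor the user's existing macros.
    public static String dispatch(String macro) {
        return OPERATIONS.getOrDefault(macro, "unknown");
    }

    public static void main(String[] args) {
        System.out.println(dispatch("/FO")); // prints "file-open"
    }
}
```

The dispatch table is the whole point: the inputs are dictated by compatibility, and only the right-hand side is the new author's work.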

Like the lower court here, the lower court there found infringement of the structure, sequence, and organization of the menu structure. Like the lower court here, the court there found it irrelevant that Borland got the menu structure from third-party books rather than Lotus's own product. (Here, Google asserts that it got the APIs from Apache Harmony, a compatible Java system, rather than the Java documents themselves). There is some dispute about whether Sun sanctioned the Apache project, and what effect that should have on the case. I think that Harmony is a red herring. The reality is that it does not matter either way - a copy is a copy is a copy - if the copy is illicit, that is.

In Lotus, the lower court found the API creative and copyrightable, the very question facing the court here. On appeal, however, the First Circuit ruled that the API was a method of operation, likening it to the buttons on a VCR. I think that's a bit simplistic, but it was definitely the right ruling. The case went up to the Supreme Court, and it was a blockbuster case, expected to -- once and for all -- put this question to rest.

Alas, the Supreme Court affirmed without opinion by an evenly divided court. And the circuit court ruling stood. And it still stands - the court never took another case, and the gist of Lotus v. Borland has been applied over and over, but rarely as directly as it might apply here.

Wholesale, direct compatibility copying of APIs just doesn't happen very often, and certainly not on the scale and with the stakes of that at issue here. Perhaps that is why there is no definitive case holding that an entire API structure is uncopyrightable. You would think we would have one by 2012, but nope. Lotus comes close, but it is not identical. In Lotus, the menu structure was much smaller, and the names and structure were far less creative. Further, the concern was macro programming written by users for internal use that would not allow them to switch to a new spreadsheet program. Java programs, on the other hand, are designed to be distributed to the public in most cases.

Then again, the core issue is the same: the ability to switch the underlying program while maintaining compatibility of programs that have already been written. Based on this similarity, my prediction is that Judge Alsup will say that the collection of names is not copyrightable, or at the very least usage of the API in this manner is fair use as a matter of law. We'll see if I'm right, and whether an appeals court affirms it.

Posted by Michael Risch on May 9, 2012 at 10:40 AM in Information and Technology, Intellectual Property | Permalink | Comments (0) | TrackBack

Monday, May 07, 2012

Oracle v. Google - Round I jury verdict (or not)

The jury came back today with its verdict in round one of the epic trial between two giants: Oracle v. Google. This first phase was for copyright infringement. In many ways, this was a run of the mill case, but the stakes are something we haven't seen in a technology copyright trial in quite some time.

Here's the short story of what happened, as far as I can gather.

1. Google needed an application platform for its Android phones. This platform allows software developers to write programs (or "apps" in mobile device lingo) that will run on the phone.

2. Google decided that Sun's (now Oracle's) Java was the best way to go.

3. Google didn't want to pay Sun for a license to a "virtual machine" that would run on Android phones.

4. Google developed its own virtual machine that is compatible with the Java programming language. To do so, Google had to make "APIs" that were compatible with Java. These APIs are essentially modules that provide functionality on the phone based on keywords (instructions) from a Java language computer program. For example, if I want to display "Hello World" on the phone screen, I need only call print("Hello World"). The API module has a bunch of hidden functionality that takes "Hello World" and sends it out to the display on the screen - manipulating memory, manipulating the display, etc.

5. The key dispute is just how much of the Java source code was copied, if any, to create the Google version.
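For the non-programmers, here is a tiny sketch of point 4 (hypothetical names and vastly simplified internals - this is not actual Android or Java library code): the app developer sees only the one-line call, while the module supplies all the hidden work behind the signature:

```java
// Hypothetical sketch (simplified, invented names): the caller sees only
// print("Hello World"); everything below the signature is the hidden
// functionality the API module supplies on the caller's behalf.
public class DisplayApi {
    private static final StringBuilder screenBuffer = new StringBuilder();

    // The API surface: a developer just calls print("Hello World").
    // Returns the text as rendered, so callers (and tests) can see it.
    public static String print(String text) {
        // Stand-ins for the hidden work: buffering memory,
        // then pushing the buffer out to the display.
        screenBuffer.append(text).append('\n');
        return flushToScreen();
    }

    private static String flushToScreen() {
        String out = screenBuffer.toString();
        System.out.print(out);          // "the display"
        screenBuffer.setLength(0);      // reset the buffer
        return out;
    }

    public static void main(String[] args) {
        print("Hello World");
    }
}
```

The litigation question is how much of what sits *below* that signature line (and how the signatures are grouped and organized) was copied.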

The jury today held the following:

1. One small routine (9 lines) was copied directly - line for line. The court said no damages for this, but this finding will be relevant later.

2. Google copied the "structure, sequence, and organization" of 37 Java API modules. I'll discuss what this means later.

3. There was no finding on whether the copying was fair use - the jury deadlocked.

4. Google did not copy any "documentation" including comments in the source code.

5. Google was not fooled into thinking it had a license from Sun.

To understand any of this, one must understand the levels of abstraction in computer code. Some options are as follows:

A. Line by line copying of the entire source code. 

B. Line by line paraphrasing of the source code (changing variable names, for example, but otherwise identical lines).

C. Copying of the structure, sequence and organization of the source code - deciding what functions to include or not, creative ways to implement them, creative ways to solve problems, creative ways to name and structure variables, etc.  (The creativity can't be based on functionality)

D. Copying of the functionality, but not the structure, sequence and organization - you usually find this with reverse engineering or independent development.

E. Copying of just the names of functions with similar functionality - the structure and sequence is the same, but only as far as the names go (like print, save, etc.). The Court ruled already that this is not protected.

F. Completely different functionality, including different structure, sequence, organization, names, and functionality.

Obviously F was out if Google wanted to maintain compatibility with the Java programming language (which is not copyrightable). 

So, Google set up what is often called a "cleanroom." The idea is not new - AMD famously set up a cleanroom to develop copyrighted aspects of its x86 compatible microprocessors back in the early 1990s. Like Google now (according to the jury), AMD famously failed to keep its cleanroom clean.

Here's how a cleanroom works. One group develops a specification of functionality for each of the API function names (which are, remember, not protected - people are allowed to make compatible programs using the same names, like print and save). Ideally, you do this through reverse engineering, but arguably it can be done by reading copyrighted specifications/manuals, and extracting the functionality. Quite frankly, you could probably use the original documentation as well, but it does not appear as "clean" when you do so.

Then, a second group takes the "pure functionality" description, and writes its own implementation. If it is done properly, you find no overlapping source code or comments, and no overlapping structure, sequence and organization. If there happens to be similar structure, sequence and organization, then the cleanroom still wins, because that similarity must have been dictated by functionality. After all, the whole point of the cleanroom is that the people writing the software could not copy because they did not have the original to copy from.

So, where did it all go wrong? There were a few smoking guns that the jury might have latched on to:

1. Google had some emails early on that said there was no way to duplicate the functionality, and thus Google should just take a license.

2. Some of the code (specifically, the 9 lines) was copied directly. While not big in itself, it makes one wonder how clean the team was.

3. The head of development noted in an email that it was a problem for the cleanroom people to have had Sun experience, but some apparently did.

4.  Oracle's expert testified (I believe) that some of the similarities were not based on functionality, or were so close as to have been copied. Google's expert, of course, said the opposite, and the jury made its choice. It probably didn't help Google that Oracle's expert came from hometown Stanford, while Google's came from far-away Duke.

So, the jury may have just discounted the Google cleanroom story, and believed Oracle's. And that's what it found. As someone who has litigated many copyright cases between competing companies, I don't find this a shocking outcome. This issue will no doubt bring the copyright v. functionality debate to the forefront (as it did in Lotus v. Borland and Intel v. AMD), but this stuff is bread and butter for most technology copyright lawyers. It's almost always factually determined. Only the scope of this case is different in my book - everything else looks like many cases I've litigated (and a couple that I've tried).

So, what happens now in the copyright phase?  (A trial on patent infringement started today.) Judge Alsup has two important decisions to make.

First, the court has to decide what to do with the fair use ruling. Many say that a mistrial is warranted since fair use is a question of fact and the jury deadlocked. I'm not so sure. The facts on fair use are not really disputed here - only the legal interpretation of them; my experience is that courts are more than willing to make a ruling one way or the other when copying is clear (as the jury now says it is). I don't know what the court will do, but my gut says no fair use here.  My experience is that failed cleanrooms fail fair use - it means that what was copied was more than pure functionality, and it is for commercial use with market substitution. The only real basis for fair use is that the material copied was pure functionality, and that's the next inquiry.

Second, the court must determine whether the structure, sequence, and organization of these APIs can be copyrightable, or whether they are pure functionality. I don't know the answer to that question. It will depend in large part on:

a. whether the structure, etc., copied was at a high level (e.g. structure of functions) or at a low level (e.g. line by line and function by function);

b. the volume of copying (something like 11,000 lines is at issue);

c. the credibility of the experts in testifying to how much of the similar structure is functionally based.  On a related note, the folks over at Groklaw for the most part think this is not copyrightable. They have had tremendous coverage of this case.

I've been on both sides of this argument, and I've seen it go both ways, so I don't have any predictions. I do look forward to seeing the outcome, though. It has been a while since I've written about copyright law and computer software; this case makes me want to rejoin the fray.

Posted by Michael Risch on May 7, 2012 at 08:07 PM in Information and Technology, Intellectual Property, Web/Tech | Permalink | Comments (1) | TrackBack

Thursday, May 03, 2012

When a Good Interpretation is the Wrong One (CFAA Edition)

Hi, and thanks again to Prawfs for having me back.  In my first post, I want to revisit the CFAA and the Nosal case. I wrote about this case back in April 2011 (when the initial panel decision was issued), and again in December (when en banc review was granted). It's hard to believe that it has been more than a year!

I discuss the case in detail in the other posts, but for the busy and uninitiated, here is the issue: what does it mean to "exceed authorized access" to a computer?  In Nosal, the wrongful act was essentially trade secret misappropriation where the "exceeded authorization" was violation of a clear "don't use our information except for company benefit" type of policy. Otherwise, the employees had access to the database from which they obtained information as part of their daily work.

Back in April, I argued that the panel basically got the interpretation of the statute right, but that the interpretation was so broad as to be scary. Orin Kerr, who has written a lot about this, noted in the comments that such a broad interpretation would be void for vagueness because it would ensnare too much everyday, non-wrongful activity.  Though I'm not convinced that the law supports his view, it wouldn't break my heart if that were the outcome. But that's not the end of the story.

Last month, the Ninth Circuit finally issued the en banc opinion in the Nosal case. The court noted all the scary aspects of a broad interpretation, trotting out the parade of horribles showing innocuous conduct that would violate the broadest reading of the statute. As the court notes: "Ubiquitous, seldom-prosecuted crimes invite arbitrary and discriminatory enforcement." We all agree on that.

The solution for the court was to narrowly interpret what "exceeds authorized access" means: "we hold that  'exceeds authorized access' in the CFAA is limited to violations of restrictions on access to information, and not restrictions on its use." (emphasis in original).

On the one hand, this is a normatively "good" interpretation. The court applies the rule of lenity to not outlaw all sorts of behavior that shouldn't be outlawed and that was likely never intended to be outlawed. So, I'm not complaining about the final outcome. 

On the other hand, I can't get over the fact that the interpretation is just plain wrong as a matter of statutory interpretation. Here are some of the reasons why:

1. The term "exceeds authorized access" is defined in the statute:  "'exceeds authorized access' means to access a computer with authorization and to use such access to obtain or alter information in the computer that the accesser is not entitled so to obtain or alter." The statute on its face makes clear that exceeding access is not about violating an access restriction, but instead about using access to obtain information that one is not so entitled to obtain. To say that a use restriction cannot be part of the statute simply rewrites the definition.

2. The key section of the statute is not about use of information at all. Section 1030(a)(2) outlaws access to a computer, where such access leads to obtaining (including viewing) information. So, of course exceeding authorized access should deal with an access restriction, but what is to stop everyone from rewriting their agreements conditionally: "Your access to this server is expressly conditioned on your intent at the time of access. If your intent is to use the information for nefarious purposes, then your access right is revoked." The statutory interpretation shouldn't be so easily manipulated, but it appears that it can be. 

3. Even if you accept the court's reading as in line with the statute, it still leaves much uncertainty in practice. For example, the court points to Google's former terms of service that disallowed minors from using Google: “You may not use the Services and may not accept the Terms if . . . you are not of legal age to form a binding contract with Google . . . .” I agree that it makes little sense for all minors who use Google to be juvenile delinquents. But read the terms carefully - they are not about use of information; they are about permission to access the services. If you are a minor, you may not use our services (that is, access our server). I suppose this is a use restriction because the court used it as an example, but that's not so clear to me.

4. The court states that Congress couldn't have meant exceeds authorized access to be about trade secret misappropriation and really only about hacking. Section 1030(a)(1) belies that reading. That section outlaws exceeding authorized access to obtain national secrets and causing them "to be communicated, delivered, or transmitted, or attempt[ing] to communicate, deliver, transmit or cause to be communicated, delivered, or transmitted the same to any person not entitled to receive it." That sounds a lot like misappropriation to me, and I bet Congress had a situation like Nosal in mind.

5. In fact, trade secrets appear to be exactly what Congress had in mind. The section that would ensnare most unsuspecting web users, 1030(a)(2) (which bars "obtaining" information by exceeding authorized access), was added in the same public law as the Economic Espionage Act of 1996 - the federal trade secret statute. The Senate reports for the EEA and the change to 1030 were issued on the same day. As S. Rep. 104-357 makes clear, the addition was to protect the privacy of information on civilian computers. Of course, this supports a narrower reading - if information is not private on the web, then perhaps we should not be so concerned about it.

6. On a related note, the court's treatment of the legislative history is misleading. The definition of "exceeds authorized access" was changed in 1986. As the court notes in a footnote: 

[T]he government claims that the legislative history supports its interpretation. It points to an earlier version of the statute, which defined “exceeds authorized access” as “having accessed a computer with authorization, uses the opportunity such access provides for purposes to which such authorization does not extend.” But that language was removed and replaced by the current phrase and definition.

So far, so good. In fact, this change alone seems to support the court's view, and I would have stopped there. But the court goes on to state:

And Senators Mathias and Leahy—members of the Senate Judiciary Committee—explained that the purpose of replacing the original broader language was to “remove[] from the sweep of the statute one of the murkier grounds of liability, under which a[n] . . . employee’s access to computerized data might be legitimate in some circumstances, but criminal in other (not clearly distinguishable) circumstances.”

This reading is just not accurate in content or spirit. I reproduce below sections of S. Rep. 99-472, the legislative history cited by the court:

 [On replacing "knowing" access with "intentional" access] This is particularly true in those cases where an individual is authorized to sign onto and use a particular computer, but subsequently exceeds his authorized access by mistakenly entering another computer file or data that happens to be accessible from the same terminal. Because the user had ‘knowingly’ signed onto that terminal in the first place, the danger exists that he might incur liability for his mistaken access to another file. ... The substitution of an ‘intentional’ standard is designed to focus Federal criminal prosecutions on those whose conduct evinces a clear intent to enter, without proper authorization, computer files or data belonging to another.

. . .
[Note: (a)(3) was about access to Federal computers by employees. Access to private computers was not added for another 10 years. At the time (a)(2) covered financial information.] The Committee wishes to be very precise about who may be prosecuted under the new subsection (a)(3). The Committee was concerned that a Federal computer crime statute not be so broad as to create a risk that government employees and others who are authorized to use a Federal Government computer would face prosecution for acts of computer access and use that, while technically wrong, should not rise to the level of criminal conduct. At the same time, the Committee was required to balance its concern for Federal employees and other authorized users against the legitimate need to protect Government computers against abuse by ‘outsiders.’ The Committee struck that balance in the following manner.
In the first place, the Committee has declined to criminalize acts in which the offending employee merely ‘exceeds authorized access' to computers in his own department ... It is not difficult to envision an employee or other individual who, while authorized to use a particular computer in one department, briefly exceeds his authorized access and peruses data belonging to the department that he is not supposed to look at. This is especially true where the department in question lacks a clear method of delineating which individuals are authorized to access certain of its data. The Committee believes that administrative sanctions are more appropriate than criminal punishment in such a case. The Committee wishes to avoid the danger that every time an employee exceeds his authorized access to his department's computers—no matter how slightly—he could be prosecuted under this subsection. That danger will be prevented by not including ‘exceeds authorized access' as part of this subsection's offense. [emphasis added]
Section 2(c) substitutes the phrase ‘exceeds authorized access' for the more cumbersome phrase in present 18 U.S.C. 1030(a)(1) and (a)(2), ‘or having accessed a computer with authorization, uses the opportunity such access provides for purposes to which such authorization does not extend’. The Committee intends this change to simplify the language in 18 U.S.C. 1030(a)(1) and (2)... [note: not to change the meaning, though obviously it does]

[And finally, the quote in the Nosal case, which came from the "additional" comments in the report, not the report of the committee itself]: [1030(a)(3)] would eliminate coverage for authorized access that aims at ‘purposes to which such authorization does not extend.’  This removes from the sweep of the statute one of the murkier grounds of liability, under which a Federal employee's access to computerized data might be legitimate in some circumstances, but criminal in other (not clearly distinguishable) circumstances that might be held to exceed his authorization.
This collection of history implies three things (to me, at least):
a. The committee well understood that employees could have authorized access to a computer, but could easily, "technically," and "slightly" exceed that authorization by accessing another file on the same computer - and that it was not all about hacking.
b. The committee understood that it was problematic to hold people liable for this.
c. As a result, the committee removed "exceeds authorized access" for federal employee liability, but left it in (a)(1) (use of U.S. secrets) and (a)(2) (gaining access to financial information). The legislative history quoted by the court merely affirms that the "murkiness" is solved by removing the phrase altogether, and not by narrowing the scope in other subsections.
The problem is that the worries the committee had about how "exceeds authorized access" might apply to federal employees never went away, but Congress extended liability to everyone when it expanded (a)(2) in 1996. What Congress should have done in 1996 (or anytime since) was consider the problems facing federal employees when it imposed restrictions on everyone.
A second problem is that Congress likely did not envision widespread computer servers with open access to information, whereby the only "authorization" limitations would be contractual rather than technologically based.
This leads me, again, to my conclusion above. The court's reading of the statute, while "good," is not quite right. But the panel's original reading was not quite right either.
I return to the suggestions I made in prior posts, bolstered by the legislative history here: we should look to the terms of authorization of access to see whether they have been exceeded. This means that if you are an employee who intentionally accesses information for a purpose you know is not authorized, then you are exceeding authorization.
It also means that if the terms of service on a website say explicitly that you must be truthful about your age as a condition of authorization to access the site, then you are exceeding authorization when you lie. And that’s not always an unreasonable access limitation.  If there were a kids-only website that excluded adults, I might well want to criminalize access obtained by people lying about their age. That doesn’t mean all access terms are reasonable, but I’m not troubled by that from a statutory interpretation standpoint.
I’m sure one can attack this as vague – it won’t always be clear when a term is tied to authorization. But then again, if it is not a clear term of authorization, the state shouldn’t be able to prove that authorization was exceeded. It also means that if the authorization terms are buried or unread, then there may not be an intentional access that exceeds authorization.

Posted by Michael Risch on May 3, 2012 at 01:03 PM in Information and Technology | Permalink | Comments (7) | TrackBack

Tuesday, April 17, 2012

“Breaking and Entering” Through Open Doors: Website Scripting Attacks and the Computer Fraud and Abuse Act, Part 2

Two notes: 1) Apologies to Prawfs readers for the delay in this post. It took my student and me longer than anticipated to complete some of the technical work behind this idea. 2) This post is a little longer than originally planned, because last week the Ninth Circuit en banc reversed a panel decision in United States v. Nosal which addressed whether the CFAA extends to violations of (terms of) use restrictions. In reversing the panel decision, the Ninth Circuit found the CFAA did *not* extend to such restrictions.


The idea for this post originally arose when I noticed I was able to include a hyperlink in a comment I made on a Prawfs post. One of my students (Nick Carey) had just finished a paper discussing the applicability of the Computer Fraud and Abuse Act (CFAA) to certain types of cyberattacks that would exploit the ability to hyperlink blog comments, so I contacted Dan and offered to see if Prawfs was at risk, as it dovetailed nicely with a larger project I'm working on regarding regulating cybersecurity through criminal law.

The good news: it's actually hard to "hack" Prawfs. As best we can tell the obvious vulnerabilities are patched. It got me thinking, though, that as we start to clear away the low-hanging fruit in cybersecurity through regulatory action, focus is likely to shift to criminal investigations to address more sophisticated attackers.

Sophisticated attackers often use social engineering as a key part of their attacks. Social engineering vulnerabilities generally arise when there is a process in place to facilitate some legitimate activity, and when that process can be corrupted -- by manipulating the actors who use it -- to effect an outcome not predicted (and probably not desired). Most readers of this blog likely encounter such attacks on a regular basis, but have (hopefully!) been trained or learned how to recognize such attacks. One common example is the email, purportedly from a friend, business, or other contact, that invites you to click on a link. Once clicked on, this link in fact does not lead to the "exciting website" your friend advertised, but rather harvests the username and password for your email account and uses those for a variety of evil things.

I describe this example, which hopefully resonates with some readers (if not, be thankful for your great spam filters!), because it resembles the vulnerability we *did* find in Prawfs. This vulnerability, which perhaps is better called a design choice, highlights the tension in legal solutions to cybercrime I discuss here. Allowing commenters to hyperlink is a choice -- one that forms the basis for the "open doors" component of this question: should a user be held criminally liable under federal cybercrime law for using a website "feature" in a way other than that intended (or perhaps desired) by the operators of a website, but in a way that is otherwise not unlawful?

Prawfs uses TypePad, a well-known blogging software platform that handles most of the security work. And, in fact, it does quite a good job -- as mentioned above, most of the common vulnerabilities are closed off. The one we found remaining is quite interesting. It stems from the fact that commenters are permitted to use basic HTML (the "core" language in which web pages are written) in writing their comments. The danger in this approach is that it allows an attacker to include malicious "code" in their comments, such as the type of link described above. Since the setup of TypePad allows commenters to provide their own name, it is also quite easy for an attacker to "pretend" to be someone else and use that person's "authority" to entice readers to click on the dangerous link. The final comment of Part 1 provides an example, here.
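The deceptive-link shape described above (visible text or a vouching commenter name pointing the reader at one destination while the href actually points somewhere else) is mechanical enough that one can sketch a detector for it. This is purely illustrative -- the function and the hostnames are hypothetical, not drawn from TypePad or any real spam filter:

```python
from urllib.parse import urlparse

def looks_deceptive(href, link_text):
    """True when the visible link text names a different host than the
    href actually targets -- the classic phishing shape."""
    text = link_text.strip()
    # Plain-language labels ("click here") can't contradict the href.
    if not text.startswith(("http://", "https://", "www.")):
        return False
    shown = urlparse(text if "://" in text else "http://" + text).hostname
    actual = urlparse(href).hostname
    return shown is not None and shown != actual

# The "exciting website" email from above: text claims the bank,
# href goes elsewhere.
assert looks_deceptive("http://evil.example/login", "https://mybank.example")
assert not looks_deceptive("https://mybank.example/login", "https://mybank.example")
assert not looks_deceptive("http://evil.example/login", "click here")
```

Of course, as the last assertion shows, a label like "click here" defeats this heuristic entirely, which is part of why the social-engineering problem resists purely technical fixes.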

A simple solution -- one to which many security professionals rush -- is just to disable the ability to include HTML in comments. (Security professionals often tend to rush to disable entirely features that create risk.) Herein lies the problem: there is a very legitimate reason for allowing HTML in comments; it allows legitimate commenters to include clickable links to resources they cite. As we've seen in many other posts, this can be a very useful thing to do, particularly when citing opinions or other blog posts. As an aside, I've often found this tension to resemble that found in debates about restricting speech on the basis of national security concerns. But that is a separate post.
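There is a middle path between "no HTML" and "raw HTML": sanitize comments so that ordinary links survive but risky constructs (script tags, javascript: URLs, event-handler attributes) are stripped. The sketch below, in Python's standard library, is my own minimal illustration of the idea, not TypePad's actual implementation, and the class and function names are invented:

```python
# Minimal whitelist sanitizer: keep <a href="..."> with http(s) targets,
# escape all other markup. Illustrative only.
import html
from html.parser import HTMLParser
from urllib.parse import urlparse

class CommentSanitizer(HTMLParser):
    """Rebuilds a comment, preserving only safe hyperlinks."""

    def __init__(self):
        super().__init__(convert_charrefs=True)
        self.out = []
        self.open_links = []  # tracks whether each <a> was actually emitted

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href", "")
            # Rejects javascript:, data:, and other risky schemes.
            ok = urlparse(href).scheme in ("http", "https")
            if ok:
                self.out.append(f'<a href="{html.escape(href, quote=True)}">')
            self.open_links.append(ok)
        # Every other tag (script, img, onclick-laden spans...) is dropped.

    def handle_endtag(self, tag):
        if tag == "a" and self.open_links and self.open_links.pop():
            self.out.append("</a>")

    def handle_data(self, data):
        self.out.append(html.escape(data))

def sanitize(comment):
    p = CommentSanitizer()
    p.feed(comment)
    p.close()
    return "".join(p.out)
```

Under this approach the legitimate commenter's citation link survives (`sanitize('<a href="http://example.com/">see this case</a>')` returns the link intact), while a javascript: link is reduced to its harmless text. Real platforms face many more edge cases than this sketch handles, which is exactly why "just turn HTML off" remains so tempting.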

Cybercrime clearly is a substantial problem. Tradeoffs like the one discussed here present one of the core reasons the problem cannot be solved through technology alone. Turning to law -- particularly regulating certain undesired behaviors through criminalization -- is a logical and perhaps necessary step in addressing cybersecurity problems. As I have begun to study this problem, however, I have reached the conclusion that legal solutions face a structurally similar set of tradeoffs as do technical solutions.

The CFAA is the primary federal law criminalizing certain cybercrime and "hacking" activities. The critical threshold in many CFAA cases is whether a user has "exceeded authorized access" (18 U.S.C. § 1030(a)) on a computer system. But who defines "authorized access?" Historically, this was done by a system administrator, who set rules and policies for how individuals could use computers within an organization. The usernames and passwords we all have at our respective academic institutions, and the resources those credentials allow us to access, are an example of this classic model.

What about a website like Prawfs? Most readers don't use a login and password to read or comment, but do for posting entries. Like most websites, there is a policy addressing (some of) the aspects of acceptable use. That policy, however, can change at any time and without notice. (There are good reasons this is the case, the simplest being it is not practical to notify every person who ever visits the website of any change to the policy in advance of such changes taking effect.) What if a policy changes, however, in a way that makes an activity -- one previously allowed -- now impermissible? Under a broad interpretation of the CFAA, the user continuing to engage in the now impermissible activity would be exceeding their authorized access, and thereby possibly running afoul of the CFAA (specifically (a)(2)(C)).

Some courts have rejected this broad interpretation, perhaps most famously in United States v. Lori Drew, colloquially known as the "MySpace Mom" case. Other courts have accepted a broader view, as discussed by Michael Risch here and here. I find the Drew result correct, if frustrating, and the (original) Nosal result scary and incorrect. Last week, the Ninth Circuit en banc reversed itself and adopted a more Drew-like view of the CFAA. I am particularly relieved by the majority's understanding of the CFAA overbreadth problem:

The government’s construction of the statute would expand its scope far beyond computer hacking to criminalize any unauthorized use of information obtained from a computer. This would make criminals of large groups of people who would have little reason to suspect they are committing a federal crime. While ignorance of the law is no excuse, we can properly be skeptical as to whether Congress, in 1984, meant to criminalize conduct beyond that which is inherently wrongful, such as breaking into a computer.

(United States v. Nosal, No. 10-10038 (9th Cir. Apr. 10, 2012) at 3864.)

I think the court recognizes here that an overbroad interpretation of the CFAA is akin to extending a breaking-and-entering statute to walking through an open door. The Ninth Circuit appears to adopt similar thinking, noting that Congress' original intent was to address the issue of hackers breaking into computer systems, not innocent actors who either don't (can't?) understand the implications of their actions or don't intend to "hack" a system when they find the system allows them to access a file or use a certain function:

While the CFAA is susceptible to the government’s broad interpretation, we find Nosal’s narrower one more plausible. Congress enacted the CFAA in 1984 primarily to address the growing problem of computer hacking, recognizing that, “[i]n intentionally trespassing into someone else’s computer files, the offender obtains at the very least information as to how to break into that computer system.” S. Rep. No. 99-432, at 9 (1986) (Conf. Rep.).

(Nosal at 3863.)

Obviously the Ninth Circuit is far from the last word on this issue, and the dissent notes differences in how other Circuits have viewed the CFAA. I suspect at some point, unless Congress first acts, the Supreme Court will end up weighing in on the issue. Before that, I hope to produce some useful thoughts on the issue, and eagerly solicit feedback from Prawfs readers. I've constructed a couple of examples below to illustrate this in the context of the Blawg.

Consider, for example, a change in a blog's rules restricting what commenters may link to in their comments. Let's assume that, like Prawfs, currently there are no specific posted restrictions. Let's say a blog decided it had a serious problem with spam (thankfully we don't here at Prawfs), and wanted to address this by adjusting the acceptable use policy for the blog to prohibit linking to any commercial product or service. We probably wouldn't feel much empathy for the unrelated spam advertisers who filled the comments with useless information about low-cost, prescriptionless, mail-order pharmaceuticals. We definitely wouldn't for the advance-fee fraud advertisers. But what about the practitioner who is an active participant in the blog, contributes to substantive discussions, and occasionally may want to reference or link to their practice in order to raise awareness?

Technically, all three categories of activity would violate (the broad interpretation of) (a)(2)(C). Note that the intent requirement -- or lack thereof -- in (a)(2)(C) is a key element of why these are treated similarly: the only "intent" required for violation is intent to access. (a)(2)(C) does not distinguish among actors' intent beyond this. As I have commented elsewhere (scroll down), one can easily construct scenarios under a "scary" reading of the CFAA where criminal law might be unable to distinguish between innocent actors lacking any reasonable element of what we traditionally consider mens rea, and malicious actors trying to take over or bring down information systems. At the moment, I tend to think there's a more difficult problem discerning intent in the "gray area" examples I constructed here, particularly the Facebook examples when a username/password is involved. But I wonder what some of the criminal law folks think about whether intent really *is* harder, or if we could solve that problem with better statutory construction of the CFAA.

Finally, I've added one last comment to the original post (Part 1) that highlights both how easy it is to engage in such hacking (i.e., this isn't purely hypothetical) and how difficult it is to address the problem with technical solutions (i.e., those solutions would have meant none of this post -- or of my comments on the Facebook passwords post -- could have contained clickable links). I also hope it adds a little bit of "impact factor." The text of the comment explains how it works, and also provides an example of how it could be socially engineered.

In sum, the lack of clarity in the CFAA, and the resulting "criminalization overbreadth," is what concerns me -- and, thankfully, apparently the Ninth Circuit. In the process of examining whether Prawfs/TypePad had any common vulnerabilities, it occurred to me that in the rush to defend against legitimate cybercriminals, there may develop significant political pressure to over-criminalize other activities which are not proper for regulation through the criminal law. We have already seen this happen with child pornography laws and sexting. I am extremely interested in others' thoughts on this subject, and hope I have depicted the problem in a way digestible to non-technical readers!

Posted by David Thaw on April 17, 2012 at 07:07 PM in Criminal Law, Information and Technology | Permalink | Comments (0) | TrackBack

Thursday, March 22, 2012

Wired, and Threatened

I have a short op-ed on how technology provides both power and peril for journalists over at JURIST. Here's the lede:
Journalists have never been more empowered, or more threatened. Information technology offers journalists potent tools to gather, report and disseminate information — from satellite phones to pocket video cameras to social networks. Technological advances have democratized reporting... Technology creates risks along with capabilities however... [and] The arms race of information technology is not one-sided.

Posted by Derek Bambauer on March 22, 2012 at 02:11 PM in Current Affairs, First Amendment, Information and Technology, International Law, Web/Tech | Permalink | Comments (0) | TrackBack

Wednesday, February 22, 2012

“Breaking and Entering” Through Open Doors: Website Scripting Attacks and the Computer Fraud and Abuse Act, Part 1


IMPORTANT: clicking through to the main body of this post may well cause unusual behaviors in your web browser.
Seriously. Please read more below before clicking through to the post!

Thank you Dan, Sarah, and the other Prawfs hosts for giving me the opportunity to guest Blawg! I will be writing about a project I am currently working on with one of my students (Nick Carey), examining common website cybersecurity vulnerabilities in the context of cybercrime law.

The purpose of this post is to examine these (potential) cybersecurity vulnerabilities in PrawfsBlawg. It is the first of what I hope will be a few posts examining how current federal cybercrime law (the Computer Fraud and Abuse Act, or CFAA) applies to certain Internet activities that straddle the line between aggressive business practices and criminal intent.

While it is certainly possible to analyze these without a public post, making the post public provides more opportunity to showcase these vulnerabilities in a way that brings the debate to life without the "risk" of engaging attackers set on causing damage.

As other scholars have observed, judicial references to the CFAA notably increased over the past decade. Part 2 of this post, which will be forthcoming after we identify which vulnerabilities are (and are not) present in the Blawg, will provide a more substantive treatment of the legal issues involved and a (better) place for discussion.



Posted by David Thaw on February 22, 2012 at 02:57 PM in Criminal Law, Information and Technology | Permalink | Comments (3)

Wednesday, February 15, 2012

Coasean Positioning System

Ronald Coase's theory of reciprocal causation is alive, well, and interfering with GPS. Yesterday, the FCC pulled the plug on a plan by LightSquared to build a new national wireless network that combines cell towers and satellite coverage. The FCC went along with a report from the NTIA that LightSquared's network would cause many GPS systems to stop working, including the ones used by airplanes and regulated closely by the FAA. Since there's no immediately feasible way to retrofit the millions of GPS devices out in the field, LightSquared had to die so that GPS could live.

LightSquared's "harmful interference" makes this sound like a simple case of electromagnetic trespass. But not so fast. LightSquared has had FCC permission to use the spectrum between 1525 and 1559 megahertz, in the "mobile-satellite spectrum" band. That's not where GPS signals are: they're in the next band up, the "radionavigation satellite service" band, which runs from 1559 to 1610 megahertz. According to LightSquared, its systems would be transmitting only in its assigned bandwidth--so if there's interference, it's because GPS devices are listening to signals in a part of the spectrum not allocated to them. Why, LightSquared plausibly asks, should it have a duty of making its own electromagnetic real estate safe for trespassers?

The underlying problem here is that "spectrum" is an abstraction for talking about radio signals, but real-life uses of the airwaves don't neatly sort themselves out according to its categories. In his 1959 article The Federal Communications Commission, Coase explained:

What does not seem to have been understood is that what is being allocated by the Federal Communications Commission, or, if there were a market, what would be sold, is the right to use a piece of equipment to transmit signals in a particular way. Once the question is looked at in this way, it is unnecessary to think in terms of ownership of frequencies of the ether.

Now add to this point Coase's observation about nuisance: that the problem can be solved either by the polluter or the pollutee altering its activities, and so in a sense should be regarded as being caused equally by both of them. So here. "Interference" is a property of both transmitters and receivers; one man's noise is another man's signal. GPS devices could have been designed with different filters from the start, filters that were more aggressive in rejecting signals from the mobile-satellite band. But those filters would have added to the cost of a GPS unit, and worse, they'd have degraded the quality of GPS reception, because they would have thrown out some of the signals from the radionavigation-satellite band. (The only way to build a completely perfect filter is to make it capable of traveling back in time. No kidding!) Since the mobile-satellite band wasn't at the time being used anywhere close to as intensively as LightSquared now proposes to use it, it made good sense to build GPS devices that were sensitive rather than robust.
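The reciprocal-causation point can be made concrete with a toy model. The band edges below come from the post; the "wide front-end" passband is my illustrative assumption about a cheaply filtered receiver, not a measured figure:

```python
# Toy model: "interference" depends on the receiver's filter, not just
# the transmitter's assigned band.
def overlaps(a, b):
    """Do two (low_mhz, high_mhz) frequency bands overlap?"""
    return a[0] < b[1] and b[0] < a[1]

MSS = (1525, 1559)   # LightSquared's assigned mobile-satellite band
RNSS = (1559, 1610)  # GPS band

# An idealized GPS receiver filtered exactly to its own band hears
# nothing from in-band MSS transmissions...
assert not overlaps(MSS, RNSS)

# ...but a receiver with a cheap, wide front-end filter (illustrative
# numbers) picks up the neighboring band too.
wide_front_end = (1550, 1615)
assert overlaps(MSS, wide_front_end)
```

On this view LightSquared "causes" the interference only in the same sense that the wide filter does, which is exactly Coase's point about the polluter and the pollutee.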

There are multiple very good articles on property, tort, and regulatory law lurking in this story. There's one on the question Coase was concerned with: regulation versus ownership as means of choosing between competing uses (like GPS and wireless broadband). There's another on the difficulty of even defining property rights to transmit, given the failure of the "spectrum" abstraction to draw simple bright lines that avoid conflicting uses. There's one on the power of incumbents to gain "possession" over spectrum not formally assigned to them. There's another on investment costs and regulatory uncertainty: LightSquared has already launched a billion-dollar satellite. And there's one on technical expertise and its role in regulatory policy. Utterly fascinating.

Posted by James Grimmelmann on February 15, 2012 at 10:12 AM in Information and Technology | Permalink | Comments (1)

Wednesday, February 08, 2012

Criminalizing Cyberbullying and the Problem of CyberOverbreadth

In the past few years, reports have attributed at least fourteen teen suicides to cyberbullying. Phoebe Prince of Massachusetts, Jamey Rodemeyer of New York, Megan Meier of Missouri, and Seth Walsh of California are just some of the children who have taken their own lives after being harassed online and off.

These tragic stories are a testament to the serious psychological harm that sometimes results from cyberbullying, defined by the National Conference of State Legislatures as the "willful and repeated use of cell phones, computers, and other electronic communications devices to harass and threaten others." Even when victims survive cyberbullying, they can suffer psychological harms that last a lifetime. Moreover, an emerging consensus suggests that cyberbullying is reaching epidemic proportions, though reliable statistics on the phenomenon are hard to come by. Who, then, could contest that the social problem of cyberbullying merits a legal response?

In fact, a majority of states already have legislation addressing electronic harassment in some form, and fourteen have legislation that explicitly uses the term cyberbullying. (Source: here.) What's more, cyber-bullying legislation has been introduced in six more states: Georgia, Illinois, Kentucky, Maine, Nebraska, and New York. A key problem with much of this legislation, however, is that legislators have often conflated the legal definition of cyberbullying with the social definition. Though understandable, this tendency may ultimately produce legislation that is unconstitutional and therefore ineffective at remedying the real harms of cyberbullying.

Consider, for instance, a new law proposed just last month by New York State Senator Jeff Klein (D-Bronx) and Assemblyman William Scarborough. Like previous cyberbullying proposals, the New York bill was triggered by tragedy. The proposed legislation cites its justification as the death of 14-year-old Jamey Rodemeyer, who committed suicide after being bullied about his sexuality. Newspaper accounts also attribute the impetus for the legislation to the death of Amanda Cummings, a 15-year-old New York teen who committed suicide by stepping in front of a bus after she was allegedly bullied at school and online. In light of these terrible tragedies, it is easy to see why New York legislators would want to take a symbolic stand against cyberbullying and join the ranks of states taking action against it.

The proposed legislation (S6132-2011) begins modestly enough by "modernizing" pre-existing New York law criminalizing stalking and harassment. Specifically, the new law amends various statutes to make clear that harassment and stalking can be committed by electronic as well as physical means. More ambitiously, the new law increases penalties for cyberbullying of "children under the age of 21," and broadly defines the activity that qualifies for criminalization under the act. The law links cyberbullying with stalking, stating that "a person is guilty of stalking in the third degree when he or she intentionally, and for no legitimate purpose, engages in a course of conduct directing electronic communication at a child [ ], and knows or reasonably should know that such conduct: (a) causes reasonable fear of material harm to the physical health, safety or property of such child; or (b) causes material harm to the physical health, emotional health, safety or property of such child." (emphasis mine) Even a single communication to multiple recipients about (and not necessarily to) a child can constitute a "course of conduct" under the statute.

Like the sponsors of this legislation, I deplore cyber-viciousness of all varieties, but I also condemn the tendency of legislators to offer well-intentioned but sloppily drafted and constitutionally suspect proposals to solve pressing social problems. In this instance, the legislation opts for a broad definition of cyberbullying based on legislators' desire to appear responsive to the problem. The broad statutory definition (and perhaps the resort to criminalization rather than other remedies) creates positive publicity for legislators, but broad legal definitions that encompass speech and expressive activities are almost always constitutionally overbroad under the First Amendment.

Again, consider the New York proposal. The mens rea element of the offense requires only that a defendant "reasonably should know" that "material harm to the . . . emotional health" of his target will result, and it is not even clear what constitutes "material harm." Seemingly, therefore, the proposed statute could be used to prosecute teen girls gossiping electronically from their bedrooms about another teen's attire or appearance. Likewise, the statute could arguably criminalize a Facebook posting by a 20-year-old college student casting aspersions on his ex-girlfriend. In both instances, the target of the speech almost certainly would be "materially" hurt and offended upon learning of it, and the speakers likely should reasonably know such harm would occur. Just as clearly, however, criminal punishment of "adolescent cruelty," which was a stated justification of the legislation, is an unconstitutional infringement on freedom of expression.

Certainly the drafters of the legislation may be correct in asserting that "[w]ith the use of cell phones and social networking sites, adolescent cruelty has been amplified and shifted from school yards and hallways to the Internet, where a nasty, profanity-laced comment, complete with an embarrassing photo, can be viewed by a potentially limited [sic] number of people, both known and unknown." They may also be correct to assert that prosecutors need new tools to deal with a "new breed of bully." Neither assertion, however, justifies ignoring the constraints of First Amendment law in drafting a legislative response. To do so potentially misdirects prosecutorial resources, misallocates taxpayer money that must be devoted to passing and later defending an unconstitutional law, and blocks the path toward legal reforms that would address cyberbullying more effectively.

With regard to criminal law, a meaningful response to cyberbullying--one that furthers the objectives of deterrence and punishment of wrongful behavior--would be precise and specific in defining the targeted conduct. A meaningful response would carefully navigate the shoals of the First Amendment's protection of speech, acknowledging that some terrible behavior committed through speech must be curtailed through educating, socializing, and stigmatizing perpetrators rather than criminalizing and censoring their speech.

Legislators may find it difficult to address all the First Amendment ramifications of criminalizing cyberbullying, partly because the term itself potentially obscures analysis. Cyberbullying is an umbrella term that covers a wide variety of behaviors, including threats, stalking, harassment, eavesdropping, spoofing (impersonation), libel, invasion of privacy, fighting words, rumor-mongering, name-calling, and social exclusion. The First Amendment constraints on criminalizing the speech behavior involved in cyberbullying depend on which category of speech behavior is involved. Some of these behaviors, such as issuing "true threats" to harm another person or taunting them with "fighting words," lie outside the protection of the First Amendment. (See Virginia v. Black and Chaplinsky v. New Hampshire; but see R.A.V. and my extended analysis here.) Some other behaviors that may cause deep emotional harm, such as name-calling, are just as clearly protected by the First Amendment in most contexts. (Compare, e.g., Cohen v. California with FCC v. Pacifica).

But context matters profoundly in determining the scope of First Amendment protection of speech. Speech in schools and workplaces can be regulated in ways that speech in public spaces cannot (See, e.g., Bethel School Dist. No. 403 v. Fraser). Even within schools, the speech of younger minors can be regulated in ways that speech of older minors cannot (Cf. Hazelwood with Joyner v. Whiting (4th Cir.)), and speech that is part of the school curriculum can be regulated in ways that political speech cannot. (Compare, e.g., Tinker with Hazelwood). Outside the school setting, speech on matters of public concern receives far more First Amendment protection than speech dealing with other matters, even when such speech causes tremendous emotional upset. (See Snyder v. Phelps). But speech targeted at children likely can be regulated in ways that speech targeted at adults cannot, given the high and possibly compelling state interest in protecting the well-being of at least younger minors. (But see Brown v. Ent. Merchants Ass'n). Finally, even though a single instance of offensive speech may be protected by the First Amendment, the same speech repeated enough times might become conduct subject to criminalization without exceeding constitutional constraints. (See Pacifica and the lower court cases cited here).

Any attempt to use criminal law to address the social phenomenon should probably start with the jurisprudential question of which aspects of cyberbullying are best addressed by criminal law, which are best addressed by other bodies of law, and which are best left to non-legal control. Once that question is answered, criminalization of cyberbullying should proceed by identifying the various forms cyberbullying can take and then researching the specific First Amendment constraints, if any, on criminalizing that form of behavior or speech. This approach should lead legislators to criminalize only particularly problematic forms of narrowly defined cyberbullying, such as . While introducing narrow legislation of this sort may not be as satisfying as criminalizing "adolescent cruelty," it is far more likely to withstand constitutional scrutiny and become a meaningful tool to combat serious harms.

Proposals to criminalize cyberbullying often seem to proceed from the notion that we will know it when we see it. In fact, most of us probably will: we all recognize the social problem of cyberbullying, defined as engaging in electronic communication that transgresses social norms and inflicts emotional distress on its targets. But criminal law cannot be used to punish every social transgression, especially when many of those transgressions are committed through speech, a substantial portion of which may be protected by the First Amendment.

[FYI: This blog post is the underpinning of a talk I'm giving at the Missouri Law Review's Symposium on Cyberbullying later in the week, and a greatly expanded and probably significantly changed version will ultimately appear in the Missouri Law Review, so I'd particularly appreciate comments. In the article, I expect to create a more detailed First Amendment guide for conscientious lawmakers seeking to regulate cyberbullying. I am especially excited about the symposium because it includes mental health researchers and experts as well as law professors. Participants include Barry McDonald (Pepperdine), Ari Waldman (Cal. Western), John Palfrey (Berkman Center at HLS), Melissa Holt (B.U.), Mark Small (Clemson), Philip Rodkin (U. Ill.), Susan P. Limber (Clemson), Daniel Weddle (UMKC), and Joe Laramie (consultant/former director of Missouri A.G. Internet Crimes Against Children Taskforce).]

Posted by Lyrissa Lidsky on February 8, 2012 at 08:37 AM in Constitutional thoughts, Criminal Law, Current Affairs, First Amendment, Information and Technology, Lyrissa Lidsky, Web/Tech | Permalink | Comments (8) | TrackBack

Friday, February 03, 2012

The Used CD Store Goes Online

On Monday, Judge Sullivan of the Southern District of New York will hear argument on a preliminary injunction motion in Capitol Records v. ReDigi, a copyright case that could be one of the sleeper hits of the season. ReDigi is engaged in the seemingly oxymoronic business of "pre-owned digital music" sales: it lets its customers sell their music files to each other. Capitol Records, unamused, thinks the whole thing is blatantly infringing and wants it shut down, NOW.

There are oodles of meaty copyright issues in the case -- including many that one would not think would still be unresolved at this late date. ReDigi is arguing that what it's doing is protected by first sale: just as with physical CDs, resale of legally purchased copies is legal. Capitol's counter is that no physical "copy" changes hands when a ReDigi user uploads a file and another user downloads it. This disagreement cuts to the heart of what first sale means and is for in this digital age. ReDigi is also making a quiver's worth of arguments about fair use (when users upload files that they then stream back to themselves), public performance (too painfully technical to get into on a general-interest blog), and the responsibility of intermediaries for infringements initiated by users.

I'd like to dwell briefly on one particular argument that ReDigi is making: that what it is doing is fully protected under section 117 of the Copyright Act. That rarely-used section says it's not an infringement to make a copy of a "computer program" as "an essential step in the utilization of the computer program." In ReDigi's view, the "mp3" files that its users download from iTunes and then sell through ReDigi are "computer programs" that qualify for this defense. Capitol responds that in the ontology of the Copyright Act, MP3s are data ("sound recordings," to be precise), not programs.

I winced when I read these portions of the briefs.

In the first place, none of the files being transferred through ReDigi are MP3s. ReDigi only works with files downloaded from the iTunes Store, and the only format that iTunes sells in is AAC (Advanced Audio Coding), not MP3. It's a small detail, but the parties' agreement to a false "fact" virtually guarantees that their error will be enshrined in a judicial opinion, leading future lawyers and courts to think that any digital music file is an "MP3."

Worse still, the distinction that divides ReDigi and Capitol -- between programs and data -- is untenable. Even before there were actual computers, Alan Turing proved that there is no difference between program and data. In a brilliant 1936 paper, he showed that any computer program can be treated as the data input to another program. We could think of an MP3 as a bunch of "data" that is used as an input to a music player. Or we could think of the MP3 as a "program" that, when run correctly, produces sound as an output. Both views are correct -- which is to say that, to the extent the Copyright Act distinguishes a "program" from any other information stored in a computer, it rests on a distinction that collapses if you push too hard on it. Whether ReDigi should be able to use this "essential step" defense, therefore, has to rest on a policy judgment that cannot be derived solely from the technical facts of what AAC files are and how they work. But again, since the parties agree that there is a technical distinction and that it matters, we can only hope that the court realizes they're both blowing smoke.
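Turing's point can be made concrete in a few lines of Python (my own toy illustration; nothing like this appears in the briefs):

```python
# The very same string of characters is "data" to one program and a
# "program" to another. Nothing in the bytes themselves marks them
# as one or the other.
blob = "sum(i * i for i in range(4))"

# Viewed as data: text we can store, copy, or measure.
n_chars = len(blob)          # 28 characters

# Viewed as a program: input fed to an evaluator (itself a program).
result = eval(blob)          # 0 + 1 + 4 + 9 = 14

print(n_chars, result)       # prints: 28 14
```

Swap the evaluator for an audio decoder and `blob` for a music file, and the same duality holds: the decoder treats the file as data, yet one could equally describe the file as a program whose output is sound.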

Posted by James Grimmelmann on February 3, 2012 at 11:59 PM in Information and Technology, Intellectual Property | Permalink | Comments (16)

Monday, December 19, 2011

Breaking the Net

Mark Lemley, David Post, and Dave Levine have an excellent article in the Stanford Law Review Online, Don't Break the Internet. It explains why proposed legislation, such as SOPA and PROTECT IP, is so badly-designed and pernicious. It's not quite clear what is happening with SOPA, but it appears to be scheduled for mark-up this week. SOPA has, ironically, generated some highly thoughtful writing and commentary - I recently read pieces by Marvin Ammori, Zach Carter, Rebecca MacKinnon / Ivan Sigal, and Rob Fischer.

There are two additional, disturbing developments. First, the public choice problems that Jessica Litman identifies with copyright legislation more generally are manifestly evident in SOPA: Rep. Lamar Smith, the SOPA sponsor, gets more campaign donations from the TV / movie / music industries than any other source. He's not the only one. These bills are rent-seeking by politically powerful industries; those campaign donations are hardly altruistic. The 99% - the people who use the Internet - don't get a seat at the bargaining table when these bills are drafted, negotiated, and pushed forward. 

Second, representatives such as Mel Watt and Maxine Waters have not only admitted to ignorance about how the Internet works, but have been proud of that fact. They've been dismissive of technical experts such as Vint Cerf - he's only the father of TCP/IP - and folks such as Steve King of Iowa can't even be bothered to pay attention to debate over the bill. I don't mind that our Congresspeople are not knowledgeable about every subject they must consider - there are simply too many - but I am both concerned and offended that legislators like Watt and Waters are proud of being fools. This is what breeds inattention to serious cybersecurity problems while lawmakers freak out over terrorists on Twitter. (If I could have one wish for Christmas, it would be that every terrorist would use Twitter. The number of Navy SEALs following them would be... sizeable.) It is worrisome when our lawmakers not only don't know how their proposals will affect the most important communications platform in human history, but overtly don't care. Ignorance is not bliss; it is embarrassment.

Cross-posted at Info/Law.

Posted by Derek Bambauer on December 19, 2011 at 01:49 PM in Blogging, Constitutional thoughts, Corporate, Current Affairs, Film, First Amendment, Information and Technology, Intellectual Property, Law and Politics, Music, Property, Television, Web/Tech | Permalink | Comments (1) | TrackBack

Wednesday, December 14, 2011

Six Things Wrong with SOPA

America is moving to censor the Internet. The PROTECT IP and Stop Online Piracy Acts have received considerable attention in the legal and tech world; SOPA's markup in the House occurs tomorrow. I'm not opposed to blacklisting Internet sites on principle; however, I think that thoughtful procedural protections are vital to doing so in a legitimate way. Let me offer six things that are wrong with SOPA and PROTECT IP: they harm cybersecurity, are wildly overbroad and vague, enable unconstitutional prior restraint, undercut American credibility on Internet freedom, damage a well-working system for online infringement, and lack any empirical justification whatsoever. And, let me address briefly Floyd Abrams's letter in support of PROTECT IP, as it is frequently adverted to by supporters of the legislation. (The one-word summary: "sellout." The longer summary: The PROTECT IP letter will be to Abrams' career what the Transformers movie was to that of Orson Welles.)

  1. Cybersecurity - the bills make cybersecurity worse. The most significant risk is that they impede - in fact, they'd prevent - the deployment of DNSSEC, which is vitally important to reducing phishing, man-in-the-middle attacks, and similar threats. Technical experts are unanimous on this - see, for example, Sandia National Laboratories, or Steve Crocker, Paul Vixie, Dan Kaminsky, et al. Idiots, like the MPAA's Michael O'Leary, disagree, and simply assert that "the codes change." (This is what I call "magic elf" thinking: we can just get magic elves to change the Internet to solve all of our problems. Congress does this, too, as when it includes imaginary age-verifying technologies in Internet legislation.) Both bills would mandate that ISPs redirect users away from targeted sites, to government warning notices such as those employed in domain name seizure cases. But, this is exactly what DNSSEC seeks to prevent - it ensures that the only content returned in response to a request for a Web site is that authorized by the site's owner. There are similar problems with IP-based redirection, as Pakistan's inadvertent hijacking of YouTube demonstrated. It is ironic that at a time when the Obama administration has designated cybersecurity as a major priority, Congress is prepared to adopt legislation that makes the Net markedly less secure.
  2. Wildly overbroad and vague - the legislation (particularly SOPA) is a blunderbuss, not a scalpel. Sites eligible for censoring include those:
      • primarily designed or operated for copyright infringement, trademark infringement, or DMCA § 1201 infringement
      • with a limited purpose or use other than such infringement
      • that facilitate or enable such infringement
      • that promote their use to engage in infringement
      • that take deliberate actions to avoid confirming high probability of such use

    If Flickr, Dropbox, and YouTube were located overseas, they would plainly qualify. Targeting sites that "facilitate or enable" infringement is particularly worrisome - this charge can be brought against a huge range of sites, such as proxy services or anonymizers. User-generated content sites are clearly dead. And the vagueness inherent in these terms means two things: a wave of litigation as courts try to sort out what the terminology means, and a chilling of innovation by tech startups.

  3. Unconstitutional prior restraint - the legislation engages in unconstitutional prior restraint. On filing an action, the Attorney General can obtain an injunction that mandates blocking of a site, or the cutoff of advertising and financial services to it - before the site's owner has had a chance to answer, or even appear. This is exactly backwards: the Constitution teaches that the government cannot censor speech until it has made the necessary showing, in an adversarial proceeding - typically under strict scrutiny. Even under the more relaxed, intermediate scrutiny that characterizes review of IP law, censorship based solely on the government's say-so is forbidden. The prior restraint problem is worsened as the bills target the entire site via its domain name, rather than focusing on individualized infringing content, as the DMCA does. Finally, SOPA's mandatory notice-and-takedown procedure is entirely one-sided: it requires intermediaries to cease doing business with alleged infringers, but does not create any counter-notification akin to Section 512(g) of the DMCA. The bills tilt the table towards censorship. They're unconstitutional, although it may well take long and expensive litigation to demonstrate that.
  4. Undercuts America's moral legitimacy - there is an irreconcilable tension between these bills and the position of the Obama administration - especially Secretary of State Hillary Clinton - on Internet freedom. States such as Iran also mandate blocking of unlawful content; that's why Iran blocked our "virtual embassy" there. America surrenders the rhetorical and moral advantage when it, too, censors on-line content with minimal process. SOPA goes one step farther: it permits injunctions against technologies that circumvent blocking - such as those funded by the State Department. This is fine with SOPA adherents; the MPAA's Chris Dodd is a fan of Chinese-style censorship. But it ought to worry the rest of us, who have a stake in uncensored Internet communication.
  5. Undercuts DMCA - the notice-and-takedown provisions of the DMCA are reasonably well-working. They're predictable, they scale for both discovering infringing content and removing it, and they enable innovation, such as both YouTube itself and YouTube's system of monetizing potentially infringing content. The bills shift the burden of enforcement from IP owners - which is where it has traditionally rested, and where it belongs - onto intermediaries. SOPA in particular increases the burden, since sites must respond within 5 days of a notification of claimed infringement, with no exception for holidays or weekends. The content industries do not like the DMCA. That is no evidence at all that it is not functioning well.
  6. No empirical evidence - put simply, there is no empirical data suggesting these bills are necessary. The content industries routinely throw around made-up numbers, but they have been frequently debunked. How important are losses from foreign sites that are beyond the reach of standard infringement litigation, versus losses from domestic P2P networks, physical infringement, and the like? Data from places like Switzerland suggests that losses are, at best, minimal. If Hollywood wants America to censor the Internet, it needs to make a convincing case based on actual data, and not moronic analogies to stealing things off trucks. The bills, at their core, are rent-seeking: they would rewrite the law and alter fundamentally Internet free expression to benefit relatively small yet politically powerful industries. (It's no shock two key Congressional aides who worked on the legislation have taken jobs in Hollywood - they're just following Mitch Glazier, Dan Glickman, and Chris Dodd through the revolving door.) The bills are likely to impede innovation by the far larger information technology industry, and indeed to drive some economic activity in IT offshore.

The bills are bad policy and bad law. And yet I expect one of them to pass and be signed into law. Lastly, the Abrams letter: Noted First Amendment attorney Floyd Abrams wrote a letter in favor of PROTECT IP. Abrams's letter is long, but surprisingly thin on substantive legal analysis of PROTECT IP's provisions. It looks like advocacy, but in reality, it is Abrams selling his (fading) reputation as a First Amendment defender to Hollywood. The letter rehearses standard copyright and First Amendment doctrine, and then tries to portray PROTECT IP as a bill firmly in line with First Amendment jurisprudence. It isn't, as Marvin Ammori and Larry Tribe note, and Abrams embarrasses himself by pretending otherwise. Having the government target Internet sites for pre-emptive censorship, and permitting them to do so before a hearing on the merits, is extraordinary. It is error-prone - look at Dajaz1 and mooo.com. And it runs afoul of not only traditional First Amendment doctrine, but in particular the current Court's heightened protection of speech in a wave of cases last term. Injunctions affecting speech are different in character than injunctions affecting other things, such as conduct, and even the cases that Abrams cites (such as Universal City Studios v. Corley) acknowledge this. According to Abrams, the constitutionality of PROTECT IP is an easy call. That's only true if you're Hollywood's sockpuppet. Thoughtful analysis is far harder.

Cross-posted at Info/Law.

Posted by Derek Bambauer on December 14, 2011 at 09:07 PM in Constitutional thoughts, Culture, Current Affairs, Film, First Amendment, Information and Technology, Intellectual Property, Law and Politics, Music, Property, Web/Tech | Permalink | Comments (1) | TrackBack

On the Move

Jane Yakowitz and I have accepted offers from the University of Arizona James E. Rogers College of Law. We're excited to join such a talented group! But, we'll miss our Brooklyn friends. Come visit us in Tucson!

Posted by Derek Bambauer on December 14, 2011 at 05:39 PM in Current Affairs, Getting a Job on the Law Teaching Market, Housekeeping, Information and Technology, Intellectual Property, Life of Law Schools, Teaching Law, Travel | Permalink | Comments (2) | TrackBack

Saturday, December 10, 2011

Copyright and Your Face

The Federal Trade Commission recently held a workshop on facial recognition technology, such as Facebook's much-hated system, and its privacy implications. The FTC has promised to come down hard on companies who abuse these capabilities, but privacy advocates are seeking even stronger protections. One proposal raised was to provide people with copyright in their faceprints or facial features. This idea has two demerits: it is unconstitutional, and it is insane. Otherwise, it seems fine.

Let's start with the idea's constitutional flaws. There are relatively few constitutional limits on Congressional power to regulate copyright: you cannot, for example, have perpetual copyright. And yet, this proposal runs afoul of two of them. First, imagine that I take a photo of you, and upload it to Facebook. Congress is free to establish a copyright system that protects that photo, with one key limitation: I am the only person who can obtain copyright initially. That's because the IP Clause of the Constitution says that Congress may "secur[e] for limited Times to Authors... the exclusive Right to their respective Writings." I'm the author: I took the photograph (copyright nerds would say that I "fixed" it in my camera's memory). The drafters of the Constitution had good reason to limit grants of copyright to authors: England spent almost 150 years operating under a copyright-like monopoly system that awarded entitlements to a distributor, the Stationers' Company. The British crown had an excellent reason for giving the Company a monopoly - the Stationers' Company implemented censorship. Having a single distributor with exclusive rights gives a government but one choke point to control. This is all to say that Congress can only give copyright to the author of a work, and the author is the person who creates / fixes it (here, the photographer). It's unconstitutional to award it to anyone else.

Second, Congress cannot permit facts to be copyrighted. That's partly for policy reasons - we don't want one person locking up facts for life plus seventy years (the duration of copyright) - and partly for definitional ones. Copyright applies only to works of creative expression, and facts don't qualify. They aren't created - they're already extant. Your face is a fact: it's naturally occurring, and you haven't created it. (A fun question, though, is whether a good plastic surgeon might be able to copyright the appearance of your surgically altered nose. Scholars disagree on this one.) So, attempting to work around the author problem by giving you copyright protection over the configuration of your face is also out. So, the proposal is unconstitutional.

It's also stupid: fixing privacy with copyright is like fixing alcoholism with heroin. Copyright infringement is ubiquitous in a world of digital networked computers. Similarly, if we get copyright in our facial features, every bystander who inadvertently snaps our picture with her iPhone becomes an infringer - subject to statutory damages of between $750 and $30,000. Even if few people sue, those who do have a powerful weapon on their side. Courts would inevitably try to mitigate the harsh effects of this regime, probably by finding most such incidents to be fair use. But fair use is an equitable doctrine - it invites courts to inject their normative views into the analysis - and it creates extraordinarily high administrative costs. It's already expensive for filmmakers, for example, to clear all trademarked and copyrighted items from the zones they film (which is why they have errors and omissions insurance). Now, multiply that permissions problem by every single person captured in a film or photograph. It becomes costly even to do the right thing - and leads to strategic behavior by people who see a potential defendant with deep pockets.

Finally, we already have an IP doctrine that covers this area: the right of publicity (which is based in state tort law). The right of publicity at least has some built-in doctrinal elements that deal with the problems outlined above, such as exceptions when one's likeness is used in a newsworthy fashion. It's not as absolute as copyright, and it lacks the hammer of statutory damages, which is probably why advocates aren't turning to it. But those are features, not bugs.

Privacy problems on social networks are real. But we need to address them with thoughtful, tailored solutions, not by slapping copyright on the problem and calling it done.

Cross-posted at Info/Law.

Posted by Derek Bambauer on December 10, 2011 at 06:03 PM in Constitutional thoughts, Corporate, Culture, Current Affairs, Film, First Amendment, Information and Technology, Intellectual Property, Property, Torts | Permalink | Comments (4) | TrackBack

Tuesday, December 06, 2011

Cry Baby Cry

The project to crowdsource a Tighter White Album (hereinafter TWA) is done, and we’ve come up with a list of 15 songs that might have made a better end product than the original. Today I want to discuss whether I've done something wrong, legally or morally. 

I am no expert on European law, or its protection of the moral rights of the author, but I was reminded by Howard Knopf that my hypothetical exercise could generate litigation, as the author has rights against the distortion or mutilation of the work, separate from copyright protection. The current copyright act in the UK bars derogatory "treatments" of the work. A treatment can include "deletion from" the original, and the TWA is just that -- 15 songs were trimmed from the original White Album, ostensibly to make something "better than" the original. To the extent the remaining Beatles and their heirs can agree on anything, it might be the sanctity of the existing discography in its extant form, at least as it encapsulates the end product stemming from the individual proclivities of the Beatles at the time. But see Free as a Bird. Fans and critics reacted strongly to Danger Mouse's splice of Jay-Z's Black Album and the Beatles' White Album, with one critic describing it as "an insult to the legacy of the Beatles (though ironically, probably intended as a tribute)". Could the TWA implicate the moral rights of the Beatles?

 

On one level, my (perhaps unwitting) co-conspirators and I are doing nothing more than music fans have done for generations: debating which songs in an artist's body of work merit approval and which merit opprobrium. Coffee houses and bars are often filled with these discussions. Rolling Stone has made a cottage industry of ranking and reranking the top songs and albums in recent memory. This project is no different.

On the other hand, I am suggesting, by having the audacity to conduct this survey and publish the results, that the lads from Liverpool did it wrong, were too indulgent, etc., in releasing the White Album in its official form. That's different from saying "Revolution #9" is "not as good" as "Back in the U.S.S.R." (or vice versa). But to my eyes, it falls short of distortion.

Moral rights in sound recordings and musical compositions are not explicitly protected under the Copyright Act. In one case predating the effective date of the current Act, the Monty Python troupe was granted an injunction against the broadcast of its skits in heavily edited form on U.S. television, but that case was grounded more in contract law (ABC having exceeded the scope of its license) and in a right not to have the hack job attributed to the Pythons under the Lanham Act.* The TWA doesn't edit individual songs, and while the Monty Python case protected 30-minute Python episodes as a cohesive whole, it is difficult to argue that the copyright owners of the White Album are necessarily committed to the same cohesive view of the White Album, to the extent they sell individual songs online. One can buy individual Beatles songs, even from the White Album. Once you can buy individual tracks, can there really be moral rights implications in posting my preferred version of the album in a format that allows you to go and buy it?

On to the standard rights protected under U.S. copyright law. Yesterday, I talked about the possibility that the list itself might be a compilation, with protectable creativity in the selection. Might the TWA also be an unauthorized derivative work, exposing me to copyright liability? A derivative work is one "based on" a preexisting work, in which the original is "recast, transformed or adapted." That's similar to the language used to describe a treatment under UK law. Owners of sound recordings often release new versions, with songs added, outtakes included, and bonus art, ostensibly to sell copies to consumers who already purchased them. I certainly didn't ask the Beatles (or more precisely, the copyright owner of the White Album) for permission to propose a shortened album, but what I have done looks like an abridgement of the sort that might fall into traditional notions of fair use.

Once upon a time, I might have made a mixtape and distributed it to my dearest friends (although when I was young, the 45-minute tape was optimal, so I might have been forced to cut another song or two). Committing my findings to vinyl, compact disc, or mp3, using the original recordings, technically violates 17 USC 106(1)'s prohibition on unauthorized reproduction. If I give an unauthorized copy to someone else, I violate the exclusive right to distribute under section 106(3). Unlike the public performance and display rights, there is no express carve-out for "private" copying and/or distribution, although it was historically hard to detect. The mixtape in its analog form seems like the type of private use that should be permitted under any reasonable interpretation of fair use, if not insulated by statute.

If I send my digital mixtape to all of my Facebook friends, that seems a bridge too far. However, Megan Carpenter has suggested that by failing to make room for the mix tape in the digital environment, copyright law "breeds contempt." 11 Nev. L.J. 44, 79-80 (2010).  Jessica Litman, Joseph Liu, Glynn Lunney and Rebecca Tushnet, among others, have argued that space for personal consumption is as important in the digital realm as it was in the good old days when everything was analog.

If I instead use social networking tools like Spotify Social** to share my playlist, I probably don't infringe the § 106(4) and (6) public performance rights. Because I use authorized channels, any streaming you do to preview my playlist is likely authorized. And if I post the playlist on iTunes, you can go and buy it as constituted. That seems somewhat closer to an unauthorized copy, but it's not actually unauthorized. The Beatles sell individual singles through iTunes, so it seems problematic to conclude that consumers are not authorized to buy only those songs they prefer.

So all in all, given that I'm not running a CD burner in my office, I think I'm in the clear. What do you think?

*A recent Supreme Court decision puts in doubt the Lanham Act portion of the Monty Python holding.

**The Spotify Social example is complicated by the fact that the Beatles aren't included, although I have found reasonable covers of all the songs included on the TWA. The copyright act explicitly provides for a compulsory license to make cover tunes, so long as the cover doesn't deviate too drastically from the original. 17 USC § 115(a). If the license was paid, and the copyright owner notified, those songs are authorized. My repackaging of them in a virtual mixtape, however, is not. 17 U.S.C. § 114(b).

 

Posted by Jake Linford on December 6, 2011 at 07:31 PM in Information and Technology, Intellectual Property, Music | Permalink | Comments (1) | TrackBack

Revisiting the Scary CFAA

Last April, I blogged about the Nosal case, which led to the scary result that just about any breach of contract on the internet can potentially be a criminal access to a protected computer. I discuss the case in extensive detail in that post, so I won't repeat it here. The gist is that employees who had access to a server in their ordinary course of work were held to have exceeded their authorization when they accessed that same server with the intent of funneling information out to a competitive ex-employee. The scary extension is that anyone breaching a contract with a web provider might then be considered to be accessing the web server in excess of authorization, and therefore committing a crime.

I'm happy to report that Nosal is now being reheard in the Ninth Circuit. I'm hopeful that the court will do something to rein in the case.

I think most of my colleagues agree with me that the broad interpretation of the statute is a scary one. Where some depart, though, is on the interpretive question. As you'll see in the comments to my last post, there is some disagreement about how to interpret the statute and whether it is void for vagueness. I want to address some of the continuing disagreement after the jump.

I think there are three ways to look at Nosal:

    1. The ruling was right, and the extension to all web users is fine (ouch);

    2. The ruling was right as to the Nosal parties, but should not be extended to all web users; and

    3. The ruling was not right as to the Nosal parties, and also wrong as to all web users.

I believe where I diverge from many of my cyberlaw colleagues is that I fall into group two. I hope to explain why, and perhaps suggest a way forward. Note that I'm not a con law guy, and I'm not a crim law guy, but I am an internet statute guy, so I call the statutory interpretation like I see it.

I want to focus on the notion of authorization. The statute at issue, the Computer Fraud and Abuse Act (or CFAA), outlaws obtaining information from networked computers if one "intentionally accesses a computer without authorization or exceeds authorized access."

Orin Kerr, a leader in this area, wrote a great post yesterday that did two things. First, it rejected tort-based trespass rules like implied consent as too vague for a criminal statute. On this, I agree. Second, it defined "authorization" with respect to other criminal law treatment of consent. In short, the idea is that if you consent to access in the first place, then doing bad things in violation of the promises made does not mean lack of consent to access. On this, I agree as well.

But here's the rub: the statute says "without authorization or exceeds authorized access." And this second phrase has to mean something. The goal, for me at least, is that it covers the Nosal case but not the broad range of activity on the internet. Professor Kerr, I suspect, would say that the only way to do that is for it to be vague, and if so, then the statute must be vague.

I'm OK with the court going that way, but here's my problem with the argument. The statute isn't necessarily vague. Let's say that the scary broad interpretation from Nosal means that every breach of contract is now a criminal act on the web. That's not vague. Breach a contract, then you're liable; there's no wondering whether you have committed a crime or not.

Of course, the contract might be vague, but that's a factual issue that can be litigated. It is not unheard of to have a crime based on failure to live up to an agreement to do something. A dispute about what the agreement was is not the same as being vague. Does that mean I like it? No. Does that mean it's crazy overbroad? Yes. Does that mean everyone's at risk and someone should do something about this nutty statute? Absolutely.

Now, here is where some vagueness comes in - only some breaches lead to exceeded access, and some don't. How are we to decide which is which? The argument Professor Kerr takes on is tying it to trespass, and I agree that doesn't work.

So, I return to my suggestion from several months ago - we should look to the terms of authorization of access to see whether they have been exceeded. This means that if you are an employee who accesses information for a purpose you know is not authorized, then you are exceeding authorization. It also means that if the terms of service on a website say explicitly that you must be truthful about your age or you are not authorized to access the site, then you are unauthorized. And that's not always an unreasonable access limitation. If there were a kids-only website that excluded adults, I might well want to criminalize access obtained by people lying about their age. That doesn't mean all access terms are reasonable, but I'm not troubled by that from a statutory interpretation standpoint.

I'm sure one can attack this as vague - it won't always be clear when a term is tied to authorization. But then again, if it is not a clear term of authorization, the state shouldn't be able to prove that authorization was exceeded. This does mean that snoops, and people who don't read website terms (me included), are at risk of violating terms of access they never saw or agreed to. I don't like that part of the law, and it should be changed. I'm fine with making it more limiting in ways that Professor Kerr and others have suggested.

But I don't know that it is invalid as vague - there are lots of things that may be illegal that people don't even know are on the books. With terms of service, at least, people have some chance of knowing the rules and simply choose not to read them. That doesn't mean it isn't scary, because I don't see behavior (including my own) changing anytime soon.

Posted by Michael Risch on December 6, 2011 at 05:18 PM in Information and Technology, Web/Tech | Permalink | Comments (8) | TrackBack

Monday, December 05, 2011

While My (Favorite Beatles Song) Gently Weeps

The voting is done and the world has (or 264 entities voting in unique user sessions have) selected the songs for "The Tighter" White Album (hereinafter TWA). The survey invited voters to make pairwise comparisons between two Beatles songs, under the premise that one could be kept, and one would be cut.

There are several copyright-related implications of my experiment, and I wanted to unpack a few of them. Today, my thoughts on the potential authorship and ownership of the list itself. Tomorrow, a few thoughts on moral rights, whether I’ve done something wrong, and whether what I've done is actionable. [Edited to add hyperlink to Part II]

But first, the results -- An album's worth of music (two sides no longer than 24:25 each, the length of Side Four of the original), ranked from strongest to weakest:

SIDE ONE:

While My Guitar Gently Weeps

Blackbird

Back in the USSR

Happiness is a Warm Gun

Dear Prudence

Revolution 1

Ob-la-di, Ob-la-Da

SIDE TWO:

Helter Skelter

I'm So Tired

I Will

Julia

Rocky Raccoon

Mother Nature's Son

Cry Baby Cry

Sexy Sadie

How did the voters do? Very well, by my estimation. I was pleasantly surprised by the balance. McCartney and Lennon each sang (which by this point in their career was a strong signal of primary authorship) 12 of the 30 tracks, and each had 7 selections on the TWA. (John also wrote "Good Night," which was sung by Ringo and overproduced at Paul's behest, so I think it can be safely cabined.) Only one of George Harrison's four compositions, "While My Guitar Gently Weeps," made the cut, but it was the strongest finalist. Ringo's "Don't Pass Me By," no critical darling, did poorly in the final assessment.*
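For the curious, the tallying step in a survey like this is mechanical. Here is a minimal sketch of how pairwise keep/cut ballots might be converted into a ranking; the ballots below are invented, and the actual survey's data and tie-breaking rules aren't described here:

```python
from collections import defaultdict

def rank_songs(votes):
    """Rank songs by the share of pairwise matchups each one won.

    `votes` is a list of (kept, cut) pairs: each ballot kept one
    song and cut the other.
    """
    wins = defaultdict(int)
    appearances = defaultdict(int)
    for kept, cut in votes:
        wins[kept] += 1
        appearances[kept] += 1
        appearances[cut] += 1
    # Sort by win rate, strongest first.
    return sorted(appearances, key=lambda s: wins[s] / appearances[s], reverse=True)

# Hypothetical ballots -- not the real survey data.
ballots = [
    ("While My Guitar Gently Weeps", "Revolution 9"),
    ("Blackbird", "Wild Honey Pie"),
    ("While My Guitar Gently Weeps", "Wild Honey Pie"),
    ("Blackbird", "Revolution 9"),
    ("While My Guitar Gently Weeps", "Blackbird"),
]
print(rank_songs(ballots))
```

Ranking by raw win rate is only one plausible choice; a Bradley-Terry or Elo-style model would handle uneven matchup counts more gracefully.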

It's possible, although highly unlikely in this instance, that the list of songs is copyrightable expression. As a matter of black letter law, one who compiles other copyrighted works may secure copyright protection in the

collection and assembling of preexisting materials or of data that are selected, coordinated, or arranged in such a way that the resulting work as a whole constitutes an original work of authorship.

Protection only extends to the material contributed by the author. The Second Circuit has found copyrightable expression in the exercise of judgment as expressed in a prediction about the price of used cars over the next six months, even where the prediction was striving to map as closely as possible to the actual value of cars in those markets. Other Second Circuit cases recognize copyright protection in the selection of terms of venery -- labels for groups of animals (e.g., a pride of lions) -- and in the selection of nine pitching statistics from among scores of potential stats. In each of these cases, there was some judgment exercised about what to include or what not to include.

In this case, I proposed the question, put together the survey, monitored the queue, and recruited respondents through various channels. The voting, however, was actually done by multiple individuals selecting between pairs of songs. It's difficult to paint that as a "work of authorship" in any traditional sense of the phrase. I set up the experiment and then cut it loose. I could have made my own list (and have, but I won't bore you with that), and that list would have been my own work of authorship. This seems like something different, because I'm not making any independent judgment (other than the decision to limit the length of the TWA to twice the length of the longest side of the White Album).

Let's assume for a moment that there is protectable expression, even though I crowdsourced the selection process. Could it be that all 246 voters are joint authors with me in this work? It seems unlikely. The black letter test asks (1) whether we all intended our independent, copyrightable contributions to merge into an inseparable whole, and (2) whether we intended everyone to be a co-author. It's hard to call an individual vote between two songs a separately copyrightable contribution, even with the prompt: "The Beatles' White Album might have been stronger with fewer songs. Which song would you keep?" By atomizing the decision, I might be insulated from claims that individual voters are co-authors of the final list, although I suggested that there was something cooperative about this event in my description of the vote:

We’re crowdsourcing a “Tighter White Album.” Some say the White Album would have been better if it was shorter, which requires cutting some songs. Together, we can work it out. For each pair, vote for the song you would keep. Vote early and often, and share this with your friends. The voting will go through the end of November.

Still, to the extent they took seriously my admonitions, the readers were endeavoring to decide which of the two songs presented belonged on the TWA, whatever the factors that played into the decision. Might that choice also be protected in individual opinions sorted in a certain fashion? This really only matters if I make money from the proposed TWA. I would then need to make an accounting to my joint authors. And even if the vote itself was copyrightable expression, the voter likely granted me an implied license to include it in my final tally.

Should I have copyright protection in this list? Copyright protection is arguably granted to give authors (term of art) the incentive to create expressive works. I didn't need copyright protection as an incentive: I ran the survey so that I could talk about the results (and to satisfy my own curiosity). And my purposes are served if others take the results and run with them (although I would prefer to be attributed). Maybe no one else needs copyright protection, either, as lists ranking Beatles songs abound on the internet. Rolling Stone magazine has built a cottage industry on ranking and reranking the popular music output of the last 60 years, but uses its archives of rankings as an incentive to pay for a subscription. If the rankings didn't sell, magazines would likely do something else.

As an alternative, Rolling Stone might also arguably benefit from common law protection against the misappropriation of hot news, granted by the Supreme Court in INS v. AP, which would provide narrow injunctive relief to allow it to sell its news before others can copy without permission. The magazine might have trouble with recent precedent from the 2d Circuit, which held that making the news does not qualify for hot news protection, although reporting the news might. So if I reproduce Rolling Stone's list (breaking news: Rolling Stone prefers Sonic Youth to Britney Spears), that might fall outside of hot news misappropriation, although perhaps not outside of copyright protection itself.

 

*Two personal reflections: (1) I am astounded that Honey Pie didn't make the cut. Perhaps voters confused it with Wild Honey Pie, which probably deserved its lowest ranking. (2) I sing Good Night to my five-year old each night as a lullaby, and my world would be different without it. That is the inherent danger in a project like mine, and those who criticize the very idea that the White Album would have been better had it been shorter can marshal my own anecdotal evidence in support of their skepticism.

Posted by Jake Linford on December 5, 2011 at 03:35 PM in Information and Technology, Intellectual Property, Music | Permalink | Comments (1) | TrackBack

Sunday, November 27, 2011

Threading the Needle

Imagine that Ron Wyden fails: either PROTECT IP or SOPA / E-PARASITE passes and is signed into law by President Obama. Advocacy groups such as the EFF would launch an immediate constitutional challenge to the bill’s censorship mandates. I believe the outcome of such litigation is far less certain than either side believes. American censorship legislation would keep lots of lawyers employed (always a good thing in a down economy), and might generate some useful First Amendment jurisprudence. Let me sketch three areas of uncertainty that the courts would have to resolve, and that improve the odds that such a bill would survive.

First, how high is the constitutional barrier to the legislation? Both bills look like systems of prior restraint, which loads the government with a “heavy presumption” against their constitutionality. The Supreme Court’s jurisprudence in the two most relevant prior cases, Reno v. ACLU and Ashcroft v. ACLU, applied strict scrutiny: laws must serve a compelling government interest, and be narrowly tailored to that interest. This looks bad for the state, but wait: we’re dealing with laws regulating intellectual property, and such laws draw intermediate scrutiny at most. This is what I call the IP loophole in the First Amendment. Copyright law, for example, enjoys more lenient treatment under free speech examination because the law has built-in safeguards such as fair use, the idea-expression dichotomy, and the (ever-lengthening) limited term of rights.

Moreover, it’s not certain that the bills even regulate speech. Here, I mean “speech” in its First Amendment sense, not the colloquial one. Burning one’s draft card at a protest seems like speech to most of us – the anti-war message is embodied within the act – but the Supreme Court views it as conduct. And conduct can be regulated so long as the government meets the minimal strictures of rational review. The two bills focus on domain name filtering – they impede users from reaching certain on-line material, but formally limit only the conversion of domain name to IP address by an Internet service provider. (I’m skipping over the requirement that search engines de-list such sites, which is a much clearer case of regulating speech.) DNS lookups seem akin to conduct, although the Court’s precedent in this area is hardly a model of lucidity. (Burning the American flag = speech; burning a draft card = conduct. QED.) Other courts have struggled, most notably in the context of the anti-circumvention provisions of the Digital Millennium Copyright Act, to categorize domain names as speech or not-speech, and thus far have found a kind of Hegelian duality to them. That suggests an intermediate level of scrutiny, which would resonate with the IP loophole analysis above.
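The conduct/speech distinction the bills trade on can be made concrete in a toy model of what they actually mandate: the resolver declines to translate a blacklisted name into an address, while the underlying content is untouched. All domain names and IP addresses below are invented for illustration:

```python
# Names an imagined court order has placed on the blocklist.
BLOCKED = {"infringing-site.example"}

# A toy zone file: the resolver's name-to-address table.
ZONE = {
    "infringing-site.example": "203.0.113.7",
    "lawful-site.example": "203.0.113.8",
}

def resolve(domain):
    """Return an IP address, or None (akin to NXDOMAIN) if filtered."""
    if domain in BLOCKED:
        return None  # only the lookup is refused; the site still exists
    return ZONE.get(domain)

# The filter touches the translation step, not the content itself --
# a user who already knows 203.0.113.7 can still reach the site.
assert resolve("lawful-site.example") == "203.0.113.8"
assert resolve("infringing-site.example") is None
```

This is also why critics argue the remedy is both overinclusive (everything at the blocked name disappears) and underinclusive (the IP address keeps working).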

Second, who has standing? It seems that our plaintiffs would need to find a site that conceded it met the definition of a site “dedicated to the theft of U.S. property.” That seems hard to do until filtering begins – at which point whatever ills the legislation creates will have materialized. (It might also expose the site to suits from affected IP owners.) Perhaps Internet service providers could bring a challenge based on either third-party standing (on behalf of their users, if we think users’ rights are implicated, or the foreign sites) or their own speech interests. However, I think it’s unlikely that users would have standing, particularly given the somewhat dilute harm of being unable to reach material on allegedly infringing sites. And, as described above, it’s not clear that ISPs have a speech interest at all: domain name services simply may be conduct. 

Finally, how can we distinguish E-PARASITE or PROTECT IP from similar legislation that passes constitutional muster? Section 1201 of the DMCA, for example, permits liability to be imposed not only on those who make tools for circumventing access controls available, but even on those who knowingly link to such tools on-line. The government can limit distribution of encryption technology – at least as object code – overseas, by treating it as a munition. And thus far, the federal government has been able to seize domain names under civil forfeiture provisions, with nary a quibble from the federal courts.

To be plain: I think both bills are terrible legislation. They’re certain to damage America’s technology innovation industries, which are the crown jewels of our economy and our future competitiveness. They turn over censorship decisions to private actors with no interest whatsoever in countervailing values such as free expression or, indeed, anything other than their own profit margins. And their procedural protections are utterly inadequate – in my view. But I think it is possible that these bills may thread the constitutional needle, particularly given the one-way ratchet of copyright protection before the federal courts. The decision in Ashcroft, for instance, found that end user filtering was a narrower alternative than the Child Online Protection Act. But end user filtering doesn’t work when the person installing the software is not a parent concerned about on-line filth, but one eager to download infringing movies. And that means that legislation may escape narrowness analysis as well. As I wrote in Orwell’s Armchair:

focusing only on content that is clearly unlawful – such as child pornography, obscenity, or intellectual property infringement – has constitutional benefits that can help a statute survive. These categories of material do not count as “speech” for First Amendment analysis, and hence the government need not satisfy strict scrutiny in attacking them. Recent bills seem to show that legislators have learned this lesson – the PROTECT IP Act, for example, targets only those Web sites with “no significant use other than engaging in, enabling, or facilitating” IP infringement. Banning only unprotected material could move censorial legislation past overbreadth objections.

So: the outcome of any litigation is not only highly uncertain, but more uncertain than free speech advocates believe. Please paint a more hopeful story for me, and tell me why I’m wrong.

Cross-posted at Info/Law.

 

Posted by Derek Bambauer on November 27, 2011 at 08:37 PM in Civil Procedure, Constitutional thoughts, Current Affairs, First Amendment, Information and Technology, Intellectual Property, Law and Politics, Web/Tech | Permalink | Comments (0) | TrackBack

Monday, November 21, 2011

How Not To Secure the Net

In the wake of credible allegations of hacking of a water utility, including physical damage, attention has turned to software security weaknesses. One might think that we'd want independent experts - call them whistleblowers, busticati, or hackers - out there testing, and reporting, important software bugs. But it turns out that overblown cease-and-desist letters still rule the day for software companies. Fortunately, when software vendor Carrier IQ attempted to misstate IP law to silence security researcher Trevor Eckhart, the EFF took up his cause. But this brings to mind three problems.

First, unfortunately, EFF doesn't scale. We need a larger-scale effort to represent threatened researchers. I've been thinking about how we might accomplish this, and would invite comments on the topic.

Second, IP law's strict liability, significant penalties, and increasing criminalization can create significant chilling effects for valuable security research. This is why Oliver Day and I propose a shield against IP claims for researchers who follow the responsible disclosure model.

Finally, vendors really need to have their general counsel run these efforts past outside counsel who know IP. Carrier IQ's C&D reads like a high school student did some basic Wikipedia research on copyright law and then ran the resulting letter through Google Translate (English to Lawyer). If this is the aptitude that Carrier IQ brings to IP, they'd better not be counting on their IP portfolio for their market cap.

When IP law suppresses valuable research, it demonstrates, in Oliver's words, that lawyers have hacked East Coast Code in a way it was not designed for. Props to EFF for hacking back.

Cross-posted at Info/Law.

Posted by Derek Bambauer on November 21, 2011 at 09:33 PM in Corporate, Current Affairs, First Amendment, Information and Technology, Intellectual Property, Science, Web/Tech | Permalink | Comments (2) | TrackBack

Friday, November 18, 2011

A Soap Impression of His Wife

As I previewed earlier this week, I want to talk about the copyright implications for 3D printers. A 3D printer is a device that can reproduce a 3-dimensional object by spraying layers of plastic, metal, or ceramic into a given shape. (I imagine the process smelling like those Mold-a-Rama plastic souvenir vending machines prevalent in many museums, a thought simultaneously nostalgic and sickening). Apparently, early adopters are already purchasing the first generation of 3D printers, and there are websites like Thingiverse where you can find plans for items you can print in your home, like these Tardis salt shakers.*

Tardis salt shakers

Perhaps unsurprisingly, there can be copyright implications. A recent NY Times blog post correctly notes that the 3D printer is primarily suited to reproduce what § 101 of the Copyright Act calls "useful articles," physical objects that have "an intrinsic utilitarian function," and which, by definition, receive no copyright protection...except when they do. 

A useful article can include elements that are protectable as a "pictorial, graphic, [or] sculptural work." The elements are protectable to the extent "the pictorial, graphic, or sculptural features...can be identified separately from, and are capable of existing independently of, the utilitarian aspects of the article." There are half a dozen tests courts have employed to determine whether protectable features can be separated from utilitarian aspects. Courts have rejected copyright protection for mannequin torsos and the ubiquitous ribbon bike rack, but granted it for belt buckles with ornamental elements that were not a necessary part of a functioning belt.

[Images: mannequin torso and Vaquero belt buckle]


Print out a "functional" mannequin torso (or post your plans for it on the internet) and you should have no trouble. Post a schematic for the Vaquero belt buckle, and you may well be violating the copyright protection in the sculptural elements. But even that can be convoluted. The case law is mixed on how to think about 2D works derived from 3D works, and vice versa. A substantially similar 3D work can infringe a 2D graphic or pictorial work (Ideal Toy Corp. v. Kenner Prods. Div., 443 F. Supp. 291 (S.D.N.Y. 1977)), but constructing a building without permission from protectable architectural plans was not infringement, prior to a recent revision to the Copyright Act. Likewise, a drawing of a utilitarian item might be protectable as a drawing, but does not grant the copyright holder the right to control the manufacture of the item.

And if consumers are infringing, there is a significant risk that the manufacturer of the 3D printer could be vicariously or contributorily liable for that infringement. The famous Sony decision, which insulated the distribution of devices capable of commercially significant noninfringing uses, even if they could also be used for copyright infringement, has been narrowed both by the recent Grokster filesharing decision and by the DMCA anticircumvention provisions. The easy, but unsatisfying takeaway is that 3D printers will keep copyright lawyers employed for years to come.

Back to the Tardis shakers, for a moment: the individual who posted them to Thingiverse noted that the shaker "is derivative of thingiverse.com/thing:1528 and thingiverse.com/thing:12278", a Tardis sculpture and the lid of a bottle, respectively. I found this striking for two reasons. First, it suggests a custom of attribution on Thingiverse, but I don't yet have a sense for whether it's widespread. Second, if either of those things is protectable as a copyrighted work (which seems more likely for the Tardis sculpture, and less so for the lid), then the Tardis salt shaker may be an unauthorized, and infringing, derivative work, and the decision to offer attribution perhaps unwise in retrospect.

* The TARDIS is the preferred means of locomotion of Doctor Who, the titular character of the long-running BBC science fiction program. It's a time machine / space ship disguised as a 1960s-era London police call box. The shape of the TARDIS, in its distinctive blue color, is protected by three registered trademarks in the UK.

 

Posted by Jake Linford on November 18, 2011 at 09:00 AM in Information and Technology, Intellectual Property, Television, Web/Tech | Permalink | Comments (0) | TrackBack

Thursday, November 17, 2011

Choosing Censorship

Yesterday, the House of Representatives held hearings on the Stop Online Piracy Act (it's being called SOPA, but I like E-PARASITE tons better). There's been a lot of good coverage in the media and on the blogs. Jason Mazzone had a great piece in TorrentFreak about SOPA, and see also stories about how the bill would re-write the DMCA, about Google's perspective, and about the Global Network Initiative's perspective.

My interest is in the public choice aspect of the hearings, and indeed the legislation. The tech sector dwarfs the movie and music industries economically - heck, the video game industry is bigger. Why, then, do we propose to censor the Internet to protect Hollywood's business model? I think there are two answers. First, these particular content industries are politically astute. They've effectively lobbied Congress for decades; Larry Lessig and Bill Patry among others have documented Jack Valenti's persuasive powers. They have more lobbyists and donate more money than companies like Google, Yahoo, and Facebook, which are neophytes at this game. 

Second, they have a simpler story: property rights good, theft bad. The AFL-CIO representative who testified said that "the First Amendment does not protect stealing goods off trucks." That is perfectly true, and of course perfectly irrelevant. (More accurately: it is idiotic, but the AFL-CIO is a useful idiot for pro-SOPA forces.) The anti-SOPA forces can wheel to a simple argument themselves - censorship is bad - but that's somewhat misleading, too. The more complicated, and accurate, arguments are that SOPA lacks sufficient procedural safeguards; that it will break DNSSEC, one of the most important cybersecurity moves in a decade; that it fatally undermines our ability to advocate credibly for Internet freedom in countries like China and Burma; and that IP infringement is not always harmful and not always undesirable. But those arguments don't fit on a bumper sticker or the lede in a news story.

I am interested in how we decide on censorship because I'm not an absolutist: I believe that censorship - prior restraint - can have a legitimate role in a democracy. But everything depends on the processes by which we arrive at decisions about what to censor, and how. Jessica Litman powerfully documents the tilted table of IP legislation in Digital Copyright. Her story is being replayed now with the debates over SOPA and PROTECT IP: we're rushing into decisions about censoring the most important and innovative medium in history to protect a few small, politically powerful interest groups. That's unwise. And the irony is that a completely undemocratic move - Ron Wyden's hold, and threatened filibuster, in the Senate - is the only thing that may force us into more thorough consideration of this measure. I am having to think hard about my confidence in process as legitimating censorship.

Cross-posted at Info/Law.

Posted by Derek Bambauer on November 17, 2011 at 09:15 PM in Constitutional thoughts, Corporate, Culture, Current Affairs, Deliberation and voices, First Amendment, Information and Technology, Intellectual Property, Music, Property, Web/Tech | Permalink | Comments (9) | TrackBack

Tuesday, November 15, 2011

You Say You Want a Revolution

Two potentially revolutionary "disruptive technologies" were back in the news this week. The first is ReDigi, a marketplace for the sale of used "legally downloaded digital music." For over 100 years, copyright law has included a first sale doctrine, which says I can transfer a "lawfully made" copy* (a material object in which a copyrighted work is fixed) by sale or other means, without permission of the copyright owner. The doctrine is codified at 17 U.S.C. § 109.

ReDigi says its marketplace falls squarely within the first sale limitation on the copyright owner's right to distribute, because it verifies that copies are "from a legitimate source," and it deletes the original from all the seller's devices. The Recording Industry Association of America has objected to ReDigi's characterization of the fair use claim on two primary grounds,** as seen in this cease and desist letter.

First, as ReDigi describes its technology, it makes a copy for the buyer, and deletes the original copy from the computer of the seller. The RIAA finds fault with the copying. Section 109 insulates against liability for unauthorized redistribution of a work, but not for making an unauthorized copy of a work. Second, the RIAA is unpersuaded that ReDigi can guarantee that sellers are selling "lawfully made" digital files. ReDigi's initial response can be found here.

At a first cut, ReDigi might find it difficult to ever satisfy the RIAA that it was only allowing the resale of lawfully made digital files. Whether it can satisfy a court is another matter. It might be easier for an authorized vendor, like iTunes or Kindle, to mark legitimate copies going forward, but probably not to detect prior infringement.

Still, verifying legitimate copies may be easier than shoehorning the "copy and delete" business model into the language of § 109. Deleting the original and moving a copy seems in line with the spirit of the law, but not its letter. Should that matter? ReDigi attempts to position itself as close as technologically possible to the framework spelled out in the statute, but that's a framework designed to handle the sale of physical objects that embody copyrightable works.
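The letter-versus-spirit tension is easy to see in sketch form. The snippet below is my own simplification, not ReDigi's actual architecture: a physical first sale moves one object, while any digital "move" necessarily fixes a new copy on the buyer's device before the original is deleted, which is what implicates the reproduction right.

```python
# Illustrative sketch only -- my simplification, not ReDigi's actual system.
# It shows why "copy and delete" differs from handing over a physical object,
# even though the end state looks identical.

def transfer_physical(seller_shelf: list, buyer_shelf: list, item: str) -> None:
    """Physical first sale: one object changes hands; nothing is reproduced."""
    seller_shelf.remove(item)
    buyer_shelf.append(item)

def transfer_digital(seller_files: list, buyer_files: list, item: str) -> None:
    """Digital 'move': a new copy is fixed on the buyer's device (arguably a
    reproduction) before the seller's original is deleted."""
    buyer_files.append(item)   # step 1: a second copy now exists, briefly
    seller_files.remove(item)  # step 2: delete original; end state matches physical

seller, buyer = ["song.mp3"], []
transfer_digital(seller, buyer, "song.mp3")
print(seller, buyer)  # [] ['song.mp3'] -- same end state, different steps
```

The end states are indistinguishable; it is the intermediate step, the moment two copies exist, that the RIAA points to and § 109 does not excuse.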

This is not the only area where complying with statutory requirements can tie businesses in knots. Courts have consistently struggled with how to think about digital files. In London-Sire Records v. Does, the court had to puzzle out whether a digital file can be a material object and thus a copy* distributed in violation of § 106(3). The policy question is easy to articulate, if reasonable minds still differ about the answer: is the sale and distribution of digital files something we want the copyright owner to control or not?

As a statutory matter, the court in London-Sire concluded that material didn't mean material in its sense as "a tangible object with a certain heft," but instead "as a medium in which a copyrighted work can be 'fixed.'" This definition is, of course, driven by the statute: copyright subsists once an original work of authorship is fixed in a tangible medium of expression from which it can be reproduced, and the Second Circuit has recently held in the Cablevision case that a work must also be fixed -- embodied in a copy or phonorecord for a period of more than transitory duration -- for infringement to occur. Policy intuitions may be clear, but fitting the solution in the statutory language sometimes is not. And a business model designed to fit existing statutory safe harbors might do things that appear otherwise nonsensical, like Cablevision's decision to keep individual copies of digital videos recorded by consumers on its servers, to avoid copyright liability.

Potentially even more disruptive is the 3D printer, prototypes of which already exist in the wild, and which I will talk more about tomorrow.

* Technically, a digital audio file is a phonorecord, and not a copy, but that's a distinction without a difference here.

** The RIAA also claims that ReDigi violates the exclusive right of public performance by playing 30 second samples of members' songs on its website, but that's not a first sale issue.

Posted by Jake Linford on November 15, 2011 at 04:22 PM in Information and Technology, Intellectual Property, Music, Web/Tech | Permalink | Comments (1) | TrackBack

Thursday, November 10, 2011

Cyber-Terror: Still Nothing to See Here

Cybersecurity is a hot policy and legal topic at the moment: the SEC recently issued guidance on cybersecurity reporting, defense contractors suffered a spear-phishing attack, the Office of the National Counterintelligence Executive issued a report on cyber-espionage, and Brazilian ISPs fell victim to DNS poisoning. (The last highlights a problem with E-PARASITE and PROTECT IP: if they inadvertently encourage Americans to use foreign DNS providers, they may worsen cybersecurity problems.) Cybersecurity is a moniker that covers a host of problems, from identity theft to denial of service attacks to theft of trade secrets. The challenges are real, and there are many of them. That's why it is disheartening to see otherwise knowledgeable experts focusing on chimerical targets.

For example, Eugene Kaspersky stated at the London Cyber Conference that "we are close, very close, to cyber terrorism. Perhaps already the criminals have sold their skills to the terrorists - and then...oh, God." FBI executive assistant director Shawn Henry said that attacks could "paralyze cities" and that "ultimately, people could die." Do these claims hold up? What, exactly, is it that cyber-terrorists are going to do? Engage in identity theft? Steal U.S. intellectual property? Those are somewhat worrisome, but where is the "terror" part? Terrorists support malevolent activities with all sorts of crimes. But that's "support," not "terror." Hysterics like Richard Clarke spout nonsense about shutting down air traffic control systems or blowing up power plants, but there is precisely zero evidence that even nation-states can do this sort of thing, let alone small, non-state actors. The "oh, God" part of Kaspersky's comment is a standard rhetorical trope in the apocalyptic discussions of cybersecurity. (I knock these down in Conundrum, coming out shortly in Minnesota Law Review.) And paralyzing a city isn't too hard: snowstorms do it routinely. The question is how likely such threats are to materialize, and whether the proposed answers (Henry thinks we should build a new, more secure Internet) make any sense.

There are at least two plausible reasons why otherwise rational people spout lurid doomsday scenarios instead of focusing on the mundane, technical, and challenging problems of networked information stores. First, and most cynically, they can make money from doing so. Kaspersky runs an Internet security company; Clarke is a cybersecurity consultant; former NSA director Mike McConnell works for a law firm that sells cybersecurity services to the government. I think there's something to this, but I'm not ready to accuse these people of being venal. I think a more likely explanation flows from Paul Ohm's Myth of the Superuser: many of these experts have seen what truly talented hackers can do, given sufficient time, resources, and information. They then extrapolate to a world where such skills are commonplace, and unrestrained by ethics, social pressures, or sheer rational actor deterrence. Combine that with the chance to peddle one's own wares, or books, to address the problems, and you get the sum of all fears. Cognitive bias matters.

The sky, though, is not falling. Melodrama won't help - in fact, it distracts us from the things we need to do: to create redundancy, to test recovery scenarios, to deploy more secure software, and to encourage a culture of testing (the classic "hacking"). We are not going to deploy a new Internet. We are not going to force everyone to get an Internet driver's license. Most cybersecurity improvements are going to be gradual and unremarkable, rather than involving Bruce Willis and an F-35. Or, to quote Frank Drebin, "Nothing to see here, please disperse!" Cross-posted at Info/Law.

Posted by Derek Bambauer on November 10, 2011 at 03:53 PM in Criminal Law, Current Affairs, Information and Technology, International Law, Web/Tech | Permalink | Comments (1) | TrackBack

Saturday, November 05, 2011

De-lousing E-PARASITE

The House of Representatives is considering the disturbingly named E-PARASITE Act. The bill, which is intended to curb copyright infringement on-line, is similar to the Senate's PROTECT IP Act, but much, much worse. It's as though George Lucas came out with the director's cut of "The Phantom Menace," but added in another half-hour of Jar Jar Binks.

As with PROTECT IP, the provisions allowing the Attorney General to obtain a court order to block sites that engage in criminal copyright violations are, in theory, less objectionable. But they're quite problematic in their particulars. Let me give three examples.

First, the orders not only block access through ISPs, but also require search engines to de-list objectionable sites. That not only places a burden on Google, Bing, and other search sites, but it "vaporizes" (to use George Orwell's term) the targeted sites until they can prove they're licit. That has things exactly backwards: the government must prove that material is unlawful before restraining it. This aspect of the order is likely constitutionally infirm.

Second, the bill attacks circumvention as well: MAFIAAFire and its ilk become unlawful immediately. Filtering creep is inevitable: you have to target circumvention, and the scope of circumvention targeted widens with time. Proxy services like Anonymizer are likely next.

Finally, commentators have noted that the bill relies on DNS blocking, but they're actually underestimating its impact. The legislation says ISPs must take "technically feasible and reasonable measures designed to prevent access by its subscribers located within the United States" to Web sites targeted under the bill, "including measures designed to prevent the domain name of the foreign infringing site (or portion thereof) from resolving to that domain name's Internet protocol address." The definitional section of the bill says that "including" does not mean "limited to." In other words, if an ISP can engage in technically feasible, reasonable IP address blocking or URL blocking - which is increasingly possible with providers who employ deep packet inspection - it must do so. The bill thus targets more than the DNS.
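To make the layering concrete, here is a toy sketch of the three blocking mechanisms the statutory language could reach. Every domain, IP address, and URL below is a hypothetical example of my own, not anything in the bill:

```python
# Toy illustration of the three blocking layers E-PARASITE's language could
# sweep in. All domains, IPs, and URLs are hypothetical examples.
from typing import Optional

BLOCKED_DOMAINS = {"infringing-example.com"}                           # DNS blocking
BLOCKED_IPS = {"203.0.113.7"}                                          # IP-address blocking
BLOCKED_URL_PREFIXES = ("http://infringing-example.com/downloads/",)  # URL blocking (deep packet inspection)

def dns_resolve(domain: str) -> Optional[str]:
    """DNS blocking: refuse to resolve a blacklisted domain name."""
    if domain in BLOCKED_DOMAINS:
        return None  # the name simply never resolves to an address
    return "203.0.113.7"  # stand-in for a real lookup result

def allow_connection(ip: str) -> bool:
    """IP blocking: drop traffic addressed to a blacklisted host."""
    return ip not in BLOCKED_IPS

def allow_request(url: str) -> bool:
    """URL blocking: inspect the request itself via deep packet inspection."""
    return not url.startswith(BLOCKED_URL_PREFIXES)

# DNS blocking alone is trivially evaded by using a foreign resolver or the
# raw IP address -- which is why "including" != "limited to" matters.
print(dns_resolve("infringing-example.com"))                        # None
print(allow_connection("203.0.113.7"))                              # False
print(allow_request("http://infringing-example.com/downloads/x"))  # False
```

The first function is what commentators focus on; the second and third are what the "not limited to" language opens the door to once an ISP's gear makes them "technically feasible and reasonable."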

On the plus side, the bill does provide notice to users (the AG must specify text to display when users try to access the site), and it allows for amended orders to deal with the whack-a-mole problem of illegal content evading restrictions by changing domain names or Web hosting providers.

The private action section of the bill is extremely problematic. Under its provisions, YouTube is clearly unlawful, and neither advertising nor payment providers would be able to transact business with it. The content industry doesn't like YouTube - see the Viacom litigation - but it's plainly a powerful and important innovation. This part of E-PARASITE targets sites "dedicated to the theft of U.S. property." (Side note: sorry, it's not theft. This is a rhetorical trope in the IP wars, but IP infringement simply is not the same as theft. Theft deals with rivalrous goods. In addition, physical property rights do not expire with time. If this is theft, why aren't copyright and patent expirations a regulatory taking? Why not just call it "property terrorism"?)

So, what defines such a site? It is:

  1. "primarily designed or operated for the purpose of, has only limited purpose or use other than, or is marketed by its operator or another acting in concert with that operator for use in, offering goods or services in a manner that engages in, enables, or facilitates" violations of the Copyright Act, Title I of the Digital Millennium Copyright Act, or anti-counterfeiting laws; or,
  2. "is taking, or has taken, deliberate actions to avoid confirming a high probability of the use of the U.S.-directed site to carry out the acts that constitute a violation" of those laws; or, 
  3. the owner "operates the U.S.-directed site with the object of promoting, or has promoted, its use to carry out acts that constitute a violation" of those laws.

That is an extraordinarily broad ambit. Would buying keywords that mention a popular brand, for example, constitute a violation? And how do we know what a site is "primarily designed for"? On that definition, YouTube arguably has only limited purpose or use other than facilitating copyright infringement. Heck, if the VCR were a Web site, it'd be unlawful, too.

The bill purports to establish a DMCA-like regime for such sites: the IP owner provides notice, and the site's owner can challenge via counter-notification. But the defaults matter here, a lot: payment providers and advertisers must cease doing business with such sites unless the site owner counter-notifies, and even then, the IP owner can obtain an injunction to the same effect. Moreover, to counter-notify, a site owner must concede jurisdiction, which foreign sites will undoubtedly be reluctant to do. (Litigating in the U.S. is expensive, and the courts tend to be friendly towards local IP owners. See, for example, Judge Crotty's slipshod opinion in the Rojadirecta case.)

I've argued in a new paper that using direct, open, and transparent methods to censor the Internet is preferable to our current system of "soft" censorship via domain name seizures and backdoor arm-twisting of private firms, but E-PARASITE shows that it's entirely possible for hard censorship to be badly designed. The major problem is that it outsources censorship decisions to private companies. Prior restraint is an incredibly powerful tool, and we need the accountability that derives from having elected officials make these decisions. Private firms have one-sided incentives, as we've seen with DMCA take-downs.

In short, the private action measures make it remarkably easy for IP owners to cut off funding for sites to which they object. These include Torrent tracker sites, on-line video sites, sites that host mash-ups, and so forth. The procedural provisions tilt the table strongly towards IP owners, including by establishing very short time periods by which advertisers and payment providers have to comply. Money matters: WikiLeaks is going under because of exactly these sorts of tactics.

America is getting into the Internet censorship business. We started down this path to deal with pornographic and obscene content; our focus has shifted to intellectual property. I've argued that this is because IP legislation draws lower First Amendment scrutiny than other speech restrictions, and interest groups are taking advantage of that loophole. It's strange to me that Congress would damage innovation on the Internet - only the most powerful communications medium since words on paper - to protect movies and music, which are relatively small-scale in the U.S. economy. But, as always with IP, the political economy matters. 

I predict that a bill like PROTECT IP or E-PARASITE will become law. Then, we'll fight out again what the First Amendment means on the Internet, and then the myth of America's free speech exceptionalism on-line will likely be dead.

Cross-posted at Info/Law.

Posted by Derek Bambauer on November 5, 2011 at 05:06 PM in Civil Procedure, Constitutional thoughts, Culture, Current Affairs, First Amendment, Information and Technology, Intellectual Property, Law and Politics, Music, Property, Web/Tech | Permalink | Comments (2) | TrackBack

Thursday, November 03, 2011

Why Don't We Do It In the Road?

In that White Album gem, "Why Don't We Do It In the Road?", Paul McCartney insinuated that whatever "it" was wouldn't matter because "no one will be watching us." The feeling of being watched can change the way in which one engages in an activity. Often, perceiving one's own behavior clearly is an essential step in changing that behavior.  

I've thought about this lately as I've tried to become more productive in my writing, and I'm drawn to resources that help me externalize my monitoring process. There are various commitment mechanisms out there, which I've lumped roughly into three groups. Some are designed to make me more conscious of my own obligation to write. Other mechanisms are designed to bring outsiders on board, inviting/forcing me to give an account of my productivity or lack thereof to others. And some, like StickK, combine the second with the means to penalize me if I fail to perform.

Should I need tricks to write? Perhaps not, but even with the best of intentions, it's easy to get waylaid by the administrative and educational requirements of the job. Commitment mechanisms help me remember why I want to fill those precious moments of downtime with writing. Below the fold I'll discuss some methods I've tried in that first category, and problems that make them less than optimal for my purposes. Feel free to include your suggestions and experiences here, as well. Also note that over at Concurring Opinions, Kaimipono Wenger has started the first annual National Article Finishing Month, a commitment mechanism I have not yet tried (but just might). In subsequent posts, I'll tackle socializing techniques and my love/hate relationship with StickK.

Perhaps like many of you, I find the Internet to be a two-edged sword. While I can be more productive because so many resources are at my fingertips, I also waste too much time surfing the webs. I've tried commitment mechanisms that shut down the internet, but have so far found them lacking. I've tried Freedom, which kills the entire internet for a designated period of time. That's helpful to an extent, although I store my documents on Dropbox, so my work moves with me from home to office without the need to keep track of a USB drive. While Dropbox should automatically sync once Freedom stops running, I've found that hasn't been as smooth as I hoped. This in turn makes me hesitant to rely on Freedom.

What makes me even more hesitant to use Freedom is that I have to reboot my computer to get back to the internet every other time I use it. If you are not saving your work to the cloud, you may see that as a feature and not a bug.

I turned next to StayFocusd, a Chrome app that allows me to pick and choose websites to block. StayFocusd reminds me when I'm out of free browsing time with a pop-up that dominates the screen and a mild scold, like "Shouldn't you be working?" If you are the type to use multiple browsers for different purposes, however, StayFocusd is only a Firefox window away from being relatively ineffectual.

The self-monitoring tool I've liked the best so far is Write or Die. You set the amount of time you want to write and the number of words you propose to generate, and you start writing. As I have it set, if you stop typing for more than 10 seconds, the program makes irritating noises (babies crying, horns honking, etc.) until you start typing again. Write or Die is great for plowing through a first draft quickly, but less effective if the goal is to refine text. That is in part because the interface gives you bare-bones text. I'm too cheap to download the product, which has more bells and whistles than the free online version (like the ability to italicize text). In addition, in the time it takes to think about the line I'm rewriting, the babies begin to howl again.
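The core mechanic is simple enough to sketch. This is my own toy reconstruction of the behavior, not Write or Die's actual code: track the last keystroke, and nag once the idle time exceeds a grace period.

```python
# Toy reconstruction of Write or Die's nagging mechanic -- my illustration,
# not the product's implementation: if no keystroke arrives within the grace
# period, start making noise until typing resumes.
import time

GRACE_SECONDS = 10

class NagTimer:
    def __init__(self, grace=GRACE_SECONDS):
        self.grace = grace
        self.last_keystroke = time.monotonic()

    def on_keystroke(self):
        """Reset the clock every time the writer types something."""
        self.last_keystroke = time.monotonic()

    def should_nag(self, now=None):
        """True once the writer has been idle past the grace period."""
        if now is None:
            now = time.monotonic()
        return (now - self.last_keystroke) > self.grace

timer = NagTimer(grace=10)
timer.last_keystroke = 0.0       # pretend the last keystroke was at t=0
print(timer.should_nag(now=5))   # False: still within the grace period
print(timer.should_nag(now=11))  # True: cue the crying babies
```

The design point, and the source of my rewriting complaint, is that the timer cannot distinguish thinking from slacking; any pause past the threshold triggers the noise.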

So, what commitment mechanisms do you use when you don't feel like writing?

 

Posted by Jake Linford on November 3, 2011 at 09:00 AM in Information and Technology, Life of Law Schools, Web/Tech | Permalink | Comments (0) | TrackBack

Wednesday, October 26, 2011

How Baseball Made Me a Pirate

Major League Baseball has made me a pirate, with no regrets.

Nick Ross, on Australia's ABC, makes "The Case for Piracy." His article argues that piracy often results, essentially, from market failure: customers are willing to pay content owners for access to material, and the content owners refuse - because they can't be bothered to serve that market or geography, because they are trying to force consumers onto another platform, or because they are trying to leverage interest in, say, Premier League matches as a means of getting cable customers to buy the Golf Network. The music industry made exactly these mistakes before the combination of Napster and iTunes forced them into better behavior: MusicNow and Pressplay were expensive disasters, loaded with DRM restrictions and focused on preventing any possible re-use of content rather than delivering actual value. TV content owners are now making the same mistake.

Take, for example, MLB. I tried to purchase a plan to watch the baseball playoffs on mlb.com - I don't own a TV, and it's a bit awkward to hang out at the local pub for 3 hours. MLB didn't make it obvious how to do this. Eventually, I clicked a plan that indicated it would allow me to watch the entire postseason for $19.99, and gladly put in my credit card number.

My mistake. It turns out that option is apparently for non-U.S. customers. I learned this the hard way when I tried to watch an ALDS game, only to get... nothing. No content, except an ad that tried to get me to buy an additional plan. That's right, for my $19.99, I receive literally nothing of value. When I e-mailed MLB Customer Service to try to get a refund, here's the answer I received: "Dear Valued Subscriber: Your request for a refund in connection with your 2011 MLB.TV Postseason Package subscription has been denied in accordance with the terms of your purchase." Apparently the terms allow fraud.

Naturally, I'm going to dispute the charge with my credit card company. But here's the thing: I love baseball. I would gladly pay MLB to watch the postseason on-line. And yet there's no way to do so, legally. In fact, apparently the only people who can are folks outside the U.S. And if you try to give them your money anyway, they'll take it, and then tell you how valued you are. But you're not.

So, I'm finding ways to watch MLB anyway. If you have suggestions or tips, offer 'em in the comments - there must be a Rojadirecta for baseball. And next season, when I want to watch the Red Sox, that's the medium I'll use - not MLB's Extra Innings. MLB has turned me into a pirate, with no regrets.

Cross-posted at Info/Law.

Posted by Derek Bambauer on October 26, 2011 at 07:48 PM in Criminal Law, Culture, Information and Technology, Intellectual Property, International Law, Music, Odd World, Sports, Television, Web/Tech | Permalink | Comments (34) | TrackBack

Thursday, October 20, 2011

Policing Copyright Infringement on the Net

Mark Lemley has a smart editorial up at Law.com on the hearings at the Second Circuit Court of Appeals in Viacom v. YouTube. The question is, formally, one of interpreting Title II of the Digital Millennium Copyright Act (17 U.S.C. 512), and determining whether YouTube meets the statutory requirements for immunity from liability. But this is really a fight about how much on-line service providers must do to police, or protect against, copyright infringement. Mark, and the district court in the case, think that Congress answered this question rather clearly: services such as YouTube need to respond promptly to notifications of claimed infringement, and to avoid business models where they profit directly from infringement. The fact that a site attracts infringing content (which YouTube indubitably does) can't wipe out the safe harbor, because then the DMCA would be a nullity. It may be that the burden of policing copyrights should fall more heavily on services such as YouTube than it currently does. But, if that's the case, Viacom should be lobbying Congress, not the Second Circuit. I predict a clean win for YouTube.

Posted by Derek Bambauer on October 20, 2011 at 05:52 PM in Corporate, Current Affairs, Information and Technology, Intellectual Property, Music, Web/Tech | Permalink | Comments (0) | TrackBack

Monday, October 17, 2011

The Myth of Cyberterror

UPI's article on cyberterrorism helpfully states the obvious: there's no such thing. This is in sharp contrast to the rhetoric in cybersecurity discussions, which highlights purported threats from terrorists to the power grid, the transportation system, and even the ability to play Space Invaders using the lights of skyscrapers. It's all quite entertaining, except for two problems: (1) perception frequently drives policy, and (2) all of these risks are chimerical. Yes, non-state actors are capable of defacing Web sites and even launching denial of service attacks, but that's a far cry from train bombings or shootings in hotels.

The response from some quarters is that, while terrorists do not currently have the capability to execute devastating cyberattacks, they will at some point, and so we should act now. I find this unsatisfying. Law rarely imposes large current costs, such as changing how the Internet's core protocols run, to address remote risks of uncertain (but low) incidence and uncertain magnitude. In 2009, nearly 31,000 people died in highway car crashes, but we don't require people to drive tanks. (And, few people choose to do so, except for Hummer employees.)

Why, then, the continued focus on cyberterror? I think there are four reasons. First, terror is the policy issue of the moment: connecting to it both focuses people's attention and draws funding. Second, we're in an age of rapid and constant technological change, which always produces some level of associated fear. Few of us understand how BGP works, or why its lack of built-in authentication creates risk, and we are afraid of the unknown. Third, terror attacks are like shark attacks. We are afraid of dying in highly gory or horrific fashion, rather than basing our worries on actual incidence of harm (compare our fear of terrorists versus our fear of bad drivers, and then look at the underlying number of fatalities in each category). Lastly, cybersecurity is a battleground not merely for machines but for money. Federal agencies, defense contractors, and software companies all hold a stake in concentrating attention on cyber-risks and offering their wares as a means of remediating them.

So what should we do at this point? For cyberterror, the answer is "nothing," or at least nothing that we wouldn't do anyway. Preventing cyberattacks by terrorists, nation-states, and spies involves the same measures, as I argue in Conundrum. But: this approach gets called "naive" with some regularity, so I'd be interested in your take...

Posted by Derek Bambauer on October 17, 2011 at 04:43 PM in Criminal Law, Current Affairs, Information and Technology, International Law, Law and Politics, Science, Web/Tech | Permalink | Comments (7) | TrackBack

Friday, October 14, 2011

Behind the Scenes of Six Strikes

Wired has a story on the cozy relationship between content industries and the Obama administration, which resulted in the deployment of the new "six strikes" plan to combat on-line copyright infringement. Internet security and privacy researcher Chris Soghoian obtained e-mail communication between administration officials and industry via a Freedom of Information Act (FOIA) request. (Disclosure: Jonathan Askin and I represent Chris in his appeal regarding this FOIA request.) The e-mails demonstrate vividly what everyone suspected: Hollywood - in the form of the music and movie industries - has an administration eager to be helpful, including by pressuring ISPs. Stay tuned.

Posted by Derek Bambauer on October 14, 2011 at 11:10 AM in Blogging, Culture, Current Affairs, Film, Information and Technology, Intellectual Property, Judicial Process, Law and Politics, Music, Web/Tech | Permalink | Comments (0) | TrackBack

Thursday, October 13, 2011

The Pirates' Code

There have been a number of attempts to alter consumer norms about copyright infringement (especially those of teenagers). The MPAA has its campaigns; the BSA has its ferret; and now New York City has a crowdsourced initiative to design a new public service announcement. At first blush, the plan looks smart: rather than have studio executives try to figure out what will appeal to kids (Sorcerer's Apprentice, anyone?), leave it to the kids themselves.

On further inspection, though, the plan seems a bit shaky. First, it's not actually a NYC campaign: the Bloomberg administration is sockpuppeting for NBC Universal. Second, why is the City even spending scarce taxpayer funds on this? Copyright enforcement is primarily private, although the Obama administration is lending a helping hand. Third, is this the most effective tactic? It seems more efficient to go after the street vendors who sell bootleg DVDs, for example - I can buy a Blockbuster Video store's worth of movies just by walking out the front door of my office. 

Yogi Berra (or was it Niels Bohr?) said that the hardest thing to predict is the future. And the hardest thing about norms is changing them. Larry Lessig's New Chicago framework not only points to the power of norms regulation (along the lines of Bob Ellickson), but suggests that norms are effectively free - no one has to pay to enforce them. This makes them attractive as a means of regulation. The problem, though, is that norms tend to be resistant to overt efforts to shift them. Think of how long it took to change norms around smoking - a practice proven to kill you - and you'll appreciate the scope of the challenge. The Bloomberg administration should save its resources for moving snow this winter...

Posted by Derek Bambauer on October 13, 2011 at 06:52 PM in Film, Information and Technology, Intellectual Property, Music, Property, Television, Web/Tech | Permalink | Comments (5) | TrackBack

Monday, October 10, 2011

Spying, Skynet, and Cybersecurity

The drones used by the U.S. Air Force have been infected by malware - reportedly, a program that logs the commands transmitted from the pilots' computers at a base in Nevada to the drones flying over Iraq and Afghanistan. This has led to comparisons to Skynet, particularly since the Terminators' network was supposed to become self-aware in April. While I think we don't yet need to stock up on robot-sniffing dogs, the malware situation is worrisome, for three reasons.

First, the military is aware of the virus's presence, but is reportedly unable to prevent it from re-installing itself even after they clean off the computers' drives. Wired reports that re-building the computers is time-consuming. That's undoubtedly true, but cyber-threats are an increasing part of warfare, and they'll soon be ubiquitous. I've argued that resilience is a critical component of cybersecurity. The Department of Defense needs to assume that their systems will be compromised - because they will - and to plan for recovery. Prevention is impossible; remediation is vital.

Second, the malware took hold despite the air gap between the drones' network and the public Internet. The idea of separate, isolated networks is a very attractive one in security, but it's false comfort. In a world where flash drives are ubiquitous, where iPods can store files, and where one can download sensitive data onto a Lady Gaga CD, information will inevitably cross the gap. Separation may be sensible as one security measure, but it is not a panacea.

Lastly, the Air Force is the branch of the armed forces currently in the lead in terms of cyberspace and cybersecurity initiatives. If they can't solve this problem, do we want them taking the lead on this new dimension of the battlefield?

It's not clear how seriously the drones' network has been compromised - security breaches have occurred before. But cybersecurity is difficult. We saw the first true cyberweapon in Stuxnet, which damaged Iran's nuclear centrifuges and set back its uranium enrichment program. That program too looked benign, on first inspection. Let's hope the program here is closer to Kyle Reese than a T-1000.

Posted by Derek Bambauer on October 10, 2011 at 05:55 PM in Information and Technology, International Law, Web/Tech | Permalink | Comments (2) | TrackBack

Tuesday, October 04, 2011

America Censors the Internet

If you're an on-line poker player, a fan of the Premier League, or someone who'd like to visit Cuba, you probably already know this. Most people, though, aren't aware that America censors the Internet. Lawyers tend to believe that a pair of Supreme Court cases, Reno v. ACLU (1997) and Ashcroft v. ACLU (2004), permanently interred government censorship of the Net in the U.S. Not so.

In a new paper, Orwell's Armchair (forthcoming in the University of Chicago Law Review), I argue that government censors retain a potent set of tools to block disfavored on-line content, from using unrelated laws (like civil forfeiture statutes) as a pretext, to paying intermediaries to filter, to pressuring private actors into blocking. These methods are not only indirect; they are also less legitimate than overt, transparent regulation of Internet content. In the piece, I analyze the constraints that exist to check such soft censorship, and find that they are weak at best. So, I argue, if we're going to censor the Internet, let's be clear about it: the paper concludes by proposing elements of a prior restraint statute for on-line content that could both operate legitimately and survive constitutional scrutiny.

Jerry Brito of George Mason University's Mercatus Center kindly interviewed me about the issues the article raises for his Surprisingly Free podcast. It's worth a listen, even though my voice is surprisingly annoying.

Cross-posted at Info/Law.

Posted by Derek Bambauer on October 4, 2011 at 06:14 PM in Civil Procedure, Constitutional thoughts, Current Affairs, First Amendment, Information and Technology, Intellectual Property, Law and Politics, Web/Tech | Permalink | Comments (3) | TrackBack

Sunday, October 02, 2011

What Commons Have in Common

Thanks to Dan and the Prawfs crew for having me! Blogging here is a nice distraction from the Red Sox late-season collapse.

I thought I'd start with a riddle: what do roller derby, windsurfing, SourceForge, and GalaxyZoo have in common?

Last week, NYU Law School hosted Convening Cultural Commons, a two-day workshop intended to accelerate the work on information commons begun by Carol Rose, Elinor Ostrom, and Mike Madison / Kathy Strandburg / Brett Frischmann. All four of the above were presented as case studies (by Dave Fagundes, Sonali Shah, Charles Schweik, and Mike Madison, respectively). Elinor Ostrom gave the keynote address, and sat in on most of the presentations. It's exciting stuff: Mike, Kathy, and Brett have worked hard to adapt Ostrom's Institutional Analysis and Development framework to analysis of information commons such as Wikipedia, the Associated Press, and jambands. Yet, there was one looming issue that the conferees couldn't resolve: what, exactly, is a commons?

The short answer is: no one knows. Ostrom's work counsels a bottom-up, accretive way to answer this question. Over time, with enough case studies, the boundaries of what constitutes a "commons" become clear. So, the conventional answer, and one supported by a lot of folks at the NYU conference, is to go forth and, in the spirit of Clifford Geertz, engage in collection and thick description of things that look like, or might be, commons.

As an outsider to the field, I think that's a mistake.

What commons research in law (and allied disciplines) needs is some theories of the middle range. There is no Platonic or canonical commons out there. Instead, there are a number of dimensions along which a particular set of information can be measured, and which make it more or less "commons-like." Let me suggest a few as food for thought:
  1. Barriers to access - some information, like Wikipedia, is available to all comers; other data, like pooled patents, are only available to members of the club. The lower the barriers to access, the more commons-like a resource is. 
  2. State role in management - government may be involved in managing resources directly (for example, data in the National Practitioner Data Bank), indirectly (for example, via intellectual property laws), or not at all. I think a resource is more commons-like as it is less managed by the state.
  3. Ability to privatize - information resources are more and less subject to privatization. Information in the public domain, such as Shakespeare's plays, cannot be privatized - no one can assert rights over them (at least, not under American copyright law). Some information commons protected by IP law cannot be privatized, such as software developed under the GPL, and some can be, such as software developed under the Apache License. The greater the ability to privatize, I'd argue, the less commons-like.
  4. Depletability - classic commons resources (such as fisheries or grazing land) are subject to depletion. Information resources can be depleted, though depletion here may come more in the form of congestion, as Yochai Benkler argues. Internet infrastructure is somewhat subject to depletion, while ideas or prices are not. The greater the risk of depletion, the less commons-like.
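To make the framework concrete, here is a toy scoring sketch in Python. Everything in it is my own illustrative assumption, not anything settled in the commons literature: the 0-to-1 scale, the equal weighting of the four dimensions, and the example scores are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Resource:
    """Toy model: each dimension is scored 0.0-1.0 (scale is illustrative only)."""
    name: str
    barriers_to_access: float  # higher = harder for outsiders to get in
    state_management: float    # higher = more direct state involvement
    privatizability: float     # higher = easier to assert private rights
    depletability: float       # higher = more subject to depletion/congestion

def commons_likeness(r: Resource) -> float:
    # Each dimension cuts against commons-likeness, so invert the average.
    penalties = (r.barriers_to_access, r.state_management,
                 r.privatizability, r.depletability)
    return 1.0 - sum(penalties) / len(penalties)

# Hypothetical scores, for illustration only.
wikipedia = Resource("Wikipedia", 0.1, 0.0, 0.1, 0.1)
patent_pool = Resource("Pooled patents", 0.8, 0.4, 0.7, 0.2)

print(commons_likeness(wikipedia) > commons_likeness(patent_pool))  # True
```

The point of the sketch is not the numbers but the structure: once the dimensions are explicit, resources can be compared and the "more or less commons-like" claim becomes testable rather than impressionistic.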

Finally, why do we care about the commons? I think that commons studies are a reaction to the IP wars: they are a form of resistance to IP maximalism. By showing that information commons are not only ubiquitous, but vital to innovation and even a market economy, legal scholars can offer a principled means of arguing against ever-increasing IP rights. That makes studying these resources - and, hopefully, putting forward testable theories about what are and are not attributes of a commons - vital to enlightened policymaking.

(Cross-posted to Info/Law.)

Posted by Derek Bambauer on October 2, 2011 at 05:22 PM in Culture, Information and Technology, Intellectual Property, Legal Theory, Property, Research Canons | Permalink | Comments (0) | TrackBack

Friday, July 29, 2011

A Plan for Forking Wikipedia to Provide a Reliable Secondary Source on Law

Wikipedia_rate_this_page

Recently Wikipedia rolled out a feedback feature (example at right) that allows readers to rate the page they are looking at. You can give the page a score from one to five in each of four categories: Trustworthy, Objective, Complete, and Well-written. Then, there's an optional box you can check that says "I am highly knowledgeable about this topic."

This may be good for flagging pages that need work. But, of course, Wikipedia's trustworthiness problem is not going to be solved by anonymous users, self-declared to be "highly knowledgeable," deeming articles to be trustworthy.

I do think, however, that if you forked Wikipedia to create a version with an authenticated expert editorship, then the ratings could evolve Wikipedia content into a credible source. In fact, I think it could work well for articles on law.

"Forking," for software developers, means taking an open-source project and splitting off a version that is then developed separately. The open-source license specifically allows you to do this. Forking a project is not always productive, but I think it could be useful in creating an encyclopedia-type reference about the law that is both trustworthy and freely available.

There's a real need for such material in the legal sphere. Right now, there seems to be an accessibility/credibility trade-off with secondary sources of legal information: Wikipedia is accessible, but not credible. Traditional binder sets are credible, but not very accessible – using them is generally either expensive (in terms of subscription fees) or burdensome (via a trip to the nearest law library).

If, however, you could take Wikipedia and apply credibility on top of it, you would have a secondary source that is both credible and accessible.

Imagine grabbing all the Wikipedia pages about law – which at this point are generally very well developed – and then, while continuing to make them viewable to the public, locking them so that only authenticated lawyers and law students could edit them. These expert editors could then correct errors where they find them. Where they don't find errors, they could click a box indicating trustworthiness. As time went on, pages would have errors weeded out, and trustworthiness scores would accumulate.

Trustworthiness ratings on pages editable only by experts would relieve the need for internal citations. Right now, the Wikipedia community pushes hard for citations in articles. Citations are important in the Wikipedia context because of the lack of credentials on the part of the writers. But if pages were only editable by authenticated lawyers, then cumulative positive ratings would make pages more reliable even without citations.
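The accumulation mechanism described above can be sketched in a few lines of Python. Everything here is hypothetical illustration, not a description of any real system: the class and method names, the reset-on-edit rule, and the reliability threshold are all my own assumptions.

```python
from dataclasses import dataclass

@dataclass
class Page:
    """Toy model of a locked, expert-edited page (purely illustrative)."""
    title: str
    trust_votes: int = 0   # expert "this looks trustworthy" ratings
    corrections: int = 0   # expert edits that fixed an error

    def rate_trustworthy(self) -> None:
        self.trust_votes += 1

    def correct_error(self) -> None:
        # Assumed rule: an edit resets accumulated trust, since the
        # changed page needs to be re-verified by other experts.
        self.corrections += 1
        self.trust_votes = 0

    def is_reliable(self, threshold: int = 5) -> bool:
        return self.trust_votes >= threshold

page = Page("Rule Against Perpetuities")
page.correct_error()          # an expert fixes a mistake...
for _ in range(6):            # ...then six experts vouch for the result
    page.rate_trustworthy()
print(page.is_reliable())     # True
```

The design choice worth noting is the reset on edit: cumulative ratings only substitute for citations if every rating refers to the page as it currently stands.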

Admittedly, a forked version of Wikipedia edited by lawyers and law students would not replace the big binder sets. The depth of the material, at least as it stands now, is too limited. Wikipedia, even if reliable, wouldn't help a trusts-and-estates lawyer practice trusts and estates. But if it were imbued with trustworthiness, Wikipedia content does have enough depth to be useful for lawyers orienting themselves to an unfamiliar field of law. Likewise, it has enough detail for non-lawyers who are looking to gain a general understanding of some specific doctrinal topic.

So, what do you think? It would be fairly easy to put together from a technical perspective, but would it be worthwhile? Do you think people with legal knowledge would contribute by removing errors and scoring pages? Or would a forked law wiki sit fallow? (I do notice there are lots of lawyers participating actively on Quora, crafting very good answers to individual questions.)

Maybe it would work in other fields aside from law, as well. It does seem to me, though, that law is a particularly good subject for forking.

Posted by Eric E. Johnson on July 29, 2011 at 05:55 PM in Information and Technology, Intellectual Property, Web/Tech | Permalink | Comments (4) | TrackBack

Friday, July 22, 2011

JSTOR: What is it Good For?

JSTOR logo and Aaron Swartz
Swartz is facing 35 years behind bars for allegedly cracking JSTOR. (Photo: Demand Progress)

Information-liberation guru and alleged hacktivist Aaron Swartz is facing 35 years in prison on a federal indictment [PDF] for breaking into MIT's systems and downloading more than four million academic articles from JSTOR. So, I can't help but see Stuart Buck's point that his doing so wasn't a good idea.

But is JSTOR a good idea?

JSTOR is a non-profit organization that digitizes scholarship and makes it available online – for a fee. JSTOR was launched by the philanthropic Andrew W. Mellon Foundation in 1995 to serve the public interest. But it seems to me that today, JSTOR may be more mischievous than munificent.

For instance, on JSTOR I found a 1907 article called "Criminal Appeal in England" published in the Journal of the Society of Comparative Legislation. I was offered the opportunity to view this 11-page antique for the dear price of $34.00. And this is despite the fact that my university is a JSTOR "participating institution," and despite the fact that this article is so old, it is no longer subject to copyright.

What's more, by paying $34.00, I would take on some rather severe limitations under JSTOR's terms of service. I could not, for instance, turn around and make this public-domain article available from my own website.

You might ask, why can't JSTOR just make this stuff available for free? After all, JSTOR says its mission is "supporting scholarly work and access to knowledge around the world."

Why indeed, especially considering I CAN GET THE ENTIRE 475-PAGE JOURNAL VOLUME FOR FREE FROM THE INTERNET ARCHIVE and GOOGLE BOOKS. (That includes not just "Criminal Appeal in England," but also such scintillators as "The Late Lord Davey" by the Right Honourable Lord Macnaghten. All for the low, low price of FREE.)

And it's not just old, public domain articles. Current scholarship is increasingly being offered for free from journals' own websites as soon as it is printed. Not to mention the quickly accreting mass of articles on the free-access sites SSRN and arXiv.

Maybe JSTOR seemed like a good idea when it was launched 16 years ago. But today, if other organizations are doing for free what JSTOR is charging for, then perhaps JSTOR should be dismantled or substantially reworked. JSTOR is a big enterprise, and I am not familiar with all of its parts. It may be that JSTOR provides important services that otherwise wouldn't be available. I don't know. But I do know that, at least with part of its collection, JSTOR is playing rope-a-dope, hoping to shake money out of chumps too unlucky to know they could have gotten the same wares for nothing if they just had clicked elsewhere.

JSTOR should act now to make as much of its database free as it can, including all public domain materials and all copyrighted materials for which the requisite permissions can be obtained. It seems to me that the only reason JSTOR would not do so at this point is because JSTOR has become so entrenched that it's more interested in self-perpetuation than public good. And if that's the case, JSTOR may constitute a continuing menace to the preservation of scholarship and access to it – the very things it was founded to promote.

Posted by Eric E. Johnson on July 22, 2011 at 05:32 PM in Information and Technology, Web/Tech | Permalink | Comments (6) | TrackBack

Monday, June 20, 2011

Inside Job

 

Last night I finally got around to watching the Academy Award-winning documentary "Inside Job." I had been planning to watch it for some time, but somehow ended up finding other things to watch instead. I enjoyed it and found it to be very interesting, but I imagine that readers of prawfs might be split on its merits. A good number of professors (primarily business/economics) get skewered pretty well in the interviews.

Here are some of my favorite quotes from the movie:

Andrew Sheng: Why should a financial engineer be paid four times to 100 times more than a real engineer? A real engineer builds bridges. A financial engineer builds dreams. And, you know, when those dreams turn out to be nightmares, other people pay for it.

Michael Capuano: You come to us today telling us "We're sorry. We won't do it again. Trust us." Well, I have some people in my constituency that actually robbed some of your banks, and they say the same thing.

(My paraphrase) "As I recall I was revising a textbook." (You'll have to watch the movie for context on this one)

Posted by Jeff Yates on June 20, 2011 at 03:09 PM in Corporate, Criminal Law, Culture, Current Affairs, Film, First Amendment, Information and Technology, Law and Politics | Permalink | Comments (1) | TrackBack

Monday, June 13, 2011

Who would be your graduation speaker ...

if you could have anyone do it? Here's Conan O'Brien giving the commencement address at Dartmouth:

 

Posted by Jeff Yates on June 13, 2011 at 09:21 AM in Culture, Current Affairs, Information and Technology, Law and Politics, Odd World, Television, Travel | Permalink | Comments (0) | TrackBack

Thursday, April 28, 2011

When the Right Interpretation of the Law is a Scary One (CFAA Edition)

A divided 9th Circuit panel decided U.S. v. Nosal today. The case initially looks like a simple employee trade secret theft case, but the Court's interpretation of the Computer Fraud and Abuse Act has potentially far-reaching ramifications. Here's the thing - the court (in my view) reached the right ruling with the right statutory interpretation. However, that interpretation could make many people liable under the CFAA who probably shouldn't be.

I discuss more below.

Here are the basic facts: Nosal is charged with conspiracy to violate the CFAA, 18 U.S.C. 1030 because he conspired with employees at his former employer. Those employees accessed a database to obtain secret information that Nosal allegedly used in a competing business. Importantly, those employees had full access rights to that database. They didn't hack, steal a password, rummage around, or anything else. They just logged on and copied information. Those employees had agreements that said they would not use the information for purposes other than their employment. I suspect that the agreement would not have even been necessary if it were reasonably clear that the information was trade secret, but that's an issue for another post.

The provision at issue is 1030(a)(4), which outlaws: "knowingly and with intent to defraud, accesses a protected computer without authorization, or exceeds authorized access, and by means of such conduct furthers the intended fraud and obtains anything of value...."

 

The district court dismissed the indictment, ruling that the employees could not have exceeded authorization. The court relied on a prior case, called LVRC HoldingsLLC v. Brekka, to rule that the employees could not have exceeded authorized access because database access was within their employment. According to the lower court, one can only exceed authorization if one wanders into an area where there is no authorized access. The appellate panel talks about drive letters. If the employees could access the F: drive, but not the G: drive, then any data taken from the F: drive for any purpose could not exceed authorized access, but gathering data from the G: drive would exceed because the employees were not supposed to go there. By analogy here, there could be no exceeded authority because the database was part of the employee access rights.

The Ninth Circuit panel disagreed. It starts with the definition in 1030(e)(6):

the term “exceeds authorized access” means to access a computer with authorization and to use such access to obtain or alter information in the computer that the accesser is not entitled so to obtain or alter

The Court focuses on the "so" term. It argues that "so" would be superfluous under the district court's reading. After all, exceeding authorized access means you must have had the right to be there in the first place. To limit this to different areas of the database doesn't work, since the statute plainly outlaws access to the computer when such access is then used to obtain information that the accessor is not entitled to obtain.

The problem with this reading, of course, is that the employees arguably were entitled to obtain the information. Not so, says the Court - and this is where the trade secret angle comes in. The employees were decidedly (or at least allegedly) not entitled to access the information if the purpose was to leak it to Nosal.

How does the court deal with LVRC? It appears that the two cases are consistent:

1. LVRC says that "without authorization" requires no access at all to a drive, not exceeded authorization (there are some parts of the statute which require no authorization, and some where exceeded authorization is enough).

2. LVRC makes clear that where employers set access policies and communicate them, then employees may be deemed to have acted without authorization.

3. LVRC envisions exactly the result in this case: 

Section 1030(e)(6) provides: "the term `exceeds authorized access' means to access a computer with authorization and to use such access to obtain or alter information in the computer that the accesser is not entitled so to obtain or alter." 18 U.S.C. § 1030(e)(6). As this definition makes clear, an individual who is authorized to use a computer for certain purposes but goes beyond those limitations is considered by the CFAA as someone who has "exceed[ed] authorized access." On the other hand, a person who uses a computer "without authorization" has no rights, limited or otherwise, to access the computer in question.

Of course, it is not this easy. LVRC had a footnote:

On appeal, LVRC argues only that Brekka was "without authorization" to access LVRC's computer and documents. To the extent LVRC implicitly argues that Brekka's emailing of documents to himself and to his wife violated §§ 1030(a)(2) and (4) because the document transfer "exceed[ed] authorized access," such an argument also fails. As stated by the district court, it is undisputed that Brekka was entitled to obtain the documents at issue. Moreover, nothing in the CFAA suggests that a defendant's authorization to obtain information stored in a company computer is "exceeded" if the defendant breaches a state law duty of loyalty to an employer, and we decline to read such a meaning into the statute for the reasons explained above. Accordingly, Brekka did not "obtain or alter information in the computer that the accesser is not entitled so to obtain or alter," see 18 U.S.C. § 1030(e)(6), and therefore did not "exceed[ ] authorized access" for purposes of §§ 1030(a)(2) and (4).

This footnote seems directly contrary to the outcome in Nosal. It is also an example of something I tell my cyberlaw students - make every argument you can! How could LVRC not have made the exceeded authorization argument directly on appeal? Surely that issue merited more than a footnote.

The court doesn't deal with this footnote, but instead makes some factual distinctions that work for me. First, in LVRC the defendant had unfettered access with no clear rules about the data. Second, in this case there is a clear trade secret misappropriation, whereas in LVRC the allegation was a nebulous "breach of duty" argument without any real showing that the emailed documents would be used competitively against LVRC.

Maybe it is because of my background in trade secret law (and I suspect I may be in the minority among my cyberlaw colleagues), but I think this was the right interpretation and the right outcome. Exceeding authorized access has no meaning if it does not apply in this case. To me, at least, this was a textbook case of access that starts authorized, but becomes unauthorized as soon as the nefarious purpose for the access is revealed.

And now the scary part

That said, this is still scary - but the problem is with the law, not the court's ruling. Why is it scary?

First, employees who look where they shouldn't could now be considered criminals under the CFAA, so long as they are looking at material they know they shouldn't be accessing.

Second, this is not necessarily limited to employees. Anyone using a website who starts using information from it in a way that the web operator clearly does not desire could theoretically be criminally liable.

Now that's scary.

The Nosal court tries to explain this away by saying that fraudulent intent and obtaining something of value are required under 1030(a)(4). True enough, but that's not the only subsection in the CFAA. Section 1030(a)(2), for example, outlaws simply obtaining information. Sure, the penalties may not be as severe, but it is still barred.

So, how do we reconcile this case with common sense? Are all web users now criminals if they lie about their age or otherwise commit minor violations? I doubt it. 

First, I think there must be some independent wrongful action associated with the action - a tort that common folk would understand to be wrongful. In this case, trade secret misappropriation was clear. LVRC v. Brekka went the other way because it was not at all clear the action was independently wrongful and thus something the employer would never authorize. I tend to think that browsewrap agreements on websites won't cut it.

Second, the wrongful action has to be tied somehow to the unauthorized access. In other words, lying about your age shouldn't affect access rights generally, but lying about your age might very well be a problem if the reason you did so was to prey on young children. I'll leave others to debate how this might apply to the Lori Drew case. The recent case of MDY v. Blizzard makes this connection for the Digital Millennium Copyright Act, and it seems like a reasonable one under the CFAA as well.

The CFAA scares me, and it should scare you, too. But it's not as scary as many make it out to be - at least I hope not.

Posted by Michael Risch on April 28, 2011 at 04:41 PM in Information and Technology, Intellectual Property | Permalink | Comments (10) | TrackBack

Friday, April 15, 2011

Liberate Transcripts from Vacuum Tubes

Transcript

I've looked at approximately 1.3 bazillion college transcripts this year as a member of my law school's Admissions Committee. Along the way, I've developed some strong opinions about how they should be formatted. Here are my thoughts:

In general:

Use modern computers to generate your transcripts. Most transcripts look like they've been spit out of a midcentury computer packed with vacuum tubes, back when computer memory was so expensive it was conserved on a byte-by-byte basis. Well, that's no longer the case. Let clarity and legibility flourish.

Write transcripts with an external audience in mind. A lot of transcripts look like internal record keeping, a sort of cryptic log book. Format them so that they are readily accessible and meaningful to an outsider.

Some specifics:

Spell out the full names of courses. And use lowercase letters appropriately. Don't put "SEM INT INT POL TH LIT WR REQ." Instead, write "Seminar: Introducing International Politics Through Literature (taken to fulfill writing requirement)."

Skip the weird abbreviations in place of grades. If someone withdrew from a course, just put "withdrew," not "Q," "W," "WD," or whatever. If the course is in progress, put "in progress," not "IP," "Z," or some other symbol. If someone earned a pass in a non-graded course, say "Pass" or at least use a "P". If you stick a "CR" on someone's transcript, you're just putting a "C" in the grade spot. It makes me look twice, and then it causes me to wonder why this person had to take Beginning Archery to get a bachelor's degree.

Put the degree earned, and honors, if any, in a prominent box on the upper right of the first page of the transcript. Some schools do this, and it is very helpful. There's no reason anyone should have to hunt for this information.

Format the transcript with portrait orientation, not landscape. In other words, the 8.5-inch sides of the paper should be along the top and bottom. That's how everything else in an admissions file is formatted. That's how résumés and recommendation letters are formatted. Follow suit.

Put all relevant information on the front of the transcript. Putting explanatory information on the back creates a headache when things are copied. And then, include only the relevant explanatory information. There is no reason for explanatory information to include the crazy grading system the university used during the 1973-1974 academic year if the student didn't go to school then. If you use a post-vacuum-tube computer, you can personalize that information and put it on the front of the last page of the transcript, making it easy to use and digest.
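The grade-abbreviation advice above boils down to a lookup table. Here is a sketch in Python; the particular code mappings are hypothetical examples drawn from the post, not any registrar's actual scheme:

```python
# Hypothetical mapping from cryptic registrar codes to reader-friendly entries.
GRADE_NOTATIONS = {
    "Q": "Withdrew", "W": "Withdrew", "WD": "Withdrew",
    "IP": "In progress", "Z": "In progress",
    "CR": "Pass", "P": "Pass",
}

def legible(grade: str) -> str:
    """Spell out an abbreviation; leave real letter grades (A, B+, ...) alone."""
    return GRADE_NOTATIONS.get(grade, grade)

print(legible("CR"))  # Pass
print(legible("A-"))  # A-
```

Note that "CR" maps to "Pass" rather than passing through, which avoids the looks-like-a-C problem described above.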

Posted by Eric E. Johnson on April 15, 2011 at 06:11 PM in Information and Technology | Permalink | Comments (1) | TrackBack

Friday, April 01, 2011

New ABA Accreditation Rule 304(g) and the Outsourcing of Legal Education


One thing that I always thought was great about being a law professor was that ours was a job that was basically guaranteed not to be outsourced. So I was shocked by today's news that the ABA Section of Legal Education and Admissions to the Bar voted to approve Standard 304(g):

"A program of instruction may be completed through exclusively online instruction."

I called up a Section member who is an acquaintance of mine, Avril Sutela, and I asked her if this meant that a law school could be located outside the United States, and she said that as long as the school meets the other accreditation standards, there was nothing to prevent that. She actually told me that Accenture Education - a division of the global outsourcing company Accenture – has already initiated conversations about beginning the approval process for an ABA-accredited school located overseas with an online instructional program.

I understand the strong economic arguments for trade liberalization, and I couldn't blame law students for considering alternative educational opportunities that could radically lower their tuition bills, but, nonetheless, I have misgivings.

I feel like this happened without the kind of considered, deep thinking that the topic deserved. It also makes me think that if the AALS were in charge of law school accreditation, this never would have happened.

It's great to be back on PrawfsBlawg! Thanks to Dan and the gang for having me. And Happy April 1st.

Posted by Eric E. Johnson on April 1, 2011 at 04:53 PM in Information and Technology, Teaching Law | Permalink | Comments (2) | TrackBack

Monday, March 28, 2011

Presence makes the heart grow fonder?

Making my way through Edward Glaeser's thrilling new book, Triumph of the City: How Our Greatest Invention Makes Us Richer, Smarter, Greener, Healthier, and Happier (Penguin 2011). Glaeser points us to the 19th-century English economist Wm. Stanley Jevons, who noted the paradox that efficiency improvements lead to more, rather than less, consumption (fuel-efficient cars end up consuming more gas, and the like). In a neat twist, Glaeser applies this insight to info technology, noting as a "complementarity corollary" that greater improvements in info technology increase demand for face-to-face contact.

This seems intriguingly true at the level of ordinary academic conferences and meetings. The rise of SSRN and easy access to recently published research generate enthusiasm to get together (every AALS now brings invites to blogger happy hours, for instance). More interesting to me is whether and to what extent this corollary extends to intra-faculty interactions. Is there anything to the much-remarked fear that info technology will crowd out faculty socialization and face-to-face exchange of ideas, information, and other spillover benefits associated with density? My anecdotal sense is that the opposite is true. As the Jevons-Glaeser hypothesis would predict, peer-to-peer interactions have grown as info technology has improved, and ready access to eclectic, extensive research "has made the world more information intensive, which in turn has made knowledge more valuable than ever, and that has increased the value of learning from other people in cities" (Glaeser at 38). Substitute faculties for cities, and I believe the insight is right on.

One can be Panglossian about this picture, of course.  Info overload competes with schmoozing and adventitious collegiality.  But, still and all, my impression is that both the quality and calibre of faculty discussion in workshops and other informal settings and also the breadth of conversations over professional matters has been steadily improving.

A testable hypothesis, at least loosely related to the above, is the following: there is more awareness of what our colleagues are up to as a result of these human interactions and, therefore, more citations to faculty working in the same areas (supposing, perhaps implausibly, that one can control for the "sucking up" phenomenon).

 

Posted by dan rodriguez on March 28, 2011 at 09:46 AM in Information and Technology, Life of Law Schools | Permalink | Comments (1) | TrackBack

Thursday, March 24, 2011

Google Takes One on the Chin

Unless you’ve been living under a rock for the past five years, you’ve heard about Google Book Search, an online database that Google is filling by scanning books from the collections of multiple academic libraries. Google partnered up with the libraries for access to their collections, concluding that it didn’t need to ask permission from copyright owners to copy the books or make them available online (with restrictions at Google’s discretion). Author and publisher groups brought a class-action lawsuit, and Google sat down with the plaintiffs to hammer out a settlement agreement. The agreement, as presented to Judge Chin, then of the Southern District of New York, not only proposed to settle claims of prior copyright infringement, but also set up a future business relationship between Google and copyright owners, using an opt-out mechanism instead of securing a license.

Judge Chin, sitting by designation as a newly appointed Second Circuit judge, issued an opinion Tuesday denying the motion to approve the settlement agreement. While the opinion does not rule on the merits of Google’s fair use defense, some of the court’s language suggests it would cast a skeptical eye on Google’s behavior in a case on the merits. Judge Chin used words like “blatant” to describe Google’s copying activity, and quoted hostile language from objectors to the settlement, who called the opt-out strategy “a shortcut” and “a calculated disregard of authors’ rights.”

I disagree that there was calculated disregard of authors’ rights. The way many scholars have viewed the case, it was a close call whether Google could copy whole books and place them in an online database for search (and eventual commercialization). Google took a calculated risk, under a standard view of fair use analysis, and one that could still pay dividends if the case gets litigated. I don’t think Google's fair use argument is a close call at all, but not for the standard reasons.

In a paper now available for consideration by your local law review editorial board, I argue that built into the Copyright Act are the seeds of a limited right of first online publication. Historically, the right of first publication protected the ability of the copyright owner to decide when to bring a work to market, and the right was strong enough to trump an otherwise reasonable fair use defense. A right of first online publication would thus dispose of (or at least weigh heavily against) a fair use defense raised to excuse the unauthorized dissemination of a work online—even those works previously disseminated in a more restricted format.

Courts and scholars tend to treat publication as a “one-bite” right, exhausted once the work was released in any format. A reexamination of its history indicates instead that the right of first publication often protected successive market entries. Where courts perceived a significant difference in scope and exposure to risk between a limited initial publication and more expansive subsequent publication, the right of first publication protected that subsequent entry.

In addition to the historical analysis, network theory sheds light on the difference in scope between print and online publication. The dissemination of print books occurs as a conserved spread: while the physical embodiment of the content moves from point to point, the total number of copies in the network remains stable, allowing the owner to correctly assess the risks inherent in market entry. Online dissemination occurs as a nonconserved spread: any holder of a digital copy can instantly disseminate it to any point online while retaining the original. The differences are significant enough that print dissemination should not be held to exhaust or abandon the right of first online dissemination.
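The conserved/nonconserved distinction can be made concrete with a toy sketch (my own illustration, not from the paper; the function names and parameters are invented for exposition):

```python
def conserved_spread(copies, steps):
    """Print model: copies change hands each step, but the total never grows."""
    return copies  # ownership moves from point to point; the count is invariant


def nonconserved_spread(copies, steps, share_rate=1.0):
    """Digital model: each holder can transmit a copy while keeping the original."""
    for _ in range(steps):
        copies += int(copies * share_rate)  # every holder duplicates once per step
    return copies


# A print run of 1,000 books is still 1,000 books after ten rounds of resale...
print(conserved_spread(1000, 10))  # -> 1000
# ...but a single digital copy, shared freely for ten rounds, becomes over a thousand.
print(nonconserved_spread(1, 10))  # -> 1024
```

The point of the sketch is only the asymmetry: a copyright owner can estimate market exposure under the first dynamic but not the second, which is the risk the proposed right of first online publication addresses.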

Copyright law will likely need to adapt to multiple format changes over the effective life of a copyrighted work. Recognizing the right of first publication as a rule governing transitions into new formats will provide courts, copyright owners, and technology innovators with firm rules allowing the copyright owner to decide if and when to adopt a new technology. Intra-format fair use is an important part of the bargain between copyright owners and society. We should be solicitous of intra-format fair use, but much less solicitous of inter-format fair use, particularly in those cases where unauthorized use imports the work into a previously unexploited format, and the new format is significantly broader in scope.

There are close fair use calls in disputes over unauthorized use of copyrighted works. Google’s opt-out copyright strategy, for books never before made available online, was no close call at all.

Posted by Jake Linford on March 24, 2011 at 10:05 PM in Current Affairs, Information and Technology, Intellectual Property | Permalink | Comments (5) | TrackBack

Wednesday, March 16, 2011

Public Forum 2.0

If you are interested in social media and/or the First Amendment, you might be interested in the article I just posted on SSRN.  The abstract is below, and the link is here.

Abstract: Social media have the potential to revolutionize discourse between American citizens and their governments. At present, however, the U.S. Supreme Court's public forum jurisprudence frustrates rather than fosters that potential. This article navigates the notoriously complex body of public forum doctrine to provide guidance for those who must develop or administer government-sponsored social media or adjudicate First Amendment questions concerning them. Next, the article marks out a new path for public forum doctrine that will allow it to realize the potential of Web 2.0 technologies to enhance democratic discourse between the governors and the governed. Along the way, this article diagnoses critical doctrinal and conceptual flaws that block this path. Relying on insights gleaned from communications theory, the article critiques the linear model underlying public forum jurisprudence and offers an alternative. This alternative model will enable courts to adapt First Amendment doctrines to social media forums in ways that further public discourse. Applying the model, the article contends that courts should presume government actors have created public forums whenever they establish interactive social media sites. Nevertheless, to encourage forum creation, governments must retain some power to filter their social media sites to remove profane, defamatory, or abusive speech targeted at private individuals. Although some will contend that ceding editorial control is no more necessary in social media than in physical forums, the characteristic "disorders" of online discourse, and particularly the prevalence of anonymous speech, justify taking this path.

Posted by Lyrissa Lidsky on March 16, 2011 at 04:20 PM in Article Spotlight, Constitutional thoughts, First Amendment, Information and Technology, Lyrissa Lidsky, Web/Tech | Permalink | Comments (0) | TrackBack

Sunday, February 06, 2011

Stanford Law Review as Ebook: Kindle, Nook and iPad

My friend and co-blogger Alan Childress's venture, Quid Pro Books, has helped the Stanford Law Review become the first general law review to publish in an ebook version.  Volume 63, Issue 1 is now available for Kindle, Nook, and in the Apple iTunes store.  See more details at Legal Profession Blog.

Quid Pro Books also re-releases out of print classics in print and ebook formats.  I was delighted to connect Alan with Susan Neiman, who republished her account of Berlin in the years before the fall of the Wall, Slow Fire: Jewish Notes from Berlin. 

For more information, follow the above link to Quid Pro's website.

Posted by Jeff Lipshaw on February 6, 2011 at 06:50 AM in Books, Information and Technology | Permalink | Comments (0) | TrackBack

Monday, January 31, 2011

A Small-Town, Social-Media Parable

    My co-author Daniel Friedel and I just finished a book chapter for a book called Social Media: Usage and Impact (Hana Noor Al-Deen & John Allen Hendricks, eds., Lexington Books, 2011). In the course of writing the chapter, I came across Moreno v. Hanford Sentinel, Inc., 172 Cal. App. 4th 1125 (Cal. App. 2009), a case I had somehow missed when it came out. The Moreno case is a cautionary tale about the dangers of engaging in undue self-revelation online, but that is not the only reason I find it fascinating.  I find it fascinating because I grew up in a small, remote town, and I am acutely aware of the social dynamics that underlie the case.

    Here are the facts. While Cynthia Moreno was a college student at Berkeley, she visited her hometown of Coalinga, California. Moreno subsequently published on her MySpace page a very negative “Ode to Coalinga,” in which she stated, among other things, that “the older I get, the more I reliaze [sic] that I despise Coalinga.” (1128). The principal of Coalinga High School obtained the Ode and forwarded it to a local reporter. After publication of the Ode in the local newspaper, Cynthia Moreno’s family received death threats, and a shot was fired at their home. They were forced to move away from Coalinga, abandoning a 20-year-old business. (1129). They sued the principal and the local newspaper for invasion of privacy and intentional infliction of emotional distress. The trial court dismissed the case against the newspaper under California's anti-SLAPP statute. The Moreno family did not appeal the ruling regarding the newspaper, but they did appeal the trial court’s dismissal of their claims against the principal for invasion of privacy and for intentional infliction of emotional distress. (1129).

    With regard to the privacy claim, the California appellate court held that the revelations concerning the Ode simply were not "private" once Cynthia Moreno posted them on MySpace, “a hugely popular internet site.” (1130). The Court found, in essence, that Cynthia had waived any reasonable expectation of privacy through her "affirmative act." (1130). It was immaterial to the court that few viewers actually accessed Moreno’s MySpace page. By posting it, she opened her thoughts to “the public at large,” and “[h]er potential audience was vast” regardless of the size of the actual one. As Cynthia learned to her sorrow, there is no privacy invasion when information shared with a seemingly “friendly” audience is repeated to a hostile one. (1130). [Query, however, whether this could lead to a false light claim in the right circumstances.]  Although the court dismissed Cynthia's privacy claim, the court held open the possibility that a claim for intentional infliction of emotional distress could succeed. (1128). 

    The Moreno case sends a mixed message about legitimate use of information shared in "public" social media. On one hand, the information is not private. On the other, republication can still lead to liability if done for the purpose of inflicting emotional distress on another in a manner that jurors might subsequently deem “outrageous.”  At first blush, and maybe at second, too, this latter holding regarding intentional infliction (the court left its reasoning unpublished) seems inadequately protective of speech.  After all, the principal was merely republishing lawfully obtained, truthful, and non-private information, arguably about a matter of local significance.  The principal had no "special relationship" (protective, custodial, etc.) with Cynthia Moreno that might have imposed on him a duty to give special regard to her interests or emotional well-being.  It seems from the facts, however, that Cynthia's younger (minor) sister might have been attending the high school of which the defendant was principal, which perhaps could make his actions more "outrageous."  Moreover, how much solicitude should be given to a defendant such as the principal if his intent in publishing the statement was to ensure the ostracization of Cynthia and her family, or even if he acted with reckless disregard that those consequences would result? Of course, the statement need not have been published in a newspaper to produce a similar result.  In a town of fewer than 20,000 residents, the gossip mill very well might have ensured that the information got to those most likely to be interested in it. Moreover, if the principal could foresee harm to Cynthia through the gossip mill, arguably so could Cynthia (even if the scope and exact manner in which those harms would occur were unforeseeable). After all, she did grow up there and should have known, even at 18 years old (or especially at 18 years old), how vicious the repercussions might be if her Ode got out.
I feel great empathy for Cynthia, but I do worry about the free speech implications of giving a tort remedy based on the repetition of lawfully obtained, truthful, non-private information, even when done with bad motives.

Posted by Lyrissa Lidsky on January 31, 2011 at 12:06 PM in First Amendment, Information and Technology, Lyrissa Lidsky, Torts, Web/Tech | Permalink | Comments (0) | TrackBack

Wednesday, December 15, 2010

The Patent System as an Enforcer of Intensive Parenting

Thanks to Dan and the other permanent bloggers on PrawfsBlawg for having me back. Recently, I have been studying the ways in which the law enforces Intensive Parenting. In an article titled Over-Parenting, my co-author Zvi Triger and I show that parenting has changed over the last two decades. The contemporary parent is on a constant quest to obtain updated knowledge of best child-rearing practices and to use this information actively to cultivate her child and monitor all aspects of the child’s life.  In the article we highlight the drawbacks of Intensive Parenting and caution against its incorporation into the law.

Yet, Intensive Parenting is not merely about social norms. The intensive parent uses a vast array of technologies to cultivate, monitor, and remain informed. Technology companies cater to the norms of Intensive Parenting and reinforce them by producing technologies that facilitate it.

This fall, Apple received a patent for a technology that enables parents to control the text messages their children send and receive. Parents can use the technology in different ways. One way is to prevent children from sending or receiving objectionable text messages. But control can take more subtle forms. For example, the patent abstract states the technology could require a child learning Spanish to text a certain number of Spanish words per day.

As we know, children today text hundreds of messages a day. It seems, in fact, that texting has at least partly replaced conversations. In generations that preceded texting, parents did not filter objectionable messages from their children's conversations, nor could they enforce Spanish practice away from the classroom or home.  Yet parents driven by Intensive Parenting norms to regulate and monitor every aspect of their child's life can now control their children's social conversations away from home.

I am often asked: so what does the law have to do with Intensive Parenting? In our article, we show some ways in which the law purposefully endorses and enforces Intensive Parenting norms. Here, however, we have an unexpected and unintentional enforcer of Intensive Parenting: the Patent Office. The Patent Office has awarded Apple a patent and, through this action, indirectly enforced Intensive Parenting.

How does the award of a patent to a parental texting-control technology enforce Intensive Parenting? First, the goal of awarding a patent is to encourage innovation that promotes progress. In this instance, the award of a patent encourages innovation in parental control technologies. Second, although the patent system does not employ moral criteria in deciding which inventions warrant a patent, it does have an expressive value. For many in the general public, the grant of a patent endorses an invention as a good and useful one. Hence, by granting a patent to an invention that lets parents control an important part of their children's social life, the patent system effectively promotes Intensive Parenting norms.

Posted by Gaia Bernstein on December 15, 2010 at 09:27 AM in Culture, Information and Technology, Intellectual Property | Permalink | Comments (4) | TrackBack

Monday, November 29, 2010

Using Smokescreens and Spoofing to Undermine Wikileaks

The leaked diplomatic cables story has already been much discussed and will be discussed further elsewhere.  As a privacy law scholar, my starting point is to think of this problem as a data security issue and then wonder how Pfc. Bradley Manning evidently obtained access to such a large cache of highly sensitive information.  I suspect that some (but not all) of that story will be told in the coming months.  The government's failure to better safeguard sensitive national security information against someone in Manning's position is a genuine scandal, one that ought to prompt careful investigations into what went wrong and rapid data security improvements.

That said, we have seen from analyzing commercial data security breaches that breaches will inevitably occur.  To that end, it may make sense for the government to supplement heightened data security measures by regularly leaking false (but plausible) intelligence to Wikileaks.  The current controversy has generated great attention because the information exposed on Wikileaks is believed to be genuine.  But if disclosures to Wikileaks are frequent and sorting between true and false leaks is difficult, then the damage resulting from inevitable disclosures of true information would be reduced.  It might even make sense for the government to announce publicly that it will be leaking lots of false diplomatic cables going forward, so that foreign governments are not unnerved by what they read on Wikileaks and in the press.

A version of this strategy - called spoofing - was used effectively by the recording industry to make illicit peer-to-peer file swapping less attractive. 

If users couldn't distinguish between real Kanye West mp3 files and garbled noises on Limewire, then they might get frustrated enough to switch to iTunes.  Applications that fostered the swapping of unlicensed p2p files tried to combat spoofing through file rating systems, but these were also subject to gaming by spoofers.  Similarly, a spoofing approach to national security leaks might lower the reliability of Wikileaks information enough to get people to stop paying so much attention to the content that is posted there.  Recall the little boy who cried wolf too many times.  Wikileaks could invest in trying to sort between legitimate and phony leaks, but doing so would be costly and time-consuming, and it might bring more of Wikileaks's contacts with the Bradley Mannings of the world into the open.
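The dilution logic behind spoofing can be put in back-of-the-envelope terms (the numbers below are made up purely for illustration): if readers cannot distinguish real documents from plausible fakes, the credibility of any single published leak falls in proportion to the flood of fakes.

```python
def leak_credibility(genuine, spoofed):
    """Chance that a randomly chosen published 'leak' is genuine,
    assuming readers cannot tell real documents from plausible fakes."""
    return genuine / (genuine + spoofed)


# With no spoofing, every disclosure is fully credible (and fully damaging).
print(leak_credibility(10, 0))   # -> 1.0
# Salting the channel with ninety plausible fakes turns each document into a 10% bet.
print(leak_credibility(10, 90))  # -> 0.1
```

This is the crying-wolf effect in miniature: the spoofer does not need to stop leaks, only to make the average published document cheap to dismiss.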

One question that arises is whether the government is doing this already.  Corporations evidently engage in such behavior with some regularity.  Indeed, using misdirection strategies against competitors may be part of the "reasonable precautions" that a firm ought to engage in to guard its trade secrets.  Are there instances of the government leaking false documents to Wikileaks?  If the government isn't engaging in those strategies, should it start doing so?

UPDATE: Not-so-great minds think alike.

Posted by Lior Strahilevitz on November 29, 2010 at 05:13 PM in First Amendment, Information and Technology, Web/Tech | Permalink | Comments (3) | TrackBack