
Monday, June 22, 2009

I Do What I Want!

One of my favorite refrains from South Park is Eric Cartman's declaration that "I do what I want!" on the Maury Povich show. The phrase reminds me a bit of Section 230 of the Communications Decency Act, which immunizes web sites and other providers of an "interactive computer service" from liability stemming from content provided by "another information content provider." At its best, 230 encourages a wide collaboration of ideas (like this blog!) and the airing of unpopular viewpoints - free expression in pure form.  At its worst, 230 allows a downward spiral of garbage, harassment, and anonymous defamation (think Juicy Campus).

Section 230 is one of my favorite topics that's not on my research agenda, so I plan to explore some of the issues in a couple of blog posts instead. Something seems wrong with how it has been applied, but I think it is difficult to put a finger on what and why, because the underlying policy makes sense.

In this first post, I plan to discuss the policy basis for Section 230 as well as the soundness of that policy.

Section 230 was enacted to cure a curious whipsaw in the common law.  The general rule of defamation liability is that publishers are liable for false statements and distributors are not unless they have notice.  This makes sense intuitively - Random House is liable for statements of its authors in books, but Barnes & Noble isn't liable for selling the book unless it learns of the false statement.

A couple of cases in the 1990s applied these rules to the internet - if you ran a networked service (these were pre-internet - CompuServe and Prodigy) with user-provided content, then you were not liable if you left all user content untouched, because you were a distributor.  If, however, you edited a single posting, then you became a publisher, and were liable.

The costs of such a rule are tremendous - providers could either leave the site untouched, leading to no control over unabated user content (and we've seen how bad that can be) or providers would have to closely scrutinize every single posting made by a user, an extraordinarily expensive proposition.  The result in either case is a disincentive to provide online services for user content.

In steps the Communications Decency Act, which immunizes providers for all content provided by others, whether or not the provider polices some of that content.  The statute turns the common law rule on its head - it doesn't matter that there is direct notice of the falsity, a fact that most practicing attorneys I talk to have a hard time getting their arms around.  The statute has been extended to cover defamation, stock fraud, and all sorts of other wrongs, when such wrongs are perpetrated by users.
It can be a rough rule, as some have learned when they try to sue providers for the terrible things their users do. 

Why the CDA, though?  You would think an act dedicated to limiting online indecency would not allow for this kind of free-for-all.  The argument is simple enough.  Under the common law rule, people had a disincentive to do any kind of filtering of indecent or offensive content, lest they be held liable for the borderline stuff.  So, the CDA immunizes providers even if they filter (and even has a section that expressly immunizes such filtering), so that providers have an incentive to weed out the worst, even if some slips through.

When approached from this angle, the policy behind immunity is sound.  After all, those who want garbage on their site would choose not to filter anyway, and would have always been immune under the common law.  This way, those sites that want to do some clean-up now have an incentive to do so because they, too, are immune.

Of course, the policy could have gone the other way, holding people liable even if they did no filtering, but attaching liability for passiveness would force all providers to closely scrutinize (and fact check!) every single piece of content provided by a user. The costs of such a system would be astronomical, and would dissuade all sorts of web sites that we know and love today - blogs, Facebook, LinkedIn, YouTube, and any other site that allows user content.

The middle ground is a notice and takedown system, but this too is problematic, as people would ask sites to take down all sorts of content that is properly posted.  Those who follow the DMCA can attest to the overuse of takedown notices for content that is legally posted. Here, at least, the question is closer based on costs and benefits, but I still lean toward free discourse.  I'm willing to be persuaded to the contrary, and a lot of scholars are looking at ways to align incentives properly.

So, that's a basic introduction to the immunization provided by 230. While I think the policy behind the rule is fundamentally sound, the courts have mucked up the statute a bit, and in ways that blur the reason why we have the statute in the first place.  I'll address these points in my next post on this subject.

Posted by Michael Risch on June 22, 2009 at 01:09 PM in Legal Theory | Permalink




One of the odder parts of this story is that, despite the fact that Prodigy was an unpublished New York state trial court case from Long Island from 14 years ago, and the CDA was passed one year later, non-lawyers today STILL think that Prodigy is the law. I see people, sometimes even lawyers, saying authoritatively that if you edit/delete any comments on a blog, you will become liable for defamation. I think this disconnect is a bit like the one chronicled by Bob Ellickson in Order Without Law, and is explained by the fact that the "don't touch the comments" rule matches with the non-Internet-lawyer view of how the Internet *should* work, whereas "nearly absolute immunity" doesn't.

Posted by: Bruce Boyden | Jun 22, 2009 4:46:26 PM

The comments to this entry are closed.