
Friday, February 27, 2015

It's white, no blue . . . aaaah

Doesn't this illustrate everything that Dan Kahan, current GuestPrawf Dave Hoffman, and others (including me) have been saying about video evidence? If no one can agree on the color of the dress,* how can anyone agree on whether the force used was excessive or whether the protesters were peacefully gathered and marching?

* It's light blue and gold.

Posted by Howard Wasserman on February 27, 2015 at 04:23 PM in Howard Wasserman | Permalink | Comments (6)

Fr. Theodore M. Hesburgh, R.I.P.

It is not just my own University of Notre Dame, but also American higher education and, in many ways, the country, that has lost a truly great and really good man, "Fr. Ted" Hesburgh.  You can learn a lot more about his work and life here.  And the Washington Post's obituary is here.

Fr. Hesburgh was retired by the time I arrived at Notre Dame, but I did have the chance to meet and talk with him several times, including in connection with the University's education-reform efforts.  I remember him expressing surprise, and a bit of irritation, when I told him back in 2000 that vouchers and school-choice were still controversial and politically challenging.  "I thought L.B.J. and I took care of that back in 1965!", he said.  "There are a few details still being worked out," I assured him.  God bless Fr. Ted.

Posted by Rick Garnett on February 27, 2015 at 02:09 PM in Rick Garnett | Permalink | Comments (0)

Teeth Whitening for Lawyers

Thanks to PrawfsBlawg for having me, and to Dan Markel for having been such a welcoming presence when I first entered academia a few years ago.  Most of my posts will focus on areas of criminal law/procedure, but today I want to look at Unauthorized Practice of Law (UPL) rules (rules restricting who can practice law, with "the practice of law" usually defined incredibly broadly, and enforced mainly by bar associations) in the context of a recent Supreme Court decision.

In North Carolina State Board of Dental Examiners v. FTC, decided this past Wednesday, the Supreme Court ruled that North Carolina's dental board could not restrict non-licensed teeth-whiteners from beautifying North Carolinians' smiles.  This case may have more impact on lawyers, and particularly bar associations, than you might think.  The Court relied heavily on an earlier ruling holding that bar associations, which used their UPL rules to prevent nonlawyers from providing "legal" services, fell within the ambit of the Sherman Act.

Despite that ruling, bar associations continue to apply UPL rules to inhibit competition not only from nonlawyers who wish to appear in court (traditional lawyer activity) but also from those who wish to fill out simple contract forms (to purchase a home, for instance) or advise a friend on her will.  I, and other more prominent scholars, have argued that these rules are not only anticompetitive but also do a great disservice to the 3/5 of American plaintiffs who appear pro se because they cannot afford an attorney, not to mention the millions more who forgo advice on transactional arrangements for the very same reason. The mantra from bar associations is that these rules protect the public interest, but, as in the N.C. dentist case, it is often hard to see whose interest is protected other than that of the professional degree-holders.

I am curious to see whether this recent case will revive challenges to UPL rules.  I am also curious to hear arguments from those who believe UPL rules actually do serve the American public.

Posted by Kate Levine on February 27, 2015 at 01:30 PM in Blogging | Permalink | Comments (3)

It's Been a While

Hi folks.  It's a bittersweet pleasure to come back to Prawfs, which was my first blogging home as an academic.  I joined the academy in 2004 and blogged here for my first year.  I was last on the site as an author on October 31, 2005, to be precise, the day I left for CoOp. 2005!  Remember? When applications were up, SSRN was new, and blogging wasn't stagnant?

Actually, I'm not sure that last bit is true.  Yes, law professor blogging has taken on an increasingly navel-gazing tone: more posts about socks, rankings, rankings of socks, and sometimes lateral moves.  But at the same time, contrary to my predictions, blogs haven't by and large consolidated; most of the blogs around in 2005 are still chugging along, and one blog, Volokh, has clearly made a serious, sustained, and substantial contribution to the world through its role in motivating the ACA litigation.

Dan Markel believed in this medium.  Among other things, he was the first to see that junior law professors would want a place to gripe anonymously about submissions and hiring. I argued often, online and off, that Prawfs fora are almost entirely bad for the profession. I still think I'm correct, but Danny was right to see an unmet demand for community across our various, isolated schools and subject-matter specialties. Danny was a connector. Like so many of you, I feel his loss still in missed connections, phone calls, sometimes presumptuous stories, and scholarship.  And like so many of you, I remain astonished by the lack of action in his case. Danny's scholarship was, in some way, about the social costs of crime. It's ironic that his death provides such a clear example of theory in action. Law professors spend so much time on innocents in jail that they sometimes forget to account for the human costs of crime unsolved.

In any event, this month I'll try to engage with these topics, as well as those more evergreen: JD/PhDs (good, bad, scam?); skills education (and its relationship with employment); the problem with p-values; and, of course, promoting an article I have out for submission.

Posted by Dave Hoffman on February 27, 2015 at 10:10 AM in Blogging | Permalink | Comments (0)

Thursday, February 26, 2015

Declaring victory?

At CoOp, Ron Collins discusses the ACLU's new 2015 Workplan: An Urgent Plan to Protect Our Rights, which listed 11 "major civil liberties battles" that the organization plans to focus on--none of which have anything directly to do with the freedom of speech or of the press. Ron wonders why, given the ACLU's history and founding purpose. He emailed ACLU Executive Director Anthony Romero about this and was told Romero intends to respond.

I look forward to hearing Collins report on Romero's response. But let me offer one possible (if not entirely accurate) answer: We won. There are no "major civil liberties battles" to be fought or won with respect to the freedom of speech. Yes, we still have situations in which government passes laws or does other things that violate the First Amendment and those must be fought in court. But the First Amendment claimant wins most of those cases and much of the doctrine seems pretty stable at this point; it simply is a matter of having to litigate. Importantly, these do not (or at least do not appear to) reflect a systematic assault on free speech rights across wide areas of the country on a particular matter. There is no overwhelmingly adverse legal precedent that must be changed (compare surveillance), no overwhelming series of incidents highlighting the problems (compare police misconduct), and no systematic assault on a right by political branches or other majoritarian institutions (compare Hobby Lobby; reproductive rights; voter ID).

The only "major battle" arguably to be fought on the First Amendment is over campaign finance. But the ACLU is famously divided over that issue, with past leaders fighting among themselves and divisions within the current leadership. The rules governing public protest have evolved to overvalue security at the expense of the right to assemble and speak in public spaces, especially at singularly important events (political conventions, meetings, etc.). But there are so many variables at work there, it is hard to see how to create a battle plan on that.

That's it. Police still seem unsure about what to do with people filming them in public, but that is not because the doctrine is not clear. The student-speech doctrine is a horror show, but that is not an issue on which you hinge your fundraising. Campus speech codes are a pervasive and systematic problem (but see Eric Posner), but the ACLU may be divided on that issue as well (since much of the targeted speech is deemed racist, sexist, etc.). And anyway, other organizations (notably FIRE) have made this their specialty. Not every challenged trademark involves a racial slur. Am I missing something else?

Note that I do not mean to suggest that we won and that there are, in fact, no more systematic threats to free expression. Yes, I feel a lot better about my right to burn a flag, defame the President, or watch "Fifty Shades of Grey" than I do about my daughter's future right to control her body. But it would be a mistake for the ACLU (or anyone else) to declare victory on free speech and drop the mic.

Posted by Howard Wasserman on February 26, 2015 at 09:31 AM in First Amendment, Howard Wasserman | Permalink | Comments (12)

Wednesday, February 25, 2015

Crime, Policing, and CompStat: An OVB/Endogeneity Exacta

Continuing my examination of the Brennan Center report on crime and incarceration, I want to turn my attention now to its treatment of CompStat and policing. The report finds, based primarily on Steve Levitt’s prior work, that policing is responsible for about 0% to 10% of the crime drop in the 1990s and very little in the 2000s; meanwhile, using city-level data, the report suggests that CompStat contributed 5% to 15% of the drop in city-level crime. The report quite likely understates the effectiveness of policing and overstates the effectiveness of CompStat.

Let’s start with CompStat. My first concern with the report is that, despite city-level data showing that CompStat works, CompStat isn’t included in the state-level models. The justification—that CompStat is a city-level program, not a state-level one—is one of those arguments that makes sense at first but unravels with more thought.

Consider how the authors themselves explain this decision:

Because policing is a local function, executed on the city and county level, an empirical analysis of CompStat must be conducted at a local level instead of a state level.

But the state-level models include police numbers, which, as they say, reflect a local function executed at a local level. So why include police numbers but not CompStat in a state-level regression? For that matter, most of their other factors are local- and county-level as well, such as unemployment, income, even the use of the death penalty. In all these cases, state-level aggregates are homogenizing county- and local-level variation.

And that may be totally okay. But why treat CompStat any differently?

The only reason I can see is that it is harder to aggregate. It is easy to add up all the police in a state, or to average out the local unemployment rates. But how do you “average” a binary choice like CompStat? A city either has it or it does not. So how do you code “CompStat” for, say, California, if Los Angeles and San Diego use it but San Francisco does not? And do you even care if Sonoma uses it or not?

Tricky questions, to be sure. But that doesn’t mean we can just punt on them. There may be creative ways to manage this issue: how about percent of population or percent of crimes taking place in cities or counties using CompStat, which would be akin to some sort of police-per-capita number? Or perhaps designate a “big enough city” cutoff point (at least x% of the state population), and then have an indicator for each city? 

I have no doubt there are problems with each of these, and these are just ideas off the top of my head. But the point is that if CompStat matters at the city level, it matters at the state level, since cities—especially big cities, which are the ones most likely to adopt CompStat—exert sizable influence on state-level statistics. So CompStat can’t simply be dropped without cost.
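For concreteness, the first of the ideas floated above (the share of a state's population living in CompStat jurisdictions) is mechanical to compute once you have city-level data. Here is a minimal sketch in Python; the cities, numbers, and column names are made up for illustration and are not the report's data:

```python
# Hypothetical sketch: collapsing a binary, city-level CompStat indicator
# into a continuous state-level measure (population share covered).
import pandas as pd

cities = pd.DataFrame({
    "state":      ["CA", "CA", "CA", "NY", "NY"],
    "city":       ["Los Angeles", "San Diego", "San Francisco",
                   "New York", "Buffalo"],
    "population": [3_800_000, 1_300_000, 800_000, 8_400_000, 260_000],
    "compstat":   [1, 1, 0, 1, 0],  # 1 = city has adopted CompStat
})

# Population living under CompStat, summed within each state.
cities["covered_pop"] = cities["population"] * cities["compstat"]
by_state = cities.groupby("state")[["covered_pop", "population"]].sum()

# The candidate state-level regressor: share of (urban) population covered.
by_state["compstat_share"] = by_state["covered_pop"] / by_state["population"]
print(by_state["compstat_share"])  # CA ~ 0.86, NY ~ 0.97
```

The same template works for a crime-weighted share; the point is only that "binary at the city level" does not force "binary at the state level."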

And while CompStat appears to influence crime (though more on that in a moment), it could also shape incarceration directly. The decision to adopt CompStat could reflect a certain degree of criminal justice sophistication, or perhaps a more cost-conscious managerial approach, that signals a broader shift away from incarceration independent of trends in crime rates. So the standard omitted variable bias concern recurs.

Here, it looks like CompStat reduces both crime and—if my theory here is right—incarceration, so its exclusion, like all the others I’ve identified so far, will tend to make the model understate the effect of incarceration on crime.

Even at the city level, though, there is another risk of omitted variable bias when it comes to CompStat: not CompStat itself (obviously), but how the police use it. The authors of the report are very careful to point out that their variable makes no assessment of how the police use CompStat or respond to its findings—broken windows, stop-and-frisk, hot-spot policing, etc.—on the (again, understandable) grounds that such data is hard to gather.

Unfortunately, it might also be important. A recent paper that randomized police responses in designated hot spots found that foot patrols and problem-oriented policing responses had almost no effect on crime, while offender-oriented tactics produced a 42% drop in all violent crimes and a 50% drop in violent felonies. The tactics appear to matter significantly.

So what is this CompStat variable picking up? Is it just the adoption of CompStat, or do cities that adopt CompStat more often than not also adopt tactics alongside it that appear to work? It’s possible that much of the “CompStat” effect this model detects is really a “Tactics” effect, with the tactics correlated with the adoption of CompStat.

This is not a trivial question. If I am a cash-strapped police department deciding between buying new computer systems and training my officers, I’d really like to know whether what matters more is the precision of the response or how my officers respond in general. Unfortunately, the Brennan Center model can’t tell us that.

Which is not to say that identifying CompStat or its correlates as important (assuming the rest of the model has no problems) is not useful. That finding could be very useful—it at least narrows down the options for jurisdictions to consider. But it is important to acknowledge the limited nature of the finding.

Finally, I want to turn briefly to the report’s focus on policing. The authors note that their regressions return only a minimal effect of policing on crime, but they rightly point out that endogeneity is a big problem here as well, as Steve Levitt made clear years ago.* The authors then draw two conclusions:

  1. They decide to simply accept Levitt’s numbers for the 1990s.
  2. They point out that the number of sworn officers has been flat or falling slightly over the 2000s, so policing’s effect during that time is likely minimal.

I have concerns with both of these claims.

First, the authors decline to generate new results for policing on the grounds that endogeneity is hard to correct, so it makes sense to stick with Levitt’s popular findings. But Levitt’s instrument is actually pretty easy to use—as long as you have data on when mayoral and gubernatorial elections occurred, you can extend Levitt’s instrument into the present. Admittedly Levitt’s data is city-level, not state-level, but the authors could have run new city-level regressions to see if Levitt’s findings are robust to more data. They may have even been able to directly model whether there are diminishing returns to policing in an era of falling crime, although there could be some technical problems there.**
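For a flavor of what extending that approach might look like mechanically, here is a hypothetical two-stage least squares sketch in Python, using election-cycle indicators as instruments for police levels. The panel, column names, and specification are all assumptions for illustration; this is not Levitt's or the report's actual code, and a serious version would at least add city and year fixed effects:

```python
# Hypothetical sketch of a Levitt-style IV regression: election-year
# dummies instrument for police levels, which are endogenous to crime.
import pandas as pd
from linearmodels.iv import IV2SLS

df = pd.read_csv("city_panel.csv")  # assumed city-year panel
df["const"] = 1.0

model = IV2SLS(
    dependent=df["log_crime_rate"],
    exog=df[["const", "log_unemployment", "log_income"]],  # controls
    endog=df[["log_police_per_capita"]],                   # endogenous regressor
    # Hiring tends to spike in election years for reasons plausibly
    # unrelated to future crime shocks; hence the instruments.
    instruments=df[["mayoral_election", "gubernatorial_election"]],
)
results = model.fit(cov_type="clustered", clusters=df["city_id"])
print(results.summary)
```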

Moreover, while they are right that the number of officers has been flat, levels of crime have been falling. So it is likely that officers per crime have gone up. Might that suggest that the relative strength of police forces has risen, even if their absolute number has declined? In which case, might policing have more of an effect into the 2000s? It obviously doesn’t have to—maybe policing becomes less effective as crime falls since it becomes harder to actually uncover it—but the claim that a flat number of officers can be taken as evidence of negligible effect strikes me as perhaps looking at the wrong number (officers, instead of officers per crime).
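To put made-up numbers on that point: suppose the officer count $P$ stays flat while crime falls 40%, from $C$ to $0.6C$. Then

$$\frac{P}{0.6\,C} = \frac{5}{3}\cdot\frac{P}{C} \approx 1.67\,\frac{P}{C},$$

so officers per crime rises by roughly two-thirds even though the absolute headcount never moves. Whether that translates into effectiveness is exactly the open question, but it shows why a flat officer count, standing alone, settles nothing.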

The tl;dr version, then? The effect of CompStat could be picking up important tactical choices, and the ineffectiveness of policing could reflect incorrect modeling choices. Neither of these problems is guaranteed, but the risk is non-trivial, and the policy implications of being wrong here are non-trivial as well.


* I must admit to finding it somewhat strange that the authors are quick to accept Levitt’s argument that endogeneity is a problem here but not when it comes to incarceration.

** I have no idea off the top of my head if it is easy to instrument for a variable that appears in some sort of quadratic equation.


Posted by John Pfaff on February 25, 2015 at 12:30 PM in Criminal Law | Permalink | Comments (0)

JOTWELL: Erbsen on Klerman & Reilly on forum selling

The new Courts Law essay comes from Allen Erbsen (Minnesota), reviewing Daniel Klerman & Greg Reilly's Forum Selling, which discusses how particular courts make themselves attractive places for parties to forum shop. The article and the review essay are worth a read.

Posted by Howard Wasserman on February 25, 2015 at 11:23 AM in Article Spotlight, Civil Procedure, Howard Wasserman | Permalink | Comments (0)

Tuesday, February 24, 2015

Another twist in the march to marriage equality

Two weeks ago, Judge Granade enjoined Mobile Probate Judge Don Davis from enforcing the state's SSM ban and ordered him to begin issuing marriage licenses to same-sex couples. Last week, Davis refused to grant a second-parent adoption to Cari Searcy and Kimberly McKeand, the plaintiffs in the first action in which Judge Granade invalidated the state ban. Davis entered an interlocutory decree granting Searcy temporary parental rights, but declining to issue a final adoption order until after SCOTUS decides the Marriage Cases this spring. Searcy and McKeand have filed a new action against Davis, seeking not only an injunction, but also compensatory and punitive damages (I have not been able to find the complaint).

First, this illustrates the importance of determining the true and proper scope of an injunction. In Strawser, the court enjoined Davis from enforcing the SSM ban and ordered him to issue licenses to Strawser and some other named plaintiffs. But that is the limit of the court order. It does not and cannot govern his enforcement (or non-enforcement) of the SSM ban as to anyone else or in any other context. Thus, the argument that Davis is bound by any court order to grant this adoption is wrong. Beyond the order itself, we have, at most, persuasive authority that the SSM ban is unconstitutional, nothing more.

Second, this new lawsuit seems to have other problems. Adoption decisions by probate judges, unlike decisions to grant or deny marriage licenses, appear to be judicial in nature, involving petitions, hearings, evidence, interlocutory and final orders, and appeals. This raises a couple of issues. First, if this is a judicial act, Davis is absolutely immune from damages--Davis was named in Searcy's original action and this was one argument he made in his motion to dismiss. And if Davis was acting in a judicial capacity, then under § 1983 the plaintiffs at this point can obtain only a declaratory judgment, not an injunction. Second, if this is a judicial act, this action should be barred by Rooker-Feldman--Searcy and McKeand are state court losers (they did not get the remedy they wanted in state court) and functionally are asking the federal court to reverse the state court decision. This argument is a bit weaker within the Eleventh Circuit, as there is some district court caselaw holding that Rooker-Feldman applies only to final state court decisions, not to interlocutory orders. Still, if Davis was wrong to deny the adoption in a state judicial proceeding, the plaintiffs' move is to appeal, not to run to federal court.

Update: Thanks to commenter Edward Still for sharing the Complaint, which is as bad as I thought. It asks for an injunction against a judge without having gotten a declaratory judgment; it asks for damages and attorney's fees against a judge for what the complaint itself makes clear is a judicial act; and it asks the district court to "strike" an order of a state-court judge and to command that state judge to grant parties relief. I am not big on Rule 11 sanctions against civil rights plaintiffs, but this one asks for so much that is so obviously legally barred by clear statutory language as to be a bit ridiculous.

Posted by Howard Wasserman on February 24, 2015 at 10:02 PM in Civil Procedure, Constitutional thoughts, Howard Wasserman, Law and Politics | Permalink | Comments (2)

Yale's proposed faculty-conduct code

Inside Higher Ed has the story, here, about what at least some faculty at Yale University are calling "a 'curious' and 'confusing' proposed faculty conduct code threatening undefined sanctions for a mishmash of transgressions."  (It strikes me that "mishmash of transgressions" could be the title of a David Lodge book, or maybe a sequel to Lucky Jim.)  Here's just a bit from the piece:

The draft, which is not publicly available but which was obtained by Inside Higher Ed, says it seeks to summarize those principles and “provide examples of conduct that falls short of the professional behavior they require.” It continues: “The examples of conduct listed here are not exhaustive, and if a faculty member’s behavior violates the faculty’s shared principles, he or she may be subject to sanction whether or not the behavior is specifically described below.”

Examples of sanctionable behaviors include “arbitrary and capricious denial” of access to instruction or academic resources, failure to contribute to the “teaching mission” of the university “reasonably required” by a faculty member’s program and the failure to meet “reasonable deadlines” in evaluating a trainee’s work or providing career support, such as letters of recommendation. The document does not specify what kinds of sanctions might be meted out. . . .


Posted by Rick Garnett on February 24, 2015 at 02:59 PM in Rick Garnett | Permalink | Comments (2)

Do Law School Exams Encourage Bad Legal Writing?

Do law school exams teach lousy legal writing? I am thinking of the “issue-spotting” exam in which the student is expected (or thinks that he or she is expected) to touch on as many issues as possible to demonstrate that he or she did his or her time in the course, taking notes, briefing cases, and soaking up information. Typically, such exam answers consist of lots of points hurriedly raised and rarely resolved or argued effectively. Such answers often adopt the indecisive “one-hand-other-hand” style of a bad bench memo, noting that there are opposing arguments on a point but not making any effort to evaluate whether and how one argument is better than another.

These symptoms of a certain type of exam answer writing also seem to be characteristics of bad legal writing by young attorneys starting out as associates, at least according to senior partners that I canvassed a couple of summers ago, in an effort to learn how to improve NYU’s legal writing program. The most common complaint was that new hires’ emails, memos, and draft briefs did not make an argument for a particular position. Instead, the novices summarized too much at too great length without arriving at any plain bottom line. “Don’t they know that we’re paid to be advocates?” one lawyer complained. “Clients pay for answers, not encyclopedias,” said another.

Law students, however, pay to take issue-spotting exams. And sometimes I think that this genre corrupts their legal writing later, by causing them to slight the ranking and evaluation of arguments in favor of the spotting of issues and the quick summarizing of arguments without really evaluating them.

I’ve tried to move away from the sort of exam that induces this response from students, and I am inclined to think that, at least with the right sort of exam question, the following piece of advice from Howard Bashman on writing effective appellate briefs should apply to exam-writing as well:

Experienced appellate advocates agree that raising too many issues on appeal hurts, rather than helps, the appealing party. Raising one to four issues on appeal is best; raising a few more issues than that is acceptable when absolutely necessary. In United States v. Hart, 693 F.2d 286, 287 n.1 (3d Cir. 1982), the Third Circuit endorsed Circuit Judge Ruggero J. Aldisert's statement that "when I read an appellant's brief that contains ten or twelve points, a presumption arises that there is no merit to any of them." It does not suffice merely to raise an issue; be sure also to include argument on the point in the argument section of your brief.

Posted by Rick Hills on February 24, 2015 at 05:54 AM | Permalink | Comments (11)

Monday, February 23, 2015

John Oliver on electing judges

Obviously, I would disagree with the part that suggests Roy Moore is defying federal courts or federal orders. But the rest, as it highlights the ridiculousness of electing judges and the perverse incentives that creates, just sings.

[Embedded video: John Oliver segment on electing judges]

Posted by Howard Wasserman on February 23, 2015 at 05:35 PM in Howard Wasserman, Law and Politics | Permalink | Comments (2)

Sunday, February 22, 2015

Real-Life Exam Questions: Do They Require a "Flipped" Class?

For the last few years, I have given a set of "real-life" exam questions to my students in NYU's required Legislation & Regulatory State course. My basic method is to check the NPRMs (notices of proposed rulemaking) pending in the Federal Register for tough questions of statutory interpretation. I also call up my friends working in city, state, or federal agencies to ask for help identifying some tangle of statutory ambiguity -- ideally one with a term that, if taken literally, would defeat the obvious purpose of the law. Faster than you can say King v. Burwell, I can generally get a genuinely impossible statutory mess cooked up by Congress. (For some truly impossible problems arising out of some statutes on energy conservation, you can download my 2014 LRS exam question.) The "real-life" context poses some challenges for the exam question writer. Realism requires a completely open universe of materials -- all of the relevant precedents (and they have to decide what's relevant), all of the relevant rules and statutes (ditto), all of the relevant and important comments on an NPRM at regulation.gov. (As an act of mercy, I boil down the key comments in a cover memo.) These conditions require me to come up with problems on which there are no publicly available briefs or judicial opinions. Hence, my recourse to NPRMs and pals at agencies.

The open universe presents two big challenges for my students: relevance and reading comprehension. Based on years of training, students sometimes waste space spotting (but not resolving) issues and displaying gratuitous erudition, larding up their answers with recitals of law that, despite their accuracy, are unnecessary for solving the specific problem. Likewise, the complex, nested clauses of multiple statutes can induce basic reading errors (overlooking a "not," for instance), causing large sections of some answers to be nonsense. Figuring out which knowledge is relevant and which gratuitous is an important skill (especially in a world of stricter page limits on briefs and limited judicial attention spans). So is careful parsing of complex statutes. But neither of these basic skills has much to do with understanding foundational principles that stand in the background of any interpretative problem -- federalism, Presidential versus congressional authority (sources of various substantive canons like Gregory v. Ashcroft or Chevron), the adjustment of powers between past and present Congresses (canon against implied repeal), and so forth.

So I am wondering whether I need to flip my class -- pre-record the lecture on the foundational question (say, textualism's relationship to the enforcement of legislative deals) and spend class time having teams of students work on some real-life practice problems with tangled statutes and oral arguments from volunteers, just to help them read more accurately and write and speak more to the point. Despite plugs from distinguished teachers like Deborah Merritt, I am a bit worried about taking the plunge. In particular, I cannot help but think that "flipping" creates tensions between the teaching of micro-skills (e.g., how to parse ten nested clauses of two different statutes or come up with a pithy, telling phrase to capture why an apparently relevant canon adverse to one's client is inapplicable to a specific set of facts) and foundational principles. After the jump, a bit more on this challenge of "flipping" and a plea for advice.


I have no shortage of sample real-life problems with which to occupy my students. For the last few years of LRS, I have been distributing such problems every week, going over them during office hours for whoever wants to show up to hone their skills in close reading and clever argument. The difficulty, however, is that students (understandably) do not have time, on top of their normal course load, for an apparently extra-curricular discussion of real-life problems requiring substantial preparatory work. If they are already reading a dense "academic" assignment on (for instance) the relationship between textualism and legislative deal-making, then they cannot easily spare time to figure out (for instance) how to make an effective focused argument about the application of the canon against implied repeal to a specific dispute about the Outer Continental Shelf Lands Act's division of power between the EPA and the Secretary of the Interior (which is one of my weekly exercises).

If I flip the class, then I will flip the time constraints. Students will spend more time refining their skills of focused reading, but they will short-change the foundational principles of interpretation and authority that stand in the background of any dispute about the application of specific statutory clause to a specific set of facts.

The obvious answer is a mix of approaches, I guess. But it would be helpful to hear from people who have created such mixes. Which general "academic" discussions did they sacrifice to make room for class sessions devoted to problem solving? Which specific problems allowed for the simultaneous practice of close reading skills and deep discussion of foundational questions?


Posted by Rick Hills on February 22, 2015 at 09:32 PM | Permalink | Comments (3)

The 2016 U.S. News Rankings Are Still Not Out Yet--Getting Ahead on the Methodology of the Law (and Business) Rankings

We are fast approaching the date that U.S. News issues its graduate school rankings.  According to Robert Morse, chief data strategist for U.S. News & World Report, the official date is March 10th, but the rankings usually leak earlier.  Paul Caron at TaxProf Blog is, of course, already on this and will probably be first out of the box with the analysis when the time comes, so I thought it might be helpful for those who want to prepare to interpret and explain the rankings to read ahead on the methodology the magazine will use.  (This could also be a good time to learn how to set a Google Alert or some other automatic notification method.)  There have been some substantial changes in the law methodology over the past several years—so if you haven’t checked this out recently you might be surprised.  I also had a look at the methodology for ranking business schools, because those rankings seem to have much greater fluctuations than the law school rankings—and indeed I found some interesting information I don't know how to evaluate.  Out of the 435 programs U.S. News contacted for information, 285 responded, but only “127 provided enough data needed to calculate the full-time MBA rankings.”  I leave the interpretation to others, but if my math checks out, they’re only ranking about 30% of the accredited programs.
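For what it's worth, the arithmetic does check out if the 435 contacted programs are the denominator:

$$\frac{127}{435} \approx 0.292,$$

or just under 30%.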

Back to the law school rankings—

There are a few things of note. A change I didn’t hear much about last year is that “for the first time” the “lawyer and judge survey,” which is weighted at .15, comes from names that “were provided to U.S. News by the law schools themselves. This change resulted in a much higher lawyer and judge survey response rate than in previous years.”  This should be of considerable benefit to schools whose reputations don’t extend far beyond their regions.

Another thing of note is that placement success, weighted at .20, was adapted to reflect “enhanced American Bar Association reporting rules on new J.D. graduates' jobs data,” so that “Full weight was given for graduates who had a full-time job lasting at least a year where bar passage was required or a J.D. degree was an advantage. Many experts in legal education consider these the real law jobs.”

However, “less weight went to full-time, long-term jobs that were professional or nonprofessional and did not require bar passage; to pursuit of an additional advanced degree; and to positions whose start dates were deferred. The lowest weight applied to jobs categorized as both part-time and short-term and those jobs that a law school was unable to determine length of employment or if they were full time or part time.”


It’s also interesting to hear about how the specialty rankings are put together:

I knew that the specialty rankings “are based solely on votes by legal educators, who nominated up to 15 schools in each field. Legal educators chosen were a selection of those listed in the Association of American Law Schools' Directory of Law Teachers 2010-2011 as currently teaching in that field. In the case of clinical and legal writing, the nominations were made by directors or members of the clinical and legal writing programs at each law school.”


But I didn’t know that there was a “floor” so that no school is ranked unless it receives at least 7 nominations.   “Those programs that received the most top 15 nominations appear and are numerically ranked in descending order based on the number of nominations they received as long as the school/program received seven or more nominations in that specialty area. This means that schools ranked at the bottom of each law specialty ranking have received seven nominations.”


Posted by Jennifer Bard on February 22, 2015 at 06:16 PM in Blogging, Life of Law Schools | Permalink | Comments (0)

Giving Authoritarianism Its Due: Teaching "Western Values" (Like Hobbes' Unlimited Executive Sovereignty) in Shanghai

I am spending the Spring Term teaching U.S. Constitutional Law at NYU's Shanghai campus, a product of a partnership between NYU, East China Normal University (ECNU), and the Shanghai municipal government. One common and completely natural reaction to this program is suspicion that, by dealing so closely with a government not famed for its protection of academic freedom, NYU is somehow selling out its values in order to get a foothold in the Chinese market for higher education.
This February, Yaxue Cao posted one such expression of suspicion on her "China Change" blog: Boiled down to its essentials, her post asked whether Chinese money and oversight cause faculty members like me to censor ourselves or otherwise change what we teach to suit Chinese authorities. In response, I shared my syllabus with Yaxue and, in our ensuing email exchange, I explained that, while I could not speak for anyone else at NYU Shanghai, I myself am teaching exactly what I want with the usual lack of oversight enjoyed by any prof teaching at NYU in Washington Square. As an example of my unhindered freedom, my course requires the students to compare U.S. and Chinese constitutional rules and concepts, and, as background for this comparison, I assign "sensitive" documents like the infamous "Document Number 9," an internal Chinese Communist Party document urging careful controls on the infiltration of "western" ideas like constitutionalism, freedom of speech, and civil society into universities and newspapers. Very 敏感 ("sensitive"), as CCP officials are prone to say.

I am not, however, taking my freedom as an opportunity to preach "western" ideas of constitutionalism (whatever they might be) to my students. Instead, I am inclined to take the CCP's principles about constitutional government seriously and remain agnostic about whether their authoritarian system is better or worse than the American system of speech libertarianism, competitive political parties, and separation of powers. As part of this agnosticism, I have divided my students into two teams, the "Maoist Leftists" (mascots: Mao and Lenin) and "Western Liberals" (mascots: Locke and Madison) who are assigned the job of trying to persuade the Central Political and Legal Affairs Commission of the CCP either (depending on their team) to adopt or reject American-style judicial review, limits on executive power, and limits on subnational discrimination against non-residents (the so-called hukou system). The Maoist Leftists' task of making the case against "western" constitutionalism is just as important as the Western Liberals' job of defending this congeries of concepts. I take Hobbes, Filmer, Fisher Ames (telling Jefferson to mind his own business on the Alien & Sedition Acts), and Lincoln (suspending habeas corpus and telling Taney to go to hell) just as seriously as Locke or Madison in this course.

Why give authoritarianism its due in this way? After the jump, I give my reasons, which are, in brief, that the true experience of a liberal education is to be skeptical about liberalism.


More specifically, consider three reasons that even a dedicated "western liberal" might take anti-liberal authoritarian ideas about constitutionalism seriously:

1. Missionaries discredit the ideas that they preach: China has had horse doctors' doses of western missionaries, from the Christian ones of the 19th century to the Marxist ones like Michael Borodin of the twentieth. I am not inclined to join their ranks. Even if I were inclined to serve as a Fifth Column to spread some sort of American constitutional propaganda, the worst way to do so would be to play the role of a preacher. Nationalistic hostility to western ideas, especially among young people, runs deep in China. Disliking a practice for no better reason than its foreign origins is completely natural -- witness, for example, American conservatives' suspicion of soccer -- so getting preachy about the beauties of American system of government is a sure way to discredit that system in the eyes of Chinese. Better, I think, to model good behavior, by holding a completely open debate in which the prof does not play the role of Great Helmsman but instead lets everyone make up their own mind.

2. Authoritarianism is as "western" and American as Liberalism: There is an important anti-liberal tradition in the West. To understand Locke, one needs to appreciate Filmer and Hobbes. To take seriously Jefferson's case against the Alien & Sedition Act, one needs to take seriously Fisher Ames' and John Adams' argument that the marketplace of ideas is a failed marketplace in need of governmental correction. The idea that a strong executive should brush aside courts to get things done is as American as apple pie -- or as American as Abraham Lincoln, telling Taney where he can stuff his Ex Parte Merryman order. (Yes, I teach Merryman in my class). By giving authoritarianism its due in the American constitutional tradition, one upends the popular contrast between "western" liberalism and allegedly "eastern" (or "Asian" or "Chinese") notions of power and authority. We, too, have a political tradition based on filial piety and deference to authority (Filmer's Patriarcha and, more generally, the western tradition of defining a political hierarchy based on the "Great Chain of Being" in which everyone has a "natural" place). Taking seriously western authoritarianism is not only a good way of putting liberal ideas into their proper context but also of exploding the notion that authoritarianism is somehow indigenously or distinctively "Chinese" rather than -- quite often -- just another foreign import from some westerner like Lenin or Carl Schmitt.

3. It is hard to take liberalism seriously unless one takes authoritarianism seriously: The slogans of American constitutionalism -- the marketplace of ideas, ambition countering ambition, the least dangerous branch, etc. -- are often just a bunch of empty cliches to American students. Being part of our consensus, they cannot really be taken seriously as ideas worth discussing. In China, these platitudes can be seen for what they really are -- weird and even possibly dangerous notions that might not actually be true. Why believe, for instance, that the political nation will be able to distinguish truth from falsehood in publications about public figures? American students shrug and say, "because New York Times v. Sullivan (or Brandeis or Chafee or Tom Emerson) said so." Chinese authoritarians -- yes, I've met a few now -- actually push back, such that one can have an interesting discussion. "You do not let consumers assess the merits of toothpaste on their own, without governmental guidance," one Left Maoist told me: "Why trust them to tease apart claims about economics and politics?" Good question, actually.

There is something refreshing about being forced to defend in China basic ideas that tend to be unquestioned orthodoxy back home. Such a defense actually treats those ideas with more respect than the unthinking -- one might even say, authoritarian -- acquiescence that they enjoy in the USA. Giving authoritarianism its due, in other words, might be the only way to give liberalism its due.

Posted by Rick Hills on February 22, 2015 at 04:02 AM | Permalink | Comments (2)

Saturday, February 21, 2015

A tribute to Judge Morris S. Arnold

After law school, Nicole Stelle Garnett and I had the pleasure and privilege of clerking for Judges Morris ("Buzz") and Richard Arnold, in Little Rock.  Judge Richard passed away a few years ago.  Last week, though, the Arkansas bar hosted a really nice tribute-event for Judge Buzz, and Nicole was able to attend, along with a bunch of former clerks.  With her permission, I'm sharing -- and highly recommending -- the short presentation she gave (Download Judge arnold).  In a nutshell:  "The law matters, even the mundane can be magical, and the government doesn’t always get to win."

Posted by Rick Garnett on February 21, 2015 at 03:24 PM in Rick Garnett | Permalink | Comments (1)

Friday, February 20, 2015

Levels of Generality in Means/Ends Analysis

It is a familiar lesson of U.S. constitutional doctrine that the outputs of decision rules will sometimes depend on the level of generality with which their inputs are defined.  This theme is perhaps most evident in substantive due process doctrine: Defining a liberty interest in broad terms can increase the likelihood of its qualifying as “fundamental,” just as defining the interest in narrow terms can reduce that likelihood. But generality levels can make a difference in other areas of the law as well. A “right” may be more likely to qualify as “clearly established” for purposes of a qualified immunity defense if we characterize that right broadly rather than narrowly, a “matter” may be more likely to qualify as “of a public concern” when the matter itself is defined abstractly rather than specifically, a “power” may be more likely to qualify as “great substantive and independent” (and hence not implied by the enumerated powers of Article I) if the power is described in general rather than specific terms, and so forth. In these and other contexts, the outcome of a doctrinal inquiry can depend not just on the content of its evaluative criteria (e.g., what does it mean for a right to be “fundamental”?, what matters are and are not of “public concern”?, what does it mean for a power to be “great substantive and independent”?, etc.), but also on how one defines/describes/characterizes the objects to which those criteria apply (e.g., what is the “liberty interest” whose “fundamentality” is at issue, what is the “matter” whose “public-concerned-ness” we are evaluating?, what is the “power” whose “greatness”/“independence” we are measuring?, etc.).

I’d originally figured that this insight applied with equal force to the various forms of means/ends analysis that pervade constitutional law. Means/ends analysis, after all, requires some estimation of the strength of a “government interest” said to justify a constitutionally suspect enactment, and the strength of that interest will in turn depend on the level of generality with which we define it.  Think, for instance, of Holder v. Humanitarian Law Project. There the Court rejected an as-applied First Amendment challenge to the federal “material support” statute, brought by plaintiffs “seek[ing] to facilitate only the lawful, nonviolent purposes” of foreign groups designated to be terrorist organizations.  One can define the government interest in HLP at different levels of generality. From most to least specific, the interest might be characterized as that of (1) “cutting off support for the lawful, nonviolent activities of foreign organizations designated as terrorist groups,” (2) “undermining foreign organizations designated to be terrorist groups,” (3) “undermining foreign terrorist groups,” (4) “combating terrorism,” or (5) “promoting national security,” with a bunch of intermediate options in between. And as the generality-level of the government interest increases, so too should the ease of demonstrating that interest’s overall importance. (All else equal, for instance, the government will have less difficulty in highlighting the vital importance of “national security” than that of “cutting off support for the lawful, nonviolent activities of foreign organizations designated to be terrorist groups.”)  In that sense, means/ends analysis does indeed seem to be at least somewhat sensitive to changes in generality-levels, with increased generality levels yielding increased odds of a government-friendly result.

But things turn out not to be so simple, as something interesting happens when we proceed to ask whether the law is sufficiently closely related to the government interest we have identified.  Here, we encounter something akin to the opposite relationship between generality-levels and justificatory ease: the more generally we have characterized the government interest, the more difficult it will become to show the requisite means/ends fit.  It would not be difficult to show that the material support statute is “necessary to further” the government’s interest in “cutting off support for the lawful, nonviolent activities of foreign organizations designated as terrorist groups”—that objective, after all, is precisely what the material support statute purports to pursue.  But would the law count as necessary to further the more generally defined interest in “promoting national security”?  Maybe, but maybe not. The problem is that the government can “promote national security” in many more possible ways than it can “cut off support for the lawful, nonviolent activities of foreign organizations designated to be terrorist groups.”  And the wider the range of potential means of achieving an interest, the more likely it becomes that a “less restrictive” or “less discriminatory” means will emerge from the heap—thus demonstrating that the chosen means was fatally over- or underinclusive with respect to the interest in question.

In other words, broadening our characterization of the government interest may make things easier for the government (and more difficult for the challengers) when evaluating the strength of the interest, but it will then make things more difficult for the government (and easier for the challengers) when evaluating the degree of fit between the interest and a challenged law.

That’s not to say that the government can’t ever win an argument about “means” where the relevant “end” has been defined in highly general terms.  Indeed, the government did end up winning in HLP, notwithstanding the Court’s decision to characterize the government interest broadly rather than narrowly (the Court went with Option #4, “combating terrorism”). Rather, the point is that because it focuses attention on both means and ends, means/ends analysis may manage to mitigate the influence of generality-levels on the ultimate outcome of the test.  One can ratchet up the generality level of the government interest to assist in justifying the ends, but one must then pay a price when attempting to justify the means. And one can ratchet down the generality level to assist in justifying the means, but one must then pay a price when attempting to justify the ends. As long as one maintains the same description of the government interest throughout the analysis, there should be some level of equilibration across the two prongs of the test.

Now before anyone starts publishing banner headlines about this observation, let me identify a few grounds for skepticism. The hypothesis I've offered may turn out to be (a) false or (b) trivial:

- Why the hypothesis may be false: Even if everything I've said is right, it still might be true that the generality levels matter more at the first step of the inquiry than at the second. In other words, the chosen generality-level of the government interest may exert a major positive influence on whether or not that interest counts as “compelling,” “important,” “legitimate,” or what have you, while exerting only a minor negative influence on whether there is a sufficiently close fit between the interest itself and the law under review. If that is true, then means/ends analysis would remain highly susceptible to manipulation via characterizations of the relevant government interest, with the positive effects of high-generality at the “ends” stage of the inquiry drowning out its negative effects at the “means” stage of the inquiry.

- Why the hypothesis may be trivial: I can imagine two arguments to this effect—one grounded in cynicism and the other grounded in optimism, with both concerning the overall constraining effect of doctrinal rules.

  • The cynic’s argument would maintain that even if means/ends analysis is not sensitive to fluctuations in the generality-level of a government interest, means/ends analysis as a whole remains malleable, manipulable, and ultimately non-constraining in myriad other ways. A judge who is dead-set on upholding a law subject to strict scrutiny (or striking down a law subject to rational basis review) can always find the arguments necessary to justify the desired means/ends result, no matter how generally or non-generally the interest has been defined. So, the cynic would say, it doesn't much matter whether my hypothesis is right or wrong. Even if it is right, the outcomes will be what they will be because judges can and will manipulate other parts of the means/ends test to go where they want to go.
  • The optimist’s argument would focus instead on what has thus far been an unstated (and undefended) premise of my argument—namely, that choices among generality-levels are to some extent arbitrary and difficult to predict ex ante. But if that point turns out to be false—if, in other words, there does exist a coherent, predictable, and defensible way of identifying the operative generality level of a given constitutional input—then any “equilibrating” or “self-regulating” process built into means/ends analysis would be of only marginal significance.  Either way, the optimist would maintain, means/ends analysis would operate in a principled fashion—either levels of generality matter, in which case means/ends analysis is influenced by a principled legal choice, or levels of generality do not matter, in which case means/ends analysis will still be influenced by other principled legal choices.

Posted by Michael Coenen on February 20, 2015 at 03:04 PM in Constitutional thoughts | Permalink | Comments (1)

Crime, Lead, and Abortion (Bear With Me on OVB)

Continuing my examination of the Brennan Center report on crime and incarceration, I now want to consider whether the failure to include measures of lead exposure and abortion rates introduces serious concerns of omitted variable bias. In my previous post, I suggested that omitting inflation and consumer confidence probably didn’t raise many concerns, since it is unlikely those variables had much impact on crime.

Here, I want to argue that omitting changes in lead exposure and abortion rates is also likely not particularly problematic, though perhaps a bit more so than dropping inflation and consumer confidence.

Given that abortion and lead are thought to be two of the most important causes of declining crime in the 1990s and 2000s, it clearly can’t be for the same reason I wasn’t bothered by the failure to include inflation and consumer confidence.* Instead, for these two variables it seems unlikely that there is a strong correlation between either of them and incarceration rates (though I note at the end that there could be a slightly attenuated one). As I explained before, the bias from OVB grows with both the size of the direct effect of the omitted variable (lead, abortion) on the outcome variable (crime), and with the correlation between the omitted variable and the variable of interest (incarceration). If either is low, the bias is low.
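To make that logic concrete, here is the textbook two-regressor version of the result, in simplified notation of my own (with lead standing in for the omitted variable):

$$\text{crime}_i = \beta_0 + \beta_1\,\text{incarceration}_i + \beta_2\,\text{lead}_i + u_i.$$

If lead is omitted, the OLS estimate of the incarceration effect converges to

$$\hat{\beta}_1 \to \beta_1 + \beta_2\,\delta, \qquad \delta = \frac{\operatorname{Cov}(\text{incarceration},\,\text{lead})}{\operatorname{Var}(\text{incarceration})},$$

so the bias is the product of the direct effect and the correlation term: it is small whenever either factor is small.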

What makes lead and abortion potentially different from crack is that they are not directly related to crime control policies. The crack epidemic was framed as a criminal justice issue from the start and thus likely sparked collateral changes in criminal policy that could influence incarceration rates above and beyond crack’s direct effect. But it is harder to see such a connection between lead and abortion on the one hand and incarceration, and criminal justice more generally, on the other.

After all, there was no sense of lead’s link to crime until Jessica Reyes’s paper in 2007, and none about the abortion-crime link until John Donohue and Steven Levitt’s paper in 2001 (which went viral with the publication of the 2000 working paper). By both 2000 and 2007, the crime drop was well underway, and major legal changes were not taking place as frequently. If these factors were shaping other criminal justice outcomes, they were doing so in the background.

That said, there is one way in which they could matter, though I haven’t been able to completely figure out how it all plays out. The papers by Reyes and Donohue and Levitt work statistically only because states varied in their rates of lead and abortion. In both cases, it could be possible that more-liberal states experienced bigger changes: such states may have bought into environmental clean-up faster, and they may have been more politically or culturally tolerant of expanded abortion rights.** And these more-liberal tendencies may have led these states to take less-punitive reactions to crime in general. 

If these assumptions are correct, then omitting lead will cause a model to understate the effect of incarceration on crime.*** But note the attenuation here. The correlation isn’t just that between lead and incarceration—unlike with crack, I have a hard time seeing what that direct effect could be. Instead, the correlation is actually a chain: lead is correlated with politics, and politics is correlated with incarceration rates. The relevant correlation here will thus be the product of these two effects, and therefore less than either one of them alone (since all correlations are less than one, and some of these might be significantly less than one).
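Loosely formalized, and assuming lead relates to incarceration only through the political channel:

$$\rho_{\text{lead, incarceration}} \approx \rho_{\text{lead, politics}} \times \rho_{\text{politics, incarceration}},$$

and since each factor is at most one in magnitude, the product is weaker than either link alone. If both correlations were 0.5, say, the chained correlation would be only 0.25.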

The same political story applies to abortion as well. Once again, like with lead, more-liberal states will have responded more quickly, once again omitting the variable will bias the estimated effect of incarceration towards zero, and once again the size of the bias is mitigated by the chained nature of the correlation (abortion with politics, politics with incarceration rate).

There are two problems, though, with how the report looks at lead and abortion. For lead, the authors say they could not get lead data for the period 1980 to 2012, and they argue that omitting lead isn’t a problem since changes in lead levels had pretty much leveled out by the 1990s:

 Further, lead’s effect on the crime drop likely waned in the 2000s. While reduced lead levels in gasoline may continue to depress crime rates, it likely has a minimal role in this decade. The prevalence of lead in gasoline has been at consistently lower levels since the early 1990s. Thus, individuals who were around age 22 in the 2000s were exposed to consistently low rates of lead similar to previous cohorts. Thus, because there was not much change in the prevalence of lead in gasoline, it likely had little effect on propensity to commit crime.

First, they don’t need lead data through 2012; they need lead data from before 1980. Lead operates with a lag of about 15 to 20 years, so to study the impact of lead on crime between 1980 and 2012, you want to look at lead exposures from around 1960 to 1992. Thus stable post-1990 lead levels won’t really matter for a few more years. As the following figure, lifted from here, demonstrates, lead exposure for those aging into violent crime in 2000 was still substantially higher than for those aging into violent crime in 2011. (Although if we look at a 15-year lag, the flattening of lead exposure should start to play more of a role, especially for property crime, but only at the very end of the period.)

[Figure: lead exposure for cohorts aging into violent crime in 2000 versus 2011]

The authors make a similar, and seemingly also mistaken, claim about abortion. They assert:

Even if the abortion theory is valid, it is unlikely that an increase in abortions had much effect on a crime drop in the 2000s. The first cohort that would have been theoretically affected by abortion, 10 years after the 1990s, would be well beyond the most common crime committing ages in the 2000s. Based on available data, the frequency of abortions appears to currently be fairly constant. Since the variable does not appear to be shifting, a change in crime would not be expected. Although it may have had some small residual effect, there would likely be no effect on the 2000s drop attributed to legalized abortion.

But the figure below, from a Guttmacher Institute report, again indicates pretty significant shifts in abortion rates over much of the sample period. It is true that abortion numbers flatten through much of the 1980s, but they then start to decline steadily in the 1990s. Between 1990 and 1997, the number of annual abortions falls by almost 20%. And those born in 1997 are 18 today: well into the property-crime stage and entering the violent-crime phase.****

[Figure: annual U.S. abortions over time, from a Guttmacher Institute report]

Of course, to understand the impact of the decline in abortion on crime, we also have to ask who is driving the decline. A core aspect of Donohue and Levitt’s causal story was that the spike in abortions following Roe was due to a disproportionate increase in abortions among more socially-marginalized women, women whose children ran a greater risk of offending. If the decrease is due to a different cohort—perhaps wealthier, better-insured women with more access to alternatives—then the response need not be symmetric (i.e., the fact that the increase reduced crime does not mean the decrease will increase crime, if the increase and decrease involve two different populations).

I’ll stop here. The tl;dr version: omitting lead and abortion is likely less problematic than omitting crack. There may be some bias, and it’ll run in the same direction as the crack bias, namely towards understating the crime-fighting effect of incarceration, but the size of the bias may be fairly attenuated. My next posts on this will turn to CompStat and to variables the authors did not include at all.


* Obviously, if you think that either of these factors didn’t have much of an impact on crime—and the last thing on Earth I want to do is get dragged into a debate about the abortion/crime link—then the risk of OVB goes away on that front. So my point here is that, even assuming these factors have a large effect, we still don’t necessarily need to be concerned with OVB.

** Tellingly, the only states to legalize abortion prior to Roe v. Wade were Alaska, California, Hawaii, New York, and Washington State, four of which are now solidly blue and have often been on the more-liberal end of the spectrum.

*** The bias is towards zero because lead is positively correlated with crime (more lead, more crime), and it is positively correlated with incarceration (according to my politics theory, high-lead states will be dispositionally more punitive). Since the bias is positive and the true effect of incarceration is negative, the estimated effect will be less negative than the true effect.

**** It is worth noting that lead exposure is flat starting in the 1990s, while abortion rates are flat until the 1990s. And both should operate with similar, though not identical, lags. Thus the impact of omission will vary across time: omitting lead will become less and less important, but omitting abortion should become more and more important.

Posted by John Pfaff on February 20, 2015 at 10:22 AM in Criminal Law | Permalink | Comments (0)

Crime, Inflation, and Consumer Confidence: Unbiased Omitted Variables

As I mentioned in my previous post, the recent Brennan Center report on the effect of incarceration on crime identified fourteen possible factors that could explain crime trends, but included only eight in their regressions. So I wanted to think a bit about how omitted variable bias might throw off their findings. Last post I focused on just one, the failure to control for trends in crack use, and suggested that its exclusion likely leads the report to understate the crime-reducing impact of incarceration.

In my next few posts (spreading these out over several as a concession to “wonky” + “long” = “unreadable”), I want to consider the remaining five variables that didn’t make the cut. I feel like four of them don’t really raise any concerns, but one—the adoption of CompStat—does. 

First, the four that don’t matter so much. These are trends in inflation, consumer confidence, lead exposure, and abortion. In this post I’ll consider the first two, and I’ll look at lead and abortion in the next.

For inflation and consumer confidence, my guess is that both their direct effects on crime and their correlations with incarceration rates are weak. According to the Brennan report, the evidence linking inflation to crime comes only from national-level studies (since inflation is not gathered at the state or local level), which link it primarily to property crime. It is easy to see why one might think there is a connection between inflation and property crime, as the figure below suggests.

[Figure: national inflation and property crime trends over time]
But this is the trouble with national-level data. While inflation and property crime track each other closely, especially during the 1960s and 1970s, the correlation is likely spurious. A lot was changing during that time, a period of great social and economic upheaval, and surely those huge forces were driving both inflation and crime. Especially with national-level data, which gives you just one time series to work with, it is easy to get overlapping patterns that are random or spurious. (See this for more spurious-correlation awesomeness.)
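
To see how easily single national time series mislead, here is a toy simulation of mine (no real data involved): generate pairs of completely independent random walks and check how often they look strongly correlated.

```python
# Two *independent* random walks routinely look strongly correlated,
# which is the core danger of single national time series. Toy numbers.
import numpy as np

rng = np.random.default_rng(42)
T, trials = 50, 2_000  # fifty "years," two thousand repetitions

corrs = np.empty(trials)
for i in range(trials):
    fake_inflation = np.cumsum(rng.standard_normal(T))
    fake_crime = np.cumsum(rng.standard_normal(T))
    corrs[i] = np.corrcoef(fake_inflation, fake_crime)[0, 1]

print(f"median |corr|: {np.median(np.abs(corrs)):.2f}")
print(f"share with |corr| > 0.5: {(np.abs(corrs) > 0.5).mean():.0%}")
```

The printed share is typically large. With just one national series apiece, an impressive-looking overlap between inflation and property crime is weak evidence of any causal link.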

As for consumer confidence, it too is measured only at the national level (from a survey of only 500 households), and thus faces a greater risk of spurious correlation. Furthermore, the paper the report cites for this connection uses only a handful of explanatory variables and thus likely suffers from OVB itself. It’s almost certain that its estimate of consumer confidence is picking up a lot of other stuff.

But there is an even deeper reason to be wary of linking consumer confidence overall to crime rates. As David Weisburd and others have shown, crime is intensely geographically concentrated, not just within a state, not just within a city, not just within neighborhoods, but within blocks of those neighborhoods: New York City is more violent than Westchester, Brooklyn is more violent than Staten Island, East New York (a high-crime Brooklyn neighborhood) is more violent than Park Slope (Brooklyn’s hatchery), and there are persistently “good blocks” and “bad blocks” in East New York.

Given this concentration, it is unlikely that national-level surveys of just a few households are really going to capture the nature of confidence where crime is most densely located. Perhaps not rigorous evidence, but I do remember an episode of the Chris Rock Show from 2000 in which Rock goes to the South Bronx to ask people there whether they are feeling the benefits of the dot-com economic boom; the answers are predictable (and the clip sadly not on YouTube, as far as I can tell). I would expect the true effect of confidence on crime to thus be slight.

Or, put more carefully, the consumers whose confidence we measure are systematically not the consumers who are either committing or experiencing crime. Not only would these consumers be unlikely to show up in a 500-person survey just by chance alone, but the very nature of their more-difficult lives suggests that they will be systematically under-sampled.

Finally, even if you think there is a strong relationship between either of these and crime, I’m hard pressed to see much of a connection between them and incarceration rates. No previous study has ever thought to look at them (at least as of the time I wrote this review), and it strikes me that any effect they do appear to have is more likely due to the underlying economic shifts driving inflation and confidence (like trends in personal income, overall state economic output, unemployment rates, and maybe inequality), all of which are actually easier to measure at the state level anyway.

So dropping inflation and consumer confidence shouldn’t bias the report’s estimate of incarceration at all. Which is good.

Posted by John Pfaff on February 20, 2015 at 10:12 AM in Criminal Law | Permalink | Comments (0)

Holmes and Brennan

My new article, Holmes and Brennan, is now on SSRN. This is an article-length joint book review of two terrific legal biographies--Thomas Healy's The Great Dissent and Lee Levine and Stephen Wermiel's The Progeny. I use the books to explore the connections between Abrams and Sullivan as First Amendment landmarks and between the justices who authored them, who are widely regarded as two leaders in the creation of a speech-protective First Amendment vision.

The abstract is after the jump.

This article-length book review jointly examines two legal biographies of two landmark First Amendment decisions and the justices who produced them. In The Great Dissent (Henry Holt and Co. 2013), Thomas Healy explores Oliver Wendell Holmes’s dissent in Abrams v. United States (1919), which arguably laid the cornerstone for modern American free speech jurisprudence. In The Progeny (ABA 2014), Stephen Wermiel and Lee Levine explore William J. Brennan’s majority opinion in New York Times v. Sullivan (1964) and the development and evolution of its progeny over Brennan’s remaining twenty-five years on the Court. The review then explores three ideas: 1) the connections and intersections between these watershed opinions and their revered authors, including how New York Times and its progeny brought to fruition the First Amendment seeds that Holmes planted in Abrams; 2) three recent Supreme Court decisions that show how deeply both cases are engrained into the First Amendment fabric; and 3) how Brennan took the speech-protective lead in many other areas of First Amendment jurisprudence.

Posted by Howard Wasserman on February 20, 2015 at 09:31 AM in Article Spotlight, First Amendment, Howard Wasserman | Permalink | Comments (0)

Thursday, February 19, 2015

A Preview of Henderson v. United States

Over at SCOTUSBlog, I have a preview of Henderson v. United States. Here's the opening:

Next Tuesday, the Court will hear argument in Henderson v. United States, a complex case that offers a blend of criminal law, property, and remedies, with soft accents of constitutionalism. The basic question is this: when an arrested individual surrenders his firearms to the government, and his subsequent felony conviction renders him legally ineligible to possess those weapons, what happens to the guns?

Posted by Richard M. Re on February 19, 2015 at 10:21 AM | Permalink | Comments (2)

Crime, Incarceration, and Crack

In my first post on the new Brennan Center report on prison’s impact on crime, I examined its problematic treatment of endogeneity bias. Today I want to look at how it addresses another tricky empirical morass, namely omitted variable bias.*

To the report’s credit, the authors think through a long list of possible causal factors. In the end, they come up with fourteen:

1. Increased incarceration

2. Increased policing

3. Death penalty

4. Concealed-carry laws

5. Unemployment

6. Growth in income

7. Inflation

8. Consumer confidence

9. Decreased alcohol consumption

10. Aging population

11. Decreased crack use

12. Legalized abortion

13. Decreased lead in gasoline

14. Introduction of CompStat

That’s a pretty long list. There are other factors that should be included, and I’ll look at that OVB problem in a future post. Here, I just want to consider the OVB problems with the variables they listed.

Because while they assembled this long list, only eight of these made it into their state-level analyses. Their major regressions dropped inflation, consumer confidence, decreased crack use, abortion, decreased lead, and CompStat.

For all but CompStat, the rationale was lack of data. Inflation and consumer confidence data are available only at the regional (inflation) or national (confidence) level. Crack data isn’t available before 1990, and the authors claim that there is no state-by-state data for any years (although this is wrong, as we’ll see). There is no publicly available state-level data on lead levels; the famous study by Jessica Reyes apparently relied on data she collected herself, and the authors obliquely state that they “could not recover this data from her.”** The abortion data is available at the state level, but it is missing (because never gathered) for fifteen of the years between 1983 and 2011. CompStat data is dropped from the state regressions but included in the city-level ones on the understandable (but debatable) grounds that policing is a city matter, not a state one.

The list of dropped variables is initially fairly concerning. Three of the six are considered by many to be major explanatory variables for the crime drop: see Steve Levitt here for crack and abortion, and this article (same link as above) for lead. 

In all three cases, though, the report’s authors argue that whatever important effect these factors had on crime in the 1980s and 1990s, all three had started to influence crime rates much less by the 2000s and 2010s. If true, that addresses the OVB problem, since as noted in my earlier post, the omission of a variable that does not influence crime can’t bias the estimate of incarceration’s effect on crime.

Moreover, even if any or all of these factors have a strong impact on crime to this day, their omission won’t bias the estimate of incarceration unless they are also correlated with incarceration. Is that true in these cases?

For the rest of this post, I’ll just focus on crack. I’ll come back to the other factors in future posts. (To foreshadow a bit, my feeling right now is that the only other variable besides crack whose omission could be a problem is CompStat, especially for the post-2000 data, but maybe even for the whole time period.)

For crack, let’s start with the correlation issue. It’s plausible that city- or state-level exposure to crack could shape penal outcomes outside of crack’s effect on crime rates: crack-related violence could have spurred police to take more extreme action because of the fear and panic it created. Non-crack laws (gun laws, for example) could have toughened in response as well. So increased crack use could lead to increased incarceration, even independently of its effect on crime.***

So as long as crack is increasing crime, omitting crack will bias the estimate of the effect of incarceration, and it will bias it towards zero (i.e., the regression will understate the true effect of incarceration).****

But the authors respond, somewhat correctly, that crack-related offending had declined enough by the 2000s that its effect on crime was likely minimal from that point on. And the authors of the one major study that does look at the crack-crime relationship (Roland Fryer, Paul Heaton, Steven Levitt, and Kevin Murphy) do argue that crack-related violence and other crack-related pathologies had declined significantly by the time their study ended, in 2000.

At the same time, the Fryer-Heaton-Levitt-Murphy paper argues that crack consumption remained high in 2000, at about 65% to 70% of its peak levels, even as many of the social ills (like exceptionally high murder rates for young black men) dissipated. As long as higher crack use leads to higher incarceration rates outside of its effect on offending—perhaps high-use states continue to adopt or maintain tougher sentencing laws, or deploy more police per unit of crime, or are more urbanized (and urbanization shapes incarceration rates), etc., etc.—and as long as crack use continues to contribute to offending (perhaps less through the violent drug-market wars of the past, and now more through lower-level offenses committed by addicts, as suggested by recent work by Shawn Bushway and others arguing that the greying of US prisons comes from an older cohort of heavy drug users who continue to offend much later in life than expected), then the bias will persist, although likely less strongly than in the 1980s and 1990s.

But there is an additional wrinkle to omitting crack use. Crack actually belongs on both sides of the equation. Crack can lead to more violent and property crime, but crack use and distribution are crimes themselves, though ones not counted in this report. The report just looks at the index offenses gathered by the Uniform Crime Reports (murder, aggravated assault, forcible rape, arson, robbery, burglary, larceny-theft, and motor vehicle theft), thus leaving out all drug offenses and less-serious violent, property, and public-order offenses—for all of these, the FBI just gathers arrest data, not offending data, and variations in arrest data (especially for drug offenses) need not closely track variations in underlying offending.

The problem here is clear. High-crack states will have higher incarceration rates due in part to higher crime rates (i.e., more crack offenses), but those higher crime rates aren’t captured in the crime variable. This could further magnify the bias discussed above. Assume we have two states with identical violent and property crime rates, but one has a bigger crack problem than the other. The high-crack state will have a higher prison population but the same apparent crime rate, making incarceration look less effective.
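
To put invented numbers on that thought experiment (a sketch of mine, not anything from the report):

```python
# Stylized two-state comparison (all numbers invented). Both states
# show identical UCR index-crime rates, but the high-crack state
# imprisons more people, partly for crack offenses that the index
# crime variable never counts.
index_crime = 4_000  # index offenses per 100k residents, both states

states = {
    "low-crack":  {"prisoners_per_100k": 400, "crack_offenses_per_100k": 50},
    "high-crack": {"prisoners_per_100k": 550, "crack_offenses_per_100k": 400},
}

for name, s in states.items():
    measured = index_crime                               # what the regression sees
    actual = index_crime + s["crack_offenses_per_100k"]  # what is really happening
    print(f"{name:10s}: prisoners={s['prisoners_per_100k']}, "
          f"measured crime={measured}, actual crime={actual}")
```

To a regression fed only the measured numbers, the high-crack state’s extra 150 prisoners per 100,000 appear to have bought no crime reduction at all, even though they partly correspond to offenses the crime variable never sees. That pushes the estimated effect of incarceration towards zero.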

And while only 17% of state prisoners are in prison on drug charges, a large share of those are serving time for crack or cocaine charges. So the numbers being dropped from the crime variable but included in the incarceration term are not trivial.

Finally (at last), the authors actually aren’t entirely right when they say that there is no state-level crack-use data. They are right about the gaps in the official data. As for the Fryer-Heaton-Levitt-Murphy data, the authors state that they “could not secure the data,” even though they are publicly available right here (more here—but don’t ask me why the files are pdfs).

The index, based on a weighting of crime rates, media accounts, and other factors, is certainly not above reproach, but then neither are any of our problematic official accounts of drug use, which is an understandably hard thing to measure. As long as the index is sufficiently correlated with actual crack use and sales, an imperfect proxy is generally better than an omitted variable.

Of course, 1980 to 2000 doesn’t match the entire period the authors wish to consider, but there is nothing stopping them from looking at that subperiod, seeing if excluding crack alters the results in any meaningful way, and then using that information to gauge the costs of omitting it more generally. After all, if there is no apparent bias from omission during the 1980-2000 period, then there almost certainly isn’t one during the 2000-2012 period; and if there is one, it is likely the upper bound on whatever bias persists into the 2000s. Simply ignoring the issue because the time periods don’t align is not a convincing approach.
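
The check itself is easy to script. Here is a schematic sketch with fabricated data (hypothetical numbers throughout; the real version would use the actual state-year panel and the Fryer et al. index):

```python
# Schematic version of the check, with fabricated data standing in for
# the real 1980-2000 state-year panel: estimate the incarceration
# coefficient with and without a crack index and compare.
import numpy as np

rng = np.random.default_rng(3)
n = 50 * 21  # 50 states x 21 years

crack = rng.standard_normal(n)
incarc = 0.5 * crack + rng.standard_normal(n)  # crack raises incarceration
crime = -0.3 * incarc + 0.6 * crack + rng.standard_normal(n)

def ols(y, *regressors):
    """OLS coefficients, intercept first."""
    X = np.column_stack((np.ones(n),) + regressors)
    return np.linalg.lstsq(X, y, rcond=None)[0]

print(f"incarceration coef, crack included: {ols(crime, incarc, crack)[1]:+.3f}")  # ~ -0.30
print(f"incarceration coef, crack omitted:  {ols(crime, incarc)[1]:+.3f}")         # closer to zero
```

If the two coefficients barely differ on the subperiod where the crack index exists, omission is probably harmless; if they diverge, that gap roughly bounds the problem going forward.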

So, does the omission of crack bias their results? It almost certainly causes them to understate the effect of incarceration in the 1980s and 1990s. But we live today, not then. What does it mean for today? The big questions are (1) how much does higher crack use lead to higher incarceration and crime rates, and (2) how important is the omission of crack offenses on the crime side of the model? Both of these are tough questions, but both also show the need to be careful when interpreting the report’s findings. At the very least, we should continue to be concerned that they are underestimating the effect of incarceration. (But again, we should also be careful not to run too far with that and argue that this means current levels are efficient, which they almost certainly are not.)

 

* For those unfamiliar with OVB and how it skews empirical results, I wrote up a brief primer here.

** I really wonder what this means. Was the data corrupted? In some sort of format that made it hard to share (not all final datasets are neatly and cleanly assembled)? Or did she refuse to share the data?

*** So does a decline in crack lead to a decline in toughness? Maybe not—these sorts of shocks may be asymmetric if it is easier to get tough when things are bad than to become more lenient when times are safer, which raises more concerns about how to properly model them statistically.

**** Recall that the true effect is likely negative, the correlation between crack and incarceration is positive, and the effect of crack on crime is positive. So the bias factor is positive (a positive times a positive), and it’ll make the estimated effect less negative.

 

Posted by John Pfaff on February 19, 2015 at 09:40 AM | Permalink | Comments (0)

Omitted Variable Bias: A Quick Primer

The next potentially serious issue with the Brennan Center report that I want to consider is one that arises in pretty much every empirical social science paper, namely the ever-present threat of omitted variable bias. I actually want to spend a few posts on this issue, so I thought it could be helpful to start with a brief, nontechnical overview of why and when this is a problem, for the more non-statistical readers of this blog. That way I can refer back to this in future posts, rather than “see the middle of a longer, more substantive post.” And those already familiar with OVB can just skip this one.

Here’s a simple example to demonstrate how—and when, and to what extent—OVB throws off a model’s results. Let’s say we are trying to understand what causes an individual to engage in crime, and we think those with more education are less likely to commit crime. So we include education as an explanatory variable. However, due to a lack of data, we can’t include any information on whether someone is using drugs. Does this omitted variable matter, and to what extent?

It’s easy to show how it matters. I mean, how much clearer could this be?

[Image: the formal omitted variable bias formula]

I kid. I mean, that really is the magnitude of OVB (picture stolen from here), but it’s not exactly intuitive.

The concern with OVB is this: people using drugs are less likely to attend school, so they’ll generally have a lower level of education. And they are more likely to commit crime. So drugs are correlated with education, and drugs are correlated with criminal offending.

So when I run a regression of education on crime but omit drugs, what does the result for education that the computer spits back at me capture? Well, it picks up the real effect of education on crime, but it also picks up part of the effect of drugs: those on drugs have less education, so part of the reason that those with lower education appear to commit more crimes is actually their generally higher levels of drug use.

In other words, the pool of those classified as “low education” contains both high and low drug users, as does the “high education” pool, although a greater fraction of the low-education pool uses drugs at a high level. And it is likely that, within the pool of lower-education people, those with higher drug use offend more. If we had data on drug use, the model could separate these two effects out; without it, it just returns some sort of average effect of education and drugs.

We can actually be much more precise about this. There are three components to thinking about OVB (this really is the equation above now, but still: ignore it). There’s the true effect of education on crime, there is the correlation between education and drugs, and there is the true effect of drugs on crime. The coefficient that the regression returns is basically:

the true effect of education plus (the correlation between education and drugs times the true effect of drugs).  
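
For those who want the symbols after all, here is a standard textbook rendering (nothing specific to the report), with education as x and drug use as the omitted z:

```latex
% The standard omitted variable bias formula. True model:
%   y = \beta_x x + \beta_z z + \varepsilon   (x = education, z = drug use)
% When z is omitted, the estimated coefficient on x converges to
\hat{\beta}_x \longrightarrow \beta_x + \beta_z \, \delta ,
\qquad \delta = \frac{\operatorname{Cov}(x, z)}{\operatorname{Var}(x)}
```

With standardized variables, δ is simply the correlation between education and drug use, which is the version the arithmetic below uses.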

Thus if a 10% increase in education reduces the probability of offending by 5%, a 10% increase in drug use increases the risk of offending by 7%, and the correlation between drug use and education is –0.3 (since education and drug use are negatively correlated), then the regression will tell you that a 10% increase in education changes offending by –5% + (–0.3 × 7%) = –7.1%. In other words, it will overstate the effect of education. (For those of you expecting no math, I apologize: this is basically the last of it.)

This makes sense: increased education is associated with less offending as well as less drug use, and less drug use is associated with less offending. But by omitting drug use from the model, the education term picks up some of both effects, making education look more effective than it should.
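
If you’d rather see it than take my word for it, here is a quick simulation of my own that reproduces the –7.1% figure, with everything in standardized units so that a coefficient of –0.5 plays the role of “10% more education, 5% less offending”:

```python
# Reproducing the worked example by simulation (data entirely made up):
# true education effect -0.5, true drug effect +0.7, corr(educ, drugs)
# = -0.3. Regressing crime on education alone should give about -0.71.
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000
r = -0.3

educ = rng.standard_normal(n)
drugs = r * educ + np.sqrt(1 - r**2) * rng.standard_normal(n)
crime = -0.5 * educ + 0.7 * drugs + rng.standard_normal(n)

slope = np.polyfit(educ, crime, 1)[0]  # OLS slope with drugs omitted
print(f"estimate omitting drugs: {slope:.3f}")       # ~ -0.710
print(f"OVB formula prediction:  {-0.5 + r * 0.7:.3f}")
```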

So, two big points:

First: we can see when OVB matters. If the omitted variable is uncorrelated with what we are looking at, then it is irrelevant. Perhaps area temperature influences crime rates—it is easier to commit crimes when it is warm and everyone is outside—but maybe (maybe!) climate is uncorrelated with educational outcomes. Then omitting climate has no effect on our estimate of education, since changes in education tell us nothing about changes in climate.

Similarly, if the omitted variable has no independent effect on crime, we can ignore it, no matter how correlated it is with education.

Or, put more generally, the smaller the correlation between the included and omitted variable, and the smaller the direct effect of the omitted variable on whatever you are looking at, the less serious the bias is.

Second: We can (in simple cases) predict the direction of the bias, which can actually be quite useful.

Recall that what the regression reports is true effect + (correlation times omitted effect). So in our education case, the true effect is negative (education reduces crime), the correlation is negative (drug use and education are negatively correlated), and the omitted effect is positive (more drugs leads to more offending). So the “bias factor” will be negative (a negative times a positive), and a negative plus a negative is even more negative: the regression will overstate the true effect. 

That’s useful to know. In our example above, then, we know that –7.1% overstates the effect: the true value is something smaller in magnitude (i.e., closer to zero). We don’t know how much closer, but we know it can’t be any further from zero.

Of course, if the omitted variable were positively correlated with both education and crime—something that causes people to both offend more but also achieve more in school, perhaps some sort of aggressive ambition that is hard to detect, say—then the regression would understate the true effect of education (a negative true effect plus a positive bias would push the number too close to zero). And so on and so on for positive and negative correlations and positive and negative omitted effects.

Now, in practice, there is a limit to this. Often multiple variables will be omitted, and the estimate for education would capture all of these: the more-negative bias of drug use, the less-negative bias of ambition, etc., etc. And in this case, it would be almost impossible to know how all the various biases net out. But where we think only one or two key variables are missing, then we can at least know if our estimate is a ceiling or a floor.

So that’s a crash primer on OVB. The next post will start to look at how it plays out in the Brennan report.

Posted by John Pfaff on February 19, 2015 at 09:36 AM in Criminal Law | Permalink | Comments (3)

Wednesday, February 18, 2015

A Few Words on Why E-Cigarettes Are Still Being Marketed to Children, Even Though They Are Just as Addictive as Other Tobacco Products

It’s likely that everyone reading this has heard of e-cigarettes (and vaping) and has at least a vague impression of claims made that they are less dangerous than regular ones.  It’s possible that impression comes from the fact that they are advertised heavily in a way that cigarettes are not—at sporting events, through free coupons in the mail, on the radio.  They are also available in a multitude of flavors.  That wouldn’t be possible unless the FDA had decided that they posed less of a threat to children’s health than other tobacco products, would it?

But in fact the FDA has made no such determination.  Quite the opposite.  Under its authority to protect children from tobacco, the only authority it has to regulate cigarettes at all, the FDA has already proposed a “deeming rule” to put e-cigarettes in the same category as other tobacco products.  That is, perfectly legal for adults to purchase and enjoy, but not allowed to be marketed in ways attractive to children.  And it’s children who are being targeted here.  Kids who have heard anti-smoking warnings all their lives, but are led to believe that e-cigarettes are different.  A recent poll out of Utah found that “nearly one-third of teens who used e-cigarettes in the past 30 days have never tried a cigarette.”

So far, the rules are on hold because Congress is concerned that this form of regulation is the first step towards “banning” them, even though that has yet to come anywhere close to happening with regular cigarettes.  At the close of its call for comments last July, the FDA had received 70,000 of them.

I haven't read all the comments quite yet, but it's a safe bet that none of them suggest that it's safe for kids to become addicted to nicotine.  Or that e-cigarettes are any less addictive.  Because they are not.  The nicotine in e-cigarettes is the same nicotine as in any other tobacco product.  Rather, the claims are about the relative dangers of e-cigarettes as opposed to tobacco ones for people who already smoke.  But this post is about people who don't already smoke and aren't yet addicted.  And those people are very young.  Almost everyone who becomes addicted started well before their 18th birthday.  The "peak years" for starting to smoke are between sixth and seventh grade.  And the next biggest group is young adults (our students) to whom e-cigarettes are being marketed heavily.  Look around your town for the vaping parlors, billboards, and advertisements.

I blogged about this last spring as a gateway to teaching administrative law and will have an article out in the Saint Louis University Journal of Health Law & Policy (hi Rachel) in a few months, but the regulatory struggle going on now to prevent the FDA from treating e-cigarettes as it does all other nicotine delivery devices deserves attention as a paradigm of how closely tied our public health system is to politics and how difficult that makes it to protect children.

Posted by Jennifer Bard on February 18, 2015 at 07:51 PM | Permalink | Comments (0)

Collins on Terrorist's Veto

Great post from Ron Collins at CoOp on the need for democratic societies to stand firm in the face of the terrorist's veto, which he calls the "savage cousin of the heckler's veto."

Posted by Howard Wasserman on February 18, 2015 at 09:31 AM in Constitutional thoughts, First Amendment, Howard Wasserman | Permalink | Comments (1)

Tuesday, February 17, 2015

And more crazy in Alabama

With briefing moving forward in the state mandamus action, the plaintiffs in Strawser have filed an Emergency Motion to Enforce the federal injunction, specifically by ordering Alabama Attorney General Luther Strange to assume control over the mandamus action and dismiss it; the government has responded. (H/T: Reader Edward Still, a civil rights attorney in Alabama). The gist of the plaintiffs' argument is that the Attorney General controls all litigation brought by or on behalf of the state, including through private relators; in order to comply with the injunction, which prohibits him from enforcing the state ban on same-sex marriage, he must end the state litigation.

The state's response is interesting for what it acknowledges about the mandamus action, confirming that it is largely symbolic and annoying.

First, the state acknowledges that the mandamus, if issued, cannot run against Probate Judge Don Davis of Alabama, who is a party in Strawser and is enjoined from denying licenses to same-sex couples. The state also acknowledges that, even if the mandamus issues, a couple denied a license could sue the denying probate judge in federal court and obtain an injunction, and that judge would be compelled to comply with that injunction. In other words, the state mandamus action does not set up any conflict with the federal court or federal court orders, which the state acknowledges would trump the mandamus, whether existing orders or future orders. Thus, the sole effect of the mandamus would be to prevent non-party probate judges from being persuaded by Judge Granade's order or from issuing licenses so as to avoid suit and an award of attorney's fees. The only way they could issue licenses is if sued and ordered by a federal court to do so, which in turn has the effect of forcing every couple to sue every probate judge in the state. This is annoying and time-consuming. But, again, it does not reflect state defiance so much as state legal obstinacy.

Second, as has frequently been the case here, the big question is one of Alabama law--how much control the attorney general has over privately initiated litigation on behalf of the State. The Attorney General can seize control over litigation initiated in the name of the state by local prosecutors and other executive officers; it is less clear whether he can do the same when suit is brought by private actors. The plaintiffs argue for a broad understanding of FRCP 65 as to the scope of injunctions.

Third, as predicted, the state tries to play the abstention card. Also as predicted, they screwed it up. The state tries to argue that the Anti-Injunction Act bars the federal court from enjoining this pending state proceeding, emphasizing the narrowness of the statute's exceptions. But one exception is when Congress expressly authorizes an injunction by statute, which it did in enacting § 1983. Strawser and all other actions challenging SSM bans are § 1983 actions, so the AIA imposes no limit on the injunction here. The state also tries to argue Rooker-Feldman, a doctrine which also has no application here, since the plaintiffs are not state-court losers or even parties to the state court action.

Posted by Howard Wasserman on February 17, 2015 at 05:22 PM in Civil Procedure, Constitutional thoughts, Howard Wasserman, Law and Politics | Permalink | Comments (8)

Hail to the Chief

[Photo: the Harvard Law Review Volume 108 editorial board]

Congrats to Penn Law for choosing Ted Ruger as the new dean of the school.  Ted was president of Volume 108 of the HLR (you may recognize the baton), and he had almost mythic status at the school.  In fact, that year's parody ("The Cocky Lawyer Picture Show," I believe) had a character named "Rugerman" -- essentially a mild-mannered student turned superhero.  The character captured Ted's humble nature as well as his otherworldly abilities.  Penn Law is fortunate to have not only Ted but also fellow vol. 108 editorial board member Cathie Struve, seated to Ted's left.  (And you may notice a certain senator also named Ted in the picture, seated down the row to the right.)  

Posted by Matt Bodie on February 17, 2015 at 03:50 PM | Permalink | Comments (4)

Crime, Incarceration, and Difficult Empirical Questions: Some Initial Thoughts on the Brennan Center Report

For the past few days, I’ve been struggling with what I think about the Brennan Center’s new report on the effect of incarceration on crime. What has me torn is this:

1. On the one hand, I think the report’s basic claim is likely more or less correct. The report’s central argument is that incarceration’s impact on crime exhibits diminishing returns. As we lock up more and more people in a time of falling crime, that seems like a reasonable claim.

2. On the other hand, the methods the paper uses are simply wrong, and their invalidity has been well documented for nearly two decades. Moreover, while the report’s basic claim is likely true, its estimates of the exact size of incarceration’s impact on crime are almost certainly too low. 

Now that second claim might initially seem like the clearly less-important one. So what if they say that prison contributed to 10% of crime’s decline when it should have been 15%? People only care about the general trend. In fact, policy can only really be based on the general trend—social science isn’t like putting a man on the moon. We operate by rough estimates, not fractions of an inch. 

Right? Well… no. 

First, the report argues that the effect in recent years could be zero. That’s not a quantitative error, that’s a media-friendly qualitative error. It’s a “nothing works” argument for the prison reform movement.

Second, given the statistical flaws, I can’t actually be sure that my intuition about diminishing returns is right. The whole reason statistics exists is that our intuitions are quite often wrong. If intuition and reality lined up on a regular basis, we wouldn’t need stats people.

And third, even if the authors caught a break this time and got vaguely valid results using invalid methods, future studies that use these techniques may not be so lucky. Calling out the bad methods in high-profile work may give those critiques the attention they need to prevent more-serious future failures.

So over the next few posts I want to dig into the statistical flaws with how this paper was written, and what they mean for its conclusions in particular, and for how we should approach difficult statistical issues more generally.

This poses, however, a unique challenge. The report’s claims are facially plausible, as are its estimates. It is not as if it said the earth was flat. It’s more like it said that vaccines cause autism: the correlation exists, there is a causal story that seems at least possible to lay readers, and the result aligns with many people’s (sincere) prior beliefs. And as the medical community has discovered, displacing such “empirical” beliefs is tough.

But I’ll wade in, nonetheless. So in this post I want to zero in on what strikes me so far as being the report’s cardinal sin: the failure to properly account for the feedback effects between prison and crime.

Estimating the relationship between incarceration and crime raises the specter of a fairly intractable statistical problem called “endogeneity” or “simultaneity.” For the basic regression model that the Brennan Center report uses to work, one assumption that has to hold is this: the explanatory variable (here, incarceration) has to affect the outcome variable (here, crime), but not vice versa. So while trends in incarceration can shape crime rates, the model fails if crime rates also shape prison populations.

But that assumption obviously doesn’t hold here: prison populations surely shape crime, but crime rates themselves influence how many people are in prison, both directly (more arrests, convictions, admissions) and indirectly (by, say, changing attitudes towards crime). Due to this problem, simple regression results will be biased.

Not just biased, though. Biased upwards. That is: towards zero, or towards a positive (criminogenic) effect.* So when the Brennan Center argues that prison has no effect anymore, that might very well be false: the uncorrected bias pushes results away from finding a crime-reducing effect. If anything, thanks to the bias, a zero estimate suggests that there is at least still some crime-reducing impact to incarceration. (Which is not to say that it is a cost-justifiable effect!)
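
Here is a bare-bones simulation of mine showing how badly this can go (coefficients invented): prison truly reduces crime, crime truly raises prison populations, and a naive regression on the resulting data recovers neither effect.

```python
# Simultaneity in miniature (invented coefficients): prison reduces
# crime (-0.4) while crime raises prison (+0.5). Naive OLS on the
# observed equilibrium data returns neither number.
import numpy as np

rng = np.random.default_rng(7)
n = 1_000_000
a, b = -0.4, 0.5  # true effects: prison -> crime, crime -> prison

e_crime = rng.standard_normal(n)   # shocks to crime
e_prison = rng.standard_normal(n)  # shocks to prison policy

# Solving crime = a*prison + e_crime and prison = b*crime + e_prison:
crime = (e_crime + a * e_prison) / (1 - a * b)
prison = (e_prison + b * e_crime) / (1 - a * b)

print(f"true effect of prison on crime: {a}")
print(f"naive OLS estimate: {np.polyfit(prison, crime, 1)[0]:+.3f}")  # ~ +0.08
```

With these made-up numbers, a genuine crime-reducing effect of –0.4 comes back as roughly +0.08: not merely attenuated but sign-flipped, which is exactly the direction of bias described above.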

Now, in the report’s defense, the authors do admit that the problem exists. But their response? “It’s really hard, the one solution people generally use, this thing called instrumental variables, is really tricky to use, so we’re just going to ignore it.” Lest you think I’m being harsh, here’s the relevant passage:

There are other ways to address simultaneity. One is through a controlled experiment. However, with something like incarceration, this is not feasible. Another is through natural experiments or instrumental variable techniques…. However, good instruments are difficult to construct, and even then the results can be highly dependent on the instrument chosen. For instance, Levitt’s 1996 paper uses prison overcrowding legislation as an instrument (it is plausibly correlated with prison populations and plausibly uncorrelated with crime) and finds a large downward effect of incarceration on crime. But Geert Dhondt’s 2012 study uses cocaine and marijuana mandatory minimum sentencing as an instrument and actually finds an upward effect of increased incarceration on crime. The authors recognize the potential issue of simultaneity but due to the complications invoked by instrumental variables did not apply that technique to their analysis. 

The authors are right that IVs are tricky: there’s plenty to criticize with Levitt’s, but there are also lots of reasons to suspect that Dhondt’s isn’t valid either.** But it is that last sentence that really troubles me: “since it is tough, we will simply ignore it.”
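
For readers who haven’t met instrumental variables, here is what the technique does mechanically, in a stripped-down two-stage least squares sketch on simulated data. The “instrument” z is a made-up stand-in for something like overcrowding litigation; nothing here comes from either paper.

```python
# Stripped-down 2SLS on simulated data (everything invented). The
# instrument z shifts prison populations but affects crime only
# through prison, so two-stage least squares recovers the true -0.4.
import numpy as np

rng = np.random.default_rng(7)
n = 1_000_000
a, b = -0.4, 0.5

z = rng.standard_normal(n)                   # instrument (e.g., litigation shock)
e_crime = rng.standard_normal(n)
e_prison = 0.8 * z + rng.standard_normal(n)  # z moves prison policy

crime = (e_crime + a * e_prison) / (1 - a * b)
prison = (e_prison + b * e_crime) / (1 - a * b)

# Stage 1: fitted prison values from the instrument.
prison_hat = np.polyval(np.polyfit(z, prison, 1), z)
# Stage 2: regress crime on the fitted values.
print(f"naive OLS: {np.polyfit(prison, crime, 1)[0]:+.3f}")      # biased toward zero
print(f"2SLS:      {np.polyfit(prison_hat, crime, 1)[0]:+.3f}")  # ~ -0.400
```

The entire game is the assumption in the comments: that z affects crime only through prison. That assumption is untestable, which is why instruments like Levitt’s and Dhondt’s can each be plausibly attacked.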

Now one reason the authors are willing to dismiss the problem is that in the paragraph above the one I quote they cite three papers for the proposition that endogeneity isn’t really a problem in crime-prison models. But they are wrong about that. One paper doesn’t discuss the issue at all. The other two improperly use a test (called the Granger test) to dismiss the problem.

Furthermore, at no point does the report ever cite this fantastic Vera Institute report that demonstrates just how important endogeneity is. The Vera report divides the literature into articles that control for endogeneity and those that don’t, and the results are striking (look at Table 1 on page 6): those that try to control for it consistently return much higher results than those that don’t. At the very least, this makes clear that the blithe assertion that we needn’t be concerned with endogeneity is wrong.

To be clear, I’m not saying that by refusing to use, say, Steve Levitt’s instrument the paper is invalid. Nor am I saying that Dhondt’s instrument can’t be valid, even though it produces results that don’t align with my prior assumptions. But what I am saying is that the literature on the importance of endogeneity to this particular question is extensive enough, and the biases introduced by endogeneity in general are well-known enough, that simply punting on the problem is just… unacceptable, particularly in a report that is going to get, and already is getting, so much attention.

This post is getting overly long as it is, so let me just wrap it up with the first big takeaway from all this: the most obvious point by now should be that any of the original results produced by this report should be viewed with great caution. Most likely the model is consistently understating the true impact of incarceration on crime. This isn’t the only problem with the estimates, and I’ll turn to more in the days ahead, but this is a big one.

At the same time, don’t throw out the baby with the bathwater. Prison may still have a bigger impact on crime than the report states while (1) exhibiting diminishing returns to scale and (2) no longer being cost-justifiable.

In later posts, I’ll think a bit more carefully about a deeper issue that extends beyond this paper, namely what we should do when there is no easy solution to this problem. What if we all (who is this “we”?) ultimately decide that no instrument exists for this problem? Some technical solutions may exist, but there is also the more philosophical question of how to make decisions when we know we can’t solve a problem. But more on that down the line.

 

* Theoretically, more prison leads to less crime, but more crime leads to more prison. What a regression returns, simplifying grossly, is the net correlation of these two effects. So if the real effect is, say, that a 10% increase in incarceration results in a 4% reduction in crime, a regression could return a result of a 2% decline, or maybe even a 4% increase. All because it is also picking up the effect by which a 10% increase in crime leads to some sort of increase in incarceration.

** The statement that “the results can be highly dependent on the instrument chosen” is also quite dubious. If there are multiple valid instruments, then we should expect that IV models using each type of instrument would return fairly similar results. That Levitt’s instrument increases the crime-reducing effect of prison and Dhondt’s the crime-increasing effect suggests that one instrument is simply better than the other (or that the models are otherwise differently designed—and again, in ways such that one is better and the other worse). But the authors here make it sound like random noise, rather than a matter of the very difficult question of assessing the relative merits of various IVs.

Posted by John Pfaff on February 17, 2015 at 11:28 AM in Criminal Law | Permalink | Comments (1)

Monday, February 16, 2015

Mardi Gras

Happy Mardi Gras everyone! In honor of the holiday, I thought I’d direct your attention to Chapter 34 of the New Orleans Code of Ordinances, which sets forth most of the rules and regulations governing Carnival in the Crescent City. If you want to know whether you can throw things from floats (generally yes, but not “marine life”—see the section on “prohibited throws”), whether you can throw things at floats (categorical no), whether you can “fasten[] two or more ladders together” while watching a parade (no), or whether you can bring your pet reptile to the festivities (not within 200 yards), you can find your answers here.

Posted by Michael Coenen on February 16, 2015 at 11:56 PM | Permalink | Comments (0)

Sunday, February 15, 2015

If possible, Alabama could get more confusing

Al Jolson said it best. Two anti-marriage-equality groups have filed a Petition for Writ of Mandamus in the Alabama Supreme Court's original jurisdiction, seeking an order preventing probate judges from issuing licenses on the strength of Judge Granade's decision and ordering them to wait until a "court of competent jurisdiction"--which petitioners define as only SCOTUS--decides the matter. The court ordered briefing on the petition, with two justices dissenting; Chief Justice Moore apparently took no part in the decision.

So how will this play out and what effect will it have?

This sort of mandamus action has been attempted before, in a slightly different context. In Oklahoma and South Carolina, state attorneys general sought to mandamus individual county clerks who intended to issue licenses in light of a federal appeals court decision invalidating SSM bans in other states. These clerks were under no federal injunction and there had been no decision addressing bans in their own states. But now-binding Fourteenth Amendment precedent made legally certain what would happen in any federal action challenging those bans, so the clerks were simply avoiding that lawsuit and injunction. The mandamus was intended to make the clerks wait and not to issue licenses unless and until compelled to do so.

In Alabama, probate judges other than Don Davis of Mobile who are issuing marriage licenses are doing so on the persuasive force of the district decision, but without an injunction. They, too, are trying to avoid a lawsuit, one whose outcome is both more and less obvious than in the other two cases. Here, there is only persuasive, and not binding, federal precedent, although it involves a declaration as to this state's marriage ban.

The mandamus action raises a whole series of state-law questions. One is whether these organizations have standing, as their only injury seems to be that probate judges are doing something the petitioners don't like. It also would require the court to conclude that a probate judge is forbidden (not simply not obligated, forbidden) from adhering to district court precedent. It is not clear whether the petition also will require the court to decide the constitutionality of its marriage ban, which would be the only federal issue in play; otherwise, any decision is insulated from SCOTUS review.

The mandamus petitioners rely on one fundamental misunderstanding--that the only court of competent jurisdiction to declare the state's marriage-equality ban unconstitutional is SCOTUS. This erroneously minimizes the effect of lower-court precedent. While only SCOTUS precedent binds state courts, here probate judges are performing administrative functions; they can be sued in federal court, where circuit court precedent will be binding and district court precedent is at least persuasive. Again, I really believe the question of federal precedent in state court is beside the point. And in taking this step, petitioners misunderstand that point.

Finally, if the mandamus issues, the real effect will depend on how broad the order is. If it simply applies until a probate judge comes under a federal-court injunction, then its effect is more practical than legal. Formally, no probate judge has any direct legal obligation to issue a license until sued in federal court and enjoined;  the mandamus would simply provide a court order emphasizing that reality. It would force every couple seeking a license to sue every probate judge individually, rather than allowing couples to gain the benefit of persuasive authority. This is inconvenient and inefficient (although not costly, since plaintiffs should get attorney's fees), but not a significant change to the landscape of actual legal obligations. The mandamus also would open the door to the probate judges trying to raise Younger, Rooker-Feldman, Pullman, and Burford in the federal district court; this is what happened in both the Oklahoma and South Carolina cases, although both courts soundly and properly rejected those arguments.

On the other hand, if the mandamus bars probate judges from issuing any licenses until SCOTUS decides the issue of marriage equality, we have genuine problems. The inevitable federal injunction would set up the very direct conflict and confusion the petitioners purport to be trying to resolve. There actually would be directly conflicting orders--a state mandamus prohibiting every probate judge from issuing a license and a federal injunction commanding a named probate judge to do so.

Posted by Howard Wasserman on February 15, 2015 at 11:15 AM in Civil Procedure, Constitutional thoughts, Howard Wasserman, Law and Politics | Permalink | Comments (0)

Friday, February 13, 2015

Mitchell/Hamline

People have been wondering when law schools would close in the new reality. Here comes a sort-of closure: William Mitchell College of Law and Hamline University School of Law are merging, forming Mitchell/Hamline School of Law as a stand-alone not-for-profit with a "strong and long-lasting affiliation to Hamline University." The joint announcement from the associate deans at both schools is reprinted after the jump.

We write to share the news that our two law schools have announced plans to combine, to further our shared missions of providing a rigorous, practical, and problem-solving approach to legal education.

The combination will occur following approval by the American Bar Association.  Until then the two schools will continue to operate their current programs, while taking steps to ensure a smooth transition for students when ABA acquiescence is obtained.

Once combined, the law school will offer expanded benefits for its students, including three nationally-ranked programs: alternative dispute resolution, clinical education, and health law; an array of certificate and dual degree programs, and an alumni network of more than 18,000.

The combined school will be named Mitchell|Hamline School of Law and will be located primarily on William Mitchell’s existing campus in Saint Paul. Mitchell|Hamline School of Law will be an autonomous, non-profit institution governed by an independent board of trustees, with a strong and long-lasting affiliation to Hamline University.
 

Posted by Howard Wasserman on February 13, 2015 at 01:56 PM in Howard Wasserman, Teaching Law | Permalink | Comments (1)

Thursday, February 12, 2015

You say potato . . .

Does anyone know how the federal judge at the center of the Alabama craziness pronounces her name? I have lived in South Florida for too long, so my instinct is to pronounce it Grah-nah'-day. The non-Spanish version (which I have heard some reporters use) would be grah-nayd'.

If the latter, then recent events have earned her a place on the Mount Rushmore of Appropriate Judicial Names, alongside Learned Hand, John Minor Wisdom, and William Wayne Justice.

Posted by Howard Wasserman on February 12, 2015 at 05:37 PM in Howard Wasserman | Permalink | Comments (6)

Lower federal courts and state administrative actions

Thanks to Amanda for her post about her article and the effect of lower-federal-court precedent on state courts. I look forward to reading it and using it in a larger article on the procedural insanity we are seeing between Windsor and the decision this June.

But I wonder if this issue is just a distraction here, partly triggered by Moore's memo and order, which focused heavily on it. Probate judges are not acting in a judicial capacity or deciding cases in issuing (or declining to issue) marriage licenses. They are acting in an executive or administrative capacity, such that there is no such thing as "binding" or "persuasive" precedent. Absent a federal judgment against him, precedent does not act directly on any executive or administrative actor; its force is in the fact that, if sued, the precedent will bind the court hearing the case and the executive will almost certainly be enjoined.

So the non-binding nature of Judge Granade's original decision is in play here. But not because it is not binding on state courts; rather, because it is not binding on other federal district courts. Thus, the possibility of a different district judge disagreeing with Judge Granade justifies a probate judge, acting in an administrative capacity and performing an administrative function, in not immediately following that decision.

Posted by Howard Wasserman on February 12, 2015 at 05:32 PM in Civil Procedure, Howard Wasserman, Law and Politics | Permalink | Comments (0)

Now we have a meaningful federal order

The New York Times reports that Judge Granade has enjoined Mobile County Probate Judge Don Davis from denying marriage licenses to same-sex couples. The injunction comes in Strawser v. Strange, an action by a male couple to obtain a license. In January, Judge Granade enjoined the attorney general from enforcing the ban on same-sex marriage, an injunction that, as we have seen, has no real effect on the issuance of marriage licenses. On Tuesday, the plaintiffs amended their complaint to add Judge Davis as a defendant.

So, since even the Times article linked above does not have it quite right, let's be clear on where we are now:

1) Judge Davis is legally obligated to issue a marriage license to Strawser and his future husband; if he fails to do so, he can (and probably will) be held in contempt.

2) Judge Davis probably is not obligated by the injunction to grant anyone else a license, since there are no other couples joined as plaintiffs, this was not brought as a class action, and Judge Davis does not exercise supervisory authority or control over other probate judges. But anyone in Mobile denied a license will be able to intervene or join as a plaintiff in Strawser, and Judge Granade will immediately extend the injunction to cover the new plaintiffs. So Judge Davis should pretty well understand that he should issue licenses to everyone who requests one.

3) No other probate judge in the Southern District of Alabama is obligated by the injunction to grant anyone a license. But they all should be on notice that, if they fail to do so, they will end up before Judge Granade (either because a new action goes to her or because the new plaintiff jumps into Strawser and adds the next probate judge as defendant) and she will enjoin them.

4) No probate judge in the Middle or Northern District is obligated by the injunction to do anything, nor are they bound by the precedent of her opinion. Formally, it will take a new lawsuit by a different couple and a new opinion and injunction by a judge in each district. But as I wrote earlier in the week, I believe that, once one probate judge in the state had been enjoined, everyone else would fall in line, even if not yet legally obligated to do so. So while Roy Moore may continue to shout at the rain, I would be very surprised if any other probate judge bothers denying anyone else a license; it just is not worth the effort, as I cannot see a federal judge in either district reaching a different conclusion about the constitutionality of same-sex marriage bans.

Update: Important addition: If a probate judge in situations ## 3-4 did decline to issue a license to anyone, they would not be acting in disregard or defiance of Judge Granade's order, which still does not bind them or compel them to do anything. And I feel pretty confident that Judge Davis would not be acting in defiance of the order in situation # 2.  In other words, today's order likely will have the practical effect of getting probate judges statewide to fall in line; it does not have that legal effect.

Posted by Howard Wasserman on February 12, 2015 at 05:15 PM in Civil Procedure, Constitutional thoughts, Howard Wasserman, Law and Politics | Permalink | Comments (8)

Amanda Frost on Chief Justice Moore and the "Inferior" Federal Courts

[The following guest post is by my friend and WCL colleague Amanda Frost:]

Alabama Chief Justice Roy Moore is making news again.  As reported on this blog by Howard Wasserman, he has advised Alabama probate judges to ignore an Alabama federal district court’s ruling that Alabama’s ban on same sex marriage is unconstitutional.  In a fascinating memo laying out his position, Moore argues that state courts are not obligated to follow lower federal courts’ decisions.  I’m very interested in this question, and I recently wrote an article examining the constitutional relationship between state courts and the lower federal courts.  (My article was cited by Alabama Supreme Court Justice Bolin, who concurred in that Court’s decision on Monday refusing to “clarify” the question for the probate judges.)

I begin my article by noting that state courts and lower federal courts often disagree over the meaning of federal constitutional law, creating intra-state splits that can linger for years before the Supreme Court grants certiorari to resolve them.  Those splits occur because most (though not all) state courts adopt the mainstream view that they need not follow lower federal court precedent.  But the constitutional arguments for that position are surprisingly shaky.  For example, Chief Justice Moore (like others) argues that state courts don’t have to follow lower federal court precedent because lower federal courts don’t review state court decisions.  But of course appellate review is not the primary reason that one court follows another’s precedent.  (If it were, then all federal courts should be free to ignore state supreme courts’ views on state law, since state courts never review federal court decisions.) 

True, as a constitutional matter Congress is under no obligation to establish lower federal courts, and thus the Framers accepted the possibility that state courts would decide all federal questions in the first instance.  But Congress has created a large network of “inferior” federal courts, so perhaps the presumption should now be that these courts’ decisions “preempt” conflicting state court decisions.  Moreover, the Framers assumed that Supreme Court review would always be available to reverse recalcitrant (or simply wrongheaded) state judges who failed to follow federal law.  That was true in the early days of our nation, but in an era in which the Supreme Court reviews no more than a handful of state court decisions each year, it makes sense to require state courts to avoid creating intra-state splits by following their regional court of appeals. 

Some might argue that requiring state courts to follow lower federal court precedent would impinge on state sovereignty.  But state sovereignty arguments in favor of state court independence seem particularly weak in our cooperative system in which state courts are required to hear federal issues whether they want to or not, and are also required to follow the precedent set by the U.S. Supreme Court.  Moreover, state executive branch officials are required to follow lower federal court decisions, so why shouldn’t state judges be required to do so too?  [Aside:  In my view, Chief Justice Moore errs by failing to acknowledge that state probate judges are acting as executive branch officials, and not judges, when they grant marriage licenses.]

In my article, I acknowledge that state courts are unlikely to start following lower federal court precedent just because the constitutional underpinnings for that position are weak.  But I argue that Congress, or even the Supreme Court, could establish a rule requiring state courts to follow their regional federal court of appeals.   This would preserve the benefits of “percolation,” by which an issue gets thoroughly vetted in the lower courts before Supreme Court review, but would also avoid the problem of intra-state conflicts.  If such a rule were in place, Chief Justice Moore would have one less leg to stand on.

Posted by Steve Vladeck on February 12, 2015 at 03:51 PM in Steve Vladeck | Permalink | Comments (3)

Thinking Further About Cognitive Effort: Some Additional Thoughts on the "Simms Postulate"

My previous post explored the connection between the “closeness” of a legal issue and the level of cognitive effort that goes into its resolution.  In particular, I introduced an idea called the “Simms Postulate.”  Named in honor of a dubious but thought-provoking assertion that Phil Simms once made about the NFL’s “indisputable video evidence” rule, the Simms Postulate posits a positive correlation between cognitive effort and the closeness of an issue (or “issue-closeness” for short), holding that the harder a decision-maker works to resolve an issue, the more plausible it becomes to characterize the issue as “close,” “disputable,” “on the borderline,” etc.  The goal of the post (football pun intended) was to suggest that the Simms Postulate might be and indeed has been used when judges conduct doctrinal inquiries that turn on the closeness of an issue that has already been decided on its merits. 

I have thus far reserved judgment both as to the validity of the Simms Postulate itself and as to its utility as a tool of legal analysis.  But let’s now open that door.  Specifically, this post identifies and discusses five questions that strike me as potentially relevant to the overall value of the Simms Postulate. To those of you expecting a comprehensive and definitive normative conclusion, I must apologize in advance:  What follows is tentative and conjectural, aimed more at beginning an evaluation of the subject than at completing it.  To those of you who like to read short blog posts, I should also apologize. I really didn't intend for this one to go on for so long, but, alas, it may now be eligible for the so-called “tl;dr” treatment. With those caveats offered, however, let me share some highly preliminary thoughts:

(1)  Does cognitive effort always signify issue-closeness?

The answer to this question has to be “no.”  Just because an individual has labored over the answer to a legal question does not mean that reasonable minds may disagree as to what that answer should be.  For one thing, high cognitive effort may simply signal a decision-maker’s unfamiliarity with (or inability to grasp) the law/facts that are implicated by the question itself.  Under those circumstances, high levels of cognitive effort may be expended, but only for the purpose of realizing that the answer to a question turns out to be fairly straightforward.

Somewhat more interestingly, even “expert” decision-makers with firm knowledge of a subject might sometimes end up devoting significant cognitive energy to resolving an issue whose answer turns out to be clear.  The truth of Fermat’s Last Theorem is now beyond doubt, but it took mathematicians over 350 years to show why.  I suppose that’s another way of saying that complexity is not the same thing as closeness: Some problems might be very difficult to solve ab initio, but once the solution emerges, no other answer is possible. Now whether there exist distinctly legal problems of this sort strikes me as an interesting question, but to the extent that such problems exist (perhaps, e.g., certain calculations of tax liability under the Internal Revenue Code?), the complexity/closeness distinction is worth bearing in mind.

Still, even if cognitive effort does not always signify closeness, it might still prove to be a good enough indicator of closeness, at least in some circumstances.  So, the absence of an ironclad link between the two variables doesn't necessarily disqualify the Simms Postulate across the board.
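
To make the complexity/closeness distinction concrete, here is a minimal toy simulation (a sketch in Python; the weights, thresholds, and the very idea of scoring these things on a 0-to-1 scale are my own illustrative assumptions, not anything drawn from the cases or the literature). If both closeness and complexity drive observed effort, effort will correlate with closeness, yet some high-effort issues will not be close at all:

```python
import random

random.seed(0)

def simulate_issue():
    """One hypothetical issue: latent closeness and complexity both drive effort."""
    closeness = random.random()    # 0 = one-sided, 1 = genuinely disputable
    complexity = random.random()   # 0 = simple, 1 = intricate (Fermat-style)
    effort = 0.5 * closeness + 0.5 * complexity + random.gauss(0, 0.1)
    return closeness, complexity, effort

issues = [simulate_issue() for _ in range(10_000)]

def pearson(xs, ys):
    """Pearson correlation, computed by hand to keep the sketch stdlib-only."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

closeness, complexity, effort = zip(*issues)
print("corr(effort, closeness): ", round(pearson(effort, closeness), 2))
print("corr(effort, complexity):", round(pearson(effort, complexity), 2))

# Effort tracks closeness and complexity about equally, so effort alone
# cannot distinguish a close issue from a merely intricate one; a small
# but real share of the highest-effort issues are lopsided on the merits.
high = [c for c, _, e in issues if e > 0.7]
print("lopsided (closeness < 0.3) among high-effort:",
      round(sum(1 for c in high if c < 0.3) / len(high), 2))
```

On this toy model the postulate holds only as a noisy signal: the correlation is real, but conditioning on effort alone cannot tell the close issue from the merely intricate one, which is just the Fermat point in miniature.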

(2)  Do we need a proxy for issue-closeness?

Phrased less kindly, this question asks whether the Simms Postulate poses a solution in search of a problem.  If it turns out that the relevant decision-makers are fully capable of asking directly whether a given constitutional right is “clearly established,” whether a given legal claim is “frivolous” or “substantial,” whether an agency’s reading of a statute is “reasonable,” etc., then why bother using an indirect proxy instead?  Even if valid, the Simms Postulate may not be needed; at best, it would simply complicate a set of inquiries that judges are already well-suited to perform.

The answer to this question depends in part on the findings of human psychology, a subject that falls outside the scope of my limited expertise.  Theoretically, though, the findings would have to show (or do in fact show?) that direct estimations of issue-closeness are likely to be biased or distorted in a systematic way. (Perhaps, for instance, I am hardwired to resist the sort of cognitive dissonance that would arise from suggesting that an issue I myself have decided in one way might reasonably have gone the other way.)  And if the findings did not indicate any such bias, then any need to rely on the Simms Postulate would indeed become less pressing.

What I might propose, however, is another way of framing the question that doesn't stack the deck so heavily against the Simms Postulate. Rather than ask whether it should displace a direct inquiry into “issue closeness,” we might more modestly ask whether the Simms Postulate could usefully inform such an inquiry.  When directly evaluating the closeness of a legal issue, a decision-maker will often look to several different variables: the language of the applicable text, the instructiveness of the applicable case law, how other judges have evaluated the closeness of analogous issues, etc.  Why not throw the added variable of “cognitive effort” into the mix? And indeed, if one revisits the examples I highlighted in my previous post, one sees the Simms Postulate functioning in this way, with the cited indicia of cognitive effort sometimes acting in concert with—rather than instead of—other variables that support the ultimate conclusion. If that is the relevant use of the postulate, then the urgency of the psychological question goes down. The usefulness of the Simms Postulate would no longer depend on a showing that judges suffer from systematic biases or other cognitive deficiencies when attempting to measure issue-closeness directly.

(3)  Do superior, alternative proxies exist?

Related to Question (2), we might wonder whether there exist easier or more reliable ways of approximating issue-closeness.  One immediate candidate is the extent of disagreement that exists across a group of decision-makers.  Consider, for instance, the recent suggestion of Eric Posner and Adrian Vermeule—thanks to an earlier commenter for the pointer!—that judges might consider the votes and positions of their peers when evaluating the reasonableness of an agency interpretation. (Consider also the somewhat related suggestion of Vermeule and Jake Gersen that Chevron deference might be better implemented by way of a supermajority voting rule on multi-member courts.)  If, in short, “close” legal questions are questions on which reasonable minds might disagree, then the extent of judicial agreement or disagreement on the merits could in theory provide valuable information as to the closeness of the question itself.

Even if imperfect (and Posner and Vermeule do highlight potential complications with their approach), the “judicial-disagreement” metric may well be superior to the “cognitive effort” metric—superior enough, in fact, to render the latter of limited usefulness.  On the other hand, the Simms Postulate might still remain useful in scenarios where only a single decision-maker has rendered a determination on the merits and thus lacks information as to other decision-makers’ views.  Furthermore, investigations into cognitive effort and investigations into judicial disagreement might sometimes operate alongside one another in a mutually supportive way.  Posner and Vermeule suggest, for instance, that one judge might sometimes wish to compare her own level of confidence about the rightness or wrongness of a position with the confidence levels of her colleagues, so as to gauge the depth of judicial disagreement in addition to its breadth.  And in that scenario, it still might be helpful for Judge 1 to ask whether Judge 2 struggled mightily with the issue or instead resolved it with ease.

(4)  How do you measure “cognitive effort”?

Three possibilities come immediately to mind. First, we might look to opinion length. Second, we might look to deliberation time. And third, we might look to first-person testimony. A few quick notes on each, followed by a toy sketch (after the list) of how the three might be combined:

  • Opinion Length: The intuition here is that a lengthier opinion reflects a greater amount of cognitive effort than does a shorter opinion.  Notice that the claim is not that lengthier opinions require more effort to write—a point that is likely true but also immaterial to the question we are considering here. Rather, the claim is that we can infer from a lengthy opinion that the opinion-writer worked hard in deliberating over the outcome.  That may be true to some extent, but other variables might still complicate the inference. Perhaps the opinion-writer is longwinded.  Perhaps the opinion-writer wanted to opine on some matter of tangential relevance.  Or, perhaps the case simply involved a large number of issues, each one of which required little-to-no effort to resolve.  Interestingly—and on that last point—I’ve come across a few unpublished district court opinions that went out of their way to attribute their length to the number of issues raised in a habeas petition, thus preemptively rebutting any Simms-inspired claim that the length of the opinion says something about the “substantiality” of the petitioner’s grounds for relief.  See, e.g., Peterson v. Greene, 2008 WL 2464273 (S.D.N.Y. June 18, 2008) (“The length of this opinion is a function of the number of arguments made by Peterson, rather than of the merit, or even difficulty, of any of them. None of the grounds he presents in seeking habeas corpus with respect to either of his convictions has the slightest merit. Accordingly, the petitions are denied. Because petitioner has not made a substantial showing of the denial of a constitutional right, a certificate of appealability will not issue . . . .”)
  • Deliberation Time: The intuition here is similar: when a decision-maker waits before rendering a decision, we might attribute the delay to an internal cognitive struggle. All else equal, the harder it is to decide a question, the longer one will wait before doing so.  Here too, however, delay may be attributable to any number of other factors: perhaps the decision-maker was busy working on other cases, perhaps the decision-maker was procrastinating, perhaps the decision-maker was agonizing over the stylistic aspects of an opinion, and so forth.  And the variable of deliberation time seems especially tricky as applied to multi-member bodies such as juries: True, a delayed verdict might indicate that all twelve jurors struggled with the question of whether to convict; but it also might indicate that a single stubborn juror held things up for a while.
  • First-Person Testimony:  Many doctrinal frameworks require judges to gauge the overall closeness of an issue that they themselves have already decided. So, if the evaluator of issue-closeness turns out to be the same person as the first-order decider of the issue, then that person might simply report on his or her own experience in deciding the issue as a means of justifying a subsequent decision regarding its closeness. “Trust me,” the judge might say, “I lost plenty of sleep trying to answer that question on the merits. Therefore, I conclude that the underlying claim was not frivolous.”  This metric carries the virtue of directness; but it is also susceptible to manipulation: The first-person decision-maker has privileged access to the workings of her own mind, and so is well positioned to exaggerate or downplay the degree of cognitive effort expended on the question.
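
Since all three metrics are imperfect in different ways, it may help to see them side by side. Here is a hypothetical composite scorer (a sketch only: the field names, weights, and normalization constants are all my own inventions for illustration, not a proposal for doctrine), with comments flagging the confounder each bullet above identifies:

```python
from dataclasses import dataclass

@dataclass
class Decision:
    opinion_words: int           # metric 1: opinion length
    days_to_decide: int          # metric 2: deliberation time
    self_reported_effort: float  # metric 3: first-person testimony (0 to 1)
    issues_raised: int           # the Peterson v. Greene caveat: many easy
                                 # issues can inflate length on their own

def effort_score(d: Decision) -> float:
    """Crude average of the three proxies; every weight is an assumption."""
    # Normalize length per issue raised, so a ten-issue habeas petition
    # does not look "hard" merely because the opinion answering it is long.
    length = min(d.opinion_words / max(d.issues_raised, 1) / 5_000, 1.0)
    # Delay may reflect a crowded docket or procrastination, not struggle.
    delay = min(d.days_to_decide / 180, 1.0)
    # Testimony is direct but manipulable by the self-interested reporter.
    return (length + delay + d.self_reported_effort) / 3

# Example: a 9,000-word opinion on one issue, decided after 90 days, with
# the judge reporting moderate struggle, scores in the middle of the range.
print(round(effort_score(Decision(9_000, 90, 0.5, 1)), 2))  # 0.67
```

The point of the sketch is not the particular numbers but the design choice: each proxy enters only after an attempt to strip out its known confounder, and even then the output would be, at best, one input into a direct assessment of closeness.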

A final point regarding all of these metrics: Recall my earlier observation that cognitive effort does not itself always signal issue-closeness.  So, even if long opinions, delayed judgments, or subjective descriptions tell us something about the level of cognitive effort that a judge has devoted to a legal problem, it does not necessarily follow that those problems qualify as “close” (as opposed to, say, “complex”).  That point, along with the difficulties I have attributed to each individual metric, suggests that exclusive reliance on the Simms Postulate is a risky business indeed. Rather, the postulate likely works better when accompanied by other independent measures of issue-closeness and/or substantive arguments concerning the nature of the issue itself.

(5)  Are there other factors to consider?

Of course there are other factors to consider!  For example, would an open embrace of the Simms Postulate induce first-order decision-makers to engage in unwanted strategic behavior? (E.g., “Because I want to deny qualified immunity, I’ll write a really short opinion on the merits and then point to that opinion to support my conclusion that the government official violated a clearly established right.”) Or might it simply confuse first-order decision-makers who otherwise might be trying to behave sincerely? (E.g., “Gosh, I’m taking a while to write this opinion. Does that mean the issue is more difficult than I initially thought? I guess I need to consider the issue further…”).  Should judges be more inclined to use the Simms Postulate when evaluating the closeness of an issue that they themselves have decided, or when evaluating the closeness of an issue that someone else has decided?  Are there other metrics of issue-closeness beyond the three I considered above? To the extent I do want to invoke the Simms Postulate, precisely whose cognitive efforts should figure into the mix? (E.g., When I am reviewing an agency’s interpretation of a statute, should I consider the amount of effort that agency officials expended on the interpretive question, in addition to the amount of effort that I myself expended?) And to what extent does the applicability/usefulness of the Simms Postulate vary according to the different ways that doctrines formulate and accord significance to issue-closeness? (E.g., Does it make more sense to consider cognitive effort when considering whether a claim is "frivolous" than it does when considering whether an agency position is "reasonable," or when considering whether a government official has violated "clearly established law"?) And so on...

***

There is, I admit, something silly about all of this.  Real-world invocations of the Simms Postulate are infrequent at best and not likely to increase in frequency any time soon.  And, as my analysis suggests, this may well be so for good reason.  Perhaps the game simply isn't worth the candle, especially given that courts can and do make arguments about issue-closeness without in any way relying on the variables I have discussed in this post. At the same time, I figure that the Simms Postulate is in the air enough to justify some focused thinking about its underlying merits. Tackling the issue won’t win us any games, but it may at least allow us to move the ball forward and score a few analytical points.

Posted by Michael Coenen on February 12, 2015 at 11:35 AM in Judicial Process | Permalink | Comments (1)

LSAC Report on Best Practices

A report recommending to LSAC best practices on accommodating LSAT test-takers with disabilities has issued from a panel convened pursuant to a consent decree between LSAC and DOJ. Here are the Executive Summary and the full report. (H/T: Ruth Colker (Ohio State), the sole lawyer on the panel).

Posted by Howard Wasserman on February 12, 2015 at 09:31 AM in Howard Wasserman, Life of Law Schools, Teaching Law | Permalink | Comments (11)

Bazelon sort-of defends Roy Moore

Emily Bazelon makes a sort-of defense of Roy Moore in The New York Times Magazine, echoing many of the arguments I have been making here.

Posted by Howard Wasserman on February 12, 2015 at 09:29 AM in Civil Procedure, Constitutional thoughts, Howard Wasserman, Law and Politics | Permalink | Comments (0)

Zoning in baseline hell: How landowners get a sense of entitlement to their neighbor's land

Zoning is an area of law stuck in a conceptual space that I call "baseline hell," a space in which, because social norms about entitlement are contested, any change in the status quo can be painted as either the exercise or invasion of private rights. In zoning disputes, for instance, residential land users assume that they have quasi-property easements over neighboring landowners' lots, even when the very existence of the easement is the result of the burdened landowner's exercise of their own property rights.

Take, for instance, the Brooklyn Heights neighbors of the Pierhouse, a luxury apartment building now being erected next to Pier 1 by the Toll Brothers in the Brooklyn Bridge Park. The development is providing hefty funding for the park next door: The combined ground lease and payments in lieu of taxes (both dedicated to the upkeep of the park) add roughly $1.50 per square foot to the common charges paid by owners.

The new structure also imposes costs on the neighbors in Brooklyn Heights: It blocks their view of the Brooklyn Bridge. On one view of the baseline of entitlement, the developer appears to have harmed the neighbors with a nuisance-like cost. But appearances can be deceptive in baseline hell: The view that the neighbors seek to preserve was actually created by Toll Brothers when they demolished an old warehouse that stood on the site now being developed for apartments. The view to which the neighbors now claim they are entitled existed only from 2010 (when the developer broke ground) to 2014 (when the Pierhouse reared its stories next to Squibb Park). In effect, the Brooklyn Heights homeowners want a scenic easement that would not exist but for the very development that they want to stop.

At various public hearings, the neighbors are vociferous with righteous wrath about Toll Brothers' blocking the view that Toll Brothers created, apparently on the theory that, sometime between the time that the dust settled on the old warehouse rubble and the moment that an "I" beam blocked the resulting view, they developed a vested right to the unobstructed vista. But is there some less partial way to determine who is invading the rights of whom? Are the neighbors trying to confiscate Toll Brothers' investment with zoning restrictions, or is Toll Brothers imposing a nuisance on the neighbors?

As I explain after the jump (with a little help from a paper by my colleague Adam Samaha), there is no easy way to answer this question. Welcome to baseline hell.

One might attempt to resolve the conflict between the neighbors and Toll Brothers by pointing to the hoary old legal idea, expressed by lots of cases (here's a recent one), that no one has a vested right to the continued existence of a zoning category. Landowners and neighbors should count on laws' changing. So Toll Brothers wins? The neighbors counter that Toll Brothers might have built their structure higher than their initial agreement with Brooklyn Bridge Park allowed. A deal's a deal: Score one for the neighbors? But Toll Brothers notes that Hurricane Sandy required bulkheads that unexpectedly raised the structure in a way that the RFP did not anticipate, requiring Toll Brothers to rely instead on the zoning code, which allows the additional height. So Toll Brothers wins after all?

The problem is that NYC zoning rules are so complex that any intuitive sense of baselines is lost in a miasma of sky exposure planes, height factors, and other NYC Zoning Resolution verbiage. In a rough and ready way, this legal complexity leaves three different and competing types of baselines in the zoning context:

(1) The actual legal status quo (what land-use types call "the zoning envelope"),
(2) The actual uses physically present on the ground (which the neighbors latch on to as their property entitlement, regardless of what legal doctrine might say), and...
(3) What Adam Samaha calls a "process" baseline -- that is, the zoning change that is produced by the normal operation of the local administrative and legislative process.

This last sort of "process baseline" is rooted in the intuition that a change in legal classification emerging from the ordinary operation of expected legal processes is, in the eyes of the law, no change at all: It is the change that everyone ought to have expected. In the case of the Pierhouse dispute, no rational homeowner could have expected that the under-used, shabby warehouses that occupied prime waterfront for years prior to the creation of the Brooklyn Bridge Park would or should have remained the land-use status quo. The whole point of the zoning process is to update obsolete zoning categories created when (for instance) Brooklyn was, prior to the Container Revolution, a center of heavy industry.

One might conclude, therefore, that the Brooklyn Heights neighbors are behaving unreasonably when they angrily denounce Toll Brothers for taking scenic easements that those neighbors could not reasonably expect to monopolize.

This interpretation of the relevant baselines, however, ignores the tendency of the zoning process to reward outrage about change. The ordinary and expected strategy for influencing zoning decision-makers is to flood the hearing room with booing and hissing opponents of any change in the physical status quo. The refrain of such opponents of Baseline #3 is that Baseline #2 is the right and proper baseline defining the truly just expectations, because the normal operations of legal processes would produce results incompatible with the existing physical status quo. If enough people show enough outrage, then prudent zoning decision-makers (often elected officials or people who serve at elected officials' pleasure) may choose Baseline #2. If not, then Baseline #3 rules.

Adam Samaha refers to clashes between Baseline #3 and Baselines ##1-2 as "process-results" combinations. As a friendly amendment, I would suggest that the American zoning process is intended to produce such conflicts between proponents of different baselines, so that astute political decision-makers can see which way the wind is blowing.

In short, insofar as zoning is concerned, we live in Baseline Hell by design. From the point of view of a New York City Community Board member or developer's lawyer, I guess it must be tedious to listen to so much outrage devoted to so little purpose. But for law profs who teach and study zoning, it is nothing but fun, so I am not complaining.

Posted by Rick Hills on February 12, 2015 at 02:21 AM | Permalink | Comments (1)

Wednesday, February 11, 2015

The wrong vehicle?

Judge Granade has scheduled a hearing for Thursday to decide whether to add Alabama Probate Judge Don Davis back into the case as a defendant and whether to enjoin him from enforcing the state ban on same-sex marriage. That injunction is all-but-certain to issue. Believe it or not, however, it may not end the controversy. We still have a scope-of-the-injunction problem. Since Searcey and her wife remain the only plaintiffs, the injunction would only compel Davis to allow Searcey to adopt her wife's child. That's it. Even as to Davis, the effect of the opinion as to anyone else's rights would be merely persuasive.

The problem is that Searcey may be the wrong litigation vehicle for getting probate judges to issue licenses, since it is not a marriage-license case but an adoption case. And it seems to me that it is impossible to turn it into a license case by adding new plaintiffs (through joinder or intervention) who are looking for licenses rather than to adopt, since they are seeking entirely different relief. Perhaps the validity of the same-sex marriage ban (and whether the plaintiffs are or can be married) presents a common question of law or fact. But the questions are arising in such wildly different contexts and settings.

Update: Thanks to the commenter below for correcting me. The events are happening in Strawser, an action brought by a male couple in January against Attorney General Luther Strange, which produced a (largely meaningless) injunction against him; Davis has since been added as a defendant, and a hearing on a preliminary injunction against Davis is scheduled for Thursday. In addition, according to this story, there is a second action in the Southern District by several couples, naming Davis and Moore as defendants.

Now we are beginning to see some progress. Once Davis is directly enjoined to issue licenses, expect everyone else to fall in line.

Posted by Howard Wasserman on February 11, 2015 at 02:31 PM in Civil Procedure, Howard Wasserman, Law and Politics | Permalink | Comments (10)

JOTWELL: Walsh on Re on Narrowing Precedent

The new Courts Law essay comes from Kevin Walsh (Richmond), reviewing new PermaPrawf Richard Re's Narrowing Precedent in the Supreme Court (Colum. L. Rev.). As always, both are worth a read.

Posted by Howard Wasserman on February 11, 2015 at 01:44 PM in Article Spotlight, Howard Wasserman | Permalink | Comments (0)

Introducing Skills Training in the Doctrinal Classroom: An Overview and a New Coursebook

The following post is by Hillel Levin (Georgia) and is sponsored by West Academic.

For several years—decades now!—there have been clarion calls for changes to law school pedagogy. Buzzwords like experiential education, practical learning, skill building, problem solving, and others have been thrown around with increasing frequency. These calls have only grown louder as the market for legal services has experienced both cyclical and structural changes.

Many law school professors want to answer these calls and to include skill-building in the doctrinal classroom. Sessions devoted to this topic at annual law conferences (like SEALS) are typically among the best-attended; the topic comes up repeatedly in chatter on blogs and listservs; and faculty members are constantly sharing notes and ideas. Yes, it is clear: the demand for appropriate teaching materials is high.

Unfortunately, until the past couple of years, professors have not been able to find much, as authors and legal publishers have been unsure of how to meet the demand for this new pedagogy. In the absence of published solutions, some professors developed their own materials, much to the benefit of their students.

However, many professors have expressed frustration with the difficulties inherent in developing such materials for the doctrinal classroom. Which skills should I focus on? What makes for a “good” simulation? How should I review and discuss case documents with students? How can I naturally integrate these novel materials? How do these materials fit alongside the traditional casebook that the course is built around? Do I really have to invent all of this from scratch? How do I give useful feedback? How do students work collaboratively in class while receiving individual grades? Should I ask students to do research? Write memos? How much time will it take? What will I have to sacrifice in terms of substantive course coverage? How do I explain to students the purpose and use of this “extra” material so that they buy in? Will students rebel?

Since I began teaching in the doctrinal classroom six years ago, I have been committed to developing practical lawyering materials for each of my courses (Legislation and Statutory Interpretation, Civil Procedure, Constitutional Law II, Administrative Law, and Education Law and Policy). I took an everything-including-the-kitchen-sink approach, introducing new materials every year, tweaking old assignments, and tossing whatever hadn’t worked the first time around and couldn’t be salvaged.

In introducing these materials, I always (1) explain to students the purpose of each assignment, (2) am transparent about the experimental nature of the material, and (3) request anonymous feedback for everything. I have found students to be remarkably open to the experimentation, appreciative of my effort to help prepare them to be better lawyers, and insightful in their feedback. Even when an experiment fails and/or places unfamiliar and time-consuming demands on students, they unfailingly express gratitude at the attempt. I suspect that some are simply bored with traditional law school teaching by their second or third year; others never liked it in the first place; others simply appreciate a variety of teaching techniques; and others very much want more skills training. In any event, the response from students has been overwhelmingly positive. Most rewarding of all have been the emails I receive from students in summer or post-graduation jobs (sometimes years later) recounting how they impressed a supervisor or were particularly prepared for an assignment thanks to something we did in class.

I discovered early on that some of my courses are more naturally given to this kind of experimentation than others. My Legislation and Statutory Interpretation class, which focuses on statutory interpretation but also covers legislative and regulatory processes, proved to be a natural fit. As I introduced more and more practical lawyering materials, students began to ask me to replace the casebook altogether with my own materials. After five years of teaching the course, I finally felt ready to tackle the challenge. West Academic Publishing, which has been making a concerted effort to publish practical lawyering materials (primarily, but not exclusively, with course supplements), quickly accepted my proposal.

The result is Statutory Interpretation: A Practical Lawyering Course, a new paperback (and thus comparatively affordable) coursebook that serves as a standalone text for any course anchored to statutory interpretation, though it also includes materials suitable for related courses, like Legislation or Leg/Reg. It covers the leading cases and doctrines, but it also offers a variety of experiential and skills-building exercises. The teachers’ manual includes a sample syllabus, case summaries, points for discussion, and perhaps most importantly, detailed suggestions for how to successfully use the exercises. It offers guidance for exercises geared to improving students’ skills in negotiating and drafting legislation, strategizing, organizing arguments, responding to counter-arguments, conducting legal research, writing briefs, and more. My plan is to refresh the book every two years in order to keep the cases and assignments current.

The central innovation of this book (I hope) is that it brings practical lawyering skills into the framework of the doctrinal classroom without casting off the benefits of traditional law school pedagogy. It explains why students are asked to do some things that may be unfamiliar to them, and it makes explicit the connections between the traditional doctrinal and case-based materials, the novel materials and exercises, and the role of the attorney in the real world. In addition, it gives professors substantial freedom to work with these materials as they see fit.

Publishers have finally begun to respond to the demand for these kinds of materials by offering a variety of products. We are in an exciting period of innovation in law school teaching, and I am thrilled to be a part of it.

Posted by Howard Wasserman on February 11, 2015 at 09:31 AM in Sponsored Announcements, Teaching Law | Permalink | Comments (0)

Dorf on Roy Moore and Alabama

Mike Dorf's take on Roy Moore and the events in Alabama. Mike concludes "that while Chief Justice Moore's memo was a lawyerly piece of work, it ultimately does not advance his (distasteful) cause. It's at best a cover for his Faubusian agenda." He argues that Moore ultimately was playing a losing hand because couples always could sue the probate judges in federal court (because, as I have argued, issuing the licenses is not a judicial function). In playing it, therefore, Moore was simply trying to play Orval Faubus (or George Wallace, to keep it in the same retrograde state).

I agree that Moore likely is doing all this for bigoted reasons. But that is not necessarily established by the fact that the probate judges could be sued and enjoined. I never read Moore as denying that, or as denying that this would change the analysis and their obligations (certainly some probate judges recognized as much). Moreover, what difference should it make that Moore's position will ultimately prove a loser? The question is whether it is wrong to force the plaintiffs to go through the process of establishing their legal rights, and not to depart from your preferred position (non-issuance) unless formally compelled to do so, even when you know exactly how it will play out (and even when it likely will cost the taxpayers attorney's fees).

There is an obvious comparison between Alabama and Florida. In both states, officials charged with issuing licenses (county clerks in Florida, probate judges in Alabama) took the position that they were not bound by the initial district court order or opinion invalidating the state ban. And in both, the federal court issued a "clarification" that the earlier injunction did not compel any non-parties to issue licenses, but the Constitution did (whatever that means). But then they part ways. In Florida, the county clerks folded their tents following the clarifying order and began issuing licenses across the state,* although they were not legally compelled to do so by that clarification and did so only as a strategic choice to avoid being sued. But the Alabama probate judges, and Moore, have not done the same; unlike the Florida clerks, they seem intent on making the plaintiffs take the steps of obtaining those individualized federal injunctions.

* Mostly. Clerks in several counties avoided having to issue licenses to same-sex couples by ceasing issuing licenses at all.

So, two questions: 1) Why is Alabama playing out differently? Is it Moore and other officials playing Wallace/Faubus by demanding formal legal processes? 2) Is it wrong of them to demand those processes be followed (and by that I mean not merely less preferable or more expensive, but morally or legally wrong)?

Posted by Howard Wasserman on February 11, 2015 at 12:44 AM in Constitutional thoughts, Howard Wasserman, Law and Politics | Permalink | Comments (4)

Tuesday, February 10, 2015

Brian Williams, Eye Witness Testimony and the Permeability of Memory

I have no idea after reading this article in the New York Times if Brian Williams does or does not believe that he witnessed the helicopter crash when he was actually nowhere near it, but I do believe, based on scientific evidence discussed in this post, that our memories are highly permeable.  Things that we see and hear later can become part of what we think are events we actually experienced.  In other words, our memory is not like a hard drive or a camera on which events are recorded.  Instead, memories are creations of our imagination, re-created every time we think of them.  See this article in Scientific American for the details.  In an article I’m preparing for the current submission season, I start with reference to the charming Lerner & Loewe song from Gigi where Maurice Chevalier and Hermione Gingold compare conflicting versions of the first time they met—each equally sure they are right.  And that’s the problem—our mind gives us memories as a seamless whole; we cannot perceive cracks or seams. 

But what does this have to do with law?  Well, Brian Williams will be fine whatever happens.  However, the millions of people in the United States who have been convicted based on inaccurate eye-witness testimony are far less fortunate.  Here at Texas Tech University we recently honored the memory of Tim Cole, a student at the university, who died in prison after being wrongly convicted of rape based on now-recanted eye-witness testimony.

Elizabeth Loftus, the research psychologist who did the most to make this phenomenon known in the criminal justice community, describes her research in this TED Talk, and her website at the UC Irvine School of Law will lead you to her substantial body of work.  My very favorite study showing how false memories can be created involves individuals who were convinced that they shook hands with Bugs Bunny at Disneyland (an intellectual property impossibility).  Other legal scholars to check out are Mark Godsey at the University of Cincinnati College of Law, Sandra Guerra Thompson at the University of Houston Law Center, Brandon Garrett at the University of Virginia School of Law, and Patricia J. Williams at Columbia Law School.  For a compilation of materials, see these collections put together by the Huffington Post and The Innocence Project, including this piece by Barry Scheck highlighting a recent National Academy of Sciences report.

Posted by Jennifer Bard on February 10, 2015 at 12:53 PM in Criminal Law, Current Affairs | Permalink | Comments (0)

The irony of trying to have it both ways

Much of what is happening with same-sex marriage in Alabama right now is a product of  a hierarchical and geographically dispersed judiciary. The district courts hear cases first and may decide quickly, but the decision (beyond the parties themselves) has limited precedential value. The courts of appeals and SCOTUS create sweeping binding precedent, but it takes longer to get those decisions.

Had the Eleventh Circuit or SCOTUS ruled that the Fourteenth Amendment prohibits same-sex marriage bans, the obligations of state officials would be clearer. It would be certain that any district court would order them to issue the license because the precedent would be binding and that to not issue licenses would subject them to contempt. It also would be certain they would be on the hook for attorney's fees. And they may even be on the hook for damages, because the law would be clearly established. But we are still early in the process in Alabama, so we only have a persuasive-but-not-binding opinion from a district court. And we see what we would expect--it is persuading some actors, not persuading others; when lawsuits start coming, it may persuade some district courts and not persuade others.

In the short term, of course, this may give us Swiss cheese--one report this morning said 16 out of 67 counties are issuing licenses. Uniformity within the state comes with that binding precedent from the reviewing court. But it takes time.

There is a way to avoid Swiss cheese, of course: Have the district court decision and order stayed pending appeal. Then everyone will be able to marry at the same time--once the reviewing court provides binding precedent that same-sex marriage bans are invalid, after which everyone is bound. Of course, no one on the pro-marriage equality side wants to wait. I would guess everyone would strongly prefer marriages in 16 counties to marriages in none.

But that is the choice. You can have marriages begin without binding precedent, but not every official or court will go along with the precedent, so not everyone will gain the benefit of it. Or you can get uniformity from the eventual binding precedent so that everyone will be bound and everyone will benefit, but you have to wait. You cannot get both. And while frustrating, it is wrong to attribute this procedural reality to malfeasance by state officials.

Posted by Howard Wasserman on February 10, 2015 at 11:53 AM in Civil Procedure, Constitutional thoughts, Howard Wasserman, Law and Politics | Permalink | Comments (4)

Public Defenders as Prosecutors?

I want to continue to think about how we should handle criminal cases involving police misconduct, particularly (though not only) police-involved killings. The core problem, obviously, is that local DAs need the cooperation of local police, making it hard for the DAs to vigorously prosecute officer misconduct. The failure to secure indictments in the Michael Brown and Eric Garner cases highlighted this problem.

In my previous two posts, I considered some of the limitations with Wisconsin’s solution, namely relying on outside investigators to provide local DAs with a report, and with what New York’s AG wants, namely a special police-focused prosecutorial unit in the state AG’s office.

But criticism is easy. If I have problems with the Wisconsin and New York plans, what would I recommend?

I want to suggest something fairly radical, which I haven’t seen anywhere in the debate. There could be a host of reasons why this is impractical politically, or why implementation could never work, etc. etc. But that’s the great thing about a blog: you can float a trial balloon and see if it is filled with helium or lead.

What if we created a special police-misconduct prosecutors office in the public defender’s office?

I admit that this might sound a little implausible, but the more I’ve thought about it, the more appealing it seems to me. Or, at the very least, thinking about it this way highlights the very real challenges that any effort to create an effective police-violence prosecution unit faces.

So what are the benefits of relying on public defenders to prosecute police cases?

1. It solves the problem of confirmation bias/personal investment. Unlike prosecutors, public defenders work against the police. So they lack the inherent inclination to believe them (that’s the confirmation bias part), and they don’t have to worry about jeopardizing on-going relationships. Of course, the problem could run the other direction: they may overestimate the likelihood that the police are guilty and go to trial too often. But my data-free gut instinct is that “net” confirmation bias would drop. Regardless, the beyond-a-reasonable-doubt standard should protect police from excessive PD zeal.* 

2.  Not only does relying on PDs eliminate the problem of attacking people needed to do one’s job, but it addresses the internal promotion problem as well. A prosecutor who effectively prosecutes police will likely face limited promotion options: no one is going to become a top senior DA based on the number of police sent to prison/disciplined/etc. But those internal incentives switch in a PD’s office. The public defender who aggressively defends defendant/civilian interests by targeting police misconduct is doing exactly what the office seeks to do.

3.  More broadly, calling on PDs to handle these cases likely creates a better alignment with underlying senses of purpose. In a system that has almost no jury trials, PDs already see their job as providing one of the few meaningful barriers between criminal defendants and the power of the state. Punishing criminally-malfeasant members of that state seems consistent with that goal not just doctrinally, but also conceptually. PDs have already voluntarily taken on the unpopular job of representing those disliked by society. Defending the unpopular seems to be a close parallel to prosecuting the popular.

4.  Asking PDs to prosecute police misconduct also seems to solve a major information problem that all the other proposals ignore. The entire conversation about outside prosecutors has focused on a single issue: how to handle police-involved killings. And these cases are easy to identify: they quickly go public, and a killing can’t not be reported (in general). In a future post, though, I want to think about other types of police offending that merits formal prosecutorial response, such as aggravated assault or (perhaps more for prison guards) rape. And here information about the offense becomes trickier to uncover.

How would, say, the AG unit based in Albany or Sacramento know about these cases? Who would refer them to the AG? Who would screen the claims in the AG’s office, and how would they know which claims are viable/legitimate? Compare that to a system where the PD representing the defendant need only go down the hall to the PD unit handling police abuses and say “my client claims he was beaten senseless while handcuffed. And I believe him because….” Information flows are faster and easier, and the social networks of PDs provide a ready screening mechanism.

All that said, several concerns jump to mind:

1.  PDs are already over-worked and under-funded. Now I’m dumping even more work, and politically unpopular work at that, on them. Do we think that state legislatures will provide sufficient additional funding to ramp up such offices? The AG’s office has a lot more political clout, and it has the political “cover” of not just representing the “bad” guys (and now going after the “good” guys), so it could likely fund such an office much better than the PD could.

2. Along the exact same lines, PD offices may be concerned that effectively prosecuting police will lead to budget cuts, for the very purpose of shutting down such actions. 

3. A lot of PDs may feel that everyone deserves to be protected from punishment by the state, even state actors who violate the law. So calling on PDs to prosecute even these sorts of cases may not align as well with underlying preferences as I suggest above.

4. This approach doesn’t work so well for cases of non-lethal police abuse involving people never arrested, since these victims never raise their claims to PDs. But then neither does the AG approach, so this indicates only that the proposal isn’t perfect (really??), not that it is worse than the alternatives.

Now, all this said, I don’t actually expect to see bills start moving forward in state legislatures to set up PD Prosecution Units, although it would be great if they did. But hopefully, at the very least, by highlighting how well-incentivized PDs would be to handle these police-involved cases, this points out the incentive challenges that arise when we ask any sort of prosecutor to manage them.

* The immediate rebuttal here is that BRD does not often appear to protect defendants in a world of plea bargains. But I think police defendants would be qualitatively different than the usual criminal defendant. They have a better understanding of the law, they will almost certainly make bail (between police union fundraising and likely more-favorable treatment from judges—and thus lack the incentive to plead to time served), and they will have solid representation. For defendants like this, BRD likely has a lot of oomph.

Posted by John Pfaff on February 10, 2015 at 11:40 AM in Criminal Law | Permalink | Comments (7)

And the media does not help

Most counties in Alabama were not issuing licenses as of yesterday, but not improperly so as a matter of process. You would not know it from the media, though, with headlines such as Most Alabama Counties Defy Feds by Blocking Gay Marriage (ABC News, complete with video of George Wallace in the doorway); Judicial Defiance in Alabama: Same-sex marriage begins, but most counties refuse (Wash. Post); The Supreme Court Refused to Stop Gay Marriage in Alabama, But the State's Governor and Chief Justice Are Refusing to Listen (TNR); and Alabama's Roy Moore Defies Federal Order, Refuses to Allow Gay Marriage (Slate's Mark Joseph Stern, who can't help himself, calling it a "stunning display of defiance against the judiciary").

Posted by Howard Wasserman on February 10, 2015 at 07:13 AM in Civil Procedure, Constitutional thoughts, Howard Wasserman, Law and Politics | Permalink | Comments (7)

Getting Rid of NYC's Zoning Bazaar: Let's Not Make a Deal.

In last Tuesday’s State of the City Speech, Mayor de Blasio made housing affordability the centerpiece of his Administration. He has promised to create 240,000 new units of housing, including 80,000 “affordable” units as antidotes to NYC’s rising rents. But is this a promise he can keep?

The economics of housing affordability are a lot easier than the politics. Economically speaking, it is a familiar point that big cities like New York have excessive zoning restrictions that reduce the supply of land and thereby drive up rents without producing any commensurate environmental benefit for the city. These zoning walls exclude lower-income workers from economic opportunity, but they often add little value to local residents – for instance, keeping land locked in districts reserved for manufacturing when such industry is better pursued in jurisdictions with better access to rail transit.

Neighbors in big cities, however, love their zoning as much as any suburbanite loves the gates of their gated community. Therein lies the tricky politics. Backed by “aldermanic privilege,” each city council person excludes new housing from their own district to placate their vociferous NIMBY residents, creating a housing shortage for the city as a whole.

There are two ways to overcome this perverse political dynamic towards urban exclusion. First, developers bargain individually with each neighborhood and its council member, crafting a special land-use deal in which the neighbors relent in their opposition, bought off by new schools, parks, plazas, or affordable housing. Call this the “zoning bazaar” method. Second, the mayor could promulgate a binding and general plan ahead of time, specifying the conditions under which floor-area ratio can be added to lots in different parts of the City. Call this the “posted price” policy. The latter seems rigid and unwieldy, but, as David Schleicher and I argue elsewhere and as I explain after the jump, it may actually be a more libertarian approach than the seemingly more flexible lot-by-lot bargaining.


The problem with the “zoning bazaar” is that the uncertainty it creates makes it more difficult for landowners everywhere in the city to develop their lots. Effective bargaining with the City requires a lot of inside knowledge and contacts, meaning that developers who want to re-zone under-used parcels must hire well-connected fixers to massage the right politicians into a mood to deal. Quite apart from the opportunities for old-style palm-greasing corruption that such a system creates, the lot-by-lot deal-making requires big scale economies in political negotiation – hiring lobbyists, making regular campaign contributions, engaging the recognized firms for environmental review, and so forth -- that deter outsiders lacking connections from developing lots. In effect, the “bazaar system” is a gigantic cloud on the title of every city lot: The actual value really depends on the connections of the owner, slowing down construction of housing wherever those connections cannot be mobilized.

Two mega-projects recently approved for a desolate piece of East River waterfront illustrate the problem. The City Council approved both projects – mixes of residential and commercial uses – for a peninsula of mostly desolate warehouses in West Queens. “Hallets Point” will contain 1,900 market-rate units and 483 units of subsidized housing to be located in a nearby public housing complex: Originally a Lincoln Equities project, it was taken over by the Durst family’s operation (real estate royalty, to you non-New Yorker readers). Just next door, Alma Realty finally secured approval for “Astoria Cove,” consisting of 1,240 market-rate units, in return for which the City will get 460 affordable units, a public school, and the refurbishing of a local park.

Why did Durst have to provide only 20% of their total units as affordable housing (483 of 2,383), while Alma had to provide 27% (460 of 1,700)? Who knows? Maybe Alma hired the wrong lobbyist: They retained Sean Crowley, brother of Congressman Joe Crowley, while Durst retained Peter Vallone, Sr., a member of the political dynasty and father of Astoria’s city council member. Whatever Vallone provided to Durst and to Durst’s predecessor and now junior partner, Lincoln Equities, did not come cheap in either time or money: Lincoln Equities filed plans with the City Planning Department in 2009 and spent almost $2 million on lobbying expenses to move the project through the approval process.

Forget, for a moment, about the semi-corruption of this sort of incestuous deal-making, in which politicians deal with the people they know – often their relatives – with unlimited discretion to tailor the terms of each deal to fit the influence being peddled. Instead, focus on the costs of uncertainty imposed on every other lot not being developed because its owners lack the scale to hire the right political muscle to change the zoning. Such a system creates a bias in favor of mega-projects on out-of-the-way industrial sites, where public opposition will be muted and the development’s size will be large enough to carry the “soft costs” – lobbying, legal dickering, consulting, environmental review, and so forth – that lot-by-lot bargaining entails. Siting housing in this way is perverse: It tends to place large buildings far from public transit, such that the City has to chip in millions for ferry service to serve these proposed West Queens developments. As Peter Vallone, Jr. (the city council member, not his dad the lobbyist) remarked, “If they [the developers] can’t come in big, they’re not coming in.”

Mayor de Blasio boasted that his Administration had bargained tough with Alma for that extra 7% in affordable housing. But how many units of housing do we lose because smaller sites cannot get variances, special use permits, or map amendments through ULURP (the City’s Uniform Land Use Review Procedure), lacking the local talent needed to negotiate the lot-by-lot bazaar?

Consider an alternative way of promoting housing. Instead of gearing up for discretionary approvals of mega-projects, why not push through city-wide laws that loosen the rules on accessory uses and specify general guidelines for approving use variances and conditional use permits that promote housing until the City’s housing goals are met?

The advantage of city-wide policies over site-specific deals is that, because they do not single out particular developments, they do not arouse the same neighborhood opposition. Individual members cannot as easily invoke aldermanic privilege to hold up a deal in exchange for local goodies for their constituents. City-wide policies also promote finer-grained development patterns that allow housing markets, rather than politics, to choose sites for housing.

Could de Blasio (or any other big-city mayor) get such general policies through a Council jealous of its zoning prerogatives? One advantage of a city-wide policy is that it overcomes the collective action problem each council member faces when confronted with a site-specific proposal. Even a member who would accept a fair share of housing does not want to represent the only district that ends up with extra density. To ensure that every member feels fairly treated, each supports every other member’s veto of local development. But no member is left holding the bag when a general policy with ramifications for every district is proposed.

Moreover, the mayor could press many city-wide policies without any legislation at all. A planning document announcing, for instance, that certain general types of housing proposals would be exempt from costly environmental review, or would be presumptively deemed not to have a detrimental effect on neighboring properties, could have a big impact on the behavior of the administrative bodies that grant variances and special use permits. If given deference by state courts, such documents could also dampen neighbors’ NIMBY lawsuits.

The de Blasio Administration has not yet released the details of its housing plan. Those details are all-important: They include the specifics of mandatory inclusionary zoning and transferable development rights. These anticipated policies give the Administration a great opportunity to do something truly visionary. Rather than approve this or that member of real estate royalty’s special mega-project, the de Blasio Administration could create non-discretionary, as-of-right categories of building rights with published “price lists” (i.e., published and detailed conditions for extra floor-area ratio) that would banish the zoning bazaar. Although it seems counter-intuitive, such centralized command-and-control rules would move the City far closer to a true market system than any “Atlantic Yards”-style development ever approved under Mayor Bloomberg, the ostensible advocate of private enterprise.

With Nixon-to-China logic, it might take de Blasio, ostensible lefty, to spread the True Private Enterprise Gospel: Private enterprise means that private parties should be able to spend less time making deals with the government so they can spend more time making deals with each other.

Posted by Rick Hills on February 10, 2015 at 03:15 AM | Permalink | Comments (4)

Monday, February 09, 2015

Comments working again

We have found a temporary fix for the problem with Comments, so readers should be able to resume commenting. Thanks for your patience.

Posted by Howard Wasserman on February 9, 2015 at 11:45 PM in Blogging, Howard Wasserman | Permalink | Comments (0)

Measles – An Update and Some Constitutional Issues

So things are moving fast on the measles front. Today I’m going to do a quick overview of mandatory vaccination for childhood diseases; later this week, I’ll look at what it tells us about our efforts to prepare for a bioterrorism event (spoiler: nothing good).

The measles outbreak has now spread to 17 states and the District of Columbia. And things are worse than they seem. The current “outbreak” (the number of cases that can be traced back to the original Disneyland exposure) signals how many people in the U.S. lack immunity not just to measles but, most likely, to the other two deadly diseases the MMR vaccine protects against: mumps and rubella (German measles). For an overview of the damage done by Andrew Wakefield’s now-discredited article, see here. See how Megyn Kelly explains it here. Last year I gathered some resources specific to young adults; they are here.

Rubella poses a serious risk to developing fetuses. According to the CDC, a pregnant woman has “at least a 20% chance of damage to the fetus if . . . infected early in pregnancy.” This damage is called congenital rubella syndrome (CRS). Warning: you may want to take my word that this potential damage is serious rather than read this very descriptive CDC report. Mumps is also quite serious. Again, a warning: it may be enough to know that the virus causes swelling in various body parts and can be a contributing factor to infertility or low fertility in a small but real percentage of men who become infected.

Moreover, it seems unlikely that MMR is the only vaccine these children lack. They are also at risk for polio, diphtheria, tetanus, whooping cough, chickenpox, hepatitis B (and no, it’s not just a sexually transmitted disease), meningococcal disease, and something really unpleasant for which there is now a vaccine: rotavirus. Here’s the list.

The public focus has turned very quickly to law and to ending vaccination exemptions (see here and here), so here are some resources in case the topic comes up. As top legal experts like Professor Lawrence O. Gostin are making clear, there is no constitutional requirement to exempt anyone from mandatory vaccination in the face of a credible threat to the public’s health. The Supreme Court held in Jacobson v. Massachusetts that the individual states have full authority to pass mandatory vaccination laws and that they are not obligated to give exemptions for reasons of philosophy or preference. For more background on the constitutional issues, see Professor Parmet here, here, and here, and Professor Edward P. Richards. The situation is a closer call when it comes to religion, but not much. As Justice Ginsburg points out in her dissenting opinion in Burwell v. Hobby Lobby, “Religious objections to immunization programs are not hypothetical.” 134 S. Ct. 2751, 2805 n.31 (2014). And in terms of an adult’s right to claim a religious exemption from medical care for a minor, the law is, if anything, clearer. Even when making a “martyr” of oneself doesn’t pose a threat to others, a state still has the power to intervene when the religious belief is claimed on behalf of a minor. Here’s a helpful overview by the Congressional Research Service of vaccination laws in the U.S., and here's one that looks at laws overseas.

You may be interested to know that the CDC is tracing several other outbreaks at the moment, including Listeria monocytogenes from caramel apples and from sprouts.

Read Professor Edward Richards’ article, or this one by Professors Mariner, Parmet, and Annas, if you want to get ahead. Go here if you want to get ahead on bioterrorism via infectious disease.

Posted by Jennifer Bard on February 9, 2015 at 05:07 PM in Constitutional thoughts, Current Affairs, First Amendment, Religion | Permalink | Comments (0)

No contempt for you

Motion for contempt denied – as expected and as appropriate. Judge Granade emphasized that Judge Davis is not a party. And she pointed out that her clarification order "noted that actions against Judge Davis or others who fail to follow the Constitution could be initiated by persons who are harmed by their failure to follow the law." In other words, plaintiffs' lawyers: pay attention to what the judge tells you.

Posted by Howard Wasserman on February 9, 2015 at 04:40 PM in Civil Procedure, Constitutional thoughts, Howard Wasserman, Law and Politics | Permalink | Comments (5)