
Wednesday, January 11, 2012

Cavazos v Smith II

In my previous post—which I admit was embarrassingly long ago for the first half of a two-part post—I raised some concerns with how the majority in Cavazos handled dueling experts. Now I want to turn my attention to the dissent, written by Ginsburg and joined by Breyer and Sotomayor. The dissenting justices attempt to provide their own non-technical meta-analysis of the empirical literature on shaken baby syndrome (SBS), an effort that raises serious questions about their technical competence to do so.

In Cavazos (for a quick review), the Court upheld a murder conviction based on SBS in which there were no witnesses to the alleged shaking and the evidence of SBS was ambiguous. It was a conventional “dueling experts” case. For procedural reasons, the dissent wants to demonstrate that the evidence that shaking alone can kill a baby is weak. To do this, the dissenters decide to survey the literature on SBS.

It seems clear that the evidence about whether it is possible to kill a child solely by shaking him is mixed; Edward Imwinkelried has summarized the competing studies, and courts in several countries are wrestling with whether SBS is real, or at least reliably diagnosable. So the dissent is surely right when it says that there is increasing doubt about whether death through shaking alone is diagnosable, or even possible. Whether the evidence base is sufficiently ambiguous to justify the procedural argument the dissent is trying to make is a question of law, not science, and thus beyond the scope of what I want to talk about here (albeit an important one).

Instead, I just want to use the Cavazos case to highlight a challenging issue in the judicial use of scientific evidence: When engaging in empirical literature reviews, what should justices do to make sure they get the “right” articles, or get the “right” sense of the literature?

In this case, the dissent summarizes the findings from seven journal articles (here, here, here, here, here, here, and here, for those interested). How did it choose these seven? None of the articles chosen by the dissent comes from a leading journal like JAMA, BMJ, or NEJM, and Google Scholar indicates that there are at least 5,840 articles on this issue, with at least 828 published since 2010 alone. How can we be sure that the justices didn’t just cherry-pick articles to suit their fancy? (This is an allegation Justice Scalia has lobbed at some of his data-quoting colleagues in the past, although I imagine it is one many historians would toss right back at him.)

Of course, the obvious answer is the amicus brief. But there are two concerns to keep in mind with this solution:

  1. Not every case has the necessary resources, and Cavazos is a case in point. The Smiths appear to be a poor family that was poorly represented, and their case did not even merit oral argument before the Court. The American Pediatric Association is unlikely to file an amicus in a case like this, and the Smiths likely lacked the resources to fund such a brief.
  2. Though potentially helpful, amicus briefs still run the risk of being too partisan. Despite being called amicus curiae briefs, they really should be called amicus sectae (?) briefs: briefs filed by a friend of the party, not of the court. Dueling amicus briefs may simply recreate the dueling-expert problem at the appellate level.

Perhaps the justices themselves could commission amicus briefs, thus eliminating some of the partisanship.* But Cavazos illuminates a clear potential problem: what if only some of the justices want such a brief? In that case, partisanship concerns arise once more.

Assume that Ginsburg, Sotomayor, and Breyer are concerned that the evidence base in support of SBS is weak and commission an amicus brief to shed light on the issue. There is a risk that they would choose—perhaps consciously, perhaps not—someone sympathetic to their concerns to write the brief. At the very least, the author of the brief would likely have a sense of what motivated the justices to ask for the brief.**

Indeed, a professionally written brief could actually be worse in some cases. Lay justices may be much more transparent in their biases than technical (but partisan) experts, who may be far more effective at covering their biases up. At least from a getting-to-the-truth perspective, obvious bias is preferable to hidden bias.

Perhaps, then, we should ask if there are any steps that the justices themselves can take that are relatively inexpensive. And one immediately jumps to mind.

At the very least, the justices themselves should explain, in detail, how they chose the articles they use. This is standard practice in scientific literature reviews. Any rigorous systematic review precisely explains how the authors searched for articles (such as the databases they used and the search terms they entered) and the criteria by which they decided which articles to keep and which to exclude. Given that the justices and their clerks are much more likely than trained researchers to do a poor job surveying a literature, the need for them to be transparent is all the greater. If the justices are going to engage in armchair scientific summaries, they should at least be willing to adhere to standard procedures that enable others to check their work.

Such an approach would certainly increase the transparency of the process, and in doing so could reduce cherry-picking in two different ways.*** First, it could reduce intentional cherry-picking by increasing the potential costs of engaging in it: transparency makes it easier for other justices and outside observers to identify dubious behavior and call the justices out on it. Second, it could reduce unintentional cherry-picking (i.e., the confirmation-bias problem) by forcing the justices to be more careful in how they gather and assess studies.

There could be a collateral benefit here as well. If the justices were more transparent, and if academics and scientists pointed out the flaws in how the justices selected the articles to consider, the justices could become more skilled in this task over time.

I think I’ll stop here—I’m close to writing more about Cavazos than the opinion itself contains, and this isn’t even a major case. But it does provide a good example of a fundamental problem with the judicial use of empirical evidence: even before asking whether the justices are properly evaluating the empirical work they are reading, we need to ask whether they are finding the right articles, and they currently take no steps to help us determine if they are. This is a troubling absence of transparency, and one that should be rectified.


* The Court at times requests that the parties submit briefs on particular issues, but this is a different solution: the justices would not be asking the parties for anything, but instead would acquire the briefs on their own.

** I generally dislike, fairly intensely, the rhetorical move I myself used just here (one that is common in a lot of legal scholarship): pointing out potential risks without any empirical evidence about whether they are real, and, if real, how large in magnitude. So I want to be clear that when I say “potential” I really mean “potential”: I could be completely wrong about this, but it is a concern at least worth thinking about.

*** To the extent that amicus briefs are equally non-transparent, they are worthy of the same criticism.


Posted by John Pfaff on January 11, 2012 at 02:13 PM | Permalink
