
Thursday, May 14, 2009

Systematic Reviews and Adversarialism: Truth and Arbitrariness

In my last post, I laid out some of the major arguments for favoring adversarial debates about scientific evidence over independent experts. Now I want to start pointing out how systematic reviews retain the benefits of independent expertise while addressing the concerns raised by the defenders of adversarialism. In this post I'll talk about the relationship between systematic reviews and (1) uncovering the truth and (2) avoiding arbitrary outcomes; in a subsequent post I'll take on party control and dispute resolution.

At one level, my previous posts (assuming you accept my arguments) make the question of truth-finding a trivial issue. Systematic reviews produce knowledge better than adversarial techniques, whether in a courtroom or a lab. But I want to take care to point out how systematic reviews directly confront the concerns voiced about independent experts.

1. Incentives to find relevant evidence. The rigorous search methods used in systematic reviews are designed to find every relevant study. Indeed, part of the impetus for developing EBP was precisely the fear that researchers often overlooked important results. These search methods are still being refined, though, and analysts may still miss studies. That gap need not be fatal: here, systematic reviews and adversarialism can work hand in hand. Let the parties submit studies for consideration. Such an approach retains the information-gathering incentives of adversarialism while preserving the independence of the review itself.

2. Incentives to find the best expert. First, there is a definitional issue. The best expert for the parties is not necessarily the best expert for the factfinder. Parties have an incentive to find the expert who will advocate their position most forcefully. So if the evidence strongly favors Party A over Party B, the best expert for Party B is likely the worst expert for a neutral factfinder trying to get at the truth. By "best," then, I mean best for the factfinder.

Here, we have competing effects. It is true that judges have less of an incentive to look for experts. But work by Hooper et al. indicates (not surprisingly) that scientists dislike adversarial procedures. So as the process becomes less adversarial, the pool of available experts will deepen, and judges thus will not have to look so hard to find them. Third-party assistance, such as the AAAS's CASE project, can help as well. And finally, we can once again tap into the parties' stronger incentives, such as by having the parties suggest names and then requiring both to sign off on the chosen experts.

3. Confirmation bias. Again, this was a concern that motivated the development of EBP. The ex ante guidelines are designed in part to confront this very problem--the guidelines are established before the bias has set in. Having multiple reviewers can further mitigate this risk (especially if they review the studies in different random orders). Plus, note that the sequential presentation of evidence at trial can lead to this problem even in the adversarial context.
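The random-ordering idea can be sketched in a few lines of Python. This is only an illustration (the study and reviewer names are placeholders): each reviewer sees the same pool of studies, but in an independently shuffled sequence, so no shared presentation order can anchor all of them the same way.

```python
import random

def assign_review_orders(studies, reviewers, seed=0):
    """Give each reviewer the same pool of studies in an independent
    random order, so no single presentation sequence anchors everyone."""
    rng = random.Random(seed)  # seeded for reproducibility of the sketch
    orders = {}
    for reviewer in reviewers:
        shuffled = list(studies)
        rng.shuffle(shuffled)  # a fresh permutation per reviewer
        orders[reviewer] = shuffled
    return orders

orders = assign_review_orders(
    studies=["Study A", "Study B", "Study C", "Study D"],
    reviewers=["Reviewer 1", "Reviewer 2", "Reviewer 3"],
)
for reviewer, sequence in orders.items():
    print(reviewer, sequence)
```

Each reviewer ends up evaluating the identical evidence base; only the order varies, which is the point.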

4. Using cross-examination to reveal biases. Again, the purpose of guidelines is to highlight and reveal biases in the underlying work. Guidelines can be designed to capture numerous sources of bias. Thus they can ask not only "did the study control for self-selection bias?" but also "was the study funded by an industry group?" Any concern that could come up on cross-examination can be put into the guidelines.
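As a toy illustration of how such a guideline might be encoded, here is a hypothetical checklist. The questions and weights are made up for the example and do not come from any actual EBP instrument; the point is only that cross-examination-style concerns can be written down ex ante and applied uniformly to every study.

```python
# Hypothetical checklist: (question, weight). Positive weights reward
# methodological safeguards; negative weights flag sources of bias.
GUIDELINE = [
    ("Did the study control for self-selection bias?", 2),
    ("Was the study randomized?", 2),
    ("Was the study funded by an industry group?", -1),
    ("Were outcome measures specified in advance?", 1),
]

def score_study(answers):
    """Sum the weights of every checklist item answered 'yes'.
    `answers` is a list of booleans, one per GUIDELINE question."""
    return sum(weight for (question, weight), yes in zip(GUIDELINE, answers) if yes)

print(score_study([True, True, False, True]))  # 2 + 2 + 1 = 5
```

Because the questions and weights are fixed before any study is read, the same concerns get applied to every study rather than surfacing selectively under cross-examination.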

Moreover, the party weighing the implications--the independent expert rather than the jury--is far better equipped for that task, because experts understand how much weight a particular source of bias deserves. As a result, outcomes will be less arbitrary, less driven by the guesswork of an epistemically incompetent party. Quality guidelines retain the benefits of bringing methodological, ideological, and financial problems to light while greatly mitigating the risk of inappropriate confusion.

5. Judicial manipulation of (un)due deference. This concern is addressed by the flip side of the guidelines' goals. Guidelines are designed not just to reveal biases, but to constrain the reviewer. And by limiting reviewer discretion, guidelines greatly reduce the ability judges could have to selectively choose experts.

6. Avoiding corpusculation. One concern with Daubert is the risk of what Thomas McGarity refers to as "corpusculation": the court evaluates each study on its own, finds it insufficient, and tosses the studies out one by one. Viewed as a whole, however, the studies may have reinforced one another to produce a whole greater than the sum of its parts--this is just the crossword puzzle once more. Corpuscular review misses the forest for the trees. Systematic reviews are expressly designed to overcome this flaw.
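The arithmetic behind corpusculation can be made concrete with a small Python sketch using Stouffer's Z method, one standard way of pooling evidence across studies (the p-values here are hypothetical, and nothing in the post mandates this particular method). Four studies, none significant on its own at the conventional 0.05 level, are jointly well below it.

```python
from math import sqrt
from statistics import NormalDist

def stouffer_combined_p(p_values):
    """Combine one-sided p-values via Stouffer's Z method: convert each
    p-value to a z-score, sum the evidence, and renormalize."""
    nd = NormalDist()
    z = sum(nd.inv_cdf(1 - p) for p in p_values)
    return 1 - nd.cdf(z / sqrt(len(p_values)))

# Four studies, each individually non-significant (p = 0.10 > 0.05)...
studies = [0.10, 0.10, 0.10, 0.10]
combined = stouffer_combined_p(studies)
print(round(combined, 4))  # ...but jointly significant (well below 0.05)
```

A corpuscular court would discard each p = 0.10 study in turn and find nothing; a systematic review pooling the same four studies finds strong combined evidence.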

Despite these benefits, the use of systematic reviews is not without its challenges. If nothing else, there exists a serious issue I have glossed over: so far I've taken the existence of the guidelines as a given. In my next post I will examine how to develop guidelines in an adversarial system.

Posted by John Pfaff on May 14, 2009 at 10:28 AM | Permalink




Thanks for this series of posts. I'm inclined to agree with many of your criticisms of the adversarial approach.

One of the problems here has to do with the fact that there is rarely 100% consensus among experts, though sometimes there is, say, 99% consensus. The reason 99% consensus is more common than 100%: scientists are trained to subvert the dominant paradigm (we value and reward, most of all, the discoveries that overthrow it, so scientists are always looking out for those kinds of discoveries), which means it's not unusual to find professional contrarians among the ranks of scientists. Personally, I value that diversity. But it means that if you pick a random subject that's reasonably well-settled, you can probably still find one or two credentialed contrarians to take the other side. In an adversarial system, the party with the weaker case can still pick that one contrarian. So you often end up with an expert witness from each side, each disputing the other's claims; and psychologically it's then easy to throw up your hands and say "the scientific community can't make up its mind, so we'll ignore them both -- a pox on both their houses," or to treat it as a 50-50 split among experts in the field, though in reality the question is better settled than the courtroom battle would suggest.
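To put rough numbers on that intuition: even with 99% consensus, a party that can search the field will almost always find its contrarian. The 1% contrarian rate and pool sizes below are purely illustrative assumptions, not data about any real field.

```python
def p_find_contrarian(n_experts, contrarian_rate=0.01):
    """Probability that a party canvassing n_experts independently drawn
    experts finds at least one contrarian: 1 - (1 - rate)^n."""
    return 1 - (1 - contrarian_rate) ** n_experts

# With a 1% contrarian rate, the odds of finding at least one dissenter
# grow quickly with the size of the search.
for n in (50, 100, 300):
    print(n, round(p_find_contrarian(n), 2))
```

So a "one expert per side" courtroom can reliably manufacture an appearance of 50-50 disagreement out of a field that is 99% settled.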

I like that your proposed replacement might address this.

However, I do have some concerns about this guideline business. If there is just one independent expert, appointed by the court, I'm pretty skeptical that up-front guidelines can prevent errors, oversights, or bias from that expert. Your measures could reduce those kinds of problems, but not eliminate them. So in those (hopefully rare) cases where the independent expert's report has shortcomings, it might be important to reserve the parties' right to produce their own experts to identify them. If your proposal doesn't provide some way to address this, I'd be concerned that we are just replacing one set of problems with another.

Posted by: Anonymous Computer Scientist | May 16, 2009 3:36:01 PM

The comments to this entry are closed.