
Tuesday, May 19, 2009

How Far We've Come, and the Rhetoric of Evidence Based Policy

Just two short comments today.

First, last month I talked a bit about the technological revolution that has taken place in computing over the past several decades. Thanks to io9, I came across this article, which is not only a tribute to 1960s stock photography, but a great reminder of just how far we've come. Those machines make the TRS-80s (which we referred to as "Trash 80s" for a reason) in my grade school computer lab look sleek and modern.

Second, there's recently been a bit of a debate, not just among the on-line blogentia but also in Congress, about comparative effectiveness research. The debate itself is rather infuriating--some Congressional Republicans oppose studies of which medical treatments are more cost-effective because such research could be used in some hypothetical future to ration healthcare. (Or, put more accurately, could be used by the government to ration health care, since our insurance companies already do that on a daily basis.) Of course, there is no positive thing of any sort out there for which we cannot envision some sort of potentially bad use.

But that isn't what I want to talk about. Instead, I found the following argument interesting:

I worry that "Comparative Effectiveness"  or "CE" is going to be the next medical buzz word, just like "Evidence Based Medicine" or "EBM" has been the buzz word for a decade.  "Evidence Based Medicine" is a term which makes about as much sense as "Sex-based intercourse"--Were we practicing based on zodiac signs before EBM came along?  (By the way, I borrowed "sex based intercourse" after hearing a prominent chair of medicine say it--I don't know if he coined it, but I thought it was brilliant). Soon we'll have a generation of physicians who are CE experts to bump out the EBM experts. 

At one level, Verghese may be right about the rhetorical flair of "evidence based medicine." Who could possibly oppose that? Perhaps a less aggressive name would be "actuarial based medicine," although given that the actuarial sciences started when life insurance companies wanted to figure out when people would die, that name might have a particularly unfortunate resonance.

But there is a deeper conceptual problem with Verghese's argument. His argument implies that all evidence is the same, or that all ways of looking at complex statistical evidence are the same. EBM is built on the idea that there are different types of evidence, and that medicine has been relying on the wrong type for too long. Too often I've heard colleagues defend a methodologically weak study (not this-regression-is-missing-a-term weak, but this-question-is-not-amenable-to-that-type-of-empirical-investigation weak) as "a different way of looking at the issue," when it might simply be the wrong way to approach it. One needs to be careful and modest when declaring approaches wholly incorrect--I've criticized EBM for its categorical dismissal of non-experimental research--but still willing to differentiate, at a high level, the expected quality of various classes of evidence.

(I'm sure I could point out that the "sex-based intercourse" analogy suffers from a similar all-types-of-sex-are-equal flaw, but my Episcopalian upbringing utterly inhibits me from doing so.)

Here, the debate is not between evidence and astrology, but between clinical and actuarial evidence. Perhaps it is somewhat unfair of EBM, by adopting the "E," to suggest that non-actuarial evidence is not evidence at all--and it is possible to detect that kind of dismissive tone in the medical literature, equating observational work to opinion. But that does not undermine the basic point that the two types of evidence are not the same. (And in assessing them, it is essential to avoid the "broken-leg" trap, which Verghese may have done in a follow-up post.*) That EBM and EBP may be a bit snobby in their rhetoric should not distract us from the important point that they raise.

* I should be clear that I am not arguing for a categorical ban on clinical assessment in general (since the arguments about EBM apply anywhere discretion exists). Verghese points to an actuarial misdiagnosis and suggests that a clinical reassessment could have caught the error, just as it could catch the broken leg. Perhaps. And if we can confine the use of discretion to certain well-defined, obvious situations, perhaps such human judgment can be beneficial. But the more we let people override the model on a case-by-case basis, the more we lose the benefits of the actuarial model. And we are more likely to see the cases where the model gets it wrong (since the discretionary actor will be quick to say "see? I would have done it differently") than those where the discretionary actor gets it wrong (since he will be disinclined to say "whew! I would have screwed that one up!"). Eric Janus and Robert Prentky raise a similar concern when they point out that the flaws of actuarial models are more transparent than those of clinical assessment, which can make courts unwilling to use the former.
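The footnote's claim about overrides is at bottom a statistical one: if the model is right more often than unaided judgment, every override traded against the model lowers expected accuracy in proportion to the override rate. A minimal toy simulation makes the arithmetic concrete. All the numbers here (80% model accuracy, 70% clinical accuracy, a 30% override rate) are invented purely for illustration, not drawn from the post or any study:

```python
import random

random.seed(0)

N = 100_000          # simulated cases
P_MODEL = 0.80       # assumed accuracy of the actuarial model (illustrative)
P_CLINICIAN = 0.70   # assumed accuracy of unaided clinical judgment (illustrative)

def accuracy(override_rate):
    """Fraction of correct calls when the clinician overrides the
    model in a random `override_rate` share of cases."""
    correct = 0
    for _ in range(N):
        if random.random() < override_rate:
            correct += random.random() < P_CLINICIAN  # clinician decides
        else:
            correct += random.random() < P_MODEL      # model decides
    return correct / N

print(f"no overrides:  {accuracy(0.0):.3f}")   # ~0.800
print(f"30% overrides: {accuracy(0.3):.3f}")   # ~0.770
print(f"all overrides: {accuracy(1.0):.3f}")   # ~0.700
```

The blended accuracy is just the weighted average r * P_CLINICIAN + (1 - r) * P_MODEL, so under these assumptions every additional point of override rate costs accuracy; case-by-case discretion only helps if the clinician can reliably identify the "broken-leg" cases where the model is wrong, which is exactly what is hard to verify.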

Posted by John Pfaff on May 19, 2009 at 10:44 AM | Permalink





Another interesting post.

Typically, and correct me if I'm wrong, what is often meant by EBM is randomized controlled trials (RCTs; of course it's not just that, but...), and I want to say a few things about those and then about EBM in general (not all my comments are meant as criticism of something you said here). First, I agree that not all forms of evidence are the same and, in addition, our standards of what counts as sufficient or compelling evidence may differ according to the experimental or research context.

About RCTs, I think Rachel Cooper is correct in arguing that 1) RCTs can only be as good as the methods used to select subject populations, 2) RCTs can only be as good as the methods used to judge success, and 3) RCTs are better suited to judging certain types of treatment than others. There are other, largely extrinsic factors (e.g., those having to do with specific social and economic conditions and imperatives of the sort identified by the late John Ziman as particularly powerful in 'post-academic science'), as well as ethical ones, that are also important in any discussion of RCTs, but I'll leave those aside.

EBM is one very important factor in clinical judgment (as a species of practical reasoning, such judgment is rightly termed 'the je ne sais quoi of medical practice'), but it does not and should not invariably trump or determine such judgment (just as analytical or deductive reasoning should not invariably trump or crowd out other forms of reasoning). As to the arguments why, please see, for example, Kathryn Montgomery's How Doctors Think: Clinical Judgment and the Practice of Medicine, 2006 (not to be confused with Jerome Groopman's book of the same name, which, while interesting, is not as important as Montgomery's).

And I think (and have argued in an unpublished paper I hope to expand into a book) that we should refrain from assuming that nothing valuable or meaningful for health and healing will be discovered in so-called alternative and complementary medicine--in these medical doctrines and therapeutic modalities--apart from that which survives the scientific scrutiny, the scientific sieve if you will, of EBM. There is a problem if we come to think that EBM is simply the de jure and de facto arbiter of what counts as medical truth, of what contributes to the health and well-being of the individual person. In other words, the rules and strictures of EBM are best thought of as regulative principles, epistemic norms, heuristics, maxims, and the like, which are not absolute in character and reflect both the virtues AND shortcomings of a biomedical model of individual health and well-being.

Posted by: Patrick S. O'Donnell | May 19, 2009 2:15:16 PM

The comments to this entry are closed.