
Tuesday, September 10, 2013

Violent Media, Violent Crime, Old Data, and Self-Selection

Though I realize it is over two weeks old (a lifetime in BlogTime), I wanted to address a recent provocative op-ed in the New York Times about the alleged link between exposure to violent media and violent behavior. It is a great example of the seemingly compelling but actually low-quality work that dominates popularized, non-technical discussions of complicated empirical issues. It’s worth spending a few minutes pointing out its flaws and thinking about how to identify the warning signs of bad pop-empiricism.

The article’s basic point is one that likely aligns with intuitions many of us have:

There is now consensus that exposure to media violence is linked to actual violent behavior — a link found by many scholars to be on par with the correlation of exposure to secondhand smoke and the risk of lung cancer.

It makes sense, instinctively, that watching something violent would stir up violent feelings, if not actions, in the viewer. But a lot of red flags quickly appear.

Take the article’s lead piece of evidence: a meta-analysis of 217 studies published between 1957 and 1990 (from an article published in 1994). Kudos to the authors for citing a meta-analysis, not just a handful of cherry-picked studies that accord with their prior beliefs… but. Two big problems:

1. Meta-analytic techniques have improved greatly over the past twenty or so years, so, as a meta-analysis, a study from 1994 is quite dated.

2. More significantly, empirical work in general has improved by leaps and bounds since 1990, so all the studies in this meta-analysis are likely of lesser methodological quality. Garbage in, garbage out. Empirical work is not like wine: it doesn’t improve with age. With few exceptions, I’m generally wary of any study that is more than ten or twenty years old. Either it’s been superseded, or you can find a more recent study that validates its findings. So a meta-analysis of outdated studies, no matter how methodologically skillful, will be outdated itself.
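
As an aside, for readers unfamiliar with what a meta-analysis mechanically does, here is a minimal sketch of classic fixed-effect, inverse-variance pooling. Every number below is invented for illustration; none of it comes from Comstock and Paik or any real study.

```python
# A minimal sketch of fixed-effect, inverse-variance meta-analytic pooling.
# All numbers are invented for illustration; they are NOT from
# Comstock and Paik (1994) or any real study.

# Hypothetical (effect size, standard error) pairs, one per study.
studies = [
    (0.30, 0.10),
    (0.15, 0.05),
    (0.45, 0.20),
    (0.10, 0.04),
]

# Each study is weighted by the inverse of its sampling variance, so noisy
# estimates count for less. Note what the weights do NOT capture: design
# quality. A large but badly designed study still gets a large weight.
weights = [1 / se**2 for _, se in studies]
pooled = sum(w * r for (r, _), w in zip(studies, weights)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5

print(f"pooled effect: {pooled:.3f} (standard error {pooled_se:.3f})")
```

The pooling step is purely mechanical, which is exactly the sense in which “garbage in, garbage out” applies: if the underlying studies share a methodological flaw, the pooled estimate inherits it, with a deceptively small standard error to boot.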

And it's not like there are no other options. It took me about one minute on Google Scholar to find a recent meta-analysis of video games that was consistent with their claim, although that same search turned up at least two reviews suggesting that the 1994 study was an outlier. See also this review, which notes on page 43 that the 1994 study is widely cited likely because it gives the most extreme results, and that subsequent reviews have had a hard time linking media exposure to aggression, much less to violent crime.

More disturbingly, the authors appear to try to mask the staleness of the study on which they rely. Consider the argument they make:

There is now consensus that exposure to media violence is linked to actual violent behavior — a link found by many scholars to be on par with the correlation of exposure to secondhand smoke and the risk of lung cancer. In a meta-analysis of 217 studies published between 1957 and 1990, the psychologists George Comstock and Haejung Paik found….

The emphasis is mine. The first time I read it, I honestly thought that the meta-analysis was from the late 2000s or early 2010s. That’s what “now” means to me. When I tried to find the article and came across the 1994 paper by Comstock and Paik, I was sure I had found an earlier collaboration between them. The age of the paper is further hidden by the fact that the op-ed links not to the article itself (which is admittedly hidden behind a paywall), but to a 2008 article that cites the meta-analysis, one that buries the date of that paper in endnote 86 on a separate page.

Maybe I’m putting too much weight on the word “now,” but it seems suspicious to me.

So perhaps I take back some of my kudos. They appear to have cherry-picked their meta-analysis. I’m not alleging any sort of bad faith here—confirmation bias is a hell of a drug. But that doesn’t make the results any more reliable.

They also stretch the findings of other studies to the breaking point. Consider the following argument they make:

The question of causation, however, remains contested. What’s missing are studies on whether watching violent media directly leads to committing extreme violence….

Of course, the absence of evidence of a causative link is not evidence of its absence. Indeed, in 2005, The Lancet published a comprehensive review of the literature on media violence to date. The bottom line: The weight of the studies supports the position that exposure to media violence leads to aggression, desensitization toward violence and lack of sympathy for victims of violence, particularly in children.

The first paragraph is all about the hypothesized link between media violence and actual violence. The Lancet results they cite? “Aggression,” yes. But none of the rest of the factors relate to violent behavior. In fact, what is the true bottom line of the Lancet article? The very last sentence is the following:

However, there is only weak evidence from correlation studies linking media violence directly to crime.

Ouch. No experimental evidence, and only weak correlational evidence.

And the weakness of the correlational evidence is doubly harmful to their basic argument. First, it’s weak, which is bad for them. Second, correlational evidence should overstate the magnitude of the effect, so what we have is a weak overestimate.

Why should it overestimate the effect? Self-selection. Media exposure may lead to violence, but violent tendencies should lead to watching more violent media (they do, and I’m about to discuss one of my favorite papers to show that).1 These self-reinforcing effects mean that the correlation between exposure and conduct will be greater than the causal effect of exposure on conduct. That the correlation is weak thus bodes ill for the size of the causal link.
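
To make that concrete, here is a minimal simulation, with made-up parameters of my own (this is not an attempt to model the actual literature), in which an unobserved violent disposition drives both media exposure and violent behavior:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Unobserved violent disposition: the confounder.
disposition = rng.normal(size=n)

# Self-selection: more violence-prone people watch more violent media.
exposure = 0.8 * disposition + rng.normal(size=n)

# Assume a small TRUE causal effect of exposure, plus a large direct
# effect of disposition itself. All coefficients here are invented.
TRUE_EFFECT = 0.1
violence = TRUE_EFFECT * exposure + 1.0 * disposition + rng.normal(size=n)

# The naive estimate -- regressing violence on exposure while ignoring
# disposition -- is cov(exposure, violence) / var(exposure).
naive = np.cov(exposure, violence)[0, 1] / np.var(exposure)

print(f"true causal effect: {TRUE_EFFECT:.2f}")
print(f"naive estimate:     {naive:.2f}")  # roughly 0.59 with these numbers
```

With these invented numbers the naive estimate comes out roughly six times the true effect. That is the sense in which a weak raw correlation is doubly bad news: self-selection means even the weak number is likely an overstatement.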

Also, note the weasel-move: “Of course, the absence of evidence of a causative link is not evidence of its absence.” True. But equally true: “Of course, the absence of evidence could indicate that there is no causative link.” This is all too common a tactic in empirical writing: “we have no evidence, but let’s assume….” It isn’t necessarily wrong, but it should be viewed with great skepticism.

But let’s get back to that selection-effect problem. A paper in the Quarterly Journal of Economics once demonstrated that violent crime drops noticeably on days when violent movies draw larger audiences. The basic causal claim: violent people go to the movies rather than someplace to drink. So between 6 pm and midnight they are “incapacitated” in the theater, and from midnight to 6 am (when the drop in violence is even greater) they are not out doing stupid drunken things. And there is no evidence of any sort of post-attendance blip even as late as three weeks after the spike in attendance.

In other words, the selection effect appears to be quite strong, at least in older people.
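
If the timing logic seems abstract, a toy simulation helps. The sketch below is not the paper’s data or design; all the parameters are invented. It just shows that a pure time-substitution story predicts exactly the observed pattern: a same-evening drop in violence and no lagged blip weeks later.

```python
import numpy as np

rng = np.random.default_rng(1)
days = 365

# Hypothetical daily attendance at violent movies (millions) -- invented.
attendance = rng.gamma(shape=2.0, scale=1.0, size=days)

# Pure incapacitation/substitution: each extra million attendees removes a
# fixed number of assaults that same evening, with NO lagged "stirred-up"
# effect built in. Parameters are made up for illustration.
baseline, per_million = 100.0, 5.0
expected = np.clip(baseline - per_million * attendance, 0, None)
assaults = rng.poisson(expected)

same_day = np.corrcoef(attendance, assaults)[0, 1]
lag_21 = np.corrcoef(attendance[:-21], assaults[21:])[0, 1]

print(f"same-day correlation:   {same_day:+.2f}")  # clearly negative
print(f"21-day-lag correlation: {lag_21:+.2f}")    # near zero
```

In the simulation the same-day correlation is strongly negative while the three-week-lag correlation hovers around zero, matching the reported pattern with no causal effect of exposure on later violence at all.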

It also means that if you want to fight violent crime, you should start a Kickstarter to fund a reboot of the Saw series.

So where does that leave us? First, to be clear, there may be other public health reasons to oppose violence in media. The same Lancet review that found little evidence linking violent media to crime did note studies linking it to short-run psychological costs like fear, nightmares, etc. These are real costs, and nothing to scoff at.

But this op-ed is a great example of trying to force the data to support a point it doesn’t really support. The authors used an outdated, extreme-valued meta-analysis despite other, newer, easily found reviews (all of them publicly available2) reaching differing conclusions. The authors blurred violent behavior together with other problematic behavioral and emotional outcomes in ways that ultimately appear to misrepresent what the papers were arguing. And they never honestly confronted the self-selection problem, which in this case is quite severe.

So, you say, I’ve spent a lot of words tearing down a New York Times op-ed piece. Why?

Well, partly because it just feels good. Seeing bad translational work (from the academic literature to the public) is always upsetting. Doubly so when the authors run a consultancy aimed at translating empirical academic literature for non-technical audiences.

But, less solipsistically, I think there are some good lessons here for non-empirical readers to use when reading this sort of translational work:

1. Are the authors relying on meta-analyses or systematic reviews rather than individual studies? Mixed marks here: they do at first, but then turn to newer, individual studies.

2. Are the studies new? Empirical work is a fast-changing field. Philosophers may cite people from the 400s (both AD and BC), but empirical work before, say, 2000 or so, is troubling. A meta-analysis from 1994 including studies from 1957? Two strikes.

3. If there are no experimental or quasi-experimental results, do the authors really engage with the problems of self-selection? Is their discussion convincing, or do they just try to hand-wave it away? Low marks here as well.

All in all, a poor showing. Perhaps in part because the authors aimed too high. They wanted to tie media violence to violent criminal behavior, and the evidence base simply isn’t there for them. Had they tried to argue that media violence can have real, if short-run, public health costs beyond violent crime, they would have had much more empirical support.

Of course, the flaws in their reasoning here don’t mean that the authors’ argument is wrong: maybe exposure to media violence really does lead to more violence. But this essay doesn’t prove it, and its numerous gaping holes all suggest that it seriously overstates the problem.

Which itself has real social costs, since in a world of limited resources and attention, we can only attack so many causes of violent behavior. Do we really want legislatures, policymakers, or even media executives to waste time focusing on a factor that could be insignificant, at least without more compelling evidence?3


1 Or there is a common cause: distant parenting leads both to more exposure to violent media and to more violent dispositions, independently of the media exposure. That’s the problem with the 2013 Pediatrics study the authors cite. It relies on parental self-reports to try to control for this possible spurious correlation. That’s not convincing.

2 I visited all the links here from home, not from work, so unless Optimum Cable has a deal with Lancet and other academic journals, all of this is publicly available.

3 Of course, the argument for parents is slightly different. Obviously, there’s no evidence that media violence leads to better behavior, and none of the flaws with the papers these authors cite suggest the true answer is “watch more gunplay!” So having parents restrict what their kids see—like I do with my kids—appears to have no real direct cost besides the hassle of monitoring them, and could have some potential upside. 

Posted by John Pfaff on September 10, 2013 at 10:08 AM


Comments

Funny how that little correlation =/= causation thing always gets forgotten, isn't it?

Posted by: First day o' stats class. | Sep 12, 2013 2:58:32 PM
