Saturday, November 10, 2012

Score 1 for Quants, but Score 5 for Pollsters

There's been a lot of talk after the election about how one big winner (after Obama, I imagine) is Nate Silver, of the FiveThirtyEight blog. He had come under fire in the days and weeks leading up to the election for his refusal to call the race a "toss up" even when Obama held only a narrow lead in national polls. He even prompted a couple of posts here (in his defense). It turns out that Silver called the election right - all fifty states - down to Florida being a virtual tie.

But that's old news. I want to focus on something that may be as important, or even more so: the underlying polling. We take it for granted that the pollsters did the right thing, but their methodology, too, was under attack. Even now, there are people - quants, even - who were shocked that Romney lost, because their methodology going into the election was just plain wrong.

So, that's where I want to focus this post after the jump - not just on "math" but on principled methodology.

It's easy to take pollster methodology for granted. After all, pollsters have been doing this for many, many years. Add to that the fact that the methodology is mostly transparent, and that past polls can be measured against outcomes. Taking all of this methodology information into account is where Silver bettered his peers who simply "averaged" polls (and how Silver accurately forecast a winner with some confidence months ago). Everybody was doing the math, but unless that math incorporated quality methodology in a reasonable way, the results suffered.
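To make the difference concrete, here is a minimal sketch (in Python) of a naive poll average versus a methodology-weighted one. The pollster names, margins, and weights are all invented for illustration; Silver's actual model is, of course, far more elaborate than this.

```python
# Naive vs. methodology-weighted poll averaging.
# All pollster names, margins, and weights are hypothetical.

# Each entry: (pollster, Obama-minus-Romney margin in points, weight).
# The weight stands in for methodology quality -- e.g., live interviews,
# cell-phone coverage, historical accuracy against outcomes.
polls = [
    ("Pollster A", +1.5, 0.9),
    ("Pollster B", -0.5, 0.4),
    ("Pollster C", +2.0, 0.8),
    ("Pollster D", +0.5, 0.6),
]

naive_avg = sum(margin for _, margin, _ in polls) / len(polls)
weighted_avg = (sum(margin * weight for _, margin, weight in polls)
                / sum(weight for _, _, weight in polls))

print(f"naive average:    {naive_avg:+.2f}")     # +0.88
print(f"weighted average: {weighted_avg:+.2f}")  # +1.13
```

The particular numbers don't matter; the point is that a quality-blind average treats a poor methodology and a good one identically.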

It didn't have to be that way, though. As Silver himself noted in a final pre-election post:

As any poker player knows, those 8 percent chances [of Romney winning] do come up once in a while. If it happens this year, then a lot of polling firms will have to re-examine their assumptions — and we will have to re-examine ours about how trustworthy the polls are.

This is the point of my title. Yes, Silver got it right, and did some really great work. The pollsters, however, used (for the most part) methodologies with the right assumptions, and those methodologies provided data accurate enough to reach the right answers. [11/11 addition: Silver just added his listing of poll result accuracy and methodology discussion here.]

The importance of methodology to quantitative analysis is not limited to polling, of course. Legal and economic scholarship is replete with empirical work based on faulty methodology. The numbers add up correctly, but the underlying theory and data collection might be problematic or the conclusions drawn might not be supported by those calculations.

I live in a glass house, so I won't be throwing any stones by giving examples. My primary point, especially for those who are amazed by the math but not so great at it themselves, is that you have to do more than calculate. You have to have methods, and those methods have to be grounded in sound scientific practice. Evaluation of someone else's results should demand as much.

Posted by Michael Risch on November 10, 2012 at 12:51 PM in Law and Politics, Legal Theory | Permalink


Comments

Right skahammer - that was my point generally. The aggregator can do some massaging and even tweak results for consistent bias (which is ironic - if you have a consistently biased sample, that might still provide info), but if the underlying polls are garbage (e.g. biased one way in one poll, another way in another poll), you can't do anything with that.
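Purely as an illustration of what that "tweaking for consistent bias" can look like: if a firm has historically run, say, two points too favorable to one side, an aggregator can subtract that house effect before averaging. A toy sketch in Python, with invented firms and numbers:

```python
# Toy house-effect correction: subtract each firm's historical
# average bias from its current reading before aggregating.
# All firms and numbers are invented for illustration.

# firm: (current margin, historical average bias), in points.
readings = {
    "Firm X": (+4.0, +2.0),   # consistently leans +2 -- still informative
    "Firm Y": (-1.0, -2.5),   # consistently leans -2.5 -- still informative
}

adjusted = {firm: margin - bias
            for firm, (margin, bias) in readings.items()}
estimate = sum(adjusted.values()) / len(adjusted)

print(adjusted)                      # {'Firm X': 2.0, 'Firm Y': 1.5}
print(f"estimate: {estimate:+.2f}")  # +1.75
```

A firm whose bias wanders from poll to poll offers no fixed offset to subtract, which is the "garbage" case.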

It wouldn't surprise me to hear of a bimodal distribution - those who were right, and those who were wrong :) I've amended the original post to add this link to the 538 listing and discussion of poll accuracy and methodology: http://fivethirtyeight.blogs.nytimes.com/2012/11/10/which-polls-fared-best-and-worst-in-the-2012-presidential-race/

Posted by: Michael Risch | Nov 11, 2012 8:38:25 AM

I also note another possibility regarding the distribution of pollster errors: The possibility that those errors fell into a bimodal (not normal) distribution. A distribution of this shape would be consistent with a theory that pollsters tend to slant their results toward one of two audiences: Repubs or Dems. An aggregator like Silver could still use these "reliably flawed" poll data to devise an accurate forecast.

I do recall reading some discussion of bimodal distributions being observed in 2012 poll data — but I can't remember the details at the moment.

My original point remains: If you want to evaluate the accuracy of the polls themselves, then forget for a moment about the forecasts made by Silver and the aggregators. You have to evaluate the individual poll results themselves.

Posted by: skahammer | Nov 11, 2012 2:34:16 AM

The fact that the underlying sample is representative doesn't do anything--does absolutely nothing--to help if the responding sample is skewed by differential response rates. There's nothing that you can do at that point, which is why pollsters all simply assume it isn't a problem.

The fact is, the various pollsters make a variety of methodological choices that aren't grounded in sound scientific practice. PPP, the pollster that Silver identifies as the most accurate this year, assumed that the electorate would have the same demographic make-up as the 2008 electorate. There is simply no sound scientific reason for that. But it worked. Various other pollsters made choices that also can't be defended on sound scientific grounds. (The pollster that does the most work to justify its decisions on sound scientific practice, Gallup, did not perform particularly well.)

Posted by: Thomas | Nov 10, 2012 7:22:22 PM

Well, some polls are better than others, for sure. This is how pollsters differentiate themselves. The determination of who is a likely voter, for example, is critical, and it's where the Romney camp had it all wrong and most of the pollsters were right. Also important is skew in response rate. A 10% response rate is immaterial if it is a random 10%. The use of cell phones helps with that.

But what we're talking about here is bias, not pinpoint accuracy given the small sample sizes (which can be fixed by aggregation and the central limit theorem). The polls all went for the eventual winner and were thus not biased. If the methodology were wrong, they would have all gone for the eventual loser, and no aggregation will fix that. As you aggregate, your normal distribution should get much taller in the center, and if your polls are all over the map, that won't happen.
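A quick simulation makes the bias-versus-noise distinction concrete. All of the numbers below are assumptions for illustration, not estimates from real polls: averaging many small, unbiased polls converges on the true margin, while the same averaging converges on the wrong number when the polls share a methodological skew.

```python
# Aggregation (via the central limit theorem) shrinks sampling noise,
# but a bias shared across polls survives any amount of averaging.
# All parameters are assumptions for illustration.
import random

random.seed(0)
TRUE_MARGIN = 2.0   # "true" margin in points (assumed)
NOISE_SD = 3.0      # per-poll sampling error (assumed)
N_POLLS = 500

unbiased = [random.gauss(TRUE_MARGIN, NOISE_SD) for _ in range(N_POLLS)]
skewed = [random.gauss(TRUE_MARGIN - 3.0, NOISE_SD)  # shared 3-point skew
          for _ in range(N_POLLS)]

print(f"true margin:            {TRUE_MARGIN:+.2f}")
print(f"mean of unbiased polls: {sum(unbiased) / N_POLLS:+.2f}")  # ~ +2.0
print(f"mean of skewed polls:   {sum(skewed) / N_POLLS:+.2f}")    # ~ -1.0
```

The unbiased average tightens around the truth as polls accumulate; the skewed one tightens just as confidently around the wrong number.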

Posted by: Michael Risch | Nov 10, 2012 1:28:58 PM

But despite Silver's triumph, the pollsters may have been way off anyway, since Silver's correct forecast might have been a result of properly adjusting for the pollsters' large errors.

For instance, if the pollsters' errors were very large but fell into a normal distribution when aggregated, then it would be relatively easy for an aggregator like Silver (relying on the Central Limit Theorem) to devise an accurate forecast from extensively (but predictably) inaccurate data.

If you're already skeptical about the estimating methods that pollsters use — especially when the response rates to their surveys are under ten percent — then Silver's correct predictions are no reason to reduce your skepticism about those methods. Even when Silver turns out to be bang-on, the pollsters can still be way off. Only analysis of each pollster's individual results will answer this question.

Posted by: skahammer | Nov 10, 2012 1:03:05 PM
