Sunday, November 4, 2012

The Art of Polling



It’s the last few days of the election, and the race (as usual) has been tightening between President Barack Obama and Mitt Romney. As such, a flurry of polls will be released, especially those from the battleground swing states, detailing how close the candidates are to winning.

While polling is an important part of the political process, it's worth understanding that polling is just as much an art as it is a science.

Now, that seems counter-intuitive to many people; statistics are all numbers, right? And numbers are pretty much black and white, right? Well, welcome to the world of statistics, where numbers are anything but.
I won’t go into the philosophy of statistics here (and, yes, there is such a thing), but I’m going to focus specifically on political polling. 

Let’s take a look at a regular poll: the race is 48% Romney, 49% Obama. The poll has a +/- margin of error of 3.5%, and there were 1080 respondents. 

The results are just that: of the people they polled, 48% answered "Romney" and 49% answered "Obama." (Except they totally didn't, but we'll get back to that in a moment.) They probably sent out thousands of surveys (most likely robocalls, i.e. automated phone calls, though responses can be gathered in other ways as well) and got 1080 responses. The number of responses, not the response rate, is what determines the margin of error, which means each of those numbers can be off by up to 3.5 percentage points. If a poll is "within the margin of error," that means the difference between the two answers (in this case, 1%) is smaller than the amount by which either answer could be off (in this case, 3.5%). With varying degrees of likelihood, this poll could easily be 49% Romney and 48% Obama, or (much less likely) 51.5% Romney and 45.5% Obama. There is also a chance (pretty small, somewhere in the range of 5%, depending on the statistical models used) that the results are just plain wrong. So, roughly speaking, there is a 95% chance that the true numbers fall within 3.5 points of the reported 49%-48% results.
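
For the curious, here's the textbook version of that calculation, a minimal sketch assuming a simple random sample (real pollsters' formulas are more involved):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a polled proportion, assuming a simple
    random sample of n respondents; p=0.5 is the worst case."""
    return z * math.sqrt(p * (1 - p) / n)

print(f"{margin_of_error(1080):.1%}")  # about 3.0% for 1080 respondents
```

Note that the plain random-sample math gives roughly 3.0% for 1080 respondents; the slightly larger 3.5% a pollster reports usually reflects the extra uncertainty introduced by the weighting described below.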

(Note to statisticians and econometricians who may be reading this: Yes, I realize I'm glossing over and criminally simplifying a large part of your profession. Deal with it.)

So those are the numbers. But how are those numbers created?

First, we need to look at how the questions are asked. For presidential polling, this has two major components. The first is the "screen," which pollsters use to determine whether you are a likely voter and/or how devoted you are to a candidate. Usually these seem like mundane questions ("How much attention have you paid to election coverage?") but they convey a lot of information to pollsters. Most pollsters will have a set of questions they use (and their own formulas to determine a respondent's likelihood of voting) to screen a person first.

Then, they have to determine how to ask the question. While this may seem straightforward, it's actually very, very tricky. Studies have shown that even small, seemingly neutral changes in the wording of a question can cause wide swings in results. For example, asking "If the election were held today, who would you vote for?" and "Who would you vote for if the election were held today?" can actually yield different results, even though it's the exact same question. Most pollsters will account for this, but it's still a very difficult thing to keep neutral.

Once they have the results, however, they have to be weighted. What happens when you get all 1080 responses back, but it's clear that 60% of the respondents identify as Democrat and 40% as Republican, yet the state you are focused on is a true 50/50 split? Your results are clearly going to skew Democrat because you didn't take a "true" random sample. Pollsters, therefore, will take the results and adjust them to match the demographics of the area they are polling in. So in this case the Democrats' answers would be weighted down while the Republicans' answers would be weighted up. (Note that this still preserves the proportion of Democrats who vote Romney or Republicans who vote Obama.) These demographics aren't just about party affiliation; they could be about age, income, race, and so on, although they are limited to the questions asked during the polling.
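
To make that concrete, here's a minimal sketch of the reweighting, using the hypothetical 60/40 sample and 50/50 target from the paragraph above:

```python
# Hypothetical numbers from the example above: 1080 responses that came
# back 60% Democrat / 40% Republican, reweighted to match a 50/50 state.
sample_counts = {"Democrat": 648, "Republican": 432}
target_share = {"Democrat": 0.50, "Republican": 0.50}

total = sum(sample_counts.values())
weights = {party: target_share[party] / (count / total)
           for party, count in sample_counts.items()}

print(weights)  # each Democrat counts ~0.83x, each Republican ~1.25x
```

Real pollsters weight across many variables at once (age, income, race, and so on, often via a procedure called raking), but the principle is the same: each response is scaled so the sample's makeup matches the target population.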

Of course, things can get more creative (and trickier) than that. For example, in 2008 there were a ton of first-time voters who voted for Obama, a portion of whom probably won't vote again (or may vote for Romney). The pollsters may have to take this into consideration when they determine whether someone is a likely voter or not. However, no one really knows exactly what this number will be. Will Obama retain 80% of them, or just 75%? That alone might make a difference of half a percentage point in your final results; and if the real number is drastically different (50%? 95%?), then all the poll calculations are way off.
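
Here's that back-of-the-envelope arithmetic as a sketch, assuming, purely hypothetically, that 2008 first-time Obama voters make up 10% of this year's likely-voter pool:

```python
# Hypothetical: first-time 2008 Obama voters as a share of likely voters.
first_timer_share = 0.10

# Points of the final result they contribute to Obama, under different
# assumptions about how many of them turn out for him again.
for retention in (0.50, 0.75, 0.80, 0.95):
    points = first_timer_share * retention * 100
    print(f"{retention:.0%} retention -> {points:.1f} points for Obama")
```

Under those assumptions, the gap between 75% and 80% retention is exactly half a point, while 50% or 95% swings the result by whole points.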

After all this, throw in voters who give inaccurate information (either intentionally or accidentally), or true undecideds who change their minds after answering, and you get a sort of built-in, unmeasurable margin of error.

So, when you see any sort of poll, you're looking at numbers that:

  • Have been filtered through a screening process to determine how likely each respondent is to vote;
  • Might be skewed by the wording of the question;
  • Have been adjusted by the pollster's demographic weighting formula;
  • Carry the built-in fuzziness of undecided or fickle voters;
  • Include the margin of error present in any statistical result; and
  • Still have a small chance of being completely wrong.

Pretty much every single step in this process requires either a judgment call from someone or some sort of expected variance in the results. It's hard not to wonder how accurate the results are.
The good news, of course, is that pollsters are professionals. They try to account for each of these problems as best they can. Most pollsters pride themselves on being accurate and neutral, and so they iron out, to the best of their abilities, any biases that may arise in the results. In addition, there is a sort of "natural" tendency for some of these factors to balance out; for example, inaccurate or accidental answers are most likely split evenly between the parties, so neither side gets a net gain.

Still, there is a fear that the polls are more art than science. What if pollsters assume that the cell-phone-only voters they aren't calling are 80% Democrats, when in reality the figure has fallen to 70%? Obviously, for non-close states this matters little—who cares if the Obama vote is understated by 4% if he's going to win by 25% anyway? However, if all the major pollsters are pulling from similar information, they may all be making the same mistakes in their formulas. The same goes for successive polls: if every single poll in Ohio shows Obama with a 2% lead, that either means he has a legitimate 2% lead, or that the pollsters keep making the same mistake every time they poll Ohio. Unfortunately, we won't know which until after the election.
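
As a toy illustration of why that's worrying (with made-up numbers throughout): averaging many polls washes out independent sampling noise, but it cannot wash out a mistake every pollster shares.

```python
import random

random.seed(0)
true_lead = 2.0    # pretend Obama's true Ohio lead is 2 points
noise = 3.0        # per-poll sampling noise (standard deviation, points)
shared_bias = 1.5  # a hypothetical error every pollster makes

honest = [true_lead + random.gauss(0, noise) for _ in range(50)]
biased = [true_lead + shared_bias + random.gauss(0, noise) for _ in range(50)]

print(f"average of 50 independent polls: {sum(honest)/50:.1f}")  # near 2.0
print(f"average of 50 biased polls:      {sum(biased)/50:.1f}")  # near 3.5
```

No number of additional polls fixes the second average; only the election itself reveals the bias.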

Since pollsters keep their formulas secret—for good reason—we won't know if there are any particular biases. However, the main point is that if a poll shows Romney winning by 1% in Florida or Obama winning by 2% in Nevada, it actually means that pollsters—or any of us, really—have no idea who is winning.

2 comments:

  1. I am always, always persuaded by this guy's approach and analysis. He has Obama as an 85% favorite.

    http://fivethirtyeight.blogs.nytimes.com/2012/11/03/nov-2-for-romney-to-win-state-polls-must-be-statistically-biased/#more-37099

    On polling-industrywide statistical bias: "The FiveThirtyEight forecast accounts for this possibility. Its estimates of the uncertainty in the race are based on how accurate the polls have been under real-world conditions since 1968, and not the idealized assumption that random sampling error alone accounts for the entire reason for doubt."

  2. Ha. I *specifically* called out Nate Silver because I think his analysis is shaky. I don't necessarily think that he's wrong, but I've looked at his analysis of 2008, and while he was right, he wasn't so much more fundamentally right than anyone else that his abilities should be touted as flawless the way they are.

    I think looking at historical polling is only useful up to a point (I can't imagine anything in 1968 that would be relevant today), but even looking at, say, 2004 vs. 2008 vs. 2012, there are too many question marks (first time voters, demographic changes, cell phones, etc.) for anyone to be all that certain.
