Polling analyst Nate Silver of the New York Times’ FiveThirtyEight blog was referring to competing polls that showed contradictory findings:
I’d just seen a Marquette University poll of Wisconsin, which put President Obama 14 points ahead of Mitt Romney there. This came after a Rasmussen Reports poll of New Hampshire, published earlier that day, which had given Mitt Romney a three-point lead in the Granite State.
but he could easily have been speaking of the Peach State, where local “pollster” Insider Advantage showed Romney with a 21-point lead over President Obama, while a competing poll by YouGov showed only a 6-point Romney lead.
So, what’s going on when different “scientific” polls show vastly different results? Silver has one set of plausible explanations.
There are also going to be some outliers — sometimes because of unavoidable statistical variance, sometimes because the polling company has a partisan bias, sometimes because it just doesn’t know what it’s doing. (And sometimes: because of all of the above.)
The San Francisco Chronicle has an article out that discusses factors that may explain differences in polling outcomes.
At this time of year, the difference between poll results can be explained by everything from who is being surveyed (are they “likely” voters or just “registered”) to how many cell phone users (who are generally younger and from more diverse backgrounds) are contacted to how the questions are worded.
And while top pollsters try to adhere to common standards and best practices, there is a lot of room for interpretation in the way each constructs their universe of respondents.
“It’s a mixture of magic and science and research – and there’s more magic now because we have less science to guide our decisions,” said Oakland pollster Amy Simon, who is a leading expert in public opinion on same-sex marriage.
They also have suggestions for how to interpret polls, given the variance that is out there.
Consider the respondents: “Likely voters” are more credible, as they’re, well, more likely to vote. “At this point, don’t look at anything from registered voters,” said Oakland pollster Amy Simon. See if the poll includes cell phone users, who tend to be from more diverse backgrounds, younger and more likely to live in urban areas.
Examine the wording of questions: UC Berkeley Professor Gabe Lenz often teaches his students about a poll from the 1970s where 44 percent of Americans said they would not allow a Communist to give a speech, but only 22 percent would “forbid” it. The difference: Many people are often reluctant to sound harsh to a live interviewer, which “forbid” implies.
Treat a pollster like a movie critic: “Pick a poll and follow it,” said Michael Dimock of the Pew Research Center. “You can follow its nuances and learn its tendencies.” Others, like Lenz, said peace of mind can be found with those who aggregate the major polls and incorporate them into a trend, like Nate Silver of the FiveThirtyEight blog and RealClearPolitics.com.
At the end of the day, here’s my recommendation for public consumers of polling data. Take the Olympic scoring approach: toss out the highest and lowest numbers, then average the rest, weighting by sample size. In statistical terms, you’re removing the outliers and effectively broadening the sample. That’s not precisely correct, but it’s a pretty good back-of-the-envelope method that might help you make some sense out of competing polls.
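For readers who like to see the arithmetic spelled out, here is a minimal sketch of that Olympic-style averaging. The poll numbers and sample sizes below are hypothetical, purely for illustration:

```python
def olympic_average(polls):
    """Drop the highest and lowest results, then take a
    sample-size-weighted average of the remaining polls.

    `polls` is a list of (result, sample_size) tuples.
    """
    if len(polls) < 3:
        raise ValueError("Need at least three polls to trim both extremes.")
    # Sort by the poll result, then drop the lowest and highest.
    trimmed = sorted(polls, key=lambda p: p[0])[1:-1]
    # Weight each remaining result by its sample size.
    total_n = sum(n for _, n in trimmed)
    return sum(result * n for result, n in trimmed) / total_n

# Hypothetical polls of one race: (candidate's lead in points, sample size)
polls = [(21, 600), (6, 800), (12, 1000), (9, 750)]
print(round(olympic_average(polls), 1))  # → 10.7
```

Here the 21-point and 6-point outliers are discarded, and the two middle polls are blended with the larger sample counting for more, which is the back-of-the-envelope spirit of the recommendation above.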
Check back later today for more on this issue, and our recent survey on the Presidential race in Georgia.