Georgia Polling Report: Are robopolls reliable?


Last month, we ran a robopoll asking about the upcoming Charter School Amendment and it was picked up in a story by Walter C. Jones of Morris News Service.

When it ran in the Athens Banner-Herald, there was an interesting comment that I didn’t notice until yesterday. Someone purporting to be Barry Hollander wrote:

this appears to have been a robo-poll. Most professionals do not put a lot of stock in these. Still, the results sound reasonable for Georgia on this question.

He made another point about being skeptical of partisan pollsters, but I’ll address that question in another post.

Here, I want to address Professor Hollander’s assertion that most professionals discount robo-polls. It’s simply not true.

Rasmussen is arguably one of the most widely watched polling firms, and its polls are conducted by IVR (interactive voice response), the technology commonly called robo-polling.

Data for Rasmussen Reports survey research is collected using an automated polling methodology. Field work for all Rasmussen Reports surveys is conducted by Pulse Opinion Research, LLC.

Generally speaking, the automated survey process is identical to that of traditional, operator-assisted research firms such as Gallup, Harris, and Roper. However, automated polling systems use a single, digitally-recorded, voice to conduct the interview while traditional firms rely on phone banks, boiler rooms, and operator-assisted technology.

For tracking surveys such as the Rasmussen Reports daily Presidential Tracking Poll or the Rasmussen Consumer Index, the automated technology insures that every respondent hears exactly the same question, from the exact same voice, asked with the exact same inflection every single time.

All Rasmussen Reports’ survey questions are digitally recorded and fed to a calling program that determines question order, branching options, and other factors. Calls are placed to randomly-selected phone numbers through a process that insures appropriate geographic representation. Typically, calls are placed from 5 pm to 9 pm local time during the week. Saturday calls are made from 11 am to 6 pm local time and Sunday calls from 1 pm to 9 pm local time.

Rasmussen is widely recognized for its accuracy, though some detect a bias, and the firm appears to have had a bad year in 2010.

In December 2009, Alan Abramowitz wrote that if Rasmussen’s data was accurate, Republicans would gain 62 seats in the House during the 2010 midterm elections.[44] In a column written the week before the 2010 midterm elections, Rasmussen stated his belief that Republicans would gain at least 55 seats in the House and end up with 48 or 49 Senate seats.[45] Republicans ended up gaining 63 seats in the House, and coming away with 47 Senate seats.[46]

Some of the error Rasmussen showed in 2010 may have been caused by their use of weighting.

Rasmussen also weights their surveys based on preordained assumptions about the party identification of voters in each state, a relatively unusual practice that many polling firms consider dubious since party identification (unlike characteristics like age and gender) is often quite fluid.

We’ve discussed weighting and how it can lead to bias in surveys. The issue of weighting is separate from that of the accuracy of robopolls.
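To make the weighting issue concrete, here is a minimal sketch (in Python) of how weighting by party identification works. The sample counts and the assumed party breakdown below are hypothetical, purely for illustration; they are not Rasmussen’s actual figures or method.

```python
# Hypothetical example: re-weighting a poll to an assumed party-ID breakdown.
# All numbers below are made up for illustration.

sample = {
    # party: (respondents in sample, respondents answering "yes")
    "Republican":  (450, 300),
    "Democrat":    (350, 120),
    "Independent": (200, 90),
}

# The pollster's preordained assumption about the electorate's party mix.
assumed_share = {"Republican": 0.38, "Democrat": 0.34, "Independent": 0.28}

total = sum(n for n, _ in sample.values())

unweighted_yes = sum(yes for _, yes in sample.values()) / total
weighted_yes = sum((yes / n) * assumed_share[party]
                   for party, (n, yes) in sample.items())

print(f"Unweighted 'yes': {unweighted_yes:.1%}")   # 51.0%
print(f"Weighted 'yes':   {weighted_yes:.1%}")     # 49.6%
```

If the assumed party mix is off, the weighted estimate inherits that error, which is why weighting to party identification is controversial in a way that weighting to stable demographics like age and gender is not.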

Mark Blumenthal, who writes on polling and is a longtime professional, has said that robo-polls are comparable in accuracy to live-agent surveys.

And while I will grant that final-poll pre-election poll accuracy is a potentially flawed measure of overall survey quality, it is the best yardstick we have to assess the accuracy of likely voter selection methods.

On that score, automated “robo” polls have performed well. As PPP’s Tom Jensen noted earlier this week, analyses conducted by the National Council on Public Polls (in 2004), AAPOR’s Ad Hoc Committee on Presidential Primary Polling (2008), and the Wall Street Journal‘s Carl Bialik all found that automated polls performed about as well as live interviewer surveys in terms of their final poll accuracy.

To that list I can add two papers presented at last week’s AAPOR conference (one by Harvard’s Chase Harrison and Fairleigh Dickinson University’s Krista Jenkins and Peter Woolley) and papers presented at prior conferences on polls conducted from 2002 to 2006 (by Joel Bloom and Charles Franklin and yours truly).

All of these assessed polls conducted in the final weeks or months of the campaign and saw no significant difference between automated and live interviewer polls in terms of their accuracy. So whatever care automated surveys take in selecting likely voters, the horse race estimates they produce have been no worse.

Blumenthal isn’t alone in that analysis. The Wall Street Journal wrote after the 2008 General Election that

interactive voice response polls, or IVRs, were as accurate as live-interview surveys, and more thorough. Among phone pollsters, IVR firms SurveyUSA and Rasmussen were active in more states in the last week than any competitor, and Public Policy Polling, which also uses IVR, ranked fourth. The midrange of their error on the margin was a relatively small 2.1 percentage points, 2.8 and 2.1, respectively. (All such estimates in this column are preliminary, as vote totals hadn’t been finalized Wednesday.)

Pew Research Center, a gold standard for public opinion polling, found in 2008 that “[t]he mean error among IVR polls (1.7%) was slightly lower than among those with live interviewers (2.1%).”

The American Association for Public Opinion Research, the professional association for pollsters, found likewise that “[t]he use of either computerized telephone interviewing (CATI) techniques [which use human operators] or interactive voice response (IVR) techniques made no difference to the accuracy of estimates.”
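The yardstick behind numbers like the Journal’s 2.1 points and Pew’s 1.7% is, at bottom, a comparison of each final poll’s margin to the certified result. Exact definitions vary (mean vs. median, margin vs. per-candidate error), but the basic idea is captured in this small sketch, where all poll figures and the election result are made up:

```python
# Sketch of the "error on the margin" yardstick used to score final polls.
# All poll figures and the election result below are hypothetical.

def margin_error(poll, actual):
    """Absolute difference, in points, between a poll's margin and the actual margin."""
    (pd, pr), (ad, ar) = poll, actual
    return abs((pd - pr) - (ad - ar))

final_polls = {"Firm A": (51, 44), "Firm B": (49, 47), "Firm C": (53, 45)}
actual_result = (52, 46)   # Dem 52, Rep 46

errors = {firm: margin_error(poll, actual_result) for firm, poll in final_polls.items()}
mean_error = sum(errors.values()) / len(errors)

print(errors)                           # {'Firm A': 1, 'Firm B': 4, 'Firm C': 2}
print(f"Mean error: {mean_error:.1f}")  # 2.3
```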

In 2010, Nate Silver found that robopolls tended to skew mildly Republican, but stated that he had not seen that effect previously.

Of course, some news media remain reluctant to use robopolls.

Years ago, critics laughed at the notion that “robo-pollers” could generate good numbers. But that line of attack is finished. SurveyUSA and PPP were ranked the second and fourth most accurate pollsters in 2010 by Nate Silver – above such traditional stalwarts as Mason-Dixon and CNN/Opinion Research. And that accuracy was particularly impressive given that those two IVR pollsters focused extensively on harder-to-poll down-ballot races, while traditional firms stuck with statewide contests like senate and governor.

I asked Jon Cohen, the Washington Post‘s polling director, why he wouldn’t let his writers run IVR polling. He responded via email:

Our editorial judgments are based on how polls are conducted, not on their results, or apparent accuracy. Now, we flag polls that have really bad track records, but end-of-campaign precision is a necessary, not sufficient condition in our assessments. On the methods front, the exclusion of cellphones is a big – and growing – cause for skepticism about IVRs.

Note that the Washington Post polling director’s objection to robopolling had nothing to do with the accuracy of the technology.

Maybe journalism is the profession that Barry Hollander was referring to when he wrote “[m]ost professionals do not put a lot of stock in [robopolls].” After all, Hollander is a professor of journalism, not of statistics, political science or research methods. For what it’s worth, pollsters don’t put much stock in the assertions of journalists about polling methods.
