Weighty matters: confirmation bias and polling


A little less than ten years ago, the firm I worked for was commissioned to poll the Georgia general election. We included a ballot question for the Governor’s race, as I often like to do, because it gives us a benchmark for comparing our surveys against other publicly released surveys. At the time, every published poll had Barnes winning:

In the final poll taken before the 2002 Georgia gubernatorial election, incumbent Roy Barnes held an eleven point lead over his Republican challenger, Sonny Perdue (Salzer 2002a). This was hardly surprising. Barnes, a first-term Democrat, had led in every public poll taken since the campaign began (Dart 2002), and Perdue never edged closer than seven percentage points until election day (Beiler 2003).

When our sample came back showing Perdue with a narrow victory, I simply didn’t believe it. I was scared to give the client a survey analysis that included something so clearly wrong, because it would lead the client to doubt everything else in the poll. I actually asked management to authorize a re-sample, and when it came back still showing a Perdue victory, I sheepishly gave the results to our clients.

We now know what happened several days later.

Once Georgians had their say on November 5, however, Barnes’ defeat was more than stunning—it was historic. Not only had Perdue overcome what seemed to be insurmountable polling and fundraising disadvantages, his election broke a Democratic stranglehold on the Georgia governorship that had kept the GOP out since Reconstruction. For a Republican running for governor in Georgia, Perdue won an unprecedented share of the vote among rural whites, an indication of a continuing realignment in favor of the GOP (Wyman 2002, 3). In winning 51 percent of the vote, Perdue had broad support, carrying 118 of the state’s 159 counties.

If I had been more confident at the time, I might have pressed the client to make the poll public, and it would have been stunning, and it would have been correct. Other pollsters in that position might have simply played with weighting the electorate until it gave them the result they wanted or thought was more accurate. But because every published poll showed Barnes with a significant lead, I doubted the results in front of me. This is how confirmation bias can sneak into an otherwise rigorously administered survey.

Confirmation bias is the tendency to accept facts or opinions that reinforce one’s existing views and to ignore inconvenient ones that contradict them. It leads many conservatives to listen to conservative media and insist that the mainstream is wrong. But it can also lead political professionals astray.

Earlier I wrote about how pollsters are weighting publicly released polls of the Presidential election. The issue comes down to making a call about whether to assume that the 2012 General Election electorate will resemble 2008 or some other year. We showed how assumptions about the demographics of the electorate (such as age or ethnicity) can affect the post-weighting results of a survey. Now we’re discussing why large numbers of media pollsters appear to be making the same assumption about the composition of the electorate.

Ultimately, much of the question of whether the polls are accurately weighted is an ideological question about what you expect the 2012 General Election electorate to look like. If you believe that the demographics next month will be like 2008, and you weight accordingly, your statistical findings will reflect a greater share of the vote for President Obama. If you believe they will be like 2010, your results will favor Romney. And there’s enough data out there in the form of publicly released polls to let you choose any type of electorate you want, based on sound logic or your own internal bias, and massage the data through weighting until it confirms what you expect the numbers to look like.
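
To make that concrete, here’s a minimal sketch of cell weighting (post-stratification) on a single dimension, age group. Every number in it is invented purely to illustrate the mechanics, not drawn from any real survey; the point is that the same interviews produce two different toplines depending on which electorate model you weight to.

```python
# A minimal sketch of cell weighting on one dimension, age group.
# All numbers are hypothetical, chosen only to show the mechanics.

# Unweighted sample: share of interviews in each age cell, and the
# (assumed) Obama share of the two-way vote within that cell.
sample_share = {"18-29": 0.15, "30-44": 0.25, "45-64": 0.35, "65+": 0.25}
obama_share  = {"18-29": 0.60, "30-44": 0.52, "45-64": 0.47, "65+": 0.42}

# Two competing models of the electorate's age composition.
electorate_2008_like = {"18-29": 0.18, "30-44": 0.29, "45-64": 0.37, "65+": 0.16}
electorate_2010_like = {"18-29": 0.11, "30-44": 0.23, "45-64": 0.43, "65+": 0.23}

def weighted_topline(targets):
    """Weight each cell to its target share, then average support."""
    # Cell weight = target share / sample share.
    num = sum(sample_share[c] * (targets[c] / sample_share[c]) * obama_share[c]
              for c in targets)
    den = sum(sample_share[c] * (targets[c] / sample_share[c]) for c in targets)
    return num / den

print(f"2008-like electorate: {weighted_topline(electorate_2008_like):.1%}")
print(f"2010-like electorate: {weighted_topline(electorate_2010_like):.1%}")
# Same interviews, roughly 50.0% vs 48.4% -- the weighting assumption,
# not the data, decides who leads.
```

Nothing about the interviews changed between those two print statements; only the pollster’s belief about the electorate did.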

Many analysts say that using only “likely voters” yields a better sample and can serve as a guide for determining the demographics of an upcoming election. But this is problematic because survey respondents lie about, or overestimate, the likelihood that they will vote.
For example, a September 2008 Pew Research Center poll showed 72 percent of respondents aged 18 to 29 said they “definitely plan to vote” in that year’s General Election. As a reminder, here’s what actually happened, at least in Georgia, according to the Secretary of State’s website.
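
For illustration, the kind of screen being criticized here looks something like the sketch below; the field names and the cutoff are hypothetical. The filter itself is trivial to build, and that’s the problem: it can only be as accurate as the self-reports feeding it.

```python
# A sketch of a stated-intent likely-voter screen. The question scale,
# field names, and cutoff are all hypothetical.

def stated_intent_screen(respondents, cutoff=9):
    """Keep respondents whose stated likelihood of voting (0-10) >= cutoff."""
    return [r for r in respondents if r["stated_likelihood"] >= cutoff]

respondents = [
    {"id": 1, "stated_likelihood": 10},
    {"id": 2, "stated_likelihood": 8},
    {"id": 3, "stated_likelihood": 10},
]
print(len(stated_intent_screen(respondents)))  # 2 "likely voters"
# If respondents overstate their likelihood, the "likely voter" pool
# is quietly polluted with people who will stay home.
```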

I believe, then, that survey methods that screen on the respondent’s stated likelihood of voting are unreliable. So what’s a pollster to do? Professor Todd Rogers of the Harvard Kennedy School gives us some ideas:

Records of past voting behavior predict turnout substantially better than self-prediction. Self-prediction inaccuracy is not caused by lack of cognitive salience of past voting, or by inability to recall past voting. Moreover, self-reported recall of turnout in one past election predicts future turnout just as well as self-prediction. We discuss implications for political science research, behavioral prediction, election administration policy, and public opinion.
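
As a rough illustration of the alternative Rogers points to, scoring likelihood from vote history in the voter file might look like the sketch below. This is not my actual screen (that’s for the future installment); the elections chosen and the scoring rule are assumptions picked only to show the shape of the approach.

```python
# Score turnout likelihood from the voter file's vote history instead
# of from self-prediction. Elections and scoring rule are illustrative.

RECENT_GENERALS = ("2010-gen", "2008-gen", "2006-gen", "2004-gen")

def history_score(vote_history):
    """Fraction of the last four general elections the voter turned out for."""
    return sum(1 for e in RECENT_GENERALS if e in vote_history) / len(RECENT_GENERALS)

voter = {"vote_history": {"2010-gen", "2008-gen", "2004-gen"}}
print(history_score(voter["vote_history"]))  # 0.75 -- treat as a likely voter
```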

I’ll discuss how I identify likely voters in a future installment.

But back to the problem at hand: how do pollsters decide what assumptions will guide their weighting and thus their analysis? This is where polling is less scientific and more artful. Many pollsters rely, explicitly or not, on their experience in the “real world.” A Democratic pollster’s sense of the electorate’s mood and mobilization is shaped by the people he talks to, who will often be other Democratic political professionals. The same holds true for Republicans. This goes a long way toward explaining the different takes that Democratic and GOP pollsters have on appropriate weighting this election cycle.

So if picking a past election to model your weighting on is an exercise in ideology, and survey measures of voters’ propensity to actually vote are unreliable, what is the pollster to do when seeking as accurate a survey as possible? The answer is to trust in randomness, because the more you weight a poll, the more you introduce your own assumptions and bias.
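
There’s also a measurable statistical cost to heavy weighting, which is one more reason to keep a light touch. Kish’s standard approximation for effective sample size, n_eff = (Σw)² / Σw², puts a number on how much information unequal weights throw away; the weight vectors below are hypothetical.

```python
# Kish effective sample size: n_eff = (sum w)^2 / (sum of w^2).
# The two weight vectors are hypothetical, for illustration only.

def effective_n(weights):
    """Kish effective sample size for a set of survey weights."""
    return sum(weights) ** 2 / sum(w * w for w in weights)

light = [1.0, 0.9, 1.1, 1.0] * 150  # n = 600, weights close to uniform
heavy = [0.3, 0.5, 1.0, 2.2] * 150  # n = 600, aggressive reweighting

print(round(effective_n(light)))  # 597 -- almost nothing lost
print(round(effective_n(heavy)))  # 388 -- over a third of the sample's power gone
```

A lightly weighted 600-interview sample behaves like 600 interviews; a heavily weighted one behaves like 388, on top of whatever bias the weighting assumptions themselves introduced.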