Pollsters had long tracked campaigns by calling random samples of potential voters. As campaigns became more drawn out and journalistic focus shifted to the horse-race aspects of elections, these phone polls proliferated. At the same time, though, response rates dropped sharply, going from more than one in three to less than one in ten.
A big drop in response rates always raises questions about selection bias since the change may not affect all segments of the population proportionally . . . It also increases the potential magnitude of these effects. . . .
Poll responses are basically just people agreeing to talk to you about politics, and lots of things can affect people’s willingness to talk about their candidate, including things that would almost never affect their actual votes . . .
[In September, 2012] the Romney campaign hit a stretch of embarrassing news coverage while Obama was having, in general, a very good run. With a couple of exceptions, the stories were trivial, certainly not the sort of thing that would cause someone to jump the substantial ideological divide between the two candidates, so none of Romney’s supporters shifted to Obama or to undecided. Many did, however, feel less and less like talking to pollsters. So Romney’s numbers started to go down, which only made his supporters more depressed and reluctant to talk about their choice.
This reluctance was already just starting to fade when the first debate came along. . . . after weeks of bad news and declining polls, the effect on the Republican base of getting what looked very much like the debate they’d hoped for was cathartic. Romney supporters who had been avoiding pollsters suddenly couldn’t wait to take the calls. By the same token, Obama supporters who got their news from Ed Schultz and Chris Matthews really didn’t want to talk right now.
The polls shifted in Romney’s favor even though, had the election been held the week after the debate, the result would have been the same as it would have been had the election been held two weeks before . . .
So response bias was amplified by these factors:
1. the effect was positively correlated with the intensity of support
2. it was accompanied by matching but opposite effects on the other side
3. there were feedback loops — supporters of candidates moving up in the polls were happier and more likely to respond while supporters of candidates moving down had the opposite reaction.
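The core mechanism here can be made concrete with a small numerical sketch. All the numbers below are made up for illustration; the `poll_share` function and the specific response rates are my own assumptions, not anything from Palko's post or our paper. The point is just arithmetic: if the two sides respond to pollsters at different rates, the poll can swing even when no voter changes their mind.

```python
# Illustrative sketch (made-up numbers): differential nonresponse can
# move poll numbers even when voters' actual preferences never change.

def poll_share(dem_frac, dem_response_rate, rep_response_rate):
    """Fraction of poll respondents supporting the Democrat,
    given the true Democratic share of the electorate and each
    side's willingness to answer the phone."""
    dem_respondents = dem_frac * dem_response_rate
    rep_respondents = (1 - dem_frac) * rep_response_rate
    return dem_respondents / (dem_respondents + rep_respondents)

# True preferences never move: a 50/50 electorate throughout.
dem_frac = 0.50

# Before the debate: demoralized Romney supporters screen their calls.
before = poll_share(dem_frac, dem_response_rate=0.10, rep_response_rate=0.07)

# After the debate: the enthusiasm gap, and the response rates, flip.
after = poll_share(dem_frac, dem_response_rate=0.07, rep_response_rate=0.10)

print(f"Obama's poll share before the debate: {before:.1%}")
print(f"Obama's poll share after the debate:  {after:.1%}")
```

With overall response rates under one in ten, a modest gap in willingness to respond (10% vs. 7%) produces roughly a 59%-to-41% swing in the measured poll share, while the underlying electorate sits at 50/50 the whole time. The feedback loop in point 3 then compounds this: the side that appears to be losing becomes even less likely to respond.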
The above completely anticipates the main result of our Mythical Swing Voter paper, which is based on the Xbox polling data we collected in 2012, analyzed in 2013, wrote up in 2014, and published in 2016, and which was picked up in the news media in time for the 2016 campaign.
I’m not saying our paper was valueless: we didn’t just speculate, we provided careful data analysis. The thing is, though, that the pattern we found, that big swings in Obama support could mostly be explained by differential nonresponse, surprised us. It wasn’t what we expected, it’s not something we thought about at all in our 1993 paper, and it took us a while to digest this finding. But Palko had already laid out the whole story, including the feedback mechanism by which small swings in vote preference are magnified into big swings in the polls, with all this connecting to the rise in survey nonresponse.
I probably even read Palko’s post when it came out back in 2012, but, if so, I didn’t get the point.
There’s something wrong with the world that his blog (cowritten with Joseph Delaney) doesn’t have a million readers.