The 2014 Election Turned the Polling Universe Upside Down

Pollsters who nailed 2012 got 2014 wrong, and vice versa.

Voters cast their ballots at a polling station at the Community Center on November 4, 2014, in Brooklyn, Iowa.

Photographer: Steve Pope/Getty Images

Doug Kaplan had it right. Days before the midterms, his company, Gravis Marketing, polled the Virginia Senate race. It came back with a surprise: Republican Ed Gillespie was tied with Democratic Senator Mark Warner. Had the poll been released, it would have joined a survey by the conservative Vox Populi as the only numbers that correctly predicted the race.

The public never saw the Gravis poll. Kaplan figured that the public would dismiss it. "You know the way I'm treated in the media," Kaplan said with a laugh.

Yes indeed. Earlier in 2014, Gravis Marketing was a source of fool's gold for Republican campaigns. At my old Slate perch, I wrote a post with the SEO-friendly title "The Worst Poll in America," shaming Gravis for numbers that predicted a much closer contest in Texas's GOP Senate primary than the one voters actually delivered. In 2012, it was easy to find conservative-friendly polls that overrated Republican chances, and easier to find critics of Gravis.

Not so in 2014. Polling in Virginia so wildly overstated Warner's margin that Democrats and Republicans alike have called for it to be investigated. (Veterans of the Gillespie campaign grouse, with reason, that the polling scared away the kind of money that could have enabled an upset.) Meanwhile, Gravis had a really good run. After the election, it reminded reporters that it was on the nose in Colorado, Iowa, and North Carolina. It saw a 6-point margin in Iowa when others, like Public Policy Polling, saw a tighter contest.

"We’ve changed our methodology, and we’ve gotten better at researching," Kaplan said. "In 2012 the operation was just me and a statistician. Did we get in over our heads? Probably. This time we brought in someone from a big polling firm. I had a brilliant kid from an Ivy League school."

Meanwhile, pollsters like the North Carolina-based Public Policy Polling had a terrible election. Early in the cycle, progressive groups like the Progressive Change Campaign Committee used it for polling that showed insurgent Democrats looking far stronger than they turned out to be. The pollster's combination of automated calls and internet surveys simply didn't predict final results the way some competitors did; PPP went largely silent after the election, only re-emerging to argue with a Nate Silver story that accused it of putting a "thumb on the scale" to bring its numbers in line with averages.

Not so, said PPP's Tom Jensen, as he walked through the firm's post-election analysis of what went wrong.

"Right off the bat, when we went to look at the final polls, 5 to 6 percent of people were undecided," he said. "We found that the undecided gave Obama a minus-60 approval rating. Right there, you could see a three-point advantage for every Republican in close races. That explains a fair amount of the gap. If we look at places where polling is way off, like Kentucky, that was a state that Romney won by 25 points. We were finding an electorate that pretty consistently only voted Romney by 15 points. In retrospect, we had samples that were too Democratic."

And what of the Nate Silver takedown? "Nate Silver has a pretty longstanding grudge against us, and that was the thing driving that article," Jensen said. "There are several people who can make a model, but not many who can hold a grudge like Silver between his Sam Wang grudge, and his Paul Krugman grudge, and his PPP grudge."

Wang, a Princeton professor, ran a competing polling model whose forecasts sometimes contradicted Silver's. Krugman criticized the new FiveThirtyEight site for deploying data journalism in situations where it didn't work.

"We are so much more active during cycle than most pollsters, so it’s often our initial polls that determine averages," said Jensen. "He’s saying we change at the end; what we find at the end of these races is similar to what we find a couple months out. If you're averaging pollsters, you should always take the credit or the blame for your results. You can't take glory when you're right and blame somebody else when you're wrong."

This article has been updated to reflect the timing of PPP's work on behalf of progressive groups.
