Perils of Polling

By Max Berley | Updated Nov 11, 2016 3:00 PM UTC

Polls have long been the gold standard for assessing politicians, elections and voter concerns. That gold hasn’t been shining lately. Almost every poll missed the true level of voter support for Donald Trump, the Republican candidate who won the U.S. presidency. This followed failures to predict the clear victory of the “leave” camp in the U.K. referendum on whether to stay in the European Union and the rejection of the Colombian peace deal with rebels. In 2015, polls were wrong on outcomes in Israel, the U.K. and Greece; in 2014, predictions were far off the mark in the Scottish independence referendum and the U.S. congressional elections. These bungles have undermined the industry’s claim to scientific rigor and cast doubt on its methodology, including its handling of the switch away from landline phones. Can pollsters devise a new formula that delivers more accurate results in this no-time-to-spare mobile era?

The Situation

In the U.S., polls had shown Trump trailing his rival, Democrat Hillary Clinton, for months. Earlier in 2016, some “horse race” surveys in state nominating contests proved to be misleading at best. Analysts had downplayed individual polls and said the most reliable barometers were the aggregations of polling data offered by HuffPost Pollster, FiveThirtyEight, RealClearPolitics and others. In part this is because different polling firms have posted results on the same day that have varied by 10 percentage points or more. Yet every one of these aggregating sites came up with a wrong call on the presidential race. Pollsters have certainly faced a range of constraints. In the U.S., almost half of adults use only mobile phone service. So to collect a representative group, firms have increased calls to mobile phones, which are now three-quarters of some samples. To do this, pollsters have to dial numbers by hand (U.S. law bans cell phone autodialing) and make more calls, since mobile users tend to screen out unknown callers and fewer will sit through 20 minutes of a stranger’s questions. This isn’t cheap — mobile surveys can cost nearly twice as much — or easy. Pew Research’s response rate on its 1997 polls was 36 percent; that fell to 9 percent in 2012.
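To see what that drop means in practice, here is a back-of-the-envelope sketch in Python. The response rates are the Pew figures above; the target number of completed interviews is an illustrative assumption, not an industry standard.

```python
# Back-of-the-envelope sketch: how falling response rates inflate the
# dialing effort behind a survey. The response rates are the Pew figures
# cited in the text; the target sample size is an illustrative assumption.

TARGET_COMPLETES = 1000  # completed interviews the pollster needs

for year, response_rate in [(1997, 0.36), (2012, 0.09)]:
    contacts_needed = TARGET_COMPLETES / response_rate
    print(f"{year}: ~{contacts_needed:,.0f} contact attempts needed "
          f"at a {response_rate:.0%} response rate")

# 1997: ~2,778 contact attempts needed at a 36% response rate
# 2012: ~11,111 contact attempts needed at a 9% response rate
```

Quadruple the dialing, much of it done by hand to mobile numbers, and the “not cheap” complaint becomes concrete.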


The Background

George Gallup, an advertising market researcher, created the first scientific political poll in 1932 for his mother-in-law, who was running to be secretary of state of Iowa. (She won.) He founded the American Institute of Public Opinion, later called Gallup Polls, in 1935. During the 1936 presidential election, the prestigious Literary Digest’s survey tallied millions of returned postcards and found overwhelming support for the challenger, Republican Alfred Landon. Gallup interviewed 50,000 people chosen at random and correctly predicted that Democratic President Franklin D. Roosevelt would win re-election. Yet Gallup and other pollsters botched calls on the 1948 presidential race, leading the winner, Harry Truman, to gleefully wave a newspaper whose poll-based first-edition headline read: “Dewey Defeats Truman.” Those errors included not surveying right up to Election Day, which missed people who made last-minute choices. Further refinements in the U.S., including conducting surveys in the evening when more people were home, helped improve accuracy. U.K. polls were overhauled after 1992, when some underestimated the Conservatives’ win over Labour by almost 9 percentage points. This was attributed to “shy Tories”: people who planned to vote Conservative but told pollsters they hadn’t yet made up their minds.

The Argument

Pundits and politicians will be poring over the data for years to figure out what went wrong in the 2016 U.S. presidential race. There were already fears that as phone surveys become less frequent, poll aggregations will be dominated by less scientific polls and accuracy will suffer. Some pollsters now sign up pools of respondents in advance and offer them cash or gift cards as incentives, a practice believed to skew both the sample and the quality of answers. Other firms have turned to cheaper web questionnaires, which have the obvious problem of restricting the sample to people who are online; this was a factor in the flawed YouGov U.K. surveys in 2015. Academics warn that judging the accuracy of polling models in the mobile-phone era will require years of comparisons against final voting results to make sure the pool of respondents accurately reflects the electorate in terms of factors like race and geography. In the meantime, some think bad polls are good for democracy: more people might vote if they didn’t think the results were preordained.
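Making a respondent pool “accurately reflect the electorate” in practice means weighting. Below is a minimal sketch of post-stratification weighting on a single variable; every number is made up for illustration, and real pollsters weight on several variables at once, often via raking.

```python
# Minimal sketch of post-stratification weighting on one variable.
# All shares and support figures are hypothetical illustrations.

# Share of each group in the electorate (e.g., from census data) ...
population_share = {"urban": 0.62, "rural": 0.38}
# ... versus the share that actually answered the survey.
sample_share = {"urban": 0.75, "rural": 0.25}  # rural voters under-sampled

# Each respondent counts in proportion to how under- or over-represented
# their group is in the raw sample.
weights = {g: population_share[g] / sample_share[g] for g in population_share}

# Hypothetical raw support for a candidate within each group:
support = {"urban": 0.45, "rural": 0.60}

unweighted = sum(sample_share[g] * support[g] for g in support)
weighted = sum(sample_share[g] * weights[g] * support[g] for g in support)

print(f"unweighted estimate: {unweighted:.1%}")  # ~48.8%, skewed toward urban respondents
print(f"weighted estimate:   {weighted:.1%}")    # ~50.7%, matching the electorate's mix
```

The catch the academics point to: weights fix only the imbalances a pollster knows to weight on, which is why validating the models takes years of comparison against actual results.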

Chart: How the Polls Got It Wrong

The Reference Shelf

  • A preliminary inquiry into what went wrong with the polls’ predictions in the 2015 U.K. election cited poor sampling and “herding” — when pollsters adjust their findings to be in line with what other polls are reporting.
  • Pollsters have long acknowledged that their surveys carry a measure of inaccuracy, which is why they include a margin of sampling error, explained here by the Massachusetts Institute of Technology; a quick worked example follows this list.
  • The American Association for Public Opinion Research offers election polling resources, and the British Polling Council has answers to frequently asked questions.
  • A Pew Research Center report: “Coverage Error in Internet Surveys: Who Web-Only Surveys Miss and How That Affects Results.”
  • Andrew Kohut, founding director of the Pew Research Center, traced the history of how polls came to play major roles in policymaking and politics.
  • Bloomberg Politics Polls are collected here. And the Bloomberg Politics Poll Decoder examined how various U.S. polls were constructed. 
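On the margin of sampling error mentioned above, the calculation itself is short. A worked example at a 95 percent confidence level, with an illustrative sample size:

```python
# Margin of sampling error for a simple random sample at 95% confidence.
# The formula is standard; the sample size is an illustrative assumption.
import math

n = 1000   # respondents (hypothetical)
p = 0.5    # assumed proportion; p = 0.5 gives the worst-case margin
z = 1.96   # z-score for a 95% confidence level

moe = z * math.sqrt(p * (1 - p) / n)
print(f"margin of error: +/- {moe:.1%}")  # +/- 3.1 percentage points
```

Note that this covers sampling error only; none of the failures described above (unrepresentative samples, shy voters, herding) show up in that number.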

First published Feb. 1, 2016

To contact the writer of this QuickTake:
Max Berley in Washington at mberley@bloomberg.net

To contact the editor responsible for this QuickTake:
Anne Cronin at acronin14@bloomberg.net