Markets and Pollsters Both Failed in the U.K.
The debate on whether betting more accurately predicts election results than polls do has been going on for years, with those who favor bookies over pollsters amassing lots of convincing evidence. Yet the bookmakers failed just as badly as the pollsters at forecasting the U.K. election outcome.
The intuition behind trusting markets more than polls is obvious. In 1924, when the U.S. still had active betting on presidential elections, The New York Times (as quoted in a 2004 study of the phenomenon) spelled out why "Wall Street betting odds" were so indicative of future election outcomes:
Wall Street is always the place to which inside information comes on an election canvas … [and] it is a Wall Street habit, when risking a large amount of money, not to allow sentiment or partisanship to swerve judgments—an art learned in stock speculation; …any attempt to force odds in a direction unwarranted by the facts will always instantly attract money to the opposite side, precisely as overvaluation of a stock on the market will cause selling and its under-valuation will attract buying.
In other words, when there's money on the line, people take special care to get it right.
Besides, gamblers try to answer a different question than the one pollsters usually ask: not "Who will you vote for?" but "How do you think most people will vote?" The answer to the first question can be a spur-of-the-moment emotional response and therefore subject to change. The second one requires reflection and considered judgment.
It really works. In the 15 U.S. presidential elections between 1884 and 1940, before scientific polling arrived, the mid-October betting favorite won 11 times, and the underdog only once; the remaining three races (the oldest ones) were practically photo finishes. In modern history, there are well-documented cases of bookies triumphing over pollsters, which led Justin Wolfers of Stanford University and Andrew Leigh of Harvard to suggest that the press covering Australian federal elections "may have better served its readers by reporting betting odds than by conducting polls."
During last year's Scottish independence referendum, betting markets gave union adherents a wider lead over secessionists than polls did, and they turned out to be right. So one could expect a similar situation during yesterday's election. Leighton Vaughan Williams, director of the Political Forecasting Unit at Nottingham Business School, did, writing this in The Telegraph in March:
Who will win the election in May? According to the current betting market, no party will win an overall majority. Less clear is which party will win most seats, though the Conservatives currently have the edge, with Mr Cameron favourite to remain as PM. The bottom line from the markets, though, is that this election really is too close to call, and all realistic options are still very much in play. That may yet change. If and when it does, the markets will be the first to tell us.
That turned out to be overoptimistic. By polling day, bookmakers Ladbrokes and William Hill had both David Cameron and Ed Miliband at 10/11 to be the next prime minister. Then the exit polls came in, and the bookies changed their odds dramatically, now heavily favoring Cameron. Bookies can do that to cut their losses, while pollsters are stuck with their inaccurate predictions, but what matters in the end is that none of them got the eventual result -- Cameron's convincing, outright victory -- right ahead of time.
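Those 10/11 quotes translate into implied probabilities that sum to more than 100 percent, the excess being the bookmaker's built-in margin. A minimal sketch of the arithmetic, assuming standard fractional-odds conventions (only the 10/11 figure comes from the column; the function name is illustrative):

```python
def implied_probability(numerator: int, denominator: int) -> float:
    """Convert fractional betting odds (numerator/denominator) to an
    implied probability: stake `denominator` to win `numerator`."""
    return denominator / (numerator + denominator)

# Ladbrokes and William Hill quoted both Cameron and Miliband at 10/11.
p_cameron = implied_probability(10, 11)
p_miliband = implied_probability(10, 11)

print(f"Cameron:  {p_cameron:.1%}")   # about 52.4%
print(f"Miliband: {p_miliband:.1%}")  # about 52.4%

# The two probabilities sum to more than 100 percent; the excess
# ("overround") is the bookmaker's margin, not a forecast.
print(f"Overround: {p_cameron + p_miliband - 1:.1%}")  # about 4.8%
```

The point of the exercise: odds of 10/11 on each man put both at roughly even money, which is why the pre-exit-poll market counted as "too close to call."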
Now come the technical arguments. Champions of political markets will say betting favored Cameron most of the time, even when polls showed Labour would be the biggest party in parliament. Pollsters will retort that they were not in the forecasting business at all, just taking snapshots of public opinion. Scholars have long pointed out that, to turn these snapshots into a forecast, experts would need to make adjustments based on historical data, which would produce more accurate predictions than the market does.
All that, however, is of little use to Labour leader Miliband, for whom the election results were a nasty surprise -- just as the Conservatives' strong showing was an exhilarating one for Cameron. Billions of words and days of airtime spent discussing the implications of a hung parliament have been wasted, probably giving many pundits a nasty feeling they've been poking around like blind kittens, not analyzing data and drawing informed conclusions.
U.K. pollsters, and those who commission their work, now have a lot of thinking to do. If polls yield consistently wrong results a day or two before the election, this could be because voters are extremely fickle and prone to changing their minds at the last moment; but faulty poll design is a more likely explanation. Adjusting samples, screening more actively for likely voters and reviewing data-gathering methods are all part of the homework that pollsters do after failures such as this, if they want to satisfy their clients the next time there's a big election.
As for the bookmakers and other political futures markets, the U.K. vote shows that they're about as good at predicting election outcomes as the stock market is at valuing companies. When the raw data people trade on are wrong, assets are mispriced until the data are corrected. Like it or not, polls are a big part of the raw data of politics; improving their accuracy should become a priority in the U.K.
This column does not necessarily reflect the opinion of Bloomberg View's editorial board or Bloomberg LP, its owners and investors.
To contact the author on this story:
Leonid Bershidsky at email@example.com
To contact the editor on this story:
Mark Gilbert at firstname.lastname@example.org