2016 Elections

Prediction Markets Didn't Call Trump's Win, Either

People who won't talk to pollsters probably also don't bet on elections.

[Photo: "We have a winner." Photographer: Andrew Harrer/Bloomberg]

There’s a lot of reflection, speculation, and soul-searching about the failure of most public-opinion polls to predict Donald Trump’s election victory. But prediction markets – betting markets whose forecasts are often billed as far superior to polls – didn’t do much better.


Prediction markets are like stock exchanges, but their securities are tied to future events rather than companies. You can trade a security, say, that pays $1 if there is a minimum wage increase by the end of 2017, and $0 otherwise. Just as the prices in stock exchanges reflect the market’s overall perceptions about companies, the prices in prediction markets inform us about collective estimates of how likely different events are.

If the “minimum wage hike” security is trading stably at 26 cents (as it is at time of writing) this means that people are willing to bet 26 cents for a chance to win $1 in the event that a minimum wage increase actually occurs. So we can infer that the public perceives that there’s only a 26 percent chance that the minimum wage will be increased in the foreseeable future.
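The inference in that paragraph can be sketched in a few lines. This is an illustrative snippet, not anything from PredictIt itself; the function name and the fixed $1 payout are our own framing of the column's example.

```python
def implied_probability(price_cents: float, payout_cents: float = 100.0) -> float:
    """A contract that pays $1 (100 cents) if an event occurs, trading at
    price_cents, implies the market assigns the event roughly this probability."""
    return price_cents / payout_cents

# The "minimum wage hike" security trading at 26 cents:
print(implied_probability(26))  # 0.26, i.e., a 26 percent chance
```

The same conversion applies to any binary contract on these markets: the price in cents, read against a 100-cent payout, is the market's probability estimate.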

Prediction markets are often uncannily accurate at aggregating public information about events. Prediction markets called nearly every state correctly in the 2012 presidential election. They correctly forecast the Supreme Court’s recent gay marriage and affirmative action decisions. And they frequently predict Oscar nominees and winners.

Yet they missed Trump’s win by a wide margin, just as they failed to anticipate other recent surprises like the U.K.’s June Brexit vote to leave the European Union, and the margin that gave the U.K. Conservative Party a parliamentary majority in 2015.


To get a sense of what might have gone awry, we first have to understand what it means when we say that the prediction markets got the forecasts wrong.

On Predictit.org and similar sites, Trump was consistently trading below 35 cents in the month prior to the election, with an average daily closing price around 25 cents. This suggests that the prediction markets’ participants thought Trump had a one-in-four shot at victory. The state-level securities for key battleground states like Florida and Pennsylvania showed similar patterns. Given what we know now, it seems likely that the prediction markets were underestimating Trump’s true probability of winning.

[Chart: Long Shot. Daily price on Predictit.org for a chance to win $1 on a Trump victory. Source: Predictit.org]

But not so fast, you say: We shouldn’t be surprised that an event with one-in-four odds occurred – indeed, we should expect such events to occur roughly one time out of four. That’s true, but when you put Trump together with Brexit, it starts to look like we have a real problem. Predictit.org forecast Brexit at around 3 in 10 – so the likelihood of both a Trump victory and Brexit was estimated at just 7.5 percent. 1
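The arithmetic behind that 7.5 percent figure is just the product rule for independent events (the independence assumption is the one flagged in footnote 1). A minimal sketch, using the market prices cited in the column:

```python
# Implied probabilities read off the prediction-market prices in the text.
p_trump = 0.25   # Trump contract trading around 25 cents
p_brexit = 0.30  # Brexit contract trading around 30 cents

# If the two events are independent, the probability that both occur
# is the product of their individual probabilities.
p_both = p_trump * p_brexit
print(f"{p_both:.3f}")  # 0.075, i.e., 7.5 percent
```

Seeing a 7.5-percent-likely conjunction actually happen is not impossible, but it is the kind of outcome that should make us question the estimates that produced it.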

So what happened? It’s possible that most people betting on prediction markets don’t have much contact with the people who voted for Trump and Brexit.

If all the traders in a prediction market are missing a key piece of information, then the market price is missing it, too. Even if the market is liquid and frictionless – so that everyone can trade until the price perfectly reflects the data that market participants have – there’s a big, proverbial (and in this case, Republican) data elephant that is not actually in the room.

If none of the prediction market participants had decent information on the scale of Trump’s support, then all the trading in the world could not lead to a price that correctly reflected his chance of victory. This problem is compounded by the fact that prediction market participants also infer information from the prevailing price – and so may have discounted the signals of Trump’s strength that they did receive. Also, total payouts from prediction markets are too low to create a strong incentive for participants to work really hard to become substantially better-informed.

This chain of logic suggests that prediction markets could be abnormally bad at forecasting events that will be decided by actions of people who aren’t themselves plugged in to prediction markets. And there’s a message here about markets more broadly: Even the best-functioning markets don’t do a good job of pricing when key players aren’t represented.

This column does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.

  1. This is assuming that those events are sufficiently independent, which seems reasonable given that they were in different countries and several months apart.

To contact the author of this story:
Scott Duke Kominers at kominers@fas.harvard.edu

To contact the editor responsible for this story:
Jonathan Landman at jlandman4@bloomberg.net
