An Economics Lesson for Political Pollsters
In the early 1980s, the economist Robert Shiller made a discovery that would eventually win him a Nobel Prize. He found that stock prices bounced around too wildly to be explained by standard theories. This has been labeled the “excess volatility puzzle,” and financial economists have written countless papers trying to explain it. But now, excess volatility provides a clue to a possible problem with some of the forecasts we’re seeing in the presidential race.
Excess volatility means that a forecast is more volatile than the thing it’s predicting. For example, suppose that some website made weather forecasts that varied wildly from day to day -- one day it predicted scorching heat, the next day blistering cold. But in reality, temperatures don’t vary that much from day to day. Even if the crazily gyrating forecasts were unbiased -- that is, no more likely to be too high than too low -- they would still be suboptimal. An optimal forecast doesn’t change much -- its errors should come from true surprises, not from noise inside the forecasting model itself.
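A toy simulation makes the point concrete. The numbers here are invented for illustration: stable temperatures, plus two unbiased forecasters, one of which adds a lot of its own noise.

```python
import random
import statistics

random.seed(0)

# True daily temperatures: stable around 70, small day-to-day variation.
temps = [70 + random.gauss(0, 2) for _ in range(365)]

# Two unbiased forecasters: one adds lots of its own noise, one adds little.
wild = [t + random.gauss(0, 10) for t in temps]
calm = [t + random.gauss(0, 1) for t in temps]

def rmse(forecast, actual):
    """Root-mean-square forecast error."""
    return (sum((f - a) ** 2 for f, a in zip(forecast, actual)) / len(actual)) ** 0.5

# The wild forecast swings far more than the temperatures it predicts...
print("std of temps: %.1f  wild: %.1f  calm: %.1f"
      % (statistics.stdev(temps), statistics.stdev(wild), statistics.stdev(calm)))
# ...and pays for it with larger errors, despite being unbiased on average.
print("rmse wild: %.1f  calm: %.1f" % (rmse(wild, temps), rmse(calm, temps)))
```

Neither forecaster is systematically high or low; the difference shows up entirely in the size of the swings and the size of the errors.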
That’s why Shiller’s excess volatility puzzle showed that the stock market probably isn’t efficient. If stock prices have excess volatility, then unusually high prices today imply that stocks are too expensive and are more likely to fall than rise. In fact, that’s exactly the principle that drives Shiller’s cyclically adjusted price-to-earnings ratio, or CAPE -- a popular measure of how expensive or cheap stocks are. Because prices are more volatile than the underlying earnings, CAPE generally tends to revert to the mean. That won’t tell you exactly what stocks will do tomorrow or next week, but it offers the promise of a little bit of predictability in an otherwise ineffable market.
Similarly, election forecasts that display mean reversion are inefficient forecasts. If big swings in the probability of a candidate’s victory predictably reverse themselves over the course of days or weeks, the forecast is telling us that the race is more fluid than it actually is. Here is a graph, created by Josh Katz of the New York Times’s Upshot blog, of how that blog’s election forecasts have changed over the summer relative to those of the data journalist Nate Silver’s website FiveThirtyEight:
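One way to spot this pattern in a series of forecasts -- a diagnostic sketch, not either website’s method -- is to look at the forecast’s day-to-day changes. If the forecast is efficient, its changes should be driven by news, and hence roughly uncorrelated; if swings predictably reverse, the changes will be negatively autocorrelated.

```python
import random
import statistics

def change_autocorr(series):
    """Lag-1 autocorrelation of day-to-day changes in a forecast series.
    Near zero: changes look like news. Strongly negative: swings tend
    to reverse themselves, a sign of excess volatility."""
    changes = [b - a for a, b in zip(series, series[1:])]
    x, y = changes[:-1], changes[1:]
    mx, my = statistics.mean(x), statistics.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    denom = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return cov / denom

random.seed(1)

# An efficient forecast: a random walk, whose changes are pure news.
walk = [50.0]
for _ in range(200):
    walk.append(walk[-1] + random.gauss(0, 1))

# A mean-reverting forecast: big swings that get pulled back toward 50.
noisy = [50 + random.gauss(0, 5) for _ in range(201)]

print("random walk:    %+.2f" % change_autocorr(walk))
print("mean-reverting: %+.2f" % change_autocorr(noisy))
```

The random walk’s changes are close to uncorrelated, while the noisy series shows strongly negative autocorrelation: today’s jump tends to be undone tomorrow.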
As you can see, the models generally started and ended the period in question at about the same level. But in between those endpoints, the Upshot forecast stayed relatively stable, while the various FiveThirtyEight forecasts swung strongly toward Donald Trump and then back toward Hillary Clinton.
This sample is undoubtedly cherry-picked. To see which model is really more volatile, you would need to compare these forecasts over the long run. But Katz's graph includes the Republican and Democratic national conventions, so it presents a good picture of how both websites’ models responded to those important events.
Why does the Upshot’s model move less? Probably because it’s designed to be stable. The Upshot incorporates polls over a long time period, meaning that temporary movements get averaged out. Thus forecasts that give more weight to recent polls -- which FiveThirtyEight’s models presumably do -- will bounce around more.
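The trade-off can be sketched with a simple exponentially weighted poll average -- an illustration with made-up numbers, not either website’s actual model. A short half-life leans on recent polls; a long half-life averages over the whole window.

```python
import random
import statistics

def weighted_avg(polls, half_life_days):
    """Average poll readings, discounting each one by its age in days.
    A short half-life weights recent polls heavily; a long half-life
    averages over the whole window."""
    n = len(polls)
    weights = [0.5 ** ((n - 1 - i) / half_life_days) for i in range(n)]
    return sum(w * p for w, p in zip(weights, polls)) / sum(weights)

random.seed(2)

# Hypothetical daily poll margins: a stable race at +3, plus sampling noise.
polls = [3 + random.gauss(0, 2) for _ in range(120)]

# Recompute each model's estimate every day as new polls arrive.
fast = [weighted_avg(polls[: d + 1], half_life_days=5) for d in range(30, 120)]
slow = [weighted_avg(polls[: d + 1], half_life_days=45) for d in range(30, 120)]

print("std of fast-weighted estimate: %.2f" % statistics.stdev(fast))
print("std of slow-weighted estimate: %.2f" % statistics.stdev(slow))
```

Even though the underlying race never moves, the fast-weighted estimate bounces around noticeably more -- it mistakes sampling noise in the latest polls for real shifts.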
So which is better? It depends on what you want out of an election forecast. If you’re following the race for pure entertainment, a more volatile forecast may be more fun to watch. If you’re a betting person, trying to outfox the prediction markets, you probably don’t want to use public forecasts at all, since these are common knowledge -- instead, you’ll want to make your own model, or go with gut instinct, just like if you were betting on interest rates or stocks.
But if you’re a political strategist trying to figure out where your candidate really stands, you might want to be leery of mean-reverting forecasts. A predictably temporary advantage is one you should probably ignore. For example, FiveThirtyEight’s Harry Enten shows a model of how convention bounces tend to fade over time, based on historical precedent:
If a forecast doesn’t subtract these bounce projections, it’s basically taking a stand that this time is different. But this assumption could lead to forecasts that give unjustified confidence first to Republicans, then to Democrats, as the conventions unfold.
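Enten’s historical estimates aren’t reproduced here, but the adjustment he describes can be sketched as subtracting a bounce that decays exponentially. The starting size and half-life below are invented for illustration.

```python
def bounce_adjusted(margin, days_since_convention,
                    initial_bounce=4.0, half_life_days=10.0):
    """Subtract an assumed convention bounce that decays exponentially.
    initial_bounce and half_life_days are illustrative guesses, not
    historical estimates."""
    remaining = initial_bounce * 0.5 ** (days_since_convention / half_life_days)
    return margin - remaining

# A +6 margin three days after the convention may be mostly bounce...
print(round(bounce_adjusted(6.0, 3), 1))
# ...while the same +6 a month later is closer to the real state of the race.
print(round(bounce_adjusted(6.0, 30), 1))
```

The point is not the particular numbers but the direction of the correction: the same polling margin means less right after a convention than it does weeks later.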
I was especially worried about this in the context of FiveThirtyEight’s models when I read Silver's blog on August 2:
Clinton probably has some further room to grow in this [polls-plus] forecast … Clinton’s lead over Trump in polls-only will probably continue to grow over the next several days.
If Silver really believed those predictions, then his models have an excess volatility problem. If your forecast is optimal, you shouldn’t be able to out-forecast it using your own judgment! In fact, Silver’s predictions were confirmed over the next few days, which raises the question of whether his models -- especially the supposedly more stable “polls-plus” model, which combines polls with economic data -- contain too much noise.
But a few words of caution are needed. One tricky part about identifying excessively volatile election forecasts is that it takes many elections to get a sense of which ones bounce around too much. Robert Shiller’s analysis of the stock market relied on the assumption that stock price behavior is in some way consistent over time. But the election models of FiveThirtyEight, the Upshot and others are tweaked every year, so it’s hard to get a definitive answer to the question of which is the most efficient.
Another issue is that when we’re dealing with election forecasts, efficiency isn’t the only thing we care about. We also care about bias -- while it would be almost unthinkable for stock prices to be too high or too low for decades on end, it’s possible for an election forecast to be skewed toward one candidate or the other. An excessively volatile forecast could still be the best, if it’s the least biased.
So what's the lesson? If you really want to know the state of Election '16, be wary of forecasts that bounce around a lot. Too often, those extra bounces turn out to be temporary.
This column does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.
To contact the author of this story:
Noah Smith at email@example.com
To contact the editor responsible for this story:
Tobin Harshaw at firstname.lastname@example.org