Beware of Models That Called It for Romney
We live in the age of the quant. Mathematics drives an increasingly large number of investment portfolios. In many ways, this is an improvement over the less evidence-based approaches that often relied on instinct and gut feeling. But as we learned during the financial crisis, models are imperfect and they can and do break down.
There are many reasons for this: Noisy data series are subject to revision; sometimes the assumptions underlying a model are wrong; and sometimes time simply moves forward, and events occur that the model's designer never anticipated.
I was reminded of the dangers of using models to predict future outcomes this week by an article in the New York Times Sunday Business section. A column by Jeff Sommer discussed the electoral-prediction model of Ray Fair, an economics professor at Yale. It is a stark illustration of how and why models can fall apart, though in this case the model deals with politics rather than financial markets.
For investors who may rely on models to deploy their capital, there are lessons to be found.
Fair’s presidential forecasting model, which he developed in 1978, has an OK long-term track record -- it was wrong only three times in the past century, based on recent forecasts and back tests. But two of those errors were fairly recent. It forecast a Republican win of the White House in 1992, when Democrat Bill Clinton defeated Republican George H.W. Bush. The Fair model also failed its most recent test: it predicted that Republican Mitt Romney would defeat Democrat Barack Obama in 2012.
Based on the nine presidential elections since the model's invention, those two misses amount to a miss rate of 22.2 percent. For an all-or-nothing prediction, that isn't a very useful record.
Here’s the money quote from the Sommer article:
Consider the limits of the model itself. It has worked reasonably well in most elections since 1980, and it performs well in back-tested analyses of elections since 1916, but it is far from infallible, even in achieving its goal: a prediction of the popular vote for the two main parties. If there is a third candidate (or a fourth) in the general election, the model doesn’t acknowledge the candidate’s existence, in effect assuming that the two traditional parties are affected equally. That assumption may not be correct.
It probably was incorrect in the 1992 election, a terrible one for the model, which predicted that President George H. W. Bush would be re-elected. Ross Perot received 19 percent of the vote, probably hurting President Bush in ways not captured by the model, Professor Fair says. That was also the election in which the campaign of the victor, Bill Clinton, relied on the mantra, “It’s the economy, stupid,” which could also be the slogan for Professor Fair’s model. He tweaked the model after that election, emphasizing economic factors even more. That was the last time he altered the model.
As I noted before, it also got the latest presidential election wrong, even though Fair says he adjusted the model to give greater weight to economic concerns -- of which there were plenty in 2012.
Think of it another way: How much attention should be paid to a forecaster or investment model that claimed to foretell market and economic cycles, yet completely missed the 2008-09 crash? You might take a look, but with knowledge of that huge oversight, you would probably give it little weight.
To be fair, the Sommer article does quote Fair as saying that his model may well be wrong about this coming election too: “Each election has weird things in it, yet the model usually works pretty well . . . This year, though, I don’t know. This year really could be different.”
Let’s look more closely at the process underlying the model.
After the 2012 election flub, the Wall Street Journal’s Justin Lahart wrote that:
The model relies on just three pieces of information: The per capita growth rate of gross domestic product in the three quarters before the election, inflation over the entire presidential term and the number of quarters during the term when GDP per capita growth exceeded 3.2%.
Plug those data points into Mr. Fair’s model, and it says that Mr. Romney should have taken 51% of the two-party vote to President Obama’s 49%. Instead, it looks like it was 51.3% for President Obama to Mr. Romney’s 48.7%. It was only the third time since 1916 that the model failed to predict the popular-vote winner.
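To make the structure of a model like this concrete, here is a minimal sketch of a three-input, linear vote-share forecaster of the kind Lahart describes. The function name and every coefficient below are illustrative placeholders, not Fair's published estimates; the point is only to show how few moving parts such a model has.

```python
# Sketch of a Fair-style two-party vote-share model.
# The three inputs match those described in the article; the
# coefficients are illustrative placeholders, NOT Fair's estimates.

def incumbent_vote_share(growth, inflation, good_news_quarters,
                         intercept=47.0, b_growth=0.7,
                         b_inflation=-0.7, b_good_news=0.8):
    """Predicted incumbent-party share of the two-party popular vote.

    growth: per-capita GDP growth in the three quarters before the
        election (percent)
    inflation: inflation over the presidential term (percent)
    good_news_quarters: quarters in the term when per-capita GDP
        growth exceeded 3.2 percent
    """
    return (intercept
            + b_growth * growth
            + b_inflation * inflation
            + b_good_news * good_news_quarters)

# Hypothetical inputs: sluggish growth, mild inflation, one strong quarter.
share = incumbent_vote_share(growth=1.5, inflation=1.5, good_news_quarters=1)
```

With only three inputs and fixed weights, everything the model "knows" about an election is compressed into a handful of numbers -- which is exactly why third-party candidates and other one-off events are invisible to it.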
It's easy to see the problems here:
- Fair's model focuses on the popular vote while ignoring the Electoral College, which determines who wins the election;
- The data it relies on, especially gross domestic product and inflation, are volatile, often unreliable in the short term and subject to substantial revision;
- The model seems to have been developed to use information that would have successfully predicted past elections; it seems almost trite to say it, but the past is, of course, no guarantee of future results;
- We can't tell if the model's track record was the result of random good fortune or not.
The narrative that was common in the mainstream press during the 2012 presidential election campaign was that it was a very tight race and the outcome would be close; it wasn't. Meanwhile, other modelers did better: neuroscientist Sam Wang of Princeton nailed the 2012 result, and Nate Silver of FiveThirtyEight had it mostly right. Though they were both on target, we should be cautious about whether these models were really better or just happened to come up with the correct result.
For investors, the lessons are clear: Don't put too much stock in models that can't stand the test of time, are oversimplified and may be random in their outcomes.
This column does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.
To contact the author of this story:
Barry Ritholtz at email@example.com
To contact the editor responsible for this story:
James Greiff at firstname.lastname@example.org