Forecasting Is Risky, Especially About the Future

It isn't supposed to be guesswork.

Noah Smith is a Bloomberg View columnist. He was an assistant professor of finance at Stony Brook University, and he blogs at Noahpinion.

A while ago, I wrote about the people who warned in 2010 that quantitative easing would result in inflation, but who didn't seem to change their beliefs much after inflation failed to materialize. Others wrote about the same phenomenon. Of all the defenses offered by the 2010 inflationistas for the constancy of their views, the most subtle and interesting is the claim that predicting an event is different from predicting the risk of an event. Highly successful finance magnate Cliff Asness, writing at RealClearMarkets, makes this defense:

When you warn of a risk and it doesn't come to pass I do think you owe the world this admission, even if you later explain what it means to warn of a risk not a certainty, and offer good reasons why despite reasonable worry this particular risk didn't come to pass...

We did not make a prediction, something we certainly know how to do and have collectively done many times. We warned of a risk...If you believe the risk of an earthquake is 10 times normal, but 10 times normal is still not a high probability, it's rational to warn of this risk, even if the chance such devastation occurs is still low and you'll look foolish to some when it, in all likelihood, doesn't happen. If you can't point out risks you are left with either silence as an option, or overly and falsely self-confident forecasts...

I think when you boldly forecast a risk you are saying more than "this might happen but either way I can't be blamed" and something less than "this will happen and I stake my reputation on it." We should all be mature enough to know the difference[.]

It is indeed a subtle distinction. In fact, it is several subtle distinctions rolled into one.

First, there is the issue of how to trust a forecaster who only forecasts risks, not events. Obviously, if you predict a 1 percent chance of an earthquake, and it doesn't happen (or even if it does!), that tells you very little about how good the 1 percent number was. Matt Yglesias has a good post at Vox that deals with this issue in the context of election forecasts.

Ideally, the way you would deal with this is to get the forecaster to make many repeated predictions, and then measure whether the probabilities they state match the frequencies with which the predicted events actually occur. But in practice, this usually isn't possible, and when the events are big, historic, unprecedented one-shot things like QE, it isn't possible at all.
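To make the idea concrete, here is a rough sketch in Python of what such a calibration check might look like. Everything in it is invented for illustration; none of the numbers come from any actual forecaster's record.

    # Minimal calibration check: do events a forecaster assigns roughly
    # p probability actually happen about p fraction of the time?
    # All numbers below are made up for illustration.

    def calibration_table(probs, outcomes, bins=5):
        """Group forecasts into probability bins and compare the average
        stated probability in each bin with the observed frequency."""
        table = []
        for i in range(bins):
            lo, hi = i / bins, (i + 1) / bins
            picked = [(p, o) for p, o in zip(probs, outcomes)
                      if lo <= p < hi or (i == bins - 1 and p == 1.0)]
            if picked:
                avg_p = sum(p for p, _ in picked) / len(picked)
                freq = sum(o for _, o in picked) / len(picked)
                table.append((lo, hi, len(picked), avg_p, freq))
        return table

    # Hypothetical record: stated chance of "high inflation next year"
    # and whether it actually happened (1 = yes, 0 = no).
    stated = [0.9, 0.8, 0.85, 0.7, 0.9, 0.6, 0.8, 0.75]
    actual = [0, 0, 1, 0, 0, 0, 1, 0]

    for lo, hi, n, avg_p, freq in calibration_table(stated, actual):
        print(f"forecasts in [{lo:.1f}, {hi:.1f}): n={n}, "
              f"avg stated={avg_p:.2f}, observed frequency={freq:.2f}")

A well-calibrated forecaster's stated probabilities and observed frequencies would line up bin by bin; the forecaster sketched above, who keeps saying 70 to 90 percent while the event happens a quarter of the time, would not.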

Second, there is the distinction between making a prediction and updating one's beliefs based on the outcome. Brad DeLong points this out. Even if it was reasonable to worry about inflation back in 2010, that doesn't necessarily excuse someone who continues to worry just as much about inflation in 2014. Should one be expected to change one's model of macroeconomics whenever it makes a bad prediction, or should the failure just be regarded as the error term in the model?

Third, there is the issue of time. What if, in 2027, there is a burst of inflation for no apparent reason? Will the people who predicted inflation as a result of QE in 2010 say "See? We told you that Fed balance sheet expansion had to cause inflation sooner or later!"? This sounds like a bit of a silly time horizon, but most modern economic models assume that people really do think and plan that far ahead. This may, of course, say more about modern economic models than about reality, but the question of long-delayed effects shouldn't be disregarded.

Finally, there is the question of what information set someone used when issuing his or her warning. Did the signatories of the 2010 letter think only about the experience of the U.S. in the 1970s when they warned about inflation? Or had they stopped to consider the experience of Japan, whose repeated rounds of QE have never unleashed inflation of more than 1 percent?

The fundamental question is this: Suppose there are people out there who are broken records when it comes to inflation. Rain or shine, come what may, they warn of inflation. In fact, you could replace these people with a simple script that sent you a text message every day saying, "Raise interest rates, shrink the Fed's balance sheet, or you're going to get inflation!"

Obviously, these warnings would have zero informational content about actual inflation. But how would you go about differentiating between such a bot and a real human being? Is there some kind of Turing Test for macroeconomic forecasters?

Perhaps if we had forecasters go on the record with quantitative forecasts instead of vague wordy warnings, we might be able to estimate the informational content of their predictions. Or perhaps, if we had prediction markets where people could bet on different forecasters' predictions, we could harness the wisdom of crowds to extract the value of each prognosticator's prognostications.
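For what it's worth, the standard statistical tool for the first approach is a proper scoring rule such as the Brier score, which rewards stated probabilities that track what actually happens. Here is a small Python sketch, again with invented numbers, comparing a hypothetical forecaster against the broken-record bot described above and against a naive base-rate guess:

    # Brier score: mean squared error between stated probabilities and outcomes.
    # Lower is better. All numbers here are invented for illustration.

    def brier(probs, outcomes):
        return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

    # Hypothetical yearly forecasts of "inflation above 3 percent" vs. what happened.
    outcomes = [0, 0, 0, 1, 0, 0, 0, 0]                  # inflation spiked once
    forecaster = [0.2, 0.3, 0.1, 0.6, 0.2, 0.1, 0.2, 0.1]
    broken_bot = [0.9] * len(outcomes)                   # "inflation is coming!" every year
    base_rate = [sum(outcomes) / len(outcomes)] * len(outcomes)  # always guess the average

    print("forecaster:        ", round(brier(forecaster, outcomes), 3))  # 0.05
    print("broken-record bot: ", round(brier(broken_bot, outcomes), 3))  # 0.71
    print("base-rate baseline:", round(brier(base_rate, outcomes), 3))   # 0.109

On a record like this, the bot's constant warnings score far worse than even a naive base-rate guess, which is one way of putting a number on "zero informational content."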

In the meantime, our tools for identifying unreliable forecasters are rather primitive -- a combination of reputation, bluster, excuses, insults and counter-insults. It's all a bit silly, and it generates a lot of bad feelings all around. But what else can we do?

This column does not necessarily reflect the opinion of Bloomberg View's editorial board or Bloomberg LP, its owners and investors.
