Two Lingering Suspicions About Economic Statistics

Yes, the government manipulates the data. That can be a good thing.


The conspiracy theory that Barack Obama’s administration is brazenly making up economic statistics really isn’t credible. The argument that the official unemployment rate disguises the true state of the economy does have merit, but this has more to do with changes in the labor market than changes in how unemployment is measured. 1

There are two other suspicions about government economic statistics, though, that can’t be so easily dismissed. One is that the government numbers are unnaturally smooth; the other is that they suffer from what economics-data skeptic John Williams charmingly dubs “Pollyanna creep,” the tendency of statistics agencies to make adjustments that -- on balance, over the decades -- leave the data looking rosier than the economic reality. Here are some thoughts on both.

Smoothing the data. Yes, the government is manipulating the economic numbers to make them less volatile! Mostly that’s a good thing. Here’s what the closely watched nonfarm payroll employment numbers look like with and without seasonal adjustments:

[Chart: “Smooth and Not Smooth” -- monthly change in nonfarm payroll employment, with and without seasonal adjustment. Source: Bureau of Labor Statistics]

Without the seasonal adjustments, then, the monthly jobs numbers would be unintelligible. But such techniques can sometimes obscure real changes in the job market.
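To see what seasonal adjustment is doing, here is a toy sketch. The BLS actually uses the far more elaborate X-13ARIMA-SEATS program; this version just assumes a simple additive seasonal pattern, and every number in it is invented for illustration.

```python
# Toy additive seasonal adjustment -- illustration only. The BLS uses the
# far more sophisticated X-13ARIMA-SEATS program; all numbers here are invented.

def seasonal_factors(series, period=12):
    """Average deviation from the series mean at each point in the cycle."""
    mean = sum(series) / len(series)
    return [sum(series[p::period]) / len(series[p::period]) - mean
            for p in range(period)]

def seasonally_adjust(series, period=12):
    factors = seasonal_factors(series, period)
    return [x - factors[i % period] for i, x in enumerate(series)]

# Synthetic monthly payroll changes (thousands): steady underlying job
# growth of 200 per month, swamped by a big recurring seasonal swing.
swing = [-300, -100, 50, 150, 250, 300, -150, 100, 200, 150, 250, -900]
raw = [200 + swing[m % 12] for m in range(24)]

adjusted = seasonally_adjust(raw)  # recovers the steady 200 underneath
```

In this made-up series the raw numbers swing by 1,200 (thousand) over the year while the adjusted series is a flat 200 -- which is exactly the point: the seasonal pattern, not the underlying trend, dominates the unadjusted data.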

Consider another regular adjustment that the Bureau of Labor Statistics makes to the payroll numbers, using what’s called the “birth/death model.” The payroll data is assembled from a monthly survey of 146,000 employers that inevitably misses a lot of newly formed businesses. The BLS eventually finds them using the state unemployment insurance records that nearly all employers have to file, but, to generate monthly numbers on a timely basis, it uses a statistical model to estimate how many jobs new businesses have created each month. As the BLS warned when it unveiled the current approach in 2002 (it was adjusting the numbers before then, too, just with a different technique):

The most significant potential drawback to this or any model-based approach is that time series modeling assumes a predictable continuation of historical patterns and relationships and therefore is likely to have some difficulty producing reliable estimates at economic turning points or during periods when there are sudden changes in trend.

This blindness at turning points plagues a lot of statistics that are released before all the underlying data is in, most notably the quarterly gross domestic product numbers. The early GDP estimates are full of projections and extrapolations based on past experience. Most of the time those extrapolations work out fine, but the advance estimates are less volatile than the eventual revised GDP figures, and were way too optimistic at the beginning of the last recession. 

My friend Aaron Brown, risk manager at AQR Capital Management and the author of several books on risk and gambling, thinks those who use surveys to measure things such as the unemployment rate are also affected by a deeper tendency to ignore inconvenient data. He e-mailed this in response to one of my earlier columns:

They must select a sample and weight responses; they must choose questions and code answers; they must deal with non-responses, ambiguous responses, suspicious responses and lots of other stuff. People have all sorts of employment situations, reducing it all to a single unemployment rate requires all kinds of choices. Moreover the people doing the work have a big incentive to produce clear, consistent, responses that reflect expectations, and zero incentive to be accurate.

I don’t know about “zero incentive” to be accurate. If the unemployment rate compiled by the BLS followed a trajectory wildly different from other government employment statistics and from outside metrics such as Gallup’s weekly jobs survey, people would notice. But then that’s one more reason to produce data that’s reasonably consistent with expectations.

For consumers of government economic data, the overarching lesson here would seem to be that they’re not always going to tell you when something new and important is happening. That’s why data watchers often look to other, less-manicured numbers. Alan Greenspan famously used to check the price of No. 1 heavy melt steel scrap every day. These days those who are convinced that the job market is much weaker than the payroll numbers indicate look to the daily tax-receipt data from the Treasury Department. But more-volatile indicators like that are inevitably going to give a lot of false signals, too.

Pollyanna creep. Over the decades, the U.S. statistical agencies have made change after change in how economic indicators are measured and calculated. These changes have been done in the open, often at the urging of academic economists and presidentially appointed commissions. But, the theory goes, changes that make the economy -- and thus the nation’s political leaders -- look better are more likely to be implemented than changes that make the numbers uglier, and the cumulative effect is a statistical picture ever more removed from reality. Here’s how political commentator Kevin Phillips characterized the process in a Harper’s article in 2008:

The deception arose gradually, at no stage stemming from any concerted or cynical scheme. There was no grand conspiracy, just accumulating opportunisms.

It is the changes made in the calculation of inflation over the past quarter-century that have come under the most fire from Phillips and other skeptics. The crucial event in this tale was the creation by the U.S. Senate in 1995 of the Advisory Commission to Study the Consumer Price Index, widely known as the Boskin Commission, for its chairman, Stanford University economist Michael Boskin. The Boskin Commission concluded in 1996 that the Consumer Price Index overstated inflation by about 1.1 percentage points a year by failing to adequately account for, among other things, consumers’ substitution of cheaper items for relatively more expensive ones and the rising quality of many products.

It did not escape anyone’s attention at the time that this conclusion was extremely convenient for a federal government with huge future spending commitments (Social Security, mainly) that are indexed to inflation. Lowering measured inflation also has the impact of increasing reported real GDP growth, which makes whoever is in office look better.
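The arithmetic behind that last point is simple: real growth is nominal growth deflated by measured inflation, so shaving the deflator mechanically raises reported real growth. A sketch, with growth rates invented purely for illustration:

```python
# Real growth = nominal growth deflated by measured inflation.
# The 4% nominal figure and the two deflators below are invented.
nominal_growth = 0.04  # 4% nominal GDP growth

real_at_2pct_inflation = (1 + nominal_growth) / 1.02 - 1  # ~1.96% real growth
real_at_1pct_inflation = (1 + nominal_growth) / 1.01 - 1  # ~2.97% real growth
```

Shave one point off measured inflation and, with the same nominal economy, reported real growth rises by roughly a point.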

There are lots of pretty convincing economic arguments, though, for why the BLS should be adjusting the CPI to reflect changing consumer preferences and the changing array of products and services available. It was already doing this before the Boskin Commission; the commission’s report just led to something of an acceleration for a few years in the late 1990s.

So ... have these changes led to a drastic understatement of inflation? Boskin Commission member Robert J. Gordon, an economist at Northwestern University, estimated in 2006 that the CPI was still overstating inflation by a percentage point. BLS economists John S. Greenlees and Robert B. McClelland, in an exasperated and quite informative rebuttal to critics in 2008, allowed that changes made to the CPI formula in 1999 to reflect consumer substitution of relatively cheaper products had reduced reported inflation by 0.28 percentage points a year relative to the old formula. They didn’t have similar estimates for other changes, though.
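To get a feel for what that 0.28-point change amounts to over time: if you assume, purely for illustration, that it applied as a uniform annual reduction over the roughly 17 years from 1999 to this column’s vantage point, the reported price level ends up about 5 percent lower than it would have been under the old formula.

```python
# Cumulative effect of the 1999 geometric-mean formula change, assuming
# (purely for illustration) a flat 0.28-point annual reduction for 17 years.
years = 17
gap = 1.0028 ** years - 1  # ~0.049, i.e. price level about 4.9% lower
print(f"price level about {gap:.1%} lower than under the old formula")
```

Real enough to matter for indexed benefits over decades, but nowhere near the wholesale understatement the harsher critics allege.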

The aforementioned John Williams, whose Shadow Government Statistics service rose to prominence during the financial crisis and subsequent recession, does have an estimate: He contends that changes made by the BLS since the early 1980s now cause it to understate annual inflation by more than seven percentage points (he reported 8.7 percent inflation in June versus the government’s 1.0 percent).

Wow. Could that really be? I decided to put it to the Big Mac test. Thanks to the Economist’s famous Big Mac Index, which is intended to measure relative purchasing power across nations, it’s easy to get one’s hands on burger prices going back to 1986. Back then, a Big Mac cost $1.60 in the U.S. Now the average price is $5.04.

If prices had tracked the CPI over that period, a Big Mac would cost just $3.49 today. Using the CPI for food outside the home brings that up to $3.70. Look, the government is understating Big Mac inflation! Then again, if prices had tracked Williams’ adjusted CPI, a Big Mac would now cost $16.79. That’s much farther off than the official number.
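Those three price paths can be restated as implied annual inflation rates, assuming a 30-year span from 1986 to the prices quoted above:

```python
# Annualized inflation rate implied by each price path, using the prices
# in the text and assuming a 30-year span (1986 to the quoted prices).
def annualized(start, end, years=30):
    return (end / start) ** (1 / years) - 1

big_mac  = annualized(1.60, 5.04)   # actual Big Mac path: ~3.9% a year
cpi      = annualized(1.60, 3.49)   # official-CPI path:   ~2.6% a year
williams = annualized(1.60, 16.79)  # Williams-CPI path:   ~8.2% a year
```

The official CPI path runs about 1.3 points a year below actual Big Mac inflation; the Williams-adjusted path runs more than 4 points a year above it.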

So, yes, I am willing to believe that there might be Pollyanna creep at work in the government’s economic statistics. But many of today’s skeptics have a tendency to correct for it with what I guess you could call a Cassandra leap.

This column does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.
  1. If I seem to be dismissing these two concerns too briskly, it’s because I already wrote two columns (linked above) discussing them.

To contact the author of this story: Justin Fox

To contact the editor responsible for this story: Stacey Shick
