
Conservatives' New Climate Argument Fails

Sure, blind faith in scientists is bad. But so is stubborn distrust in the face of overwhelming evidence.

Given the inherent uncertainties in data, who's to say the earth is round?

Photograph: Paolo Nespoli - ESA/NASA via Getty Images

A new argument has started to crop up in debates over climate change. It goes like this: Science couldn’t predict the outcome of the last election, or the bumps in the economy, so why should we believe scientists when they try to predict the future of Earth’s climate?

For example, a recent New York Times column -- the first from new op-ed writer Bret Stephens -- starts with a cautionary tale about the failure of data analytics to guide Team Clinton to victory in 2016, then segues into a discussion of climate-change skepticism. Given the “inherent uncertainties of data,” Stephens argues, doubters have a right to distrust “overweening scientism.” He writes:

We live in a world in which data convey authority. But authority has a way of descending to certitude, and certitude begets hubris. From Robert McNamara to Lehman Brothers to Stronger Together, cautionary tales abound.

But to put this in context, science makes all kinds of predictions that do hold up. Consider last year’s finding of gravitational waves: Scientists reported that they’d detected ripples in space-time generated by a collision of two black holes some 1.3 billion light years away. The invisible waves were predicted by Einstein’s theory of general relativity a century ago.

Even if this particular claim is later found to be in error, it sits within an interconnected body of knowledge. If physics weren’t on reasonably good footing, we wouldn’t be walking around with devices that talk to satellites to pinpoint our locations. If not for a general trust in physics, airlines would have to drag people kicking and screaming to get them on their planes.

Why, then, can some areas of science predict invisible space-time ripples, but others can’t predict elections? I’ve been talking to scientists, philosophers and historians about this situation for months. There are, it turns out, some common characteristics of scientific pursuits that make good predictions.

One is the tradition scientists in some fields have of submitting to peer review, and of making their procedures transparent so other people can reproduce their results. This creates an interconnected body of knowledge. Great science combines great minds. Einstein himself wavered over whether his theory predicted the existence of gravitational waves. Other scientists realized that it did, and they dreamed up a creative way to detect them.

Fields of science with good track records for prediction often work by discerning patterns and insights that explain the world. The better the insights, the better the predictions -- on subjects ranging from eclipses to chemical reactions to the behavior of ants to the existence of black holes.

In contrast, many of the data-driven algorithms that private companies develop and use to, say, predict election results are opaque. They aren’t peer-reviewed. Their claims aren’t subject to replication. They don’t reveal insights or explanations that others can test.

Established fields of science also gain predictive power by requiring scientists to quantify their uncertainties. For some, this isn’t just good practice but part of the very definition of science. When scientists graph their measurements, they draw vertical lines -- error bars -- which indicate how inherently imprecise their measurement systems are.

There are good cautionary tales about failure to use error bars. One comes from forensic science -- the use of fingerprints, hair analysis and the like to solve crimes. A group of scientists looking into forensics for a recent government report concluded that it shouldn’t be considered a science at all, because people are doing such a poor job of calculating error bars. Expert witnesses mislead juries with statements about “matches” when all they have are probabilities.

So it’s important to look closely at climate science and make sure its practitioners aren’t making the same mistakes. And investigations by the National Academy of Sciences and others don’t reveal the kinds of problems that plague forensics.

Climate science grew out of physics and chemistry -- disciplines with explicit rules for dealing with uncertainty. The first climate model came from the calculations of Swedish chemist Svante Arrhenius in 1896. The basic principles behind his model have been tested in laboratory experiments and used to predict temperatures on Venus and Mars.

Earth is more complex than its neighbors because it’s covered in water. Atmospheric temperatures affect the state of the water -- ice, liquid or vapor -- which in turn affects the temperature. But that’s okay -- scientists are allowed to deal in complex phenomena as long as they do a good job of calculating their uncertainties.

Individual scientists make mistakes, like everyone else, but if you really want a cautionary tale that’s relevant to climate change, it should involve a whole field misleading the public and being used to make harmful policy. It’s hard to find a better example than the now-discredited belief that dietary fat is killing people. As journalist Gary Taubes described it in Science in 2001, and later in the New York Times Magazine, the idea had political appeal to those on the left who were upset by overconsumption, cruelty to animals and the environmental toll of raising animals for meat.

As Taubes tells it, scientists were in disagreement and lacked the kind of long-range health data they needed to understand the effects of dietary fat. The National Academy of Sciences investigated and was blasted for failing to endorse the anti-fat belief. Back in the labs, scientists were coming across evidence that different fats had different physiological effects, some quite beneficial. But demand was growing for a simple recommendation. “Once politicians, the press, and the public had decided dietary fat policy,” Taubes wrote, “the science was left to catch up.”

If there is a lesson to be learned from the fat debacle, it’s that the press and policy makers shouldn’t get ahead of scientific consensus. Scientists do make mistakes, but scientific methods in many fields guard against unwarranted certainty. (Science can make some predictions with near-certainty -- the solar eclipse will certainly happen on Aug. 21.) And of course, there is a consensus on climate change. Scientists shouldn’t be trusted blindly, but stubborn distrust in the face of evidence defeats the purpose.

This column does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.

    To contact the author of this story:
    Faye Flam at

    To contact the editor responsible for this story:
    Tracy Walsh at
