What we learned from the University of Virginia.


A Messy Answer Beats 'Too Good to Be True'

Megan McArdle is a Bloomberg View columnist. She wrote for the Daily Beast, Newsweek, the Atlantic and the Economist and founded the blog Asymmetrical Information. She is the author of "The Up Side of Down: Why Failing Well Is the Key to Success."

A really breathtaking academic fraud has the makings of a really well-written whodunnit. When the ending is revealed, you can look back and see all the clues that pointed you to the answer. And yet you can also look back and see yourself stumbling along in the dark, blindly following miscues down the false trail.

So it is with Michael LaCour's pathbreaking paper on changing Americans' minds about gay marriage. He claimed that merely having gay volunteers talk to subjects about gay marriage produced significant and lasting increases in the level of support for it. A lot of journalists and academics followed him down the rabbit hole, including "This American Life" and Ireland's gay marriage campaign. That sounds like sneering, but I sure didn't catch any problems with the paper; the only reason I didn't write it up myself is that voter canvassing is a little bit outside of my wheelhouse.

In retrospect, however, the problems are obvious. The effect itself was not surprising (contact with gay people is generally found to improve support for gay marriage). But the effect was huge. The survey was extremely large for a graduate student to be conducting, and he improbably claimed to have collected almost $800,000 in grant money to help pay for it. Ultimately, the survey firm he claimed had done the work disavowed any knowledge of the project.

Should journalists have checked with this firm and detected this fraud before reporting the study's findings? In short: no. The media is not set up to provide real-time debunking of plausible scientific claims. The academy does aim to do this, with replication. In this case, the system basically worked as it's supposed to. A grad student set out to replicate LaCour's work and could not.

Journalists and academics alike can take a lesson from this case, and from the discredited Rolling Stone story alleging gang rape at the University of Virginia. And from Michael Bellesiles, and from Diederik Stapel, and from a lot of bad research and reporting, not all of it fraudulent but all of it falling short of "correct": the danger of selecting for rubbish.

I wrote about this in regard to the Rolling Stone story a bit, but it also applies to Janet Cooke's infamous (revoked) Pulitzer, Michael Daisey's (retracted) "This American Life" episode, and numerous other stories of fabulism and fraud in the wordsmith industry. Not all of these were fraud on the part of the authors; some of them were taken in. But I think there is a common thread. We reward people not for digging into something interesting and emerging with great questions and fresh uncertainty, but for coming away from their investigation with an outlier -- something really extraordinary and unusual. When we do that, we're selecting for stories that are too frequently, well, incredible.

As anyone who's actually reported a long, complicated issue can tell you, the world rarely offers you the kind of story your colleagues are waiting to hear and cheer: a straightforward, simple narrative with obvious conclusions. Yet we continue to pay the most attention to those who provide those narratives -- and so, we shouldn't be shocked when some of those people turn out to have delivered by being credulous, or fraudulent. I mean, we should be appalled, because it's wrong, but not entirely surprised. And we should take every possible precaution to weed these stories (and the fraudsters) out of our profession.

Something similar can be said for academia, where, as one political science professor of my acquaintance notes, no one is clamoring to get published in the Journal of Null Results. And yet social scientists, like journalists, should understand that large effects and counterintuitive findings are the least likely outcome of a study. Mostly what you're going to get, when you start out to look at something, is "Meh, it's messy and complicated and I'm not sure there's a clear lesson to be drawn from this." That's valuable information! We should be happy that someone bothered to find out for us. Instead, those people are at a disadvantage when they go on the job market. I don't think it's an accident that LaCour was headed to a top-ranked school when this came out.

This is daft. A good social scientist, or a good reporter, can think of an interesting question. They can find ingenious ways to investigate that question. What they cannot in any way do is guarantee interesting results from that investigation. If they can provide such a guarantee -- if they get fascinating result after fascinating result -- then you should not think "This person must be a genius!" You should think "this person must be doing something wrong."

When you write a book about failure, as I did, you see this over and over: People become focused on the outcome, instead of the process. It's a natural human tendency, because the outcome is easy to observe, while the process is fiddly and tedious. Unfortunately, the process is what actually matters. Most of the time, when a health care provider fails to follow proper hand-washing procedure, nothing bad happens. But if they don't follow it every time, it is almost certain that over the course of their career they will kill someone -- and also that it will be almost impossible to connect that particular death to a particular failure to wash their hands. Which is why the important thing to do is to make sure they wash their hands. Every. Single. Time.

And if we want fewer false stories from the media and academia, it's no mystery how to do that: We need to reward people for rigorous investigations of interesting questions, not for finding the incredible. Unless we do that, we shouldn't be too astonished when we occasionally learn that we've been stumbling around in the dark.

  1. Yes, an interesting thinker may be able to generate worthwhile observations about messy data, and a good writer can communicate something worthwhile about a complicated story. What I'm talking about here is something different: people who are rewarded for digging up results that are stunning by themselves.

This column does not necessarily reflect the opinion of Bloomberg View's editorial board or Bloomberg LP, its owners and investors.

To contact the author on this story:
Megan McArdle at mmcardle3@bloomberg.net

To contact the editor on this story:
Philip Gray at philipgray@bloomberg.net