[Photo: Students walk past the Phi Kappa Psi fraternity house on the University of Virginia campus. Photographer: Jay Paul/Getty Images]

You Can't Just Accuse People of Rape

Megan McArdle is a Bloomberg View columnist. She wrote for the Daily Beast, Newsweek, the Atlantic and the Economist and founded the blog Asymmetrical Information. She is the author of “The Up Side of Down: Why Failing Well Is the Key to Success.”

Writing in the Washington Post about the University of Virginia rape case, Zerlina Maxwell asserts, "We should believe, as a matter of default, what an accuser says. Ultimately, the costs of wrongly disbelieving a survivor far outweigh the costs of calling someone a rapist."

Where to begin with this kind of statement? 

For one thing, even an outlandish accusation would not exactly be cost-free; it could be devastating. There would be police interviews and awkward questions at work. As Maxwell blithely notes in the piece, the accused might be suspended from his job. Does he have enough savings to live on until the questions are cleared? Many people don't. What about the Google results that might live on years after he was cleared? Sure, he can explain them to a prospective girlfriend, employer, or sales prospect. But what if they throw his message into the circular file before he gets a chance to explain? What about the many folks who will think (encouraged by folks like Maxwell) that the accusation would never have been made if he hadn't done something to deserve it?

But while the effect on the accused is one major problem with uncritically accepting any accusation of rape, it is not the only problem. There's another big problem -- possibly an even bigger one: what this does to the credibility of people who are trying to fight rape. And in that group I include not only journalists, but the whole community of activists who have adopted a set of norms perhaps best summed up by the feminist meme "I believe."

To see why, I want to digress for a moment into a problem that scientists have been wrestling with for a long time: false positives vs. false negatives. The relevance to the accusations that were printed in Rolling Stone may seem obvious, but it's worth exploring.

So here's an example of something that happened to me: I got tested for lupus by a lab-happy doctor who liked to check off as many boxes as possible on the forms. Actually, there's no test for lupus, so I got tested for a marker, something known as "antinuclear antibodies," or ANA. And the lab report came back "borderline." The test, my doctor gravely informed me, had a very low false positive rate -- only about 5 percent. Things were bad.

As you can imagine, I lost my marbles. I have mild rosacea, which causes facial redness -- could this be the infamous "butterfly mask" of the lupus patient? I cataloged every ache and tic that might have been the beginnings of this terrifying autoimmune disease that kills far too many of its victims. I decided I had every symptom. I started thinking about my will.

Then I saw the immunologist, who ran me through the statistics. The incidence of lupus, he said, is about 1 in 2,000. If you test indiscriminately, your test will turn up about 5 people in every hundred with a borderline or positive response. So if you test 2,000 people, you will get 100 false positives, and one true positive. If you limited it to females, you'd get better results, because women are twice as likely to get lupus, so you'd only get 50 false positives for every true positive. But either way, the upshot was the same: If there was no prior reason to think that you had lupus, then the odds are much greater that this is a false positive than that you actually have lupus -- even though the test itself has a low rate of false positives.
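
To make the immunologist's arithmetic concrete, here is a minimal Python sketch of the Bayesian calculation, assuming for illustration a 5 percent false positive rate, the base rates he cited, and (hypothetically) a test that catches every true case:

    # Bayes' rule applied to the lupus example: how likely is the disease,
    # given a positive test, when the disease itself is rare?

    def posterior(prior, false_positive_rate, sensitivity=1.0):
        """P(disease | positive test), via Bayes' rule."""
        true_pos = sensitivity * prior
        false_pos = false_positive_rate * (1 - prior)
        return true_pos / (true_pos + false_pos)

    # General population: incidence about 1 in 2,000; false positive rate 5%.
    print(posterior(prior=1/2000, false_positive_rate=0.05))  # ~0.01, i.e. ~1%

    # Women only: roughly twice the incidence, per the immunologist.
    print(posterior(prior=1/1000, false_positive_rate=0.05))  # ~0.02, i.e. ~2%

Even with a test that accurate, a positive result in a patient with no prior reason for suspicion leaves only about a 1-in-100 chance of disease, because false positives from the huge healthy population swamp the rare true positives.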

Moreover, if you looked at other things about me, there was even less reason to think that I had lupus. My rosacea looked nothing like the true "butterfly" mask to the trained eye; my sed rate indicated no inflammation; I had another, much less serious autoimmune disease, which he thought might cause a positive response on the ANA test. He couldn't say I didn't have lupus, because anything's possible; he could say that there was no real reason to think that I did.

After that incident, my doctor and I had a long, spirited conversation about statistics and Bayesian analysis. And one reason he is no longer my doctor is that he displayed very poor judgment in handling the trade-off between false positives and false negatives. That test should never have been run, because it was vastly more likely to produce unnecessary emotional anguish (and health-care spending!) than useful information.

These problems confound researchers and professionals who depend on statistics. There is an inherent trade-off between false positives and false negatives. We can try to reduce both kinds of error by using more accurate tests, but the trade-off itself remains: Do you use tests and methods that minimize the chance of false negatives (accepting that this means you'll get some false positives), or do you minimize the false positives, accepting that this means you'll miss some true cases? This is in essence what we are fighting about when we fight over the new recommendations for early mammograms.
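
To see the trade-off mechanically, here is a toy sketch (all numbers invented for illustration): suppose a screening score where true cases tend to score higher than non-cases, and a single cutoff decides the call. Lowering the cutoff drives false negatives toward zero at the cost of many more false positives, and raising it does the reverse:

    import random

    random.seed(0)

    # Toy screen: non-cases score around 0, true cases around 1.
    # The distributions overlap, so no cutoff is error-free.
    negatives = [random.gauss(0.0, 1.0) for _ in range(10_000)]
    positives = [random.gauss(1.0, 1.0) for _ in range(10_000)]

    for cutoff in (-0.5, 0.5, 1.5):
        false_pos = sum(s >= cutoff for s in negatives) / len(negatives)
        false_neg = sum(s < cutoff for s in positives) / len(positives)
        print(f"cutoff={cutoff:+.1f}  "
              f"false positives={false_pos:.1%}  false negatives={false_neg:.1%}")

A permissive cutoff misses almost no true cases but flags a large share of non-cases; a strict cutoff does the opposite. The choice determines where the errors land, not whether they exist -- which is exactly what the mammogram fight is about.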

Before we go on, let me be clear: I'm not comparing my situation to that of someone who was raped and could not get justice for the terrible crime that was committed against her. I was sad and worried for a month and had an expensive trip to a nice doctor for more blood tests. On the cosmic scale, this cannot even be compared to what a rape victim goes through. I offered the example of a badly done medical test not because it is morally similar, but because it is a concrete and relatively easy-to-explain illustration of the difficulties of decision-making under uncertainty.

So let's look at how these sorts of rules are actually being applied to rape victims on campus. Emily Yoffe's new article on how these cases are being handled is an absolute must-read to understand this landscape. Seriously, go read it right now and come back. I'll still be here.

What do you see in this article? People are frustrated by rape on campus and want it to stop. Their frustration is righteous, their goal laudable. In the name of this goal, however, they are trying to drive the rate of false negatives down to zero, and causing a lot of real problems for real people who are going through real anguish that goes far beyond weeping in the doctor's office. The main character is a boy who had sex with a friend. According to his testimony and that of his roommate (who was there, three feet above them in a bunk bed), the sex was entirely consensual, if extremely ill-advised. According to Yoffe, after the girl's mother found her diary, which "contained descriptions of romantic and sexual experiences, drug use, and drinking," the mother called the campus and announced that she would be making a complaint against the boy her daughter had sex with. Two years later, after a "judicial" process that offered him little chance to tell his side, much less confront his accuser, he is unable to return to school or to go anywhere else of similar stature, because the disciplinary action for sexual assault taints his record.

As I've written before, the very nature of rape makes these problems particularly difficult. On campus, especially, sexual assaults usually offer no physical evidence except that of an act that goes on hundreds of times every day, almost always consensually. A case typically comes down to only two witnesses, both of whom were often intoxicated at the time.

Worse, we don't have the kind of background statistics that my doctor might have used, but didn't, on the actual incidence of the problem. Statistical estimates of sexual assault vary widely depending on the definitions you use, and those definitions are often set by researchers who are determined not to miss any true positives. To see what I mean, let us turn to Yoffe:

It is exceedingly difficult to get a numerical handle on a crime that is usually committed in private and the victims of which—all the studies agree—frequently decline to report. A further complication is that because researchers are asking about intimate subjects, there is no consensus on the best way to phrase sensitive questions in order to get the most accurate answers. A 2008 National Institute of Justice paper on campus sexual assault explained some of the challenges: “Unfortunately, researchers have been unable to determine the precise incidence of sexual assault on American campuses because the incidence found depends on how the questions are worded and the context of the survey.” Take the National Crime Victimization Survey, the nationally representative sample conducted by the federal government to find rates of reported and unreported crime. For the years 1995 to 2011, as the University of Colorado Denver’s Rennison explained to me, it found that an estimated 0.8 percent of noncollege females age 18-24 revealed that they were victims of threatened, attempted, or completed rape/sexual assault. Of the college females that age during that same time period, approximately 0.6 percent reported they experienced such attempted or completed crime.

That finding diverges wildly from the notion that one in five college women will be sexually assaulted by the time they graduate. That’s the number most often used to suggest there is overwhelming sexual violence on America’s college campuses. It comes from a 2007 study funded by the National Institute of Justice, called the Campus Sexual Assault Study, or CSA. (I cited it last year in a story on campus drinking and sexual assault.) The study asked 5,466 female college students at two public universities, one in the Midwest and one in the South, to answer an online survey about their experiences with sexual assault. The survey defined sexual assault as everything from nonconsensual sexual intercourse to such unwanted activities as “forced kissing,” “fondling,” and “rubbing up against you in a sexual way, even if it is over your clothes.”

We also don't know the rate of false accusations -- "false positives." Over the last week, I've read countless articles saying that "we know" the rate is low -- between 2 percent and 8 percent. But there's a lot of cherry-picking in those figures, and even if there weren't, we still wouldn't know the rate of false accusations, because we can't measure that. What we tend to measure is the rate of accusations that investigators determined were false or unfounded, which is actually a very different number -- especially since the prior views of cops and researchers about the likelihood of false accusations are likely to influence whether they determine that a borderline case was false.
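
To see why the recorded number can diverge from the true one, consider a toy model (all rates invented for illustration): clear-cut cases get labeled correctly, but evidentiary toss-ups get resolved according to the investigator's prior, so the recorded "unfounded" rate tracks the investigator's beliefs as much as the underlying truth:

    # Toy model: a fixed share of reports are actually false, but some
    # cases are evidentiary toss-ups that investigators resolve according
    # to their prior beliefs about how common false reports are.

    def measured_false_rate(true_false_rate, borderline_share, skepticism):
        """Fraction of reports an investigator records as false.

        borderline_share: fraction of cases with no decisive evidence
        skepticism: probability a borderline case gets labeled false
        """
        clear_share = 1 - borderline_share
        # Clear cases are labeled correctly; borderline ones follow the prior.
        return true_false_rate * clear_share + borderline_share * skepticism

    # Same underlying truth (5% of reports false), different priors:
    for skepticism in (0.0, 0.2, 0.5):
        rate = measured_false_rate(true_false_rate=0.05,
                                   borderline_share=0.3,
                                   skepticism=skepticism)
        print(f"skepticism={skepticism:.0%} -> recorded false rate {rate:.1%}")

With the truth held fixed at 5 percent, the recorded rate ranges from 3.5 percent to 18.5 percent depending purely on how the toss-ups are resolved. The statistic measures the investigators as much as the accusers.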

To complicate things even further, what about the cases that don't get investigated? False rape accusations that are made to friends and family or colleges or employers or journalists could certainly have terrible impacts, but wouldn't show up in law-enforcement statistics. If the story by "Jackie" at UVA was false, or mostly false, then it would fall into this category. But we will never know the number of these accusations, or the effect that they have on those accused.

People who are fighting to stop rape, on campuses and elsewhere, are interested in driving the number of false negatives as low as possible. That's understandable. But as in other contexts, the cost is more false positives. The way to ensure you don't miss any cases where a rape actually occurred is to believe every single story you are told, no matter how unlikely your Bayesian analysis suggests it may be. I have never come across a case of a feminist actually saying "women don't lie about rape," but in the media storm surrounding the Duke lacrosse case, I heard claims nearly that strong. After the case blew up, people became somewhat more cautious, but there is still a presumption that this is so rare that people who question the stories of rape victims are on some sort of misogynist unicorn hunt. Witness the way people who raised very legitimate questions about Jackie's story were called "rape truthers," "rape denialists," and so forth, as if suggesting that this particular rape might not have happened was morally and logically equivalent to saying that rape never happens at all.

One cost of minimizing false negatives is to the false positives who get hurt. But another cost is to the credibility of all rape reports. People who responded to the problems with the Rolling Stone story by saying that this didn't have anything to do with the real problem -- the culture of rape on college campuses -- were missing something important. Actually, two important things. First, that deciding what to do in the face of these trade-offs between false positives and false negatives is a vital matter of public debate in all areas of policy, and this story cast important light on how those trade-offs may have been made outside the public eye. And second, that by declaring that this story, which just a week before was a grave matter demanding the urgent attention of the nation, somehow became trivial and irrelevant when it started to look as if it might be false, writers and activists were suggesting that they simply didn't care about false positives. Which undercuts the very public trust they need to advance their cause.

Authorities are not the only people who weigh statistical probabilities; the public does too. Activists can force authorities to use standards that never fail to include a true report. Journalists like Rolling Stone's Sabrina Rubin Erdely can weight their journalism toward inclusion rather than exclusion. But when everyone else knows that you have set a low threshold for declaring a story true, they will evaluate your information accordingly. This is why journalism needed to make it clear that good reporters do attempt to exclude false positives, even at the risk of failing to tell some true stories -- and to their credit, great reporters like Hanna Rosin and T. Rees Shapiro did just that.

In aggregate, focusing only on the problem of false negatives undermines the credibility of people who are trying to make a case that campus rape is a huge problem. In specific, it may also undermine the search for justice.

Activists fighting rape are fighting for two things that actually work against each other. On the one hand, they want the harshest possible moral, social, legal, and administrative sanctions for sexual assault -- as they should, because this is a crime second only to murder in its brutality. On the other hand, they want the broadest possible standards for deciding that a rape has occurred, weighted very, very heavily toward including true assaults, rather than excluding false, ambiguous, or hard-to-prove accusations.

This is not a bargain that a liberal society will strike. You can have drastic punishment of offenses, or you can have a low threshold of evidence for imposing punishments; you cannot have both. If you broaden your criteria to include lesser offenses like "non-consensual kissing," or more cases where there's a higher possibility that the accused was innocent, then you will encounter resistance to heavy punishment. The jury of public opinion will nullify. 

One reading of what has been happening on college campuses in recent years is that under various sorts of pressure, you're seeing colleges adjudicate more cases where it's difficult to ascertain that something horrible definitely happened -- and their judicial systems often respond with a slap on the wrist because the people hearing these cases don't want to take the chance of dropping a draconian penalty on an innocent boy.

Obviously, I wish these trade-offs didn't exist: that we could be sure of punishing each and every man or woman who commits sexual assault, and only them, with the full and devastating penalties that such a grotesque crime deserves. Yet we live in a world of imperfect people, with imperfect memories; uncertainties abound everywhere we look. So we have to be careful about the balance we strike between false negatives and false positives -- between denying justice to victims, and punishing the innocent. We cannot make these trade-offs go away simply by asserting that they don't matter.

This column does not necessarily reflect the opinion of Bloomberg View's editorial board or Bloomberg LP, its owners and investors.

To contact the author on this story:
Megan McArdle at mmcardle3@bloomberg.net

To contact the editor on this story:
James Gibney at jgibney5@bloomberg.net