Math Stumps Your Doctor, Too

Faye Flam is a Bloomberg View columnist. She was a staff writer for Science magazine and a columnist for the Philadelphia Inquirer, and she is the author of “The Score: How the Quest for Sex Has Shaped the Modern Man.”

Medical science is getting more and more like math. There’s big data to parse, genes to sequence, neural networks to model. That means doctors and patients will be receiving less advice and more menus of probability ranges -- the kind of information that tends to turn the human mind to mush.

Studies going back to psychologist Daniel Kahneman’s work in the 1970s and ’80s show that even doctors tend to misunderstand probabilities, especially as they apply to risk. That’s a problem but not an insoluble one. Intuition can be retrained. People can learn to look at uncertainty in a different way.

Take the famous hypothetical example of a test that is 95 percent accurate (meaning it misclassifies 5 percent of the people who take it, sick or healthy) for a disease that affects 0.1 percent of the population. Imagine you’re a doctor and your patient tests positive. What is the chance that she has the disease? Most people’s intuitive answer is a rather dire 95 percent. This is wrong in a big way. Despite the ominous test result, the patient is unlikely to be sick.

“Even doctors and medical students are prone to this error,” wrote Aron Barbey, a cognitive neuroscientist at the University of Illinois, in a paper on risk literacy published last month in the journal Science.

Some people do get the right answer: that the patient has about a 2 percent chance of having the disease.

Those few with math training can get help from a formula called Bayes’ Theorem.
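
For readers who want to see that machinery at work, here is a minimal sketch in Python. It is not from the column, and it assumes "95 percent accurate" means the test errs 5 percent of the time on both sick and healthy people:

    # Bayes' Theorem applied to the hypothetical screening test.
    # Assumption: 95 percent true-positive rate, 5 percent false-positive rate.
    prevalence = 0.001          # 0.1 percent of the population is sick
    sensitivity = 0.95          # P(positive | sick)
    false_positive_rate = 0.05  # P(positive | healthy)

    # Total probability of testing positive, sick or not.
    p_positive = (sensitivity * prevalence
                  + false_positive_rate * (1 - prevalence))

    # Bayes' Theorem: P(sick | positive)
    p_sick_given_positive = sensitivity * prevalence / p_positive
    print(f"{p_sick_given_positive:.1%}")  # 1.9% -- about 2 percent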

But there’s also an intuitive approach that requires no formula at all. Imagine 1,000 people getting the test. On average, one will have the disease. The 5 percent error rate means that about 50 of the 999 healthy people will test positive. Now it’s easy to see that the group of false positives is about 50 times bigger than the group of real positives. In other words, just 2 percent of the people testing positive are likely to be sick.
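
That counting translates almost line for line into code. A companion sketch, under the same assumed test parameters as above:

    # Natural frequencies: imagine 1,000 people taking the test.
    people = 1000
    sick = people * 0.001             # on average, 1 person has the disease
    healthy = people - sick           # the other 999 do not
    true_positives = sick * 0.95      # about 1 real positive
    false_positives = healthy * 0.05  # about 50 false alarms
    share_sick = true_positives / (true_positives + false_positives)
    print(round(false_positives), f"{share_sick:.0%}")  # 50 false alarms, 2%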

Barbey explains that some people have natural statistical skills, but most of us handle counts of events better than percentages expressing the probability of a single event. It’s easier to visualize 50 out of 1,000 people getting a false positive result than to picture a 5 percent error rate. This trick works especially well for untangling conditional probabilities like the disease test, where the task is to calculate the odds of X given a condition Y -- in this case, X being the disease and Y a positive test result. These crop up all the time in medicine, sometimes in cases involving life or death.

Doctors don’t always err on the side of overstating risk. The opposite mistake was illustrated last year as an extended cartoon in the Annals of Internal Medicine. In the graphical article, Yehuda Z. Cohen, who teaches clinical investigation at Rockefeller University, tells the story of a mathematician friend who feared he had the deadly neurological disease ALS. The mathematician was worried because one of his legs had gone numb and stayed that way for days. Cohen, then a medical student, reassured him that at their young age, the odds were one in a million. The friend replied that he had an unusual symptom, and surely the odds were much higher once that was taken into consideration. It was a conditional probability problem. His friend, Cohen realized, was right.
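
Cohen’s anecdote comes with no numbers, but the friend’s reasoning is the same Bayes update. The figures in this sketch are invented purely for illustration; the point is only that conditioning on a rare symptom can multiply a tiny base rate thousands of times over:

    # Invented numbers, purely illustrative -- not from the article.
    p_als = 1e-6                    # assumed base rate at a young age
    p_symptom_given_als = 0.5       # assumed odds of the symptom with ALS
    p_symptom_given_healthy = 1e-4  # assumed odds of the symptom without it

    p_symptom = (p_symptom_given_als * p_als
                 + p_symptom_given_healthy * (1 - p_als))
    p_als_given_symptom = p_symptom_given_als * p_als / p_symptom
    print(f"{p_als_given_symptom:.2%}")  # 0.50% -- about 5,000x the base rate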

It’s easy to see how anyone’s brain could get mushy at the thought of a young friend developing ALS. These are emotional issues, but that’s all the more reason to get the right answer.

In an interview, Barbey said that when dealing with conditional probabilities, people often make the mistake of focusing on just the population statistics (what he calls the outside view) or just the patient’s individual statistics (the inside view). In the ALS story, the doctor saw only the outside view, focusing on the low rate of the disease in the whole population. In the problem of the test that’s 95 percent accurate, people often take the inside view, ignoring the rarity of the disease.

This week, Warren Buffett may have confused outside and inside views on the risks of drinking soda. He was being perfectly rational when he pointed out that downing a sugary drink each day hasn’t killed him. He enjoys soda, and since he’s already 85, there is a zero percent chance that soda will cause him to die prematurely. But he got on shaky ground when he moved to the outside view, trying to argue for soda’s health benefits. There’s large-scale data to the contrary.

Overestimating risk might seem like a harmless way to play it safe, but it can lead to unnecessary stress and harmful treatments. Barbey’s paper uses the example of prostate cancer screening. Statistics show that men who get screened for prostate cancer have 98 percent odds of surviving the next five years, compared with 71 percent for those who don’t get screened. Those odds make screening seem like a no-brainer. But the overall death rate is the same in the screened and unscreened groups. That happens because many of the people in the screened group either didn’t have cancer or had a form that was not fatal -- screening inflates the survival percentage without changing how many men die.
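
The paper’s point is about denominators, and invented counts make it concrete. The numbers in this sketch are chosen only to reproduce the two survival percentages; the real study data differ:

    # Invented counts -- not the study's data. Deaths are identical in both
    # groups; screening only enlarges the pool of diagnosed men.
    deaths = 20
    diagnosed_without_screening = 70   # only symptomatic cancers get found
    diagnosed_with_screening = 1000    # screening also finds harmless cases

    survival_unscreened = 1 - deaths / diagnosed_without_screening
    survival_screened = 1 - deaths / diagnosed_with_screening
    print(f"{survival_unscreened:.0%} vs. {survival_screened:.0%}")  # 71% vs. 98%

Screening adds men whose cancers would never have killed them to the diagnosed pool, so the survival percentage climbs even though the same 20 men die in each group.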

“The important point is that we can improve medical decision making from a psychological standpoint,” Barbey said. “All of these innovations that medicine brings fail unless people can also make the right decisions.”

This column does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.

To contact the author of this story:
Faye Flam at fflam1@bloomberg.net

To contact the editor responsible for this story:
Jonathan Landman at jlandman4@bloomberg.net