Why 'Statistical Significance' Is Often Insignificant
The knives are out for the p-value. This statistical quantity is the Holy Grail for empirical researchers across the world -- if your study finds the right p-value, you can get published in a credible journal, and possibly land a tenure-track job at a good university and win research funding. Now a growing chorus of voices wants to de-emphasize or even ban this magic number. But the crusade against p-values is likely to be a distraction from the real problems afflicting scientific inquiry.
What is a p-value? It's a bit subtle. Suppose that as a researcher, I'm looking for evidence of something interesting -- the effect of a new drug on blood pressure, or the impact of good teachers on student outcomes. Suppose the effect isn't really there. Even so, there's still some possibility that random chance will make the data look as if the effect I'm looking for is there. The p-value is the probability that, if the effect really isn't there, random chance alone would produce data at least as striking as what I actually observed. So a low p-value means my data would be surprising in a world with no effect -- a hint that my results aren't just a mirage.
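The "random chance will make the data look as if the effect is there" scenario is easy to see in a simulation. The sketch below (my own illustration, not from the article; the numbers and the permutation-test approach are assumptions) draws two groups of blood-pressure readings from the *same* distribution -- so the drug "effect" truly isn't there -- and counts how often a standard significance test still comes back with p < 0.05. By construction, it happens about 5 percent of the time.

```python
import random
import statistics

def permutation_p_value(a, b, n_perm=200, rng=None):
    """Two-sided permutation p-value for the difference in group means.

    Shuffles group labels n_perm times and asks: how often does pure
    relabeling produce a mean difference at least as large as the one
    we actually observed?
    """
    rng = rng or random.Random()
    observed = abs(statistics.fmean(a) - statistics.fmean(b))
    pooled = list(a) + list(b)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = abs(statistics.fmean(pooled[:len(a)]) - statistics.fmean(pooled[len(a):]))
        if diff >= observed:
            count += 1
    return count / n_perm

rng = random.Random(42)
trials = 500
false_positives = 0
for _ in range(trials):
    # Both groups come from the same distribution: the drug does nothing.
    treated = [rng.gauss(120, 10) for _ in range(20)]
    control = [rng.gauss(120, 10) for _ in range(20)]
    if permutation_p_value(treated, control, rng=rng) < 0.05:
        false_positives += 1

print(f"False-positive rate at p < 0.05: {false_positives / trials:.2%}")
```

Even though no effect exists in any of these 500 simulated studies, a handful of them -- roughly one in twenty -- clear the conventional p < 0.05 bar anyway. That is exactly the "mirage" the p-value is meant to quantify.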