(This article was originally published at Statistical Modeling, Causal Inference, and Social Science, and syndicated at StatsBlogs.)
Joshua Vogelstein writes:
I know you’ve discussed this on your blog in the past, but I don’t know exactly how you’d answer the following query:
Suppose you run an analysis and obtain a p-value of 10^-300. What would you actually report? I’m fairly confident that I’m not that confident :) I’m guessing: “p-value \approx 0.”
One possibility is to determine the accuracy with which one *could* in theory know the p-value, given the sample size, and report that p is less than or equal to that bound. For example, if I used a Monte Carlo approach to generate the null distribution with 10,000 samples, and I found that the observed value was more extreme than all of the sampled values, then I might say that p is less than or equal to 1/10,000.
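The Monte Carlo bound Joshua describes can be sketched in a few lines. This is a hypothetical illustration, not code from the original exchange: the null model, the observed statistic, and all variable names are made up for the example. The "+1" in numerator and denominator counts the observed statistic as one draw from the null, which is why the smallest reportable p-value is 1/(n_sim + 1) rather than exactly zero.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: the observed test statistic, and a stand-in
# null model (the mean of 50 standard-normal draws).
observed = 3.5
n_sim = 10_000

# Simulate the null distribution of the statistic.
null_stats = rng.standard_normal((n_sim, 50)).mean(axis=1)

# Count null draws at least as extreme as the observed value.
# Adding 1 to numerator and denominator treats the observed
# statistic as one more draw from the null, so the estimate can
# never be exactly zero: the floor is 1 / (n_sim + 1).
r = int((null_stats >= observed).sum())
p_hat = (r + 1) / (n_sim + 1)

print(r, p_hat)
```

With this seed and null model the observed value exceeds every simulated draw, so the reported p-value bottoms out at 1/10,001 — however small the "true" p-value might be.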
My reply: Mosteller and Wallace talked a bit about this in their book: there are always various other one-in-a-million possibilities (for example, that the data were faked somewhere before they got to you), so p-values such as 10^-6 don’t really mean anything. On the other hand, some fields such as genetics, with extreme multiple comparisons issues, demand p-values on the order of 10^-6 before doing anything at all. Here I think the solution is multilevel modeling (which may well be done implicitly as part of a classical multiple comparisons adjustment procedure). In general, I think the way to go is to move away from p-values and instead focus directly on effect sizes.
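To make "classical multiple comparisons adjustment procedure" concrete, here is a minimal sketch of one common such procedure, the Benjamini-Hochberg step-up rule, which controls the false discovery rate at level q. The post does not name a specific procedure; this choice and all the toy numbers are my own illustration.

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Return a boolean mask of hypotheses rejected by the
    Benjamini-Hochberg step-up procedure at FDR level q."""
    pvals = np.asarray(pvals, dtype=float)
    m = len(pvals)
    order = np.argsort(pvals)
    ranked = pvals[order]
    # Find the largest k with p_(k) <= (k/m) * q; reject the k
    # smallest p-values.
    thresholds = (np.arange(1, m + 1) / m) * q
    below = ranked <= thresholds
    rejected = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])
        rejected[order[: k + 1]] = True
    return rejected

# Toy example: two tiny p-values among several unremarkable ones.
pvals = [1e-6, 2e-6, 0.03, 0.2, 0.5, 0.8]
print(benjamini_hochberg(pvals))
```

Only the two genome-scale-small p-values survive the adjustment here; 0.03, which would pass an unadjusted 0.05 threshold, does not.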