Palavering about Palavering about P-values


Nathan Schachtman (who was a special invited speaker at our recent Summer Seminar in Phil Stat) put up a post on his law blog the other day (“Palavering About P-values”) on an article by a statistics professor at Stanford, Helena Kraemer. “Palavering” is an interesting word choice of Schachtman’s. Its range of meanings is relevant here [i]; in my title, I intend both, in turn.

The American Statistical Association’s most recent confused and confusing communication about statistical significance testing has given rise to great mischief in the world of science and science publishing. [ASA II 2019] Take for instance last week’s opinion piece about “Is It Time to Ban the P Value?” Please.

Admittedly, their recent statement, which I refer to as ASA II, has seemed to open the floodgates to some very zany remarks about P-values, their meaning and role in statistical testing. Continuing with Schachtman’s post:

…Kraemer’s eye-catching title creates the impression that the p-value is unnecessary and inimical to valid inference.

Remarkably, Kraemer’s article commits the very mistake that the ASA set out to correct back in 2016 [ASA I], by conflating the probability of the data under a hypothesis of no association with the probability of a hypothesis given the data:

“If P value is less than .05, that indicates that the study evidence was good enough to support that hypothesis beyond reasonable doubt, in cases in which the P value .05 reflects the current consensus standard for what is reasonable.”

The ASA tried to break the bad habit of scientists’ interpreting p-values as allowing us to assign posterior probabilities, such as beyond a reasonable doubt, to hypotheses, but obviously to no avail.

While I share Schachtman’s puzzlement over a number of remarks in her article, this particular claim, while contorted, need not be regarded as giving a posterior probability to “that hypothesis” (the alternative to a test hypothesis). It is perhaps close to being tautological. If a P-value of .05 “reflects the current consensus standard for what is reasonable” evidence of a discrepancy from a test or null hypothesis, then it is reasonable evidence of such a discrepancy. Of course, she would have needed to say it’s a standard for “beyond a reasonable doubt” (BARD), but there’s no reason to suppose that that standard is best seen as a posterior probability.

I think we should move away from that notion, given how ill-defined and unobtainable it is. That a claim is probable, in any of the manifold senses in which that is meant, is very different from its having been well tested, corroborated, or its truth well warranted. It might well be that finding 3 or 4 statistically significant increased risks suffices for inferring, beyond a reasonable doubt, that a genuine risk exists, given that the tests pass audits of their assumptions. The 5 sigma Higgs results warranted claiming a discovery insofar as there was a very high probability of getting less statistically significant results, were the bumps due to background alone. In other words, evidence BARD for H can be supplied by H’s having passed a sufficiently severe test (or set of tests). Its denial may then be falsified in the strongest (fallible) manner possible in science.
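To make the Higgs point concrete with a back-of-the-envelope calculation (a minimal sketch of my own, not anything taken from the physics analyses): a 5 sigma excess corresponds to a one-sided tail probability of roughly 3 in 10 million under the background-alone hypothesis, so the probability of getting a less statistically significant result, were the bumps due to background alone, is correspondingly close to 1.

```python
# Back-of-the-envelope: tail probability of a 5 sigma excess under the
# background-alone (null) hypothesis, assuming an approximately normal
# test statistic.
from scipy.stats import norm

p_value = norm.sf(5)             # P(Z >= 5) under the null, ~2.87e-7
prob_less_extreme = 1 - p_value  # probability of a less significant result
print(f"P(Z >= 5 | background alone) ~ {p_value:.2e}")
print(f"P(less extreme result | background alone) ~ {prob_less_extreme:.7f}")
```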

Perhaps in her most misleading advice, Kraemer asserts that:

“[w]hether P values are banned matters little. All readers (reviewers, patients, clinicians, policy makers, and researchers) can just ignore P values and focus on the quality of research studies and effect sizes to guide decision-making.”

Really? If a high quality study finds an “effect size” of interest, we can now ignore random error?

I agree her claim here is extremely strange, though one can surmise how it was instigated by some suggested “reforms” in ASA II. It might also be the result of confusing the observed or sample effect size with the population or parametric effect size (or magnitude of discrepancy). But the real danger in speaking cavalierly about “banning” P-values is not that there aren’t some cases where genuine and spurious effects may be distinguished by eyeballing alone. It is that we lose an entire critical reasoning tool for determining whether a statistical claim is based on methods with even moderate capability of revealing mistaken interpretations of data. The first thing a statistical consumer needs to ask those who assure them they’re not banning P-values is whether they’ve so stripped them of their error-statistical force as to deprive us of an essential tool for holding the statistical “experts” accountable.
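A small simulation can illustrate why eyeballing effect sizes while ignoring random error is risky (a hypothetical sketch of my own, not an example from Kraemer or Schachtman): with small samples, sizable observed (sample) effect sizes regularly turn up even when the population effect is exactly zero.

```python
# Hypothetical sketch: how often does an "impressive" observed effect
# (Cohen's d >= 0.5) arise purely from random error when the true
# population effect is zero?
import numpy as np

rng = np.random.default_rng(0)
n, trials = 15, 10_000           # small per-group sample size
count = 0
for _ in range(trials):
    a = rng.normal(0, 1, n)      # both groups drawn from the same distribution
    b = rng.normal(0, 1, n)
    pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
    d = abs(a.mean() - b.mean()) / pooled_sd
    if d >= 0.5:
        count += 1
print(f"Share of null samples with |d| >= 0.5: {count / trials:.2%}")
```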

The ASA 2016 Statement, with its “six principles,” has provoked some deliberate or ill-informed distortions in American judicial proceedings, but Kraemer’s editorial creates idiosyncratic meanings for p-values. Even the 2019 ASA “post-modernism” does not advocate ignoring random error and p-values, as opposed to proscribing dichotomous characterization of results as “statistically significant,” or not.

You may have an overly sanguine construal of ASA II (the 2019 statement) as merely “proscribing dichotomous characterization of results.” As I read it, although their actual position is quite vague, their recommended P-values appear to be merely descriptive and do not have error-probabilistic interpretations. Granted, the important Principle 4 in ASA I (that data dredging and multiple testing invalidate P-values) suggests error control matters. But I think this is likely to be just another inconsistency between ASA I and ASA II. Neither mentions Type I or II errors or power (except to say that it is not mentioning them). I think the onus is on the ASA II authors to clarify this and other points I’ve discussed elsewhere on this blog.
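Principle 4 is easy to motivate with simple arithmetic (a toy calculation of my own, not drawn from either ASA statement): if error control is ignored, searching across many independent null comparisons makes a nominally “significant” result almost expected.

```python
# Toy calculation behind Principle 4: with 20 independent tests of true
# null hypotheses at the 0.05 level, the chance of at least one
# nominally "significant" result is far higher than 0.05.
alpha, m = 0.05, 20
prob_at_least_one = 1 - (1 - alpha) ** m
print(f"P(at least one p < {alpha} among {m} null tests) = {prob_at_least_one:.2f}")
```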

[i]

  1. chattering, talking unproductively and at length
  2. persuading by flattery, browbeating or bullying