Regression to the mean continues to confuse people and lead to errors in published research

David Allison sends along this paper by Tanya Halliday, Diana Thomas, Cynthia Siu, and himself, “Failing to account for regression to the mean results in unjustified conclusions.” It’s a letter to the editor in the Journal of Women & Aging, responding to the article, “Striving for a healthy weight in an older lesbian population,” by Tomisek et al. in that journal. Halliday et al. write:

The authors conclude that the SHE [“Strong. Healthy. Energized”] program should be adopted . . . as it demonstrated “effectiveness in improving health behaviors and short-term health outcomes in the target population.” Specifically, the authors make this conclusion based upon a “marked step increase” for participants in the lowest tertile-defined category of baseline step count. However, the analysis does not support this conclusion. This is because regression to the mean (RTM), rather than treatment effectiveness, explains, in part, the arrived-at conclusion.

RTM is a statistical phenomenon that describes the tendency for extreme values observed on initial assessment to be less extreme and closer to the population mean with repeated measurement when the correlation coefficient is less than 1.0 . . . RTM is a concept that has often been ignored and misunderstood in health and obesity-related research . . . Failure to account for RTM often leads to errors in interpretation of results and unjustified conclusions. In pre-/poststudy designs that lack a comparator control group, neglecting RTM can lead to the inaccurate conclusion that an intervention was effective in improving a health outcome in a group of participants.
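To see RTM in isolation, here's a minimal simulation sketch in Python (with made-up numbers purely for illustration: a mean of 7000 steps/day, standard deviation 2500, and a test-retest correlation of 0.6). There is no intervention at all, yet the lowest baseline tertile looks like it improved at follow-up, simply because the two measurements are imperfectly correlated:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate paired measurements on the same people with no true change:
# both time points share the same mean (7000 steps/day) and SD (2500),
# and are correlated at rho = 0.6 (hypothetical values, for illustration).
n, mu, sd, rho = 10_000, 7000, 2500, 0.6
cov = [[sd**2, rho * sd**2], [rho * sd**2, sd**2]]
baseline, followup = rng.multivariate_normal([mu, mu], cov, size=n).T

# Select the lowest tertile of baseline values.
low = baseline <= np.quantile(baseline, 1 / 3)

print(f"lowest tertile, baseline mean:  {baseline[low].mean():.0f}")
print(f"lowest tertile, follow-up mean: {followup[low].mean():.0f}")
print(f"overall mean at follow-up:      {followup.mean():.0f}")
# The low group's follow-up mean sits well above its baseline mean and
# closer to the population mean, even though nothing changed: that's RTM.
```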

What happened in this case? Halliday et al. continue:

In the results section of Tomisek et al. (2017), the authors state, “The SHE program was most effective for participants with low levels of physical activity and steps.” . . . The expected analytical approach of evaluating change in step count for the entire sample was not reported . . . given the acknowledgement that a decrease in body weight in the group with highest initial body weights was expected, the same logic should have been applied to the outcome of change in step counts. Thus, the conclusion that there was evidence for effectiveness is not justified given that the results are likely due to RTM and not specific intervention effects attributable to the SHE program. This does not mean that the SHE program is not effective, only that it was not convincingly shown to be effective by ordinary scientific standards in this study.

And:

Interventions that are evaluated without the use of a control group are susceptible to the reliance on results that may be a consequence of RTM. Greater vigilance regarding RTM is necessary throughout the research and publication process.

Yup.
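To connect this to the design point, here's a sketch of the same kind of simulation, set up as a pre/post study with zero true effect (again with invented numbers). The analyze-the-change-in-the-lowest-baseline-tertile procedure finds a big step increase in the "intervention" arm, but it finds a comparable increase in a do-nothing control arm too, which is exactly what a comparator group is there to reveal:

```python
import numpy as np

rng = np.random.default_rng(1)

def lowest_tertile_change(n=150, mu=7000, sd=2500, rho=0.6, true_effect=0.0):
    # Pre/post step counts for one study arm; true_effect is the real change
    # caused by the intervention (zero here). Numbers are hypothetical.
    cov = [[sd**2, rho * sd**2], [rho * sd**2, sd**2]]
    pre, post = rng.multivariate_normal([mu, mu + true_effect], cov, size=n).T
    low = pre <= np.quantile(pre, 1 / 3)   # lowest tertile at baseline
    return (post - pre)[low].mean()

# Both arms have a true effect of zero, yet each tends to show a large
# apparent step increase among low-baseline participants; only the
# between-arm comparison is centered on zero.
print(f"intervention arm, lowest-tertile change: {lowest_tertile_change():.0f}")
print(f"control arm,      lowest-tertile change: {lowest_tertile_change():.0f}")
```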

What matters here

The news here is not that a statistical mistake got published in an obscure journal. You could spend your whole life going through the published literature finding papers with fatal statistical flaws. The news is that regression to the mean remains paradoxical and continues to mislead people; hence this post.

The authors of the original paper respond

The above-linked article includes a response from the authors of the paper. Unless I missed it, the authors, in their response, forgot to say, “We thank Halliday et al. for pointing out the flaw in our analysis, and here is our bias-corrected estimate…”

It’s frustrating when people can’t admit their errors. We all make mistakes; what’s important is to learn from them.
