What’s published in the journal isn’t what the researchers actually did.

David Allison points us to these two letters:

Alternating Assignment was Incorrectly Labeled as Randomization, by Bridget Hannon, J. Michael Oakes, and David Allison, in the Journal of Alzheimer’s Disease.

Change in study randomization allocation needs to be included in statistical analysis: comment on ‘Randomized controlled trial of weight loss versus usual care on telomere length in women with breast cancer: the lifestyle, exercise, and nutrition (LEAN) study,’ by Stephanie Dickinson, Lilian Golzarri-Arroyo, Andrew Brown, Bryan McComb, Chanaka Kahathuduwa, and David Allison, in Breast Cancer Research and Treatment.

It can be surprisingly difficult for researchers to simply say exactly what they did. Part of this might be a desire to get credit for design features, such as random assignment, that were too difficult to actually implement; part of it could be sloppiness or laziness; but part of it could just be that, when you write, it’s so easy to drift into conventional patterns. Designs are supposed to use random assignment, so you label yours as random assignment, even if it’s not. The above examples are nothing like pizzagate, but they’re part of the larger problem that the scientific literature can’t be trusted. It’s not just that you can’t trust the conclusions; it’s also that papers make claims that can’t possibly be supported by the data in them, and that papers don’t state what the researchers actually did.

As always, I’m not saying these researchers are bad people. Honesty and transparency are not enough. If you’re a scientist, and you write up your study, and you don’t describe it accurately, we—the scientific community, the public, the consumers of your work—are screwed, even if you’re a wonderful, honorable person. You’ve introduced buggy software into the world, and the published corrections, if any, are likely to never catch up.

P.S. Hannon, Oakes, and Allison explain why it matters that the design described as a “randomized controlled trial” wasn’t actually that:

By sequentially enrolling participants using alternating assignment, the researchers and enrolling physicians in this study were able to know to which group the next participant would be assigned, and there is no allocation concealment. . . .

The allocation method employed by Ito et al. allows the research team to determine in which group a participant would be assigned, and thus could (unintentionally) manipulate the enrollment. . . .

Alternating assignment, or similarly using patient chart numbers, days of the week, date of birth, etc., are nonrandom methods of group allocation, and should not be used in place of randomly assigning participants . . .

There are a number of disciplines (i.e., public health, community interventions, etc.) which commonly employ nonrandomized intervention evaluation studies, and these can be conducted with rigor. It is crucial for researchers conducting these nonrandomized trials to report procedures accurately.
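To make the contrast concrete, here’s a minimal sketch (my illustration, not from the letters) of why alternating assignment fails where randomization succeeds: under alternation, anyone who knows the enrollment order can predict the next participant’s group with certainty, whereas under random allocation the next assignment is unpredictable. The function names are hypothetical.

```python
import random

def alternating_assignment(n):
    # Deterministic: participant i goes to group i % 2.
    # An enrolling physician who knows the sequence can predict
    # the next assignment -- no allocation concealment.
    return [i % 2 for i in range(n)]

def randomized_assignment(n, seed=None):
    # Simple random allocation: the next assignment cannot be
    # predicted from the ones before it. (Real trials typically
    # use blocked randomization to also balance group sizes.)
    rng = random.Random(seed)
    return [rng.randint(0, 1) for _ in range(n)]

print(alternating_assignment(6))   # [0, 1, 0, 1, 0, 1] -- fully predictable
print(randomized_assignment(6))    # e.g. [1, 0, 0, 1, 1, 0] -- varies each run
```

The point of the letters is not that the alternating scheme is useless, but that calling it a “randomized controlled trial” misdescribes it: the predictability shown above is exactly what opens the door to (even unintentional) manipulation of enrollment.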