Hysteresis corner: “These mistakes and omissions do not change the general conclusion of the paper . . .”

February 12, 2018

All right, then. The paper’s called Attractive Data Sustain Increased B.S. Intake in Journals . . . er, make that Attractive Names Sustain Increased Vegetable Intake in Schools.

Seriously, though, this is just an extreme example of a general phenomenon, which we might call scientific hysteresis or the research incumbency advantage:

When you’re submitting a paper to a journal, it can be really hard to get it accepted, and any possible flaw in your reasoning detected by a reviewer is enough to stop publication. But when a result has already been published, it can be really hard to get it overturned. All of a sudden, the burden is on the reviewer, not just to point out a gaping hole in the study but to demonstrate precisely where that hole led to an erroneous conclusion. Even when it turns out that a paper has several different mistakes (including, in the above example, mislabeling preschoolers as elementary school students, which entirely changes the intervention being studied), the author is allowed to claim, “These mistakes and omissions do not change the general conclusion of the paper.” It’s the research incumbency effect.

As I wrote in the context of a different paper, where t-statistics of 1.8 and 3.3 were reported as 5.03 and 11.14 and the authors wrote that this “does not change the conclusion of the paper”:

This is both ridiculous and all too true. It’s ridiculous because one of the key claims is entirely based on a statistically significant p-value that is no longer there. But the claim is true because the real “conclusion of the paper” doesn’t depend on any of its details—all that matters is that there’s something, somewhere, that has p less than .05, because that’s enough to make publishable, promotable claims about “the pervasiveness and persistence of the elderly stereotype” or whatever else they want to publish that day.

When the authors protest that none of the errors really matter, it makes you realize that, in these projects, the data hardly matter at all.

In some sense, maybe that’s fine. If these are the rules that the medical and psychology literatures want to play by, that’s their choice. It could be that the theories these researchers come up with are so valuable that it doesn’t really matter if they get the details wrong: the data are in some sense just an illustration of their larger points. Perhaps an idea such as “Attractive names sustain increased vegetable intake in schools” is so valuable, such a game-changer, that it should not be held up just because the data in some particular study don’t quite support the claims that were made. Or perhaps the claims in that paper are so robust that they hold up even despite many different errors.

OK, fine, let’s accept that. Let’s accept that, ultimately, what matters is that a paper has a grabby idea that could change people’s lives, a cool theory that could very well be true. Along with a grab bag of data and some p-values. I don’t really see why the data are even necessary, but whatever. Maybe some readers have so little imagination that they can’t process an idea such as “Attractive names sustain increased vegetable intake in schools” without a bit of data, of some sort, to make the point.

Again, OK, fine, let’s go with that. But in that case, I think these journals should accept just about every paper sent to them. That is, they should become arXiv.

Cos if multiple fatal errors in a paper aren’t enough to sink it in post-publication review, why should they be enough to sink it in pre-publication review?

Consider the following hypothetical Scenario 1:

Author A sends paper to journal B, whose editor C sends it to referee D.

D: Hey, this paper has dozens of errors. The numbers don’t add up, and the descriptions don’t match the data. There’s no way this experiment could’ve been done as described.

C: OK, we’ll reject the paper. Sorry for sending this pile o’ poop to you in the first place!

And now the alternative, Scenario 2:

Author A sends paper to journal B, whose editor C accepts it. Later, the paper is read by outsider D.

D: Hey, this paper has dozens of errors. The numbers don’t add up, and the descriptions don’t match the data. There’s no way this experiment could’ve been done as described.

C: We sent your comments to the author, who said that the main conclusions of the paper are unaffected.

D: #^&*$#@

[many months later, if ever]

C: The author published a correction, saying that the main conclusions of the paper are unaffected.

Does that really make sense? If the journal editors are going to behave that way in Scenario 2, why bother with Scenario 1 at all?
