OK, you all remember the story, arguably the single event that sent the replication crisis into high gear: the decision of the Journal of Personality and Social Psychology to publish a paper on extra-sensory perception (ESP) by Cornell professor Daryl Bem and the subsequent discussion of this research in the New York Times and elsewhere.
The journal’s decision to publish that ESP paper was, arguably, a mistake. Or maybe not, as the publication led indirectly to valuable discussions that continue today.
But what should the editors have done? It’s a tough choice:
Option A. Publish an article when you’re pretty sure its theories and conclusions are completely wrong; or
Option B. Reject an article that has no obvious flaws (or, at least, none that were noticed at the time; in retrospect the data analysis has big problems; see section 2.2 of this article for just one example).
Before going on, let me emphasize that top journals reject articles without obvious flaws all the time. A common reason for rejecting an article is that it’s not important enough. What about that article on ESP? Well, if its claims were correct, then it would be super-important. On the other hand, if there’s nothing there, it’s not important at all! So it’s hard to untangle the criteria of correctness and importance. Here I’m just pointing out that Option B is not so unreasonable: JPSP is allowed to reject a paper that makes big claims about ESP, just as it’s allowed to reject a paper that appears to be correct but is on a topic that they judge to be too specialized to be of general interest.
Anyway, to continue . . . the choice between options A and B is awkward: publish something you don’t really want to publish, or decide to reject a paper largely on theoretical grounds.
But there’s a third choice. Option C. It’s a solution that just came to me (and I’m sure others have proposed it elsewhere), and it’s beautifully simple. I’ll get to it in a moment.
But first, why did JPSP publish such a ridiculous paper? Here are some good, or at least reasonable, motivations:
– Fairness. Psychology journals were routinely publishing articles that were just as bad on other topics, so it doesn’t seem fair to reject Bem’s article just because its theory is implausible.
– Open-mindedness; avoidance of censorship. The very implausibility of Bem’s theories could be taken as a reason for publishing his article: maybe it’s appropriate to bend over backward to give exposure to theories that we don’t really believe. The only trouble with this motivation is that there are so many implausible theories out there: if JPSP gives space to all of them, there will be no space left for mainstream psychology, what with all the articles about auras, ghosts, homeopathy, divine intervention, reincarnation, alien abductions, and so forth. Avoidance-of-censorship is an admirable principle, but in practice, some well-connected fringe theories seem to get special treatment. (Medical journals do, from time to time, publish articles on the effectiveness of intercessory prayer, which typically seem to get more publicity than their inevitable follow-up failed replications.)
– What if it’s real? Stranger phenomena than ESP have been found in science. So another reason for publishing a paper such as Bem’s is that it’s possibly the scoop of the century. High-risk, high-reward.
Ok, now, here it is . . . what JPSP could have done:
Option C. Don’t publish Bem’s article. Publish his data. His raw data. Raw raw raw. All of it, along with a complete description of his data collection and experimental protocols, and enough computer code to allow outsiders to do whatever reanalyses they want. And then, if you must insist, you can also include Bem’s article as a speculative document to be included in the supplementary material.
My proposal—which JPSP could’ve done in 2010, had “just publish the raw data” been considered a live option at the time—flips the standard scheme of scientific publication. The usual way things go is to publish a highly polished article making strong conclusions, along with statistical arguments all pointing in the direction of said conclusions—basically, an expanded version of that five-paragraph essay you learned about in high school—and then, as an aside, some additional data summaries might appear in an online supplement. And, if you’re really lucky, the raw data are in some repository somewhere, but that almost never happens.
I’m saying the opposite: to the extent there’s news in a psychology experiment, the news comes from the design and data collection (which should be described in complete detail) and in the data. That’s what’s important. The analysis and the write-up, those are the afterthoughts. Given the data, anyone should be able to do the analysis.
Now apply this to Bem’s ESP research. The value, if any, in his experiments comes from the data. But that was the one thing that the journal didn’t publish! Instead they published pages and pages of speculations, funky theory, and selective data analysis.
Let’s go back and see how Option C fits in with JPSP’s motivations:
– Fairness. Publishing Bem’s data is fair, and the journal could do the same for any other research projects that it deems to be of sufficient quality and importance.
– Open-mindedness; avoidance of censorship. Again, what better argument can Bem offer the skeptics than his raw data? That’s the least censored thing possible.
– What if it’s real? If it is, or could be, real, we want as many eyes on the data as possible. Who knows what could be learned. The very importance of the topic, which motivates publication, should also motivate full data sharing.
I like this solution. I guess the journal wouldn’t only publish the raw data and code; they’d also want to publish some basic analyses showing the key patterns in the data. But the focus is on the data, not the statistical analysis.
Option C is not a panacea, and it is not intended to resolve all the problems of a scientific journal. In particular, they’d still have to decide what to publish, what to reject, and when to request revise-and-resubmit. The difference is in what gets published. Or, to be more precise, in what aspects of the publication are considered necessary and which are optional. For the Bem paper as published in JPSP, the writeup, the bold claims, and the statistically significant p-values were necessary; the data were optional. I’d switch that around. But it wouldn’t go that way for every project. Some projects have their primary value in their data; for others, it’s the analysis or the theory that is most important.
In the example of the ESP study, if anything’s valuable, it’s the data. Publishing the data would get the journal off the hook regarding fairness, open-mindedness, and not missing a scoop, while enabling others to start reanalyses right away, and without saddling the journal with an embarrassing endorsement of a weak theory that, it turns out, was not really supported by data at all.