Aleks Bogdanoski writes:
I’m writing from the Berkeley Initiative for Transparency in the Social Sciences (BITSS) at UC Berkeley with news about pre-results review, a novel form of peer review where journals review (and accept) research papers based on their methods and theory — before any results are known. Pre-results review is motivated by growing concerns about reproducibility in science, including results-based biases in the ways research is reviewed in academic journals.

Over the past year, BITSS has been working with the Journal of Development Economics to pilot this form of peer review, and we recently shared some of the lessons we learned through a post on the World Bank’s Development Impact blog. In a nutshell, pre-results review has helped authors improve the methodological quality of their work and provided an opportunity for earlier recognition – a particularly important incentive for early-career researchers. For editors and reviewers, pre-results review has been a useful commitment device for preventing results-based publication bias.

I’m attaching a press release that details the story in full, and here you can learn more about our pre-results review collaboration with the JDE.
I don’t have time to look at this right now, but I’m forwarding it because it seems like the kind of thing that might interest many of you.
Pre-results review could solve the Daryl Bem problem
I will lay out one issue that’s bugged me for a while regarding results-blind reviewing, which is what we could call the Daryl Bem problem.
It goes like this. Some hypothetical researcher designs an elaborate study conducted at a northeastern university noted for its p-hacking, and the purpose of the study is to demonstrate (or, let’s say, test for) the existence of extra-sensory perception (ESP).
Suppose the Journal of Personality and Social Psychology was using pre-results review. Should they accept this hypothetical study?
Based on the above description from BITSS, this accept/reject decision should come down to the paper’s “methods and theory.” OK, the methods for this hypothetical paper could be fine, but there’s no theory.
So I think that, under this regime, JPSP would reject the paper. Which seems fair enough. If they did accept this paper just because of its method (preregistration, whatever), they’d open the floodgates to accepting every damn double-blind submission anyone sent them. Perpetual motion machines, spoon bending, ovulation and voting, power pose, beauty and sex ratio, you name it. It would be kinda fun for a while, becoming the de facto Journal of Null Results—indeed, this could do a great service to some areas of science—but I don’t think that’s why anyone wants to become a journal editor, just to publish null results.
OK, fine. But here’s the problem. Suppose this carefully-designed experiment is actually done, and it shows positive results. In that case they really have made a great discovery, and the result really should be publishable.
At this point you might say that you don’t believe it until an outside lab does a preregistered replication. That makes sense.
But, at this point, results-blind review comes to the rescue! That first Bem study should not be accepted because it has no theoretical justification. But the second study, by the outside laboratory . . . its authors could make the argument that the earlier successful study gives enough of a theoretical justification for pre-results acceptance.
So, just to be clear here: to get an ESP paper published under this new regime, you’d need to have two clean, pre-registered studies. The first study would not be results-blind publishable on its own (of course, it could still be published in Science, Nature, PNAS, Psychological Science, or some other results-focused journal), but it would justify the second study being published in results-blind form.
You really need two papers from two different labs, though. For example, the existing Bem (2011) paper, hyper p-hacked as it is, cannot in any reasonable way serve as theoretical or empirical support for an ESP study.
I guess this suggests a slight modification of the above BITSS guidelines, that they change “methods and theory” to “methods and theory or strong empirical evidence.”
Methodology is important, but methodology is not just causal identification and sample size and preregistration
In any case, my key point here is that we need to take seriously these concerns regarding theory and evidence. Methodology is important, but methodology is not just causal identification and sample size and preregistration: it’s also measurement and connection to existing knowledge. In empirical social science in particular, we have to avoid privileging ill-founded ideas that happen to be attached to cute experiments or identification strategies.