Reasons for an optimistic take on science: there are not “growing problems with research and publication practices.” Rather, there have been, and continue to be, huge problems with research and publication practices, but we’ve made progress in recognizing these problems.

March 14, 2018
(This article was originally published at Statistical Modeling, Causal Inference, and Social Science, and syndicated at StatsBlogs.)

Javier Benitez points us to an article by Daniele Fanelli, “Is science really facing a reproducibility crisis, and do we need it to?”, published in the Proceedings of the National Academy of Sciences, which begins:

Efforts to improve the reproducibility and integrity of science are typically justified by a narrative of crisis, according to which most published results are unreliable due to growing problems with research and publication practices. This article provides an overview of recent evidence suggesting that this narrative is mistaken, and argues that a narrative of epochal changes and empowerment of scientists would be more accurate, inspiring, and compelling.

My reaction:

Kind of amusing that this was published in the same journal that published the papers on himmicanes, air rage (see also here), and ages ending in 9 (see also here).

But, sure, I agree that there may not be “growing problems with research and publication practices.” There were huge problems with research and publication practices; these problems remain, though there may be some improvement (I hope there is!). What’s happened in recent years is a growing recognition of these huge problems.

So, yeah, I’m ok with an optimistic take. Recent ideas in statistical understanding have represented epochal changes in how we think about quantitative science, and blogging and post-publication review represent a new empowerment of scientists. And PNAS itself now admits fallibility in a way that it didn’t before.

To put it another way: It’s not that we’re in the midst of a new epidemic. Rather, there’s been an epidemic raging for a long time, and we’re in the midst of an exciting period where the epidemic has been recognized for what it was, and there are some potential solutions.

The solutions aren’t easy—they don’t just involve new statistics, they primarily involve more careful data collection and a closer connection between data and theory, and both these steps are hard work—but they can lead us out of this mess.

P.S. I disagree with the above-linked article on one point, in that I do think that science is undergoing a reproducibility crisis, and I do think this is a pervasive problem. But I agree that it’s probably not a growing problem. What’s growing is our awareness of the problem, and that’s a key part of the solution, to recognize that we do have a problem and to beware of complacency.

P.P.S. Since posting this I came across a recent article by Nelson, Simmons, and Simonsohn (2018), “Psychology’s Renaissance,” that makes many of the above points. Communication is difficult, though, because nobody cites anybody else. Fanelli doesn’t cite Nelson et al.; Nelson et al. don’t cite my own papers on forking paths, type M errors, and “the winds have changed” (which covers much of the ground of their paper); and I hadn’t been aware of Nelson et al.’s paper until just now, when I happened to run across it in an unrelated search. One advantage of the blog is that we can add relevant references as we hear of them, or in comments.


