Are statistical nitpickers (e.g., Kaiser Fung and me) getting in the way of progress or even serving the forces of evil?

As Ira Glass says, today we have a theme and some variations on this theme.

Statistical nitpickers: Do they cause more harm than good?

I’d like to think we do more good than harm, but today I want to consider the counter-argument: that, even when we are correct on the technical merits, we statisticians should just pipe down and not interfere with the publicity machine.

Part 1: “How many years do we lose to the air we breathe?”

We recently had a discussion about a Washington Post feature story, “How many years do we lose to the air we breathe?”, which featured the claim, attributed to the University of Chicago’s Energy Policy Institute, that “the average person on Earth would live 2.6 years longer if the air contained none of the deadliest type of pollution.”

I was disturbed because this number seemed to be coming from a discredited analysis from a few years ago, and then Kaiser Fung went into the report and seemed to find the same problem.

The next question is, Why should we care? I mean this question seriously, not cynically. Air pollution is bad, it undoubtedly causes health problems, and it would be good to reduce it. At some point the costs of pollution reduction are greater than the benefits, but I’m pretty sure that in many parts of the world there are some relatively cheap remediations—and even if pollution reduction only saved, on average, 0.26 years of life rather than 2.6 years, much could be done. So, from that standpoint, the “2.6 years” claim is just putting a number on something that we already know.

Part 2: Should we think of early childhood intervention studies as benevolent propaganda?

It’s similar to that iffy claim (see section 2.1 here) that early childhood intervention increases adult earnings by 25% or 42%. Early childhood intervention is good, right? So we can think of this sort of quantitative study—even if flawed, and even if its authors refuse to address these flaws—as a form of benevolent propaganda, in which the qualitative conclusion is more important than its quantitative correctness.

Part 3: Instant power and the utilitarian argument for not being a party pooper

Another example is that study that claimed, “That a person can, by assuming two simple 1-min poses, embody power and instantly become more powerful has real-world, actionable implications,” even though it had no evidence of anyone actually becoming more powerful. Even after the claims in that paper failed to replicate and were retracted by its first author, critics of that study (including Kaiser Fung and me) were characterized as “social media hate mobs” by superstar linguist Steven Pinker. (I guess he won’t be providing any free legal advice to us anytime soon.) Setting aside the whole “hate mobs” thing—I’m just glad he didn’t call us “methodological terrorists”—I think we can extract a positive argument from Pinker’s message, and it goes as follows: Even if “power pose” doesn’t actually work, and even if the published research from 2010 that kicked off that line of study was, while standard practice for its time, in retrospect too noisy and sloppy to give any useful information (again, see here), and even if later attempts to salvage the claim were fatally flawed, still, there’s the potential for some ideas along this line that could be both scientifically interesting and beneficial to people.

In short: by being critical of this admittedly-flawed work, Kaiser and I are dissuading others from pursuing this line of research, and that could have a human cost. To put it another way: why do we have to be such pedants, why focus on the negative, why not just say nice things?

My quick answers are: 1. Opportunity cost, and 2. Division of labor. The opportunity-cost argument is that if researchers are making errors in their design, data collection, analysis, or presentation of results, then this represents resources that could be better used, either by studying these topics more effectively or by studying something else. The division-of-labor argument is that Kaiser and I are statisticians, so this is something we can do to help.

But that’s just my take on it. In reply, Pinker could argue that social psychology has the potential to benefit the lives of millions and so we should say more nice things about it. He’s implicitly making the utilitarian argument for not being a party pooper.

Part 4: Is it irresponsible to talk about an error, if it distracts the news media from a political priority?

Here’s another example. A year or so ago, someone criticized me online for “scoffing at the Case/Deaton finding about U.S. life expectancy.” Actually I never scoffed at Case/Deaton; I merely pointed out that their claim of declining life expectancy went away after age adjustment. But the real point of my correspondent’s criticism was not the “scoffing,” if any. It was that the stagnant-life-expectancy problem was important, that it was important that Case and Deaton raised the alarm, and that my pointing out errors in their analysis had a negative social effect by decreasing the sense of alarm about these problems. My correspondent argued that it was fine that I sent my corrections to Case and Deaton and that I published my criticisms in a journal, but not that I posted the criticisms on a blog or that I spoke with journalists who contacted me in response to those blog posts. In those conversations with journalists, I emphasized that I was not disagreeing with Case and Deaton’s main point of stagnant life expectancy, but I did also talk about their technical mistake.

In short, the argument is that statistical nitpickers cause social harm by decreasing public confidence in a published claim: even if the claim has errors, we should either shut up about them or talk about them very quietly. (For example, my correspondent did not mind that Case and Deaton spoke to the news media, but he didn’t like that I did so, because he felt that their message had social value, and my criticism would diminish that impact.)

My collaborators and I don’t feel that way—we believe our nitpicking has value, both directly, in giving the public and policymakers a clearer view of what the science currently says, and indirectly, in motivating future researchers and journalists to be more careful about their claims—but it’s important to recognize that this other view is out there, the view that the social value of certain claims is important enough that any critics should speak very softly so as not to diminish their prestige. It’s a utilitarian argument with which I disagree, but it’s a position you can take.