“Eureka bias”: When you think you made a discovery and then you don’t want to give it up, even if it turns out you interpreted your data wrong

May 16, 2018
By Andrew Gelman

(This article was originally published at Statistical Modeling, Causal Inference, and Social Science, and syndicated at StatsBlogs.)

This came in the email one day:

I am writing to you with my own (very) small story of error-checking a published finding. If you end up posting any of this, please remove my name!

A few years ago, a well-read business journal published an article by a senior-level employee at my company. One of the findings was incredibly counter-intuitive. I looked up one of the reference studies and found that a key measure was reverse-coded (e.g. a 5 meant “poor” and a 1 meant “excellent”). My immediate conclusion was that this reverse coding was not accounted for in the article. I called the author and suggested that with such a strange finding, they should check the data to make sure it was coded properly. The author went back to the research partner, and they claimed the data was correct.

Still thinking the finding was anomalous, I downloaded the data from the original reference study. I then plotted the key metric and showed that the incorrectly coded data matched what was published, but that the correctly coded data matched the intuition. I sent those charts to the senior person. The author and research partner double-checked and confirmed there was an error in their reporting. So far so good!

After confirming the error, the author called and asked me, “What are you trying to accomplish here?” I responded that I was only trying to protect this senior person (and the company), because if I found the error, somebody else would find it later down the line. The author, however, was suspicious of why I took the time to investigate the data. I was puzzled, since it appeared it was the research partner who made the fundamental error and the author’s only fault was in not diving into a counter-intuitive result. In the end, the graph in question was redacted from the online version of the article. And, as you by now would certainly expect, the author claimed “none of the conclusions were materially impacted by the change”.

Do you have a name for this phenomenon in your lexicon yet? Might I suggest “eureka bias”? Meaning, when somebody is well-intentioned and discovers something unique, that “eureka moment” assumes a supremely privileged status in the researcher’s mind, and they never want to abandon that position despite evidence to the contrary…
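For concreteness, the check described in the letter above takes only a few lines. Here is a minimal sketch in Python, with hypothetical file and column names ("reference_study.csv", "group", "rating") standing in for the unidentified study: on a 1-to-5 scale, reverse coding maps a response x to 6 - x, so you can recode the variable and plot the key metric under both codings side by side.

```python
# Minimal sketch of the reverse-coding check described above. The file and
# column names are hypothetical stand-ins; the original study isn't named.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("reference_study.csv")  # hypothetical file name

# On a 1-5 scale, reverse coding swaps 1<->5 and 2<->4 and leaves 3 fixed,
# i.e., reversed = 6 - original (for a general 1-to-k scale: (k + 1) - x).
df["rating_reversed"] = 6 - df["rating"]

# Plot the key metric under both codings, side by side; the version that
# matches the codebook (and, here, the intuition) is the correct one.
fig, axes = plt.subplots(1, 2, sharey=True, figsize=(8, 3))
df.groupby("group")["rating"].mean().plot.bar(ax=axes[0], title="As published")
df.groupby("group")["rating_reversed"].mean().plot.bar(ax=axes[1], title="Reverse-coded")
plt.tight_layout()
plt.show()
```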

My reply: Hey, I reverse-coded a variable once! Unfortunately it destroyed my empirical finding and I felt the need to issue a correction:

In the paper “Should the Democrats move to the left on economic policy?” [Ann. Appl. Stat. 2 (2008) 536–549] by Andrew Gelman and Cexun Jeffrey Cai, because of a data coding error on one of the variables, all our analysis of social issues is incorrect. Thus, arguably, all of Section 3 is wrong until proven otherwise.

We thank Yang Yang Hu for discovering this error and demonstrating its importance.

Regarding your general question:

“Eureka bias,” yes, that’s an interesting idea. I’ve written about this, I can’t remember where, and I think you’re right. Sometimes there’s confirmation bias, when someone does a study and, no surprise!, finds exactly what they were looking for, apparently all wrapped in a bow with statistical significance as long as you ignore the forking paths (as in that famous ESP paper from a few years back).

Other times, though, a researcher is surprised by the data and then takes that surprise as confirming evidence, with the implicit reasoning being: I wasn’t even looking to see this and it showed up anyway, so it must be real. At that point the researcher seems to become attached to the finding and doesn’t want to give it up, sometimes going to extreme lengths to defend it and even to attack and question the motives of anyone who points out problems with their data, as we see in your example above and we’ve seen many times before in various contexts.

So, yes, “Eureka bias” it is.

P.S. Check out the “please remove my name!” above. I hear this sort of thing from whistleblowers all the time, and it’s my impression that there’s a lot of bullying done against people who go to the trouble of uncovering inconvenient truths about purportedly successful research. Remember how Marc Hauser treated his research assistants? Remember how that psychologist applied the “terrorists” label to people who were pointing out errors in published research? There’s a good reason that PubPeer allows anonymous comments. Not all scientists respond well to criticism; some will attack anyone who they see as devaluing their brand.
