“The writer who confesses that he is ‘not good at attention to detail’ is like a pianist who admits to being tone deaf”

Edward Winter wrote:

It is extraordinary how the unschooled manage to reduce complex issues to facile certainties. The writer who confesses that he is ‘not good at attention to detail’ (see page 17 of the November 1990 CHESS for that stark, though redundant, admission by the Weekend Wordspinner) is like a pianist who admits to being tone deaf. Broad sweeps are valueless. Unless an author has explored his terrain thoroughly, how will he be reasonably sure that his central thesis cannot be overturned? Facts count. Tentative theorizing may have a minor role once research paths have been exhausted but, as a general principle, rumour and guesswork, those tawdry journalistic mainstays, have no place in historical writing of any kind. . . .

He’s talking about chess, but the principle applies more generally.

What’s interesting to me is how many people, including scientists and even mathematicians (sorry, Chrissy), don’t think that way.

We’ve discussed various examples over the years where scientists write things in published papers that are obviously false, reporting results that could not possibly have been in their data, which we know either from simple concordances (i.e., the numbers don’t add up) or because the results report information that was never actually gathered in the study in question.
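Those "numbers don't add up" checks can be strikingly simple. One well-known example is the GRIM test (Brown and Heathers), which was applied to Wansink's papers: if data are integer-valued (say, Likert responses), a reported mean from n participants must be within rounding distance of some k/n. Here's a minimal sketch in Python; the function name is my own, not from any published tool.

```python
# Minimal GRIM-style consistency check: with n integer-valued
# observations, the only achievable means are k/n for integer k.
# A reported mean that rounds to nothing of that form is impossible.

def grim_consistent(mean: float, n: int, decimals: int = 2) -> bool:
    """Return True if `mean`, reported to `decimals` places, could
    arise from n integer-valued observations."""
    tol = 10 ** (-decimals) / 2   # half of the last reported digit
    k = round(mean * n)           # nearest achievable numerator
    return abs(k / n - mean) <= tol + 1e-12

# With n = 20, achievable means step by 0.05 (69/20 = 3.45, 70/20 = 3.50),
# so a reported mean of 3.48 cannot have come from the data.
print(grim_consistent(3.45, 20))  # True
print(grim_consistent(3.48, 20))  # False
```

The point isn't the code, it's that the check requires no access to the raw data at all: the summary statistics alone can contradict each other.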

How do people do this? How can they possibly think this is a good idea?

Here are the explanations that have been proffered for this behavior, of publishing claims that are contradicted by, or have zero support from, their data:

1. Simple careerism, or what’s called “incentives”: Make big claims and you can get published in PNAS, get a prestigious job, fame, fortune, etc.

2. The distinction between truth and evidence: Researchers think their hypotheses are true, so they get sloppy on the evidence. To them, it doesn’t really matter if their data support their theory because they believe the theory in any case—and the theory is vague enough to support just about any pattern in data.

And, sure, that explains a lot, but it doesn’t explain some of the examples that Winter has given (for example, a chess book stating that a game occurred 10 years after one of the players had died; although, to be fair, that player was said to be the loser of said game). Or, in the realm of scientific research, the papers of Brian Wansink, which contained numbers that were not consistent with any possible data.

One might ask: Why get such details wrong? Why not either look up the date, or, if you don’t want to bother, why give a date at all?

This leads us to a third explanation for error:

3. If an author makes zillions of statements, and truth just doesn’t matter for any particular one of the statements, then you’ll get errors.

I think this happens a lot. All of us make errors, and most of us understand that errors are inevitable. But there seems to be a divide between two sorts of people: (a) Edward Winter, and I expect most of the people who read this blog, who feel personally responsible for our errors and try to check as much as possible and correct what mistakes arise, and (b) Brian Wansink, David Brooks and, it seems, lots of other writers, who are more interested in the flow, and who don’t want to be slowed down by fact checking.

Winter argues that if you get the details wrong, or if you don’t care about the details, you can get the big things wrong too. And I’m inclined to agree. But maybe we’re wrong. Maybe it’s better to just plow on ahead, mistakes be damned, always on to the next project. I dunno.

P.S. Regarding Winter’s quote above, I bet it is possible to be a good pianist even if tone-deaf, if you can really bang it out and you have a good sense of rhythm. Just as you can be a good basketball player even if you’re really short. But it’s a handicap, that’s for sure.