Category: Zombies

“Statistical Inference Enables Bad Science; Statistical Thinking Enables Good Science”

As promised, let’s continue yesterday’s discussion of Christopher Tong’s article, “Statistical Inference Enables Bad Science; Statistical Thinking Enables Good Science.” First, the title, which makes an excellent point. It can be valuable to think about measurement, comparison, and variation, even if commonly used statistical methods can mislead. This reminds me of the idea in decision analysis […]

Harking, Sharking, Tharking

Bert Gunter writes: You may already have seen this [“Harking, Sharking, and Tharking: Making the Case for Post Hoc Analysis of Scientific Data,” John Hollenbeck, Patrick Wright]. It discusses many of the same themes that you and others have highlighted in the special American Statistician issue and elsewhere, but does so from a slightly different […]

Deterministic thinking (“dichotomania”): a problem in how we think, not just in how we act

This has come up before:
– Basketball Stats: Don’t model the probability of a win; model the expected score differential.
– Econometrics, political science, epidemiology, etc.: Don’t model the probability of a discrete outcome; model the underlying continuous variable.
– Thinking like a statistician (continuously) rather than like a civilian (discretely).
– Message to Booleans: It’s […]
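The cost of dichotomizing can be seen in a small simulation (my sketch, not from the post; the effect size and threshold are made up for illustration): the correlation between a predictor and a continuous outcome is noticeably stronger than the correlation between the same predictor and a win/lose indicator cut from that outcome.

```python
import math
import random

random.seed(1)

def corr(xs, ys):
    # Pearson correlation, computed by hand to stay dependency-free
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sxx = sum((a - mx) ** 2 for a in xs)
    syy = sum((b - my) ** 2 for b in ys)
    return sxy / math.sqrt(sxx * syy)

n = 10_000
x = [random.gauss(0, 1) for _ in range(n)]
y = [0.5 * xi + random.gauss(0, 1) for xi in x]   # continuous outcome (score differential)
y_binary = [1.0 if yi > 0 else 0.0 for yi in y]   # the same outcome, dichotomized (win/lose)

r_cont = corr(x, y)
r_dich = corr(x, y_binary)

print(f"continuous outcome:    r = {r_cont:.3f}")
print(f"dichotomized outcome:  r = {r_dich:.3f}")
```

Thresholding throws away the information in how far each observation sits from the cutoff, which is the “dichotomania” complaint in a nutshell.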

Exchange with Deborah Mayo on abandoning statistical significance

The philosopher wrote: The big move in the statistics wars these days is to fight irreplication by making it harder to reject, and find evidence against, a null hypothesis. Mayo is referring to, among other things, the proposal to “redefine statistical significance” as p less than 0.005. My colleagues and I do not actually like […]

A world of Wansinks in medical research: “So I guess what I’m trying to get at is I wonder how common it is for clinicians to rely on med students to do their data analysis for them, and how often this work then gets published”

In the context of a conversation regarding sloppy research practices, Jordan Anaya writes: It reminds me of my friends in residency. Basically, while they were med students for some reason clinicians decided to get them to analyze data in their spare time. I’m not saying my friends are stupid, but they have no stats or […]

It’s not just p=0.048 vs. p=0.052

Peter Dorman points to this post on statistical significance and p-values by Timothy Taylor, editor of the Journal of Economic Perspectives, a highly influential publication of the American Economic Association. I have some problems with what Taylor writes, but for now I’ll just take it as representing a certain view, the perspective of a thoughtful […]
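The title’s point can be made concrete with a quick back-of-the-envelope calculation (my sketch, not Taylor’s): translated to the z scale, p = 0.048 and p = 0.052 are separated by a few hundredths of a standard error, so treating one as a discovery and the other as a null result is pure threshold artifact.

```python
from statistics import NormalDist

z = NormalDist().inv_cdf  # quantile function of the standard normal

# two-sided p-values back-translated to z-scores
z_sig = z(1 - 0.048 / 2)  # the "significant" result
z_ns  = z(1 - 0.052 / 2)  # the "non-significant" result

print(f"p = 0.048  ->  z = {z_sig:.3f}")
print(f"p = 0.052  ->  z = {z_ns:.3f}")
print(f"gap: {z_sig - z_ns:.3f} standard errors")
```

The gap is tiny compared to the sampling variation in either estimate, which is why the difference between “significant” and “not significant” is not itself statistically significant.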

He says it again, but more vividly.

We’ve discussed Clarke’s third law (“Any sufficiently crappy research is indistinguishable from fraud”) and that, to do good science, honesty and transparency are not enough. James Heathers says it again, vividly. I don’t know if Heathers has ever written anything about the notorious study in which participants were invited to stick 51 pins into a […]

When people make up victim stories

A couple of victim stories came up recently: in both cases these were people I’d never heard of until (a) they claimed to have been victimized, and (b) it seems that these claims were made up. First case was Jussie Smollett, a cable-TV actor who claimed to be the victim of a racist homophobic attack, […]

Forming a hyper-precise numerical summary during a research crisis can improve an article’s chance of achieving its publication goals.

Speaking of measurement and numeracy . . . Kevin Lewis pointed me to this published article with the following abstract that starts out just fine but kinda spirals out of control: Forming a military coalition during an international crisis can improve a state’s chances of achieving its political goals. We argue that the involvement of […]

Did that “bottomless soup bowl” experiment ever happen?

I’m trying to figure out if Brian “Pizzagate” Wansink’s famous “bottomless soup bowl” experiment really happened. Way back when, everybody thought the experiment was real. After all, it was described in a peer-reviewed journal article. Here’s my friend Seth Roberts in 2006: An experiment in which people eat soup from a bottomless bowl? Classic! Or […]

As always, I think the best solution is not for researchers to just report on some preregistered claim, but rather for them to display the entire multiverse of possible relevant results.
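A toy version of the multiverse idea (hypothetical data and analysis choices, purely to illustrate the display, not anyone’s actual analysis): enumerate every combination of reasonable analysis decisions and report all of the resulting estimates, rather than cherry-picking one.

```python
import itertools
import math
import statistics

# toy dataset: outcome y against covariate x, with one suspect point at the end
x = [1, 2, 3, 4, 5, 6, 7, 8]
y = [1.2, 1.9, 3.2, 3.8, 5.1, 5.9, 7.2, 30.0]

# hypothetical analysis decisions (the forking paths)
choices = {
    "drop_outlier": [False, True],
    "log_outcome": [False, True],
}

def slope(xs, ys):
    # ordinary least-squares slope
    mx, my = statistics.mean(xs), statistics.mean(ys)
    num = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    den = sum((a - mx) ** 2 for a in xs)
    return num / den

results = {}
for drop, log_y in itertools.product(*choices.values()):
    xs, ys = (x[:-1], y[:-1]) if drop else (x, y)
    if log_y:
        ys = [math.log(v) for v in ys]
    results[(drop, log_y)] = slope(xs, ys)

# the multiverse: every specification's estimate, displayed together
for (drop, log_y), est in sorted(results.items()):
    print(f"drop_outlier={drop!s:5}  log_outcome={log_y!s:5}  slope={est:.2f}")
```

Even with just two binary decisions the estimates vary a lot, and showing the whole table makes that sensitivity visible instead of hiding it behind a single preferred specification.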

I happened to receive these two emails in the same day. Russ Lyons pointed to this news article by Jocelyn Kaiser, “Major medical journals don’t follow their own rules for reporting results from clinical trials,” and Kevin Lewis pointed to this research article by Kevin Murphy and Herman Aguinis, “HARKing: How Badly Can Cherry-Picking and […]

This one goes in the Zombies category, for sure.

Paul Alper writes: I was in my local library and I came across this in Saturday’s WSJ, “The Math Behind Successful Relationships”: Nearly 30 years ago, a mathematician and a psychologist teamed up to explore one of life’s enduring mysteries: What makes some marriages happy and some miserable? The psychologist, John Gottman, wanted to craft […]

The garden of forking paths

Bert Gunter points us to this editorial: So, researchers using these data to answer questions about the effects of technology [screen time on adolescents] need to make several decisions. Depending on the complexity of the data set, variables can be statistically analysed in trillions of ways. This makes almost any pattern of results possible. As […]
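The editorial’s “trillions of ways” figure is not hyperbole; it falls out of simple arithmetic (the count below is hypothetical, just to show the multiplication): each independent yes/no analysis decision doubles the number of possible analyses.

```python
# each independent binary analysis decision doubles the count of possible analyses;
# 41 such decisions (covariate in/out, outlier rule, subgroup, transformation, ...)
# are easy to accumulate with a rich data set
binary_decisions = 41
paths = 2 ** binary_decisions

print(f"{binary_decisions} yes/no decisions -> {paths:,} distinct analyses")
```

With that many paths through the garden, almost any pattern of results can be reached by some defensible-looking sequence of choices, which is exactly the editorial’s point.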

Plaig!

Tom Scocca discusses some plagiarism that was done by a former New York Times editor: There was no ambiguity about it; Abramson clearly and obviously committed textbook plagiarism. Her text lifted whole sentences from other sources word for word, or with light revisions, presenting the same facts laid out in the same order as in […]

“Guarantee” is another word for “assumption”

I always think it’s funny when people go around saying their statistical methods have some sort of “guaranteed” performance. I mean, sure, guarantees are fine—but a guarantee comes from an assumption. If you want to say that your method has a guarantee but my method doesn’t, what you’re really saying is that you’re making an […]

What’s published in the journal isn’t what the researchers actually did.

David Allison points us to these two letters: “Alternating Assignment was Incorrectly Labeled as Randomization,” by Bridget Hannon, J. Michael Oakes, and David Allison, in the Journal of Alzheimer’s Disease, and “Change in study randomization allocation needs to be included in statistical analysis: comment on ‘Randomized controlled trial of weight loss versus usual care on telomere […]