Category: Miscellaneous Statistics

Many perspectives on Deborah Mayo’s “Statistical Inference as Severe Testing: How to Get Beyond the Statistics Wars”

This is not new—these reviews appeared in slightly rawer form several months ago on the blog. After that, I reorganized the material slightly and sent it to Harvard Data Science Review (motto: “A Microscopic, Telescopic, and Kaleidoscopic View of Data Science”) but unfortunately reached a reviewer who (a) didn’t like Mayo’s book, and (b) felt that […]

Controversies in the theory of measurement in mathematical psychology

We begin with this email from Guenter Trendler: On your blog you wrote: The replication crisis in social psychology (and science more generally) will not be solved by better statistics or by preregistered replications. It can only be solved by better measurement. Check this out: Measurement Theory, Psychology and the Revolution That Cannot Happen (pdf […]

Glenn Shafer tells us about the origins of “statistical significance”.

Shafer writes: It turns out that Francis Edgeworth, who introduced “significant” in statistics, and Karl Pearson, who popularized it in statistics, used it differently than we do. For Edgeworth and Pearson, “being significant” meant “signifying”. An observed difference was significant if it signified a real difference, and you needed a very small p-value to be […]

Chow and Greenland: “Unconditional Interpretations of Statistics”

Zad Chow writes: I think your readers might find this paper [“To Aid Statistical Inference, Emphasize Unconditional Descriptions of Statistics,” by Greenland and Chow] interesting. It’s a relatively short paper that focuses on how conventional statistical modeling is based on assumptions that are often in the background and dubious, such as the presence of some […]

“Persistent metabolic youth in the aging female brain”??

A psychology researcher writes: I want to bring your attention to a new PNAS paper [Persistent metabolic youth in the aging female brain, by Manu Goyal, Tyler Blazey, Yi Su, Lars Couture, Tony Durbin, Randall Bateman, Tammie Benzinger, John Morris, Marcus Raichle, and Andrei Vlassenko] that’s all over the news. Can one do a regression […]

“Statistical Inference Enables Bad Science; Statistical Thinking Enables Good Science”

As promised, let’s continue yesterday’s discussion of Christopher Tong’s article, “Statistical Inference Enables Bad Science; Statistical Thinking Enables Good Science.” First, the title, which makes an excellent point. It can be valuable to think about measurement, comparison, and variation, even if commonly-used statistical methods can mislead. This reminds me of the idea in decision analysis […]

Harking, Sharking, Tharking

Bert Gunter writes: You may already have seen this [“Harking, Sharking, and Tharking: Making the Case for Post Hoc Analysis of Scientific Data,” John Hollenbeck, Patrick Wright]. It discusses many of the same themes that you and others have highlighted in the special American Statistician issue and elsewhere, but does so from a slightly different […]

My math is rusty

When I’m giving talks explaining how multilevel modeling can resolve some aspects of the replication crisis, I mention this well-known saying in mathematics: “When a problem is hard, solve it by embedding it in a harder problem.” As applied to statistics, the idea is that it could be hard to analyze a single small study, […]
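To give a concrete (if oversimplified) picture of what “embedding in a harder problem” can look like in statistics, here is an illustrative Python sketch, with made-up study estimates and a between-study standard deviation assumed known, showing how a hierarchical model partially pools several noisy estimates toward their common mean rather than analyzing each small study on its own:

```python
import numpy as np

# Made-up estimates and standard errors from several small studies.
y = np.array([12.0, -4.0, 7.0, 20.0, 1.0])
sigma = np.array([9.0, 11.0, 8.0, 14.0, 10.0])
tau = 5.0  # between-study sd, assumed known here to keep the sketch short

# Precision-weighted estimate of the common mean.
mu = np.average(y, weights=1 / (sigma**2 + tau**2))

# Partial pooling: each study's estimate is shrunk toward mu, with noisier
# studies (larger sigma) shrunk more.
theta = (y / sigma**2 + mu / tau**2) / (1 / sigma**2 + 1 / tau**2)
print(np.round(theta, 1))
```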

My talk at the Metascience symposium Fri 6 Sep

The meeting is at Stanford, and here’s my talk: Embracing Variation and Accepting Uncertainty: Implications for Science and Metascience The world would be pretty horrible if your attitude on immigration could be affected by a subliminal smiley face, if elections were swung by shark attacks and college football games, if how you vote depended on […]

The methods playroom: Mondays 11-12:30

Each Monday 11-12:30 in the Lindsay Rogers room (707 International Affairs Bldg, Columbia University): The Methods Playroom is a place for us to work and discuss research problems in social science methods and statistics. Students and others can feel free to come to the playroom and work on their own projects, with the understanding that […]

Beyond Power Calculations: Some questions, some answers

Brian Bucher (who describes himself as “just an engineer, not a statistician”) writes: I’ve read your paper with John Carlin, Beyond Power Calculations. Would you happen to know of instances in the published or unpublished literature that implement this type of design analysis, especially using your retrodesign() function [here’s an updated version from Andy Timm], […]
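For readers curious what such a design analysis involves, here is a minimal Python sketch under the usual normal-sampling assumptions; the function name design_analysis and the example numbers are illustrative, not the retrodesign() code linked above. Given a hypothesized true effect size and a standard error, it estimates power, the Type S (wrong-sign) error rate, and the exaggeration ratio (Type M error):

```python
import numpy as np
from scipy import stats

def design_analysis(true_effect, se, alpha=0.05, n_sims=10_000, seed=0):
    """Sketch of a Gelman-Carlin style design analysis (illustrative, not retrodesign())."""
    rng = np.random.default_rng(seed)
    z = stats.norm.ppf(1 - alpha / 2)                # two-sided critical value
    p_hi = 1 - stats.norm.cdf(z - true_effect / se)  # P(significantly positive estimate)
    p_lo = stats.norm.cdf(-z - true_effect / se)     # P(significantly negative estimate)
    power = p_hi + p_lo
    type_s = p_lo / power                            # wrong sign, given significance
    estimates = true_effect + se * rng.standard_normal(n_sims)
    significant = np.abs(estimates) > z * se
    exaggeration = np.abs(estimates[significant]).mean() / true_effect
    return {"power": power, "type_s": type_s, "exaggeration": exaggeration}

# Example with made-up numbers: a small assumed effect measured very noisily.
print(design_analysis(true_effect=0.1, se=0.37))
```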

More on the piranha problem, the butterfly effect, unintended consequences, and the push-a-button, take-a-pill model of science

The other day we had some interesting discussion that I’d like to share. I started by contrasting the butterfly effect—the idea that a small, seemingly trivial, intervention at place A can potentially have a large, unpredictable effect at place B—with the “PNAS” or “Psychological Science” view of the world, in which small, seemingly trivial, intervention […]

You should (usually) log transform your positive data

The reason for log transforming your data is not to deal with skewness or to get closer to a normal distribution; that’s rarely what we care about. Validity, additivity, and linearity are typically much more important. The reason for log transformation is that, in many settings, it should make additive and linear models make more sense. […]
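As a tiny illustration of the additivity-and-linearity point (my example, not from the post): if positive data are generated multiplicatively, the relationship is nonlinear and heteroskedastic on the raw scale but becomes additive and linear after taking logs.

```python
import numpy as np

# Simulate a multiplicative data-generating process: y = a * x**b * noise,
# with positive x and y.  (Parameters and sample size are made up.)
rng = np.random.default_rng(1)
x = rng.uniform(1, 100, size=500)
a, b = 2.0, 0.7
y = a * x**b * rng.lognormal(mean=0.0, sigma=0.3, size=500)

# On the log scale the model is log(y) = log(a) + b*log(x) + error,
# so ordinary least squares on log-log data recovers the exponent b.
slope, intercept = np.polyfit(np.log(x), np.log(y), deg=1)
print(f"estimated b = {slope:.2f}, estimated log(a) = {intercept:.2f}")
```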

What can be learned from this study?

James Coyne writes: A recent article co-authored by a leading mindfulness researcher claims to address the problems that plague meditation research, namely, underpowered studies; lack of meaningful control groups; and an exclusive reliance on subjective self-report measures, rather than measures of the biological substrate that could establish possible mechanisms. The article claims adequate sample […]

Amending Conquest’s Law to account for selection bias

Robert Conquest was a historian who published critical studies of the Soviet Union and whose famous “First Law” is, “Everybody is reactionary on subjects he knows about.” I did some searching on the internet, and the most authoritative source seems to be this quote from Conquest’s friend Kingsley Amis: Further search led to this elaboration […]

Here are some examples of real-world statistical analyses that don’t use p-values and significance testing.

Joe Nadeau writes: I’ve followed the issues about p-values, signif. testing et al. both on blogs and in the literature. I appreciate the points raised, and the pointers to alternative approaches. All very interesting, provocative. My question is whether you and your colleagues can point to real world examples of these alternative approaches. It’s somewhat […]