Author: Andrew

Forming a hyper-precise numerical summary during a research crisis can improve an article’s chance of achieving its publication goals.

Speaking of measurement and numeracy . . . Kevin Lewis pointed me to this published article with the following abstract that starts out just fine but kinda spirals out of control: Forming a military coalition during an international crisis can improve a state’s chances of achieving its political goals. We argue that the involvement of […]

Coney Island

Inspired by this story (“Good news! Researchers respond to a correction by acknowledging it and not trying to dodge its implications”): Coming down from Psych Science Stopping off at PNAS Out all day datagathering And the craic was good Stopped off at the old lab Early in the morning Drove through Harvard taking pictures And […]

You should (usually) log transform your positive data

The reason for log transforming your data is not to deal with skewness or to get closer to a normal distribution; that’s rarely what we care about. Validity, additivity, and linearity are typically much more important. The reason for log transformation is that in many settings it should make additive and linear models make more sense. […]
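As a minimal sketch of that point (simulated data, not from the post): when a positive outcome is generated multiplicatively, taking logs turns it into the additive, linear form that a regression assumes.

```python
# Minimal sketch with simulated data (not from the post): a positive outcome
# with multiplicative structure becomes additive and linear after taking logs.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(1, 10, 200)                      # positive predictor
y = 2.0 * x**0.7 * rng.lognormal(0, 0.3, 200)    # positive outcome, multiplicative error

# log y = log 2 + 0.7 * log x + error is additive and linear,
# so an ordinary least-squares fit on the log scale is reasonable.
slope, intercept = np.polyfit(np.log(x), np.log(y), 1)
print(f"slope {slope:.2f} (true 0.7), intercept {intercept:.2f} (true log 2 = {np.log(2):.2f})")
```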

Did that “bottomless soup bowl” experiment ever happen?

I’m trying to figure out if Brian “Pizzagate” Wansink’s famous “bottomless soup bowl” experiment really happened. Way back when, everybody thought the experiment was real. After all, it was described in a peer-reviewed journal article. Here’s my friend Seth Roberts in 2006: An experiment in which people eat soup from a bottomless bowl? Classic! Or […]

What can be learned from this study?

James Coyne writes: A recent article co-authored by a leading mindfulness researcher claims to address the problems that plague meditation research, namely, underpowered studies; lack of meaningful control groups; and an exclusive reliance on subjective self-report measures, rather than measures of the biological substrate that could establish possible mechanisms. The article claims adequate sample […]

Bayesian Computation conference in January 2020

X writes to remind us of the Bayesian computation conference:

– BayesComp 2020 occurs on 7-10 January 2020 in Gainesville, Florida, USA
– Registration is open with regular rates till October 14, 2019
– Deadline for submission of poster proposals is December 15, 2019
– Deadline for travel support applications is September 20, 2019
– […]

Amending Conquest’s Law to account for selection bias

Robert Conquest was a historian who published critical studies of the Soviet Union and whose famous “First Law” is, “Everybody is reactionary on subjects he knows about.” I did some searching on the internet, and the most authoritative source seems to be this quote from Conquest’s friend Kingsley Amis: Further search led to this elaboration […]

As always, I think the best solution is not for researchers to just report on some preregistered claim, but rather for them to display the entire multiverse of possible relevant results.

I happened to receive these two emails in the same day. Russ Lyons pointed to this news article by Jocelyn Kaiser, “Major medical journals don’t follow their own rules for reporting results from clinical trials,” and Kevin Lewis pointed to this research article by Kevin Murphy and Herman Aguinis, “HARKing: How Badly Can Cherry-Picking and […]
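For concreteness, here is a schematic of what displaying a multiverse might look like (my own toy sketch with simulated data, not from either article): fit the same comparison under every combination of reasonable analysis choices and show all of the resulting estimates, rather than reporting whichever one clears a threshold.

```python
# Hypothetical multiverse sketch: run the same comparison under every
# combination of analysis choices and keep every estimate, instead of
# reporting only the specification that "worked". Data are simulated.
import itertools
import numpy as np

rng = np.random.default_rng(1)
n = 500
treat = rng.integers(0, 2, n)
age = rng.uniform(18, 80, n)
y = 0.2 * treat + 0.01 * age + rng.normal(0, 1, n)

def estimate(outcome, treat, age, adjust_age, trim_outliers):
    """Treatment coefficient under one combination of analysis choices."""
    keep = np.abs(outcome - outcome.mean()) < 3 * outcome.std() if trim_outliers else np.ones(len(outcome), bool)
    X = np.column_stack([np.ones(keep.sum()), treat[keep]] + ([age[keep]] if adjust_age else []))
    coef, *_ = np.linalg.lstsq(X, outcome[keep], rcond=None)
    return coef[1]

# The "multiverse": every combination of the analysis choices above.
for adjust_age, trim in itertools.product([False, True], repeat=2):
    est = estimate(y, treat, age, adjust_age, trim)
    print(f"adjust_age={adjust_age!s:5}  trim_outliers={trim!s:5}  estimate={est:+.3f}")
```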

“Beyond ‘Treatment Versus Control’: How Bayesian Analysis Makes Factorial Experiments Feasible in Education Research”

Daniel Kassler, Ira Nichols-Barrer, and Mariel Finucane write: Researchers often wish to test a large set of related interventions or approaches to implementation. A factorial experiment accomplishes this by examining not only basic treatment–control comparisons but also the effects of multiple implementation “factors” such as different dosages or implementation strategies and the interactions between these […]
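To make the design concrete, here is a toy 2×2 factorial sketch with invented factor names and simulated data; it only shows the main-effect and interaction structure the abstract describes, not the Bayesian partial-pooling analysis the paper itself proposes.

```python
# Toy 2x2 factorial sketch (invented data; the paper's actual approach is
# Bayesian, this only illustrates the factorial design structure).
import numpy as np

rng = np.random.default_rng(3)
n = 400
dose_high = rng.integers(0, 2, n)      # factor 1: low vs. high dosage
coaching = rng.integers(0, 2, n)       # factor 2: without vs. with coaching
y = 0.3 * dose_high + 0.2 * coaching + 0.1 * dose_high * coaching + rng.normal(0, 1, n)

# Design matrix: intercept, two main effects, and their interaction.
X = np.column_stack([np.ones(n), dose_high, coaching, dose_high * coaching])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(dict(zip(["intercept", "dose_high", "coaching", "interaction"], np.round(coef, 2))))
```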

Here are some examples of real-world statistical analyses that don’t use p-values and significance testing.

Joe Nadeau writes: I’ve followed the issues about p-values, signif. testing et al. both on blogs and in the literature. I appreciate the points raised, and the pointers to alternative approaches. All very interesting, provocative. My question is whether you and your colleagues can point to real world examples of these alternative approaches. It’s somewhat […]

For each parameter (or other qoi), compare the posterior sd to the prior sd. If the posterior sd for any parameter (or qoi) is more than 0.1 times the prior sd, then print out a note: “The prior distribution for this parameter is informative.”

Statistical models are placeholders. We lay down a model, fit it to data, use the fitted model to make inferences about quantities of interest (qois), check to see if the model’s implications are consistent with data and substantive information, and then go back to the model and alter, fix, update, augment, etc. Given that models […]
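A minimal sketch of the check described in the title (function and variable names are my own, and the draws are simulated): compare each parameter's posterior sd to its prior sd and print the note when the ratio exceeds 0.1.

```python
# Sketch of the prior-vs-posterior sd check from the title above
# (names and data here are my own, not from the post).
import numpy as np

def flag_informative_priors(prior_draws, posterior_draws, ratio=0.1):
    """prior_draws, posterior_draws: dicts mapping parameter name -> 1-D array of draws."""
    for name in posterior_draws:
        prior_sd = np.std(prior_draws[name])
        post_sd = np.std(posterior_draws[name])
        if post_sd > ratio * prior_sd:
            print(f"The prior distribution for {name} is informative "
                  f"(posterior sd {post_sd:.3f} > {ratio} * prior sd {prior_sd:.3f}).")

# Toy usage with simulated draws: alpha's posterior sd is far below 0.1 times
# its prior sd, so only beta gets flagged.
rng = np.random.default_rng(2)
prior = {"alpha": rng.normal(0, 10, 4000), "beta": rng.normal(0, 1, 4000)}
posterior = {"alpha": rng.normal(1.2, 0.3, 4000), "beta": rng.normal(0.5, 0.4, 4000)}
flag_informative_priors(prior, posterior)
```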

Conditional probability and police shootings

A political scientist writes: You might have already seen this, but in case not: PNAS published a paper [Officer characteristics and racial disparities in fatal officer-involved shootings, by David Johnson, Trevor Tress, Nicole Burkel, Carley Taylor, and Joseph Cesario] recently finding no evidence of racial bias in police shootings. Jonathan Mummolo and Dean Knox noted […]
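For the conditional-probability point in the title, here is a generic toy calculation (numbers invented; it does not reproduce either paper's analysis): the share of each group among those shot, P(group | shot), can look very different from the per-encounter shooting rate, P(shot | encounter), once encounter rates differ across groups.

```python
# Toy numbers, invented for illustration only: two groups with different
# numbers of police encounters and different per-encounter shooting rates.
encounters = {"A": 1000, "B": 4000}
p_shot_given_encounter = {"A": 0.02, "B": 0.01}

shootings = {g: encounters[g] * p_shot_given_encounter[g] for g in encounters}
total = sum(shootings.values())

for g in encounters:
    print(f"group {g}: P(shot | encounter) = {p_shot_given_encounter[g]:.3f}, "
          f"P(group {g} | shot) = {shootings[g] / total:.2f}")
# Group A is shot at twice the per-encounter rate yet makes up only a third of
# those shot, so the composition of shootings alone can't establish or rule
# out a disparity in the per-encounter rate.
```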