Category: Causal Inference

Matching (and discarding non-matches) to deal with lack of complete overlap, then regression to adjust for imbalance between treatment and control groups

John Spivack writes: I am contacting you on behalf of the biostatistics journal club at our institution, the Mount Sinai School of Medicine. We are working Ph.D. biostatisticians and would like the opinion of a true expert on several questions having to do with observational studies—questions that we have not found to be well addressed […]
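
The two-step workflow named in the title is worth pinning down. Below is a minimal sketch, on simulated data with invented names and numbers, of one common version: estimate propensity scores, form 1:1 nearest-neighbor matches within a caliper and discard treated units with no acceptable match (handling lack of complete overlap), then regress the outcome on treatment and covariates within the matched sample (adjusting for whatever imbalance matching left behind).

```python
# Sketch: (1) propensity-score matching with a caliper, discarding
# non-matches to restrict to the region of overlap; (2) regression on
# the matched sample to adjust for residual imbalance.
# All data simulated; variable names are illustrative.
import numpy as np
import statsmodels.api as sm
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
x = rng.normal(size=(n, 2))                       # covariates
p = 1 / (1 + np.exp(-(0.8 * x[:, 0] + 0.5 * x[:, 1])))
z = rng.binomial(1, p)                            # treatment assignment
y = 1.0 * z + x[:, 0] + 0.5 * x[:, 1] + rng.normal(size=n)  # true effect = 1

# Step 1: propensity scores, then 1:1 nearest-neighbor matching.
ps = LogisticRegression().fit(x, z).predict_proba(x)[:, 1]
caliper = 0.2 * ps.std()                          # one common rule of thumb
treated = np.flatnonzero(z == 1)
available = set(np.flatnonzero(z == 0))
pairs = []
for i in treated:
    cand = np.array(sorted(available))
    if cand.size == 0:
        break
    j = cand[np.argmin(np.abs(ps[cand] - ps[i]))]
    if abs(ps[j] - ps[i]) <= caliper:             # otherwise discard the non-match
        pairs.append((i, j))
        available.remove(j)

keep = np.array([k for pair in pairs for k in pair])

# Step 2: regression on the matched sample.
X = sm.add_constant(np.column_stack([z[keep], x[keep]]))
fit = sm.OLS(y[keep], X).fit()
print(fit.params[1], fit.bse[1])                  # treatment effect estimate, SE
```

Note that after discarding non-matches, the estimate applies to the matched population, that is, the region of overlap, not to the full sample.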

Debate about genetics and school performance

Jag Bhalla points us to this article, “Differences in exam performance between pupils attending selective and non-selective schools mirror the genetic differences between them,” by Emily Smith-Woolley, Jean-Baptiste Pingault, Saskia Selzam, Kaili Rimfeld, Eva Krapohl, Sophie von Stumm, Kathryn Asbury, Philip Dale, Toby Young, Rebecca Allen, Yulia Kovas, and Robert Plomin, along with this response […]

A potential big problem with placebo tests in econometrics: they’re subject to the “difference between significant and non-significant is not itself statistically significant” issue

In econometrics, or applied economics, a “placebo test” is not a comparison of a drug to a sugar pill. Rather, it’s a sort of conceptual placebo, in which you repeat your analysis using a different dataset, or a different part of your dataset, where no intervention occurred. For example, if you’re performing some analysis studying […]
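
The statistical point in the title can be made concrete with a small simulation. The sketch below (simulated data, invented numbers) fits the same regression in an "analysis" sample and in a "placebo" sample where there is no effect; the relevant comparison is a test of the difference between the two coefficients, not a comparison of their significance statuses. Adding the variances in the last step assumes the two samples are independent.

```python
# Comparing "significant in the real data" to "not significant in the
# placebo data" is not itself a test; test the difference directly.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)

def estimate(effect, n=500):
    """OLS estimate of a treatment coefficient in a simulated dataset."""
    z = rng.binomial(1, 0.5, n)
    y = effect * z + rng.normal(size=n)
    fit = sm.OLS(y, sm.add_constant(z)).fit()
    return fit.params[1], fit.bse[1]

b_real, se_real = estimate(effect=0.2)    # analysis sample
b_plac, se_plac = estimate(effect=0.0)    # placebo sample (no intervention)

# Wrong: compare significance statuses of the two estimates.
print("real z:", b_real / se_real, " placebo z:", b_plac / se_plac)

# Right: test the difference between the coefficients (independent samples).
diff = b_real - b_plac
se_diff = np.sqrt(se_real**2 + se_plac**2)
print("difference z:", diff / se_diff)
```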

What to do when your measured outcome doesn’t quite line up with what you’re interested in?

Matthew Poes writes: I’m writing a research memo discussing the importance of precisely aligning the outcome measures to the intervention activities. I’m making the point that an evaluation of the outcomes for a given intervention may net null results for many reasons, one of which could simply be that you are looking in the wrong […]
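
One mechanism behind such null results is simple attenuation: if the measured outcome is only weakly related to the outcome the intervention actually moves, the estimated effect shrinks toward zero and can easily look like nothing. A toy simulation (all names and numbers invented):

```python
# The intervention moves the targeted outcome, but the measured outcome
# only partially reflects it, attenuating the estimated effect.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 1000
z = rng.binomial(1, 0.5, n)
targeted = 0.5 * z + rng.normal(size=n)          # outcome the intervention acts on
measured = 0.3 * targeted + rng.normal(size=n)   # weakly aligned measured outcome

for name, y in [("targeted", targeted), ("measured", measured)]:
    fit = sm.OLS(y, sm.add_constant(z)).fit()
    print(name, "effect:", round(fit.params[1], 3), "SE:", round(fit.bse[1], 3))
```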

Don’t get fooled by observational correlations

Gabriel Power writes: Here’s something a little different: clever classrooms, according to which physical characteristics of classrooms cause greater learning. And the effects are large! Moving from the worst to the best design implies a gain of 67% of one year’s worth of learning! Aside from the dubiously large effect size, it looks like the […]
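
For intuition on how an observational comparison can produce a large spurious "effect," here is a toy simulation (all numbers invented) in which classroom type does nothing at all but is correlated with a student-background confounder that drives learning:

```python
# Classroom design has zero causal effect here, yet the naive observational
# comparison shows a large difference, driven entirely by the confounder.
import numpy as np

rng = np.random.default_rng(2)
n = 5000
background = rng.normal(size=n)                  # unobserved confounder
classroom = (background + rng.normal(size=n) > 0).astype(float)  # "good" design
learning = 0.0 * classroom + background + rng.normal(size=n)     # no true effect

naive = learning[classroom == 1].mean() - learning[classroom == 0].mean()
print("naive observational 'effect':", naive)    # clearly nonzero anyway
```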

Discussion of effects of growth mindset: Let’s not demand unrealistic effect sizes.

Shreeharsh Kelkar writes: As a regular reader of your blog, I wanted to ask you if you had taken a look at the recent debate about growth mindset [see earlier discussions here and here] that happened on theconversation.com. Here’s the first salvo by Brooke McNamara, and then the response by Carol Dweck herself. The debate […]

The gaps between 1, 2, and 3 are just too large.

Someone who wishes to remain anonymous points to a new study by David Yeager et al. on educational mindset interventions (link from Alex Tabarrok) and asks: On the blog we talk a lot about bad practice and what not to do. Might this be an example of how *to do* things? Or did they just […]

John Hattie’s “Visible Learning”: How much should we trust this influential review of education research?

Dan Kumprey, a math teacher at Lake Oswego High School, Oregon, writes: Have you considered taking a look at the book Visible Learning by John Hattie? It seems to be permeating and informing reform in our K-12 schools nationwide. Districts are spending a lot of money sending their staffs to conferences by Solution Tree to […]

Let’s be open about the evidence for the benefits of open science

A reader who wishes to remain anonymous writes: I would be curious to hear your thoughts on motivated reasoning among open science advocates. In particular, I’ve noticed that papers arguing for open practices have seriously bad/nonexistent causal identification strategies. Examples: Kidwell et al. 2017, Badges to Acknowledge Open Practices: A Simple, Low-Cost, Effective Method […]
