(This article was originally published at Statistical Modeling, Causal Inference, and Social Science, and syndicated at StatsBlogs.)

Brendan Nyhan writes:

Thought this might be of interest – new paper with Jacob Montgomery and Michelle Torres, “How conditioning on post-treatment variables can ruin your experiment and what to do about it.”

The post-treatment bias from dropout on Turk you just posted about is, in my opinion, actually a less severe problem than the inadvertent experimenter-induced bias that comes from conditioning on post-treatment variables, whether by using them to determine the sample (attention/manipulation checks, etc.) or by controlling for them or using them as moderators. We show how common these practices are in top journal articles, demonstrate the problem analytically, and reanalyze some published studies. Here’s the table on the extent of the problem:

Post-treatment bias is not new, but it’s an important area where practice hasn’t improved as rapidly as elsewhere.
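As a quick sketch of the mechanism Nyhan describes (this is not code from the paper; the variable names, effect sizes, and selection model are invented for illustration), a small simulation shows how conditioning on a post-treatment variable, such as an attention check that the treatment itself influences, can bias the estimated treatment effect even in a randomized experiment:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Illustrative setup: randomized treatment T, latent attentiveness U
# (which affects both check-passing and the outcome), and outcome Y
# with a true treatment effect of 1.0.
T = rng.integers(0, 2, n)
U = rng.normal(0.0, 1.0, n)
Y = 1.0 * T + U + rng.normal(0.0, 1.0, n)

# Post-treatment "attention check": passing depends on both T and U,
# so passing is a collider between treatment and attentiveness.
pass_prob = 1.0 / (1.0 + np.exp(-(1.0 * T + 2.0 * U)))
passed = rng.random(n) < pass_prob

# Difference in means on the full randomized sample: unbiased.
full = Y[T == 1].mean() - Y[T == 0].mean()

# Difference in means among check-passers only: conditioning on the
# post-treatment variable makes the control-group passers unusually
# attentive (high U), biasing the estimate downward here.
sub = Y[(T == 1) & passed].mean() - Y[(T == 0) & passed].mean()

print(f"full sample:  {full:.2f}")   # close to the true effect, 1.0
print(f"passers only: {sub:.2f}")    # noticeably below 1.0
```

The direction and size of the bias depend on how strongly treatment and the latent trait drive check-passing; the point of the sketch is only that randomization no longer protects the comparison once the sample is filtered on a post-treatment variable.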

I wish they’d round their numbers to the nearest percentage point.
