While teaching undergraduate research methods over the summer, my students and I noticed something about academic journal articles that I had never observed before: authors carefully draw a sharp distinction between causal and correlational claims when discussing their data analysis, but then interpret the correlational patterns in a thoroughly causal way in their conclusion sections. Shortly afterward, Kaiser Fung started writing about "causal creep," and Scott Clifford independently raised the issue with me. Since then, I have seen it happening all the time, both in academic articles (even in my own working papers) and in popular discussions of political science research.
I found another excellent example this morning in a discussion of an observational study of a small convenience sample of college students.
Murray admits the sample was small and came from nationally unrepresentative, similar backgrounds. But while more research needs to be done with larger groups and both parents to confirm the findings, the results do show that different parenting styles do affect how a child's political belief system is impacted by his or her parents.
How do we avoid causal creep? I think the phenomenon flows directly from the mistaken attitude that statistics points sharply to a verdict: the causal theory is either right or wrong. In fact, most evidence is highly ambiguous at best. Rather than use statistics to "test" arguments, I prefer an approach in which the analyst makes a clear causal claim, defends that claim theoretically, and makes the best empirical argument she can against alternative explanations while acknowledging the shortcomings. In this framework, statistical analyses are more appropriately viewed as arguments that can be stronger or weaker, and the reader is invited to weigh the evidence rather than trust the researcher.
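To make the ambiguity concrete, here is a minimal simulation sketch (a hypothetical illustration of my own, not from the study discussed above) in which a lurking confounder produces a strong correlation between two variables that have no causal connection to each other. The variable names are invented for the example.

```python
import random

random.seed(42)

# Hypothetical setup: an unobserved "household environment" factor drives
# both a parenting-style score and a belief-transmission score. Neither
# variable causes the other, yet they correlate strongly.
n = 10_000
confounder = [random.gauss(0, 1) for _ in range(n)]
parenting = [c + random.gauss(0, 0.5) for c in confounder]
transmission = [c + random.gauss(0, 0.5) for c in confounder]

def corr(x, y):
    """Pearson correlation of two equal-length lists."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

# A strong correlation appears even though the true causal effect is zero.
print(f"correlation: {corr(parenting, transmission):.2f}")
```

The observed correlation here is large (roughly 0.8 by construction), yet a causal interpretation would be entirely wrong. The data alone cannot distinguish this scenario from a genuine causal effect, which is exactly why the analyst must argue against alternative explanations rather than treat the estimate as a verdict.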
Please comment on the article here: Carlisle Rainey » Methods/Statistics