(This article was originally published at Statistical Modeling, Causal Inference, and Social Science, and syndicated at StatsBlogs.)
Robin Evans writes:
As someone who works partly on causal inference and lies somewhere in the middle of the ‘conservative to permissive’ range you mention in your paper, I’d be interested to know what you think on the following.
Is there really a (philosophical) distinction between hypothesising a zero relationship (causal or otherwise) between two variables in a large multivariate model, and making some other modelling assumption, such as your variables being jointly Gaussian, or effects being additive, or whatever?
All involve restricting inference to some lower-dimensional subset of possible models, and all are simplifications which may assist in inferring other aspects of the model by reducing the number of parameters to be estimated. All are assumptions which we probably don’t believe to hold exactly in the real world, but which may be close enough to the truth for this not to matter.
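Evans’s point about parameter counting can be made concrete with a small sketch (my own illustration, not from the letter; the variable names and the simulated data are invented for the example). Hypothesising that one coefficient is exactly zero is, mechanically, the same kind of move as any other modelling assumption: it shrinks the model space and leaves fewer parameters to estimate.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 500, 3
X = rng.normal(size=(n, p))
# Simulated "truth": y depends on x0 and x1 only; the x2 effect is exactly zero.
y = 1.5 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(size=n)

# Unrestricted model: estimate all p coefficients.
beta_full, *_ = np.linalg.lstsq(X, y, rcond=None)

# Restricted model: hypothesise a zero relationship for x2,
# i.e. drop that column and estimate p - 1 coefficients.
beta_restricted, *_ = np.linalg.lstsq(X[:, :2], y, rcond=None)

print(beta_full)        # three estimates, the last one near zero
print(beta_restricted)  # two estimates
```

Whether the zero restriction (or, analogously, joint Gaussianity or additivity) is a good idea then depends on how close it is to the truth, which is exactly the question Evans raises.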
Of course, how you choose to interpret the zero result in your social science data afterwards will be important, but so too if you’ve assumed a simple correlation structure such as that of a Gaussian distribution.
I [Evans] tend to think of pretty much all modelling assumptions (from scientific induction upwards) as ‘smoothness assumptions’ about