Methodological terrorism. For reals. (How to deal with “what we don’t know” in missing-data imputation.)

February 7, 2018
(This article was originally published at Statistical Modeling, Causal Inference, and Social Science, and syndicated at StatsBlogs.)

Kevin Lewis points us to this paper, by Aaron Safer-Lichtenstein, Gary LaFree, and Thomas Loughran, on the methodology of terrorism studies. This is about as close to actual “methodological terrorism” as we’re ever gonna see here.

The linked article begins:

Although the empirical and analytical study of terrorism has grown dramatically in the past decade and a half to incorporate more sophisticated statistical and econometric methods, data validity is still an open, first-order question. Specifically, methods for treating missing data often rely on strong, untestable, and often implicit assumptions about the nature of the missing values.

Later, they write:

If researchers choose to impute data, then they must be clear about the benefits and drawbacks of using an imputation technique.

Yes, definitely. One funny thing about missing-data imputation is that the methods are so mysterious, and so obviously subject to uncheckable assumptions, that there’s a tendency for researchers to just throw up their hands and give up: either go for crude data-simplification strategies such as throwing away all cases where anything is missing, or just impute without any attempt to check the resulting inferences.

My preference is to impute and then check assumptions, as here. That said, in practice this can be a bit of work, so in a lot of my own applied work I kinda close my eyes to the problem too. I should do better.
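To make the contrast concrete, here is a minimal numpy-only sketch (not from the paper or the post; the variables and the regression-imputation strategy are illustrative assumptions). It simulates data that are missing at random given an observed covariate, compares the crude drop-the-missing-cases strategy against regression imputation with a noise draw, and then does the kind of check suggested above by comparing the imputed-data summary with the truth:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate two correlated variables; y will have values missing at random.
n = 1000
x = rng.normal(0, 1, n)
y = 2.0 * x + rng.normal(0, 1, n)

# Missingness depends on the observed x (MAR): higher x -> y more likely missing.
miss = rng.random(n) < 1 / (1 + np.exp(-x))
y_obs = np.where(miss, np.nan, y)

# Crude strategy: complete-case analysis (throw away rows with missing y).
cc_mean = np.nanmean(y_obs)

# Imputation strategy: regress y on x among observed cases, then impute
# the missing y's with a residual noise draw so they aren't artificially precise.
obs = ~miss
beta, alpha = np.polyfit(x[obs], y_obs[obs], 1)
resid_sd = np.std(y_obs[obs] - (alpha + beta * x[obs]))
y_imp = y_obs.copy()
y_imp[miss] = alpha + beta * x[miss] + rng.normal(0, resid_sd, miss.sum())

# Check the imputations against what we know: under MAR-given-x, the
# complete-case mean is biased, the imputed-data mean should not be.
print(f"true mean:          {y.mean():.3f}")
print(f"complete-case mean: {cc_mean:.3f}")
print(f"imputed-data mean:  {y_imp.mean():.3f}")
```

In real applications the truth isn’t available, so the check is instead graphical: plot imputed values alongside observed values (within strata of the predictors) and see whether they look plausible together. The uncheckable part, as the linked article says, is the missingness mechanism itself.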
