This comes up from time to time. We were discussing a published statistical blunder, an innumerate, overconfident claim arising from blind faith that a crude regression analysis would control for various differences between groups.

Martha made the following useful comment:

Another factor that I [Martha] believe tends to promote the kind of thing we’re talking about here is use of language in ways that obscure that the devil is in the details. This can be illustrated in this particular case by the following quote from Marc’s original post:

“controlling for a confounder in a model does not resolve the problem? That is, if I put all covariates into a statistical model and compare it to a model with all covariates + target predictor, I thought I was able to test whether the additional target predictor can account for additional variance in the criterion?”

A big part of the problem here is using the word “control” in a technical meaning that is only vaguely related to the way the word is used in everyday situations. My experience is that the use of “control” here leads people to believe (innocently) that the procedure in question does something stronger than it really does. I think it would be more helpful (communicate more clearly) if the process were called “attempt to adjust for” or “attempt to take into account” rather than “control for”.
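The procedure Marc describes in the quoted question, comparing a model with all covariates to one with all covariates plus the target predictor, is an ordinary nested-model comparison. Here is a minimal sketch of that comparison on hypothetical simulated data (the variable names and numbers are made up for illustration, not taken from the discussion):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
c1, c2 = rng.normal(size=(2, n))           # covariates
x = 0.5 * c1 + rng.normal(size=n)          # target predictor, correlated with c1
y = c1 + c2 + 0.3 * x + rng.normal(size=n) # outcome

def r_squared(design, y):
    """R^2 from an ordinary least-squares fit of y on the design matrix."""
    beta, *_ = np.linalg.lstsq(design, y, rcond=None)
    resid = y - design @ beta
    return 1 - resid.var() / y.var()

ones = np.ones(n)
base = np.column_stack([ones, c1, c2])       # covariates only
full = np.column_stack([ones, c1, c2, x])    # covariates + target predictor
r2_base = r_squared(base, y)
r2_full = r_squared(full, y)

# Partial F statistic for the one added predictor: does x account for
# additional variance in y beyond the covariates?
f = (r2_full - r2_base) / ((1 - r2_full) / (n - full.shape[1]))
```

This does test whether the added predictor accounts for additional variance, exactly as the quote says. The point of Martha's comment is about what that test does and does not license you to conclude, not about the mechanics of running it.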

I’ve felt this for a while. For example, in revising our book for the new edition, Jennifer and I went through and changed “control for” to “adjust for” wherever we could find the phrase. (We also removed the term “statistically significant” except when explaining what it means, so that readers know to be wary of it.)

Commenter Mikhail added:

I guess “attempt to adjust for” can be further expanded to “attempt to adjust for using unrealistic linear assumption.”

All adjustments are attempted adjustments and all assumptions are unrealistic. So I don’t mind saying “adjust for,” with the understanding that any adjustment is necessarily an approximation.
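Mikhail's point about the linearity assumption can be seen in a small simulation. In this hypothetical example (made-up data, not from the post), the confounder z affects both the predictor and the outcome through its square, so the linear "adjustment" for z removes none of the confounding, while adjusting on the right scale does:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
z = rng.normal(size=n)            # confounder
x = z**2 + rng.normal(size=n)     # predictor driven by z nonlinearly
y = z**2 + rng.normal(size=n)     # outcome; the true effect of x on y is zero

def coef_of_x(design, y):
    """OLS coefficient on x, taken to be the last column of the design matrix."""
    beta, *_ = np.linalg.lstsq(design, y, rcond=None)
    return beta[-1]

ones = np.ones(n)
naive  = coef_of_x(np.column_stack([ones, x]), y)        # no adjustment
linear = coef_of_x(np.column_stack([ones, z, x]), y)     # linear adjustment for z
square = coef_of_x(np.column_stack([ones, z**2, x]), y)  # adjust on the right scale
```

Here `naive` and `linear` are both badly biased away from zero (z and x are uncorrelated, so partialling out z linearly changes nothing), while `square` is close to the true value of zero. That is the sense in which any adjustment is an attempted adjustment: it works only to the extent that the assumed functional form resembles reality.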