(This article was originally published at Statistical Modeling, Causal Inference, and Social Science, and syndicated at StatsBlogs.)

Alex Hoffman points me to this interview by Dylan Matthews of education researcher Thomas Kane, who at one point says,

Once you corrected for measurement error, a teacher’s score on their chosen videos and on their unchosen videos were correlated at 1. They were perfectly correlated.

Hoffman asks, “What do you think? Do you think that just maybe, perhaps, it’s possible we ought to consider, I’m just throwing out the possibility that it might be that the procedure for correcting measurement error might, you know, be a little too strong?”

I don’t know exactly what’s happening here, but it might be something I’ve seen on occasion when fitting multilevel models using a point estimate for the group-level variance. It goes like this: measurement-error models are multilevel models; they involve estimating the distribution of a latent variable. When fitting a multilevel model, it is possible for the group-level variance to be estimated at exactly zero, even though the true group-level variance is not zero. We have a penalized-likelihood approach that keeps the estimate away from the boundary (see this paper, to appear in Psychometrika), but this is not yet standard in software packages. The result is that a multilevel model can give you estimates of zero variance or perfect correlation when the variation in the data is less than its expected value under the noise model. With a full Bayesian approach, you’d find that the correlation could take on a range of possible values; it’s not really equal to 1.
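To see the mechanism concretely, here is a minimal simulation sketch (my own toy example, not Kane’s data or method) of the classical disattenuation correction, where the observed correlation is divided by the reliabilities. The latent correlation, sample size, and noise level are all made-up assumptions; the point is just that the correction inflates the point estimate, and when the reliabilities are themselves noisily estimated, the corrected value can land at or beyond 1.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical setup: a latent trait drives the scores on both sets of
# videos; each score is observed with measurement noise.
n = 30            # assumed sample size
true_corr = 0.7   # assumed latent correlation, deliberately less than 1
noise_sd = 0.8    # assumed measurement-noise standard deviation

latent = rng.multivariate_normal(
    [0, 0], [[1, true_corr], [true_corr, 1]], size=n)
x = latent[:, 0] + rng.normal(0, noise_sd, n)  # e.g., chosen videos
y = latent[:, 1] + rng.normal(0, noise_sd, n)  # e.g., unchosen videos

observed_r = np.corrcoef(x, y)[0, 1]

# Classical correction for attenuation: divide the observed correlation
# by sqrt(rel_x * rel_y). Here both reliabilities equal
# var(latent) / var(observed) = 1 / (1 + noise_sd**2) and are treated as
# known; in practice they are estimated, which adds further noise and
# can push the corrected value to the boundary at 1 or past it.
reliability = 1.0 / (1.0 + noise_sd**2)
corrected_r = observed_r / reliability

print(f"observed r:  {observed_r:.3f}")
print(f"corrected r: {corrected_r:.3f}")
```

The corrected estimate is always larger in magnitude than the observed one, and nothing in the point-estimate calculation stops it at 1; a full Bayesian treatment would instead report a range of plausible correlations bounded by the model.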