Fixed effects, followed by Bayes shrinkage?

December 30, 2012
(This article was originally published at Statistical Modeling, Causal Inference, and Social Science, and syndicated at StatsBlogs.)

Stuart Buck writes:

I have a question about fixed effects vs. random effects. Amongst economists who study teacher value-added, it has become common to see people saying that they estimated teacher fixed effects (via least squares dummy variables, so that there is a parameter for each teacher), but that they then applied empirical Bayes shrinkage so that the teacher effects are brought closer to the mean. (See this paper by Jacob and Lefgren, for example.)

Can that really be what they are doing? Why wouldn’t they just run random (modeled) effects in the first place? I feel like there’s something I’m missing.
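(For concreteness, here is a rough sketch of the two-step procedure Buck describes, written in Python with simulated data. The variable names and the method-of-moments variance estimate are illustrative choices of mine, not taken from the Jacob and Lefgren paper. Step 1 estimates a separate intercept for each teacher by least squares on dummy variables; step 2 shrinks each estimate toward the grand mean by the usual empirical Bayes reliability factor.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: true teacher effects plus student-level noise.
n_teachers, students_per = 50, 20
true_effects = rng.normal(0.0, 0.5, n_teachers)           # between-teacher sd = 0.5
teacher = np.repeat(np.arange(n_teachers), students_per)  # teacher id for each student
y = true_effects[teacher] + rng.normal(0.0, 1.0, len(teacher))

# Step 1: "fixed effects" -- least squares on teacher dummy variables.
# With no other covariates this reduces to the per-teacher mean.
X = np.eye(n_teachers)[teacher]                 # dummy-variable design matrix
fe_hat, *_ = np.linalg.lstsq(X, y, rcond=None)

# Step 2: empirical Bayes shrinkage toward the grand mean.
n_j = np.bincount(teacher)                      # students per teacher
resid = y - fe_hat[teacher]
sigma2 = resid @ resid / (len(y) - n_teachers)  # within-teacher (residual) variance
# Method-of-moments estimate of the between-teacher variance tau^2.
tau2 = max(fe_hat.var(ddof=1) - sigma2 * np.mean(1.0 / n_j), 0.0)
shrink = tau2 / (tau2 + sigma2 / n_j)           # reliability of each raw estimate
eb_hat = fe_hat.mean() + shrink * (fe_hat - fe_hat.mean())

print("sd of raw estimates:", fe_hat.std(ddof=1))
print("sd of shrunken estimates:", eb_hat.std(ddof=1))
```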

My reply: I don’t know the full story here, but I’m thinking there are two goals. The first is to get an unbiased estimate of an overall treatment effect (and there the econometricians prefer so-called fixed effects; I disagree with them on this, but I know where they’re coming from). The second is to estimate individual teacher effects (and there it makes sense to use so-called random effects, although in general I would shrink toward a regression model with teacher-level predictors rather than toward an overall mean).
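(To illustrate that last point, here is a minimal sketch, not Gelman’s own code: the teacher-level predictor, its effect size, and the assumed sampling variances are made up. Shrinking toward a regression model just replaces the grand mean in the shrinkage formula with each teacher’s fitted value from a regression of the estimated effects on teacher-level predictors.)

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: per-teacher estimates fe_hat with known sampling variances,
# plus a teacher-level predictor (say, years of experience).
n_teachers = 50
experience = rng.uniform(0, 20, n_teachers)
true_effects = 0.02 * experience + rng.normal(0.0, 0.3, n_teachers)
se2 = np.full(n_teachers, 0.05)                    # sampling variance of each estimate
fe_hat = true_effects + rng.normal(0.0, np.sqrt(se2))

# Group-level regression: predict each teacher's effect from teacher-level covariates.
Z = np.column_stack([np.ones(n_teachers), experience])
beta, *_ = np.linalg.lstsq(Z, fe_hat, rcond=None)
fitted = Z @ beta

# Residual variance around the regression line plays the role of tau^2.
tau2 = max(np.var(fe_hat - fitted, ddof=Z.shape[1]) - se2.mean(), 0.0)

# Shrink each raw estimate toward its own fitted value, not the overall mean.
shrink = tau2 / (tau2 + se2)
eb_hat = fitted + shrink * (fe_hat - fitted)

print("shrinkage factors range:", shrink.min(), "to", shrink.max())
```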



