I received this press release in the mail:
Study finds ‘Growth Mindset’ intervention taking less than an hour raises grades for ninth graders
Intervention is first to show national applicability, breaks new methodological ground
– Study finds low-cost, online growth mindset program taking less than an hour can improve ninth graders’ academic achievement
– The program can be used for free in high schools around U.S. and Canada
– Researchers developed rigorous new study design that can help identify who could benefit most from intervention and under which social contexts
A groundbreaking study of more than 12,000 ninth grade U.S. students has revealed how a brief, low-cost, online program that takes less than an hour to complete can help students develop a growth mindset and improve their academic achievement. A growth mindset is the belief that a person’s intellectual abilities are not fixed and can be further developed.
Published in the journal Nature on August 7, the nationally representative study showed that both lower- and higher-achieving students benefited from the program. Lower-achieving students had significantly higher grades in ninth grade, on average, and both lower- and higher-achieving students were more likely to enroll in more challenging math courses their sophomore year. The program increased achievement as much as, and in some cases more than, previously evaluated, larger-scale education interventions costing far more and taking far longer to complete. . . .
The National Study of Learning Mindsets is as notable for its methodology to investigate the differences, or heterogeneity, in treatment effects . . . the first time an experimental study in education or social psychology has used a random, nationally representative sample—rather than a convenience sample . . .
Past studies have shown mixed effects for growth mindset interventions, with some showing small effects and others showing larger ones.
“These mixed findings result from both differences in the types of interventions, as well as from not using nationally representative samples in ways that rule out other competing hypotheses,” [statistician Elizabeth] Tipton said. . . .
The researchers hypothesized that the effects of the growth mindset intervention would be stronger for some types of schools and students than others and designed a rigorous study that could test for such differences. Though the overall effect might be small when looking at all schools, particular types of schools, such as those performing in the bottom 75% of academic achievement, showed larger effects from the intervention.
I’m often skeptical about studies that appear in the tabloids and get promoted via press release, and I guess I’m skeptical here too—but I know a lot of the people involved in this one, and I think they know what they’re doing. Also I think I helped out in the design of this study, so it’s not like I’m a neutral observer here.
One thing that does bother me is all the p-values in the paper and, in general, the reliance on classical analysis. Given that the goal of this research is to recognize variation in treatment effects, we should expect many of the important aspects of the model not to be estimated very precisely from the data (remember 16). So I'm thinking that, instead of strewing the text with p-values, there should be a better way to summarize inferences for interactions. Along similar lines, I'm guessing they could do better using Bayesian multilevel analysis to partially pool estimated interactions toward zero, rather than simple data comparisons, which will be noisy. I recognize that many people consider classical analysis to be safer or more conservative, but statistical significance thresholding can just add noise; I think it's partial pooling that will give results that are more stable and more likely to stand up under replication. This is not to say that I think the conclusions in the article are wrong; also, just at the level of the statistics, I think by far the most important issues are those identified by Tipton in the above-linked press release. I just think there's more that can be done.
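To make the partial-pooling point concrete, here's a minimal sketch of what shrinking noisy interaction estimates toward zero looks like. This is not the Yeager et al. analysis; it uses simulated per-school effects (all numbers and names here are hypothetical) and a simple normal-normal empirical-Bayes shrinkage rather than a full Bayesian multilevel model, but it illustrates why the pooled estimates are less noisy than the raw comparisons:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-school interaction effects: true effects scattered
# around zero, observed with substantial sampling noise.
n_schools = 40
true_effects = rng.normal(0.0, 0.05, n_schools)
se = np.full(n_schools, 0.10)                 # per-school standard errors
estimates = true_effects + rng.normal(0.0, se)

def partial_pool(est, se):
    """Shrink each estimate toward the precision-weighted grand mean."""
    mu = np.average(est, weights=1.0 / se**2)
    # Method-of-moments estimate of the between-school variance tau^2:
    # observed variance minus average sampling variance, floored at zero.
    tau2 = max(np.var(est, ddof=1) - np.mean(se**2), 0.0)
    # Shrinkage weight: 0 = complete pooling, 1 = no pooling.
    weight = tau2 / (tau2 + se**2)
    return mu + weight * (est - mu)

pooled = partial_pool(estimates, se)

# The raw comparisons are far more dispersed than the shrunken estimates,
# so the largest raw estimates are mostly noise.
print(np.std(estimates), np.std(pooled))
```

A full multilevel model (in Stan, say) would also propagate uncertainty in tau and mu, but even this crude shrinkage shows how extreme school-level comparisons get pulled toward zero when the between-school signal is small relative to the noise.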
It appears the data and code are available here, so other people can do their own analyses, perhaps using multilevel modeling and graphical displays of grids of comparisons to get a clearer picture of what can be learned from the data.
In any case, this topic is potentially very important—an effective intervention lasting less than an hour—so I'm glad that top statisticians and education researchers are working on it. Here's how Yeager et al. conclude:
The combined importance of belief change and school environments in our study underscores the need for interdisciplinary research to understand the numerous influences on adolescents’ developmental trajectories.