Does “status threat” explain the 2016 presidential vote? Diana Mutz replies to criticism.

A couple of months ago we reported on an article by sociologist Steve Morgan criticizing a published paper by political scientist Diana Mutz.

Mutz’s original article was called, “Status Threat, Not Economic Hardship, Explains the 2016 Presidential Vote,” and Morgan’s reply is called, “Status Threat, Material Interests, and the 2016 Presidential Vote” (it originally had the more provocative title, “Fake News: Status Threat Does Not Explain the 2016 Presidential Vote”).

Mutz wrote a long and detailed response, much of which will appear in the same journal as Morgan’s article. I have not had a chance to read all of this in detail, but speaking generally I am happy with how this exchange is going, in that both researchers are (a) getting into the details, and (b) connecting these data issues with the larger political questions under study. Yes, both sides are somewhat annoyed at this point, but that’s fine: both Diana and Steve are acting professionally, and I think all this discussion will help further research in this area.

In addition to her formal response, Mutz also had some responses to our blog post, which I can share right now. Here’s Mutz:

Before adding my own [Mutz’s] reactions, first some corrections to Gelman’s description of my study are in order. In the cross-sectional analysis, Gelman describes four blocks, A, B, C and D, when in reality there were only three: A) demographics, including education, B) seven (not four) items indicating a basis for retrospective and prospective concern about personal financial conditions, and C) eight status threat indicators tapping both racial and global threats to dominant status.

Second, Gelman describes the analysis as gradually adding additional blocks of variables, changing only the order in which they are entered. This is inaccurate. I enter the first block of demographics (including education), and then add either B) the personal finance indicators or C) the status threat indicators. The analysis never includes all of them at once, nor do I change the order in which they are entered. What I altered in the analyses in Table S5 was not the order of entering the variables, but which additional variables were added to the basic model: economic variables or status threat variables.

Third, this is also described as a causal analysis, which was not the point of using the cross-sectional data. The panel data are much better for those purposes. The question I was attempting to answer using cross-sectional data is why education was so strongly related to voting for Trump in 2016. The “left behind” interpretation was based on the assumption that education represented the effect of economic self-interest among working-class people with lower incomes and less education. Income was not as good a predictor of preferring Trump, but this interpretation of the relationship persisted nonetheless.

Because the relationship with education disappears when status threat variables are included, but changes negligibly when economic variables are included, I conclude that education’s strong relationship to Trump preference arises because those with low education are also higher in status threat.
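To make the design concrete, here is a minimal simulated sketch of the block-entry comparison described above. This is not Mutz’s code or data; the variable names and data-generating process are invented purely to illustrate the pattern of coefficient attenuation she reports:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data in which a status-threat indicator, not personal
# finances, carries the education-vote relationship (all names invented).
rng = np.random.default_rng(0)
n = 5000
education = rng.normal(size=n)
racial_threat = -0.8 * education + rng.normal(size=n)   # correlated with education
personal_finances = rng.normal(size=n)                  # unrelated to education
trump_support = 0.6 * racial_threat + 0.1 * personal_finances + rng.normal(size=n)
df = pd.DataFrame(dict(education=education, racial_threat=racial_threat,
                       personal_finances=personal_finances,
                       trump_support=trump_support))

# Block A alone, then A plus the economic block, then A plus status threat;
# the blocks are never entered together, matching the design described above.
models = {
    "A (demographics)":      "trump_support ~ education",
    "A + B (economic)":      "trump_support ~ education + personal_finances",
    "A + C (status threat)": "trump_support ~ education + racial_threat",
}
for label, formula in models.items():
    fit = smf.ols(formula, data=df).fit()
    print(f"{label:24s} education coef = {fit.params['education']:+.3f}")
# In this toy setup, education's sizable negative coefficient barely moves
# when the economic variable is added but collapses toward zero once the
# status-threat indicator enters.
```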

Of course, there may be other variables that also could erase the relationship between Trump voting and education. But thus far, the only analyses that have been able to eliminate education’s impact have done so with the same kind of status threat indicators I have used here, that is, indicators tied to racial attitudes (e.g., Sides, Tesler, and Vavreck 2018; Schaffner, MacWilliams, and Nteta 2017).

And yes, PNAS insists that all articles have a single declarative sentence as their title. No subtitles, no colons, etc. So I was asked to change my title before publication. Likewise, because most election interpretations stress the importance of priming or activation of existing opinions over attitude change, I was asked by reviewers to include tests of priming along with the panel models showing that opinion change over time is related to changing candidate support. Including these interactions doesn’t change anything about the results, as Morgan undoubtedly knows, but combining the two analyses takes up less space in a journal that allows a maximum of 10 pages.

Gelman also says he doesn’t buy my claim that estimating a fixed effects model with a time-varying independent variable and a time-varying dependent variable “represents evidence that change in an independent variable corresponds to change in the dependent variable at the individual level.” My statement is simply a restatement of what fixed effects does. If the objection is to the idea that one does not need a fully specified model, as one would in an analysis that models between-subject variance, that is the whole point of the fixed effects approach: less risk of omitted variable bias (see Vaisey and Miles 2017). Some assumptions of the model cannot be tested without three or more waves, but it is nonetheless the best that one can do with the available data, with far less risk of omitted variable bias than other approaches.
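For readers wondering what that claim amounts to mechanically: with only two waves, the fixed-effects (within) estimator is algebraically identical to regressing individual-level change in the outcome on individual-level change in the predictor, and any stable person-level confounder drops out. A minimal simulated sketch, not Mutz’s analysis, with data generated purely for illustration:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1000
alpha = rng.normal(size=n)                      # stable person-level trait
x = alpha[:, None] + rng.normal(size=(n, 2))    # time-varying predictor, 2 waves
y = 0.5 * x + alpha[:, None] + rng.normal(size=(n, 2))  # time-varying outcome

# Within (fixed-effects) estimator: demean within person, pool, and regress.
xw = (x - x.mean(axis=1, keepdims=True)).ravel()
yw = (y - y.mean(axis=1, keepdims=True)).ravel()
fe = sm.OLS(yw, xw).fit()

# First-differences estimator: wave-2-minus-wave-1 change on change.
fd = sm.OLS(y[:, 1] - y[:, 0], x[:, 1] - x[:, 0]).fit()

# With two waves the two coefficients are identical, and the confounding
# person effect alpha cancels out of both.
print(fe.params[0], fd.params[0])
```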

Finally, while I admit that perhaps I’m old-fashioned, I don’t consider Twitter the ideal venue for a scholarly discussion. [I agree! — AG] Unlike the usual academic practice when one critiques another’s work, Morgan did not share his article with me, but posted a link on Twitter, where he characterized it as “a frontal assault on someone else’s article,” a designation that did not make me especially eager to engage in dialogue with him. That approach, combined with his original article title, “Fake News,” made me suspect this was yet another troll. Professor Morgan has apparently chosen to forgo the mutual respect that is customary when questioning others’ scholarly work, and has instead taken to Trump-like tactics in both his mode of communication and his tone.

I certainly hope this is not “science at its best.” As someone who has spent a lot of time studying the impact of incivility in discourse, I know that while it is extremely useful for attracting audiences, incivility has unfortunate consequences for serious and productive dialogue (Mutz 2007, 2015). Nonetheless, I greatly appreciate this opportunity to respond to the substantive claims in Morgan’s critique.

Morgan also criticizes the University of Pennsylvania for writing a press release based on the article and for distributing the article to the press in advance. The distribution process for PNAS articles is controlled entirely by PNAS. Morgan is apparently unaware that it is standard PNAS procedure to embargo publications until a specific date, and to release them to the press in advance via EurekAlert!. Thus journalists have access before even the author or the author’s university is allowed to have a copy of the final publication. This practice has nothing to do with one’s university.

Morgan further criticizes PNAS for its journal’s title, arguing that it is a “journal with a title that implies that its contents are first presented in front of a body of the country’s leading scientists.” I am not sure why he would have this impression, but unlike Morgan’s article, which appeared and was publicized via Twitter, PNAS papers do at least go through a review process before they are publicly released. Morgan spends three pages of his manuscript criticizing press coverage of the article, which to me seemed an extremely unusual focus for an academic critique. While I sympathize with the lack of control academics have over media coverage, this is hardly new. And I question whether it is preferable that policymakers and the general public have no access or exposure to academic findings. The press performs this important service.

My remaining comments, especially those pertaining to Morgan’s reclassification of my indicators, are included in the attached document.
