Does “status threat” explain the 2016 presidential vote?

May 14, 2018
(This article was originally published at Statistical Modeling, Causal Inference, and Social Science, and syndicated at StatsBlogs.)

Steve Morgan writes:

The April 2018 article of Diana Mutz, Status Threat, Not Economic Hardship, Explains the 2016 Presidential Vote, was published in the Proceedings of the National Academy of Sciences and contradicts prior sociological research on the 2016 election. Mutz’s article received widespread media coverage because of the strength of its primary conclusion, declaimed in its title. The current article is a critical reanalysis of the models offered by Mutz, using the data files released along with her article.

The title of Morgan’s article is, “Fake News: Status Threat Does Not Explain the 2016 Presidential Vote.”

What happened? According to Morgan:

Material interests and status threat are deeply entangled in her cross-sectional data and, together, do not enable a definitive analysis of their relative importance. . . . Her panel-data model of votes, which she represents as a fixed-effect logit model, is, in fact, a generic pooled logit model.

And, the punch line:

In contrast, the sociological literature has offered more careful interpretations, and as such provides a more credible interpretation of the 2016 election.

Mutz, like me, is a political scientist. Morgan is a sociologist.

So, what we have here is:
1. A technical statistical dispute
2. . . . about the 2016 election
3. . . . in PNAS.

Lots to talk about. I’ll discuss each in turn.

But, before going on, let me just point out one thing I really like about this discussion, which is that, for once, it doesn’t involve scientific misconduct, and it doesn’t involve anyone digging in and not admitting ambiguity. Mutz made a strong claim in a published paper, and Morgan’s disputing it on technical grounds. That’s how it should go. I’m interested to see how Mutz replies. She may well argue that Morgan is correct, that the data are more ambiguous than implied in her article, but that she favors her particular explanation on other grounds. Or she might perform further analyses that strengthen her original claim. In any case, I hope she can share her (anonymized) raw data and analyses.

1. The technical statistical dispute

In his article, Morgan writes:

A first wave of sociological research on the 2016 presidential election has now been published, and a prominent theme of this research is the appeal of Trump’s campaign to white, working-class voters. Analyses of Obama-to-Trump voters, along with the spatial distribution of votes cast, are both consistent with the claim that white, working-class voters represented the crucial block of supporters who delivered the electoral college victory to Trump . . . The overall conclusion of this first wave of sociological research would appear to be that we have much more work to do in order to understand why so many white voters supported Trump. And, although we may never be able to definitively decompose the sources of their support, three primary motives deserve further scrutiny: material interests, white nativism, and the appeal of the Trump persona. At the same time, it remains to be determined how much of the animus toward his competitor – Hillary Clinton – was crucial to his success . . .

From a statistical perspective, Morgan is saying that there’s collinearity among three dimensions of variation among white voters: (a) lack of education, (b) nativism, and (c) opposition to Clinton. He and Mutz both focus on (a) and (b), so I will too. The quick summary is that Mutz is saying it’s (b), not (a), while Morgan is saying that we can’t disentangle (a) from (b).

Mutz’s argument derives from an analysis of a survey where people were interviewed during the 2012 and 2016 campaigns. She writes:

It could be either, but this study shows that the degree to which you have been personally affected had absolutely no change between 2012 and 2016. It’s a very small percentage of people who feel they have been personally affected negatively. It’s not that people aren’t being hurt, but it wasn’t those people who were drawn to support Trump. When you look at trade attitudes, they aren’t what you’d expect: It’s not whether they were in an industry where you were likely to be helped or hurt by trade. It’s also driven by racial attitudes and nationalistic attitudes—to what extent do you want to be an isolationist country? Trade is not an economic issue in terms of how the public thinks about it.

Morgan analyzes three of Mutz’s claims. Here’s Morgan:

Question 1: Did voters change their positions on trade and immigration between 2012 and 2016, and were they informed enough to recognize that Trump’s positions were much different than Romney’s, in comparison to Clinton’s and Obama’s?

Answer: Voters did change their positions on trade and immigration, but only by a small amount. They were also informed enough to recognize that the positions of Trump and Clinton were very different from each other on these issues, and also in comparison to the positions of Romney and Obama four years prior.

On this question, Mutz’s results are well supported under reanalysis, and they are a unique and valuable addition to the literature . . .

Question 2: Can the relative appeal of Trump to white voters with lower levels of education be attributed to status threat rather than their material interests?

Answer: No. . . .

Question 3: Do repeated measures of voters’ attitudes and policy priorities, collected in October of 2012 and 2016, demonstrate that status threat is a sufficiently complete explanation of Trump’s 2016 victory?

Answer: No. . . .

The key points of dispute, clearly, are questions 2 and 3.

Question 2 is all about the analysis of Mutz’s cross-sectional dataset, a pre-election poll from 2016. Mutz presents an analysis predicting Trump support (measured by relative feeling thermometer responses or vote intention), given various sets of predictors:
A: education (an indicator for “not having a college degree”)
B: five demographic items and party identification
C: four economic items (three of which are about the respondent’s personal economic situation and one of which is an attitude question on taxes and spending)
D: eight items on status threat (four of which represent personal beliefs and four of which are issue attitudes).

For each of the two outcomes (feeling thermometer and vote intention), Mutz regresses on A+B, then on A+B+C, then on A+B+C+D. What she finds is that the predictive power of the education indicator is high when regressing on A+B, remains high when regressing on A+B+C, but drops nearly to zero when regressing on A+B+C+D. She concludes that the predictive power of education is not explained away by economics but is largely explained away by status threat. Hence the conclusion that it is status threat, not economics, that explains the Trump shift among low-education whites.
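To make this concrete, here is a minimal sketch of the nested-regression logic on simulated data. Everything in it is my own invention for illustration; the variable names, effect sizes, and data-generating process are not Mutz’s. The point is just the mechanism: when the block added last is correlated with education, its inclusion can drive the education coefficient toward zero.

```python
# Hedged sketch: simulated data, invented variable names (not Mutz's).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5000
no_college = rng.binomial(1, 0.6, n)             # A: education indicator
pid = rng.normal(size=n)                         # B: stand-in for party ID / demographics
econ = rng.normal(size=n)                        # C: economic item, unrelated to education here
status = 0.8 * no_college + rng.normal(size=n)   # D: status threat, correlated with education
logit_p = -0.5 + 0.8 * pid + 0.3 * econ + 1.0 * status  # vote driven by D, not A directly
trump = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))
df = pd.DataFrame(dict(trump=trump, no_college=no_college,
                       pid=pid, econ=econ, status=status))

for rhs in ["no_college + pid",                   # A + B
            "no_college + pid + econ",            # A + B + C
            "no_college + pid + econ + status"]:  # A + B + C + D
    fit = smf.logit("trump ~ " + rhs, data=df).fit(disp=False)
    print(f"{rhs:38s} b(no_college) = {fit.params['no_college']:+.3f}")
```

The education coefficient is sizable in the first two fits and collapses in the third, which is the pattern Mutz reports. The catch, as Morgan argues below, is that this pattern by itself does not tell you which correlated block of predictors deserves the credit.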

Morgan challenges Mutz’s analysis in four ways.

First, he would prefer not to include party identification as a predictor in the regression because he considers it an intermediate outcome: some voters will react to Clinton and Trump and change their affiliation. My response: sure, this can happen, but my impression from looking at such data for many years is that party ID is pretty stable, even during an election campaign, and it predicts a lot. So I’m OK with Mutz’s decision to treat it as a sort of demographic variable. Morgan does his analyses both ways, with and without party ID, so I’ll just look at his results that include this variable.

Second, Morgan would prefer to restrict the analysis to white respondents rather than including ethnicity as a regression predictor. I agree with him on this one. There were important shifts in the nonwhite vote, but much of that, I assume, is due to Barack Obama not being on the ticket, and that is not what’s being discussed here. So, for this discussion, I think the analyses should be restricted to whites.

Morgan’s third point has to do with Mutz’s characterization of the regression predictors. He relabels Mutz’s economic indicators as “material interests.” That’s no big deal, given that in her paper Mutz labeled them as measures of “being left behind with respect to personal financial wellbeing,” which sounds a lot like “material interests.” But Morgan also labels several of Mutz’s “status threat” variables as pertaining to “material interests” and “foreign policy.” When he runs the series of regressions, Morgan finds that the material-interest and foreign-policy variables explain away almost all the predictive power of education (that is, the coefficient of “no college education” goes to nearly zero after controlling for these other predictors), and that the remaining status-threat variables don’t reduce the education coefficient any further. Thus, repeating Mutz’s analysis but including the predictors in a different order leads to a completely different conclusion. I find Morgan’s batching to be as convincing as Mutz’s, and considering that Morgan is making the weaker claim (all he’s aiming to show is that the data are ambiguous on the material-interests-versus-status-threat question), I think he’s done enough to make that case. A toy simulation of this order dependence appears just below.

Morgan’s fourth argument is that, in any case, this sort of analysis, looking at how a regression coefficient changes as you throw in more predictors, can’t tell us much about causality without a bunch of strong assumptions that have not been justified here. I agree: I always find this sort of analysis hard to interpret. In this case, Morgan doesn’t really have to push on this fourth point, because his reanalysis suggests problems with Mutz’s regressions on their own terms.
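Here is that toy simulation; it is my own construction, not Morgan’s code, and it assumes that “material interests” and “status threat” are noisy measures of a single latent factor that actually drives the vote. Under that assumption, whichever block enters the regression first absorbs the education coefficient, and the other block looks redundant:

```python
# Hedged sketch: simulated data; "material" and "status" are invented proxies.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 5000
no_college = rng.binomial(1, 0.6, n)
latent = 0.8 * no_college + rng.normal(size=n)   # what actually drives the vote
material = latent + 0.3 * rng.normal(size=n)     # noisy "material interests" proxy
status = latent + 0.3 * rng.normal(size=n)       # noisy "status threat" proxy
p = 1 / (1 + np.exp(-(-0.5 + 1.0 * latent)))
df = pd.DataFrame(dict(trump=rng.binomial(1, p), no_college=no_college,
                       material=material, status=status))

for first, second in (("material", "status"), ("status", "material")):
    print(f"ordering: {first} before {second}")
    rhs = "no_college"
    for block in (first, second):
        rhs += " + " + block
        b = smf.logit("trump ~ " + rhs, data=df).fit(disp=False).params["no_college"]
        print(f"  after adding {block:8s}: b(no_college) = {b:+.3f}")
```

In both orderings the education coefficient collapses once the first correlated block is added, so the answer to “which block explains away education?” is set by the analyst’s ordering, not by the data.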

Question 3 is the panel-data analysis, about which Mutz writes:

Because the goal is understanding what changed from 2012 to 2016 to facilitate greater support for Trump in 2016 than Mitt Romney in 2012, I estimate the effects of time-varying independent variables to determine whether changes in the independent variables produce changes in candidate choice without needing to fully specify a model including all possible influences on candidate preference. Significant coefficients thus represent evidence that change in an independent variable corresponds to change in the dependent variable at the individual level.

Morgan doesn’t buy this claim, and neither do I. At least, not in general.
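To spell out the distinction Morgan is drawing (quoted near the top): a pooled logit stacks the 2012 and 2016 observations and estimates one regression across people, while a fixed-effects logit gives each respondent their own intercept, so the coefficients are identified only from within-person change. Here is a minimal simulated contrast between the two fits; it is a sketch of the general distinction, not a reconstruction of either author’s model:

```python
# Hedged sketch: simulated two-wave panel; numbers are invented for illustration.
import numpy as np
from statsmodels.discrete.discrete_model import Logit
from statsmodels.discrete.conditional_models import ConditionalLogit

rng = np.random.default_rng(2)
n = 2000                                   # respondents, each observed in two waves
alpha = rng.normal(scale=1.5, size=n)      # stable person-level intercepts
x = rng.normal(size=(n, 2))                # a time-varying attitude, 2012 and 2016
eta = alpha[:, None] + 1.0 * x             # true within-person slope is 1.0
y = rng.binomial(1, 1 / (1 + np.exp(-eta)))

y_long, x_long = y.ravel(), x.ravel()
groups = np.repeat(np.arange(n), 2)        # person IDs linking the two waves

pooled = Logit(y_long, np.column_stack([np.ones(2 * n), x_long])).fit(disp=False)
fe = ConditionalLogit(y_long, x_long[:, None], groups=groups).fit(disp=False)
print("pooled logit slope:        ", round(float(pooled.params[1]), 2))  # attenuated
print("fixed-effects logit slope: ", round(float(fe.params[0]), 2))      # near 1.0
```

The conditional (fixed-effects) fit uses only respondents whose outcome changed between waves; that restriction is exactly what licenses a within-person interpretation, and it is what a generic pooled logit does not do.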

Also this:

To examine whether heightened issue salience accounts for changing preferences, I include in the models measures of respondents’ pre-Trump opinions on these measures interacted with a dichotomous wave variable. These independent variable by wave interactions should be significant to the extent that the salience of these issue opinions was increased by the 2016 campaign, so that they weighed more heavily in individual vote choice in 2016 than in 2012. For example, if those who shifted toward Trump in 2016 were people who already opposed trade in 2012 and Trump simply exploited those preexisting views for electoral advantage, this would be confirmed by a significant interaction between SDO and wave.

I dunno. This to me puts a big burden on the regression model you’re fitting. I think I’d rather see these sorts of comparisons made directly than fit a big regression model and start hunting for statistically significant coefficients.
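For concreteness, here is the shape of the salience-interaction specification as I read the passage above, on simulated data; the variable names are mine, and Mutz’s actual model has many more terms. The pre-Trump opinion is held fixed at its 2012 value and interacted with a wave dummy, so the interaction coefficient picks up any change in how heavily that opinion weighs on the vote:

```python
# Hedged sketch: simulated data; "opinion_2012" and "vote" are invented names.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 3000
opinion_2012 = rng.normal(size=n)            # opinion measured pre-Trump, held fixed
frames = []
for wave, weight in ((0, 0.3), (1, 0.9)):    # simulate the opinion weighing more in 2016
    p = 1 / (1 + np.exp(-weight * opinion_2012))
    frames.append(pd.DataFrame(dict(vote=rng.binomial(1, p),
                                    opinion_2012=opinion_2012, wave=wave)))
df_long = pd.concat(frames, ignore_index=True)

fit = smf.logit("vote ~ opinion_2012 * wave", data=df_long).fit(disp=False)
print(fit.params[["opinion_2012", "opinion_2012:wave"]])  # interaction ~ 0.6 here
```

The specification itself is coherent; my worry is with reading the significance or nonsignificance of such coefficients as a verdict on the underlying story.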

And this:

To the extent that changes in issue salience are responsible for changing presidential preferences, the interaction coefficients should be significant; to the extent that changing public opinions and/or changing candidate positions also account for changing presidential vote preferences, the coefficients corresponding to these independent variables will be significant.

Again, I’m skeptical. The z-score, or p-value, or statistical significance of a coefficient is a noisy random variable. It’s just wrong to equate statistical significance with reality. It’s wrong in that the posited story can be wrong and you can still get statistical significance (through a combination of model misspecification and noise), and in that the posited story can be right and you can still not get statistical significance (again, model misspecification and noise). I know this is how lots of people do social science, but it’s not how statistics works.
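A quick simulation of that last point, with numbers chosen by me purely for illustration: a fixed, real effect studied at roughly 50% power comes out “statistically significant” in about half of replications, so the significant-or-not verdict is close to a coin flip even when the posited story is exactly right.

```python
# Hedged sketch: the true slope is the same in every replication, yet the
# z-score (and hence "significance") varies a lot from sample to sample.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
zs = []
for _ in range(2000):
    x = rng.normal(size=100)
    y = 0.2 * x + rng.normal(size=100)       # same true slope every time
    fit = sm.OLS(y, sm.add_constant(x)).fit()
    zs.append(fit.tvalues[1])
zs = np.array(zs)
print(f"mean z = {zs.mean():.2f}, sd = {zs.std():.2f}, "
      f"|z| > 1.96 in {(np.abs(zs) > 1.96).mean():.0%} of replications")
```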

That said, this does not mean that Mutz is wrong in her conclusions. Although it is incorrect to equate statistical significance with a causal explanation, it’s also the case that Mutz is an expert in political psychology, and I have every reason to believe that her actual data analysis is reasonable. I have not tried to follow what she did in detail. I read her article and Morgan’s critique, and what I got out of this is that there are lots of ways to use regression to analyze opinion change in this panel survey, and different analyses lead to different conclusions. This again seems to support Morgan’s claim that we can’t, at this time, separate these different stories of the white voters who switched from Obama to Trump. Mutz’s story could be correct, but it’s not so clearly implied by the data.

2. The 2016 election

Mutz writes that “change in financial wellbeing had little impact on candidate preference. Instead, changing preferences were related to changes in the party’s positions on issues related to American global dominance and the rise of a majority–minority America,” but I find Morgan’s alternative analyses convincing, so for now I’d go with Morgan’s statement that available data don’t allow us to separate these different stories.

My own article (with Julia Azari) on the 2016 election is here, but we focus much more on geography than demography, so our paper isn’t particularly relevant to the discussion here.

All I’ll say on the substance here is that different voters have different motivations, and individual voters can have multiple motivations for their vote choice. As we all know, most voters went with their party ID in 2016, as in the past; hence, in each election, it makes sense to ask what motivated the voters who went for the other party. Given the multiple reasons to vote in either direction (or, for that matter, to abstain from voting), I think we have to go beyond looking for single explanations for the behavior of millions of voters.

3. PNAS

Morgan wrote, “Mutz’s article received widespread media coverage because of the strength of its primary conclusion.” I don’t think this was the whole story. I think the key reason Mutz’s article received widespread media coverage is that it appeared in PNAS. And here’s the paradox: journalists seem to consider PNAS a top scientific outlet, better than, say, the American Political Science Review or the American Journal of Sociology, even though, at least when it comes to the social sciences, I think it’s edited a lot more capriciously than those subject-matter journals. (Recall air rage, himmicanes, and many others.) It’s not that everything published in PNAS is wrong, far from it! But I wish the news media would pay a bit less attention to PNAS press releases and a bit more to papers in social science journals.

One problem I have with PNAS, beyond all issues of which papers get accepted and publicized, is what might be called PNAS style. Actually, I don’t think it’s necessarily the style of the physical and biological science papers in PNAS; maybe it’s just the social science. Anyway, the style is to make big claims. Even if, as an author, you don’t want to hype your claims, you pretty much have no choice if you want your social science paper to appear in Science, Nature, or PNAS. And, given that publication in these “tabloids” is likely to give your work a lot more influence, it makes sense to go for it. The institutional incentives favor exaggeration.

I hope that Morgan’s article, along with discussions by Mutz and others, is published in the APSR and gets as much media attention as the PNAS paper that motivated it, or more.
