“Did Jon Stewart elect Donald Trump?”

I wrote this post a couple of weeks ago and scheduled it for October, but then I learned from a reporter that the research article under discussion had been retracted, so it seemed to make sense to post this right away while it was still newsworthy.

My original post is below, followed by a postscript regarding the retraction.

Matthew Heston writes:

First time, long time. I don’t know if anyone has sent over this recent paper [“Did Jon Stewart elect Donald Trump? Evidence from television ratings data,” by Ethan Porter and Thomas Wood] which claims that Jon Stewart leaving The Daily Show “spurred a 1.1% increase in Trump’s county-level vote share.”

I’m not a political scientist, and not well versed in the methods they say they’re using, but I’m skeptical of this claim. One line that stood out to me was: “To put the effect size in context, consider the results from the demographic controls. Unsurprisingly, several had significant results on voting. Yet the effects of The Daily Show’s ratings decline loomed larger than several controls, such as those related to education and ethnicity, that have been more commonly discussed in analyses of the 2016 election.” This seems odd to me, as I wouldn’t expect a TV show host change to have a larger effect than these other variables.

They also mention that they’re using “a standard difference-in-difference approach.” As I mentioned, I’m not too familiar with this approach. But my understanding is that they would be comparing pre- and post-treatment differences in a control group and a treatment group. Since the treatment in this case is a change in The Daily Show host, I’m unsure what the control group would be. But maybe I’m missing something here.
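For readers who, like Heston, haven’t worked with this design: in the usual two-period setup, the “control group” is the set of units largely unexposed to the change, which here would presumably be counties with little Daily Show viewership. Below is a minimal sketch of what such a county-level regression could look like. The column names, the input file, and the high/low viewership split are all hypothetical, and this is not the specification from the paper, just an illustration of the general approach.

```python
# Minimal two-period difference-in-differences sketch at the county level.
# All column names and the input file are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

# df has one row per county per election year (2012 and 2016), with:
#   rep_share   - Republican two-party vote share in the county
#   post        - 1 for 2016 (after the host change), 0 for 2012
#   high_tds    - 1 if the county had high Daily Show ratings pre-2015, else 0
#   county_fips - county identifier, used to cluster standard errors
df = pd.read_csv("county_panel.csv")  # hypothetical input file

# The coefficient on post:high_tds is the difference-in-differences estimate:
# how much more vote share changed in high-viewership counties than in
# low-viewership counties after the host change.
model = smf.ols("rep_share ~ post * high_tds", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["county_fips"]}
)
print(model.summary())
```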

Heston points to our earlier posts on the Fox News effect.

Anyway, what do I think of this new claim? The answer is that I don’t really know.

Let’s work through what we can.

In reporting any particular effect there’s some selection bias, so let’s start by assuming an Edlin factor of 1/2, which takes the estimated effect of Jon Stewart’s departure from 1.1% to 0.55% of Trump’s county-level vote share. Call it 0.6%. Vote share is approximately 50%, so a 0.6% change corresponds to roughly 0.3 percentage points of the vote. Would this have swung the election? I’m not sure; maybe not quite.
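Spelled out as arithmetic (the 50% baseline and the factor of 1/2 are rough approximations, not estimates from the paper):

```python
# Back-of-the-envelope version of the calculation above.
claimed_effect = 0.011    # paper's claim: 1.1% increase in Trump's county-level vote share
edlin_factor = 0.5        # discount for selection in which effects get reported
adjusted_effect = claimed_effect * edlin_factor   # ~0.55%; call it 0.6%

baseline_share = 0.50     # vote share is roughly 50%
# Treating the 0.6% as a proportional change in a ~50% vote share:
swing_in_points = 0.006 * baseline_share          # ~0.3 percentage points
print(f"adjusted effect: {adjusted_effect:.3%}, swing: {swing_in_points:.2%} of the vote")
```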

Let’s assume the effect is real. How to think about it? It’s one of many such effects, along with other media outlets, campaign tactics, news items, etc.

A few years ago, Noah Kaplan, David Park, and I wrote an article attempting to distinguish between what we called the random walk and mean-reversion models of campaigning. The random walk model posits that the voters are where they are, and campaigning (or events more generally) moves them around. In this model, campaign effects are additive: +0.3 here, -0.4 there, and so forth. In contrast, the mean-reversion model starts at the end, positing that the election outcome is largely determined by the fundamentals, with earlier fluctuations in opinion mostly being a matter of the voters coming to where they were going to be. After looking at what evidence we could find, we concluded that the mean-reversion model made more sense and was more consistent with the data. This is not to say that the Jon Stewart show would have no effect, just that it’s one of many interventions during the campaign, and I can’t picture each of them having an independent effect and these effects all adding up.
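To make the contrast concrete, here is a toy simulation of the two models. Every number in it (shock size, number of events, reversion rate) is made up for illustration and not fit to any data; the point is only that under mean reversion the early shocks mostly wash out by election day, while under the random walk they all add up.

```python
# Toy contrast between the random-walk and mean-reversion models of campaigning.
import numpy as np

rng = np.random.default_rng(0)
fundamentals = 0.52                 # vote share implied by the fundamentals (made up)
n_events = 40                       # campaign events: ads, news items, shows, ...
shocks = rng.normal(0, 0.003, n_events)   # each event nudges opinion a little

# Random-walk model: every shock persists, and effects add up.
random_walk = fundamentals + np.cumsum(shocks)

# Mean-reversion model: opinion is repeatedly pulled back toward the
# fundamentals, so early shocks mostly fade away by election day.
rho = 0.7
opinion = fundamentals
for s in shocks:
    opinion = fundamentals + rho * (opinion - fundamentals) + s

print(f"random walk on election day:    {random_walk[-1]:.3f}")
print(f"mean reversion on election day: {opinion:.3f}")
```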

P.S. After the retraction

The article discussed above was retracted because the analysis had a coding error.

What to say given this new information?

First, I guess Heston’s skepticism is validated. When you see a claim that seems too big to be true (as here or here), maybe it’s just mistaken in some way.

Second, I too have had to correct a paper whose empirical claims were invalidated by a coding error. It happens—and not just to Excel users!

Third, maybe the original reaction to that study was a bit too strong. See the above post: Even had the data shown what had originally been claimed, the effect they found was not as consequential as it might’ve seemed at first. Setting aside all questions of data errors and statistical errors, there’s a limit to what can be learned about a dynamic process—an election campaign—from an isolated study.

I am concerned that all our focus on causal identification, important as it is, can lead researchers, journalists, and members of the general public to overconfidence in theories drawn from isolated studies, without always recognizing that real life is more complicated. I had a similar feeling a few years ago regarding the publicity surrounding the college-football-and-voting study. The particular claims regarding football and voting have since been disputed, but even if you accept the original study as is, its implications aren’t as strong as had been claimed in the press. Whatever these causal effects are, they vary by person and scenario, and they’re not occurring in isolation.