13 Reasons not to trust that claim that 13 Reasons Why increased youth suicide rates

A journalist writes:

My eye was caught by this very popular story that broke yesterday — about a study that purported to find a 30 percent (!) increase in suicides, in kids 10-17, in the MONTH after a controversial show about suicide aired. And that increase apparently persisted for the rest of the year. It’s an observational study, but the hypothesis is that this show caused the bump.

This seems manifestly implausible to me (although huge, if true).

I wondered if you thought there might be something short to be written, quickly, about this study? Should we believe it? Are there obvious flaws, etc.?

I took a look and here’s my reply:

The AP article you cite has some problems in that it jumps around from numbers to rates, with the time scale shifting from one month to several months to five years. All the numbers seem like a moving target; it’s hard for me to track exactly what’s going on.

Looking at the paper, I see this one graph:

Just as a minor thing, it seems to me to be a poor choice to report suicides as 0.4 per 100,000 people. I think it would be easier to interpret as 4 per million people, for two reasons: (a) 4 is easier to understand than 0.4, and (b) million is easier for me to interpret than 100,000. For example, NYC has 8 million people so we’d expect 32 suicides in a month?
Actually I think it would be even better for them to multiply by 12 and report annualized rates, so that 4 per million per month would become 48 suicides per million people per year.
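To make the unit conversion concrete, here is the back-of-the-envelope arithmetic in a few lines of Python. The 8 million is just NYC’s total population used as a round number; the study’s rate is for 10- to 17-year-olds, so the last step is only a rough illustration, not a count for that age group:

```python
# Convert a monthly rate of 0.4 suicides per 100,000 into friendlier units.
monthly_rate_per_100k = 0.4

monthly_rate_per_million = monthly_rate_per_100k * 10    # 4 per million per month
annual_rate_per_million = monthly_rate_per_million * 12  # 48 per million per year

# Rough illustration: expected monthly count if 8 million people were at this rate.
population = 8_000_000
expected_per_month = monthly_rate_per_million * population / 1_000_000  # 32
print(monthly_rate_per_million, annual_rate_per_million, expected_per_month)
```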

The statistical analysis is fine for what it is. The quasi-Poisson model is fine but it doesn’t really matter, either. The time series filtering model is probably fine; I guess I’d also like to see something simpler that estimates an offset for each of the 12 months of the year. But, again, I doubt it will make much of a difference in the results. I find the orange lines on the graph to be more distracting than helpful, but I can just ignore them and look at the blue dots. Instead of the jagged orange lines, they should plot the fitted seasonal + trend in orange, as that would give us a better sense of the comparison model with no jump.
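For what it’s worth, here is a minimal sketch of that simpler comparison model: a Poisson regression with an indicator for each calendar month plus a linear trend, whose fitted values give the smooth seasonal-plus-trend baseline I’d rather see plotted in orange. The data frame and its columns (deaths, population, month, time) are hypothetical, and a quasi-Poisson fit would give the same point estimates with rescaled standard errors:

```python
# Sketch of a "12 monthly offsets + linear trend" baseline model.
# Assumes a pandas DataFrame `df` with hypothetical columns:
#   deaths     monthly suicide count
#   population population at risk that month
#   month      calendar month, 1-12
#   time       month index, 0, 1, 2, ...
import numpy as np
import statsmodels.api as sm
import statsmodels.formula.api as smf

fit = smf.glm(
    "deaths ~ C(month) + time",       # one offset per calendar month, plus trend
    data=df,
    family=sm.families.Poisson(),
    offset=np.log(df["population"]),  # model rates rather than raw counts
).fit()

# The fitted values trace out the seasonal + trend baseline; plotting this curve
# against the observed blue dots shows the comparison model with no jump.
baseline_rate = fit.fittedvalues / df["population"]
```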

Looking at the above graph, the key result is a jump from Feb-Mar-Apr 2017. Comparable sized jumps appear elsewhere in the dataset, for example Jul-Aug-Sep 2014 or Nov-Dec 2015, but Feb-Mar-Apr 2017 is more striking because the jump happens during the spring, when we’d expect suicide rates to be dropping. On the other hand, Feb-Mar-Apr 2013 shows a steady increase: not quite as high as Feb-Mar-Apr 2017, but a pretty big jump and in the same direction. Of course the authors could argue at this point that something may have happened in March 2013 to influence suicide rates, but that’s kinda the point: every month of every year, there’s _something_ going on.
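One way to check the “comparable jumps appear elsewhere” impression is to look at the whole distribution of Feb-to-Apr-sized changes. A rough sketch, assuming the monthly rates have been put into a pandas Series `rates` indexed by month-start dates (the series name and indexing are my assumptions, not anything supplied by the paper):

```python
# Rough check: how unusual is the Feb-to-Apr 2017 rise, compared with all
# other two-month changes in the series?
# Assumes `rates` is a pandas Series of monthly suicide rates indexed by
# month-start dates covering the study period.
import pandas as pd

change = rates.diff(2)                 # value minus the value two months earlier
jump_2017 = change.loc["2017-04-01"]   # the Feb-Mar-Apr 2017 rise

share_smaller = (change.dropna() < jump_2017).mean()
print(f"Feb-Apr 2017 change: {jump_2017:.2f}; "
      f"larger than {share_smaller:.0%} of all two-month changes in the series")
```

This doesn’t account for seasonality, which is the reason the spring timing is notable in the first place, but it gives a sense of how much the series bounces around on its own.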

I’m not quite sure what their conceptual model is. If the TV show causes suicides, does it cause people to commit suicide earlier than they otherwise would have, or are they assuming it induces new suicides that otherwise never would’ve happened? I assume it would be a bit of both, but I don’t see this point discussed anywhere in the paper. You can see a big drop from Apr to Jul, but that happens in other years too.

Also this: “When the observed and forecasted rates of youth suicide were graphed based on the Holt-Winters analysis, there was a visible and statistically significant effect of the release of 13 Reasons Why on subsequent suicide (Figure 1), with observed rates in April, June, and December being significantly higher than corresponding rates forecasted using Holt-Winters modeling. Interestingly, the observed rate in the month of March (promotional period) is also statistically significantly higher than the model forecast.” They have a story for March and April. But June and December? No story for that; indeed, the June and December results make me think that their story “proves too much,” as the saying goes. If 4 of the months of 2017 have elevated rates, this suggests simply that suicide rates went up in 2017. Again, at the end of the paper, they write, “Suicide rates in two subsequent months remained elevated over forecasted rates, resulting in 195 additional deaths.” That’s just weird, to count June and December just because they’re higher than the model’s forecasts. Maybe it’s the model that has the problem, huh?
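The Holt-Winters exercise they describe is easy enough to sketch in outline, which also makes clear how many modeling choices (training window, additive vs. multiplicative seasonality, and so on) sit underneath the list of “statistically significant” months. A rough sketch using statsmodels, again assuming a hypothetical monthly rate series `rates`; the settings here are my guesses, not necessarily what the authors used:

```python
# Outline of the observed-vs-forecast comparison: fit Holt-Winters to the
# pre-release months, forecast the rest of 2017, and see which months exceed
# the forecast. `rates` is an assumed monthly pandas Series through Dec 2017.
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

pre = rates.loc[:"2017-02"]    # training period, before the March promotion
post = rates.loc["2017-03":]   # promotion and post-release months

hw = ExponentialSmoothing(pre, trend="add", seasonal="add",
                          seasonal_periods=12).fit()
forecast = hw.forecast(len(post))

comparison = pd.DataFrame({"observed": post.values,
                           "forecast": forecast.values}, index=post.index)
comparison["excess"] = comparison["observed"] - comparison["forecast"]
print(comparison.round(2))   # April, June, December high; but also look at May, July, ...
```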

They also made a statistical error. In the abstract, they write: “these associations were restricted to boys.” But in the text, they say: “When analyses were stratified by sex, a statistically significant increase in suicide rates was observed for boys in keeping with overall results in the 10- to 17-year-old age group (IRR=1.35, 95% CI=1.12-1.64; Table 1 and Figure S1, available online). Although the mean monthly count and rate of suicide for female children and adolescents increased after the series’ release, the difference was not statistically significant (IRR=1.15; 95% CI=0.89-1.50), with no change in post-release trends (IRR=0.97, 95% CI=0.93-1.01). Observed suicide rates for 10- to 17-year-old girls in June, 2017 were significantly greater than corresponding forecasted rates, but observed rates in September were significantly lower than expected rates (Figure S1, available online).”

There are a bunch of weird things about this summary. First, as the saying goes, the difference between “significant” and “not significant” is not itself statistically significant, so it is an error for them to say “these associations were restricted to boys.” It’s too bad that they waste a paragraph on pages 9-10 explaining this bit of random noise. Second, who cares about the observed rates in September? At this point it seems like they’re just fishing through the data. Where did September come from?
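To spell out the significant-vs.-not-significant point with the numbers quoted above: put the two IRRs on the log scale, back out approximate standard errors from the 95% intervals (assuming normality on the log scale), and compare the boys’ and girls’ estimates directly. The z-statistic for the difference comes out around 1, nowhere near conventional significance:

```python
# "The difference between 'significant' and 'not significant' is not itself
# statistically significant": compare the boys' and girls' IRRs directly.
import numpy as np

def log_irr_and_se(irr, lo, hi):
    """Log point estimate and approximate SE recovered from a 95% CI."""
    return np.log(irr), (np.log(hi) - np.log(lo)) / (2 * 1.96)

b, se_b = log_irr_and_se(1.35, 1.12, 1.64)   # boys
g, se_g = log_irr_and_se(1.15, 0.89, 1.50)   # girls

diff = b - g
se_diff = (se_b**2 + se_g**2) ** 0.5
print(f"difference in log IRR: {diff:.2f}, SE: {se_diff:.2f}, z: {diff / se_diff:.1f}")
# The boys-vs-girls contrast is well within what you'd expect from noise alone.
```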

Finally, this sentence in the last paragraph seems a bit over the top: “There is no discernible public health benefit associated with viewing the series.” I just had fried chicken for lunch. There is no discernible public health benefit of that either, my dude.

Look, I understand that suicide is a problem, it’s important for public health researchers to study it, and I don’t know enough about the topic to have a sense of whether the claim is plausible, that the airing of a TV show could cause a 29% increase in suicide rates. It seems like a big number to me, but I don’t really know. If it really caused 195 kids to die, that’s really sad. Their statistical analysis seems over-certain, though. First are the identification issues, which are mentioned in the research article: exposure to this show did not happen in isolation. Second is the data analysis: again, given that they found effects in June and December as well, it seems that their method is pretty sensitive to noise.

I can see the logic of the argument that a dramatization of suicide can encourage copycats; more generally, TV and movies have lots of dramatizations of bad behavior, including horrible crimes. I just don’t think there are any easy ways of cleanly estimating their effects with these sorts of aggregate statistical analyses, and this particular paper has some issues.

Reporters are wising up

I searched on the web and found a bunch of news articles. Most of these reports were uncritical, just reporting the mild caveats given by the study’s authors, but some were more searching, and I’d like to give them a shout-out:

Beth Mole in Ars Technica:

A study out this week suggests that the release of the first season of Netflix’s 13 Reasons Why series in 2017 led to a small but notable uptick in teen suicides. The finding seems to confirm widespread apprehensions among mental health experts and advocates that a suicide “contagion” could spread from the teen drama, which centers around a 17-year-old girl’s suicide and includes graphic details. But the study contains significant caveats, and the findings should be interpreted cautiously. . . .

In a press statement, co-author Lisa Horowitz, a clinical scientist at the National Institute of Mental Health, said that the finding “should raise awareness that young people are particularly vulnerable to the media. All disciplines, including the media, need to take good care to be constructive and thoughtful about topics that intersect with public health crises.”

While the point that care should be taken with regard to suicide should be duly noted, it’s still unclear just how vulnerable young people are to the show’s content. The study has significant caveats and limitations. And the overall field of research into the epidemiology of suicide is a bit murky. . . .

As Harvard psychologist Matthew K. Nock noted in an interview with The New York Times, “Suicide rates bounce around a lot more when the cell sizes are low, as they are with kids aged 10 to 17 years. So, this new paper suggests there may be an association between 13 Reasons Why and the suicide rate. However, we must always be cautious when trying to draw causal conclusions from correlational data.” . . . the authors reported finding a significant uptick in suicides in April—the month after the show’s release—but they also found them in June and December. It’s unclear how the show is linked to changes in those specific months. Moreover, the authors found a statistically significant increase in suicides in a fourth month—the month of March, which would be prior to the show’s release on March 31. The authors say this finding “raises questions about effects of pre-release media promotion of the series premiere.” However, it also raises questions about whether factors or events unrelated to the show may explain or contribute to the reported increase in suicide rates. . . .

Another odd wrinkle emerged from the data when the authors looked at the sex breakdown of those deaths. The statistically significant increase in suicides was entirely due to suicides in boys, not girls, as the researchers had hypothesized. . . . The sex finding flies in the face of some ideas of a “suicide contagion,” a term used by the authors of the new study and used generally by researchers to discuss the hypothetical contagiousness of suicide from events or media. . . . Overall, the research into 13 Reasons Why serves to highlight the complexity of suicide and suicide prevention—and also the murkiness of the research field that surrounds it.

Well put.

And Christie D’Zurilla did a good job in the LA Times, with a story entitled, “‘13 Reasons Why’ influenced the suicide rate? It’s not that simple.”

Also Chelsea Whyte’s appropriately skeptical take in New Scientist, “Did Netflix’s 13 Reasons Why really increase suicide rates?”, which presents an argument that this sort of TV show could actually reduce suicide rates.

It’s good to see that lots of reporters are getting the point, that statistical significance + identification strategy + published in a respected journal does not necessarily mean we have to believe it. Credulous journalists have been burned too many times, with studies of beauty and sex ratio, ESP, embodied cognition, ovulation and voting, himmicanes, air rage, etc.: now they’re starting to get the picture that you can’t always take these claims at face value. More recently there was the claim of elevated traffic accidents on 4/20 and lower homicide rates during the NRA convention—that last one was particularly ridiculous. The press is starting to wise up and no longer believe that “Disbelief is not an option . . . You have no choice but to accept that the major conclusions of these studies are true.”

It can be hard to talk about these things—suicide is an important topic, and who are we to question people who are trying to fight it?—but, as the above-linked news articles discuss, suicide is also complicated, and it’s not clear that we’re doing potential victims any favors by pushing simple stories.