Two reviews of Nate Silver’s new book, from Kaiser Fung and Cathy O’Neil

December 21, 2012

(This article was originally published at Statistical Modeling, Causal Inference, and Social Science, and syndicated at StatsBlogs.)

People keep asking me what I think of Nate’s book, and I keep replying that, as a blogger, I’m spoiled. I’m so used to getting books for free that I wouldn’t go out and buy a book just for the purpose of reviewing it. (That reminds me that I should post reviews of some of those books I’ve received in the mail over the past few months.)

I have, however, encountered a couple of reviews of The Signal and the Noise so I thought I’d pass them on to you. Both these reviews are by statisticians / data scientists who work here in NYC in the non-academic “real world” so in that sense they are perhaps better situated than me to review the book (also, they have not collaborated with Nate so they have no conflict of interest).

Kaiser Fung gives a positive review:

It is in the subtitle—“why so many predictions fail – but some don’t”—that one learns the core philosophy of Silver: he is most concerned with the honest evaluation of the performance of predictive models. The failure to look into one’s mirror is what I [Kaiser] often describe as the elephant in the data analyst’s room. Science reporters and authors keep bombarding us with stories of success in data mining, when in fact most statistical models in the social sciences have high rates of error. . . .

In 450 briskly-moving pages, Silver takes readers through case studies on polling, baseball, the weather, earthquakes, GDP, pandemic flu, chess, poker, stock market, global warming, and terrorism. I appreciate the refreshing modesty in discussing the limitations of various successful prediction systems. For example, one of the subheads in the chapter about a baseball player performance forecasting system he developed prior to entering the world of political polls reads: “PECOTA Versus Scouts: Scouts Win.” Unlike many popular science authors, Silver does not portray his protagonists as uncomplicated heroes, he does not draw overly general conclusions, and he does not flip from one anecdote to another but instead provides details for readers to gain a fuller understanding of each case study. In other words, we can trust his conclusions, even if his book contains little Freakonomics-style counter-intuition.

If you are thinking the evaluation methods listed [in the book] seem numerous and arbitrary, you’d be right. After reading Silver’s book, you should be thinking critically about how predictions are evaluated (and in some cases, how they may be impossible to verify). The probabilistic forecasts that Silver advocates are even harder to validate. Silver tells it like it is: this is difficult but crucial work; and one must look out for forecasters who don’t report their errors, as well as those who hide their errors by using inappropriate measurement.

Throughout the book, Silver makes many practical recommendations that reveal his practitioner’s perspective on forecasting. As an applied statistician, I endorse without hesitation specific pieces of advice such as: use probability models, recognize that more data can make predictions worse, mix art and science, try hard to find the right data rather than just using whatever data is readily available, and avoid too much precision.

The only exaggeration in the book is his elevation of “Bayesian” statistics as the solution to predictive inaccuracy. What he packages as Bayesian has been part of statistical science since before the recent rise of modern Bayesian statistics. [This is a point that Larry Wasserman and I discussed recently --- AG]. . . .

In spite of the minor semantic issue, I am confident my readers will enjoy reading Silver’s book. It is one of the more balanced, practical books on statistical thinking on the market today by a prominent public advocate of the data-driven mindset.
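To make concrete Kaiser’s point about how hard it is to validate probabilistic forecasts, here is a minimal sketch (my own illustration, not something from the book) of one standard approach: score the stated probabilities with a Brier score and run a crude calibration check, i.e., ask whether events forecast at around 70% actually happened about 70% of the time. The forecast probabilities and outcomes below are made up for the example.

```python
# Minimal sketch of scoring probabilistic forecasts (illustration only;
# the numbers below are made up, not taken from Silver's book).
import numpy as np

# Stated probabilities of rain and what actually happened (1 = rain, 0 = dry)
forecast_probs = np.array([0.9, 0.7, 0.7, 0.3, 0.1, 0.8, 0.2, 0.6])
outcomes       = np.array([1,   1,   0,   0,   0,   1,   1,   0])

# Brier score: mean squared error of the probabilities.
# Lower is better; always saying 50% gives 0.25.
brier = np.mean((forecast_probs - outcomes) ** 2)
print(f"Brier score: {brier:.3f}")

# Crude calibration check: within each probability bin, compare the average
# stated probability to the observed frequency of the event.
bins = [(0.0, 1/3), (1/3, 2/3), (2/3, 1.0 + 1e-9)]
for lo, hi in bins:
    in_bin = (forecast_probs >= lo) & (forecast_probs < hi)
    if in_bin.any():
        print(f"forecasts in [{lo:.2f}, {hi:.2f}): "
              f"mean forecast {forecast_probs[in_bin].mean():.2f}, "
              f"observed frequency {outcomes[in_bin].mean():.2f}")
```

The harder problem, as Kaiser notes, is the forecaster who never reports such numbers at all, or who picks whatever measure flatters the forecast.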

Cathy O’Neil is more critical:

As a modeler myself, I am extremely concerned about how models affect the public, so the book’s success is wonderful news. The first step to get people to think critically about something is to get them to think about it at all. . . . Silver has a knack for explaining things in plain English. . . .

Having said all that, I [O'Neil] have major problems with this book and what it claims to explain. In fact, I’m angry.

It would be reasonable for Silver to tell us about his baseball models . . . [and] political polling . . . He also interviews a bunch of people who model in other fields, like meteorology and earthquake prediction, which is fine, albeit superficial.

What is not reasonable, however, is for Silver to claim to understand how the financial crisis was a result of a few inaccurate models, and how medical research need only switch from being frequentist to being Bayesian to become more accurate. . . .

The ratings agencies, which famously put AAA ratings on terrible loans, and spoke among themselves of being willing to rate things that were structured by cows, did not accidentally have bad underlying models. . . . Rather, the entire industry crucially depended on the false models. Indeed they changed the data to conform with the models . . .

Silver gives four examples of what he considers to be failed models at the end of his first chapter, all related to economics and finance. But each example is actually a success (for the insiders) if you look at a slightly larger picture and understand the incentives inside the system. . . .

We didn’t have a financial crisis because of a bad model or a few bad models. We had bad models because of a corrupt and criminally fraudulent financial system. That’s an important distinction, because we could fix a few bad models with a few good mathematicians, but we can’t fix the entire system so easily. There’s no math band-aid that will cure these boo-boos. . . .

Silver has an unswerving assumption, which he repeats several times, that the only goal of a modeler is to produce an accurate model. (Actually, he made an exception for stock analysts.) This assumption generally holds in his experience: poker, baseball, and polling are all arenas in which one’s incentive is to be as accurate as possible. But he falls prey to some of the very mistakes he warns about in his book, namely over-confidence and over-generalization. . . .

Silver refers, both in the Introduction and in Chapter 8, to John Ioannidis’s work, which reveals that most medical research is wrong. . . . [But] the flaws in these medical models will be hard to combat, because they advance the interests of the insiders: competition among academic researchers to publish and get tenure is fierce, and there are enormous financial incentives for pharmaceutical companies. Everyone in this system benefits from methods that allow one to claim statistically significant results, whether or not that’s valid science, and even though there are lives on the line. . . . there’s massive incentive to claim statistically significant findings, and not much push-back when that’s done erroneously, so the field never self-examines and improves its methodology. The bad models are a consequence of misaligned incentives. . . .

Silver chooses to focus on individuals working in a tight competition and their motives and individual biases, which he understands and explains well. For him, modeling is a man-versus-wild type of thing, working with your wits in a finite universe to win the chess game. He spends very little time on the question of how people act inside larger systems, where a given modeler might be more interested in keeping their job or getting a big bonus than in making their model as accurate as possible. In other words, Silver crafts an argument that ignores politics. . . . Nate Silver is a man who deeply believes in experts, even when the evidence is not good that they have aligned incentives with the public. . . .

My [O'Neil's] complaint about Silver is naivete, and to a lesser extent, authority-worship. I’m not criticizing Silver for not understanding the financial system. . . . But at the very least he should know that he is not an authority and should not act like one. . . . Silver is selling a story we all want to hear, and a story we all want to be true. Unfortunately for us and for the world, it’s not.

Putting the two reviews together

1. Nate’s a good writer, he gets right to the point and is willing to acknowledge his own uncertainty. Unlike Gladwell, Levitt, etc., he doesn’t present scientific inquiry in terms of heroes but rather captures the back-and-forth among data, models, and theories. This comports with my impression that Nate is a hard worker and an excellent analyst who can get right to the point of whatever he is studying. (And, by the way, in the world in which I live, “hard worker” is one of the best compliments out there; I’m not using letter-of-recommendation-talk in which “hard worker” is a euphemism for “weak student.” It is by working hard that we learn.)

2. Nate is excellent when writing about areas he’s worked on directly (baseball, poker, poll analysis), solid when acting as a reporter on non-politically-loaded topics (weather forecasting), and weak when delving into academic subjects (Kaiser, Cathy, and Larry all discuss where Nate oversells Bayesian inference, something that presumably wouldn’t have happened had he run a prepublication draft of his book by any of us for comments) and into the more technical areas of finance, where he doesn’t have a helpful expert to keep him from getting lost.

3. Kaiser’s review is positive because he’s treating The Signal and the Noise as a pop-statistics book along the lines of Freakonomics, and he (Kaiser) is happy to see Nate’s openness and questioning spirit, allied with solid practical recommendations. Cathy’s review is negative because she’s comparing the book to other treatments of technical failure in our society and she is worried that Nate is implicitly sending the message that the problems with the financial system arose from errors rather than corruption and that everything could be fixed with some Moneyball-type analysis.

4. Putting all this together, I think the two reviews given above are essentially in agreement. (Again, I say this without actually having seen the book itself; I respect both Kaiser and Cathy and it makes sense for me to try to synthesize their views.) Nate does a good job—perhaps the best popular writing ever on what it’s like to be a statistical analyst—but he doesn’t often leave the analyst’s box to look at the big picture. Nate considers non-statistical issues in some small cases (for example, when writing about the varying incentives of different groups of weather forecasters) but is, according to Cathy, too accepting of face-value motivations when discussing finance, medicine, and politics. But this should not, I think, diminish the value of Nate’s contributions in providing a perhaps uniquely readable description of statistical thinking in some real-world settings, even while we recognize that people often have strong motivations to make inaccurate predictions when money and reputations are on the line.



Please comment on the article here: Statistical Modeling, Causal Inference, and Social Science
