Gigerenzer: “The Bias Bias in Behavioral Economics,” including discussion of political implications

Gerd Gigerenzer writes:

Behavioral economics began with the intention of eliminating the psychological blind spot in rational choice theory and ended up portraying psychology as the study of irrationality. In its portrayal, people have systematic cognitive biases that are not only as persistent as visual illusions but also costly in real life—meaning that governmental paternalism is called upon to steer people with the help of “nudges.” These biases have since attained the status of truisms. In contrast, I show that such a view of human nature is tainted by a “bias bias,” the tendency to spot biases even when there are none. This may occur by failing to notice when small sample statistics differ from large sample statistics, mistaking people’s random error for systematic error, or confusing intelligent inferences with logical errors. Unknown to most economists, much of psychological research reveals a different portrayal, where people appear to have largely fine-tuned intuitions about chance, frequency, and framing. A systematic review of the literature shows little evidence that the alleged biases are potentially costly in terms of less health, wealth, or happiness. Getting rid of the bias bias is a precondition for psychology to play a positive role in economics.

Like others, Gigerenzer draws the connection to visual illusions, but with a twist:

By way of suggestion, articles and books introduce biases together with images of visual illusions, implying that biases (often called “cognitive illusions”) are equally stable and inevitable. If our cognitive system makes such big blunders like our visual system, what can you expect from everyday and business decisions? Yet this analogy is misleading, and in two respects.

First, visual illusions are not a sign of irrationality, but a byproduct of an intelligent brain that makes “unconscious inferences”—a term coined by Hermann von Helmholtz—from two-dimensional retinal images to a three-dimensional world. . . .

Second, the analogy with visual illusions suggests that people cannot learn, specifically that education in statistical reasoning is of little efficacy (Bond, 2009). This is incorrect . . .

It’s an interesting paper. Gigerenzer goes through a series of classic examples of cognitive errors, including the use of base rates in conditional probability, perceptions of patterns in short sequences, the hot hand, bias in estimates of risks, systematic errors in almanac questions, the Lake Wobegon effect, and framing effects.
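
One of these examples rewards a closer look, because it shows how easily small-sample statistics can masquerade as bias. For the hot hand, Miller and Sanjurjo showed that the standard way of tallying streaks in short sequences is itself biased: even for a fair coin, the average within-sequence proportion of heads following heads falls below one half, so researchers who took 0.5 as the benchmark “saw” a fallacy where there was none. Here’s a minimal simulation sketch of that effect; the code is my illustration, not taken from Gigerenzer’s paper:

```python
import random

def heads_after_heads(seq):
    """Within one sequence, the proportion of flips that immediately
    follow a head (1) and are themselves heads; None if no head occurs
    before the last flip."""
    followers = [seq[i + 1] for i in range(len(seq) - 1) if seq[i] == 1]
    if not followers:
        return None
    return sum(followers) / len(followers)

def average_streak_proportion(n_flips, n_trials=200_000, seed=1):
    """Average the within-sequence proportion over many simulated
    fair-coin sequences, the way a tally over many short records would."""
    rng = random.Random(seed)
    props = []
    for _ in range(n_trials):
        seq = [rng.randint(0, 1) for _ in range(n_flips)]
        p = heads_after_heads(seq)
        if p is not None:
            props.append(p)
    return sum(props) / len(props)

if __name__ == "__main__":
    # The long-run P(heads | previous flip was heads) is exactly 0.5,
    # but the per-sequence average is noticeably lower for short sequences:
    print(average_streak_proportion(3))    # about 0.417 (5/12 in expectation)
    print(average_streak_proportion(10))   # still below 0.5
```

For sequences of length 3 the expected per-sequence proportion is exactly 5/12, roughly 0.42, even though the long-run conditional probability of heads after heads is of course 0.5. Mistaking that gap for a cognitive bias is precisely the kind of “bias bias” Gigerenzer is talking about.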

I’m a sucker for this sort of thing. It might be that at some points Gigerenzer is overstating his case, but he makes a lot of good points.

Some big themes

In his article, Gigerenzer raises three other issues that I’ve been thinking about a lot lately:

1. Overcertainty in the reception and presentation of scientific results.

2. Claims that people are stupid.

3. The political implications of claims that people are stupid.

Overcertainty and the problem of trust

Gigerenzer writes:

The irrationality argument exists in many versions (e.g. Conley, 2013; Kahneman, 2011). Not only has it come to define behavioral economics but it also has defined how most economists view psychology: Psychology is about biases, and psychology has nothing to say about reasonable behavior.

Few economists appear to be aware that the bias message is not representative of psychology or cognitive science in general. For instance, loss aversion is often presented as a truism; in contrast, a review of the literature concluded that the “evidence does not support that losses, on balance, tend to be any more impactful than gains” (Gal and Rucker, 2018). Research outside the heuristics-and-biases program that does not confirm this message—including most of the psychological research described in this article—is rarely cited in the behavioral economics literature (Gigerenzer, 2015).

(We discussed Gal and Rucker (2018) here.)

More generally, this makes me think of the problem of trust that Kaiser Fung and I noted in the Freakonomics franchise. There’s so much published research out there, indeed so much publicized research, that it’s hard to know where to start, so a natural strategy for sifting through and understanding it all is to rely on networks of trust: you trust your friends and colleagues, they trust their friends and colleagues, and so on. But you can see how this can lead to economists getting a distorted view of the content of psychology and cognitive science.

Claims that people are stupid

The best of the heuristics-and-biases research is fascinating, important stuff that has changed my life and gives us, ultimately, a deeper respect for ourselves as reasoning beings. But, as Gigerenzer points out, this same research is often misinterpreted as suggesting that people are easily manipulable (or easily nudged) fools, and this fits in with lots of junk-science claims of the same sort: pizzagate-style claims that the amount you eat can be manipulated by the size of your dining tray, goofy poli-sci claims that a woman’s vote depends on the time of the month, air rage, himmicanes, shark attacks, ages-ending-in-9, and all the rest. I can understand why this attitude might be popular among certain marketers, political consultants, and editors of the Proceedings of the National Academy of Sciences, but I don’t buy it, partly because of the zillions of errors in the published studies in question and partly because of the piranha principle. Again, what’s important here is not just the claim that people make mistakes, but the claim that they can be consistently manipulated using what would seem to be irrelevant stimuli.

Political implications

As usual, let me emphasize that if these claims were true—if it were really possible to massively and predictably change people’s attitudes on immigration by flashing a subliminal smiley face on a computer screen—then we’d want to know it.

If the claims don’t pan out, then they’re not so interesting, except inasmuch as (a) it’s interesting that smart people believed these things, and (b) we care if resources are thrown at these ideas. For (b), I’m not just talking about NSF funds and the like; I’m also talking about policy money (remember, the pizzagate dude was at one point appointed to a U.S. government position to implement his ideas) and, more generally, a whole approach to policymaking: things like nudging without persuasion, nudges that violate the Golden Rule, and of course nudges that don’t work.

There’s also a way in which a focus on individual irrationality can be used to discredit the public or to shift blame onto it. For example, Gigerenzer writes:

Nicotine addiction and obesity have been attributed to people’s myopia and probability-blindness, not to the actions of the food and tobacco industry. Similarly, an article by the Deutsche Bank Research “Homo economicus – or more like Homer Simpson?” attributed the financial crisis to a list of 17 cognitive biases rather than the reckless practices and excessive fragility of banks and the financial system (Schneider, 2010).

Indeed, social scientists used to talk about the purported irrationality of voting (for our counter-argument, see here). The implication: if voters are irrational, then we needn’t take their votes seriously.

I prefer Gigerenzer’s framing:

The alternative to paternalism is to invest in citizens so that they can reach their own goals rather than be herded like sheep.