Probability, Perception and False Positives

November 4, 2012

(This article was originally published at Learn and Teach Statistics and Operations Research, and syndicated at StatsBlogs.)

An understanding of probability empowers people to make informed choices in matters of great importance, including health screening, insurance, major weather events and terrorist threats. Unfortunately, it has been shown that this understanding of probability eludes even some of our most educated professionals and decision-makers.

Perceptions of Probability and Risk

There is a considerable body of work studying people’s perceptions of probability and risk, particularly by Amos Tversky and the Nobel prize-winning Daniel Kahneman. This has uncovered many systematic errors humans make in judging the relative probabilities of uncertain events. The brain’s tendency to find patterns results in heuristics or rules that have consistent bias. For example, if we have recently experienced or even heard of a bad random event, we perceive its probability to be higher than it really is. Having experienced two years of earthquakes in Christchurch, my estimation of the likelihood of an earthquake in other places is markedly increased. I (and many others from here) feel uneasy surrounded by tall buildings, street awnings and unsecured masonry in other cities, particularly Wellington, but even in cities with no known earthquake risk.

Cultural implications

The perception of probability is also found to be cultural. I analysed a probability-based task as part of the National Education Monitoring Project, and found a statistically and practically significant difference between the responses of ten-year-old Pacific Island students and NZ European students. I hypothesised that different home experiences involving games of chance may have led to this. Further reading uncovered research that identified other cultural differences. In particular, there are cultures in which everything is perceived to be decided by God: there is no chance, but rather a lack of knowledge of God’s will.

In fact, many things that we perceive to be subject to chance would not be if we had perfect knowledge. Increased understanding of weather patterns has made forecasting more reliable, which has reduced the level of uncertainty with regard to the arrival of bad storms like the recent Hurricane Sandy, or to a lesser extent, two heavy snowfalls in Christchurch in 2011. Even a coin toss is, strictly speaking, only a function of the placement of the coin and thumb, the amount of force applied and various other external factors. Because we cannot measure these factors, we are left to assume that the chance of a head or a tail is equal until shown otherwise.

Screening tests

In disease screening we generally do know the figures, and are not relying on subjective judgment as to the probabilities. However, the interpretation of the figures is notoriously badly done. There is a great deal of money involved in the screening industry, and it is an emotive area. Neither money nor emotion aids rational decision-making. This is exacerbated by misinterpretation of probabilities, and selective cost-counting.

My eyes were opened to this issue by a keynote address by Gerd Gigerenzer, director at the Max Planck Institute for Human Development. There is a very interesting eight-question quiz at the Harding Center. Try it now. (I was very excited to score 100%, but I put that down to having heard the address and thought seriously about this.) It would be great if you could tell us your score and reaction to the quiz in the comments below.

A week ago Tim Harford wrote about the lack of understanding among physicians in his post, “Why aren’t we doing the maths? – The practical implications of misplaced confidence when dealing with statistical evidence are obvious and worrying.” This problem is not going away. Some of the comments on the post expressed regret that probability questions like these are not part of the school curriculum, and that it is difficult to find resources to learn online. In New Zealand a new curriculum is being introduced with a greater emphasis on statistics at all levels. At Year 12, understanding of risk, particularly using two-way tables, is examined. As we develop materials to help teach this, we will make them available to the general public.


The following link takes you to a PDF of a PowerPoint presentation that teaches a step-by-step approach to this: Risk and Screening – step-by-step approach
We have found that this approach is helpful to students.

In particular, you need to make sure that the table has “What the test tells us” along the top, and “What is the reality” down the side. Do not label columns or rows “Correct” or “Incorrect”, as that makes the table much more difficult to interpret.
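To show how such a table is filled in, here is a small sketch in Python. The figures are purely illustrative assumptions (a prevalence of 1%, a sensitivity of 90% and a specificity of 91%), not those of any real screening programme.

```python
# Illustrative two-way table for a screening test.
# Assumed figures (hypothetical, not from any real programme):
#   prevalence  = 1%   -- proportion of the population with the disease
#   sensitivity = 90%  -- P(test positive | has disease)
#   specificity = 91%  -- P(test negative | no disease)

population = 100_000
prevalence = 0.01
sensitivity = 0.90
specificity = 0.91

has_disease = population * prevalence      # 1,000 people
no_disease = population - has_disease      # 99,000 people

true_pos = has_disease * sensitivity       # correctly flagged
false_neg = has_disease - true_pos         # missed cases
true_neg = no_disease * specificity        # correctly cleared
false_pos = no_disease - true_neg          # false alarms

# "What the test tells us" along the top,
# "What is the reality" down the side.
print(f"{'':12}{'Positive':>10}{'Negative':>10}{'Total':>10}")
print(f"{'Disease':12}{true_pos:>10.0f}{false_neg:>10.0f}{has_disease:>10.0f}")
print(f"{'No disease':12}{false_pos:>10.0f}{true_neg:>10.0f}{no_disease:>10.0f}")

# Of everyone who tests positive, what proportion actually has the disease?
ppv = true_pos / (true_pos + false_pos)
print(f"P(disease | positive test) = {ppv:.1%}")
```

With these assumed figures, the 8,910 false positives far outnumber the 900 true positives, so a positive result means only about a 9% chance of actually having the disease — exactly the kind of conclusion the two-way table makes easy to read off.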

At present there is no audio to go with this segment, but we hope it is self-explanatory.

The costs of screening

Just in case you are tempted to think that all screening must be good and more screening must therefore be better, here are some things to think about.

The following article, Breast screening is harmful, appeared recently; I found it after I had written the rest of this post. I am very excited to read that “BreastScreen Aotearoa is revising its leaflets to incorporate information about the risks of overdiagnosis”.

Screening is big business. There are the obvious costs of the equipment and staffing, including nurses, doctors, technicians and clerical workers. Added to that is the cost of loss of productivity for the time taken for the test. The test itself may be harmful. The cost of a false positive is considerable, including unnecessary further tests and interventions, some of which do actual harm. When screening is increased to include people at low-risk, the number of false positives increases, which then takes up resources, and can prevent people who really need intervention from getting it. The emotional costs of a false positive are far-reaching, unnecessarily decreasing quality of life, as people lose confidence in their own health and medicine.
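The effect of screening lower-risk people can be made concrete with a short calculation. The sketch below assumes the same hypothetical test as before (sensitivity 90%, specificity 91%) and two illustrative prevalence levels; the numbers are assumptions for demonstration, not real programme data.

```python
# How screening a lower-risk group changes the balance of results.
# Hypothetical test: sensitivity 90%, specificity 91%.

def screen(population, prevalence, sensitivity=0.90, specificity=0.91):
    """Return (true positives, false positives) for one screening round."""
    diseased = population * prevalence
    healthy = population - diseased
    true_pos = diseased * sensitivity
    false_pos = healthy * (1 - specificity)
    return true_pos, false_pos

# Compare an assumed higher-risk group (1% prevalence)
# with an assumed lower-risk group (0.1% prevalence).
for prev in (0.01, 0.001):
    tp, fp = screen(100_000, prev)
    print(f"prevalence {prev:.1%}: {tp:,.0f} true positives, "
          f"{fp:,.0f} false positives "
          f"({fp / tp:.0f} false positives per true positive)")
```

Under these assumptions, dropping the prevalence tenfold barely changes the number of false positives but slashes the number of true positives, so the ratio of false alarms to genuine detections gets roughly ten times worse — the resources consumed per case found rise accordingly.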

More screening can be harmful

Too often, lobby groups with well-intentioned but ill-informed leaders can do harm. This was possibly the case with breast cancer screening in New Zealand. The age of free screening was lowered to include a group for which the test is less accurate, resulting in many more false positives. A correct understanding of probability in the general populace might have prevented this.

What is clear is that information needs to be better explained in order for informed consent to occur.

Please comment on the article here: Learn and Teach Statistics and Operations Research
