Book Review—Reckoning with Risk, Gerd Gigerenzer
Gerd Gigerenzer discusses risk and its applications in daily life, with examples from medicine, the O.J. Simpson trial and DNA testing in general, wife battering, AIDS counseling, and other fun avenues of life.
Executive Summary
The human brain has evolved several mechanisms that helped us survive on the African veldt but now hinder us from understanding our world. Our brains see in terms of certainties instead of chances - we round "unlikely" down to never and "likely" up to always. I can observe my own brain working this way, but the evolutionary benefit is not obvious to me and I would like to read more. We see patterns where there is only noise. This could be a direct result of evolution: a bias toward false positives prevents catastrophe at the cost of paranoia, which is a perfectly good tradeoff for hominids who usually die by age twenty but is not so good for, say, rational stock trading. Or it could be a side effect of simply having powerful pattern-recognition mechanisms. We see cause and effect where none exists. And we think in terms of natural numbers, not percentages.
Two examples of our evolutionarily triggered false conclusions (this part is not from the book). I read a true story (in another book about risk and math, I think) in which the narrator asked a group of senior military leaders how many generals were "great." They conferred and said about five percent. He then asked how many battles one has to win in a row to be a great general. They answered, "five in a row." If the chance of winning a battle is 50%, then the chance of winning five in a row is 1/2^5, or 1 / (2*2*2*2*2), or 1/32, or about 3%. In other words, there's no reason to think the typical "great" general is anything other than lucky, at least not until they rack up a bigger lead over random chance. The other example was similar, and applied to sports. The chance that purely random performance - shooting baskets, getting hits - will lead to streaks during the course of a season is easily calculated, and it turns out that streaks in most sports are about as frequent as you would predict from random chance. In other words, most streaks are just random chance. Since our brains are geared to recognize patterns and to see causes behind effects, we falsely conclude that someone is performing especially well, probably because they ate their lucky pasta before the game.
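The generals example is easy to check numerically. A minimal sketch (mine, not from the book): compute the exact probability of five wins in a row, then simulate a population of coin-flipping "generals" and count how many look great by luck alone.

```python
import random

# Exact calculation: five 50/50 battles won in a row.
p_five_in_a_row = 0.5 ** 5  # 1/32, about 3%
print(f"P(five wins in a row) = {p_five_in_a_row:.3f}")

# Simulation: each "general" fights five random battles; count how many
# win all five purely by chance.
random.seed(0)
n_generals = 100_000
lucky = sum(
    all(random.random() < 0.5 for _ in range(5))
    for _ in range(n_generals)
)
print(f"Fraction of 'great' generals by chance alone: {lucky / n_generals:.3f}")
```

The simulated fraction lands near 3%, close to the five percent of generals the leaders judged "great" - which is the point: the label may measure luck, not skill.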
Gigerenzer outlines a number of common statistical mistakes, and I'll repeat the interesting ones here, skipping boring ones like confusing a 50% chance of rain tomorrow with the expectation of 12 hours of rain.
Risks expressed as probabilities are less understandable than risks expressed as frequencies. Compare:
The probability that an asymptomatic woman aged 40-50 in region X has breast cancer is 0.8 percent. If a woman has breast cancer, the probability is 90 percent that she will have a positive mammogram. If a woman does not have breast cancer, the probability is 7 percent that she will still have a positive mammogram. Imagine a woman who has a positive mammogram. What is the probability that she actually has breast cancer?

Write down your answer before proceeding to try the second question.
Eight of every thousand women [aged 40-50 etc.] have breast cancer. Of these eight with breast cancer, seven will have a positive mammogram. Of the remaining 992 without breast cancer, about 70 will have a (false) positive mammogram. Imagine a sample of women with positive mammograms. How many actually have breast cancer?

The correct answers are (.008 × .9) / ((.008 × .9) + (.992 × .07)) ≈ 0.09, and 7 / (7 + 70) ≈ 0.09 - in both cases, about one in ten.
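As a quick check of those numbers (my sketch, not from the book), here is the same calculation both ways - Bayes' rule on the probabilities, and the natural-frequency count:

```python
# Numbers from the problem statement.
p_cancer = 0.008             # base rate
p_pos_given_cancer = 0.90    # sensitivity
p_pos_given_healthy = 0.07   # false-positive rate

# Bayes' rule: P(cancer | positive mammogram).
p_pos = p_cancer * p_pos_given_cancer + (1 - p_cancer) * p_pos_given_healthy
posterior = p_cancer * p_pos_given_cancer / p_pos
print(f"P(cancer | positive) = {posterior:.3f}")  # about 0.09

# Natural-frequency version: of 1000 women, 8 have cancer and 7 of them test
# positive; about 70 of the 992 healthy women also test positive.
print(f"Frequency version: 7 / (7 + 70) = {7 / 77:.3f}")  # about 0.09
```

Both routes give roughly one woman in ten among those with a positive mammogram, which is why the frequency framing is so much easier: the counting is done for you.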
It is the same problem expressed in two ways, and the second way is easier for most people. One thing that did confuse me in the book, though, is why Gigerenzer argues for physicians to use the second method instead of a third method:
For every ten women with a positive mammogram, typically one actually has breast cancer.

It is then even easier to answer the question, "If you have a positive mammogram, what is the chance you actually have breast cancer?"
Expressing relative risks without a base rate. Example: mammography screening starting at age 40 reduces the risk of death by breast cancer by 25%. This seems like a convincing case for screening. However, the overall chance of dying from breast cancer is actually quite low; screening reduces the risk of death from breast cancer in the next 10 years from 0.4% to 0.3%. Once the consequences of the high rate of false positives, from stress to unnecessary surgery, are accounted for, the case for mammography screening is slim, especially if the effort put into mammography screening could instead be put into fighting the real killers (i.e., smoking, poor nutrition, and lack of exercise).
Prosecutor's fallacy: confusing the chance of a match given innocence with the chance of innocence given a match. This also involves ignoring the base rate: if your DNA matches DNA found at the scene with a one in a million chance of a false match, this does not mean that the chance that you are the real killer is 999,999 in a million. If the only evidence differentiating you from the other 10 million Los Angelenos is the DNA, then there are about nine other people in LA who will match, and thus only about a one in ten chance that you are the right match. (And this assumes that the other links in the chain are not broken - i.e., no lab error, no planted evidence, no possibility that you were at the scene and left DNA before or after the crime, or during a crime you didn't commit.)
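The base-rate arithmetic behind that "one in ten" is just an expected-value calculation. A sketch with the numbers from the passage:

```python
# A one-in-a-million false-match rate applied to a pool of 10 million people.
population = 10_000_000
false_match_rate = 1 / 1_000_000

# Expected number of people in the city whose DNA matches the sample.
expected_matches = population * false_match_rate
# If the DNA is the only evidence and exactly one of the matching people is
# the real killer, the chance that any given match is guilty is roughly:
p_guilty_given_match = 1 / expected_matches

print(f"Expected matches in the city: {expected_matches:.0f}")  # about 10
print(f"P(guilty | match) = {p_guilty_given_match:.2f}")        # about 0.10
```

The one-in-a-million figure answers "how likely is a match if you are innocent?", not "how likely are you guilty given a match?" - swapping those two questions is the fallacy.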
Given a Monty Hall situation, you should switch doors: your initial pick wins one time in three, and since the host always reveals a losing door, switching wins the other two times in three.
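If the 2/3 figure feels wrong (it does to almost everyone), a simulation settles it. A minimal sketch of the standard game, where the host always opens a losing door you did not pick:

```python
import random

def play(switch: bool) -> bool:
    """Play one round of Monty Hall; return True if the final pick wins."""
    doors = [0, 1, 2]
    prize = random.choice(doors)
    pick = random.choice(doors)
    # Host opens a door that is neither your pick nor the prize.
    opened = random.choice([d for d in doors if d != pick and d != prize])
    if switch:
        # Switch to the one remaining closed door.
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == prize

random.seed(1)
trials = 100_000
stay_wins = sum(play(switch=False) for _ in range(trials)) / trials
switch_wins = sum(play(switch=True) for _ in range(trials)) / trials
print(f"Stay:   {stay_wins:.3f}")    # about 1/3
print(f"Switch: {switch_wins:.3f}")  # about 2/3
```

Over many trials, staying wins about a third of the time and switching about two thirds - the frequency framing again doing what the probability framing struggles to.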