In his almost-overwhelmingly rich new book, Nobel Prize-winning psychologist Daniel Kahneman writes about how we think. One of the many aspects of human thinking he illuminates is a deeply ingrained habit of ignoring statistical facts about groups or populations while gobbling up or even cranking out causal stories that purport to explain those facts. In a chapter called “Causes Trump Statistics,” he writes:
Statistical base rates are facts about the population to which a case belongs, but they are not relevant to the individual case. Causal base rates change your view of how the individual case came to be. The two types of base-rate information are treated very differently:
* Statistical base rates are generally underweighted, and sometimes neglected altogether, when specific information about the case at hand is available.
* Causal base rates are treated as information about the individual case and are easily combined with other case-specific information.
These different responses appear to be built-in features of the automatic and unconscious thinking that dominates our cognition. Because of them, our minds “can deal with stories in which the elements are causally linked,” but they are “weak in statistical reasoning,” and “people will not draw from base-rate information an inference that conflicts with other beliefs.”
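The distinction Kahneman draws between statistical base rates and case-specific evidence has a precise normative counterpart in Bayes' rule. As a minimal numeric sketch (all numbers are hypothetical, chosen only for illustration), here is how a low base rate should discipline even a fairly strong case-specific signal:

```python
# Hypothetical illustration of base-rate neglect. Bayes' rule says the
# base rate and the case-specific evidence must be combined; the habit
# Kahneman describes amounts to dropping the base rate entirely.

def posterior(base_rate, hit_rate, false_alarm_rate):
    """P(event | signal) via Bayes' rule."""
    numerator = hit_rate * base_rate
    denominator = numerator + false_alarm_rate * (1 - base_rate)
    return numerator / denominator

# Suppose only 2% of cases experience the event in a given year (the
# statistical base rate), and a vivid case-specific signal fires for
# 80% of event cases but also for 10% of non-event cases.
p = posterior(base_rate=0.02, hit_rate=0.80, false_alarm_rate=0.10)
print(round(p, 3))  # ~0.14, far below the ~0.8 that an intuition
                    # neglecting the base rate would suggest
```

Even a signal that sounds diagnostic leaves the event unlikely when the base rate is low; ignoring that fact is exactly the underweighting the bulleted passage describes.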
This problem seems like it ought to be fixable. We’re a smart species; if we know we have this glitch in our thinking, we should be able to correct it through learning and force of will, right? Sadly, Kahneman says not. His research “supports the uncomfortable conclusion that teaching psychology is mostly a waste of time.” These routines are so deeply embedded that the conscious “controller” overseeing (or, in this case, mostly just watching) them is usually unable to adjust or switch them off. He concludes:
To teach students any psychology they did not know before, you must surprise them. But which surprise will do? [Psychologists Richard] Nisbett and [Eugene] Borgida found that when they presented their students with a surprising statistical fact, the students managed to learn nothing at all. But when the students were surprised by individual cases…they immediately made the generalization…Nisbett and Borgida summarize the results in a memorable sentence: ‘Subjects’ unwillingness to deduce the particular from the general was matched only by their willingness to infer the general from the particular.’
I read that chapter, and that last sentence in particular, and thought of two threads running through my many experiences trying to convince people who know things about specific countries to pay attention to statistical forecasts of future events in those countries.
The first thread connects the many occasions when I heard something like, “But my country is different, and here’s why.” As I recall our interactions, most country specialists respond to statistical forecasts about their cases just as Kahneman’s research predicts: by underweighting or ignoring the statistical base rate in favor of the abundance of case-specific information they already have. Once you’ve gotten this reaction, it’s extremely hard to convince a country specialist to pay attention to the forecast, even when you can show empirically that the forecasting model works very well (another one of those pesky statistical base rates we’re apparently programmed to ignore).
The second thread connects the occasions when I found audiences that were more willing to listen. In my experience, country specialists and other consumers of statistical forecasts are more receptive to those numbers when the model that generated them can also be used (or, really, abused; forecasting models are almost never designed to identify cause and effect) to tell an interesting causal story. A country specialist may be especially receptive when one or more of the causal stories you might tell with the model resonates in some way with what she already believes about forces at work in her own case. For example, if the specialist previously believed that ethnic tensions were a likely source of political instability in her country, she would be more likely to give the statistical forecast serious consideration if the underlying model included a measure of ethnic diversity, even if that measure only weakly influences the forecast.
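To make concrete what “weakly influences the forecast” can mean, here is a toy logistic-style forecast with made-up coefficients (nothing here corresponds to a real instability model): the diversity variable is present, which may make the model feel causally plausible to a specialist, but its coefficient is so small that it barely moves the number.

```python
import math

def forecast(other_signal, ethnic_diversity):
    # Hypothetical coefficients for illustration only. The tiny 0.05
    # weight on ethnic_diversity means the variable is in the model
    # but contributes almost nothing to the predicted probability.
    z = -2.0 + 1.5 * other_signal + 0.05 * ethnic_diversity
    return 1 / (1 + math.exp(-z))

low = forecast(other_signal=1.0, ethnic_diversity=0.0)
high = forecast(other_signal=1.0, ethnic_diversity=1.0)
print(round(high - low, 4))  # the diversity term shifts the forecast only slightly
```

The variable’s narrative value to the reader can be entirely disconnected from its statistical weight in the forecast.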
After reading Kahneman, I wonder if what’s happening in these situations is that the analyst starts the conversation by making an inference about the validity of the statistical model (the general) from her understanding of a specific case (the particular). When these situations arose, they felt like victories of sorts, but I’m now inclined to see them as the flip side of the loaded coin that produces the more frequent and more obvious defeats. The association between the model’s components and the analyst’s prior beliefs gives her a cognitive toehold from which she can start to explore the larger forecast, but the basic process is the same: the particular trumps the general, and causal storytelling beats “mere” association.