Cognitive theories predict that even experts cope with the complexities and ambiguities of world politics by resorting to theory-driven heuristics that allow them: (a) to make confident counterfactual inferences about what would have happened had history gone down a different path (plausible pasts); (b) to generate predictions about what might yet happen (probable futures); (c) to defend both counterfactual beliefs and conditional forecasts from potentially disconfirming data. An interrelated series of studies test these predictions by assessing correlations between ideological world view and beliefs about counterfactual histories (Studies 1 and 2), experimentally manipulating the results of hypothetical archival discoveries bearing on those counterfactual beliefs (Studies 3-5), and by exploring experts’ reactions to the confirmation or disconfirmation of conditional forecasts (Studies 6-12). The results revealed that experts neutralize dissonant data and preserve confidence in their prior assessments by resorting to a complex battery of belief-system defenses that, epistemologically defensible or not, make learning from history a slow process and defections from theoretical camps a rarity.
That’s the abstract of a 1999 AJPS paper by Phil Tetlock (emphasis added; ungated PDF here). Or, as Phil writes in the body of the paper,
The three sets of studies underscore how easy it is even for sophisticated professionals to slip into borderline tautological patterns of thinking about complex path-dependent systems that unfold once and only once. The risk of circularity is particularly pronounced when we examine reasoning about ideologically charged historical counterfactuals.
As noted in a recent post, ongoing debates over who “lost” Iraq, or over whether direct U.S. military intervention in Syria would have prevented a wider war in the Middle East, are current cases in point.
This morning, though, I’m intrigued by Phil’s point about the rarity of defections from theoretical camps tied to wider belief systems. If that’s right—and I have no reason to doubt that it is—then we should not put much faith in any one expert’s ability to update his or her scientific understanding in response to new information. In other words, we shouldn’t expect science to happen at the level of the individual. Instead, we should look wherever possible at the distribution of beliefs across a community of experts and hope that social cognition is more powerful than our individual minds are.
This evidence should also affect our thinking about how scientific change occurs. In The Structure of Scientific Revolutions, Thomas Kuhn (p. 19 in the 2nd Edition) described scientific revolutions as a process that happens at both the individual and social levels:
When, in the development of a natural science, an individual or group first produces a synthesis able to attract most of the next generation’s practitioners, the older schools gradually disappear. In part their disappearance is caused by their members’ conversion to the new paradigm. But there are always some men who cling to one or another of the older views, and they are simply read out of the profession, which thereafter ignores their work.
If I’m reading Tetlock’s paper right, though, then this description is only partly correct. In reality, scientists who are personally and professionally (or cognitively and emotionally?) invested in existing theories probably don’t convert to new ones very often. Instead, the recruitment mechanism Kuhn also mentions is probably the more relevant one. If we could reliably measure it, the churn rate associated with specific belief clusters would be a fascinating thing to watch.
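Neither Tetlock nor Kuhn offers a formal model of that churn, but a toy simulation can make the distinction between the two mechanisms concrete. The sketch below is purely illustrative: every parameter (rates of conversion, retirement, and recruitment into the new camp) is a made-up number, not an estimate from any data.

```python
import random

def simulate_field(years=40, field_size=1000, p_convert=0.01,
                   p_retire=0.03, p_recruit_new=0.9, seed=1):
    """Toy model of paradigm change driven by recruitment rather than conversion.

    All parameters are hypothetical illustrations, not empirical estimates:
      p_convert     -- annual chance an incumbent switches to the new paradigm
      p_retire      -- annual chance any practitioner leaves the field
      p_recruit_new -- chance a newly recruited practitioner adopts the new paradigm
    """
    rng = random.Random(seed)
    # True = committed to the new paradigm, False = committed to an older school.
    field = [False] * field_size
    shares = []
    for _ in range(years):
        next_field = []
        for adopts_new in field:
            if rng.random() < p_retire:
                # Retirement: replaced by a recruit, who usually joins the new camp.
                next_field.append(rng.random() < p_recruit_new)
            elif not adopts_new and rng.random() < p_convert:
                # Rare outright conversion of an incumbent.
                next_field.append(True)
            else:
                next_field.append(adopts_new)
        field = next_field
        shares.append(sum(field) / field_size)
    return shares

if __name__ == "__main__":
    shares = simulate_field()
    for year in (0, 9, 19, 29, 39):
        print(f"year {year + 1:2d}: share on new paradigm = {shares[year]:.2f}")
```

Under these invented numbers, the new paradigm’s share climbs steadily even though incumbents almost never convert, which is roughly the recruitment-driven picture sketched above. The interesting empirical question is what the real-world analogues of those parameters look like for particular belief clusters.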