Beware the Confident Counterfactual

Did you anticipate the Syrian uprising that began in 2011? What about the Tunisian, Egyptian, and Libyan uprisings that preceded and arguably shaped it? Did you anticipate that Assad would survive the first three years of civil war there, or that Iraq’s civil war would wax again as intensely as it has in the past few days?

All of these events or outcomes were difficult forecasting problems before they occurred, and many observers have been frank about their own surprise at many of them. At the same time, many of those same observers speak with confidence about the causes of those events. The invasion of Iraq in 2003 surely is or is not the cause of the now-raging civil war in that country. The absence of direct US or NATO military intervention in Syria is or is not to blame for continuation of that country’s civil war and the mass atrocities it has brought—and, by extension, the resurgence of civil war in Iraq.

But here’s the thing: strong causal claims require some confidence about how history would have unfolded in the absence of the cause of interest, and those counterfactual histories are no easier to get right than observed history was to anticipate.

Like all of the most interesting questions, what causality means and how we might demonstrate it will forever be matters for debate—see here on Daniel Little’s blog for an overview of that debate’s recent state—but most conceptions revolve around some idea of necessity. When we say X caused Y, we usually mean that had X not occurred, Y wouldn’t have happened, either. Subtler or less stringent versions might center on salience instead of necessity and insert a “probably” into the final phrase of the previous sentence, but the core idea is the same.

In nonexperimental social science, this logic implicitly obliges us to consider the various ways history might have unfolded in response to X’ rather than X. In a sense, then, both prediction and explanation are forecasting problems. They require us to imagine states of the world we have not seen and to connect them in plausible ways to ones we have. If anything, the counterfactual predictions required for explanation are more frustrating epistemological problems than the true forecasts, because we will never get to see the outcome(s) against which we could assess the accuracy of our guesses.

As Robert Jervis pointed out in his contribution to a 1996 edited volume on counterfactual thought experiments in world politics, counterfactuals are (or should be) especially hard to construct—and thus causal claims especially hard to make—when the causal processes of interest involve systems. For Jervis,

A system exists when elements or units are interconnected so that the system has emergent properties—i.e., its characteristics and behavior cannot be inferred from the characteristics and behavior of the units taken individually—and when changes in one unit or the relationship between any two of them produce ramifying alterations in other units or relationships.

As Jervis notes,

A great deal of thinking about causation…is based on comparing two situations that are the same in all ways except one. Any differences in the outcome, whether actual or expected…can be attributed to the difference in the state of the one element…

Under many circumstances, this method is powerful and appropriate. But it runs into serious problems when we are dealing with systems because other things simply cannot be held constant: as Garrett Hardin nicely puts it, in a system, ‘we can never do merely one thing.’

Jervis sketches a few thought experiments to drive this point home. He has a nice one about the effects of external interventions on civil wars that is topical here, but I think his New York traffic example is more resonant:

In everyday thought experiments we ask what would have happened if one element in our world had been different. Living in New York, I often hear people speculate that traffic would be unbearable (as opposed to merely terrible) had Robert Moses not built his highways, bridges, and tunnels. But to try to estimate what things would have been like, we cannot merely subtract these structures from today’s Manhattan landscape. The traffic patterns, the location of businesses and residences, and the number of private automobiles that are now on the streets are in significant measure the product of Moses’s road network. Had it not been built, or had it been built differently, many other things would have been different. Traffic might now be worse, but it is also possible that it would have been better because a more efficient public transportation system would have been developed or because the city would not have grown so large and prosperous without the highways.

Substitute “invade Iraq” or “fail to invade Syria” for Moses’s bridges and tunnels, and I hope you see what I mean.

In the end, it’s much harder to get beyond banal observations about influences to strong claims about causality than our story-telling minds and the popular media that cater to them would like. Of course the invasion of Iraq in 2003 and the absence of Western military intervention in Syria have shaped the histories that followed. But what would have happened in their absence—and, by implication, what would happen now if, for example, the US re-inserted its armed forces into Iraq or attempted to topple Assad? Those questions are far tougher to answer, and we should beware of anyone who speaks with great confidence about the answers. If you’re a social scientist who isn’t comfortable making predictions and confident in their accuracy, you shouldn’t be comfortable making causal claims and confident in their validity, either.

Hello?!? Not All Forecasters Are Strict Positivists

International relations is the most predictively oriented subfield of political science…Yet even in the other empirical subfields, the positivist notion that everything must ultimately be reducible to (knowable) universal laws displays its hold in excrescences such as quadrennial attempts to derive formulae for predicting the next presidential election outcome, usually on the basis of “real” (economic) factors. Even if one follows Milton Friedman (1953) in insisting that the factors expressed by such formulae are not supposed to be actually causing electoral outcomes, but are merely variables that (for some unknown reason) allow us to make good behavioral predictions, in practice one usually wants to know what is actually causing the behavior, and it is all too easy to assume that whatever is causing it—since it seems to be responsible for a behavioral regularity—must be some universal human disposition.

That’s from a 2012 paper by Jeffrey Friedman on Robert Jervis’ 1997 System Effects and the “problem of prediction.” I actually enjoyed the paper on the whole, but this passage encapsulates what drives me nuts about what many people—including many social “scientists”—think it means to try to make forecasts about politics.

Contrary to the assertions of some haters, political scientists almost never make explicit forecasts about the things they study—at least not in print or out loud. Some of that reticence presumably results from the fact that there’s no clear professional benefit to making predictions, and there is some professional risk in doing so and then being wrong.

Some of that reticence, though, also seems to flow from the silly but apparently widely held idea that the very act of forecasting implies that the forecaster accepts the strict positivist premise that “everything must ultimately be reducible to (knowable) universal laws.” To that, I say…

[Image: Charlie Brown yelling “AAUGH!”]

Probability is a mathematical representation of uncertainty, and a probabilistic forecast explicitly acknowledges that we don’t know for sure what’s going to happen. Instead, it’s an educated guess—or, in Bayesian terms, an informed belief.

Forecasters generally use evidence from the past to educate those guesses, but that act of empiricism does not in itself imply that we presume universal laws lurking beneath that history drive political processes. Instead, it’s really just a practical solution to the problem of wanting better information—sometimes to help us plan for the future, and sometimes to try to adjudicate between different ideas about the forces shaping those processes now and in the past.

Empiricism is a practical solution because it works—not perfectly, of course, but, for many problems of interest, a lot better than casting bones or reading entrails or consulting oracles. The handful of forecasters I know all embrace the premises that their efforts are only approximations, and that the world can always change in ways that will render the models we find helpful today less helpful in the future. In the meantime, though, we figure we can nibble away at our ignorance by making structured guesses about that future and seeing which ones turn out to be more reliable than the others. Physicists still aren’t entirely sure how planes manage to fly, but millions of us make a prediction every day that the plane we’re about to board is somehow going to manage that feat. We don’t need to be certain of the underlying law to find that prediction useful.
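That business of “seeing which ones turn out to be more reliable” can be made concrete with a scoring rule. Here is a minimal sketch—using the Brier score, a standard measure of probabilistic forecast accuracy, with entirely invented forecasts and outcomes—of how two forecasters’ structured guesses might be compared after the fact:

```python
def brier_score(forecasts, outcomes):
    """Mean squared difference between forecast probabilities and
    binary outcomes (0 = didn't happen, 1 = happened). Lower is better."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical probabilities assigned by two forecasters to the same
# four events, and what actually happened (all numbers are made up).
events_occurred = [1, 0, 0, 1]
hedged_forecaster = [0.7, 0.2, 0.3, 0.8]
coin_flipper = [0.5, 0.5, 0.5, 0.5]

print(brier_score(hedged_forecaster, events_occurred))  # 0.065
print(brier_score(coin_flipper, events_occurred))       # 0.25
```

Note that the score rewards well-calibrated uncertainty, not certainty: a forecaster who honestly says 0.7 can beat one who always claims to know, which is exactly the anti-positivist point.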

Finally, I can’t resist: there’s real irony in Friedman’s choice of examples of misguided forecasting projects. To have called efforts to predict the outcome of U.S. presidential elections “excrescences” in the year those excrescences had a kind of popular coming out, well, that’s just unfortunate. I guess Friedman didn’t see that one coming.
