In Defense of Political Science and Forecasting

Under the headline “Political Scientists Are Lousy Forecasters,” today’s New York Times includes an op-ed by Jacqueline Stevens that takes a big, sloppy swipe at most of the field. The money line:

It’s an open secret in my discipline: in terms of accurate political predictions (the field’s benchmark for what counts as science), my colleagues have failed spectacularly and wasted colossal amounts of time and money.

As she sees it, this poor track record is an inevitability. Referencing the National Science Foundation’s history of funding research in which she sees little value, Stevens writes:

Government can—and should—assist political scientists, especially those who use history and theory to explain shifting political contexts, challenge our intuitions and help us see beyond daily newspaper headlines. Research aimed at political prediction is doomed to fail. At least if the idea is to predict more accurately than a dart-throwing chimp.

I don’t have much time to write today, so I was glad to see this morning that Henry Farrell has already penned a careful rebuttal that mirrors my own reactions. On the topic of predictions in particular, Farrell writes:

The claim here—that “accurate political prediction” is the “field’s benchmark for what counts as science”—is quite wrong. There really isn’t much work at all by political scientists that aspires to predict what will happen in the future…It is reasonable to say that the majority position in political science is a kind of soft positivism, which focuses on the search for law-like generalizations. But that is neither a universal benchmark (I, for one, don’t buy into it), nor indeed, the same thing as accurate prediction, except where strong covering laws (of the kind that few political scientists think are generically possible) can be found.

To Farrell’s excellent rebuttals, I would add a couple of things.

First and most important, there’s a strong case to be made that political scientists don’t engage in enough forecasting and really ought to do more of it. Contrary to Stevens’ assertion in that NYT op-ed, most political scientists eschew forecasting, seeing description and explanation as the goals of their research instead. As Phil Schrodt argues in “Seven Deadly Sins of Contemporary Quantitative Political Analysis” (PDF), however, to the extent that we see our discipline as a form of science, political scientists ought to engage in forecasting, because prediction is an essential part of the scientific method.

Explanation in the absence of prediction is not somehow scientifically superior to predictive analysis, it isn’t scientific at all! It is, instead, “pre-scientific.”

In a paper on predicting civil conflicts, Mike Ward, Brian Greenhill, and Kristin Bakke help to explain why:

Scholars need to make and evaluate predictions in order to improve our models. We have to be willing to make predictions explicitly – and plausibly be wrong, even appear foolish – because our policy prescriptions need to be undertaken with results that are drawn from robust models that have a better chance of being correct. The whole point of estimating risk models is to be able to apply them to specific cases. You wouldn’t expect your physician to tell you that all those cancer risk factors from smoking don’t actually apply to you. Predictive heuristics provide a useful, possibly necessary, strategy that may help scholars and policymakers guard against erroneous recommendations.

Second, I think Stevens actually gets the historical record wrong. It drives me crazy when I see people take the conventional wisdom about a topic—say, the possibility of the USSR’s collapse, or a wave of popular uprisings in the Middle East and North Africa—and turn it into a blanket statement that “no one predicted X.” Most of the time, we don’t really know what most people would have predicted, because they weren’t asked to make predictions. The absence of a positive assertion that X will happen is not the same thing as a forecast that X will not happen. In fact, in at least one of the cases Stevens discusses—the USSR’s collapse—we know that some observers did forecast its eventual collapse, albeit usually without much specificity about the timing of that event.

More generally, I think it’s fair to say that, on just about any topic, there will be a distribution of forecasts—from high to low, impossible to inevitable, and so on. Often, that distribution will have a clear central tendency, as did expectations about the survival of authoritarian regimes in the USSR or the Arab world, but that central tendency should not be confused with a consensus. Instead, this divergence of expectations is precisely where the most valuable information will be found. Eventually, some of those predictions will prove correct while others will not, and, as Phil and Mike and co. remind us, that variation in performance tells us something very useful about the power of the explanatory models—quantitative, qualitative, it doesn’t really matter—from which they were derived.
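To make that evaluation idea a bit more concrete, here is a minimal sketch of one common way to compare a distribution of probabilistic forecasts against what actually happened, using Brier scores (mean squared error between forecast probabilities and 0/1 outcomes). The forecaster labels, probabilities, and outcomes below are invented purely for illustration; they are not data from any of the studies discussed here.

```python
# Illustrative sketch only: the forecasters, probabilities, and outcomes below
# are hypothetical, meant to show how divergent forecasts can be scored.

def brier_score(probabilities, outcomes):
    """Mean squared error between forecast probabilities and 0/1 outcomes (lower is better)."""
    return sum((p - o) ** 2 for p, o in zip(probabilities, outcomes)) / len(outcomes)

# Hypothetical probabilities that three regimes collapse within a year,
# from three fictional forecasting approaches.
forecasts = {
    "structural model": [0.70, 0.20, 0.60],
    "expert judgment":  [0.40, 0.50, 0.30],
    "base rate only":   [0.10, 0.10, 0.10],
}
observed = [1, 0, 1]  # 1 = collapse occurred, 0 = it did not

for name, probs in forecasts.items():
    print(f"{name}: Brier score = {brier_score(probs, observed):.3f}")
```

Scoring rules like this are one simple way to turn the divergence of expectations into evidence about which underlying models are doing better.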

PS. For smart rebuttals to other aspects of Stevens’ jeremiad, see Erik Voeten’s post at the Monkey Cage and Steve Saideman’s rejoinder at Saideman’s Semi-Spew.

PPS. Stevens provides some context for her op-ed on her own blog, here. (I would have added this link sooner, but I’ve just seen it myself.)

PPPS. For some terrific ruminations on uncertainty, statistics, and scientific knowledge, see this latecomer response from Anton Strezhnev.
