A Skeptical Note on Policy-Prescriptive Political Science

My sometimes-colleague Michael Horowitz wrote a great piece for War on the Rocks last week on what “policy relevance” means for political scientists who study international affairs, and the different forms that relevance can take. Among the dimensions of policy relevance he calls out is the idea of “policy actionability”:

Policy actionability refers to a recommendation that is possible to implement for the target of the recommendation. Most academic work is not policy actionable, fundamentally. For example, implications from international relations research are things such as whether countries with high male-to-female ratios are more likely to start military conflicts or that countries that acquire nuclear weapons become harder to coerce.

As Michael notes, most scholarship isn’t “actionable” in this way, and isn’t meant to be. In my experience, though, there is plenty of demand in Washington and elsewhere for policy-actionable research on international affairs, and there is a subset of scholars who, in pursuit of relevance, do try to extract policy prescriptions from their studies.

As an empiricist, I welcome both of those things—in principle. Unfortunately, the recommendations that scholars offer rarely follow directly from their research. Instead, they almost always require some additional, often-heroic assumptions, and those additional assumptions render the whole endeavor deeply problematic. For example, Michael observes that most statistical studies identify average effects—other things being equal, a unit change in x is associated with some amount of change in y—and points out that the effects in any particular case will still be highly uncertain.
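To see why that case-level uncertainty matters, here is a minimal sketch in Python with invented data (nothing below comes from any actual study; the effect size, noise level, and sample size are all made up): the average effect of x is estimated very precisely, yet the prediction for any single case is wide enough to swallow that effect many times over.

```python
# A minimal sketch (hypothetical data, not from any study cited here) of the gap
# between a well-estimated average effect and uncertainty about any single case.
import numpy as np

rng = np.random.default_rng(42)
n = 2000
x = rng.normal(size=n)
y = 0.5 * x + rng.normal(scale=2.0, size=n)   # true average effect of x on y is 0.5

# Ordinary least squares by hand: y = a + b*x + error
X = np.column_stack([np.ones(n), x])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ coef
sigma = resid.std(ddof=2)

# Standard error of the estimated average effect (the slope b)
se_b = sigma / np.sqrt(np.sum((x - x.mean()) ** 2))
print(f"average effect: {coef[1]:.2f} (std. error {se_b:.3f})")

# Rough 95% prediction interval for one new case with x = 1
y_hat = coef[0] + coef[1] * 1.0
print(f"one case at x = 1: predicted y = {y_hat:.2f}, give or take {1.96 * sigma:.2f}")
```

With these made-up numbers, the slope's standard error comes out around 0.04 while the give-or-take on a single case is roughly 4, which is exactly the gap Michael is pointing to.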

That’s true for a lot of what we study, but it’s only the half of it. Even more significant, I think, are the following three assumptions, which implicitly underpin the “policy implications” sections in a lot of the work on international affairs that tries to convert comparative analysis (statistical or not) into policy recommendations:

  • Attempts to induce a change in x in the prescribed direction will actually produce the desired change in x;
  • Attempts to induce a change in x in the prescribed direction will not produce significant and negative unintended consequences; and
  • If it does occur, a change in x induced by the policy actor to whom the scholar is making recommendations will have the same effect as previous changes in x that occurred for various other reasons.

The last assumption isn’t so problematic when the study in question looked specifically at policy actions by that same policy actor, but that’s almost never the case in international relations and other fields using observational data to study macro-political behavior. Instead, we’re more likely to have a study that looked at something like GDP growth rates, female literacy, or the density of “civil society” organizations that the policy audience does not control and does not know how to control. Under these circumstances, all three of those assumptions must hold for the research to be neatly “actionable,” and I bet most social scientists will tell you that at least one and probably two or three of them usually don’t.
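A toy simulation helps show how that third assumption can fail in exactly this setting. Suppose the observational association between x and y partly runs through an unmeasured confounder; then a policy actor who manages to nudge x directly will buy far less change in y than the published estimate suggests. Everything here is hypothetical: the variable names, effect sizes, and the single-confounder setup are placeholders for illustration, not a claim about any particular study.

```python
# A toy simulation (hypothetical numbers, not tied to any study discussed above)
# of why assumption 3 can fail: in the observational data, x and y share a
# confounder, so the association overstates what a deliberate change in x buys.
import numpy as np

rng = np.random.default_rng(7)
n = 100_000

# Observational world: a confounder u drives both x and y; x's own effect is small.
u = rng.normal(size=n)
x_obs = 1.0 * u + rng.normal(size=n)
y_obs = 0.2 * x_obs + 1.0 * u + rng.normal(size=n)
slope_obs = np.cov(x_obs, y_obs)[0, 1] / np.var(x_obs)

# Intervention world: a policy actor sets x directly, leaving u alone.
x_int = rng.normal(size=n)
y_int = 0.2 * x_int + 1.0 * u + rng.normal(size=n)
slope_int = np.cov(x_int, y_int)[0, 1] / np.var(x_int)

print(f"association in the observational data: {slope_obs:.2f}")  # ~0.7
print(f"effect of actually manipulating x:     {slope_int:.2f}")  # ~0.2
```

In this setup the observational slope lands near 0.7 while the effect of actually manipulating x is the true 0.2, and nothing in the observational data alone warns you about the difference.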

With so much uncertainty and so much at stake, I wind up thinking that, unless their research designs have carefully addressed these assumptions, scholars—in their roles as scientists, not as citizens or advocates—should avoid that last mile and leave it to the elected officials and bureaucrats hired for that purpose. That’s hard to do when we care about the policies involved and get asked to offer “expert” advice, but “I don’t know” or “That’s not my area of expertise” will almost always be a more honest answer in these situations.

 

Forecasting Round-Up

I don’t usually post lists of links, but the flurry of great material on forecasting that hit my screen over the past few days is inspiring me to make an exception. Here in no particular order are several recent pieces that deserve wide reading:

  • “The Weatherman Is Not a Moron.” Excerpted from a forthcoming book by the New York Times’ Nate Silver, this piece deftly uses meteorology to illustrate the difficulties of forecasting in complex systems and some of the ways working forecasters deal with them. For a fantastic intellectual history of the development of the ensemble forecasting approach Silver discusses, see this July 2005 journal article by John Lewis in the American Meteorological Society’s Monthly Weather Review.
  • “Trending Upward.” Michael Horowitz and Phil Tetlock write for Foreign Policy about how the U.S. “intelligence community” can improve its long-term forecasting. The authors focus on the National Intelligence Council’s Global Trends series, which attempts the Herculean (or maybe Sisyphean) feat of trying to peer 15 years into the future, but the recommendations they offer apply to most forecasting exercises that rely on expert judgment. And, on the Duck of Minerva blog, Jon Western pushes back: “I think there is utility in long-range forecasting exercises, I’m just not sure I see any real benefits from improved accuracy on the margins. There may actually be some downsides.” [Disclosure: Since this summer, I have been a member of Tetlock and Horowitz’s team in the IARPA-funded forecasting competition they mention in the article.]
  • “Theories, Models, and the Future of Science.” This post by Ashutosh Jogalekar on Scientific American’s Curious Wavefunction blog argues that “modeling and simulation are starting to be considered as a respectable ‘third leg’ of science, in addition to theory and experiment.” Why? Because “many of science’s greatest current challenges may not be amenable to rigorous theorizing, and we may have to treat models of phenomena as independent, authoritative explanatory entities in their own right.” Like Trey Causey, who pointed me toward this piece on Twitter, I think the post draws a sharper distinction between modeling for simulation and explanation than it needs to, but it’s a usefully provocative read.
  • “The Probabilities of Large Terrorist Events.” I recently finished Nassim Nicholas Taleb’s The Black Swan and was looking around for worked examples applying that book’s idea of “fractal randomness” to topics I study. Voilà! On Friday, Wired’s Social Dimensions blog spotlighted a recent paper by Aaron Clauset and Ryan Woodard that uses a mix of statistical techniques, including power-law models, to estimate the risk of this particular low-probability, high-impact political event. Their approach—model only the tail of the distribution and use an ensemble approach like the aforementioned meteorologists do—seems really clever to me, and I like how they are transparent about the uncertainty of the resulting estimates. (For a stripped-down illustration of the tail-only idea, see the sketch after this list.)
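As promised above, here is a bare-bones sketch of the tail-only idea: simulate some heavy-tailed event sizes, fit a power law above a threshold by maximum likelihood, and turn the fit into a probability for a very large event. This is a deliberate simplification with simulated data and an eyeballed threshold, not a reimplementation of Clauset and Woodard's method, which handles threshold selection and uncertainty far more carefully.

```python
# A stripped-down sketch of the "model only the tail" idea: fit a power law to
# event sizes above a threshold and use it to put a probability on a very large
# event. Illustrative only: simulated data and a bare-bones maximum-likelihood
# fit, not the method used in the Clauset and Woodard paper.
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical event severities: mostly small, with a heavy Pareto tail.
# (rng.pareto draws Lomax samples; adding 1 and scaling gives a classical
# Pareto with minimum 10 and survival exponent 2.)
sizes = (rng.pareto(a=2.0, size=5000) + 1.0) * 10.0

xmin = 100.0                      # tail threshold, chosen by eye here
tail = sizes[sizes >= xmin]

# Maximum-likelihood exponent for a continuous power law above xmin
alpha = 1.0 + len(tail) / np.sum(np.log(tail / xmin))

# P(size >= big) = P(size >= xmin) * (big / xmin) ** (1 - alpha)
p_tail = len(tail) / len(sizes)
big = 1000.0
p_big = p_tail * (big / xmin) ** (1.0 - alpha)
print(f"alpha ~ {alpha:.2f}, P(size >= {big:.0f}) ~ {p_big:.4f}")
```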