Yes, Forecasting Conflict Can Help Make Better Foreign Policy Decisions

At the Monkey Cage, Idean Salehyan has a guest post that asks, “Can forecasting conflict help to make better foreign policy decisions?” I started to respond in a comment there, but as my comment ballooned into several paragraphs and started to include hyperlinks, I figured I’d go ahead and blog it.

Let me preface my response by saying that I’ve spent most of my 16-year career since graduate school doing statistical forecasting, first for the U.S. government and now for wider audiences, and I expect to continue doing this kind of work for a while. That means I have a lot of experience doing it and thinking about how and why to do it, but it also means that I’m financially invested in an affirmative answer to Salehyan’s rhetorical question. Make of that what you will.

So, on to the substance. Salehyan’s main concern is actually an ethical one, not the pragmatic one I inferred when I first saw the title of his post. When Salehyan asks about making decisions “better,” he doesn’t just mean more effective. In his view,

Scholars cannot be aloof from the real-world implications of their work, but must think carefully about the potential uses of forecasts…If social scientists will not use their research to engage in policy debates about when to strike, provide aid, deploy troops, and so on, others will do so for them.  Conflict forecasting should not be seen as value-neutral by the academic community—it will certainly not be seen as such by others.

On this point, I agree completely, but I don’t think there’s anything unique about conflict forecasting in this regard. No scholarship is entirely value neutral, and research on causal inference informs policy decisions, too. In fact, my experience is that policy frames suggested by compelling causal analysis have deeper and more durable influence than statistical forecasts, which most policymakers still seem inclined to ignore.

One prominent example comes from the research program that emerged in the 2000s on the relationship between natural resources and the occurrence and persistence of armed conflict. After Paul Collier and Anke Hoeffler famously identified “greed” as an important impetus to civil war (here), numerous scholars showed that some rebel groups were using “lootable” resources to finance their insurgencies. These studies helped inspire advocacy campaigns that led, among other things, to U.S. legislation aimed at restricting trade in “conflict minerals” from the Democratic Republic of Congo. Now, several years later, other scholars and advocates have convincingly shown that this legislation was counterproductive. According to Laura Seay (here), the U.S. law

has created a de facto ban on Congolese mineral exports, put anywhere from tens of thousands up to 2 million Congolese miners out of work in the eastern Congo, and, despite ending most of the trade in Congolese conflict minerals, done little to improve the security situation or the daily lives of most Congolese.

Those are dire consequences, and forecasting is nowhere in sight. I don’t blame Collier and Hoeffler or the scholars who followed their intellectual lead on this topic for Dodd-Frank 1502, but I do hope and expect that those scholars will participate in the public conversation around related policy choices.

Ultimately, we all have a professional and ethical responsibility for the consequences of our work. For statistical forecasters, I think this means, among other things, a responsibility to be honest about the limitations, and to attend to the uses, of the forecasts we produce. The fact that we use mathematical equations to generate our forecasts and we can quantify our uncertainty doesn’t always mean that our forecasts are more accurate or more precise than what pundits offer, and it’s incumbent on us to convey those limitations. It’s easy to model things. It’s hard to model them well, and sometimes hard to spot the difference. We need to try to recognize which of those worlds we’re in and to communicate our conclusions about those aspects of our work along with our forecasts. (N.B. It would be nice if more pundits tried to abide by this rule as well. Alas, as Phil Tetlock points out in Expert Political Judgment, the market for this kind of information rewards other things.)
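To make that accuracy point concrete, here is a minimal sketch, in Python, of the kind of scoring Tetlock applies to expert judgment: comparing probabilistic forecasts to observed outcomes with a Brier score. Everything in it is hypothetical; the `brier_score` helper and all the numbers are invented for illustration and come from no real model or pundit.

```python
# A toy illustration of scoring probabilistic forecasts against outcomes.
# All numbers are hypothetical; nothing here reflects a real model or pundit.

def brier_score(forecasts, outcomes):
    """Mean squared error between forecast probabilities and binary
    outcomes (0 = event did not occur, 1 = it did). Lower is better;
    a constant forecast of 0.5 always scores 0.25."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical one-year conflict-onset forecasts for five cases.
model = [0.05, 0.40, 0.10, 0.70, 0.20]   # a statistical model's probabilities
pundit = [0.0, 1.0, 0.0, 1.0, 1.0]       # a pundit's confident yes/no calls
observed = [0, 1, 0, 0, 1]               # what actually happened

print(f"model:  {brier_score(model, observed):.3f}")   # ~0.30
print(f"pundit: {brier_score(pundit, observed):.3f}")  # ~0.20
```

Note that in this invented example the confident yes/no caller happens to outscore the probabilistic model. That is the point: putting equations behind a forecast doesn’t settle the accuracy question; only honest scoring against outcomes does.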

Salehyan doesn’t just make this general point, however. He also argues that scholars who produce statistical forecasts have a special obligation to attend to the ethics of policy informed by their work because, in his view, they are likely to be more influential.

The same scientific precision that makes statistical forecasts better than ‘gut feelings’ makes it even more imperative for scholars to engage in policy debates.  Because statistical forecasts are seen as more scientific and valid they are likely to carry greater weight in the policy community.  I would expect—indeed hope—that scholars care about how their research is used, or misused, by decision makers.  But claims to objectivity and coolheaded scientific-ness make many academics reluctant to advocate for or against a policy position.

In my experience and the experience of every policy veteran with whom I’ve ever spoken about the subject, Salehyan’s conjecture that “statistical forecasts are likely to carry greater weight in the policy community” is flat wrong. In many ways, the intellectual culture within the U.S. intelligence and policy communities mirrors the intellectual culture of the larger society from which their members are drawn. If you want to know how those communities react to statistical forecasts of the things they care about, just take a look at the public discussion around Nate Silver’s election forecasts. The fact that statistical forecasts aren’t blithely and blindly accepted doesn’t absolve statistical forecasters of responsibility for their work. Ethically speaking, though, it matters that we’re nowhere close to the world Salehyan imagines in which the layers of deliberation disappear and a single statistical forecast drives a specific foreign policy decision.

Look, these decisions are going to be made whether or not we produce statistical forecasts, and when they are made, they will be informed by many things, of which forecasts—statistical or otherwise—will be only one. That doesn’t relieve the forecaster of ethical responsibility for the potential consequences of his or her work. It just means that the forecaster doesn’t have a unique obligation in this regard. In fact, if anything, I would think we have an ethical obligation to help make those forecasts as accurate as we can in order to reduce as much as we can the uncertainty about this one small piece of the decision process. It’s a policymaker’s job to confront these kinds of decisions, and their choices are going to be informed by expectations about the probability of various alternative futures. Given that fact, wouldn’t we rather those expectations be as well informed as possible? I sure think so, and I’m not the only one.


12 Comments

  1. Rex Brynen / July 29, 2013

    Great post, Jay. I added some thoughts of my own in the comments section at The Monkey Cage (http://themonkeycage.org/2013/07/28/32299/#comment-66244).

  2. Thanks for the thoughtful post. I completely agree that statistical forecasts are just one policy tool among many and that they do not currently have greater “weight” than other tools. I should have been clear that if we do get to a world in which forecasts are far more accurate than not, their rigor will and should be seen as more valid. We aren’t currently at the phase where conflict forecasting has the same “punch” as, say, economic forecasts, but if and when we get better at it, I can see forecasts being far more influential than they are now. Of course, agencies such as DARPA are making large bets on the ability of forecasts to improve conflict-readiness.

  3. Nice post. I can understand that one would be uncomfortable, on a personal level, with making forecasts that can potentially influence real-world decisions, but this risk is ultimately just part of doing work that is relevant. And let’s not forget the flip side, which you mention, of producing forecasts that are more accurate than, or otherwise add value to, what decision-makers currently rely on.

  4. Grant / July 29, 2013

    It seems too demanding to hold forecasting to a higher ethical standard than other research. As a hypothetical, what if I were to publish a paper that convincingly used statistical forecasting to argue that the dreaded Sunni-Shia war in the Middle East was about to happen? The American government might take that to mean that we should send more soldiers to protect American interests, and those soldiers might accidentally start that war. On the other hand, the American government might pressure allies to take steps to reduce tensions and so avoid the war altogether. I have no way of knowing which might happen. Indeed, I would have no guarantee whatsoever that either would even be the most likely result of my research.

    It seems to me that researchers have a greater responsibility to research and to publish. If they don’t, then we’re wandering around partly blind, and so more likely to make a terrible mistake.

    • Rex Brynen / July 29, 2013

      Grant raises an interesting question about our broader ethical obligations as researchers, one that would make a great blog post in and of itself.

      Let’s say, for example, that one’s research generated findings that showed the systematic torture of detainees was quite effective in deterring certain types of militant political behaviour. Ought one to publish the findings, or refrain from doing so for fear that regimes might use them to justify or even expand such activities?

      In fact, this isn’t a hypothetical example: one of my graduate students was in precisely that situation. He chose, quite wisely in my view, to de-emphasize the torture findings. Having been a political prisoner himself, he was particularly reluctant to publish anything, no matter how firmly based in solid social science research and no matter how significant to scholarship, that seemed so open to potential moral misuse.

      • Oral Hazard / July 30, 2013

        I often wonder where the repositories of the dark arts of torture and manipulative sleight-of-hand are kept. This kind of academic self-censorship can have unintended consequences: allowing such compendia accumulated from earliest history to remain in the shadows, rather than be brought to the fore and addressed in the sunlight. Is it really any surprise to certain types and power structures that human rights abuses can serve as a means to an end? Science shouldn’t shy away from topics that aren’t fit for polite conversation. I’m a bit tempted to poke fun at you political scientists by reminding you that you have a long way to go to catch up to the physicists and biologists in the area of disclosing knowledge that may be seriously abused.

      • Grant / July 31, 2013

        I can certainly understand having a desire to avoid giving people excuses to use torture, but I have to admit I was actually arguing the opposite. I don’t know what consequences may come of publishing research, but in nine out of ten cases I would say that as long as the study was well done it should be published. Speaking purely for myself, I fear the consequences of ignorance more than the dangers of misused knowledge.

  5. Thanks for writing, Jay. This resonates a lot with my experience in the conflict early warning/response space.

