At the Monkey Cage, Idean Salehyan has a guest post that asks, “Can forecasting conflict help to make better foreign policy decisions?” I started to respond in a comment there, but as my comment ballooned into several paragraphs and started to include hyperlinks, I figured I’d go ahead and blog it.
Let me preface my response by saying that I've spent most of my 16-year career since graduate school doing statistical forecasting, first for the U.S. government and now for wider audiences, and I plan to keep doing this kind of work for a while. That means I have a lot of experience doing it and thinking about how and why to do it, but it also means that I'm financially invested in an affirmative answer to Salehyan's rhetorical question. Make of that what you will.
So, on to the substance. Salehyan’s main concern is actually an ethical one, not the pragmatic one I inferred when I first saw the title of his post. When Salehyan asks about making decisions “better,” he doesn’t just mean more effective. In his view,
Scholars cannot be aloof from the real-world implications of their work, but must think carefully about the potential uses of forecasts…If social scientists will not use their research to engage in policy debates about when to strike, provide aid, deploy troops, and so on, others will do so for them. Conflict forecasting should not be seen as value-neutral by the academic community—it will certainly not be seen as such by others.
On this point, I agree completely, but I don’t think there’s anything unique about conflict forecasting in this regard. No scholarship is entirely value neutral, and research on causal inference informs policy decisions, too. In fact, my experience is that policy frames suggested by compelling causal analysis have deeper and more durable influence than statistical forecasts, which most policymakers still seem inclined to ignore.
One prominent example comes from the research program that emerged in the 2000s on the relationship between natural resources and the occurrence and persistence of armed conflict. After Paul Collier and Anke Hoeffler famously identified “greed” as an important impetus to civil war (here), numerous scholars showed that some rebel groups were using “lootable” resources to finance their insurgencies. These studies helped inspire advocacy campaigns that led, among other things, to U.S. legislation aimed at restricting trade in “conflict minerals” from the Democratic Republic of Congo. Now, several years later, other scholars and advocates have convincingly shown that this legislation was counterproductive. According to Laura Seay (here), the U.S. law
has created a de facto ban on Congolese mineral exports, put anywhere from tens of thousands up to 2 million Congolese miners out of work in the eastern Congo, and, despite ending most of the trade in Congolese conflict minerals, done little to improve the security situation or the daily lives of most Congolese.
Those are dire consequences, and forecasting is nowhere in sight. I don’t blame Collier and Hoeffler or the scholars who followed their intellectual lead on this topic for Dodd-Frank 1502, but I do hope and expect that those scholars will participate in the public conversation around related policy choices.
Ultimately, we all bear professional and ethical responsibility for the consequences of our work. For statistical forecasters, I think this means, among other things, a responsibility to be honest about the limitations of the forecasts we produce and to attend to their uses. The fact that we use mathematical equations to generate our forecasts and can quantify our uncertainty doesn't guarantee that our forecasts are more accurate or more precise than what pundits offer, and it's incumbent on us to convey those limitations. It's easy to model things. It's hard to model them well, and sometimes hard to spot the difference. We need to try to recognize which of those worlds we're in and to communicate that assessment along with our forecasts. (N.B. It would be nice if more pundits abided by this rule as well. Alas, as Phil Tetlock points out in Expert Political Judgment, the market for this kind of information rewards other things.)
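To make "spotting the difference" a little more concrete: one basic discipline is to score forecasts against what actually happened rather than just reporting that a model produced them. Below is a minimal sketch in Python of two such checks, a Brier score and a crude calibration table. The forecast and outcome numbers are invented for illustration; only the scoring logic matters.

```python
# Minimal sketch: scoring probabilistic forecasts against observed outcomes.
# The forecasts and outcomes below are made-up illustrative numbers, not real data.

def brier_score(forecasts, outcomes):
    """Mean squared error between predicted probabilities and binary outcomes.
    0 is perfect; a constant 0.5 forecast earns 0.25."""
    return sum((p - y) ** 2 for p, y in zip(forecasts, outcomes)) / len(forecasts)

def calibration_table(forecasts, outcomes, n_bins=5):
    """Group forecasts into probability bins and compare each bin's average
    forecast to the observed event rate. Well-calibrated forecasts match."""
    bins = [[] for _ in range(n_bins)]
    for p, y in zip(forecasts, outcomes):
        i = min(int(p * n_bins), n_bins - 1)  # clamp p = 1.0 into the top bin
        bins[i].append((p, y))
    rows = []
    for i, cell in enumerate(bins):
        if cell:
            mean_p = sum(p for p, _ in cell) / len(cell)
            rate = sum(y for _, y in cell) / len(cell)
            rows.append((i, len(cell), mean_p, rate))
    return rows

# Hypothetical probabilities of some event (say, conflict onset) and what happened.
forecasts = [0.05, 0.10, 0.20, 0.35, 0.60, 0.80, 0.15, 0.50, 0.90, 0.25]
outcomes  = [0,    0,    0,    1,    1,    1,    0,    0,    1,    0]

print(f"Brier score: {brier_score(forecasts, outcomes):.3f}")
for i, n, mean_p, rate in calibration_table(forecasts, outcomes):
    print(f"bin {i}: n={n}, mean forecast={mean_p:.2f}, observed rate={rate:.2f}")
```

Out-of-sample checks like these won't settle every argument about a model, but they at least separate forecasts that track reality from ones that merely look rigorous.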
Salehyan doesn’t just make this general point, however. He also argues that scholars who produce statistical forecasts have a special obligation to attend to the ethics of policy informed by their work because, in his view, they are likely to be more influential.
The same scientific precision that makes statistical forecasts better than ‘gut feelings’ makes it even more imperative for scholars to engage in policy debates. Because statistical forecasts are seen as more scientific and valid they are likely to carry greater weight in the policy community. I would expect—indeed hope—that scholars care about how their research is used, or misused, by decision makers. But claims to objectivity and coolheaded scientific-ness make many academics reluctant to advocate for or against a policy position.
In my experience and the experience of every policy veteran with whom I’ve ever spoken about the subject, Salehyan’s conjecture that “statistical forecasts are likely to carry greater weight in the policy community” is flat wrong. In many ways, the intellectual culture within the U.S. intelligence and policy communities mirrors the intellectual culture of the larger society from which their members are drawn. If you want to know how those communities react to statistical forecasts of the things they care about, just take a look at the public discussion around Nate Silver’s election forecasts. The fact that statistical forecasts aren’t blithely and blindly accepted doesn’t absolve statistical forecasters of responsibility for their work. Ethically speaking, though, it matters that we’re nowhere close to the world Salehyan imagines in which the layers of deliberation disappear and a single statistical forecast drives a specific foreign policy decision.
Look, these decisions are going to be made whether or not we produce statistical forecasts, and when they are made, they will be informed by many things, of which forecasts—statistical or otherwise—will be only one. That doesn't relieve the forecaster of ethical responsibility for the potential consequences of his or her work; it just means the forecaster's obligation isn't unique. If anything, I would think we have an ethical obligation to make those forecasts as accurate as we can, to reduce the uncertainty around this one small piece of the decision process. It's a policymaker's job to confront these kinds of decisions, and those choices will be informed by expectations about the probability of various alternative futures. Given that, wouldn't we rather those expectations be as well informed as possible? I sure think so, and I'm not the only one.