About That Apparent Decline in Violent Conflict…

Is violent conflict declining, or isn’t it? I’ve written here and elsewhere about evidence that warfare and mass atrocities have waned significantly in recent decades, at least when measured by the number of people killed in those episodes. Not everyone sees the world the same way, though. Bear Braumoeller asserts that, to understand how war-prone the world is, we should look at how likely countries are to use force against politically relevant rivals, and by this measure the rate of warfare has held pretty steady over the past two centuries. Tanisha Fazal argues that wars have become less lethal without becoming less frequent because of medical advances that help keep more people in war zones alive. Where I have emphasized war’s lethal consequences, these two authors emphasize war’s likelihood, but their arguments suggest that violent conflict hasn’t really waned the way I’ve alleged it has.

This week, we got another important contribution to the wider debate in which my shallow contributions are situated. In an updated working paper, Pasquale Cirillo and Nassim Nicholas Taleb claim to show that

Violence is much more severe than it seems from conventional analyses and the prevailing “long peace” theory which claims that violence has declined… Contrary to current discussions…1) the risk of violent conflict has not been decreasing, but is rather underestimated by techniques relying on naive year-on-year changes in the mean, or using sample mean as an estimator of the true mean of an extremely fat-tailed phenomenon; 2) armed conflicts have memoryless inter-arrival times, thus incompatible with the idea of a time trend.

Let me say up front that I only have a weak understanding of the extreme value theory (EVT) models used in Cirillo and Taleb’s paper. I’m a political scientist who uses statistical methods, not a statistician, and I have neither studied nor tried to use the specific techniques they employ.
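That said, the narrower point in the quotation about the sample mean of a fat-tailed phenomenon is easy to illustrate without any EVT. Here is a toy demonstration of my own (not from their paper): for a Pareto distribution with a tail exponent just above 1—extremely fat-tailed, but with a finite true mean—the sample mean of even a large sample falls below the true mean most of the time, so naive averaging systematically understates the risk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pareto with tail exponent just above 1: extremely fat-tailed, but the
# true mean is finite: alpha / (alpha - 1) with scale 1.
ALPHA = 1.1
TRUE_MEAN = ALPHA / (ALPHA - 1)  # about 11.0

# 5,000 independent samples of 1,000 draws each; 1 + rng.pareto(a) gives
# a classical Pareto variate with minimum 1 and tail exponent a.
sample_means = (1 + rng.pareto(ALPHA, size=(5_000, 1_000))).mean(axis=1)

# Share of samples whose mean understates the true mean
share_below = (sample_means < TRUE_MEAN).mean()
print(round(share_below, 2))
```

In runs like this, the large majority of samples understate the true mean, because that mean is carried by rare, enormous draws that most samples never see.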

Bearing that in mind, I think the paper successfully undercuts the most optimistic view about the future of violent conflict—that violent conflict has inexorably and permanently declined—but then I don’t know many people who actually hold that view. Most of the work on this topic distinguishes between the observed fact of a substantial decline in the rate of deaths from political violence and the underlying risk of those deaths and the conflicts that produce them. We can (partly) see the former, but we can’t see the latter; instead, we have to try to infer it from the conflicts that occur. Observed history is, in a sense, a single sample drawn from a distribution of many possible histories, and, like all samples, this one is only a jittery snapshot of the deeper data-generating process in which we’re really interested. What Cirillo and Taleb purport to show is that long sequences of relative peace like the one we have seen in recent history are wholly consistent with a data-generating process in which the risk of war and death from it have not really changed at all.
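A toy simulation makes that claim concrete. The sketch below is my own construction, not Cirillo and Taleb’s model: it treats onsets of very large wars as a memoryless Poisson process with a constant rate—one such war every 25 years, say—and asks how often a 200-year history generated that way still contains a 70-year “long peace.”

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical constant risk: one very large war every 25 years, on average,
# over a 200-year window, with memoryless (Poisson) onsets.
RATE = 1 / 25.0
YEARS = 200
N_SIMS = 10_000

def longest_gap(rate, years, rng):
    """Longest spell with no war onset in one simulated history."""
    n = rng.poisson(rate * years)
    onsets = np.sort(rng.uniform(0, years, size=n))
    edges = np.concatenate(([0.0], onsets, [float(years)]))
    return np.diff(edges).max()

gaps = np.array([longest_gap(RATE, YEARS, rng) for _ in range(N_SIMS)])

# Share of unchanged-risk histories that still contain a 70-year lull
share_with_long_peace = (gaps >= 70).mean()
print(round(share_with_long_peace, 2))
```

Under these made-up parameters, a substantial share of constant-risk histories contain a lull at least as long as the post-1945 one, which is the sense in which an observed long peace can be wholly consistent with unchanged risk.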

Of course, the fact that a decades-long decline in violent conflict like the one we’ve seen since World War II could happen by chance doesn’t necessarily mean that it is happening by chance. The situation is not dissimilar to one we see in sports when a batter or shooter seems to go cold for a while. Oftentimes that cold streak will turn out to be part of the normal variation in performance, and the athlete will eventually regress to the mean—but not every time. Sometimes, athletes really do get and stay worse, maybe because of aging or an injury or some other life change, and the cold streak we see is the leading edge of that sustained decline. The hard part is telling in real time which process is happening. To try to do that, we might look for evidence of those plausible causes, but humans are notoriously good at spotting patterns where there are none, and at telling ourselves stories about why those patterns are occurring that turn out to be bunk.

The same logic applies to thinking about trends in violent conflict. Maybe the downward trend in observed death rates is just a chance occurrence in an unchanged system, but maybe it isn’t. And, as Andrew Gelman told Zach Beauchamp, the statistics alone can’t answer this question. Cirillo and Taleb’s analysis, and Braumoeller’s before it, imply that the history we’ve seen in the recent past is about as likely as any other, but that fact isn’t proof of its randomness. Just as rare events sometimes happen, so do systemic changes.

Claims that “This time really is different” are usually wrong, so I think the onus is on people who believe the underlying risk of war is declining to make a compelling argument about why that’s true. When I say “compelling,” I mean an argument that a) identifies specific causal mechanisms and b) musters evidence of change over time in the presence or prevalence of those mechanisms. That’s what Steven Pinker tries at great length to do in The Better Angels of Our Nature, and what Joshua Goldstein did in Winning the War on War.

My own thinking about this issue connects the observed decline in the intensity of violent conflict to the rapid increase in the past 100+ years in the size and complexity of the global economy and the changes in political and social institutions that have co-occurred with it. No, globalization is not new, and it certainly didn’t stop the last two world wars. Still, I wonder if the profound changes of the past two centuries are accumulating into a global systemic transformation akin to the one that occurred locally in now-wealthy societies in which organized violent conflict has become exceptionally rare. Proponents of democratic peace theory see a similar pattern in the recent evidence, but I think they are too quick to give credit for that pattern to one particular stream of change that may be as much consequence as cause of the deeper systemic transformation. I also realize that this systemic transformation is producing negative externalities—climate change and heightened risks of global pandemics, to name two—that could offset the positive externalities or even lead to sharp breaks in other directions.

It’s impossible to say which, if any, of these versions is “true,” but the key point is that we can find real-world evidence of mechanisms that could be driving down the underlying risk of violent conflict. That evidence, in turn, might strengthen our confidence in the belief that the observed pattern has meaning, even if it doesn’t and can’t prove that meaning or any of the specific explanations for it.

Finally, without deeply understanding the models Cirillo and Taleb used, I also wondered when I first read their new paper if their findings weren’t partly an artifact of those models, or maybe some assumptions the authors made when specifying them. The next day, David Roodman wrote something that reinforced this suspicion. According to Roodman, the EVT models employed by Cirillo and Taleb can be used to test for time trends, but the ones described in this new paper don’t. Instead, Cirillo and Taleb specify their models in a way that assumes there is no time trend and then use them to confirm that there isn’t. “It seems to me,” Roodman writes, “that if Cirillo and Taleb want to rule out a time trend according to their own standard of evidence, then they should introduce one in their EVT models and test whether it is statistically distinguishable from zero.”
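To make Roodman’s recommendation concrete, here is what the analogous exercise looks like in a much simpler setting than Cirillo and Taleb’s EVT models: an exponential inter-arrival model whose log-rate is allowed to drift linearly over time, fit by maximum likelihood and compared to the constant-rate version with a likelihood-ratio test. The data here are simulated, and this is only a sketch of the logic, not their machinery.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import chi2

rng = np.random.default_rng(1)

# Stand-in data: 40 conflict onsets over 200 years drawn from a
# constant-rate process (real onset dates would go here).
onsets = np.sort(rng.uniform(0, 200, size=40))
waits = np.diff(onsets)  # inter-arrival times
t = onsets[:-1]          # time at the start of each wait

def neg_loglik(params, trend=True):
    # With trend=False, the drift term b is pinned at zero.
    a, b = params if trend else (params[0], 0.0)
    rate = np.exp(a + b * t)  # hazard allowed to drift in log-rate
    return -(np.log(rate) - rate * waits).sum()

fit0 = minimize(lambda p: neg_loglik(p, trend=False),
                x0=[0.0], method="Nelder-Mead")
fit1 = minimize(neg_loglik, x0=[0.0, 0.0], method="Nelder-Mead")

# Likelihood-ratio test: is the trend term distinguishable from zero?
lr = 2 * (fit0.fun - fit1.fun)
p_value = chi2.sf(max(lr, 0.0), df=1)
print(round(p_value, 3))
```

The point of the exercise is the comparison itself: estimate the trend term rather than assuming it away, and then ask whether the data can tell it apart from zero.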

If Roodman is correct on this point, and if Cirillo and Taleb were to do what he recommends and still find no evidence of a time trend, I would update my beliefs accordingly. In other words, I would worry a little more than I do now about the risk of much larger and deadlier wars occurring again in my expected lifetime.

A Note on Trends in Armed Conflict

In a report released earlier this month, the Project for the Study of the 21st Century (PS21) observed that “the body count from the top twenty deadliest wars in 2014 was more than 28% higher than in the previous year.” They counted approximately 163 thousand deaths in 2014, up from 127 thousand in 2013. The report described that increase as “part of a broader multi-year trend” that began in 2007. The project’s executive director, Peter Apps, also appropriately noted that “assessing casualty figures in conflict is notoriously difficult and many of the figures we are looking at here are probably underestimates.”

This is solid work. I do not doubt the existence of the trend it identifies. That said, I would also encourage us to keep it in perspective:

That chart (source) ends in 2005. The Uppsala Conflict Data Program (UCDP) hasn’t updated its widely used data set on battle-related deaths for 2014 yet, but from last year’s edition, we can see the tail end of that longer period, as well as the start of the recent upward trend PS21 identifies. In this chart—R script here—the solid line marks the annual, global sums of their best estimates, and the dotted lines show the sums of the high and low estimates:
Annual, global battle-related deaths, 1989-2013 (Data source: UCDP)

If we mentally tack that chart onto the end of the one before it, we can also see that the increase of the past few years has not yet broken the longer spell of relatively low numbers of battle deaths. Not even close. The peak around 2000 in the middle of the nearer chart is a modest bump in the farther one, and the upward trend we’ve seen since 2007 has not yet matched even that local maximum. This chart stops at the end of 2013, but if we used the data assembled by PS21 for the past year to project an increase in 2014, we’d see that we’re still in reasonably familiar territory.

Both of these things can be true. We could be—we are—seeing a short-term increase that does not mark the end of a longer-term ebb. The global economy has grown fantastically since the 1700s, and yet it still suffers serious crises and recessions. The planet has warmed significantly over the past century, but we still see some unusually cool summers and winters.

Lest this sound too sanguine at a time when armed conflict is waxing, let me add two caveats.

First, the picture from the recent past looks decidedly worse if we widen our aperture to include deliberate killings of civilians outside of battle. UCDP keeps a separate data set on that phenomenon—here—which they label “one-sided” violence. If we add the fatalities tallied in that data set to the battle-related ones summarized in the previous plot, here is what we get:

Annual, global battle-related deaths and deaths from one-sided violence, 1989-2013 (Data source: UCDP)

Note the difference in the scale of the y-axis; it is an order of magnitude larger than the one in the previous chart. At this scale, the peaks and valleys in battle-related deaths from the past 25 years get smoothed out, and a single peak—the Rwandan genocide—dominates the landscape. That peak is still much lower than the massifs marking the two World Wars in the first chart, but it is huge nonetheless. Hundreds of thousands of people were killed in a matter of months.
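For anyone who wants to rebuild these series from the raw tables, the aggregation behind both charts is just a year-wise sum. Here is a minimal sketch with a made-up, UCDP-shaped table; the column names are illustrative, not UCDP’s actual headers.

```python
import pandas as pd

# A made-up, UCDP-shaped table: one row per conflict-year. Column names
# are illustrative, not UCDP's actual headers.
battle = pd.DataFrame({
    "year":    [2012, 2012, 2013, 2013],
    "bd_best": [2000,  500, 3000,  800],
    "bd_low":  [1500,  400, 2500,  700],
    "bd_high": [2600,  700, 3800,  900],
})
one_sided = pd.DataFrame({
    "year":            [2012, 2013],
    "fatalities_best": [ 300,  450],
})

# Annual global sums of the best/low/high battle-death estimates
annual = battle.groupby("year")[["bd_best", "bd_low", "bd_high"]].sum()

# ...and the combined series: battle deaths plus one-sided violence
combined = annual["bd_best"].add(
    one_sided.set_index("year")["fatalities_best"], fill_value=0
)
print(combined.to_dict())
```

The `fill_value=0` matters in practice: a year with battle deaths but no recorded one-sided violence (or vice versa) should contribute its partial total rather than drop out as missing.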

Second, the long persistence of this lower rate does not prove that the risk of violent conflict on the scale of the two World Wars has been reduced permanently. As Bear Braumoeller (here) and Nassim Nicholas Taleb (here; I link reluctantly, because I don’t care for the scornful and condescending tone) have both pointed out, a single war between great powers could end or even reverse this trend, and it is too soon to say with any confidence whether or not the risk of that happening is much lower than it used to be. Like many observers of international relations, I think we need to see how the system processes the (relative) rise of China and declines of Russia and the United States before updating our beliefs about the risk of major wars. As someone who grew up during the Cold War and was morbidly fascinated by the possibility of nuclear conflagration, I think we also need to remember how close we came to nuclear war on some occasions during that long spell, and to ponder how absurdly destructive and terrible that would be.

Strictly speaking, I’m not an academic, but I do a pretty good impersonation of one, so I’ll conclude with a footnote to that second caveat: I did not attribute the idea that the risk of major war is a thing of the past to Steven Pinker, as some do, because as Pinker points out in a written response to Taleb (here), he does not make precisely that claim, and his wider point about a long-term decline in human violence does not depend entirely on an ebb in warfare persisting. It’s hard to see how Pinker’s larger argument could survive a major war between nuclear powers, but then if that happened, who would care one way or another if it had?

Some Suggested Readings for Political Forecasters

A few people have recently asked me to recommend readings on political forecasting for people who aren’t already immersed in the subject. Since the question keeps coming up, I thought I’d answer with a blog post. Here, in no particular order, are books (and one article) I’d suggest to anyone interested in the subject.

Thinking, Fast and Slow, by Daniel Kahneman. A really engaging read on how we think, with special attention to cognitive biases and heuristics. I think forecasters should read it in hopes of finding ways to mitigate the effects of these biases on their own work, and of getting better at spotting them in the thinking of others.

Numbers Rule Your World, by Kaiser Fung. Even if you aren’t going to use statistical models to forecast, it helps to think statistically, and Fung’s book is the most engaging treatment of that topic that I’ve read so far.

The Signal and the Noise, by Nate Silver. A guided tour of how forecasters in a variety of fields do their work, with some useful general lessons on the value of updating and being an omnivorous consumer of relevant information.

The Theory that Would Not Die, by Sharon Bertsch McGrayne. A history of Bayesian statistics in the real world, including successful applications to some really hard prediction problems, like the risk of accidents with atomic bombs and nuclear power plants.

The Black Swan, by Nassim Nicholas Taleb. If you can get past the derisive tone—and I’ll admit, I initially found that hard to do—this book does a great job explaining why we should be humble about our ability to anticipate rare events in complex systems, and how forgetting that fact can hurt us badly.

Expert Political Judgment: How Good Is It? How Can We Know?, by Philip Tetlock. The definitive study to date on the limits of expertise in political forecasting and the cognitive styles that help some experts do a bit better than others.

Counterfactual Thought Experiments in World Politics, edited by Philip Tetlock and Aaron Belkin. The introductory chapter is the crucial one. It’s ostensibly about the importance of careful counterfactual reasoning to learning from history, but it applies just as well to thinking about plausible futures, an important skill for forecasting.

The Foundation Trilogy, by Isaac Asimov. A great fictional exploration of the Modernist notion of social control through predictive science. These books were written half a century ago, and it’s been more than 25 years since I read them, but they’re probably more relevant than ever, what with all the talk of Big Data and the Quantified Self and such.

“The Perils of Policy by P-Value: Predicting Civil Conflicts,” by Michael Ward, Brian Greenhill, and Kristin Bakke. This one’s really for practicing social scientists, but still. The point is that the statistical models we typically construct for hypothesis testing often won’t be very useful for forecasting, so proceed with caution when switching between tasks. (The fact that they often aren’t very good for hypothesis testing, either, is another matter. On that and many other things, see Phil Schrodt’s “Seven Deadly Sins of Contemporary Quantitative Political Analysis.“)

I’m sure I’ve missed a lot of good stuff and would love to hear more suggestions from readers.

And just to be absolutely clear: I don’t make any money if you click through to those books or buy them or anything like that. The closest thing I have to a material interest in this list are ongoing professional collaborations with three of the authors listed here: Phil Tetlock, Phil Schrodt, and Mike Ward.

Forecasting Round-Up

I don’t usually post lists of links, but the flurry of great material on forecasting that hit my screen over the past few days is inspiring me to make an exception. Here in no particular order are several recent pieces that deserve wide reading:

  • “The Weatherman Is Not a Moron.” Excerpted from a forthcoming book by the New York Times’ Nate Silver, this piece deftly uses meteorology to illustrate the difficulties of forecasting in complex systems and some of the ways working forecasters deal with them. For a fantastic intellectual history of the development of the ensemble forecasting approach Silver discusses, see this July 2005 journal article by John Lewis in the American Meteorological Society’s Monthly Weather Review.
  • “Trending Upward.” Michael Horowitz and Phil Tetlock write for Foreign Policy about how the U.S. “intelligence community” can improve its long-term forecasting. The authors focus on the National Intelligence Council’s Global Trends series, which attempts the Herculean (or maybe Sisyphean) feat of trying to peer 15 years into the future, but the recommendations they offer apply to most forecasting exercises that rely on expert judgment. And, on the Duck of Minerva blog, Jon Western pushes back: “I think there is utility in long-range forecasting exercises, I’m just not sure I see any real benefits from improved accuracy on the margins. There may actually be some downsides.” [Disclosure: Since this summer, I have been a member of Tetlock and Horowitz’s team in the IARPA-funded forecasting competition they mention in the article.]
  • “Theories, Models, and the Future of Science.” This post by Ashutosh Jogalekar on Scientific American‘s Curious Waveform blog argues that “modeling and simulation are starting to be considered as a respectable ‘third leg’ of science, in addition to theory and experiment.” Why? Because “many of science’s greatest current challenges may not be amenable to rigorous theorizing, and we may have to treat models of phenomena as independent, authoritative explanatory entities in their own right.” Like Trey Causey, who pointed me toward this piece on Twitter, I think the post draws a sharper distinction between modeling for simulation and explanation than it needs to, but it’s a usefully provocative read.
  • “The Probabilities of Large Terrorist Events.” I recently finished Nassim Nicholas Taleb’s Black Swan and was looking around for worked examples applying that book’s idea of “fractal randomness” to topics I study. Voilà! On Friday, Wired‘s Social Dimensions blog spotlighted a recent paper by Aaron Clauset and Ryan Woodard that uses a mix of statistical techniques, including power-law models, to estimate the risk of this particular low-probability, high-impact political event. Their approach—model only the tail of the distribution and use an ensemble like the aforementioned meteorologists do—seems really clever to me, and I like how they are transparent about the uncertainty of the resulting estimates.
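The core of that tail-modeling move can be sketched in a few lines. What follows is synthetic data and the textbook continuous-MLE (“Hill”) estimator—not the paper’s actual ensemble procedure: fit a power law only to events at or above a threshold xmin, with alpha_hat = 1 + n / Σ ln(x_i / xmin).

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic "event severity" data with a power-law tail: survival function
# (x / XMIN) ** -(TRUE_ALPHA - 1), i.e. density exponent TRUE_ALPHA.
XMIN = 10.0
TRUE_ALPHA = 2.5
x = XMIN * (1 + rng.pareto(TRUE_ALPHA - 1, size=5_000))

# Continuous-MLE ("Hill") estimate of the tail exponent, fit only to
# events at or above the threshold xmin
tail = x[x >= XMIN]
alpha_hat = 1 + tail.size / np.log(tail / XMIN).sum()
print(round(alpha_hat, 2))
```

With a sample this size, the estimate lands close to the true exponent; with the handful of extreme events real data provide, the uncertainty is far larger, which is why transparency about it matters.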