About That Apparent Decline in Violent Conflict…

Is violent conflict declining, or isn’t it? I’ve written here and elsewhere about evidence that warfare and mass atrocities have waned significantly in recent decades, at least when measured by the number of people killed in those episodes. Not everyone sees the world the same way, though. Bear Braumoeller asserts that, to understand how war-prone the world is, we should look at how likely countries are to use force against politically relevant rivals, and by this measure the rate of warfare has held pretty steady over the past two centuries. Tanisha Fazal argues that wars have become less lethal without becoming less frequent because of medical advances that help keep more people in war zones alive. Where I have emphasized war’s lethal consequences, these two authors emphasize war’s likelihood, and their arguments suggest that violent conflict hasn’t really waned the way I’ve alleged it has.

This week, we got another important contribution to the wider debate in which my shallow contributions are situated. In an updated working paper, Pasquale Cirillo and Nassim Nicholas Taleb claim to show that

Violence is much more severe than it seems from conventional analyses and the prevailing “long peace” theory which claims that violence has declined… Contrary to current discussions…1) the risk of violent conflict has not been decreasing, but is rather underestimated by techniques relying on naive year-on-year changes in the mean, or using sample mean as an estimator of the true mean of an extremely fat-tailed phenomenon; 2) armed conflicts have memoryless inter-arrival times, thus incompatible with the idea of a time trend.

Let me say up front that I only have a weak understanding of the extreme value theory (EVT) models used in Cirillo and Taleb’s paper. I’m a political scientist who uses statistical methods, not a statistician, and I have neither studied nor tried to use the specific techniques they employ.

Bearing that in mind, I think the paper successfully undercuts the most optimistic view about the future of violent conflict—that violent conflict has inexorably and permanently declined—but then I don’t know many people who actually hold that view. Most of the work on this topic distinguishes between the observed fact of a substantial decline in the rate of deaths from political violence and the underlying risk of those deaths and the conflicts that produce them. We can (partly) see the former, but we can’t see the latter; instead, we have to try to infer it from the conflicts that occur. Observed history is, in a sense, a single sample drawn from a distribution of many possible histories, and, like all samples, this one is only a jittery snapshot of the deeper data-generating process in which we’re really interested. What Cirillo and Taleb purport to show is that long sequences of relative peace like the one we have seen in recent history are wholly consistent with a data-generating process in which the risk of war and death from it have not really changed at all.
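
To make that abstraction concrete, here is a minimal simulation sketch of my own, not anything from Cirillo and Taleb’s paper, in which the annual probability of a great-power-war onset never changes; every number in it is arbitrary. The point is just to see how often a long spell of peace emerges from a process whose underlying risk is constant.

```python
import numpy as np

rng = np.random.default_rng(42)

# A stationary world: every year carries the same (made-up) probability
# of a great-power-war onset, so the underlying risk never changes.
P_ONSET, YEARS, SIMS = 0.05, 200, 10_000
RUN = 70  # a "long peace" of 70 consecutive war-free years

def longest_peace(onsets):
    """Length of the longest run of war-free years in one history."""
    best = current = 0
    for war in onsets:
        current = 0 if war else current + 1
        best = max(best, current)
    return best

histories = rng.random((SIMS, YEARS)) < P_ONSET
share = np.mean([longest_peace(h) >= RUN for h in histories])
print(f"Unchanged-risk histories containing a {RUN}-year peace: {share:.1%}")
```

With these toy numbers, a nontrivial share of the simulated histories contain a peaceful run as long as the one we have actually observed, even though nothing about the underlying risk ever changed.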

Of course, the fact that a decades-long decline in violent conflict like the one we’ve seen since World War II could happen by chance doesn’t necessarily mean that it is happening by chance. The situation is not dissimilar to one we see in sports when a batter or shooter seems to go cold for a while. Oftentimes that cold streak will turn out to be part of the normal variation in performance, and the athlete will eventually regress to the mean—but not every time. Sometimes, athletes really do get and stay worse, maybe because of aging or an injury or some other life change, and the cold streak we see is the leading edge of that sustained decline. The hard part is telling in real time which process is happening. To try to do that, we might look for evidence of those plausible causes, but humans are notoriously good at spotting patterns where there are none, and at telling ourselves stories about why those patterns are occurring that turn out to be bunk.

The same logic applies to thinking about trends in violent conflict. Maybe the downward trend in observed death rates is just a chance occurrence in an unchanged system, but maybe it isn’t. And, as Andrew Gelman told Zach Beauchamp, the statistics alone can’t answer this question. Cirillo and Taleb’s analysis, and Braumoeller’s before it, imply that the history we’ve seen in the recent past is about as likely as any other, but that fact isn’t proof of its randomness. Just as rare events sometimes happen, so do systemic changes.

Claims that “This time really is different” are usually wrong, so I think the onus is on people who believe the underlying risk of war is declining to make a compelling argument about why that’s true. When I say “compelling,” I mean an argument that a) identifies specific causal mechanisms and b) musters evidence of change over time in the presence or prevalence of those mechanisms. That’s what Steven Pinker tries at great length to do in The Better Angels of Our Nature, and what Joshua Goldstein did in Winning the War on War.

My own thinking about this issue connects the observed decline in the intensity of violent conflict to the rapid increase in the past 100+ years in the size and complexity of the global economy and the changes in political and social institutions that have co-occurred with it. No, globalization is not new, and it certainly didn’t stop the last two world wars. Still, I wonder if the profound changes of the past two centuries are accumulating into a global systemic transformation akin to the one that occurred locally in now-wealthy societies in which organized violent conflict has become exceptionally rare. Proponents of democratic peace theory see a similar pattern in the recent evidence, but I think they are too quick to give credit for that pattern to one particular stream of change that may be as much consequence as cause of the deeper systemic transformation. I also realize that this systemic transformation is producing negative externalities—climate change and heightened risks of global pandemics, to name two—that could offset the positive externalities or even lead to sharp breaks in other directions.

It’s impossible to say which, if any, of these versions is “true,” but the key point is that we can find real-world evidence of mechanisms that could be driving down the underlying risk of violent conflict. That evidence, in turn, might strengthen our confidence in the belief that the observed pattern has meaning, even if it doesn’t and can’t prove that meaning or any of the specific explanations for it.

Finally, without deeply understanding the models Cirillo and Taleb used, I also wondered when I first read their new paper if their findings weren’t partly an artifact of those models, or maybe of some assumptions the authors made when specifying them. The next day, David Roodman wrote something that reinforced that suspicion. According to Roodman, the extreme value theory (EVT) models employed by Cirillo and Taleb can be used to test for time trends, but the ones described in this new paper don’t. Instead, Cirillo and Taleb specify their models in a way that assumes there is no time trend and then use them to confirm that there isn’t. “It seems to me,” Roodman writes, “that if Cirillo and Taleb want to rule out a time trend according to their own standard of evidence, then they should introduce one in their EVT models and test whether it is statistically distinguishable from zero.”
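
For readers who want to see what Roodman’s recommendation might look like in practice, here is a minimal sketch, my construction rather than anything in the paper: treat the waits between conflict onsets as exponential with a hazard that may drift log-linearly over time, fit the constant-hazard and trending-hazard versions by maximum likelihood, and compare them with a likelihood-ratio test. The onset years below are toy data.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import chi2

# Toy onset years standing in for a real conflict catalog.
onsets = np.array([1823, 1848, 1853, 1866, 1870, 1877, 1904,
                   1914, 1939, 1950, 1965, 1980, 1991, 2003])
waits = np.diff(onsets).astype(float)        # years between onsets
t = (onsets[:-1] - onsets[0]).astype(float)  # clock at the start of each wait

def neg_loglik(params, trend):
    a, b = params if trend else (params[0], 0.0)
    rate = np.exp(a + b * t)                 # log-linear hazard of a new onset
    return -np.sum(np.log(rate) - rate * waits)  # exponential log-likelihood

null = minimize(neg_loglik, x0=[0.0], args=(False,), method="Nelder-Mead")
alt  = minimize(neg_loglik, x0=[0.0, 0.0], args=(True,), method="Nelder-Mead")

lr = 2 * (null.fun - alt.fun)                # ~ chi2(1) under the no-trend null
print(f"trend b = {alt.x[1]:+.4f}, LR = {lr:.2f}, p = {chi2.sf(lr, 1):.3f}")
```

If a trend coefficient like this one, fit to the real data, proved statistically indistinguishable from zero, that would be a far more direct version of the no-trend claim.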

If Roodman is correct on this point, and if Cirillo and Taleb were to do what he recommends and still find no evidence of a time trend, I would update my beliefs accordingly. In other words, I would worry a little more than I do now about the risk of much larger and deadlier wars occurring again in my expected lifetime.

Comments

  1. Jay, thanks for this thoughtful and perceptive discussion, which demonstrates much deeper knowledge of the substance than my little comment. It prompts me to elaborate on one point I made.

    The bulk of the Cirillo and Taleb analysis does not contain any notion of time. It is analogous to fitting a bell curve to all the wars in the last 2000 years, and then concluding that the experience since 1945 is only 1 standard deviation away from the mean. My point is that such a parametric model can be elaborated to allow, say, the mean and/or log standard deviation to depend linearly on a post-1945 dummy. Fitting this model would provide a sharp test of the long peace hypothesis (at least if the sample were restricted to great powers). This is not done.
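
    To be concrete, here is a minimal sketch of the sort of elaboration I have in mind, with made-up severity figures standing in for the real data: a lognormal severity model whose mean and log standard deviation are allowed to shift after 1945, pitted against the no-shift version with a likelihood-ratio test.

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm, chi2

    # Hypothetical war-death totals (log scale) and a post-1945 indicator.
    log_deaths = np.log([2e5, 8e5, 1.7e7, 6e7, 3e6, 1e6, 5e5, 1.2e6])
    post1945   = np.array([0, 0, 0, 0, 1, 1, 1, 1])

    def neg_loglik(p, with_dummy):
        mu0, logsd0, d_mu, d_logsd = (p if with_dummy else (*p, 0.0, 0.0))
        mu = mu0 + d_mu * post1945                # mean can shift after 1945
        sd = np.exp(logsd0 + d_logsd * post1945)  # and so can the spread
        return -np.sum(norm.logpdf(log_deaths, mu, sd))

    null = minimize(neg_loglik, [14, 0], args=(False,), method="Nelder-Mead")
    alt  = minimize(neg_loglik, [14, 0, 0, 0], args=(True,), method="Nelder-Mead")

    lr = 2 * (null.fun - alt.fun)   # ~ chi2(2): two extra parameters
    print(f"LR = {lr:.2f}, p = {chi2.sf(lr, 2):.3f}")
    ```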

    As I explain, all that is done regarding time patterns amounts to a single graph and a couple of sentences.

    I think this line of yours is quite on-target: “Bearing that in mind, I think the paper successfully undercuts the most optimistic view about the future of violent conflict—that violent conflict has inexorably and permanently declined—but then I don’t know many people who actually hold that view.” I wish Cirillo and Taleb would state precisely what hypothesis they are challenging, demonstrate with citations who espouses it, and then construct a direct statistical test of it.

    Reply
  2. My understanding of the paper regarding the time-trends part is the following. In the language of null hypothesis testing, the alternative hypothesis is that there is a time trend. The null hypothesis is that there is no time trend. It is a null hypothesis because it has fewer assumptions than the time-trend hypothesis.

    The authors construct a null hypothesis using a homogeneous Poisson process, in which inter-arrival times are independent and identically distributed, i.e. the process is memoryless; there is no time trend. If the data can be explained (i.e. could have been produced) by this simple memoryless process, then we cannot reject the null hypothesis that there is no time trend.
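
    Concretely, the homogeneous-Poisson null can be probed with simple diagnostics on the inter-arrival times, along these lines (a sketch with made-up onset years; under the null the waits should be i.i.d. exponential):

    ```python
    import numpy as np
    from scipy.stats import expon, kstest, pearsonr

    # Hypothetical conflict onset years, stand-ins for the real catalog.
    onsets = np.array([1820, 1827, 1848, 1853, 1859, 1866, 1870, 1877, 1904,
                       1912, 1914, 1939, 1950, 1956, 1965, 1980, 1991, 2003])
    waits = np.diff(onsets).astype(float)

    # 1) Do the waits look exponential? (Estimating the scale from the same
    #    data makes this p-value approximate, but it serves as a first look.)
    ks = kstest(waits, expon(scale=waits.mean()).cdf)
    # 2) Are consecutive waits uncorrelated, as memorylessness implies?
    r, p = pearsonr(waits[:-1], waits[1:])

    print(f"KS vs exponential: p = {ks.pvalue:.3f}")
    print(f"lag-1 correlation of waits: r = {r:+.2f} (p = {p:.3f})")
    ```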

    The data may be consistent with a time-trend-based model at the same time, in the same way that some non-temporal data can be consistent with both a Gaussian distribution and a fat-tailed distribution. In the presence of two sufficiently explanatory but conflicting models, one with no time trend and the other with a time trend, the one with fewer assumptions must be accepted, i.e. the no-time-trend model. In the same way, it is unreasonable to assume a Gaussian distribution when both a Gaussian distribution and a fat-tailed distribution can explain the observed data (Taleb shows elsewhere how you can fool the Kolmogorov-Smirnov test using a sample from a fat-tailed distribution; see the sketch below). This is a way to avoid being fooled by randomness, i.e. seeing a stronger pattern (a time trend) when no pattern (no time trend) would explain the data equally well.
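
    The Kolmogorov-Smirnov point is easy to reproduce. In this sketch (my own construction, not Taleb’s), fat-tailed samples are tested against a normal distribution whose parameters are estimated from the same data; that naive procedure is biased toward failing to reject, so fat-tailed samples frequently pass as Gaussian:

    ```python
    import numpy as np
    from scipy.stats import t, norm, kstest

    rng = np.random.default_rng(7)

    # Fat-tailed samples (Student's t, 3 df) vs. a normal fitted to each
    # sample. Estimating the parameters from the data being tested makes
    # the naive KS test too forgiving.
    rejections = 0
    for _ in range(1000):
        x = t.rvs(df=3, size=100, random_state=rng)
        fitted = norm(loc=x.mean(), scale=x.std(ddof=1))
        rejections += kstest(x, fitted.cdf).pvalue < 0.05
    print(f"Rejection rate at the 5% level: {rejections / 1000:.1%}")
    ```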

    I do not see any inconsistency in this methodology.

    Reply
    • Orestis, as far as I can see the reported test of the null of homogeneous Poisson is a single graph (figure 8) and a couple of sentences, which find no serial correlation over 20 years. I don’t think that suffices to end discussion in a whole stream of scholarship.

      I think it would be good for you to define precisely what you mean by “consistent” with a model and “sufficiently explanatory” models. Various models will fit the data more or less well according to various measures of goodness of fit. It seems to me that the rigorous approach when faced with two conflicting models is to fit a general one that encompasses both and then perform classic tests (Wald, likelihood ratio, etc.) to pit one against the other. The operative question is whether adding a post-1945 dummy improves the fit more than would be expected by chance. Or is there specific evidence that such a standard procedure would fail in this case?
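
      Reusing neg_loglik and the toy data from the sketch in my comment above, the Wald version of the same test would look something like this (BFGS’s inverse Hessian is only a rough variance estimate, used here purely for illustration):

      ```python
      # A Wald test of the post-1945 mean shift: coefficient over its
      # approximate standard error, compared to a standard normal.
      alt = minimize(neg_loglik, [14, 0, 0, 0], args=(True,), method="BFGS")
      b, se = alt.x[2], np.sqrt(alt.hess_inv[2, 2])
      print(f"Wald z = {b / se:.2f}, p = {2 * norm.sf(abs(b / se)):.3f}")
      ```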

      If the post-1945 dummy is in fact significant, that would of course speak only to whether there was a trend break and would offer no guarantee of continuance.

      Reply
  3. The Cirillo and Taleb article is an interesting deviation from a previous article by Taleb (http://www.fooledbyrandomness.com/pinker.pdf) for a number of reasons.

    In the previous article, Taleb argues “…that [the] tail exponent α = 1.1, dangerously close to 1.” Whereas in this new article by Taleb and co-authors, they argue that “0.4 ≤ α ≤ .7, thus indicating an extremely fat-tailed phenomenon with an undefined mean (a result that is robustly obtained).”

    This change in conclusion is striking for two reasons: (1) alpha goes from a distribution with a defined mean to one with an undefined mean, and (2) WWII as a singular event should swamp any trend, thus making analyses non-robust to bootstrapping (which should omit WWII from roughly 37% of resamples; see the arithmetic below). WWII dwarfs all other violent interstate conflicts in recent history; whether a sample includes it or not should have a profound effect on the parameter estimation. Does anyone here understand why their bootstrapped distributions are so smooth?
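
    For what it’s worth, that share is just the standard bootstrap arithmetic: a given observation is left out of a resample of size n with probability (1 − 1/n)^n, which converges to 1/e ≈ 0.368 as n grows. A quick check (the sample sizes are arbitrary):

    ```python
    # Probability that one particular record (say, WWII) is absent from a
    # bootstrap resample of size n drawn with replacement from n records.
    for n in (50, 200, 1000):
        print(n, f"P(WWII excluded) = {(1 - 1 / n) ** n:.3f}")  # -> ~0.368
    ```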

    Also, the differing values of alpha seem to be due to a log-rescaling of the data. This is a noble transformation because, as the authors mention, it “accounts for the fact that the number of casualties in a conflict cannot be larger than the world population.” An ordinary fit of a Pareto distribution to interstate conflict would be silly because the support of a Pareto distribution extends, with non-negligible mass, over the entire positive real number line. Indeed, this is why Pareto distributions with α ≤ 1 have undefined means: there is a non-negligible probability of a quintillion humans dying in a Pareto model. Of course, even though a quintillion deaths is rare enough in the model not to be refutable by existing data, we know that the probability of this event isn’t simply low, it is zero, because there are no more than 10 billion humans who could die in an interstate war. Truncating the Pareto distribution at 10 billion would be a great way to accommodate this concern (and is a popular technique: http://en.wikipedia.org/wiki/Power_law#Power-law_functions), but doing so then gives the Pareto distribution a defined mean for any alpha, as the quick check below illustrates.
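
    Here is a quick numeric check of that claim (the bounds L and H are assumed values, not the authors’): once the support is capped at H, the first moment is a finite integral for every alpha, including the α < 1 range the new paper reports.

    ```python
    import numpy as np

    def truncated_pareto_mean(alpha, L, H):
        """Mean of a Pareto(alpha) density restricted to [L, H].

        f(x) is proportional to x**(-alpha - 1) on [L, H], so both the
        normalizing constant and the first moment are finite integrals,
        and the mean exists for every alpha > 0.
        """
        c = alpha * L**alpha / (1 - (L / H) ** alpha)  # normalizing constant
        if np.isclose(alpha, 1.0):
            return c * np.log(H / L)
        return c * (H ** (1 - alpha) - L ** (1 - alpha)) / (1 - alpha)

    # Even the "undefined mean" exponents become tame under truncation.
    for a in (0.4, 0.7, 1.1):
        print(a, f"{truncated_pareto_mean(a, L=1e4, H=1e10):,.0f}")
    ```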

    Instead, the authors chose a different tack: a log-transformation, which maps human conflict from the domain [L, H_t), where L is the smallest measurable conflict and H_t is the total human population, to the domain [0, infinity). They then say “…we can rescale back for the properties of X.” But unfortunately, we lose a predicted mean by virtue of their transformations and choices of distributions. This seems like transforming the data to obtain a specific result (or to avoid getting a particular one); I’ve used truncated Pareto distributions myself, and there isn’t any problem with their tractability.
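
    For concreteness, one transformation with exactly the stated properties is y = −H·log((H − x)/(H − L)); I am reconstructing this from the paper’s description, so it may not be their exact formula. It maps [L, H) onto [0, ∞), is nearly the identity for conflicts far below H (its derivative at x = L is H/(H − L) ≈ 1), and can be inverted to “rescale back”:

    ```python
    import numpy as np

    # Assumed bounds: smallest measurable conflict and world population.
    L, H = 1e4, 1e10
    g    = lambda x: -H * np.log((H - x) / (H - L))   # [L, H) -> [0, inf)
    ginv = lambda y: H - (H - L) * np.exp(-y / H)     # "rescale back"

    x = np.array([L, 1e6, 1e9, H * (1 - 1e-12)])
    print(g(x))         # 0 at x = L; explodes as x approaches H
    print(ginv(g(x)))   # the inverse recovers the original casualties
    ```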

    Also, the previous article by Taleb addresses Pinker directly, but it is a little ironic in that it complains of ‘ad hominem blatter’ while simultaneously launching ad hominem attacks. He writes, “[Pinker] still does not understand the difference between probability and expectation…” I think we, as bystanders, should be cognizant of the very emotional discourse that is accompanying these ’emotionless’ statistical models.

    Overall, my feeling is that WWII swamps every discussion of this trend. Most casual observers seem cognizant of this event’s importance to the debate, e.g. this blog entry. Our data for modern history are worse than they may seem at first take, while our data for anything before this period are horrible. Even if these data were better, we’d be committing a huge ecological fallacy to think that An Lushan tells us anything about the violent tendencies of New World civilizations at the time (which seem to be completely absent from everyone’s analysis).

    “World History”, itself, is a modern concept, along with “World Wars,” which weren’t possible until modern transportation. I think that any reasonable statistical debate has to focus solely on the past few centuries. Insofar as there has been a decline in violence in the 20th century, it is because of a particular interpretation of WWII. Perhaps we became so disgusted by WWII that we created the United Nations, gave up WMDs altogether, and funded geopolitical research by folks like Jay, Taleb, Pinker, etc. to engineer away mass violence; or alternatively, this is all a myopic, optimistic, ethno/chronocentric attitude. I don’t know the answer to this question, but both sides make reasonable claims.

    Reply