A Notable Year of the Wrong Kind

The year that’s about to end has distinguished itself in at least one way we’d prefer never to see again. By my reckoning, 2013 saw more new mass killings than any year since the early 1990s.

When I say “mass killing,” I mean any episode in which the deliberate actions of state agents or other organizations kill at least 1,000 noncombatant civilians from a discrete group. Mass killings are often but certainly not always perpetrated by states, and the groups they target may be identified in various ways, from their politics to their ethnicity, language, or religion. Thanks to my colleague Ben Valentino, we have a fairly reliable tally of episodes of state-led mass killing around the world since the mid-1940s. Unfortunately, there is no comparable reckoning of mass killings carried out by non-state actors—nearly always rebel groups of some kind—so we can’t make statements about counts and trends as confidently as I would like. Still, we do the best we can with the information we have.

With those definitions and caveats in mind, I would say that in 2013 mass killings began:

Of course, even as these new cases have developed, episodes of mass killings have continued in a number of other places:

In a follow-up post I hope to write soon, I’ll offer some ideas on why 2013 was such a bad year for deliberate mass violence against civilians. In the meantime, if you think I’ve misrepresented any of these cases here or overlooked any others, please use the Comments to set me straight.

The Fog of War Is Patchy

Over at Foreign Policy‘s Peace Channel, Sheldon Himmelfarb of USIP has a new post arguing that better communications technologies in the hands of motivated people now give us unprecedented access to information from ongoing armed conflicts.

The crowd, as we saw in the Syrian example, is helping us get data and information from conflict zones. Until recently these regions were dominated by “the fog of war,” which blinded journalists and civilians alike; it took the most intrepid reporters to get any information on what was happening on the ground. But in the past few years, technology has turned conflict zones from data vacuums into data troves, making it possible to render parts of the conflict in real time.

Sheldon is right, but only to a point. If crowdsourcing is the future of conflict monitoring, then the future is already here, as Sheldon notes; it’s just not very evenly distributed. Unfortunately, large swaths of the world remain effectively off the grid on which the production of crowdsourced conflict data depends. Worse, countries’ degree of disconnectedness is at least loosely correlated with their susceptibility to civil violence, so we still have the hardest time observing some of the world’s worst conflicts.

The fighting in the Central African Republic over the past year is a great and terrible case in point. The insurgency that flared there last December drove the president from the country in March, and state security forces disintegrated with his departure. Since then, CAR has descended into a state of lawlessness in which rival militias maraud throughout the country and much of the population has fled their homes in search of whatever security and sustenance they can find.

We know this process is exacting a terrible toll, but just how terrible is even harder to say than usual because very few people on hand have the motive and means to record and report out what they are seeing. At just 23 subscriptions per 100 people, CAR’s mobile-phone penetration rate remains among the lowest on the planet, not far ahead of Cuba’s and North Korea’s (data here). Some journalists and NGOs like Human Rights Watch and Amnesty International have been covering the situation as best they can, but they will be among the first to tell you that their information is woefully incomplete, in part because roads and other transport remain rudimentary. In a must-read recent dispatch on the conflict, anthropologist Louisa Lombard noted that “the French colonists invested very little in infrastructure, and even less has been invested subsequently.”

A week ago, I used Twitter to ask if anyone had managed yet to produce a reasonably reliable estimate of the number of civilian deaths in CAR since last December. The replies I received from some very reputable people and organizations make clear what I mean about how hard it is to observe this conflict.

C.A.R. is an extreme case in this regard, but it’s certainly not the only one of its kind. The same could be said of ongoing episodes of civil violence in D.R.C., Sudan (not just Darfur, but also South Kordofan and Blue Nile), South Sudan, and in the Myanmar-China border region, to name a few. In all of these cases, we know fighting is happening, and we believe civilians are often targeted or otherwise suffering as a result, but our real-time information on the ebb and flow of these conflicts and the tolls they are exacting remains woefully incomplete. Mobile phones and the internet notwithstanding, I don’t expect that to change as quickly as we’d hope.

[N.B. I didn't even try to cover the crucial but distinct problem of verifying the information we do get from the kind of crowdsourcing Sheldon describes. For an entry point to that conversation, see this great blog post by Josh Stearns.]

How Long Will Syria’s Civil War Last? It’s Really Hard to Say

Last week, political scientist Barbara Walter wrote a great post for the blog Political Violence @ a Glance called “The Four Things We Know about How Civil Wars End (and What This Tells Us about Syria),” offering a set of base-rate forecasts about how long Syria’s civil war will last (probably a lot longer) and how it’s likely to end (with a military victory and not a peace agreement).

The post is great because it succeeds in condensing a large and complex literature into a small set of findings directly relevant to an important topic of public concern. It’s no coincidence that this post was written by one of the leading scholars on that subject. A “data scientist” could have looked at the same data sets used in the studies on which Walter bases her summary and not known which statistics would be most informative. Even with the right statistics in hand, a “hacker” probably wouldn’t know much about the relative quality of the different data sources, or the comparative-historical evidence on relevant causal mechanisms—two things that could (and should) inform their thinking about how much confidence to attach to the various results. To me, this is a nice illustration of the point that, even in an era of relentless quantification, subject-matter expertise still matters.

The one thing that seems to have gotten lost in the retellings and retweetings of this distilled evidence, though, is the idea of uncertainty. Apparently inspired by Walter’s post, Max Fisher wrote a similar one for the Washington Post‘s Worldviews blog under the headline “Political science says Syria’s civil war will probably last at least another decade.” Fisher’s prose is appropriately less specific than that (erroneous) headline, but if my Twitter feed is any indication, lots of people read Walter’s and Fisher’s posts as predictions that the Syrian war will probably last 10 years or more in total.*

If you had to bet now on the war’s eventual duration, you’d be right to expect an over-under around 10, but the smart play would probably be not to bet at all, unless you were offered very favorable odds or you had some solid hedges in place. That’s because the statistics Walter and Fisher cite are based on a relatively small number of instances of a complex phenomenon whose origins and dynamics we still understand poorly. Under these circumstances, statistical forecasting is inevitably imprecise, and the imprecision only increases the farther we try to peer into the future.

We can visualize that imprecision, and the uncertainty it represents, with something called a prediction interval. A prediction interval is just an estimate of the range in which we expect future values of our quantity of interest to fall with some probability. Prediction intervals are sometimes included in plots of time-series forecasts, and the results typically look like the bell of a trumpet, as shown in the example below. The farther into the future you try to look, the less confidence you should have in your point prediction. When working with noisy data on a stochastic process, it doesn’t take a lot of time slices to reach the point where your prediction interval practically spans the full range of possible values.

prediction interval
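The trumpet-bell shape is easy to reproduce. Here is a minimal sketch, assuming a driftless random walk (where forecast variance grows linearly with the horizon, so the interval half-width grows with the square root of the horizon); the starting value and step size are invented for illustration:

```python
import math

def prediction_interval(last_value, step_sd, horizon, z=1.96):
    """95% prediction interval for a driftless random walk, `horizon` steps
    ahead. Forecast variance grows linearly with the horizon, so the
    half-width grows with sqrt(horizon) -- the widening trumpet bell."""
    half_width = z * step_sd * math.sqrt(horizon)
    return (last_value - half_width, last_value + half_width)

# The interval widens as we peer further ahead:
for h in (1, 4, 16):
    lo, hi = prediction_interval(100.0, step_sd=2.0, horizon=h)
    print(f"{h:2d} steps ahead: [{lo:.1f}, {hi:.1f}]")
```

Quadrupling the horizon only doubles the interval’s width, but over enough steps the band still comes to span most of the plausible range, which is exactly the problem with decade-scale point predictions.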

Civil wars are, without question, one of those stochastic processes with noisy data. The averages Walter and Fisher cite are just central tendencies from a pretty heterogeneous set of cases observed over a long period of world history. Using data like these, I think we can be very confident that the war will last at least a few more months and somewhat confident that it will last at least another year or more. Beyond that, though, I’d say the bell of our forecasting trumpet widens very quickly, and I wouldn’t want to hazard a guess if I didn’t have to.

* In fact, neither Walter nor Fisher specifically predicted that the war would last x number of years or more. Here’s what Walter actually wrote:

1. Civil wars don’t end quickly. The average length of civil wars since 1945 have been about 10 years. This suggests that the civil war in Syria is in its early stages, and not in the later stages that tend to encourage combatants to negotiate a settlement.

I think that’s a nice verbal summary of the statistical uncertainty I’m trying to underscore. And here’s what Fisher wrote under that misleading headline:

According to studies of intra-state conflicts since 1945, civil wars tend to last an average of about seven to 12 years. That would put the end of the war somewhere between 2018 and 2023.

Worse, those studies have identified several factors that tend to make civil wars last even longer than the average. A number of those factors appear to apply to Syria, suggesting that this war could be an unusually long one. Of course, those are just estimates based on averages; by definition, half of all civil wars are shorter than the median length, and Syria’s could be one of them. But, based on the political science, Syria has the right conditions to last through President Obama’s tenure and perhaps most or all of his successor’s.

President Obama, You’re the Fish in this Morbid Game of Poker

I believe the Obama administration’s planned punitive strikes on Syria are wrong for larger reasons (see here for a 2012 post that’s still relevant today), but I’m also convinced that they’re likely to be ineffective for the narrower goal of deterring the use of chemical weapons in Syria and beyond. I don’t mean to make light of a horrible situation, but I think a gaming analogy can help show why.

Think of the repeated interactions between the Assad regime and the U.S. as a single game of poker with several hands. In 2011, President Obama said Assad had to go, and the U.S. hinted that it would intervene to support the Syrian opposition. That was a raise, Assad called it, and the U.S. effectively folded that hand by not following through on its initial raise.

More recently, President Obama declared the use of chemical weapons a “red line” that Assad’s forces must not cross, and then they crossed that line, apparently more than once before the massive attack near Damascus on 21 August. Again, the administration made a raise in hopes of driving Assad off his hand, but again Assad re-raised.

Now the Obama administration is threatening to strike Assad’s forces to punish him for the CW attack. While making this threat, though, the administration is simultaneously signaling that a) the attack will be limited and b) the administration hopes not to have to do more. These terms are more or less written into the authorization Congress is now considering, and they are being reiterated every time a member of the administration makes a public case for a military response.

In poker terms, this approach is like trying to drive your opponent off a pot with a modest bet when you hold a weak hand. Unless your opponent has really weak cards, that kind of bet is usually more effective at enticing that opponent to stay in the hand, not encouraging him to fold. In the Syrian case, the Assad regime has repeatedly signaled that it will play every hand to the end, so this kind of bet will almost certainly not have the desired effect.

That outcome is even more likely if the opponent has good reason to think your hand is weak. When the Obama administration can’t muster much domestic or international support for its punitive strikes and whatever support it can muster is predicated on those strikes being very limited in their scope and intent, then I’d say that’s easy to read as a weak hand. It’s a bit like waving around a pair of eights and threatening to make a small raise. To drive a committed rival to fold, you need to really change the expected value of the pot, and this approach simply doesn’t do that to a regime that has shown itself to be deeply committed to playing every hand to the end.

Some supporters of punitive strikes seem to think the effect those strikes would have on Assad’s forces is less important than the signal this action would send to potential future violators. The goal is not to hurt Assad as much as it’s to reinforce the norm. Unfortunately, the same problem extends to future hands with other players, too. If I were a ruler considering using chemical weapons at some later date, the lesson I think I’d have learned from Syria so far is that the rest of the world actually isn’t willing to pay a steep cost to reinforce this supposed norm for its own sake. In fact, we’ve developed a tell: if the stakes are high for other reasons, our initial raise will probably be a bluff, and it probably won’t be that costly to stay in the hand and see if that’s right.

I can see two paths out of the current situation. One is to acknowledge that our tepid raise has failed to drive Assad off this pot and go ahead and fold this hand. The outcome is essentially the same, and we don’t incur bigger losses getting there. The other is to change the hand we’re playing by committing to do whatever it takes to prevent Assad’s forces from using chemical weapons again. In other words, we commit to regime-defeating war if necessary and we signal that stronger commitment to Assad’s forces and their backers as clearly as possible.

If this more aggressive approach isn’t both feasible and desirable—and I believe it’s neither—it’s hard for me to see what’s gained by continuing to pretend that’s the hand we’re playing when everyone knows it isn’t and calling yet another of the Assad regime’s horrible raises.

What Should the U.S. Do in Syria: Survey Results and Lessons on Process

A few days ago, I used the All Our Ideas platform to create a pairwise wiki survey asking, “Which action would you rather see the United States take next in Syria?” I did this partly to get a better sense of people’s views on the question posed, and partly to learn more about how to use this instrument. Now, I think it’s a good time to take stock on both counts.

First, some background. A pairwise wiki survey involves a single question with many possible answers. Respondents are presented with answers in pairs, one pair at a time, and asked to cast a vote for one or the other item. The overarching question determines what that vote is about, but the choice always entails a comparison (more, better, more likely, etc.). Respondents can also choose not to decide, and they can propose their own answers to add to the mix. Here’s a screenshot from my survey on U.S. policy in Syria that shows how that looks in action:

syria wiki survey respondent interface screenshot

You vote by clicking on one of the big blue boxes or the smaller “I can’t decide” button tucked under them, or you propose your own answer by writing it into the “Add your own ideas here…” field at the bottom. Once you vote on one pair, you’re presented with another pair, and you can repeat this process as many times as you like. To make each vote as informative as possible, the All Our Ideas platform doesn’t select answers for each pairing at random. Instead, it uses an algorithm that favors answers with fewer completed appearances. This adaptive approach spreads the votes evenly across the field of answers, and it helps newly added answers quickly catch up with older ones. The resulting pairwise votes are converted into aggregate ratings using a Bayesian hierarchical model that estimates a set of collective preferences that’s most consistent with the observed data.
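To give a feel for how pairwise votes become a ranking, here is a sketch using the classic Bradley–Terry model fit by minorization-maximization, a much simpler cousin of the Bayesian hierarchical model All Our Ideas actually uses; the item names and vote tallies are invented:

```python
def bradley_terry(items, wins, iters=200):
    """Estimate relative strengths from pairwise vote tallies.

    wins[(a, b)] = number of times a beat b. Classic Bradley-Terry
    model, fit by the standard minorization-maximization iteration:
    p_i <- (total wins of i) / sum_j n_ij / (p_i + p_j), then renormalize."""
    p = {i: 1.0 for i in items}
    for _ in range(iters):
        new_p = {}
        for i in items:
            w_i = sum(wins.get((i, j), 0) for j in items if j != i)
            denom = sum(
                (wins.get((i, j), 0) + wins.get((j, i), 0)) / (p[i] + p[j])
                for j in items if j != i
            )
            new_p[i] = w_i / denom if denom > 0 else p[i]
        total = sum(new_p.values())
        p = {i: v / total for i, v in new_p.items()}
    return p

# Toy tally: "aid" beats "strikes" far more often than the reverse.
votes = {("aid", "strikes"): 9, ("strikes", "aid"): 1,
         ("aid", "nothing"): 7, ("nothing", "aid"): 3,
         ("nothing", "strikes"): 6, ("strikes", "nothing"): 4}
scores = bradley_terry(["aid", "strikes", "nothing"], votes)
ranking = sorted(scores, key=scores.get, reverse=True)
print(ranking)  # "aid" ranks first, "strikes" last
```

The key property carries over to the real platform: every vote is a comparison, so the model recovers a single coherent ordering even though no respondent ever ranked the full list.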

I’m already experimenting with pairwise wiki surveys as a way to forecast rare events, but this question about how the U.S. should respond to events in Syria is closer to their original purpose of identifying and ranking a set of options that aren’t exhaustive or mutually exclusive. In situations like these, it’s often easy to criticize or tout each option on its own. Comparing them all in a coherent way is usually much harder, and that’s what the pairwise wiki survey helps us do.

So, what did the respondents to my survey think the U.S. should do now about Syria? The screenshot below shows where things stood around 8:45 AM Eastern time on Tuesday, September 3, after more than 1,400 votes had been cast in nearly 100 unique user sessions. (For the latest results, click here.)

syria wiki survey results 20130903 0842

Clearly, the crowd that’s found its way to this survey so far is not keen on the Obama administration’s plan for military strikes in Syria in response to the chemical-weapons attack that took place on 21 August. The two options that come closest to that stated plan—limited strikes on targets associated with Syria’s chemical weapons capability or limited strikes to degrade various aspects of its military—both rank in the bottom half, below “Do nothing” and “Strongly condemn the Syrian regime.” The idea of military strikes targeting Assad and other senior regime officials—the so-called decapitation approach—ranks last, and increased military aid to Syrian rebels, another option the U.S. government is already pursuing, doesn’t rank much higher. What this crowd wants from the U.S. instead are increased humanitarian aid to Syrian civilians, broader and tighter sanctions on the Syrian regime and its “enablers,” and more pushing for formal talks among the warring parties.

Now, I don’t mean to imply that the results of this survey accurately capture the contours of public opinion in the U.S. or anywhere else. Frankly, I don’t really know how representative they are. As implemented here, a pairwise wiki survey is a form of crowdsourcing. The big advantage of crowdsourcing is the ability to get feedback quickly and cheaply from a large group, but that efficiency sometimes comes at the cost of not knowing a lot about who is responding. I know that the participants in my Syria survey come from six continents (see the map below), but I don’t collect any information about the respondents as they vote, so I can’t say anything about how representative my crowd is of any larger population, or how the characteristics of individual respondents relate to the preferences they express. All I can say with confidence is that these results are probably a reliable gauge of the views of the crowd that became aware of the survey through my blog post and Twitter and other social-media shares and were motivated to respond. I think it’s reassuring that the results of my wiki survey generally accord with the results of traditional public-opinion surveys in the U.S. (e.g., here and here) and elsewhere (e.g., Germany), but it would be irresponsible to make any strong claims about public opinion from these data alone.

Syria wiki survey vote map 20130903 0908

I hope to put this instrument to more ambitious uses in the future, so I’ll close with a lesson learned about how to do it better: respondents really need to be given some explanation about how the survey works before they’re asked to start voting. I rushed to get the Syria survey online because I was trying to get out the door for a bike ride and didn’t include anything in my blog post or tweets about how the voting process works. From the things some people wrote in the submit-your-own-idea field, it quickly became clear that many visitors were confused. Some apparently thought the initial pair presented were the only options being considered, so they either complained directly about that (“This survey, I hope, is designed to demonstrate to takers the way questioners of surveys control the outcome with push-polling”) or proposed adding ideas that were already covered (e.g., “Neither” when “Do nothing” was already on the list, or “Aid to refugees and camps” when “Increase humanitarian aid to Syrian civilians” was an option). I also think the “I can’t decide” button and the options it offers (press it and see) are a really important feature that respondents may overlook because it can be hard to see. Next time, I won’t share a direct link to the survey and will instead embed the link at the bottom of a blog post that describes the voting process and calls out the “I can’t decide” feature first.

What Should the U.S. Do Now in Syria?

You tell me.

To help you do that, I’ve created a pairwise wiki survey on All Our Ideas. Click HERE to participate. You can vote on the options I listed or add your own.

Results are updated in real time. Just click on the View Results tab to see what the crowd is saying so far.

Before you add an idea, make sure it isn’t already covered in the existing set by clicking on the View Results tab and then the View All button at the bottom of the list.

Lost in the Fog of Civil War in Syria

On Twitter a couple of days ago, Adam Elkus called out a recent post on Time magazine’s World blog as evidence of the way that many people’s expectations about the course of Syria’s civil war have zigged and zagged over the past couple of years. “Last year press was convinced Assad was going to fall,” Adam tweeted. “Now it’s that he’s going to win. Neither perspective useful.” To which the eminent civil-war scholar Stathis Kalyvas replied simply, “Agreed.”

There’s a lesson here for anyone trying to glean hints about the course of a civil war from press accounts of a war’s twists and turns. In this case, it’s a lesson I’m learning through negative feedback.

Since early 2012, I’ve been a participant/subject in the Good Judgment Project (GJP), a U.S. government-funded experiment in “wisdom of crowds” forecasting. Over the past year, GJP participants have been asked to estimate the probability of several events related to the conflict in Syria, including the likelihood that Bashar al-Assad would leave office and the likelihood that opposition forces would seize control of the city of Aleppo.

I wouldn’t describe myself as an expert on civil wars, but during my decade of work for the Political Instability Task Force, I spent a lot of time looking at data on the onset, duration, and end of civil wars around the world. From that work, I have a pretty good sense of the typical dynamics of these conflicts. Most of the civil wars that have occurred in the past half-century have lasted for many years. A very small fraction of those wars flared up and then ended within a year. The ones that didn’t end quickly—in other words, the vast majority of these conflicts—almost always dragged on for several more years at least, sometimes even for decades. (I don’t have my own version handy, but see Figure 1 in this paper by Paul Collier and Anke Hoeffler for a graphical representation of this pattern.)

On the whole, I’ve done well in the Good Judgment Project. In the year-long season that ended last month, I ranked fifth among the 303 forecasters in my experimental group, all while the project was producing fairly accurate forecasts on many topics. One thing that’s helped me do well is my adherence to what you might call the forecaster’s version of the Golden Rule: “Don’t neglect the base rate.” And, as I just noted, I’m also quite familiar with the base rates of civil-war duration.

So what did I do when asked by GJP to think about what would happen in Syria? I chucked all that background knowledge out the window and chased the very narrative that Elkus and Kalyvas rightly decry as misleading.

Here’s a chart showing how I assessed the probability that Assad wouldn’t last as president beyond the end of March 2013, starting in June 2012. The actual question asked us to divide the probability of his exiting office across several time periods, but for simplicity’s sake I’ve focused here on the part indicating that he would stick around past April 1. This isn’t the same thing as the probability that the war would end, of course, but it’s closely related, and I considered the two events as tightly linked. As you can see, until early 2013, I was pretty confident that Assad’s fall was imminent. In fact, I was so confident that at a couple of points in 2012, I gave him zero chance of hanging on past March of this year—something a trained forecaster really never should do.

gjp assad chart
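Why should a trained forecaster never say “zero”? The answer falls out of the scoring rules used to evaluate probability forecasts. A minimal sketch with two common rules, the Brier score and the logarithmic score:

```python
import math

def brier(p, outcome):
    """Quadratic penalty for a probability forecast of a binary event
    (outcome is 1 if the event happened, 0 if not). Lower is better."""
    return (p - outcome) ** 2

def log_score(p, outcome):
    """Negative log-likelihood penalty. If you called the event
    'impossible' and it happened anyway, the penalty is unbounded."""
    q = p if outcome == 1 else 1 - p
    return float("inf") if q == 0 else -math.log(q)

# Saying "zero chance" and being wrong takes the maximum Brier penalty
# and an infinite log penalty; hedging at 0.05 costs far less.
print(brier(0.0, 1), log_score(0.0, 1))  # 1.0 inf
print(brier(0.05, 1), log_score(0.05, 1))
```

A forecast of zero leaves no room to be wrong, so a single surprise wipes out whatever accuracy you banked elsewhere, which is exactly the trap I set for myself here.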

Now here’s another chart showing my estimates of the likelihood that rebels would seize control of Aleppo before May 1, 2013. The numbers are a little different, but the basic pattern is the same. I started out very confident that the rebels would win the war soon and only swung hard in the opposite direction in early 2013, as the boundaries of the conflict seemed to harden.

gjp aleppo chart

It’s impossible to say what the true probabilities were in this or any other uncertain situation. Maybe Assad and Aleppo really were on the brink of falling for a while and then the unlikely-but-still-possible version happened anyway.

That said, there’s no question that forecasts more tightly tied to the base rate would have scored a lot better in this case. Here’s a chart showing what my estimates might have looked like had I followed that rule, using approximations of the hazard rate from the chart in the Collier and Hoeffler paper. If anything, these numbers overstate the likelihood that a civil war will end at a given point in time.

gjp baserate chart
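Turning a rough hazard rate into forecasts like these is simple survival arithmetic. A sketch, assuming a hypothetical constant 10% chance that an ongoing war ends in any given year (a constant 10% annual hazard implies an average duration of about 10 years, in line with the averages discussed above):

```python
def prob_still_ongoing(annual_hazard, years):
    """Survival probability: the war must fail to end in each of
    `years` successive years, given a constant annual hazard of ending."""
    return (1 - annual_hazard) ** years

# Base-rate forecasts for a war already under way, at a 10% annual hazard:
for y in (1, 5, 10):
    print(f"P(still ongoing after {y:2d} more years) = "
          f"{prob_still_ongoing(0.10, y):.2f}")
```

Even this crude calculation puts the probability of the war continuing another year at around 0.9, which is far closer to what actually happened than the near-zero numbers I was reporting.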

I didn’t keep a log spelling out my reasoning at each step, but I’m pretty confident that my poor performance here is an example of motivated reasoning. I wanted Assad to fall and the pro-democracy protesters who dominated the early stages of the uprising to win, and that desire shaped what I read and then remembered when it came time to forecast. I suspect that many of the pieces I was reading were slanted by similar hopes, creating a sort of analytic cascade similar to the herd behavior thought to drive many financial-market booms and busts. I don’t have the data to prove it, but I’m pretty sure the ups and downs in my forecasts track the evolving narrative in the many newspaper and magazine stories I was reading about the Syrian conflict.

Of course, that kind of herding happens on a lot of topics, and I was usually good at avoiding it. For example, when tensions ratcheted up on the Korean Peninsula earlier this year, I hewed to the base rate and didn’t substantially change my assessment of the risk that real clashes would follow.

What got me in the case of Syria was, I think, a sense of guilt. The Assad government has responded to a legitimate popular challenge with mass atrocities that we routinely read about and sometimes even see. In parts of the country, the resulting conflict is producing scenes of absurd brutality. This isn’t a “problem from hell,” as Samantha Power’s book title would have it; it is a glimpse of hell. And yet, in the face of that horror, I have publicly advocated against American military intervention. Upon reflection, I wonder if my wildly optimistic forecasting about the imminence of Assad’s fall wasn’t my unconscious attempt to escape the discomfort of feeling complicit in the prolongation of that suffering.

As a forecaster, if I were doing these questions over, I would try to discipline myself to attend to the base rate, but I wouldn’t necessarily stop there. As I’ve pointed out in a previous post, the base rate is a valuable anchoring device, but attending to it doesn’t mean automatically ignoring everything else. My preferred approach, when I remember to have one, is to take that base rate as a starting point and then use Bayes’ theorem to update my forecasts in a more disciplined way. Still, I’ll bring a newly skeptical eye the flurry of stories predicting that Assad’s forces will soon defeat Syria’s rebels and keep their patron in power. Now that we’re a couple years into the conflict, quantified history tells us that the most likely outcome in any modest slice of time (say, months rather than years) is, tragically, more of the same.
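The mechanics of anchoring on a base rate and then updating with Bayes’ theorem fit in a few lines. The numbers here are purely illustrative, not estimates from any data set:

```python
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Posterior P(event | evidence) via Bayes' theorem."""
    numerator = p_evidence_if_true * prior
    return numerator / (numerator + p_evidence_if_false * (1 - prior))

# Start from a hypothetical base rate of 15% that the war ends within
# a year. A wave of "regime about to fall" stories is only weakly
# diagnostic: suppose such coverage is twice as likely to appear when
# the end really is near as when it isn't.
prior = 0.15
posterior = bayes_update(prior, p_evidence_if_true=0.6,
                         p_evidence_if_false=0.3)
print(round(posterior, 3))  # 0.261
```

Note what disciplined updating does here: weakly diagnostic news nudges the forecast from 15% to about 26%, rather than vaulting it toward the near-certainty my 2012 estimates expressed.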

And, as a human, I’ll keep hoping the world will surprise us and take a different turn.

Challenges in Measuring Violent Conflict, Syria Edition

As part of a larger (but, unfortunately, gated) story on how the terrific new Global Data on Events, Language, and Tone (GDELT) might help social scientists forecast violent conflicts, the New Scientist recently posted some graphics using GDELT to chart the ongoing civil war in Syria. Among those graphics was this time-series plot of violent events per day in Syria since the start of 2011:

Syrian Conflict   New Scientist

Based on that chart, the author of the story (not the producers of GDELT, mind you) wrote:

As Western leaders ponder intervention, the resulting view suggests that the violence has subsided in recent months, from a peak in the third quarter of 2012.

That inference is almost certainly wrong, and why it’s wrong underscores one of the fundamental challenges in using event data—whether it’s collected and coded by software or humans or some combination thereof—to observe the dynamics of violent conflict.

I say that inference is almost certainly wrong because concurrent data on deaths and refugees suggest that violence in Syria has only intensified in the past year. One of the most reputable sources on deaths from the war is the Syria Tracker. A screenshot of their chart of monthly counts of documented killings is shown below. Like GDELT, their data also identify a sharp increase in violence in late 2012. Unlike GDELT, their data indicate that the intensity of the violence has remained very high since then, and that’s true even though the process of documenting killings inevitably lags behind the actual violence.

Syria Tracker monthly death counts

We see a similar pattern in data from the U.N. High Commissioner on Refugees (UNHCR) on people fleeing the fighting in Syria. If anything, the flow of refugees has only increased in 2013, suggesting that the violence in Syria is hardly abating.

UNHCR syria refugee plot

The reason GDELT’s count of violent events has diverged from other measures of the intensity of the violence in Syria in recent months is probably something called “media fatigue.” Data sets of political events generally depend on news sources to spot events of interest, and it turns out that news coverage of large-scale political violence follows a predictable arc. As Deborah Gerner and Phil Schrodt describe in a paper from the late 1990s, press coverage of a sustained and intense conflict is often high when hostilities first break out but then declines steadily thereafter. That decline can happen because editors and readers get bored, burned out, or distracted. It can also happen because the conflict gets so intense that it becomes, in a sense, too dangerous to cover. In the case of Syria, I suspect all of these things are at work.
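One partial correction for media fatigue is to normalize violent-event counts against the total volume of coverage the source system recorded in the same period, so that shrinking attention isn’t mistaken for shrinking violence. A sketch with invented daily tallies:

```python
def normalized_intensity(violent_events, total_events):
    """Share of all recorded events that were violent -- a crude
    correction for declining overall coverage (media fatigue)."""
    return violent_events / total_events

# Hypothetical daily tallies: raw violent-event counts fall by half,
# but total coverage falls even faster, so the normalized share rises.
days = [(200, 4000), (150, 2500), (100, 1200)]  # (violent, total)
for violent, total in days:
    print(violent, round(normalized_intensity(violent, total), 3))
```

In this toy series the raw count tells a story of declining violence while the normalized share tells the opposite story, which is roughly the divergence between the New Scientist chart and the Syria Tracker and UNHCR data.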

My point here isn’t to knock GDELT, which is still recording scores or hundreds of events in Syria every day, automatically, using open-source code, and then distributing those data to the public for free. Instead, I’m just trying to remind would-be users of any data set of political events to infer with caution. Event counts are one useful way to track variation over time in political processes we care about, but they’re only one part of the proverbial elephant, and they are inevitably constrained by the limitations of the sources from which they draw. To get a fuller sense of the beast, we need as often as possible to cross-reference those event data with other sources of information. Each of the sources I’ve cited here has its own blind spots and selection biases, but a comparison of trends from all three—and, importantly, an awareness of the likely sources of those biases—is enough to give me confidence that the civil war in Syria is only continuing to intensify. That says something important about Syria, of course, but it also says something important about the risks of drawing conclusions from event counts alone.

PS. For a great discussion of other sources of bias in the study of political violence, see Stathis Kalyvas’ 2004 essay on “The Urban Bias in Research on Civil Wars” (PDF).

A Cautionary Note on Increased Aid to Syrian Rebels

According to today’s Washington Post, the U.S. government is starting to supply food and medicine directly to selected Syrian rebel groups. Meanwhile, “Britain and other nations working in concert with the United States are expected to go further to help the rebel Free Syrian Army by providing battlefield equipment such as armored vehicles, night-vision devices or body armor.”

The point of all this assistance, of course, is to hasten the fall of Syrian President Bashar al-Assad. According to newly minted Secretary of State John Kerry, Assad is “out of time and must be out of power.”

[Image: Ford assembly line, 1913]

Best I can tell, the thinking behind this stepped-up support for the Syrian rebels Western governments “like” follows the logic of an assembly line: to increase desired outputs, increase relevant inputs.

But civil wars aren’t like factories. They’re more like ecosystems, and if there’s one thing we’ve learned from our attempts to manage ecosystems, it’s that those attempts often have unintended consequences. Consider this 2009 story from the New York Times:

With its craggy green cliffs and mist-laden skies, Macquarie Island — halfway between Australia and Antarctica — looks like a nature lover’s Mecca. But the island has recently become a sobering illustration of what can happen when efforts to eliminate an invasive species end up causing unforeseen collateral damage.

In 1985, Australian scientists kicked off an ambitious plan: to kill off non-native cats that had been prowling the island’s slopes since the early 19th century. The program began out of apparent necessity — the cats were preying on native burrowing birds. Twenty-four years later, a team of scientists from the Australian Antarctic Division and the University of Tasmania reports that the cat removal unexpectedly wreaked havoc on the island ecosystem.

With the cats gone, the island’s rabbits (also non-native) began to breed out of control, ravaging native plants and sending ripple effects throughout the ecosystem. The findings were published in the Journal of Applied Ecology online in January.

“Our findings show that it’s important for scientists to study the whole ecosystem before doing eradication programs,” said Arko Lucieer, a University of Tasmania remote-sensing expert and a co-author of the paper. “There haven’t been a lot of programs that take the entire system into account. You need to go into scenario mode: ‘If we kill this animal, what other consequences are there going to be?’”

I don’t mean to suggest a moral equivalence between the human beings fighting and being murdered in Syria and the rabbits and cats and birds on Macquarie Island. I do mean to suggest that attempts to manipulate systems like these almost always underestimate the complexity of the problem. What scientist Barry Rice said to the New York Times for that 2009 article on the difficulty of managing invasive species applies just as well to attempts by outside powers to manufacture desired outcomes in civil wars:

When you’re doing a removal effort, you don’t know exactly what the outcome will be. You can’t just go in and make a single surgical strike. Every kind of management you do is going to cause some damage.

I hope Syria gets to a better place soon. Like Dan Trombly and Ahsan Butt, however, I am not confident that increased support for selected rebel factions will help that happen, and I am worried about the unintended consequences it will bring.

A Rumble of State Collapses

The past couple of years have produced an unusually large number of collapsed states around the world, and I think it’s worth pondering why.

As noted in a previous post, when I say “state collapse,” I mean this:

A state collapse occurs when a sovereign state fails to provide public order in at least one-half of its territory or in its capital city for at least 30 consecutive days. A sovereign state is regarded as failing to provide public order in a particular area when a) an organized challenger, usually a rebel group or regional government, effectively controls that area; b) lawlessness pervades in that area; or c) both. A state is considered sovereign when it is granted membership in the U.N. General Assembly.

The concepts used in this definition are very hard to observe, so I prefer to make probabilistic instead of categorical judgments about which states have crossed this imaginary threshold. In other words, I think state collapse is more usefully treated as a fuzzy set instead of a crisp one, so that’s what I’ll do here.
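The crisp-versus-fuzzy distinction can be made concrete with a short sketch. The membership scores below are hypothetical, chosen only to illustrate the idea, not actual estimates:

```python
# A crisp set forces a binary call on each state; a fuzzy set records
# a degree of membership between 0 and 1 instead. The scores here are
# hypothetical illustrations, not real assessments.
fuzzy_collapse = {
    "Somalia": 0.95,      # almost certainly collapsed
    "Afghanistan": 0.55,  # plausibly collapsed
    "DRC": 0.45,          # close to the line, but probably not over it
    "Sweden": 0.01,       # clearly not collapsed
}

def to_crisp(memberships, threshold=0.5):
    """Collapse fuzzy membership scores into a crisp set at a cutoff.

    A crisp set is just the special case where membership is 0 or 1;
    the information lost is exactly the analyst's uncertainty.
    """
    return {name for name, score in memberships.items() if score >= threshold}

print(to_crisp(fuzzy_collapse))  # {'Somalia', 'Afghanistan'}
```

The payoff of the fuzzy representation is that borderline cases like DRC stay visibly borderline instead of being silently forced to one side of the threshold.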

At the start of 2011, there was only one state I would have confidently identified as collapsed: Somalia. Several more were plausibly collapsed or close to it—Afghanistan, Central African Republic (CAR), and Democratic Republic of Congo (DRC) come to mind—but only Somalia was plainly over the line.

By my reckoning, four states almost certainly collapsed in 2011-2012—Libya, Mali, Syria, and Yemen—and Central African Republic probably did. That’s a four- or five-fold increase in the prevalence of state collapse in just two years. In all five cases, collapse was precipitated by the territorial gains of armed challengers. So far, only three of the five states’ governments have fallen, but Assad and Bozizé have both seen the reach of their authority greatly circumscribed, and my guess is that neither will survive politically through the end of 2013.

I don’t have historical data to which I can directly compare these observations, but Polity’s “interregnum” (-77) indicator offers a useful (if imperfect) proxy. The column chart below plots annual counts of Polity interregnums (interregna? interregni? what language is this, anyway?) since 1945. A quick glance at the chart indicates that both the incidence and prevalence of state collapse seen in the past two years—which aren’t shown in the plot because Polity hasn’t yet been updated to the present—are historically rare. The only comparable period in the past half-century came in the early 1990s, on the heels of the USSR’s disintegration. (For those of you wondering, the uptick in 2010 comes from Haiti and Ivory Coast. I hadn’t thought of those as collapsed states, and their addition to the tally would only make the past few years look that much more exceptional.)
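The tally behind a chart like this is simple to reproduce: in the Polity annual country-year data, interregnums are coded as -77 on the combined polity score, so counting them is a filter and a group-by. The tiny data frame below is a fabricated stand-in for the real file, just to show the shape of the computation (the real data set’s column names may differ):

```python
import pandas as pd

# Hypothetical slice of a Polity-style country-year file. In the real
# data, the combined polity score uses the special code -77 to mark an
# interregnum (a collapse of central authority).
polity = pd.DataFrame({
    "country": ["Somalia", "Somalia", "Liberia", "Sweden"],
    "year":    [1991, 1992, 1991, 1991],
    "polity":  [-77, -77, -77, 10],
})

# Annual count of country-years coded as interregnums.
interregnums = (polity[polity["polity"] == -77]
                .groupby("year")
                .size())
print(interregnums)  # 1991 -> 2, 1992 -> 1
```

The resulting series is exactly what the column chart plots: one bar per year, with height equal to the number of states in interregnum that year.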

Annual Counts of Polity Interregnums, 1946-2010

I still don’t understand this phenomenon well enough to say anything with assurance about why this “rumble” of state collapses is occurring right now, but I have some hunches. At the systemic level, I suspect that shifts in the relative power of big states are partly responsible for this pattern. Political authority is, in many ways, a confidence game, and growing uncertainty about major powers’ will and ability to support the status quo may be increasing the risk of state collapse in countries and regions where that support has been especially instrumental.

Second and related is the problem of contagion. The set of collapses that have occurred in the past two years are clearly interconnected. Successful revolutions in Tunisia and Egypt spurred popular uprisings in many Arab countries, including Libya, Syria, and Yemen. Libya’s disintegration fanned the rebellion that precipitated a coup and then collapse in Mali. Only CAR seems disconnected from the Arab Spring, and I wonder if the rebels there didn’t time their offensive, in part, to take advantage of the region’s distraction with its neighbors to the northwest.

Surely there are many other forces at work, too, most of them local and none of them deterministic. Still, I think these two make a pretty good starting point, and they suggest that the current rumble probably isn’t over yet.
