What Should the U.S. Do in Syria: Survey Results and Lessons on Process

A few days ago, I used the All Our Ideas platform to create a pairwise wiki survey asking, “Which action would you rather see the United States take next in Syria?” I did this partly to get a better sense of people’s views on the question posed, and partly to learn more about how to use this instrument. Now, I think it’s a good time to take stock on both counts.

First, some background. A pairwise wiki survey involves a single question with many possible answers. Respondents are presented with answers in pairs, one pair at a time, and asked to cast a vote for one or the other item. The overarching question determines what that vote is about, but the choice always entails a comparison (more, better, more likely, etc.). Respondents can also choose not to decide, and they can propose their own answers to add to the mix. Here’s a screenshot from my survey on U.S. policy in Syria that shows how that looks in action:

syria wiki survey respondent interface screenshot

You vote by clicking on one of the big blue boxes or the smaller “I can’t decide” button tucked under them, or you propose your own answer by writing it in the “Add your own ideas here…” field at the bottom. Once you vote on one pair, you’re presented with another pair, and you can repeat this process as many times as you like. To make each vote as informative as possible, the All Our Ideas platform doesn’t select answers for each pairing at random. Instead, it uses an algorithm that favors answers with fewer completed appearances. This adaptive approach spreads the votes evenly across the field of answers, and it helps newly-added answers quickly catch up with older ones. The resulting pairwise votes are converted into aggregate ratings using a Bayesian hierarchical model that estimates a set of collective preferences that’s most consistent with the observed data.
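For the curious, here’s a rough sketch in Python of how a pile of pairwise votes can be turned into a ranking. It fits a plain Bradley-Terry model to a handful of made-up votes; All Our Ideas’ actual Bayesian hierarchical model and adaptive pairing are more sophisticated, so treat this as an illustration of the idea, not their implementation.

```python
from collections import defaultdict

# Toy pairwise votes: (winner, loser). In a real wiki survey these would be the
# recorded head-to-head choices; the items and votes below are invented.
votes = [
    ("humanitarian aid", "limited strikes"),
    ("humanitarian aid", "do nothing"),
    ("sanctions", "limited strikes"),
    ("do nothing", "limited strikes"),
    ("sanctions", "do nothing"),
    ("humanitarian aid", "sanctions"),
]

items = sorted({x for pair in votes for x in pair})
wins = defaultdict(int)        # total wins per item
matchups = defaultdict(int)    # times each unordered pair was compared
for w, l in votes:
    wins[w] += 1
    matchups[frozenset((w, l))] += 1

# Bradley-Terry fit via the classic minorization-maximization update:
# strength_i <- wins_i / sum_j [ matchups_ij / (strength_i + strength_j) ]
# Note: an item with zero recorded wins collapses toward zero strength here;
# a Bayesian model like All Our Ideas' would shrink it toward the middle instead.
strength = {i: 1.0 for i in items}
for _ in range(200):
    new = {}
    for i in items:
        denom = sum(
            matchups[frozenset((i, j))] / (strength[i] + strength[j])
            for j in items if j != i
        )
        new[i] = wins[i] / denom if denom > 0 else strength[i]
    total = sum(new.values())
    strength = {i: v / total for i, v in new.items()}  # scale is arbitrary

# Report each item's average chance of beating a randomly chosen rival,
# loosely analogous to the scores the platform displays.
for i in sorted(items, key=strength.get, reverse=True):
    p_beat = sum(
        strength[i] / (strength[i] + strength[j]) for j in items if j != i
    ) / (len(items) - 1)
    print(f"{i:20s} {100 * p_beat:5.1f}")
```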

I’m already experimenting with pairwise wiki surveys as a way to forecast rare events, but this question about how the U.S. should respond to events in Syria is closer to their original purpose of identifying and ranking a set of options that aren’t exhaustive or mutually exclusive. In situations like these, it’s often easy to criticize or tout each option on its own. Comparing them all in a coherent way is usually much harder, and that’s what the pairwise wiki survey helps us do.

So, what did the respondents to my survey think the U.S. should do now about Syria? The screenshot below shows where things stood around 8:45 AM Eastern time on Tuesday, September 3, after more than 1,400 votes had been cast in nearly 100 unique user sessions. (For the latest results, click here.)

syria wiki survey results 20130903 0842

Clearly, the crowd that’s found its way to this survey so far is not keen on the Obama administration’s plan for military strikes in Syria in response to the chemical-weapons attack that took place on 21 August. The two options that come closest to that stated plan—limited strikes on targets associated with Syria’s chemical weapons capability or limited strikes to degrade various aspects of its military—both rank in the bottom half, below “Do nothing” and “Strongly condemn the Syrian regime.” The idea of military strikes targeting Assad and other senior regime officials—the so-called decapitation approach—ranks last, and increased military aid to Syrian rebels, another option the U.S. government is already pursuing, doesn’t rank much higher. What this crowd wants from the U.S. instead are increased humanitarian aid to Syrian civilians, broader and tighter sanctions on the Syrian regime and its “enablers,” and more pushing for formal talks among the warring parties.

Now, I don’t mean to imply that the results of this survey accurately capture the contours of public opinion in the U.S. or anywhere else. Frankly, I don’t really know how representative they are. As implemented here, a pairwise wiki survey is a form of crowdsourcing. The big advantage of crowdsourcing is the ability to get feedback quickly and cheaply from a large group, but that efficiency sometimes comes at the cost of not knowing a lot about who is responding. I know that the participants in my Syria survey come from six continents (see the map below), but I don’t collect any information about the respondents as they vote, so I can’t say anything about how representative my crowd is of any larger population, or how the characteristics of individual respondents relate to the preferences they express. All I can say with confidence is that these results are probably a reliable gauge of the views of the crowd that became aware of the survey through my blog post and Twitter and other social-media shares and were motivated to respond. I think it’s reassuring that the results of my wiki survey generally accord with the results of traditional public-opinion surveys in the U.S. (e.g., here and here) and elsewhere (e.g., Germany), but it would be irresponsible to make any strong claims about public opinion from these data alone.

Syria wiki survey vote map 20130903 0908

I hope to put this instrument to more ambitious uses in the future, so I’ll close with a lesson learned about how to do it better: respondents really need to be given some explanation about how the survey works before they’re asked to start voting. I rushed to get the Syria survey online because I was trying to get out the door for a bike ride and didn’t include anything in my blog post or tweets about how the voting process works. From the things some people wrote in the submit-your-own-idea field, it quickly became clear that many visitors were confused. Some apparently thought the options in the initial pair were the only ones being considered, so they either complained directly about that (“This survey, I hope, is designed to demonstrate to takers the way questioners of surveys control the outcome with push-polling”) or proposed adding ideas that were already covered (e.g., “Neither” when “Do nothing” was already on the list, or “Aid to refugees and camps” when “Increase humanitarian aid to Syrian civilians” was an option). I also think the “I can’t decide” button and the options it offers (press it and see) are a really important feature that respondents may overlook because the button is easy to miss. Next time, I won’t share a direct link to the survey and will instead embed the link at the bottom of a blog post that describes the voting process and calls out the “I can’t decide” feature first.


What Should the U.S. Do Now in Syria?

You tell me.

To help you do that, I’ve created a pairwise wiki survey on All Our Ideas. Click HERE to participate. You can vote on the options I listed or add your own.

Results are updated in real time. Just click on the View Results tab to see what the crowd is saying so far.

Before you add an idea, make sure it isn’t already covered in the existing set by clicking on the View Results tab and then the View All button at the bottom of the list.

Lost in the Fog of Civil War in Syria

On Twitter a couple of days ago, Adam Elkus called out a recent post on Time magazine’s World blog as evidence of the way that many peoples’ expectations about the course of Syria’s civil war have zigged and zagged over the past couple of years. “Last year press was convinced Assad was going to fall,” Adam tweeted. “Now it’s that he’s going to win. Neither perspective useful.” To which the eminent civil-war scholar Stathis Kalyvas replied simply, “Agreed.”

There’s a lesson here for anyone trying to glean hints about the course of a civil war from press accounts of a war’s twists and turns. In this case, it’s a lesson I’m learning through negative feedback.

Since early 2012, I’ve been a participant/subject in the Good Judgment Project (GJP), a U.S. government-funded experiment in “wisdom of crowds” forecasting. Over the past year, GJP participants have been asked to estimate the probability of several events related to the conflict in Syria, including the likelihood that Bashar al-Assad would leave office and the likelihood that opposition forces would seize control of the city of Aleppo.

I wouldn’t describe myself as an expert on civil wars, but during my decade of work for the Political Instability Task Force, I spent a lot of time looking at data on the onset, duration, and end of civil wars around the world. From that work, I have a pretty good sense of the typical dynamics of these conflicts. Most of the civil wars that have occurred in the past half-century have lasted for many years. A very small fraction of those wars flared up and then ended within a year. The ones that didn’t end quickly—in other words, the vast majority of these conflicts—almost always dragged on for several more years at least, sometimes even for decades. (I don’t have my own version handy, but see Figure 1 in this paper by Paul Collier and Anke Hoeffler for a graphical representation of this pattern.)

On the whole, I’ve done well in the Good Judgment Project. In the year-long season that ended last month, I ranked fifth among the 303 forecasters in my experimental group, all while the project was producing fairly accurate forecasts on many topics. One thing that’s helped me do well is my adherence to what you might call the forecaster’s version of the Golden Rule: “Don’t neglect the base rate.” And, as I just noted, I’m also quite familiar with the base rates of civil-war duration.

So what did I do when asked by GJP to think about what would happen in Syria? I chucked all that background knowledge out the window and chased the very narrative that Elkus and Kalyvas rightly decry as misleading.

Here’s a chart showing how I assessed the probability that Assad wouldn’t last as president beyond the end of March 2013, starting in June 2012. The actual question asked us to divide the probability of his exiting office across several time periods, but for simplicity’s sake I’ve focused here on the part indicating that he would stick around past April 1. This isn’t the same thing as the probability that the war would end, of course, but it’s closely related, and I considered the two events as tightly linked. As you can see, until early 2013, I was pretty confident that Assad’s fall was imminent. In fact, I was so confident that at a couple of points in 2012, I gave him zero chance of hanging on past March of this year—something a trained forecaster really never should do.

gjp assad chart

Now here’s another chart showing my estimates of the likelihood that rebels would seize control of Aleppo before May 1, 2013. The numbers are a little different, but the basic pattern is the same. I started out very confident that the rebels would win the war soon and only swung hard in the opposite direction in early 2013, as the boundaries of the conflict seemed to harden.

gjp aleppo chart

It’s impossible to say what the true probabilities were in this or any other uncertain situation. Maybe Assad and Aleppo really were on the brink of falling for a while and then the unlikely-but-still-possible version happened anyway.

That said, there’s no question that forecasts more tightly tied to the base rate would have scored a lot better in this case. Here’s a chart showing what my estimates might have looked like had I followed that rule, using approximations of the hazard rate from the chart in the Collier and Hoeffler paper. If anything, these numbers overstate the likelihood that a civil war will end at a given point in time.

gjp baserate chart
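To make the arithmetic behind a base-rate forecast concrete, here’s a tiny Python sketch of how a constant hazard of war termination translates into the probability that the conflict is still under way at some future date. The hazard value is a placeholder I picked for illustration, not Collier and Hoeffler’s estimate.

```python
# A constant monthly hazard of war termination implies that the probability the
# war is still ongoing t months from now is (1 - h) ** t. The hazard below is a
# made-up placeholder, not Collier and Hoeffler's number.
monthly_hazard = 0.02   # assumed ~2% chance the war ends in any given month

for months_ahead in (3, 6, 12, 24):
    p_still_fighting = (1 - monthly_hazard) ** months_ahead
    print(f"P(war still ongoing {months_ahead:2d} months out) = {p_still_fighting:.2f}")
```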

I didn’t keep a log spelling out my reasoning at each step, but I’m pretty confident that my poor performance here is an example of motivated reasoning. I wanted Assad to fall and the pro-democracy protesters who dominated the early stages of the uprising to win, and that desire shaped what I read and then remembered when it came time to forecast. I suspect that many of the pieces I was reading were slanted by similar hopes, creating a sort of analytic cascade similar to the herd behavior thought to drive many financial-market booms and busts. I don’t have the data to prove it, but I’m pretty sure the ups and downs in my forecasts track the evolving narrative in the many newspaper and magazine stories I was reading about the Syrian conflict.

Of course, that kind of herding happens on a lot of topics, and I was usually good at avoiding it. For example, when tensions ratcheted up on the Korean Peninsula earlier this year, I hewed to the base rate and didn’t substantially change my assessment of the risk that real clashes would follow.

What got me in the case of Syria was, I think, a sense of guilt. The Assad government has responded to a legitimate popular challenge with mass atrocities that we routinely read about and sometimes even see. In parts of the country, the resulting conflict is producing scenes of absurd brutality. This isn’t a “problem from hell,” as Samantha Power’s book title would have it; it is a glimpse of hell. And yet, in the face of that horror, I have publicly advocated against American military intervention. Upon reflection, I wonder if my wildly optimistic forecasting about the imminence of Assad’s fall wasn’t my unconscious attempt to escape the discomfort of feeling complicit in the prolongation of that suffering.

As a forecaster, if I were doing these questions over, I would try to discipline myself to attend to the base rate, but I wouldn’t necessarily stop there. As I’ve pointed out in a previous post, the base rate is a valuable anchoring device, but attending to it doesn’t mean automatically ignoring everything else. My preferred approach, when I remember to have one, is to take that base rate as a starting point and then use Bayes’ theorem to update my forecasts in a more disciplined way. Still, I’ll bring a newly skeptical eye to the flurry of stories predicting that Assad’s forces will soon defeat Syria’s rebels and keep their patron in power. Now that we’re a couple years into the conflict, quantified history tells us that the most likely outcome in any modest slice of time (say, months rather than years) is, tragically, more of the same.
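Here’s a toy example of that kind of disciplined updating, with a base rate as the prior and invented likelihoods standing in for whatever signal the news is sending. Even evidence that sounds dramatic only nudges the forecast part of the way from “more of the same.”

```python
# Start from the base rate as a prior and update it with Bayes' rule when new
# evidence arrives (say, reports that rebels have taken a major city). The
# prior and the likelihoods here are invented for illustration only.
prior = 0.10                 # base-rate chance the war ends within the year
p_report_if_ending = 0.60    # chance of such reports if the war really is ending
p_report_if_not = 0.20       # chance of seeing them anyway if it is not

posterior = (p_report_if_ending * prior) / (
    p_report_if_ending * prior + p_report_if_not * (1 - prior)
)
print(f"prior = {prior:.2f}, posterior after the report = {posterior:.2f}")
# -> 0.25: the evidence moves the forecast, but the base rate keeps it well
#    short of the near-certainty I was assigning in 2012.
```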

And, as a human, I’ll keep hoping the world will surprise us and take a different turn.

Challenges in Measuring Violent Conflict, Syria Edition

As part of a larger (but, unfortunately, gated) story on how the terrific new Global Data on Events, Language, and Tone (GDELT) might help social scientists forecast violent conflicts, the New Scientist recently posted some graphics using GDELT to chart the ongoing civil war in Syria. Among those graphics was this time-series plot of violent events per day in Syria since the start of 2011:

Syrian Conflict (New Scientist)

Based on that chart, the author of the story (not the producers of GDELT, mind you) wrote:

As Western leaders ponder intervention, the resulting view suggests that the violence has subsided in recent months, from a peak in the third quarter of 2012.

That inference is almost certainly wrong, and why it’s wrong underscores one of the fundamental challenges in using event data—whether it’s collected and coded by software or humans or some combination thereof—to observe the dynamics of violent conflict.

I say that inference is almost certainly wrong because concurrent data on deaths and refugees suggest that violence in Syria has only intensified in the past year. One of the most reputable sources on deaths from the war is the Syria Tracker. A screenshot of their chart of monthly counts of documented killings is shown below. Like GDELT, their data also identify a sharp increase in violence in late 2012. Unlike GDELT, their data indicate that the intensity of the violence has remained very high since then, and that’s true even though the process of documenting killings inevitably lags behind the actual violence.

Syria Tracker monthly death counts

We see a similar pattern in data from the U.N. High Commissioner for Refugees (UNHCR) on people fleeing the fighting in Syria. If anything, the flow of refugees has only increased in 2013, suggesting that the violence in Syria is hardly abating.

UNHCR syria refugee plot

The reason GDELT’s count of violent events has diverged from other measures of the intensity of the violence in Syria in recent months is probably something called “media fatigue.” Data sets of political events generally depend on news sources to spot events of interest, and it turns out that news coverage of large-scale political violence follows a predictable arc. As Deborah Gerner and Phil Schrodt describe in a paper from the late 1990s, press coverage of a sustained and intense conflict is often high when hostilities first break out but then declines steadily thereafter. That decline can happen because editors and readers get bored, burned out, or distracted. It can also happen because the conflict gets so intense that it becomes, in a sense, too dangerous to cover. In the case of Syria, I suspect all of these things are at work.

My point here isn’t to knock GDELT, which is still recording scores or hundreds of events in Syria every day, automatically, using open-source code, and then distributing those data to the public for free. Instead, I’m just trying to remind would-be users of any data set of political events to infer with caution. Event counts are one useful way to track variation over time in political processes we care about, but they’re only one part of the proverbial elephant, and they are inevitably constrained by the limitations of the sources from which they draw. To get a fuller sense of the beast, we need as often as possible to cross-reference those event data with other sources of information. Each of the sources I’ve cited here has its own blind spots and selection biases, but a comparison of trends from all three—and, importantly, an awareness of the likely sources of those biases—is enough to give me confidence that the civil war in Syria is only continuing to intensify. That says something important about Syria, of course, but it also says something important about the risks of drawing conclusions from event counts alone.

PS. For a great discussion of other sources of bias in the study of political violence, see Stathis Kalyvas’ 2004 essay on “The Urban Bias in Research on Civil Wars” (PDF).

A Cautionary Note on Increased Aid to Syrian Rebels

According to today’s Washington Post, the U.S. government is starting to supply food and medicine directly to selected Syrian rebel groups. Meanwhile, “Britain and other nations working in concert with the United States are expected to go further to help the rebel Free Syrian Army by providing battlefield equipment such as armored vehicles, night-vision devices or body armor.”

The point of all this assistance, of course, is to hasten the fall of Syrian President Bashar al-Assad. According to newly minted Secretary of State John Kerry, Assad is “out of time and must be out of power.”

Ford assembly line, 1913

Best I can tell, the rationale behind this stepped-up support for the Syrian rebels Western governments “like” follows the logic of an assembly line. To increase desired outputs, increase relevant inputs.

But civil wars aren’t like factories. They’re more like ecosystems, and if there’s one thing we’ve learned from our attempts to manage ecosystems, it’s that those attempts often have unintended consequences. Consider this 2009 story from the New York Times:

With its craggy green cliffs and mist-laden skies, Macquarie Island — halfway between Australia and Antarctica — looks like a nature lover’s Mecca. But the island has recently become a sobering illustration of what can happen when efforts to eliminate an invasive species end up causing unforeseen collateral damage.

In 1985, Australian scientists kicked off an ambitious plan: to kill off non-native cats that had been prowling the island’s slopes since the early 19th century. The program began out of apparent necessity — the cats were preying on native burrowing birds. Twenty-four years later, a team of scientists from the Australian Antarctic Division and the University of Tasmania reports that the cat removal unexpectedly wreaked havoc on the island ecosystem.

With the cats gone, the island’s rabbits (also non-native) began to breed out of control, ravaging native plants and sending ripple effects throughout the ecosystem. The findings were published in the Journal of Applied Ecology online in January.

“Our findings show that it’s important for scientists to study the whole ecosystem before doing eradication programs,” said Arko Lucieer, a University of Tasmania remote-sensing expert and a co-author of the paper. “There haven’t been a lot of programs that take the entire system into account. You need to go into scenario mode: ‘If we kill this animal, what other consequences are there going to be?’”

I don’t mean to suggest a moral equivalence between the human beings fighting and being murdered in Syria and the rabbits and cats and birds on Macquarie Island. I do mean to suggest that attempts to manipulate systems like these almost always underestimate the complexity of the problem. What scientist Barry Rice said to the New York Times for that 2009 article on the difficulty of managing invasive species applies just as well to attempts by outside powers to manufacture desired outcomes in civil wars:

When you’re doing a removal effort, you don’t know exactly what the outcome will be. You can’t just go in and make a single surgical strike. Every kind of management you do is going to cause some damage.

I hope Syria gets to a better place soon. Like Dan Trombly and Ahsan Butt, however, I am not confident that increased support for selected rebel factions will help that happen, and I am worried about the unintended consequences it will bring.

A Rumble of State Collapses

The past couple of years have produced an unusually large number of collapsed states around the world, and I think it’s worth pondering why.

As noted in a previous post, when I say “state collapse,” I mean this:

A state collapse occurs when a sovereign state fails to provide public order in at least one-half of its territory or in its capital city for at least 30 consecutive days. A sovereign state is regarded as failing to provide public order in a particular area when a) an organized challenger, usually a rebel group or regional government, effectively controls that area; b) lawlessness pervades in that area; or c) both. A state is considered sovereign when it is granted membership in the U.N. General Assembly.

The concepts used in this definition are very hard to observe, so I prefer to make probabilistic instead of categorical judgments about which states have crossed this imaginary threshold. In other words, I think state collapse is more usefully treated as a fuzzy set instead of a crisp one, so that’s what I’ll do here.

At the start of 2011, there was only one state I would have confidently identified as collapsed: Somalia. Several more were plausibly collapsed or close to it—Afghanistan, Central African Republic (CAR), and Democratic Republic of Congo (DRC) come to mind—but only Somalia was plainly over the line.

By my reckoning, four states almost certainly collapsed in 2011-2012—Libya, Mali, Syria, and Yemen—and Central African Republic probably did. That’s a four- or five-fold increase in the prevalence of state collapse in just two years. In all five cases, collapse was precipitated by the territorial gains of armed challengers. So far, only three of the five states’ governments have fallen, but Assad and Bozize have both seen the reach of their authority greatly circumscribed, and my guess is that neither will survive politically through the end of 2013.

I don’t have historical data to which I can directly compare these observations, but Polity’s “interregnum” (-77) indicator offers a useful (if imperfect) proxy. The column chart below plots annual counts of Polity interregnums (interregna? interregni? what language is this, anyway?) since 1945. A quick glance at the chart indicates that both the incidence and prevalence of state collapse seen in the past two years—which aren’t shown in the plot because Polity hasn’t yet been updated to the present—are historically rare. The only comparable period in the past half-century came in the early 1990s, on the heels of the USSR’s disintegration. (For those of you wondering, the uptick in 2010 comes from Haiti and Ivory Coast. I hadn’t thought of those as collapsed states, and their addition to the tally would only make the past few years look that much more exceptional.)

Annual Counts of Polity Interregnums, 1946-2010
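If you want to reproduce a tally like the one in that chart, here’s roughly how I’d count interregnum years from a Polity country-year file in Python. The file name and the column names (“year”, “polity”) are assumptions about the downloaded data, so adjust them to match your copy.

```python
import csv
from collections import Counter

# Count country-years coded as "interregnum" (-77) in a Polity country-year file.
# File and column names below are assumptions, not the official file layout.
counts = Counter()
with open("polity_country_year.csv", newline="") as f:
    for row in csv.DictReader(f):
        if (row.get("polity") or "").strip() == "-77":
            counts[int(row["year"])] += 1

for year in sorted(counts):
    print(year, counts[year])
```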

I still don’t understand this phenomenon well enough to say anything with assurance about why this “rumble” of state collapses is occurring right now, but I have some hunches. At the systemic level, I suspect that shifts in the relative power of big states are partly responsible for this pattern. Political authority is, in many ways, a confidence game, and growing uncertainty about major powers’ will and ability to support the status quo may be increasing the risk of state collapse in countries and regions where that support has been especially instrumental.

Second and related is the problem of contagion. The collapses that have occurred in the past two years are clearly interconnected. Successful revolutions in Tunisia and Egypt spurred popular uprisings in many Arab countries, including Libya, Syria, and Yemen. Libya’s disintegration fanned the rebellion that precipitated a coup and then collapse in Mali. Only CAR seems disconnected from the Arab Spring, and I wonder if the rebels there didn’t time their offensive, in part, to take advantage of the region’s current distraction with events to the northwest.

Surely there are many other forces at work, too, most of them local and none of them deterministic. Still, I think these two make a pretty good starting point, and they suggest that the current rumble probably isn’t over yet.

Coup Forecasts for 2013

Last January, I posted statistical estimates of coup risk for 2012 that drew some wider interest after they correctly identified Mali as a high-risk case. Now that the year’s almost over, I thought it would be a good time to assess more formally how those 2012 forecasts performed and then update them for 2013.

So, first things first: how did the 2012 forecasts fare on the whole? Pretty well, actually.

For purposes of these forecasts, a coup is defined as “a forceful seizure of executive authority and office by a dissident/opposition faction within the country’s ruling or political elites that results in a substantial change in the executive leadership and the policies of the prior regime.” That language comes from Monty Marshall’s Center for Systemic Peace, whose data set on coup events serves as the basis for one of the two models used to generate the 2012 forecasts. Those forecasts were meant to assess the risk of any coup attempts at some point during the calendar year, whether those attempts succeed or fail. They were not meant to anticipate civil wars, non-violent uprisings, voluntary transfers of executive authority, autogolpes, or interventions by foreign forces, all of which are better thought of (and modeled) as different forms of political crisis.

Okay, so by that definition, I see two countries where coup attempts occurred in 2012: Mali (in March) and Guinea-Bissau (in April). As it happens, both of those countries ranked in the top 10 in January’s forecasts—Guinea-Bissau at no. 2 and Mali at no. 10—so the models seem to be homing in on the right things. We can get a more rigorous take on the forecasts’ accuracy with a couple of statistics commonly used to assess models that try to predict binary outcomes like these (either a coup attempt happens or it doesn’t):

  • AUC Score. The estimated area under the Receiver Operating Characteristic (ROC) curve, used as a measure of the ability of a binary classification model to discriminate between positive and negative cases. Specifically, AUC represents the probability that a randomly selected positive case (here, a country-year with coup activity) will have a higher predicted probability than a randomly selected negative case (e.g., country-year with no coup activity). Ranges from 0.5 to 1, with higher values indicating better discrimination.
  • Brier Score. A general measure of forecast performance, defined as the average squared difference between the predicted and observed values. Ranges from 0 to 1, with lower values indicating more accurate predictions.

Assuming that Mali and Guinea-Bissau were the only countries to see coup activity this year, my 2012 coup forecasts get an AUC score of 0.97 and a Brier score of 0.01. Those are really good numbers. Based on my experience trying to forecast other rare political events around the world, I’m pretty happy with any AUC above the low 0.80s and any Brier score that’s better than an across-the-board base-rate forecast. The 2012 coup forecasts surpass both of those benchmarks.
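For anyone who wants to check the arithmetic, here is a bare-bones Python version of both statistics. The forecast values below are made up for illustration; only the formulas matter.

```python
def brier_score(probs, outcomes):
    """Mean squared difference between forecasts and observed 0/1 outcomes."""
    return sum((p - y) ** 2 for p, y in zip(probs, outcomes)) / len(probs)

def auc_score(probs, outcomes):
    """Probability a randomly drawn positive case outranks a randomly drawn
    negative one; ties count as half."""
    pos = [p for p, y in zip(probs, outcomes) if y == 1]
    neg = [p for p, y in zip(probs, outcomes) if y == 0]
    wins = sum(1.0 if pp > pn else 0.5 if pp == pn else 0.0
               for pp in pos for pn in neg)
    return wins / (len(pos) * len(neg))

# Toy example: five country-year forecasts, two of which saw coup attempts.
# These numbers are illustrative, not the actual 2012 estimates.
probs = [0.45, 0.30, 0.08, 0.05, 0.02]
outcomes = [1, 1, 0, 0, 0]
print("AUC   =", auc_score(probs, outcomes))            # 1.0: every coup case outranks every non-case
print("Brier =", round(brier_score(probs, outcomes), 3))
```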

Of course, with just two events in more than 150 countries, these statistics could be very sensitive to changes in the list of coup attempts. Two possible modifications come from Sudan, where authorities claim to have thwarted coup plots in November and December, and Paraguay, where right-wing legislators pushed leftist President Lugo out of office in June. I didn’t count Sudan because country experts tell me those events were probably just a political ploy President Bashir is using to keep his rivals off balance and not actual coup attempts. I didn’t count Paraguay because President Lugo’s rivals used legal procedures, not force, to oust him in a rushed impeachment. I’m pretty confident that neither of those cases counts as a coup attempt as defined here, but for the sake of argument, it’s worth seeing how the addition of those cases would affect the accuracy assessments.

  • Sudan ranked 11th in the 2012 forecasts, just behind Mali, so the addition of an event there leaves the accuracy stats essentially unchanged at 0.96 and 0.02, respectively.
  • Paraguay would definitely count as a surprise. It ranked in the 80s in the 2012 forecasts, and counting its June events as a coup would drop the AUC to 0.80 and push the Brier score up to 0.02.
  • If we count both cases as yeses, we get an AUC of 0.84 and a Brier score of 0.02.

All of those are still pretty respectable numbers for true forecasts of rare political events, even if they’re not quite as good as the initial ones. Whatever the exact ground truth, these statistics give me some confidence that the two-model average I’m using here makes a useful forecasting tool.

So, without further ado, what about 2013? The chart below plots estimated coup risk for the coming year for the 30 countries at greatest risk using essentially the same models I used for 2012. (One of the two models differs slightly from last year’s; I cut out a couple of variables that had little effect on the estimates and are especially hard to update.) I picked the top 30 because it’s roughly equivalent to the top quintile, and my experience working with models like these tells me that the top quintile makes a pretty good break point for distinguishing between countries at high and low risk. If a country doesn’t appear in this chart, that means my models think it’s highly unlikely to suffer a coup attempt in the coming year.

2013 Coup Risk Estimates

The broad strokes are very similar to 2012, but I’m also seeing a few changes worth noting.

  • Consistent with 2012, countries from sub-Saharan Africa continue to dominate the high-risk group. Nine of the top 10 and 22 of the top 30 countries come from that part of the world. One of those 22 is South Sudan, which didn’t get a forecast in early 2012 because I didn’t have the requisite data but now makes an ignominious debut at no. 20. Another is Sudan, which, as Armin Rosen discusses, certainly isn’t getting any more stable. Mali and Guinea-Bissau also both stay near the top of the list, thanks in part to the “coup trap” I discussed in another recent post. Meanwhile, I suspect the models are overestimating the risk of a new coup attempt in Niger, which seems to have landed on firmer footing after its “democratizing” coup in February 2010, but that recent history will leave Niger in the statistical high-risk group until at least 2015.
  • More surprising to me, Timor-Leste now lands in the top 10. That’s a change from 2012, but only because the data used to generate the 2012 forecasts did not count the assassination attempts of 2008 as a coup try. The latest version of CSP’s coup list does consider those events to be a failed coup attempt. Layered on top of Timor-Leste’s high poverty and hybrid political authority patterns, that recent coup activity greatly increases the country’s estimated risk. If Timor-Leste makes it through 2013 without another coup attempt, though, its estimated risk should drop sharply next year.
  • In Latin America, Haiti and Ecuador both make it into the Top 20. As with Timor-Leste, the changes from 2012 are artifacts of adjustments to the historical data—adding a coup attempt in Ecuador in 2010 and counting Haiti as a partial democracy instead of a state under foreign occupation. Those artifacts mean the change from 2012 isn’t informative, but the presence of those two countries in the top 20 most certainly is.
  • Syria also pops into the high-risk group at no. 25. That’s not an artifact of data revisions; it’s a reflection of the effects of that country’s devastating state collapse and civil war on several of the risk factors for coups.
  • Finally, notable for its absence is Egypt, which ranks 48th on the 2013 list and has been a source of coup rumors throughout its seemingly interminable transitional period. It’s worth noting, though, that if you consider SCAF’s ouster of Mubarak in 2011 to be a successful coup (CSP doesn’t), Egypt would make its way into the top 30.

As always, if you’re interested in the details of the modeling, please drop me a line at ulfelder@gmail.com and I’ll try to answer your questions as soon as I can.

Update: After a Washington Post blog mapped my Top 30, I produced a map of my own.

The Ambiguous Morality of Foreign Intervention in Syria

As the atrocious violence in Syria intensifies, more and more people seem to be saying that the outside world–the United States, the United Nations, the Arab League, the “West,” the “international community”–has a moral obligation to intervene, with force if need be, in order to stop the killing there. I don’t think the moral case for intervention is nearly as clear as those calls presume it to be, and I’d like to explain why.

I’ll start with two moral principles. First, murder is wrong. Second, when choosing among various possible courses of action, we should select the one that will produce the greatest happiness without violating any fundamental rights. The first of these principles is virtually universal. The second is more specific to modern liberalism, but I suspect it is accepted by many of the people calling for more forceful intervention in Syria.

Now, bearing those two principles in mind, let’s think about the morality of intervening in the following situations. Assume that there is no police force to call, and that you are better armed than the neighbor in question.

  • Your neighbor is murderously abusing his wife and several children. This is obviously wrong, and you have a moral duty to try to stop it.
  • Your neighbor and his wife are murderously abusing their several children. In moral terms, this isn’t really different from the first scenario, and you still have a duty to try to stop it.
  • Your neighbor and his wife are murderously abusing their children, but your intervention might also lead to the deaths of some or all of the people involved, including your own. For example, some of the children might get caught in the crossfire or be executed by the parents before they can be freed, or you might get killed in the attempt. Aware of this, you might decide to intervene without direct force–say, by cutting off their utilities and supplies–but these actions are not selective and could harm the children as well as the parents. This is a more difficult call. In the worst case, you lose three more lives (the parents and your own), while in the best case, you save several. If you believe the parents will eventually kill the children, intervention may still look like the right thing to do, but only if you think it stands a decent chance of succeeding.
  • Your neighbor and his wife are murderously abusing their children, but your intervention might lead to some or all of their deaths, and it will probably start a wider, violent feud among families in the neighborhood. This situation is far more complicated, and it is no longer clear at all that intervention is the best course of action. If you don’t act, several children will die. But if you do act, those children still might die, and so might many other people involved in the ensuing feud. As awful as it sounds, it may be morally right not to intervene in this situation, or at least not to intervene in ways that would set off the wider feud. In this case, action motivated by one moral principle (murder is wrong) would end up violating a second (greatest happiness), and partly through more transgressions of the first.

When I look at the current situation in Syria, I see the last of those scenarios. Foreign military intervention–to include arming the Syrian opposition or attempting to establish humanitarian corridors or “safe zones” without permission from the Syrian government–could save some lives, but it would cost others, and it stands a good chance of starting (or intensifying, depending on how you look at it) a regional conflict that could kill many, many more.

What’s more, the scenarios I’ve described so far leave out two important elements of international politics that only complicate things further. The first is the problem of opportunity costs–the other things you can’t do once you commit to a particular course of action. Imagine that you’re a doctor, and that while your neighbor is murderously abusing his wife and several children, many other children in your neighborhood are dying from a disease you can usually cure.  What if intervening to stop the abuse meant you no longer had the time and money to obtain and deliver the cure for that disease?

In the real world, there are diseases like malaria and diarrhea that are preventable and curable and kill literally millions of people every year, yet we do not demand that our governments do all they can to stop those deaths. The more of our governments’ resources we tie up in wars, the less those governments can do to address these quiet crises that clearly transgress the second of the two moral principles I outlined at the start of this post: to seek the greatest happiness.

The second added element is time. All of the scenarios described so far involved a single situation, but the real world involves countless situations unfolding over time.

When time is added to the equation, it becomes clearer that what you do now will set a precedent that will affect the future actions of others. On the one hand, intervening forcefully now might deter other regimes from doing similar things in the future, for fear that they will be punished in a similar way. On the other hand, intervening now might lead future resistance movements to believe that they will receive comparable international protection. That belief might encourage them to rise up in strategically unfavorable circumstances, creating more Syria-like situations in a world that is not well prepared to handle them.

It will rarely be clear which way this balance tips, but the fact that a mass killing is happening in Syria so soon after international intervention in Libya shows that it does not lean decisively in favor of intervention. Certainly, the Libyan intervention was not sufficient to deter future atrocities. I don’t know nearly enough about the Syrian opposition to judge whether the Libyan intervention had any effect on their decision-making, but I gather that it might have emboldened some of its elements, especially ones outside the country (for evidence, see the last paragraph of the section called “The Struggle” in this excellent essay).

Taking all of these aspects into consideration, I conclude that the moral course of action in Syria today is not to intervene militarily–by attacking government forces, attempting to establish “safe zones,” or supplying arms to rebel groups. I’ll admit that I’m not 100% certain in this judgment. I hate what it implies for civilians under fire or imprisoned in Syria right now, and I am sure that some reasonable people who accept the same principles will reach a different conclusion. All I’m hoping to do here is to show that the morality of this situation is far more ambiguous than a simple “The killing must be stopped” statement allows.

And, if it were my family that was being killed, I would be screaming at the world to stop it now.

A Liberal Case Against Military Intervention in Syria

The Syrian state is continuing to murder its own citizens, and the pace of that killing appears to be picking up. In a pensive blog post that made its way onto my screen this morning, scholar and writer Jillian York observed:

From opinionators on Syria, be they Syrian or foreign, there are two dominating views: The first is the viewpoint of the Syrian National Council (SNC), or farther right. This “view area,” so to speak, ranges from the precise position of the SNC in calling for intervention, to the hawkish calls–such as this by Daniel Byman in Foreign Policy–for foreign intervention. The second dominant view comes from the anti-imperialist crowd. By and large, the anti-imperialists have largely failed to denounce the Assad regime, and those who have imply that any alternative is worse.

I don’t see myself fitting into either of those two camps she describes, so I thought I would try to lay out my thoughts on what has to be one of the most difficult and important foreign-policy questions of the moment in hopes of clarifying them for myself and contributing to the wider discussion.

What is happening to scores of civilians in Syria every day is horrifying. States are among the most powerful organizations in the world, but state boundaries are not moral boundaries. I want to live in a world where we–not the United States or NATO, but the larger “we,” humanity–can and do stop these kinds of atrocities, punish their perpetrators, and enable the establishment of the accountable government Syrians are literally dying to create. I want to live in a world where attempts to deliver those just ends only save lives and build peace.

We do not (yet?) live in that world. Instead, in the world I see around me, the actions our governments undertake in pursuit of good intentions around the world are often ineffective at best and more often have unintended consequences that run counter to their stated ends. This disconnect is most obvious in the grand state-building schemes winding down in Iraq and still underway in Afghanistan, but it also afflicts most other militarized efforts to achieve humanitarian ends.

As Ben Valentino convincingly shows in a recent Foreign Affairs article, military intervention for the purpose of civilian protection almost always comes with a much steeper price tag than we realize when we contemplate it. Intervening forces often end up accidentally killing many civilians and empowering groups that perpetrate their own atrocities. Harder to see but at least as important, armed interventions and the peacekeeping or nation-building missions that often ensue carry substantial opportunity costs; the resources they absorb might have been applied elsewhere against problems where we can be more certain that they would have saved lives or improved well-being–for example, to public-health problems like malaria or diarrhea that are preventable but still kill millions every year.

The point of all this for making foreign policy is that good intentions are not sufficient. I consider myself a classical liberal, as, I’m sure, do many of those ardently advocating the use of American military power for civilian protection. The first principle of Millian liberalism, however, is to seek the greatest happiness, not to be seen as having acted in defense of liberalism. The consequences of our actions are what really matter, and those consequences are not burnished by the values our actions were meant to uphold.

In that moral universe, it is right to be more humble about our capabilities and more circumspect in our actions. States are not moral islands, and injustice in other states should concern us as moral beings. But it doesn’t always follow that our government can be, or even ought to be, the agent of ending that injustice and promoting liberalism. In situations where the costs and consequences of forceful action are uncertain and might be steep, liberal principles encourage us to consider capabilities as well as ends, and the two will not always align.

In the case of Syria, the recommendations for military intervention I’ve seen all either assume the best, best-case scenario for how that intervention will unfold or simply declare that the current path is unacceptable and then fail to discuss in depth what kind of intervention we should undertake and the many consequences those actions might carry. Unless and until advocates of forceful intervention can make a convincing case that this time will be different, I will infer from the historical record that it will not be different, that the most likely outcome is a clash of armed forces that will itself kill many civilians, will likely require a substantial long-term commitment of forces and money, and could plausibly spiral into a wider war that would kill and destroy many more soldiers and civilians.

Where does that leave me? In Jillian York’s words, “I am an observer of tragedy.” I am convinced that the proper course of action for the U.S. government is to continue to encourage and engage in diplomacy aimed at stopping the killing of civilians and encouraging political change in Syria that will respond to the just demands of the resisters. I realize that might not work, and that the Assad regime may kill thousands more civilians as diplomacy founders. I realize that, but I do not see a better alternative.

(For clear and brilliant thinking about the larger question of how to end mass atrocities, see this essay from the Fletcher Forum of World Affairs, brought to my attention by blogger Daniel Solomon.)

Assessing Coup Risk in 2012

Which countries around the world are most likely to see coup activity in 2012?

This question popped back into my mind this morning when I read a new post on Daniel Solomon’s Securing Rights blog about widening schisms in Sudan’s armed forces that could lead to a coup attempt. There’s also been a lot of talk in early 2012 about the likelihood of a coup in Syria, where the financial and social costs of repression, sanctions, and now civil war continue to mount. Meanwhile, Pakistan seems to have dodged a coup bullet early this year after a tense showdown between its elected civilian government and military leaders. I even saw one story–unsubstantiated, but from a reputable source–about a possible foiled coup plot in China around New Year’s Day. These are all countries where a coup d’etat would shake up regional politics, and coups in some of those countries could substantially alter the direction of armed conflicts in which government forces are committing mass atrocities, to name just two of the possible repercussions.

To give a statistical answer to the question of coup risk in 2012, I’ve decided to dust off a couple of coup-forecasting algorithms I developed in early 2011 and gin up some numbers. Both of these algorithms…

  1. Take the values of numerous indicators identified by statistical modeling as useful predictors of coup activity (see the end of this post for details);
  2. Apply weights derived from that modeling to those indicators; and then
  3. Sum and transform the results to spit out a score we can interpret as an estimate of the probability that a coup event will occur some time in 2012.

Both algorithms are products of Bayesian model averaging (BMA) applied to logistic regression models of annual coup activity (any vs. none) in countries worldwide over the past few decades. One of the modeling exercises, done for a private-sector client, looked only at successful coups using data compiled by the Center for Systemic Peace. The other modeling exercise was done for a workshop at the Council on Foreign Relations on forecasting political instability; this one looked at all coup attempts, successful or failed, using data compiled by Jonathan Powell and Clayton Thyne. For the 2012 coup risk assessments, I’ve simply averaged the output from the two.
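In case the mechanics aren’t obvious, here’s a stylized Python sketch of what those three steps amount to: a weighted sum of indicator values pushed through the logistic function, done once per model, with the two results averaged. The indicator values and coefficients shown are invented for illustration; the real weights come out of the Bayesian model averaging described above.

```python
import math

def logistic(x):
    return 1.0 / (1.0 + math.exp(-x))

def score(indicators, weights, intercept):
    """Weighted sum of indicator values pushed through the logistic function."""
    z = intercept + sum(weights[k] * indicators[k] for k in weights)
    return logistic(z)

# Invented values for one hypothetical country-year.
country = {
    "log_rel_infant_mortality": 0.9,   # above the annual global median
    "recent_coup_activity": 1.0,       # any coup attempt in the past five years
    "post_cold_war": 1.0,
}

# Invented coefficients standing in for the two BMA-derived models.
successful_coup_model = {"weights": {"log_rel_infant_mortality": 0.8,
                                     "recent_coup_activity": 1.2,
                                     "post_cold_war": -0.7},
                         "intercept": -3.5}
any_attempt_model = {"weights": {"log_rel_infant_mortality": 0.9,
                                 "recent_coup_activity": 1.0,
                                 "post_cold_war": -0.6},
                     "intercept": -3.0}

p1 = score(country, **successful_coup_model)
p2 = score(country, **any_attempt_model)
print("averaged coup risk:", round((p1 + p2) / 2, 3))
```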

The dot plot below shows the estimated coup risk in 2012 for the 40 countries with the highest values (i.e., greatest risk). The horizontal axis is scaled for probabilities ranging from zero to 1; if you’re more comfortable thinking in percentages, just multiply the number by 100. As usual with all statistical forecasts of rare events, the estimates are mostly close to zero. (On average, only a handful of coup attempts occur worldwide each year, and they’ve become even rarer since the end of the Cold War; see this earlier post for details). For a variety of reasons, the estimates are also less precise than those dots might make them seem, so small differences should be taken with a grain of salt. Even so, the results of this exercise should offer plausible estimates of the chances that we’ll see coup activity in these countries some time in 2012.

Here are a few of the things that stand out for me in those results.

  • My forecast supports Daniel’s analysis that the risk of a coup attempt in Sudan in 2012 is relatively high. It ranks 11th on the global list, making it one of the most likely candidates for coup activity this year.
  • Surprising to me, Pakistan barely cracks into the top 40, landing at 38th in the company of Iraq, Cambodia, and Senegal. Those countries all rank higher than 120 others, but the distance between their estimated risk and the risk in most other countries is within the realm of statistical noise. Off the top of my head, I would have identified Pakistan and Iraq as relatively vulnerable countries, and I would not have thought of Cambodia or Senegal as particularly coup-prone cases.
  • Unsurprising to me, China doesn’t even make the top 40. Perhaps there has been some erosion in civilian control in recent years, as Gordon Chang discusses, but it still doesn’t much resemble the countries that have seen full-blown coup attempts in the past few decades.
  • Interestingly, Syria doesn’t show up in the top 40, either. To make sense of this forecast, it’s important to note that assigning a low probability to the occurrence of a coup attempt in Syria in 2012 isn’t the same thing as a prediction that President Bashar al-Assad or his regime will survive the year. It might seem like semantic hair-splitting, but the definitions of coups used to construct the data on which these forecasts are based do not include cases where national leaders resign under pressure or are toppled by rebel groups. So the Syria forecast suggests only that Assad is unlikely to be overthrown by his own security forces. As it happens, my analysis of countries most likely to see democratic transitions in 2012 put Syria in the top 10 on that list.
  • Two of the countries near the top of that list–Guinea and Democratic Republic of Congo–are the ones where the Center for Systemic Peace’s Monty Marshall tells me he saw coup activity meeting his definition in 2011. Those recent coup attempts are influencing the 2012 forecasts, but both countries were also near the top of the 2011 risk list. This boosts my confidence in the reliability of these assessments.

I hope there’s a lot more on (or off) that list that interests readers, and I’d be happy to hear your thoughts on the results in the Comments section. For now, though, I’m going to wrap up this post by providing more information on what those forecasts take into account. The algorithm for successful coups uses just four risk factors, one of which is really just an adjustment to the intercept.

  • Infant mortality rate (relative to annual global median, logged): higher risk in countries with higher rates.
  • Degree of democracy (Polity score, quadratic): higher risk for countries in the mid-range of the 21-point scale.
  • Recent coup activity (yes or no): higher risk if any activity in the past five years.
  • Post-Cold War period: lower risk since 1989.

The algorithm for any coup attempts, successful or failed, uses the following ten risk factors, including all four of the ones used to forecast successful coups.

  • Infant mortality rate (relative to annual global median, logged): higher risk in countries with higher rates.
  • Recent coup activity (count of past five years with any, plus one and logged): higher risk with more activity.
  • Post-Cold War period: lower risk since 1989.
  • Popular uprisings in region (count of countries with any, plus one and logged): higher risk with more of them.
  • Insurgencies in region (count of countries with any, plus one and logged): higher risk with more of them.
  • Economic growth (year-to-year change in GDP per capita): higher risk with slower growth.
  • Regime durability (time since last abrupt change in Polity score, plus one and logged): lower risk with longer time.
  • Ongoing insurgency (yes or no): higher risk if yes.
  • Ongoing civil resistance campaign (yes or no): higher risk if yes.
  • Signatory to 1st Optional Protocol of the UN’s International Covenant on Civil and Political Rights (yes or no): lower risk if yes.
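To make a few of those transformations concrete, here’s a small Python snippet with made-up values; the real models use the data sources described above, and the exact coding choices shown here are my assumptions.

```python
import math

# Illustrating a few of the transformations listed above with invented values.
infant_mortality = 68.0          # deaths per 1,000 live births
global_median = 27.0             # annual global median for the same year
rel_infant_mortality = math.log(infant_mortality / global_median)

coup_years_last5 = 2             # years in the past five with any coup activity
recent_coup_activity = math.log(coup_years_last5 + 1)   # "plus one and logged"

polity = 3                       # Polity score, -10..10
polity_quadratic = (polity, polity ** 2)   # both terms enter the model

print(rel_infant_mortality, recent_coup_activity, polity_quadratic)
```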