A Plea for More Prediction

The second Annual Bank Conference on Africa happened in Berkeley, CA, earlier this week, and the World Bank’s Development Impact blog has an outstanding summary of the 50-odd papers presented there. If you have to pick between reading this post and that one, go there.

One paper on that roster that caught my eye revisits the choice of statistical models for the study of civil wars. As authors John Paul Dunne and Nan Tian describe, the default choice is logistic regression, although probit gets a little playing time, too. They argue, however, that a zero-inflated Poisson (ZIP) model matches the data-generating process better than either of these traditional picks, and they show that this choice affects what we learn about the causes of civil conflict.

Having worked on statistical models of civil conflict for nearly 20 years, I have some opinions on that model-choice issue, but those aren’t what I want to discuss right now. Instead, I want to wonder aloud why more researchers don’t use prediction as the yardstick—or at least one of the yardsticks—for adjudicating these model comparisons.

In their paper, Dunne and Tian stake their claim about the superiority of ZIP over logit and probit on comparisons of Akaike information criterion (AIC) values and Vuong tests. Okay, but if their goal is to see whether ZIP fits the underlying data-generating process better than those other choices, what better way to find out than by comparing out-of-sample predictive power?

Prediction is fundamental to the accumulation of scientific knowledge. The better we understand why and how something happens, the more accurate our predictions of it should be. When we estimate models from observational data and only look at how well our models fit the data from which they were estimated, we learn some things about the structure of that data set, but we don’t learn how well those things generalize to other relevant data sets. If we believe that the world isn’t deterministic—that the observed data are just one of many possible realizations of the world—then we need to care about that ability to generalize, because that generalization, and the discovery of its current limits, is the heart of the scientific enterprise.

From a scientific standpoint, the ideal world would be one in which we could estimate models representing rival theories, then compare the accuracy of the predictions they generate across a large number of relevant “trials” as they unfold in real time. That’s difficult for scholars studying big but rare events like civil wars and wars between states, though; a lot of time has to pass before we’ll see enough new examples to make a statistically powerful comparison across models.

But, hey, there’s an app for that—cross-validation! Instead of using all the data in the initial estimation, hold some out to use as a test set for the models we get from the rest. Better yet, split the data into several equally sized folds and rotate through them, training on all but one fold and testing on the one held out (k-fold cross-validation). Even better, repeat that whole process a bunch of times with different random splits and compare the distributions of the resulting accuracy statistics.
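To make that concrete, here’s a minimal sketch of repeated k-fold cross-validation in Python with scikit-learn. Scikit-learn has no zero-inflated Poisson model, so two generic classifiers and synthetic data stand in for the models and conflict data in the paper; the procedure, not the particular models, is the point.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

# Synthetic stand-in for country-year conflict data: a rare binary outcome
# (~5% positive cases) and a handful of covariates.
X, y = make_classification(n_samples=2000, n_features=10, weights=[0.95],
                           random_state=42)

# 5 folds, repeated 10 times with different random splits, scored out of sample.
cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=10, random_state=42)

for name, model in [("logit", LogisticRegression(max_iter=1000)),
                    ("forest", RandomForestClassifier(random_state=42))]:
    # The Brier score is the mean squared error of the predicted probabilities;
    # scikit-learn reports it negated so that higher is always better.
    scores = cross_val_score(model, X, y, cv=cv, scoring="neg_brier_score")
    print(f"{name}: Brier = {-scores.mean():.4f} (sd = {scores.std():.4f})")
```

Whichever model posts the better out-of-sample score has the stronger claim to matching the data-generating process.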

Prediction is the gold standard in most scientific fields, and cross-validation is standard practice in many areas of applied forecasting, because they are more informative than in-sample tests. For some reason, political science still mostly eschews both.* Here’s hoping that changes soon.

* For some recent exceptions to this rule on topics in world politics, see Ward, Greenhill, and Bakke and Blair, Blattman, and Hartman on predicting civil conflict; Chadefaux on warning signs of interstate war; Hill and Jones on state repression; and Chenoweth and me on the onset of nonviolent campaigns.

On Revolution: Theory or Ideology?

Humans understand and explain through stories, and the stories we in the US tell about why people rebel against their governments usually revolve around deprivation and injustice. In the prevailing narratives, rebellion occurs when states either actively make people suffer or passively fail to alleviate their suffering. Rebels in the American colonies made this connection explicit in the Declaration of Independence. This is also how we remember and understand lots of other rebellions we “like” and the figures who led them, from Moses to Robin Hood to Nelson Mandela.

As predictors of revolution, though, deprivation and injustice don’t fare so well. A chart in a recent Bloomberg Business piece on “the 15 most miserable economies in the world” got me thinking about this again. The chart shows the countries that score highest on a crude metric that sums a country’s unemployment rate and annual change in its consumer price index. Here are the results for 2015:

Of the 15 countries on that list, only two—Ukraine and Colombia—have ongoing civil wars, and it’s pretty hard to construe current unemployment or inflation as relevant causes in either case. Colombia’s civil war has run for decades. Ukraine’s war isn’t so civil (<cough> Russia <cough>), and this year’s spike in unemployment and inflation is probably more a consequence than a cause of that fighting. Frankly, I’m surprised that Venezuela hasn’t seen a sustained, large-scale challenge to its government since Hugo Chavez’s death, and I wonder if this year will prove different. But, so far, it hasn’t. Ditto for South Africa, where labor actions have at least hinted at the potential for wider rebellion.

That chart, in turn, reminded me of a 2011 New York Times column by Charles Blow called “The Kindling of Change,” on the causes of revolutions in the Middle East and North Africa.  Blow wrote, “It is impossible to know exactly which embers spark a revolution, but it’s not so hard to measure the conditions that make a country prime for one.” As evidence, he offered the following table comparing countries in the region on several “conditions”:

The chart, and the language that precedes it, seem to imply that these factors obviously “prime” countries for revolution. If that’s true, though, then why didn’t we see revolutions in the past few years in Algeria, Morocco, Sudan, Jordan, and Iran? Morocco and Sudan saw smaller protest waves that failed to produce revolutions, but so did Kuwait and Bahrain. And why did Syria unravel while those others didn’t? It’s true that poorer countries are more susceptible to rebellions than richer ones, but it’s also true that poor countries are historically common and rebellions are not.

All of which makes me wonder how much our theories of rebellion are really theories at all, and not more awkward blends of selective observation and ideology. Maybe we believe that injustice explains rebellion because we want to live in a universe in which justice triumphs and injustice gets punished. When violent or nonviolent rebellions erupt, we often watch and listen to the participants enumerate grievances about poverty and indignity and take those claims as evidence of underlying causes. We do this even though we know that humans are unreliable archivists and interpreters of their own behavior and motivations, and that we could elicit similar tales of poverty and indignity from many, many more people who are not rebelling in those societies and others. If a recent study generalizes, then we in the US and other rich democracies are also consuming news that systematically casts rebels in a more favorable light than governments during episodes of protest and civil conflict abroad.

Meanwhile, when rebel groups don’t fit our profile as agents of justice, we rarely expand our theories of revolution to account for these deviant cases. Instead, we classify the organizations as “terrorists”, “radicals”, or “criminals” and explain their behavior in some other way, usually one that emphasizes flaws in the character or beliefs of the participants or manipulations of them by other nefarious agents. Boko Haram and the Islamic State are rebel groups in any basic sense of that term, but our explanations of their emergence often emphasize indoctrination instead of injustice. Why?

I don’t mean to suggest that misery, dignity, and rebellion are entirely uncoupled. Socioeconomic and emotional misery can and probably do contribute in some ways to the emergence of rebellion, even if they come nowhere close to being sufficient causes of it. (For some deeper thinking on the causal significance of social structure, see this recent post by Daniel Little.)

Instead, I think I mean this post to serve as a plea to avoid the simple versions of those stories, at least when we’re trying to function as explainers and not as activists or rebels ourselves. In light of what we think we know about confirmation bias and cognitive dissonance, the fact that a particular explanation harmonizes with our values and makes us feel good should not be mistaken for evidence of its truth. If anything, it should motivate us to try harder to break it.

China’s Accumulating Risk of Crisis

Eurasia Group founder Ian Bremmer has a long piece in the new issue of The National Interest that foretells continued political stability in China in spite of all the recent turbulence in the international system and at home. After cataloging various messes of the past few years—the global financial crisis and U.S. recession, war in Syria, and unrest in the other BRICS, to name a few—Bremmer says:

It is all the more remarkable that there’s been so little noise from China, especially since the rising giant has experienced a once-in-a-decade leadership transition, slowing growth and a show trial involving one of the country’s best-known political personalities—all in just the past few months.

Given that Europe and America, China’s largest trade partners, are still struggling to recover their footing, growth is slowing across much of the once-dynamic developing world, and the pace of economic and social change within China itself is gathering speed, it’s easy to wonder if this moment is merely the calm before China’s storm.

Don’t bet on it. For the moment, China is more stable and resilient than many realize, and its political leaders have the tools and resources they need to manage a cooling economy and contain the unrest it might provoke.

Me, I’m not so sure. Every time I peek under another corner of the “authoritarian stability” narrative that blankets many discussions of China, I feel like I see another mess in the making.

That list is not exhaustive. No one of these situations seems especially likely to turn into a full-blown rebellion very soon, but that doesn’t mean that rebellion in China remains unlikely. That might sound like a contradiction, but it isn’t.

To see why, it helps to think statistically. Because of its size and complexity, China is like a big machine with lots of different modules, any one of which could break down and potentially set off a systemic failure. Think of the prospects for failure in each of those modules as an annual draw from a deck of cards: pull the ace of spades and you get a rebellion; pull anything else and you get more of the same. At 51-to-1 against, or about 2 percent, the chance that any one module will fail is quite small. If there are ten modules, though, you’re repeating the draw ten times, and your chances of pulling the ace of spades at least once (assuming the draws are independent) are more like 20 percent than 2. Increase the chances in any one draw—say, count both the king and the ace of spades as a “hit”—and the cumulative probability goes up accordingly. In short, when the risks compound like this, as I think they do here, it doesn’t take a ton of small probabilities to accumulate into a pretty sizable risk at the systemic level.
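The arithmetic is easy to verify. A few lines of Python, with the per-draw probabilities and module count taken straight from the card analogy above:

```python
# Chance of pulling the ace of spades on one draw from a 52-card deck.
p_one = 1 / 52
modules = 10  # ten independent "modules," each drawing once a year

# Probability of at least one hit across all draws: 1 - P(no hits anywhere).
p_any = 1 - (1 - p_one) ** modules
print(f"single draw: {p_one:.3f}")             # ~0.019, about 2 percent
print(f"any of {modules} draws: {p_any:.3f}")  # ~0.175, closer to 20 than 2

# Count the king of spades as a hit too, doubling the per-draw risk.
p_two = 2 / 52
print(f"with 2 hit cards: {1 - (1 - p_two) ** modules:.3f}")  # ~0.32
```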

What’s more, the likelihoods of these particular events are actually connected in ways that further increase the chances of systemic trouble. As social movement theorists like Sidney Tarrow and Mark Beissinger have shown, successful mobilization in one part of an interconnected system can increase the likelihood of more action elsewhere by changing would-be rebels’ beliefs about the vulnerability of the system, and by starting to change the system itself.

As Bremmer points out, the Communist Party of China has done a remarkable job sustaining its political authority and goosing economic growth as long as it has. One important source of that success has been the Party’s willingness and capacity to learn and adapt as it goes, as evidenced by its sophisticated and always-evolving approach to censorship of social media and its increasing willingness to acknowledge and try to improve on its poor performance on things like air pollution and natural disasters.

Still, when I think of all the ways that system could start to fail and catalog the signs of increased stress on so many of those fronts, I have to conclude that the chances of a wider crisis in China are no longer so small and will only continue to grow. If Bremmer wanted to put a friendly wager on the prospect that China will be governed more or less as it is today to and through the Communist Party’s next National Congress, I’d take that bet.

How Long Will Syria’s Civil War Last? It’s Really Hard to Say

Last week, political scientist Barbara Walter wrote a great post for the blog Political Violence @ a Glance called “The Four Things We Know about How Civil Wars End (and What This Tells Us about Syria),” offering a set of base-rate forecasts about how long Syria’s civil war will last (probably a lot longer) and how it’s likely to end (with a military victory and not a peace agreement).

The post is great because it succeeds in condensing a large and complex literature into a small set of findings directly relevant to an important topic of public concern. It’s no coincidence that this post was written by one of the leading scholars on that subject. A “data scientist” could have looked at the same data sets used in the studies on which Walter bases her summary and not known which statistics would be most informative. Even with the right statistics in hand, a “hacker” probably wouldn’t know much about the relative quality of the different data sources, or the comparative-historical evidence on relevant causal mechanisms—two things that could (and should) inform their thinking about how much confidence to attach to the various results. To me, this is a nice illustration of the point that, even in an era of relentless quantification, subject-matter expertise still matters.

The one thing that seems to have gotten lost in the retellings and retweetings of this distilled evidence, though, is the idea of uncertainty. Apparently inspired by Walter’s post, Max Fisher wrote a similar one for the Washington Post‘s Worldviews blog under the headline “Political science says Syria’s civil war will probably last at least another decade.” Fisher’s prose is appropriately less specific than that (erroneous) headline, but if my Twitter feed is any indication, lots of people read Walter’s and Fisher’s posts as predictions that the Syrian war will probably last 10 years or more in total.*

If you had to bet now on the war’s eventual duration, you’d be right to expect an over-under around 10, but the smart play would probably be not to bet at all, unless you were offered very favorable odds or you had some solid hedges in place. That’s because the statistics Walter and Fisher cite are based on a relatively small number of instances of a complex phenomenon, the origins and dynamics of which we still poorly understand. Under these circumstances, statistical forecasting is inevitably imprecise, and the imprecision only increases the farther we try to peer into the future.

We can visualize that imprecision, and the uncertainty it represents, with something called a prediction interval. A prediction interval is just an estimate of the range in which we expect future values of our quantity of interest to fall with some probability. Prediction intervals are sometimes included in plots of time-series forecasts, and the results typically look like the bell of a trumpet, as shown in the example below. The farther into the future you try to look, the less confidence you should have in your point prediction. When working with noisy data on a stochastic process, it doesn’t take a lot of time slices to reach the point where your prediction interval practically spans the full range of possible values.

[Figure: an example time-series forecast whose prediction interval widens like the bell of a trumpet as the forecast horizon grows]
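For readers who want the mechanics behind that trumpet shape, here’s a minimal numerical sketch using the textbook case of a pure random walk, where the 95-percent interval’s half-width grows with the square root of the forecast horizon. Nothing here is fit to conflict data; it only illustrates the widening.

```python
import math

# For a random walk with i.i.d. Normal(0, sigma^2) steps, the point forecast
# h steps ahead stays at the last observation, while the 95% prediction
# interval's half-width is 1.96 * sigma * sqrt(h) -- the bell of the trumpet.
sigma, last_obs = 1.0, 10.0
for h in (1, 4, 16, 64):
    half_width = 1.96 * sigma * math.sqrt(h)
    print(f"h = {h:2d}: {last_obs:.1f} +/- {half_width:.2f}")
```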

Civil wars are, without question, one of those stochastic processes with noisy data. The averages Walter and Fisher cite are just central tendencies from a pretty heterogeneous set of cases observed over a long period of world history. Using data like these, I think we can be very confident that the war will last at least a few more months and somewhat confident that it will last at least another year or more. Beyond that, though, I’d say the bell of our forecasting trumpet widens very quickly, and I wouldn’t want to hazard a guess if I didn’t have to.

* In fact, neither Walter nor Fisher specifically predicted that the war would last x number of years or more. Here’s what Walter actually wrote:

1. Civil wars don’t end quickly. The average length of civil wars since 1945 have been about 10 years. This suggests that the civil war in Syria is in its early stages, and not in the later stages that tend to encourage combatants to negotiate a settlement.

I think that’s a nice verbal summary of the statistical uncertainty I’m trying to underscore. And here’s what Fisher wrote under that misleading headline:

According to studies of intra-state conflicts since 1945, civil wars tend to last an average of about seven to 12 years. That would put the end of the war somewhere between 2018 and 2023.

Worse, those studies have identified several factors that tend to make civil wars last even longer than the average. A number of those factors appear to apply to Syria, suggesting that this war could be an unusually long one. Of course, those are just estimates based on averages; by definition, half of all civil wars are shorter than the median length, and Syria’s could be one of them. But, based on the political science, Syria has the right conditions to last through President Obama’s tenure and perhaps most or all of his successor’s.

Lost in the Fog of Civil War in Syria

On Twitter a couple of days ago, Adam Elkus called out a recent post on Time magazine’s World blog as evidence of the way that many people’s expectations about the course of Syria’s civil war have zigged and zagged over the past couple of years. “Last year press was convinced Assad was going to fall,” Adam tweeted. “Now it’s that he’s going to win. Neither perspective useful.” To which the eminent civil-war scholar Stathis Kalyvas replied simply, “Agreed.”

There’s a lesson here for anyone trying to glean hints about the course of a civil war from press accounts of a war’s twists and turns. In this case, it’s a lesson I’m learning through negative feedback.

Since early 2012, I’ve been a participant/subject in the Good Judgment Project (GJP), a U.S. government-funded experiment in “wisdom of crowds” forecasting. Over the past year, GJP participants have been asked to estimate the probability of several events related to the conflict in Syria, including the likelihood that Bashar al-Assad would leave office and the likelihood that opposition forces would seize control of the city of Aleppo.

I wouldn’t describe myself as an expert on civil wars, but during my decade of work for the Political Instability Task Force, I spent a lot of time looking at data on the onset, duration, and end of civil wars around the world. From that work, I have a pretty good sense of the typical dynamics of these conflicts. Most of the civil wars that have occurred in the past half-century have lasted for many years. A very small fraction of those wars flared up and then ended within a year. The ones that didn’t end quickly—in other words, the vast majority of these conflicts—almost always dragged on for several more years at least, sometimes even for decades. (I don’t have my own version handy, but see Figure 1 in this paper by Paul Collier and Anke Hoeffler for a graphical representation of this pattern.)

On the whole, I’ve done well in the Good Judgment Project. In the year-long season that ended last month, I ranked fifth among the 303 forecasters in my experimental group, all while the project was producing fairly accurate forecasts on many topics. One thing that’s helped me do well is my adherence to what you might call the forecaster’s version of the Golden Rule: “Don’t neglect the base rate.” And, as I just noted, I’m also quite familiar with the base rates of civil-war duration.

So what did I do when asked by GJP to think about what would happen in Syria? I chucked all that background knowledge out the window and chased the very narrative that Elkus and Kalyvas rightly decry as misleading.

Here’s a chart showing how, starting in June 2012, I assessed the probability that Assad would last as president beyond the end of March 2013. The actual question asked us to divide the probability of his exiting office across several time periods, but for simplicity’s sake I’ve focused here on the part indicating that he would stick around past April 1. This isn’t the same thing as the probability that the war would end, of course, but it’s closely related, and I considered the two events as tightly linked. As you can see, until early 2013, I was pretty confident that Assad’s fall was imminent. In fact, I was so confident that at a couple of points in 2012, I gave him zero chance of hanging on past March of this year—something a trained forecaster really never should do.

[Chart: my Good Judgment Project estimates of the probability that Assad would remain in office past April 1, 2013]
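The scoring arithmetic shows why a flat zero is reckless. Here’s a quick check under the Brier score, with the outcome coded 1 because Assad did stay in office past that date:

```python
# Brier score = (forecast - outcome)^2; lower is better. Outcome = 1 here
# because Assad did remain in office past April 1, 2013.
outcome = 1
for p in (0.0, 0.1, 0.5, 0.9):
    print(f"forecast {p:.1f} -> Brier {(p - outcome) ** 2:.2f}")
# A forecast of 0.0 takes the maximum possible penalty (1.00); under a
# logarithmic scoring rule it would be infinitely bad.
```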

Now here’s another chart showing my estimates of the likelihood that rebels would seize control of Aleppo before May 1, 2013. The numbers are a little different, but the basic pattern is the same. I started out very confident that the rebels would win the war soon and only swung hard in the opposite direction in early 2013, as the boundaries of the conflict seemed to harden.

[Chart: my Good Judgment Project estimates of the probability that rebels would seize control of Aleppo before May 1, 2013]

It’s impossible to say what the true probabilities were in this or any other uncertain situation. Maybe Assad and Aleppo really were on the brink of falling for a while and then the unlikely-but-still-possible version happened anyway.

That said, there’s no question that forecasts more tightly tied to the base rate would have scored a lot better in this case. Here’s a chart showing what my estimates might have looked like had I followed that rule, using approximations of the hazard rate from the chart in the Collier and Hoeffler paper. If anything, these numbers overstate the likelihood that a civil war will end at a given point in time.

[Chart: hypothetical base-rate forecasts approximated from the hazard rates in Collier and Hoeffler]

I didn’t keep a log spelling out my reasoning at each step, but I’m pretty confident that my poor performance here is an example of motivated reasoning. I wanted Assad to fall and the pro-democracy protesters who dominated the early stages of the uprising to win, and that desire shaped what I read and then remembered when it came time to forecast. I suspect that many of the pieces I was reading were slanted by similar hopes, creating a sort of analytic cascade similar to the herd behavior thought to drive many financial-market booms and busts. I don’t have the data to prove it, but I’m pretty sure the ups and downs in my forecasts track the evolving narrative in the many newspaper and magazine stories I was reading about the Syrian conflict.

Of course, that kind of herding happens on a lot of topics, and I was usually good at avoiding it. For example, when tensions ratcheted up on the Korean Peninsula earlier this year, I hewed to the base rate and didn’t substantially change my assessment of the risk that real clashes would follow.

What got me in the case of Syria was, I think, a sense of guilt. The Assad government has responded to a legitimate popular challenge with mass atrocities that we routinely read about and sometimes even see. In parts of the country, the resulting conflict is producing scenes of absurd brutality. This isn’t a “problem from hell,” as Samantha Power’s book title would have it; it is a glimpse of hell. And yet, in the face of that horror, I have publicly advocated against American military intervention. Upon reflection, I wonder if my wildly optimistic forecasting about the imminence of Assad’s fall wasn’t my unconscious attempt to escape the discomfort of feeling complicit in the prolongation of that suffering.

As a forecaster, if I were doing these questions over, I would try to discipline myself to attend to the base rate, but I wouldn’t necessarily stop there. As I’ve pointed out in a previous post, the base rate is a valuable anchoring device, but attending to it doesn’t mean automatically ignoring everything else. My preferred approach, when I remember to have one, is to take that base rate as a starting point and then use Bayes’ theorem to update my forecasts in a more disciplined way. Still, I’ll bring a newly skeptical eye to the flurry of stories predicting that Assad’s forces will soon defeat Syria’s rebels and keep their patron in power. Now that we’re a couple years into the conflict, quantified history tells us that the most likely outcome in any modest slice of time (say, months rather than years) is, tragically, more of the same.
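In code, that anchor-and-update routine is tiny. Here’s a sketch; the base rate and likelihood ratio below are invented for illustration, not estimates from any real data:

```python
def bayes_update(prior, likelihood_ratio):
    """Posterior probability from a prior and LR = P(evidence|H) / P(evidence|not H)."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# Start from a made-up ~5% base rate that an ongoing civil war ends in a given
# year, then update on news we judge three times likelier if the end is near.
print(f"{bayes_update(prior=0.05, likelihood_ratio=3.0):.3f}")  # ~0.136
```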

And, as a human, I’ll keep hoping the world will surprise us and take a different turn.

Forecasting Round-Up No. 3

1. Mike Ward and six colleagues recently posted a new working paper on “the next generation of crisis prediction.” The paper echoes themes that Mike and Nils Metternich sounded in a recent Foreign Policy piece responding to one I wrote a few days earlier, about the challenges of forecasting rare political events around the world. Here’s a snippet from the paper’s intro:

We argue that conflict research in political science can be improved by more, not less, attention to predictions. The increasing availability of disaggregated data and advanced estimation techniques are making forecasts of conflict more accurate and precise. In addition, we argue that forecasting helps to prevent overfitting, and can be used both to validate models, and inform policy makers.

I agree with everything the authors say about the scientific value and policy relevance of forecasting, and I think the modeling they’re doing on civil wars is really good. There were two things I especially appreciated about the new paper.

First, their modeling is really ambitious. In contrast to most recent statistical work on civil wars, they don’t limit their analysis to conflict onset, termination, or duration, and they don’t use country-years as their unit of observation. Instead, they look at country-months, and they try to tackle the more intuitive but also more difficult problem of predicting where civil wars will be occurring, whether or not one is already ongoing.

This version of the problem is harder because the factors that affect the risk of conflict onset might not be the same ones that affect the risk of conflict continuation. Even when they are, those factors might not affect the two risks in the same way, or even in the same direction. As a result, it’s hard to specify a single model that can reliably anticipate continuity in, and changes from, both forms of the status quo (conflict or no conflict).
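A toy simulation makes that asymmetry plain. The monthly transition probabilities below are invented, but they mimic the stickiness of real conflict data and show the two very different base rates a single incidence model has to reconcile:

```python
import numpy as np

rng = np.random.default_rng(0)
n_countries, n_months = 200, 120
p_onset, p_continue = 0.002, 0.98  # invented monthly transition probabilities

# Simulate sticky conflict status: whether a "country" is at war this month
# depends on whether it was at war last month.
status = np.zeros((n_countries, n_months), dtype=bool)
for t in range(1, n_months):
    prev = status[:, t - 1]
    draws = rng.random(n_countries)
    status[:, t] = np.where(prev, draws < p_continue, draws < p_onset)

prev, curr = status[:, :-1].ravel(), status[:, 1:].ravel()
print("P(war | war last month):  ", round(curr[prev].mean(), 3))   # ~0.98
print("P(war | peace last month):", round(curr[~prev].mean(), 4))  # ~0.002
```

Any single model of where wars will be occurring is implicitly averaging over those two regimes, which is why forecasts of continuation come easily while forecasts of transition stay hard.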

The difficulty of this problem is evident in the out-of-sample accuracy of the model these authors have developed. The performance statistics are excellent on the whole, but that’s mostly because the model is accurately forecasting that whatever is happening in one month will continue to happen in the next. Not surprisingly, the model’s ability to anticipate transitions is apparently weaker. Of the five civil-war onsets that occurred in the test set, only two “arguably…rise to probability levels that are heuristic,” as the authors put it.

I emailed Mike to ask about this issue, and he said they were working on it:

Although the paper doesn’t go into it, in a separate part of this effort we actually do have separate models for onset and continuation, and they do reasonably well.  We are at work on terminations, and developing a new methodology that predicts onsets, duration, and continuation in a single (complicated!) model.  But that is down the line a bit.

Second and even more exciting to me, the authors close the paper with real, honest-to-goodness forecasts. Using the most recent data available when the paper was written, the authors generate predicted probabilities of civil war for the next six months: October 2012 through March 2013. That’s the first time I’ve seen that done in an academic paper about something other than an election, and I hope it sets a precedent that others will follow.

2. Over at Red (team) Analysis, Helene Lavoix appropriately pats The Economist on the back for publicly evaluating the accuracy of the predictions they made in their “World in 2012” issue. You can read the Economist‘s own wrap-up here, but I want to highlight one of the points Helene raised in her discussion of it. Toward the end of her post, in a section called “Black swans or biases?”, she quotes this bit from the Economist:

As ever, we failed at big events that came out of the blue. We did not foresee the LIBOR scandal, for example, or the Bo Xilai affair in China or Hurricane Sandy.

As Helene argues, though, it’s not self-evident that these events were really so surprising—in their specifics, yes, but not in the more general sense that events like these could occur sometime this year. On Sandy, for example, she notes that

Any attention paid to climate change, to the statistics and documents produced by Munich-re…or Allianz, for example, to say nothing about the host of related scientific studies, show that extreme weather events have become a reality and we are to expect more of them and more often, including in the so-called rich countries.

This discussion underscores the importance of being clear about what kind of forecasting we’re trying to do, and why. Sometimes the specifics will matter a great deal. In other cases, though, we may have reason to be more concerned with risks of a more general kind, and we may need to broaden our lens accordingly. Or, as Helene writes,

The methodological problem we are facing here is as follows: Are we trying to predict discrete events (hard but not impossible, however with some constraints and limitations according to cases) or are we trying to foresee dynamics, possibilities? The answer to this question will depend upon the type of actions that should follow from the anticipation, as predictions or foresight are not done in a vacuum but to allow for the best handling of change.

3. Last but by no means least, Edge.org has just posted an interview with psychologist Phil Tetlock about his groundbreaking and ongoing research on how people forecast, how accurate (or not) their forecasts are, and whether or not we can learn to do this task better. [Disclosure: I am one of hundreds of subjects in Phil’s contribution to the IARPA tournament, the Good Judgment Project.] On the subject of learning, the conventional wisdom is pessimistic, so I was very interested to read this bit (emphasis added):

Is world politics like a poker game? This is what, in a sense, we are exploring in the IARPA forecasting tournament. You can make a good case that history is different and it poses unique challenges. This is an empirical question of whether people can learn to become better at these types of tasks. We now have a significant amount of evidence on this, and the evidence is that people can learn to become better [forecasters]. It’s a slow process. It requires a lot of hard work, but some of our forecasters have really risen to the challenge in a remarkable way and are generating forecasts that are far more accurate than I would have ever supposed possible from past research in this area.

And bonus alert: the interview is introduced by Daniel Kahneman, Nobel laureate and author of one of my favorite books from the past few years, Thinking, Fast and Slow.

N.B. In case you’re wondering, you can find Forecasting Round-Up Nos. 1 and 2 here and here.

How Is Liberia Staying Stable?

Earlier this year, in preparation for a workshop at the Council on Foreign Relations, I developed a set of statistical models to assess the risk of onset of a few forms of political instability—violent rebellion, nonviolent rebellion, and coup attempts—and then used those models to generate global forecasts for 2011. Liberia scored in the top five on two of those lists: violent rebellion (a.k.a. civil war) and coup attempts. The models pegged it as having roughly a 15% chance of civil-war onset (3rd highest in the world) and more than a 60% chance of a coup attempt (4th highest) before 2012.

If Liberia is so susceptible to these kinds of political crises, why aren’t they happening now? The country’s horrible 14-year civil war ended in 2003 and has not flared again since then. Elections held in 2005 handed the presidency to Ellen Johnson Sirleaf, and no one has yet tried to depose her by force. For a country supposedly at the leading edge of what Robert Kaplan in 1994 called “the coming anarchy” of state collapse and civil strife, that’s a terrific run of political stability.

Of course, Liberia is not out of the woods yet. The year is only half over, and the country is scheduled to hold legislative and executive elections this October. Electoral competition or frustration over its results could trigger civil violence or coup attempts. Fears of exactly that scenario have already prompted more than 20 political parties to sign a Memorandum of Understanding with the Liberian police pledging to conduct their election campaigns with civility, but that paper promise is hardly a guarantee against crisis.

Still, let’s take the optimistic view and assume that Liberia crests this hump in its political risk without sliding back into large-scale civil violence or suffering a coup. What explains its ability for nearly a decade now to avoid the “conflict trap” that has plagued so many of the world’s poorest countries after the apparent ends of their civil wars?

Liberia doesn’t get a lot of attention in the U.S. press, but the bits I have seen and heard in the past several years have focused almost exclusively on the figure of President Ellen Johnson Sirleaf. As a plain-talking, Harvard-educated black female economist presiding over a country brutalized by a succession of male warlords, Sirleaf cuts a rare and appealing figure, and foreign governments and international aid organizations seem unusually committed to her government’s success. “We see her as one of us,” U.S. ambassador Linda Thomas-Greenfield told the New York Times. “We don’t want to see her fail.”

I don’t know enough about Liberia to assert anything with confidence, but as a general observer of political instability, my hunch is that international peacekeeping has played a larger role in preserving Liberia’s tenuous stability than the hagiographies of President Sirleaf imply. Since 2003, the United Nations has maintained a large peacekeeping operation (PKO) in Liberia to prevent a return to civil war, support humanitarian work, and train that country’s soldiers and police. As of May 2011, that PKO included more than 9,200 uniformed personnel, 1,300 police, nearly 500 international civilian personnel, and roughly 1,000 local staffers and was funded with an annual budget of more than $540 million. That’s a tremendous commitment in a country with a population of about 4 million and a gross domestic product (GDP) of less than $1 billion.

The scale and strength of the peacekeeping efforts in Liberia remind me of the U.N. mission in Sierra Leone, a neighboring West African country that was also brutalized by civil war and so far has avoided both a resumption of violence and a breakdown of its post-conflict democratic regime. Running from 1999 until 2005, the PKO in Sierra Leone involved as many as 17,000 military personnel at its peak and cost $2.8 billion in total, in a country with roughly 5 million residents and a GDP of less than $2 billion. At this scale and duration, international peacekeeping operations should stand a better chance of helping domestic rivals overcome the security dilemmas that often drive recurrent conflicts, and Sierra Leone’s and Liberia’s experiences offer a couple of anecdotes in support of that view. [For excellent academic treatments of civil-war recurrence and settlement, see Barbara Walter and Jack Snyder’s 1999 edited volume and Monica Toft’s 2010 book.]

In short, I believe Liberia’s comparative stability since 2003 is, above all, a testament to the possibility of effective international peacekeeping, especially in smaller countries where intervening forces can more readily achieve a scale that’s virtually impossible to reach in larger countries such as Iraq and Afghanistan. I didn’t have the data required to consider the effects of PKOs in my statistical analysis of violent rebellion and coup attempts and am now eager to try adding them in the future. The 2011 forecasts based on my already-completed analysis could still turn out to be prescient, but I’m certainly hoping they don’t. In the meantime, I would be very interested in hearing from people with expertise on Liberia about their ideas on how that country is beating the odds to stay stable.

UPDATE: Not long after posting these ruminations, I saw a tweet from African Elections (@Africanelection) with a link to an IRIN story, via AlertNet, about Liberia’s current conditions and upcoming election season. The story didn’t change my views about the country’s conflict and coup risks, but it looks like an excellent backgrounder. You can find it here.
