Early Results from a New Atrocities Early Warning System

For the past couple of years, I have been working as a consultant to the U.S. Holocaust Memorial Museum’s Center for the Prevention of Genocide to help build a new early-warning system for mass atrocities around the world. Six months ago, we started running the second of our two major forecasting streams, a “wisdom of (expert) crowds” platform that aggregates probabilistic forecasts from a pool of topical and area experts on potential events of concern. (See this conference paper for more detail.)

The chart below summarizes the output from that platform on most of the questions we’ve asked so far about potential new episodes of mass killing before 2015. For our early-warning system, we define a mass killing as an episode of sustained violence in which at least 1,000 noncombatant civilians from a discrete group are intentionally killed, usually in a period of a year or less. Each line in the chart shows change over time in the daily average of the inputs from all of the participants who choose to make a forecast on that question. In other words, the line is a mathematical summary of the wisdom of our assembled crowd—now numbering nearly 100—on the risk of a mass killing beginning in each case before the end of 2014. Also:

  • Some of the lines (e.g., South Sudan, Iraq, Pakistan) start further to the right than others because we did not ask about those cases when the system launched but instead added them later, as we continue to do.
  • Two lines—Central African Republic and South Sudan—end early because we saw onsets of mass-killing episodes in those countries. The asterisks indicate the dates on which we made those declarations and therefore closed the relevant questions.
  • Most but not all of these questions ask specifically about state-led mass killings, and some focus on specific target groups (e.g., the Rohingya in Burma) or geographic regions (the North Caucasus in Russia) as indicated.
Crowd-Estimated Probabilities of Mass-Killing Onset Before 1 January 2015

I look at that chart and conclude that this process is working reasonably well so far. In the six months since we started running this system, the two countries that have seen onsets of mass killing are both ones that our forecasters promptly and consistently put on the high side of 50 percent. Nearly all of the other cases, where mass killings haven’t yet occurred this year, have stuck on the low end of the scale.
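For readers curious about the mechanics behind those lines, here is a minimal sketch of the daily aggregation described above. The data layout and participant names are hypothetical, and the actual platform may handle updates and weighting differently; the point is simply that the crowd estimate is an unweighted mean of each participant’s most recent forecast on a question.

```python
from datetime import date

# Hypothetical data layout: (participant_id, forecast_date, probability) for one question,
# e.g., "Will a mass-killing episode begin in Country X before 1 January 2015?"
forecasts = [
    ("expert_01", date(2014, 5, 1), 0.20),
    ("expert_02", date(2014, 5, 3), 0.35),
    ("expert_01", date(2014, 6, 10), 0.40),  # an updated forecast supersedes the older one
    ("expert_03", date(2014, 6, 12), 0.55),
]

def crowd_estimate(forecasts, as_of):
    """Unweighted mean of each participant's most recent forecast on or before `as_of`."""
    latest = {}
    for pid, when, prob in forecasts:
        if when <= as_of and (pid not in latest or when > latest[pid][0]):
            latest[pid] = (when, prob)
    probs = [p for _, p in latest.values()]
    return sum(probs) / len(probs) if probs else None

print(crowd_estimate(forecasts, as_of=date(2014, 6, 30)))  # mean of 0.40, 0.35, 0.55, about 0.43
```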

I’m also gratified to see that the system is already generating the kind of dynamic output we’d hoped it would, even with fewer than 100 forecasters in the pool. In the past several weeks, the forecasts for both Burma and Iraq have risen sharply, apparently in response to shifts in relevant policies in the former and the escalation of the civil war in the latter. Meanwhile, the forecast for Uighurs in China has risen steadily over the year as a separatist rebellion in Xinjiang Province has escalated and, with it, concerns about a harsh government response. These inflection points and trends can help identify changes in risk that warrant attention from organizations and individuals concerned about preventing or mitigating these potential atrocities.

Finally, I’m also intrigued to see that our opinion pool seems to be sorting cases into a few clusters that could be construed as distinct tiers of concern. Here’s what I have in mind:

  • Above the 50-percent threshold are the high-risk cases, where forecasters assess that mass killing is likely to occur during the specified time frame. These cases won’t necessarily be surprising. Some observers had been warning of the risk of mass atrocities in CAR and South Sudan for months before those episodes began, and the plight of the Rohingya in Burma has been a focal point for many advocacy groups in the past year. Even in supposedly “obvious” cases, however, this system can help by providing a sharper estimate of that risk and a sense of how it is trending over time. In the case of Burma, for example, it is the way the forecast has pulled away from the pack over the last several weeks that signals a shift from possible to likely and thus adds a degree of urgency to that warning.
  • A little farther down the y-axis are the moderate-risk cases—ones that probably won’t suffer mass killing during the period in question but could more readily tip in that direction. In the chart above, Iraq, Sudan, Pakistan, Bangladesh, and Burundi all land in this tier, although Iraq now appears to be sliding into the high-risk group.
  • Clustered toward the bottom are the low-risk cases, where the forecasters seem fairly confident that mass killing will not occur in the near future. In the chart above, Russia, Afghanistan, and Ethiopia are the cases that land firmly in this set. China (Uighurs) remains closer to them than to the moderate-risk tier, but it appears to be creeping upward. We are also running a question about the risk of state-led mass killing in Rwanda before 2015, and it currently lands in this tier, with a forecast of 14 percent.

The system that generates the data behind this chart is password protected, but the point of our project is to make these kinds of forecasts freely available to the global public. We are currently building the web site that will display the forecasts from this opinion pool in real time to all comers and hope to have it ready this fall.

In the meantime, if you think you have relevant knowledge or expertise—maybe you study or work on this topic, or maybe you live or work in parts of the world where risks tend to be higher—and are interested in volunteering as a forecaster, please send an email to us at ewp@ushmm.org.

A Notable Year of the Wrong Kind

The year that’s about to end has distinguished itself in at least one way we’d prefer never to see again. By my reckoning, 2013 saw more new mass killings than any year since the early 1990s.

When I say “mass killing,” I mean any episode in which the deliberate actions of state agents or other organizations kill at least 1,000 noncombatant civilians from a discrete group. Mass killings are often but certainly not always perpetrated by states, and the groups they target may be identified in various ways, from their politics to their ethnicity, language, or religion. Thanks to my colleague Ben Valentino, we have a fairly reliable tally of episodes of state-led mass killing around the world since the mid-1940s. Unfortunately, there is no comparable reckoning of mass killings carried out by non-state actors—nearly always rebel groups of some kind—so we can’t make statements about counts and trends as confidently as I would like. Still, we do the best we can with the information we have.

With those definitions and caveats in mind, I would say that in 2013 mass killings began:

Of course, even as these new cases have developed, episodes of mass killing have continued in a number of other places:

In a follow-up post I hope to write soon, I’ll offer some ideas on why 2013 was such a bad year for deliberate mass violence against civilians. In the meantime, if you think I’ve misrepresented any of these cases here or overlooked any others, please use the Comments to set me straight.

The Fog of War Is Patchy

Over at Foreign Policy’s Peace Channel, Sheldon Himmelfarb of USIP has a new post arguing that better communications technologies in the hands of motivated people now give us unprecedented access to information from ongoing armed conflicts.

The crowd, as we saw in the Syrian example, is helping us get data and information from conflict zones. Until recently these regions were dominated by “the fog of war,” which blinded journalists and civilians alike; it took the most intrepid reporters to get any information on what was happening on the ground. But in the past few years, technology has turned conflict zones from data vacuums into data troves, making it possible to render parts of the conflict in real time.

Sheldon is right, but only to a point. If crowdsourcing is the future of conflict monitoring, then the future is already here, as Sheldon notes; it’s just not very evenly distributed. Unfortunately, large swaths of the world remain effectively off the grid on which the production of crowdsourced conflict data depends. Worse, countries’ degree of disconnectedness is at least loosely correlated with their susceptibility to civil violence, so we still have the hardest time observing some of the world’s worst conflicts.

The fighting in the Central African Republic over the past year is a great and terrible case in point. The insurgency that flared there last December drove the president from the country in March, and state security forces disintegrated with his departure. Since then, CAR has descended into a state of lawlessness in which rival militias maraud throughout the country and much of the population has fled their homes in search of whatever security and sustenance they can find.

We know this process is exacting a terrible toll, but just how terrible is even harder to say than usual because very few people on hand have the motive and means to record and report out what they are seeing. At just 23 subscriptions per 100 people, CAR’s mobile-phone penetration rate remains among the lowest on the planet, not far ahead of Cuba’s and North Korea’s (data here). Some journalists and NGOs like Human Rights Watch and Amnesty International have been covering the situation as best they can, but they will be among the first to tell you that their information is woefully incomplete, in part because roads and other transport remain rudimentary. In a must-read recent dispatch on the conflict, anthropologist Louisa Lombard noted that “the French colonists invested very little in infrastructure, and even less has been invested subsequently.”

A week ago, I used Twitter to ask if anyone had managed yet to produce a reasonably reliable estimate of the number of civilian deaths in CAR since last December. The replies I received from some very reputable people and organizations make clear what I mean about how hard it is to observe this conflict.

C.A.R. is an extreme case in this regard, but it’s certainly not the only one of its kind. The same could be said of ongoing episodes of civil violence in D.R.C., Sudan (not just Darfur, but also South Kordofan and Blue Nile), South Sudan, and in the Myanmar-China border region, to name a few. In all of these cases, we know fighting is happening, and we believe civilians are often targeted or otherwise suffering as a result, but our real-time information on the ebb and flow of these conflicts and the tolls they are exacting remains woefully incomplete. Mobile phones and the internet notwithstanding, I don’t expect that to change as quickly as we’d hope.

[N.B. I didn’t even try to cover the crucial but distinct problem of verifying the information we do get from the kind of crowdsourcing Sheldon describes. For an entry point to that conversation, see this great blog post by Josh Stearns.]

Coup Forecasts for 2013

Last January, I posted statistical estimates of coup risk for 2012 that drew some wider interest after they correctly identified Mali as a high-risk case. Now that the year’s almost over, I thought it would be a good time to assess more formally how those 2012 forecasts performed and then update them for 2013.

So, first things first: how did the 2012 forecasts fare on the whole? Pretty well, actually.

For purposes of these forecasts, a coup is defined as “a forceful seizure of executive authority and office by a dissident/opposition faction within the country’s ruling or political elites that results in a substantial change in the executive leadership and the policies of the prior regime.” That language comes from Monty Marshall’s Center for Systemic Peace, whose data set on coup events serves as the basis for one of the two models used to generate the 2012 forecasts. Those forecasts were meant to assess the risk of any coup attempts at some point during the calendar year, whether those attempts succeed or fail. They were not meant to anticipate civil wars, non-violent uprisings, voluntary transfers of executive authority, autogolpes, or interventions by foreign forces, all of which are better thought of (and modeled) as different forms of political crisis.

Okay, so by that definition, I see two countries where coup attempts occurred in 2012: Mali (in March) and Guinea-Bissau (in April). As it happens, both of those countries ranked in the top 10 in January’s forecasts—Guinea-Bissau at no. 2 and Mali at no. 10—so the models seem to be homing in on the right things. We can get a more rigorous take on the forecasts’ accuracy with a couple of statistics commonly used to assess models that try to predict binary outcomes like these (either a coup attempt happens or it doesn’t):

  • AUC Score. The estimated area under the Receiver Operating Characteristic (ROC) curve, used as a measure of the ability of a binary classification model to discriminate between positive and negative cases. Specifically, AUC represents the probability that a randomly selected positive case (here, a country-year with coup activity) will have a higher predicted probability than a randomly selected negative case (e.g., country-year with no coup activity). Ranges from 0.5 to 1, with higher values indicating better discrimination.
  • Brier Score. A general measure of forecast performance, defined as the average squared difference between the predicted and observed values. Ranges from 0 to 1, with lower values indicating more accurate predictions.

Assuming that Mali and Guinea-Bissau were the only countries to see coup activity this year, my 2012 coup forecasts get an AUC score of 0.97 and a Brier score of 0.01. Those are really good numbers. Based on my experience trying to forecast other rare political events around the world, I’m pretty happy with any AUC above the low 0.80s and any Brier score that’s better than an across-the-board base-rate forecast. The 2012 coup forecasts surpass both of those benchmarks.
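If you want to run the same checks on forecasts of your own, here is a minimal sketch of both statistics. The forecast values below are invented for illustration; the two functions just implement the definitions given in the bullets above.

```python
def brier_score(y_true, y_prob):
    """Mean squared difference between predicted probabilities and observed outcomes (0/1)."""
    return sum((p - y) ** 2 for p, y in zip(y_prob, y_true)) / len(y_true)

def auc_score(y_true, y_prob):
    """Probability that a randomly chosen positive case gets a higher predicted
    probability than a randomly chosen negative case (ties count as half)."""
    pos = [p for p, y in zip(y_prob, y_true) if y == 1]
    neg = [p for p, y in zip(y_prob, y_true) if y == 0]
    wins = sum(1.0 if pp > pn else 0.5 if pp == pn else 0.0 for pp in pos for pn in neg)
    return wins / (len(pos) * len(neg))

# Toy example: five country-years, two with coup activity, forecast values made up.
y_true = [1, 1, 0, 0, 0]
y_prob = [0.25, 0.40, 0.05, 0.02, 0.10]

print(auc_score(y_true, y_prob), brier_score(y_true, y_prob))
```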

Of course, with just two events in more than 150 countries, these statistics could be very sensitive to changes in the list of coup attempts. Two possible modifications come from Sudan, where authorities claim to have thwarted coup plots in November and December, and Paraguay, where right-wing legislators pushed leftist President Lugo out of office in June. I didn’t count Sudan because country experts tell me those events were probably just a political ploy by President Bashir to keep his rivals off balance, not actual coup attempts. I didn’t count Paraguay because President Lugo’s rivals used legal procedures, not force, to oust him in a rushed impeachment. I’m pretty confident that neither of those cases counts as a coup attempt as defined here, but for the sake of argument, it’s worth seeing how the addition of those cases would affect the accuracy assessments.

  • Sudan ranked 11th in the 2012 forecasts, just behind Mali, so the addition of an event there leaves the accuracy stats essentially unchanged at 0.96 and 0.02, respectively.
  • Paraguay would definitely count as a surprise. It ranked in the 80s in the 2012 forecasts, and counting its June events as a coup would drop the AUC to 0.80 and nudge the Brier score up to 0.02.
  • If we count both cases as yeses, we get an AUC of 0.84 and a Brier score of 0.02.

All of those are still pretty respectable numbers for true forecasts of rare political events, even if they’re not quite as good as the initial ones. Whatever the exact ground truth, these statistics give me some confidence that the two-model average I’m using here makes a useful forecasting tool.

So, without further ado, what about 2013? The chart below plots estimated coup risk for the coming year for the 30 countries at greatest risk using essentially the same models I used for 2012. (One of the two models differs slightly from last year’s; I cut out a couple of variables that had little effect on the estimates and are especially hard to update.) I picked the top 30 because it’s roughly equivalent to the top quintile, and my experience working with models like these tells me that the top quintile makes a pretty good break point for distinguishing between countries at high and low risk. If a country doesn’t appear in this chart, that means my models think it’s highly unlikely to suffer a coup attempt in the coming year.

2013 Coup Risk Estimates

The broad strokes are very similar to 2012, but I’m also seeing a few changes worth noting.

  • Consistent with 2012, countries from sub-Saharan Africa continue to dominate the high-risk group. Nine of the top 10 and 22 of the top 30 countries come from that part of the world. One of those 22 is South Sudan, which didn’t get a forecast in early 2012 because I didn’t have the requisite data but now makes an ignominious debut at no. 20. Another is Sudan, which, as Armin Rosen discusses, certainly isn’t getting any more stable. Mali and Guinea-Bissau also both stay near the top of the list, thanks in part to the “coup trap” I discussed in another recent post. Meanwhile, I suspect the models are overestimating the risk of a new coup attempt in Niger, which seems to have landed on firmer footing after its “democratizing” coup in February 2010, but that recent history will leave Niger in the statistical high-risk group until at least 2015.
  • More surprising to me, Timor-Leste now lands in the top 10. That’s a change from 2012, but only because the data used to generate the 2012 forecasts did not count the assassination attempts of 2008 as a coup try. The latest version of CSP’s coup list does consider those events to be a failed coup attempt. Layered on top of Timor-Leste’s high poverty and hybrid political authority patterns, that recent coup activity greatly increases the country’s estimated risk. If Timor-Leste makes it through 2013 without another coup attempt, though, its estimated risk should drop sharply next year.
  • In Latin America, Haiti and Ecuador both make it into the Top 20. As with Timor-Leste, the changes from 2012 are artifacts of adjustments to the historical data—adding a coup attempt in Ecuador in 2010 and counting Haiti as a partial democracy instead of a state under foreign occupation. Those artifacts mean the change from 2012 isn’t informative, but the presence of those two countries in the top 20 most certainly is.
  • Syria also pops into the high-risk group at no. 25. That’s not an artifact of data revisions; it’s a reflection of the effects of that country’s devastating state collapse and civil war on several of the risk factors for coups.
  • Finally, notable for its absence is Egypt, which ranks 48th on the 2013 list and has been a source of coup rumors throughout its seemingly interminable transitional period. It’s worth noting, though, that if you consider SCAF’s ouster of Mubarak in 2011 to be a successful coup (CSP doesn’t), Egypt would make its way into the top 30.

As always, if you’re interested in the details of the modeling, please drop me a line at ulfelder@gmail.com and I’ll try to answer your questions as soon as I can.

Update: After a Washington Post blog mapped my Top 30, I produced a map of my own.

“State Failure” Has Failed. How About Giving “State Collapse” a Whirl?

Foreign Policy magazine recently published the 2012 edition of the Fund for Peace’s Failed States Index (FSI), and the response in the corner of the international-studies blogosphere I inhabit has been harsh. Scholars have been grumbling about the Failed States Index for years, but the chorus of academic and advocacy voices attacking it seems to have grown unusually large and loud this year. In an admirable gesture of fair play, Foreign Policy ran one of the toughest critiques of the FSI on its own web site, where Elliot Ross of the blog Africa is a Country wrote,

We at Africa is a Country think Foreign Policy and the Fund for Peace should either radically rethink the Failed States Index, which they publish in collaboration each year, or abandon it altogether. We just can’t take it seriously: It’s a failed index.

As Ross and many others argue, the core problem with the FSI is that it defines state failure very broadly, and in a way that seems to privilege certain forms of political stability over other aspects of governance and quality of life that the citizens in those states may prize more highly. In a 2008 critique of the “state failure” concept [PDF] that nicely anticipated all of the recent Sturm und Drang around the FSI, Chuck Call wrote that

The ‘failed states’ concept—and related terms like ‘failing’, ‘fragile’, ‘stressed’ and ‘troubled’ states—has become more of a liability than an asset. Foundations and think tanks have rushed to fund work on ‘failing’ states, resulting in a proliferation of multiple, divergent and poorly defined uses of the term. Not only does the term ‘failing state’ reflect the schoolmarm’s scorecard according to linear index defined by a univocal Weberian endstate, but it has also grown to encompass states as diverse as Colombia, East Timor, Indonesia, North Korea, Cote d’Ivoire, Haiti, Iraq, and the Sudan.

In that essay, Call advocates abandoning the now-hopelessly-freighted concept of “state failure” in favor of a narrower focus on “state collapse”—that is, situations “where no authority is recognisable either internally to a country’s inhabitants or externally to the international community.” I agree.

In fact, in 2010, while still working as research director for the U.S. Government–funded Political Instability Task Force, I led a small research project that aimed to develop a workable definition of state collapse and coding guidelines that would allow researchers to know it when they see it. The project stopped short of producing a global, historical data set, but the coding guidelines were road-tested and refined, and I think the end results have some value. In light of the FSI brouhaha, I’ve posted the results of that project on the Social Science Research Network (SSRN) in hopes that they might be useful to a broader audience.

In those materials—a concept paper and a set of coding guidelines—I argue that we can get to a more workable concept by moving away from Max Weber’s aspirational vision of modern states as legitimate and orderly bureaucracies. Instead, I think we get further when we recognize that real-world states are a specific kind of political organization associated with a particular realization of global politics. That realization—the “Westphalian order,” or just “the international system”—constitutes states and delegates certain forms of political authority to them, but national governments in the real world vary widely in their ability to exercise that authority. When internationally recognized governments cease to exist, or their actual authority is badly circumscribed, we can say that the state has collapsed. That kind of collapse can happen in two different ways: fragmentation and disintegration.

When the failure to rule involves the national government’s territorial reach, we might call it collapse by fragmentation. The ideal of domestic sovereignty presumes final authority within a specific territory and international recognition of that authority, so situations in which large swaths of a state’s territory are effectively governed by organized political challengers whose authority is not internationally recognized represent a form of collapse. In practical terms, these situations usually arise in one of two ways: either 1) a rebel group violently pushes state agents out of a particular area, or 2) a regional government unilaterally proclaims its autonomy or independence and becomes the de facto sovereign authority in that region. In either situation, the rival group directly and publicly challenges the national government’s claim to sovereignty and effectively becomes the supreme political authority in that space. State military forces may still operate in these areas, but they do so in an attempt to reassert control that has already been lost, as indicated by the primacy of the rival organization in day-to-day governance…

State collapse also occurs when the national government fails to enforce its authority in the absence of a rival claimant to sovereignty. This type of failure might be called state collapse by disintegration. The ideal of domestic sovereignty presumes that a central government is capable not just of making rules but also of enforcing them. Dramatic failures of a state’s enforcement capabilities are indicated by widespread lawlessness and disorder, such as rioting, looting, civil violence, and vigilantism. In the extreme, central governments will sometimes disappear completely, but this rarely occurs. More often, a national government will continue to operate, but its rules will be ignored in some portions of its putative territory.

To distinguish state collapse from other forms of political instability and disorder, we have to establish some arbitrary thresholds beyond which the failure is considered catastrophic. Staying focused on the core dimensions of domestic sovereignty—territory and order—I do this as follows:

A state collapse occurs when a sovereign state fails to provide public order in at least one-half of its territory or in its capital city for at least 30 consecutive days. A sovereign state is regarded as failing to provide public order in a particular area when a) an organized challenger, usually a rebel group or regional government, effectively controls that area; b) lawlessness pervades in that area; or c) both. A state is considered sovereign when it is granted membership in the U.N. General Assembly.

If you’re interested, you can find more specific language on how to assess challenger control and lawlessness in the coding guidelines.
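To make that decision rule concrete, here is a minimal sketch of how it might be encoded. The thresholds come straight from the definition above, but the area-level judgments about challenger control and lawlessness are placeholders for what a human coder would supply after consulting the guidelines.

```python
from dataclasses import dataclass

@dataclass
class AreaAssessment:
    """A coder's judgment about one area (the capital or a share of national territory)."""
    share_of_territory: float      # fraction of the state's territory, 0..1
    is_capital: bool
    challenger_control: bool       # an organized rival is the de facto authority there
    pervasive_lawlessness: bool
    days_sustained: int            # consecutive days the condition has held

def state_collapsed(areas, is_un_member=True):
    """Collapse: a UN-member state fails to provide public order in at least half
    of its territory, or in its capital city, for at least 30 consecutive days."""
    if not is_un_member:
        return False  # the definition applies only to sovereign (UN-member) states
    failed_share = 0.0
    capital_failed = False
    for a in areas:
        order_failed = (a.challenger_control or a.pervasive_lawlessness) and a.days_sustained >= 30
        if order_failed:
            failed_share += a.share_of_territory
            capital_failed = capital_failed or a.is_capital
    return failed_share >= 0.5 or capital_failed

# Hypothetical assessment: rebels have held 60 percent of the territory for two months.
areas = [AreaAssessment(0.6, False, True, False, 60)]
print(state_collapsed(areas))  # -> True
```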

Applying this definition to the world today, I see only a handful of states that are clearly collapsed and just a few more that might be. In the “clearly collapsed” category, I would put Libya, Mali, Somalia, and Yemen. In the “probably collapsed” category, I would put Afghanistan and Democratic Republic of Congo. Those judgments are based on cursory knowledge of those cases, however, and I would be interested to hear what others think about where this label does (Chad? Haiti? Ivory Coast? Sudan? South Sudan?) or does not (Afghanistan? Mali?) fit. Either way, the list is much shorter and, I believe, more coherent than the 20-country sets the Failed States Index identifies as “critical” and “in danger.”

More important, this is a topic that still greatly interests me, so I would love to have this conceptual work critiqued, put to use, or both. Fire away!

Assessing Coup Risk in 2012

Which countries around the world are most likely to see coup activity in 2012?

This question popped back into my mind this morning when I read a new post on Daniel Solomon’s Securing Rights blog about widening schisms in Sudan’s armed forces that could lead to a coup attempt. There’s also been a lot of talk in early 2012 about the likelihood of a coup in Syria, where the financial and social costs of repression, sanctions, and now civil war continue to mount. Meanwhile, Pakistan seems to have dodged a coup bullet early this year after a tense showdown between its elected civilian government and military leaders. I even saw one story–unsubstantiated, but from a reputable source–about a possible foiled coup plot in China around New Year’s Day. These are all countries where a coup d’etat would shake up regional politics, and coups in some of those countries could substantially alter the direction of armed conflicts in which government forces are committing mass atrocities, to name just two of the possible repercussions.

To give a statistical answer to the question of coup risk in 2012, I’ve decided to dust off a couple of coup-forecasting algorithms I developed in early 2011 and gin up some numbers. Both of these algorithms…

  1. Take the values of numerous indicators identified by statistical modeling as useful predictors of coup activity (see the end of this post for details);
  2. Apply weights derived from that modeling to those indicators; and then
  3. Sum and transform the results to spit out a score we can interpret as an estimate of the probability that a coup event will occur some time in 2012 (a stylized sketch of these steps appears below).
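That sketch looks like this. The indicator names and coefficient values are invented for illustration; they are not the actual weights produced by the model averaging described in the next paragraph. The structure (weighted sum, logistic transform, average of two models) is the part that matters.

```python
import math

def logistic(x):
    return 1.0 / (1.0 + math.exp(-x))

def model_score(indicators, weights, intercept):
    """Steps 1-3: weight the indicators, sum them, and transform the sum into a probability."""
    return logistic(intercept + sum(weights[k] * indicators[k] for k in weights))

# Invented coefficients, for illustration only.
successful_coup_model = dict(intercept=-4.5,
                             weights={"log_rel_infant_mortality": 0.8,
                                      "recent_coup_activity": 1.2,
                                      "post_cold_war": -0.9})
any_attempt_model = dict(intercept=-4.0,
                         weights={"log_rel_infant_mortality": 0.7,
                                  "recent_coup_activity": 1.0,
                                  "post_cold_war": -0.8})

# Invented indicator values for one country-year.
indicators = {"log_rel_infant_mortality": 1.1, "recent_coup_activity": 1.0, "post_cold_war": 1.0}

# The 2012 risk estimate is just the average of the two models' outputs.
p1 = model_score(indicators, successful_coup_model["weights"], successful_coup_model["intercept"])
p2 = model_score(indicators, any_attempt_model["weights"], any_attempt_model["intercept"])
print((p1 + p2) / 2)
```

In the real forecasts, the weights come from the model-averaging exercise described next, and the indicator sets are the ones listed at the end of this post.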

Both algorithms are products of Bayesian model averaging (BMA) applied to logistic regression models of annual coup activity (any vs. none) in countries worldwide over the past few decades. One of the modeling exercises, done for a private-sector client, looked only at successful coups using data compiled by the Center for Systemic Peace. The other modeling exercise was done for a workshop at the Council on Foreign Relations on forecasting political instability; this one looked at all coup attempts, successful or failed, using data compiled by Jonathan Powell and Clayton Thyne. For the 2012 coup risk assessments, I’ve simply averaged the output from the two.

The dot plot below shows the estimated coup risk in 2012 for the 40 countries with the highest values (i.e., greatest risk). The horizontal axis is scaled for probabilities ranging from zero to 1; if you’re more comfortable thinking in percentages, just multiply the number by 100. As usual with all statistical forecasts of rare events, the estimates are mostly close to zero. (On average, only a handful of coup attempts occur worldwide each year, and they’ve become even rarer since the end of the Cold War; see this earlier post for details.) For a variety of reasons, the estimates are also less precise than those dots might make them seem, so small differences should be taken with a grain of salt. Even so, the results of this exercise should offer plausible estimates of the chances that we’ll see coup activity in these countries some time in 2012.

Here are a few things that stand out for me in those results.

  • My forecast supports Daniel’s analysis that the risk of a coup attempt in Sudan in 2012 is relatively high. It ranks 11th on the global list, making it one of the most likely candidates for coup activity this year.
  • Surprising to me, Pakistan barely cracks into the top 40, landing at 38th in the company of Iraq, Cambodia, and Senegal. Those countries all rank higher than 120 others, but the distance between their estimated risk and the risk in most other countries is within the realm of statistical noise. Off the top of my head, I would have identified Pakistan and Iraq as relatively vulnerable countries, and I would not have thought of Cambodia or Senegal as particularly coup-prone cases.
  • Unsurprising to me, China doesn’t even make the top 40. Perhaps there has been some erosion in civilian control in recent years, as Gordon Chang discusses, but it still doesn’t much resemble the countries that have seen full-blown coup attempts in the past few decades.
  • Interestingly, Syria doesn’t show up in the top 40, either. To make sense of this forecast, it’s important to note that assigning a low probability to the occurrence of a coup attempt in Syria in 2012 isn’t the same thing as a prediction that President Bashar al-Assad or his regime will survive the year. It might seem like semantic hair-splitting, but the definitions of coups used to construct the data on which these forecasts are based do not include cases where national leaders resign under pressure or are toppled by rebel groups. So the Syria forecast suggests only that Assad is unlikely to be overthrown by his own security forces. As it happens, my analysis of countries most likely to see democratic transitions in 2012 put Syria in the top 10 on that list.
  • Two of the countries near the top of that list–Guinea and Democratic Republic of Congo–are the ones where the Center for Systemic Peace’s Monty Marshall tells me he saw coup activity meeting his definition in 2011. Those recent coup attempts are influencing the 2012 forecasts, but both countries were also near the top of the 2011 risk list. This boosts my confidence in the reliability of these assessments.

I hope there’s a lot more on (or off) that list that interests readers, and I’d be happy to hear your thoughts on the results in the Comments section. For now, though, I’m going to wrap up this post by providing more information on what those forecasts take into account. The algorithm for successful coups uses just four risk factors, one of which is really just an adjustment to the intercept.

  • Infant mortality rate (relative to annual global median, logged): higher risk in countries with higher rates.
  • Degree of democracy (Polity score, quadratic): higher risk for countries in the mid-range of the 21-point scale.
  • Recent coup activity (yes or no): higher risk if any activity in the past five years.
  • Post-Cold War period: lower risk since 1989.

The algorithm for any coup attempts, successful or failed, uses the following ten risk factors, including all four of the ones used to forecast successful coups.

  • Infant mortality rate (relative to annual global median, logged): higher risk in countries with higher rates.
  • Recent coup activity (count of past five years with any, plus one and logged): higher risk with more activity.
  • Post-Cold War period: lower risk since 1989.
  • Popular uprisings in region (count of countries with any, plus one and logged): higher risk with more of them.
  • Insurgencies in region (count of countries with any, plus one and logged): higher risk with more of them.
  • Economic growth (year-to-year change in GDP per capita): higher risk with slower growth.
  • Regime durability (time since last abrupt change in Polity score, plus one and logged): lower risk with longer time.
  • Ongoing insurgency (yes or no): higher risk if yes.
  • Ongoing civil resistance campaign (yes or no): higher risk if yes.
  • Signatory to 1st Optional Protocol of the UN’s International Covenant on Civil and Political Rights (yes or no): lower risk if yes.
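For anyone puzzling over the parenthetical notes in these lists, here is a minimal sketch of the transformations they describe, applied to made-up input values. The variable names are mine, not the ones used in the underlying data sets.

```python
import math

def transform_features(raw, global_median_imr):
    """Apply the transformations noted in the risk-factor lists above (illustrative values only)."""
    return {
        # infant mortality relative to the annual global median, logged
        "log_rel_infant_mortality": math.log(raw["infant_mortality"] / global_median_imr),
        # degree of democracy enters as a quadratic in the Polity score
        "polity": raw["polity"],
        "polity_squared": raw["polity"] ** 2,
        # counts enter as log(count + 1)
        "log_recent_coup_years": math.log(raw["coup_years_past5"] + 1),
        "log_regional_uprisings": math.log(raw["regional_uprisings"] + 1),
        "log_regime_durability": math.log(raw["years_since_polity_change"] + 1),
        # simple indicator variables
        "post_cold_war": 1 if raw["year"] >= 1989 else 0,
        "ongoing_insurgency": int(raw["ongoing_insurgency"]),
    }

raw = dict(infant_mortality=78.0, polity=2, coup_years_past5=1, regional_uprisings=3,
           years_since_polity_change=4, year=2012, ongoing_insurgency=True)
print(transform_features(raw, global_median_imr=30.0))
```

None of this is the production code behind the forecasts; it is only meant to make the feature construction easier to read.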