Alarmed By Iraq

Iraq’s long-running civil war has spread and intensified again over the past year, and the government’s fight against a swelling Sunni insurgency now threatens to devolve into the sort of indiscriminate reprisals that could produce a new episode of state-led mass killing there.

The idea that Iraq could suffer a new wave of mass atrocities at the hands of state security forces or sectarian militias collaborating with them is not far-fetched. According to statistical risk assessments produced for our atrocities early-warning project (here), Iraq is one of the 10 countries worldwide most susceptible to an onset of state-led mass killing, bracketed by places like Syria, Sudan, and the Central African Republic, where large-scale atrocities and even genocide are already underway.

Of course, Iraq is already suffering mass atrocities of its own at the hands of insurgent groups who routinely kill large numbers of civilians in indiscriminate attacks, every one of which would stun American or European publics if it happened there. According to the widely respected Iraq Body Count project, the pace of civilian killings in Iraq accelerated sharply in July 2013 after a several-year lull of sorts in which “only” a few hundred civilians were dying from violence each month. Since the middle of last year, the civilian toll has averaged more than 1,000 fatalities per month. That’s well off the pace of 2006-2007, the peak period of civilian casualties under Coalition occupation, but it’s still an astonishing level of violence.

Monthly Counts of Civilian Deaths from Violence in Iraq (Source: Iraq Body Count)

What seems to be increasing now is the risk of additional atrocities perpetrated by the very government that is supposed to be securing civilians against those kinds of attacks. A Sunni insurgency is gaining steam, and the government, in turn, is ratcheting up its efforts to quash the growing threat to its power in worrisome ways. A recent Reuters story summarized the current situation:

In Buhriz and other villages and towns encircling the capital, a pitched battle is underway between the emboldened Islamic State of Iraq and the Levant, the extremist Sunni group that has led a brutal insurgency around Baghdad for more than a year, and Iraqi security forces, who in recent months have employed Shi’ite militias as shock troops.

And this anecdote from the same Reuters story shows how that battle is sometimes playing out:

The Sunni militants who seized the riverside town of Buhriz late last month stayed for several hours. The next morning, after the Sunnis had left, Iraqi security forces and dozens of Shi’ite militia fighters arrived and marched from home to home in search of insurgents and sympathizers in this rural community, dotted by date palms and orange groves.

According to accounts by Shi’ite tribal leaders, two eyewitnesses and politicians, what happened next was brutal.

“There were men in civilian clothes on motorcycles shouting ‘Ali is on your side’,” one man said, referring to a key figure in Shi’ite tradition. “People started fleeing their homes, leaving behind the elders and young men and those who refused to leave. The militias then stormed the houses. They pulled out the young men and summarily executed them.”

Sadly, this escalatory spiral of indiscriminate violence is not uncommon in civil wars. Ben Valentino, a collaborator of mine in the development of this atrocities early-warning project, has written extensively on this topic (see especially here, here, and here). As Ben explained to me via email,

The relationship between counter-insurgency and mass violence against civilians is one of the most well-established findings in the social science literature on political violence. Not all counter-insurgency campaigns lead to mass killing, but when insurgent groups become large and effective enough to seriously threaten the government’s hold on power and when the rebels draw predominantly on local civilians for support, the risks of mass killing are very high. Usually, large-scale violence against civilians is neither the first nor the only tactic that governments use to defeat insurgencies. They may try to focus operations primarily against armed insurgents, or even offer positive incentives to civilians who collaborate with the government. But when less violent methods fail, the temptation to target civilians in the effort to defeat the rebels increases.

Right now, it’s hard to see what’s going to halt or reverse this trend in Iraq. “Things can get much worse from where we are, and more than likely they will,” Daniel Serwer told IRIN News for a story on Iraq’s escalating conflict (here). Other observers quoted in the same story seemed to think that conflict fatigue would keep the conflict from ballooning further, but that hope is hard to square with the escalation of violence that has already occurred over the past year and the fact that Iraq’s civil war never really ended.

In theory, elections are supposed to be a brake on this process, giving rival factions opportunities to compete for power and influence state policy in nonviolent ways. In practice, this often isn’t the case. Instead, Iraq appears to be following the more conventional path in which election winners focus on consolidating their own power instead of governing well, and excluded factions seek other means to advance their interests. Here’s part of how the New York Times set the scene for this week’s elections, which incumbent prime minister Nouri al-Maliki’s coalition is apparently struggling to win:

American intelligence assessments have found that Mr. Maliki’s re-election could increase sectarian tensions and even raise the odds of a civil war, citing his accumulation of power, his failure to compromise with other Iraqi factions—Sunni or Kurd—and his military failures against Islamic extremists. On his watch, Iraq’s American-trained military has been accused by rights groups of serious abuses as it cracks down on militants and opponents of Mr. Maliki’s government, including torture, indiscriminate roundups of Sunnis and demands of bribes to release detainees.

Because Iraq ranked so high in our last statistical risk assessments, we posted a question about it a few months ago on our “wisdom of (expert) crowds” forecasting system. Our pool of forecasters is still relatively small—89 as I write this—but the ones who have weighed in on this topic so far have put it in what I see as a middle tier of concern, where the risk is seen as substantial but not imminent or inevitable. Since January, the pool’s estimated probability of an onset of state-led mass killing in Iraq in 2014 has hovered around 20 percent, alongside countries like Pakistan (23 percent), Bangladesh (20 percent), and Burundi (19 percent) but well behind South Sudan (above 80 percent since December) and Myanmar (43 percent for the risk of a mass killing targeting the Rohingya in particular).

Notably, though, the estimate for Iraq has ticked up a few notches in the past few days to 27 percent as forecasters (including me) have read and discussed some of the pre-election reports mentioned here. I think we are on to something that deserves more scrutiny than it appears to be getting.

Relative Risks of State-Led Mass Killing Onset in 2014: Results from a Wiki Survey

In early December, as part of our ongoing work for the Holocaust Museum’s Center for the Prevention of Genocide, Ben Valentino and I launched a wiki survey to help assess risks of state-led mass killing onsets in 2014 (here).

The survey is now closed and the results are in. Here, according to our self-selected crowd on five continents and the nearly 5,000 pairwise votes it cast, is a map of how the world looks right now on this score. The darker the shade of gray, the greater the relative risk that in 2014 we will see the start of an episode of mass killing in which the deliberate actions of state agents or other groups acting at their behest result in the deaths of at least 1,000 noncombatant civilians from a discrete group over a period of a year or less.

[Map: relative risk of state-led mass killing onset in 2014, by country; darker shading indicates higher relative risk]

Smaller countries are hard to find on that map, and it’s difficult to compare colors across regions, so here is a dot plot of the same data in rank order. Countries with red dots are ones that had ongoing episodes of state-led mass killing at the end of 2013: DRC, Egypt, Myanmar, Nigeria, North Korea, Sudan, and Syria. It’s possible that these countries will experience additional onsets in 2014, but we suspect that some of our respondents conflated the risk of a new onset with the presence or intensity of an ongoing one. Also, there’s an ongoing episode in CAR that was arguably state-led for a time in 2013, but the Séléka militias no longer appear to be acting at the behest of the nominal government, so we didn’t color that dot. And, of course, there are at least a few ongoing episodes of mass killing being perpetrated by non-state actors (see this recent post for some ideas), but that’s not what we asked our crowd to consider in this survey.

[Dot plot: countries ranked by relative risk of state-led mass killing onset in 2014; red dots mark countries with ongoing state-led episodes]

It is very important to understand that the scores being mapped and plotted here are not probabilities of mass-killing onset. Instead, they are model-based estimates of the probability that the country in question is at greater risk than any other country chosen at random. In other words, these scores tell us which countries our crowd thinks we should worry about more, not how likely our crowd thinks a mass-killing onset is.
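To make that interpretation concrete, here is a minimal sketch of how a crude version of such scores could be computed from the vote-level data as empirical win rates. The column names (“winner” and “loser”) are assumptions for illustration, and All Our Ideas actually fits a statistical model to the votes rather than tallying them, so treat this as an approximation of the idea, not the platform’s algorithm.

```r
# Approximate each country's score as its empirical win rate: the share
# of pairwise matchups it won, a rough stand-in for the model-based
# probability of beating a randomly chosen rival. Column names assumed.
votes <- read.csv("votes.csv", stringsAsFactors = FALSE)

appearances <- table(c(votes$winner, votes$loser))  # matchups per country
wins <- table(votes$winner)                         # matchups won

scores <- as.numeric(wins[names(appearances)]) / as.numeric(appearances)
names(scores) <- names(appearances)
scores[is.na(scores)] <- 0  # countries that never won a matchup

head(sort(scores, decreasing = TRUE), 10)  # the crowd's top concerns
```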

We think the results of this survey are useful in their own right, but we also plan to compare them to, and maybe even combine them with, other forecasts of mass killing onsets as part of the public early-warning system we expect to launch later this year.

In the meantime, if you’re interested in tinkering with the scores and our plots of them, you can find the code I used to make the map and dot plot on GitHub (here) and the data in .csv format on my Google Drive (here). If you have better ideas on how to visualize this information, please let us know and share your code.

UPDATE: Bad social scientist! With a tweet, Alex Hanna reminded me that I really need to say more about the survey method and respondents. So:

We used All Our Ideas to conduct this survey, and we embedded that survey in a blog post that defined our terms and explained the process. The blog post was published on December 1, and we publicized it through a few channels, including: a note to participants in a password-protected opinion pool we’re running to forecast various mass atrocities-related events; a posting to a Conflict Research group on Facebook; an email to the president of the American Association of Genocide Scholars asking him to announce it on their listserv; and a few tweets from my Twitter account at the beginning and end of the month. Some of those tweets were retweeted, and I saw a few other people post or tweet their own links to the blog post or survey as well.

As for Alex’s specific question about who comprised our crowd, the short answer is that we don’t and can’t know. Participation in All Our Ideas surveys is anonymous, and our blog post was not private. From the vote-level data (here), I can see that we ended the month with 4,929 valid votes from 147 unique voting sessions. I know for a fact that some people voted in more than one session—I cast a small number of votes on a few occasions, and I know at least one colleague voted more than once—so the number of people who participated was some unknown number smaller than 147 who found their way to the survey through those postings and tweets.
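For what it’s worth, those tallies are easy to reproduce from the vote-level file. Here is a minimal sketch, assuming the export has one row per valid vote and a column named session_id; that column name is a guess about the schema, not its documentation.

```r
# Count valid votes and unique voting sessions in the vote-level export.
# 'session_id' is an assumed column name; adjust to the actual file.
votes <- read.csv("votes.csv", stringsAsFactors = FALSE)
nrow(votes)                       # total valid votes (4,929 here)
length(unique(votes$session_id))  # unique voting sessions (147 here)
```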

Mass Killing in Egypt

Let’s define a state-led mass killing as an episode in which state security forces or groups acting at their behest deliberately kill at least 1,000 noncombatant civilians from a discrete group in a relatively short period of time—weeks, months, or maybe even several years. This is a paraphrased version of the definition my colleague Ben Valentino developed for a U.S. government-funded research project, so using it allows us to identify and compare many episodes over time, as I did in another recent post.

Since World War II, nearly all of the state-led mass killings that have occurred around the world have followed one of three basic scenarios, all of them involving apparent threats to rulers’ power.

First and most common, state security forces fighting an insurgency or locked in a civil war kill large numbers of civilians whom they accuse of supporting their rivals, or sometimes just kill indiscriminately. The genocide in Guatemala is an archetypal example of this scenario. In some cases, like Rwanda, the state also enlists militias or even civilians to assist in that killing.

Second, rulers confronting budding threats to their power—usually a nonviolent popular uprising or coup plot—violently repress and attack their challengers in an attempt to quash the apparent threat. The anti-communist massacres in Indonesia in 1965-1966 fit this pattern. In rare cases, like North Korea today, just the possibility of such a threat suffices to draw the state into killing large numbers of civilians. More often, state repression of nonviolent uprisings succeeds in quashing the challenge with fewer than 1,000 civilian deaths, as happened in China in 1989, Uzbekistan in 2005, and Burma in 2007.

Third, rulers who have recently seized power by coup or revolution sometimes kill large numbers of civilian supporters of the faction they have just replaced as part of their efforts to consolidate their power. The mass killings carried out by the Khmer Rouge in Cambodia in the late 1970s are probably the most extreme example of this scenario, but Argentina’s “dirty war” and the long-running political purges that began in several East European countries after World War II also fit the pattern.

What happened in Egypt yesterday looks like a slide into the third scenario. Weeks after a military coup toppled Mohamed Morsi, state security forces violently assaulted crowds using nonviolent action to protest the coup and demand Morsi’s restoration to the presidency. The death toll from yesterday’s ruthless repression has already surpassed 500 and seems likely to rise further as more of the wounded die and security forces continue to repress further attempts at resistance and defiance. What’s more, the atrocities of the past 24 hours come on top of the killings of scores if not hundreds of Brotherhood supporters around the country over the past several weeks (see this spreadsheet maintained by The Guardian for details).

One of the many rationalizations offered for the July 3 coup was the argument that the Muslim Brotherhood had used violence to suppress its political rivals during and after mass protests against Morsi last December. People were right to challenge the Muslim Brotherhood over that thuggery, which was arguably a nascent version of the second scenario described above. In calling on the military to deliver them from that threat, however, some of those challengers seem to have struck a Faustian bargain that is now producing killings on a much grander scale.

Do Elections Trigger Mass Atrocities?

Kenya plans to hold general elections in early March this year, and many observers fear those contests will spur a reprise of the mass violence that swept parts of that country after balloting in December 2007. The Sentinel Project for Genocide Prevention says Kenya is at “high risk” of genocide in 2013, and a recent contingency-planning memo from Joel Barkan for the Council on Foreign Relations asserts that “there will almost certainly be further incidents of violence in the run-up to the 2013 elections.” As a recent Africa Initiative backgrounder points out, this violence has roots that stretch much deeper than the 2007 elections, but the fear that mass violence will flare again around this year’s balloting seems well founded.

All of which got me wondering: is this a generic problem? We know that election-related violence is a real and multifaceted thing. We also have works by Jack Snyder and Amy Chua, among others, arguing that democratization actually makes some countries more susceptible to ethnic and nationalist conflict rather than less, as democracy promoters often claim. What I’m wondering, though—as someone who has long studied democratization and is currently working on tools to forecast genocide and other forms of mass killing—is whether or not elections substantially increase the risk of mass atrocities in particular, where “mass atrocities” means the deliberate killing of large numbers of unarmed civilians for apparently political ends.

Best I can tell, the short answer is no. After applying a few different statistical-modeling strategies to a few measures of atrocities, I see little evidence that elections commonly trigger the onset or intensification of this type of political violence. The absence of evidence isn’t the same thing as evidence of absence, but these results convince me that national elections aren’t a major risk factor for mass killing.

If you’re interested in the technical details, here’s what I did and what I found:

My first cut at the problem looked for a connection between national elections and the onset of state-sponsored mass killings, defined as “a period of sustained violence” in which “the actions of state agents result in the intentional death of at least 1,000 noncombatants from a discrete group.” That latter definition comes from work Ben Valentino and I did for my old research program, the Political Instability Task Force, and it restricts the analysis to episodes of large-scale killing by states or other groups acting at their behest. Defined as such, mass killings are akin to genocide in their scale, and there have only been about 110 of them since 1945.

So, do national elections help trigger this type of mass killing? To try to answer this question, I thought of elections as a kind of experimental “treatment” that some country-years get and others don’t. I used the National Elections Across Democracy and Autocracy (NELDA) data set to identify country-years since 1945 with national elections for chief executive or legislature or both, regardless of how competitive those elections were. I then used the MatchIt package in R to set up a comparison of country-years with and without elections within 107 groups that matched exactly on several other variables identified by prior research as risk factors for mass-killing onset: autocracy vs. democracy, exclusionary elite ideology (yes/no), salient elite ethnicity (yes/no), ongoing armed conflict (yes/no), any mass killing since 1945 (yes/no), and Cold War vs. post-Cold War period. Finally, I used conditional logistic regression to estimate the difference in risk between election and non-election years within those groups.
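In code, that pipeline looks roughly like the sketch below. The variable names are hypothetical stand-ins for my actual data set, so read this as a schematic of the approach rather than the script itself.

```r
library(MatchIt)   # for exact matching
library(survival)  # for conditional logistic regression via clogit()

# Exact matching: group country-years that agree on every listed risk
# factor, so election and non-election years get compared within groups.
# All variable names here are hypothetical stand-ins.
m <- matchit(election ~ autocracy + excl_ideology + salient_ethnicity +
               armed_conflict + prior_mass_killing + post_cold_war,
             data = country_years, method = "exact")
md <- match.data(m)  # matched data, with a 'subclass' id for each group

# Conditional logistic regression: the within-group difference in the
# odds of mass-killing onset between election and non-election years.
fit <- clogit(mk_onset ~ election + strata(subclass), data = md)
exp(coef(fit))     # odds ratio; the analysis below reports ~0.8
exp(confint(fit))  # 95% confidence interval
```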

The results? In my data, mass-killing episodes were 80% as likely to begin in election years as in non-election years, other things being equal. The 95% confidence interval for this association was wide (45% to 145%), but the result suggests that, if anything, countries are actually somewhat less prone to suffer onsets of mass killing in election years than in non-election years.

I wondered if the risk might differ by regime type, so I reran the analysis on the subset of cases that were plausibly democratic. The estimate was effectively unchanged (80%, CI of 35% to 185%). Then I thought it might be a post-Cold War thing and reran the analysis using only country-years from 1991 forward. The estimate moved, but in the opposite of the anticipated direction. Now it was down to 60%, with a CI of 17% to 215%.

These estimates got me worried that something had gone wacky in my data, so I reran the matching and conditional logistic regression using coup attempts (successful or failed) instead of elections as the “treatment” of interest. Several theorists have identified threats to incumbents’ power as a cause of mass atrocities, and coups are a visible and discrete manifestation of such threats. My analysis strongly confirmed this view, indicating that mass-killing episodes were nearly five times as likely to start in years with coup attempts as years without, other things being equal. More important for present purposes, this result increased my confidence in the reliability of my earlier finding on elections, as did the similar estimates I got from models with country fixed effects, country-specific intercepts (a.k.a. random effects), and interaction terms that allowed the effects of elections to vary across regime types and historical eras.

Then I wondered if this negative finding wasn’t an artifact of the measure I was using for mass atrocities. The 1,000-death threshold for “mass killing” is quite high, and the restriction to killings by states or their agents ignores situations of grave concern in which rebel groups or other non-state actors are the ones doing the murdering. Maybe the danger of election years would be clearer if I looked at atrocities on a smaller scale and ones perpetrated by non-state actors.

To do this, I took the UCDP One-Sided Violence Dataset v1.4 and wrote an R script that aggregated its values for specific conflicts into annual death counts by country and perpetrator (government or non-government). Then I used R’s ‘pscl’ package to estimate zero-inflated negative binomial regression (ZINB) models that treat the death counts as the observable results of a two-stage process: one that determines whether or not a country has any one-sided killing in a particular year, and then another that determines how many deaths occur, conditional on there being any. In addition to my indicator for election years, these models included all the risk factors used in the earlier matching exercise, plus population size and the logged counts of deaths from one-sided violence by government and non-government actors (separately) in the previous year. All of these variables were included in the logistic regression “hurdle” model; only elections, population size, and the lagged death counts were included in the conditional count models.
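A stripped-down version of that specification, again with hypothetical variable names, might look like the sketch below. One caveat: zeroinfl() parameterizes the binary stage as the probability of an excess zero, so its coefficients run in the opposite direction from a logit for “any killing”; pscl’s hurdle() function is the strict two-stage alternative to what I’m loosely calling the hurdle model here.

```r
library(pscl)

# Zero-inflated negative binomial: in the formula, the part before '|'
# is the count model and the part after is the zero model. All variable
# names are hypothetical stand-ins for the real country-year data.
zinb <- zeroinfl(
  osv_deaths ~ election + log(pop) + log1p(osv_gov_lag) + log1p(osv_ngo_lag) |
    election + log(pop) + log1p(osv_gov_lag) + log1p(osv_ngo_lag) +
    autocracy + excl_ideology + salient_ethnicity + armed_conflict +
    prior_mass_killing + post_cold_war,
  data = country_years, dist = "negbin")

summary(zinb)  # zero-stage and count-stage coefficients
```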

To my surprise once again, the results suggested that, if anything, the risk of mass atrocities is actually lower in years with national elections. In the model of government-perpetrated violence, the coefficient for the election indicator in the hurdle model was barely distinguishable from zero (0.04), and the association in the count portion was modestly negative (-0.20, s.e. of 0.20). In the model of violence perpetrated by other groups, the effect in the hurdle portion was modestly negative (-0.25, s.e. of 0.20), and the effect in the count portion was decidedly negative (-0.82, s.e. of 0.19). When I reran the models with separate indicators for executive and legislative elections, the results bounced around a little bit, but the basic patterns remained unchanged. None of the models showed a substantial, positive association between either type of election and the occurrence or scale of one-sided violence against civilians.

In light of the weakness of the observed effects, the noisiness of the measures employed, and my prior beliefs about the effects of elections on risks of mass killing—shaped in part by the Kenyan case I discussed at the start of this post—I’m not quite ready to assert that election years actually reduce the risk of mass atrocities. What I am more comfortable doing, however, is ignoring elections in statistical models meant to forecast mass atrocities across large numbers of countries.

If you’re interested in replicating or tweaking this analysis, please email me at ulfelder@gmail.com, and I’ll be happy to send you the data and R scripts (one to get country-year summaries of the UCDP data, another to run the matching and modeling) I used to do it. [UPDATE: I’ve put the scripts and data in a publicly accessible folder on Google Drive. If you try that link and it doesn’t work, please let me know.] Ideally, I would cut out the middleman by putting them in a GitHub repository, but I haven’t quite figured out how to do that yet. If you’re in the DC area and interested in getting paid to walk me through that process, please let me know.

Assessing and Improving Expert Forecasts

What follows is a guest post by Kelsey Woerner, a soon-to-graduate senior at Dartmouth College double-majoring in government and psychology. She completed the research described below as part of her senior honors thesis in Dartmouth’s Department of Government under the guidance of her advisor (and my colleague), Ben Valentino. Her thesis is a terrific piece of research on judgment-based forecasting, and I’m excited to have her share her findings here.

The title of this blog alludes to the unfortunate state of expert political forecasting today.  The only research that has systematically assessed the accuracy of expert political judgment leads us to the sorry conclusion that experts often perform little better than chance. We might as well be “dart-throwing chimps.”

Lots of people have lamented the state of forecasting, but no research to date has asked whether it’s possible to improve the accuracy of our predictions and systematically compared strategies for doing so. That’s what I aimed to do in the project I describe in this post. The findings offer a beacon of hope on the bleak horizon of the forecasting frontier. It turns out that two simple strategies improve accuracy by 10 to 15 percent.

In an original month-long, four-wave panel experiment, I gave forecasters different types of information and watched to see how their accuracy changed as a result. In particular, I looked at the effects of providing forecasters with information about the historical frequencies (base rates) of the phenomena they are predicting and simple feedback about their performance in previous waves. For each of four weeks, participants made probabilistic forecasts in four domains relating to domestic and international politics and economics: the Dow Jones Industrial Average stock index, the national unemployment rate, Obama’s presidential job approval ratings, and the price of crude oil. I randomly assigned 308 participants to one of three groups. The base rate group received information about how frequently changes of various magnitude in these variables occurred in the previous year; the performance feedback group received information about how far off their predictions were the previous week; the control group received no extra information. I also recruited an “expert” subgroup of people with backgrounds in finance and economics in order to look at the effects of expertise on accuracy of Dow predictions, and I distributed these 72 experts evenly among the three groups.

The results are very encouraging. Both strategies significantly improved forecasting accuracy. On average, participants who received base rate or performance feedback information were 10 to 15 percent more accurate than those who did not. At the end of this post, I’ve included graphs and tables of these results, as well as some extra explanation.

A few other key findings might also be of interest to readers.  Forecasters who received either type of information were able to “learn,” or improve their own accuracy, over the course of the month. In addition, the experiment tells us about how forecasters used the information they received. Base-rate information tended to moderate participants’ forecasts, while feedback produced more aggressive forecasts. Experts predicted with slightly better accuracy than nonexperts in their domain of expertise (the Dow), and they appeared to use treatment information more carefully and effectively than nonexperts.

Of course, these findings would be mildly interesting but not very relevant or useful if expert political forecasters were already employing these kinds of information in their forecasting. In order to know whether these strategies might help forecasters in the real world, I needed to assess whether they already use feedback and base rates, either formally or informally, when making predictions. With a combination of case studies and surveys, I looked at whether expert political analysts: 1) make falsifiable predictions that can be evaluated for accuracy and used for feedback, and 2) have a firm understanding of base rate information.

In case studies of two premier NGOs that aim to warn policy makers of political instability, International Crisis Group (ICG) and Fund for Peace (FFP), I randomly sampled 20 analytical products from each organization, counted predictions, and scored them according to a set of falsifiability criteria. Not one of the 89 ICG predictions or 27 FFP predictions provided the three pieces of information necessary to evaluate accuracy: 1) a clearly defined event, 2) a clearly defined time frame, and 3) an assessment of the probability of the given event in the given time frame. Expert political forecasters are making predictions in their analytical reports, but these predictions aren’t falsifiable and therefore can’t generate feedback.

I also used two small-scale surveys of analysts at comparable NGOs to assess whether these analysts demonstrate an understanding of the base rates for the types of events they often predict. Analysts were asked to estimate the average annual onsets of rare political events such as coups, civil wars, and episodes of mass killing, and to assess the likelihood of similar events over the course of the coming year in at-risk countries. These experts significantly over- or under-estimated base rates and consistently assigned extremely high probabilities to very unlikely outcomes.

These case studies and surveys indicate that expert political analysts today don’t use feedback and base rate information when they formulate forecasts. Combined with the experimental findings described above, this strongly suggests that analysts could significantly improve their accuracy with these straightforward and inexpensive strategies. For policy makers receiving daily briefings and analysts producing regular warning reports, the promise of improved accuracy holds considerable value.

The greatest impediment to improving predictive judgment is the reluctance of forecasters to make falsifiable predictions and systematically assess performance. This research suggests new ways to encourage forecasters to assess their own performance. It shows that tracking performance is in forecasters’ own best interest because it offers the very tangible deliverable of substantially improved forecasting accuracy.

Let’s hope for the day when one’s predictive judgment is valued because it is supported by an established track record. The unfortunate reality is that such a day is far away, or at least farther than we would like (and I hope that this prediction is proven wrong!). We don’t need to sit idly by, however, and simply wait for that day to come. The strategies explored in this project set us at the beginning of a long but promising path towards minimizing controllable forecasting error and thereby improving our predictive judgment.

Supplemental Materials

The following graphs and tables show the significant and substantive effects of the base rate and feedback treatments. I calculated accuracy scores according to two scoring rules: a quadratic score (often called the Brier score) and a linear score. Although most of the forecasting literature uses quadratic scoring because it is a proper rule that rewards people for predicting according to their true beliefs, I used both rules for a few reasons. First, my feedback treatment group received feedback according to a linear rule, which is easier to understand from the forecaster’s perspective, so it made sense to evaluate that group by the same rule. Second, both rules almost always indicate the same direction for a trend, but sometimes results are significant under one rule and not the other. This is evident in the graphs below.
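Before turning to those graphs, here is a minimal sketch of how each rule can score a probabilistic forecast over mutually exclusive outcome bins; my actual implementation may differ in details like how the outcomes were binned.

```r
# Both rules score a probability vector p over mutually exclusive bins
# against the bin that actually occurred; lower scores mean more accurate.

# Quadratic (Brier) score: squared error between the forecast vector and
# the indicator vector for the realized outcome. A proper scoring rule.
brier_score <- function(p, outcome) {
  o <- as.numeric(seq_along(p) == outcome)
  sum((p - o)^2)
}

# Linear score: the probability mass placed away from the realized
# outcome. Easier to explain to forecasters, but not a proper rule.
linear_score <- function(p, outcome) {
  1 - p[outcome]
}

p <- c(0.1, 0.6, 0.3)        # example forecast over three bins
brier_score(p, outcome = 2)  # (0.1)^2 + (0.4)^2 + (0.3)^2 = 0.26
linear_score(p, outcome = 2) # 1 - 0.6 = 0.40
```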

Some readers might also be curious to see the breakdown of these results by domain. The following graphs include this information. They illustrate clear improvements in the treatment groups in the Oil and Presidential Approval (Pres) domains, as well as the ambiguous effect of the treatments in the Unemployment (Unem) and Dow domains. In those latter domains, the more volatile and difficult-to-predict ones, where treatment information did not appear to help, all of the groups did much worse and performed similarly poorly, as indicated by their much higher accuracy scores (under both rules, higher scores mean larger errors). It is important to note that in these volatile domains the effects are only marginally significant and generally appear under just one of the two scoring rules. Treatment information may not have improved the accuracy of predictions in the Unem and Dow domains, but it did not hurt.

Building a Public Early-Warning System for Genocide and Mass Atrocities

Can we see genocides and other mass atrocities coming? If so, how, and how far in advance? And would public dissemination of those forecasts help policy-makers, advocates, and affected societies prevent those atrocities from occurring?

In October 2011, the U.S. Holocaust Memorial Museum (USHMM) convened a group of advocates and academics for a one-day seminar to ruminate on these questions. These are big and difficult problems, and the event really had a more practical goal at its heart: to help the Museum and other civil-society groups assess the potential for, and value of, a new public early-warning system focused on genocide and other mass atrocities.

Based on that conversation and the recommendations of USHMM Fellow and Dartmouth professor Ben Valentino, the Museum decided that the need and opportunity were sufficient to start considering what such a system might look like and how to build it. In March 2012, the Museum hired me for an eight-month consulting project, to finish in October, that’s meant to push this process forward.

My project has two main parts. First and most important, I’ve been asked to write a prospectus detailing the elements and funding this program would require. Second, I’ve been asked to build a statistical tool that could produce one set of forecasts for this program, if it gets built. Under Ben’s proposal, a second set of forecasts would come from some form of expert survey, and the two could be compared and combined to useful effect.

As I get deeper into the project, I expect to blog occasionally about what I’m working on and where I could use some help. I’ve already had very helpful exchanges with numerous people engaged in related projects, including former Political Instability Task Force colleagues Ted Gurr and Barbara Harff, who produces her own global genocide risk list each year, and Sentinel Project founder Christopher Tuckwood. I’m also slated to present results from a preliminary version of my statistical analysis at NYU’s Northeast Methods Program (NEMP) in early May, and my work will surely benefit from the constructive criticism that esteemed audience can provide.

In the meantime, I wanted to spread the word about the Museum’s interest in this endeavor and invite your reactions and suggestions. If you know of any relevant research or advocacy projects or might be interested in supporting this work in some fashion, please post a comment or drop me a line at ulfelder <at> gmail <dot> com.
