On Revolution, Theory or Ideology?

Humans understand and explain through stories, and the stories we in the US tell about why people rebel against their governments usually revolve around deprivation and injustice. In the prevailing narratives, rebellion occurs when states either actively make people suffer or passively fail to alleviate their suffering. Rebels in the American colonies made this connection explicit in the Declaration of Independence. This is also how we remember and understand lots of other rebellions we “like” and the figures who led them, from Moses to Robin Hood to Nelson Mandela.

As predictors of revolution, though, deprivation and injustice don’t fare so well. A chart in a recent Bloomberg Business piece on “the 15 most miserable economies in the world” got me thinking about this again. The chart shows the countries that score highest on a crude metric that sums a country’s unemployment rate and annual change in its consumer price index. Here are the results for 2015:

Of the 15 countries on that list, only two—Ukraine and Colombia—have ongoing civil wars, and it’s pretty hard to construe current unemployment or inflation as relevant causes in either case. Colombia’s civil war has run for decades. Ukraine’s war isn’t so civil (<cough> Russia <cough>), and this year’s spikes in unemployment and inflation are probably more consequences than causes of that fighting. Frankly, I’m surprised that Venezuela hasn’t seen a sustained, large-scale challenge to its government since Hugo Chavez’s death, and I wonder if this year will prove different. But, so far, it hasn’t. Ditto for South Africa, where labor actions have at least hinted at the potential for wider rebellion.
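As an aside, the metric behind that chart is simple arithmetic, which is part of why it’s so crude. A minimal sketch, using made-up placeholder figures rather than the actual 2015 Bloomberg data:

```python
# Bloomberg-style "misery index": unemployment rate plus annual CPI change.
# The figures below are illustrative placeholders, not real country data.
def misery_index(unemployment_pct, cpi_change_pct):
    return unemployment_pct + cpi_change_pct

economies = {
    "Country A": (17.0, 68.5),   # high unemployment, runaway inflation
    "Country B": (25.0, 5.0),    # high unemployment, modest inflation
    "Country C": (9.5, 4.0),
}

# Rank from most to least "miserable" on this crude metric.
ranked = sorted(economies.items(),
                key=lambda kv: misery_index(*kv[1]),
                reverse=True)
for name, (u, cpi) in ranked:
    print(name, misery_index(u, cpi))
```

Note what the metric ignores: levels versus changes, informal employment, repression, and everything else that might actually matter for rebellion.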

That chart, in turn, reminded me of a 2011 New York Times column by Charles Blow called “The Kindling of Change,” on the causes of revolutions in the Middle East and North Africa. Blow wrote, “It is impossible to know exactly which embers spark a revolution, but it’s not so hard to measure the conditions that make a country prime for one.” As evidence, he offered the following table comparing countries in the region on several “conditions”:

The chart, and the language that precedes it, seem to imply that these factors obviously “prime” countries for revolution. If that’s true, though, then why didn’t we see revolutions in the past few years in Algeria, Morocco, Sudan, Jordan, and Iran? Morocco and Sudan saw smaller protest waves that failed to produce revolutions, but so did Kuwait and Bahrain. And why did Syria unravel while those others didn’t? It’s true that poorer countries are more susceptible to rebellions than richer ones, but it’s also true that poor countries are historically common and rebellions are not.

All of which makes me wonder how much our theories of rebellion are really theories at all, rather than awkward blends of selective observation and ideology. Maybe we believe that injustice explains rebellion because we want to live in a universe in which justice triumphs and injustice gets punished. When violent or nonviolent rebellions erupt, we often watch and listen to the participants enumerate grievances about poverty and indignity and take those claims as evidence of underlying causes. We do this even though we know that humans are unreliable archivists and interpreters of their own behavior and motivations, and that we could elicit similar tales of poverty and indignity from many, many more people who are not rebelling in those societies and others. If a recent study generalizes, then we in the US and other rich democracies are also consuming news that systematically casts rebels in a more favorable light than governments during episodes of protest and civil conflict abroad.

Meanwhile, when rebel groups don’t fit our profile as agents of justice, we rarely expand our theories of revolution to account for these deviant cases. Instead, we classify the organizations as “terrorists”, “radicals”, or “criminals” and explain their behavior in some other way, usually one that emphasizes flaws in the character or beliefs of the participants or manipulations of them by other nefarious agents. Boko Haram and the Islamic State are rebel groups in any basic sense of that term, but our explanations of their emergence often emphasize indoctrination instead of injustice. Why?

I don’t mean to suggest that misery, dignity, and rebellion are entirely uncoupled. Socioeconomic and emotional misery may and probably do contribute in some ways to the emergence of rebellion, even if they are nowhere close to sufficient causes of it. (For some deeper thinking on the causal significance of social structure, see this recent post by Daniel Little.)

Instead, I think I mean this post to serve as a plea to avoid the simple versions of those stories, at least when we’re trying to function as explainers and not activists or rebels ourselves. In light of what we think we know about confirmation bias and cognitive dissonance, the fact that a particular explanation harmonizes with our values and makes us feel good should not be mistaken for evidence of its truth. If anything, it should motivate us to try harder to break it.

Forecasting Round-Up No. 6

The latest in a very occasional series.

1. The Boston Globe ran a story a few days ago about a company that’s developing algorithms to predict which patients in cardiac intensive care units are most likely to take a turn for the worse (here). The point of this exercise is to help doctors and nurses allocate their time and resources more efficiently and, ideally, to give them more lead time to try to stop those bad turns from happening.

The story suffers some rhetorical tics common to press reports on “predictive analytics.” For example, we never hear any specifics about the analytic techniques used or the predictive accuracy of the tool, and the descriptions of machine learning tilt toward the ingenuous (e.g., “The more data fed into the model, the more accurate the prediction becomes”). On the whole, though, I think this article does a nice job representing the promise and reality of this kind of work. The following passage especially resonated with me, because it describes a process for applying these predictions that sounds like the one I have in mind when building my own forecasting tools:

The unit’s medical director, Dr. Melvin Almodovar, uses [the prediction tool] to double-check his own clinical assessment of patients. Etiometry’s founders are careful to note that physicians will always be the ultimate bedside decision makers, using the Stability Index to confirm or inform their own diagnoses.

Butler said that an information-overload environment like the intensive care unit is ideal for a data-driven risk assessment tool, because the patients teeter between life and death. A predictive model can act as an early warning system, pointing out risky changes in multiple vital signs in a more sophisticated way than bedside alarms.

When our predictive models aren’t as accurate as we’d like or don’t yet have a clear track record, this hybrid approach—decisions are informed by the forecasts but not determined by them—is a prudent way to go. In the cardiac intensive care unit, doctors are already applying their own mental models to these data, so the idea of developing explicit algorithms to do the same isn’t a stretch (or shouldn’t be, but…). Unlike those doctors, though, statistical models won’t suffer from low blood sugar or distraction or become emotionally attached to some patients but not others. Also unlike the mental models doctors use now, statistical models will produce explicit forecasts that can be collected and assessed over time. The resulting feedback will give the stats guys many opportunities to improve their models, and the hospital staff a chance to get a feel for the models’ strengths and limitations. When you’re making such weighty decisions, why wouldn’t you want that additional information?
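That feedback loop is straightforward to formalize. As a minimal sketch (the data and method here are hypothetical, not anything Etiometry has described): log each probabilistic forecast alongside the eventual outcome, then score the accumulated record with something like the Brier score.

```python
# Each record pairs a forecast probability of the bad outcome with what
# actually happened (1 = deterioration, 0 = stable). Numbers are hypothetical.
history = [
    (0.9, 1),   # model said 90% risk; patient did deteriorate
    (0.2, 0),   # model said 20% risk; patient stayed stable
    (0.7, 1),
    (0.1, 0),
]

def brier_score(records):
    """Mean squared error of probabilistic forecasts. 0 is perfect;
    a constant 50% forecast earns 0.25."""
    return sum((p - outcome) ** 2 for p, outcome in records) / len(records)

print(brier_score(history))  # lower is better
```

Mental models can’t be scored this way, because they never commit to an explicit number; that, as much as raw accuracy, is the statistical models’ advantage.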

2. Lyle Ungar recently discussed forecasting with the Machine Intelligence Research Institute (here). The whole thing deserves a read, but I especially liked this framework for thinking about when different methods work best:

I think one can roughly characterize forecasting problems into categories—each requiring different forecasting methods—based, in part, on how much historical data is available.

Some problems, like the geo-political forecasting [the Good Judgment Project is] doing, require lots of information collection and human thought. Prediction markets and team-based forecasts both work well for sifting through the conflicting information about international events. Computer models mostly don’t work as well here—there isn’t a long enough track record of, say, elections or coups in Mali to fit a good statistical model, and it isn’t obvious what other countries are ‘similar.’

Other problems, like predicting energy usage in a given city on a given day, are well suited to statistical models (including neural nets). We know the factors that matter (day of the week, holiday or not, weather, and overall trends), and we have thousands of days of historical observation. Human intuition is not going to beat computers on that problem.

Yet other classes of problems, like economic forecasting (What will the GDP of Germany be next year? What will unemployment in California be in two years?) are somewhere in the middle. One can build big econometric models, but there is still human judgement about the factors that go into them. (What if Merkel changes her mind or Greece suddenly adopts austerity measures?) We don’t have enough historical data to accurately predict economic decisions of politicians.

The bottom line is that if you have lots of data and the world isn’t changing too much, you can use statistical methods. For questions with more uncertainty, human experts become more important.
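To make the data-rich end of Lyle’s spectrum concrete, here is a deliberately tiny sketch of the energy-usage case, with synthetic numbers: fit a least-squares line to history, then forecast from it. A real model would add day-of-week dummies, holiday flags, and trend terms, but the logic is the same.

```python
# Toy version of the "energy usage" case: known drivers, long history,
# so a simple statistical fit does the work. Data below are synthetic.
def ols_fit(xs, ys):
    """Ordinary least squares for one predictor: returns (intercept, slope)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return my - slope * mx, slope

# Synthetic history: usage rises 2 MWh per degree above a 50 MWh base load.
temps = [10, 15, 20, 25, 30, 35]
usage = [50 + 2 * t for t in temps]

intercept, slope = ols_fit(temps, usage)
forecast = intercept + slope * 28   # predicted usage for a 28-degree day
```

Nothing like this exists for coups in Mali, which is Lyle’s point: there the history is too short and the “similar cases” too contestable for the fit to mean much.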

I might disagree on the particular problem of forecasting coups in Mali, but I think the basic framework that Lyle proposes is right.

3. Speaking of the Good Judgment Project (GJP), a bevy of its researchers, including Ungar, have an article in the March 2014 issue of Psychological Science (here) that shows how certain behavioral interventions can significantly boost the accuracy of forecasts derived from subjective judgments. Here’s the abstract:

Five university-based research groups competed to recruit forecasters, elicit their predictions, and aggregate those predictions to assign the most accurate probabilities to events in a 2-year geopolitical forecasting tournament. Our group tested and found support for three psychological drivers of accuracy: training, teaming, and tracking. Probability training corrected cognitive biases, encouraged forecasters to use reference classes, and provided forecasters with heuristics, such as averaging when multiple estimates were available. Teaming allowed forecasters to share information and discuss the rationales behind their beliefs. Tracking placed the highest performers (top 2% from Year 1) in elite teams that worked together. Results showed that probability training, team collaboration, and tracking improved both calibration and resolution. Forecasting is often viewed as a statistical problem, but forecasts can be improved with behavioral interventions. Training, teaming, and tracking are psychological interventions that dramatically increased the accuracy of forecasts. Statistical algorithms (reported elsewhere) improved the accuracy of the aggregation. Putting both statistics and psychology to work produced the best forecasts 2 years in a row.

The atrocities early-warning project on which I’m working is learning from GJP in real time, and we hope to implement some of these lessons in the opinion pool we’re running (see this conference paper for details).
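For readers unfamiliar with the term, a linear opinion pool just takes a (possibly weighted) average of the participants’ probability estimates. A minimal sketch—this is the generic idea, not our project’s actual aggregation rule:

```python
# Combine several forecasters' probability estimates for the same event
# with a weighted average (a linear opinion pool). In practice, weights
# might reflect past accuracy; the numbers here are hypothetical.
def linear_opinion_pool(probs, weights=None):
    if weights is None:
        weights = [1.0] * len(probs)
    total = sum(weights)
    return sum(p * w for p, w in zip(probs, weights)) / total

forecasts = [0.10, 0.25, 0.40]           # three forecasters, one event
pooled = linear_opinion_pool(forecasts)                   # simple mean: 0.25
weighted = linear_opinion_pool(forecasts, weights=[3, 1, 1])  # favor forecaster 1
```

The GJP results suggest the interesting work is in how you recruit, train, and weight the forecasters, not in the averaging itself.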

Speaking of which: If you know something about conflict or atrocities risk or a particular part of the world and are interested in volunteering as a forecaster, please send an email to ewp@ushmm.org.

4. Finally, Daniel Little writes about the partial predictability of social upheaval on his terrific blog, Understanding Society (here). The whole post deserves reading, but here’s the nub (emphasis in the original):

Take unexpected moments of popular uprising—for example, the Arab Spring uprisings or the 2013 riots in Stockholm. Are these best understood as random events, the predictable result of long-running processes, or something else? My preferred answer is something else—in particular, conjunctural intersections of independent streams of causal processes (link). So riots in London or Stockholm are neither fully predictable nor chaotic and random.

This matches my sense of the problem and helps explain why predictive models of these events will never be as accurate as we might like but are still useful, as are properly elicited and combined forecasts from people using their noggins.
