Forecasting Round-Up No. 4

Another in an occasional series of posts calling out interesting work on forecasting. See here, here, and here for earlier ones.

1. A gaggle of researchers at Penn State, including Phil Schrodt, have posted a new conference paper (PDF) showing how they are using computer-generated data on political interactions around the world (the oft-mentioned GDELT) to forecast various forms of political crisis with respectable accuracy.

One important finding from their research so far: models that mix dynamic data on political interactions with slow-changing data on relevant structural conditions (political, social, economic) produce more accurate forecasts than models that use only one or the other. That’s not surprising, but it is a useful confirmation nonetheless. Thanks to GDELT’s public release, I predict that we’ll see a lot more social-science modelers doing that kind of mixing in the near future.
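
As a concrete illustration of the mixing idea, one might simply concatenate fast-moving event features with slow-moving structural covariates before fitting any model. This is only a sketch of the general approach; the feature names and values below are invented placeholders, not actual GDELT fields or variables from the paper.

```python
import numpy as np

# Fast-moving features: e.g., last month's event counts (invented values).
dynamic = np.array([42.0, 7.0])           # [protest_events, violent_events]

# Slow-moving structural conditions, updated annually (invented values).
structural = np.array([58.0, -7.0, 2.1])  # [infant_mortality, polity_score, gdp_growth]

# A "mixed" model simply sees both blocks as one feature vector.
features = np.concatenate([dynamic, structural])
print(features.shape)  # (5,)
```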

2. Kaiser Fung reviews Predictive Analytics, a book by Eric Siegel. I haven’t read it, but Kaiser’s review makes me think it would be a good addition to my short list of recommended readings for forecasters.

3. Finally, the 2013 edition of the Failed States Index (FSI) is now up on Foreign Policy’s web site (here). I call it out here to air a few grievances.

First, it makes me a little crazy that it’s hard to pin down exactly what this index is supposed to do. Is FSI meant to summarize recent conditions or to help forecast new troubles down the road? In their explication of the methodology behind it, the makers of the FSI acknowledge that it’s largely the former, but they also slide into describing it as an early-warning tool. And what exactly is “state failure,” anyway? They never quite say, which makes it hard to use the index as either a snapshot or a forecast.

Second, as I’ve said before on this blog, I’m also not a big fan of indices that roll up so many different things into a single value on the basis of assumptions alone. Statistical models also combine a lot of information, but they do so with weights that are derived from a systematic exploration of empirical evidence. FSI simply assumes all of its 12 components are equally relevant when there’s ample opportunity to check that assumption against the historical record. Maybe some of the index’s components are more informative than others, so why not use models to try to find out?
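
To make the alternative concrete, here is a minimal sketch of deriving component weights from historical outcomes instead of assuming them. Everything here is synthetic and illustrative: the 12 components, the outcome variable, and the plain logistic regression are my assumptions for the sketch, not FSI’s actual data or method.

```python
import numpy as np

rng = np.random.default_rng(0)

# 200 synthetic country-years, 12 components scored 0-10, like FSI's structure.
X = rng.uniform(0, 10, size=(200, 12))

# Pretend only the first three components actually predict "failure".
true_w = np.array([1.0, 0.8, 0.6] + [0.0] * 9)
p_true = 1 / (1 + np.exp(-(X @ true_w - 12)))
y = (rng.uniform(size=200) < p_true).astype(int)

# Equal-weight index: every component counts the same, by assumption alone.
equal_weight_score = X.sum(axis=1)

# Learned weights: logistic regression fit by gradient descent on
# standardized components.
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
w, b = np.zeros(12), 0.0
for _ in range(5000):
    p = 1 / (1 + np.exp(-(Xs @ w + b)))
    w -= 0.1 * Xs.T @ (p - y) / len(y)
    b -= 0.1 * (p - y).mean()

# The fitted weights concentrate on the informative components, which an
# equal-weight sum can never reveal.
print(np.round(w, 2))
```

In this toy setup the fitted weights on the three informative components come out clearly larger than the rest, which is exactly the kind of check against the historical record the post is asking for.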

Last but not least, as for the way FSI is presented, I think the angry reactions it elicits (see comments on previous editions or my Twitter feed whenever FSI is released) are a useful reminder of the risks of presenting rank-ordered lists based on minor variations in imprecise numbers. People spend a lot of time venting about relatively small differences between states (e.g., “Why is Ethiopia two notches higher than Syria?”) when those differences aren’t very informative and aren’t really meant to be. I’ve run into the same problem when I’ve posted statistical forecasts of things like coup attempts and nonviolent uprisings, and I’m increasingly convinced that rank-ordered lists are a distraction. To use the results without fetishizing the numbers, we might do better to focus on counter-intuitive results (surprises) and on cases whose scores change a lot across iterations.
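
That closing suggestion can be sketched in a few lines: ignore small rank shuffles and surface only the cases whose scores moved substantially between iterations. The country labels, scores, and threshold below are invented for illustration, not actual FSI values.

```python
# Hypothetical scores on an FSI-like scale for two successive years.
scores_2012 = {"A": 95.3, "B": 88.1, "C": 77.4, "D": 60.2}
scores_2013 = {"A": 95.9, "B": 81.0, "C": 84.8, "D": 60.5}

# Year-over-year change per case.
deltas = {c: scores_2013[c] - scores_2012[c] for c in scores_2012}

# Flag only the big movers; minor variations stay below the threshold,
# so nobody argues over a two-notch rank difference.
THRESHOLD = 5.0
big_movers = {c: round(d, 1) for c, d in deltas.items() if abs(d) >= THRESHOLD}
print(big_movers)  # {'B': -7.1, 'C': 7.4}
```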


5 Comments

  1. Grant (June 24, 2013)

    It seems to me that it makes more sense to have nations in “groups” rather than in some numerical list, especially since some of this list just does not make sense. Why would Greece be colored Pea Green (around the same as most of Europe, North America, Japan and South Korea) in 2012 when Greece is (and was then) far more unstable than those other nations? Wouldn’t it have made more sense to put Greece in the same Orange/Red as most of Africa and Asia? They do put it in Orange for 2013, but now they have China in Red. China’s clearly got problems, but what exactly makes it “Critical” now when only last year it was “In Danger”? Things haven’t changed that much there.

    Now, I will fully admit that sometimes a nation that seemed stable and with a firm state will suddenly turn out to have been a giant with weak legs (for example Russia). Maybe it is possible that a recent protest in China over not being permitted to cheat in a major exam (I wish I were making that up) will somehow spread to a much larger revolutionary movement over a host of issues. Perhaps. I’ll still wager a bottle of whiskey that the Chinese Communist Party will still be there and in charge next year though.

    Anyway, I’m still confused about why FP continues to use this list. Perhaps it’s a matter of pride: they’ve put effort into this, and taking it down would look embarrassing?

  2. F (June 25, 2013)

    I recommend the following article on measuring fragility, looking into conceptualization, measurement and aggregation: http://www.die-gdi.de/CMS-Homepage/openwebcms3_e.nsf/%28ynDK_contentByKey%29/ANES-8X9DKX/$FILE/Ziaja%202012%20What%20do%20fragility%20indices%20measure%20–%20manuscript.pdf

  3. Here’s a link to a similar anti-FSI rant that Lionel Beehner and I blogged about last year. Not sure why they don’t consult people like you when using/constructing the FSI…

    http://www.worldpolicy.org/blog/2012/07/17/failure-failed-states-index

  Pingback: Forecasting Round-Up No. 5 | Dart-Throwing Chimp
