I’ve just posted to SSRN a report describing a statistical forecasting “tournament” undertaken by the CIA-funded Political Instability Task Force (PITF) in 2009–2010. I was PITF’s research director from 2001 until the start of 2011, and I designed and participated in this melee. You can download the full report here. As the abstract states,
The purpose of the tournament was to evaluate systematically the relative merits of several statistical techniques for forecasting various forms of political change in countries worldwide. Among other things, the tournament confirmed our belief that domain expertise and familiarity with relevant data help lead to more accurate forecasts. When knowledge of theory and data were held constant, the forecasts produced by most of the techniques we tried did not diverge by much. Unsurprisingly, this tournament also confirmed that forecasting rare forms of political instability as far as two years in advance is hard to do well. The forecasting tools the participants produced were generally quite good at discriminating high-risk cases from low-risk ones, but none was very precise.
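That last distinction, between discrimination and precision, is easy to see with a toy simulation. The sketch below uses made-up numbers (a 2% base rate and arbitrary score distributions, not PITF's data or models): even a classifier that ranks cases well, with a high AUC, yields mostly false alarms at the top of the risk list simply because the event is rare.

```python
import random

random.seed(0)

# Hypothetical illustration (not PITF data): a rare event with a 2% base rate.
# Positives get higher risk scores on average, so discrimination is good,
# but the flood of negatives still swamps the top of the risk list.
n = 10000
base_rate = 0.02
labels = [1 if random.random() < base_rate else 0 for _ in range(n)]
scores = [random.gauss(1.5 if y else 0.0, 1.0) for y in labels]

# AUC via the rank-sum (Mann-Whitney) formulation: the probability that a
# randomly chosen positive case outscores a randomly chosen negative one.
pos = [s for s, y in zip(scores, labels) if y == 1]
neg = [s for s, y in zip(scores, labels) if y == 0]
wins = sum(1 for p in pos for q in neg if p > q)
auc = wins / (len(pos) * len(neg))

# Precision among the top 2% of cases by predicted risk.
k = int(n * base_rate)
top_k = sorted(zip(scores, labels), reverse=True)[:k]
precision_at_k = sum(y for _, y in top_k) / k

print(f"AUC: {auc:.2f}, precision in top {k}: {precision_at_k:.2f}")
```

With these assumed parameters the AUC comes out well above 0.8 while precision in the flagged group stays low, which is roughly the pattern the report describes for the tournament entries.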
The idea for the tournament came in 2009 from a story about the Netflix Prize, and I was gratified to get to implement something a bit like that process within PITF. I hope the report is useful to other practicing forecasters, and I would love to hear what folks make of the results.