Forecasting Round-Up No. 2

N.B. This is the second in an occasional series of posts I’m expecting to do on forecasting miscellany. You can find the first one here.

1. Over at Bad Hessian a few days ago, Trey Causey asked, “Where are the predictions in sociology?” After observing how the accuracy of some well-publicized forecasts of this year’s U.S. elections has produced “growing public recognition that quantitative forecasting models can produce valid results,” Trey wonders:

If the success of these models in forecasting the election results is seen as a victory for social science, why don’t sociologists emphasize the value of prediction and forecasting more? As far as I can tell, political scientists are outpacing sociologists in this area.

I gather that Trey intended his post to stimulate discussion among sociologists about the value of forecasting as an element of theory-building, and I’m all for that. As a political scientist, though, I found myself focusing on the comparison Trey drew between the two disciplines, and that got me thinking again about the state of forecasting in political science. On that topic, I had two brief thoughts.

First, my simple answer to why forecasting is getting more attention from political scientists than it used to is: money! In the past 20 years, arms of the U.S. government dealing with defense and intelligence seem to have taken a keener interest in using tools of social science to try to anticipate various calamities around the world. The research program I used to help manage, the Political Instability Task Force (PITF), got its start in the mid-1990s for that reason, and it’s still alive and kicking. PITF draws from several disciplines, but there’s no question that it’s dominated by political scientists, in large part because the events it tries to forecast—civil wars, mass killings, state collapses, and such—are traditionally the purview of political science.

I don’t have hard data to back this up, but I get the sense that the number and size of government contracts funding similar work has grown substantially since the mid-1990s, especially in the past several years. Things like the Department of Defense’s Minerva Initiative; IARPA’s ACE Program; the ICEWS program that started under DARPA and is now funded by the Office of Naval Research; and Homeland Security’s START consortium come to mind. Like PITF, all of these programs are interdisciplinary by design, but many of the topics they cover have their theoretical centers of gravity in political science.

In other words, through programs like these, the U.S. government is now spending millions of dollars each year to generate forecasts of things political scientists like to think about. Some of that money goes to private-sector contractors, but some of it is also flowing to research centers at universities. I don’t think any political scientists are getting rich off these contracts, but I gather there are bureaucratic and career incentives (as well as intellectual ones) that make the contracts rewarding to pursue. If that’s right, it’s not hard to understand why we’d be seeing more forecasting come out of political science than we used to.

My second reaction to Trey’s question is to point out that there actually isn’t a whole lot of forecasting happening in political science, either. That might seem to contradict my first point, but it really doesn’t. The fact is that forecasting has long been pooh-poohed in the academic social sciences, and even if that’s changing at the margins in some corners of the discipline, it’s still a peripheral endeavor.

The best evidence I have for this assertion is the brief history of the American Political Science Association’s Political Forecasting Group. To my knowledge—which comes from my participation in the group since its establishment—the Political Forecasting Group was only formed several years ago, and its membership is still too small to bump it up to the “organized section” status that groups representing more established subfields enjoy. What’s more, almost all of the panels the group has sponsored so far have focused on forecasts of U.S. elections. That’s partly because those papers are popular draws in election years, but it’s also because the group’s leadership has had a really hard time finding enough scholars doing forecasting on other topics to assemble panels.

If the discipline’s flagship association in one of the countries most culturally disposed to doing this kind of work has trouble cobbling together occasional panels on forecasts of things other than elections, then I think it’s fair to say that forecasting still isn’t a mainstream pursuit in political science, either.

2. Speaking of U.S. election forecasting, Drew Linzer recently blogged a clinic in how statistical forecasts should be evaluated. Via his web site, Votamatic, Drew:

1) began publishing forecasts about the 2012 elections well in advance of Election Day (so there couldn’t be any post hoc hemming and hawing about what his forecasts really were);

2) described in detail how his forecasting model works;

3) laid out a set of criteria he would use to judge those forecasts after the election; and then

4) walked us through his evaluations soon after the results were (mostly) in.

Oh, and in case you’re wondering: Drew’s model performed very well, thank you.
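For concreteness, here’s a minimal sketch of the arithmetic that kind of evaluation involves: compare predicted vote shares to observed ones, compute the average miss, and count the winners called correctly. The states and numbers below are placeholders I’ve made up for illustration; they are not Drew’s published forecasts.

```python
import numpy as np

# Toy post-election audit in the spirit of Linzer's: compare pre-election
# forecasts of the Democratic two-party vote share to observed results.
# All figures are made-up placeholders, NOT Votamatic's numbers.
states   = ["OH", "FL", "VA", "CO"]
forecast = np.array([0.520, 0.498, 0.510, 0.520])  # predicted Dem share
actual   = np.array([0.515, 0.501, 0.520, 0.530])  # observed Dem share

mae = np.mean(np.abs(forecast - actual))                 # average miss
calls = int(np.sum((forecast > 0.5) == (actual > 0.5)))  # winners called right

print(f"Mean absolute error: {mae:.3f}")
print(f"Winners called correctly: {calls} of {len(states)}")
```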

3. But you know what worked a little better than Drew’s election-forecasting model, and pretty much everyone else’s, too? An average of the forecasts from several of them. As it happens, this pattern is pretty robust. A well-designed statistical model is great for forecasting, but an average of forecasts from a number of them is usually going to be even better. Just ask the weather guys.
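To see why averaging helps, consider a toy example with made-up probabilities standing in for several models’ forecasts of the same event. The punch line is a convexity result: under a squared-error (Brier) scoring rule, the averaged forecast can never score worse than the average of the individual models’ scores.

```python
import numpy as np

# Five made-up probabilities standing in for different models' forecasts
# of an event that actually occurred (outcome = 1). These are not real
# 2012 forecasts.
model_forecasts = np.array([0.91, 0.79, 0.85, 0.68, 0.97])
outcome = 1

def brier(p, y):
    """Brier score: squared error of a probability forecast (lower is better)."""
    return (p - y) ** 2

individual = brier(model_forecasts, outcome)
ensemble = brier(model_forecasts.mean(), outcome)

print("Individual Brier scores:", np.round(individual, 3))
print("Mean of the individual scores:", round(float(individual.mean()), 3))
print("Score of the averaged forecast:", round(float(ensemble), 3))
# Because squared error is convex, Jensen's inequality guarantees the
# averaged forecast scores no worse than the mean of the individual
# scores, and strictly better whenever the models disagree.
```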

4. Finally, for those of you—like me—who want to keep holding pundits’ feet to the fire long after the election’s over, rejoice that Pundit Tracker is now up and running, and they even have a stream devoted specifically to politics. Among other things, they’ve got John McLaughlin on the record predicting that Hillary Clinton will win the presidency in 2016, and that President Obama will not nominate Susan Rice to be Secretary of State. McLaughlin’s hit rate so far is a rather mediocre 49 percent (18 of 37 graded calls correct), so make of those predictions what you will.


Comments

  1. Thanks for the useful overview. Speaking of forecasting, Håvard Hegre has gotten TIME coverage for his upcoming ISQ piece on predicting the future of war.

    Now, what I’m missing here (not actually in your blog but more in general) is a discussion of how much sense it actually makes to make statistical predictions about real-life politics based on observational data. As you’ve pointed out in previous posts, we run into a complexity problem for the kinds of events we’re interested in, like wars, coups d’état, institutional development, etc. I don’t have sophisticated training in statistics (I know the basics, though), but I’m wondering: if these kinds of events are truly Talebian “gray swans,” how could we ever attempt to predict them if the models we use to do so don’t take this complexity into account (i.e., by being based on normal distributions)?

    Complexity theory posits that the sum cannot be computed from the behavior of its parts, due to nonlinearity, feedback, etc. So, if we accept this (we don’t have to, but then we would need to provide an argument for why we don’t, which I don’t really see), why are we still trying to collect data on the parts to predict the sum? To be a little provocative: how would you “rescue” forecasting in complex systems when we don’t have the models to do so? (Maybe we actually do and I just don’t know them. As I said, I lack the formal training and the overview, but I’m very much interested in the philosophical/epistemological ramifications. And I’m trying to get the statistical training soon, so I can assess the arguments myself.)
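    To make that worry concrete, here’s a quick sketch of how badly a thin-tailed model can understate extreme events, comparing a normal distribution to a heavy-tailed Student-t. The five-sigma threshold and the three degrees of freedom are arbitrary choices for illustration, not a claim about any actual forecasting model:

    ```python
    from scipy import stats

    # P(X > 5) under a thin-tailed normal model vs. a heavy-tailed
    # Student-t with 3 degrees of freedom. Parameters are arbitrary,
    # purely for illustration.
    threshold = 5.0
    p_normal = stats.norm.sf(threshold)   # survival function: P(X > threshold)
    p_heavy = stats.t.sf(threshold, df=3)

    print(f"P(X > 5), normal:        {p_normal:.2e}")
    print(f"P(X > 5), Student-t(3):  {p_heavy:.2e}")
    print(f"Ratio (heavy / normal):  {p_heavy / p_normal:,.0f}x")
    ```

    If real-world political shocks look more like the second line than the first, a model built on the first will be blindsided on a regular basis.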

    I have another, related philosophical issue that actually stems (a little bit) from Isaac Asimov’s Foundation books. Even if we had massively more and better-quality data (like, say, psychohistorians do :) ) and could compute precise forecasts: wouldn’t knowledge of the prediction alter the predicted outcome in unpredictable ways? You know, like Hari Seldon and the Mule did. This is not only a science-fictional problem (although it makes extremely good science fiction in Asimov’s case). If, in the next election, we know that Nate Silver’s predictions are largely correct, wouldn’t that cause Democrats and Republicans to adapt their strategies, and even voters their behavior, to his predictions, which would then, in turn, make Silver’s predictions incorrect? So, I guess what I’m getting at is that political forecasting can’t really solve the agency-structure debate, either. Or maybe it needs to be more explicit about it.

    Sorry for that somewhat lengthy comment. It’s just that I’m enjoying your posts on these issues and they make me think, so take it as a compliment. :)

    • Please don’t apologize for a long but thoughtful comment. This is like a great blogged response crammed into the miserable corset of the Comments field.

      Your general question about predicting political events is a profound one, and I’m not going to try to address it in full here. For now, I’ll just note that complexity doesn’t always lead to unpredictability. There are plenty of real-world phenomena—the sunrise, the weather, your heart rate—that we can forecast pretty reliably in spite of their being embedded in complex systems. Political events have the added difficulty of agency, but the fact that we can forecast some of them with tolerable and sometimes even impressive accuracy is evidence enough, I think, that they are not purely unpredictable.

      Your point about the effects of forecasts on political behavior is a fascinating one that we in the business of doing this forecasting probably don’t talk about enough. We’ve been forecasting politics as long as there’ve been politics, though, so I don’t think it’s a new problem. I guess the real question is whether it makes a difference when the forecasts become more accurate or reach further into the future. That’s an issue that deserves more attention than it’s getting now, I think.

      • Thanks for the quick reply!

        “Political events have the added difficulty of agency, but the fact that we can forecast some of them with tolerable and sometimes even impressive accuracy is evidence enough, I think, that they are not purely unpredictable.”

        I very much agree. I’d love to read more about this. My hunch is that predictability depends on the specific complexity of the situation + quality of data. U.S. elections would be a less complex situation + very good data, while civil war outbreak in sub-Saharan Africa would be a very complex situation + little data.

        But a) I don’t know if that hunch is correct; and b) even if it is, how do we know a complex situation when we see one? Supreme Court methods (“I know it when I see it”) probably won’t be sufficient here. Also, are there degrees of complexity? I’m not sure whether Taleb provides any advice on how to distinguish complexity from non-complexity (I’d have to re-read The Black Swan), but I’m pretty sure he doesn’t offer measures for degrees of complexity. And we would need those in order to make reasonable claims about the predictability of a situation.

        Re: Effects on political behavior. Isn’t this the same thing that’s happening with credit rating agencies? They make a forecast (based on observational, historical data) of a country’s creditworthiness and likelihood of default. But by making that very forecast, they influence the future they are predicting, because the forecast itself moves the markets. (And, through a self-reinforcing feedback loop, often in a very, let’s put it diplomatically, counterproductive way.)

        Anyway, I’m thinking out loud here. It’s a fascinating topic, but I know too little about it (one of Rumsfeld’s “known unknowns”). I am definitely going to study it in more detail (now that’s a bold forecast…;) ).

  2. @Felix — though this is nothing beyond a single instance, Bruce Bueno de Mesquita, a forecaster and game theorist who has done work with the CIA (and other U.S. agencies, as I understand it), claims to account for knowledge of his forecasts in the forecasts themselves. He gave a TED talk about Iran’s nuclear ambitions, and at the very end he makes reference to the issue. I wish he’d offer more details, but he does recognize the issue and seems to think it can be incorporated into the model.

