Hello?!? Not All Forecasters Are Strict Positivists

International relations is the most predictively oriented subfield of political science…Yet even in the other empirical subfields, the positivist notion that everything must ultimately be reducible to (knowable) universal laws displays its hold in excrescences such as quadrennial attempts to derive formulae for predicting the next presidential election outcome, usually on the basis of “real” (economic) factors. Even if one follows Milton Friedman (1953) in insisting that the factors expressed by such formulae are not supposed to be actually causing electoral outcomes, but are merely variables that (for some unknown reason) allow us to make good behavioral predictions, in practice one usually wants to know what is actually causing the behavior, and it is all too easy to assume that whatever is causing it—since it seems to be responsible for a behavioral regularity—must be some universal human disposition.

That’s from a 2012 paper by Jeffrey Friedman on Robert Jervis’ 1997 System Effects and the “problem of prediction.” I actually enjoyed the paper on the whole, but this passage encapsulates what drives me nuts about what many people—including many social “scientists”—think it means to try to make forecasts about politics.

Contrary to the assertions of some haters, political scientists almost never make explicit forecasts about the things they study—at least not in print or out loud. Some of that reticence presumably results from the fact that there’s no clear professional benefit to making predictions, and there is some professional risk in doing so and then being wrong.

Some of that reticence, though, also seems to flow from this silly but apparently widely-held idea that the very act of forecasting implies that the forecaster accepts the strict positivist premise that “everything must ultimately be reducible to (knowable) universal laws.” To that, I say…

[Image: Charlie Brown yelling “AAUGH!”]

Probability is a mathematical representation of uncertainty, and a probabilistic forecast explicitly acknowledges that we don’t know for sure what’s going to happen. Instead, it’s an educated guess—or, in Bayesian terms, an informed belief.
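
To make that concrete, here’s a minimal sketch in Python of what a probabilistic forecast is under the hood: a prior belief (say, a historical base rate) updated against new evidence with Bayes’ rule. The numbers are made up purely for illustration, not drawn from any real model.

```python
# A probabilistic forecast as an informed belief: a (hypothetical) base rate
# updated with new evidence via Bayes' rule. All numbers here are invented.

def update(prior, p_signal_if_event, p_signal_if_no_event):
    """Return the posterior probability of the event after observing the signal."""
    numerator = prior * p_signal_if_event
    return numerator / (numerator + (1 - prior) * p_signal_if_no_event)

base_rate = 0.05                        # event occurred in ~5% of past cases (made up)
forecast = update(base_rate, 0.6, 0.2)  # the signal is 3x likelier when the event is coming
print(f"Forecast probability: {forecast:.2f}")  # ~0.14: a belief, not a certainty
```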

Forecasters generally use evidence from the past to educate those guesses, but that act of empiricism in itself does not imply that we presume there are universal laws driving political processes lurking beneath that history. Instead, it’s really just a practical solution to the problem of wanting better information—sometimes to help us plan for the future, and sometimes to try to adjudicate between different ideas about the forces shaping those processes now and in the past.

Empiricism is a practical solution because it works—not perfectly, of course, but, for many problems of interest, a lot better than casting bones or reading entrails or consulting oracles. The handful of forecasters I know all embrace the premises that their efforts are only approximations, and that the world can always change in ways that will render the models we find helpful today less helpful in the future. In the meantime, though, we figure we can nibble away at our ignorance by making structured guesses about that future and seeing which ones turn out to be more reliable than the others. Physicists still aren’t entirely sure how planes manage to fly, but millions of us make a prediction every day that the plane we’re about to board is somehow going to manage that feat. We don’t need to be certain of the underlying law to find that prediction useful.
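
“Seeing which ones turn out to be more reliable” also has a simple mechanical version: score each forecaster’s stated probabilities against what actually happened. Here’s a toy sketch using the Brier score, with invented forecasts and outcomes, pitting a hypothetical model against a chimp who always says 50/50.

```python
# Scoring competing probabilistic forecasts against what actually happened,
# using the Brier score (mean squared error; lower is better).
# Forecasts and outcomes below are invented purely for illustration.

def brier_score(forecasts, outcomes):
    """Average squared gap between forecast probabilities and 0/1 outcomes."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(outcomes)

outcomes = [1, 0, 0, 1, 0]            # 1 = the event happened
model_a  = [0.8, 0.2, 0.1, 0.7, 0.3]  # one forecaster's stated probabilities
chimp    = [0.5] * 5                  # the dart-throwing baseline: always 50/50

print("Model A:", round(brier_score(model_a, outcomes), 3))  # 0.054
print("Chimp:  ", round(brier_score(chimp, outcomes), 3))    # 0.25
```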

Finally, I can’t resist: there’s real irony in Friedman’s choice of examples of misguided forecasting projects. To have called efforts to predict the outcome of U.S. presidential elections “excrescences” in the year those excrescences had a kind of popular coming out, well, that’s just unfortunate. I guess Friedman didn’t see that one coming.

5 Comments

  1. Oral Hazard / April 24, 2013

    Yeah, no kidding. Nate Silver’s following would beg to differ. Contrast with Friedmanites (of the Uncle Milty, not Jeffrey, variety), who haven’t exactly had a great showing as of late.

    Speaking of Friedman, I find the ideological affinities in the paper interesting, notably the reference to Judge Posner from U Chicago (albeit somewhat chastened and now disavowing the Efficient Market Hypothesis post-2008 but nevertheless still gamely trotting out his mathy-looking As, Bs, and Cs…)

    The discussion about the limits of individual knowledge of what is “real” in the macro sense reminded me of the nice post and discussion here from a couple of weeks ago. And, hey, anybody who is an enemy of homo economicus is a friend of mine. Expressed in Posnerian terms, that would be: Where A publicly calls out B’s crazy fundamentalist adherence to EMH and UChicago School economics, C will happily send economic assistance to A so that A can continue to engage in an academic brodown with B.

    But Friedman gets all:

    “And we are here as on a darkling plain
    Swept with confused alarms of struggle and flight,
    Where ignorant armies clash by night.”

    Look, forget our imperfect knowledge of the Laws of Physics for a second. Human sensory organs and thought processes themselves are based on probabilities and can be gamed pretty easily, both by nature and human deception. Saying we’re all too unpredictable and swimming in uncertainty proves too much. Pleading fog of war doesn’t get the job done because it’s the natural state of man. This is why the notion of hive mind is on the rise.

  2. Crying Wolf / May 1, 2013

    CONTEXT: I really like applied science and respect the basic. Just do. Always did, always will. Hard science carries more weight than soft, but it is all generally good, well-intentioned stuff. This is outside my field (molecular biology) and I am a rank amateur at present. I am very, very, very unhappy with two-party status quo politics and mainstream discourse, including its endless social, fiscal and other economic malarky (as I define malarky, not as others define it).

    COMMENT: Tetlock’s work in Expert Political Judgment suggests a way forward to less irrational politics, with implied better (e.g., economic and security) futures than what we take our chances with now. There is some empiricism there, but results showing that models without ego significantly beat ego-riddled “expert” humans are provocative, to say the least. That work suggests, but doesn’t yet prove, that reliance on reading entrails or chimps tossing darts is only a little worse than consulting top-notch experts, either in or outside their areas of expertise. That work suggests that more expert knowledge confers a rapidly diminishing return on accuracy, i.e., serious dilettantes (but not ill-informed undergrads) are just as good as those we hold in awe and the press likes to trot out for bold (usually wrong – at least 80% wrong) sound bites.

    It is long past time to start finding ways to apply some disciplined form of reason and logic to mainstream politics and the drivel that passes for political insight or advice. Unless the initiated are generally too cautious to take a leap and instead are willing (consciously or not) to see the American experiment crash and burn if that is where unfettered fate leads, eternal debates among academics and other cognoscenti just don’t cut it any more. Maybe the hesitation comes from the view that statistical models are allegedly too crude or admittedly too inhuman. It’s time to fish or cut bait, I think. If cars without drivers are on the horizon, maybe political forecasts grounded in statistics without much (or any) human ego pollution are as well.

    Or, do I goof and just extrapolate too much from too little as amateurs sometimes do? A little knowledge can be dangerous, maybe deadly. But, so can a little more status quo.

  3. Jonas / May 3, 2013

    The higher up you are in the status hierarchy, the less you understand probability instinctively. After all, people in charge expect to be on top 100% of the time not 75% of the time. Anything not 100% makes them anxious, itchy all over.

  4. Reblogged this on Future Perfect and commented:
    DTChimp takes on some persistent misconceptions about forecasting. Jay diagnoses the problem as belief that forecasting implies positivism or belief in universal laws. He thinks not. His first argument is that forecasters are not determinists, because many of us — all of us in the ACE circles — are using probabilistic forecasts, explicitly acknowledging there is a lot we don’t know. This is worth saying because I think many objections to forecasting do in fact revolve around determinism, but it doesn’t address what Jay calls positivism — the belief in fundamental universal laws.

    Forecasting does require the belief in locally stable regularities. Jay points out that most of the models we use are simply empirical generalizations that test out to be useful. Some of us may believe there are “universal laws driving political processes”, but it’s a far cry to say that we’ve found them. Nate Silver’s election forecasts were hardly about universal laws, except maybe the arithmetic of adding noisy samples.

    I’m particularly baffled by social scientists who disclaim forecasting but still fit models. If your model is intended to generalize, you’re forecasting. It only remains to see if you are doing it well or badly.

  1. Duck of Minerva
