Rules of Thumb vs. Statistical Models, or the Misconception that Will Not Die

Steve LeVine kicked off the new year on Quartz with a nice post called “14 rules for predicting future geopolitical events.” According to LeVine,

Nations are eccentric. But they also have threads of repeated history through which we can discern what comes next…Many political scientists dismiss the detection of such trends as “deterministic.” Some insist that, unlike in economics and statistics, there is as yet in fact no useful algorithm for foreseeing events—the only tool available to political forecasters is their own intuition. But it is vapid to observe the world, its nations and peoples as an unfathomable mob. History is not a science—but neither is it pure chaos.

If you’re a regular reader of this blog, you know I basically agree. Borrowing Almond and Genco’s classic metaphor, politics isn’t clock-like, but it’s not purely random, either. I also found little to dispute in the 14 rules that followed. For example, LeVine’s Muddle-Along Rule and its corollary, the Precipice Rule, are really just admonitions to take a deep breath when considering the risk of big but rare crises and recognize that, most of the time, the crisis won’t materialize. In statistical terms, that’s analogous to forecasting the base rate, and that’s actually a pretty powerful rule of thumb.
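The base-rate idea is easy to make concrete. Here is a minimal sketch in Python; the outcome history is invented for illustration (1 = a state collapsed that year, 0 = it muddled along), and the Brier score is just the mean squared error of a probability forecast:

```python
# Hypothetical outcome history: 1 = collapse, 0 = muddled along.
history = [0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]

base_rate = sum(history) / len(history)  # P(collapse) = 0.10

def brier(forecast, outcomes):
    """Mean squared error of a constant probability forecast (lower is better)."""
    return sum((forecast - y) ** 2 for y in outcomes) / len(outcomes)

# The Muddle-Along Rule as a forecast: just predict the base rate.
# It easily beats an alarmist who always calls the crisis.
print(brier(base_rate, history))  # 0.09
print(brier(1.0, history))        # 0.90
```

Even this crude version shows why the rule is powerful: against rare events, the boring base-rate forecast is hard to beat.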

Still, after reading LeVine’s piece, I felt frustrated. As someone who uses statistical models to do the kind of forecasting he seems to be proposing, I couldn’t help but wonder: Why stop halfway? Rules of thumb can be very helpful, but they are often pretty coarse. Okay, so most cases will “tend to muddle along regardless of the trouble, and not collapse,” but can’t we say something more specific about just how unlikely that collapse is? Does it vary across forms of crisis or types of countries? LeVine proposes using history as a resource for gleaning useful patterns but then stops short of doing so in anything but the fuzziest terms.

Equally important, it’s often not clear how to use rules of thumb together, especially when they’re in tension with one another. Some of the rules on LeVine’s list contradict each other, and it’s not clear to me how you’d adjudicate between them when trying to make judgments about specific cases. For example, in addition to the Muddle-Along and Precipice Rules, LeVine gives us the True-Believer Rule:

While people and countries tend toward the middle, events can turn on exceptions operating on the extremes. Hitler’s Germany is an example. Today, Khamenei’s Iran, Afghanistan’s Taliban, Kim’s North Korea and Chavez’s Venezuela punch above their weight in influencing the geopolitical landscape.

Now imagine you’re trying to apply these rules to a case that isn’t already on that short list of exceptions. How can we tell in advance whether it’s a muddler or a true believer? If you’re not sure, what’s the forecast?

I don’t know LeVine personally, so I won’t make any assumptions about his motivations, but I do think the preference for rules of thumb over quantified forecasts exemplified in his Quartz post is pretty common to political forecasting. And I wonder if this aversion to statistics isn’t born, in part, of ignorance of what the use of statistics implies. A couple of days ago, I asked on Twitter: “Why do lay audiences consume weather forecasts w/o asking how they’re made but want peek under hood of stat forecasts of pol crises?” To which Dan Drezner replied, “My (obvious) answer is that people accept meteorology as an actual science, don’t believe the same about political science.”

But here’s the thing: statistics isn’t science; it’s a set of tools for doing science. The decision to use statistics does not presume either regularity in, or certainty about, the object of study. If anything, that decision is a reasoned choice to search for empirical evidence of regularity, an attempt to clarify our uncertainty. The whole point of statistical modeling for forecasting is to take a bunch of conjectures like LeVine’s and run them through a mill that provides clearer answers to the questions that naturally arise when we try to apply those conjectures to specific situations.

Put another way, a statistical forecasting model is really nothing more than a meta-rule of thumb, a flow chart for moving from those initial conjectures to a single best estimate. That the estimate is presented as a number does not automatically imply that its presenter believes it’s any more true or certain than an estimate described in a phrase. It’s just another form of representation for our ideas, and one that happens to be especially useful because it lends itself to the application of some really powerful tools for pattern recognition we’ve finally devised after a few million years of human evolution.
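That “flow chart” can be sketched directly. The function below is a toy meta-rule of thumb that starts from a base rate (the Muddle-Along Rule) and adjusts the log-odds for each conjecture that applies; the features and weights are entirely hypothetical, where a real model would estimate them from historical data:

```python
import math

def collapse_probability(in_crisis, true_believer_regime, base_log_odds=-2.2):
    """Combine conjectures into one probability via a logistic function.
    All weights are made up for illustration, not estimated from data."""
    log_odds = base_log_odds       # base rate: most cases muddle along
    if in_crisis:
        log_odds += 1.0            # Precipice Rule: crises raise the risk somewhat
    if true_believer_regime:
        log_odds += 1.5            # True-Believer Rule: extremists are riskier
    return 1 / (1 + math.exp(-log_odds))

print(round(collapse_probability(False, False), 2))  # 0.1  (a typical muddler)
print(round(collapse_probability(True, True), 2))    # 0.57 (true believer in crisis)
```

The payoff is exactly the adjudication the verbal rules lack: when the Muddle-Along and True-Believer Rules pull in opposite directions, the model still returns a single, comparable number.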

Yes, there was a time when statistics was new and notions of science and modernity and quantification all got mashed together in some professional and social circles into an extreme optimism about the predictability of human behaviors. As far as I can tell, though, very few practicing social scientists think that way any more. And, honestly, I’m just tired of carrying the intellectual baggage those 19th-century hacks left behind.

PS. In a follow-up post, LeVine applies his rules of thumb to produce “six geopolitical predictions for 2013.” On the whole, I think this is a thoughtful exercise, and I only wish more qualitative analysts would be as transparent as Steve is here about the mental models underlying their predictions.

  1. mg / January 9, 2013

    I agree with most of what you’ve said here, except that you use the term “statistics,” I believe, to refer to “computer models.” These models can come in many forms and are often a combination of different techniques and logics. Agent-based modeling, which you posted about previously, can begin to help us gain some traction on problems that statistical models have a lot of trouble with, such as whether a country may or may not be a “muddler” moving forward from the present. I think when you say “statistics” you should be saying something more like “computer-based modeling with an emphasis on reducing cognitive biases inherent in traditional expert forecasts.”

  2. Jonas / January 17, 2013

    The problem with statistical analysis is overfitting. It’s very easy to fall prey to it, and easy to fool both yourself and your audience with it.

    As much as people get fooled by qualitative arguments, at least we’ve had tens of thousands of years of evolution to learn to fight back against them (not all that well, but it’s part of the common wisdom). On the other hand, lay audiences are much more helpless against quantitative arguments. The overall level of mathematical literacy in the population is lower than the level of language fluency or general reasoning ability. And, comparatively, we’ve had at most a few hundred years of human experience in evaluating quantitative analysis.

    That’s why people hold onto the old rule of thumb about “lies, damned lies, and statistics.”
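The overfitting worry in the comment above is easy to demonstrate. The sketch below uses a deliberately pathological “model” that simply memorizes its training data; the labels are pure coin flips, so there is no real pattern to learn, yet the model looks perfect in-sample:

```python
import random

# Labels are random coin flips: no signal exists, only noise.
random.seed(42)
train = [(i, random.randint(0, 1)) for i in range(100)]
test = [(i, random.randint(0, 1)) for i in range(100, 200)]

# A maximally overfit "model": memorize every training case.
memory = {x: y for x, y in train}

def predict(x):
    return memory.get(x, 0)  # falls back to a fixed guess on unseen cases

train_acc = sum(predict(x) == y for x, y in train) / len(train)
test_acc = sum(predict(x) == y for x, y in test) / len(test)

print(train_acc)  # 1.0 -- perfect in-sample "skill"
print(test_acc)   # roughly 0.5 -- no better than chance out-of-sample
```

This is why forecasters judge statistical models on out-of-sample accuracy, not on how well they fit the history they were built from.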
