Democracy and Development Revisited…Again

Over the weekend, I started reading Eric Beinhocker’s The Origin of Wealth, a book that attempts to reinterpret the whole of economics through the lens of complexity theory. I’m only a couple of chapters in at this point, but the most striking thing in the book so far is a chart that shows how absurdly uneven the growth in human wealth has been over time. As he describes (pp. 9-11, emphasis added),

If we use the appearance of the first tools as our starting point, it took about 2,485,000 years, or 99.4 percent, of our economic history to go from the first tools to the hunter-gatherer level of economic and social sophistication… The economic journey between the hunter-gatherer world and the modern world was also very slow over most of the 15,000-year period, and then progress exploded in the last 250 years… To summarize 2.5 million years of economic history in brief: for a very, very, very long time not much happened; then all of a sudden, all hell broke loose.

Just a few days after I read that passage, political scientist Xavier Marquez dropped a tremendous blog post on the global diffusion of democracy over the past two centuries. Marquez opens the post this way (again, emphasis added):

People sometimes do not realize how total has been the normative triumph of some of the ideas typically associated with democracy, even if one thinks that democracy itself has not succeeded quite as spectacularly. Take, for instance, the norm that rulers of states should be selected through some process that involves voting by all adults in society (I’m being deliberately vague here) rather than, say, inheriting their position by succeeding their fathers. In 1788 there were only a couple of countries in the world that could even claim to publicly recognize something remotely like this norm. Most people could not vote, and voting was not generally recognized as something that needed to happen before rulers could rule; rulers could and did claim to have authority to rule on other grounds. Norms of hereditary selection structured the symbolic universe in which political competition took place, and defined its ultimate boundaries for most people (at least those who lived in state spaces). Yet by 2008 there were only four or five countries in the world that did not publicly acknowledge universal voting rights.

If you consider the timing, pace, and character of those two trends side by side, it’s very hard to believe that they aren’t interrelated. Take a look at this figure below. The red line replicates a portion of Beinhocker’s aforementioned plot, using world GDP estimates produced by economist Brad De Long (PDF) to show the exponential growth in human wealth over the past 200 years. The blue line plots the spread of universal suffrage across states in the international political system, as recorded in the Political Institutions and Political Events (PIPE) data set Marquez used in his blog post.

I do not read this chart as evidence in favor of modernization theory, which posits a causal arrow running from economic development to democracy and envisions that changes within specific nation-states unfold in a particular sequence: industrialization → urbanization + education → value changes → democratization. In fact, the chart of long-term global trends masks lots of short-term churn in the status of specific countries and regions. Many countries have diverged sharply from the developmental sequence posited by modernization theory, and that’s a serious problem for a theory of change.

Instead, I see the chart as evidence that human society at the global level has become a complex adaptive system that is currently experiencing a period of radical transformation, or “state shift.” These trends in wealth and governance aren’t cause and effect in the traditional sense, nor are they spuriously correlated. Instead, they are twin streams of a single evolutionary process that is driven, in part, by the creation, selection, and modification of a rapidly widening array of physical and social technologies. Economic complexity is simultaneously a product and a catalyst of this process, and political institutions—including the ones we use to select national rule-makers—are among the most influential social technologies also involved in this “reciprocal dance,” as Beinhocker calls it. (N.B. Weapons are one of the more influential physical technologies in this system that economists often ignore, and their interplay with the evolution of political institutions is a crucial part of this wider story, but that’s a topic for another day.)

Why would democracy and wealth grow hand in hand? On this point, I take my cues from Owen Barder, Henry Farrell, and Cosma Rohilla Shalizi. In a brilliant online talk, Barder follows Beinhocker’s lead and argues that economic development is an evolutionary process which depends heavily on processes of innovation and selection. Farrell and Shalizi describe why democracy is generally better than other forms of government at supporting those processes:

Democracy has unique benefits as a form of collective problem solving in that it potentially allows people with highly diverse perspectives to come together in order collectively to solve problems. Democracy can do this better than either markets or hierarchies, because it brings these diverse perceptions into direct contact with each other, allowing forms of learning that are unlikely either through the price mechanism of markets or the hierarchical arrangements of bureaucracy. Furthermore, democracy can, by experimenting, take advantage of novel forms of collective cognition that are facilitated by new media.

One point I would like to amplify in this line of thinking is that democracy isn’t really a specific “thing” so much as the label we stick on a cluster of seemingly similar things. Like human “races,” political regime types are a set of concepts we’ve developed to organize our thinking about similarities and differences in forms of the social technology we call government. These concepts are neither natural nor inevitable, and they often obscure a tremendous diversity within the categories they establish. Our decision to classify something as a “democracy” depends on many different features, each of which can take a wide variety of forms without violating our mental classification scheme. On electoral systems alone, you’ll be hard pressed to find two cases that look exactly alike, and that’s just one of many relevant attributes. And, of course, even in cases we might consider archetypal, these rules are constantly evolving.

One practical implication of this point is the political version of Owen Barder’s advice to purveyors of foreign aid: instead of searching for “best practices” we can copy from one context and paste onto another, we should think about how to facilitate appropriate experimentation, feedback, and learning within societies we wish to assist, and about what kinds of changes we might make in our own rules and organizations that will further support those processes. These institutions are not modular, and we cannot control the systems in which they’re embedded. We don’t build states, we perturb them, and we should never lose sight of that difference.

In Defense of Political Science and Forecasting

Under the headline “Political Scientists Are Lousy Forecasters,” today’s New York Times includes an op-ed by Jacqueline Stevens that takes a big, sloppy swipe at most of the field. The money line:

It’s an open secret in my discipline: in terms of accurate political predictions (the field’s benchmark for what counts as science), my colleagues have failed spectacularly and wasted colossal amounts of time and money.

As she sees it, this poor track record is an inevitability. Referencing the National Science Foundation‘s history of funding research in which she sees little value, Stevens writes:

Government can—and should—assist political scientists, especially those who use history and theory to explain shifting political contexts, challenge our intuitions and help us see beyond daily newspaper headlines. Research aimed at political prediction is doomed to fail. At least if the idea is to predict more accurately than a dart-throwing chimp.

I don’t have much time to write today, so I was glad to see this morning that Henry Farrell has already penned a careful rebuttal that mirrors my own reactions. On the topic of predictions in particular, Farrell writes:

The claim here—that “accurate political prediction” is the “field’s benchmark for what counts as science”—is quite wrong. There really isn’t much work at all by political scientists that aspires to predict what will happen in the future…It is reasonable to say that the majority position in political science is a kind of soft positivism, which focuses on the search for law-like generalizations. But that is neither a universal benchmark (I, for one, don’t buy into it), nor indeed, the same thing as accurate prediction, except where strong covering laws (of the kind that few political scientists think are generically possible) can be found.

To Farrell’s excellent rebuttals, I would add a couple of things.

First and most important, there’s a strong case to be made that political scientists don’t engage in enough forecasting and really ought to do more of it. Contrary to Stevens’ assertion in that NYT op-ed, most political scientists eschew forecasting, seeing description and explanation as the goals of their research instead. As Phil Schrodt argues in “Seven Deadly Sins of Quantitative Political Science” (PDF), however, to the extent that we see our discipline as a form of science, political scientists ought to engage in forecasting, because prediction is an essential part of the scientific method.

Explanation in the absence of prediction is not somehow scientifically superior to predictive analysis, it isn’t scientific at all! It is, instead, “pre-scientific.”

In a paper on predicting civil conflicts, Mike Ward, Brian Greenhill, and Kristin Bakke help to explain why:

Scholars need to make and evaluate predictions in order to improve our models. We have to be willing to make predictions explicitly – and plausibly be wrong, even appear foolish – because our policy prescriptions need to be undertaken with results that are drawn from robust models that have a better chance of being correct. The whole point of estimating risk models is to be able to apply them to specific cases. You wouldn’t expect your physician to tell you that all those cancer risk factors from smoking don’t actually apply to you. Predictive heuristics provide a useful, possibly necessary, strategy that may help scholars and policymakers guard against erroneous recommendations.

Second, I think Stevens actually gets the historical record wrong. It drives me crazy when I see people take the conventional wisdom about a topic—say, the possibility of the USSR’s collapse, or a wave of popular uprisings in the Middle East and North Africa—and turn it into a blanket statement that “no one predicted X.” Most of the time, we don’t really know what most people would have predicted, because they weren’t asked to make predictions. The absence of a positive assertion that X will happen is not the same thing as a forecast that X will not happen. In fact, in at least one of the cases Stevens discusses—the USSR’s collapse—we know that some observers did forecast its eventual collapse, albeit usually without much specificity about the timing of that event.

More generally, I think it’s fair to say that, on just about any topic, there will be a distribution of forecasts—from high to low, impossible to inevitable, and so on. Often, that distribution will have a clear central tendency, as did expectations about the survival of authoritarian regimes in the USSR or the Arab world, but that central tendency should not be confused with a consensus. Instead, this divergence of expectations is precisely where the most valuable information will be found. Eventually, some of those predictions will prove correct while others will not, and, as Phil and Mike and co. remind us, that variation in performance tells us something very useful about the power of the explanatory models—quantitative, qualitative, it doesn’t really matter—from which they were derived.

PS. For smart rebuttals to other aspects of Stevens’ jeremiad, see Erik Voeten’s post at the Monkey Cage and Steve Saideman’s rejoinder at Saideman’s Semi-Spew.

PPS. Stevens provides some context for her op-ed on her own blog, here. (I would have added this link sooner, but I’ve just seen it myself.)

PPPS. For some terrific ruminations on uncertainty, statistics, and scientific knowledge, see this latecomer response from Anton Strezhnev.
