Yesterday, the blog Political Violence @ a Glance carried a thoughtful post by Thomas Zeitzoff about why international relations and conflict researchers need to work harder to get at the micro-foundations of the processes they study. For those of you not immersed in these methodological debates, “micro-foundations” in this context is just a fancy way of referring to individual people instead of the groups and networks of which they’re members, or the towns or countries or regions in which they’re located.
What’s bugging Zeitzoff is the ecological fallacy—that is, the (flawed) assumption that patterns observed across groups necessarily hold for the individuals who belong to those groups. As Zeitzoff notes, many theories of political conflict involve decision processes occurring within individuals—to participate in a protest, to join a rebel group, to vote for A instead of B—but virtually all of the data we use to test those theories describes the groups or environments in which those individuals are embedded.
Take, for example, the idea that poverty increases the risk of civil war. Statistical models of civil-war onset have shown over and over that poorer countries are indeed more susceptible to civil war, but the country-level data used in those models don’t tell us who’s actually doing the fighting. It could be true that poorer individuals are more motivated to rebel than richer ones, but it could also be true that poorer countries are more susceptible to rebellions by collections of individuals whose own economic status has little effect on their decision to participate. Without data on who participates and how poor they are, we can’t really say which is correct.
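The gap between those two stories can be made concrete with a toy simulation. The sketch below (hypothetical numbers throughout; the join rates and poverty scales are invented for illustration) builds a world where country-level poverty raises the chance of rebellion, yet within each rebelling country an individual's own poverty has no effect on whether they join. A country-level analysis finds a clear poverty-conflict correlation anyway; the individual-level correlation is essentially zero.

```python
import random
import statistics

random.seed(42)

# Simulate 200 hypothetical countries. Country poverty (0 = rich, 1 = poor)
# raises the probability of rebellion, but whether any given citizen joins
# the fighting is independent of that citizen's own poverty.
countries = []     # (country_poverty, rebellion_occurred)
individuals = []   # (own_poverty, joined_rebellion)

for _ in range(200):
    country_poverty = random.random()
    rebellion = random.random() < 0.6 * country_poverty
    countries.append((country_poverty, rebellion))
    if rebellion:
        for _ in range(100):               # 100 citizens per rebelling country
            own_poverty = random.random()
            joined = random.random() < 0.1  # flat 10% join rate, regardless of income
            individuals.append((own_poverty, joined))

def corr(pairs):
    """Pearson correlation between the two columns of (x, y) pairs."""
    xs = [x for x, _ in pairs]
    ys = [float(y) for _, y in pairs]
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)
    return cov / (statistics.pstdev(xs) * statistics.pstdev(ys))

country_corr = corr(countries)
indiv_corr = corr(individuals)
print(f"country-level corr(poverty, civil war):   {country_corr:+.2f}")
print(f"individual-level corr(poverty, joining):  {indiv_corr:+.2f}")
```

By construction, the country-level correlation comes out solidly positive while the individual-level one hovers near zero: exactly the pattern where inferring individual motives from aggregate data would lead us astray.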
The broader point is that patterns we observe at these higher levels may match what’s going on within specific individuals, but they also might not. To make confident inferences about why people act as they do, we really need to try to directly observe (some of) those individuals and the choices they make.
Zeitzoff is absolutely right about the importance of avoiding the ecological fallacy, of course, but I don’t agree with the prescription he offers to remedy this ailment. According to Zeitzoff,
Field experiments offer a promising path forward and need to be incorporated into the repertoire of techniques conflict scholars adopt; a stronger version of this point is that conflict scholars have to do this or else leave unexplored the central arguments that animate the field.
Contra Zeitzoff, I’m skeptical that field experiments will shed much light on many topics of interest to students of international politics, mostly because I don’t think those field experiments will ever happen. Maybe some of the experimental-design pros will set me straight on this, but I don’t see how researchers are going to create and reliably observe experimental and control groups for things like war between states, participation in insurgencies, or protests against authoritarian regimes, given the political sensitivity and ethical dilemmas involved. Many of the actions theorists of international politics care about are dangerous and illegal. Those qualities give participants strong incentives to conceal their actions, and they give states affected by those actions strong incentives to block experiments that could help catalyze unrest or insurgency.
Instead of field experiments, I think this is where Big Data could really help push political science forward. And when I say Big Data here, I don’t just mean larger data sets (although those are great, too). Instead, I’m referring more specifically to what organizations like the U.N.’s Global Pulse have in mind when they use this term: massive collections of digital observations created, sometimes incidentally, as people go about their daily lives.
As Patrick Meier noted in a blog post last year, these high-frequency digital data sets come with significant concerns and constraints of their own, including the need to respect the privacy of the individuals being observed and selection bias in the samples they generate. Still, as long as the data are handled and analyzed with these limitations in mind, they should offer new opportunities to explore the micro-foundations of our theories in ways we’re only just starting to imagine.
In principle, evidence from carefully designed experiments would be even better. In practice, though, I just don’t see many of those experiments happening, and I see no reason to eschew improvement in quixotic pursuit of perfection.