If occasional readers of this blog remember only one thing from their time here, I’d like it to be this: we may be getting better at measuring political things around the world, but huge gaps remain, sometimes on matters that seem basic or easy to see, and we will never close those gaps completely.
Two items this week reminded me of this point. The first came from the World Bank, which blogged that only about half of the countries they studied for a recent paper had “adequate” data on poverty. As a chart from an earlier World Bank blog post showed, the number of countries suffering from “data deprivation” on this topic has declined since the early 1990s, but it’s still quite large. Also notice that the period covered by the 2015 study ends in 2011. So, in addition to the “everywhere” part of the Big Data promise, we’ve still got serious problems with the “all the time” part, too.
The other thing that reminded me of data gaps was a post on the Lowy Institute’s Interpreter blog about Myanmar’s military, the Tatmadaw. According to Andrew Selth,
Despite its dominance of Burma’s national affairs for decades, the Tatmadaw remains in many respects a closed book. Even the most basic data is beyond the reach of analysts and other observers. For example, the Tatmadaw’s current size is a mystery, although most estimates range between 300,000 and 350,000. Official statistics put Burma’s defence expenditure this year at 3.7% of GDP, but the actual level is unknown.
This kind of situation may be especially pernicious. It looks like we have data—350,000 troops, 3.7 percent of GDP—but the subject-matter expert knows that those data are not reliable. For those of us trying to do cross-national analysis of things like conflict dynamics or coup risk, the temptation to plow ahead with the numbers we have is strong, but we shouldn’t trust the inferences we draw from them.
The size and capability of a country’s military are obviously political matters. It’s not hard to imagine why governments might want to mislead others about the true values of those statistics.
Measuring poverty might seem less political and thus more amenable to technical fixes or workarounds, but that really isn’t true. At each step in the measurement process, the people being observed or doing the observing may have reasons to obscure or mislead. Survey respondents might not trust their observers; they may fear the personal or social consequences of answering or not answering certain ways, or simply resent the intrusion. When the collection is automated, they may develop ways to fool the routines. Local officials who sometimes oversee the collection of those data may be tempted to fudge numbers that affect their prospects for promotion or punishment. National governments might seek to mislead other governments as a way to make their countries look stronger or weaker than they really are—stronger to deter domestic and international adversaries or get a leg up in ideological competitions, or weaker to attract aid or other help.
As social scientists, we dream of data sets that reliably track all sorts of human behavior. Our training should also make us sensitive to the many reasons why that dream is impossible and, in many cases, undesirable. Measurement begets knowledge; knowledge begets power; and struggles over power will never end.