The idea for the piece came from reading Chris Fariss's May 2014 article in the American Political Science Review and then digging around in the other work he and others have done on the topic. It's hard to capture the subtleties of a debate as technical as this one in a short piece for a general audience, so if you're really interested in the subject, I would encourage you to read further. See especially the other relevant papers on Chris's Publications page and the 2013 article by Ann Marie Clark and Kathryn Sikkink.
In the piece, I report that “some human rights scholars see Fariss’ statistical adjustments as a step in the right direction.” Among others I asked, Christian Davenport wrote to me that he agrees with Fariss about how human rights reporting has evolved over time, and what that implies for measurement of these trends. And Will Moore described Fariss’s estimates in an email as a “dramatic improvement” over previous measures. As it happens, Will is working with Courtenay Conrad on a data set of allegations of torture incidents around the world from specific watchdog groups (see here). Like Chris, Will presumes that the information we see about human rights violations is incomplete, so he encourages researchers to treat available information as a biased sample and use statistical models to better estimate the underlying conditions of concern.
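The core intuition here — that observed reports are a biased, incomplete sample of some underlying level of abuse, and that the bias itself changes as monitoring improves — can be illustrated with a toy simulation. To be clear, this is not Fariss's actual model or anyone's real data; every number below is invented purely to show how a rising "standard of accountability" can make a flat underlying trend look like it is getting worse.

```python
import random

random.seed(42)

def simulate_reports(n_years, true_rate, detect_prob):
    """Toy model: each year the true number of abuses is fixed at
    `true_rate`, but each incident is observed only with probability
    `detect_prob[year]`, which rises over time as monitoring improves."""
    observed = []
    for y in range(n_years):
        hits = sum(1 for _ in range(true_rate) if random.random() < detect_prob[y])
        observed.append(hits)
    return observed

n_years = 30
true_rate = 100  # constant underlying abuse level (an assumption of the toy)
# The "standard of accountability" tightens: detection climbs from 0.3 to 0.9.
detect_prob = [0.3 + 0.6 * y / (n_years - 1) for y in range(n_years)]

raw = simulate_reports(n_years, true_rate, detect_prob)

# Naive reading: raw counts rise over time, suggesting worsening abuse.
# Adjusted reading: dividing by the (here, conveniently known) detection
# probability recovers an approximately flat trend at the true level.
adjusted = [r / p for r, p in zip(raw, detect_prob)]
```

In real applications the detection probability is of course not known and must itself be estimated, which is where the statistical machinery and the disagreement about it come in; the toy only shows why a correction of this general shape matters.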
When I asked David Cingranelli, one of the co-creators of what started out as the Cingranelli and Richards (CIRI) data set, for comment, he had this to say (and more, but I'll just quote this bit here):
I’m not convinced that either the “human rights information paradox” or the “changing standard of accountability” produce a systematic bias in CIRI data. More importantly, the evidence presented by Clark and Sikkink and the arguments made by Chris Fariss do not convince me that there is a better alternative to the CIRI method of data recording that would be less likely to suffer from biases and imprecision. The CIRI method is not perfect, but it provides an optimal trade-off between data precision and transparency of data collection. Statistically advanced indexes (scores) might improve the precision but would for sure significantly reduce the ability of scholars to understand and replicate the data generation process. Overall, the empirical research would suffer from such modifications.
I hope this piece draws wider attention to this debate, which interests me in two ways. The first is the substance: How have human rights practices changed over time? I don't think Fariss's findings settle that question in some definitive and permanent way, but they did convince me that the trend in the central tendency over the past 30 or 40 years is probably better than the raw data imply.
The second way this debate interests me is as another example of the profound challenges involved in measuring political behavior. As is the case with violent conflict and other forms of contentious politics, almost every actor at every step in the process of observing human rights practices has an agenda—who doesn’t?—and those agendas shape what information bubbles up, how it gets reported, and how it gets summarized as numeric data. The obvious versions of this are the attempts by violators to hide their actions, but activists and advocates also play important roles in selecting and shaping information about human rights practices. And, of course, there are also technical and practical features of local and global political economies that filter and alter the transmission of this information, including but certainly not limited to language and cultural barriers and access to communications technologies.
This blog post is now about half as long as the piece it’s meant to introduce, so I’ll stop here. If you work in this field or otherwise have insight into these issues and want to weigh in, please leave a comment here or at Democracy Lab.