Advocascience

I think comparative politics has a bigger problem with conflicts of interest than scholars who work in this field generally acknowledge. I don’t think the problem can be eliminated, but I imagine that talking about it more can help, so that’s what I’m going to do.

When you hear the term “conflict of interest,” you probably think of corporations paying for studies that advance their commercial interests. I know I do. It’s easy to see why studies on the effectiveness of new drug therapies or the link between pollution and cancer, for example, warrant closer scrutiny when they’re funded by firms with profits riding on the results. You don’t have to be a misanthrope to believe that the profit motive might have shaped the analysis, and there are enough examples of outright fraud to make skepticism the prudent default setting.

That’s not the only conflict that can arise, though. What I think many scholars working in comparative politics don’t appreciate as much as we should is that it’s also possible for political values and advocacy to play a similar role, and to similar effect. When a researcher’s work deals with issues on which he or she has strong moral beliefs, that confluence can hinder his or her ability to identify and fairly weigh relevant evidence. Confirmation bias is hard to overcome, especially in studies that rely entirely on an author’s interpretation, as many qualitative studies do. The problem is even more acute when a researcher’s personal life is interwoven with his or her work. Certain conclusions may be more palatable or appealing to people with certain values, and it can be professionally and personally damaging for researchers to report findings suggesting that the work their friends and colleagues are doing may not be all that useful, or may even be counterproductive.

The example I know best comes from one of my primary research interests, comparative democratization. Some of the best-known and most respected researchers and organizations in this sub-field routinely engage in advocacy through op-eds, policy briefs, and meetings and speaking engagements with advocates and development professionals. One of the leading journals on this topic, the Journal of Democracy (JoD), is published for the National Endowment for Democracy, a U.S. government-funded organization that supports the U.S. government’s efforts to promote democracy around the world. In contrast to conventional academic practice, most submissions to JoD are commissioned by the editors, and they aren’t formally peer-reviewed.

Perhaps it’s just a coincidence, but for the past 20 years or so, the main themes to emerge from research on this topic have been that democratization has all kinds of ancillary benefits—peace, wealth, and freedom from terrorism, to name a few—and that the kinds of things the U.S. government and the advocates it supports generally do to advance democratization are helpful. In other words, scholars’ studies often reach conclusions that affirm the value of U.S. policy and their own advocacy, which is intimately connected to their personal beliefs and relationships.

That happy alignment doesn’t automatically invalidate those studies, of course, but I think it does warrant closer scrutiny than it now gets. I have great respect for many of the people working in democratization studies, and I happen to share their moral convictions that democracy is the best form of government and that every human being deserves citizenship. Still, let’s be honest: we feel better when we believe our research is helping people we admire change the world for the better, and we’re more likely to get that positive feedback when our findings validate the work those people are already doing. The effects of this feedback loop on the questions we ask, the designs we adopt to answer them, and the conclusions we reach may not be trivial. I think we should talk more about it, both in a general way and whenever evaluating specific pieces of research.

It would be unfair and probably unethical of me to conclude without pointing out that similar issues arise when scholars do consulting work, as I have for the past 15 or so years. Even if a client asks for as fair and objective a study as possible, interpersonal and financial concerns can shape the design of the analysis and interpretation of the results. For example, if you’re paid handsomely to develop a system to forecast event X, you have a financial interest in saying that you can indeed forecast event X and that you can do it well. We can ameliorate this problem by being as transparent as possible about our funding, data, and methods, but we can’t eliminate it, and we’re usually not the best judges of our own motives. Contract research like this occupies a pretty small space in comparative politics right now, so I don’t think this is having much effect on the field at the moment, but I think it’s important for me to note it, given the career path I’ve taken.

12 Comments

  1. The phrase “the kinds of things the U.S. government and the advocates it supports generally do to advance democratization” is very ambiguous. Can you give some concrete examples of what you’re referring to?

    • Fair question. I think of electoral assistance, training and funding for NGOs, and capacity-building for government entities as the main forms of U.S. democracy promotion. As far as I can tell, all of those forms enjoy broad support among advocates, but empirical evidence that they produce the desired effects and do not produce unwanted and unintended consequences is scarce, and there is some theory and evidence to the contrary.

  2. Interesting post, and you’re right that confirmation bias is always a concern, especially when dealing with such ethically weighty issues as democracy promotion, where most of those engaged in research are themselves morally committed to democracy. Still, in the little bit of work I’ve done in this field, I’ve found that the clients are thrilled when they get good news, but also highly receptive to and willing to accept bad news. It’s rare that any study produces all positive or all negative results, and the details on what works and what doesn’t are often the most interesting and useful results.

  3. I think this is a very important discussion to have, even more so when it isn’t just individual researchers that have a conflict of interest, but a whole discipline. I think such discussions are becoming more and more common in econ.

    As always, though, I have a big disagreement and it is with this statement (emphasis added by me).

    When a researcher’s work deals with issues on which he or she has strong moral beliefs, that confluence can hinder his or her ability to identify and fairly weigh relevant evidence. Confirmation bias is hard to overcome, especially in studies that rely entirely on an author’s interpretation, as many qualitative studies do.

    I actually think this concern is backwards. In purely qualitative studies, I think people are more used to taking into account the background and interests of the author. It is quantitative studies where the danger lies: I feel like people tend to fetishize numbers as ‘objective’ or at least ‘value neutral’. This is seldom the case, and a heavy dose of math can be used to hide personal or larger social biases; boyd and Crawford say this very well in the context of computer scientists wandering into the social sciences:

    As computational scientists have started engaging in acts of social science, there is a tendency to claim their work as the business of facts and not interpretation. A model may be mathematically sound, an experiment may seem valid, but as soon as a researcher seeks to understand what it means, the process of interpretation has begun. This is not to say that all interpretations are created equal, but rather that not all numbers are neutral.

    There are some nice thoughts on this in the context of data science and journalism on Cathy O’Neil’s blog that you might enjoy.

