As citizens and as engaged intellectuals, we all have the right—indeed, an obligation—to make moral judgments and act based on those convictions. As political scientists, however, we have a unique set of potential contributions and constraints. Political scientists do not typically have anything of distinctive value to add to a chorus of moral condemnation or declarations of normative solidarity. What we do have, hopefully, is the methodological training, empirical knowledge and comparative insight to offer informed assessments about alternative courses of action on contentious issues. Our primary ethical commitment as political scientists, therefore must be to get the theory and the empirical evidence right, and to clearly communicate those findings to relevant audiences—however unpalatable or inconclusive they might be.
That’s a manifesto of sorts, nested in a great post by Marc Lynch at the Monkey Cage. Marc’s post focuses on analysis of the Middle East, but everything he writes generalizes to the whole discipline.
I’ve written a couple of posts on this theme, too:
- “This Is Not a Drill,” on the challenges of doing what Marc proposes in the midst of fast-moving and politically charged events with weighty consequences; and
- “Advocascience,” on the ways that researchers’ political and moral commitments shape our analyses, sometimes but not always intentionally.
Putting all of those pieces together, I’d say that I wholeheartedly agree with Marc in principle, but I also believe this is extremely difficult to do in practice. We can—and, I think, should—aspire to this posture, but we can never quite achieve it.
That applies to forecasting, too, by the way. Coincidentally, I saw this great bit this morning in the Letter from the Editors for a new special issue of The Appendix, on “futures of the past”:
Prediction is a political act. Imagined futures can be powerful tools for social change, but they can also reproduce the injustices of the present.
Concern about this possibility played a role in my decision to leave my old job, helping to produce forecasts of political instability around the world for private consumption by the U.S. government. It is also part of what attracts me to my current work on a public early-warning system for mass atrocities. By making the same forecasts available to all comers, I hope that we can mitigate that downside risk in an area where the immorality of the acts being considered is unambiguous.
As a social scientist, though, I also understand that we’ll never know for sure what good or ill effects our individual and collective efforts had. We won’t know because we can’t observe the “control” worlds we would need to confidently establish cause and effect, and we won’t know because the world we seek to understand keeps changing, sometimes even in response to our own actions. This is the paradox at the core of applied, empirical social science, and it is inescapable.
leofassb / July 3, 2014
Reblogged this on Wirtschaftsprofiling und Unternehmenssicherheit and commented:
Some engaging thoughts on professionalism.
Rex Brynen / July 3, 2014
I don’t think Marc meant to ignore the issue, but he’s clearly wrong in stating that “Our primary ethical commitment as political scientists, therefore must be to get the theory and the empirical evidence right, and to clearly communicate those findings to relevant audiences.” Rather, our primary ethical commitment is to do no harm to those we engage as research subjects. As chair of a Research Ethics Board (or IRB, in American terminology), I’ve certainly come across cases where prioritizing knowledge creation has potentially put subjects at risk.
I doubt there is much philosophical disagreement on this, even if researchers have been known to get sloppy in practice. The bigger moral challenge is what to do when our research findings might themselves generate harm. To take one concrete case, a graduate student of mine once came to the well-founded conclusion that the systematic use of torture was, in the context he was examining, politically efficacious. There was every likelihood that his findings would not only be read by state security services in the region, but might even affect policy. Should he publish his “unpalatable” findings (to use Marc’s term) regardless? Or downplay them to minimize the very real risk of his research buttressing the use of torture? He chose to downplay the findings significantly; in my view, the right call in that case.
Similarly, our ethical obligations do not stop at producing knowledge, throwing it into the policy hopper, and washing our hands of the effects–ESPECIALLY given that we’re supposed to be political scientists, with particular insight into the policy process. Otherwise we end up as the social science version of Wernher von Braun (to quote the classic Tom Lehrer song):
Gather round while I sing you of Wernher von Braun
A man whose allegiance is ruled by expedience
Call him a Nazi, he won’t even frown
“Ha, Nazi schmazi,” says Wernher von Braun
Don’t say that he’s hypocritical
Say rather that he’s apolitical
“Once the rockets are up, who cares where they come down
That’s not my department,” says Wernher von Braun
Some have harsh words for this man of renown
But some think our attitude should be one of gratitude
Like the widows and cripples in old London town
Who owe their large pensions to Wernher von Braun
You too may be a big hero
Once you’ve learned to count backwards to zero
“In German oder English I know how to count down
Und I’m learning Chinese,” says Wernher von Braun
dartthrowingchimp / July 3, 2014
A comment that really should be a full post of its own, Rex. Thanks very much.
Brent Sasley / July 3, 2014
I wonder if talk of “primary” ethical obligations is a bit misleading. As academics, we serve multiple audiences, all of whom require our careful and ethical attention: students, colleagues, policymakers, the public, the communities we research, even our own identity communities.
in the middle / July 3, 2014
I’ve read several of your recent posts and greatly respect your broad grasp of empirical findings and thoughtful reflections. The blog is much appreciated. But one observation keeps coming up when you touch on topics about the relationship of theory to practice, such as this post on ethics and the ones about the new endeavor on mass atrocities. I continue to be puzzled why the bulk of political science examines the causes of problems but so little of it applies the same rigorous tools to ask which policy or other responses are most effective in reducing those problems, and what factors affect whether such responses are adopted or not.
Answers to this “solutions” question are only partly derivable from causal diagnoses of the problem. Many other factors affect what actually happens when deliberate policy actions are taken, involving not only the well-recognized problem of unintended effects but also political feasibility, bureaucratic resistance, funding, implementation, social and political backlash, and so on. Of course, there are many exceptions to this generalization, and the balance may be changing somewhat (a few recent examples being the political science research on the impact of development aid on democratization, and the PITF case studies and other research on responses to mass atrocities). Still, the assumption by many political scientists that their research is contributing greatly to finding out “what works” is largely belied by the comparatively small amount of rigorous policy analysis in the discipline.
To put this another way, rather than adding yet another database and empirical model of why mass atrocities happen to the multiple rigorous early warning systems that are already well-developed and in place, why not apply your impressive technical skills and knowledge of politics to figuring out why those EW systems continue to be ignored (twenty years after Rwanda) — except sometimes through last-minute, crisis-moment, often largely ineffective and costly military interventions (e.g., Libya, Syria, CAR)? And to the question of what other effective and feasible alternatives might be available?
There seems to be a serious fallacy in the field that adding more and better causal knowledge of a problem will prompt more responsiveness in addressing it. Is this imbalance not another ethical question facing political science?
dartthrowingchimp / July 3, 2014
Thank you, Michael; you’ve raised a number of big and important issues here. I’ll respond briefly to a couple of them.
First, I suspect that a fair share of political scientists aren’t especially concerned with how their work might shape policy or other practice. That’s just not what they do or why they do it. If that’s right, then for better or for worse, the flawed assumption you identify—“that adding more and better causal knowledge of a problem will prompt more responsiveness in addressing it”—doesn’t apply to many social scientists. People get into this line of work for all kinds of reasons, and helping to fix or mitigate problems is only one of them.
Second, and more important, there’s a practical problem here: research on the effectiveness of specific policy interventions is probably even harder to do well than research on causal mechanisms, if that’s possible. How do you properly assess the effects of things that get tried rarely, on moving targets, and in combination with myriad other processes and interventions, only a fraction of which you can observe? Even people who are highly motivated to study policy effectiveness can’t make these problems of fuzzy measurement, missing counterfactuals, and unobserved confounders go away. Look at the raging debate in development economics over the value of randomized controlled trials (RCTs) and their relevance (or not) to policy prescription, and it’s plain to see that these difficulties are not easily overcome.
I don’t mean to discourage people who are interested in applied social science and policy-making from trying to understand these things in spite of those challenges. I just think you overstate how well we understand causes and how accurate our current forecasts are and understate the difficulty of rigorously assessing the impacts of things policy-makers and advocates might do to change the world.