Did Libya Cause Mali?

Did the fall of the Gaddafi regime in Libya cause the ongoing crisis in Mali?

A lot of people seem to think so. Number 4 on Max Fisher’s “Nine Questions about Mali You Were Too Embarrassed to Ask” is: “I heard that this whole crisis happened because of the war in Libya. Is that true?” Yesterday on the BBC’s This Week, former U.N. Secretary General Kofi Annan seemed to answer in the affirmative when he described Mali as “collateral damage” from Libya.

The accounts I’ve read from people who closely study the country generally attribute the crisis in Mali to two things: 1) the resumption of armed rebellion in northern Mali in January 2012; and 2) the mutiny and coup that ensued in March. As I understand those experts’ arguments, the scale of the current crisis is due to the intersection of these two. Neither the rebellion nor the coup alone was sufficient to produce the state collapse that is compelling the large-scale international response. If neither was sufficient alone, then both were necessary.

Did Libya’s collapse cause one or both of these events? It certainly seems to have played some role. As proponents of the “Libya caused Mali” line have pointed out, the resumption of rebellion in the north was driven, in part, by an inflow of fighters and arms fleeing Libya after the fall of their patron and purchaser, Moammar Gaddafi. The resumption of the Tuareg rebellion, in turn, appears to have helped trigger the military coup. After seizing power, the putschists sometimes cited the government’s weak support for the fight against the rebels as the motivation for their mutiny, which evolved into a coup when it met little resistance.

To make strong claims about the importance of Libya to Mali, though, we have to believe that one or both of these things—the rebellion and the coup—would not have happened if Libya hadn’t imploded. Here, I think the assertion that “Libya caused Mali” gets much weaker.

On the fight in the north, a recent Think Africa Press piece by Andy Morgan asserts that the resumption of rebellion had been planned for some time, suggesting that Libya’s collapse was not a necessary condition for it. “In truth, neither Gaddafi’s fall nor AQIM nor drugs and insecurity are the prime movers behind this latest revolt,” Morgan writes. “They are just fresh opportunities and circumstances in a very old struggle.” Morgan’s account isn’t gospel, of course, but it does imply that rebellion could have, and probably would have, recurred in the north regardless of Gaddafi’s fate. Libya’s collapse seems to have affected the timing and possibly the strength of the rebellion, but it doesn’t appear to have been necessary for its occurrence.

The connection between Libya and the March 2012 coup is even more tenuous. Statistical models I developed to forecast coups d’etat identified Mali as one of the countries at greatest risk in 2012 before the coup happened, and that assessment was not particularly sensitive to events in Libya. The chief drivers of that forecast were Mali’s extreme poverty (as captured by its infant mortality rate) and the character of its pre-coup political institutions. One of the models takes armed conflict in the region into account, but it’s not an especially influential risk factor, and the impact of Libya’s civil war on the final forecast is negligible.
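The mechanics of a forecast like that can be sketched with a toy logistic-regression risk score. To be clear, the predictors and coefficients below are hypothetical stand-ins, not the actual models’ values; the sketch only illustrates how a small weight on regional armed conflict makes a shock like Libya’s civil war barely move the bottom-line probability:

```python
import math

def coup_risk(infant_mortality_rel, anocracy, regional_conflict,
              b0=-4.0, b1=1.2, b2=1.5, b3=0.2):
    """Toy logistic model of annual coup risk.

    All coefficients (b0..b3) are invented for illustration; the real
    forecasting models' predictors and weights are not reproduced here.
    The deliberately small weight b3 on regional armed conflict mirrors
    the point that it is not an especially influential risk factor.
    """
    z = (b0
         + b1 * infant_mortality_rel   # e.g., log infant mortality relative to the global median
         + b2 * anocracy               # 1 if hybrid/weakly institutionalized regime, else 0
         + b3 * regional_conflict)     # 1 if armed conflict in the region, else 0
    return 1.0 / (1.0 + math.exp(-z))  # logistic link: probability in (0, 1)

# A hypothetical profile loosely resembling Mali in 2012: very high
# infant mortality, hybrid institutions, armed conflict next door.
with_libya = coup_risk(1.5, 1, 1)
without_libya = coup_risk(1.5, 1, 0)
print(with_libya, without_libya)
```

Running the two scenarios shows the risk stays high either way: zeroing out the regional-conflict term shifts the toy probability only a few percentage points, because the poverty and institutions terms dominate the score.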

This forecast suggests that a coup in Mali was entirely plausible absent the rebellion in the north, and that impression is bolstered by the reporting of Bruce Whitehouse from Bamako in a March 2012 blog post:

The way [coup leader Capt.] Sanogo went on to justify the coup was inconsistent and wide-ranging. His initial responses to questions about his troops’ demands indicated that their primary concerns centered around living conditions, pay, and education and job opportunities for their children. When prompted about insecurity in northern Mali, however, he claimed that this issue “occupied 70 percent of their preoccupations.” (During a later interview, Sanogo again had to be reminded about the rebellion after listing the factors that led to the coup.)

The statements of actors engaged in the politics in question aren’t always (often? ever?) honest or reliable, but in this case they align with the information we get from the statistical model. It really isn’t that hard to imagine a coup occurring in Mali in 2012 regardless of events in Libya.

In retrospect, it’s easy to construct narratives that connect Mali to Libya. What’s harder is to imagine the other ways things might have unfolded and assess how likely those counterfactual histories are. We’ll never know for sure, of course, but I think this quick accounting shows that we could have arrived at something very much like the current crisis in Mali even if the Gaddafi regime had never collapsed. That doesn’t mean events in Libya have had no effect on the crisis in Mali, but it does suggest that the one is not the cause of the other.

On the Limits of Our Causal Imagination

This morning, while I was driving my boys to school, my 13-year-old son said:

When I was a kid, I thought you controlled the car with the steering wheel. I would see you go like this <pushes arms out> and like this <pulls arms in> and thought that was how you made it go.

What a perfect illustration of how our minds imperfectly construct causality. Sitting in the back seat when he was younger, my son couldn’t see my feet as I drove; he could only see my hands. When he wondered what caused the car to speed up and slow down, he built a complete mental model from observed materials. It didn’t occur to him that I might be doing things he didn’t see—that the real causes of the car’s acceleration and deceleration might lie hidden from his view. Only in retrospect did that idea seem silly. At the time, that mental model made complete sense to him, and he implicitly entrusted his life to it every time he climbed in that car.

Think about that next time you’re trying to explain something as complex as the flow and ebb of a social movement or the collapse of a state.

Ignorance Is Not Always Bliss

Contrary to the views of some skeptics, I think that political science deserves the second half of its name, and I therefore consider myself to be a working scientist. The longer I’ve worked at it, though, the more I wonder if that status isn’t as much a curse as a blessing. After more than 20 years of wrestling with a few big questions, I’m starting to believe that the answers to those questions are fundamentally unknowable, and permanent ignorance is a frustrating basis for a career.

To see what I’m getting at, it’s important to understand what I take science to be. In a book called Ignorance, neurobiologist Stuart Firestein rightly challenges the popular belief that science is a body of accumulated knowledge. Instead, Firestein portrays scientists as explorers—“feeling around in dark rooms, bumping into unidentifiable things, looking for barely perceptible phantoms”—who prize questions over answers.

Working scientists don’t get bogged down in the factual swamp because they don’t care all that much for facts. It’s not that they discount or ignore them, but rather that they don’t see them as an end in themselves. They don’t stop at the facts; they begin there, right beyond the facts, where the facts run out.

What differentiates science from philosophy is that scientists then try to answer those questions empirically, through careful observation and experimentation. We know in advance that the answers we get will be unreliable and impermanent—“The known is never safe,” Firestein writes; “it is never quite sufficient”—but the science is in the trying.

The problem with social science is that it is nearly always impossible to do the kinds of experimentation that would provide us with even the tentative knowledge we need to develop a fresh set of interesting questions. It’s not that experiments are impossible; they aren’t, and some social scientists are working hard to do them better. Instead, as Jim Manzi has cogently argued, the problem is that it’s exceptionally difficult to generalize from social-scientific experiments, because the number and complexity of potential causes is so great, and the underlying system, if there even is such a thing, is continually evolving.

This problem is on vivid display in a recent Big Think blog post in which eight researchers identified as some of the world’s “top young economists” identify what they see as their discipline’s biggest unanswered questions. The first entry begins with the sentence, “Why are developing countries poor?” The flip side of that question is, of course, “Why are rich countries rich?”, and if you put those two questions together, you get “What makes some economies grow faster than others?” That is surely the most fundamental riddle of macroeconomics, and yet the sense I get from empirical economists is that, after centuries of inquiry, we still just don’t know.

My own primary field of political development and democratization suffers from the same problem. After several decades of pondering why some countries have democratic governments while others don’t, the only thing we really know is that we still don’t know. When we pore over large data sets, we see a few strong correlations, but those correlations can’t directly explain the occurrence of relevant changes in specific cases. What’s more, so many factors are so deeply intertwined with each other that it’s really impossible to say which causes which. When we narrow our focus to clusters of more comparable cases—say, the countries of Eastern Europe after the collapse of Communism—we catch glimpses of things that look more like causal mechanisms, but the historical specificity of the conditions that made those cases comparable ensures that we can never really generalize even those ephemeral inferences.

It’s tempting to think that smarter experimentation will overcome or at least ameliorate this problem, but on broad questions of political and economic development, I’m not buying it. Take the question of whether U.S.-funded programs aimed at promoting democracy in other countries actually produce the desired effects. This sounds like a problem amenable to experimental design (what effect does intervention X have on observable phenomenon Y?), but it really isn’t. Yes, we can design and sometimes even implement randomized controlled trials (RCTs) to try to evaluate the impacts of individual interventions under specific conditions. As Jennifer Gauck has convincingly argued, however, it’s virtually impossible to get clear answers to the original macro-level questions from the micro-level analyses these RCTs must entail when the micro-to-macro linkages are themselves unknown. Add thick layers of politicization, power struggles, and real-time learning, and it’s hard to see how even well-designed RCTs can push us off of old questions and onto new ones.

I’m not sure where this puts me. To be honest, I increasingly wonder if my attraction to forecasting has less to do with the lofty scientific objective of using predictions to hone theories and more to do with the comfort of working on a more tractable problem. I know I can never really answer the big questions, and my attempts to do so sometimes leave me feeling like I’m trying to bail out the ocean, pouring one bucket at a time onto the sand in hopes of one day catching a glimpse of the contours of the floor below. By contrast, forecasting at least provides a yardstick against which I can assess the incremental value of specific projects. On a day-to-day basis, the resulting sense (illusion?) of progress provides a visceral feeling of accomplishment and satisfaction that is missing when I offer impossibly uncertain answers to deeper questions of cause and effect. And, of course, the day-to-day world is the one I actually have to inhabit.

I’d really like to end this post on a hopeful note, but today I’m feeling defeated. So, done.
