We Are All Victorians

“We have no idea, now, of who or what the inhabitants of our future might be. In that sense, we have no future. Not in the sense that our grandparents had a future, or thought they did. Fully imagined cultural futures were the luxury of another day, one in which ‘now’ was of some greater duration. For us, of course, things can change so abruptly, so violently, so profoundly, that futures like our grandparents’ have insufficient ‘now’ to stand on. We have no future because our present is too volatile… We have only risk management. The spinning of the given moment’s scenarios. Pattern recognition.”

That’s the fictional Hubertus Bigend sounding off in Chapter Six of William Gibson’s fantastic 2003 novel. Gibson is best known as an author of science fiction set in the not-too-distant future. As that passage suggests, though, he is not exclusively interested in looking forward. In Gibson’s renderings, future and past might exist in some natural sense, but our ideas of them can only exist in the present, which is inherently and perpetually liminal.

In Chapter Six, the conversation continues:

“Do we have a past, then?” Stonestreet asks.

“History is a best-guess narrative about what happened and when,” Bigend says, his eyes narrowing. “Who did what to whom. With what. Who won. Who lost. Who mutated. Who became extinct.”

“The future is there,” Cayce hears herself say, “looking back at us. Trying to make sense of the fiction we will have become. And from where they are, the past behind us will look nothing at all like the past we imagine behind us now.”

“You sound oracular.” White teeth.

“I only know that the one constant in history is change: The past changes. Our version of the past will interest the future to about the extent we’re interested in whatever past the Victorians believed in. It simply won’t seem very relevant.”

I read that passage and I picture a timeline flipped vertical and frayed at both ends. Instead of a flow of time from left to right, we have only the floating point of the present, with ideas about the future and past radiating outwards and nothing to which we can moor any of it.

In a recent interview with David Wallace-Wells for The Paris Review, Gibson revisits this theme when asked about science fiction as futurism.

Of course, all fiction is speculative, and all history, too—endlessly subject to revision. Particularly given all of the emerging technology today, in a hundred years the long span of human history will look fabulously different from the version we have now. If things go on the way they’re going, and technology keeps emerging, we’ll eventually have a near-total sorting of humanity’s attic.

In my lifetime I’ve been able to watch completely different narratives of history emerge. The history now of what World War II was about and how it actually took place is radically different from the history I was taught in elementary school. If you read the Victorians writing about themselves, they’re describing something that never existed. The Victorians didn’t think of themselves as sexually repressed, and they didn’t think of themselves as racist. They didn’t think of themselves as colonialists. They thought of themselves as the crown of creation.

Of course, we might be Victorians, too.

Of course we are. How could we not be?

That idea generally fascinates me, but it also specifically interests me as a social scientist. As discussed in a recent post, causal inference in the social sciences depends on counterfactual reasoning—that is, imagining versions of the past and future that we did not see.

Gibson’s rendering of time reminds us that this is even harder than we like to pretend. It’s not just that we can’t see the alternative histories we would need to compare to our lived history in order to establish causality with any confidence. We can’t even see that lived history clearly. The history we think we see is a pattern that is inexorably constructed from materials available in the present. Our constant disdain for most past versions of those renderings should give us additional pause when attempting to draw inferences from current ones.

The Ethics of Political Science in Practice

As citizens and as engaged intellectuals, we all have the right—indeed, an obligation—to make moral judgments and act based on those convictions. As political scientists, however, we have a unique set of potential contributions and constraints. Political scientists do not typically have anything of distinctive value to add to a chorus of moral condemnation or declarations of normative solidarity. What we do have, hopefully, is the methodological training, empirical knowledge, and comparative insight to offer informed assessments about alternative courses of action on contentious issues. Our primary ethical commitment as political scientists, therefore, must be to get the theory and the empirical evidence right, and to clearly communicate those findings to relevant audiences—however unpalatable or inconclusive they might be.

That’s a manifesto of sorts, nested in a great post by Marc Lynch at the Monkey Cage. Marc’s post focuses on analysis of the Middle East, but everything he writes generalizes to the whole discipline.

I’ve written a couple of posts on this theme, too:

  • “This Is Not a Drill,” on the challenges of doing what Marc proposes in the midst of fast-moving and politically charged events with weighty consequences; and
  • “Advocascience,” on the ways that researchers’ political and moral commitments shape our analyses, sometimes but not always intentionally.

Putting all of those pieces together, I’d say that I wholeheartedly agree with Marc in principle, but I also believe this is extremely difficult to do in practice. We can—and, I think, should—aspire to this posture, but we can never quite achieve it.

That applies to forecasting, too, by the way. Coincidentally, I saw this great bit this morning in the Letter from the Editors for a new special issue of The Appendix, on “futures of the past”:

Prediction is a political act. Imagined futures can be powerful tools for social change, but they can also reproduce the injustices of the present.

Concern about this possibility played a role in my decision to leave my old job, helping to produce forecasts of political instability around the world for private consumption by the U.S. government. It is also part of what attracts me to my current work on a public early-warning system for mass atrocities. By making the same forecasts available to all comers, I hope that we can mitigate that downside risk in an area where the immorality of the acts being considered is unambiguous.

As a social scientist, though, I also understand that we’ll never know for sure what good or ill effects our individual and collective efforts had. We won’t know because we can’t observe the “control” worlds we would need to confidently establish cause and effect, and we won’t know because the world we seek to understand keeps changing, sometimes even in response to our own actions. This is the paradox at the core of applied, empirical social science, and it is inescapable.

Scientific Updating as a Social Process

Cognitive theories predict that even experts cope with the complexities and ambiguities of world politics by resorting to theory-driven heuristics that allow them: (a) to make confident counterfactual inferences about what would have happened had history gone down a different path (plausible pasts); (b) to generate predictions about what might yet happen (probable futures); (c) to defend both counterfactual beliefs and conditional forecasts from potentially disconfirming data. An interrelated series of studies test these predictions by assessing correlations between ideological world view and beliefs about counterfactual histories (Studies 1 and 2), experimentally manipulating the results of hypothetical archival discoveries bearing on those counterfactual beliefs (Studies 3-5), and by exploring experts’ reactions to the confirmation or disconfirmation of conditional forecasts (Studies 6-12). The results revealed that experts neutralize dissonant data and preserve confidence in their prior assessments by resorting to a complex battery of belief-system defenses that, epistemologically defensible or not, make learning from history a slow process and defections from theoretical camps a rarity.

That’s the abstract to a 1999 AJPS paper by Phil Tetlock (emphasis added; ungated PDF here). Or, as Phil writes in the body of the paper,

The three sets of studies underscore how easy it is even for sophisticated professionals to slip into borderline tautological patterns of thinking about complex path-dependent systems that unfold once and only once. The risk of circularity is particularly pronounced when we examine reasoning about ideologically charged historical counterfactuals.

As noted in a recent post, ongoing debates over who “lost” Iraq or how direct U.S. military intervention in Syria might or might not have prevented wider war in the Middle East are current cases in point.

This morning, though, I’m intrigued by Phil’s point about the rarity of defections from theoretical camps tied to wider belief systems. If that’s right—and I have no reason to doubt that it is—then we should not put much faith in any one expert’s ability to update his or her scientific understanding in response to new information. In other words, we shouldn’t expect science to happen at the level of the individual. Instead, we should look wherever possible at the distribution of beliefs across a community of experts and hope that social cognition is more powerful than our individual minds are.

This evidence should also affect our thinking about how scientific change occurs. In The Structure of Scientific Revolutions, Thomas Kuhn (p. 19 in the 2nd Edition) described scientific revolutions as a process that happens at both the individual and social levels:

When, in the development of a natural science, an individual or group first produces a synthesis able to attract most of the next generation’s practitioners, the older schools gradually disappear. In part their disappearance is caused by their members’ conversion to the new paradigm. But there are always some men who cling to one or another of the older views, and they are simply read out of the profession, which thereafter ignores their work.

If I’m reading Tetlock’s paper right, though, then this description is only partly correct. In reality, scientists who are personally and professionally (or cognitively and emotionally?) invested in existing theories probably don’t convert to new ones very often. Instead, the recruitment mechanism Kuhn also mentions is probably the more relevant one. If we could reliably measure it, the churn rate associated with specific belief clusters would be a fascinating thing to watch.

Why Skeptics Make Bad Pundits

First Rule of Punditry: I know everything; nothing is complicated.

First Rule of Skepticism: I know nothing; everything is complicated.

Me on BuzzFeed on Venezuela

Journalist Rosie Gray has a story up at BuzzFeed on the wave of protests now underway in Venezuela and the backdrop of economic crisis and political polarization against which they are unfolding. I found the piece interesting and informative, but I think it also illustrates how hard it is for journalists—and, for that matter, social scientists—to avoid openly sympathizing with one “side” or another in their reporting on conflicts like Venezuela’s and thereby leading readers to do the same.

Analytically, Gray’s piece attempts to explain why this wave of protests is occurring now and why anti-government activists have largely failed so far, in spite of the country’s severe economic problems, to draw large numbers of government supporters to their cause. Most of the sources quoted in Gray’s story are opposition activists, and they are generally described sympathetically. The first opposition activist we encounter, Carlos Vargas, tells us that he and other student protesters are “making an effort to reach out to the poor.” The next, a community organizer, admits that the opposition hasn’t made serious efforts to organize in his neighborhood, but we are then reminded that censorship and pro-government paramilitaries make it very hard for them to do so.

Gray also includes portions of an interview with two Chavistas, members of a colectivo in the 23 de Enero neighborhood. This interview and one with a pro-government economist ostensibly provide the “balance” in the piece, but their remarks and other descriptions of activity sympathetic to the government are framed in a way that evokes a sense of false consciousness. Hugo Chavez is dead, but he remains popular because of a “personality cult” that “still holds a grip on many Venezuelans, especially the poor.” Gray reports the government’s line that anti-government protesters “are a group of revanchist elites out of touch with regular Venezuelans” and writes that this line has “some grain of truth.” She immediately follows that sentence, however, with a description of protesters’ efforts to recruit poorer Venezuelans who, we are told by two of Gray’s sources, would participate more if they weren’t being menaced by pro-government militias. Gray tells us that the Chavistas she interviewed in 23 de Enero have a picture of Syrian president Bashar al-Assad on their wall, and that they blame their country’s unrest on “right-wing elements” in the U.S. and some of its allies. As for where ideas like that one come from, we are told that

Across town, the Chavista intelligentsia is hard at work coming up with theories for the foot soldiers to buy into.

To me, all of those phrases and details convey a belief that Chavistas aren’t joining the protesters because they are being duped. As a social scientist, I find that hypothesis unconvincing. The model of political behavior it implies echoes some instrumentalist theories of ethnic conflict, which posit that ethnic groups fight each other because self-interested leaders goad them into doing so. Those leaders’ efforts are certainly relevant to the story, but simple versions of the theory leave unanswered the question of why anyone listens. To try to understand that, we need more sympathetic accounts of the beliefs and choices made by those ostensible followers. Gray’s piece suggests one answer to that question when she recounts protesters’ claims that Chavista militias are intimidating them into obedience, but that also seems like a partial explanation at best. After all, some people are protesting in spite of that intimidation, so why not others?

This slant matters because it affects our judgments about what is possible and what is right, and those judgments affect the actions we and our governments take. Objectivity is an impossible ideal, not just for reporters but for anyone. Still, I think political reporters should aspire to afford the same sympathy to all of their sources and the causes they espouse, and then trust their readers to draw their own conclusions. Measured against that standard, I think Gray’s Venezuela piece—and, frankly, much of the reporting we get on factional disputes and popular protest in all parts of the world—fell a bit short.

A Nice Pat on the Back

I had to leave the annual convention of the International Studies Association yesterday, before it wrapped up, but not before receiving a nice pat on the back. In the second annual Online Achievement in International Studies (OAIS) awards—a.k.a. the Duckies—Dart-Throwing Chimp was recognized as Best Blog (Individual).

It seems fitting to use this platform to thank the Duck of Minerva crew for organizing the OAIS awards and SAGE Publications for helping to make them happen. Most of all, though, I want to say thanks to all of you for reading and conversing with me. I hope I can keep it interesting.

This Is Not a Drill

Times like these, part of me wishes I studied microbes or aeronautics or modern American fiction.

One of the most significant crises in international relations of the past 20 years is unfolding right now in Ukraine, but it is impossible to talk or write publicly about it without engaging in a political act that can have significant personal and even public consequences. There is no political science in real time, only politics. When analysis overlaps with practice, the former becomes part of the latter. Sometimes the stakes are high, and I’ve found recently that more people are listening than I had anticipated when I started blogging about current events, among other things.

Or, more accurately, I just hadn’t thought that part through. I think I started blogging because I had time to do it, I enjoyed and benefited from the mental exercise, and I hoped it would advance my career. Best I can recall, I did not think much about how it might eventually entangle me in public conversations with significant consequences, and how I would handle those situations if and when they arose.

In case it isn’t obvious, my last post, on Ukraine, is the catalyst for this bout of introspection. That post had ramifications in two spheres.

The first was personal. Shortly after I published it, an acquaintance whose opinion I respect called me out for stating so unequivocally that Yanukovych’s ouster was “just.” His prodding forced me to think more carefully about the issue, and the more I did, the less confident I was in the clarity of that judgment. In retrospect, I think that statement had as much to do with not wanting to be hated by people whose opinions I value as it did with any serious moral reasoning. I knew that some people whose opinions I value would read my calling the ouster a “coup” as a betrayal, and I felt compelled to try to soften that blow by saying that the act was good anyway. That moral argument is there for the making, but I didn’t make it in my post, and to be honest I didn’t even make it clearly in my own head before asserting it.

The other sphere is the political one. I still don’t believe that my opinions carry more than a feather’s weight in the public conversation, if that. Still, this post has forced me to think more carefully about the possibility that it could, and that I won’t control when that happens and what the consequences will be.

Before I wrote the post, I queried two scholars who have studied Ukrainian politics and law and asked them whether or not Yanukovych’s removal from office had followed constitutionally prescribed procedures. Both of them replied, but both also asked me not to make their views public. As one explained in an email I received after I had already published my post, the risk wasn’t in being wrong. Instead, the risk was that publicizing a certain interpretation might abet Russia’s ongoing actions in the region, and that potential political effect was more important to this person than the analytical issues my question covered. Of course, it was impossible for me to read that email and not feel some regret about what I had already written.

One irony here is that lots of political scientists talk about wanting their work to be “policy relevant,” to have policymakers turn to them for understanding on significant issues, but I think many of the scholars who say that don’t fully appreciate this point about the inseparability of analysis and politics (just as I didn’t). Those policymakers aren’t technocratic robots, crunching inputs through smart algorithms in faithful pursuit of the public interest. When you try to inform their decisions in real time, you step out of the realm of intellectual puzzle-solving and become part of a process of power-wielding. I suppose that’s the point for some, but I’m finding it more unnerving than I’d expected.

If you work in this field and haven’t already done so, I urge you to read Mark Lilla’s The Reckless Mind: Intellectuals in Politics for much deeper consideration of this fraught terrain. I picked up Lilla’s book again this morning and found this passage (p. 211) particularly relevant:

Some tyrannical souls become rulers of cities and nations, and when they do entire peoples are subjugated by the rulers’ erotic madness. But such tyrants are rare and their grip on power is weak. There is another, more common class of tyrannical souls that Socrates considers, those who enter public life not as rulers, but as teachers, orators, poets—what today we would call intellectuals. These men can be dangerous, for they are ‘sunburned’ by ideas. Like Dionysius, this kind of intellectual is passionate about the life of the mind, but unlike the philosopher he cannot master that passion; he dives headlong into political discussion, writing books, giving speeches, offering advice in a frenzy of activity that barely masks his incompetence or irresponsibility. Such men consider themselves to be independent minds, when the truth is that they are a herd driven by their inner demons and thirsty for the approval of a fickle public.

In the 2010s, a lot of oration happens in cyberspace, and a public intellectual is more likely to blog than to give a speech. In other words, scholars who blog about politics in real time must recognize that we are “offering advice,” and must therefore guard against the risk of becoming the “sunburned” intellectuals whose urge to speak drowns out our “incompetence or irresponsibility.”

But what does that mean in practice? Lilla isn’t trying to write a self-help guide for bloggers, but he does go on to say this (p. 212):

The philosopher-king is an ‘ideal,’ not in the modern sense of a legitimate object of thought demanding realization, but what Socrates calls a ‘dream’ that serves to remind us how unlikely it is that the philosophical life and the demands of politics can ever be made to coincide. Reforming a tyranny may not be within our power, but the exercise of intellectual self-control always is. That is why the first responsibility of a philosopher who finds himself surrounded by political and intellectual corruption may be to withdraw.

I do not consider myself a philosopher, but I take his point nonetheless.

How’d Those Football Forecasts Turn Out?

Yes, it’s February, and yes, the Winter Olympics are on, but it’s a cold Sunday so I’ve got football on the brain. Here’s where that led today:

Last August, I used a crowdsourcing technique called a wiki survey to generate a set of preseason predictions on who would win Super Bowl 48 (see here). I did this fun project to get a better feel for how wiki surveys work so I could start applying them to more serious things, but I’m also a pro football fan who wanted to know what the season portended.

Now that Super Bowl 48’s in the books, I thought I would see how those forecasts fared. One way to do that is to take the question and results at face value and see if the crowd picked the right winner. The short answer is “no,” but it didn’t miss by a lot. The dot plot below shows teams in descending order by their final score on the preseason survey. My crowd picked New England to win, but Seattle was second by just a whisker, and the four teams that made the conference championship games occupied the top four slots.

nflpostmortem.dotplot

So the survey did great, right? Well, maybe not if you look a little further down the list. The Atlanta Falcons, who finished the season 4-12, ranked fifth in the wiki survey, and the Houston Texans—widely regarded as the worst team in the league this year—also landed in the top 10. Meanwhile, the 12-4 Carolina Panthers and 11-5 KC Chiefs got stuck in the basement. Poke around a bit more, and I’m sure you can find a few other chuckles.

Still, the results didn’t look crazy, and I was intrigued enough to want to push it further. To get a fuller picture of how well this survey worked as a forecasting tool, I decided to treat the results as power rankings and compare them across the board to postseason rankings. In other words, instead of treating this as a classification problem (find the Super Bowl winner), I thought I’d treat it as a calibration problem, where the latent variable I was trying to observe before and after is relative team strength.

That turned out to be surprisingly difficult—not because it’s hard to compare preseason and postseason scores, but because it’s hard to measure team strength, even after the season’s over. I asked Trey Causey and Sean J. Taylor, a couple of professional acquaintances who know football analytics, to point me toward an off-the-shelf “ground truth,” and neither one could. Lots of people publish ordered lists, but those lists don’t give us any information about the distance between rungs on the ladder, a critical piece of any calibration question. (Sean later produced and emailed me a set of postseason Bradley-Terry rankings that look excellent, but I’m going to leave the presentation of that work to him.)

About ready to give up on the task, it occurred to me that I could use the same instrument, a wiki survey, to convert those ordered lists into a set of scores that would meet my criteria. Instead of pinging the crowd, I would put myself in the shoes of those lists’ authors for a while, using their rankings to guide my answers to the pairwise comparisons the wiki survey requires. Basically, I would kluge my way to a set of rankings that amalgamated the postseason judgments of several supposed experts. The results would have the added advantage of being on the same scale as my preseason assessments, so the two series could be directly compared.

To get started, I Googled “nfl postseason power rankings” and found four lists that showed up high in the search results and had been updated since the Super Bowl (here, here, here, and here). Then I set up a wiki survey and started voting as List Author #1. My initial thought was to give each list 100 votes, but when I got to 100, the results of the survey in progress didn’t look as much like the original list as I’d expected. Things were a little better at 200 but still not terrific. In the end, I decided to give each survey 320 votes, or the equivalent of 10 votes for each item (team) on the list. When I got to 320 with List 1, the survey results were nearly identical to the original, so I declared victory and stuck with that strategy. That meant 1,280 votes in all, with equal weight for each of the four list-makers.
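For readers who want to try something similar, that list-to-votes procedure can be sketched in a few lines of Python. The function below takes an ordered list (best team first) and generates random pairwise matchups, always recording a win for the higher-ranked team—a stand-in for a human voter answering strictly according to the published list. The team names and vote counts here are illustrative, not my actual survey data.

```python
import random

def simulate_votes(ranked_list, n_votes, seed=42):
    """Generate pairwise votes from an ordered list (best team first).

    Each vote pits two randomly chosen teams against each other and
    records a win for whichever one sits higher on the list, mimicking
    a voter who answers strictly according to that list.
    """
    rng = random.Random(seed)
    rank = {team: i for i, team in enumerate(ranked_list)}
    votes = []
    for _ in range(n_votes):
        a, b = rng.sample(ranked_list, 2)
        winner, loser = (a, b) if rank[a] < rank[b] else (b, a)
        votes.append((winner, loser))
    return votes

# Illustrative four-team list; the real exercise used all 32 NFL teams
# and 320 votes per published ranking.
ranking = ["Seahawks", "Broncos", "Patriots", "49ers"]
votes = simulate_votes(ranking, n_votes=40)
print(len(votes))
```

Feeding votes like these into a wiki-survey scorer is what puts every list-maker's judgments on a common 0-100 scale.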

The plot below compares my preseason wiki survey’s ratings with the results of this Mechanical Turk-style amalgamation of postseason rankings. Teams in blue scored higher than the preseason survey anticipated (i.e., over-performed), while teams in red scored lower (i.e., under-performed).

nflpostmortemplot

Looking at the data this way, it’s even clearer that the preseason survey did well at the extremes and less well in the messy middle. The only stinkers the survey badly overlooked were Houston and Atlanta, and I think it’s fair to say that a lot of people were surprised by how dismal their seasons were. Ditto the Washington [bleep]s and Minnesota Vikings, albeit to a lesser extent. On the flip side, Carolina stands out as a big miss, and KC, Philly, Arizona, and the Colts can also thumb their noses at me and my crowd. Statistically minded readers might want to know that the root mean squared error (RMSE) here is about 27, where the observations are on a 0-100 scale. That 27 is better than random guessing, but it’s certainly not stellar.
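For anyone checking my arithmetic, the RMSE is just the usual formula applied to the two sets of 0-100 scores. Here's a minimal sketch in Python, with made-up scores standing in for the actual survey results:

```python
import math

def rmse(predicted, observed):
    """Root mean squared error between two equal-length score lists."""
    assert len(predicted) == len(observed)
    return math.sqrt(
        sum((p - o) ** 2 for p, o in zip(predicted, observed)) / len(predicted)
    )

# Hypothetical preseason vs. postseason scores on a 0-100 scale.
pre = [88.0, 85.0, 60.0, 22.0]
post = [70.0, 95.0, 30.0, 10.0]
print(round(rmse(pre, post), 1))
```

Because the scores live on a 0-100 scale, an RMSE of 27 means the typical team's preseason score missed its postseason score by roughly a quarter of the scale's full range.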

A single season doesn’t offer a robust test of a forecasting technique. Still, as a proof of concept, I think this exercise was a success. My survey only drew about 1,800 votes from a few hundred respondents whom I recruited casually through my blog and Twitter feed, which focuses on international affairs and features very little sports talk. When that crowd was voting, the only information they really had was the previous season’s performance and whatever they knew about off-season injuries and personnel changes. Under the circumstances, I’d say a RMSE of 27 ain’t terrible.

It’d be fun to try this again in August 2014 with a bigger crowd and see how that turns out. Before and during the season, it would also be neat to routinely rerun that Mechanical Turk exercise to produce up-to-date “wisdom of the (expert) crowd” power rankings and see if they can help improve predictions about the coming week’s games. Better yet, we could write some code to automate the ingestion of those lists, simulate their pairwise voting, and apply All Our Ideas’ hierarchical model to the output. In theory, this approach could scale to incorporate as many published lists as we can find, culling the purported wisdom of our hand-selected crowd without the hassle of all that recruiting and voting.

Unfortunately, that crystal palace was a bit too much for me to build on this dim and chilly Sunday. And now, back to our regularly scheduled programming…

PS If you’d like to tinker with the data, you can find it here.

A Tale from the Replication Crypt

I got an email this morning from a colleague asking for the replication files for a paper I published in 2005 (PDF). Sheepishly, I had to admit that I didn’t have them.

Data-sharing and replication weren’t the professional norm in political science 10 years ago. Best I can recall, it never even occurred to me to put the files where future me could easily find them. I did the research, submitted the paper, and moved on to the next project. During peer review, no one asked to see the data and .do files I used, and the email I got today was, I think, the first time anyone had asked for them.

I’ve probably changed PCs three or four times in the intervening decade and haven’t kept all of the retired machines. I spent some time this afternoon looking through a DVD with files from one of those out-to-pasture PCs, but to no avail. Now, I’m staring at a frozen blue Microsoft ScanDisk screen on a laptop running Windows 98 and realizing that this path is probably a dead end, too. That was the last of my options.

There’s a simple lesson here: if you’re going to do something you want to construe as science, you need to store your data—quantitative, qualitative, audio, imagery, whatever—where you can easily find and share it in perpetuity.

That’s a helluva lot easier now than it was 10 years ago, thanks to things like GitHub, Google Drive, Dataverse, and various other backup and cloud-storage services. It still doesn’t happen by itself, though. You still have to choose to do it. Today, I’m relearning why that’s important—for science, of course, but also for my professional reputation.

Most Popular Posts of 2013

Between my day job, a data-intensive side project with Erica Chenoweth, some family stuff (see my wife’s three-week-old blog), and the impending holidays, I haven’t had the time or brain power to write much over the past couple of weeks. Oddly enough, I think the Chenoweth project has been the most taxing. My role involves aggregating and analyzing a bunch of ragged data sets, and I find that the mental processing power required for that work doesn’t leave much room for abstract or creative thinking.

In lieu of new content, I thought I would call out what the site stats tell me were my most popular posts from the year. Here’s the top 10, with some manual concatenation and the home page omitted.

10. What Causes Social Unrest? Apparently, Everything

9. Egypt’s Mass Killing in Historical Perspective

8. Big Data Won’t Kill the Theory Star

7. Some Thoughts on the Causes of Mass Protest

6. Assessing Coup Risk in 2012

5. Why Is Academic Writing So Bad? A Brief Response to Stephen Walt

4. A Few Suggestions for Social Scientists New to Twitter

3. The Future of Political Science Just Showed Up [on GDELT]

2. Yes, That’s a Coup in Egypt

1. Coup Forecasts for 2013 (and a map of them)

So…coups, forecasting, social unrest, Egypt, and advice for academics look like the big themes. That’s funny, because I would self-identify as an expert of sorts on just three of those—coups, social unrest, and forecasting—and one of my primary research interests, democratization, is nowhere to be found. And advice for academics? Heck, I’m not even one myself.

As it probably goes for every blogger, a few of the posts I most enjoyed writing landed nowhere near the top 10. But, hey, it’s a blog, so I get to call them out again. In no particular order:

* On the interplay between global, regional, and local forces in politics (here and here)

* On the quixotic pursuit of templates for democratic transitions

* On how social science is like microbiology

* And, appropriately enough, on blogs as catalysts for intellectual work

I’m hoping my brain will switch back into writing mode soon, but in case it doesn’t, let me just say thank you very much for reading and engaging with me for another year. Blogging continues to be a pleasure on balance, and as long as I can keep saying that, I’ll keep doing it.
