A Few Suggestions for Social Scientists New to Twitter

Earlier today, one scholar whose work I greatly admire asked another scholar whose work I greatly admire for advice on how to get started on Twitter. I liked Dan’s response, but I thought I’d take Christian’s query as an open invitation to share a few suggestions of my own. So:

Replace the egg with a picture of you. Seriously, don’t even start following people until you’ve done this. It’s not vain; it’s just letting people know that there’s (probably) a real human on the other end, and letting us know something about how you plan to present yourself in this context. Some people can get away with using cartoons or pictures of their pets or kids, but most of us can’t. So, unless you’re trying to make a very specific statement by doing something different, you probably shouldn’t try.

Decide why you’re using Twitter. If your main goal is to use Twitter as a news feed or to follow other people’s work, then it’s a really easy tool to use. Just poke around until you find people and organizations that routinely cover the issues that interest you, and follow them. If, however, your goal is to develop a professional audience, then you need to put more thought into what you tweet and retweet, and the rest of my suggestions might be useful.

Pick your niche(s). There are a lot of social scientists on Twitter, and many of them are picky about whom they follow. To make it worth people’s while to add you to their feed, pick one or a few of your research interests and focus almost all of your tweets and retweets on them. For example, I’ve tried to limit my tweets to the topics I blog about: democratization, coups, state collapse, forecasting, and a bit of international relations. When I was new to Twitter, I focused especially on democratization and forecasting because those weren’t topics other people were tweeting much about at the time. I think that differentiation made it easier for people to attach an identity to my avatar, and to understand what they would get by following me that they weren’t already getting from the 500 other accounts in their feeds.

Keep the tweet volume low, at least at the start. For a long time, I tried to limit myself to two or three tweets per Twitter session, usually once or twice per day. That made me think carefully about what I tweeted, (hopefully) keeping the quality higher and preventing me from swamping people’s feeds, a big turnoff for many.

Don’t just share the news; augment it. If you’re tweeting a news story or journal article or something, use a short quote or comment that crystallizes the story or tells us something about why you think it’s worth reading. In other words, try to add value. I usually lead with the title, then insert the link, then hang the quote or comment at the end, like this:

But, of course, there are lots of ways to do this. You can also drop the title entirely, like this recent one from Joshua Kucera that got me laughing:

Keep it professional. If you’re thinking of Twitter as an extension of your work, don’t tweet about personal stuff. This is especially important when you’re new to the medium. The occasional reference to your life outside the office can help people feel more connected to you, but please err on the side of reticence. I have chosen not to follow, or have unfollowed, many people because the interesting stuff in their feed was overwhelmed by the personal and trivial (and sometimes just downright gross). At some point, all that jetsam gets in the way of the information I’m actually looking for, so I choose to cut it off.

Related to the previous suggestion, be polite. In theory, this should go without saying, but, hey, this is the Internet. If you’re using Twitter for professional purposes, I think it makes sense to use the same language and demeanor you’d use in the office or at a professional conference. That can include humor and the occasional personal tidbit you’d share in a hallway conversation, but probably not the bar talk, and definitely not the post-conference conversations with your confidantes. It most definitely does not include nastiness or pettiness.

Be generous. Don’t retweet something under your own handle just to troll for RTs. If you want to share something someone else already shared, just pass along his or her tweet. The exception to this rule is when you’re going to add your own comment. Then just be sure to acknowledge the source with a via or h/t (hat tip). If a bunch of people already shared something so you’re not sure whom to credit, the answer is, Don’t share it again.

If you modify someone’s tweet at all before passing it along, use MT. This is a Twitter pet peeve of mine. RT (retweet) should only be used when what follows is a verbatim replication of the original. If you change anything—abbreviate, drop a comma, whatever—use MT (modified tweet) instead.

Finally, know that it’s addictive. I don’t mean fun-and-time-consuming addictive; I mean addictive addictive, like nicotine and booze. Before you dive in, it’s worth considering how that addiction might negatively affect your life and how you plan to deal with it. Just because lots of people do it doesn’t mean it’s good for you. The time you spend on Twitter is time you could have spent doing something else. If that something else is more important and you’re prone to addiction, be careful.

Will Chuck Hagel Be the Next SecDef? A Case Study in How (Not) to Forecast

Yesterday, Foreign Policy defense blogger Tom Ricks posted a pessimistic forecast on the fate of Sen. Chuck Hagel’s nomination to succeed Leon Panetta as U.S. Secretary of Defense:

Will Hagel withdraw? I’d say 50-50…But declining by the day. Bottom line: Every business day that the Senate Armed Services Committee doesn’t vote to send the nomination to the full Senate, I think the likelihood of Hagel becoming defense secretary declines by about 2 percent.

Ricks’ gloomy outlook was quickly derided by a few of the national-security pros I follow on Twitter. Political scientist Dan Drezner wrote, “I’ll bet anyone who thinks he’ll withdraw,” prompting Slate reporter Dave Weigel to quip, “Dan, Dan, people withdraw when they have the votes necessary to confirm ALL THE TIME.”

I don’t normally blog about (or even pay much attention to, really) the inside baseball of U.S. politics, but I suspect that Drezner and co. are right on this one, and I thought an explication of the reasons why would make a nice case study in how (not) to forecast.

So: let’s put ourselves in Ricks’ shoes and decide we want to assess the odds that Hagel is going to become the next Secretary of Defense. Where do we start? Experienced forecasters often start with the base rate—that is, the observed frequency of the event of interest over many previous trials. If, for example, we want to estimate how likely it is that LeBron James will sink his next free throw, the single most useful piece of information is usually going to be his career or season free-throw percentage (as it happens, both about 75%).

Okay, so what do we know about the outcomes of the past nominations for Secretary of Defense? Since the establishment of the position soon after World War II, it looks like only one of 24 official nominees has been rejected by the Senate (John Tower), and none has withdrawn. (See here for the list of approved candidates and here for the rejections and withdrawals.) If we think there’s something unique about that position, we have a base rate of about 96 percent. If we think there isn’t anything particularly unique about the approval process for Secretary of Defense, we might decide to draw on the record of nominations to all Cabinet posts. At that broader level, the approval rate is about the same, perhaps even a bit higher. As Matt Dabrowski noted when I asked about this on Twitter, “Officially, only 10 actual nominees for any Cabinet office have failed since Reconstruction,” and that’s out of several hundred.
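That base-rate arithmetic is simple enough to check in a couple of lines. Here’s a minimal Python sketch, using only the counts cited in the paragraph above (24 official nominees, one rejection, no withdrawals):

```python
# Base rate for SecDef nominations that reached the Senate, per the
# counts above: 24 official nominees, 1 rejection (John Tower), 0 withdrawals.
nominees, failures = 24, 1
base_rate = (nominees - failures) / nominees
print(f"{base_rate:.0%}")  # prints "96%"
```

Nothing fancy, but pinning the number down this way makes the next step—updating it with new evidence—much easier to do honestly.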

Given a base rate in the high 90-percent range, the safe money for any Cabinet nomination that reaches the Senate-hearings stage is going to be on approval. Surely Ricks has some sense of this history, so why is he predicting that Hagel’s nomination will probably fail? Apparently, it was Hagel’s performance at his Senate hearing that swayed Ricks.

That hearing last week didn’t reflect well on the U.S. Senate. But [Hagel] didn’t do well in it, either. He didn’t appear that interested in the job.

Now, let’s give Ricks the benefit of the doubt here and allow a) that Hagel didn’t do so hot and b) that a weak performance before the Armed Services Committee would damage his candidacy. The question then becomes, “How much damage?”

One way to answer this is with Bayes’ theorem. Taking the base approval rate as our prior and the hearing performance as an impetus for updating, Bayes’ theorem requires us to estimate two things: 1) how likely are we to see a poor Senate performance when the nominee is destined to fail and 2) how likely are we to see a poor performance when the nominee is bound for approval?

I don’t have data on the relative frequency of these pathways, but let’s give Ricks the benefit of the doubt and assume the nominee’s performance in his or her Senate hearing is very revealing. For the sake of argument, I’ll assume that only one of every five nominees bound for success does poorly in confirmation hearings, but 19 of 20 bound for failure do. (If you’re wondering why not 20 of 20, it’s easy to imagine some cases arising where some scandal emerges after the confirmation hearings that derails the nominee.) I think that 1-in-5 figure for successful nominees is probably too low, but again, I’m trying to tip the scales in Ricks’ favor here for illustrative purposes.

Okay, so what happens to our predicted probability of approval when we plug those values into Bayes’ theorem? It tumbles from 96 percent to a lowly…83 percent. Even if we accept Ricks’ judgment that Hagel flubbed his hearing and assume that a flubbed hearing tells us a lot about a nominee’s prospects, Bayes’ theorem tells us that the odds still overwhelmingly favor Hagel’s accession to the post. In what I consider to be a more realistic world where “flubbing” is in the eye of the beholder and confirmation hearings are essentially pro forma, Hagel’s mediocre showing would have little impact on a careful estimate of his odds, which would still hover around 90 percent. Either way, Ricks’ “50 – 2(business days)” estimate is looking pretty wild.
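For readers who want to check that arithmetic, here’s a minimal Python sketch of the update. The 0.96 prior comes from the base rate discussed above, and the 1-in-5 and 19-in-20 likelihoods are the illustrative assumptions from the previous paragraph, not estimates from data:

```python
def posterior_approval(prior, p_poor_given_success, p_poor_given_failure):
    """P(approval | poor hearing) via Bayes' theorem.

    prior: base rate of approval (here, 0.96)
    p_poor_given_success: P(poor hearing | nominee approved), assumed 1 in 5
    p_poor_given_failure: P(poor hearing | nominee fails), assumed 19 in 20
    """
    numerator = prior * p_poor_given_success
    denominator = numerator + (1 - prior) * p_poor_given_failure
    return numerator / denominator

p = posterior_approval(prior=0.96, p_poor_given_success=0.20,
                       p_poor_given_failure=0.95)
print(round(p, 2))  # prints 0.83
```

Swap in less generous assumptions—say, that half of eventually successful nominees have shaky hearings—and the posterior barely moves off the prior at all, which is the “pro forma hearings” world described above.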

So where is Ricks coming from? I have never met the man and haven’t spoken to him about this blog post, so I can only guess, but a comment he made about John Tower suggests a common culprit: availability bias. As wiseGEEK explains, “Availability bias is a human cognitive bias that causes us to overestimate probabilities of events associated with memorable or vivid occurrences.” In explaining why he thinks Hagel will probably withdraw, Ricks writes:

SecDef nominees have blown up on the launch pad before: Remember John Tower (picked by the first President Bush) and Bobby Inman (picked by President Clinton to replace Les Aspin)?

It sure looks like a couple of recent and salient failures have stuck in Ricks’ mind, tempting him to ignore the overwhelming number of successful nominations. (Note: Inman didn’t factor into the base rate calculations above because he withdrew before Senate confirmation hearings were held, as have a number of other Cabinet nominees over the years, and I’m assuming that nominations which progress to that stage are different from ones that don’t.)

Of course, the base rate is often but not always the most useful piece of information. In situations where we have specific and compelling evidence about the case in question—say, word from a trusted source that Hagel actually plans to withdraw—we’ll want to discount the base rate and weight that specific evidence more heavily. The only specifics Ricks offers on Hagel, however, are his personal impressions of Hagel’s performance in front of the Armed Services Committee, which we’ve already addressed above, and a vague sense that “no one much wants him running the Pentagon.” With “specific” evidence as squishy as this, we’re probably better off sticking with the base rate.

UPDATE: On Tuesday, February 26, the Senate voted 58-41 to confirm Hagel’s appointment, and he was sworn into office the next morning.

Rules of Thumb vs. Statistical Models, or the Misconception that Will Not Die

Steve LeVine kicked off the new year on Quartz with a nice post called “14 rules for predicting future geopolitical events.” According to LeVine,

Nations are eccentric. But they also have threads of repeated history through which we can discern what comes next…Many political scientists dismiss the detection of such trends as “deterministic.” Some insist that, unlike in economics and statistics, there is as yet in fact no useful algorithm for foreseeing events—the only tool available to political forecasters is their own intuition. But it is vapid to observe the world, its nations and peoples as an unfathomable mob. History is not a science—but neither is it pure chaos.

If you’re a regular reader of this blog, you know I basically agree. Borrowing Almond and Genco’s classic metaphor, politics isn’t clock-like, but it’s not purely random, either. I also found little to dispute in the 14 rules that followed. For example, LeVine’s Muddle-Along Rule and its corollary, the Precipice Rule, are really just admonitions to take a deep breath when considering the risk of big but rare crises and recognize that, most of the time, the crisis won’t materialize. In statistical terms, that’s analogous to forecasting the base rate, and that’s actually a pretty powerful rule of thumb.

Still, after reading LeVine’s piece, I felt frustrated. As someone who uses statistical models to do the kind of forecasting he seems to be proposing, I couldn’t help but wonder: Why stop halfway? Rules of thumb can be very helpful, but they are often pretty coarse. Okay, so most cases will “tend to muddle along regardless of the trouble, and not collapse,” but can’t we say something more specific about just how unlikely that collapse is? Does it vary across forms of crisis or types of countries? LeVine proposes using history as a resource for gleaning useful patterns but then stops short of doing so in anything but the fuzziest terms.

Equally important, it’s often not clear how to use rules of thumb together, especially when they’re in tension with one another. Some of the rules on LeVine’s list contradict each other, and it’s not clear to me how you’d adjudicate between them when trying to make judgments about specific cases. For example, in addition to the Muddle-Along and Precipice Rules, LeVine gives us the True-Believer Rule:

While people and countries tend toward the middle, events can turn on exceptions operating on the extremes. Hitler’s Germany is an example. Today, Khamenei’s Iran, Afghanistan’s Taliban, Kim’s North Korea and Chavez’s Venezuela punch above their weight in influencing the geopolitical landscape.

Now imagine you’re trying to apply these rules to a case that isn’t already on that short list of exceptions. How can we tell in advance whether it’s a muddler or a true believer? If you’re not sure, what’s the forecast?

I don’t know LeVine personally, so I won’t make any assumptions about his motivations, but I do think the preference for rules of thumb over quantified forecasts exemplified in his Quartz post is pretty common to political forecasting. And I wonder if this aversion to statistics isn’t born, in part, of ignorance of what the use of statistics implies. A couple of days ago, I asked on Twitter: “Why do lay audiences consume weather forecasts w/o asking how they’re made but want peek under hood of stat forecasts of pol crises?” To which Dan Drezner replied, “My (obvious) answer is that people accept meteorology as an actual science, don’t believe the same about political science.”

But here’s the thing: statistics isn’t science, it’s a set of tools for doing science. The decision to use statistics does not presume either regularity in, or certainty about, the object of study. If anything, that decision is a reasoned choice to search for empirical evidence of regularity, an attempt to clarify our uncertainty. The whole point of statistical modeling for forecasting is to take a bunch of conjectures like LeVine’s and run them through a mill that provides clearer answers to the questions that naturally arise when we try to apply those conjectures to specific situations.

Put another way, a statistical forecasting model is really nothing more than a meta-rule of thumb, a flow chart for moving from those initial conjectures to a single best estimate. That the estimate is presented as a number does not automatically imply that its presenter believes it’s any more true or certain than an estimate described in a phrase. It’s just another form of representation for our ideas, and one that happens to be especially useful because it lends itself to the application of some really powerful tools for pattern recognition we’ve finally devised after a few million years of human evolution.
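To make that “meta-rule of thumb” idea concrete, here’s a deliberately toy Python sketch of a logistic model that folds two rule-of-thumb conjectures into a single probability. Everything here—the features, the weights, the intercept—is hypothetical, chosen only to illustrate the flow-chart idea, not fitted to any data:

```python
import math

def crisis_probability(features, weights, intercept):
    """Combine several rule-of-thumb conjectures (binary features) into
    one probability via a logistic link. Weights are illustrative only."""
    score = intercept + sum(w * x for w, x in zip(weights, features))
    return 1 / (1 + math.exp(-score))

# Two hypothetical "rules" encoded as indicators:
#   x1 = country is in a deep economic crisis (Precipice-style concern)
#   x2 = regime is a "true believer" (True-Believer Rule)
# A negative intercept encodes the Muddle-Along Rule: absent other
# signals, crisis is rare.
p = crisis_probability(features=[1, 0], weights=[1.2, 2.0], intercept=-3.0)
```

Notice how the tensions between rules resolve themselves mechanically: a muddler (low intercept) and a true believer (big positive weight) both feed the same score, and the model tells you exactly how much each conjecture moves the final estimate. In a real application the weights would be estimated from historical data rather than asserted.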

Yes, there was a time when statistics was new and notions of science and modernity and quantification all got mashed together in some professional and social circles into an extreme optimism about the predictability of human behaviors. As far as I can tell, though, very few practicing social scientists think that way anymore. And, honestly, I’m just tired of carrying the intellectual baggage those 19th-century hacks left behind.

PS. In a follow-up post, LeVine applies his rules of thumb to produce “six geopolitical predictions for 2013.” On the whole, I think this is a thoughtful exercise, and I only wish more qualitative analysts would be as transparent as Steve is here about the mental models underlying their predictions.
