Forecasting Round-Up No. 5

This is the latest in a (very) occasional series: No. 1, No. 2, No. 3, and No. 4.

1. Pacific Standard ran a long feature by Graeme Wood last week on the death of Intrade CEO John Delaney and the rise and demise of the prediction market he helped build. The bits about Delaney’s personal life weren’t of interest to me, but the piece is very much worth reading for the intellectual history of prediction markets it offers. Wood concludes with an intriguing argument about which I would like to see more evidence:

More traditional modes of prediction have proved astonishingly bad, yet they continue to run our economic and political worlds, often straight into the ground. Bubbles do occur, and we can all point to examples of markets getting blindsided. But if prediction markets are on balance more accurate and unbiased, they should still be an attractive policy tool, rather than a discarded idea tainted with the odor of unseemliness. As [economist Robin] Hanson asks, “Who wouldn’t want a more accurate source?”

Maybe most people. What motivates us to vote, opine, and prognosticate is often not the desire for efficacy or accuracy in worldly affairs—the things that prediction markets deliver—but instead the desire to send signals to each other about who we are. Humans remain intensely tribal… More than we like accuracy, we like listening to talkers on our side, and identifying them as being on our team—the right team.

2. One thing I think Graeme Wood gets wrong in that article is this prediction: “The chances of another Intrade’s coming into existence are slim.” Au contraire, mon frère. Okay, so for regulatory reasons we may never see another market that looks and works just like Intrade did, but there’s still a lot of action in this area, including the recently launched American Civics Exchange (ACE), “the first U.S.-based commercial market for political futures.” ACE uses play money (for now), but the exchange pays out real-cash monthly prizes to the most successful traders. I registered when they launched and have focused my trading so far on the 2014 Congressional elections. I don’t expect to win any of those prizes, but I’m excited that the forum exists at all.

3. Speaking of actual existing prediction markets, have you seen Sean J. Taylor’s brainchild, Creds? This is a free and open market in which the currency is reputational. Anyone can create statements and make trades, but you have to register to participate so that your name, and thus your credibility, is attached to those trades. The site needs more liquidity (hint, hint) to become really useful as a forecasting resource, but some of the features and functions Sean is experimenting with on the site are novel and very cool.

I’ve been trading on Creds for a little while and recently used it to create a couple of statements about the possibility that Saudi Arabia will acquire nuclear weapons in the next five years (here and here). Those statements were inspired by a tweet from Ian Bremmer and an ensuing report on BBC News. It’s nice to have open venues to quantify our collective beliefs about topics like this one, something we simply couldn’t do not so long ago.

4. Why is quantifying our beliefs so important? I recently had an email exchange with a colleague on this issue. After that colleague wrote a piece on a timely situation that amounted to a “maybe, maybe not” prediction, I pushed him to assign some probabilities to his thinking. He pushed back, saying that any number he produced would “simply be a guess,” and that numeric guesses would smack of “false precision.” In the end, I failed to convince him to offer what I would consider a real forecast.

The next time that debate comes up, I will point my antagonist toward Mike Ward & co.’s new Predictive Heuristics blog, and in particular to this passage from Mike’s post on “Prediction versus Explanation?”:

Pretending that our explanations don’t have to supply accurate predictions—i.e., we are explaining rather than predicting—leads to worse understanding. Rather than ignoring or hiding predictions we should put them front and center so that they may help us in the evaluation of how well our understandings play out in political events and remind us that our understandings are incomplete as well as uncertain… Real understanding will involve both explanation and prediction. Time to get on with it rather than pretending that these two goals are polar opposites. We have a long way to go.
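One concrete answer to the “false precision” objection is that a numeric forecast, unlike a verbal “maybe, maybe not,” can actually be scored against outcomes. A minimal sketch of one standard scoring rule, the Brier score, illustrates the point; the forecast numbers below are hypothetical, chosen only to show that committing to probabilities makes a forecaster evaluable:

```python
# Brier score: mean squared distance between probabilistic forecasts
# (values in [0, 1]) and binary outcomes (0 or 1). Lower is better.
# A perpetual "maybe, maybe not" forecast of 0.5 scores exactly 0.25
# no matter what happens, so it carries no evaluable information.

def brier_score(forecasts, outcomes):
    """Average of (forecast - outcome)^2 over paired events."""
    pairs = list(zip(forecasts, outcomes))
    return sum((f - o) ** 2 for f, o in pairs) / len(pairs)

# Four hypothetical events that resolved yes, no, yes, yes:
outcomes = [1, 0, 1, 1]

hedged = brier_score([0.5, 0.5, 0.5, 0.5], outcomes)     # always 0.25
committed = brier_score([0.8, 0.2, 0.7, 0.9], outcomes)  # 0.045

print(hedged, committed)
```

The “guess” a forecaster commits to can turn out to be a good guess or a bad one, but only a quantified guess can be told apart from noise after the fact.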

It’s Not Just The Math

This week, statistics-driven political forecasting won a big slab of public vindication after the U.S. election predictions of an array of number-crunching analysts turned out to be remarkably accurate. As John Sides said over at the Monkey Cage, “2012 was the Moneyball election.” The accuracy of these forecasts, some of them made many months before Election Day,

…shows us that we can use systematic data—economic data, polling data—to separate momentum from no-mentum, to dispense with the gaseous emanations of pundits’ “guts,” and ultimately to forecast the winner.  The means and methods of political science, social science, and statistics, including polls, are not perfect, and Nate Silver is not our “algorithmic overlord” (a point I don’t think he would disagree with). But 2012 has showed how useful and necessary these tools are for understanding how politics and elections work.

Now I’ve got a short piece up at Foreign Policy explaining why I think statistical forecasts of world politics aren’t at the same level and probably won’t be very soon. I hope you’ll read the whole thing over there, but the short version is: it’s the data. If U.S. electoral politics is a data hothouse, most of international politics is a data desert. Statistical models make very powerful forecasting tools, but they can’t run on thin air, and the density and quality of the data available for political forecasting drops off precipitously as you move away from U.S. elections.

Seriously: you don’t have to travel far in the data landscape to start running into trouble. In a piece posted yesterday, Stephen Tall asks rhetorically why there isn’t a British Nate Silver and then explains that it’s because “we [in the U.K.] don’t have the necessary quality of polls.” And that’s the U.K., for crying out loud. Now imagine how things look in, say, Ghana or Sierra Leone, both of which are holding their own national elections this month.

Of course, difficult does not mean impossible. I’m a bit worried, actually, that some readers of that Foreign Policy piece will hear me saying that most political forecasting is still stuck in the Dark Ages, when that’s really not what I meant. I think we actually do pretty well with statistical forecasting on many interesting problems in spite of the dearth of data, as evidenced by the predictive efforts of colleagues like Mike Ward and Phil Schrodt and some of the work I’ve posted here on things like coups and popular uprisings.

I’m also optimistic that the global spread of digital connectivity and associated developments in information-processing hardware and software are going to help fill some of those data gaps in ways that will substantially improve our ability to forecast many political events. I haven’t seen any big successes along those lines yet, but the changes in the enabling technologies are pretty radical, so it’s plausible that the gains in data quality and forecasting power will happen in big leaps, too.

Meanwhile, there are some alternatives to statistical models that can help fill some of those gaps. Based partly on my own experiences and partly on my read of the relevant evidence (see here, here, and here for a few tidbits), I’m now convinced that prediction markets and other carefully designed systems for aggregating judgments can produce solid forecasts. These tools are most useful in situations where the outcome isn’t highly predictable but relevant information is available to those who dig for it. They’re somewhat less useful for forecasting the outcomes of decision processes that are idiosyncratic and opaque, like the North Korean government or even the U.S. Supreme Court. There’s no reason to let the perfect be the enemy of the good, but we should use these tools with full awareness of their limitations as well as their strengths.
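The simplest version of this kind of judgment aggregation is just an unweighted linear opinion pool: average the probability each participant assigns to the same event. Real markets and aggregation systems do something more sophisticated, but the basic mechanics can be sketched as follows (the trader numbers are hypothetical):

```python
from statistics import mean

def pool_forecasts(probabilities):
    """Unweighted linear opinion pool: the average of individual
    probability judgments (each in [0, 1]) for one yes/no event."""
    if not probabilities:
        raise ValueError("need at least one forecast")
    return mean(probabilities)

# Five hypothetical traders' probabilities for the same event:
crowd = [0.6, 0.7, 0.55, 0.8, 0.65]
print(round(pool_forecasts(crowd), 2))  # 0.66
```

Even this crude average tends to beat most of the individuals in the pool, because idiosyncratic errors partially cancel; weighting by past accuracy, as prize-paying exchanges implicitly do, sharpens it further.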

More generally, though, I remain convinced that, when trying to forecast political events around the world, there’s a complexity problem we will never overcome no matter how many terabytes of data we produce and consume, how fast our processors run, and how sophisticated our methods become. Many of the events that observers of international politics care about are what Nassim Nicholas Taleb calls “gray swans”—”rare and consequential, but somewhat predictable, particularly to those who are prepared for them and have the tools to understand them.”

These events are hard to foresee because they bubble up from a complex adaptive system that’s constantly evolving underfoot. The patterns we think we discern in one time and place can’t always be generalized to others, and the farther into the future we try to peer, the thinner those strands get stretched. Events like these “are somewhat tractable scientifically,” as Taleb puts it, but we should never expect to predict their arrival the way we can foresee the outcomes of more orderly processes like U.S. elections.

