The Peculiar Business of Political Risk Assessment

I don’t understand the business of political risk assessment.

Risk assessment, I get. I know something about politics, too. What I don’t get is the success of a business model that seems to boil down to: “Trust us, we know our stuff.”

Political risk consultancies offer various services, only some of which involve forecasting. Here, I’m talking specifically about the forecasting parts. As far as I can tell, none of the major purveyors of political risk assessment systematically and transparently assesses the accuracy of the forecasts they produce.

(If you know I’m wrong about this, please use the Comments section to set me straight—preferably with links to relevant documents.)

This kind of assessment should not be hard to do, especially in cases where forecasts are quantified, as a probability or in some other scalar form. The Economist Intelligence Unit, for example, assesses risks over the next two years in each of 10 areas on a 100-point scale, where 100 is maximum risk. Do bad things really happen more often in countries ranking higher on that scale? Is an 80 twice as risky as a 40, or is risk distributed differently across that range? These are questions that can be answered in a transparent way with statistics comparing those rankings to data on relevant events. As far as I know, though, this analysis either isn’t happening, or the results from it aren’t being shared with the consumers of those forecasts.
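
To make that concrete, here is a minimal sketch of the kind of check I have in mind. The data below are made up and the statistics are just illustrative; a real evaluation would pair the published scores with an independent record of the relevant events over the forecast window.

```python
# A sketch of the accuracy check described above, using hypothetical data.
# A real check would pair published risk scores with an independent record
# of whether relevant events actually occurred over the next two years.

from collections import defaultdict

# (country, risk score on the 0-100 scale, event in the following two years?)
observations = [
    ("A", 85, True),  ("B", 80, True),  ("C", 75, False), ("D", 60, True),
    ("E", 55, False), ("F", 40, False), ("G", 35, True),  ("H", 20, False),
]

# Do bad things happen more often in countries with higher scores?
# Compare observed event rates across bands of the risk score.
bands = defaultdict(list)
for _, score, event in observations:
    bands[score // 25].append(event)   # bands: 0-24, 25-49, 50-74, 75-99

for band in sorted(bands):
    outcomes = bands[band]
    rate = sum(outcomes) / len(outcomes)
    print(f"scores {band * 25:>3}-{band * 25 + 24}: event rate {rate:.0%} (n={len(outcomes)})")

# If the scores are read as probabilities (score / 100), a Brier score
# summarizes accuracy in a single number: 0 is perfect, lower is better.
brier = sum((score / 100 - event) ** 2 for _, score, event in observations) / len(observations)
print(f"Brier score: {brier:.3f}")
```

The particular statistic matters less than the point that the comparison is cheap and transparent once the forecasts and the event data are sitting side by side.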

I don’t mean to pick on the EIU here. In fact, they are unusual in the field for making their risk assessments available for free and describing the process they use to produce those forecasts (the judgments of analysts working in regional teams) in some detail.

What puzzles me is that this opacity about past performance is standard practice, and the customers don’t seem to mind. Basing business decisions on forecasts without knowing anything about how accurate those forecasts are is like continuing to take investment advice from a financial adviser without ever checking how your portfolio is doing. The stakes may be quite high, and the question of accuracy has an empirical answer, but you just don’t bother to find out what it is.

Perhaps the clients paying for these services should take a page from the book on arms control in the 1980s and adopt a new slogan of their own: “Trust, but verify.”


5 Comments

  1. Gyre / July 17, 2012

    Perhaps there’s an institutionalized sense of ‘do not question’? Political science doesn’t always make itself accessible to the public.

    • I’m not sure I understand where you’re going with the comment about political science. I’m talking about a business exchange; customers are buying a service. In a well-functioning market, I would expect the quality of that service to drive demand and market share, and the reliability of these prognostications seems like it ought to be a big element of that quality. Yet, in this market, customers don’t even seem to know how reliable those forecasts are. I don’t understand why they put up with that.

      Academic research is not a commercial exchange, so the same incentives don’t apply. That said, there is a strong push in quantitative social science toward greater transparency through the sharing of data and code and routine replication of models. At the same time, there’s also a rising interest in forecasting among political scientists. Those trends intersect in work on U.S. presidential elections, where we’re seeing a proliferation of great forecasting work and a lot of pressure to show the details of the modeling. That intersection of trends is actually part of what got me thinking about the opacity of forecast modeling on the commercial side.

  2. mg / July 19, 2012

    I think a lot of it may have to do with justifying business decisions a manager would like to make rather than making the best business decision according to the forecasts. People with cursory knowledge of a political situation already have their own forecast in their head and are generally convinced of its accuracy. If a political risk assessment provides information contrary to that forecast, they will most likely ignore it and come up with ad hoc theories about why the assessment is inaccurate.

    Besides justifying decisions beforehand, it also probably helps to have a set of risk assessments in your back pocket in case there is a disaster and you need to show that, according to the reports, it may have been the right decision at the time. Not an expert in this area, just some thoughts.

    • I suspect you’re right—that the demand is driven more by process than outcome. Process demands a set of numbers to use, and these products fill that demand. It may not be hard to measure forecast accuracy, but it is hard to measure the impact of those decision processes on business outcomes, so firms generally don’t bother.

      I wonder if there’s a supply-side story here, too. In some ways, this field is more like an oligopoly than an open marketplace. There’s a lot of money being spent already, and as long as none of the established players is getting hammered by the others, they may be more concerned about preventing newcomers from stealing their market share than they are about knocking off their established rivals. One of the best ways to lower barriers to entry to this market would be to measure accuracy, because newcomers could quickly make a name for themselves by beating the established players’ benchmarks. So the oligarchs have some incentive to cooperate in maintaining this wall against outsiders by making reputation the benchmark instead of performance. I’m not claiming they actually collude–I have no inside knowledge of these firms–but I can see why they wouldn’t proactively move in this direction.

  3. As someone who follows the field closely, I can tell you this is something that bothers me as well. Something I’ve been interested in doing, but which would involve heavy amounts of data analysis (thesis idea for Brian!!), would be to obtain the material and then review it. As someone who produces these assessments, I can tell you that most do not produce many predictions; when they do, the predictions are couched in so many caveats that they’re almost useless. Typically, the quantitative models are the ones that produce the most useful analysis. I’m thinking of The PRS Group in particular: they produce percentage chances of government coalitions surviving, and so on. It is more a device for framing than anything else.

