Assessing the Risks of Risk Assessment

Tuesday’s Washington Post reports that a U.S. government task force now recommends “men should no longer receive a routine blood test to check for prostate cancer because the test does more harm than good.”

After reviewing the available scientific evidence, the task force concluded that such testing will help save the life of just one in 1,000 men. At the same time, the test steers many more men who would never die of prostate cancer toward unnecessary surgery, radiation and chemotherapy, the panel concluded. For every man whose life is saved by PSA testing, another one will develop a dangerous blood clot, two will have heart attacks, and 40 will become impotent or incontinent because of unnecessary treatment, the task force said in a statement Monday.

This recommendation will sound familiar to any American who was within earshot of a TV or radio a few years ago, when the same task force updated its guidance on breast cancer to recommend against routine screening for women in their 40s. That recommendation raised a ruckus in some quarters, and the new guidance on prostate-cancer screening may do the same.

Whatever you think of them, these recommendations are useful reminders that applied risk assessment can have a downside. Because no screening system works perfectly, attempts to identify high-risk cases will always flag some low-risk cases by mistake. In statistical jargon, these mistakes are called “false positives.” For mathematical reasons, the rarer the condition–or, in many policy contexts, the rarer the unwanted event–the larger the number of false positives you can expect to incur for every “true positive,” or correct warning.
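To put rough numbers on that claim, here’s a toy calculation in Python. The sensitivity and false-positive rate are invented purely for illustration (they are not estimates for PSA testing or any other real screen), but they show how the number of false alarms per correct warning balloons as the condition gets rarer.

```python
# Illustrative only: the sensitivity and false-positive rate below are
# assumptions, not estimates for any real screening test.
def positives_per_10k(prevalence, sensitivity=0.80, false_pos_rate=0.10, n=10_000):
    """Expected true and false positives when screening n people."""
    true_pos = n * prevalence * sensitivity
    false_pos = n * (1 - prevalence) * false_pos_rate
    return true_pos, false_pos

for prevalence in (0.10, 0.01, 0.001):
    tp, fp = positives_per_10k(prevalence)
    print(f"prevalence {prevalence:>5.1%}: {tp:6.1f} true positives, "
          f"{fp:7.1f} false positives, FP per TP = {fp / tp:5.1f}")
```

Holding the test constant, each tenfold drop in the base rate multiplies the number of false positives per true positive by roughly ten: about one per correct warning when the condition afflicts one person in ten, and well over a hundred when it afflicts one in a thousand.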

This inevitable imprecision is the crux of the problem. To assess the value of any risk-assessment system, we have to compare its benefits to its costs. In the plus column, we have the expected benefits of early intervention: lives saved, suffering averted, crimes preempted, and the like. In the minus column, we have not only the costs of building and operating the screening system, but also the harmful effects of preventive action in the false positives. The larger the ratio of false positives to true positives, the larger these costs loom.
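One way to keep that ledger straight is to write it down. The function below is only a bookkeeping sketch; every input is a quantity that, outside of well-studied settings like cancer screening, is very hard to pin down.

```python
def net_value(true_pos, false_pos, benefit_per_tp, harm_per_intervention, system_cost):
    """Crude expected net value of a screening-plus-intervention regime.

    benefit_per_tp:        expected gain from early intervention in a genuinely high-risk case
    harm_per_intervention: expected cost of side-effects, paid in every flagged case, true or false
    system_cost:           fixed cost of building and operating the screening system
    """
    benefits = true_pos * benefit_per_tp
    costs = (true_pos + false_pos) * harm_per_intervention + system_cost
    return benefits - costs
```

The arithmetic is trivial, but it makes the structure of the problem plain: false positives appear only on the cost side, so the ratio of false to true positives does a great deal of work in determining the sign of the result.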

In the case of prostate cancer, epidemiologists can produce sharp estimates for each of those variables and arrive at a reasonably confident judgment about the net value of routine screening. With political risks like coups or mass killings, however, that’s a lot harder to do.

For one thing, it’s often not clear in the political realm what form preventive action should take, and some of the available forms can get pretty expensive. Diplomatic pressure is not especially costly, but things like large aid projects, covert operations, and peace-keeping forces often are.

What’s more, the preventive actions available to policy-makers often have uncertain benefits and are liable to produce unintended consequences. Aid projects sometimes distort local markets or displace local producers in ways that prolong suffering instead of alleviating it. Military interventions aimed at nipping threats in the bud may wind up expanding the problem by killing or angering bystanders and spurring “enemy” recruitment. Support for proxy forces can intensify conflicts instead of resolving them and may distort post-conflict politics in undesirable ways. The list goes on.

If a screening system were perfectly accurate, the costs of those unintended consequences would only accrue to interventions in true positives, and we could weigh them directly against the expected benefits of preventive action. In the real world, though, where false positives usually outnumber true positives by a large margin, there often won’t be any preventive benefits to counterbalance those unintended consequences. When we unwittingly intervene in a false positive, we get all of the costs and none of the prevention.
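Plugging the toy numbers from the two sketches above into that bookkeeping function shows the effect. Again, every figure here is invented for illustration, and the code reuses positives_per_10k and net_value as defined earlier.

```python
# Hypothetical payoffs and costs, chosen only to illustrate the logic.
BENEFIT_PER_TP = 100.0   # expected gain from catching a real case early
HARM_PER_CASE = 5.0      # expected cost of intervening, whether the warning was right or wrong
FIXED_COST = 200.0       # cost of running the screening system itself

for prevalence in (0.10, 0.01, 0.001):
    tp, fp = positives_per_10k(prevalence)  # defined in the first sketch above
    print(f"prevalence {prevalence:>5.1%}: net value "
          f"{net_value(tp, fp, BENEFIT_PER_TP, HARM_PER_CASE, FIXED_COST):9.1f}")
```

With these made-up numbers, the regime comes out well ahead when the event hits one case in ten, is still modestly positive at one in a hundred, and is a clear net loss at one in a thousand, even though the screening test itself never changed.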

Improvements in the accuracy of our risk assessments can shrink this problem, but they can’t eliminate it. Even the most accurate early-warning system will still produce false positives, and with them the problem of costly intervention in cases that didn’t need it.

We also know that social scientists still don’t understand the dynamics of the political and economic systems they study nearly well enough to speak with confidence about the likely effects and side-effects of specific interventions. (That is to say, we shouldn’t speak with great confidence about cause and effect, but that doesn’t stop many of us from doing so anyway.) As Jim Manzi argues in a brilliant 2010 essay, the problem is that, with social phenomena, “the number and complexity of potential causes of the outcome of interest”–what Manzi calls “causal density”–is fantastically high, and the counterfactuals required to untangle those causal threads are rarely available. As a result,

At the moment, it is certain that we do not have anything remotely approaching a scientific understanding of human society. And the methods of experimental social science are not close to providing one within the foreseeable future. Science may someday allow us to predict human behavior comprehensively and reliably. Until then, we need to keep stumbling forward with trial-and-error learning as best we can.

In short, we’re stuck in a world of imprecise early warnings and persistent uncertainty about the consequences of the interventions we might undertake in response to those imprecise warnings. It’s like trying to practice medicine with a grab bag of therapies and nothing but observational studies of one small population to guide choices about who needs them when, and what happens when they get them.

So what’s an empiricist to do? It’s tempting to throw up our hands and just say “fuggedaboudit,” but, as PM observes in a recent post at Duck of Minerva, “The alternative to good social science is not no social science. It’s bad social science.” If we abandon systematic risk assessment and cautious inference about the consequences of various interventions, we won’t thereby forgo risk assessment and preventive action. Instead, we’ll stumble ahead with haphazard risk assessment and interventions driven by anecdote or ideology. Confronted with this choice, I’ll take fuzzy knowledge over willful ignorance any day.

That said, I do think the breadth of our uncertainty in these areas obliges us to concentrate our preventive efforts on two kinds of interventions: 1) ones that we understand well (e.g., vaccinations against infectious diseases), and 2) ones that are so small and simple that any side-effects will be inherently limited.

It’s tempting to think that bigger interventions will yield bigger benefits, but the benefits of these big schemes are often unproven, and the unintended consequences are likely to be larger as well (Exhibit A: U.S.-funded road-building in Afghanistan). There are a lot of ways that international politics isn’t like medicine, but the ethical concept of “First, do no harm” is undoubtedly relevant to both.


2 Comments

  1. Very, very well put, and I daresay, wise. Thanks also for the link to the great Manzi article, which I hadn’t seen.

  2. On Twitter, Brett Keller offered up this fantastic chart illustrating the downside of prostate-cancer screening for men over 50. If only we could get estimates like that for the unintended consequences of political interventions…
