“They Said It Was Going to Rain”

Most Saturdays and some Sundays, I hook up with a bike ride that winds out of DC’s Rock Creek Park into semi-rural Maryland and back again over the course of a few hours. I depend on this ride for hard training and a shot of competition, but I’m a wet-weather wimp and will usually stay home and use the trainer in my basement if it’s raining or probably going to rain. So, one of the first things I do when I get up most weekend mornings is check the hourly forecasts at weather.com and Weather Underground. If there’s much risk of rain, I’ll open the radar map again close to my 9:45 departure and run the animated forecast for the next few hours. If that animation shows yellow or orange blobs swarming my regular route when I’m going to be on it, I almost always stay in.

One recent Sunday, the forecast had me hemming and hawing for a bit before I decided to go. The hourly breakout at weather.com pegged the chance of rain at 70 percent for the first couple of hours I’d be out, but it wasn’t raining at 9:30 and the radar map didn’t look bad, either. Updating completed, out I went.

The weather often dominates conversations at the start and finish of the ride, and on that Sunday two themes rang through the chatter I overheard: we’d gotten really lucky, and weather forecasters are idiots. “They said it was going to rain,” the Greek chorus kept repeating.

[Photo: riders in a wet Paris–Roubaix]

But, of course, that’s not what “they” said. In point of fact, meteorologists had pegged the odds of rain at about 2:1. According to those forecasts, it was probably going to rain, but the chances that it would stay dry weren’t so bad, either. I wouldn’t bet my mortgage on a probability of 0.3, but I’m okay with occasionally risking a soggy ride on one.

As a weather-wimpy cyclist, I was happy to catch the lucky break that Sunday. As a guy who sometimes forecasts for a living, I was intrigued by the consistent way in which so many people had distorted that probability. In our heads, the quantified uncertainty we saw in the paper or on the web was transformed into a categorical prediction of rain. The question a modeler would want answered before passing judgment (“For all of the hours I said there was a 70-percent chance of rain, how often did it actually rain?”) never came up; the intended audience was content to judge this one case in isolation and declare, “Wrong!”
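To make the modeler’s question concrete, here is a minimal sketch of that kind of calibration check; the forecast-and-outcome pairs are made up for illustration, not pulled from any real archive:

```python
from collections import defaultdict

# (forecast probability, did it rain?) pairs -- made-up history, for illustration only
history = [
    (0.7, True), (0.7, False), (0.7, True), (0.7, True),
    (0.3, False), (0.3, True), (0.3, False),
    (0.9, True), (0.9, True), (0.1, False),
]

# Group the outcomes by the probability that was forecast ...
buckets = defaultdict(list)
for prob, rained in history:
    buckets[prob].append(rained)

# ... and compare each stated probability with the observed frequency of rain.
for prob in sorted(buckets):
    outcomes = buckets[prob]
    observed = sum(outcomes) / len(outcomes)
    print(f"forecast {prob:.0%}: rained {observed:.0%} of {len(outcomes)} hours")
```

A well-calibrated forecaster’s 70-percent hours should come up rainy roughly 70 percent of the time, so a single dry Sunday tells you almost nothing about whether the forecast was any good.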

That we’re not so great at processing probabilities won’t surprise anyone familiar with psychological research from the past few decades on that subject. Exactly what form that bias takes under what conditions, though, still seems to be something of a mystery. In a New York Times blog post about forecasts of the U.S. presidential election, statistician Andrew Gelman wrote:

What if the weatherman told you there was a 30 percent chance of rain—would you be shocked if it rained that day? No.

Apparently, Gelman hasn’t met the crew from my weekend ride. Gelman goes on to connect his assertion to work by Amos Tversky and Daniel Kahneman on prospect theory, which is based, in part, on the expectation that people systematically overestimate the risk of low-probability events and underestimate the risk of high-probability ones. That expectation, in turn, is based on empirical research that has been replicated elsewhere, as the following chart shows:

[Chart: empirical estimates of the probability weighting function]

What’s puzzling to me here is that my fellow riders seemed to be distorting things in the opposite direction. Instead of taking a probability of 0.7 and thinking of it as a toss-up as Gelman and that chart predict they would, they had converted it into a sure thing. That’s still bias, of course—just not the kind I would have expected.
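For anyone who wants to see the shape behind that chart, here is a rough sketch using the functional form and the often-cited gain-side parameter from Tversky and Kahneman’s 1992 paper; the parameter is just that standard estimate, not anything fitted to the studies plotted above:

```python
# Sketch of the inverse-S probability weighting function from Tversky and
# Kahneman (1992): w(p) = p^g / (p^g + (1 - p)^g)^(1/g). With g near 0.61
# (their commonly cited estimate for gains), small probabilities are
# overweighted and large ones underweighted.

def weight(p: float, gamma: float = 0.61) -> float:
    """Decision weight a person is modeled as assigning to stated probability p."""
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

for p in (0.1, 0.3, 0.5, 0.7, 0.9):
    print(f"stated {p:.1f} -> weighted as {weight(p):.2f}")

# A stated 0.7 lands near 0.53 here: closer to a toss-up than to a sure thing,
# which is the direction of distortion the chart describes -- not the direction
# my riding companions took it.
```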

If there’s a moral to this story, it’s that we still have a lot of work left to do in understanding how we cogitate on uncertainty and what that implies about how we should produce and present probabilistic forecasts. In many domains, we’re getting better and better at the forecasting part, but even very accurate forecasts are only as useful as we make them or let them be. To get from the one to the other, we still need to learn a lot more about how we process and act on that information—not just individually, but also organizationally and socially.


6 Comments

  1. Tom Parris (May 20, 2013)

    Jay,

    Interesting post.

    Minor comment. Use weather.gov for your weather needs. In my humble opinion, it’s much better than weather.com (not as pretty, but much more info in much geekier detail).

    More interesting comment. My recollection is that the research you cite on prospect theory is for high-consequence outcomes (e.g., global thermonuclear war, stock market meltdowns, catastrophic sea level rise, …), whereas your fellow bicyclists were responding to a prediction of a low-consequence outcome (so low that some of them would have continued the ride even if it had been certain to rain). You hinted at this when you wrote about betting your mortgage versus getting wet.

    Another factor is the degree to which it is a repeated exercise. So, for example, imagine you get a weekly stock tip that predicts a 10% gain over the next month and is right 70% of the time. If you are like me, you would gladly bet on each tip (in an amount where you could absorb the loss if the prediction was wrong), knowing that over the long run you would come out a winner. But you might not bet anything if you only got that tip once in a lifetime.
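    A rough sketch of that arithmetic, in case it helps (the 70% hit rate and 10% gain come from the example above; the size of the loss when a tip misses is an assumed figure, since the example leaves it open):

```python
import random

random.seed(1)

# Assumed terms: 70% chance a tip pays +10%, 30% chance it costs -10%.
# The downside isn't specified in the example above, so -10% is a placeholder.
P_WIN, GAIN, LOSS = 0.70, 0.10, -0.10
STAKE = 100  # a fixed amount you could afford to lose on any single tip

def one_tip() -> float:
    """Profit or loss, in dollars, from betting STAKE on a single tip."""
    return STAKE * (GAIN if random.random() < P_WIN else LOSS)

# A one-off bet is a gamble you might reasonably pass on ...
print(f"single tip: {one_tip():+.2f}")

# ... but repeated bets pull toward the expected value per tip,
# 0.7 * 10 - 0.3 * 10 = +$4 on a $100 stake.
total = sum(one_tip() for _ in range(520))  # a weekly tip for ten years
print(f"520 tips: {total:+.2f} total, {total / 520:+.2f} per tip")
```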

    Cheers

    • Great points, Tom. On prospect theory, that’s part of what I’m getting at here. It would be really helpful to know under what conditions different heuristics apply. Prospect theory tells us that the size of the potential gains and losses will produce their own distortions, but that doesn’t get us all the way to a flipped s-curve. On Twitter, Dan Gardner also brought up our tendency to discretize probabilities into binary predictions, and that would help explain this case, too.

  2. Grant (May 20, 2013)

    People want some certainty. They want to know what’s going to happen, especially when it’s something they have so little control over, like the weather. Silver has written often in defense of weather forecasters, pointing out how much they’ve improved. Who knows? Give it another century and maybe we won’t be working with probabilities anymore but with definite forecasts.

  3. I think rain probability is always tough to understand because it’s rarely clear what the scale of prediction is. If there is a 30% chance of rain in my zip code, does that mean that SOMEWHERE in my zip code will get rain? Or EVERYWHERE will get rain? Is the zip code actually the prediction unit?

    There’s actually an answer to this, but few people know it: http://www.utexas.edu/depts/grg/kimmel/nwsforecasts.html
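    For what it’s worth, my reading of the convention that page describes (hedged; check the link for the official wording) is that the stated number is the chance of measurable rain at any given point in the area, which works out to forecaster confidence times expected areal coverage:

```python
# Sketch of the (assumed) convention: probability of precipitation at a given
# point = forecaster confidence that precipitation occurs somewhere in the
# area * fraction of the area expected to get measurable precipitation.

def pop(confidence: float, areal_coverage: float) -> float:
    """Chance of measurable rain at any given point in the forecast area."""
    return confidence * areal_coverage

# Two quite different situations can both be announced as "30% chance of rain":
print(pop(confidence=1.0, areal_coverage=0.3))  # sure rain over 30% of the area
print(pop(confidence=0.3, areal_coverage=1.0))  # 30% chance of rain everywhere
```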

  Pingback: The Arab Spring and the Limits of Understanding | Dart-Throwing Chimp
