Why doesn’t the weather forecast have a confidence interval?


Almost every discipline of statistical science requires that a confidence level be ascribed to a result. You have to estimate how certain you are that your results/hypothesis will hold. Call it statistical significance, error bars, hypothesis testing, or whatever you’d like. Good scientists say how right they think they are.

Yet, meteorologists don’t normally give a confidence measure with the weather report. Why?

Here in NJ we’ve had rain on and off for weeks. Every day the weather forecast is something resembling “40%* chance of thunderstorms. Showers Possible. Partly Cloudy. 72 degrees.” Roughly speaking, this translates to, “We have no idea what the weather will be, though we are slightly more than 0% sure it won’t be sunny and 90.” Fair enough, weather forecasting is a chaotic science. We can’t (won’t) ever predict chaotic events far in the future. But what’s the harm in admitting uncertainty? Surely there are times when meteorologists are pretty damn sure of the weather (e.g. during a drought with no fronts in sight, or in Buffalo during any day of the winter) and times when their guess is as good as my LL Bean barometer (e.g. the humid, wily days of summer, where an evening thunderstorm is as much a coin toss as the baseball game it ruins). Why not say, “we are a paltry 10% sure there is a 50% chance of precipitation in the region”? I refuse to believe confidence has gone unnoticed by academic meteorologists, so why hasn’t it trickled into mainstream forecasting?

Enlighten me, weather(wo)men.

*As I understand it, the percent chance of precipitation given by most forecasts refers to the chance that a measurable amount of rain (usually to 1/100th of an inch) will fall somewhere in the region. While a 100% chance means it’s probably going to rain, it doesn’t tell you the confidence with which that 100% prediction is made.


~ by wcuk on June 22, 2009.

28 Responses to “Why doesn’t the weather forecast have a confidence interval?”

  1. Do you mean something like this: http://www.yr.no/place/Norway/Oslo/Oslo/Oslo/long.html

  2. Or maybe this: http://www.dmi.dk/dmi/index/danmark/regionaludsigten/kbhnsj/kbhnsj-7-9.htm

  3. “As I understand it, the percent chance of precipitation given by most forecasts refers to the chance that a measurable amount of rain (usually to 1/100th of an inch) will fall somewhere in the region.”

    This is not completely correct. A percent chance is defined as the probability that precipitation will fall at any given point within the area for the prescribed forecast period.

    Here is an enlightening memo regarding this subject: http://pajk.arh.noaa.gov/info/articles/survey/poptext.htm

    I don’t know the answer to your question, but I’m currently interning at the National Weather Center this summer. The Norman area WFO (weather forecast office) is just downstairs. I’m curious now, too, so I will try to ask around and report back.

  4. Seattle-area weather researchers are already on it:

    http://www.probcast.com/

  5. Based on the quality of most local news broadcasts, I have to imagine the simplicity of the weather forecasts has something to do with the target audience. Trying to explain confidence intervals to anyone without an understanding of statistics (probably most people who rely on the local TV news) can be a painful endeavor.

  6. For forecasts issued by the National Weather Service, you can read the Area Forecast Discussion (AFD). This is a product the forecasters write, generally released in advance of the “finished” forecast, that explains some of the reasoning behind the forecast. It doesn’t give numeric confidence values, but the forecaster will often discuss some of the uncertainties.

    Forecasters often discuss their confidence in a forecast, and from what I understand there’s been some discussion of including a confidence interval in some forecasts. I think the real problem, though, is that most of the public would have no idea how to process the extra information and would end up being confused. There’s still a significant portion of the population who don’t understand the difference between a watch and a warning, so asking them to understand statistics might be a bit optimistic.

  7. It’s because they aren’t measuring something. They are predicting. The difference between probability and confidence limits is subtle, but this is one instance where it is significant.

  8. won’t has an apostrophe in it

  9. Meteorology is not a chaotic science. A sample of patterns known to create particular weather events is taken to predict future events based on past observations. For the most part the patterns repeat themselves every year. Being a station meteorologist, however, is not the same as being a scientist specializing in meteorology, and so many of your statements contain a flaw of scope.

  10. I’m a meteorologist working for Environment Canada, and we’ve discussed this amongst us peons a few times. But presenting this to the public in an understandable way seems a bit daunting. As even your discussion of probability of precipitation shows, the current delivery of information isn’t well understood at times. We’ve started developing Day 6 & 7 forecasts, based on an ensemble forecast system. How we can sit and present this forecast as a deterministic solution boggles my mind. In this instance, confidence statistics would be well suited in my opinion.
    Glad to see even non-wx geeks are thinking about this!

  11. @Maj

    Meteorology is most certainly a chaotic science. Why do you think weather forecasts cannot be reliably extended past ~5 days? Weather has all the hallmarks of nonlinear systems that exhibit chaos, most notably that small perturbations of initial conditions lead to wildly different outcomes.

    When you speak of events that repeat every year, you are in the realm of climatology (the study of long term atmospheric events). This is a more deterministic field, where patterns are dominated by periodic, predictable causes (the angle of the sun to the earth, etc.).

  12. Per Anonymous Environment Canada, the problem is generating the confidence intervals in the first place. I assume by “ensemble” the poster means that they run a Monte-Carlo style simulation a large number of times for the same time period. In theory, you could base a confidence interval on the distribution of outcomes in your ensemble, centered around the most common outcome.

    But even then, I’ve seen weather forecasts that show they’re running multiple different models. At least that’s true in the Chicagoland area, and I assume elsewhere. It seems like the forecast is coming out of intelligent interpretation of the differences (or similarities) in outcomes from the two models. And if they knew how “right” each model was, we wouldn’t have the forecasting problems we do!

    So I guess as a non-weather geek (I’m just a computer guy) the problem is that the ensemble runs cost more CPU time (+1x per ensemble member), plus there’s not even just one model to look at ensembles of.
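
    To make that concrete, here is a rough sketch in Python of how you could read an interval and a probability of precipitation straight off an ensemble. The member values below are made up for illustration, not output from any real model:

    import statistics

    # Hypothetical 24-hour precipitation totals (mm) from 10 ensemble members.
    members = [0.0, 0.2, 1.5, 3.0, 0.0, 4.2, 0.8, 2.6, 0.1, 5.0]

    mean = statistics.mean(members)
    ranked = sorted(members)

    # Crude central interval: drop the lowest and highest 10% of members.
    low = ranked[int(0.1 * len(ranked))]
    high = ranked[int(0.9 * len(ranked)) - 1]

    # Probability of measurable precipitation = fraction of members at or
    # above a 0.25 mm (~0.01 inch) threshold.
    pop = sum(m >= 0.25 for m in members) / len(members)

    print(f"Ensemble mean: {mean:.1f} mm")
    print(f"Roughly 80% of members fall between {low} and {high} mm")
    print(f"Chance of measurable precipitation: {pop:.0%}")

    Real ensembles are much larger and the members get weighted and bias-corrected, but the basic idea of reading the spread off the member distribution is the same.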

  13. It does here bro http://www.eldersweather.com.au/

  14. Here in the northeast part of Wisconsin, USA, you can be pretty confident any forecast for a period more than twelve hours in the future will be 100% wrong.

  15. One fascinating weather theory is the Lezak Recurring Cycle.

  16. I also wish they would add a variance for average temperature. Seems like we never get the average temperature — maybe it’s 60 on June 1 one year, 80 the next, and so the average is a normal-sounding 70. Yet it’s never 70 degrees on June 1.

    Adding variance to the reporting could help make the weather forecast less ridiculous.
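
    For example, something like this quick Python sketch (with made-up June 1 highs for ten past years) would report the spread alongside the average:

    import statistics

    # Hypothetical June 1 high temperatures (F) for ten past years.
    highs = [60, 80, 58, 82, 74, 66, 88, 55, 79, 61]

    mean = statistics.mean(highs)
    spread = statistics.stdev(highs)

    # Prints roughly "Average high: 70 F (typical year-to-year swing: 12 F)",
    # which says a lot more than the bare 70.
    print(f"Average high: {mean:.0f} F (typical year-to-year swing: {spread:.0f} F)")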

  17. Humans are too stupid and worthless to grab confidence indexes. The proof is how many females (emphasis on mothers) with say “theeey say that if you don’t eat your greens…”. The common issue is that many females, unlike males, dont look for the truth, females can be regularly caught looking for facts that support their agenda. Every read women’s magazines? Try it. You will be amazed at what they claim the whole US thinks based on a survey of 100 (sometimes up to 900) people. Little problem, there are 400 million people in the US. No matter, when an emotional female is flailing about because the world is catering to her agenda she will whip out all these things that “they” say. Next time our in this situation, ask to see the source document on look for the standard deviation (The probability the people doing the survey is wrong). If there is no list of what the error is, its probably so far off from reality, they didnt want to put that fact in there. To the point this is EXACTLY why weather forecasters dont put an error reading in their forecasts. If you could actually make sense of it, you would realize they get paid $150,000 a year to work for 15 minutes and be completely wrong.

  18. Weather reports are for the general public. It is very uncommon to see confidence intervals presented in any reports to the general public.

  19. The South African Weather Service shows the confidence % for each of their daily forecasts.

    http://www.weathersa.co.za

  20. This is not exactly true, but you COULD think of it this way and thus get to sleep at night.

    The percentage given IS the confidence interval. The forecast is “It will rain” or “there will be a thunderstorm.”

    A near 100 percent ‘chance’ of a thunderstorm means “There will be a thunderstorm. Confidence interval = .02 p of being wrong (with a near 100 percent chance).” Or, “40% chance of rain today” means “It will rain today, p=0.6”

    Again, this is NOT statistically what is happening, but it helps you to feel better about it.

    By the way, weather prediction as it is practiced is not about chaos at all. Over meso- and macro-scales, weather may act in a chaotic way, but weather prediction is more about linear processes such as air movement and temperature, pressure, and humidity gradients.

  21. I’m a meteorologist, with over 20 years of experience (my blog is cloudyandcool.com), and I’ve always believed that meteorologists should use fewer percentages, and a confidence level would only make the forecast more confusing–and make forecasters look even less accurate.

    The government’s hurricane forecast (http://cloudyandcool.com/2009/06/18/2009-hurricane-season-forecast/) adds a confidence percentage to the forecast, with the result being a forecast of a 70% chance of 9-14 named storms. The range of storms already indicates uncertainty, and adding a percentage to the forecast adds another level of uncertainty. They even apply the confidence level to their ACE (Accumulated Cyclone Energy) index, which is already a percentage estimate of the intensity of the season. The result is a forecast of a 70% chance of an intensity rating of 65%-130%. Tell me what that means–I have no idea how that could be useful.

    I think people understand that a forecast is just that–a prediction of the future, with intrinsic uncertainty–so forecasters are better off taking a stand and issuing a more direct forecast rather than deliberately adding more uncertainty to the equation.

    Paul Yeager (cloudyandcool.com)

  22. You should try looking at the maps they use on the BBC TV weather forecasts these days – ‘spurious accuracy’ if ever I saw it. Kind of like reporting a result to 3 decimal places – it may be wrong, but at least it’s *precisely* wrong (when approximately right would be better, certainly in a maritime climate)!! Lol. 😉

  23. They probably convert the output directly from the computer models to make the TV graphics. These things are just as you said—too precise looking (but not literally accurate) when traditional maps showing precipitation would give the public a better feel for the forecast.

  24. I am not a meteorologist, and with respect to those who are and who’ve posted so far, this is what I was told by a meteorologist years ago:

    “Percentage chance” refers not to the event, but to the likely outcome of the conditions expected for the forecast period. (Over)Simplified, the basis for the calculation is the historical data for all days in the past that had closely similar conditions to those expected to occur, and the percentage value is the proportion of those days that actually had that outcome.

    Personally, before any confidence adjustments, I’d rather see weather broadcasts remind the audience that forecasts are for fairly large geographical regions, well beyond the scope of most weather systems to affect in a manner consistent with the regional forecast. People will be less apt to conclude that meteorologists are incompetent if they understand that not every square mile of the region will experience the same things.
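
    As a rough sketch in Python of that analog idea (entirely made-up past days, and certainly not how a forecast office actually computes it):

    # For each past day with conditions similar to today's, record whether
    # measurable rain actually fell.
    past_days = [
        (True, True), (True, False), (False, True), (True, True),
        (False, False), (True, False), (True, True), (False, True),
    ]

    analogs = [rained for similar, rained in past_days if similar]
    pop = sum(analogs) / len(analogs)
    print(f"Chance of measurable rain: {pop:.0%}")  # 3 of 5 analog days -> 60%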

  25. How is “looking” accurate or not relevant?

  26. What a ferociously commented post. Tom Waits was right when he said “strangers talk only ’bout the weather; All over the world, it’s the same, it’s the same.”
    Will, you have assembled a set of expert conversationalists.

  27. http://en.wikipedia.org/wiki/Weather_Rock

    I also think that the National Weather Service should enlist the predictions of retired grandfathers for local regions throughout the country. I’d pay more attention to the forecast if it included statements like: “warm out today” or “squall’s a comin, I can feel it in me bones”

  28. Markov chains
