Why doesn’t the weather forecast have a confidence interval?
Almost every discipline of statistical science requires that a confidence level be ascribed to a result. You have to estimate how certain you are that your results or hypothesis will hold. Call it statistical significance, error bars, hypothesis testing, or whatever you’d like. Good scientists say how right they think they are.
Yet, meteorologists don’t normally give a confidence measure with the weather report. Why?
Here in NJ we’ve had rain on and off for weeks. Every day the weather forecast is something resembling “40%* chance of thunderstorms. Showers Possible. Partly Cloudy. 72 degrees.” Roughly speaking, this translates to, “We have no idea what the weather will be, though we are slightly more than 0% sure it won’t be sunny and 90.” Fair enough, weather forecasting is a chaotic science. We can’t (won’t) ever predict chaotic events far in the future. But what’s the harm in admitting uncertainty? Surely there are times when meteorologists are pretty damn sure of the weather (e.g. during a drought with no fronts in sight, or in Buffalo during any day of the winter) and times when their guess is as good as my LL Bean barometer (e.g. the humid, wily days of summer, when an evening thunderstorm is as much a coin toss as the baseball game it ruins). Why not say, “we are a paltry 10% sure there is a 50% chance of precipitation in the region”? I refuse to believe confidence has gone unnoticed by academic meteorologists, so why hasn’t it trickled into mainstream forecasting?
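To make the complaint concrete: ensemble forecasting, where the same model is run many times with slightly perturbed starting conditions, already produces both numbers. Here’s a toy sketch in Python, with made-up rainfall values rather than real model output: the probability of rain is the fraction of ensemble members predicting a measurable amount, and one crude confidence measure is how far the members are from unanimity.

```python
# Toy ensemble: each value is one model run's predicted rainfall (inches).
# These numbers are invented for illustration, not real model output.
ensemble_rain = [0.00, 0.02, 0.00, 0.15, 0.00, 0.00, 0.30, 0.00, 0.05, 0.00]

MEASURABLE = 0.01  # common threshold for "measurable" precipitation (inches)

# Probability of precipitation: fraction of runs predicting measurable rain.
wet_runs = sum(rain >= MEASURABLE for rain in ensemble_rain)
pop = wet_runs / len(ensemble_rain)

# A crude confidence measure: distance from a 50/50 split.
# 100% when every run agrees (rain or no rain), 0% at a pure coin toss.
agreement = abs(2 * pop - 1)

print(f"Chance of rain: {pop:.0%}")            # Chance of rain: 40%
print(f"Ensemble agreement: {agreement:.0%}")  # Ensemble agreement: 20%
```

With numbers like these, that 40% could honestly be reported as “40%, and we’re barely more sure of it than a coin flip.”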
Enlighten me, weather(wo)men.
*As I understand it, the percent chance of precipitation given by most forecasts refers to the chance that a measurable amount of rain (at least 1/100th of an inch) will fall at any given point in the forecast area. While a 100% chance means it’s probably going to rain, it doesn’t tell you the confidence with which that 100% prediction is made.
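And as best I can tell, a kind of confidence is already baked into that number: the National Weather Service formulation I’ve seen multiplies the forecaster’s confidence that precipitation will develop at all by the fraction of the area expected to get wet, PoP = C × A. A toy sketch (my own illustration, not NWS code):

```python
def pop(confidence: float, coverage: float) -> float:
    """Probability of precipitation, per the NWS-style formula PoP = C * A.

    confidence: certainty (0-1) that measurable precipitation will
                develop somewhere in the forecast area.
    coverage:   fraction (0-1) of the area expected to receive a
                measurable amount (>= 0.01 inch) if it develops.
    """
    return confidence * coverage

# 50% sure storms form, covering 80% of the area if they do -> "40% chance".
print(f"{pop(0.5, 0.8):.0%}")  # 40%

# The same 40% can also come from near-certainty of very scattered showers,
# which is exactly the ambiguity the forecast never spells out.
print(f"{pop(1.0, 0.4):.0%}")  # 40%
```

So two very different situations, a shaky guess at widespread rain and a confident call of scattered showers, collapse into the same headline percentage.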