I've often wondered the same thing myself: why don't weather providers publish their own accuracy rates? I started building a "simple" system for estimating NOAA's accuracy and immediately ran into trouble deciding what accuracy even means.

If they predict a high of 86 degrees and it's really 85, what does that mean for accuracy? Measured on the Kelvin scale, even a 10 degree error looks pretty accurate as a percentage, though a person's experience of those two days will be very different.

But I think the biggest problem is that the simple daily forecasts we consume are a poor representation of what forecasters actually do. They're modeling how weather systems form, move, and interact. If a model predicts a storm forming and moving in a particular direction, but the 10-day forecast is off by 100 miles and the rain arrives a day late, what does that mean for accuracy? Another model could just use the average weather as its forecast and might score pretty high on long-term accuracy, yet be useless from a user's perspective.

So if someone forecasts a high of 86 with 99% confidence, what would that mean? That it'll be 86 somewhere in the area, that it'll be close to 86 at that location on that day, or that it'll hit 86 at that location within some time period? You really can't boil all of those variables down into a single number.

And then you'll run into issues tracking the confidence of the confidence levels. Ad infinitum.
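To make the climatology point concrete, here's a rough Python sketch with made-up numbers (nothing here comes from NOAA, and the daily highs and "model" forecasts are invented for illustration): a forecast that just parrots the seasonal average can post a respectable-sounding mean absolute error, which is why forecast verification tends to report skill relative to that baseline rather than a raw accuracy figure.

```python
def f_to_k(temp_f):
    """Convert Fahrenheit to Kelvin."""
    return (temp_f - 32) * 5 / 9 + 273.15

def mean_absolute_error(forecasts, observations):
    return sum(abs(f - o) for f, o in zip(forecasts, observations)) / len(observations)

# Made-up week of daily highs (deg F), purely for illustration.
observed    = [85, 88, 83, 90, 86, 79, 84]
model_fcst  = [86, 87, 85, 88, 85, 81, 83]    # a hypothetical model's forecasts
climatology = [85] * len(observed)            # "forecast" the seasonal average every day

mae_model = mean_absolute_error(model_fcst, observed)
mae_clim  = mean_absolute_error(climatology, observed)

# Skill score: 1.0 = perfect, 0.0 = no better than climatology, < 0 = worse.
skill_model = 1 - mae_model / mae_clim
skill_clim  = 1 - mae_clim / mae_clim   # always exactly 0 by construction

print(f"model MAE:         {mae_model:.2f} deg F")
print(f"climatology MAE:   {mae_clim:.2f} deg F")
print(f"model skill:       {skill_model:.2f}")
print(f"climatology skill: {skill_clim:.2f}")

# The Kelvin point from above: a 10 deg F miss is under 2% error on the
# Kelvin scale, which sounds "accurate" even though the felt difference is huge.
pct_error_kelvin = abs(f_to_k(96) - f_to_k(86)) / f_to_k(86) * 100
print(f"10 deg F miss as a Kelvin percent error: {pct_error_kelvin:.1f}%")
```

The exact numbers are meaningless; the point is that the climatology "forecast" scores zero skill by construction, so any single accuracy number only tells you something when it's measured against a baseline like that.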