Saturday, January 9, 2016

Value of certainty

So you like to look at 120h deterministic forecasts from your favorite model. GOOD FOR YOU!
My favorites don't go out that far. But when MPAS went out to Day 5, those forecasts were fun to look at. Not just for the immediate forecast, but for what they said about the future of forecasting.


The conversation of late is about how sharing 5-day model forecasts is WRONG, or hype, or useless. Or claiming that ensembles are the only way. The right way. By this logic it takes an ensemble of hammers to join two boards together with a nail. More is better! Actually, wise is best.

If you never look at the forecast at some future time, how can you make use of it? That's the attitude the people who made the first forecasts probably had to wrestle with. Those forecasts were bad. And they had to decide, with BAD forecasts, how to get better, with little evidence that their science yielded great results! Success comes from failure. Or, for our field, one success led to not abandoning the science.

Murphy (1993) outlines three types of forecast goodness:
1. Consistency: forecasts correspond to the forecaster's best judgment derived from their knowledge.
2. Quality: forecasts are good if they correspond to the observations.
3. Value: forecasts are good if they result in some benefit to the receiver.

Type 3 says nothing directly about quality. It says that in some cost-loss scenario there is a benefit to the user. Of course we want the quality to be high, because that could lead to deriving the maximum possible benefit, and this is usually discussed in terms of some threshold of quality at which forecasts become useful. But increasing quality doesn't necessarily lead to an increase in value. See Harold Brooks' NWA presentation here: http://nwas.org/annualmeeting/presents/2420.pptx (PowerPoint file)
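To make the quality-versus-value distinction concrete, here is a minimal sketch of the classic static cost-loss decision model. The function, the hypothetical hit and false alarm rates, and the two users' cost-loss ratios are all illustrative assumptions of mine, not numbers from Murphy or from Brooks' talk; the point is only that identical forecast quality can be worth a lot to one user and less than nothing to another.

```python
# Minimal sketch of the standard cost-loss decision model. Every number here is
# an illustrative assumption, not verification data for any real forecast system.

def relative_value(hit_rate, false_alarm_rate, base_rate, cost_loss_ratio):
    """Relative economic value of a forecast for a single user.

    hit_rate and false_alarm_rate describe forecast quality; base_rate is the
    climatological frequency of the event; cost_loss_ratio is C/L, the cost of
    protecting divided by the loss taken if caught unprotected.
    """
    s, a = base_rate, cost_loss_ratio             # shorthand, with the loss L = 1
    e_climate = min(a, s)                         # cheaper of "always protect" / "never protect"
    e_perfect = s * a                             # protect exactly when the event occurs
    e_forecast = (false_alarm_rate * (1 - s) * a  # protect for nothing
                  + hit_rate * s * a              # protect and the event occurs
                  + (1 - hit_rate) * s)           # miss: take the full loss
    return (e_climate - e_forecast) / (e_climate - e_perfect)

# Same forecast quality, two hypothetical users who differ only in cost-loss ratio:
for c_over_l in (0.05, 0.5):
    v = relative_value(hit_rate=0.6, false_alarm_rate=0.1,
                       base_rate=0.05, cost_loss_ratio=c_over_l)
    print(f"C/L = {c_over_l:.2f} -> relative value = {v:+.2f}")
```

A relative value of 1 would mean the forecast is worth as much to that user as a perfect forecast; at or below 0, that user would do just as well acting on climatology alone. With these made-up numbers the first user gets substantial value while the second would be better off ignoring the forecast, even though the forecast quality is identical.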

For value to even be realized, you need people to use your forecasts in some way! Hiding those forecasts means no one uses them, and feedback on them is absent too. This is a practical issue that scientists have to deal with even if we would rather avoid it. Users can tell you what they think they need, but until they actually use the forecasts, even they won't be in a position to evaluate the benefits and costs associated with them.

Consistency assumes you have knowledge about the forecasts. The only way you acquire that knowledge is by doing the work necessary to find out what is and isn't good about them. Murphy even discusses that the granularity of forecasts (whether it's grid spacing, time, or ranges) should correspond to the quality you can actually deliver. This is WHY many of us argue for verification of forecasts. It isn't just about acquiring a level of "skill". It's about finding the meteorological value of forecasts, in many different formats, so we can then decide (hopefully with stakeholders) which format is best. Note that oftentimes users don't know what will be useful until they start using the forecasts. We might choose, together, to offer a convenient format even though the skill is low. Connecting the user to the forecasts is just as important to establishing trust as having quality in the forecasts. Even low quality in the right format can help someone derive a benefit.

It's a process. It requires starting somewhere, building, failing, reconstructing, and learning what works and what you are capable of forecasting.

It's also not enough to be probabilistic. How you codify things like uncertainty is very important.

For example, look at SPC's forecasts. On Day 1 you can get an individual hazard probability as high as 60%. On Day 2 the all-hazards probability goes up to 60%, on Day 3 up to 45%, and on Days 4-8 up to 30%. That is built-in uncertainty, silently expressed in the magnitude of probability we are comfortable forecasting.
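For illustration only, those ceilings can be pictured as a lookup from outlook to the largest probability contour the forecaster will even draw. The dictionary below just restates the numbers in the paragraph above; the labels are mine, not SPC's product definitions.

```python
# Toy restatement of the outlook probability ceilings described above.
# Check SPC's own product documentation for the authoritative definitions.
MAX_OUTLOOK_PROB = {
    "Day 1 (individual hazard)": 0.60,
    "Day 2 (all hazards)":       0.60,
    "Day 3 (all hazards)":       0.45,
    "Day 4-8 (all hazards)":     0.30,
}

for outlook, ceiling in MAX_OUTLOOK_PROB.items():
    print(f"{outlook:27s} highest probability issued: {ceiling:.0%}")
```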

Severe weather forecasts at 192h (Day 8, see the figure)! And you think snow is hard at 120h? It's the same kind of problem, though not identical. Both are small-scale phenomena. Both can have large impacts, even when it's only 1 tornado or 1" of snow. It depends on where it hits, or where it falls, and when.

Sharing a model forecast may not be the wisest thing to do. But declaring with CERTAINTY that it will be "wrong" isn't wise either. What is wrong? Where is your evidence that "every forecast is wrong at 120h"? Where is your evidence that an incorrect forecast has no value for anyone?

For the sharer of such "awful" forecasts, perhaps we can consider that their motive is to find the value. Maybe they wish to improve. Maybe they wish to scare. Maybe they wish to help. Maybe.

We do live in the era of click bait (interwebz: "See what happens next...") and ear bait (television: "Stay tuned to see if we will get tornadoes this week..."). And some people have learned that this is an acceptable (and accepted) way of getting attention. So let's not pretend that we haven't participated, either passively or actively, in letting this happen.

Ensembles offer equally tempting bait, though they add a probabilistic framework to our arsenal. You get a lot more choices for Armageddon. Probabilistically, for rare events, it may very well be that the outlier solution corresponds better to the observations! That is the nature of forecasting under uncertainty. We might have the equations, but we don't (and cannot yet) simulate or forecast all the things that would make the perfect forecast.
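A quick way to see why the outlier is not automatically the member to throw away: even for a statistically perfect ensemble, where the observation and the members are draws from the same distribution, the observation still lands outside the ensemble envelope about 2/(N+1) of the time. The toy simulation below uses purely synthetic Gaussian numbers to illustrate that rate; it is not tied to any real ensemble system.

```python
# Toy check: if an N-member ensemble and the verifying observation are draws
# from the SAME distribution (a perfectly reliable ensemble), the observation
# still falls outside the ensemble envelope roughly 2/(N+1) of the time.
# Synthetic Gaussian draws only; no real model output involved.
import random

N_MEMBERS = 20
N_CASES = 100_000

outside = 0
for _ in range(N_CASES):
    members = [random.gauss(0.0, 1.0) for _ in range(N_MEMBERS)]
    obs = random.gauss(0.0, 1.0)
    if obs < min(members) or obs > max(members):
        outside += 1

print(f"fraction of cases outside the envelope:    {outside / N_CASES:.3f}")
print(f"expected for a reliable ensemble, 2/(N+1): {2 / (N_MEMBERS + 1):.3f}")
```

So an extreme member verifying now and then is not proof the ensemble was broken; a well-calibrated ensemble is supposed to be "beaten" by reality a predictable fraction of the time.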

"The model/ensemble has no skill." This usually means it's of low quality. More appropriately, it means the model/ensemble is unreliable. It may not stay that way, though. We are making great strides in developing forecast models at increasing lead times. Ensembles, data assimilation, finer resolution, and improved physics have all led to advances in our forecasts. I hope this trend continues.

But stop being so CERTAIN about the reliability and skill of our models. That certainty pollutes the science communication environment we will need when our forecasting does improve. Keep checking that 10-day forecast ... someday it might be as good as today's 5-day forecast.
