Monday, October 17, 2016

Sensitivity and the poor forecasts that follow

I want to ask a different question. Why are we having these struggles in forecasting? Why are these models so sensitive as to make poor forecasts?

Many people are of the opinion that models can lead forecasters astray. And those people should win a prize. We are in a strange place in weather prediction, broadly speaking. Models gain much better capability the finer we resolve the atmosphere, but there are many more minefields to navigate around. This community has been publishing on this for the better part of 30 years. These new capabilities allow us to forecast events with much more specificity in time and space.

But there is a trade-off, and it can lead to ... nothing, or something awful, or a myriad of places in between. Why? Even small differences in the initial conditions can result in large-scale storms coming out too strong or too weak (which affects where they go). With the added specificity that we desire, we are more prone to these types of errors. You can try to minimize them, but they are still there.

These "errors" are more common because these finer-resolution models are more sensitive. The smaller the important details, the more sensitive the forecast. It's less that these forecasts are "wrong": each one represents a possible future, and how probable that future is depends on how many forecasts we can make, assuming we can resolve those fine details. <Enter ensembles>
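To make that sensitivity concrete, here is a toy sketch. This is not any operational weather model; it is the textbook Lorenz-63 system, the classic demonstration of this behavior, with the standard parameter values. Two runs start with initial conditions that differ by one part in a hundred million, and after enough simulated time they end up in completely different places:

```python
# Toy demonstration of sensitivity to initial conditions using the
# Lorenz-63 system (textbook chaotic parameters, simple Euler stepping;
# purely illustrative, not a weather model).

def lorenz_step(x, y, z, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz-63 system one Euler step."""
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dx * dt, y + dy * dt, z + dz * dt

def run(x0, steps=5000):
    """Integrate from (x0, 1, 1) and return the final x value."""
    x, y, z = x0, 1.0, 1.0
    for _ in range(steps):
        x, y, z = lorenz_step(x, y, z)
    return x

a = run(1.0)          # "truth"
b = run(1.0 + 1e-8)   # same forecast, initial condition off by 0.00000001
print(abs(a - b))     # the tiny difference is now macroscopic
```

Run it for a short stretch and the two trajectories are indistinguishable; run it long enough and they bear no resemblance to each other. That is the minefield finer-resolution models walk through.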

Let's estimate how likely this forecast is. We run many versions of the model and hope to stumble upon the best, and hopefully the most common, forecast. We hope the atmosphere is not very sensitive and that the most common forecast verifies. Most of the time, this strategy works well. It assumes we have completely accounted for all the sensitivities of that particular forecast. Most times we have (maybe because we have a lot of members, but we can always use more!). We can't get everything perfect, mind you; there are limits to what we can resolve with any particular model.
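The arithmetic behind this is refreshingly simple. Here is a minimal sketch of turning an ensemble into a probability; the snowfall totals below are made-up member forecasts for illustration, not real model output:

```python
# Minimal sketch: the probability of an event, from an ensemble's point
# of view, is just the fraction of members that predict it.

def event_probability(members, threshold):
    """Fraction of ensemble members at or above the threshold."""
    hits = sum(1 for m in members if m >= threshold)
    return hits / len(members)

# Hypothetical 10-member snowfall forecast, in inches.
snow = [0.0, 0.5, 1.0, 2.0, 3.0, 4.0, 6.0, 8.0, 12.0, 24.0]

print(event_probability(snow, 1.0))   # chance of at least an inch -> 0.8
print(event_probability(snow, 24.0))  # the two-foot outlier: 1 in 10 -> 0.1
```

The catch, of course, is that this probability is only as good as the spread of the ensemble: if no member had the two feet, the estimated probability would be zero, and the storm would be a surprise.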

Unfortunately, we have these things called extreme events. They are rare. A storm dumping 20" of rain in a day? Rare. A tornado outbreak? Rare. A hurricane landfall? Rare. The fatty rib eye I want to eat for dinner? Medium rare, but actually quite common at your local meat counter. Anyway, 10,000 times more common than these weather events. And way more troublesome for modelers. These events can be really sensitive.

If you run a lot of models, you can see that sensitivity barrier breached, even on benign weather events. The forecasts are possible, but improbable, future states. They can be hard to differentiate. After all, while we have the best equations, even the smallest "errors" can send the model down the wrong road. That's sensitivity. It can mean the difference between no snow and 2 feet. And hopefully the model with 2 feet of snow has it at the right spot at the right time. That makes it easy to forecast, at least in hindsight! But at least some of the models had the 2 feet; otherwise we would call it a surprise snowstorm.

So finer-resolution modeling comes with the potential for greater accuracy, but not without risks. In fact, some of the models do a great job some of the time, hitting on the right solution. But those pesky rare events ... sometimes that model outlier is correct and the rest of the ensemble did not "lock on" to that solution. This is what surprises are made of. And surprises are less common than they used to be. What we are in now is a battle for accuracy, and we have a long way to go to achieve perfection.

And we should be talking about these sensitivities, because they are showing up in our forecasts. These sensitivities actually tell us a lot about the forecast and how many possible ways there are to get 2 feet of snow. Or a hurricane 20 miles off the coast, or inland, because of a wobble! Forgive me if I didn't get that last wobble right; I was watching for it, I knew about it, but I just didn't quite get that detail correct.

We are growing in our technical abilities, and with any growth there is a steep learning curve, both for the forecasters and for the recipients of our forecasts. As our capability grows, so do the sensitivities! And our job hinges on communicating these sensitivities. We call it uncertainty, and the language of uncertainty is probability. We have to communicate well for these forecasts to have meaning. So we might have to meet in the middle, between sensitivity and accuracy. And the needle pointing to where we meet will probably be bouncing around for a while.