Saturday, February 17, 2018

Modelology

My thing is modeling. How do you get forecasters to use more of the tools? And will those tools actually help them? Because they have to learn the models to use them, and use them often to learn them. Then we get to figure out whether any of that was worth it.


Science has been and still is essentially a tool we use to change the way we think about existing problems. I like the implication that through new perspectives we can find new ways to learn. 

I like what NSF has written in their strategic planning document: 
"Advances in our capability to observe, model, comprehend, and predict the complexity of the world around us will provide us with a deeper understanding of the processes that underpin life, learning, and society."


From observations to models we gain understanding, perhaps enough understanding (LEARNING) to make predictions. Those predictions help validate what we thought we knew and show us what we don't know.

Much has been said chastising people for showing model output. The field has also long chastised forecasters for "simply using guidance" as a forecast, or gone further and declared that a forecast is not a forecast unless issued by a forecaster. As if anyone integrates the equations of motion 24 hours or 10 days into the future with accuracy.

When I see people sharing model output, I see them trying to learn ... which is a process. You have to use the data to learn from the data. It also means you have to give and listen to constructive criticism. The most effective model output discussions try to explain what they see and why, and perhaps even what it means to the rest of us. I especially appreciate the follow-ups: revisiting that prediction to see what happened, a verification of sorts for an individual "event," qualitative as well as quantitative.

We should encourage more sharing of predictions, with explanations and with the goal of learning ... and humbly discuss the issues. Less propaganda, fewer insults aimed at "the kid in his basement" (mentioned in too many AMS2018 talks and Twitter discussions) sharing model output, and more building up of our knowledge base, encouraging those who do it well to share tips on how they do what they do. More positive examples.

Predictive modeling is steadily improving, and keeping humans up to date with its successes and failures is paramount. That means more distributions-oriented verification alongside the usual measures-oriented verification (a rough sketch of the contrast follows the questions below). Let's answer the good questions:
1. When do the models do well and when do they not?
2. Is there anything the models do consistently, good or bad, that we can take advantage of?
3. How can we tell the difference between good model forecasts and poor ones?
4. Do the metrics we use for bulk model quality help us use the model output?
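To make the contrast concrete, here is a minimal sketch in Python (NumPy only, with synthetic placeholder data, not any particular operational dataset) of the two views side by side: measures-oriented verification collapses performance into a few bulk scores, while distributions-oriented verification looks at the joint behavior of forecasts and observations, here via the conditional distribution of observations given the forecast.

```python
# Minimal sketch: measures-oriented vs. distributions-oriented verification
# for a continuous variable (say, 2 m temperature). Data are synthetic.
import numpy as np

rng = np.random.default_rng(0)
obs = rng.normal(10.0, 5.0, size=1000)        # hypothetical observations
fcst = obs + rng.normal(1.0, 2.0, size=1000)  # hypothetical forecasts with a warm bias

# Measures-oriented: a couple of bulk numbers for the whole sample.
bias = np.mean(fcst - obs)
rmse = np.sqrt(np.mean((fcst - obs) ** 2))
print(f"bias = {bias:.2f}, RMSE = {rmse:.2f}")

# Distributions-oriented: how do the observations behave when the model
# forecasts a given range? This exposes conditional biases and spread
# that the bulk scores hide.
bins = np.arange(-5, 30, 5)
for lo, hi in zip(bins[:-1], bins[1:]):
    mask = (fcst >= lo) & (fcst < hi)
    if mask.sum() < 10:
        continue
    p10, p90 = np.percentile(obs[mask], [10, 90])
    print(f"fcst in [{lo:3d},{hi:3d}): n={mask.sum():4d}, "
          f"obs mean={obs[mask].mean():6.2f}, obs p10/p90={p10:6.2f}/{p90:6.2f}")
```

The bulk scores answer "how good on average," while the conditional summaries start to answer questions 1 through 3 above: when the model does well, when it does not, and what it does consistently enough to exploit.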









