Sunday, October 19, 2014

Oct 13 Verification

Here is the promised update to my experimental graphics, though reports are still trickling in as surveys are completed; when I ran my code last Tuesday, not all of these reports were accounted for. Remember that I approach this from a modeling perspective. Below are two versions of the verification: one using all reports (damage included), and one restricted to reports that meet formal severe criteria.
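For concreteness, here is a minimal sketch of how the two report sets might be split. The file name, column names, and schema are hypothetical; the thresholds (any tornado, hail of at least 1 inch, a wind gust of at least 58 mph) follow the formal NWS severe criteria, while damage-only wind reports carry no magnitude and fall out of the formal set.

```python
import pandas as pd

# Hypothetical report file and schema, loosely modeled on SPC-style
# storm report CSVs; the exact format here is an assumption.
reports = pd.read_csv("141013_rpts.csv")

def is_formal_severe(row):
    """True only for reports meeting formal severe criteria:
    any tornado, hail >= 1.00 inch, or a wind gust >= 58 mph.
    Damage-only wind reports have no magnitude and are excluded."""
    if row["type"] == "tornado":
        return True
    if row["type"] == "hail":
        return row["magnitude"] >= 1.00   # inches (assumed units)
    if row["type"] == "wind":
        # mph (assumed units); NaN magnitude means damage-only
        return pd.notna(row["magnitude"]) and row["magnitude"] >= 58.0
    return False

all_reports = reports   # damage included
formal = reports[reports.apply(is_formal_severe, axis=1)]
print(len(all_reports), "total reports;", len(formal), "meet formal severe criteria")
```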

Note that the maximum probability has increased from 0.33 to 0.74! Talk about uncertainty. When trying to formally assess severity in the models, the report criteria really matter for interpreting your verification metrics. In this case, in this corner of the country, the damage reports match well with this particular representation of strong-storm occurrence. Verifying the UH track events against formal severe criteria alone actually yields lower skill overall, because of a mismatch in both the locations and the number of "events".
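To make that sensitivity concrete, here is a toy sketch in the spirit of smoothed UH-exceedance probabilities: a single probability field scored against two report grids of different strictness. The grid size, UH threshold, smoothing width, and random report locations are all illustrative assumptions, not the settings behind the maps below.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(42)

# Stand-in UH field; grid points exceeding the threshold mark "track events".
uh_event = (rng.gamma(2.0, 20.0, size=(120, 120)) >= 120.0).astype(float)
prob = gaussian_filter(uh_event, sigma=10.0)   # smoothed probability surrogate in [0, 1]

def brier(prob, rows, cols):
    """Brier score of the probability grid against a 0/1 grid
    built from report locations."""
    obs = np.zeros_like(prob)
    obs[rows, cols] = 1.0
    return np.mean((prob - obs) ** 2)

# All reports vs. a smaller, differently placed formal-severe subset.
rows, cols = rng.integers(0, 120, 40), rng.integers(0, 120, 40)
print(f"Brier, all reports:   {brier(prob, rows, cols):.4f}")
print(f"Brier, formal severe: {brier(prob, rows[:12], cols[:12]):.4f}")
```

Because the two report grids differ in both event count and location, the same forecast field earns different scores, which is exactly the mismatch described above.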

This does not look like a well-simulated event, a problem we have noted statistically for the cool season (October through March) in our work. More work is needed on this end, but a very real issue remains: how do we formally verify our severe forecasts when we can't get observations that distinguish formal severe from damage (whether it comes from severe wind gusts or sub-severe ones)?

Formal: [verification graphic]

All reports: [verification graphic]
