Note the maximum probability has increased from 0.33 to 0.74! Talk about uncertainty. But when trying to formally assess severity in the models, the criteria you choose really matter when interpreting your verification metrics. In this case, in this corner of the country, the damage matches well with this particular representation of strong-storm occurrence. Verifying UH track events against formal severe criteria actually leads to lower skill overall, thanks to a mismatch in both the location and the number of "events".
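To make the location/count mismatch concrete, here is a minimal sketch of how event-based verification might be scored: forecast event points (e.g., UH track centroids) are matched to observed reports within a distance threshold, and the leftovers become misses and false alarms that drag down a score like CSI. The function names, the greedy matching, and the 40 km threshold are all illustrative assumptions, not our actual verification code.

```python
def match_events(fcst_pts, obs_pts, max_dist_km=40.0):
    """Greedily match forecast event points to observed reports.

    Points are (x, y) positions in km. Each observation can be
    claimed by at most one forecast event. Threshold is illustrative.
    Returns (hits, misses, false_alarms).
    """
    obs_used = set()
    hits = 0
    for fx, fy in fcst_pts:
        best, best_d = None, max_dist_km
        for j, (ox, oy) in enumerate(obs_pts):
            if j in obs_used:
                continue
            d = ((fx - ox) ** 2 + (fy - oy) ** 2) ** 0.5
            if d <= best_d:
                best, best_d = j, d
        if best is not None:
            obs_used.add(best)
            hits += 1
    misses = len(obs_pts) - hits
    false_alarms = len(fcst_pts) - hits
    return hits, misses, false_alarms


def csi(hits, misses, false_alarms):
    """Critical Success Index: hits / (hits + misses + false alarms)."""
    denom = hits + misses + false_alarms
    return hits / denom if denom else float("nan")


# Two forecast events, two reports: only one pair is close enough,
# so a displacement error costs us both a miss and a false alarm.
h, m, fa = match_events([(0, 0), (100, 100)], [(10, 0), (200, 200)])
print(h, m, fa, round(csi(h, m, fa), 3))  # → 1 1 1 0.333
```

Note how a single displaced event is penalized twice (one miss plus one false alarm), which is exactly why even a visually reasonable forecast can verify poorly under strict event matching.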
This does not look like a well-simulated event, a problem we have statistically noted for the winter months (October through March) in our work. More work is needed on this end, but a very real issue remains: how do we formally verify our severe forecasts when we can't get observations that distinguish formally severe weather from damage (whether it comes from severe wind gusts or sub-severe ones)?