I don't wish to rehash too much of the nonsense I have read. I have some things to say:
1. I believe in gun control.
A weather, education, and science blog run amok. Brought to you by James Correia, Jr., PhD. I have a BS from SUNYA in Atmospheric Sciences, an MS from FSU in Meteorology, and a PhD from ISU in Agricultural Meteorology. I specialize in mesoscale numerical weather prediction on scales larger than 4 km, for both forecasting and regional climate. The views expressed here do not reflect those of NOAA, the NWS, or the University of Oklahoma.
Monday, December 31, 2012
Saturday, October 27, 2012
Communication and products
An excellent Twitter discussion broke out this evening. A quote to consider about public products:
"MUCH too much science."
And consider that impacts might be moved in briefings from last to first, followed by the science explaining why those impacts are expected.
Now this all assumes we have a fundamental grasp of the impacts. I argue that sometimes we do and sometimes we don't. Not all meteorologists are skilled communicators, and not all meteorologists have skills in impacts or training in engineering to assess potential impacts.
Tuesday, October 2, 2012
Bull$hit: The other winter storm
Dear Choir-
You may have heard that Winter Storms will now be named. After all, Greek is the new American. Unless of course you are the letter Q. Or that random Finnish name (sorry, Finlanders, I don't really know how you got sucked into this). But don't worry, all storms will get a name. I mean they have a potpourri of names, all 26 of them. Assuming winter is 20 weeks, and a "storm" brews on average every 5 days from November through March, well, that comes in at 28 storms (#arithmetic).
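And yes, the #arithmetic checks out; a throwaway Python sketch, purely for snark:

```python
# Back-of-the-envelope count of nameable winter "storms"
# (the letter's #arithmetic, nothing official).
weeks_of_winter = 20
days_per_storm = 5        # one "storm" every 5 days, November through March
names_available = 26      # A through Z

storms = weeks_of_winter * 7 // days_per_storm
print(storms)                    # 28
print(storms > names_available)  # True: two more storms than names
```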
But the rules are fair. A storm dropping a few feet in the Sierras won't be named, and never you mind about the weekend. We can ignore those too, because you get the message then. We have our priorities on the East Coast. Dusting of snow in Atlanta? Name it. Alberta Clipper in upstate NY? Houston, we have a problem.
How about we sit back and actually put all that brain power into something more useful for communication than clever hashtags? (BOOM!) Did you know that not everyone has a tweety account? And that not all snow that falls can be "blamed" on a "storm"? I am looking at you, Mountains.
Names are awesome for babies! I hear every baby (eventually) gets a name. But have you not heard about switching up the letters? I mean for Pete's sake, man, names have to be unique, Peat! And if you are going to use unique names, well, you better have more than 26. 'Cause next year's winter is going to be worse than this year's (maybe not; my skill at 25 days is horrible).
And boy, I just can't wait for the train of zipper lows followed by a double-barrel low. I can hear it now: "Bonnie will split from Clyde as Clyde goes berserker in the Atlantic and Bonnie drifts into Canada where it will be called... ($#*!, they don't name them in Canada!)." Followed by those crazy socialists over in Europe who called it Lothar, totally giving our naming convention so much more street cred. I mean, what kind of European would be named Lothar? Oh, ...right... German. Pretty fitting for Europeans. So yeah, well, stop interrupting, you are breaking my train of thought.
In conclusion, a low is a low, right? And the most important point is that we have names that the public can use to recall how we failed to name that 6-incher in Dubuque, but gave six names to 1-inch storms in NYC ... where (surprise!) no one actually noticed the frost on the ground!
Can we give the proper attention to the things that actually matter? How about (just throwing this out there) consistency of message? Let's focus on being skillful. Skilled at communicating threats. Attentive to your concerns. Putting useful information out there that helps people plan, manage their risk, stay safe, and protect life. Better information to help you make informed decisions, and ways to keep you informed, updated, and hopefully prepared rather than panicked. How does naming storms help you do that?
Sincerely,
Iam Makingfunofyou
Sunday, September 23, 2012
"The Signal and the Noise"
Nate Silver's new book, about the statistics of making predictions in everything from gambling and stock markets to weather, climate, and terrorism, was quite good. The perspective each topic adds is not trivial, and the story is layered enough that it binds.
Monday, August 27, 2012
Anyone can be this right
Apparently it is really easy to forecast (e.g., weather, airplanes, objects, finances, emergencies, crises). So easy anyone can do it.
Sunday, August 26, 2012
Social Media bombardment
I feel like I am under attack. I am being bombarded by imagery, forecasts, retweets, shares, and the like about Tropical Storm Isaac. Of course, the more you tweet, the harder it is to find good information that is timely and relevant. I certainly cannot claim that everything I tweet is necessary, or good, or even useful (thankfully, I do not tweet very much). Do you stay on message? Stay focused? Provide substantive updates?
Sunday, August 12, 2012
Big Data - Big Deal
I am a fan of the concept of big data. It is easy to identify what you mean when you say "big data". It means lots of data. Data so large or complex or both that you can derive meaning or knowledge from it. But the name itself has evolved more into a marketing trend.
What I do not like is that big data is seen as both a promise and the answer to our problems. If only we had enough data to find out whether X is true, or what causes Y, or what exactly is related to Z. These questions are at the heart of any analysis procedure. But what makes Big Data unique is that we can actually address these questions without the caveat of adding "but future work will require larger sample sizes and more robust data collection to verify these findings". At least that's what we hope.
And hope is exactly the right word to use there. Getting more data does not make the problem easier. It adds to the volume of data, muddies the waters when variables are correlated, and makes computing that much more difficult. Machine learning is then inevitably added to the conversation, as a way to address the issue. As this article points out:
"In theory, Big Data could improve decision-making in fields from business to medicine, allowing decisions to be based increasingly on data and analysis rather than intuition and experience."
And that is where I draw the line. I am admittedly an amateur (if even that skilled) in cognitive psychology, but the idea that technology simply has the best answer is ridiculous. The interface of the human mind (with intuition and experience) with technology (big data and correlations) offers the best solution. If you want answers, technology can provide them. If you want good answers, let each do what it is best at. Let each contribute according to its skill, reliability, quickness, and subject mastery. Technology is quick, so it can do things like automation where a set task is linear or nearly linear (machines don't yet program tasks for other machines, despite what you might see in commercials).
Big data should be a big deal to help with making informed decisions. Decisions that may need to be counterintuitive, especially in non-linear situations, considering the whole of the system in an environment where the rules and bounds are unwritten. As is the case with Big Science, it is the physical processes that are important, and we must always be careful to understand the implications of Big Data before it is simply taken as fact.
There is a lot of promise in Big Data, but don't believe the managerial hype just yet. Dealing with Big Data is now the problem, whereas before it was creating coherent big data. As these techniques mature, we can leverage the gains promised by Big Data with our big data. Then Big Data will be a big deal.
Tuesday, July 17, 2012
Out with the old?
http://www.huffingtonpost.com/c-m-rubin/education-technology_b_1675040.html?utm_hp_ref=education
In this fascinating critique of the current US educational system, the one thing they leave out is the why. Why is the system failing? And what does failing even mean?
Saturday, June 23, 2012
Data Discovery
Today was a good day for mentoring. Teaching students the power of discovery is never easy. They have to have some passion, a little bit of creativity, and some imagination. There has to be some time pressure. They have to learn how to explore on their own, to make sense of the data, and to display it in obvious and intuitive ways ... to know the data well. They must take ownership of the data set. If there is something wrong you must trust them to find it so it doesn't interfere with your analysis.
Saturday, May 26, 2012
A tale of two heatbursts
In the Hazardous Weather Testbed we have a ton of data coming in showing all kinds of unique phenomena, and yesterday's models had a few strong heatbursts. This was expected given the very strong capping inversion and the likelihood that storms would initiate along the dryline and move into the capped region. As it turned out, the storms in Texas/Oklahoma moved north rather than northeast, putting the residents of western OK in jeopardy for heatbursts. And indeed there were quite a few damage reports from the high winds generated by the heatbursts.
Sunday, May 6, 2012
Convection Initiation (lack thereof)
Yesterday everyone was talking about the cap. As if that is the only player in a complex tapestry of processes that drives whether we see any convective clouds. The SPC issued a slight risk for a situation last evening in Nebraska along a boundary where CI and thus tornadoes were possible. A TOR watch was issued with the caveat that the best thermodynamics were out in the warm sector just slightly removed from the zone of better vertical shear located in a narrow corridor behind the front.
Thursday, April 26, 2012
Norman tornado and broader implications
news9.com published a story today about Norman schools acquiring GPS radios to keep kids safe from severe weather. The story includes the quote that 24 minutes before the tornado, kids were released because all was clear to do so. The tornado formed at 3:59 PM, putting release just after 3:30 PM. At that time a severe thunderstorm warning, issued at 3:14 PM, was in effect. Oddly, though, Norman was not mentioned in the original warning nor in the update at 3:38 PM. By 3:55 PM Norman was in a severe thunderstorm warning, but for 17 minutes the mesocyclone, though weak and complex, was not a warned-on feature. At 3:59 PM a tornado warning was issued. [Please, kindly correct me if I am wrong.]
There are many questions to ask:
1. Does every mesocyclone warrant a warning? This should include a discussion of the mesocyclone's location relative to populated areas, its vertical location and thus structure, and its strength. How did the specific characteristics of the mesocyclone influence the warning forecasters? I am NOT a warning forecaster.
2. Why would schools let out with a severe thunderstorm warning in effect? What information, and when, would have contributed to the decision to either release or not release? These are time-sensitive decisions and become complicated quickly, mostly because you go from a situation involving a building to one involving an entire city. And more importantly, why purchase GPS radios for the buses? I hope they come with training. I certainly hope there will be someone on the other end of the radio helping bus drivers steer clear of potential storm hazards. But wouldn't it be wiser for the schools to secure dedicated personnel to monitor severe weather situations so they don't release into an existing warning? I am all for being prepared if there is a surprise (though the probability of a surprise is much less than it used to be).
The decision was made quite quickly to solve the problem with technology. It would appear we have a significant opportunity for social science research to understand a very specific situation in the warning process. And I am going to go out on a limb and say that its a bit soon to say we have a greater understanding of the entire situation from all relevant perspectives.
Not having a significant social science presence to rapidly assess this situation is disappointing. It would be on par with what an organization like the NTSB does: investigate the whole situation, assess ways to make the system better, make the components better, and ultimately provide better service. By not having a system in place to do this we are losing valuable feedback on the process. But, as everyone says, "in austere budgetary times" it is difficult to do so.
Well, the NWS didn't do a service assessment for 2 March, which was a big event and would have been costly. Other notable days with notable tornadoes include 4/3, 3/23, and 2/28, and probably others. All stick out to me as missed opportunities to learn from impacted communities. I am optimistic about the Weather-Ready Nation initiative. I know people are passionate about solving problems. We must also be passionate about identifying them, and have the courage of our convictions to keep after this important aspect of bridging science and service.
Monday, March 5, 2012
Predictability
I was reading the story of the toddler who was torn from her family and flung into a field by the tornado in Indiana. A truly tragic story on so many different levels. The family of five chose to seek the best possible shelter they could. Their options were fairly limited, however.
It needs to be well known that a lot of fatalities occur in mobile homes. As Brooks and Doswell (2002) have noted mobile home deaths in tornadoes are much, much higher than deaths in permanent structures. It doesn't much matter what kind of mobile home as far as I am aware. They simply cannot withstand the forces at work in any tornado let alone violent, strong tornadoes.
Does your mobile home weigh 1 million pounds? I would bet you answered No. Well the EF5 tornado that hit El Reno in Oklahoma on 24 May threw ... read that again... threw a 1 million pound oil well drilling platform. The tornado that hit this young family was an EF4.
Improving the science of forecasting tornadoes cannot solve all of the issues. We may not ever be able to tell you with 100 percent certainty that your house will be hit by a violent tornado at 1:42 PM. But it is easy to say that your survival chances in any mobile home are significantly reduced. That is the one thing that is predictable.
http://edition.cnn.com/2012/03/05/us/indiana-tornado-girl/?hpt=us_c1
http://www.nssl.noaa.gov/users/brooks/public_html/essays/mobilehome.html
http://www.consumersunion.org/pdf/mh/Tornado.pdf
http://blog.al.com/spotnews/2011/12/alabama_tornadoes_mobile_home.html
The information is out there. And it needs to get to the right ears, or eyes.
Saturday, March 3, 2012
From near the hot seat
I was job shadowing. I went to bed last evening after having seen the model forecasts for the outbreak. They were convincing with the usual caveats. It was a fairly large Moderate Risk but I figured that when I awoke the High Risk would be issued and it was. The environment was favorable for a significant severe event.
The whole point of job shadowing (for me) is to prepare for the upcoming Hazardous Weather Testbed Experimental Forecasting Program. We will be replicating, to some extent, the forecast process for defining the threat of severe weather probabilistically. So it is very helpful to watch the forecast team execute on big days.
The forecast environment was great for supercells: high bulk wind shear, deep and low. Storm-relative helicity was, at least in the models, in a favorable range for tornadoes. Observed wind profiles from the radar (VAD) were favorable ... even at 9 AM, when the first watch was being considered for what looked like elevated, run-of-the-mill storms in an uncapped, warm-advection regime. And within 20 minutes the first TOR warning was issued, and within the hour the first tornado emergency went out for north of Huntsville, AL. That was the big hint of the day ... it's going to be big. And the atmosphere had no plans on being subtle.
The forecast process was quickly shuffled to the team, and products were being disseminated for nearly the entire shift. If it wasn't a Discussion going out, it was a Watch. It wasn't stressful, but I am pretty sure that it wasn't until an hour and a half before shift ended that everything that needed to be covered in a watch or MD was covered... at least for now. Even when shift ended, storms that had done nothing for the last hour (no warnings, and seemingly benign severe-wise) started producing tornadoes in southern AL.
We were busy doing surface analysis from AR to SC and MI to FL to keep track of the evolution of the warm sector, looking at any number of supercells producing the latest nasty looking circulation or debris ball. We watched a supercell pair wreak havoc for quite a while.
I think what was disorienting for me was that I never had a clear picture of what the radar would look like from a forecasting perspective. The scenario of supercells and line segments was evident but before I could form any conceptual model of the outbreak we had storms all over the place. It was the sheer size of the areas being considered and the ongoing nature of the threat that was distracting because I am not used to that. I am not a forecaster as much as a researcher. I simply lack that experience.
As soon as shift started, Corey said to me: "It's all about boxology," meaning that today was all about planning for non-stop watch issuance (this is something I am realizing now, even though I thought I understood it then), planning for when and how the outbreak would start and evolve, and planning the size of watches so that you capture what you need to capture and give yourself time to draw up the next one in between coordination calls. And when I think about it now, that process was smooth, exacting, purposeful, and skillful. They make it look easy, but it takes quite a lot of experience and skill to pull that off.
The Huntsville storms were of concern only 45 minutes into shift. FYI: Famous last words always start with "I think you have at least 2 hours to get setup". That was watch 1 and within the next hour the next watch was out. Then storms started going on the dryline in MO. So that meant it was time for watch 3 to be drawn up, coordinated and issued around 1600 UTC (all times approximate). By 1730 UTC the MO storms had supercell characteristics.
The next wave hit in LA by 1830 UTC, but these storms were not doing much; they were along a cloud-line feature that connected to the de-evolving threat from the Huntsville area to Greer earlier. We expected this to change as the storms in MS and AL initiated seemingly anywhere north of the cloud line. So we had nearly four areas going strong, and the next watch was out by 1930 UTC, with another shortly thereafter.
It was at this point that I stopped taking copious notes. There was simply a lot to look at and keep track of, plus I ate lunch for 25 minutes. It was simply keeping tabs on all the storms and their trends, and making sure every area that needed a watch had one. Monitoring the latest observational data, peering from afar at all the supercells, and in all cases hoping that the spawned tornadoes would stay out over unpopulated land.
Keep in mind that this is a narrow perspective on that day. Outlooks were issued at 1630 and 2000 UTC, along with more than 13 severe weather MDs, many more winter weather MDs (MI, I am jealous of your blizzard, but I was blissfully unaware), and countless media calls. Shift change came at 2200 UTC and those words were used again: "I think you are covered for a bit." And by 2220 UTC the next watch went out the door.
All told, I think 6 watches and another 14 severe weather MDs went out that shift. For the convective day it was another 8 watches and another 10 MDs through the next 2 shifts. And that does not count the 2 watches from the previous evening for a long-lived supercell that started in MO and went through to IL-IN.
It was a great experience on a day when a lot of folks had a negative experience. The stats are still coming in: roughly 98 tornado reports (not individual tornadoes), and 35 fatalities, maybe more. Whole towns are gone. Many more damaged. Damage assessment teams are out from multiple local NWS offices, multiple teams per office, trying to rate the damage and estimate the tornadoes' strength.
The whole point of job shadowing (for me) is to prepare for the upcoming Hazardous Weather Testbed Experimental Forecasting Program. We will be replicating, to some extent, the forecast process for defining the threat of severe weather probabilistically. So it is very helpful to watch the forecast team execute on big days.
The forecast environment was great for supercells. High bulk wind shear, deep and low. Storm relative helicity was, at least for the models, in a favorable range for tornadoes. Observed wind profiles from the radar (VAD) were favorable ... even at 9am when the first watch was being considered in what look liked elevated run-of-the-mill storms in an uncapped, warm advection regime. And within 20 minutes the first TOR warning was issued and within the hour the first tornado emergency went out for north of Huntsville, AL. That was the big hint of the day ... its going to be big. And the atmosphere had no plans on being subtle.
The forecast process was quickly shuffled to the team and products were being disseminated for nearly the entire shift. If it wasn't an Discussion going out it was a Watch. It wasn't stressful but I am pretty sure that it wasn't until an hour and a half before shift ended that everything that needed to be covered in a watch or MD was covered...at least for now. Even when shift ended storms that had done nothing for the last hour (no warnings and seemingly, benign severe wise, started producing tornadoes in southern AL.
We were busy doing surface analysis from AR to SC and MI to FL to keep track of the evolution of the warm sector, looking at any number of supercells producing the latest nasty looking circulation or debris ball. We watched a supercell pair wreak havoc for quite a while.
I think what was disorienting for me was that I never had a clear picture of what the radar would look like from a forecasting perspective. The scenario of supercells and line segments was evident but before I could form any conceptual model of the outbreak we had storms all over the place. It was the sheer size of the areas being considered and the ongoing nature of the threat that was distracting because I am not used to that. I am not a forecaster as much as a researcher. I simply lack that experience.
As soon as shift started Corey said to me: "Its all about boxology." meaning that today was all about planning for non-stop watch issuance (this is something I am realizing now even though I thought I understood then), planning for when and how the outbreak would start and evolve, respectively, and planning for the size of Watches so that you capture what you need to capture and give yourself time to draw up the next one in between coordination calls. And when I think about it now, that process was smooth, exacting, purposeful and skillful. They make it look easy but it takes quite a lot of experience and skill to pull that off.
The Huntsville storms were of concern only 45 minutes into shift. FYI: Famous last words always start with "I think you have at least 2 hours to get setup". That was watch 1 and within the next hour the next watch was out. Then storms started going on the dryline in MO. So that meant it was time for watch 3 to be drawn up, coordinated and issued around 1600 UTC (all times approximate). By 1730 UTC the MO storms had supercell characteristics.
The next wave hit in LA by 1830 UTC but these were not doing much but were along a cloud line feature that connected to the de-evolving threat from Huntsville area to Greer earlier. We expected this to change as the storms in MS and AL initiated seemingly anywhere north of the cloud line. So we had nearly 4 areas going strong and the next watch was out by 1930 UTC and another shortly thereafter.
It was at this point that I stopped taking copious notes. There was simply a lot to look at and keep track of, plus I ate lunch for 25 minutes. It was simply keeping tabs on all the storms, their trends, and making sure every area that needed a watch had one. Monitoring the latest observational data, peering from a far at all the supercells, and in all cases hoping that the spawned tornadoes would stay out over unpopulated land.
Keep in mind that this is a narrow perspective on that day. The outlooks were issued at 1630 and 2000 UTC, more than 13 severe weather MDs, many more winter weather MDs (I am jealous MI of your blizzard but I was blissfully unaware), and countless media calls. Shift change came about at 2200 UTC and those words were used again: "I think you are covered for a bit." And by 2220 UTC the next watch went out the door.
All told, I think 6 watches and another 14 severe weather MDs went out that shift. For the convective day it was another 8 watches and 10 MDs through the next 2 shifts. And that does not count the 2 watches from the previous evening for a long-lived supercell that started in MO and tracked through IL and IN.
It was a great experience on a day when a lot of folks had a terrible one. The stats are still coming in: roughly 98 tornado reports (not individual tornadoes), 35 fatalities, maybe more. Whole towns are gone; many more are damaged. Damage assessment teams are out from multiple local NWS offices, with multiple teams per office trying to rate the tornado damage and estimate the tornadoes' strength.
Tuesday, January 24, 2012
A note on "forcing"
It is easy to take a hard look at the maps and decide what constitutes strong "forcing". Usually we see a highly dynamical setup (e.g. a deepening surface low or an intensifying shortwave trough) and immediately point to strong "forcing" as a reason for an outbreak. So what did yesterday's outbreak look like in terms of forcing? We can be specific and look at two metrics of forcing: 700 hPa Q-vector divergence (shaded) and thermal absolute vorticity advection (contoured). The Q-vector divergence approximates the forcing function of the QG omega equation. Thermal vorticity advection is the Trenberth approximation for QG omega, though as noted by Sanders (1990) the Q-vector method may be more reliable in frontogenetical forcing. So here is the map of these two forcing functions on the NAM 12 km grid:
The same convection as in yesterday's post is used, comparing the 36 (upper left), 24 (upper right), and 12 hr (lower left) forecasts to the model analysis (lower right). Although this plot is for 700 hPa, the one at 500 hPa was similar. Clearly the front (i.e. the cold front aloft) was present, but the forcing is not that strong according to the model analysis, though the forecasts suggest much greater forcing than diagnosed through the data assimilation system. One could reach the same conclusion from yesterday's plot of the derived QG omega (via the harmonic method used in SUNYPak) at 500 hPa (shown again below). At most, both of these plots suggest that forcing in the region of the outbreak in AR, and later in MS and AL, was weak to moderate rather than strong. The forcing for ascent shown here is localized to the front aloft. The strong forcing that was indicated by the direct retrieval of 500 hPa QG omega was located around Kansas City, ahead of the upper low relative to its translation as a negatively tilted trough. Certainly we can say this was a strong upper low, but by no means was the outbreak area under strong dynamic forcing.
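For readers who want to poke at this diagnostic themselves, here is a minimal sketch of Q-vector components and their divergence using numpy finite differences. This is not the SUNYPak code; the formulation (Q = -(R/p) times the dot products of the geostrophic wind shear vectors with the temperature gradient) and the synthetic test fields are my own simplified assumptions.

```python
import numpy as np

def q_vector_divergence(ug, vg, T, dx, dy, p=70000.0, R=287.0):
    """Q-vector components and their divergence on a 2D (ny, nx) grid.

    ug, vg: geostrophic wind (m/s); T: temperature (K); dx, dy: grid
    spacing (m); p: pressure level (Pa). Q = -(R/p) * (dVg/dx . grad T,
    dVg/dy . grad T); divergence of Q > 0 implies QG forcing for descent,
    < 0 forcing for ascent.
    """
    dTdy, dTdx = np.gradient(T, dy, dx)
    dugdy, dugdx = np.gradient(ug, dy, dx)
    dvgdy, dvgdx = np.gradient(vg, dy, dx)
    q1 = -(R / p) * (dugdx * dTdx + dvgdx * dTdy)
    q2 = -(R / p) * (dugdy * dTdx + dvgdy * dTdy)
    divq = np.gradient(q1, dx, axis=1) + np.gradient(q2, dy, axis=0)
    return q1, q2, divq

# Synthetic fields on a 40 km grid: temperature decreasing northward and
# a meridional wind whose shear varies with y (contrived but smooth).
ny, nx = 50, 50
dx = dy = 40e3
y = np.linspace(0.0, (ny - 1) * dy, ny)[:, None] * np.ones((1, nx))
T = 280.0 - 2e-5 * y          # K; colder to the north
ug = np.zeros((ny, nx))
vg = 1e-11 * y**2             # m/s
q1, q2, divq = q_vector_divergence(ug, vg, T, dx, dy)
```

On real NAM grids you would also want map-factor corrections and smoothing before differencing; this toy skips both.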
Monday, January 23, 2012
23 Jan 2012 outbreak: Synoptic evolution
The first moderate risk with tornado potential was forecast last night for portions of AR, MS, and TN. A cold front aloft (CFA) associated with a shortwave trough at 500 hPa came ripping across OK during the day and became negatively tilted across AR by 00 UTC. Cold advection at 700 hPa was well ahead of the low-level front. QG diagnostics at 500 hPa from the 12 hr NAM forecast valid at 00 UTC show the relatively weak cross-front contribution to QG omega confined to AR, while farther north the along-front component was dominant (courtesy of the legacy SUNYPak from UAlbany).
Soundings taken at LZK at 21 and 00 UTC illustrate minor warming at 700 hPa (+0.8C) but stronger warming at 677 hPa (+2.6C), indicative of a strengthening inversion. Soundings taken at JAN at 00 and 03 UTC showed warming around 600 hPa (+1.3C). Looping the water vapor imagery, it appears the warming was in advance of the CFA, but this effective cap was insufficient to limit convection. All of the available soundings indicate that parcel paths had zero negative area, yielding uncapped near-surface parcels in the warm sector. With no cap, storm coverage was large and relatively uninhibited. With little to focus convection, multiple messy lines and clusters formed.
SRH was extreme, approaching 500 m2 s-2 at JAN at 03 UTC. The first tornado warning south of JAN came out around 0530 UTC. It appears that this line of storms formed along an effective dryline (mostly a moisture gradient).
12-14 hr HRRR forecasts valid from 00-02 UTC showed development very similar to observations, albeit slightly too far east and lacking the secondary, more westward line of convection. Given the large 0-1 km SRH from observed soundings and the lateness of the model convection, it is no surprise that the HRRR failed to show any (and thus no significant) updraft helicity associated with the storms.
This point alone should highlight why it is so tough to forecast severe weather with models that may be only slightly late in initiation and slow to develop. Being 1-2 hours late to initiate and 1-2 hours too slow to become significant (in a relative sense) means the models can be as much as 4 hours behind in convective evolution, or even further behind if the environment is evolving and the convection doesn't follow the same evolution as observed.
As for the NAM, let's compare the 36, 24, and 12 hour forecasts of the CFA:
The 36 hr (upper left), 24 hr (upper right), and 12 hr (lower left) forecasts are compared to the analysis (lower right) for the frontal positions (magnitude of the potential temperature gradient) at 700 hPa (shaded, x10-5 K km-1 per 3 hrs) and at 500 and 850 hPa (6 x10-5 K km-1 per 3 hrs, black and blue contours respectively). The difference between the 36 and 24 hr forecasts is the difference in frontal position at 850 hPa feeding into the cold front aloft. 700 hPa frontal positions were surprisingly stable, albeit with fluctuations in magnitude. So at least in theory this event had some measure of predictability associated specifically with the synoptic precursors, but the dynamical evolution of those precursors had little predictability beyond 36 (or maybe 30?) hours prior to convective initiation.
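The frontal diagnostic used in those panels, the magnitude of the horizontal potential temperature gradient, is easy to compute from any gridded field. A minimal numpy sketch (the baroclinic-zone field below is synthetic, and I am not reproducing the figure's "per 3 hrs" tendency, just the instantaneous gradient):

```python
import numpy as np

def theta_gradient_magnitude(theta, dx, dy):
    """|grad(theta)| on a 2D (ny, nx) grid.

    theta in K, dx/dy in m; result in K/m (multiply by 1e5 to get the
    x10-5 K km-1 style scaling used on the plots).
    """
    dthdy, dthdx = np.gradient(theta, dy, dx)
    return np.hypot(dthdx, dthdy)

# Synthetic baroclinic zone on a 20 km grid: theta drops 20 K across a
# tanh-shaped frontal zone about 400 km wide.
ny, nx = 40, 60
dx = dy = 20e3
y = np.linspace(0.0, (ny - 1) * dy, ny)[:, None]
theta = 300.0 - 10.0 * np.tanh((y - y.mean()) / 200e3) * np.ones((1, nx))
grad = theta_gradient_magnitude(theta, dx, dy)
# The gradient maximizes along the center of the frontal zone.
```

Tracking where this field's maxima sit from one forecast lead time to the next is essentially what the panel comparison above does by eye.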
Friday, January 13, 2012
Tornado days
Revisiting the tornado data set (1950-2010), I summed the tornadoes per day to look at the distributions of reports per day, daily path length, daily fatalities, and daily injuries. The figure below shows these variables with respect to the daily maximum tornado magnitude. Given the magnitude of this year's April, the April climatology is highlighted in red.
10 of the 43 E-F5 days (23%) occur in April; contrast that with 71 of the 317 E-F4 days (22%). The April E-F5 fatalities have a median of 22, while injuries have a median of 290. This all occurs with a median of 26 tornadoes and a minimum of 11. The 3 April 1974 outbreak is the largest outlier in the E-F5 category with 148 tornadoes, 2553 miles of path length, 368 fatalities, and 6149 injuries. This year's April had roughly 200 tornadoes with an estimated path length of 1950 miles. Final official numbers probably won't be available until March; I will update the graphics then.
March has 7%, May 35%, and June 20% of the E-F5 tornado days to round out the monthly distribution. The E-F4 tornado days are distributed as follows: March has 10%, May 23%, and June 16%. These 4 months comprise the most deadly and numerous tornado days.
Always know your data: Note the outlier in the E-F0 category for Path Length. That is an error in the database associated with one tornado on 14 AUG 2006 in New Mexico. Apparently these types of errors appear now and again and are hard to officially remove.
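For anyone who wants to reproduce this kind of per-day aggregation, here is a minimal pandas sketch. The column names ('date', 'mag', 'len', 'fat', 'inj') loosely follow the SPC database convention but should be treated as assumptions, and the data below are toy rows, not the real records.

```python
import pandas as pd

def daily_tornado_stats(df):
    """Collapse per-tornado records into per-day stats keyed by the
    day's maximum (E)F rating.

    Assumed columns: 'date', 'mag' (rating), 'len' (path miles),
    'fat' (fatalities), 'inj' (injuries).
    """
    return df.groupby("date").agg(
        max_mag=("mag", "max"),
        n_tors=("mag", "size"),
        path_len=("len", "sum"),
        fatalities=("fat", "sum"),
        injuries=("inj", "sum"),
    )

# Toy example; the last row mimics a bad path-length entry like the
# 14 AUG 2006 New Mexico error noted above.
df = pd.DataFrame({
    "date": ["1974-04-03"] * 3 + ["2006-08-14"],
    "mag": [5, 3, 2, 0],
    "len": [60.0, 10.0, 2.0, 1e5],
    "fat": [30, 1, 0, 0],
    "inj": [400, 10, 0, 0],
})
daily = daily_tornado_stats(df)
```

From `daily` you can then group by `max_mag` (and month) to get the medians quoted above, and a quick look at the extremes of `path_len` is exactly how an outlier like the bad E-F0 entry shows up.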
Sunday, January 8, 2012
Tornado reports year in review
I queried the storm reports page from SPC, collecting the tornado reports for the last 7 years (2005-2011). I wanted to see what kind of year this was. I started by looking at the (convective) days over those 7 years (2557 days) on which tornado reports were received (1186, or 46.4%).
When 30 or more tornado reports were received daily (sample size of 68), the yearly distribution was:
2011: 13
2010: 11
2009: 7
2008: 19
2007: 5
2006: 4
2005: 9
The top 6 report days over this period were:
27 APR 2011: 292
15 APR 2011: 146
12 MAR 2006: 140
16 APR 2011: 139
5 FEB 2008: 131
25 MAY 2011: 127
So 2011 stands out both in terms of the maximum tornado report day on 27 APR (twice the reports of the next closest day) and in claiming 4 of the top 6 report days. Quite the year for regional outbreaks, but it does not beat 2008 for the number of 30+ report days.
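The tally above is a simple groupby-and-threshold exercise. A quick pandas sketch of how it might be done (hypothetical column name and toy report counts, not the actual SPC report files):

```python
import pandas as pd

def big_report_days(reports, threshold=30):
    """Count tornado reports per convective day, keep the days at or
    above the threshold, and tally those days per year.

    `reports` has one row per tornado report with a 'date' column.
    """
    per_day = reports.groupby("date").size()
    big = per_day[per_day >= threshold]
    per_year = pd.Series(pd.to_datetime(big.index).year).value_counts().sort_index()
    return big, per_year

# Toy report log: two 30+ report days and one quiet day.
reports = pd.DataFrame({
    "date": ["2011-04-27"] * 35 + ["2008-02-05"] * 31 + ["2009-06-01"] * 10
})
big, per_year = big_report_days(reports)
```

With the real report files, `big.sort_values(ascending=False).head(6)` would give the top-6 list and `per_year` the yearly distribution shown above.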