Verification of a Blowing Snow Model and Applications for Blizzard Forecasting Jeff Makowski, Thomas Grafenauer, Dave Kellenbenz, Greg Gust National Weather Service – Grand Forks

Outline
- Marginal vs. Real Blizzards
- Canadian Blowing Snow Model
  - What is it?
  - Is it useful?

Marginal vs. Real Blizzards
- Real blizzard = widespread zero visibility for a long enough duration
  - Usually with several inches of falling snow
  - Shuts down most, if not all, activities, commerce, and transportation
- Marginal blizzard = areas of zero visibility for a long enough duration
  - Usually with very little falling snow
  - Rural vs. urban areas: cities may not be affected
- Challenges:
  - Forecasting is easier when heavy snow is predicted, but in events with little to no falling snow it is difficult to distinguish a real blizzard from a marginal one
  - In those same low-snowfall events, it is also difficult to distinguish a marginal blizzard from a winter weather advisory for blowing snow
- Is it possible to forecast these differences?
- Is there accurate guidance available that would assist in the forecast process and help collaboration?

Help from Environment Canada
- Baggaley, D. G., and J. M. Hanesiak, 2005: An Empirical Blowing Snow Forecast Technique for the Canadian Arctic and Prairie Provinces. Wea. Forecasting, 20.

What is the Canadian (Baggaley) Blowing Snow Model?
- Based on a robust set of observations from Canadian Prairie stations
- Simplifies the complexities of forecasting blowing snow
- Inputs: SnowRate, Temperature, WindSpeed, Snow Age
- Outputs: Probability, low-end wind threshold (patchy), high-end wind threshold (definite)
- Probability = probability that the visibility due to blowing snow will be 1/2 sm or less
- Needs a snow density model (how much snow is available to blow around?) - FUTURE WORK
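The input/output structure above can be sketched as a small function. This is a minimal illustration only: the wind thresholds and the ramp between "patchy" and "definite" are hypothetical placeholders, not the published Baggaley and Hanesiak (2005) values, which come from empirical charts.

```python
# Hypothetical sketch of a Baggaley-style blowing snow lookup.
# The threshold values and adjustments below are ILLUSTRATIVE,
# not the published Baggaley & Hanesiak (2005) coefficients.

def blowing_snow_probability(wind_kt, temp_c, snow_age_hr):
    """Rough probability that blowing snow reduces visibility to
    1/2 sm or less, plus the low (patchy) and high (definite)
    wind thresholds in knots."""
    # Older and warmer snow is harder to loft, so raise the thresholds.
    age_adjust = min(snow_age_hr / 12.0, 4.0)        # up to +4 kt
    temp_adjust = max(0.0, (temp_c + 10.0) / 5.0)    # warmer than -10 C
    low_kt = 20.0 + age_adjust + temp_adjust         # "patchy" threshold
    high_kt = low_kt + 10.0                          # "definite" threshold
    if wind_kt <= low_kt:
        return 0.0, low_kt, high_kt
    if wind_kt >= high_kt:
        return 1.0, low_kt, high_kt
    # Linear ramp between the patchy and definite thresholds.
    prob = (wind_kt - low_kt) / (high_kt - low_kt)
    return round(prob, 2), low_kt, high_kt
```

In the real technique the probability is read from charts stratified by snow age (shown on the following slides) rather than computed from a formula.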

Very Brief Literature Review
- Baggaley and Hanesiak created a series of charts summarizing the proportion of times that combinations of wind speed, temperature, and snow age produced blowing snow visibility reductions below a given threshold.
- This method will not always give a deterministic answer, but rather a statistical likelihood.

SnowAge 1-2 Hours

SnowAge 3-5 Hours

SnowAge 6-11 Hours

SnowAge Hours

SnowAge Hours

SnowAge 48+ Hours

Tips - From Dave Baggaley
- We generally want to see some big numbers, for several hours.
- Probabilities around 50% = "blowing snow at times," or perhaps just limited to vulnerable areas.
- Probabilities of 80%+ = a straight blowing snow forecast, with the understanding that there will be variability through the period.
- Probabilities of 100% = unbroken visibilities of less than 1/4 mile.

Can this Model Provide Useful Guidance?
- If yes, forecasting the difference between "real" and marginal blizzards may be possible (as well as the difference between marginal blizzards and advisories).

Research Results (so far…)
- Looked at each of the 10 verified winter season blizzards within the FGF CWA
- For each blizzard:
  - Selected the most severe hour
  - Determined the Blowing Snow Model output for selected sites (KDVL, KJMS, KGFK, KFAR, KHCO, KBJI, KPKD)
    - Observed data
    - Model data (NAM, GFS, ECMWF, MOSGuide, SREF)
  - Attempted to define a marginal blizzard
  - Compared Blowing Snow Model results
  - Computed MOS wind speed biases at each forecast hour

Defining a Marginal Blizzard
- The difference between a marginal blizzard and a "real" blizzard depends on two factors:
  - Coverage of low visibility
  - Duration of that low visibility
- Downloaded ASOS/AWOS observations from each blizzard event

Defining a Marginal Event
- Developed Python scripts to read the observations and calculate coverage and duration values at different visibility thresholds (2 sm, 1 sm, 3/4 sm, 1/2 sm, 1/4 sm)
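A minimal sketch of that coverage/duration calculation, assuming hourly visibility values (in statute miles) keyed by station. The data layout and the coverage/duration definitions here are assumptions for illustration, not the authors' actual scripts.

```python
# Sketch: coverage and duration of low visibility across stations.
# obs layout ({station: [hourly visibility in sm, ...]}) is an assumption.

def coverage_and_duration(obs, threshold_sm=0.5):
    """Return (fraction of stations that ever reach the threshold,
    longest consecutive-hour run at or below the threshold)."""
    hit_stations = 0
    longest_run = 0
    for vis_series in obs.values():
        run = best = 0
        for vis in vis_series:
            if vis <= threshold_sm:
                run += 1
                best = max(best, run)
            else:
                run = 0
        if best > 0:
            hit_stations += 1
        longest_run = max(longest_run, best)
    coverage = hit_stations / len(obs) if obs else 0.0
    return coverage, longest_run
```

Running this at each of the thresholds listed above (2, 1, 3/4, 1/2, 1/4 sm) yields the coverage and duration values used to classify events.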

(Charts: Snow, Small Area)

Classify Blizzards by Coverage and Duration - Related to Impacts
- Real blizzard with snow: March 31st
- Real blizzard, no snow: Jan. 26th
- Real/marginal blizzard: Dec. 28th and Jan. 16th
- Marginal blizzard: Jan. 22nd and Feb. 13th
- Marginal/no blizzard: Jan. 3rd, Feb. 26th, and March 5th
- Not used: March 21st (very small area)

Some Preliminary Results
Blowing Snow Model probabilities (based on observed data):
- Jan. 26th and March 31st: probability = 92%
- Dec. 28th and Jan. 16th: probability = 69%
- Jan. 22nd and Feb. 13th: probability = 55%
- Jan. 3rd, Feb. 26th, and March 5th: probability = 29%
Note: if 6-hr snowfall was less than 1 inch, the NoSnow probability was used.

Observed Coverage vs. BLSN Model Probabilities

Model Biases
Blowing Snow Model - model biases:
- Input model temperature, wind, and SnowAmt into the Blowing Snow Model, then compared that value to the observed Blowing Snow Model value (with falling snow).
- Used a recent model run:
  - NAM12
  - GFS40
  - ECMWF
  - MOSGuide
  - SREF
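The MOS wind speed bias mentioned earlier (bias per forecast hour) can be sketched as a simple forecast-minus-observed average. The data layout is an assumption; the slides do not show the actual computation.

```python
# Sketch: mean wind speed bias (forecast minus observed) per forecast
# hour, averaged over matched forecast/observation pairs. A positive
# bias means the guidance overforecast the wind.
from statistics import mean

def wind_bias_by_hour(pairs_by_hour):
    """pairs_by_hour: {forecast_hour: [(forecast_kt, observed_kt), ...]}
    Returns {forecast_hour: mean bias in kt, rounded to 2 decimals}."""
    return {hr: round(mean(f - o for f, o in pairs), 2)
            for hr, pairs in pairs_by_hour.items()}
```

Biases computed this way across events could then be used to adjust the wind input to the Blowing Snow Model.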

MOS Guidance Biases – Winter 2013/14 Blizzard Events

Takeaways - Conclusion
- The Canadian Blowing Snow Model shows usefulness:
  - It is an indicator of the coverage of low visibilities.
  - Its output could provide better shift-to-shift and office-to-office consistency:
    - <50% probability = lower impact marginal blizzard or advisory
    - 50% to 70% probability = lower impact marginal blizzard
    - 70% to 90% probability = high impact marginal blizzard
    - >90% probability = "real" blizzard
  - All of this information could be used in a program that gives a probability adjusted for known biases (especially wind).
- Potential for a MOS guidance bias Smarttool (used during winter cold air advection events)?
- Need to look at more cases…
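The probability-to-impact mapping above can be expressed as a small helper. The category breakpoints come straight from the slide; how to handle a probability of exactly 50%, 70%, or 90% is an assumption here.

```python
# The slide's probability-to-impact mapping as a helper function.
# Boundary handling at exactly 50/70/90% is an assumption.

def blizzard_category(prob_pct):
    """Map a Blowing Snow Model probability (percent) to the
    impact categories proposed on the Takeaways slide."""
    if prob_pct > 90:
        return "real blizzard"
    if prob_pct > 70:
        return "high impact marginal blizzard"
    if prob_pct >= 50:
        return "lower impact marginal blizzard"
    return "lower impact marginal blizzard or advisory"
```

Applied to the preliminary results above, the 92% events map to "real blizzard" and the 29% events map to the advisory-level category, matching the event classifications.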