
Objective Evaluation of Aviation Related Variables during 2010 Hazardous Weather Testbed (HWT) Spring Experiment
Tara Jensen 1*, Steve Weiss 2, Jason J. Levit 3, Michelle Harrold 1, Lisa Coco 1, Patrick Marsh 4, Adam Clark 4, Fanyou Kong 5, Kevin Thomas 5, Ming Xue 5, Jack Kain 4, Russell Schneider 2, Mike Coniglio 4, and Barbara Brown 1
1 NCAR/Research Applications Laboratory (RAL), Boulder, Colorado
2 NOAA/Storm Prediction Center (SPC), Norman, Oklahoma
3 NOAA/Aviation Weather Center (AWC), Kansas City, Missouri
4 NOAA/National Severe Storms Laboratory (NSSL), Norman, Oklahoma
5 Center for Analysis and Prediction of Storms (CAPS), University of Oklahoma, Norman, Oklahoma

NOAA Testbeds
Funded by: NOAA, USWRP, AFWA, NCAR
Bridge between research and operations: community code support, testing and evaluation, verification research
Distributed facility with 23 staff members at NOAA/ESRL/GSD and NCAR/RAL/JNT, and 2 staff at NOAA/NCEP

HWT-DTC Collaboration Objectives
Supplement HWT Spring Experiment subjective assessments with objective evaluation of experimental forecasts contributed to the Spring Experiment
Expose forecasters and researchers to both traditional and new approaches for verifying forecasts
Further the DTC mission of testing and evaluation of cutting-edge NWP for R2O

2010 Models
CAPS Storm-Scale Ensemble – 4 km (all 26 members plus products)
CAPS deterministic – 1 km
SREF ensemble products
NAM – 12 km
HRRR – 3 km
NSSL – 4 km
MMM – 3 km
NAM high-res window – 4 km
Domain: 2/3 of CONUS, plus a VORTEX2 daily region of interest (moved daily)
Observations: NSSL Q2 data

General Approach for Objective Evaluation of Contributed Research Models: model forecasts, observations, and region definitions feed the DTC Model Evaluation Tools (MET), which produces both traditional statistics output and spatial (object-oriented) statistics output for web display.

Statistics and Attributes calculated using MET
Traditional (Categorical): Gilbert Skill Score (GSS, aka ETS); Critical Success Index (CSI, aka Threat Score); Frequency Bias; Probability of Detection (POD); False Alarm Ratio (FAR)
Object-Oriented (from MODE), for matched forecast and observed object pairs: Centroid Distance; Area Ratio; Angle Difference; Intensity Percentiles; Intersection Area; Boundary Distance; etc.
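The traditional categorical scores above all derive from a 2x2 contingency table of hits, misses, false alarms, and correct negatives. As a hedged illustration (a sketch of the standard textbook formulas, not MET code), they can be computed as:

```python
# Illustrative sketch of the traditional categorical statistics listed above,
# computed from a 2x2 contingency table. Formulas follow standard verification
# definitions; the example counts are made up.

def categorical_stats(hits, misses, false_alarms, correct_negatives):
    total = hits + misses + false_alarms + correct_negatives
    pod = hits / (hits + misses)                      # Probability of Detection
    far = false_alarms / (hits + false_alarms)        # False Alarm Ratio
    csi = hits / (hits + misses + false_alarms)       # Critical Success Index (Threat Score)
    fbias = (hits + false_alarms) / (hits + misses)   # Frequency Bias
    # Gilbert Skill Score (ETS): CSI adjusted for hits expected by chance
    hits_random = (hits + misses) * (hits + false_alarms) / total
    gss = (hits - hits_random) / (hits + misses + false_alarms - hits_random)
    return {"POD": pod, "FAR": far, "CSI": csi, "FBIAS": fbias, "GSS": gss}

stats = categorical_stats(hits=50, misses=25, false_alarms=30, correct_negatives=895)
```

A frequency bias above 1 indicates the forecast predicts the event over a larger area (or more often) than observed, which is how the RETOP over-prediction discussed later shows up in the traditional scores.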

HWT 2010 Spring Experiment Desks
Severe – Probability of Severe: winds, hail, tornadoes
QPF – Probability of Extreme: 0.5 inches in 6 hrs, 1.0 inches in 6 hrs; max accumulation; APCP and probability at 0.5, 1.0, 2.0 inches in 3 h and 6 h
Aviation – Probability of Convection: echoes > 40 dBZ; echo top height > 25 kft, > 35 kft; REFC at 20, 25, 30, 35, 40, 50, 60 dBZ; RETOP at 25, 30, 35, 40, 45 kft
Evaluation for all three desks: traditional and spatial
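Each categorical threshold above (e.g., REFC >= 40 dBZ) turns the continuous forecast and observed fields into yes/no events at every grid point, which is what populates the contingency table. A minimal sketch of that step, using toy values (not MET code):

```python
# Illustrative sketch: apply a categorical threshold to paired forecast and
# observed grid values and accumulate a 2x2 contingency table, as done for
# the REFC and RETOP thresholds listed above. Grids are flattened to 1-D lists.

def contingency_counts(fcst, obs, threshold):
    """Count hits, misses, false alarms, correct negatives for value >= threshold."""
    hits = misses = false_alarms = correct_negatives = 0
    for f, o in zip(fcst, obs):
        f_yes, o_yes = f >= threshold, o >= threshold
        if f_yes and o_yes:
            hits += 1
        elif o_yes:
            misses += 1
        elif f_yes:
            false_alarms += 1
        else:
            correct_negatives += 1
    return hits, misses, false_alarms, correct_negatives

# Toy flattened reflectivity grids (dBZ), thresholded at 40 dBZ
fcst = [45, 38, 52, 10, 41, 22, 47, 5]
obs  = [42, 41, 20, 12, 43, 18, 39, 8]
counts = contingency_counts(fcst, obs, 40)  # -> (2, 1, 2, 3)
```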

Preliminary Results

Caveats
25 samples of 00Z runs – not quite enough to assign statistical significance
Aggregations represent the median of the 25 samples (17 May – 18 Jun 2010)
Generated using an alpha version of the METviewer database and display system

5/14/2010 Object Definition

Use of Attributes of Objects defined by MODE
Centroid Distance: provides a quantitative sense of the spatial displacement of the cloud complex (small is good)
Axis Angle: provides an objective measure of linear orientation (small is good)
Area Ratio = Fcst Area / Obs Area: provides an objective measure of whether there is an over- or under-prediction of the areal extent of cloud (close to 1 is good)
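For objects represented as sets of grid points, two of these attributes are straightforward to sketch. This is an illustrative simplification (MODE's actual implementation works on resolved objects in projected grid coordinates and differs in detail):

```python
# Hedged sketch of two MODE-style object attributes for objects represented
# as sets of (row, col) grid points; toy objects, illustrative only.
import math

def centroid(obj):
    n = len(obj)
    return (sum(r for r, c in obj) / n, sum(c for r, c in obj) / n)

def centroid_distance(fcst_obj, obs_obj):
    (r1, c1), (r2, c2) = centroid(fcst_obj), centroid(obs_obj)
    return math.hypot(r1 - r2, c1 - c2)   # in grid units; small is good

def area_ratio(fcst_obj, obs_obj):
    return len(fcst_obj) / len(obs_obj)   # close to 1 is good

fcst_obj = {(0, 0), (0, 1), (1, 0), (1, 1)}                  # 4 grid points
obs_obj  = {(0, 3), (0, 4), (1, 3), (1, 4), (2, 3), (2, 4)}  # 6 grid points
```

Here the forecast object is smaller than observed (area ratio about 0.67) and displaced by about 3 grid lengths.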

Use of Attributes of Objects defined by MODE (continued)
Symmetric Difference (non-intersecting area): may be a good summary statistic for how well the forecast and observed objects match (small is good)
P50/P90 Intensity: provides objective measures of the median (50th percentile) and near-peak (90th percentile) intensities found in the objects (ratio close to 1 is good); e.g., Fcst P50 = 29.0, P90 = 33.4 vs. Obs P50 = 26.6, P90 = 31.5
Total Interest: summary statistic derived from a fuzzy-logic engine with user-defined interest maps for all these attributes plus some others (close to 1 is good)
Example: Total Interest = 0.75
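The fuzzy-logic idea behind total interest can be sketched in a few lines: each attribute is mapped onto [0, 1] through an interest map, and the interest values are combined as a weighted average. The piecewise-linear maps and weights below are made up for illustration; MODE's configurable defaults differ:

```python
# Illustrative sketch of a fuzzy-logic total interest computation. The interest
# maps and weights are hypothetical, not MODE's defaults.

def interest(value, good, bad):
    """Piecewise-linear interest map: 1 at/below `good`, 0 at/beyond `bad`."""
    if value <= good:
        return 1.0
    if value >= bad:
        return 0.0
    return (bad - value) / (bad - good)

def total_interest(attrs, maps, weights):
    """Weighted average of per-attribute interest values."""
    num = sum(weights[k] * interest(attrs[k], *maps[k]) for k in attrs)
    return num / sum(weights[k] for k in attrs)

attrs   = {"centroid_dist": 40.0, "angle_diff": 6.0}          # example attribute values
maps    = {"centroid_dist": (0.0, 200.0), "angle_diff": (0.0, 30.0)}
weights = {"centroid_dist": 2.0, "angle_diff": 1.0}
ti = total_interest(attrs, maps, weights)   # closer to 1 means a better match
```

Object pairs whose total interest exceeds a configurable threshold are declared matches; pairs below it are left unmatched, as in the radar echo top examples that follow.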

Example: Radar Echo Tops – 1 hr forecast valid 9 June 2010, 01 UTC
Panels: NSSL Q2 Observed RETOP, HRRR, CAPS Mean, CAPS 1 km; observed objects shown with Matched Object 1, Matched Object 2, and an Unmatched Object

Example: Radar Echo Tops – 1 hr forecast valid 9 June 2010, 01 UTC
Table of MODE attributes (centroid distance in km, angle difference in deg, area ratio, symmetric difference, P50 ratio, total interest) for HRRR, CAPS Mean, and CAPS 1 km versus NSSL Q2 observed RETOP

Example: Radar Echo Tops – the ensemble mean is not always so useful
Panels: RETOP Observed, CAPS Mean, and individual microphysics members (Thompson, WSM6, WDM6, Morrison)
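The panels above suggest why: a simple pointwise mean of members that place similar objects in different locations smears the signal over a larger, weaker area. A toy 1-D sketch of that effect (illustrative values only, not the CAPS post-processing):

```python
# Toy 1-D illustration of how a pointwise ensemble mean can inflate areal
# coverage: four members each place a 3-point-wide "echo top" object (35 kft)
# at slightly shifted locations.

def mean_field(members):
    """Pointwise mean across ensemble members."""
    return [sum(vals) / len(vals) for vals in zip(*members)]

def coverage(field, threshold):
    """Number of points at or above the threshold."""
    return sum(1 for v in field if v >= threshold)

m1 = [35, 35, 35, 0, 0, 0, 0, 0]
m2 = [0, 35, 35, 35, 0, 0, 0, 0]
m3 = [0, 0, 35, 35, 35, 0, 0, 0]
m4 = [0, 0, 0, 35, 35, 35, 0, 0]
mean = mean_field([m1, m2, m3, m4])
# Each member covers 3 points; at a low threshold the mean covers 6 points,
# twice the areal extent of any single member, at reduced amplitude.
low_coverage = coverage(mean, 5)   # 6 points vs. 3 per member
```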

Traditional Stats – GSS (aka ETS)
Models shown: CAPS Ensemble Mean, CAPS 1 km, CAPS SSEF ARW-CN (control with radar assimilation), CAPS SSEF ARW-C0 (control without radar assimilation), 3 km HRRR, 12 km NAM

Traditional Stats – Frequency Bias
Models shown: CAPS Ensemble Mean, CAPS 1 km, CAPS SSEF ARW-CN (control with radar assimilation), CAPS SSEF ARW-C0 (control without radar assimilation), 3 km HRRR, 12 km NAM

MODE Attributes – Area Ratio

MODE Attributes – Symmetric Diff

Summary
30 models and 4 ensemble products were evaluated during HWT 2010
Most models had reflectivity as an output variable; 3 models had Radar Echo Top (HRRR, CAPS Ensemble, CAPS 1 km)
All models appear to over-predict RETOP areal coverage, by at least a factor of 2-5 based on frequency bias and a factor of 5-10 based on MODE area ratio
Based on some traditional and object-oriented metrics, HRRR appears to have a slight edge over the CAPS simulations for RETOP during the 2010 Spring Experiment, but the differences are not statistically significant
The ensemble post-processing technique (seen in the Ensemble Mean) seems to inflate the over-prediction of the areal extent of the cloud shield to a non-useful level
Additional evaluation of the probability of exceeding 40 dBZ is planned for later this winter

Thank You … Questions?
Support for the Developmental Testbed Center (DTC) is provided by NOAA, AFWA, NCAR, and NSF
The DTC would like to thank all of the AWC participants who helped improve our evaluation through their comments and suggestions.