NWP Verification with Shape-Matching Algorithms: Hydrologic Applications and Extension to Ensembles
Barbara Brown 1, Edward Tollerud 2, Tara Jensen 1, and Wallace Clark 2
1 NCAR, USA; 2 NOAA Earth System Research Laboratory, USA
ECAM/EMS, September 2011

DTC and Testbed Collaborations
Developmental Testbed Center (DTC)
- Mission: provide a bridge between the research and operational communities to improve mesoscale NWP
- Activities: community support (e.g., access to operational models); model testing and evaluation
Goals of interactions with other "testbeds":
- Examine the latest capabilities of high-resolution models
- Evaluate impacts of physics options
- New approaches for presenting and evaluating forecasts

Testbed collaborations
Hydrometeorological Testbed (HMT)
- Evaluation of regional ensemble forecasts (including operational models) and global forecasts in the western U.S. (California)
- Winter precipitation; atmospheric rivers
Hazardous Weather Testbed (HWT)
- Evaluation of storm-scale ensemble forecasts
- Late spring precipitation, reflectivity, cloud-top height
- Comparison of model capabilities for high-impact weather forecasts

Testbed Forecast Verification
Observations
- HMT: gauges and Stage IV gauge analysis
- HWT: NMQ 1-km radar and gauge analysis; radar
Traditional metrics
- RMSE, bias, ME, POD, FAR, etc.
- Brier score, reliability, ROC, etc.
Spatial approaches
- Needed for evaluating ensemble forecasts for the same reasons as for non-probabilistic forecasts (the "double penalty", the impact of small errors in timing and location, etc.)
- Neighborhood methods
- Method for Object-based Diagnostic Evaluation (MODE)
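
As a minimal illustration of the traditional categorical metrics listed above, the sketch below computes POD, FAR, and frequency bias from a 2x2 contingency table, plus RMSE on the raw fields. It is a hypothetical stand-in, not MET's implementation; the function name and the 6.35 mm (0.25 in) threshold are assumptions.

```python
import numpy as np

def categorical_scores(fcst, obs, threshold=6.35):
    """Traditional scores for two gridded fields on a common grid.
    A sketch only: no guards against empty categories."""
    f, o = fcst >= threshold, obs >= threshold
    hits = np.sum(f & o)
    misses = np.sum(~f & o)
    false_alarms = np.sum(f & ~o)
    return {
        "POD": hits / (hits + misses),                    # fraction of observed events that were forecast
        "FAR": false_alarms / (hits + false_alarms),      # fraction of forecast events that did not occur
        "BIAS": (hits + false_alarms) / (hits + misses),  # frequency bias
        "RMSE": np.sqrt(np.mean((fcst - obs) ** 2)),      # continuous error on the raw fields
    }
```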

New Spatial Verification Approaches
- Neighborhood: successive smoothing of forecasts/obs
- Object- and feature-based: evaluate attributes of identifiable features
- Scale separation: measure scale-dependent error
- Field deformation: measure distortion and displacement (phase error) for the whole field
Web site:
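
The neighborhood idea can be sketched directly: smooth the binary forecast and observed event fields over successively larger windows and compare the resulting fractional coverages at each scale. A minimal sketch assuming numpy/scipy; the widths are illustrative:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def neighborhood_fractions(event_field, widths=(1, 3, 9, 27)):
    """Fractional event coverage after smoothing over square
    neighborhoods of the given widths (in grid points)."""
    field = event_field.astype(float)
    return {w: uniform_filter(field, size=w, mode="constant") for w in widths}

# Comparing neighborhood_fractions(fcst >= thresh) against
# neighborhood_fractions(obs >= thresh) at each width shows the scale
# at which the forecast begins to agree with the observations.
```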

HMT: Standard Scores for Ensemble Inter-model QPF Comparisons
Example: RMSE results for December 2010
- Dashed: HMT (WRF) ensemble members
- Solid: deterministic members
- Black: ensemble mean

HMT Application: MODE
19 December 2010, 72-h forecast; threshold: precip > 0.25 in
[Figure: MODE objects for the observations (OBS) and the ensemble mean (Ens Mean)]

MODE Application to atmospheric rivers
- QPF vs. IWV and vapor transport
- Capture coastal strike timing and location
- Large impacts on precipitation along the California coast and coastal mountains => major flooding impacts

Atmospheric rivers
[Figure: GFS precipitable water vs. SSM/I integrated water vapor at 72-, 48-, and 24-h lead times; object areas 312, 369, 306, and 127]

HWT Example: Attribute Diagnostics for NWP
Neighborhood and object-based methods, REFC > 30 dBZ, at 20-, 22-, and 24-h lead times:
- 20-h: FSS = 0.14; Matched Interest: 0; Area Ratio, Centroid Distance, P90 Intensity Ratio: n/a
- 22-h: FSS = 0.30; Matched Interest: 0.89; Area Ratio: 0.18; Centroid Distance: 112 km; P90 Intensity Ratio: 1.08
- 24-h: FSS = 0.64; Matched Interest: 0.96; Area Ratio: 0.53; Centroid Distance: 92 km; P90 Intensity Ratio: 1.04
Neighborhood methods provide a sense of how the model performs at different scales through the Fractions Skill Score. Object-based methods provide a sense of how forecast attributes compare with those observed, including a measure of overall matching skill based on user-selected attributes.
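
The Fractions Skill Score quoted above compares neighborhood event fractions between forecast and observations. A compact sketch of the standard formula follows; the slide's values come from MET, not from this code:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fss(fcst, obs, threshold, width):
    """Fractions Skill Score at a single neighborhood width:
    FSS = 1 - MSE(fractions) / reference MSE."""
    pf = uniform_filter((fcst >= threshold).astype(float), size=width, mode="constant")
    po = uniform_filter((obs >= threshold).astype(float), size=width, mode="constant")
    mse = np.mean((pf - po) ** 2)
    mse_ref = np.mean(pf ** 2) + np.mean(po ** 2)
    return 1.0 - mse / mse_ref if mse_ref > 0 else np.nan
```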

MODE application to HWT ensembles
[Figure: radar echo tops (RETOP), observed vs. the CAPS probability-matched (PM) mean]

Applying spatial methods to ensembles
- As probabilities: the areas do not have the "shape" of precipitation areas and may "spread" the area
- As a mean: the area is not equivalent to any of the underlying ensemble members

Treatment of Spatial Ensemble Forecasts
Alternative: consider ensembles of "attributes" and evaluate distributions of "attribute" errors
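
One way to realize this idea is to identify objects in each member and in the observations, then compare attribute distributions rather than gridpoint values. The sketch below uses simple thresholded connected components as a crude stand-in for MODE objects (MODE's actual object definition also involves convolution and fuzzy matching); all names are illustrative:

```python
import numpy as np
from scipy import ndimage

def object_attributes(precip, threshold):
    """Area and 90th-percentile intensity of each contiguous region
    exceeding the threshold."""
    labels, n = ndimage.label(precip >= threshold)
    return [{"area": int((labels == i).sum()),
             "p90": float(np.percentile(precip[labels == i], 90))}
            for i in range(1, n + 1)]

def attribute_summary(members, obs, threshold, key="area"):
    """Pool one attribute across all ensemble members and summarize.
    The median is reported alongside the mean because attribute
    distributions are often heavy-tailed and non-symmetric."""
    pooled = np.array([a[key] for m in members
                       for a in object_attributes(m, threshold)])
    obs_vals = np.array([a[key] for a in object_attributes(obs, threshold)])
    return {"member_median": np.median(pooled),
            "member_mean": pooled.mean(),
            "obs_median": np.median(obs_vals)}
```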

Example: MODE application to HMT ensemble members
[Figure: distributions of 90th-percentile intensity and object area by member, for thresholds > 6.35 mm and > 25.4 mm]
- Systematic microphysics impacts: the 3 Thompson scheme members (circled) are less intense and have larger areas
- Note the heavy tails and non-symmetric distributions for both size and intensity (medians vs. averages)

Probabilistic Fields (PQPF) and QPF Products
[Figure: QPF products (Ens-4km, SREF-32km, NAM-12km, EnsMean-4km) and probability products (Prob APCP, 4km Nbrhd) alongside QPE]

50% Prob(APCP_06 > 25.4 mm) vs. QPE_06 > 25.4 mm: a good forecast with displacement error?
Traditional metrics
- Brier score: 0.07
- Area under ROC: 0.62
Spatial metrics
- Centroid distance: Obj1) 200 km; Obj2) 88 km
- Area ratio: Obj1) 0.69; Obj2)
- Median of max interest: 0.77
- Obj PODY: 0.72
- Obj FAR: 0.32
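
For reference, the Brier score quoted above is the mean squared error of the forecast probabilities against the binary observed event; a short sketch with hypothetical inputs:

```python
import numpy as np

def brier_score(prob, obs, threshold=25.4):
    """Brier score for a probability field vs. the observed binary
    event (here, 6-h precipitation exceeding 25.4 mm)."""
    return np.mean((prob - (obs >= threshold).astype(float)) ** 2)
```

The spatial metrics, by contrast, are computed only after objects have been identified and matched, which is why they can expose the displacement error that the gridpoint-based Brier score obscures.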

Summary
- Evaluation of high-impact weather is moving toward the use of spatial verification methods
- Initial efforts are in place to bring these methods forward for ensemble verification

MODE-based evaluations of AR objects

Spatial method motivation
Traditional approaches ignore the spatial structure present in many (most?) forecasts:
- Spatial correlations
- Small errors lead to poor scores (with squared errors, smooth forecasts are rewarded)
- The evaluation methods are not diagnostic
The same issues exist for ensemble forecasts.
[Figure: forecast vs. observed fields]
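
The "double penalty" behind these poor scores can be shown with a toy example: a rain band forecast with the correct shape and amount but in the wrong place is penalized both where rain was missed and where it was falsely predicted, so an empty forecast scores better by RMSE:

```python
import numpy as np

obs = np.zeros(10);   obs[2:4] = 10.0   # observed rain band
fcst = np.zeros(10);  fcst[6:8] = 10.0  # same band, displaced
empty = np.zeros(10)                    # forecast of no rain at all

rmse = lambda f, o: np.sqrt(np.mean((f - o) ** 2))
print(rmse(fcst, obs))   # ~6.3: penalized for the miss AND the false alarm
print(rmse(empty, obs))  # ~4.5: the empty forecast scores better
```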

MODE example: 9 May 2011

MODE Example: combined objects
Spatial methods can be used to consider and compare various object attributes, such as:
- Area
- Location
- Intensity distribution
- Shape / orientation
- Overlap with obs
- A measure of overall "fit" to obs
Summarize distributions of attributes and differences; in some cases, conversion to probabilities may be informative. A simplified sketch of the overall "fit" calculation follows.
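
MODE combines per-attribute comparisons into a single fuzzy-logic "total interest". The sketch below is a much-simplified illustration: the two attributes, the weights, and the 400-km distance scale are assumptions, not MET's actual configuration or defaults:

```python
def total_interest(centroid_distance_km, area_ratio,
                   weights=(2.0, 1.0), dist_scale_km=400.0):
    """Weighted average of per-attribute interest values in [0, 1].
    Distance interest decays linearly to zero at dist_scale_km;
    area_ratio (smaller area / larger area) is used directly."""
    dist_interest = max(0.0, 1.0 - centroid_distance_km / dist_scale_km)
    w_dist, w_area = weights
    return (w_dist * dist_interest + w_area * area_ratio) / (w_dist + w_area)

# total_interest(92.0, 0.53) -> ~0.69; MODE's full attribute set and
# piecewise-linear interest maps would give a different value.
```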

Spatial attributes
- Object intersection areas vs. lead time
- Overall field comparison by MODE ("interest" summary) vs. lead time