NWP Verification with Shape-matching Algorithms: Hydrologic Applications and Extension to Ensembles

Barbara Brown (1), Edward Tollerud (2), Tara Jensen (1), and Wallace Clark (2)
(1) NCAR, USA; (2) NOAA Earth System Research Laboratory, USA
bgb@ucar.edu
ECAM/EMS 2011, 14 September 2011
DTC and Testbed Collaborations

Developmental Testbed Center (DTC)
- Mission: provide a bridge between the research and operational communities to improve mesoscale NWP
- Activities: community support (e.g., access to operational models); model testing and evaluation

Goals of interactions with other "testbeds":
- Examine latest capabilities of high-resolution models
- Evaluate impacts of physics options
- New approaches for presenting and evaluating forecasts
Testbed Collaborations

Hydrometeorological Testbed (HMT)
- Evaluation of regional ensemble forecasts (including operational models) and global forecasts in the western U.S. (California)
- Winter precipitation
- Atmospheric rivers

Hazardous Weather Testbed (HWT)
- Evaluation of storm-scale ensemble forecasts
- Late-spring precipitation, reflectivity, cloud-top height
- Comparison of model capabilities for high-impact weather forecasts
Testbed Forecast Verification

Observations
- HMT: gauges and Stage IV gauge analysis
- HWT: NMQ 1-km radar and gauge analysis; radar

Traditional metrics
- RMSE, bias, ME, POD, FAR, etc.
- Brier score, reliability, ROC, etc.
(a minimal sketch of a few of these scores follows)

Spatial approaches
- Needed for evaluation of ensemble forecasts for the same reasons as for non-probabilistic forecasts (the "double penalty", the impact of small errors in timing and location, etc.)
- Neighborhood methods
- Method for Object-based Diagnostic Evaluation (MODE)
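To make the traditional metrics concrete, here is a minimal sketch of POD, FAR, frequency bias, and the Brier score on gridded fields. The function names and signatures are hypothetical illustrations, not the MET implementation:

```python
import numpy as np

def categorical_scores(fcst, obs, thresh):
    """POD, FAR, and frequency bias from a 2x2 contingency table.

    fcst, obs: 2-D fields on a common grid.
    thresh: event threshold (e.g., 6.35 mm for 0.25 in of precip).
    """
    f = fcst >= thresh
    o = obs >= thresh
    hits = np.sum(f & o)
    misses = np.sum(~f & o)
    false_alarms = np.sum(f & ~o)
    pod = hits / (hits + misses)                    # fraction of observed events forecast
    far = false_alarms / (hits + false_alarms)      # fraction of forecast events that failed
    bias = (hits + false_alarms) / (hits + misses)  # forecast-to-observed event-frequency ratio
    return pod, far, bias

def brier_score(prob, obs, thresh):
    """Mean squared error of a probability forecast against binary outcomes."""
    o = (obs >= thresh).astype(float)
    return np.mean((prob - o) ** 2)
```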
New Spatial Verification Approaches

- Neighborhood: successive smoothing of forecasts/obs
- Object- and feature-based: evaluate attributes of identifiable features (see the object-identification sketch below)
- Scale separation: measure scale-dependent error
- Field deformation: measure distortion and displacement (phase error) for the whole field

Web site: http://www.ral.ucar.edu/projects/icp/
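As an illustration of the object-based idea, a minimal sketch of object identification, roughly in the spirit of MODE's convolution-and-threshold step. MODE itself uses a circular convolution filter; this simplified version uses a square smoother, and the parameter names are assumptions:

```python
import numpy as np
from scipy import ndimage

def identify_objects(field, conv_radius, thresh):
    """Identify objects in a gridded field: smooth the raw field,
    threshold it, then label the contiguous regions that remain.
    """
    smoothed = ndimage.uniform_filter(field, size=2 * conv_radius + 1)
    mask = smoothed >= thresh
    labels, n_objects = ndimage.label(mask)  # connected-component labeling
    return labels, n_objects
```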
HMT: Standard Scores for Ensemble Inter-model QPF Comparisons

Example: RMSE results for December 2010
[Figure: RMSE time series. Dashed: HMT (WRF) ensemble members; solid: deterministic members; black: ensemble mean.]
HMT Application: MODE

19 December 2010, 72-h forecast, threshold for precip > 0.25"
[Figure: observed (OBS) vs. ensemble-mean precipitation objects.]
MODE Application to Atmospheric Rivers

- QPF vs. IWV and vapor transport
- Capture coastal strike timing and location
- Large impacts on precipitation on the California coast and coastal mountains => major flooding impacts
Atmospheric Rivers

[Figure: GFS precipitable water forecasts at 24-, 48-, and 72-hr lead times vs. SSM/I integrated water vapor, with MODE object areas (Area = 127, 306, 312, 369).]
HWT Example: Attribute Diagnostics for NWP

Neighborhood and object-based methods, REFC > 30 dBZ:

Lead time   FSS    Matched interest   Area ratio   Centroid distance   P90 intensity ratio
20-h        0.14   0                  n/a          n/a                 n/a
22-h        0.30   0.89               0.18         112 km              1.08
24-h        0.64   0.96               0.53         92 km               1.04

- Neighborhood methods provide a sense of how the model performs at different scales through the Fractions Skill Score (a sketch follows).
- Object-based methods provide a sense of how forecast attributes compare with observed, including a measure of overall matching skill based on user-selected attributes.
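A minimal sketch of the Fractions Skill Score as commonly defined: compare the fractional event coverage of forecast and observations within neighborhoods of a given width, skill relative to the worst-case (no-overlap) mean squared difference. Function and parameter names are assumptions:

```python
import numpy as np
from scipy import ndimage

def fss(fcst, obs, thresh, window):
    """Fractions Skill Score for one threshold and one neighborhood size.

    window: neighborhood width in grid points (e.g., 1, 5, 25, ...).
    """
    pf = ndimage.uniform_filter((fcst >= thresh).astype(float), size=window)
    po = ndimage.uniform_filter((obs >= thresh).astype(float), size=window)
    mse = np.mean((pf - po) ** 2)
    mse_ref = np.mean(pf ** 2) + np.mean(po ** 2)  # worst-case (no-overlap) MSE
    return 1.0 - mse / mse_ref if mse_ref > 0 else np.nan
```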
MODE Application to HWT Ensembles

[Figure: observed radar echo tops (RETOP) vs. CAPS probability-matched (PM) mean RETOP, with MODE objects.]
Applying Spatial Methods to Ensembles

- As probabilities: probability areas do not have the "shape" of precipitation areas and may "spread" the area
- As a mean: the mean-field area is not equivalent to any of the underlying ensemble members

(a minimal sketch of both reductions follows)
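A minimal sketch of the two reductions just described, assuming the members are stacked in one array; names are illustrative:

```python
import numpy as np

def ensemble_products(members, thresh):
    """Two common reductions of an ensemble of gridded QPF members.

    members: array of shape (n_members, ny, nx).
    """
    # Exceedance probability: fraction of members above the threshold.
    # The resulting probability area need not look like any single
    # member's precipitation objects.
    prob = np.mean(members >= thresh, axis=0)
    # Ensemble mean: smooths maxima, so objects in the mean field are
    # not equivalent to objects in any underlying member.
    ens_mean = np.mean(members, axis=0)
    return prob, ens_mean
```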
Treatment of Spatial Ensemble Forecasts

Alternative: consider ensembles of "attributes"
- Evaluate distributions of "attribute" errors (see the sketch below)
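A minimal sketch of the attribute-ensemble idea, reusing the hypothetical identify_objects() from the earlier sketch: collect one object attribute (here, area) across all members, then summarize its distribution rather than collapsing the fields first:

```python
import numpy as np

def member_object_areas(members, conv_radius, thresh):
    """Object areas (in grid squares) pooled over all ensemble members.

    Relies on identify_objects() from the earlier object-identification
    sketch; members has shape (n_members, ny, nx).
    """
    areas = []
    for field in members:
        labels, n = identify_objects(field, conv_radius, thresh)
        areas.extend(np.bincount(labels.ravel())[1:n + 1])  # skip background label 0
    return np.asarray(areas)

# Attribute distributions can be heavy-tailed and non-symmetric, so
# report the median alongside the mean, e.g.:
#   areas = member_object_areas(members, conv_radius=5, thresh=6.35)
#   print(np.median(areas), np.mean(areas))
```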
Example: MODE Application to HMT Ensemble Members

Systematic microphysics impacts: the 3 Thompson-scheme members (circled) are
- less intense
- larger in area

Note:
- heavy tails
- non-symmetric distributions for both size and intensity (medians vs. averages)

[Figure: distributions of 90th-percentile object intensity and object area, for thresholds > 6.35 and > 25.4 mm.]
Probabilistic Fields (PQPF) and QPF Products

[Figure: probability, QPE, and QPF panels - Prob APCP (Ens-4km, SREF-32km, 4km Nbrhd), NAM-12km, EnsMean-4km.]
50% Prob(APCP_06 > 25.4 mm) vs. QPE_06 > 25.4 mm

Good forecast with displacement error?

Traditional metrics
- Brier score: 0.07
- Area under ROC: 0.62

Spatial metrics
- Centroid distance: Obj 1) 200 km; Obj 2) 88 km
- Area ratio: Obj 1) 0.69; Obj 2) 0.65
- Median of max interest: 0.77
- Object PODY: 0.72
- Object FAR: 0.32

(a sketch of the centroid-distance and area-ratio calculations follows)
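A minimal sketch of two of the spatial metrics above for a pair of matched binary objects. The area ratio here uses a smaller-over-larger convention (bounded by 1); conventions vary, and the function names are assumptions:

```python
import numpy as np
from scipy import ndimage

def centroid_distance_km(mask_f, mask_o, grid_km):
    """Distance between the centroids of a forecast object and an
    observed object, with grid_km the grid spacing in km."""
    cf = np.array(ndimage.center_of_mass(mask_f))
    co = np.array(ndimage.center_of_mass(mask_o))
    return float(np.hypot(*(cf - co))) * grid_km

def area_ratio(mask_f, mask_o):
    """Ratio of the smaller object area to the larger."""
    af, ao = mask_f.sum(), mask_o.sum()
    return min(af, ao) / max(af, ao)
```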
Summary

- Evaluation of high-impact weather is moving toward the use of spatial verification methods
- Initial efforts are in place to bring these methods forward for ensemble verification
MODE-based evaluations of AR objects
Spatial Method Motivation

Traditional approaches ignore the spatial structure in many (most?) forecasts:
- Spatial correlations
- Small errors lead to poor scores (squared errors... smooth forecasts are rewarded)
- Methods for evaluation are not diagnostic

The same issues exist for ensemble forecasts.
[Figure: example forecast vs. observed fields.]
MODE Example: 9 May 2011
MODE Example: Combined Objects

Consider and compare various attributes, such as:
- Area
- Location
- Intensity distribution
- Shape / orientation
- Overlap with obs
- Measure of overall "fit" to obs (a schematic sketch of this fuzzy-logic matching follows)

Summarize distributions of attributes and differences. In some cases, conversion to probabilities may be informative. Spatial methods can be used for evaluation.
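A schematic sketch of fuzzy-logic matching in the spirit of MODE's total interest: each attribute comparison is mapped to [0, 1] by an interest function, and the results are combined as a weighted average. The interest functions, weights, and cutoff below are illustrative assumptions, not MODE's operational defaults:

```python
def total_interest(attrs, weights, interest_fns):
    """Weighted average of per-attribute interest values in [0, 1],
    used as the overall 'fit' between a forecast and observed object.

    attrs, weights, interest_fns: dicts keyed by attribute name.
    """
    num = sum(weights[k] * interest_fns[k](attrs[k]) for k in attrs)
    return num / sum(weights[k] for k in attrs)

# Illustrative choices: interest falls off linearly with centroid
# distance and rises with area ratio.
interest_fns = {
    "centroid_distance": lambda d_km: max(0.0, 1.0 - d_km / 400.0),
    "area_ratio": lambda r: r,
}
weights = {"centroid_distance": 2.0, "area_ratio": 1.0}

# Attribute values from the 24-h HWT panel shown earlier.
attrs = {"centroid_distance": 92.0, "area_ratio": 0.53}
print(total_interest(attrs, weights, interest_fns))  # matched if above a cutoff
```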
Spatial Attributes

- Object intersection areas vs. lead time
- Overall field comparison by MODE ("interest" summary) vs. lead time