Presentation on theme: "Verification Methods for High Resolution Model Forecasts. Barbara Brown, NCAR, Boulder, Colorado. Collaborators: Randy Bullock, John Halley Gotway."— Presentation transcript:

1 Verification Methods for High Resolution Model Forecasts Barbara Brown (bgb@ucar.edu) NCAR, Boulder, Colorado Collaborators: Randy Bullock, John Halley Gotway, Chris Davis, David Ahijevych, Eric Gilleland, Beth Ebert Hurricane Diagnostics and Verification Workshop May 2009

2 Goals
Describe new approaches for evaluation of high-resolution spatial (gridded) forecasts
- Potentially useful for evaluating hurricane rainfall (and wind) patterns
The aim of these methods is to
- Provide more meaningful information about performance than traditional approaches
- Overcome some of the insensitivity of traditional approaches

3 Relevance for tropical cyclones
- Meaningful evaluation of precipitation evolution for land-falling hurricanes
  - Short-term predictions (e.g., extreme events; precipitation distribution; flooding)
  - Total storm precipitation predictions
- Meaningful comparisons of high- and low-resolution models
  - Low resolution tends to “win” using traditional measures even if the pattern is not as good…
- Evaluation of oceanic precipitation using satellite
- Evaluation of wind pattern forecasts
- Other??

4 Hurricane Rita precipitation (figure)

5 Hurricane precipitation verification
Swath-based precip verification approach (Marchok et al. 2007)
- QPF pattern matching
- Mean, volume, and distribution of rain values
- Production of extreme amounts

6 Hurricane Rita precipitation (figure: daily panels, 9/20–9/26)

7 Traditional approach
Consider gridded forecasts and observations of precipitation: which is better? (Figure: OBS field and forecasts 1–5)

8 Traditional approach
Scores for Examples 1–4:
- Correlation Coefficient = -0.02
- Probability of Detection = 0.00
- False Alarm Ratio = 1.00
- Hanssen-Kuipers = -0.03
- Gilbert Skill Score (ETS) = -0.01
Scores for Example 5:
- Correlation Coefficient = 0.2
- Probability of Detection = 0.88
- False Alarm Ratio = 0.89
- Hanssen-Kuipers = 0.69
- Gilbert Skill Score (ETS) = 0.08
Forecast 5 is “best”
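The traditional scores on this slide all derive from a 2x2 contingency table of thresholded events. A minimal sketch (function and dictionary names are mine, not MET's):

```python
import numpy as np

def categorical_scores(fcst, obs, threshold):
    """Traditional categorical scores from a 2x2 contingency table.

    fcst, obs : 2-D arrays of, e.g., precipitation amounts
    threshold : event threshold applied to both fields
    """
    f = fcst >= threshold
    o = obs >= threshold
    hits = np.sum(f & o)
    misses = np.sum(~f & o)
    fa = np.sum(f & ~o)            # false alarms
    cn = np.sum(~f & ~o)           # correct negatives
    n = hits + misses + fa + cn

    def safe(num, den):            # avoid 0/0 for degenerate tables
        return num / den if den else float("nan")

    pod = safe(hits, hits + misses)          # probability of detection
    far = safe(fa, hits + fa)                # false alarm ratio
    pofd = safe(fa, fa + cn)                 # probability of false detection
    hk = pod - pofd                          # Hanssen-Kuipers discriminant
    hits_rand = (hits + fa) * (hits + misses) / n    # chance-expected hits
    ets = safe(hits - hits_rand,
               hits + misses + fa - hits_rand)       # Gilbert Skill Score
    return {"POD": pod, "FAR": far, "HK": hk, "ETS": ets}
```

Applying this to the five example forecasts reproduces the behavior above: a forecast with near-random overlap (Example 5) can outscore forecasts with small displacements.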

9 Traditional approach
Some problems with the traditional approach:
(1) Non-diagnostic – doesn’t tell us what was wrong with the forecast – or what was right
(2) Ultra-sensitive to small errors in simulation of localized phenomena

10 Spatial forecasts
Weather variables (e.g., precipitation, wind fields) defined over spatial domains have coherent structure and features
Spatial verification techniques aim to:
- Account for
  - Uncertainties in timing and location
  - Spatial structure
- Provide information that is
  - Able to characterize error in physical terms
  - Diagnostic
  - Meaningful to forecast users

11 Spatial verification approaches
Filtering:
1. Neighborhood
2. Scale separation
Displacement:
3. Feature-based
4. Field deformation

12 New spatial verification approaches
- Field deformation approaches: measure distortion and displacement (phase error) for the whole field – how should the forecast be adjusted to make the best match with the observed field? (Keil and Craig, 2008)
- Scale decomposition methods: measure scale-dependent error
- Neighborhood verification methods: give credit to "close" forecasts
- Object- and feature-based methods: evaluate attributes of identifiable features

13 Intensity-scale method (Casati et al. 2004)
Evaluate forecast skill as a function of the precipitation intensity and the spatial scale of the error

14 Scale → wavelet decomposition of binary error (From Ebert 2008)
(Figure: binary error field decomposed into a mean component (1280 km) and scales l=1 (5 km) through l=8 (640 km))
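The scale decomposition pictured here can be sketched with Haar (block-average) components: each scale component is the detail added when moving from one dyadic block size to the next finer one, and the components sum back to the original binary error field. This is an illustration of the idea, not the wavelet code used by Casati et al.:

```python
import numpy as np

def haar_scale_components(err):
    """Decompose a square binary-error field (F - O, values in {-1, 0, 1})
    into Haar father-wavelet components, one per dyadic scale.

    err must have a power-of-two side length.  scale_1 is the grid scale;
    higher scale_l components capture coarser structure; all components
    (including 'mean') sum exactly back to err.
    """
    n = err.shape[0]
    levels = int(np.log2(n))

    def block_mean(field, size):
        # average over non-overlapping size x size blocks, re-expanded to n x n
        b = field.reshape(n // size, size, n // size, size).mean(axis=(1, 3))
        return np.kron(b, np.ones((size, size)))

    comps = {"mean": block_mean(err, n)}   # domain-mean component
    prev = comps["mean"]
    for l in range(levels - 1, -1, -1):
        cur = block_mean(err, 2 ** l)
        comps[f"scale_{l + 1}"] = cur - prev   # detail added at this scale
        prev = cur
    return comps
```

The per-scale MSE of the intensity-scale method is then just the mean square of each component, evaluated for each intensity threshold.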

15 MSE skill score
(Figure: MSE skill score as a function of intensity threshold and scale; sample climatology (base rate) indicated. From Ebert 2008)

16 Neighborhood verification
Also called “fuzzy” verification
- Upscaling
  - Put observations and/or forecast on coarser grid
  - Calculate traditional metrics
- Provides information about the scales where the forecasts have skill
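Upscaling is the simplest neighborhood method: average both fields onto successively coarser grids and recompute the traditional metrics at each scale. A minimal sketch (the function name is mine):

```python
import numpy as np

def upscale(field, factor):
    """Average a 2-D field over non-overlapping factor x factor blocks.

    Grid dimensions must be divisible by factor; the result has shape
    (ny // factor, nx // factor).
    """
    ny, nx = field.shape
    blocks = field.reshape(ny // factor, factor, nx // factor, factor)
    return blocks.mean(axis=(1, 3))
```

In practice one would loop over factors (e.g. 1, 2, 4, 8), upscale forecast and observations together, and apply any traditional score (ETS, POD, …) on each coarsened pair to see where skill emerges.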

17 Neighborhood methods
Fractional Skill Score (FSS) (From Mittermaier 2008)
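The FSS compares forecast and observed event *fractions* within a neighborhood of each grid point, so a displaced forecast regains credit as the neighborhood grows. A sketch, assuming the standard FSS definition (1 = perfect fraction match at that scale, 0 = no skill):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fss(fcst, obs, threshold, size):
    """Fractions Skill Score over size x size neighborhoods.

    fcst, obs : 2-D fields; threshold defines the event.
    """
    # fraction of event points in each point's neighborhood
    pf = uniform_filter((fcst >= threshold).astype(float), size=size, mode="constant")
    po = uniform_filter((obs >= threshold).astype(float), size=size, mode="constant")
    mse = np.mean((pf - po) ** 2)
    mse_ref = np.mean(pf ** 2) + np.mean(po ** 2)   # worst-case reference MSE
    return 1.0 - mse / mse_ref
```

For a feature displaced by a few grid lengths, FSS is near zero at grid scale and rises toward 1 as the neighborhood becomes large enough to span the displacement.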

18 Feature-based verification
- Composite approach (Nachamkin)
- Contiguous rain area approach (CRA; Ebert and McBride, 2000; Gallus and others)
  - Error components: displacement, volume, pattern

19 Spatial verification method: MODE
MODE: Method for Object-based Diagnostic Evaluation
Goals
- Mimic how a human would identify storms and evaluate forecasts
- Measure forecast “attributes” that are of interest to users
Steps
- Identify objects
- Measure attributes
- Match forecast attributes
- Measure differences in attributes
User inputs: object identification; attribute selection and weighting
Example: Precipitation, 1 Jun 2005 (Forecast and Observed)
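The object-identification step can be sketched as smooth → threshold → label connected regions, with a few attributes per object. This is a simplified stand-in for MODE's convolution-thresholding, not its actual implementation; names and attribute choices here are illustrative:

```python
import numpy as np
from scipy.ndimage import uniform_filter, label, center_of_mass

def identify_objects(field, radius, threshold):
    """MODE-style object identification (simplified):
    1. smooth the raw field (smoothing radius is a user input)
    2. threshold the smoothed field (threshold is a user input)
    3. label connected regions as objects
    Returns the label array and a list of per-object attributes.
    """
    smoothed = uniform_filter(field, size=2 * radius + 1)
    mask = smoothed >= threshold
    labels, nobj = label(mask)
    attrs = []
    for i in range(1, nobj + 1):
        pts = labels == i
        attrs.append({
            "area": int(pts.sum()),                    # grid squares
            "centroid": center_of_mass(pts),           # (row, col)
            "max_intensity": float(field[pts].max()),  # from the raw field
        })
    return labels, attrs
```

Matching would then compare attributes (centroid distance, area ratio, overlap, …) between forecast and observed objects, combined into a fuzzy-logic "interest" value.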

20 Object-based example: 1 June 2005
MODE quantitative results indicate
- Most forecast areas too large
- Forecast areas slightly displaced
- Median and extreme intensities too large
- BUT – overall – the forecast is pretty good
In contrast, traditional scores:
- POD = 0.40
- FAR = 0.56
- CSI = 0.27
(Figure: forecast precipitation objects with diagnosed objects overlaid)

21 Back to the original example…
- MODE “Interest” measures the overall ability of forecasts to match obs
- Interest values provide more intuitive estimates of performance than the traditional measure (ETS)
- But – even for spatial methods – single measures don’t tell the whole story!

22 Spatial Method Intercomparison Project (ICP)
- Goal: Compare the various approaches using the same datasets (real, geometric, known errors)
- Includes all of the methods described here; international participants
- Collection of papers in preparation (Weather and Forecasting)
http://www.rap.ucar.edu/projects/icp/index.html

23 What do the new methods measure?

24 Applicability to ensemble forecasts (From C. Davis)

25 Statistical inference
Confidence intervals are required to provide
- Meaningful evaluations of individual model performance
- Meaningful comparisons of model performance
(Figure: score vs. threshold with confidence intervals; 24-h QPF, 24-h lead)
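Confidence intervals for verification scores are commonly obtained by bootstrap resampling over cases. A minimal percentile-bootstrap sketch for a mean score (this assumes independent cases; serially correlated cases would need block resampling):

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed for reproducibility

def bootstrap_ci(scores, n_boot=10000, alpha=0.05):
    """Percentile bootstrap confidence interval for a mean verification score.

    scores : per-case score values (e.g. daily ETS)
    Returns (mean, ci_lower, ci_upper).
    """
    scores = np.asarray(scores, dtype=float)
    # resample cases with replacement, n_boot times
    idx = rng.integers(0, len(scores), size=(n_boot, len(scores)))
    boot_means = scores[idx].mean(axis=1)
    lo, hi = np.quantile(boot_means, [alpha / 2, 1.0 - alpha / 2])
    return scores.mean(), lo, hi
```

Two models can then be compared by bootstrapping the paired score *differences* and checking whether the interval excludes zero.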

26 Method availability
- Many methods available as part of the Model Evaluation Tools (MET)
  - MODE
  - Neighborhood
  - Intensity-scale
- MET is freely available, with strong user support
- Software for some others is available on the intercomparison website or from the original developers
http://www.dtcenter.org/met/users/

27 Conclusion
- New spatial methods provide great opportunities for more meaningful evaluation of precipitation forecasts – and other forecast fields
  - Feed back into forecast development
  - Provide information to users
- Each method is useful for particular types of situations and for answering particular types of questions

28 Topics for discussion
- Consider how new methods may be beneficial (and adaptable) for high-res NWP for hurricanes
  - Can these methods help?
  - How do they need to be adapted/altered?
  - Would they be useful for other fields (e.g., winds)?
  - Are other kinds of new methods needed?
- Use of aircraft observations – incomplete grids (reflectivity)
- Methods for evaluation of genesis? A global problem… Need to consider false alarms as well as misses
- Use of confidence intervals
(From Bender et al. 2007)


