Slide 1
Spatial verification of NWP model fields
Beth Ebert, BMRC, Australia
WRF Verification Toolkit Workshop, Boulder, 21-23 February 2007
Slide 2
New approaches are needed to quantitatively evaluate high-resolution model output.
Slide 3
What modelers want
Diagnostic information:
- What scales are well represented by the model?
- How realistic are forecast features / structures?
- How realistic are distributions of intensities / values?
- What are the sources of error?
- How can I improve the model?
How can I score? It's not so easy!
Slide 4
Spatial forecasts
Weather variables defined over spatial domains have coherent spatial structure and features (intrinsic spatial correlation). Spatial verification techniques aim to:
- account for the spatial structure of the field
- provide information on errors in physical terms
- account for uncertainties in timing and location
Slide 5
Recent research in spatial verification
- Scale decomposition methods: measure scale-dependent error
- Fuzzy (neighborhood) verification methods: give credit to "close" forecasts
- Object-oriented methods: evaluate attributes of identifiable features
- Field verification: evaluates phase errors
Slide 6
Scale decomposition methods: measure scale-dependent error
Slide 7
Wavelet scale components, Briggs and Levine (1997)
[Figure: ECMWF analysis vs. 36-h forecast (CCM-2), 500 mb geopotential height, 9 Dec 1992 12:00 UTC, North America, decomposed into wavelet scale components]
Slide 8
Intensity-scale verification technique, Casati et al. (2004)
Measures skill as a function of the intensity and spatial scale of the error:
1. Intensity: threshold the fields (categorical approach)
2. Scale: 2D wavelet decomposition of the binary images
3. For each threshold and scale: a skill score associated with the MSE of the binary images (equivalent to the Heidke Skill Score)
A code sketch follows below.
[Figure: skill as a function of threshold (mm/h) and spatial scale (km) for an intense storm displaced in the forecast, threshold = 1 mm/h]
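The following is a minimal sketch of the two steps, assuming the PyWavelets package (`pywt`) and, for the Parseval bookkeeping to be exact, fields with dyadic (power-of-two) dimensions; the random-forecast reference MSE follows the bias-adjusted form used by Casati et al., reproduced here from memory.

```python
import numpy as np
import pywt

def intensity_scale_mse(fcst, obs, threshold):
    """MSE of the binary error image, partitioned by spatial scale."""
    z = (fcst >= threshold).astype(float) - (obs >= threshold).astype(float)
    n = z.size
    coeffs = pywt.wavedec2(z, "haar")  # orthonormal Haar decomposition
    # Parseval: total MSE = sum of squared coefficients / n, so each
    # level's squared coefficients give that scale's MSE contribution.
    mse = [np.sum(coeffs[0] ** 2) / n]             # coarsest (mean) component
    for detail in coeffs[1:]:                      # successively finer scales
        mse.append(sum(np.sum(d ** 2) for d in detail) / n)
    return np.array(mse)                           # ordered coarse -> fine

def intensity_scale_skill(fcst, obs, threshold):
    """Skill per scale relative to a random forecast with the same bias."""
    mse = intensity_scale_mse(fcst, obs, threshold)
    base_rate = np.mean(obs >= threshold)
    bias = np.mean(fcst >= threshold) / max(base_rate, 1e-12)
    # MSE of a random binary forecast with this bias, split evenly over scales
    mse_random = bias * base_rate * (1 - base_rate) + base_rate * (1 - bias * base_rate)
    return 1.0 - mse * len(mse) / max(mse_random, 1e-12)
```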
Slide 9
Multiscale statistical properties, Harris et al. (2001)
Does the model reproduce the observed scale-dependent variability of precipitation, i.e., does it look like real rain? Compare multiscale statistics for the model and radar data:
- Power spectrum (sketched below)
- Structure function
- Moment scaling
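Of the three statistics, the isotropic (radially averaged) power spectrum is the most self-contained to compute; a minimal NumPy sketch for a square field follows. Comparing the model and radar spectra reveals the scales at which the model field is too smooth.

```python
import numpy as np

def radial_power_spectrum(field):
    """Radially averaged 2D power spectrum of a square field."""
    n = field.shape[0]
    power = np.abs(np.fft.fftshift(np.fft.fft2(field))) ** 2
    ky, kx = np.indices(power.shape) - n // 2
    k = np.hypot(kx, ky).astype(int)               # integer radial wavenumber
    spectrum = np.bincount(k.ravel(), weights=power.ravel())
    counts = np.bincount(k.ravel())
    return spectrum[1 : n // 2] / counts[1 : n // 2]   # skip the mean (k = 0)
```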
Slide 10
Fuzzy (multi-scale) verification methods: give credit to "close" forecasts
Slide 11
Why is it called "fuzzy"? Squint your eyes!
[Figure: observation and forecast fields, sharp and blurred]
"Fuzzy" verification methods:
- Don't require an exact match between forecasts and observations (unpredictable scales, uncertainty in observations)
- Look in a space / time neighborhood around the point of interest
- Evaluate using categorical, continuous, or probabilistic scores / methods
[Figure: a neighborhood in space (grid boxes) and time (t - 1, t, t + 1), and the frequency distribution of forecast values within it]
Slide 12
"Fuzzy" verification methods
Treatment of forecast data within a window (see the sketch below):
- Mean value (upscaling)
- Occurrence of an event* somewhere in the window
- Frequency of events in the window -> probability
- Distribution of values within the window
May apply to observations as well as forecasts (neighborhood observation-neighborhood forecast approach).
* An event is defined here as a value exceeding a given threshold, for example rain exceeding 1 mm/h.
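A hedged sketch of the first three window treatments (the fourth is simply the set of values in the window), using SciPy's moving-window filters; the `window` width and `threshold` are illustrative parameters.

```python
import numpy as np
from scipy.ndimage import maximum_filter, uniform_filter

def neighborhood_views(field, window=5, threshold=1.0):
    """Window-based treatments of a field, per the list above."""
    event = (field >= threshold).astype(float)
    return {
        "upscaled": uniform_filter(field, size=window),   # mean value
        "anywhere": maximum_filter(event, size=window),   # event somewhere in window
        "fraction": uniform_filter(event, size=window),   # event frequency ~ probability
    }
```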
Slide 13
Spatial multi-event contingency table, Atger (2001)
Forecasters mentally "calibrate" the deterministic forecast according to how close the forecast is to the place / time / magnitude of interest: very close -> high probability; not very close -> low probability. Think "high probability of some heavy rain near Sydney", not "62 mm of rain will fall in Sydney".
Vary the decision thresholds (a sketch follows the list below):
- magnitude (e.g., 1 mm/h to 20 mm/h)
- distance from the point of interest (e.g., within 10 km, ..., within 100 km)
- timing (e.g., within 1 h, ..., within 12 h)
- anything else that may be important in interpreting the forecast
[Figure: ROC curves for a single threshold and for the multi-event approach, compared with an EPS]
Slide 14
Fractions skill score, Roberts (2005)
We want to know:
- how forecast skill varies with neighbourhood size
- the smallest neighbourhood size that can be used to give sufficiently accurate forecasts
- whether higher resolution provides more accurate forecasts on the scales of interest (e.g., river catchments)
Compare forecast fractions with observed fractions (radar) in a probabilistic way over different sized neighbourhoods.
[Figure: observed and forecast rain fractions within a neighbourhood]
Slide 15
Fractions skill score, Roberts (2005)
[Figure: schematic of the fractions skill score computation; a code sketch follows below]
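A minimal sketch of the score as commonly written following Roberts (2005): the forecast and observed event fractions Pf and Po in each neighbourhood are compared via FSS = 1 - MSE(Pf, Po) / reference MSE, where the reference is the largest MSE obtainable from those fractions. FSS ranges from 0 (complete mismatch) to 1 (perfect).

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fractions_skill_score(fcst, obs, threshold, window):
    """FSS for one threshold and one neighbourhood width (grid points)."""
    pf = uniform_filter((fcst >= threshold).astype(float), size=window)
    po = uniform_filter((obs >= threshold).astype(float), size=window)
    num = np.mean((pf - po) ** 2)                 # fraction MSE
    den = np.mean(pf ** 2) + np.mean(po ** 2)     # reference (worst-case) MSE
    return 1.0 - num / den if den > 0 else np.nan
```

Sweeping `window` over increasing sizes shows how skill grows with neighbourhood, answering the first question on the previous slide.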
Slide 16
Decision models
[Table: fuzzy verification methods and the decision model each evaluates]
* NO-NF = neighborhood observation-neighborhood forecast; SO-NF = single observation-neighborhood forecast
Slide 17
Fuzzy verification framework
[Figure: matrix of scores by method and scale, shaded from good performance to poor performance]
Slide 18
Object-oriented methods: evaluate attributes of features
Slide 19
Entity-based approach (CRA), Ebert and McBride (2000)
- Define entities using a threshold (Contiguous Rain Areas).
- Horizontally translate the forecast until a pattern-matching criterion is met (a sketch follows below):
  - minimum total squared error between forecast and observations
  - maximum correlation
  - maximum overlap
- The displacement is the vector difference between the original and final locations of the forecast.
[Figure: observed and forecast entities before and after translation]
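A minimal sketch of the translation step under the minimum-total-squared-error criterion: brute-force search over shifts. The +/-10 grid-point search box is an illustrative assumption, and `np.roll` wraps at the domain edges, which a real CRA implementation restricted to the entity's bounding box would avoid.

```python
import numpy as np

def best_shift(fcst, obs, max_shift=10):
    """Shift (dy, dx) of the forecast that minimizes the MSE vs. obs."""
    best = (0, 0)
    best_mse = np.mean((fcst - obs) ** 2)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(fcst, dy, axis=0), dx, axis=1)
            mse = np.mean((shifted - obs) ** 2)
            if mse < best_mse:
                best, best_mse = (dy, dx), mse
    return best, best_mse    # displacement vector and post-shift MSE
```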
Slide 20
CRA information
Gives information on:
- location error
- RMSE and correlation before and after the shift
- attributes of the forecast and observed entities
- error components: displacement, volume, pattern (decomposition sketched below)
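The decomposition reported by CRA, as I recall it from Ebert and McBride (2000), splits the total MSE using the best-shifted forecast; a sketch:

```python
import numpy as np

def cra_decomposition(fcst, obs, shifted_fcst):
    """MSE_total = displacement + volume + pattern components."""
    mse_total = np.mean((fcst - obs) ** 2)
    mse_shifted = np.mean((shifted_fcst - obs) ** 2)
    mse_displacement = mse_total - mse_shifted                 # removed by the shift
    mse_volume = (np.mean(shifted_fcst) - np.mean(obs)) ** 2   # mean (volume) bias
    mse_pattern = mse_shifted - mse_volume                     # remaining fine structure
    return mse_displacement, mse_volume, mse_pattern
```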
Slide 21
MODE*, Davis et al. (2006)
* Method for Object-based Diagnostic Evaluation
Two parameters (see the sketch below):
1. Convolution radius
2. Threshold
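A sketch of the convolution-threshold identification step as described for MODE: smooth the raw field with a circular kernel of the chosen radius, threshold the result, and label the connected regions as objects. SciPy is assumed; the kernel construction details are illustrative.

```python
import numpy as np
from scipy.ndimage import convolve, label

def identify_objects(field, radius, threshold):
    """Label connected objects in a smoothed, thresholded field."""
    span = np.arange(-radius, radius + 1)
    yy, xx = np.meshgrid(span, span, indexing="ij")
    kernel = (yy ** 2 + xx ** 2 <= radius ** 2).astype(float)
    kernel /= kernel.sum()                        # circular averaging filter
    smoothed = convolve(field, kernel, mode="constant")
    mask = smoothed >= threshold                  # object outlines
    objects, count = label(mask)                  # connected components
    return objects, count
```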
Slide 22
MODE object matching/merging
Compare attributes:
- centroid location
- intensity distribution
- area
- orientation
- etc.
When objects are not matched:
- false alarms
- missed events
- rain volume
- etc.
[Figure: 24-h forecast of 1-h rainfall on 1 June 2005]
Slide 23
MODE methodology
- Identification: convolution-threshold process
- Measure attributes
- Merging: fuzzy logic approach; merge single objects into composite objects
- Matching: compute interest values; identify matched pairs
- Comparison: compare forecast and observed attributes
- Summarize: accumulate and examine comparisons across many cases
An interest-value sketch follows below.
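A hedged illustration of the fuzzy-logic matching idea: attribute differences are mapped onto [0, 1] interest values and combined as a weighted average, and pairs above an interest cutoff are matched. The attributes, interest functions, weights, and cutoff here are illustrative assumptions, not MODE's actual configuration.

```python
import math

def total_interest(centroid_dist, area_ratio, angle_diff_deg,
                   weights=(2.0, 1.0, 1.0)):
    """Weighted-average interest for one forecast/observed object pair."""
    interests = (
        math.exp(-centroid_dist / 100.0),         # nearer centroids -> higher
        min(area_ratio, 1.0 / area_ratio),        # similar areas -> higher (ratio > 0)
        1.0 - min(angle_diff_deg, 90.0) / 90.0,   # similar orientation -> higher
    )
    return sum(w * i for w, i in zip(weights, interests)) / sum(weights)

# Pairs whose total interest exceeds a chosen cutoff (e.g., 0.7) are matched.
```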
Slide 24
Cluster analysis approach, Marzban and Sandgathe (2006)
Goal: assess the agreement between fields using clusters identified by agglomerative hierarchical cluster analysis (CA). See the sketch below.
- Optimize the clusters (and the number of clusters) based on binary images (x-y optimization) or magnitude images (x-y-p optimization)
- Compute the Euclidean distance between clusters in the forecast and observed fields (in x-y and x-y-p space)
[Figure: MM5 precipitation forecast with 8 clusters identified in x-y-p space]
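A sketch of the clustering step using SciPy's agglomerative hierarchical clustering: rainy grid points become (x, y, p) vectors and are grouped into k clusters, whose centroids can then be matched and compared between the forecast and observed fields. The rain threshold and the scaling of precipitation relative to position are illustrative assumptions.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

def cluster_field(field, k=8, rain_threshold=0.1, p_scale=10.0):
    """Cluster rainy grid points in x-y-p space; return labels, centroids."""
    ys, xs = np.nonzero(field >= rain_threshold)
    points = np.column_stack([xs, ys, p_scale * field[ys, xs]])  # x-y-p vectors
    labels = fcluster(linkage(points, method="ward"), t=k, criterion="maxclust")
    centroids = np.array([points[labels == c].mean(axis=0)
                          for c in range(1, k + 1)])
    return labels, centroids
```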
Slide 25
Cluster analysis example
Error = average distance between matched clusters in x-y-p space.
[Figure: Stage IV observations vs. COAMPS forecast, with the log_e error shown]
Slide 26
Composite approach, Nachamkin (2004)
Goal: characterize the distributions of errors from both a forecast and an observation perspective.
Procedure (sketched below):
1. Identify events of interest in the forecasts
2. Define a kernel and collect coordinated samples
3. Compare the forecast PDF to the observed PDF
4. Repeat the process for observed events
[Figure: forecast and observation fields sampled on a kernel centered on the event]
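A sketch of the sampling step: cut a fixed kernel window out of both fields around each event center and average, giving the event-relative composites compared on the next slide. The event centers and kernel half-width are assumed inputs here.

```python
import numpy as np

def composite(fields, centers, half_width=20):
    """Average kernel-relative samples of `fields` (list of 2D arrays)."""
    samples = []
    for y, x in centers:
        window = [f[y - half_width : y + half_width + 1,
                    x - half_width : x + half_width + 1] for f in fields]
        if all(w.shape == (2 * half_width + 1, 2 * half_width + 1)
               for w in window):                  # skip edge-clipped events
            samples.append(window)
    return np.mean(np.array(samples), axis=0)     # one composite per field

# Usage: composite([fcst, obs], forecast_event_centers) gives the pair of
# composites conditioned on an event being predicted; repeating with
# observed_event_centers gives the pair conditioned on an event occurring.
```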
Slide 27
Composite example
Compare kernel grid-averaged values.
[Figure: average rain (mm) given an event was predicted, and average rain (mm) given an event was observed; forecast shaded, observations contoured]
Slide 28
Field verification: evaluate phase errors
Slide 29
Feature calibration and alignment (Hoffman et al., 1995; Nehrkorn et al., 2003)
Error decomposition: e = X_f(r) - X_v(r), where X_f(r) is the forecast, X_v(r) is the verifying analysis, and r is position. The error is split as
e = e_p + e_b + e_r
where
e_p = X_f(r) - X_d(r)   (phase error)
e_b = X_d(r) - X_a(r)   (local bias error)
e_r = X_a(r) - X_v(r)   (residual error)
with X_d(r) the displaced (phase-corrected) forecast and X_a(r) the adjusted (phase- and bias-corrected) forecast; the three terms telescope back to e.
[Figure: original forecast X_f(r), 500 mb analysis X_v(r), adjusted forecast X_a(r), the forecast adjustment, and the residual error e_r]
Slide 30
Forecast quality measure (FQM), Keil and Craig (2007)
Combines a distance measure and an intensity difference measure:
- Pyramidal image matching (optical flow) yields a vector displacement field -> e_distance
- Unmatched features are penalized for their intensity errors -> e_intensity
The two are combined into a single forecast quality measure (a simplified sketch follows below).
[Figure: satellite observation, original model field, and morphed model field]
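A much-simplified, hedged sketch of the idea: a brute-force block matcher stands in here for the pyramidal optical-flow matching of Keil and Craig, and the mean displacement and residual intensity error are combined after each is normalized by a characteristic scale (`d_max` and `i_char`, both illustrative constants).

```python
import numpy as np

def block_displacement(fcst_block, obs, y0, x0, search=8):
    """Shift of one forecast block that best matches the observations."""
    h, w = fcst_block.shape
    best, best_err = (0, 0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = y0 + dy, x0 + dx
            if 0 <= y and y + h <= obs.shape[0] and 0 <= x and x + w <= obs.shape[1]:
                err = np.mean((fcst_block - obs[y : y + h, x : x + w]) ** 2)
                if err < best_err:
                    best, best_err = (dy, dx), err
    return best, best_err

def fqm(fcst, obs, block=16, d_max=8.0, i_char=1.0):
    """Combined displacement + intensity error; 0 for a perfect forecast."""
    disp, resid = [], []
    for y0 in range(0, fcst.shape[0] - block + 1, block):
        for x0 in range(0, fcst.shape[1] - block + 1, block):
            (dy, dx), err = block_displacement(
                fcst[y0 : y0 + block, x0 : x0 + block], obs, y0, x0)
            disp.append(np.hypot(dy, dx))          # e_distance contribution
            resid.append(np.sqrt(err))             # e_intensity contribution
    return np.mean(disp) / d_max + np.mean(resid) / i_char
```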
Slide 31
Conclusions
What method should you use for model verification? It depends on the question(s) you would like to address. Many spatial verification approaches are available:
- Scale decomposition: scale-dependent error
- Fuzzy (neighborhood): credit for "close" forecasts
- Object-oriented: attributes of features
- Field verification: phase error