WRF Verification Toolkit Workshop, Boulder, 21-23 February 2007 Spatial verification of NWP model fields Beth Ebert BMRC, Australia.



New approaches are needed to quantitatively evaluate high-resolution model output.

What modelers want
Diagnostic information:
 What scales are well represented by the model?
 How realistic are the forecast features / structures?
 How realistic are the distributions of intensities / values?
 What are the sources of error? How can I improve the model?
It's not so easy to score high-resolution forecasts!

Spatial forecasts
Weather variables defined over spatial domains have coherent spatial structure and features (intrinsic spatial correlation).
Spatial verification techniques aim to:
 account for the spatial structure of the field
 provide information on errors in physical terms
 account for uncertainties in timing and location

Recent research in spatial verification
 Scale decomposition methods  measure scale-dependent error
 Fuzzy (neighborhood) verification methods  give credit to "close" forecasts
 Object-oriented methods  evaluate attributes of identifiable features
 Field verification  evaluate phase errors

Scale decomposition methods  measure scale-dependent error

Wavelet scale components (Briggs and Levine 1997). [Figure: ECMWF analysis vs. 36-h forecast (CCM-2), 500 mb geopotential height, 9 Dec 1992 12:00 UTC, North America.]

Intensity-scale verification technique (Casati et al. 2004)
Measures skill as a function of the intensity and spatial scale of the error:
1. Intensity: threshold  categorical approach
2. Scale: 2-D wavelet decomposition of the binary images
3. For each threshold and scale: a skill score associated with the MSE of the binary images, equivalent to the Heidke Skill Score
[Figure: skill as a function of threshold (mm/h) and scale (km) for an intense displaced storm; threshold = 1 mm/h.]
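A minimal numpy-only sketch of the idea (not the authors' code): both fields are thresholded to binary images, the binary error field is decomposed into dyadic Haar scale components, and the MSE of each component is turned into a skill score against a random-forecast reference shared equally among scales. The function names, the equal sharing of the reference MSE across scales, and the restriction to square power-of-two grids are illustrative assumptions.

```python
import numpy as np

def haar_components(z):
    """Decompose a 2^n x 2^n field into Haar scale components: at each
    dyadic scale, the field minus its 2x2 block-mean coarsening, plus the
    domain mean as the coarsest component (finest scale first)."""
    levels = int(np.log2(z.shape[0]))
    comps, coarse = [], z.astype(float)
    for _ in range(levels):
        n = coarse.shape[0] // 2
        parent = coarse.reshape(n, 2, n, 2).mean(axis=(1, 3))
        # difference between this scale and the next coarser one
        comps.append(coarse - np.kron(parent, np.ones((2, 2))))
        coarse = parent
    comps.append(np.full(z.shape, coarse.squeeze()))  # domain mean
    return comps

def intensity_scale_skill(fcst, obs, threshold):
    """Skill per scale for one intensity threshold, in the spirit of
    Casati et al. (2004): MSE of each scale component of the binary error
    field, scored against a random-forecast reference MSE."""
    err = (fcst >= threshold).astype(float) - (obs >= threshold).astype(float)
    base = np.mean(obs >= threshold)     # observed event frequency
    fbias = np.mean(fcst >= threshold)   # forecast event frequency
    n_scales = int(np.log2(err.shape[0])) + 1
    # MSE of a random binary forecast, shared equally among scales
    # (an assumption of this sketch; requires some events in both fields)
    mse_ref = (base * (1 - fbias) + fbias * (1 - base)) / n_scales
    return [1.0 - np.mean(c ** 2) / mse_ref for c in haar_components(err)]
```

A perfect forecast scores 1 at every scale; a displacement error shows up as negative skill at the small scales only.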

Multiscale statistical properties (Harris et al. 2001)
Does the model reproduce the observed scale-dependent variability of precipitation, i.e. does it look like real rain?
Compare multi-scale statistics for model and radar data: power spectrum, structure function, moment scaling.
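One of these multi-scale statistics, the power spectrum, can be sketched as an azimuthally averaged 2-D FFT power: comparing this curve for model and radar fields shows whether the model reproduces the observed scale-dependent variability. A numpy-only illustration (the function name is ours; an even-sized grid is assumed):

```python
import numpy as np

def isotropic_power_spectrum(field):
    """Azimuthally averaged power spectrum: 2-D FFT power binned by
    integer radial wavenumber."""
    ny, nx = field.shape
    power = np.abs(np.fft.fftshift(np.fft.fft2(field))) ** 2
    # radial wavenumber of every spectral coefficient
    ky, kx = np.mgrid[-ny // 2:ny - ny // 2, -nx // 2:nx - nx // 2]
    k = np.hypot(kx, ky).astype(int)
    spectrum = np.bincount(k.ravel(), weights=power.ravel())
    counts = np.bincount(k.ravel())
    return spectrum / np.maximum(counts, 1)  # mean power per radial bin
```

A model that is too smooth shows a spectrum falling off faster than the radar spectrum at high wavenumbers.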

Fuzzy (multi-scale) verification methods  give credit to "close" forecasts

Why is it called "fuzzy"? Squint your eyes!
"Fuzzy" verification methods do not require an exact match between forecasts and observations, because of:
 unpredictable scales
 uncertainty in the observations
They look in a space / time neighborhood around the point of interest and evaluate using categorical, continuous, or probabilistic scores / methods.

"Fuzzy" verification methods
Treatment of the forecast data within a window:
 mean value (upscaling)
 occurrence of an event* somewhere in the window
 frequency of events in the window  probability
 distribution of values within the window
These treatments may be applied to the observations as well as the forecasts (the neighborhood observation-neighborhood forecast approach).
* An event is defined here as a value exceeding a given threshold, for example rain exceeding 1 mm/h.
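The four window treatments above can be sketched for a single neighborhood as follows (function and key names are illustrative, not from any toolkit):

```python
import numpy as np

def neighbourhood_treatments(field, i, j, half, threshold):
    """The four common treatments of forecast values in a (2*half+1)^2
    window centred on point (i, j); `threshold` defines the event
    (e.g. rain >= 1 mm/h).  Windows are clipped at the domain edge."""
    win = field[max(i - half, 0):i + half + 1,
                max(j - half, 0):j + half + 1]
    events = win >= threshold
    return {
        "mean": win.mean(),                    # upscaling
        "anywhere": bool(events.any()),        # event somewhere in window
        "frequency": events.mean(),            # event frequency -> probability
        "distribution": np.sort(win.ravel()),  # distribution of values
    }
```

Each treatment can then be verified with the matching score type: continuous scores for the mean, categorical scores for "anywhere", probabilistic scores for the frequency.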

Spatial multi-event contingency table (Atger 2001)
Forecasters mentally "calibrate" a deterministic forecast according to how close it is to the place / time / magnitude of interest: very close  high probability; not very close  low probability. Think "high probability of some heavy rain near Sydney", not "62 mm of rain will fall in Sydney".
Vary the decision thresholds:
 magnitude (e.g. 1 mm h-1 to 20 mm h-1)
 distance from the point of interest (e.g. within 10 km, ..., within 100 km)
 timing (e.g. within 1 h, ..., within 12 h)
 anything else that may be important in interpreting the forecast
[Figure: ROC curves for a single-threshold forecast vs. an EPS.]
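A sketch of the distance-threshold part of this idea: at each grid point the deterministic forecast counts as a "yes" if the magnitude threshold is exceeded anywhere within a given distance, giving one 2x2 contingency table per distance and hence one ROC point per decision threshold. The square window is a simplification of a true distance radius, and the function name is ours:

```python
import numpy as np

def multi_event_table(fcst, obs, magnitude, distances):
    """One (hits, misses, false alarms, correct negatives) table per
    distance threshold, with the forecast event defined as 'magnitude
    exceeded anywhere within distance d of the point'."""
    obs_yes = obs >= magnitude
    ny, nx = fcst.shape
    tables = {}
    for d in distances:
        fcst_yes = np.zeros_like(obs_yes)
        for i in range(ny):
            for j in range(nx):
                win = fcst[max(i - d, 0):i + d + 1,
                           max(j - d, 0):j + d + 1]
                fcst_yes[i, j] = (win >= magnitude).any()
        hits = int(np.sum(fcst_yes & obs_yes))
        misses = int(np.sum(~fcst_yes & obs_yes))
        fa = int(np.sum(fcst_yes & ~obs_yes))
        cn = int(np.sum(~fcst_yes & ~obs_yes))
        tables[d] = (hits, misses, fa, cn)
    return tables
```

As the distance grows, a slightly displaced forecast converts from a miss plus a false alarm into a hit, which is what moves the ROC point.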

Fractions skill score (Roberts 2005)
Compare forecast fractions with observed fractions (radar) in a probabilistic way over neighbourhoods of different sizes. We want to know:
 how forecast skill varies with neighbourhood size
 the smallest neighbourhood size that can be used to give sufficiently accurate forecasts
 whether higher resolution provides more accurate forecasts on the scales of interest (e.g. river catchments)

Fractions skill score (Roberts 2005) [figure]
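The score can be sketched directly from its definition: convert both fields to binary exceedances, compute the fraction of events in an n x n neighbourhood around every point, and compare the two fraction fields with an MSE-based skill score. A numpy-only sketch (odd n assumed; zero padding at the domain edge is a simplifying choice):

```python
import numpy as np

def fractions(binary, n):
    """Fraction of event points in an n x n (n odd) neighbourhood around
    each grid point, computed with a 2-D summed-area table."""
    pad = n // 2
    b = np.pad(binary.astype(float), pad)
    s = b.cumsum(axis=0).cumsum(axis=1)
    s = np.pad(s, ((1, 0), (1, 0)))  # prepend a zero row and column
    ny, nx = binary.shape
    out = np.empty((ny, nx))
    for i in range(ny):
        for j in range(nx):
            # box sum over b[i:i+n, j:j+n] via the summed-area table
            out[i, j] = (s[i + n, j + n] - s[i, j + n]
                         - s[i + n, j] + s[i, j]) / (n * n)
    return out

def fss(fcst, obs, threshold, n):
    """Fractions skill score of Roberts (2005): 1 - MSE of the fraction
    fields over a reference MSE with no overlap between them."""
    pf = fractions(fcst >= threshold, n)
    po = fractions(obs >= threshold, n)
    mse = np.mean((pf - po) ** 2)
    mse_ref = np.mean(pf ** 2) + np.mean(po ** 2)
    return 1.0 - mse / mse_ref if mse_ref > 0 else np.nan
```

FSS is 1 for a perfect forecast and 0 for a forecast with no useful overlap; plotting it against n shows the smallest neighbourhood at which the forecast becomes skilful.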

Decision models
* NO-NF = neighborhood observation-neighborhood forecast; SO-NF = single observation-neighborhood forecast

Fuzzy verification framework [Figure: results ranging from good performance to poor performance.]

Object-oriented methods  evaluate attributes of features

Entity-based approach (CRA) (Ebert and McBride 2000)
Define entities using a threshold (Contiguous Rain Areas).
Horizontally translate the forecast until a pattern-matching criterion is met:
 minimum total squared error between forecast and observations
 maximum correlation
 maximum overlap
The displacement is the vector difference between the original and final locations of the forecast.
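The translation step can be sketched as a brute-force search over shifts for the minimum total squared error (the first matching criterion above). Using np.roll means the field wraps around at the domain edge, a simplification of the real CRA matching; the function name is ours:

```python
import numpy as np

def best_shift(fcst, obs, max_shift=5):
    """Horizontal translation (dy, dx) of the forecast that minimises the
    total squared error against the observations, found by brute force."""
    best, best_err = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(fcst, dy, axis=0), dx, axis=1)
            err = np.sum((shifted - obs) ** 2)
            if err < best_err:
                best_err, best = err, (dy, dx)
    return best, best_err
```

The negative of the returned shift is the displacement error of the forecast feature.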

CRA information
Gives information on:
 location error
 RMSE and correlation before and after the shift
 attributes of the forecast and observed entities
 error components: displacement, volume, pattern
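The three error components can be sketched from the total and shifted MSE: the displacement component is the MSE reduction achieved by the optimal shift, the volume component is the squared difference of the means after the shift, and the pattern component is the remainder. This follows the spirit of Ebert and McBride (2000); the exact operational formulation may differ:

```python
import numpy as np

def cra_error_components(fcst, obs, shifted_fcst):
    """Decompose the total MSE into displacement, volume and pattern
    components, given the forecast after the optimal shift.  The three
    components sum to the total MSE by construction."""
    mse_total = np.mean((fcst - obs) ** 2)
    mse_shift = np.mean((shifted_fcst - obs) ** 2)
    displacement = mse_total - mse_shift              # removed by shifting
    volume = (shifted_fcst.mean() - obs.mean()) ** 2  # mean (bias) error
    pattern = mse_shift - volume                      # what remains
    return {"displacement": displacement, "volume": volume,
            "pattern": pattern}
```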

MODE* (Davis et al. 2006)
* Method for Object-based Diagnostic Evaluation
Two parameters:
1. Convolution radius
2. Threshold
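The convolution-threshold step that uses these two parameters can be sketched as follows: smooth the field with a disc-shaped kernel of the given radius, threshold the smoothed field, and label the connected regions as objects. This is a from-scratch illustration (the real MODE implementation lives in the MET package); 4-connectivity and zero padding are simplifying choices:

```python
import numpy as np

def identify_objects(field, radius, threshold):
    """MODE-style object identification: disc convolution, threshold,
    then connected-component labelling by flood fill."""
    ny, nx = field.shape
    # disc-shaped averaging kernel of the given radius
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    kernel = (x ** 2 + y ** 2 <= radius ** 2).astype(float)
    kernel /= kernel.sum()
    # brute-force 2-D convolution with zero padding
    fp = np.pad(field, radius)
    smooth = np.zeros((ny, nx))
    for dy in range(2 * radius + 1):
        for dx in range(2 * radius + 1):
            smooth += kernel[dy, dx] * fp[dy:dy + ny, dx:dx + nx]
    mask = smooth >= threshold
    # label connected regions (4-connectivity)
    labels = np.zeros((ny, nx), dtype=int)
    n_objects = 0
    for i in range(ny):
        for j in range(nx):
            if mask[i, j] and labels[i, j] == 0:
                n_objects += 1
                stack = [(i, j)]
                while stack:
                    a, b = stack.pop()
                    if (0 <= a < ny and 0 <= b < nx
                            and mask[a, b] and labels[a, b] == 0):
                        labels[a, b] = n_objects
                        stack += [(a + 1, b), (a - 1, b),
                                  (a, b + 1), (a, b - 1)]
    return labels, n_objects
```

A larger convolution radius merges nearby cells into fewer, smoother objects; a higher threshold shrinks them.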

MODE object matching / merging
Compare attributes: centroid location, intensity distribution, area, orientation, etc.
When objects are not matched, measure: false alarms, missed events, rain volume, etc.
Example: 24-h forecast of 1-h rainfall on 1 June 2005.

MODE methodology
 Identification: convolution-threshold process
 Measure attributes
 Merging: merge single objects into composite objects (fuzzy logic approach)
 Matching: compute interest values and identify matched pairs
 Comparison: compare forecast and observed attributes
 Summarize: accumulate and examine comparisons across many cases

Cluster analysis approach (Marzban and Sandgathe 2006)
Goal: assess the agreement between fields using clusters identified by agglomerative hierarchical cluster analysis (CA).
Optimize the clusters (and the number of clusters) based on:
 binary images (x-y optimization)
 magnitude images (x-y-p optimization)
Compute the Euclidean distance between clusters in the forecast and observed fields (in x-y and x-y-p space).
Example: MM5 precipitation forecasts, 8 clusters identified in x-y-p space.

Cluster analysis example: Stage IV vs. COAMPS. Error = average distance between matched clusters in x-y-p space (shown as log_e error).

Composite approach (Nachamkin 2004)
Goal: characterize the distributions of errors from both a forecast and an observation perspective.
Procedure:
 identify events of interest in the forecasts
 define a kernel and collect coordinated samples around each event center
 compare the forecast PDF to the observed PDF
 repeat the process for observed events
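The kernel-sampling step can be sketched as collecting fixed-size windows of a field around event centres and averaging them; running it once with forecast-defined centres and once with observation-defined centres gives the two conditional composites compared above. A minimal sketch (events too close to the domain edge are simply skipped, and the function name is ours):

```python
import numpy as np

def composite(fields, centers, half):
    """Average (2*half+1)^2 kernels of each field around its event
    centre; `fields` and `centers` are matched lists."""
    samples = []
    for f, (i, j) in zip(fields, centers):
        # skip events whose kernel would fall off the domain edge
        if half <= i < f.shape[0] - half and half <= j < f.shape[1] - half:
            samples.append(f[i - half:i + half + 1, j - half:j + half + 1])
    return np.mean(samples, axis=0)
```

Comparing the composite of observed rain around predicted events with the composite around observed events reveals conditional biases that point verification hides.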

Composite example: compare kernel grid-averaged values  average rain (mm) given an event was predicted, and average rain (mm) given an event was observed (forecast shaded, observations contoured).

Field verification  evaluate phase errors

Feature calibration and alignment (Hoffman et al. 1995; Nehrkorn et al. 2003)
Error decomposition: e = X_f(r) - X_v(r), where X_f(r) is the forecast, X_v(r) is the verifying analysis, and r is the position.
e = e_p + e_b + e_r, where
 e_p = X_f(r) - X_d(r) is the phase error
 e_b = X_d(r) - X_a(r) is the local bias error
 e_r = X_a(r) - X_v(r) is the residual error
[Figure: original forecast X_f(r), 500 mb analysis X_v(r), adjusted forecast X_a(r), forecast adjustment, residual error e_r.]

Forecast quality measure (FQM) (Keil and Craig 2007)
Combines a distance measure and an intensity difference measure:
 pyramidal image matching (optical flow) gives a vector displacement field  e_distance
 unmatched features are penalized for their intensity errors  e_intensity
The two are combined into a single forecast quality measure. [Figure: satellite image, original model field, morphed model field.]

Conclusions
What method should you use for model verification? It depends on what question(s) you would like to address. There are many spatial verification approaches:
 Scale decomposition  scale-dependent error
 Fuzzy (neighborhood)  credit for "close" forecasts
 Object-oriented  attributes of features
 Field verification  phase error