Spatial Verification Intercomparison Meeting, 20 February 2007, NCAR

Presentation transcript:

Fuzzy Verification
Ebert, E.E., 2007: Fuzzy verification of high resolution gridded forecasts: A review and proposed framework. Meteorol. Appl., submitted.
Available online at http://www.bom.gov.au/bmrc/wefor/staff/eee/beth_ebert.htm

Why is it called "fuzzy"? Squint your eyes!
[Figure: an observation field and a forecast field shown side by side]

Data used
- Any spatial forecasts and observations
- Best suited for high resolution
- Most convenient if the forecast is on a grid
- Observations can be gridded or point data

How does it work?
- Look in a space / time neighborhood around the point of interest
- Evaluate using categorical, continuous, or probabilistic scores / methods
- Only the spatial neighborhood will be considered for the moment
[Figure: a space-time neighborhood spanning t-1, t, t+1, and a histogram of forecast values (frequency vs. forecast value)]
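As an illustration of the neighborhood idea, here is a minimal Python sketch, assuming a 2-D numpy field; the function name and the edge clipping are illustrative assumptions, and a time neighborhood would simply add the same windowing over t-1, t, t+1.

```python
import numpy as np

def spatial_neighborhood(field, j, i, half_width):
    """Values in the square neighborhood of half-width `half_width`
    centered on grid point (j, i), clipped at the grid edges."""
    j0, j1 = max(j - half_width, 0), min(j + half_width + 1, field.shape[0])
    i0, i1 = max(i - half_width, 0), min(i + half_width + 1, field.shape[1])
    return field[j0:j1, i0:i1].ravel()
```

The returned sample can then be summarized for whichever score family is wanted: its mean for continuous scores, an exceedance count against a threshold for categorical scores, or an event frequency for probabilistic scores.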

How does it work? (cont'd)
Fuzzy methods use one of two approaches to compare forecasts and observations:
- single observation – neighborhood forecast (SO-NF)
- neighborhood observation – neighborhood forecast (NO-NF)
[Figure: schematics of the two matching strategies, pairing an observation point or observation neighborhood with a forecast neighborhood]
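A minimal sketch of the two matching strategies, assuming forecast and observations are 2-D numpy fields on a common grid; `event_frequency` and the use of scipy's uniform_filter (a box average with reflecting edges) are implementation assumptions, not something prescribed by the talk.

```python
import numpy as np
from scipy.ndimage import uniform_filter  # box average over a square window

def event_frequency(field, threshold, size):
    """Fraction of points >= threshold in the size-by-size neighborhood
    around each grid point."""
    return uniform_filter((field >= threshold).astype(float), size=size)

def so_nf_pairs(obs, fcst, threshold, size):
    """Single observation - neighborhood forecast: raw observed events
    matched against forecast neighborhood event frequencies."""
    return (obs >= threshold).astype(float), event_frequency(fcst, threshold, size)

def no_nf_pairs(obs, fcst, threshold, size):
    """Neighborhood observation - neighborhood forecast: both fields
    reduced to neighborhood event frequencies before comparison."""
    return event_frequency(obs, threshold, size), event_frequency(fcst, threshold, size)
```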

Methods and decision models
Many fuzzy methods have been developed in recent years. What mainly distinguishes them is whether they are NO-NF or SO-NF*, and what the decision model is for what constitutes a useful forecast. Barbara Casati pointed out in Reading that her intensity-scale method really differs from the rest of these methods in that it isolates the errors at each scale, whereas the fuzzy methods essentially smooth out the behavior by scale.
*NO-NF = neighborhood observation – neighborhood forecast; SO-NF = single observation – neighborhood forecast

Information provided
Forecast performance depends on the scale and intensity of the event.

Strengths
- Knowing which scales have skill suggests the scales at which the forecast should be presented and trusted
- Can give good results for forecasts that verify poorly using an exact-match approach
- Suitable for discontinuous fields like precipitation
- Can be used to compare forecasts at different resolutions
- Multiple decision models and metrics (the direct approach is sketched below):
  - Direct approach → verification of intensities
  - Categorical approach → verification of binary events
  - Probabilistic approach → verification of event frequency
- Can be extended to the time domain
- Other diagnostics available from some methods (e.g. PP)
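One common realization of the direct approach is simple upscaling: average both fields to the scale of interest and verify the neighborhood-mean intensities. The sketch below assumes 2-D numpy fields and an integer upscaling factor; the function names and the RMSE choice are illustrative, not a prescription from the talk.

```python
import numpy as np

def upscale(field, factor):
    """Block-average a 2-D field by an integer factor, trimming the
    field so its dimensions are divisible by `factor`."""
    ny, nx = (s - s % factor for s in field.shape)
    blocks = field[:ny, :nx].reshape(ny // factor, factor, nx // factor, factor)
    return blocks.mean(axis=(1, 3))

def upscaled_rmse(fcst, obs, factor):
    """Direct approach: RMSE of neighborhood-mean intensities after
    both fields are upscaled by the same factor."""
    diff = upscale(fcst, factor) - upscale(obs, factor)
    return float(np.sqrt(np.mean(diff ** 2)))
```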

Weaknesses and limitations
- Less intuitive than object-based methods
- Imperfect scores for perfect forecasts for methods that match neighborhood forecasts to single observations (even a perfect forecast shows a mismatch wherever the neighborhood frequency differs from the point value)
- Information overload if all methods are invoked at once; let the appropriate decision model(s) guide the choice of method(s)
- Even for a single method there are lots of numbers to look at, and the scales and intensities with the best performance depend on the metric used (CSI, ETS, HK, etc.)

Example: 13 May 2005
[Figure: intensity-scale results for each method; circles mark where the best score was achieved]
The circles indicate the intensity-scale combination at which the best score was achieved. Depending on the method (i.e. on the decision model for a "good" forecast), different intensities and scales are selected. For this forecast the best performance tended to be at the larger scales for almost all methods, although there was less agreement on the intensities with the best skill.

Best performer on 13 May 2005
[Figure: two panels showing which model (wrf4ncar, wrf2caps, or wrf4ncep) scored best at each scale (4–260 km) and threshold (0.01–2.00"): Fractions skill score – FSS (neighborhood obs – neighborhood fcst) and Multi-event contingency table – HK (single obs – neighborhood fcst)]
FSS is a neighborhood observation – neighborhood forecast method, which is model-oriented. The multi-event contingency table is single observation – neighborhood forecast, which is user-oriented. Even so, for most scales and intensities the two approaches agreed on which model performed best.
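For reference, here is a minimal sketch of the two scores compared on this slide, assuming 2-D numpy fields on a common grid. The FSS follows Roberts and Lean, FSS = 1 - MSE(Pf, Po) / (mean(Pf^2) + mean(Po^2)), where Pf and Po are neighborhood event frequencies; the use of scipy's uniform_filter and the "event anywhere in the forecast neighborhood counts as a yes forecast" reading of the multi-event contingency table are implementation assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fss(fcst, obs, threshold, size):
    """Fractions Skill Score: 1 - MSE(Pf, Po) / (mean(Pf^2) + mean(Po^2)),
    with Pf, Po the neighborhood event frequencies. 1 = perfect match."""
    pf = uniform_filter((fcst >= threshold).astype(float), size=size)
    po = uniform_filter((obs >= threshold).astype(float), size=size)
    denom = np.mean(pf ** 2) + np.mean(po ** 2)
    return 1.0 - np.mean((pf - po) ** 2) / denom if denom > 0 else np.nan

def multi_event_hk(fcst, obs, threshold, size):
    """Hanssen-Kuipers score (POD - POFD) from a multi-event contingency
    table: an observed point event counts as a hit if the forecast has
    the event anywhere in the surrounding neighborhood."""
    fcst_any = uniform_filter((fcst >= threshold).astype(float), size=size) > 0
    obs_evt = obs >= threshold
    hits = np.sum(fcst_any & obs_evt)
    misses = np.sum(~fcst_any & obs_evt)
    false_alarms = np.sum(fcst_any & ~obs_evt)
    corr_negatives = np.sum(~fcst_any & ~obs_evt)
    pod = hits / (hits + misses) if hits + misses else np.nan
    pofd = false_alarms / (false_alarms + corr_negatives) if false_alarms + corr_negatives else np.nan
    return pod - pofd
```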