Joint Working Group on Forecast Verification Research


1 Joint Working Group on Forecast Verification Research
Based on the report presented to WWRP (Barb) and WGNE (me) in 2009.
Report to SSC-9, October 25, 2016
Laurence Wilson, with thanks to the members of the working group

2 Outline
Role in WWRP
Examples (highlights) of activities:
1. User-oriented verification from ECMWF
2. MesoVICT
3. Ice verification in polar regions
4. HIGHWAY
5. Verification Challenge
6. 7IVMW (7th International Verification Methods Workshop)

3 Role of JWGFVR
Similar in some ways to WGNE:
Support of, and links to, all other WGs and projects
Promotion of good verification practices
Research, with a major focus on “user-oriented” verification
Training

4 Fit to WWRP Plan
Mostly AA1; research in AA5
Work with uncertainty and probability verification in AA2
“Best verification practice” in most of the other parts of the plan
Includes adapting or developing new techniques, e.g. for hydrology, applications, urban prediction, etc.

5 HRES T2m skill – Europe (from T. Haiden)
[Figure: Day 2 T2m SDEV curves verified against observations, against the analysis at observation locations, and against the full analysis; the separations between curves mark the representativeness and sampling effects.]
At T+60, using the analysis instead of observations for verification at observation locations gives K difference in SDEV (representativeness). Verifying against the analysis just at observation locations (compared to everywhere) gives only 0.2 K difference in SDEV (sampling).

6 HRES T2m skill – Europe, Day 10
[Figure: Day 10 T2m SDEV curves for the same three verification references, with representativeness and sampling effects marked.]
At T+240, using the analysis instead of observations for verification at observation locations gives K difference in SDEV (representativeness). Verifying against the analysis just at observation locations (compared to everywhere) gives only K difference in SDEV (sampling). So at this lead time, the effect of representativeness is smaller than the ocean/land effect on scores.
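To make the three verification references concrete, here is a minimal numpy sketch (all fields, station locations, and error magnitudes are invented for illustration; this is not the ECMWF computation). It contrasts the error standard deviation against point observations, against the analysis sampled at the observation locations, and against the full analysis:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 2 m temperature fields on a small grid (illustration only).
ny, nx = 50, 80
analysis = 285.0 + rng.normal(0.0, 1.0, (ny, nx))      # "truth" proxy
forecast = analysis + rng.normal(0.3, 0.8, (ny, nx))   # forecast with error

# A few hypothetical station locations and their observations.
stations = [(5, 10), (20, 40), (35, 70), (44, 3)]
iy = np.array([s[0] for s in stations])
ix = np.array([s[1] for s in stations])
# Observations differ from the gridded analysis by a representativeness error.
obs = analysis[iy, ix] + rng.normal(0.0, 0.5, len(stations))

def sdev(err):
    """Standard deviation of the error (bias removed)."""
    return np.std(err - err.mean())

print("SDEV vs observations:          ", sdev(forecast[iy, ix] - obs))
print("SDEV vs analysis at obs points:", sdev(forecast[iy, ix] - analysis[iy, ix]))
print("SDEV vs analysis everywhere:   ", sdev(forecast - analysis))
```

The gap between the first two numbers reflects representativeness (point observations versus grid-box analysis values); the gap between the last two reflects sampling (a handful of stations versus the whole domain).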

7 MesoVICT
To determine the characteristics of spatial verification methods in complex terrain
Precipitation accumulation and wind
Ensembles
High-quality, model-independent analysis
Possibility of model reruns to add to the dataset
2nd Workshop in Bologna, Sept 2016
Basic cases have been run with some of the methods
Lots of interest and activity in model reruns

8 MesoVICT – experiment design

9 Spatial methods: examples and plans for MesoVICT
Spatial method examples:
CRA (Ebert and McBride 2000)
Fractions Skill Score (FSS; Roberts and Lean 2008), a neighborhood method (a sketch of the computation follows below)
SAL (Wernli et al. 2008)
Field deformation (image warping); here the verification group could offer help to the data assimilation community?
B. Casati’s wavelet analysis method: a way to do model-independent analysis of surface data AND keep track of spatial resolution variations over the domain
MODE (Davis et al. 2006), feature-based; continues to be developed, the most comprehensive set of tools
Plans for MesoVICT:
3rd Workshop, probably the last one, in 2018; analysis to be completed by then
Special “collection” of papers planned to document the output.
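Since the FSS recurs throughout MesoVICT, a minimal sketch of its computation may help. This is a generic implementation assuming square neighborhoods and zero padding at the domain edges, not the MesoVICT reference code:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fss(forecast, observed, threshold, scale):
    """Fractions Skill Score (Roberts and Lean 2008).

    forecast, observed : 2-D precipitation fields on the same grid
    threshold          : exceedance threshold (e.g. mm/h)
    scale              : neighborhood length in grid points
    """
    # Binary exceedance fields.
    fb = (forecast >= threshold).astype(float)
    ob = (observed >= threshold).astype(float)
    # Fraction of exceedances within each square neighborhood.
    pf = uniform_filter(fb, size=scale, mode="constant")
    po = uniform_filter(ob, size=scale, mode="constant")
    mse = np.mean((pf - po) ** 2)
    mse_ref = np.mean(pf ** 2) + np.mean(po ** 2)
    return 1.0 - mse / mse_ref if mse_ref > 0 else np.nan

# Tiny synthetic demonstration (invented fields, illustration only).
rng = np.random.default_rng(1)
obs = rng.gamma(0.5, 2.0, (100, 100))
fcst = np.roll(obs, 5, axis=1)   # perfect field, displaced 5 grid points
for n in (1, 5, 11, 21):
    print(f"scale {n:2d}: FSS = {fss(fcst, obs, 1.0, n):.3f}")
```

For a displaced but otherwise correct forecast, the FSS rises toward 1 as the neighborhood scale grows past the displacement, which is exactly the scale dependence the score is designed to expose.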

10 SAL applied to the MesoVICT core case
Problem: SAL results are hard to interpret when there are many objects in the domain.
Suggested solution: subdivide the domain and compute SAL separately for each subdomain (a partial sketch of the SAL components follows below).
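For reference, a partial sketch of SAL following the Wernli et al. (2008) definitions of the amplitude component A and the first location component L1. The structure component S and the second location component L2 require object identification via thresholding and are omitted here; the function and argument names are mine:

```python
import numpy as np

def sal_a_l1(r_mod, r_obs, domain_diag):
    """Amplitude (A) and first location component (L1) of SAL
    (Wernli et al. 2008).

    r_mod, r_obs : 2-D precipitation fields on the same grid
    domain_diag  : largest distance between two domain boundary
                   points, in grid units
    """
    d_mod, d_obs = r_mod.mean(), r_obs.mean()
    # Amplitude: normalized difference of domain-mean precipitation,
    # bounded in [-2, 2]; 0 for a perfect amplitude forecast.
    a = (d_mod - d_obs) / (0.5 * (d_mod + d_obs))

    def center_of_mass(field):
        yy, xx = np.indices(field.shape)
        w = field.sum()
        return np.array([(yy * field).sum() / w, (xx * field).sum() / w])

    # L1: distance between the fields' centers of mass, normalized by
    # the domain diagonal; in [0, 1], 0 for a perfect location forecast.
    l1 = np.linalg.norm(center_of_mass(r_mod) - center_of_mass(r_obs)) / domain_diag
    return a, l1
```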

11 Analysis of the Fractions Skill Score properties for random precipitation fields and ECMWF forecasts + an FSS-based wind verification score
Gregor Skok (1), Nigel Roberts (2), Veronika Hladnik (3)
(1) Faculty of Mathematics and Physics, University of Ljubljana, Slovenia
(2) Met Office, UK
(3) Slovenian Environment Agency
MesoVICT 2nd workshop, 2016

12 Ice Verification - PPP
Problem: how to quantify the difference between forecast and observed ice areas.
Work by B. Casati at ECCC:
Evaluation of the Hausdorff metric and the Baddeley metric to quantify the difference
Applied to the Canadian Regional Ice Prediction System (RIPS)
Observations are from satellite
Account for observation errors/uncertainty

13 Modified Hausdorff, RIPS vs IMS
Reminder: the forward and backward (mean) distances are not symmetric: d(A→B) = 0 does not imply d(B→A) = 0. Differences are due to the inclusion of sea-ice features and over- or underestimation of sea-ice extent. Asymmetry is informative!
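A minimal sketch of the forward and backward mean distances for binary ice masks, using a distance transform. This is a generic implementation; the naming fwd = forecast-to-observed is my assumption, not necessarily the convention used in the RIPS study:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def mean_directed_distance(a, b):
    """Mean distance from points of binary set `a` to the nearest point
    of binary set `b` (one direction of the modified Hausdorff distance,
    Dubuisson and Jain 1994)."""
    # distance_transform_edt(~b) gives, at every grid point, the
    # distance to the nearest True point of `b`.
    dist_to_b = distance_transform_edt(~b)
    return dist_to_b[a].mean()

# Synthetic example: forecast ice area contained inside the observed one.
obs_ice = np.zeros((60, 60), dtype=bool)
fcst_ice = np.zeros((60, 60), dtype=bool)
obs_ice[10:50, 10:50] = True    # observed ice area
fcst_ice[20:45, 20:45] = True   # forecast: smaller, inside the observed area

fwd = mean_directed_distance(fcst_ice, obs_ice)   # forecast -> observed
bkw = mean_directed_distance(obs_ice, fcst_ice)   # observed -> forecast
print(f"forward  d(F->O) = {fwd:.2f} grid units")  # 0: F lies inside O
print(f"backward d(O->F) = {bkw:.2f} grid units")  # > 0: O extends past F
```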

14 Modified Hausdorff: interpreting the peaks
Primary peak, fwd >> bkw: the RIPS forecast/analysis underestimates the sea-ice extent because melt ponds are assimilated as water.
Secondary peak, bkw >> fwd: the RIPS forecast overestimates the sea-ice extent.

15 Cloud verification research
User-oriented, for the USAF: testing standard scores, MODE, and distance measures between cloud areas

16 Performance Diagram: Multiple measures
[Figure: performance diagram plotting POD against Success Ratio (1-FAR), with lines of equal bias and lines of equal CSI; points show GFS Raw (thresholds <22.5, <35, <50 and >60, >75) and GFS DCF, with the “Best” corner marked.]
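All the quantities on a performance diagram derive from the 2x2 contingency table; a small generic sketch (the counts below are invented for illustration):

```python
def performance_diagram_stats(hits, misses, false_alarms):
    """Scores shown on a performance diagram, from contingency counts."""
    pod = hits / (hits + misses)                    # probability of detection (y-axis)
    far = false_alarms / (hits + false_alarms)      # false alarm ratio
    sr = 1.0 - far                                  # success ratio (x-axis)
    bias = (hits + false_alarms) / (hits + misses)  # frequency bias
    csi = hits / (hits + misses + false_alarms)     # critical success index
    return {"POD": pod, "SR": sr, "bias": bias, "CSI": csi}

# Hypothetical counts, for illustration only.
print(performance_diagram_stats(hits=80, misses=20, false_alarms=40))
```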

17 Baddeley delta metric
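A minimal sketch of the Baddeley delta metric for binary images, under the common formulation with a distance cutoff c and norm order p (a generic implementation, not the code used in the RIPS evaluation):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def baddeley_delta(a, b, p=2, c=np.inf):
    """Baddeley delta metric between two binary images (Baddeley 1992).

    Compares the (optionally cut-off) distance maps of the two sets at
    every grid point, so it is sensitive to both location and extent
    differences. `c` is the distance cutoff, `p` the norm order.
    """
    da = distance_transform_edt(~a)   # distance of each pixel to set a
    db = distance_transform_edt(~b)   # distance of each pixel to set b
    wa, wb = np.minimum(da, c), np.minimum(db, c)
    return (np.abs(wa - wb) ** p).mean() ** (1.0 / p)

# Synthetic example: two overlapping square ice areas.
a = np.zeros((60, 60), dtype=bool); a[10:50, 10:50] = True
b = np.zeros((60, 60), dtype=bool); b[20:45, 20:45] = True
print(f"Baddeley delta (p=2): {baddeley_delta(a, b):.2f} grid units")
```

Unlike the directed mean Hausdorff distances above, this metric is symmetric in its two arguments, because it compares the two distance maps over the whole grid.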

18 HIGHWAY Project – funded by DFID UK, GBP 3.6 M
Goal is to improve knowledge and prediction of severe weather in the Lake Victoria basin
International cooperative project; proposal led by WMO
With special observation period datasets, an opportunity to significantly improve forecasting of high-impact weather phenomena
Verification is an integral part: asking that the verification component be funded as part of the project, but with the funding specifically set aside
What we will do:
Research into spatial verification methods for verifying the regional SWFDP products (example on the next slide)
Build an integrated database for use during and after the project for verification
“Catch-up” verification of operational regional models, and demonstrate improvements

19 Spatial verification of RSMC products
[Figure: forecast and observed event areas overlaid, with hits, misses, and false alarms marked.]
Spatial verification is currently a developing research topic; several different techniques have been proposed and tested. Verification of spatially-defined variables requires both spatially continuous observations and a spatial definition of the contingency table components.
Spatial contingency table:
Can be accomplished IF one has quasi-continuous spatial observation data (as sketched below)
Stephanie’s method
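A sketch of the spatial contingency table for gridded binary event fields, which is straightforward once quasi-continuous spatial observations are available (field shapes and values invented for illustration):

```python
import numpy as np

def spatial_contingency(forecast_event, observed_event):
    """Gridpoint-wise contingency table for binary event fields.

    forecast_event, observed_event : boolean 2-D arrays on the same
    grid (event forecast / event observed at each point).
    """
    hits = np.sum(forecast_event & observed_event)
    false_alarms = np.sum(forecast_event & ~observed_event)
    misses = np.sum(~forecast_event & observed_event)
    correct_negatives = np.sum(~forecast_event & ~observed_event)
    return hits, false_alarms, misses, correct_negatives

# Invented example fields: overlapping forecast and observed areas.
f = np.zeros((40, 40), dtype=bool); f[5:25, 5:25] = True
o = np.zeros((40, 40), dtype=bool); o[15:35, 15:35] = True
print(dict(zip(("hits", "false_alarms", "misses", "correct_neg"),
               spatial_contingency(f, o))))
```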

20 Verification of regional forecast map using HE

21 Verification Challenge – HIWeather, PPP, S2S
Forecast verification metric challenge
The public, industry, emergency managers and other decision makers can use weather, climate and impact forecasts more effectively in their decision making when the quality of forecasts is measured in terms that are meaningful to them. Yet very few metrics exist to measure forecast quality in user-relevant terms. To encourage the development of user-oriented verification approaches, the World Meteorological Organization's Joint Working Group on Forecast Verification Research (JWGFVR) is conducting a challenge to develop and demonstrate new user-oriented forecast verification metrics. The new metrics will support the WWRP High Impact Weather, Subseasonal to Seasonal Prediction (S2S), and Polar Prediction (PPP) projects. The JWGFVR warmly encourages all interested researchers and practitioners to participate. The deadline for entries is 31 October 2016.

22 7th International Verification Methods Workshop
“Forecast verification methods across time and space scales”
Berlin, May 3-6, 2017 (tutorial): basic verification methods, spatial methods, hands-on laboratory exercises and projects; students can bring their own data
Berlin, May 8-11, 2017 (science conference): invited lead speakers, contributed papers and posters; increased emphasis on S2S and climate verification
Announcement and website to come very soon

23 Thanks!

