Forecast Verification Research
Beth Ebert (Bureau of Meteorology) and Laurie Wilson (Meteorological Service of Canada)
WWRP-JSC, Geneva, April 2012
Verification working group members
Beth Ebert (BOM, Australia)
Laurie Wilson (CMC, Canada)
Barb Brown (NCAR, USA)
Barbara Casati (Ouranos, Canada)
Caio Coelho (CPTEC, Brazil)
Anna Ghelli (ECMWF, UK)
Martin Göber (DWD, Germany)
Simon Mason (IRI, USA)
Marion Mittermaier (Met Office, UK)
Pertti Nurmi (FMI, Finland)
Joel Stein (Météo-France)
Yuejian Zhu (NCEP, USA)
Aims
Verification component of WWRP, in collaboration with WGNE, WCRP, CBS
Develop and promote new verification methods
Training on verification methodologies
Ensure forecast verification is relevant to users
Encourage sharing of observational data
Promote importance of verification as a vital part of experiments
Promote collaboration among verification scientists, model developers and forecast providers
Relationships / collaboration
WGCM, WGNE, TIGGE, SDS-WAS, HyMeX, Polar Prediction, SWFDP, YOTC, Subseasonal to Seasonal Prediction, CG-FV, WGSIP, SRNWP, COST-731
FDPs and RDPs
Sydney 2000 FDP
Beijing 2008 FDP/RDP
SNOW-V10 RDP
FROST-14 FDP/RDP
MAP D-PHASE
Other FDPs: Lake Victoria, Typhoon Landfall FDP, Severe Weather FDP
Intend to establish collaboration with SERA on verification of tropical cyclone forecasts and other high-impact weather warnings
SNOW-V10: nowcast and regional model verification at observation sites
User-oriented verification: tuned to the decision thresholds of VANOC, covering the whole Olympic period (a minimal threshold-based scoring sketch follows below)
Model-oriented verification: model forecasts verified in parallel, January to August 2010
A relatively high concentration of observational data was available for the Olympic period.
Status: significant effort to process and quality-control observations; multiple observations at some sites provide an estimate of observation error
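Illustrative only (not the SNOW-V10 code): a minimal Python sketch of threshold-based verification of a continuous forecast against observations via a 2x2 contingency table. The wind-speed threshold and the data are invented for illustration; real decision thresholds would come from the user (e.g. VANOC).

```python
# Minimal sketch: threshold-based (contingency table) verification.
# Threshold and data are assumed example values, not SNOW-V10 settings.
import numpy as np

def contingency_scores(forecast, observed, threshold):
    """Return POD, FAR and frequency bias for events >= threshold."""
    fcst_yes = np.asarray(forecast) >= threshold
    obs_yes = np.asarray(observed) >= threshold
    hits = np.sum(fcst_yes & obs_yes)
    misses = np.sum(~fcst_yes & obs_yes)
    false_alarms = np.sum(fcst_yes & ~obs_yes)
    pod = hits / (hits + misses) if hits + misses else np.nan
    far = false_alarms / (hits + false_alarms) if hits + false_alarms else np.nan
    bias = (hits + false_alarms) / (hits + misses) if hits + misses else np.nan
    return pod, far, bias

# Synthetic wind-speed data (m/s) and an assumed 10 m/s warning threshold
rng = np.random.default_rng(0)
obs = rng.gamma(2.0, 4.0, size=500)
fcst = obs + rng.normal(0.0, 2.0, size=500)
print(contingency_scores(fcst, obs, threshold=10.0))
```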
Wind speed verification (model-oriented)
Visibility verification (user-oriented)
FROST-14
User-focused verification
Threshold-based, as in SNOW-V10
Timing of events – onset, duration, cessation
Real-time verification
Road weather forecasts?
Model-focused verification
Neighborhood verification of high-resolution NWP (see the sketch below)
Spatial verification of ensembles
Account for observation uncertainty
Anatoly Muravyev and Evgeny Atlaskin came to the Verification Methods Workshop in December and will be working on the FROST-14 verification.
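A hedged sketch of neighborhood verification, using the Fractions Skill Score (Roberts and Lean, 2008) as one common choice; this is not the FROST-14 implementation, and the fields, threshold and window sizes are synthetic examples.

```python
# Minimal neighbourhood-verification sketch: Fractions Skill Score (FSS).
# Fields, threshold and window lengths are assumed example values.
import numpy as np
from scipy.ndimage import uniform_filter

def fractions_skill_score(fcst, obs, threshold, window):
    """FSS of 2-D forecast vs observed fields for events >= threshold."""
    pf = uniform_filter((fcst >= threshold).astype(float), size=window, mode="constant")
    po = uniform_filter((obs >= threshold).astype(float), size=window, mode="constant")
    mse = np.mean((pf - po) ** 2)
    ref = np.mean(pf ** 2) + np.mean(po ** 2)
    return 1.0 - mse / ref if ref > 0 else np.nan

rng = np.random.default_rng(1)
obs_field = rng.gamma(0.8, 3.0, size=(200, 200))   # synthetic precipitation field
fcst_field = np.roll(obs_field, shift=5, axis=1)   # same field with a displacement error
for win in (1, 5, 15, 45):                          # skill should rise with neighbourhood size
    print(win, fractions_skill_score(fcst_field, obs_field, threshold=5.0, window=win))
```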
Promotion of best practice
Recommended methods for evaluating cloud and related parameters
Contents: Introduction; Data sources; Designing a verification or evaluation study; Verification methods; Reporting guidelines; Summary of recommendations
The cloud document is just out! Originally requested by WGNE, it has been in the works for some time. It has recommendations for standard verification of cloud amount and related variables such as cloud base height and the vertical profile of cloud amount, using both point-based and spatial observations (satellite, cloud radar, etc.).
Promotion of best practice
Verification of tropical cyclone forecasts
Contents: Introduction; Observations and analyses; Forecasts; Current practice in TC verification – deterministic forecasts; Current verification practice – probabilistic forecasts and ensembles; Verification of monthly and seasonal tropical cyclone forecasts; Experimental verification methods; Comparing forecasts; Presentation of verification results
The JWGFVR is also preparing a document describing methods for verifying tropical cyclone forecasts, in support of GIFS-TIGGE and the WMO Typhoon Landfall FDP. It will include standard methods for assessing track and intensity forecasts, probabilistic and ensemble forecast verification, and a review of recent developments in this field. In addition to track and intensity, we also recommend methodologies for TC-related hazards – wind, heavy precipitation, storm surge.
Verification of deterministic TC forecasts
Beyond track and intensity…
Most tropical cyclone verification (at least operationally) focuses on only two variables: track location and intensity. Since a great deal of the damage associated with tropical storms is related to other factors, this seems overly limiting. Some additional important variables: storm structure and size; precipitation; storm surge; landfall time, position, and intensity; consistency; uncertainty; information to help forecasters (e.g., steering flow); others? Verification should also be tailored to help forecasters with their high-pressure job and multiple sources of guidance information. A basic track-error sketch follows below.
[Figures: track error distribution; TC genesis; wind speed; precipitation (MODE spatial method)]
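For reference, a minimal sketch of the usual deterministic track-error calculation: the great-circle distance between forecast and observed (best-track) storm centres. The positions below are made up for illustration; this is not any centre's operational code.

```python
# Great-circle (haversine) track error between forecast and observed centres.
import math

def track_error_km(lat_f, lon_f, lat_o, lon_o, radius_km=6371.0):
    """Distance in km between forecast (lat_f, lon_f) and observed (lat_o, lon_o)."""
    phi1, phi2 = math.radians(lat_f), math.radians(lat_o)
    dphi = math.radians(lat_o - lat_f)
    dlam = math.radians(lon_o - lon_f)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2.0 * radius_km * math.asin(math.sqrt(a))

# 72-h forecast position vs best-track position (made-up numbers)
print(round(track_error_km(21.5, 128.0, 22.3, 126.9), 1), "km")
```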
Verification of probabilistic TC forecasts
[Figure: TIGGE ensemble intensity error, before and after bias correction. Courtesy Yu Hui (STI)]
Issues in TC verification
Observations contain large uncertainties.
Some additional important variables: storm structure and size; rapid intensification; landfall time, position, and intensity; precipitation; storm surge; consistency; uncertainty; information to help forecasters (e.g., steering flow). Verification should be tailored to help forecasters with their high-pressure job and multiple sources of guidance information.
False alarms (including forecast storms outliving the actual storm) and misses (storms that were not forecast) are currently ignored.
How best to evaluate ensemble TC predictions?
Promotion of best practice
Verification of forecasts from mesoscale models (early DRAFT)
Purposes of verification
Choices to be made: surface and/or upper-air verification? Point-wise and/or spatial verification?
Proposal for a 2nd Spatial Verification Intercomparison Project in collaboration with Short-Range NWP (SRNWP)
Spatial Verification Method Intercomparison Project
International comparison of many new spatial verification methods
Phase 1 (precipitation) completed: methods applied by researchers to the same datasets (precipitation; perturbed cases; idealized cases); subjective forecast evaluations; Weather and Forecasting special collection
Phase 2 in planning stage: complex terrain; MAP D-PHASE / COPS dataset; wind and precipitation; timing errors
Outreach and training
Verification workshops and tutorials (on-site, travelling)
EUMETCAL training modules
Verification web page
Sharing of tools
5th International Verification Methods Workshop Melbourne 2011
Tutorial: 32 students from 23 countries; lectures and exercises (students took tools home); group projects presented at the workshop
Workshop: ~120 participants
Topics: ensembles and probabilistic forecasts; seasonal and climate; aviation verification; user-oriented verification; diagnostic methods and tools; tropical cyclones and high-impact weather; weather warning verification; uncertainty
Special issue of Meteorological Applications in early 2013
THANKS FOR WWRP'S SUPPORT!!
We had some trouble with participants getting their visas on time – some countries missed out (Ethiopia); China came late. We could use advice/help from WMO on this.
Seamless verification
Seamless forecasts – consistent across space/time scales; single modelling system or blended; likely to be probabilistic / ensemble
[Schematic: forecast types from nowcasts and very short range through NWP, sub-seasonal, seasonal and decadal prediction to climate change, arranged by spatial scale (local point, regional, global) and forecast aggregation time (minutes, hours, days, weeks, months, years, decades)]
Which scales / phenomena are predictable?
Different user requirements at different scales (timing, location, …)
"Seamless verification" – consistent across space/time scales
Modelling perspective – is my model doing the right thing?
Process approaches: LES-style verification of NWP runs (first few hours); T-AMIP style verification of coupled / climate runs (first few days); single column model
Statistical approaches: spatial and temporal spectra; spread-skill; marginal distributions (histograms, etc.; Perkins et al., J. Clim. 2007)
It was not clear to the group how to define seamless verification, and the WG had a lively discussion on this topic. One possible interpretation is consistent verification across a range of scales, for example by applying the same verification scores to all forecasts being verified to allow comparison. This would entail greater time and space aggregation as longer forecast ranges are verified. Averaging could be applied to the EPS medium range and monthly time range, as these two forecast ranges have an overlapping period. Similarly, the concept of seamless verification could be applied to the EPS medium-range forecast and the seasonal forecast. For example, verification scores could be calculated using tercile exceedance, with ERA-Interim used as the reference system (a minimal sketch follows below). Verification across scales could involve conversion of forecast types, for example from precipitation amounts (weather scales) to terciles (climate scales). A probabilistic framework would likely be the best approach to connect weather and climate scales.
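A minimal sketch of one of the options above: scoring ensemble forecasts of any range as probabilities of upper-tercile exceedance with the Brier score, with the tercile taken from a climatological reference (e.g. a reanalysis such as ERA-Interim). The data, sizes and tercile source are assumptions for illustration, not an agreed JWGFVR procedure.

```python
# Brier score for upper-tercile exceedance, applicable across forecast ranges.
# Data and the climatological sample are synthetic stand-ins.
import numpy as np

def brier_score_upper_tercile(ens, obs, clim):
    """ens: (n_cases, n_members); obs: (n_cases,); clim: climatological sample."""
    tercile = np.quantile(clim, 2.0 / 3.0)        # upper-tercile threshold from climatology
    prob = np.mean(ens > tercile, axis=1)         # ensemble probability of exceedance
    outcome = (obs > tercile).astype(float)       # observed exceedance (0/1)
    return np.mean((prob - outcome) ** 2)

rng = np.random.default_rng(2)
clim_sample = rng.normal(0.0, 1.0, size=10000)    # stand-in for a reanalysis climatology
truth = rng.normal(0.0, 1.0, size=300)
ensemble = truth[:, None] + rng.normal(0.0, 0.7, size=(300, 25))
print(brier_score_upper_tercile(ensemble, truth, clim_sample))
```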
"Seamless verification" – consistent across space/time scales
User perspective – can I use this forecast to help me make a better decision?
Neighborhood approaches – spatial and temporal scales with useful skill
Generalized discrimination score (Mason & Weigel, MWR 2009) – consistent treatment of binary, multi-category, continuous and probabilistic forecasts (a minimal sketch follows below)
Calibration – accounting for space-time dependence of bias and accuracy?
Conditional verification based on larger-scale regime
Extreme Forecast Index (EFI) approach for extremes
JWGFVR activity: proposal for research in verifying forecasts at the weather-climate interface; assessment component of UK INTEGRATE project
Models may be seamless – but user needs are not! Nowcasting users can have very different needs for products than short-range forecasting users (more localized in space and time; a wider range of products which are not standard in short-range NWP and may be difficult to produce with an NWP model; some products routinely measured, others not; …). Temporal and spatial resolution go together: on small spatial/temporal scales, modelling and verification should be inherently probabilistic. The predictability of phenomena generally decreases (greatly) from short to very short time/spatial scales. How can such limits to predictability be assessed and shown in verification? Do we need to distinguish "normal" and "extreme" weather? Nowcasting, more than short-range forecasting, is interested not just in the intensities of phenomena but also in their exact timing, duration and location; insight into errors of timing and location is needed. These place different demands on observations, possibly not to be met with the same data sources.
From Marion: We have two work packages kicking off this financial year (i.e. now or soon). I am co-chair of the assessment group for INTEGRATE, which is our three-year programme for improving our global modelling capability. The INTEGRATE project follows on from the CAPTIVATE project; its pages are hosted on the collaboration server (a password is needed; as UM partners you have access to these pages). The broad aim of INTEGRATE is to pull through model developments from components of the physical earth system (atmosphere, oceans, land, sea-ice and land-ice, and aerosols) and integrate them into a fully coupled global prediction system, for use across weather and climate timescales. The project attempts to begin the process of integrating coupled ocean-atmosphere (COA) forecast data into a conventional weather forecast verification framework, and to consider the forecast skill of surface weather parameters in the existing operational seasonal COA systems, GloSea4 and 5, over the first two weeks of the forecast. Within that I am focusing on applying weather-type verification tools on global, longer time scales, monthly to seasonal. A part of this is a comparison of atmosphere-only (AO) and coupled ocean-atmosphere (COA) forecasts for the first 15 days (initially). Both approach the idea of seamless forecasting (can we use COA models to do NWP-type forecasts for the first 15 days?) and seamless verification (finding common ground in the way we compare longer simulations and short-range NWP).
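A hedged sketch of the simplest case of the generalized discrimination score (Mason & Weigel, MWR 2009): binary observations with continuous or probabilistic forecasts, where the score reduces to a two-alternative forced-choice (2AFC) probability that an event case received a higher forecast than a non-event case (ties count one half). The full score also handles multi-category and ensemble inputs, which this sketch does not attempt; the data are synthetic.

```python
# 2AFC / generalized discrimination sketch for binary observations.
# 0.5 indicates no discrimination skill, 1.0 perfect discrimination.
import numpy as np

def two_afc_binary_obs(forecast, event):
    f = np.asarray(forecast, dtype=float)
    e = np.asarray(event, dtype=bool)
    f_event, f_nonevent = f[e], f[~e]
    wins = (f_event[:, None] > f_nonevent[None, :]).sum()
    ties = (f_event[:, None] == f_nonevent[None, :]).sum()
    return (wins + 0.5 * ties) / (f_event.size * f_nonevent.size)

rng = np.random.default_rng(3)
event_obs = rng.random(400) < 0.3                                # observed events (binary)
fcst_prob = np.clip(0.3 + 0.4 * (event_obs.astype(float) - 0.3)  # forecasts loosely tied to events
                    + rng.normal(0.0, 0.2, 400), 0.0, 1.0)
print(two_afc_binary_obs(fcst_prob, event_obs))
```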
Final thoughts
JWGFVR would like to strengthen its relationship with: WWRP Tropical Meteorology WG; Typhoon Landfall FDP; YOTC; TIGGE; Subseasonal to Seasonal Prediction; CLIVAR
"Good will" participation (beyond advice) in WWRP and THORPEX projects is getting harder to provide
Videoconferencing
Capacity building of "local" scientists
Include a verification component in funded projects
Thank you
Summary of recommendations for cloud verification
We recommend that the purpose of a verification study is considered carefully before commencing. Depending on the purpose:
For user-oriented verification we recommend that at least the following cloud variables be verified: total cloud cover and cloud base height (CBH). If possible, low, medium and high cloud should also be considered. An estimate of spatial bias is highly desirable, through the use of, e.g., satellite cloud masks.
More generally, we recommend the use of remotely sensed data such as satellite imagery for cloud verification. Satellite analyses should not be used at short lead times, because of a lack of independence. For model-oriented verification there is a preference for a comparison of simulated and observed radiances, but ultimately what is used should depend on the pre-determined purpose.
For model-oriented verification the range of parameters of interest is more diverse, and the purpose will dictate the parameter and choice of observations, but we strongly recommend that vertical profiles are considered in this context.
We also recommend the use of post-processed cloud products created from satellite radiances for user- and model-oriented verification, but these should be avoided for model inter-comparisons if the derived satellite products require model input, since the model used to derive the product could be favoured.
We recommend that verification be done against both: (a) gridded observations and vertical profiles (model-oriented verification), with model inter-comparison done on a common latitude/longitude grid that accommodates the coarsest resolution; the use of cloud analyses should be avoided because of any model-specific "contamination" of observation data sets; and (b) surface station observations (user-oriented verification).
For synoptic surface observations we recommend that all observations be used, but if different observation types exist (e.g., automated and manual) they should not be mixed; automated cloud base height observations should be used for low thresholds (which are typically those of interest, e.g., for aviation).
We recognize that a combination of observations is required when assessing the impact of model physics changes. We recommend the use of cloud radar and lidar data as available, but recognize that this may not be a routine activity.
We recommend that verification data and results be stratified by lead time, diurnal cycle, season, and geographical region.
The recommended set of metrics is listed in Section 4. Higher priority should be given to those labeled with three stars. The optional measures are also desirable.
We recommend that the verification of climatology forecasts be reported along with the forecast verification. The verification of persistence forecasts and the use of model skill scores with respect to persistence, climatology, or random chance is highly desirable.
For model-oriented verification in particular, it is recommended that all aggregate verification scores be accompanied by 95% confidence intervals; reporting of the median and inter-quartile range for each score is highly desirable (a minimal bootstrap sketch follows below).
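As an illustration of the last recommendation, a minimal sketch of a bootstrap 95% confidence interval (plus median and inter-quartile range) for one aggregate score, here the mean error of total cloud cover. The data are synthetic, and a real application would need to respect serial correlation (e.g. block resampling by day).

```python
# Bootstrap 95% CI, median and IQR for an aggregate score (mean error).
# Synthetic cloud-cover data in oktas; simple (non-block) resampling.
import numpy as np

def bootstrap_summary(fcst, obs, n_boot=2000, seed=0):
    rng = np.random.default_rng(seed)
    errors = np.asarray(fcst) - np.asarray(obs)
    scores = np.array([np.mean(rng.choice(errors, size=errors.size, replace=True))
                       for _ in range(n_boot)])
    ci_lo, med, ci_hi = np.percentile(scores, [2.5, 50.0, 97.5])
    iqr = np.subtract(*np.percentile(scores, [75.0, 25.0]))
    return {"mean_error": errors.mean(), "95%_CI": (ci_lo, ci_hi), "median": med, "IQR": iqr}

rng = np.random.default_rng(4)
obs_cloud = rng.integers(0, 9, size=365)                        # total cloud cover in oktas
fcst_cloud = np.clip(obs_cloud + rng.integers(-2, 3, size=365), 0, 8)
print(bootstrap_summary(fcst_cloud, obs_cloud))
```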