Verification of multi-model ensemble forecasts using the TIGGE dataset

Verification of multi-model ensemble forecasts using the TIGGE dataset
Laurence J. Wilson, Environment Canada (Lawrence.wilson@ec.gc.ca, www.ec.gc.ca)
Anna Ghelli, ECMWF
With thanks to Marcel Vallée

Outline
- Introduction: TIGGE goals and verification
- Status of verification of TIGGE ensembles
  - Standard methods
  - Spatial methods
- Precipitation verification project: plan and early results
- Summary
See also the extended abstract.

Verification and the goals of TIGGE
- Enhance collaborative research
- Enable evolution towards a GIFS
- Develop ensemble combination methods and bias removal
Essential question: if we are going to move towards a GIFS, we must demonstrate that the benefits of combined ensembles are worth the effort relative to single-centre ensembles. In other words: do we get a “better” pdf by merging ensembles?
Verification must therefore be relevant and user-oriented.

Status of verification of TIGGE ensembles
- Mostly model-oriented verification so far: upper-air data, verified against analyses, standard scoring and case studies
- Studies on the TIGGE website:
  - Park et al., 2008: first study involving several months of data; found modest improvement with combined ensembles, with the greatest benefits in the tropics and the lower atmosphere; noted the “advantage” of using one’s own analysis as truth
  - Pappenberger et al.: case study of a flooding event in Romania; user-oriented, with Q-Q plots, RPS and RMSE as the main scores; the multi-model ensemble has the best average properties, ECMWF next

Studies using TIGGE data (cont’d)
- Johnson and Swinbank, 2008: study of calibration/combination methods; used only 3 ensembles; MSLP and 2 m temperature, verified against analyses; the multi-model ensemble improves on the individual ensembles, but not by much in general, and more at 2 m than at 500 mb
- Matsueda, 2008: comparison of 5 combined ensembles vs. ECMWF alone; RMSE skill and RPSS with ECMWF as the reference forecast (a sketch of the RPSS computation follows below); the multi-model EPS outperforms ECMWF at medium and longer ranges
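Since the Matsueda comparison rests on the ranked probability skill score with ECMWF as the reference, here is a minimal sketch of how such an RPSS can be computed. The category probabilities, number of categories and variable names are illustrative, not taken from the study.

```python
import numpy as np

def rps(forecast_probs, obs_category):
    """Ranked probability score for one case: squared distance between the
    cumulative forecast distribution and the cumulative (step) observation."""
    cum_fcst = np.cumsum(forecast_probs)          # probabilities over K ordered categories
    cum_obs = np.zeros_like(cum_fcst)
    cum_obs[obs_category:] = 1.0                  # observation falls in category obs_category
    return np.sum((cum_fcst - cum_obs) ** 2)

def rpss(fcst_probs, ref_probs, obs_cats):
    """RPSS of a forecast system relative to a reference forecast (here, ECMWF)."""
    rps_fcst = np.mean([rps(f, o) for f, o in zip(fcst_probs, obs_cats)])
    rps_ref = np.mean([rps(r, o) for r, o in zip(ref_probs, obs_cats)])
    return 1.0 - rps_fcst / rps_ref

# Toy example: 3 precipitation categories, 2 cases (numbers are made up)
multi = np.array([[0.2, 0.5, 0.3], [0.6, 0.3, 0.1]])   # multi-model probabilities
ecmwf = np.array([[0.3, 0.4, 0.3], [0.5, 0.3, 0.2]])   # reference probabilities
obs = np.array([1, 0])                                  # observed category per case
print(rpss(multi, ecmwf, obs))   # positive means the multi-model beats the reference
```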

Status of TIGGE-related verification
- Current efforts: verification of surface variables? This conference?
- Studies using spatial methods:
  - Ebert: application of the CRA technique to ensemble forecasts; so far, only ECMWF
  - Application of Wilks’ minimum spanning tree or T. Gneiting’s multi-dimensional rank histogram to TC centres (idea stage)
- Precipitation verification project (described below)

Precipitation verification project
Goal: to verify global 24 h precipitation forecasts from all the ensembles in the TIGGE archive, and their combinations
- One region at a time, using the highest-density observations available; Canada and Europe so far
Methodology:
- Cherubini et al. upscaling; verify only where data are available
- Single-station verification against the nearest gridpoint where data are sparser
- Kernel density fitting following Peel and Wilson to look at the extremes of the distributions

Precipitation verification project: methodology, Europe
- Upscaling to 1x1 grid boxes, the limit of model resolution
- Average observations over grid boxes, with at least 9 stations per grid box (Europe data)
- Verify only where there are enough data
- Matches observation and model resolution locally
- Answers questions about the quality of the forecasts within the capabilities of the model; the most likely users are modellers

European verification
- Upscaled observations according to Cherubini et al. (2002) (sketched below)
- Observations from gauges in Spain, Portugal, France, Italy, Switzerland, Netherlands, Romania, Czech Republic, Croatia, Austria, Denmark, UK, Ireland, Finland and Slovenia
- At least 9 stations needed per grid box to estimate the average
- 24 h precipitation totals; thresholds 1, 3, 5, 10, 15, 20, 25, 30 mm
- One year of data (Oct 2007 to Oct 2008)
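A minimal sketch of that upscaling step, assuming the 1x1 boxes are in degrees of latitude/longitude and that the gauge data arrive as a table of latitude, longitude and 24 h totals; the column names and the pandas layout are assumptions, only the nine-station rule and the box averaging come from the slides.

```python
import numpy as np
import pandas as pd

MIN_STATIONS = 9     # minimum gauges per grid box, as on the slide
BOX = 1.0            # assumed 1 deg x 1 deg grid boxes

def upscale_observations(obs):
    """Average 24 h gauge totals onto grid boxes with at least 9 stations.

    obs : DataFrame with columns 'lat', 'lon', 'precip_24h' (mm).
    Returns one row per verifiable box with the box-mean precipitation.
    """
    obs = obs.copy()
    obs["box_lat"] = np.floor(obs["lat"] / BOX) * BOX   # south-west corner of the box
    obs["box_lon"] = np.floor(obs["lon"] / BOX) * BOX
    grouped = obs.groupby(["box_lat", "box_lon"])["precip_24h"]
    boxes = grouped.agg(["mean", "count"]).reset_index()
    # "Verify only where enough data": drop boxes with fewer than 9 gauges
    boxes = boxes[boxes["count"] >= MIN_STATIONS]
    return boxes.rename(columns={"mean": "precip_box_mean", "count": "n_stations"})
```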

Reliability – Summer 08 – Europe – 42h

Reliability – Summer 08 – Europe – 114 h

Reliability – Winter 07-08 – Europe – 114h
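For readers less familiar with the reliability diagrams above: they bin the forecast probabilities (here, the fraction of members exceeding a precipitation threshold) and plot the observed frequency of the event in each bin against the mean forecast probability. A minimal sketch of that computation, with illustrative names and an assumed 11-bin layout:

```python
import numpy as np

def reliability_curve(probs, obs_event, n_bins=11):
    """Points for a reliability diagram.

    probs     : forecast probabilities in [0, 1] (e.g. member fraction above 1 mm)
    obs_event : 1 where the 24 h total exceeded the threshold, else 0
    Returns (mean forecast probability, observed frequency, count) per non-empty bin.
    """
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    idx = np.clip(np.digitize(probs, bins) - 1, 0, n_bins - 1)
    fcst_mean, obs_freq, counts = [], [], []
    for b in range(n_bins):
        sel = idx == b
        if sel.any():
            fcst_mean.append(probs[sel].mean())
            obs_freq.append(obs_event[sel].mean())
            counts.append(int(sel.sum()))
    return np.array(fcst_mean), np.array(obs_freq), np.array(counts)
```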

ROC – Summer 08 – Europe – 42h

ROC – Summer 08 – Europe – 114 h
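Likewise, the ROC curves above are built by sweeping a probability threshold for issuing a categorical "yes" forecast and plotting hit rate against false alarm rate. A hedged sketch of that computation; the threshold list is illustrative, and the sample is assumed to contain both events and non-events.

```python
import numpy as np

def roc_points(probs, obs_event, thresholds=np.linspace(0.0, 1.0, 21)):
    """Hit rate and false alarm rate for a range of probability thresholds."""
    n_event = obs_event.sum()
    n_nonevent = len(obs_event) - n_event
    hit_rates, far_rates = [], []
    for t in thresholds:
        yes = probs >= t
        hits = np.sum(yes & (obs_event == 1))
        false_alarms = np.sum(yes & (obs_event == 0))
        hit_rates.append(hits / n_event)
        far_rates.append(false_alarms / n_nonevent)
    return np.array(far_rates), np.array(hit_rates)

def roc_area(far, hr):
    """Area under the ROC curve by the trapezoidal rule (points sorted by FAR)."""
    order = np.argsort(far)
    f, h = far[order], hr[order]
    return float(np.sum(0.5 * (h[1:] + h[:-1]) * (f[1:] - f[:-1])))
```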

Precipitation verification project: methodology, Canada
- Single-station verification over 20 widely spaced Canadian stations, with only one station per grid box; the nearest-gridpoint forecast is compared to the observation (sketched below)
- Pointwise verification: we cannot upscale properly because we do not have the necessary data density
- Valid nevertheless as an absolute verification of model predictions
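A minimal sketch of the nearest-gridpoint matching for the single-station verification, assuming a regular latitude/longitude grid; the function and argument names are illustrative.

```python
import numpy as np

def nearest_gridpoint_value(field, grid_lats, grid_lons, stn_lat, stn_lon):
    """Value of a forecast field at the gridpoint nearest a station.

    field     : 2-D array (nlat, nlon), e.g. one member's 24 h precipitation
    grid_lats : 1-D array of grid latitudes
    grid_lons : 1-D array of grid longitudes
    Longitude wrap-around at 0/360 deg is ignored for brevity.
    """
    i = np.argmin(np.abs(grid_lats - stn_lat))
    j = np.argmin(np.abs(grid_lons - stn_lon))
    return field[i, j]

# For an ensemble, apply this per member to build the forecast distribution at the station:
# member_values = [nearest_gridpoint_value(m, lats, lons, stn_lat, stn_lon) for m in members]
```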

Results – Canada – ROC curves – 24h

Results – Canada – ROC curves – 144h

RMSE of precipitation probability – Canada – Oct 07 to Oct 08 – 20 stations (panels: 2.0 mm and 10 mm thresholds). Legend: BOM in dark blue; ECMWF in red; UKMET in green; CMC in gray; NCEP in cyan (lighter blue).

Verification of TIGGE forecasts with respect to surface observations: next steps
- Combined ensemble verification
- Other regions: Southern Africa should be next (non-GTS data is available)
- Evaluation of extreme events: kernel density fitting to ensembles (see the sketch below)
- Other high-density observation datasets, such as SHEF in the US
- Other variables: TC tracks and related surface weather
- Use of spatial verification methods
THEN maybe we will know the answer to the TIGGE question.
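As a rough illustration of the kernel-density idea for extremes, the sketch below fits a plain Gaussian KDE (from scipy) to the ensemble members and integrates its upper tail to obtain an exceedance probability beyond the highest member. This is not the specific Peel and Wilson formulation, which uses kernels better suited to a bounded, skewed precipitation distribution; the member values and threshold are made up.

```python
import numpy as np
from scipy.stats import gaussian_kde

def exceedance_probability(members, threshold):
    """P(24 h precip > threshold) from a kernel density fit to the ensemble.

    A plain Gaussian KDE is used for illustration; note that it places some
    probability mass below zero, which a precipitation-specific kernel avoids.
    """
    kde = gaussian_kde(members)
    return kde.integrate_box_1d(threshold, np.inf)

# 20-member toy ensemble (mm / 24 h); probability of exceeding 30 mm,
# a threshold that lies beyond the highest member
members = np.array([0., 0., 1., 2., 2., 3., 4., 5., 6., 8.,
                    9., 10., 12., 14., 15., 18., 20., 22., 25., 28.])
print(exceedance_probability(members, 30.0))
```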

www.ec.gc.ca

Issues for TIGGE verification
Use of analyses as truth gives an advantage to one's own model. Alternatives (a small sketch follows this list):
- Each centre verified against its own analysis
- Analyses treated as an ensemble (weighted or not)
- Random selection from all analyses
- Use the “best” analysis and eliminate the related model from the comparison
- Average analysis (may have different statistical characteristics)
- Model-independent analysis (restricted to data-rich areas, but that is where verification might matter most to most users)
The problem goes away for verification against observations (as long as they are not quality-controlled with respect to any model).
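To make the sensitivity to the choice of truth concrete, a small sketch that scores one forecast field against each centre's analysis and against the unweighted multi-analysis mean; all fields and names are hypothetical, and the fields are assumed to share a grid.

```python
import numpy as np

def rmse(forecast, truth):
    """Root-mean-square error between two fields on the same grid."""
    return float(np.sqrt(np.mean((forecast - truth) ** 2)))

def rmse_vs_candidate_truths(forecast, analyses):
    """RMSE of one forecast field against each centre's analysis and
    against the unweighted multi-analysis mean.

    forecast : 2-D forecast field
    analyses : dict mapping centre name -> 2-D analysis field on the same grid
    """
    scores = {centre: rmse(forecast, ana) for centre, ana in analyses.items()}
    mean_analysis = np.mean(list(analyses.values()), axis=0)
    scores["multi-analysis mean"] = rmse(forecast, mean_analysis)
    return scores
```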

Park et al. study – impact of the analysis used as truth in verification

Issues for TIGGE verification (cont’d)
Bias adjustment / calibration:
- Reason: to eliminate “artificial” spread in the combined ensemble arising from systematic differences between the component models
- Applies to the first (mean) and second (spread) moments
- Several studies have been, or are being, undertaken; results on the benefits are not conclusive so far. Is that due to too small a sample for bias estimation?
- Alternative: rather than correcting the bias, eliminate the inter-ensemble component of the bias and spread variation (a sketch of the simplest, mean-only version follows)
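A hedged sketch of the simplest, mean-only version of that alternative: estimate each centre's bias over a training period and remove it from that centre's members before pooling, so that systematic offsets between centres do not inflate the combined spread. The data layout and the training-error input are assumptions, not the method of any particular study.

```python
import numpy as np

def debias_and_combine(ensembles, training_errors):
    """Pool member forecasts from several centres after removing each
    centre's mean bias (first-moment correction only).

    ensembles       : dict  centre -> array (n_members,) of current member forecasts
    training_errors : dict  centre -> array of past (forecast - obs) errors
                      from a training period, used to estimate the bias
    Returns a single pooled, bias-adjusted array of members.
    """
    pooled = []
    for centre, members in ensembles.items():
        bias = np.mean(training_errors[centre])   # centre's systematic offset
        pooled.append(members - bias)
    return np.concatenate(pooled)
```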

ROC – Winter 07-08 – Europe – 42h

ROC – Winter 07-08 – Europe – 114h

Reliability – Winter 07-08 – Europe – 42h

Results – Canada – Brier skill, resolution and reliability
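The Brier skill, reliability and resolution shown in this figure correspond to the standard Murphy decomposition of the Brier score. A minimal sketch of that decomposition for binned probabilities; the bin count is illustrative, and the decomposition is exact only when forecasts take the binned values.

```python
import numpy as np

def brier_decomposition(probs, obs_event, n_bins=11):
    """Murphy decomposition: BS = reliability - resolution + uncertainty.

    probs     : forecast probabilities of the event (e.g. precip > 10 mm)
    obs_event : 1 where the event occurred, else 0
    Returns (Brier score, Brier skill score vs. climatology, reliability, resolution).
    """
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    idx = np.clip(np.digitize(probs, bins) - 1, 0, n_bins - 1)
    climo = obs_event.mean()                      # sample climatological frequency
    n = len(probs)
    reliability = resolution = 0.0
    for b in range(n_bins):
        sel = idx == b
        if sel.any():
            weight = sel.sum() / n
            p_bar = probs[sel].mean()             # mean forecast probability in the bin
            o_bar = obs_event[sel].mean()         # observed frequency in the bin
            reliability += weight * (p_bar - o_bar) ** 2
            resolution += weight * (o_bar - climo) ** 2
    uncertainty = climo * (1.0 - climo)
    bs = reliability - resolution + uncertainty
    bss = 1.0 - bs / uncertainty                  # skill relative to sample climatology
    return bs, bss, reliability, resolution
```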