Evaluation of different model-error schemes in the WRF mesoscale ensemble: stochastic, multi-physics and combinations thereof


Outline
- Evaluation of different model-error schemes in the WRF mesoscale ensemble: stochastic, multi-physics and combinations thereof
- Where is the additional skill coming from? Is it just the increased spread?
- Decompose the Brier score into its components
- Ability of the different model-error schemes to capture structural uncertainty
- How does the increased skill compare against that of postprocessing the forecasts?
- Impact of calibration
- Impact of debiasing
- Impact of model version

Experimental Setup
- Weather Research and Forecasting model WRFV
- 15 dates between Nov 2008 and Dec 2009, 00Z and 12Z, 30 cycles or cases
- 45 km horizontal resolution and 41 vertical levels
- Limited-area model: Contiguous United States (CONUS)
- Initial and boundary conditions downscaled from NCEP's Global Forecast System (GFS)
- Verification against 3003 surface observations from aviation routine weather reports (METAR) (and 106 soundings)
- Observation error not taken into account

Model-error Experiments

Stochastic parameterization schemes
Stochastic kinetic-energy backscatter scheme (SKEBS)
- Rationale: A fraction of the dissipated kinetic energy is scattered upscale and is available as forcing for the resolved flow (Shutts 2005; Mason and Thomson 1992)
Stochastically perturbed parameterization tendencies scheme (SPPT)
- Rationale: Especially as resolution increases, the equilibrium assumption is no longer valid and fluctuations of the subgrid-scale state should be sampled (Buizza et al. 1999; Palmer et al. 2009)
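
Both schemes draw their perturbations from a smoothly evolving random pattern. A minimal, hypothetical sketch of such a pattern generator — a first-order autoregressive (red-noise) process in spectral space, loosely following the SKEBS idea; all names, sizes, and parameter values here are illustrative, not the actual WRF implementation:

```python
import numpy as np

def stochastic_pattern(nsteps, nx=64, ny=64, alpha=0.95, amp=1.0, seed=0):
    """Evolve a red-noise (AR(1)) perturbation field in spectral space.
    Illustrative only: real backscatter schemes use a wavenumber-dependent
    amplitude tied to an estimate of the dissipated kinetic energy."""
    rng = np.random.default_rng(seed)
    coeffs = np.zeros((ny, nx), dtype=complex)
    for _ in range(nsteps):
        noise = rng.standard_normal((ny, nx)) + 1j * rng.standard_normal((ny, nx))
        # AR(1) update: temporal correlation alpha; the sqrt factor keeps
        # the stationary variance independent of alpha
        coeffs = alpha * coeffs + np.sqrt(1 - alpha**2) * amp * noise
    # back to grid-point space; the real part is the forcing pattern
    return np.fft.ifft2(coeffs).real
```

Because the update is an AR(1) process, successive patterns are correlated in time, which is what lets the perturbations project onto the resolved flow rather than cancel out.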

Potential to reduce model error
- Stochastic parameterizations can change the mean and variance of a PDF
- Impacts variability of the model (e.g. internal variability of the atmosphere)
- Impacts systematic error (e.g. blocking, precipitation error)
[Figure: sketch of a potential and the resulting PDF under weak vs. strong noise (multi-modal vs. unimodal)]

Multi-Physics combinations

Spread, Error and Brier score (threshold 0 < x < +σ)
- Spread (dashed) and RMS error (solid) for each experiment
- Ranking: CNTL < PARAM < SKEBS < PHYS10 < PHYS3_SKEBS_PARAM < PHYS10_SKEBS
- Experiments: CNTL, PARAM, SKEBS, PHYS10, PHYS10_SKEBS, PHYS3_SKEBS_PARAM
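
The spread–error comparison on this slide can be reproduced with a few lines. A generic sketch (array shapes are assumptions): for a statistically consistent ensemble, the spread should match the RMSE of the ensemble mean.

```python
import numpy as np

def spread_and_rmse(ens, obs):
    """ens: (n_members, n_points) forecasts, obs: (n_points,) observations.
    Returns the mean ensemble spread (stdev about the ensemble mean)
    and the RMSE of the ensemble mean."""
    mean = ens.mean(axis=0)
    spread = np.sqrt(((ens - mean) ** 2).mean(axis=0)).mean()
    rmse = np.sqrt(((mean - obs) ** 2).mean())
    return spread, rmse
```

An underdispersive ensemble (spread < RMSE), as for CNTL here, is the usual symptom of missing model error.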

Decomposition of Brier score
- Are model-error schemes merely influencing reliability (linked to spread), or also resolution?
- Both resolution and reliability are increased
- The ordering of the experiments stays more or less the same
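
The decomposition used here is the standard Murphy (1973) partition, Brier = reliability − resolution + uncertainty. A self-contained sketch (the bin count and all names are choices for illustration, not the authors' code):

```python
import numpy as np

def brier_decomposition(p, o, bins=11):
    """Murphy decomposition of the Brier score.
    p: forecast probabilities in [0, 1]; o: binary outcomes (0/1).
    Returns (brier, reliability, resolution, uncertainty), with
    brier ~= reliability - resolution + uncertainty."""
    p, o = np.asarray(p, float), np.asarray(o, float)
    n = len(p)
    obar = o.mean()
    brier = ((p - o) ** 2).mean()
    edges = np.linspace(0.0, 1.0, bins + 1)
    idx = np.clip(np.digitize(p, edges) - 1, 0, bins - 1)
    rel = res = 0.0
    for k in range(bins):
        mask = idx == k
        nk = mask.sum()
        if nk == 0:
            continue
        pk, ok = p[mask].mean(), o[mask].mean()
        rel += nk * (pk - ok) ** 2   # forecast prob vs. conditional freq
        res += nk * (ok - obar) ** 2 # how much bins deviate from climatology
    return brier, rel / n, res / n, obar * (1.0 - obar)
```

Lower reliability is better, higher resolution is better; a scheme that only inflates spread would improve the first term but not the second.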

Brier skill score
- Normally climatology is the reference, but here CNTL is used
- Measures the skill improvement of the model-error schemes over CNTL
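
With CNTL as the reference, this is the usual Brier skill score; a one-liner, included for completeness:

```python
def brier_skill_score(bs, bs_ref):
    """BSS = 1 - BS/BS_ref, with the CNTL Brier score as reference
    instead of climatology. BSS > 0: better than CNTL; BSS = 1: perfect."""
    return 1.0 - bs / bs_ref
```

For example, a scheme halving the CNTL Brier score gets BSS = 0.5.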

What if all ensemble systems had the same spread?
- Calibrate all ensemble systems so that each member has the same variability as the observations (Doblas-Reyes et al. 2005; Kharin and Zwiers 2003; Hamill and Colucci 1998)
- Calibrated ensemble systems will have similar spread
- Just another way of assessing the role of spread

Calibration of ensemble systems
Fulfills two conditions:
1. The variance of the inflated prediction is the same as that of the reference (here the observations) (Hamill and Colucci 1998)
2. The potentially predictable signal after inflation is made equal to the correlation of the ensemble mean with the observations (Kharin and Zwiers 2003a)
- The correlation between the ensemble mean and the observations is not changed
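
The two conditions above pin the inflation down uniquely. A hypothetical sketch in the spirit of the Kharin–Zwiers / Doblas-Reyes inflation (variable names and shapes are assumptions): the ensemble-mean signal is scaled so its variance equals r²·var(obs), and the member deviations so the total variance matches var(obs), leaving the ensemble-mean/observation correlation unchanged.

```python
import numpy as np

def inflate(ens, obs):
    """Variance inflation sketch. ens: (n_members, n_times), obs: (n_times,).
    After calibration: total variance = var(obs), signal variance =
    r^2 * var(obs), correlation with obs unchanged."""
    ensmean = ens.mean(axis=0)
    mu_m, mu_o = ensmean.mean(), obs.mean()
    sig_m, sig_o = ensmean.std(), obs.std()
    dev = ens - ensmean                        # member deviations, zero-mean
    sig_e = dev.std()
    r = np.corrcoef(ensmean, obs)[0, 1]
    alpha = abs(r) * sig_o / sig_m             # scales the predictable signal
    beta = sig_o * np.sqrt(1 - r**2) / sig_e   # scales the noise
    return mu_o + alpha * (ensmean - mu_m) + beta * dev
```

Because the signal and deviation terms are uncorrelated by construction, the calibrated variance is exactly α²·var(ensmean) + β²·var(dev) = var(obs).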

Brier score of calibrated ensemble systems
- Model-error schemes still have different resolution and reliability
- Do they represent structural uncertainty better?
- Ranking: CNTL < PARAM < SKEBS < PHYS10 < {PHYS3_SKEBS_PARAM < PHYS10_SKEBS}

Impact of other postprocessing methods
- Debiasing
- A combination of debiasing and calibration
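
Debiasing here means removing the mean error of the ensemble-mean forecast, estimated over a training period. A minimal sketch (shapes and the train/test split are assumptions):

```python
import numpy as np

def debias(ens_train, obs_train, ens_test):
    """Estimate the mean bias of the ensemble mean on training data and
    subtract it from an independent set of forecasts.
    ens_*: (n_members, n_times), obs_train: (n_times,)."""
    bias = ens_train.mean(axis=0).mean() - obs_train.mean()
    return ens_test - bias
```

In practice the bias is usually estimated per station, lead time, and season rather than as a single number.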

Impact of changing model versions
- Now SPPT is included, but not PARAM
- The multi-physics ensemble has changed and is now clearly overdispersive => a big problem for maintenance

Bias

Impact Summary
- What is the relative impact of postprocessing (debiasing, calibration, changing model versions)?
- Compute the skill of the postprocessed ensemble forecast over the raw ensemble forecast
- Done for each experiment separately, e.g. postprocessed SKEBS is compared to raw SKEBS
- Take the average of the skill score over all forecast times (excluding the initial time)

Impact Summary (raw vs. calibrated)

[Figure: Brier skill score, resolution component, and reliability component for each postprocessing step (calibration, calibration & debiasing, debiasing, model-error scheme, changing model versions); experiments: CNTL, PARAM, SKEBS, PHYS10, PHYS10_SKEBS, PHYS3_SKEBS_PARAM]

Comments
- Results hold qualitatively for other verification thresholds
- Results are significant at the 95% level, except for PARAM
- If observation error is included, the results still hold qualitatively, but their significance is reduced
- In the free atmosphere, SKEBS tends to outperform PHYS10

Conclusions
- Model-error schemes improve forecast skill by improving both reliability and resolution
- The impact is of comparable magnitude to that of common postprocessing methods
- Combining multiple model-error schemes consistently yields the best results

SKEBS tutorial and informal discussion group
- Stochastic Kinetic-Energy Backscatter Scheme
- Friday, 10:30 – 12:00
- Tutorial
- Informal discussion group
- Feedback to developers
- Network with other users, especially on applications SKEBS was not directly developed for

Skill of calibrated forecasts
- At 12 h the difference between RMSE and spread was ca. 1.3 m/s and 2.0 K; now it is 0.4 m/s and 0.2 K
- The spread between the ensemble systems is much closer
- The Brier score increases
- The combination of multiple model-error schemes still performs best, but also retains the most spread
- Multi-physics performs very well