R.J. Graham, The Seasonal Modelling and Prediction Group,


The Ensemble of Ensembles: Multiple-model forecasting for seasonal timescales
Curriculum module 4, part 6
Written by R.J. Graham, The Seasonal Modelling and Prediction Group, Met. Office, UK (rjgraham@meto.gov.uk)
It is recommended that modules 1.6, 3.1 and 4.3 are studied before this module.

Ensemble Prediction: Ensembles (revision note)

Instabilities and feedbacks at work in the atmosphere, so-called "chaotic" processes, impose limits on deterministic predictability. Recognition of this has led to the development of Ensemble Prediction Systems. An ensemble is a collection of predictions which collectively "explore" the possible future outcomes, given the uncertainties inherent in the forecast process.

Small errors in the initial analysis may amplify rapidly, leading to serious errors after a few days of forecast. Unavoidable imperfections in the numerical forecast model itself (e.g. the grid-point representation) may also amplify forecast errors. The ensemble prediction method has been developed to deal with this chaos: for each prediction a number of numerical forecasts are made (typically up to 30 for current real-time seasonal predictions), varying the initial conditions and the model formulation. The resulting ensemble of forecasts allows an estimate of the probability of a range of outcomes. For example, if 20 of the 30 forecasts (known as ensemble members) predict above-normal rains for the rainy season of a given region, then the forecast probability of above-normal rains is ~67%.
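The probability estimate described above is simply the fraction of ensemble members predicting the event. A minimal sketch (the rainfall anomalies here are made-up illustrative numbers, not real forecast data):

```python
import numpy as np

def event_probability(members, threshold):
    """Estimate the probability of an event (e.g. above-normal rainfall)
    as the fraction of ensemble members that predict it."""
    members = np.asarray(members, dtype=float)
    return float(np.mean(members > threshold))

# Hypothetical seasonal rainfall anomalies for a 30-member ensemble:
rng = np.random.default_rng(0)
anomalies = rng.normal(loc=0.3, scale=1.0, size=30)
p_above = event_probability(anomalies, threshold=0.0)
```

With 20 of 30 members above the threshold, `event_probability` returns 20/30, i.e. the ~67% quoted in the text.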

Multiple-Model Ensemble Prediction

Numerical models are imperfect representations of the atmosphere. There will always be uncertainties in the formulation of the dynamics and physical parameterisations. These uncertainties must be represented in Ensemble Prediction Systems, in addition to the uncertainties in initial conditions (covered in Module 4.3). One very effective way of representing model uncertainties is to construct ensembles from two or more forecast models which have different numerical formulations - so-called Multiple-model Ensembles.

The main purpose of Ensemble Prediction Systems is to estimate the probability density function of future atmospheric states, given the uncertainties in the forecast process. Two sources of uncertainty must be considered:

1) Uncertainty in the initial conditions (see module 4.3 for details). The first ensemble systems developed considered only initial-condition uncertainty. It was soon recognised that model uncertainties needed to be included to improve the spread characteristics of the ensembles.

2) Uncertainty in the model formulation. Representation of uncertainties in model formulation has been approached in three ways:
- Stochastic physics (e.g. Buizza et al. 1999): the output of the model physics schemes is randomly perturbed in each ensemble member as it runs.
- Use of different physics schemes (e.g. Houtekamer et al. 1996): the parameterisation schemes are varied across the ensemble members.
- Multiple models (e.g. Evans et al. 2000, Graham et al. 2000): ensembles from two or more numerical models are combined to form a "grand" ensemble. This method is the subject of this module.

Results show that the multiple-model method provides substantially greater benefits to spread and other performance characteristics than are achieved with stochastic physics alone (Mylne et al. 2000).
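The stochastic-physics idea can be sketched in a toy form: randomly perturb the parameterised tendency in each member as the model runs. This is only an illustration of the principle; the operational scheme of Buizza et al. (1999) uses perturbations that are correlated in space and time, not the independent draws shown here.

```python
import numpy as np

def perturbed_tendency(physics_tendency, rng, amplitude=0.5):
    """Toy stochastic-physics perturbation: multiply the parameterised
    tendency by (1 + r), with r drawn uniformly from [-amplitude, +amplitude].
    (Illustrative only; operational schemes use correlated patterns.)"""
    r = rng.uniform(-amplitude, amplitude, size=np.shape(physics_tendency))
    return physics_tendency * (1.0 + r)

rng = np.random.default_rng(1)
tendency = np.full(5, 2.0)                  # parameterised tendency (arbitrary units)
tendency_pert = perturbed_tendency(tendency, rng)
```

Each ensemble member would use its own random draws, so identical initial states still yield diverging forecasts.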

A multiple-model ensemble is simply a combination of ensembles from individual ensemble systems:

Ensemble 1 (e.g. Met. Office), N1 members
Ensemble 2 (e.g. ECMWF), N2 members
Ensemble 3 (e.g. NCEP), N3 members
Ensemble n (other forecast centre), Nn members
-> Combining algorithm -> Multiple-model ensemble of N1+N2+N3+...+Nn members

The benefits of the multiple-model system derive potentially from:

1) Exploitation of complementary predictive skill: Different models generally have different strengths and weaknesses. Forecast skill will vary among models depending on the variable, season and region. Moreover, skill is often complementary; e.g. one model may perform particularly well over Indonesia while another model's best skill may be over South America. Research has shown that by combining ensembles from different models the strengths of each individual model can be exploited to optimise global skill (see later, and e.g. Graham et al. 2000).

2) Increase in ensemble size: In general, a greater number of ensemble members will in itself provide skill improvements. (However, the benefits provided by multiple models derive mainly from the use of more than one model formulation; see e.g. Evans et al. 2000.)

Combining algorithm: Multiple-model ensembles have been shown to provide improved performance with simple, unweighted combination of the individual ensembles (e.g. Graham et al. 2000; Brankovic and Palmer 2000). However, more sophisticated methods of combining the ensembles, e.g. weighting each individual ensemble according to its track record of skill, have been proposed (e.g. Krishnamurti et al. 1999).
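The two combining strategies mentioned above, unweighted pooling and skill-based weighting, can be sketched as follows. The weights here are purely hypothetical placeholders for a skill track record; this is not the Krishnamurti et al. superensemble regression, just the simplest weighted-average variant.

```python
import numpy as np

def pooled_probability(ensembles, threshold):
    """Unweighted combination: pool all members from every model into one
    'grand' ensemble and take the fraction exceeding the threshold."""
    pooled = np.concatenate([np.asarray(e, dtype=float) for e in ensembles])
    return float(np.mean(pooled > threshold))

def weighted_probability(ensembles, threshold, weights):
    """Weighted combination: average each model's own event frequency,
    weighted (e.g. by past skill; weights here are hypothetical) and normalised."""
    probs = [np.mean(np.asarray(e, dtype=float) > threshold) for e in ensembles]
    w = np.asarray(weights, dtype=float)
    return float(np.dot(w, probs) / w.sum())
```

Note that pooling implicitly weights each model by its member count, whereas the weighted form first reduces each model to its own probability.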

From the same set of initial states, different models will typically produce a different set of forecast outcomes.

[Schematic: an ensemble forecast from model 1 explores part of the future uncertainty; the ensemble from model 2, run from the same set of initial states, typically explores additional future uncertainties. Left ellipse: uncertainty in the initial atmospheric state. Right ellipse: uncertainty in the future atmospheric state.]

The schematic illustrates how ensembles from two (or more) forecast models, started with the same set of perturbed initial conditions, will generally evolve differently, resulting in greater exploration of the range of future outcomes and a better estimate of the probability of each outcome. The evolutions diverge because small differences arising from the different formulations (e.g. in the convective precipitation schemes) are amplified by the non-linear "chaotic" dynamics of the model.

The ellipse on the left represents the uncertainty in the initial state, which is sampled by the analysis perturbations (red dots). The ellipse on the right represents the uncertainty in the future state. The red and green surfaces illustrate how ensembles run with the different models sample different parts of the total uncertainty, increasing the prospect of covering all potential outcomes. The red and green lines represent the evolution of ensemble members from the different models (just 3 members are shown for clarity); their evolution is similar at first, but later diverges.

Note that a way of gaining further coverage of the total uncertainty (not illustrated here) is to use additional basic analyses (to which perturbations are added) as well as additional models - so-called multiple-model, multiple-analysis ensembles (Evans et al. 2000, Mylne et al. 2000).
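The divergence of two model formulations from an identical initial state can be demonstrated with the classic Lorenz (1963) toy system of chaotic dynamics. Here "model 2" differs from "model 1" only by a tiny change to one parameter, standing in for a difference in formulation; the trajectories track each other at first and then separate completely. (This is an illustrative analogue, not part of the module's own material.)

```python
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz (1963) system, a standard
    low-order analogue of chaotic atmospheric dynamics."""
    x, y, z = state
    return np.array([x + dt * sigma * (y - x),
                     y + dt * (x * (rho - z) - y),
                     z + dt * (x * y - beta * z)])

state_a = np.array([1.0, 1.0, 1.0])   # "model 1"
state_b = state_a.copy()              # "model 2": same initial state...
max_sep = 0.0
for _ in range(2000):
    state_a = lorenz_step(state_a)
    state_b = lorenz_step(state_b, rho=28.1)   # ...slightly different formulation
    max_sep = max(max_sep, float(np.linalg.norm(state_a - state_b)))
```

The separation grows from zero to the size of the attractor, which is exactly why two models' ensembles come to sample different parts of the forecast uncertainty.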

Multiple-model Ensembles for seasonal prediction

Seasonal simulations for the winter (DJF) season from four individual models and multiple-model combinations. The multiple-model acts as a filter for the more skilful individual model.

The graphic shows an example of the benefit of multiple-model ensembles to probabilistic skill. Skill assessment is provided for 9-member ensembles run with four individual models (referred to here as models 1, 2, 3 and 4: blue, green, cyan and yellow) and three multiple-model combinations of these models. The multiple-model skills are shown by the red bars, where:

2-model combination (open red bar) = 18-member combination of models 1 and 2
3-model combination (shaded red bar) = 27-member combination of models 1, 2 and 3
4-model combination (solid red bar) = 36-member combination of models 1, 2, 3 and 4

All combinations are unweighted. The event assessed is winter (DJF) 850 hPa temperature above normal over Europe (top panel) and North America (bottom panel). The skill measure used is the Relative Operating Characteristic (ROC; see Stanski et al. 1989). ROC scores above 0.5 indicate skill better than a random or climatological forecast.

The most striking result is that the skill of each multiple-model combination is similar to, and sometimes better than, the skill of its most skilful component model. Over North America (bottom panel), for example, the skill of the 2-model combination is similar to that of model 2 (which is more skilful than model 1 for this region/season), the skill of the 3-model combination is similar to that of model 3 (more skilful than models 1 and 2), and the skill of the 4-model combination is similar to that of model 4 (more skilful than models 1, 2 and 3). Thus the best skill among the multiple-model combinations is obtained with the 4-model combination.

Similar results are found for Europe (top panel), where the skill of models 1 and 3 is matched by the skill of the 4-model combination, despite the lower skill of models 2 and 4.

To summarise: the relative skill of individual models may differ markedly with season and region. The multiple-model ensemble formed by combining individual ensembles provides a filter on the more skilful individual model, enabling that model's skill to be recovered for each region and season. The multiple-model method thus provides a powerful way of improving capabilities for global prediction.
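The ROC score used above can be computed from a set of probabilistic forecasts and binary observed outcomes: the hit rate and false-alarm rate are traced out by varying the probability threshold at which the event is "forecast", and the area under that curve is the score. A minimal sketch (synthetic inputs only; see Stanski et al. 1989 for the full method):

```python
import numpy as np

def roc_area(forecast_probs, observed, thresholds=np.linspace(0.0, 1.0, 11)):
    """ROC area for probabilistic forecasts of a binary event.
    Values above 0.5 indicate skill better than a random forecast."""
    p = np.asarray(forecast_probs, dtype=float)
    o = np.asarray(observed, dtype=bool)
    hit, far = [], []
    for t in thresholds:
        warned = p >= t
        hit.append(float(np.mean(warned[o])))    # hit rate (events warned)
        far.append(float(np.mean(warned[~o])))   # false-alarm rate (non-events warned)
    far, hit = np.array(far), np.array(hit)
    order = np.argsort(far)                      # integrate hit rate over false-alarm rate
    f, h = far[order], hit[order]
    return float(np.sum(np.diff(f) * (h[1:] + h[:-1]) / 2.0))
```

Perfect forecasts give an area of 1.0, while a constant (no-skill) probability gives 0.5, matching the interpretation of the red and coloured bars in the graphic.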

Benefits of Multiple-model Ensembles

- The ensemble spread is increased relative to the individual model ensembles. Thus the observed outcome more frequently falls within the range of forecast solutions provided by the ensemble.
- The multiple-model ensemble provides a filter for the more skilful individual model (the best model will vary with season/variable/region). Thus the strengths of the individual models are exploited, improving capabilities for global seasonal prediction.
- Benefits derive mainly from the use of additional models, but also from the increased ensemble size.

References:

Brankovic, C. and T.N. Palmer, 2000: Seasonal skill and predictability of ECMWF PROVOST ensembles. Q.J.R. Meteorol. Soc., 126, 2035-2067.
Buizza, R., M. Miller and T.N. Palmer, 1999: Stochastic representation of model uncertainties in the ECMWF EPS. Q.J.R. Meteorol. Soc., 125, 2887-2908.
Evans, R.E., M.S.J. Harrison, R.J. Graham and K.R. Mylne, 2000: Joint medium-range ensembles from the Met. Office and ECMWF systems. Mon. Wea. Rev., 128, 3104-3127.
Graham, R.J., A.D.L. Evans, K.R. Mylne, M.S.J. Harrison and K.B. Robertson, 2000: An assessment of seasonal predictability using atmospheric general circulation models. Q.J.R. Meteorol. Soc., 126, 2211-2240.
Houtekamer, P.L., L. Lefaivre, J. Derome, H. Ritchie and H.L. Mitchell, 1996: A system simulation approach to ensemble prediction. Mon. Wea. Rev., 124, 1225-1242.
Krishnamurti, T.N. and collaborators, 1999: Improved weather and seasonal climate forecasts from a multi-model superensemble. Science, 285, 1548-1550.
Mylne, K.R., R.E. Evans and R.T. Clark, 2000: Multi-model multi-analysis ensembles in quasi-operational medium-range forecasting. Submitted to Q.J.R. Meteorol. Soc. Available as NWP Forecasting Research Scientific Paper No. 60 from the National Meteorological Library, the Met. Office, London Road, Bracknell, RG12 2SZ, UK.
Stanski, H.R., L.J. Wilson and W.R. Burrows, 1989: Survey of common verification methods in meteorology. World Weather Watch Technical Report No. 8, WMO/TD 358, World Meteorological Organisation, Geneva, Switzerland.