The Ensemble of Ensembles: Multiple-model forecasting for seasonal timescales
Curriculum module 4, part 6
Written by R.J. Graham, The Seasonal Modelling and Prediction Group, Met. Office, UK
It is recommended that modules 1.6, 3.1 and 4.3 are studied before this module.
Ensemble Prediction: Ensembles (revision note)
Instabilities and feedbacks at work in the atmosphere, so-called "chaotic" processes, impose limits on deterministic predictability. Small errors in the initial analysis may amplify rapidly, leading to serious errors after a few days of forecast. Unavoidable imperfections in the numerical forecast model itself (e.g. the grid-point representation) may also amplify forecast errors.

Recognition of these limits has led to the development of Ensemble Prediction Systems. An ensemble is a collection of predictions which collectively "explore" the possible future outcomes, given the uncertainties inherent in the forecast process. For each prediction a number of numerical forecasts are made (typically up to 30 for current real-time seasonal predictions), varying the initial conditions and the model formulation. The resulting ensemble of forecasts allows an estimate of the probability of a range of outcomes. For example, if 20 of the 30 forecasts (known as ensemble members) predict above-normal rains for the rainy season of a given region, then the forecast probability of above-normal rains is ~67%, as sketched below.
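A minimal Python sketch of this probability estimate, assuming a synthetic 30-member ensemble; the rainfall values and the 300 mm "normal" threshold are invented for illustration:

```python
import numpy as np

# Synthetic 30-member ensemble of seasonal rainfall totals (mm) for one
# region; the values and the "above normal" threshold are invented.
rng = np.random.default_rng(42)
members = rng.normal(loc=310.0, scale=40.0, size=30)

above_normal_threshold = 300.0  # upper bound of the "normal" category

# The forecast probability of the event is simply the fraction of
# ensemble members that predict it (e.g. 20 of 30 members -> ~67%).
p_above = np.mean(members > above_normal_threshold)
print(f"Forecast probability of above-normal rains: {p_above:.0%}")
```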
Multiple-Model Ensemble Prediction
Numerical models are imperfect representations of the atmosphere: there will always be uncertainties in the formulation of the dynamics and the physical parameterisations. These uncertainties must be represented in Ensemble Prediction Systems, in addition to the uncertainties in initial conditions (covered in Module 4.3). One very effective way of representing model uncertainties is to construct ensembles from two or more forecast models which have different numerical formulations: so-called multiple-model ensembles.

The main purpose of Ensemble Prediction Systems is to estimate the probability density function of future atmospheric states, given the uncertainties in the forecast process. Two sources of uncertainty must be considered:

1) Uncertainty in the initial conditions (see Module 4.3 for details). The first ensemble systems developed considered only initial-condition uncertainty; it was soon recognised that model uncertainties needed to be included to improve the spread characteristics of the ensembles.

2) Uncertainty in the model formulation. Representation of uncertainties in model formulation has been approached in three ways:

- Stochastic physics (e.g. Buizza et al. 1999): the output of the model physics schemes is randomly perturbed in each ensemble member as it runs (see the sketch after this list).
- Use of different physics schemes (e.g. Houtekamer et al. 1996): the parameterisation schemes are varied across the ensemble members.
- Multiple models (e.g. Evans et al. 2000; Graham et al. 2000): ensembles from two or more numerical models are combined to form a "grand" ensemble. This method is the subject of this module.

Results show that the multiple-model method provides substantially greater benefits to spread and other performance characteristics than are achieved with stochastic physics alone (Mylne et al. 2000).
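As a toy illustration of the first approach, the Python sketch below perturbs the output of a stand-in "physics scheme" with a small random factor at every step. The chaotic logistic map, the perturbation range and the step count are all invented for illustration; real stochastic physics (e.g. Buizza et al. 1999) perturbs the full parametrized tendencies of an NWP model.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_physics(x, r=3.9):
    # Chaotic logistic map standing in for one model time step with
    # parametrized physics; purely illustrative, not an NWP scheme.
    return r * x * (1.0 - x)

def step_stochastic(x, amplitude=0.01):
    # Stochastic physics: multiply the scheme's output by a small
    # random factor, here in [0.99, 1.01], at each time step.
    x = toy_physics(x) * (1.0 + amplitude * rng.uniform(-1.0, 1.0))
    return min(max(x, 0.0), 1.0)  # keep the state in [0, 1]

# Two members from an identical initial state: the tiny random physics
# perturbations are amplified by the chaotic dynamics, so after a few
# dozen steps the members explore quite different states.
a = b = 0.4
for _ in range(50):
    a = step_stochastic(a)
    b = step_stochastic(b)
print(a, b)
```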
Multiple-model ensemble of N1+N2+N3+…+Nn members
A multiple-model ensemble is simply a combination of ensembles from individual ensemble systems:

Ensemble 1 (e.g. Met. Office): N1 members
Ensemble 2 (e.g. ECMWF): N2 members
Ensemble 3 (e.g. NCEP): N3 members
…
Ensemble n (other forecast centre): Nn members

These are passed through a combining algorithm to produce a multiple-model ensemble of N1+N2+N3+…+Nn members. The benefits of the multiple-model system derive potentially from:

1) Exploitation of complementary predictive skill. Different models generally have different strengths and weaknesses, and forecast skill varies among models depending on the variable, season and region. Moreover, skill is often complementary: one model may perform particularly well over Indonesia, while another model's best skill may be over South America. Research has shown that by combining ensembles from different models the strengths of each individual model can be exploited to optimise global skill (see later, and e.g. Graham et al. 2000).

2) Increase in ensemble size. In general, a greater number of ensemble members will in itself provide skill improvements. (However, the benefits provided by multiple models derive mainly from the use of more than one model formulation; see e.g. Evans et al. 2000.)

Combining algorithm: multiple-model ensembles have been shown to provide improved performance with simple, unweighted combination of the individual ensembles (e.g. Graham et al. 2000; Brankovic and Palmer 2000). More sophisticated methods of combining the ensembles, e.g. weighting each individual ensemble according to its track record of skill, have also been proposed (e.g. Krishnamurti et al. 1999). Both approaches are sketched below.
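A minimal sketch of the two combination strategies, assuming three hypothetical 9-member ensembles whose member values and skill weights are invented for illustration. The weighted average is only loosely inspired by the superensemble idea of Krishnamurti et al. (1999), which in fact uses regression-based weights:

```python
import numpy as np

# Hypothetical binary forecasts from three 9-member ensembles
# (1 = member predicts above-normal rainfall); values are invented.
ens1 = np.array([1, 1, 0, 1, 0, 1, 1, 0, 1])  # e.g. Met. Office, N1 = 9
ens2 = np.array([0, 1, 1, 1, 0, 0, 1, 1, 1])  # e.g. ECMWF,       N2 = 9
ens3 = np.array([1, 0, 1, 1, 1, 0, 1, 0, 0])  # e.g. NCEP,        N3 = 9

# Unweighted combination: pool all members into one grand ensemble
# of N1+N2+N3 members and take the fraction predicting the event.
grand = np.concatenate([ens1, ens2, ens3])
p_unweighted = grand.mean()

# Weighted combination: average each model's probability, weighted
# by an (invented) track record of skill for each model.
skill_weights = np.array([0.6, 0.8, 0.7])
model_probs = np.array([ens1.mean(), ens2.mean(), ens3.mean()])
p_weighted = np.average(model_probs, weights=skill_weights)

print(f"unweighted: {p_unweighted:.2f}  weighted: {p_weighted:.2f}")
```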
From the same set of initial states, different models will typically produce a different set of forecast outcomes: the ensemble forecast from model 1 explores part of the future uncertainty, while the ensemble from model 2, run from the same set of initial states, typically explores additional future uncertainties.

The schematic illustrates how ensembles from two (or more) forecast models, started with the same set of perturbed initial conditions, will generally evolve differently, resulting in greater exploration of the range of future outcomes and better estimation of the probability of each outcome. The evolutions diverge because small differences triggered by the different formulations (e.g. in convective precipitation schemes) are amplified by the non-linear "chaotic" dynamics of the model.

The ellipses on the left represent the uncertainty in the initial state, sampled by the analysis perturbations (red dots). The ellipses on the right represent the uncertainty in the future state. The red and green surfaces illustrate how ensembles run from the different models sample different parts of the total uncertainty, increasing the prospect of covering all potential outcomes. The red and green lines represent the evolution of ensemble members from the two models (just three members are shown for clarity); their evolution is similar at first, but later diverges. The toy example below illustrates this divergence.

Note that a way of gaining further coverage of the total uncertainty (not illustrated here) is to use additional basic analyses (to which perturbations are added) as well as additional models: so-called multiple-model, multiple-analysis ensembles (Evans et al. 2000; Mylne et al. 2000).
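A toy illustration of this divergence, using the Lorenz-63 system as a stand-in for the chaotic atmosphere; the parameter offset that distinguishes "model 2" is invented for illustration:

```python
import numpy as np

def lorenz_step(xyz, sigma, rho=28.0, beta=8.0/3.0, dt=0.005):
    # One forward-Euler step of the Lorenz-63 system, a classic toy
    # model of chaotic atmospheric dynamics (not a forecast model).
    x, y, z = xyz
    dxyz = np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])
    return xyz + dt * dxyz

# Both "models" start from the same perturbed analysis.
rng = np.random.default_rng(1)
analysis = np.array([1.0, 1.0, 1.0]) + 1e-3 * rng.standard_normal(3)

# The two models differ only in one parameter, standing in for a
# difference in formulation (e.g. a convection scheme).
model1, model2 = analysis.copy(), analysis.copy()
for _ in range(4000):
    model1 = lorenz_step(model1, sigma=10.0)
    model2 = lorenz_step(model2, sigma=10.5)  # invented offset

# The trajectories agree early on but end up far apart in state space,
# so the two ensembles sample different parts of the total uncertainty.
print(np.linalg.norm(model1 - model2))
```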
Multiple-model Ensembles for seasonal prediction
Seasonal simulations for the winter (DJF) season from four individual models and three multiple-model combinations show that the multiple model acts as a filter for the more skilful individual model.

The graphic shows an example of the benefit of multiple-model ensembles to probabilistic skill. Skill is assessed for 9-member ensembles run with four individual models (models 1, 2, 3 and 4: blue, green, cyan and yellow) and three multiple-model combinations of these models, shown by the red bars:

- 2-model combination (open red bar): 18-member combination of models 1 and 2
- 3-model combination (shaded red bar): 27-member combination of models 1, 2 and 3
- 4-model combination (solid red bar): 36-member combination of models 1, 2, 3 and 4

All combinations are unweighted. The event assessed is winter (DJF) 850 hPa temperature above normal over Europe (top panel) and North America (bottom panel). The skill measure used is the Relative Operating Characteristic (ROC; see Stanski et al. 1989, and the sketch below). ROC scores above 0.5 indicate skill better than a random or climatological forecast.

The most striking result is that the skill of each multiple model is similar to, and sometimes better than, the skill of its more skilful component model. Over North America (bottom panel), for example, the skill of the 2-model combination is similar to that of model 2 (more skilful than model 1 for this region and season), the skill of the 3-model combination is similar to that of model 3 (more skilful than models 1 and 2), and the skill of the 4-model combination is similar to that of model 4 (more skilful than models 1, 2 and 3). Best skill among the multiple models is thus obtained with the 4-model combination. Similar results are found for Europe (top panel), where the skill of models 1 and 3 is matched by the skill of the 4-model combination, despite the lower skill of models 2 and 4.

To summarise: the relative skill of individual models may differ markedly with season and region. The multiple-model ensemble formed by combining individual ensembles provides a filter on the more skilful individual model, enabling its skill to be recovered for each region and season. The multiple-model method thus provides a powerful way of improving capabilities for global prediction.
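A minimal sketch of a ROC-area calculation for probabilistic forecasts of a binary event. This is a simple trapezoid version with invented verification data, illustrative rather than the exact binned method of Stanski et al. (1989):

```python
import numpy as np

def roc_area(forecast_probs, observed):
    # Hit rate vs false-alarm rate as the warning threshold slides
    # from 1 down to 0; the area under this curve is the ROC score.
    # Scores above 0.5 indicate skill better than a random forecast.
    event = observed.astype(bool)
    hit, far = [], []
    for t in np.linspace(1.0, 0.0, 21):
        warn = forecast_probs >= t
        hit.append(warn[event].mean())    # fraction of events warned
        far.append(warn[~event].mean())   # fraction of non-events warned
    hit, far = np.array(hit), np.array(far)
    # Trapezoid rule over the (false-alarm rate, hit rate) curve
    return float(np.sum(0.5 * (hit[1:] + hit[:-1]) * (far[1:] - far[:-1])))

# Invented verification data: 40 winters, with ensemble probabilities
# loosely related to whether the event (above-normal T850) occurred.
rng = np.random.default_rng(7)
obs = rng.integers(0, 2, size=40)
probs = np.clip(0.3 * obs + rng.uniform(0.0, 0.7, size=40), 0.0, 1.0)
print(f"ROC area = {roc_area(probs, obs):.2f}")
```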
Benefits of Multiple-model Ensembles
The ensemble spread is increased relative to the individual model ensembles; thus the observed outcome more frequently falls within the range of forecast solutions provided by the ensemble.

The multiple model provides a filter for the more skilful individual model (the best model varies with season, variable and region); thus the strengths of the individual models are exploited, improving capabilities for global seasonal prediction.

Benefits derive mainly from the use of additional models, but also from the increased ensemble size.

References:

Brankovic, C. and T.N. Palmer, 2000: Seasonal skill and predictability of ECMWF PROVOST ensembles. Q.J.R. Meteorol. Soc., 126.

Buizza, R., M. Miller and T.N. Palmer, 1999: Stochastic representation of model uncertainties in the ECMWF EPS. Q.J.R. Meteorol. Soc., 125.

Evans, R.E., M.S.J. Harrison, R.J. Graham and K.R. Mylne, 2000: Joint medium-range ensembles from the Met. Office and ECMWF systems. Mon. Wea. Rev., 128.

Graham, R.J., A.D.L. Evans, K.R. Mylne, M.S.J. Harrison and K.B. Robertson, 2000: An assessment of seasonal predictability using atmospheric general circulation models. Q.J.R. Meteorol. Soc., 126.

Houtekamer, P.L., L. Lefaivre, J. Derome, H. Ritchie and H.L. Mitchell, 1996: A system simulation approach to ensemble prediction. Mon. Wea. Rev., 124.

Krishnamurti, T.N. and collaborators, 1999: Improved weather and seasonal climate forecasts from a multi-model superensemble. Science, 285.

Mylne, K.R., R.E. Evans and R.T. Clark, 2000: Multi-model multi-analysis ensembles in quasi-operational medium-range forecasting. Submitted to Q.J.R. Meteorol. Soc. Available as NWP Forecasting Research Scientific Paper No. 60 from the National Meteorological Library, the Met. Office, London Road, Bracknell, RG12 2SZ, UK.

Stanski, H.R., L.J. Wilson and W.R. Burrows, 1989: Survey of common verification methods in meteorology. World Weather Watch Technical Report No. 8, WMO/TD 358, World Meteorological Organisation, Geneva, Switzerland.