The Seasonal Forecast System at ECMWF
Tim Stockdale, European Centre for Medium-Range Weather Forecasts

Contents
- Seasonal forecasting with coupled GCMs
  - Why use GCMs?
  - How we make a forecast
  - Basic calibration: how, why, and what are the problems?
  - Model error
- Operational forecasts: ECMWF System 3
  - System design
  - How good are the El Nino forecasts?
  - How good are the atmospheric forecasts?
- Multi-model systems and forecast interpretation
  - EUROSIP multi-model system
  - Meaning of probabilistic forecasts

Sources of seasonal predictability
KNOWN TO BE IMPORTANT:
- El Nino variability: the biggest single signal
- Other tropical ocean SST: important, but multifarious
- Climate change: especially important in mid-latitudes
- Local land surface conditions: e.g. soil moisture in spring
OTHER FACTORS:
- Volcanic eruptions: definitely important for large events
- Mid-latitude ocean temperatures: still somewhat controversial
- Remote soil moisture / snow cover: not well established
- Sea ice anomalies: local effects, but remote?
- Dynamic memory of the atmosphere: most likely on 1-2 months
- Stratospheric influences: various possibilities
- Unknown or unexpected: ???

Methods of seasonal forecasting
Empirical forecasting
- Uses the past observational record and statistical methods
- Pro: works with reality instead of error-prone numerical models
- Con: the limited number of past cases means it works best when observed variability is dominated by a single source of predictability
- Con: a non-stationary climate is problematic
Two-tier forecast systems
- First predict SST anomalies (ENSO or global; dynamical or statistical), then use an ensemble of atmosphere GCMs to predict the global response
- Many people still use regression of a predicted El Nino index on a local variable of interest (see the sketch below)
Single-tier GCM forecasts
- Pro: includes a comprehensive range of sources of predictability
- Pro: predicts the joint evolution of SST and atmospheric flow
- Pro: includes the indeterminacy of future SST, important for probabilistic forecasts
- Con: model errors are an issue!
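A minimal sketch of this regression step, with synthetic placeholder data: a local variable of interest is regressed on a predicted El Nino index over a hypothetical 30-year training record, then the fit is applied to a new predicted index value. Nothing here comes from an operational system.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 30-year training record: a predicted Nino-3.4 SST
# anomaly index and a co-located seasonal rainfall anomaly.
n_years = 30
nino34 = rng.normal(0.0, 1.0, n_years)                 # index (deg C)
rain = 0.6 * nino34 + rng.normal(0.0, 0.8, n_years)    # local variable

# Least-squares regression of the local variable on the index.
slope, intercept = np.polyfit(nino34, rain, deg=1)

# Apply the fit to a newly predicted index value to get a local forecast.
nino34_forecast = 1.5   # e.g. a predicted warm event
rain_forecast = slope * nino34_forecast + intercept
print(f"forecast rainfall anomaly: {rain_forecast:+.2f}")
```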

Step 1: Build a coupled model
IFS (atmosphere)
- TL159L62, cycle Cy31r1, 1.125 deg grid for physics (operational in Sep 2006)
- Full set of singular vectors from the EPS system to perturb the atmosphere initial conditions (more sophisticated than needed ...)
- Ocean currents coupled to the atmosphere boundary-layer calculations
HOPE (ocean)
- Global ocean model, 1x1 deg resolution in mid-latitudes, 0.3 deg near the equator
- A lot of work went into developing the OI ocean analyses, including analysis of salinity, multivariate bias corrections and the use of altimetry
Coupling
- Fully coupled, with no flux adjustments, except that there is no physical model of sea ice

Step 2: Make some forecasts
Initialize the coupled system (cf. Magdalena's lecture)
- The aim is to start the system close to reality. Accurate SST is particularly important, plus the ocean sub-surface.
- Don't worry too much about "imbalances".
Run an ensemble forecast
- Explicitly generate an ensemble on the 1st of each month, with perturbations to represent the uncertainty in the initial conditions; run the forecasts for 7 months.
- Worry about model error later ...

Creating the ensemble
Wind perturbations
- A perfect wind would give a good ocean analysis, but the uncertainties are significant. We represent them by adding perturbations to the wind used in the ocean analysis system.
- BUT we only have a 5-member ensemble, and no representation of other sources of uncertainty in the ocean analysis (observation error, ...).
SST perturbations
- SST uncertainty is not negligible. SST perturbations are added to each ensemble member at the start of the forecast (see the sketch below).
- BUT the perturbations are based on analyses that use the same input data.
Atmospheric unpredictability
- Atmospheric 'noise' soon becomes the dominant source of spread in an ensemble forecast. This sets a fundamental limit to forecast quality.
- To ensure that the noise grows rapidly enough in the first few days, we activate 'stochastic physics' and use EPS singular vectors.
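A minimal sketch of the SST perturbation step, assuming a control SST analysis plus a pool of perturbation patterns; the grid size, the pool of 10 patterns and the random-sign convention are illustrative assumptions, not the operational implementation.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical setup: perturb a control SST analysis to create ensemble
# initial conditions. The perturbation "patterns" are random fields
# standing in for differences between alternative SST analyses.
n_members = 41
nlat, nlon = 180, 360
sst_control = 15.0 + rng.normal(0.0, 1.0, (nlat, nlon))  # placeholder analysis
patterns = rng.normal(0.0, 0.3, (10, nlat, nlon))         # placeholder patterns

ensemble = np.empty((n_members, nlat, nlon))
ensemble[0] = sst_control                     # unperturbed control member
for m in range(1, n_members):
    k = rng.integers(len(patterns))           # draw a perturbation pattern
    sign = rng.choice([-1.0, 1.0])            # apply it with a random sign
    ensemble[m] = sst_control + sign * patterns[k]

print(ensemble.shape)  # (41, 180, 360)
```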

RMSE and spread in different systems
The RMS error of the forecasts has been systematically reduced (solid lines) ... but ensemble spread (dashed lines) is still substantially less than the actual forecast error. Substantial amounts of forecast error do not come from the initial conditions.

Step 3: Remove systematic errors
Model drift is typically comparable to the signal, in both SST and atmosphere fields.
Forecasts are made relative to past model integrations:
- The model climate is estimated from 25 years of forecasts (1981-2005), each using an 11-member ensemble, so the climate has 275 members.
- The model climate has both a mean and a distribution, allowing us to estimate e.g. tercile boundaries (see the sketch below).
- The model climate is a function of start date and forecast lead time.
- EXCEPTION: Nino SST indices are bias-corrected to absolute values, and anomalies are displayed with respect to a 1971-2000 climate.
Implicit assumption of linearity:
- We implicitly assume that a shift in the model forecast relative to the model climate corresponds to the expected shift in a true forecast relative to the true climate, despite the differences between the model and true climates.
- Most of the time this assumption seems to work pretty well. But not always.
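A minimal sketch of this calibration, assuming synthetic hindcast and forecast arrays: the lead-dependent model-climate mean is removed from the forecast, and tercile boundaries are estimated from the 275-member model climate. The ensemble sizes mirror the slide; everything else is a placeholder.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: a drifted hindcast set (25 years x 11 members x
# 7 lead months) and a 41-member real-time forecast at the same leads.
n_years, n_members, n_leads = 25, 11, 7
hindcast = rng.normal(1.0, 0.8, (n_years, n_members, n_leads))
forecast = rng.normal(1.5, 0.8, (41, n_leads))

# Model climate: 25 x 11 = 275 members per lead time.
climate = hindcast.reshape(-1, n_leads)

# Forecast anomalies relative to the model climate (removes the
# lead-dependent drift/bias).
anomaly = forecast - climate.mean(axis=0)

# Tercile boundaries from the model climate distribution, and the
# forecast probability of the upper tercile at each lead time.
lower, upper = np.percentile(climate, [100 / 3, 200 / 3], axis=0)
p_upper = (forecast > upper).mean(axis=0)
print(np.round(anomaly.mean(axis=0), 2))   # ensemble-mean anomaly per lead
print(np.round(p_upper, 2))                # upper-tercile probability per lead
```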

Biases (e.g. JJA 2m temperature, as shown here) are often comparable in magnitude to the anomalies we seek to predict. (Panels: ECMWF, Met Office, Météo-France, CFS. Plots courtesy of Paco Doblas-Reyes.)

SST bias is a function of lead time and season. More recent systems have less bias, but it is still large enough to require correction.

Despite SST bias and other errors, anomalies in the coupled system can be remarkably similar to those obtained using observed (unbiased) SSTs ...

… and can also verify well against observations

Model errors are still serious ...
Models have errors other than mean bias:
- e.g. weak wind and SST variability in System 2
- strong underestimation of MJO activity
- suspected too-weak teleconnections to the mid-latitudes
Mean-state errors interact with model variability:
- the Nino 4 region is very sensitive (cold tongue / warm pool boundary)
- Atlantic variability is suppressed if the mean state is badly wrong
Our forecast errors are larger than they should be:
- relative to internal variability estimates and (occasionally) other prediction systems
- the reliability of probabilistic forecasts is not particularly high

ENSO forecast quality: February starts only
ECMWF System 3, 1 Feb starts, first 3 months of forecast. (Marked on the plot: completion of the TAO array.)
The dramatic pre/post-TAO contrast is specific to these forecasts. The hypothesis is that model error usually dominates the forecasts, but at this particular time of year, for this model and this forecast range, model error has negligible impact. In this case the role of initial-condition error becomes visible, and the pre- and post-TAO eras are clearly demarcated. At longer lead times (e.g. May verification), we start to get forecast errors in the post-TAO era, although June and July SSTs verify almost perfectly in the last 10 years (1998-2007).
The striking behaviour of the forecasts plotted above was first noted using forecasts up to 2005. 2006 and 2007 (the last two points above) continue the pattern of excellent recent forecasts. The forecast from 2008 looks as if it will also be nearly perfect, despite the challenging evolution of SST over the last few months.
Note that since the above plot was "selected" from a range of possible plots covering different seasons and lead times, there is likely to be an element of artificial skill. Nonetheless, an important part of the apparent error reduction is believed to be real.

ENSO forecast quality: all start months
Model error partially masks the benefit of improved observations.

System 3 vs Cycle 33r1
The most recent ECMWF cycles (such as 33r1, shown here) develop a too-strong cold tongue in the West Pacific. This results in a large drop in SST forecast skill in Nino 4. This relationship between mean-state error and forecast performance in the Nino 4 region has been seen time and again in different model versions. The model version used for System 3 strikes a pretty good balance, but even it is not perfect.

Statistical post-processing
- Red: statistically corrected System 3 forecast.
- The improvement is mainly seen in Nino 4, where there are clear indications of anomaly / mean-state interactions, and where state-dependent errors are visible.
- The statistical model is a cross-validated linear regression, based on the initial-condition SST anomaly and the model-predicted SST anomaly (see the sketch below).
- Not used operationally.
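A minimal sketch of such a correction: a leave-one-year-out linear regression that predicts the observed anomaly from two predictors, the initial-condition SST anomaly and the model-predicted anomaly. The data are synthetic; this illustrates the parenthetical description above, not the actual System 3 code.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic 25-year record: initial-condition anomaly, model-predicted
# anomaly, and the observed outcome to be predicted.
n_years = 25
x_init = rng.normal(0.0, 1.0, n_years)
x_model = 0.7 * x_init + rng.normal(0.0, 0.5, n_years)
y_obs = 0.5 * x_init + 0.6 * x_model + rng.normal(0.0, 0.3, n_years)

X = np.column_stack([np.ones(n_years), x_init, x_model])

corrected = np.empty(n_years)
for i in range(n_years):
    train = np.arange(n_years) != i                 # leave year i out
    coef, *_ = np.linalg.lstsq(X[train], y_obs[train], rcond=None)
    corrected[i] = X[i] @ coef                      # corrected forecast, year i

rmse_raw = np.sqrt(np.mean((x_model - y_obs) ** 2))
rmse_cor = np.sqrt(np.mean((corrected - y_obs) ** 2))
print(f"raw RMSE {rmse_raw:.3f}, corrected RMSE {rmse_cor:.3f}")
```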

Capturing trends is important. Some are handled well ….

… other trends are poorly handled
Global mean temperature at 50 hPa. The observed cooling at this altitude is mainly driven by trends in ozone, which our model does not include. The volcanic spikes of El Chichón and Pinatubo are also clear in the observed data.

Volcanic influence on Eurasian winters?
(Panels: ECMWF System 3 hindcast vs analysed 2m temperature anomaly, for DJF 82/83 and DJF 91/92.)
NH winters with large volcanic aerosol loading tend to look like 82/83 and 91/92, particularly over Eurasia (based on reconstructions back to 1600). Our coupled forecasts get both of these winters badly wrong (although for 91/92, the run with observed SST does better). Shindell et al. 2004 (JGR) have modelling results suggesting that the volcanic aerosol impact is similar to the observed pattern.
Even if no volcanoes occur in the future, our estimates of past forecast quality may need to take volcanic winters into account. ECMWF is working on including the relevant aerosol effects (historically, and in the real-time analysis and forecast system).

Operational seasonal forecasts
Real-time forecasts since 1997:
- "System 1" was initially made public as "experimental" in Dec 1997
- System 2 started running in August 2001 and was released in early 2002
- System 3 started running in Sept 2006 and became operational in March 2007
Burst-mode ensemble forecast:
- Initial conditions are valid at 0Z on the 1st of the month
- The forecast is typically created on the 11th/12th (SST data is delayed by up to 11 days)
- Forecast and product release is at 12Z on the 15th
Range of operational products:
- A moderately extensive set of graphical products on the web
- Raw data in MARS
- Formal dissemination of real-time forecast data

System 3 configuration
Real-time forecasts:
- A 41-member ensemble forecast to 7 months, with SST and atmospheric perturbations added to each member
- An 11-member ensemble forecast to 13 months, designed to give an 'outlook' for ENSO; run only once per quarter (Feb, May, Aug and Nov starts), with November starts actually running 14 months (to year end)
Back integrations, 1981-2005 (25 years):
- An 11-member ensemble every month
- 5 members to 13 months, once per quarter

How many back integrations?
Back integrations dominate the total cost of the system:
- System 3: 3300 back integrations (all needed in the first year) vs 492 real-time integrations per year
Back integrations define the model climate:
- We need both the climate mean and the pdf; the latter needs a large sample
- We may prefer to use a "recent" period (30 years? or less?)
- System 2 had a 75-member "climate"; System 3 has 275. Sampling is basically OK.
Back integrations provide information on skill:
- A forecast cannot be used unless we know (or assume) its level of skill
- The observations provide only 1 member, so large ensembles are much less helpful than large numbers of cases
- Care is needed, e.g. when estimating the skill of a 41-member ensemble from the past performance of an 11-member ensemble
- For regions of high signal/noise, System 3 gives adequate skill estimates; for regions of low signal/noise (e.g. <= 0.5), hundreds of years would be needed

Example forecast products
A few examples only; see the web pages for full details and an assessment of skill.
Note: significance values on the plots
- A lot of the variability in seasonal mean values is due to chaos.
- The ensembles are large enough to test whether any apparent signals are real shifts in the model pdf.
- We use the w-test: non-parametric, based on the rank distribution (see the sketch below).
- The significance is NOT related to past levels of skill.
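The "w-test" is presumably a Wilcoxon-type rank test; as an illustration only, the sketch below uses the standard Wilcoxon rank-sum test from SciPy to ask whether a forecast ensemble is shifted relative to the model climate at a single point. The ensemble sizes follow the slides; the data are synthetic.

```python
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(3)

# At one grid point: the 275-member model climate and a 41-member
# forecast ensemble with a modest (synthetic) warm shift.
climate = rng.normal(0.0, 1.0, 275)
forecast = rng.normal(0.4, 1.0, 41)

# Non-parametric rank-based test of whether the forecast pdf is
# shifted relative to the model climate.
stat, p_value = ranksums(forecast, climate)
print(f"p = {p_value:.3f}; significant at 5%: {p_value < 0.05}")
```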

Other operational plots for JJA 2009

Many other fields are available from the forecast system ... (although these are not routinely plotted).

SST forecast performance
Actual RMS errors are larger than the model's estimate of "perfect model" errors.
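A minimal sketch of the comparison on this slide: the actual RMS error of the ensemble mean against a spread-based "perfect model" error estimate. The data are synthetic, constructed with a shared error component that the spread does not see, so the actual error exceeds the spread as described.

```python
import numpy as np

rng = np.random.default_rng(4)

n_cases, n_members = 100, 41
truth = rng.normal(0.0, 1.0, n_cases)

# Ensemble = truth + member-dependent noise (seen by the spread)
#          + a shared error per case (invisible to the spread).
ens = (truth[:, None]
       + rng.normal(0.0, 0.4, (n_cases, n_members))
       + rng.normal(0.0, 0.5, n_cases)[:, None])

rmse = np.sqrt(np.mean((ens.mean(axis=1) - truth) ** 2))
spread = np.sqrt(np.mean(ens.var(axis=1, ddof=1)))
print(f"actual RMSE {rmse:.2f} > spread estimate {spread:.2f}")
```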

More recent SST forecasts are better ...
(Panels: 1981-1993 vs 1994-2007.)

At longer leads, model spread starts to catch up

How good are the forecasts? Deterministic skill: DJF ACC
(Panels: temperature, actual forecasts vs perfect model.)

How good are the forecasts? Deterministic skill: DJF ACC
(Panels: precipitation, actual forecasts vs perfect model.)
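For reference, a minimal sketch of the anomaly correlation coefficient (ACC) scored on these two slides: the correlation over hindcast years between forecast and analysed anomalies at each grid point. The data and the level of skill are synthetic placeholders.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic hindcasts: 25 years at 1000 grid points, with partial skill.
n_years, npts = 25, 1000
obs = rng.normal(0.0, 1.0, (n_years, npts))
fcst = 0.5 * obs + rng.normal(0.0, 1.0, (n_years, npts))

# Anomalies with respect to each dataset's own climatology.
obs_a = obs - obs.mean(axis=0)
fcst_a = fcst - fcst.mean(axis=0)

# ACC at each grid point.
acc = (fcst_a * obs_a).sum(axis=0) / np.sqrt(
    (fcst_a ** 2).sum(axis=0) * (obs_a ** 2).sum(axis=0))
print(f"median grid-point ACC: {np.median(acc):.2f}")
```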

How good are the forecasts? Probabilistic skill: reliability diagrams
(Panels: tropical precipitation below the lower tercile, JJA; NH extratropical temperature above the upper tercile, DJF.)

How good are the forecasts? Probabilistic skill: reliability diagrams
(Panel: Europe, temperature above the upper tercile, DJF.)
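A minimal sketch of how such a reliability diagram is assembled: the issued event probabilities (e.g. "temperature above the upper tercile") are binned, and each bin's mean forecast probability is compared with the observed frequency of the event. The forecasts and outcomes below are synthetic, built with deliberately imperfect reliability.

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic issued probabilities and outcomes. The outcomes are drawn
# so that the forecasts are overconfident at high probabilities.
n_forecasts = 5000
p_issued = rng.uniform(0.0, 1.0, n_forecasts)
occurred = rng.uniform(0.0, 1.0, n_forecasts) < 0.2 + 0.6 * p_issued

# One point of the reliability curve per probability bin.
bins = np.linspace(0.0, 1.0, 11)
idx = np.digitize(p_issued, bins) - 1
for b in range(10):
    sel = idx == b
    if sel.any():
        print(f"forecast p ~ {p_issued[sel].mean():.2f}  "
              f"observed frequency {occurred[sel].mean():.2f}")
```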

EUROSIP multi-model ensemble
Three models running at ECMWF:
- ECMWF: as described
- Met Office: HadCM3 model, Met Office ocean analyses
- Météo-France: Météo-France model, Mercator ocean analyses
A unified system:
- Real-time since mid-2005
- All data in the ECMWF operational archive
- A common operational schedule (products released at 12Z on the 15th)
- Common products
- A coordinated development strategy (sort of ...)
See the "EUROSIP User Guide" on the web for details, and also the recent ECMWF Newsletter article (Issue No. 118, Winter 2008/09).

Summer 2009 temperature forecast
(Panels: ECMWF vs EUROSIP. See Antje's lecture for more on multi-model ensembles ...)

Model error and forecast interpretation
Model error is large:
- It dominates El Nino forecast errors
- Mean-state and variability errors are very significant
- The errors cannot be easily fixed
Products typically account for sampling error only:
- Don't take model probabilities as true probabilities
Estimating forecast skill can be difficult:
- In many cases the data are insufficient to produce sensible estimates
- This makes the interpretation of forecasts tough
In the end we need trustworthy models:
- Multi-model ensembles are small, and only partially span the space of model errors

Calibrated forecasts
Frequentist or Bayesian?
- Sampling uncertainties give rise to frequentist probabilities.
- Model errors mean our forecast pdfs are wrong. We can try to represent our lack of knowledge as "probabilities" in a Bayesian framework.
Calibration of probabilistic forecasts:
- Given the model forecast pdf, the past history of forecast pdfs and the past history of observed outcomes, what is the best estimate of the true pdf?
- A Bayesian analysis depends on the specification of the prior; mathematically, there is no single "right" answer.
- We hope one day to implement a hierarchical Bayesian analysis, accounting for the major uncertainties and producing some general-purpose calibrated probabilistic forecasts (a toy illustration of one simple calibration idea follows).
- Handling trends and non-stationarities is an additional challenge.
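As a toy illustration only (not the hierarchical analysis hoped for above), one simple calibration idea is to shrink the model's event probability towards climatology according to how much the model is trusted. All numbers below are hypothetical.

```python
from scipy.stats import norm

# Gaussian climatological pdf (the prior) and model-forecast pdf.
clim_mean, clim_sd = 0.0, 1.0
fcst_mean, fcst_sd = 0.8, 0.6
trust = 0.6   # hypothetical weight on the model

# Event: seasonal anomaly above the climatological upper tercile.
upper_tercile = norm.ppf(2 / 3, clim_mean, clim_sd)

p_model = 1.0 - norm.cdf(upper_tercile, fcst_mean, fcst_sd)
p_clim = 1 / 3   # climatological probability, by construction
p_calibrated = trust * p_model + (1.0 - trust) * p_clim
print(f"model p(event) {p_model:.2f} -> calibrated {p_calibrated:.2f}")
```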

Some final comments
Plenty of scope for improving model forecasts:
- Nino SST forecasts are still much worse than the predictability limits suggest
- Model errors are still obvious in many cases
- Initial conditions have probably been OK in the Pacific for some time, and were recently improved elsewhere by Argo
From model output to user forecast:
- Calibration and presentation of forecast information
- Potential for multi-model ensembles
Timescale for improvements:
- Optimist: in 10 years we'll have much better models, pretty reliable forecasts, and confidence in our ability to handle climate variations
- Pessimist: in 10 years modelling will still be a hard problem, and progress will largely be down to improved calibration