1
MJO Forecasting Metrics/Diagnostics
Matthew Wheeler – Australian Bureau of Meteorology/CAWCR Klaus Weickmann – NOAA/Physical Sciences Division on behalf of the U.S.-CLIVAR MJO Working Group (and others)
2
MJO WG Terms of Reference
1. Develop a set of diagnostics to be used for assessing MJO simulation fidelity and forecast skill.
2. Develop and coordinate model simulation and prediction experiments, in conjunction with model-data comparisons, designed to better understand the MJO and improve our model representations and forecasts of the MJO.
3. Raise awareness of the potential utility of subseasonal and MJO forecasts in the context of the seamless suite of predictions.
4. Help to coordinate MJO-related activities between national and international agencies and associated programmatic activities.
5. Provide guidance to US CLIVAR and the Interagency Group (IAG) on where additional modeling, analysis or observational resources are needed.
3
MJO forecasting/prediction forms an integral part of the Terms of Reference of the MJO WG, thus there is a subgroup focussed on this activity. Our first goal has been to develop a diagnostic to measure the state of the MJO in forecast models. This talk summarizes the work of this subgroup to date. Apologies for inaccuracies…. This activity is currently less developed than that of the subgroup working on MJO simulation diagnostics.
4
Plan for this talk
1. Overview of the adopted combined-EOF metric
2. Examples of current application at the Operational Modelling Centres
3. Current issues (e.g. precip vs. OLR)
4. Verification and a statistical benchmark
5. A straw-man recipe
6. Future plans (e.g. a multi-model ensemble)
7. Further metrics?
5
1. Overview of the adopted combined-EOF metric
Discussions to date have led to the adoption of the so-called Wheeler-Hendon combined EOF index. The problem: Traditional methods of band-pass time filtering introduce end effects, and influence is spread across time. The solution: Extract the MJO signal by projecting daily model/analysis/observed fields onto a pair of pre-defined multivariate EOF spatial structures.
6
The MJO spatial structures are defined using EOFs of the combined fields of 15S-15N-averaged OLR, u850, and u200. Prior removal of the annual cycle and components of interannual variability (e.g. ENSO) is required, and remains possible in real time. The EOFs are defined using all seasons of data. Wheeler and Hendon (Mon. Wea. Rev., 2004)
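The projection step can be sketched as follows. This is a minimal illustration, not the operational code: the function name and argument layout are hypothetical, the anomalies are assumed to already have the annual cycle and low-frequency variability removed, and the normalization factors are the WH04 values quoted later in the talk.

```python
import numpy as np

def rmm_indices(olr, u850, u200, eof1, eof2, norms=(15.1, 1.81, 4.81)):
    """Project normalized, 15S-15N-averaged anomaly fields onto the
    pair of pre-defined combined EOFs to obtain (RMM1, RMM2).

    olr, u850, u200 : 1-D anomaly arrays over longitude.
    eof1, eof2      : combined EOF vectors of length 3*nlon, ordered
                      [OLR, u850, u200] to match the stacked state.
    norms           : per-field normalization factors (WH04 values).
    """
    state = np.concatenate([olr / norms[0], u850 / norms[1], u200 / norms[2]])
    return float(state @ eof1), float(state @ eof2)
```

Because the projection uses only the current day's fields, no band-pass time filtering is needed, which is what makes the index usable in real time and at every forecast lead.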
7
The EOFs describe the convectively-coupled vertically-oriented circulation cells of the MJO that propagate eastward along the equator. Madden and Julian (1972)
8
Monitor the MJO life-cycle in phase space
An example from observations (NCEP/BoM analyses and satellite OLR). [Figure: (RMM1, RMM2) phase-space diagram; the axes are the projections on EOF1 and EOF2.] MJO Phases 1-8 are defined for the generation of composites and impacts studies. It is convenient to view the state of the MJO in the two-dimensional phase space defined by the two EOFs, where RMM1 is the projection coefficient for the first EOF and RMM2 is the projection coefficient for the second. For example, in the June-July period this year, each dot represents the projection of an individual day of data onto the pair of EOFs. In this phase space, strong MJO variability appears as large anticlockwise excursions around the origin, and weak MJO activity as more random behavior within the central circle. Given this real-time index, we can use it for empirical prediction, as shown later; but first it is important to demonstrate that this all-season index can capture the seasonal variation of the MJO.
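The bookkeeping behind the eight-phase diagram can be sketched as follows; the helper name is hypothetical, and it assumes the standard octant convention in which phase 1 begins at the negative-RMM1 axis and phases advance anticlockwise.

```python
import numpy as np

def mjo_phase_amplitude(rmm1, rmm2):
    """Map an (RMM1, RMM2) point to its octant phase (1-8) and its
    amplitude sqrt(RMM1^2 + RMM2^2); points with amplitude < 1 fall
    inside the central "weak MJO" circle of the phase diagram."""
    amplitude = float(np.hypot(rmm1, rmm2))
    angle = np.degrees(np.arctan2(rmm2, rmm1))    # (-180, 180], 0 along +RMM1
    phase = int((angle + 180.0) // 45.0) % 8 + 1  # phase 1 starts at -RMM1 axis
    return phase, amplitude
```

Stratifying days by this phase number (for amplitude > 1) is how the composites and impacts studies on the following slides are built.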
9
Composites for different seasons (Dec-Feb and May-June) demonstrate that the all-season index still captures the strong seasonality exhibited by the MJO.
10
…and impacts work has been done based on the RMM index.
Rain Event Probabilities and TC Tracks. Stratifying the MJO into the 8 phases of the (RMM1, RMM2) phase space, the conditional probability of weekly rainfall falling in the upper quintile ranges from less than 15% in phase 2 to about 60% in phase 6. This is to be compared to the climatological probability of such an event, 33%.
11
Example impacts in North America based on RMM phases
[Figure: Signal-to-noise ratio of 2-metre air temperature for the eight MJO phases, DJF. The maximum signal of ~+0.5 sigma implies a 67% probability of an above-zero anomaly.]
12
2. Examples of current application at the Operational Modelling Centres
UK Met Office (Nick Savage): 14-day ensemble prediction system. Uses the same EOFs as WH04. For the "observed" trajectory, uses their own model analyses (including for OLR). Climatologies are computed from the NCEP Reanalyses (same as WH04). [Figure: observed/analysis trajectory and forecasts in RMM phase space.]
13
NCEP (Gottschalck, Higgins, L’Heureux, Wang, Vintzileos..….)
15-day forecasts with the Global Ensemble Forecast System (20 members). Generated their own EOF structures (but they resemble WH04 very closely). The observed trajectory uses a combination of operational analyses and Reanalyses, plus satellite OLR. Climatologies are from the NCEP Reanalyses. [Figure: observed/analysis trajectory and forecasts in RMM phase space.]
14
Currently six operational centres are contributing forecasts of the RMM index.
Each is updated daily on the Experimental MJO Prediction web-site at NOAA/PSD.
15
3. Current issues Each centre currently uses a slightly different recipe. However, the results don’t appear overly sensitive to this, provided the EOF structures are the same or very similar. Some centres (e.g. CMC) don’t save OLR, and in some models the relationship between OLR and precip is not the same as that observed. Should we be using precip instead?
16
Precip versus OLR issue
WH04 EOFs using OLR, u850, and u200, compared with the same calculation using pentad CMAP rainfall instead of daily OLR. No filtering except some removal of low frequencies. [Figure: the two EOF pairs; longitude markers at 73E, 90E, 127E, 145E, and 190E.]
17
The computed CMAP/u850/u200 EOFs are shifted longitudinally compared to the OLR/u850/u200 EOFs! This would result in a relative rotation in the index phase space. However, as the EOFs are a pair, any linear combination of them is equally applicable, so we may perform an orthogonal rotation:

EOF1_new = (EOF1 - 1.6 x EOF2) / sqrt(1^2 + 1.6^2)
EOF2_new = (1.6 x EOF1 + EOF2) / sqrt(1^2 + 1.6^2)

These new CMAP EOFs still remain a pair in quadrature, but match the phasing of the original OLR EOFs much more closely!
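The rotation above is just a 2-D orthogonal transformation applied to the EOF pair; a minimal sketch (the function name is hypothetical) might look like:

```python
import numpy as np

def rotate_eof_pair(eof1, eof2, slope=1.6):
    """Orthogonally rotate a quadrature EOF pair by the angle whose
    tangent is `slope` (1.6 here). The rotation preserves the pair's
    orthonormality while shifting its longitudinal phasing."""
    norm = np.sqrt(1.0 + slope ** 2)
    eof1_new = (eof1 - slope * eof2) / norm
    eof2_new = (slope * eof1 + eof2) / norm
    return eof1_new, eof2_new
```

Because the transformation is orthogonal, any projection computed with the rotated pair is just a rotation of the (RMM1, RMM2) point computed with the original pair, so phase-space geometry (amplitude, anticlockwise propagation) is unchanged.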
18
[Figure: WH04 EOFs using OLR compared with the rotated CMAP EOFs, (EOF1 - 1.6 x EOF2) / sqrt(1^2 + 1.6^2) and (1.6 x EOF1 + EOF2) / sqrt(1^2 + 1.6^2).]
19
So we now have available equivalent precip/u850/u200 EOF structures, if desired.
But a problem is that there is no real-time daily precipitation dataset available for creation of the “observed” part of the phase-space trajectory.
20
…another issue: removal of low-frequency variability (e.g. ENSO) is not straightforward, especially for models that have only a short history. Different methods result in a translation in the RMM phase space.
21
4. Verification and a statistical benchmark
A statistical benchmark forecast of RMM1 and RMM2 can be provided through a first-order vector autoregressive model, in which the (RMM1, RMM2) vector at the next time step is predicted by applying a 2x2 matrix of lag-1 coefficients to the current (RMM1, RMM2) vector (Maharaj and Wheeler, Int. J. Climatol., 2005). This provides a very similar forecast to lagged linear regression (e.g., Jiang et al., Mon. Wea. Rev., 2007).
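A first-order vector autoregression of this kind can be fitted and iterated as follows. This is an idealized sketch with hypothetical helper names, fitted by ordinary least squares to an (ndays, 2) history of (RMM1, RMM2); the published benchmark is fitted to the long observed RMM record.

```python
import numpy as np

def fit_var1(rmm):
    """Least-squares fit of a first-order vector autoregression
    x(t+1) = A x(t) + eps to an (ndays, 2) history of (RMM1, RMM2)."""
    x0, x1 = rmm[:-1], rmm[1:]
    A, *_ = np.linalg.lstsq(x0, x1, rcond=None)  # solves x0 @ A = x1
    return A.T  # transpose so the forecast step is A @ x

def var1_forecast(A, state, nsteps):
    """Iterate the fitted VAR(1) from the current (RMM1, RMM2) state.
    With the eigenvalues of A inside the unit circle, the trajectory
    spirals in toward the origin, as benchmark forecasts do."""
    out, x = [], np.asarray(state, dtype=float)
    for _ in range(nsteps):
        x = A @ x
        out.append(x.copy())
    return np.array(out)
```

The damped-rotation structure of the fitted matrix is what produces the characteristic inward-spiralling benchmark trajectories in the phase diagram.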
22
Example statistical benchmark forecasts
Now that we have our indices of the MJO, forecasts of the MJO can come from forecasting the values of RMM1 and RMM2. One way to do this is through multiple linear regression: using RMM1 and RMM2 on the current day as predictors, predictions at future leads are made from equations whose coefficients are computed independently for each lag and vary smoothly with the time of year. An example from the 28th of May this year: predictions of the indices in this phase space look like spirals heading around the origin and into the centre. This particular prediction shows enhanced convection initially over the Pacific, with the MJO in phase 7; subsequently the MJO is forecast to evolve such that after 15 days the enhanced convection is entering the Indian Ocean region (phase 2).
23
The easiest forecast verification statistic to calculate is the correlation between the forecast and 'observed' RMM1/2 values. For the benchmark statistical scheme, the correlation skill is 0.52 for 15-day forecasts (using forecasts from all seasons, and averaging the correlations for RMM1 and RMM2; Maharaj and Wheeler 2005). For the BMRC coupled seasonal forecast model (POAMA), the correlation skill is ~0.5 at 15 days (computed from a comprehensive hindcast set). The PSD model also does worse than the statistical benchmark: its correlation skill is ~0.1 lower at 14 days.
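The statistic as described (averaging the per-index correlations) can be sketched as follows; the function name is hypothetical, and a combined bivariate correlation over both indices is another common choice.

```python
import numpy as np

def rmm_correlation_skill(fcst, obs):
    """Average of the Pearson correlations between forecast and
    observed RMM1 and RMM2 at a fixed lead time.

    fcst, obs : arrays of shape (ncases, 2), one row per forecast
                case, columns ordered (RMM1, RMM2)."""
    corrs = [np.corrcoef(fcst[:, k], obs[:, k])[0, 1] for k in range(2)]
    return float(np.mean(corrs))
```

Computing this at each lead gives the skill-versus-lead curves used to compare the dynamical models against the statistical benchmark.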
24
Can also stratify in many ways.
e.g., from POAMA: DJF actual forecast skill versus DJF potential predictability, compared against the statistical model (courtesy of Harun Rashid). Can answer questions about "predictability barriers" as well (e.g. Xianan Jiang has looked at this).
25
5. A straw-man recipe Good arguments exist for standardising the calculations between Centres (e.g., if we want to create a multi-model ensemble). Initially it has been very informative to see the different calculation and presentation strategies of the six Centres currently involved, but it is now time to bring the best of those strategies together. This makes all the more sense given our proposal to WGNE to ask all Operational Centres to contribute.
26
Proposed recipe (either to be computed by each Centre themselves, or by a single volunteering Centre):
a) Use WH04 EOFs (or the equivalent precip structures; some Centres may choose to try both).
b) Use the WH04 normalization factors for each field (OLR = 15.1 W m-2, u850 = 1.81 m s-1, u200 = 4.81 m s-1).
c) All use the same climatology, computed from the NCEP Reanalyses and observed OLR/precip.
d) All use the same methodology for removal of ENSO and other low-frequency variability (i.e., removal of variability linearly related to an ENSO SST index, and removal of the mean of the previous 120 days).
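Step (d) of the recipe can be sketched for a single 1-D anomaly series; this is a hypothetical helper for illustration, and the real calculation would apply the same operations at every grid point with a 120-day window.

```python
import numpy as np

def remove_low_frequency(series, enso_index, window=120):
    """Sketch of recipe step (d) for one 1-D anomaly series:
    (1) remove the component linearly related to an ENSO SST index,
    (2) subtract from each day the mean of the previous `window` days.
    Days without a full preceding window are returned as NaN."""
    x = np.asarray(series, dtype=float)
    e = np.asarray(enso_index, dtype=float)
    x = x - (x @ e) / (e @ e) * e  # least-squares removal of the ENSO-related part
    out = np.full_like(x, np.nan)
    for t in range(window, x.size):
        out[t] = x[t] - x[t - window:t].mean()
    return out
```

Because the running mean uses only the previous 120 days, the step remains computable in real time, which is the property the recipe needs.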
27
Note that the use of the same climatology and estimate of low-frequency variability will generate a bias in the models from the outset (i.e., the forecast initial condition "day 0" will not exactly match that from other models or the observations). For verification purposes, this bias could potentially be removed by shifting the model forecast RMM values so that the "day 0" values (i.e., from the model analysis/initial condition) exactly match the "observed" RMM values on that day.
28
Shift forecasts until the “day 0” points (circled in red) correspond exactly to the observations.
Anomaly correlations will then always begin at a value of 1.0 for day 0.
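The shift described above amounts to translating the whole forecast trajectory by the day-0 error; a minimal sketch (the helper name is hypothetical):

```python
import numpy as np

def shift_to_day0(forecast, observed_day0):
    """Translate a forecast RMM trajectory, shape (nlead + 1, 2) with
    the model's own day-0 analysis in the first row, so that day 0
    coincides exactly with the observed (RMM1, RMM2) point. The shift
    preserves the shape of the trajectory, so the anomaly correlation
    of the shifted forecast is 1.0 at day 0 by construction."""
    fc = np.asarray(forecast, dtype=float)
    return fc + (np.asarray(observed_day0, dtype=float) - fc[0])
```

A rigid translation corrects only the initial-condition offset; any growth of error with lead time is untouched, so skill at later leads is still a fair measure of the model.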
29
6. Future plans
1. WGNE has offered to disseminate a letter to all other Operational Centres asking them to calculate the proposed metric (hence the need for a standard recipe).
2. Multi-model ensemble.
3. Acquisition and dissemination by whom? Currently at NOAA-PSD, but will this move to NCEP?
4. Journal article by the whole group?
30
7. Further metrics? So far we have concentrated only on the canonical eastward-propagating MJO. A metric designed specifically for the northward propagation in the Asian monsoon would also be desirable.
31
THE END
32
The global wind oscillation (GWO) is lurking!
MJO’s global teleconnection pattern: 250 mb, DJF, 8 phases, ~27 cases/phase. [Figure: composite anomaly highs (H) and lows (L) for the phase pairs 2/6 (Indian Ocean), 3/7 (Western Pacific), 4/8 (Maritime Continent), and 5/1 (Western Hemisphere and Africa); annotations note Rossby wave dispersion ("fm trough"), nonlinearity between phases 3 and 7 (Blade et al. 2007), RWD into the east Pacific (Matthews and Kiladis 1999), and tilts implying sources/sinks (Weickmann et al. 1997).] Signals appear for index amplitude > 1, with 4-7 days between phases; the 3-6 day spacing is supported by the RMM1/RMM2 spectra of Wheeler and Hendon. The wave-energy arrow in phase 3 is supported by Matthews and Kiladis. There is a nonlinearity between phases 2 and 6 that may be explained by Blade et al. (2007), and there looks to be a nonlinearity over the southwest USA between phases 4 and 8 also.
33
Regression of RMM1 and 2 at initial time with week 2 verification, forecast or forecast error
[Figure: regression maps (vector scales 1 m/s and 3 m/s) for the week-2 verification and the week-2 forecast. Key: RMM1 > 0 corresponds to phases 4-5 (Indo), RMM1 < 0 to phases 8-1 (WH); RMM2 > 0 to phases 6-7 (wPac), RMM2 < 0 to phases 2-3 (IO).]