Observation-Based Ensemble Spread-Error Relationship


1 Observation-Based Ensemble Spread-Error Relationship
World Weather Open Science Conference, Montreal, Canada, 18 August 2014 (Mon)
Munehiko Yamaguchi (1,2), Simon Lang (1), Martin Leutbecher (1), Mark Rodwell (1), Gabor Radnoti (1), and Niels Bormann (1)
1: European Centre for Medium-Range Weather Forecasts; 2: Meteorological Research Institute, Japan Meteorological Agency

2 Introduction
Assessing the performance of numerical weather prediction (NWP), including ensemble prediction, is of great importance for understanding the skill of the prediction, identifying deficiencies and limitations, and further improving the NWP system (model dynamics, physics, data assimilation, etc.). The conventional approach is analysis-based evaluation: predictions are verified against their own analyses, i.e. the output of the data assimilation system used by the same NWP system that produced the predictions. However, verifying against own analyses often creates problematic situations in the interpretation of the verification results (Brawn et al. WGNE Final Report; Kanehama 2013 WGNE).

3 T850, tropics (20S-20N), FT=024, against own analysis
[Figure; courtesy of Mio Matsueda]

4 T850, tropics (20S-20N), FT=024, against ERA
[Figure; courtesy of Mio Matsueda]

5 Kanehama, T., 2014: Verification against multiple analyses, WGNE-29.

6 Problematic situations in the analysis-based verification
Park et al. (2008) demonstrated, using the TIGGE data set, that the advantage of verifying predictions against their own analyses can be large. Since the NWP model used for prediction also provides the so-called first-guess fields in the data assimilation scheme, the analysis and forecast fields are in the same model world (as opposed to the real world). If they share the same or similar systematic error patterns, the advantage of own-analysis-based verification can be rather large. Another troublesome situation associated with analysis-based verification arises when new observations are added to a data assimilation scheme. Because new observations can produce significant changes in the analysis fields, especially in data-sparse areas, relative to the changes in the forecast fields, the verification results may look worse when the forecasts are verified against analyses with the new observations assimilated (Bouttier and Kelly 2001). Such a questionable degradation of forecast performance is most distinct at short forecast ranges (Geer et al. 2010).

7 Motivation of this study
Ensemble forecasts are usually verified against their own analyses and a limited number of conventional observations (e.g. radiosondes, SYNOP); satellite observations are rarely used. In this study, ensemble forecasts are verified against satellite observations using a newly developed verification tool at ECMWF. Verifying ensemble forecasts against satellite observations provides additional perspectives on forecast performance, and may guide future developments and/or amendments of the ensemble forecasting system.

8 What we have done 1/2
1. We have developed a new verification tool called the “forecast obstat”, in which (deterministic) forecasts are verified against assimilated observations, including various types of satellite observations.
[Schematic of the forecast obstat (example of verifying a 3-day forecast)]
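As a rough sketch of the kind of computation such a tool performs, the Python below matches each observation to the nearest model grid point (a crude stand-in for the observation operators actually applied during assimilation) and forms forecast-minus-observation departures. All names and data here are illustrative, not the actual forecast obstat code.

```python
import numpy as np

def departures(fc_field, lats, lons, obs_lat, obs_lon, obs_val):
    """Forecast-minus-observation departures, matching each observation to
    the nearest model grid point (a crude stand-in for the real observation
    operator applied during assimilation)."""
    i = np.abs(lats[:, None] - obs_lat).argmin(axis=0)  # nearest latitude row per obs
    j = np.abs(lons[:, None] - obs_lon).argmin(axis=0)  # nearest longitude column per obs
    return fc_field[i, j] - obs_val                     # one departure per observation

# Synthetic example: a 0.25-degree grid and three fake T700 observations (K).
lats = np.arange(-90.0, 90.25, 0.25)
lons = np.arange(0.0, 360.0, 0.25)
forecast = 280.0 + np.random.randn(lats.size, lons.size)
print(departures(forecast, lats, lons,
                 obs_lat=np.array([10.0, -5.5, 48.25]),
                 obs_lon=np.array([120.0, 300.75, 2.5]),
                 obs_val=np.array([281.2, 279.8, 280.4])))
```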

9 What we have done 2/2
2. We modified the forecast obstat so that it can handle ensemble forecasts (hereafter referred to as the ensemble forecast obstat, EFO).
3. We modified the EFO so that it can run with and without tendency perturbations, i.e. the Stochastically Perturbed Parameterization Tendencies scheme (SPPT; Buizza et al. 1999, Palmer et al. 2009) and the Stochastic Kinetic Energy Backscatter scheme (SKEB; Berner et al. 2009, Palmer et al. 2009).
4. We then conducted a set of experiments (next slide).

10 Ensemble Forecast Obstat Experiments
Ensemble resolution: TL639L91
Ensemble size: 21 (20 perturbed members + 1 unperturbed member)
Two configurations of the ensemble forecasts: with tendency perturbations, without tendency perturbations
Three forecast lead times: 24 h (actually … h), 48 h (actually … h), 120 h (actually … h)
Set of start dates: … – … with a 4-day interval

11 Ensemble Spread-Error Relationship
The ensemble spread-error relationship is verified. The ensemble variance and the ensemble-mean forecast departure are calculated in grid-point space at a horizontal resolution of 0.25 x 0.25 degrees (roughly equivalent to TL639). When a grid box contains one observation, the ensemble variance and ensemble-mean departure at that observation point represent the grid-point value. When a grid box contains two or more observations, the ensemble variance and ensemble-mean departure are first calculated at each observation point and then simply averaged to give the grid-point value.
[Schematic: a 0.25 deg. x 0.25 deg. grid box]
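A minimal sketch of this grid-box aggregation, assuming the observations arrive as flat latitude/longitude/value arrays; the function name and box indexing are invented for illustration.

```python
import numpy as np

def gridbox_average(obs_lat, obs_lon, per_obs_value, res=0.25):
    """Average a per-observation quantity (ensemble variance or squared
    ensemble-mean departure) within res x res degree grid boxes.
    A box containing a single observation keeps that observation's value."""
    ibox = np.floor((obs_lat + 90.0) / res).astype(int)   # latitude box index
    jbox = np.floor((obs_lon % 360.0) / res).astype(int)  # longitude box index
    key = ibox * int(360 / res) + jbox                    # one integer id per box
    sums, counts = {}, {}
    for k, v in zip(key, per_obs_value):
        sums[k] = sums.get(k, 0.0) + float(v)
        counts[k] = counts.get(k, 0) + 1
    return {k: sums[k] / counts[k] for k in sums}         # simple average per box

# e.g. box_variance = gridbox_average(lat, lon, ensemble_variance_at_obs)
```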

12 Ensemble Spread-Error Relationship -2-
For a reliable ensemble, the expected squared departure of the ensemble mean from the observations equals the expected ensemble variance plus the observation-error variance:

ϵ² = E[(ȳ_ens − y_obs)²] = E[σ²_ens] + σ²_obs

The ensemble size can be accounted for in principle (for M members, the variance term carries a factor (M + 1)/M) but has been neglected in the results shown here.
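This relationship, including the neglected ensemble-size factor, is easy to check with synthetic data: draw a perfectly reliable ensemble (members and truth from the same distribution) plus noisy observations, then compare the mean squared ensemble-mean departure with the spread-based estimate. A self-contained sketch with illustrative numbers:

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 21, 200_000            # ensemble size and number of verification cases
sigma, sigma_obs = 1.0, 0.2   # true forecast uncertainty and obs error (K)

truth = rng.normal(0.0, sigma, N)             # "true" state for each case
obs = truth + rng.normal(0.0, sigma_obs, N)   # imperfect observations
ens = rng.normal(0.0, sigma, (M, N))          # reliable ensemble: same law as truth

err2 = (ens.mean(axis=0) - obs) ** 2          # squared ensemble-mean departure
spread2 = ens.var(axis=0, ddof=1)             # unbiased ensemble variance

print(err2.mean())                                # ~ (1 + 1/M) * sigma**2 + sigma_obs**2
print((1 + 1/M) * spread2.mean() + sigma_obs**2)  # matches with the size factor kept
print(spread2.mean() + sigma_obs**2)              # the approximation used on the slide
```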

13 Observations and the observation errors
Observation | Observation error | Note
Radiosonde T (700 hPa) | … K (value employed for the data assimilation at ECMWF) | constant over the globe
AMSU-A Ch5 (weighting function peaks at around 700 hPa) | 0.2 K (Bormann and Bauer, 2010) |
[Fig. 3 of Bormann and Bauer (2010, QJRMS)]
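For bookkeeping, the observation errors in the table might be held in a small lookup such as the one below; the structure and names are purely illustrative, and the radiosonde value is left unset because it does not appear above.

```python
# Observation-error standard deviations from the table above. The names are
# invented for illustration; the radiosonde value is missing in the source.
OBS_ERROR_K = {
    "radiosonde_t700": None,  # value employed at ECMWF (not recoverable here)
    "amsua_ch5": 0.2,         # Bormann and Bauer (2010)
}

def obs_error_variance(obs_type: str) -> float:
    """Return sigma_obs**2, the term added to the ensemble variance in the
    spread-error comparison."""
    sigma = OBS_ERROR_K[obs_type]
    if sigma is None:
        raise ValueError(f"observation error for {obs_type!r} is not specified")
    return sigma ** 2
```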

14 Evaluation of Spread-Error Relationship -NH T700-
[Figure: ensemble variance vs. ϵ² (K²) at FT = 24 and 120 hrs, against own analysis (conventional method) and against radiosonde observations; red: with tendency perturbations, blue: without tendency perturbations; annotations mark the over-dispersive and not over-dispersive sides.]

15 Evaluation of Spread-Error Relationship -NH 700-
[Figure: ensemble variance vs. ϵ² (K²) at FT = 24 and 120 hrs, against own analysis (conventional method) and against AMSU-A observations; red: with tendency perturbations, blue: without tendency perturbations; annotations mark the over-dispersive and not over-dispersive sides.]

16 Other Sensors: Example SSMIS
[IR image of Typhoon BOLAVEN, … UTC; channel sensitive to specific humidity at 850 hPa]
The assumption that the observation errors are constant over the globe may not be valid, because errors may become larger where active weather systems (e.g. tropical cyclones) are present. While it is unlikely that this is an issue for the temperature sounding from AMSU-A, it is more difficult to exclude the possibility for humidity-sounding channels such as those of MHS and SSMIS.
[Panels: squared error of the ensemble mean (T+24 h) and ensemble variance (T+24 h)]

17 Conclusions
Observation-based evaluations provide views of the spread-error relationship that differ from those of analysis-based evaluations. Relying only on verification against analyses may lead to wrong decisions in the future development of the ensemble system. The methodology used here is very general and permits the use of any assimilated observation for verification. It can be extended to include more metrics (e.g. probabilistic scores). Substantial further work will be required before this approach can be used routinely.

18 Supplementary slides


20 Ensemble Forecast Obstat Experiments
Set of start dates: … – … with a 4-day interval; with and without tendency perturbations; resolution TL639L91; 20 ensemble members
Child experiment IDs (cf and pf):
FT=24: fvn8, fvn5, fwrg, fwlu
FT=48: fvt2, fvt1, fx3h
FT=120: fvq3, fvpx, fwre


