Xiefei Zhi, Yongqing Bai, Chunze Lin, Haixia Qi, Wen Chen


Multimodel Superensemble Forecasts of Surface Temperature in the Northern Hemisphere
Xiefei Zhi, Yongqing Bai, Chunze Lin, Haixia Qi, Wen Chen
Nanjing University of Information Science & Technology, Nanjing, China, 210044
Monterey, CA, September 2009
The topic of this talk is multimodel superensemble forecasts of surface temperature in the Northern Hemisphere. The work was conducted by Prof. Xiefei Zhi and his team.

Outline: Introduction; Data and methods; Error evaluation; Multimodel superensemble forecast; Improved superensemble forecast; Summary. First I will explain why we focus on this study, then describe the datasets and methods. After that I will present some results, including the evaluation of the single models and the improvement achieved by the multimodel superensemble. Finally, I will give a brief summary.

Introduction. Krishnamurti, T. N. et al. (1999) in Science; Krishnamurti, T. N. et al. (2000) in J. Climate; Krishnamurti, T. N. et al. (2001) in Mon. Wea. Rev. These papers proposed the multimodel superensemble forecast method for weather and seasonal climate and compared the skill of the multimodel forecasts with that of the individual models, the ensemble mean, and the individually bias-removed ensemble mean. In 1999 Krishnamurti and his colleagues published a paper in Science proposing the multimodel superensemble method; they later published a series of papers discussing its advantages.

Introduction. They find that the multimodel superensemble forecasts outperform all the individual models: the skill of the superensemble-based rain rates is higher than (a) the individual models' skills, (b) the skill of the ensemble mean, and (c) the skill of the ensemble mean of individually bias-removed models.

The work includes: error evaluation of the ensemble mean forecasts of Models 1 through n; superensemble forecasting; and evaluation of the forecast skill of a new idea. In this study we first evaluate the forecast error of each model, and then conduct superensemble forecasting with comparisons to test the improvement in forecast skill. Although the superensemble with a fixed training period greatly improves the surface temperature forecast, some problems remain, so we propose a new idea: a running training period.

Data and Methods. Data: 1) Ensemble forecasts of 2-m temperature from ECMWF, JMA, NCEP and UKMO, provided by the TIGGE archives. Period: 1 June 2007 to 31 August 2007. Area: 10°-80°N, 0°-357.5°E, with a resolution of 1.25°×1.25°. Forecast range: 24 h to 168 h at 24-h intervals. 2) NCEP/NCAR reanalyses, used as "observational data". Period: 1 June 2007 to 7 September 2007. Area: same as dataset 1), with a resolution of 2.5°×2.5°. The datasets thus divide into two parts: the ensemble forecasts of surface temperature from the four models, and the NCEP/NCAR reanalyses used as observations.

Methods. The superensemble forecast is

S_t = \bar{O} + \sum_{i=1}^{n} a_i (F_{i,t} - \bar{F}_i),   (1.1)

the bias-removed ensemble mean is

E_t = \bar{O} + \frac{1}{n} \sum_{i=1}^{n} (F_{i,t} - \bar{F}_i),

and the root mean square error is

\mathrm{RMSE} = \sqrt{ \frac{1}{N} \sum_{t=1}^{N} (F_t - O_t)^2 },

where S_t is the superensemble prediction, \bar{O} is the time mean of the observed state, a_i is the weight for model i, i is the model index, n is the number of models, \bar{F}_i is the time mean of the predictions of model i, and F_{i,t} is the prediction of model i at time t. Details can be found in Krishnamurti's papers.
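As a minimal sketch of these definitions (hypothetical helper names, assuming NumPy arrays with models along the first axis and time along the second):

```python
import numpy as np

def bias_removed_ensemble_mean(obs_train, fcst_train, fcst_new):
    """Bias-removed ensemble mean: add the mean of the models' anomalies
    (relative to their own training-period means) to the observed
    training-period mean.
    obs_train: (T,) observations; fcst_train: (n, T); fcst_new: (n,)."""
    o_bar = obs_train.mean()                 # time mean of observations
    f_bar = fcst_train.mean(axis=1)          # per-model training mean
    return o_bar + np.mean(fcst_new - f_bar)

def rmse(forecast, observed):
    """Root mean square error over the verification sample."""
    return np.sqrt(np.mean((forecast - observed) ** 2))
```

This mirrors the formulas above; the superensemble itself differs only in replacing the equal weights 1/n by fitted weights a_i.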

Methods. Creation of a superensemble forecast at a given grid point: the weights a_i are computed at each grid point by minimizing the function G = \sum_{t=1}^{T} (S_t - O_t)^2 in (1.5) over the training period. In the multimodel superensemble forecast, several model outputs are combined with appropriate weights to obtain a combined estimate of the meteorological parameters. The weights are calculated by squared-error minimization over a so-called training period; during the forecast period, the superensemble forecast is created by substituting the weights a_i into (1.1).
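The minimization of G is an ordinary least-squares problem in the model anomalies. A sketch for a single grid point (hypothetical function names; NumPy's least-squares solver stands in for whatever solver the study used):

```python
import numpy as np

def superensemble_weights(obs_train, fcst_train):
    """Weights a_i minimizing G = sum_t (S_t - O_t)^2, where
    S_t = O_bar + sum_i a_i * (F_{i,t} - Fbar_i).
    obs_train: (T,) observed series; fcst_train: (n, T) model forecasts."""
    o_anom = obs_train - obs_train.mean()
    f_anom = fcst_train - fcst_train.mean(axis=1, keepdims=True)
    # least-squares solution of f_anom.T @ a ~= o_anom
    a, *_ = np.linalg.lstsq(f_anom.T, o_anom, rcond=None)
    return a

def superensemble_forecast(obs_train, fcst_train, fcst_new):
    """Apply the trained weights to a new multimodel forecast (n,)."""
    a = superensemble_weights(obs_train, fcst_train)
    f_bar = fcst_train.mean(axis=1)
    return obs_train.mean() + a @ (fcst_new - f_bar)
```

Training fits the weights once per grid point and lead time; forecasting just plugs them into (1.1).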

Error evaluation: forecast errors of the ensemble mean forecasts of each model. First we evaluate the forecast errors of the ensemble mean forecasts of each model. We find that for the 24-48 h forecasts JMA is the best single model, while for the 72-168 h forecasts ECMWF is the best.

Error evaluation. This figure gives the mean RMSEs of the surface temperature in China, the USA and Europe for (a) ECMWF, (b) JMA, (c) NCEP and (d) UKMO (unit: °C). Clearly, the mean RMSEs are largest in China and smallest in the USA, perhaps because the models do not perform well over the Tibetan Plateau.

Error evaluation: comparisons among the four models. Geographical distribution of the RMSEs of the 24 h forecast for ECMWF, JMA, NCEP and UKMO. Different models perform differently in different areas. Although JMA has the smallest mean RMSE for the 24 h forecast, it has relatively large errors in East China, where ECMWF and NCEP perform very well. It can also be noticed that the RMSEs are large over the Tibetan Plateau for all models. It is therefore necessary to combine the outputs of the different models to improve the forecast skill.

Multimodel superensemble forecast. Mean RMSEs of the surface temperature forecast with a fixed training period for (a) 24 h, (b) 48 h, (c) 72 h, (d) 96 h, (e) 120 h, (f) 144 h and (g) 168 h. At this stage we conducted the multimodel superensemble using linear regression and neural networks. Both give a considerable improvement in forecast skill over the best single model and the multimodel ensemble mean (EMN) for the 24 h to 72 h forecasts, and the superensemble using neural networks (NNSUP) is somewhat more skilful than the one using linear regression. The most interesting finding is that the RMSEs of the 96 h to 168 h superensemble forecasts increase rapidly, exceeding those of the ensemble mean and even of the best single model (ECMWF) during the last week of the forecast period. Why do the RMSEs increase so rapidly? We suspect that the usefulness of the weights decreases with the length of the forecast period. How to deal with this problem is our next step.

Improved superensemble forecast with running training periods. To overcome the shortcoming of the superensemble with a fixed training period, a running training period of about two months was applied to the multimodel superensemble. In the figure, the red line shows the performance of R-NNSUP and the black line R-LRSUP; the orange and blue lines are the superensembles with a fixed training period, and the purple line is the multimodel ensemble mean. Unlike the superensemble with a fixed training period, the RMSEs of the 96-168 h forecasts of the superensemble with a running training period do not increase during the last week of the forecast period. Both the linear-regression (R-LRSUP) and neural-network (R-NNSUP) superensembles with running training periods have higher forecast skill than those with a fixed training period and than the multimodel ensemble mean (EMN) for the 24-168 h forecasts. The running-training-period superensembles also outperform the bias-removed ensemble mean for the 24-168 h forecasts, and the neural-network version is slightly more skilful than the linear-regression one. RMSEs of the improved surface temperature forecast for (a) 24 h, (b) 48 h, (c) 72 h, (d) 96 h, (e) 120 h, (f) 144 h and (g) 168 h (unit: °C).
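The running-training-period scheme described above can be sketched as a sliding window that refits the weights before every forecast day (hypothetical helper, reusing the least-squares fit from the Methods; the window length of roughly two months is the study's choice):

```python
import numpy as np

def running_superensemble(obs, fcsts, train_len=60):
    """Superensemble with a running training window: for each forecast
    day t, refit the weights on the most recent `train_len` days only.
    obs: (T,) observed series; fcsts: (n, T) model forecasts.
    Returns predictions for days t >= train_len."""
    preds = []
    for t in range(train_len, obs.shape[0]):
        o_tr = obs[t - train_len:t]               # training observations
        f_tr = fcsts[:, t - train_len:t]          # training forecasts
        o_bar, f_bar = o_tr.mean(), f_tr.mean(axis=1)
        f_anom = f_tr - f_bar[:, None]
        a, *_ = np.linalg.lstsq(f_anom.T, o_tr - o_bar, rcond=None)
        preds.append(o_bar + a @ (fcsts[:, t] - f_bar))
    return np.array(preds)
```

Refitting daily lets the weights track recent model behaviour, which is the plausible reason the long-lead RMSEs no longer blow up.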

Improved superensemble forecast with running training periods. Left: mean RMSEs (°C) of the 24-168 h surface temperature forecasts for ECMWF, JMA, NCEP, UKMO, EMN, LRSUP, NNSUP, R-LRSUP and R-NNSUP. Right: percentage improvement of the EMN, LRSUP, NNSUP, R-LRSUP and R-NNSUP over the best model. These two figures show the improvement of each method over the best model. It is clear that R-LRSUP and R-NNSUP perform better than the best model (ECMWF), the EMN, LRSUP and NNSUP.
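The percentage improvement plotted here is presumably the relative RMSE reduction against the best single model; a trivial sketch (hypothetical function name):

```python
def pct_improvement(rmse_method, rmse_best_model):
    """Percentage RMSE improvement of a method over the best single model.
    Positive means the method beats the best model."""
    return 100.0 * (rmse_best_model - rmse_method) / rmse_best_model
```

For example, reducing the RMSE from 2.0 °C to 1.5 °C is a 25% improvement.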

Improved superensemble forecast with running training periods. Geographical distribution of the RMSEs of the 24 h and 120 h forecasts from the best model, EMN, R-LRSUP and R-NNSUP. For both the 24 h and the 120 h forecasts, R-LRSUP and R-NNSUP give a considerable reduction in RMSE, especially over East Asia, in comparison with the best single model and the EMN.

Optimal length of the training period. Averages over (a) 10°-80°N, (b) 10°-30°N, (c) 30°-60°N and (d) 60°-80°N, all for 0°-357.5°E. Is two months an appropriate length for the training period? The RMSEs of the superensemble forecasts with a running training period decrease rapidly as the training length increases from 20 to 30 days, and decrease steadily from 30 to 70 days. In the tropics, however, the RMSEs barely vary with training lengths between 30 and 70 days, while the variation of the forecast RMSEs with training length in the extratropics is similar to that in the Northern Hemisphere as a whole. The optimal training length differs with forecast lead time: in the tropics it is about one month for the 24-168 h forecasts, whereas in the extratropics it is about one month for the 24-72 h forecasts and about two months for the 96-168 h forecasts. Our experiments cover only the summer season; other seasons might differ. The figure shows the mean RMSEs of the surface temperature forecasts versus the length of the running training period.
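The curves in this figure amount to scanning candidate window lengths and verifying each resulting superensemble; a sketch of that scan (hypothetical helper, reusing the running-window least-squares scheme described earlier):

```python
import numpy as np

def rmse_vs_training_length(obs, fcsts, lengths=range(20, 71, 10)):
    """For each candidate running-window length L, run the superensemble
    with that window and report its verification RMSE.
    obs: (T,) observed series; fcsts: (n, T) model forecasts."""
    out = {}
    for L in lengths:
        preds, truth = [], []
        for t in range(L, obs.shape[0]):
            o_tr, f_tr = obs[t - L:t], fcsts[:, t - L:t]
            o_bar, f_bar = o_tr.mean(), f_tr.mean(axis=1)
            a, *_ = np.linalg.lstsq((f_tr - f_bar[:, None]).T,
                                    o_tr - o_bar, rcond=None)
            preds.append(o_bar + a @ (fcsts[:, t] - f_bar))
            truth.append(obs[t])
        err = np.array(preds) - np.array(truth)
        out[L] = float(np.sqrt(np.mean(err ** 2)))
    return out
```

Picking the L with the lowest verification RMSE, per region and lead time, is how the one-month/two-month recommendations above could be derived.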

Summary. The superensemble with a fixed training period clearly improves the 24-72 h temperature forecasts, reducing the RMSE relative to the best single model forecast and the multimodel ensemble mean. The superensemble with a running training period further improves the 96-168 h temperature forecasts, again with an RMSE reduction over the best single model and the multimodel ensemble mean. The optimal training length differs with forecast lead time: for the multimodel superensemble with a running training period it is about one month for the 24-168 h forecasts in the tropics, but about one month for the 24-72 h forecasts and about two months for the 96-168 h forecasts in the extratropics.

Thank you! Wen Chen, Monterey, CA, September 2009