Rapid Adjustment of Forecast Trajectories: Improving short-term forecast skill through statistical post-processing Nina Schuhen, Thordis L. Thorarinsdottir.


Rapid Adjustment of Forecast Trajectories: Improving short-term forecast skill through statistical post-processing Nina Schuhen, Thordis L. Thorarinsdottir and Alex Lenkoski 11.04.2019 nina.schuhen@nr.no

Status Quo
Most forecasts are not revisited once issued
The newest forecast run is usually the best
RAFT: correct a forecast trajectory while it verifies

RAFT: Concept

Data Set: UK Met Office MOGREPS-UK
2.2 km grid spacing, 70 levels
36-hour forecasts, 4 times/day, 12 members
Interpolated to 150 observation sites in the UK and Ireland
Training period: Jan to Dec 2014
Evaluation period: Jan 2015 to Jul 2016
Variables: surface temperature and wind speed

Initial Calibration: EMOS / NGR
Raw ensemble output is calibrated using EMOS (also known as NGR)
Removes both deterministic and probabilistic biases
Performed after the ensemble run is completed, based on a rolling training period
16% reduction in CRPS
Here: adjust the EMOS mean, keep the calibrated EMOS spread
Gneiting, T., A. E. Raftery, A. H. Westveld, and T. Goldman, 2005: Calibrated Probabilistic Forecasting Using Ensemble Model Output Statistics and Minimum CRPS Estimation. Mon. Wea. Rev., 133, 1098–1118
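The EMOS step above fits a Gaussian predictive distribution N(a + b·x̄, c + d·S²) to the ensemble mean x̄ and variance S² by minimum-CRPS estimation (Gneiting et al. 2005). A minimal sketch on synthetic data; the toy data, parameter names, and optimizer choice are illustrative, not the operational setup:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def crps_normal(mu, sigma, y):
    # Closed-form CRPS for a Gaussian predictive distribution
    z = (y - mu) / sigma
    return sigma * (z * (2 * norm.cdf(z) - 1) + 2 * norm.pdf(z) - 1 / np.sqrt(np.pi))

def fit_emos(ens_mean, ens_var, obs):
    # EMOS/NGR: N(a + b*ens_mean, c + d*ens_var), parameters chosen by
    # minimising the average CRPS over a training set
    def objective(p):
        a, b, c, d = p
        mu = a + b * ens_mean
        sigma = np.sqrt(np.maximum(c + d * ens_var, 1e-6))
        return crps_normal(mu, sigma, obs).mean()
    return minimize(objective, x0=[0.0, 1.0, 1.0, 1.0], method="Nelder-Mead").x

# Synthetic training data: a 12-member ensemble with a +1.5 degC bias
rng = np.random.default_rng(42)
truth = 10 + 5 * np.sin(np.linspace(0, 6, 400))
ens = truth[:, None] + 1.5 + rng.normal(0.0, 1.0, (400, 12))
a, b, c, d = fit_emos(ens.mean(axis=1), ens.var(axis=1), truth)
# The deterministic bias is absorbed into the fitted a and b
```

In the talk's setting this calibration runs once the ensemble is complete; RAFT then modifies only the EMOS mean and leaves the calibrated spread untouched.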

Correlation of Forecast Errors
03 UTC run at Heathrow Airport; correlations significant at the 90% level
Correlation between lead times varies and depends strongly on the diurnal cycle
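The lead-time error correlations behind this slide can be estimated from an archive of past forecasts. A sketch with synthetic AR(1)-style errors (purely illustrative; in practice the rows would be past forecast runs and each entry obs minus forecast):

```python
import numpy as np

# Hypothetical error archive: err[run, lead] for lead times 1..36
rng = np.random.default_rng(0)
n_runs, n_lead, rho = 500, 36, 0.9
err = np.empty((n_runs, n_lead))
err[:, 0] = rng.normal(0.0, 1.0, n_runs)
for t in range(1, n_lead):
    # Errors decorrelate gradually with lead-time distance
    err[:, t] = rho * err[:, t - 1] + rng.normal(0.0, np.sqrt(1 - rho**2), n_runs)

# 36 x 36 correlation matrix between errors at different lead times
corr = np.corrcoef(err, rowvar=False)
# Neighbouring lead times are strongly correlated, distant ones much less so
```

A real archive would show the diurnal-cycle dependence mentioned on the slide, which this toy stationary process cannot reproduce.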

RAFT Framework: Error Correction
At any time, a new forecast trajectory f_1, ..., f_36 is issued for the next 36 hours
Observations y_s become available while this forecast trajectory is valid
Observed forecast error at lead time s: e_s = y_s - f_s
Correct the next forecast(s) f_t, t > s, using a function of this error

RAFT Framework: Estimating the Error
Estimate future errors by linear regression, using the previously observed errors e_s as predictors: ê_t = α_t + β_t e_s
Correct forecasts by adding the estimated error: f̂_t = f_t + ê_t
Adjustment period: find the earliest lead time where β_t is significantly different from 0
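The regression-and-adjust step can be sketched as follows. The error archive is synthetic and the function/variable names are illustrative, not from the operational implementation; the convention err = obs - forecast matches "adding the estimated error":

```python
import numpy as np

# Hypothetical archive of past forecast errors err[run, lead]
rng = np.random.default_rng(1)
n_runs, n_lead, rho = 400, 36, 0.9
err = np.empty((n_runs, n_lead))
err[:, 0] = rng.normal(0.0, 1.0, n_runs)
for t in range(1, n_lead):
    err[:, t] = rho * err[:, t - 1] + rng.normal(0.0, np.sqrt(1 - rho**2), n_runs)

def fit_error_regression(err, s, t):
    """Regress the error at a future lead time t on the error already
    observed at lead time s: e_t ~ alpha + beta * e_s."""
    X = np.column_stack([np.ones(len(err)), err[:, s]])
    (alpha, beta), *_ = np.linalg.lstsq(X, err[:, t], rcond=None)
    return alpha, beta

alpha, beta = fit_error_regression(err, s=5, t=8)

# At run time: the lead-5 observation arrives while the trajectory verifies,
# and the lead-8 forecast is corrected by adding the estimated error
observed_error = 1.2        # hypothetical e_5 = y_5 - f_5
raw_forecast = 14.0         # hypothetical f_8
adjusted = raw_forecast + alpha + beta * observed_error
```

The adjustment period on the next slide corresponds to the lead times t for which the fitted beta is significantly different from zero.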

Adjustment Period Length

Heathrow, 14.01.2016 03 UTC

Results for Surface Temperature

RMSE (All Sites): At Lead Time 1

RMSE (All Sites): At Lead Time 35

Rank and PIT Histograms
Coverage of the 84.62% prediction interval: 52.24%, 79.29%, 87.31%

Results for 10m Wind Speed (ongoing work)

RAFT for Wind Speed
EMOS: truncated Gaussian distribution
Adjust the mean of the truncated distribution
Length of the adjustment period is slightly "jumpy"
Wind profiles differ widely between stations
Thorarinsdottir, T. L. and T. Gneiting, 2010: Probabilistic Forecasts of Wind Speed: Ensemble Model Output Statistics by Using Heteroscedastic Censored Regression. J. Royal Stat. Soc. A, 173, 371-388
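For wind speed, the EMOS predictive distribution is a Gaussian truncated at zero (Thorarinsdottir and Gneiting 2010), and RAFT shifts its location parameter. A minimal sketch with illustrative numbers:

```python
import numpy as np
from scipy.stats import truncnorm

# Truncated-at-zero Gaussian predictive distribution for wind speed;
# mu, sigma and the error estimate below are illustrative values
mu, sigma = 4.0, 2.0
dist = truncnorm((0.0 - mu) / sigma, np.inf, loc=mu, scale=sigma)

estimated_error = -0.8                 # hypothetical RAFT error estimate
mu_adj = mu + estimated_error          # adjust the location, keep the spread
dist_adj = truncnorm((0.0 - mu_adj) / sigma, np.inf, loc=mu_adj, scale=sigma)

# Truncation at zero pulls the predictive mean above the location parameter,
# so shifting mu moves the predictive mean by slightly less than the shift
```

Note that because the truncation point stays at zero, adjusting mu also changes the shape of the distribution slightly, unlike the pure mean shift in the temperature case.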

RMSE (All Sites): At Lead Time 35

Conclusions 1
Strong correlation between errors at different lead times allows us to adjust the forecasts a few hours ahead
Temperature: predictability depends heavily on the time of day
Wind speed: more location-dependent, needs a local approach
The regression model reduces forecast error overall, with the largest gains for short-term corrections
RAFT forecasts from a previous model run should be preferred over the first forecasts from the current one

Conclusions 2
RAFT naturally applies to any weather forecasting setting; it only needs the correlation structure between lead times
We did not take into account the time it takes to run the NWP model, which makes older forecasts even more valuable
Higher observation frequency can bring additional benefits
Rapid ensemble update cycles: RAFT can also be used to level out differences in skill between lagged ensemble members

Outlook and Future Work
RAFT currently only corrects the mean forecast; the spread should be updated to reflect the larger deterministic skill
Can we spread parameters spatially to non-observation locations?
Take into account seasonal effects (more data needed)
Apply RAFT to seasonal temperature forecasts

Rapid Adjustment of Forecast Trajectories (RAFT)
Bias correction of a forecast trajectory while it verifies
Adjustment period defined by the correlation between forecast errors
Future error predicted with linear regression
Large reduction in RMSE a few hours ahead
Older forecasts can have more skill than the newest
RAFT/EMOS predictive distributions are still calibrated

RMSE (Heathrow): All Runs, Lead Time 35

RMSE and CRPS for Different Site Types