The Importance of Reforecasts at CPC


The Importance of Reforecasts at CPC
NCEP Production Suite Review, 3-5 Dec. 2013
Mike Charles, Melissa Ou, Dan Collins, Emily Riddle
Climate Prediction Center

Reforecast-calibrated 6-10 day and 8-14 day Temp and Precip Forecasts

Week-2 Reforecast Tool Reliability (Temperature, Precipitation)
In the interest of time, I'll just show reliability. Reliability tells us how accurate the forecast probabilities are.
Takeaway for temp: the manual forecast is conservative; Reforecast 2 has the best reliability, even at very high probabilities; Reforecast 1 has good reliability for forecast probabilities up to 60%, then becomes overconfident.
Takeaway for precip: the manual forecast and Reforecast 2 have the best reliability for forecast probabilities up to 70%, but there are few to no cases above that.
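The reliability statistics discussed here compare binned forecast probabilities against the observed frequency of the event in each bin. A minimal sketch of that bookkeeping (the function name and the 10-bin choice are illustrative, not CPC's operational code):

```python
import numpy as np

def reliability_curve(probs, outcomes, n_bins=10):
    """Observed relative frequency of the event in each forecast-probability bin.

    probs:    forecast probabilities in [0, 1]
    outcomes: 1 if the event occurred, else 0
    """
    probs = np.asarray(probs, dtype=float)
    outcomes = np.asarray(outcomes, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    # np.digitize would put p == 1.0 in an extra bin; clip it back into the last one.
    bins = np.clip(np.digitize(probs, edges) - 1, 0, n_bins - 1)
    mean_prob = np.full(n_bins, np.nan)  # average forecast probability per bin
    obs_freq = np.full(n_bins, np.nan)   # observed event frequency per bin
    for b in range(n_bins):
        mask = bins == b
        if mask.any():
            mean_prob[b] = probs[mask].mean()
            obs_freq[b] = outcomes[mask].mean()
    return mean_prob, obs_freq
```

A forecast is reliable where the two returned arrays agree; points where observed frequency falls below the forecast probability correspond to the overconfidence noted above.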

Sensitivity of Skill to Reforecast Sampling
We did a study in which we sampled the reforecast dataset in different ways to test the sensitivity of the skill to factors such as the number of training years, model run frequency, and the number of ensemble members.

Sensitivity Study - RPSS
Comparison of # training years (6 members, 1 run/week): Temp - no significant difference between 26 and 18 years. Precip - each drop in the number of training years results in significantly lower skill.
Comparison of # ensemble members (26 years, 1 run/week): Temp - no significant difference between 11 and 6 members. Precip - no significant difference between 11 and 6 members.
Comparison of model run frequency (26 years, 11 members): Temp - no significant difference between any number of days per week. Precip - no significant difference between 7 and 2 days per week.
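The RPSS used in these comparisons scores the tercile probabilities against climatology via cumulative distributions. A minimal sketch, assuming equal-odds (1/3, 1/3, 1/3) climatology; the function names and array shapes are illustrative:

```python
import numpy as np

def rps(fcst_probs, obs_cat):
    """Mean ranked probability score for categorical (e.g. tercile) forecasts.

    fcst_probs: (n_fcst, n_cat) forecast probabilities summing to 1 per row
    obs_cat:    (n_fcst,) index of the observed category
    """
    fcst_probs = np.asarray(fcst_probs, dtype=float)
    obs = np.zeros_like(fcst_probs)
    obs[np.arange(len(obs_cat)), obs_cat] = 1.0
    # RPS compares cumulative forecast and observed distributions.
    cum_fcst = np.cumsum(fcst_probs, axis=1)
    cum_obs = np.cumsum(obs, axis=1)
    return ((cum_fcst - cum_obs) ** 2).sum(axis=1).mean()

def rpss(fcst_probs, obs_cat, clim_probs=(1 / 3, 1 / 3, 1 / 3)):
    """Skill score vs. climatology: 1 is perfect, 0 matches climatology."""
    clim = np.tile(clim_probs, (len(obs_cat), 1))
    return 1.0 - rps(fcst_probs, obs_cat) / rps(clim, obs_cat)
```

The "no significant difference" statements on this slide refer to RPSS values like these, compared across sampling configurations.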

Sensitivity Study - Reliability - Temp
Comparison of # training years (6 members, 1 run/week): little difference in reliability between 26- and 18-year training periods; noticeable drop in reliability for a 10-year training period (model overconfident).
Comparison of # ensemble members (26 years, 1 run/week): the model becomes more underconfident with decreasing members, but reliability is still fairly good with 6 members.
Comparison of model run frequency (26 years, 11 members): negligible difference in reliability when decreasing runs per week; slightly more overconfident at extreme probabilities for 1 run per week.

Conclusions
For temp and precip, our reforecast-calibrated tool has much better reliability than the raw GEFS and any other post-processing technique at CPC.
Week-2 temp and precip skill is most sensitive to the number of training years in the reforecast dataset, and least sensitive to model run frequency.
Our proposed optimal reforecast configuration: at least 20 years of reforecasts (preferably 30), 5 ensemble members, and 2 runs per week.
This proposed 30-year configuration would cost approximately 52% of the computing of the real-time ensemble forecasts, and 16% of the cost of producing the previous reforecast dataset; using 20 years would be 10% of that cost.
An untested (by CPC) frequency of one run per 5 days might compare in skill to twice a week, at 70% of the cost (3.5 days / 5 days).
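The cost fractions above follow from a simple linear scaling. A sketch of that arithmetic, under the assumption that computing cost scales with training years × runs per week × ensemble members (an illustrative model, not CPC's actual accounting):

```python
def relative_cost(years, runs_per_week, members,
                  ref_years, ref_runs_per_week, ref_members):
    """Cost of one reforecast configuration relative to a reference one,
    assuming cost scales linearly with years * runs/week * members."""
    return (years * runs_per_week * members) / (
        ref_years * ref_runs_per_week * ref_members)
```

For example, dropping from 30 to 20 training years with members and run frequency fixed cuts the cost to 2/3, roughly consistent with the 16% vs. 10% figures above; and running once per 5 days instead of every 3.5 days is a frequency ratio of 3.5/5 = 0.7, the quoted 70%.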

Questions?

Thanks to Melissa Ou and Dan Collins for providing much of the content of this presentation.

Outline
Reforecast-calibrated 6-10 day and 8-14 day temp and precip tool
Sensitivity of skill to reforecast sampling
In this talk we will describe CPC's use of reforecasts for the week-2 forecast, give an overview of CPC's reforecast calibration project, and show an evaluation of CPC's new reforecast-calibrated forecast tool.

CPC's Week-2 Forecast (Temperature, Precipitation)
Here's an example of CPC's week-2 probabilistic temperature and precipitation forecasts.

Reforecast-Calibrated Forecast (Temperature, Precipitation)
How well does the Reforecast-2-calibrated forecast do?

Reforecast Tool Skill - Temp - RPSS
Showing one cold season for temp because of a cold bias, probably a bug in GEFS soil moisture.
Takeaway for temp: Reforecast 2 is a huge improvement over the manual forecast and all other tools.

Reforecast Tool Skill - Precip - RPSS
Takeaway for precip: Reforecast 2 shows improvement over Reforecast 1 and is competitive with the manual forecast.

Reforecast Tool Skill - Temp - Reliability
Reliability tells us how accurate the forecast probabilities are.
Takeaway for temp: the manual forecast is conservative; Reforecast 2 has the best reliability, even at very high probabilities; Reforecast 1 has good reliability for forecast probabilities up to 60%, then becomes overconfident.

Reforecast Tool Skill - Precip - Reliability
Takeaway for precip: the manual forecast and Reforecast 2 have the best reliability for forecast probabilities up to 70%, but there are few to no cases above that.

Sensitivity Study Goals
Determine the impact of changing the sampling of reforecasts on the skill of real-time week-2 (days 8-14) calibrated temperature and precipitation forecasts.
Evaluate the skill of different reforecast sampling cases using various skill scores.
Find optimal reforecast sampling case(s) that maximize forecast scores while reducing the resources needed to produce a reforecast dataset.
The goals for this project were to: (1) see how the skill of the week-2 temperature and precip reforecast tool forecasts is affected by changing the sample of reforecasts used for calibration; (2) evaluate the skill of different reforecast sampling cases using various skill scores, for a more complete picture of the impact on skill; and (3) find optimal sampling cases that maximize scores while reducing the resources required.

Sensitivity Study - Reliability - Precip
Comparison of # training years (6 members, 1 run/week): the model becomes more overconfident with decreasing training years.
Comparison of # ensemble members (26 years, 1 run/week): the 3-member ensemble has the best reliability through 80% probabilities; except for a single-member ensemble, the model becomes more overconfident with increasing members.
Comparison of model run frequency (26 years, 11 members): negligible difference in reliability when decreasing to 2 runs per week; more overconfident for 1 run per week.

Week-2 GEFS Reforecast Mean Temp Bias
DEC-JAN ensemble mean bias; JUL-AUG ensemble mean bias.
Bias is the week-2 mean minus the accumulated day-0 mean for the same week; the weekly means were averaged over the 2-month periods shown.
Bias is a large fraction of the variability, so it is important to correct raw forecasts, and the correction must capture the seasonally varying bias.
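The bias computation described above can be sketched as follows, assuming the week-2 mean is a 7-day average over forecast days 8-14 and that the day-0 fields play the role of the verifying values (the function name and array layout are illustrative):

```python
import numpy as np

def week2_mean_bias(fcst_daily, verif_daily):
    """Mean week-2 bias at one grid point.

    fcst_daily:  (n_cases, 7) forecast daily values for days 8-14 of each case
    verif_daily: (n_cases, 7) verifying daily values for the same week
    Returns the forecast weekly mean minus the verifying weekly mean,
    averaged over all reforecast cases.
    """
    fcst_week = np.asarray(fcst_daily, dtype=float).mean(axis=1)
    verif_week = np.asarray(verif_daily, dtype=float).mean(axis=1)
    return (fcst_week - verif_week).mean()
```

Computing this separately for DEC-JAN and JUL-AUG cases yields the seasonally varying bias the slide emphasizes.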

Standardized Linear Trend
25-year (1985-2010) linear temperature trend, standardized by observed weekly variability (DEC-JAN and JUL-AUG shown).
Over the U.S. the trend exceeds 0.2 standard deviations of weekly temperature variability, and in many areas of the globe it exceeds 0.5 standard deviations.
The near-normal tercile is about 0.8 standard deviations wide, and the trend shifts weekly temperature anomalies by more than 2 tercile categories in some areas.
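A standardized trend of this kind can be sketched as below. The slide does not specify the exact fitting or standardization details, so the least-squares fit and the sample standard deviation used here are assumptions:

```python
import numpy as np

def standardized_trend(weekly_values, years):
    """Total linear change over the record, in units of the standard
    deviation of the weekly values.

    weekly_values: one value per year (e.g. a given week's mean temperature)
    years:         corresponding years, same length
    """
    # Least-squares linear fit; polyfit returns [slope, intercept].
    slope, _ = np.polyfit(years, weekly_values, 1)
    total_change = slope * (years[-1] - years[0])  # change over the full record
    return total_change / np.std(weekly_values, ddof=1)
```

Values above 0.2-0.5 from a function like this correspond to the shading described on the slide, measured against the roughly 0.8-standard-deviation width of the near-normal tercile.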

Model bias is changing with the changing background climate state
DEC-JAN standardized trend of mean bias; JUL-AUG standardized trend of mean bias.
We have not yet tested using the trend in bias as part of model calibration. Trends in the model bias may be systematic and need to be corrected for; a longer reforecast would be needed to determine those trends.