Huug van den Dool / Dave Unger Consolidation of Multi-Method Seasonal Forecasts at CPC. Part I.

LAST YEAR / THIS YEAR
Last year: ridge regression as a consolidation method, to yield, potentially, non-equal weights.
This year: see the new posters on consolidation by Malaquias Pena (methodological twists and an SST application) and Peitao Peng (application to US T&P).
Last year: conversion to a PDF as per the "kernel method" (Dave Unger).
This year: a time series approach is next.

Does the NCEP CFS add to the skill of the European DEMETER-3 to produce a viable International Multi-Model Ensemble (IMME)? "Much depends on which question we ask." Input by Suranjana Saha and Ake Johansson is acknowledged.

DATA AND DEFINITIONS USED
DEMETER-3 (DEM3) = ECMWF + METFR + UKMO
CFS
IMME = DEM3 + CFS
Period: 1981-2001 (21 years)
Initial condition months: Feb, May, Aug and Nov
Leads 1-5
Monthly means

DATA / DEFINITIONS USED (cont.)
Deterministic: anomaly correlation
Probabilistic: Brier Score (BS) and Ranked Probability Score (RPS)
Ensemble mean and PDF
T2m and Prate
Europe and United States
Verification data: T2m: Fan and Van den Dool; Prate: CMAP

BRIER SCORE FOR THE 3-CLASS SYSTEM
1. Calculate tercile boundaries from observations at each gridpoint (the period shifts slightly for longer leads).
2. Assign departures from the model's own climatology (based on 21 years, all members) to one of the three classes, Below (B), Normal (N) and Above (A), and find the fraction of forecasts (F) among all participating ensemble members for these classes, denoted FB, FN and FA respectively, such that FB + FN + FA = 1.
3. Denoting observations as O, calculate the Brier Score (BS) as BS = {(FB - OB)^2 + (FN - ON)^2 + (FA - OA)^2} / 3, aggregated over all years and all grid points. (For example, when the observation is in the B class, we have (1, 0, 0) for (OB, ON, OA), etc.)
4. BS for a random deterministic prediction: 0.444. BS for 'always climatology' (1/3, 1/3, 1/3): 0.222.
RPS: the same as the Brier Score, but for the cumulative distribution (no skill = 0.148).
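A minimal sketch of this 3-class Brier Score (NumPy assumed; the member values and tercile bounds are illustrative, not from the hindcasts):

```python
import numpy as np

def class_fractions(members, lower, upper):
    """Fractions FB, FN, FA of ensemble members falling Below,
    Normal, Above the tercile boundaries (FB + FN + FA = 1)."""
    members = np.asarray(members, dtype=float)
    fb = np.mean(members < lower)
    fa = np.mean(members > upper)
    return np.array([fb, 1.0 - fb - fa, fa])

def brier_score_3class(fractions, obs_class):
    """BS = {(FB-OB)^2 + (FN-ON)^2 + (FA-OA)^2} / 3, with the
    observation one-hot encoded, e.g. (1, 0, 0) when it falls Below."""
    obs = np.zeros(3)
    obs[obs_class] = 1.0
    return np.sum((fractions - obs) ** 2) / 3.0

# Sanity checks against the reference values above:
clim = np.array([1/3, 1/3, 1/3])
print(brier_score_3class(clim, 0))                 # 0.222, 'always climatology'
print(brier_score_3class(np.array([1, 0, 0]), 1))  # 2/3 for one wrong call;
                                                   # 0.444 on average for
                                                   # random deterministic calls
```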

"The bottom line": number of times IMME improves upon DEM3, out of 20 cases (4 ICs x 5 leads). NO consolidation, equal weights, NO cross-validation.

Region                 EUROPE          USA
Variable               T2m    Prate    T2m    Prate
Anomaly Correlation     9      14       -      -
Brier Score             -      -        -      -
RPS                     -      -        -      -

Anomaly correlation, US T2m, February, lead 3 (November start):

Method      CFS alone   IMME   CON4
No CV          29%       33%    38%
CV-3R          20%       18%    21%

Aspect to be CV-ed: the systematic error in the mean (CFS alone, IMME); the systematic error in the mean and the weights (CON4).

Cross Validation (CV)
Why do we need CV? Which aspects are CV-ed:
a) the systematic error correction (i: the mean, and ii: the standard deviation), and
b) the weights generated by consolidation.
How? CV-1, CV-3, CV-3R. Don't use CV-1! CV-1 malfunctions for systematic error correction in combination with an internal* climatology, and suffers from degeneracy when the weights generated by consolidation are to be CV-ed.
*Define internal and external climatology.
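A sketch of one CV-3R fold (NumPy assumed; reading the "R" as choosing the 2 extra withheld years at random is our interpretation of the slide's wording):

```python
import numpy as np

def cv3r_fold(n_years, target, rng):
    """Withhold the target year plus 2 other years chosen at random;
    fit on the remaining years, score the target year only."""
    others = np.setdiff1d(np.arange(n_years), [target])
    extra = rng.choice(others, size=2, replace=False)
    left_out = np.sort(np.concatenate(([target], extra)))
    kept = np.setdiff1d(np.arange(n_years), left_out)
    return kept, left_out

rng = np.random.default_rng(0)
for year in range(21):              # 21-year hindcast
    kept, out = cv3r_fold(21, year, rng)
    # estimate the systematic-error correction and the consolidation
    # weights from `kept`, then verify year `year` alone
```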

[Figure from last year: results wrt OIv climatology.]

[Figure: consolidation weights w1-w4 for November, lead 3; each CV iteration leaves out the "check" year (and 2 others).] The Feb forecast has high co-linearity. Model 2 has high negative weights for unconstrained regression.

Overriding conclusion
- With only 20+ years of hindcasts it is hard for any consolidation to be much better than the equal-weight MME. (Give us 5000 years.)
- 'Pooling' data helps stabilize the weights and increases skill, but is it enough?
- 20+ years is a problem even for CV-ed systematic error correction.

Further points of study
- The nature of climatology (= the control in verification): external, internal, fixed.
- The cross-validation method is not settled.
- The many details of consolidation as per ridge regression.
- Conversion to a PDF can be done in very many different ways (including 3-class BS minimization, logistic regression, the 'count method', and kernels).
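For reference, a minimal sketch of ridge-regression consolidation (NumPy assumed; plain ridging toward zero is shown, whereas an operational variant may ridge toward equal weights):

```python
import numpy as np

def ridge_weights(X, y, lam):
    """Weights w minimizing ||y - X w||^2 + lam*||w||^2, where X holds
    the methods' hindcast anomalies (years x methods) and y the observed
    anomalies; lam = 0 recovers unconstrained regression."""
    k = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(k), X.T @ y)

# Illustrative co-linear 'models': with lam = 0 the weights can go
# wildly negative; increasing lam stabilizes them.
rng = np.random.default_rng(0)
obs = rng.standard_normal(21)
X = obs[:, None] + 0.8 * rng.standard_normal((21, 4))
print(ridge_weights(X, obs, lam=0.0))
print(ridge_weights(X, obs, lam=5.0))
```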

Forecast Consolidation at CPC - Part 2: Ensembles to Probabilities
David Unger / Huug van den Dool
Acknowledgements: Dan Collins, Malaquias Pena, Peitao Peng

Objectives
- Produce a single probabilistic forecast from many tools:
  - single-value estimates
  - ensemble sets
- Utilize individual ensemble members:
  - assume individual forecasts represent possible realizations
  - we want more than just the ensemble mean
- Provide standardized probabilistic output: more than just a 3-class forecast

Kernel Characteristics [figure]

Kernel Characteristics - Unequal Weighting [figure]
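A sketch of how such a kernel PDF can be formed (NumPy assumed; the Gaussian kernel shape and the width sigma_k are illustrative choices, not necessarily the operational ones):

```python
import numpy as np

def kernel_pdf(grid, members, weights, sigma_k):
    """Forecast PDF: one Gaussian kernel per ensemble member, each
    carrying its weight (weights sum to 1), all with width sigma_k."""
    x = np.asarray(grid)[:, None]
    m = np.asarray(members)[None, :]
    w = np.asarray(weights)[None, :]
    k = np.exp(-0.5 * ((x - m) / sigma_k) ** 2) / (sigma_k * np.sqrt(2 * np.pi))
    return np.sum(w * k, axis=1)

grid = np.linspace(-4.0, 4.0, 201)
members = np.array([-0.5, 0.1, 0.3, 0.9])
pdf_equal = kernel_pdf(grid, members, np.full(4, 0.25), sigma_k=0.6)
pdf_unequal = kernel_pdf(grid, members, np.array([0.1, 0.2, 0.3, 0.4]), 0.6)
```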

Ensemble Regression
A regression model designed for the kernel smoothing methodology:
- Each member is equally likely to occur.
- Each has the same conditional error distribution in the event it is closest to the truth.
Notation:
F = forecast; σ_F = forecast standard deviation
Obs = observations; σ_Obs = standard deviation of the observations
R = correlation between individual ensemble members and the observations
R_m = correlation between the ensemble mean and the observations
a_1, a_0 = regression coefficients in F̂ = a_0 + a_1·F
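A least-squares sketch of the member-level regression F̂ = a_0 + a_1·F (NumPy assumed; Unger's operational coefficients additionally involve R and R_m, so this pooled OLS fit is only an approximation):

```python
import numpy as np

def member_regression(F_hist, obs_hist):
    """Fit Obs ~ a0 + a1*F over the hindcast years, pooling all members
    (each treated as an equally likely forecast of the same truth).
    F_hist: (years, members); obs_hist: (years,)."""
    F = np.ravel(F_hist)                       # members pooled year by year
    o = np.repeat(obs_hist, F_hist.shape[1])   # each obs repeated per member
    a1, a0 = np.polyfit(F, o, 1)
    return a0, a1

# Each regression-corrected member, a0 + a1*F_new, then anchors
# one kernel of the forecast PDF.
```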

Time series estimation
Moving average: let X_11 be the 10-year running mean known in year 11; N = 10.
X_11 = (x_1 + x_2 + x_3 + x_4 + x_5 + x_6 + x_7 + x_8 + x_9 + x_10) / N
X_12 = X_11 + (x_11 - x_1) / N
X_{Y+1} = X_Y + (x_Y - x_{Y-N}) / N
Exponential moving average (EMA), α = 1/N:
X_12 = X_11 + α (x_11 - X_11)
X_{Y+1} = (1 - α) X_Y + α x_Y
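The EMA recursion in code (a trivial sketch, but it is the working heart of the adaptive scheme; α = 1/N):

```python
def ema_update(X_prev, x_new, alpha=0.1):
    """X_{Y+1} = (1 - alpha)*X_Y + alpha*x_Y; with alpha = 1/10 this
    approximates a 10-year running mean without storing 10 years."""
    return (1.0 - alpha) * X_prev + alpha * x_new

X = 0.0
for x in [0.3, -0.1, 0.5, 0.2]:   # illustrative yearly values
    X = ema_update(X, x)
```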

Adaptive Ensemble Regression
EMA estimates are maintained for the running moments: F, F², Obs, Obs², F·Obs, F_m², and (F - F_m)².
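A sketch of that bookkeeping (NumPy assumed; which moments are tracked follows the slide, but the exact operational update may differ):

```python
import numpy as np

class AdaptiveMoments:
    """EMA estimates of the running moments that feed the regression
    coefficients: F, F^2, Obs, Obs^2, F*Obs, Fm^2 and (F - Fm)^2."""
    def __init__(self, alpha=0.1):
        self.alpha = alpha
        self.m = dict.fromkeys(
            ["F", "F2", "obs", "obs2", "Fobs", "Fm2", "spread2"], 0.0)

    def update(self, members, obs):
        fm = float(np.mean(members))           # ensemble mean Fm
        new = {"F": fm,
               "F2": float(np.mean(np.square(members))),
               "obs": obs, "obs2": obs * obs, "Fobs": fm * obs,
               "Fm2": fm * fm,
               "spread2": float(np.mean((members - fm) ** 2))}
        a = self.alpha
        for key in self.m:                     # one EMA step per moment
            self.m[key] = (1.0 - a) * self.m[key] + a * new[key]
```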

Trends
Adaptive regression "learns" the recent bias and is very good at compensating for it. Most statistical tools also "learn" the bias and adapt to compensate. Steps need to be taken to prevent double bias correction.

Trends (continued)
Step 1. Detrend all models and Obs: F' = F - F_10, Obs' = Obs - Obs_10, where F_10 and Obs_10 are the EMAs approximating a 10-year mean.
Step 2. Ensemble regression. The final forecast set F' consists of anomalies.
Step 3. Restore the forecast:
A) F = F' + F_10 (we believe the OCN trend estimate);
B) F = F' + C30, where C30 = the 30-year climatology (we have no trust in OCN);
C) F = F' + C30 + R_OCN (F_10 - C30), where R_OCN = correlation(F_10, Obs) (trust but verify).
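The three restoration options as code (a sketch; the function and argument names are ours):

```python
def restore_forecast(F_anom, F10, C30, r_ocn=0.0, option="C"):
    """Step 3 options:
    A: F = F' + F10                     (believe the OCN trend estimate)
    B: F = F' + C30                     (no trust in OCN)
    C: F = F' + C30 + R_OCN*(F10 - C30) (trust but verify)"""
    if option == "A":
        return F_anom + F10
    if option == "B":
        return F_anom + C30
    return F_anom + C30 + r_ocn * (F10 - C30)
```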

Weighting
The chance of an individual ensemble member being "best" increases with the skill of its model. The kernel distribution represents the expected error distribution of a correct forecast.

Final Forecast
Consolidated probabilities are the areas under the PDF within each of the three categories (Below, Near, and Above the median).
Call for ABOVE when P(Above) > 36% and P(Below) < 33.3%.
Call for BELOW when P(Below) > 36% and P(Above) < 33.3%.
White area = Equal Chances. (We don't as yet trust our Near-Normal percentages.)
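The category probabilities and the call rule, sketched (NumPy assumed; the PDF is integrated by a simple Riemann sum on a uniform grid):

```python
import numpy as np

def category_probs(pdf, grid, lo, hi):
    """Areas under the consolidated PDF below lo, between lo and hi,
    and above hi (the three categories)."""
    dx = grid[1] - grid[0]
    p_below = pdf[grid < lo].sum() * dx
    p_above = pdf[grid > hi].sum() * dx
    return p_below, 1.0 - p_below - p_above, p_above

def make_call(p_below, p_near, p_above):
    """ABOVE if P(Above) > 36% and P(Below) < 33.3%; BELOW likewise;
    otherwise Equal Chances (the white area)."""
    if p_above > 0.36 and p_below < 1.0 / 3.0:
        return "ABOVE"
    if p_below > 0.36 and p_above < 1.0 / 3.0:
        return "BELOW"
    return "EC"
```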

Performance
Tools (1995-2005):
- CFS: 15-member hindcast. All members weighted equally, with combined area equal to the tool weighting.
- CCA: single-valued forecast; hindcasts from cross-validation.
- SMLR: single-valued forecast; hindcasts from retroactive real-time validation.
- OCN: incorporated with the EMA rather than a 10-year boxcar average.

Performance (continued)
- First-guess EMA parameters: provided by CCA and SMLR statistics, and by a CFS spinup.
- Validation period: all initial times, 1-month lead, Jan 1995 - Dec 2005 (11 years).

Performance (continued)
- Official Forecast: hand-drawn, probabilities in 3 classes; PoE obtained from a Normal distribution (standard deviation based on tool skills).
- CCA+SMLR: a consolidation of the two statistical forecasts, equally weighted.
- CFS: a consolidation of the 15 CFS ensemble members, equally weighted.
- CFS+CCA+SMLR Wts: a consolidation of CCA, CFS, and SMLR, weighted by R/(1-R) for each of the three tools; the 15 CFS members each get 1/15th of the CFS weight. Also known as "All".
- All - Equal Wts: CCA, SMLR, and the 15 CFS members combined are given equal weights.
- All - No Trend: anomalies applied to the 30-year mean.
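The R/(1-R) tool weighting, sketched (NumPy assumed; the correlations used here are placeholders, not the actual tool skills):

```python
import numpy as np

def tool_weights(correlations):
    """Weights proportional to R/(1-R) per tool, normalized to sum to 1."""
    r = np.asarray(correlations, dtype=float)
    w = r / (1.0 - r)
    return w / w.sum()

w_cca, w_smlr, w_cfs = tool_weights([0.30, 0.25, 0.40])  # placeholder Rs
member_weight = w_cfs / 15.0   # each CFS member gets 1/15th of the CFS weight
```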

[Table: HSS, CRPSS, RPSS-3, % cover, and bias (°C) for CCA+SMLR, CFS, CFS+CCA+SMLR (Wts.), All - No Trend, All - Equal Wts., and Official.]

[Figure: CRPS skill scores, temperature, 1995-2005, 1-month lead, all initial times; panels for CFS, CCA+SMLR, All, and Official; legend: High / Moderate / Low / None skill.]

[Figure: CRPS skill scores and % cover, temperature, 1995-2005, 1-month lead, all initial times; Trends vs. No Trends; legend: High / Moderate / Low / None skill.]

[Figure: the winter forecast, skill weighting vs. equal weighting.]

Conclusions
- Weighting is not critical, within reason.
- Consolidation outperforms the component models.
- Getting the trends right is essential.
- CFS + trend consolidation provides an accurate continuous distribution.