Sharing Experiences in Operational Consensus Track Forecasting
Rapporteur: Andrew Burton
Team members: Philippe Caroff, James Franklin, Ed Fukada, T.C. Lee, Buck Sampson, Todd Smith

Presentation transcript:

Sharing Experiences in Operational Consensus Track Forecasting
Rapporteur: Andrew Burton
Team members: Philippe Caroff, James Franklin, Ed Fukada, T.C. Lee, Buck Sampson, Todd Smith.

Sharing Experiences in Operational Ensemble Track Forecasting
Rapporteur: Andrew Burton
Team members: Philippe Caroff, James Franklin, Ed Fukada, T.C. Lee, Buck Sampson, Todd Smith.

Consensus Track Forecasting
– Single and multi-model approaches
– Weighted and non-weighted methods
– Selective and non-selective methods
– Optimising consensus track forecasting
– Practical considerations
– Guidance on guidance and intensity consensus
– Discussion
– Recommendations

Consensus Track Forecasting
Consensus methods are now relatively widespread, because:
– There is clear evidence of improvement (on seasonal timescales) over individual guidance
– It's what forecasters naturally do
– They improve objectivity in track forecasting
– They remove the "windscreen wiper" effect (run-to-run flip-flopping of the forecast track)

Consensus Track Forecasting
Single-model approaches (EPS)

Consensus Track Forecasting
Single-model approaches (EPS)
– Multiple runs with perturbed initial conditions/physics
– Degraded resolution
– Generally not used operationally for direct input to a consensus forecast; generally used qualitatively
– Little work done on long-term verification of ensemble means
– Little work done on statistical calibration of EPS probabilities

Consensus Track Forecasting

Single vs. multi-model approaches
– Disjoint in how these approaches are currently used operationally.
– Multi-model ensembles: fewer members, but with greater independence between members (?) and with higher resolution.

Consensus Track Forecasting
Multi-model approaches
– Combining deterministic forecasts of multiple models (not just NWP)
– Fairly widespread use in operations
– Weighted or non-weighted
– Selective or non-selective

Consensus Track Forecasting
Multi-model approaches – simple example.

Consensus Track Forecasting
Multi-model approaches – simple example. Process (a minimal code sketch follows below):
1. Acquire tracks
2. Perform initial position correction
3. Interpolate tracks
4. Geographically average
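
A minimal Python sketch of the four steps above, assuming each member track is a list of (lead_hr, lat, lon) tuples sorted by lead time; the function name, data layout and plain lat/lon averaging are illustrative assumptions, not any centre's operational code.

    import numpy as np

    def simple_consensus(member_tracks, official_fix, lead_times=(12, 24, 36, 48, 72)):
        # member_tracks: list of tracks, each a list of (lead_hr, lat, lon) tuples
        #                sorted by lead time and starting at lead_hr = 0
        # official_fix:  (lat, lon) of the official analysed initial position
        corrected = []
        for track in member_tracks:                            # 1. acquire tracks
            hrs = np.array([p[0] for p in track], dtype=float)
            lat = np.array([p[1] for p in track], dtype=float)
            lon = np.array([p[2] for p in track], dtype=float)
            # 2. initial position correction: shift the whole track so that its
            #    t = 0 position coincides with the official fix
            lat += official_fix[0] - lat[0]
            lon += official_fix[1] - lon[0]
            # 3. interpolate to the common set of lead times (np.interp holds the
            #    last value for members that end early; operationally such members
            #    would be dropped at those lead times)
            corrected.append((np.interp(lead_times, hrs, lat),
                              np.interp(lead_times, hrs, lon)))
        # 4. geographically average (plain lat/lon mean; adequate away from the
        #    dateline and the poles)
        mean_lat = np.mean([c[0] for c in corrected], axis=0)
        mean_lon = np.mean([c[1] for c in corrected], axis=0)
        return {t: (float(mean_lat[i]), float(mean_lon[i]))
                for i, t in enumerate(lead_times)}

In practice the averaging would use great-circle geometry, and members that do not extend to a given lead time would simply be excluded at that lead time.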

Consensus Track Forecasting
Non-selective multi-model consensus
– Low maintenance
– Low training overhead
– Incorporate new models on the fly
– Robust performance
– If many members, less need for a selective approach
– Widely adopted as the baseline approach

Consensus Track Forecasting
Multi-model approaches – weighting
– Members weighted according to historical performance.
– Complex weighting: e.g. FSSE applies unequal weights to forecast parameters for each model and forecast time.
– Can outperform an unweighted consensus, provided the training is kept up to date (human or computer).
– Maintenance overhead. (A minimal weighting sketch follows below.)
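
One simple way to weight members, sketched below in Python, is by inverse historical mean error at each lead time; this is an illustrative assumption only, not the FSSE scheme or any centre's operational weighting.

    import numpy as np

    def weighted_consensus_position(positions, hist_errors):
        # positions:   list of (lat, lon) member forecasts for a single lead time
        # hist_errors: each member's mean historical track error (km) at that
        #              lead time, from a recent homogeneous training sample
        w = 1.0 / np.asarray(hist_errors, dtype=float)   # inverse-error weights
        w /= w.sum()                                     # normalise to sum to 1
        lat = np.array([p[0] for p in positions], dtype=float)
        lon = np.array([p[1] for p in positions], dtype=float)
        return float(np.dot(w, lat)), float(np.dot(w, lon))

The maintenance overhead mentioned above lives in hist_errors: the weights are only as good as the recency and homogeneity of the training sample.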

Consensus Track Forecasting
Selective vs. non-selective approaches
– Subjective selection is commonplace and can add significant value.
– Semi-objective selection: SAFA – implementation encountered hurdles.
– How do we identify the cases where a selective approach will add value?

Consensus Track Forecasting
Selective (SCON) vs. non-selective (NCON)
How to exclude members?

Consensus Track Forecasting
Selective (SCON) vs. non-selective (NCON)
SCON – How to exclude members?

Consensus Track Forecasting
Selective (SCON) vs. non-selective (NCON)
SCON – How to exclude members? Requires knowledge of known model biases (these change with model updates).

Consensus Track Forecasting
Selective (SCON) vs. non-selective (NCON)
SCON – How to exclude members? Requires knowledge of the model run, e.g. whether the model's analysed position differs from the observed position. BEWARE. (A minimal exclusion sketch follows below.)
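
One concrete form of such an exclusion rule, sketched in Python under illustrative assumptions (the 1.5-degree threshold and the track layout from the earlier sketch are not operational values):

    import numpy as np

    def drop_poor_analyses(member_tracks, official_fix, max_offset_deg=1.5):
        # Keep only members whose analysed (t = 0) position lies within
        # max_offset_deg of the observed/official fix; the threshold is illustrative.
        kept = []
        for track in member_tracks:
            lat0, lon0 = track[0][1], track[0][2]
            offset = np.hypot(lat0 - official_fix[0],
                              (lon0 - official_fix[1]) * np.cos(np.radians(official_fix[0])))
            if offset <= max_offset_deg:
                kept.append(track)
        return kept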

Consensus Track Forecasting
Recent performance of a model is no guarantee of success or failure next time.

Consensus Track Forecasting
Recent performance of a model is no guarantee of success or failure next time.

Consensus Track Forecasting
Position vs. vector-motion consensus
Combining short- and long-range members (a vector-motion sketch follows below)
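
The difference can be made concrete with an illustrative Python sketch (same track layout as the earlier sketches; not an operational algorithm): a vector-motion consensus averages each member's displacement from its own initial position and applies the mean displacement to the official fix, which also makes it straightforward to combine short- and long-range members at whatever lead times each covers.

    import numpy as np

    def motion_consensus(member_tracks, official_fix, lead_hr):
        # Average member motion vectors (displacement from each member's own
        # initial position) and apply the mean vector to the official fix.
        dlats, dlons = [], []
        for track in member_tracks:
            hrs = np.array([p[0] for p in track], dtype=float)
            lat = np.array([p[1] for p in track], dtype=float)
            lon = np.array([p[2] for p in track], dtype=float)
            if hrs.max() < lead_hr:
                continue                      # member too short for this lead time
            dlats.append(np.interp(lead_hr, hrs, lat) - lat[0])
            dlons.append(np.interp(lead_hr, hrs, lon) - lon[0])
        return (official_fix[0] + float(np.mean(dlats)),
                official_fix[1] + float(np.mean(dlons)))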

Consensus Track Forecasting
Optimising consensus tracks
Accuracy depends on:
1. Number of models
2. Accuracy of individual members
3. Independence of member errors
Including advisories in the consensus: JTWC, JMA, CMA.
(A back-of-envelope illustration of the three factors follows below.)
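
A standard back-of-envelope argument, added here for illustration rather than taken from the presentation: if the N member track errors have a common variance \sigma^2 and an average pairwise error correlation \rho, the variance of the consensus-mean error is

    \operatorname{Var}\!\left(\frac{1}{N}\sum_{i=1}^{N} e_i\right) = \frac{\sigma^2}{N} + \left(1 - \frac{1}{N}\right)\rho\,\sigma^2

so more members (larger N), more accurate members (smaller \sigma) and more independent members (smaller \rho) all reduce the expected consensus error, while as \rho approaches 1 adding further members gains almost nothing.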

A Question of Independence
Would you add WBAR to your consensus?

Would you add WBAR to your consensus? (figure panels: 24 hrs, 48 hrs)
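
One illustrative way to screen the independence question (an assumed diagnostic, not how WBAR was actually evaluated): over a homogeneous historical sample, check whether the candidate's errors are both comparable in magnitude to, and only weakly correlated with, the existing consensus errors; a member that is accurate but whose errors track the consensus closely adds little new information.

    import numpy as np

    def candidate_worth_adding(consensus_err_km, candidate_err_km,
                               max_err_ratio=1.5, max_corr=0.7):
        # consensus_err_km / candidate_err_km: track errors of the existing
        # consensus and of the candidate over the same historical cases
        ratio = np.mean(candidate_err_km) / np.mean(consensus_err_km)
        corr = np.corrcoef(consensus_err_km, candidate_err_km)[0, 1]
        # Illustrative thresholds only: comparable skill and weak error correlation
        return ratio <= max_err_ratio and corr <= max_corr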

Consensus Track Forecasting
Practical considerations
– Access to models? Where to get them from? (e.g. JMA?)
– Can we organise a central repository of global TC tracks? Standard format and timely!

Consensus Track Forecasting
Practical considerations (continued)
– Access to software?
– Access to model fields
– Pre-cyclone phase – fewer tracks
– Capture/recurvature/ETT

Consensus Track Forecasting

Discussion
– How many operational centres represented here commonly have access to fewer than 5 deterministic runs?
– Do you have access to tracks for which you don't have the fields?
– How many operational centres represented here use weighted consensus methods as their primary method?
– Do forecasters have the skill to be selective? Are the training requirements too great?
– Modifications for persistence?

Consensus Track Forecasting
Discussion
– Are weighted methods appropriate for all NMHSs?
– Bifurcation situations? Should a forecaster sit on the fence – in zero-probability space?
– Is statistical calibration of EPS guidance a requirement?
– How many operational centres are currently looking to produce probabilistic products for external dissemination?

Consensus Track Forecasting
Discussion
– What modifications should forecasters be allowed to make?
– Do you agree that the relevant benchmark for operational centres is the simple consensus of available guidance?
– What is an appropriate means of combining EPS and deterministic runs in operational consensus forecasting? (Is it sufficient to include the ensemble mean as a member?)

Consensus Track Forecasting
Recommendations?

Confidence level
Small vs. large spread among the consensus members (a minimal spread metric is sketched below)
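
A small illustrative sketch of using member spread as a confidence indicator (an assumption about one possible metric, not an operational product): the mean great-circle distance of the members from the consensus-mean position at a given lead time; small spread suggests higher confidence, large spread lower confidence.

    import numpy as np

    def consensus_spread_km(member_positions):
        # member_positions: array-like of (lat, lon) in degrees, one per member,
        # all valid at the same lead time
        R = 6371.0                                    # mean Earth radius, km
        pos = np.radians(np.asarray(member_positions, dtype=float))
        mean_lat, mean_lon = pos.mean(axis=0)
        dlat = pos[:, 0] - mean_lat
        dlon = pos[:, 1] - mean_lon
        # haversine distance of each member from the consensus-mean position
        a = (np.sin(dlat / 2) ** 2
             + np.cos(pos[:, 0]) * np.cos(mean_lat) * np.sin(dlon / 2) ** 2)
        return float(np.mean(2.0 * R * np.arcsin(np.sqrt(a))))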

Consensus Track Forecasting
Probabilistic Ensemble System for the Prediction of Tropical Cyclones (PEST)

Consensus Track Forecasting
Consensus intensity forecasting?
– Early results are promising: it will become part of the operational procedure.
– Not as good as for track forecasting.

Consensus Track Forecasting