Evaluation of Experimental Models for Tropical Cyclone Forecasting in Support of the NOAA Hurricane Forecast Improvement Project (HFIP)
Barbara G. Brown, Louisa Nance, Paul A. Kucera, and Christopher L. Williams
Tropical Cyclone Modeling Team (TCMT), Joint Numerical Testbed Program, NCAR, Boulder, CO
67th IHC/Tropical Cyclone Research Forum, 6 March 2013

HFIP Retrospective and Demonstration Exercises
Retrospective evaluation goal: select new Stream 1.5 models to demonstrate to NHC forecasters during the yearly HFIP demonstration project; models are selected based on criteria established by NHC.
Demonstration goal: demonstrate and test the capabilities of new modeling systems (Stream 1, 1.5, and 2) in real time.
Model forecasts are evaluated by the TCMT in both the retrospective and demonstration projects.

Methodology
(Schematic.) Forecast errors from each experimental model and from the operational baselines are verified against the NHC best track, matched to form a homogeneous sample, and compared through pairwise differences. Results are presented as summary tables of statistically significant differences, graphics of error-distribution properties, and ranking plots against the top-flight models. Evaluation focused on early model guidance.
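As a rough illustration of the matching and pairwise-differencing step, the sketch below assumes hypothetical pandas DataFrames of already-verified errors (column names storm_id, init_time, lead_hr, and error are invented for illustration; this is not the TCMT code).

```python
import pandas as pd

def pairwise_differences(exp_errors: pd.DataFrame, base_errors: pd.DataFrame) -> pd.DataFrame:
    """Match experimental and baseline forecast errors on a homogeneous sample.

    Both inputs are assumed to hold one row per verified forecast with columns
    storm_id, init_time, lead_hr, and error (track error in n mi or intensity
    error in kt, already computed against the NHC best track).
    """
    keys = ["storm_id", "init_time", "lead_hr"]
    # Inner join keeps only cases present in BOTH models -> homogeneous sample.
    merged = exp_errors.merge(base_errors, on=keys, suffixes=("_exp", "_base"))
    # Pairwise difference: positive means the experimental model is worse.
    merged["diff"] = merged["error_exp"] - merged["error_base"]
    return merged

# Example use: mean pairwise difference by lead time
# diffs = pairwise_differences(experimental_errors, baseline_errors)
# print(diffs.groupby("lead_hr")["diff"].mean())
```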

2012 RETROSPECTIVE EXERCISE

Stream 1.5 Retrospective Evaluation Goals
Provide NHC with in-depth statistical evaluations of the candidate models/techniques directed at the criteria for Stream 1.5 selection.
Explore new approaches that provide more insight into the performance of the Stream 1.5 candidates.
Selection criteria:
Track: explicit, 3-4% improvement over the previous year's top-flight models; consensus, 3-4% improvement over the conventional model consensus track error.
Intensity: improve upon existing guidance for TC intensity and rapid intensification (RI).
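As a simple worked example of the percent-improvement criterion (a minimal sketch, not the official selection computation):

```python
def percent_improvement(candidate_mean_err: float, baseline_mean_err: float) -> float:
    """Percent improvement of a candidate over a baseline on a homogeneous
    sample; positive values mean the candidate's mean error is smaller."""
    return 100.0 * (baseline_mean_err - candidate_mean_err) / baseline_mean_err

# A candidate mean track error of 96 n mi against a 100 n mi top-flight error
# is a 4% improvement, meeting the 3-4% Stream 1.5 track criterion.
print(percent_improvement(96.0, 100.0))  # -> 4.0
```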

Atlantic Basin: 2009, 8 storms; 2010, 17 storms; 2011, 15 storms (640 cases)
Eastern North Pacific Basin: 2009, 13 storms; 2010, 5 storms; 2011, 6 storms (387 cases)

2012 Stream 1.5 Retrospective Participants
Organization       Model                         Type                              Basins   Config
MMM/SUNY-Albany    AHW                           Regional-dynamic-deterministic    AL, EP   1
UW-Madison         UW-NMS                        Regional-dynamic-deterministic    AL       1
NRL                COAMPS-TC                     Regional-dynamic-deterministic    AL, EP   1
PSU                ARW                           Regional-dynamic-deterministic    AL       2
GFDL               GFDL ensemble                 Regional-dynamic-ensemble         AL, EP   2
GSD                FIM                           Global-dynamic-deterministic      AL, EP   2
FSU                Correlation Based Consensus   Consensus (global/regional dynamic deterministic + statistical-dynamic)   AL   1
CIRA               SPICE                         Statistical-dynamic-consensus     AL, EP   2

Comparisons and Evaluations
1. Performance relative to baseline (top-flight) models
   Track: ECMWF, GFS, GFDL
   Intensity: DSHP, LGEM, GFDL
2. Contribution to consensus (see the sketch below)
   Track (variable) consensus members
   Atlantic: ECMWF, GFS, UKMET, GFDL, HWRF, GFDL-Navy
   East Pacific: ECMWF, GFS, UKMET, GFDL, HWRF, GFDL-Navy, NOGAPS
   Intensity (fixed) consensus members: Decay SHIPS, LGEM, GFDL, HWRF
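One way to gauge "contribution to consensus" is to compare the error of an equally weighted consensus with and without the candidate member on the same cases. The sketch below assumes NumPy arrays of matched forecast and verifying values and is illustrative only, not the NHC or TCMT procedure.

```python
import numpy as np

def consensus_error(member_forecasts: dict, verification: np.ndarray) -> float:
    """Mean absolute error of an equally weighted consensus.

    member_forecasts maps model name -> array of forecast values (e.g. intensity
    in kt) for a set of matched cases; verification holds the verifying values.
    """
    consensus = np.mean(np.stack(list(member_forecasts.values())), axis=0)
    return float(np.mean(np.abs(consensus - verification)))

def added_value(members: dict, name: str, candidate: np.ndarray,
                verification: np.ndarray) -> float:
    """Change in consensus error when the candidate is added (negative = improvement)."""
    with_candidate = {**members, name: candidate}
    return consensus_error(with_candidate, verification) - consensus_error(members, verification)
```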

SAMPLE RETRO RESULTS/DISPLAYS
All reports and graphics are available at:

Error Distributions: Box Plots
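A minimal recipe for a box-plot display of this kind, assuming a pandas DataFrame of errors with hypothetical lead_hr and error columns (illustrative only, not the TCMT graphics code):

```python
import matplotlib.pyplot as plt

def plot_error_boxes(errors, lead_hours=(12, 24, 36, 48, 72, 96, 120)):
    """Box plots of forecast error distributions by lead time."""
    data = [errors.loc[errors["lead_hr"] == h, "error"].dropna().values
            for h in lead_hours]
    fig, ax = plt.subplots()
    ax.boxplot(data)                        # one box per lead time
    ax.set_xticklabels([str(h) for h in lead_hours])
    ax.set_xlabel("Forecast lead time (h)")
    ax.set_ylabel("Track error (n mi)")
    return fig
```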

Statistical Significance – Pairwise Differences Summary Tables
The summary tables give, by forecast hour, the percent mean error difference (improvement + / degradation -) and the p-value for each pairwise comparison against a baseline. Statistically significant (SS) differences are shaded according to magnitude bins (separate scales for track and intensity); non-significant differences are marked only by sign, so the tables convey practical as well as statistical significance. Example: COAMPS-TC vs. GHMI track and intensity errors over land/water.
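Each cell of such a table can be thought of as the output of a paired comparison at one forecast hour. The sketch below uses a paired t-test purely for illustration (the TCMT evaluation may use a different significance test) with hypothetical NumPy inputs.

```python
import numpy as np
from scipy import stats

def pairwise_summary(exp_err: np.ndarray, base_err: np.ndarray):
    """Percent mean error difference and p-value for a matched (homogeneous)
    sample at one forecast hour. The paired t-test is illustrative only and
    ignores any serial correlation between cases."""
    pct_improve = 100.0 * (base_err.mean() - exp_err.mean()) / base_err.mean()
    _, p_value = stats.ttest_rel(exp_err, base_err)
    return pct_improve, p_value

# Cells with p_value below the chosen threshold would be shaded by the
# magnitude (and sign) of pct_improve, as in the summary tables above.
```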

Comparison with Top-Flight Models: Rank Frequency
U of Wisconsin: ranks 1st or last most often at shorter lead times; more likely to rank 1st at longer lead times.
FIM: confidence intervals for all ranks tend to overlap.
The method is sensitive to sample size.
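A sketch of how rank frequencies of this kind can be computed from matched errors (hypothetical data layout; ties broken arbitrarily):

```python
import numpy as np

def rank_frequencies(errors_by_model: dict) -> dict:
    """Frequency with which each model ranks 1st, 2nd, ... by error magnitude.

    errors_by_model maps model name -> array of errors on a homogeneous set of
    cases; rank 1 means the smallest error for that case.
    """
    names = list(errors_by_model)
    errs = np.stack([errors_by_model[m] for m in names])   # (n_models, n_cases)
    ranks = errs.argsort(axis=0).argsort(axis=0) + 1        # 1 = best per case
    n_cases = errs.shape[1]
    return {m: np.bincount(ranks[i], minlength=len(names) + 1)[1:] / n_cases
            for i, m in enumerate(names)}
```

Confidence intervals on these frequencies (e.g. from a bootstrap over cases) indicate whether apparent ranking differences are robust to the sample size.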

NHC’s 2012 Stream 1.5 Decision
Candidates considered for track, track consensus, and intensity consensus applications:
MMM/SUNY-Albany: AHW
UW-Madison: UW-NMS
NRL: COAMPS-TC
PSU: ARW
GFDL: GFDL ensemble mean (no-bogus member)
GSD: FIM
FSU: Correlation Based Consensus
CIRA: SPICE

2012 DEMO
All graphics are available at:

2012 HFIP Demonstration
Evaluation of Stream 1, 1.5, and 2 models (operational, demonstration, and research models).
Focus here on selected Stream 1.5 model performance:
Track: GFDL ensemble mean performance relative to baselines
Intensity: SPICE performance relative to baselines
Contribution of Stream 1.5 models to consensus forecasts

2012 Demo: GFDL Ensemble Mean
Track errors vs. baseline models (figure). Red: GFDL ensemble mean errors; baselines: ECMWF, GFDL (operational), GFS.
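For reference, track error is the great-circle distance between the forecast and the best-track position. The sketch below computes an ensemble-mean position and its track error under simplifying assumptions (direct latitude/longitude averaging and hypothetical inputs; not the operational procedure).

```python
import math

EARTH_RADIUS_NM = 3440.065  # mean Earth radius in nautical miles

def great_circle_nm(lat1, lon1, lat2, lon2):
    """Great-circle distance in n mi between two points (haversine formula)."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
    return 2 * EARTH_RADIUS_NM * math.asin(math.sqrt(a))

def ensemble_mean_track_error(member_lats, member_lons, best_lat, best_lon):
    """Track error of the ensemble-mean position vs. the best-track position.

    Averaging latitude/longitude directly is a simplification (it misbehaves
    near the dateline); real systems handle longitude wrapping explicitly.
    """
    mean_lat = sum(member_lats) / len(member_lats)
    mean_lon = sum(member_lons) / len(member_lons)
    return great_circle_nm(mean_lat, mean_lon, best_lat, best_lon)
```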

Comparison with Top-Flight Models – Rank Frequency: GFDL Ensemble Mean
Retrospective (2009-2011) vs. Demo (2012) (figures).

2012 Demo: SPICE (Intensity)
Baseline comparisons and rank frequency comparisons, Demo vs. Retro (figures).

2012 Demo: Stream 1.5 Consensus
The Stream 1.5 consensus performed similarly to the operational consensus for both track and intensity.
For the Demo, confidence intervals tend to be large due to small sample sizes.
(Figures: track and intensity.)
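The confidence intervals referred to above can be estimated by resampling the case set; a minimal percentile-bootstrap sketch (not necessarily the TCMT method):

```python
import numpy as np

def bootstrap_mean_ci(errors, n_boot=1000, alpha=0.05, seed=0):
    """Percentile-bootstrap confidence interval for the mean forecast error.

    With small demo samples the resampled means vary widely, which is why the
    plotted intervals tend to be large.
    """
    rng = np.random.default_rng(seed)
    errors = np.asarray(errors, dtype=float)
    means = np.array([rng.choice(errors, size=errors.size, replace=True).mean()
                      for _ in range(n_boot)])
    return np.quantile(means, [alpha / 2.0, 1.0 - alpha / 2.0])
```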

Online Access to HFIP Demonstration Evaluation Results
Evaluation graphics are available on the TCMT website: p/d2012/verify/
A wide variety of evaluation statistics is available:
Aggregated by basin or storm
Aggregated by land/water, or water only
Different plot types: error distributions, line plots, rank histograms, Demo vs. Retro
A variety of variables and baselines to evaluate

THANK YOU!

Baseline Comparisons
Operational baselines:
Top-flight models: Track: ECMWF, GFS, GFDL; Intensity: DSHP, LGEM, GFDL
Stream 1.5 consensus: Track (variable), AL: ECMWF, GFS, UKMET, GFDL, HWRF, GFDL-Navy; EP: ECMWF, GFS, UKMET, GFDL, HWRF, GFDL-Navy, NOGAPS; Intensity (fixed), AL and EP: Decay SHIPS, LGEM, GFDL, HWRF
Stream 1.5 configuration:
AHW, ARW, UW-NMS, COAMPS-TC, FIM: consensus + Stream 1.5 model
GFDL, SPICE: consensus with Stream 1.5 equivalent replacement
FSU-CBC: direct comparison