National Hurricane Center 2007 Forecast Verification
Interdepartmental Hurricane Conference, 3 March 2008
James L. Franklin, NHC/TPC


Summary: Atlantic Track
 OFCL track errors set records for accuracy at several forecast periods. Errors continue their downward trends, although skill has been flat for several years.
 OFCL track forecasts beat the consensus models at some time periods, but trailed the best of the dynamical models (an atypical result).
 GFS and UKMET provided the best dynamical track guidance. GFDL and NOGAPS had relatively poor years. ECMWF was mediocre.
 A respectable first season for the HWRF, but it isn't ready to replace the GFDL. A combination of the two is better than either alone.

Summary: Atlantic Intensity
 A very difficult year (as measured by Decay-SHIFOR), and OFCL errors were up considerably; the 2007 forecasts were nonetheless more skillful than in recent years.
 The best models were statistical, as has almost always been the case. The four-model consensus (DSHP/LGEM/HWRF/GHMI) seems promising.

Summary: East Pacific Track
 OFCL track errors set records at several forecast periods.
 OFCL beat the individual dynamical models but not the consensus (the typical result).
 There continues to be a much larger difference between the dynamical models and the consensus in the eastern North Pacific than in the Atlantic, which is suggestive of different error mechanisms in the two basins.

Summary: East Pacific Intensity
 OFCL added considerable value over the guidance through 48 h, but lagged the guidance thereafter.
 Best guidance was statistical. LGEM did very well, as did the 4-model consensus.

NOAA's 2007 Atlantic Hurricane Outlook (slide courtesy of Eric Blake, NHC)

                       May 2007   Aug 2007   Observed
Type                   Outlook    Outlook    Activity      Climatology
Chance Above Normal    75%        85%                      33%
Chance Near Normal     20%        10%        Near Normal   33%
Chance Below Normal     5%         5%                      33%
Named Storms / Hurricanes / Major Hurricanes / ACE % of Median (observed ~100)

NOAA’s Atlantic Hurricane Outlooks (ACE) Graphic Courtesy of Gerry Bell (CPC) Last 6 forecasts have been outside the predicted range. The Hurricane Specialists have been very concerned about the effect that the seasonal forecasts, and the media hype that surrounds them, have on the hurricane warning program, and are glad to see that NOAA will be looking at ways to minimize these problems.

Verification Rules
 Verification rules unchanged for 2007. Results presented here are final.
 System must be a tropical or subtropical cyclone at both the forecast initial time and the verification time. All verifications include the depression stage except for the GPRA goal verification.
 Special advisories are ignored (the original advisory is verified).
 Skill baselines are recomputed after the season from operational compute data. Decay-SHIFOR5 is the intensity skill benchmark.
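Track verification in these tables reduces to a great-circle distance between the forecast and the verifying best-track position. A minimal sketch of that metric (assuming a spherical Earth; this is an illustration, not NHC's operational code):

```python
# Minimal sketch of the basic track-error metric: the great-circle
# (haversine) distance, in nautical miles, between a forecast position
# and the verifying best-track position.
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_NMI = 3440.065  # mean Earth radius in nautical miles

def track_error_nmi(fcst_lat, fcst_lon, btk_lat, btk_lon):
    """Great-circle distance between forecast and best-track positions."""
    phi1, phi2 = radians(fcst_lat), radians(btk_lat)
    dphi = radians(btk_lat - fcst_lat)
    dlam = radians(btk_lon - fcst_lon)
    a = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlam / 2) ** 2
    return 2 * EARTH_RADIUS_NMI * asin(sqrt(a))

# Sanity check: one degree of latitude is about 60 n mi.
print(round(track_error_nmi(25.0, -75.0, 26.0, -75.0), 1))  # prints 60.0
```

Seasonal-mean errors such as those below are then simple averages of this quantity over all verifying forecasts at each lead time.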

2007 Atlantic Verification

VT (h)   NT   TRACK (n mi)   INT (kt)
========================================

Values in green meet or exceed all-time records.
* The 48-h track error for TS and H only was 86.2 n mi, a record.

Atlantic Track Errors by Storm Fewer than half the storms had any 72-h forecasts; only Dean and Noel had any 5-day forecasts.

Atlantic Track Errors vs. 5-Year Mean Official forecast was lower than the 5-year mean, but so was CLP5 (statistics dominated by Dean, a “west-runner”).

Atlantic Track Error Trends Errors have been cut in half over the period of record. The sharpest declines have come in recent years.

Atlantic Track Skill Trends Skill has increased since the 1990s, in particular at the end of that decade, but has been relatively flat for the past few years.

Atlantic 5-Year Mean Track Errors Track errors increase by about 55 n mi per day. Intensity errors level off because intensity is a much more bounded problem.

OFCL Error Distributions and Cone Radii Last year’s 4- and 5-day cones were 252 and 326 n mi, respectively.
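The cone radii follow from the distribution of recent official track errors: each circle is sized so that it encloses about two-thirds of the errors at that forecast period. A minimal sketch of that calculation; the error sample is hypothetical, not actual OFCL data:

```python
# Minimal sketch of sizing a forecast-cone circle from a sample of
# official track errors: the smallest radius that encloses roughly
# two-thirds of the errors at one forecast period.
import math

def cone_radius(errors_nmi, coverage=2 / 3):
    """Smallest sample error such that `coverage` of the errors fall within it."""
    ranked = sorted(errors_nmi)
    need = math.ceil(coverage * len(ranked))  # how many errors must be enclosed
    return ranked[need - 1]

# Hypothetical 96-h error sample (n mi), not real verification data.
sample_96h_errors = [110, 150, 180, 205, 230, 260, 300, 340, 420]
print(cone_radius(sample_96h_errors))  # prints 260: 6 of the 9 errors fall within
```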

2007 Track Guidance The official forecast beat the consensus models at some time periods. The best models were GFS and UKMET (especially using the subjective tracker, EGRI). UKMET was last in 2006! GFDL and NOGAPS had rough years, so much so that they were a drag on the consensus. GFNI, AEMI, and FSSE were excluded due to insufficient availability (less than 67% of the time at 48 or 120 h).

GFDL-HWRF Comparison Good first year for the HWRF; competitive for intensity, better than GFDL for track (mainly Dean). Consensus of the two better than either alone.

Guidance Trends UKMET goes from worst to first. NOGAPS has its 5th poor season in a row (and 9th out of the last 10, at least at 48 h). Even so, it contributes positively to the consensus.

Guidance Trends Relative performance at 120 h is more variable, although GFSI has been strong nearly every year. NGPI is better at the longer periods, GFDL less so.

Consensus Models Use of EGRI (subjective tracker) improves the GUNA consensus (GENA). Mixed bag for FSSE, which appeared to lag behind at longer forecast intervals.

Consensus Models For the second year in a row, AEMI trailed its control run. Multi-model ensembles remain far more effective for TC forecasting. The ECMWF ensemble mean is also not as good as its control run.

Goerss Corrected Consensus Continues to be of benefit, or at least no harm.

Forecaster Consensus Forecasters appear to have been successful in selecting their own consensus (in the Atlantic).

Atlantic Intensity Errors vs. 5-Year Mean In contrast to 2006, 2007 had storms that were difficult to forecast, as measured by D-SHIFOR (Dean and Felix, presumably), and OFCL suffered as a result, with errors significantly above the 5-year mean.

Atlantic Intensity Error Trends No progress with intensity.

Atlantic Intensity Skill Trends Skill returns to previous levels, with little net change over the past several years.

2007 Intensity Guidance OFCL beat the available guidance through 72 h. Statistical models back in their accustomed position, ahead of dynamical models. With the advent of the LGEM and HWRF, we now can form a 4-member intensity consensus…

2007 Intensity Guidance …which is at least as good as the best individual model at all time periods except 120 h.

2007 Intensity Guidance FSU Superensemble trailed the simple intensity consensus.

2007 East Pacific Verification

VT (h)   NT   TRACK (n mi)   INT (kt)
========================================

Values in green tied or exceeded all-time lows.

2007 vs 5-Year Mean CLIPER errors in 2007 were above their previous 5-yr means. Despite this, OFCL errors were below their previous 5-yr means.

EPAC Track Error Trends Since 1990, track errors have decreased by about 1/3.

EPAC Track Skill Trends Skill continues to improve.

OFCL Error Distributions and Cone Radii

2007 Track Guidance UKMI, EGRI, AEMI, FSSE, and GUNA were excluded due to insufficient availability. The official forecast beat the CONU consensus at some time periods and beat each individual model. EMXI was best by a wide margin (largely due to Kiko).

GFDL-HWRF Comparison Overall, HWRF performance not as good as the GFDL, especially at longer periods. Consensus did add value for intensity through 72 h.

Consensus Models No standouts. Substitution of EGRI for UKMI improves GUNA.

Goerss Corrected Consensus Did not help in 2007.

Forecaster Consensus Unlike in the Atlantic, the forecasters' selective consensus didn't work in the eastern Pacific. This suggests that error mechanisms in the eastern Pacific are more subtle than in the Atlantic, making erroneous outliers harder to detect.

Eastern North Pacific Intensity Errors vs. 5-year Mean OFCL errors were lower than 5-yr means, but so were the Decay- SHIFOR errors.

EPAC Intensity Error Trends Same as it ever was…same as it ever was… ♫ ♫

EPAC Intensity Skill Trends Skill does seem to be inching upward…

2007 Intensity Guidance OFCL added significant value over the guidance through 48 h. Wind biases turn sharply negative at the longer forecast periods. LGEM provided the most skillful guidance overall. HWRF had trouble, presumably with decay over cooler waters.

2007 Intensity Guidance Good value in multi-model consensus.

Consensus Changes for 2008
 Fixed consensus models (require all members present)
   TCON: AVNI EGRI NGPI GHMI HWFI
   ICON: DSHP LGEM GHMI HWFI
 Variable consensus models (require at least 2 members present)
   TVCN: AVNI EGRI NGPI GHMI HWFI GFNI EMXI
   IVCN: DSHP LGEM GHMI HWFI GFNI
 Corrected versions of TCON and TVCN will be TCCN and TVCC, respectively.
 Substitute EGRI for UKMI in GUNA.
 Discontinue CONU (superseded by TVCN), CCON (superseded by TVCC), GUNS, GENA (superseded by GUNA), CONE, and INT4.
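The fixed/variable distinction above amounts to a simple averaging rule: a fixed consensus is formed only when every member is available, while a variable consensus averages whichever members are present, subject to a minimum of two. A minimal sketch using the intensity member lists from this slide; the forecast values are hypothetical, not real model output:

```python
# Minimal sketch of fixed vs. variable consensus averaging.
# Member lists are the 2008 intensity consensus definitions from the
# slide; the forecast values (kt) are hypothetical.
ICON_MEMBERS = ["DSHP", "LGEM", "GHMI", "HWFI"]          # fixed: all required
IVCN_MEMBERS = ["DSHP", "LGEM", "GHMI", "HWFI", "GFNI"]  # variable: >= 2 required

def consensus(members, forecasts, min_members):
    """Average the available member forecasts; None if too few are present."""
    avail = [forecasts[m] for m in members if m in forecasts]
    if len(avail) < min_members:
        return None
    return sum(avail) / len(avail)

forecasts = {"DSHP": 95, "LGEM": 90, "GHMI": 105, "HWFI": 100}  # GFNI missing

icon = consensus(ICON_MEMBERS, forecasts, min_members=len(ICON_MEMBERS))
ivcn = consensus(IVCN_MEMBERS, forecasts, min_members=2)
print(icon, ivcn)  # prints 97.5 97.5: IVCN still forms with GFNI unavailable
```

The payoff of the variable form is availability: IVCN still produces a forecast when GFNI is missing, whereas a fixed consensus that listed GFNI would not.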

Genesis Forecast Verification Good correlation between forecast and verifying genesis rates in the Atlantic, with only a weak over-forecast bias. In the eastern North Pacific, poor correlation (except at the extremes), with a large under-forecast bias.

Genesis Verification by Bins

ATLANTIC
Range (%)        % Expected   % Verified   # Forecasts
0-10  (Low)
      (Med)
      (High)

EASTERN NORTH PACIFIC
Range (%)        % Expected   % Verified   # Forecasts
0-10  (Low)
      (Med)
      (High)

NHC will issue experimental public quantitative/categorical genesis forecasts in 2008 in association with the Graphical Tropical Weather Outlook.
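The bin verification above compares, within each forecast-probability range, the mean forecast probability ("% Expected") against the fraction of those forecasts that verified ("% Verified"). A minimal sketch of that tally; the forecast/outcome pairs are hypothetical, and the Med and High bin ranges are assumed:

```python
# Minimal sketch of genesis verification by probability bins: group
# forecasts into low/medium/high ranges and compare the mean forecast
# probability ("expected") with the observed genesis frequency
# ("verified"). Pairs are hypothetical; Med/High ranges are assumed.
BINS = {"Low": (0, 10), "Med": (20, 50), "High": (60, 100)}

def verify_by_bins(pairs):
    """pairs: (forecast probability in %, genesis occurred?) tuples."""
    out = {}
    for label, (lo, hi) in BINS.items():
        in_bin = [(p, hit) for p, hit in pairs if lo <= p <= hi]
        if not in_bin:
            continue
        expected = sum(p for p, _ in in_bin) / len(in_bin)
        verified = 100 * sum(hit for _, hit in in_bin) / len(in_bin)
        out[label] = (expected, verified, len(in_bin))
    return out

pairs = [(10, False), (10, True), (30, False), (50, True), (80, True), (90, True)]
for label, (exp, ver, n) in verify_by_bins(pairs).items():
    print(f"{label}: expected {exp:.0f}%, verified {ver:.0f}%, n={n}")
```

A well-calibrated forecast has "% Verified" close to "% Expected" in every bin; a large gap in one direction is the over- or under-forecast bias described on the previous slide.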