National Hurricane Center 2009 Forecast Verification

Presentation transcript:

National Hurricane Center 2009 Forecast Verification
James L. Franklin, Branch Chief, Hurricane Specialist Unit, National Hurricane Center
2009 NOAA Hurricane Conference

Verification Rules
• Verification rules are unchanged for 2009. Results presented here for both basins are preliminary.
• A system must be a tropical or subtropical cyclone at both the forecast initial time and the verification time. All verifications include the depression stage except for the GPRA track goal verification.
• Special advisories are ignored (the original advisory is verified).
• Skill baselines are recomputed after the season from the operational compute data. Decay-SHIFOR5 is the intensity skill benchmark.
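These rules amount to a simple filter on the forecast sample. A minimal sketch of such a filter is given below; the ForecastCase structure, the status codes, and the exact GPRA handling are illustrative assumptions, not NHC's verification software.

```python
# Minimal sketch of the verification-rule filter described above.
# The ForecastCase structure, status codes, and GPRA handling are
# illustrative assumptions, not NHC's actual verification software.
from dataclasses import dataclass

TROPICAL_STATUSES = {"TD", "TS", "HU", "SD", "SS"}  # tropical/subtropical stages


@dataclass
class ForecastCase:
    status_at_initial: str     # system status at the forecast initial time
    status_at_verifying: str   # system status at the verification time
    is_special_advisory: bool  # special advisories are skipped (original verified)


def verifiable(case: ForecastCase, gpra: bool = False) -> bool:
    """Return True if a forecast case counts toward the verification sample."""
    if case.is_special_advisory:
        return False
    statuses = (case.status_at_initial, case.status_at_verifying)
    if any(s not in TROPICAL_STATUSES for s in statuses):
        return False   # must be a (sub)tropical cyclone at both times
    if gpra and "TD" in statuses:
        return False   # the GPRA track goal excludes the depression stage
    return True
```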

2009 Atlantic Verification
Values in green exceed all-time records. The 48-h track error for tropical storms and hurricanes only (the GPRA goal) was 69.9 n mi, well below the previous record. The sample is very small (last year 346 forecasts, with 149 verifying at 5 days); the five-day sample is the smallest ever.
[Table: errors by verification time VT (h) and sample size NT, with track error in n mi and intensity error in kt.]
Four- and five-day track errors were almost exclusively along-track (slow).
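For reference, a track error such as the 48-h value above is the great-circle distance between the forecast position and the best-track position, averaged over the verifying cases. A minimal Python sketch (haversine distance in nautical miles; illustrative only, not NHC's operational code):

```python
# Great-circle track error in nautical miles, averaged over a sample.
# Illustrative sketch only; input positions are (lat, lon) in degrees.
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_NMI = 3440.065  # mean Earth radius in nautical miles


def great_circle_nmi(lat1, lon1, lat2, lon2):
    """Haversine distance between two points, in nautical miles."""
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlam = radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlam / 2) ** 2
    return 2 * EARTH_RADIUS_NMI * asin(sqrt(a))


def mean_track_error(pairs):
    """pairs: iterable of (fcst_lat, fcst_lon, best_lat, best_lon) tuples."""
    errors = [great_circle_nmi(flat, flon, blat, blon)
              for flat, flon, blat, blon in pairs]
    return sum(errors) / len(errors)
```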

Atlantic Track Errors vs. 5-Year Mean
The official forecast beat the 5-year mean even though the season's storms were "harder" than normal: OFCL errors were mostly below the 5-yr means even though CLIPER5 errors were above their 5-yr means.

Atlantic Track Error Trends
Errors have been cut in half over the past 15 years; 2009 was the best year ever. Smaller samples give more erratic trends at days 4-5.

Atlantic Track Skill Trends
2009 set skill records at several forecast periods. Is the sharp increase over the past two years due to the greater availability of the ECMWF?
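Skill in these figures is conventionally the percentage improvement of the official forecast error over the no-skill baseline (CLIPER5 for track, Decay-SHIFOR5 for intensity), computed over a homogeneous sample. Expressed as an equation, with overbars denoting mean errors at lead time t:

```latex
% Percent improvement of the official forecast over the CLIPER5 baseline
% at lead time t, over a homogeneous verification sample.
S_{\mathrm{OFCL}}(t) \;=\; 100\% \times
  \frac{\overline{e}_{\mathrm{CLIPER5}}(t) - \overline{e}_{\mathrm{OFCL}}(t)}
       {\overline{e}_{\mathrm{CLIPER5}}(t)}
```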

2009 Track Guidance
Official forecast performance was very close to that of the consensus models. A good year for the FSSE, and the first year of availability for the CMCI, which had a good start indeed. The best dynamical models were the ECMWF and GFS. The UKMET and NOGAPS appear to be serious drags on the consensus, so the TCON/TVCN membership will need to be reevaluated for 2010. BAMD performed poorly (strong shear).

Atlantic Intensity Errors vs. 5-Year Mean
OFCL errors in 2009 were mostly at or above the 5-yr means, but the 2009 Decay-SHIFOR errors were also above their 5-yr means, indicating that the storms behaved unusually.

Atlantic Intensity Error Trends
No progress with intensity.

Atlantic Intensity Skill Trends
Little net change in skill over the past several years.

2009 Intensity Guidance
The best model at every time period was statistical. A very good year for LGEM, which handles changes in the environment better than SHIPS.
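For context on why LGEM responds better to a changing environment: its published formulation (DeMaria 2009) is a logistic growth equation whose growth rate is re-estimated from the environmental predictors through the forecast period, rather than from along-track averages as in SHIPS. A sketch of that governing equation, after DeMaria (2009):

```latex
% LGEM governing equation (logistic growth form, after DeMaria 2009):
% V is the intensity, V_mpi the maximum potential intensity along the
% forecast track, kappa a growth rate estimated from environmental
% predictors that can vary during the forecast, and n a fitted exponent.
\frac{dV}{dt} \;=\; \kappa\, V
  \left[ 1 - \left( \frac{V}{V_{\mathrm{mpi}}} \right)^{n} \right]
```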

2009 East Pacific Verification
[Table: errors by verification time VT (h) and sample size NT, with track error in n mi and intensity error in kt.]
Values in green tied or exceeded all-time lows.

EPAC Track Error Trends
Since 1990, track errors have decreased by 30%-50%.

EPAC Track Skill Trends
Although errors were higher in 2009, skill was mixed.

EPAC Intensity Error Trends
Errors look pretty flat.

EPAC Intensity Skill Trends
Skill also seems flat in this decade.

Genesis Forecast Verification
A relatively steady increase of the verifying percentage with the forecast percentage indicates good reliability; forecasters can distinguish the likelihood of genesis in 10% increments. NHC will provide public genesis forecasts to the nearest 10% in 2010. There is a fairly systematic low bias in the east Pacific.
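The reliability check described above can be reproduced by binning the forecast probabilities in 10% increments and comparing each bin's mean forecast probability with the observed genesis frequency. A minimal Python sketch, with a hypothetical input format (parallel lists of probabilities and 0/1 outcomes):

```python
# Reliability table for probabilistic genesis forecasts (sketch only).
from collections import defaultdict


def reliability_table(forecast_probs, outcomes, bin_width=0.10):
    """forecast_probs: probabilities in [0, 1]; outcomes: 0/1 flags
    (1 = genesis occurred within the forecast window)."""
    nbins = int(round(1.0 / bin_width))
    bins = defaultdict(list)
    for p, o in zip(forecast_probs, outcomes):
        idx = min(int(p / bin_width), nbins - 1)  # put p = 1.0 in the top bin
        bins[idx].append((p, o))
    table = []
    for idx in sorted(bins):
        cases = bins[idx]
        mean_fcst = sum(p for p, _ in cases) / len(cases)
        obs_freq = sum(o for _, o in cases) / len(cases)
        table.append((idx * bin_width, len(cases), mean_fcst, obs_freq))
    return table  # good reliability: obs_freq tracks mean_fcst in every bin
```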

Experimental Rapid Intensification Forecast Verification
An in-house probabilistic forecast of the chance of 30 kt or more of intensification over the next 24 hours (climatology = 5%). The EPAC curve, where low probabilities have a high bias while high probabilities have a low bias, reflects underconfidence on the part of the forecasters (i.e., we know more than we think we do).
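The verifying events for such a forecast can be derived directly from the best track: an RI event occurs when the intensity increases by at least 30 kt over the following 24 h. A minimal sketch, assuming a hypothetical 6-hourly (time, intensity) best-track format; the resulting 0/1 outcomes can be paired with the forecast probabilities and fed to the reliability table above:

```python
# Derive 0/1 rapid-intensification outcomes from a best track (sketch only).
def ri_events(best_track, threshold_kt=30, window_hr=24):
    """best_track: sorted list of (hours_since_start, intensity_kt), 6-hourly."""
    lookup = dict(best_track)
    events = []
    for t, v in best_track:
        v_later = lookup.get(t + window_hr)
        if v_later is not None:
            events.append((t, 1 if (v_later - v) >= threshold_kt else 0))
    return events
```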

Summary
• A very small Atlantic sample. OFCL track errors set records for accuracy from 24 to 72 h; errors continue their downward trends, and skill was also up.
• Good years for the FSSE, GFSI, EMXI, and CMCI (its first year of availability) in the Atlantic.
• Not much new with intensity; the best model was statistical (LGEM).
• Genesis forecasts show good reliability; NHC will provide these forecasts in 10% increments in 2010.

Forecast Error by Parameter
Radii errors look good (especially the 64-kt radii), but this is misleading...

Forecast Skill by Parameter
Track forecast skill is very high (and higher at 5 days than at 24 h). Intensity skill is low but fairly steady with lead time. Wind radii skill falls off significantly with lead time, to near zero by 72 h. The 30% skill of the 36-h hurricane-force (64-kt) radii forecasts represents an accuracy improvement of only 3-4 n mi over DRCL, which is meaningless in the context of the ~75 n mi mean track error.
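A back-of-envelope check of the 64-kt radii statement (an inference from the numbers on this slide, not a quoted value): using the skill convention from earlier,

```latex
% With S = (e_DRCL - e_OFCL) / e_DRCL = 0.30 and an improvement
% e_DRCL - e_OFCL of 3 to 4 n mi, the implied baseline error is
e_{\mathrm{DRCL}} \;=\; \frac{e_{\mathrm{DRCL}} - e_{\mathrm{OFCL}}}{S}
  \;\approx\; \frac{3\ \text{to}\ 4\ \mathrm{n\,mi}}{0.30}
  \;\approx\; 10\ \text{to}\ 13\ \mathrm{n\,mi}
```

so even a perfect 64-kt radii forecast could improve on DRCL by only about 10-13 n mi, small next to the ~75 n mi mean track error.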

Forecast Change Variance Explained at Longest Lead
There is much more information in the intensity forecasts: intensity forecasts have a very large "dynamic range", while the vast majority of hurricane radii forecasts are within 20 n mi of the initial radii (so the forecast change in the radii is much smaller than the typical track error).
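"Variance explained" is not defined on the slide; one common way to express it (an assumption here, not necessarily the metric used) is the fraction of the variance of the observed change from the initial value that is captured by the forecast change:

```latex
% Assumed definition (not quoted from the slide): Delta x_f is the forecast
% change and Delta x_o the observed change from the initial value.
R^{2} \;=\; 1 \;-\;
  \frac{\overline{\left(\Delta x_{f} - \Delta x_{o}\right)^{2}}}
       {\overline{\left(\Delta x_{o} - \overline{\Delta x_{o}}\right)^{2}}}
```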

Forecast Parameter Scale Analysis
Tropical Cyclone Forecast Parameter Scale Analysis (based on Atlantic basin statistics)

Parameter  | Analysis Uncertainty                                  | Verification Uncertainty (at 36 h)                    | Forecast Skill at Longest Lead     | Variance Explained at Longest Lead
Track      | <1% (30° ± 0.3°)                                      | 20% (75 n mi ± 15 n mi)                               | 45%                                | Very high
Intensity  | 10% (100 kt ± 10 kt)                                  | ~50% (13 kt ± 7 kt*)                                  | 10%                                | 60%
Wind Radii | 25-40% (34 kt: 100 n mi ± 25 n mi; 64 kt: 25 n mi ± 10 n mi) | ~100% (34 kt: 30 n mi ± 25 n mi; 64 kt: 10 n mi ± 10 n mi) | 0% (34 kt/72 h); 30% (64 kt/36 h)  | 17-34% (34 kt); 8-12% (64 kt)

* Assumes a 10% error applied to a 70-kt "typical" TC.

An upper-bound estimate of the impact of radii forecast uncertainty on wind speed probabilities (Francis 2004).
[Figure panels: Control (Radii-CLIPER perturbations) vs. radii assumed perfect.]
Conclusion: knowing the error characteristics of the radii forecasts is of limited practical use, because the probabilities are dominated by the track uncertainty.
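The Monte Carlo idea behind that comparison can be sketched in a few lines of Python: perturb the forecast center position and the wind radius with errors of realistic relative size, and count how often a fixed point falls inside the wind field. The Gaussian errors and the magnitudes below are illustrative stand-ins for the historical error distributions sampled operationally; this is not the NHC code.

```python
# Monte Carlo exceedance probability for a fixed point (illustrative sketch).
import random


def exceedance_probability(point_dist_nmi, radius_nmi,
                           track_err_sd=75.0, radius_err_sd=15.0, n=10000):
    """Probability that a point at the given distance from the forecast
    center lies within the (perturbed) 34-kt wind radius."""
    hits = 0
    for _ in range(n):
        # perturb the center position (track error dominates at longer leads)
        dx = random.gauss(0.0, track_err_sd)
        dy = random.gauss(0.0, track_err_sd)
        # perturb the wind radius (radii errors are comparatively small)
        r = max(0.0, radius_nmi + random.gauss(0.0, radius_err_sd))
        # distance from the perturbed center to the fixed point
        d = ((point_dist_nmi - dx) ** 2 + dy ** 2) ** 0.5
        hits += d <= r
    return hits / n

# Setting radius_err_sd to 0 ("radii assumed perfect") changes the result
# little, because the spread is dominated by track_err_sd, which is the
# slide's conclusion.
```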

Considerations for Forecast Extension
• Track (location) is very easy to determine and easy to verify; forecasts have high skill and high information content. Forecasts have high utility. An excellent candidate for extending lead time.
• Intensity is readily determined and can be meaningfully verified most of the time. Forecasts have little skill (relative to climatology and persistence) but high information content. Forecasts have high utility. A decent candidate for extension.
• Wind radii are difficult to determine and virtually impossible to verify quantitatively; the 34-kt radii have no skill at extended leads and low information content. Forecasts have low utility because of the much larger track errors. A lousy candidate for extension.