National Hurricane Center 2008 Forecast Verification
James L. Franklin, Branch Chief, Hurricane Specialists Unit, National Hurricane Center
2009 Interdepartmental Hurricane Conference



Verification Rules
• Verification rules are unchanged for 2008. Results presented here for both basins are final.
• A system must be a tropical or subtropical cyclone at both the forecast initial time and the verification time. All verifications include the depression stage, except for the GPRA goal verification.
• Special advisories are ignored (the original advisory is verified).
• Skill baselines are recomputed after the season from operational compute data. Decay-SHIFOR5 is the intensity skill benchmark.
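To make these rules concrete: track error is the great-circle distance between the forecast position and the verifying best-track position, and intensity error is the absolute difference in maximum sustained wind. A minimal sketch of those metrics in Python (the field names here are illustrative, not an NHC data format):

    import math

    EARTH_RADIUS_NMI = 3440.065  # mean Earth radius in nautical miles

    def great_circle_nmi(lat1, lon1, lat2, lon2):
        """Great-circle (haversine) distance in n mi between two points."""
        lat1, lon1, lat2, lon2 = map(math.radians, (lat1, lon1, lat2, lon2))
        a = (math.sin((lat2 - lat1) / 2.0) ** 2
             + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2.0) ** 2)
        return 2.0 * EARTH_RADIUS_NMI * math.asin(math.sqrt(a))

    def track_error_nmi(forecast, best_track):
        """Track error: forecast position vs. verifying best-track position."""
        return great_circle_nmi(forecast["lat"], forecast["lon"],
                                best_track["lat"], best_track["lon"])

    def intensity_error_kt(forecast, best_track):
        """Intensity error: absolute difference in max sustained wind (kt)."""
        return abs(forecast["vmax_kt"] - best_track["vmax_kt"])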

2008 Atlantic Verification
VT (h) | NT | TRACK (n mi) | INT (kt)
[The numeric values by forecast time were not preserved in the transcript.]
Values in green exceed all-time records. *The 48 h track error for TS and H only (the GPRA goal) was 87.5 n mi, just off last year's record of 86.2 n mi.

Atlantic Track Errors by Storm
Track errors for forecasts of Ike were relatively low, while those for Josephine and Omar were relatively high.

Atlantic Track Errors vs. 5-Year Mean
The official forecast was better than the 5-year mean, even though the season's storms were "harder" than normal.

Atlantic Track Error Trends
Errors have been cut in half over the past 15 years; 2008 was the best year ever.

Atlantic Track Skill Trends
2008 was the most skillful year on record at all time periods.
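The skill plotted in these trends is conventionally expressed as the percentage improvement over a no-skill baseline (CLIPER5 for track, Decay-SHIFOR5 for intensity). A one-line sketch:

    def skill_pct(forecast_error, baseline_error):
        """Percent improvement over the skill baseline; positive values mean
        the forecast beat the baseline (CLIPER5 for track, Decay-SHIFOR5
        for intensity)."""
        return 100.0 * (baseline_error - forecast_error) / baseline_error

Because the baseline is recomputed from operational data each season (per the verification rules above), a "harder" season raises the baseline error rather than penalizing the forecaster.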

Atlantic 5-Year Mean Track Errors
Track errors increase by about … n mi per day. The 48 h mean error is below 100 n mi for the first time. Intensity errors level off because intensity is a much more bounded problem. The new 5-yr means are slightly larger than last year's.

OFCL Error Distributions and Cone Radii
Only modest reductions in the cone radii.
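For context, the cone radii are derived from the OFCL error distributions: each forecast period's circle is sized so that roughly two-thirds of official track errors from the preceding five seasons fall inside it. A sketch of that calculation (the 67th-percentile convention reflects NHC practice; the function itself is illustrative):

    import numpy as np

    def cone_radius_nmi(official_track_errors_nmi, percentile=67):
        """Cone-of-uncertainty radius for one forecast period: the error
        distance enclosing about two-thirds of the previous five years'
        official (OFCL) track errors."""
        return float(np.percentile(official_track_errors_nmi, percentile))

This is why the cone shrinks only when the error distribution itself improves: "only modest reductions" in the radii follow directly from modest changes in the underlying 5-year error sample.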

2008 Track Guidance
Official forecast performance was very close to that of the consensus models. The best model was ECMWF, which was as good as or better than the consensus. BAMD was similar to the poorest of the 3-D models (UKMET). AEMI was excluded due to insufficient availability (less than 67% of the time at 48 or 120 h).

2008 Track Guidance
Examining the major dynamical models to increase the sample size: ECMWF was best at all time periods (as opposed to last year, when it was mediocre). GFDL was also better than last year (and better than HWRF). As we've seen before, GFDL skill declines relatively sharply at days 4-5. NOGAPS and GFNI again performed relatively poorly; GFNI upgrades were delayed.

GFDL-HWRF Comparison
A much larger sample than last year shows that HWRF is competitive with, but has not quite caught up to, the GFDL. A consensus of the two is (mostly) better than either alone.

Guidance Trends
A return to more "traditional" relationships among the models after the very limited sample of 2007.

Guidance Trends
Relative performance at 120 h is more variable, although GFSI has been strong every year except …. GFDL is not a good performer at the longer ranges.

Consensus Models
The best consensus model was TVCN, the variable-member consensus that includes EMXI. It does not appear that the "correction" process was beneficial.

Consensus Models
For the third year in a row, AEMI trailed its control run. Multi-model ensembles remain far more effective for TC forecasting. The ECMWF ensemble mean is also not as good as its control run (EEMN vs. EMX).
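A track consensus such as TVCN is, at heart, an unweighted average of whichever member forecasts are available at each lead time. A minimal sketch (plain lat/lon averaging, ignoring longitude wraparound, which is safe in the Atlantic):

    def consensus_position(member_positions):
        """Unweighted track consensus at one lead time.

        member_positions: list of (lat, lon) forecast positions from the
        available members; a variable-membership consensus such as TVCN
        averages whatever subset is present."""
        n = len(member_positions)
        lat = sum(p[0] for p in member_positions) / n
        lon = sum(p[1] for p in member_positions) / n
        return lat, lon

The "corrected consensus" models mentioned above attempt to adjust this simple average using predictors of past error; the 2008 results suggest the plain average was hard to beat.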

Atlantic Intensity Errors vs. 5-Year Mean
OFCL errors in 2008 were at or below the 5-yr means, but the 2008 Decay-SHIFOR errors were also at or below their 5-yr means, so there was not much change in skill.

Atlantic Intensity Error Trends
No progress with intensity.

Atlantic Intensity Skill Trends
Little net change in skill over the past several years.

2008 Intensity Guidance
A split decision between the dynamical and statistical models. The new ICON consensus, introduced this year, was very successful, beating OFCL except at 12 h. OFCL adds most value over the guidance at the shorter ranges. There was a modest high bias in 2008 (2007 had a low bias).
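The Atlantic intensity summary at the end of this presentation identifies the four consensus members as DSHP, LGEM, GHMI, and HWRF; an intensity consensus of that kind is just an equally weighted average. A minimal sketch:

    def intensity_consensus_kt(dshp, lgem, ghmi, hwrf):
        """Equally weighted four-member intensity consensus (inputs in kt)."""
        return (dshp + lgem + ghmi + hwrf) / 4.0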

2008 Intensity Guidance
HWRF was competitive through 3 days, with issues at the longer times. Although the sample was smaller, there was a hint of this last year as well. Cannot shut GFDL off yet!

2008 Intensity Guidance
When the complication of timing landfall (i.e., track dependence) is removed, OFCL performs better relative to the guidance. The dynamical models are relatively poor performers.

2008 East Pacific Verification
VT (h) | NT | TRACK (n mi) | INT (kt)
[The numeric values by forecast time were not preserved in the transcript.]
Values in green tied or exceeded all-time lows.

2008 vs. 5-Year Mean
A slightly easier than average year (CLIPER errors were below their 5-yr mean). Official errors were also below their 5-yr means (by a slightly larger margin).

EPAC Track Error Trends
Since 1990, track errors have decreased by 30%-50%.

EPAC Track Skill Trends
Skill continues to improve.

OFCL Error Distributions and Cone Radii
An approximately 10% reduction in the cone radii.

2008 Track Guidance
EMXI, EGRI, AEMI, FSSE, GUNA, and TCON were excluded due to insufficient availability. The official forecast beat the TVCN consensus at the later periods and beat each individual model. OFCL was far superior to the model guidance at the longer time periods (it also beat the consensus at 4-5 days last year).

2008 Track Guidance
Relaxing the selection criteria to see all the major dynamical models: ECMWF was best overall. OFCL is clearly doing something right.

GFDL-HWRF Comparison
Overall, HWRF performance was not as good as the GFDL's. The consensus was better than either one alone.

Consensus Models
The best was the variable consensus that includes EMXI. The corrected consensus models did not excel.

Consensus Models
Performance was similar to the Atlantic, with added value at the longer lead times.

Eastern North Pacific Intensity Errors vs. 5-Year Mean
In 2008, both the OFCL and Decay-SHIFOR5 errors were below their 5-yr means. No dramatic changes in skill.

EPAC Intensity Error Trends
Perhaps just a hint of improvement?

EPAC Intensity Skill Trends
Skill does seem to be inching upward…

2008 Intensity Guidance
OFCL mostly beat the individual models, and even the consensus at some time periods. OFCL wind biases turn sharply negative at … h, which was also true in …. Statistical models outperformed dynamical models. This year, DSHP beat LGEM (a flip from 2007).

2008 Intensity Guidance
HWRF was competitive through 3 days, with issues at the longer times. Although the sample was smaller, there was a hint of this last year as well. Cannot shut GFDL off yet!

Genesis Forecast Verification

Genesis Forecast Verification
Lead-Time Analysis for Disturbances that became Tropical Cyclones: average forecast genesis probability by time relative to genesis.

Time w.r.t. genesis:        -48 h  -42 h  -36 h  -30 h  -24 h  -18 h  -12 h  -6 h
Atlantic (avg. %):            31     …     34     37     41     45     53    61
E. North Pacific (avg. %):    29    30     28     30     31     35     42    52

Genesis Bins for 2009
For each basin (Atlantic and eastern North Pacific), genesis forecasts were binned by probability range: 0-20 (Low), (Med), and (High), and each bin was verified by % Expected, % Verified, and # Forecasts. [The Med/High ranges and the numeric entries were not preserved in the transcript.]
NHC will issue operational public quantitative/categorical genesis forecasts in 2009 and include categorical forecasts in the text Tropical Weather Outlook.
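The bin verification compares the mean forecast probability in each bin ("% Expected") with the observed genesis frequency ("% Verified"). A sketch of that bookkeeping (only the 0-20 Low range survives in the transcript, so the Med and High bounds below are placeholders, not the operational definitions):

    from statistics import mean

    # Placeholder bin edges; only "0-20 (Low)" is preserved in the slide text.
    BINS = [("Low", 0, 20), ("Med", 30, 50), ("High", 60, 100)]

    def verify_genesis_bins(forecasts):
        """forecasts: iterable of (probability_pct, became_tc) pairs, one per
        outlook. Returns per-bin mean forecast probability ('% Expected'),
        observed genesis frequency ('% Verified'), and forecast count."""
        forecasts = list(forecasts)  # allow generators; we iterate per bin
        results = {}
        for name, lo, hi in BINS:
            in_bin = [(p, o) for p, o in forecasts if lo <= p <= hi]
            if in_bin:
                results[name] = {
                    "% Expected": mean(p for p, _ in in_bin),
                    "% Verified": 100.0 * mean(1.0 if o else 0.0 for _, o in in_bin),
                    "# Forecasts": len(in_bin),
                }
        return results

A well-calibrated forecast puts "% Verified" close to "% Expected" in every bin.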

Summary: Atlantic Track
• OFCL track errors set records for accuracy at all time periods. Errors continue their downward trends, and skill was also up.
• OFCL track forecast skill was very close to that of the consensus models, but was beaten by EMXI.
• EMXI and GFDL provided the best dynamical track guidance. UKMET, which performed well in 2007, did not do so in 2008. NOGAPS lagged again.
• HWRF has not quite attained the skill of the GFDL, but is competitive. A combination of the two is better than either alone.
• The best consensus model was TVCN (the variable consensus with EMXI). Multi-model consensus: good. Single-model consensus: not so good. It was not a good year for the "corrected consensus" models.

Summary: Atlantic Intensity
• OFCL errors in 2008 were below the 5-yr means, but the 2008 Decay-SHIFOR errors were also below their 5-yr means, so there was no real change in skill.
• Still no progress with intensity errors; OFCL errors have remained essentially unchanged over the last 20 years. Skill has been relatively flat over the past 5-6 years.
• A split decision between the statistical and dynamical guidance. The simple four-model consensus (DSHP/LGEM/HWRF/GHMI) beat everything else, including the corrected consensus model FSSE.

Summary: East Pacific Track
• OFCL track errors set records at … h.
• OFCL beat the individual dynamical models, and also beat the consensus at 4 and 5 days.
• GFDL, HWRF, and ECMWF were strong performers, although ECMWF had trouble holding on to systems through 5 days.
• There continues to be a much larger difference between the dynamical models and the consensus in the eastern North Pacific than in the Atlantic, which is suggestive of different error mechanisms in the two basins.

Summary: East Pacific Intensity
• OFCL mostly beat the individual models, and even the consensus at 12 and 36 h. OFCL wind biases turned sharply negative at … h, which was also true in ….
• The best model at most time periods was a statistical model. DSHP provided the most skillful guidance overall. HWRF continued to have trouble in this basin. The four-model intensity consensus performed very well.