Augmentation of Early Intensity Forecasting in Tropical Cyclones. Teams: UA: Elizabeth A. Ritchie, J. Scott Tyo, Miguel F. Piñeros, G. Valliere-Kelley; NRL: Jeffrey Hawkins, Richard Bankert, Kim Richardson

Presentation transcript:

Augmentation of Early Intensity Forecasting in Tropical Cyclones. Teams: UA: Elizabeth A. Ritchie, J. Scott Tyo, Miguel F. Piñeros, G. Valliere-Kelley; NRL: Jeffrey Hawkins, Richard Bankert, Kim Richardson; NHC: James Franklin. NOPP Review, February

Overview
1. Introduction and Motivation
2. Data
3. Methodology
4. Intensity Estimation
5. Mitigation of issues/problems
6. Other work to be done

1. Motivation
[Satellite image of cloud clusters over the Atlantic and Gulf of Mexico, roughly 90 W–45 W, 10 N–30 N]
Question: Can we characterize the axisymmetry (or lack thereof) of these cloud clusters and relate that to their stage of development/intensity? Can we do this at low intensities?

2. Data
Atlantic and Gulf of Mexico: infrared imagery (GOES-E); spatial resolution 5 km/pixel, temporal resolution 30 min, 10.7 μm. This presentation will focus on this basin.
Eastern North Pacific: infrared imagery (GOES-W); spatial resolution 5 km/pixel, temporal resolution 30 min, 10.7 μm.
Western North Pacific: infrared imagery (MTSAT); spatial resolution 5 km/pixel, temporal resolution 30 min, 10.7 μm.

3. Methodology
[Panels: IR image, gradient, detail; artificial vortex, gradient, detail]

Deviation Angle Calculation
If we choose some "reference pixel" in the IR image and make it the center of a 350-km-radius circle, we can draw radials from that center to every pixel within the circle. The deviation angle at a pixel is the angle between the gradient vector at that pixel and the radial from the center pixel. The deviation angles for every pixel within the circle are accumulated into a histogram, and the variance is calculated.
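As a rough sketch, the calculation for a single reference pixel might look like the following Python (assuming a brightness-temperature array on the 5 km grid described on the Data slide; all names are illustrative, and details of the published method, such as gradient-sign handling, are omitted):

```python
import numpy as np

def dav_at_reference(bt, ref_row, ref_col, radius_km=350.0, pixel_km=5.0):
    """Variance (deg^2) of the deviation angles inside a 350-km circle."""
    gy, gx = np.gradient(bt.astype(float))       # image gradient field
    rows, cols = np.indices(bt.shape)
    dy, dx = rows - ref_row, cols - ref_col
    dist_km = np.hypot(dy, dx) * pixel_km
    inside = (dist_km > 0) & (dist_km <= radius_km)

    grad_angle = np.arctan2(gy, gx)              # gradient direction at each pixel
    radial_angle = np.arctan2(dy, dx)            # radial from the reference pixel
    dev = np.degrees(grad_angle - radial_angle)
    dev = (dev + 180.0) % 360.0 - 180.0          # wrap into [-180, 180)
    return float(np.var(dev[inside]))
```

For a perfectly axisymmetric cloud system the gradients are radial, so the variance approaches zero; a disorganized scene spreads the deviation angles over the full circle, pushing the variance toward the uniform-distribution value of 360²/12 ≈ 10 800 deg².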

Departure Angle Variance
Hurricane Rita (2005). [Panels: 09/18/ :15 UTC, 25 kt; 09/19/ :15 UTC, 55 kt; 09/21/ :15 UTC, 130 kt]

Choosing a reference point in a real cloud system
Hurricane Katrina (2005). Center location? [Panels: Aug 24 00:15 UTC, 30 kt; Aug 24 21:15 UTC, 45 kt; Aug 28 00:15 UTC, 100 kt]
How do we choose the perfect center pixel, especially at early times when even the experts find it tough?

Choosing a reference point in a real cloud system
The short answer is "we don't"!

Departure Angle Variance
Calculate the variance using each pixel as the reference center, in turn. Then map each variance back to the pixel that was the reference center, creating a "map of variances" that corresponds to the original IR image. Best center? Best radius? A center finder? [Panels (a)–(d)]

Departure Angle Variance (DAV)
Hurricane Wilma (2005). [Panels: 25 kt, 35 kt, 130 kt]
Extract the minimum value.
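The variance map and its minimum from the last two slides can be sketched as a brute-force loop (a vectorized or subsampled version would be needed at full image size; names and the pixel-unit radius are illustrative):

```python
import numpy as np

def dav_map(bt, radius_px=40):
    """DAV computed with every pixel, in turn, as the reference center."""
    gy, gx = np.gradient(bt.astype(float))
    grad_angle = np.arctan2(gy, gx)
    rows, cols = np.indices(bt.shape)
    out = np.empty(bt.shape)
    for r in range(bt.shape[0]):
        for c in range(bt.shape[1]):
            dy, dx = rows - r, cols - c
            dist = np.hypot(dy, dx)
            inside = (dist > 0) & (dist <= radius_px)
            dev = np.degrees(grad_angle - np.arctan2(dy, dx))
            dev = (dev + 180.0) % 360.0 - 180.0
            out[r, c] = np.var(dev[inside])
    return out

def min_dav(vmap):
    """The map minimum is the scene's DAV value and an implicit center fix."""
    idx = np.unravel_index(np.argmin(vmap), vmap.shape)
    return float(vmap[idx]), idx
```

On an axisymmetric vortex the minimum sits at the vortex center, which is why the same map doubles as a center finder.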

DAV Time Series
Hurricane Rita (2005). [Plot: intensity, unfiltered variance, and filtered variance]
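The deck does not say which low-pass filter produces the "filtered" curve, so the sketch below uses a simple centered moving average over the 30-min samples as a stand-in (function name and window width are assumptions):

```python
import numpy as np

def smooth_dav(series, window=25):
    """Centered moving average; `window` is an odd count of 30-min samples."""
    kernel = np.ones(window) / window
    pad = window // 2
    padded = np.pad(np.asarray(series, dtype=float), pad, mode="edge")
    return np.convolve(padded, kernel, mode="valid")
```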

2D Histogram: DAV – Intensity
- All DAV time series from the training set are mapped to the (NHC) best-track intensity.
- Because of the oscillations in the filtered DAV signal, several DAV values can map to a single best-track intensity value (the best track has 6-h and 5-kt resolution).
- The median of all the DAV values that map to a single best-track intensity estimate is used to create the following scatter plot, shown here for the Atlantic.
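The many-to-one mapping above can be sketched as follows: for each 6-hourly best-track point, take the median of the 30-min DAV samples that fall in its window (function name, time units, and window width are illustrative assumptions):

```python
import numpy as np

def median_dav_per_fix(dav_times_h, dav_values, bt_times_h, window_h=6.0):
    """Median DAV within +/- window_h/2 of each best-track time (hours)."""
    t = np.asarray(dav_times_h, dtype=float)
    v = np.asarray(dav_values, dtype=float)
    out = []
    for bt_t in bt_times_h:
        sel = np.abs(t - bt_t) <= window_h / 2.0
        out.append(np.median(v[sel]) if sel.any() else np.nan)
    return np.array(out)
```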

2D Histogram: DAV – Intensity
[2-D histogram with 20 deg² × 5 kt bins]

2D Histogram: DAV – Intensity
Example: Hurricane Jeanne (2004). Red dots: Jeanne's values. Note the diurnal oscillation.

4. Results of testing
Two tests:
1. Randomly remove 30% of cases for testing: train with the remaining 70%, then estimate intensity for the withheld 30%.
2. Train using the full training set; test with an independent season.

4. Results of testing
Training: 70% of the cases; testing: the remaining 30%. RMSE: 14.7 kt

Root Mean Square Error
Training: the full training set; testing: 2009. RMSE: 24.8 kt. What happened? It turns out that if we remove these two cases from the set, the RMSE drops to …

Root Mean Square Error
Training: the full training set; testing: 2009 with two cases removed. RMSE: 12.9 kt. That is, most of the error is contained in these two cases. What's so special about them?
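The scores quoted here are root-mean-square errors; because errors enter squared, a couple of large misses dominate a small sample. As an illustrative (not actual) reconstruction, six errors near 13 kt plus two near 44 kt give an eight-case RMSE near 25 kt:

```python
import numpy as np

def rmse(estimates, truth):
    """Root-mean-square error between intensity estimates and best track."""
    e = np.asarray(estimates, dtype=float) - np.asarray(truth, dtype=float)
    return float(np.sqrt(np.mean(e ** 2)))
```

This is why dropping just two storms moves the score from 24.8 kt to 12.9 kt.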

Wind Speed Estimation – 2009 bad cases
Tropical Storm Ana (0615 UTC, August) and Tropical Storm Erika (0615 UTC, September 1). These storms are very weak and sheared, with a very circular but offset cloud signature. The DAV is "designed" to calculate a low variance for these types of clouds, and low variance = high intensity, so the system overestimated these two storms by a lot!

5. Mitigation of bad cases and other issues
Inter-annual and intra-seasonal variability: One possible issue for automated statistical systems like this is that when a particular year tests much worse than the random-sample testing, there may be inter-annual or intra-seasonal variations in that year that are not well captured in the training set (e.g., no prior years with this type of sheared case). To check this, we fitted sigmoids to individual years, and then to individual months, over the 5-year period, and simply looked at how they varied. Here we show the year-to-year comparison.
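The deck does not give the sigmoid's functional form; the sketch below assumes a four-parameter logistic curve mapping DAV (deg²) to intensity (kt) and fits it with SciPy. The parameterization and initial guesses are illustrative, not the authors' exact model.

```python
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(dav, vmax, k, d0, v0):
    """Intensity decreases from ~v0+vmax toward v0 as DAV grows."""
    z = np.clip(k * (np.asarray(dav, dtype=float) - d0), -50.0, 50.0)
    return v0 + vmax / (1.0 + np.exp(z))

def fit_dav_intensity(dav, intensity):
    """Least-squares fit of the DAV-to-intensity curve."""
    p0 = [120.0, 0.005, 2000.0, 20.0]   # rough starting point (kt, deg^-2, deg^2, kt)
    popt, _ = curve_fit(sigmoid, dav, intensity, p0=p0, maxfev=10000)
    return popt
```

Fitting one such curve per year (or per month) and overlaying them is a cheap way to see whether the DAV-intensity relationship drifts between seasons.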

Inter-annual and Intra-seasonal Variability
These are the curves for individual years. The 5-year fit is the black line that runs smack in the middle of the pack. You can see that there is little variation from year to year in these curves.

Inter-annual and Intra-seasonal Variability
In other words, it seems unlikely that inter-annual and intra-seasonal variations exist in the DAV signal. We look elsewhere for solutions to the "shear" problem.

Wind Speed Estimation – 2009 bad cases
An option to "fix" this kind of problem (when the real center of circulation is exposed and away from the cloud's center of mass): use the "operational center fix" as the center pixel for the DAV calculation, and calculate the DAV only within a small area around that center.

Use "best track centers"
Training on the full training set and testing on 2010, using the NHC "best track center" to locate the DAV values: RMSE: kt. Still re-calculating the 2009 test to see whether this improves the two sheared cases, Ana and Erika. At least it hasn't made things worse!

6. What about the other basins?
Eastern North Pacific:
- Processed 2005 and 2006 (before pulling personnel to program the "best track center" algorithm).
Western North Pacific:
- Located MTSAT imagery (NRL collaborators).
- Working on the best strategy for processing these data.
- New strategy: pre-processing of imagery is done locally at NRL; sectored and mapped images are downloaded to UA for training.

6. What about the low wind speeds?
At low wind speeds the ground truth for best-track intensity estimates is rather lacking, since these systems tend to form far out in the eastern Atlantic (in the Atlantic basin case). Also, there are far fewer best-track estimates for these low winds than for, say, the kt range.

What about the low wind speeds?
This means that the spread in DAV relative to the best-track estimates at low wind speeds is much higher, giving low confidence in the validity of the parametric DAV-intensity curve at these wind speeds.
Two methodologies we are pursuing to mitigate this problem:
1. Run a thousand realizations of real cases using a high-resolution mesoscale model, from which we can build our own "best track intensity" database and match it to simulated DAV values. This has the advantage that we obtain 30-min intensity estimates to match the 30-min DAV calculations. Already started on 2010 cases.
2. Use the subset of observations where there are aircraft reconnaissance intensities at the low wind speeds to build the relationship. Will there be enough for the parametric curve to be robust? We will see!

7. Summary
- A completely objective technique that characterizes the structure of a cloud system in terms of its departure from axisymmetry.
- Correlates very well with intensity.
- Created a set of parametric relationships between the DAV and intensity based on the multi-year training set.
- Testing with independent datasets:
  - Best result: RMSE = 14.7 kt for randomly withheld TCs.
  - 2009 result: RMSE = 24.8 kt for 8 cases; RMSE = 12.9 kt for 6 cases (sans Ana and Erika).
  - Using NHC best-track centers, 2010 result: kt.

Future Work
- Working on the other two basins.
- Have two ideas for obtaining the low-wind-speed estimates.
- Also (suggested by J. Hawkins): bin by cloud scene type and score the estimator for each type, to give confidence values on the intensity estimate (a potential input to SATCON).
- Easier tasks: test sensitivity to image resolution (5 vs. 10 km); test a different filter on the DAV signal.

Thank you
Piñeros, M. F., E. A. Ritchie, and J. S. Tyo, 2008: Objective measures of tropical cyclone structure and intensity change from remotely sensed infrared image data. IEEE Transactions on Geoscience and Remote Sensing, 46.
Piñeros, M. F., E. A. Ritchie, and J. S. Tyo, 2010: Detecting tropical cyclone genesis from remotely sensed infrared image data. IEEE Geoscience and Remote Sensing Letters, 7.
Piñeros, M. F., E. A. Ritchie, and J. S. Tyo, 2011: Estimating tropical cyclone intensity from infrared image data. Wea. Forecasting (in review).