TOULOUSE (FRANCE), 5-9 September 2005 OBJECTIVE VERIFICATION OF A RADAR-BASED OPERATIONAL TOOL FOR IDENTIFICATION OF HAILSTORMS I. San Ambrosio, F. Elizaga and F. Martín Analysis and Forecasting Techniques Department (STAP) INM, SPAIN

CONTENTS Introduction. Hail data collection. Calibration of the hail module. Verification methodology. Parameters performance. Verification of the operational thresholds. Conclusions.

INTRODUCTION (I) INM radar network: –14 C-band radars. –Normal (N) and Doppler (D) modes. –Normal mode: Cartesian volume with 12 CAPPIs; resolution: 2x2 km and 10 min. –Doppler mode: 6 CAPPIs; resolution: 1x1 km and 10 min. (Figures: radar sites; radar coverage in normal mode.)

INTRODUCTION (II) Operational aim of the Hail Module: to support decision making in nowcasting activities. Based upon: –The operational application for convective monitoring. –HIRLAM (INM) model data. Hail algorithms used by this module: –Density of VIL (DVIL). –The Waldvogel technique, to obtain POH (Probability Of Hail of any size). –The Hail Detection Algorithm (HDA), to obtain POSH (Probability Of Severe Hail). Severe hail is hail larger than 19 mm in diameter.
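For reference, the two techniques behind these parameters are not defined on the slides; in their standard forms, DVIL is the VIL density, i.e. DVIL = VIL / H_top (vertically integrated liquid divided by the echo-top height), and the Waldvogel technique derives POH from the height of the 45 dBZ echo top above the freezing level, ΔH = H(45 dBZ) − H(0 °C), through an empirical lookup that is not reproduced here.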

HAIL DATA COLLECTION (I) Information about hailstorms during the years 2001, 2002 and 2003: –Location and time of the hailstorms. –Hail size, split into two categories: severe or non-severe. Sources of this information: –Media reports: press, radio, TV, internet. –The Atmospheric Research Group (University of León). –Event summaries from the ADV ("Agrupació de Defensa Vegetal de les Terres de Ponent"), a network with 170 hailpads in the SW of Lleida ("Zona d'Actuació").

HAIL DATA COLLECTION (II) (figure slide)

HAIL DATA COLLECTION (III): thunderstorms without hail It is very difficult to obtain reliable information about these events, so the summaries elaborated by the ADV have been used for this purpose. A thunderstorm is classified as being without hail when: –it takes place in the ADV region of interest, and –there is no hail information about it.

CALIBRATION OF THE HAIL MODULE (I) For this process a database of 144 thunderstorms from the 2001 and 2002 campaigns was used: –29 with severe hail. –52 with non-severe hail. –63 without hail. Using contingency tables and different skill measures, the module was calibrated to identify two categories of hailstorms; in the calibration process a set of thresholds was tuned.

CALIBRATION OF THE HAIL MODULE (II) These are the thresholds finally obtained to identify the two hail categories: –Probability of severe hail: DVIL ≥ 1.5 and POSH ≥ 10. –Probability of hail (any size): DVIL ≥ 1.3 or POH ≥ 20.
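A minimal sketch of how these operational rules could be applied to the parameters of a single radar cell is given below; the function and variable names are illustrative and are not taken from the INM module.

```python
def classify_hail(dvil, poh, posh):
    """Apply the operational thresholds of the hail module (2001-2002 calibration).

    dvil : VIL density of the cell
    poh  : Probability Of Hail of any size (%), Waldvogel technique
    posh : Probability Of Severe Hail (%), Hail Detection Algorithm
    """
    # The severe-hail rule is checked first, as it is the more restrictive category.
    if dvil >= 1.5 and posh >= 10:
        return "severe hail"
    if dvil >= 1.3 or poh >= 20:
        return "hail (any size)"
    return "no hail"


# Example: a cell with moderate VIL density but high POH is flagged as hail of any size.
print(classify_hail(dvil=1.4, poh=35, posh=5))
```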

METHODOLOGY OF VERIFICATION (I): analysis of the hailstorms The complete life cycle of every 3D cell is obtained, and the three algorithms are used to analyze the selected parameters (POH, DVIL and POSH) and their evolution along each life cycle. During the calibration process, a criterion for temporal selection, CT50, was established to extract the information related to: –the 40 minutes before every hail event, –the hail event itself, –and the 10 minutes after it.

CT50 Data (figure slide)

METHODOLOGY OF VERIFICATION (II): The same criterion, CT50, was also used in this verification process. Other criteria, CT30 (data during the hail event and the 30 minutes before it) and CT00 (data during the hail event only), have also been established to analyze the performance of the hail module.
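A minimal sketch of how such temporal selection criteria could be implemented for the time series of a cell life cycle is shown below; the window lengths come from the slides, while the data layout and function names are illustrative assumptions.

```python
from datetime import datetime, timedelta

# (before, after) window in minutes around the reported hail period for each criterion.
CRITERIA = {
    "CT50": (40, 10),  # 40 min before the event, the event itself and 10 min after it
    "CT30": (30, 0),   # the 30 min before the event and the event itself
    "CT00": (0, 0),    # the event itself only
}


def select_scans(scan_times, hail_start, hail_end, criterion="CT50"):
    """Return the radar scan times falling inside the chosen temporal window."""
    before, after = CRITERIA[criterion]
    t0 = hail_start - timedelta(minutes=before)
    t1 = hail_end + timedelta(minutes=after)
    return [t for t in scan_times if t0 <= t <= t1]


# Example with 10-minute radar scans around a hail report between 15:30 and 15:40 UTC.
scans = [datetime(2003, 6, 15, 14, 0) + timedelta(minutes=10 * k) for k in range(18)]
print(select_scans(scans, datetime(2003, 6, 15, 15, 30),
                   datetime(2003, 6, 15, 15, 40), "CT30"))
```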

CT30 Data (figure slide)

CT00 Data (figure slide)

METHODOLOGY OF VERIFICATION (III): The different criteria (CT50, CT30 and CT00) have been applied to the global 2003 data set (from March to September). The same analysis has been made for Spring thunderstorms, those that took place between March and June (both included), and for Summer ones (July, August and September).

METHODOLOGY OF VERIFICATION (IV): For the global period, 246 thunderstorms have been analyzed (39 with severe hail, 120 with non-severe hail and 87 without hail). The Spring period comprises 78 thunderstorms (10 with severe hail, 52 with non-severe hail and 16 without hail) and the Summer period 168 thunderstorms (29 with severe hail, 68 with non-severe hail and 71 without hail). For these three periods and with the three temporal criteria, we have evaluated: –the performance of the parameters (DVIL, POH and POSH); –the operational thresholds, by means of contingency tables and verification indices.

PARAMETERS PERFORMANCE The most remarkable difference appears when the DVIL values of Spring and Summer thunderstorms are analyzed separately. The differences between the three criteria considered for temporal selection are small.

PARAMETERS PERFORMANCE Events with Severe Hail Another important difference appears when the number of events is represented as a function of DVIL and POSH. In Spring thunderstorms the quasi-linear relationship disappears. The shorter the criterion for temporal selection (CT00), the less clear the maximum located at low values of DVIL and POSH, which is evident with CT50.

PARAMETERS PERFORMANCE Events with Non-Severe Hail For these hailstorms, a clear difference can be seen when the number of events is represented as a function of DVIL and POH, especially when CT00 is used. In Spring hailstorms the maximum number of events is located at DVIL values lower than 1.0 g/cm³; for Summer cases, almost all the hailstorms have DVIL values larger than 1.0 g/cm³.

PARAMETERS PERFORMANCE Thunderstorms with No Hail A slight difference can be seen when the number of events is represented as a function of DVIL and POH. In Summer thunderstorms the maximum is well located at DVIL values lower than 0.5 g/cm³ and POH smaller than 10%; for Spring cases, this maximum spreads up to DVIL values of 1.5 g/cm³.

VERIFICATION OF THE OPERATIONAL THRESHOLDS (I) For the global verification period (Spring and Summer 2003), as well as for the two seasons separately, and with the three criteria CT50, CT30 and CT00, the following skill measures have been computed from contingency tables: –POD, Probability Of Detection. –FAR, False Alarm Rate. –CSI, Critical Success Index. –HSS, Heidke Skill Score. –OR, Odds Ratio.
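As a reference for these measures, a minimal sketch of their computation from a 2x2 contingency table is given below. The counts in the example are illustrative, not results from the 2003 verification, and FAR is computed here as false alarms divided by the total number of "yes" forecasts (the usual false-alarm ratio).

```python
def skill_scores(hits, false_alarms, misses, correct_negatives):
    """Skill measures from a 2x2 contingency table (a = hits, b = false alarms,
    c = misses, d = correct negatives), as used to verify the operational thresholds."""
    a, b, c, d = hits, false_alarms, misses, correct_negatives
    pod = a / (a + c)                      # Probability Of Detection
    far = b / (a + b)                      # false alarms per forecast "yes" event
    csi = a / (a + b + c)                  # Critical Success Index
    hss = 2 * (a * d - b * c) / ((a + c) * (c + d) + (a + b) * (b + d))  # Heidke Skill Score
    odds_ratio = (a * d) / (b * c)         # Odds Ratio
    return {"POD": pod, "FAR": far, "CSI": csi, "HSS": hss, "OR": odds_ratio}


# Illustrative counts only (not the 2003 verification results).
print(skill_scores(hits=30, false_alarms=10, misses=9, correct_negatives=197))
```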

VERIFICATION OF THE OPERATIONAL THRESHOLDS (II) The following graphics compare the values obtained for the skill measures. They are bar charts comparing the global, Summer and Spring values of every verification index, for every temporal criterion, with the value obtained in the calibration process.

PERFORMANCE OF THE PROBABILITY OF DETECTION (POD). (Panels: severe hail threshold; hail of any size threshold.)

PERFORMANCE OF THE FALSE ALARM RATE (FAR). (Panels: severe hail threshold; hail of any size threshold.)

PERFORMANCE OF THE CRITICAL SUCCESS INDEX (CSI). (Panels: severe hail threshold; hail of any size threshold.)

PERFORMANCE OF THE HEIDKE SKILL SCORE (HSS). (Panels: severe hail threshold; hail of any size threshold.)

PERFORMANCE OF THE ODDS RATIO (OR). (Panels: severe hail threshold; hail of any size threshold.)

CONCLUSIONS (I) The HAIL MODULE has been operational since 2003 and has shown its utility for nowcasting of convective events. Combining DVIL and the HDA appears accurate for detecting severe hail, in the same way that DVIL and the Waldvogel technique do for hail of any size. The global verification carried out with the 2003 database has given quite good results. The shorter the criterion for temporal selection, the better the results; this difference in performance may be due to the large amount of data contained in CT50.

CONCLUSIONS (II) The skill of the module is better for Summer hailstorms than for Spring ones; the performance of the considered parameters (DVIL, POH, POSH) is related to summer deep convection processes. It seems necessary to analyze Spring and Summer hailstorms separately and in more detail, in order to tune more accurate procedures and thresholds for the different seasons (or months) of the year.