Probabilistic forecasts of (severe) thunderstorms for the purpose of issuing a weather alarm
Maurice Schmeits, Kees Kok, Daan Vogelezang and Rudolf van Westrhenen (KNMI)

IUGG 2007

Outline
- Introduction: Weather alarm for severe thunderstorms
- Method: Model output statistics (MOS)
- Data used in the MOS system for (severe) thunderstorms
- Illustration of the statistical method
- Definitions of predictands
- Case (10 June 2007)
- Verification results
- Conclusions and outlook

Weather alarm for severe thunderstorms (I)
- Weather alarm: issued if the probability of ≥ 500 discharges/5 min. in a 50×50 km² area is ≥ 90% in the next 12 hours
- Severe thunderstorms are among the least predictable phenomena
- History (note: under a different criterion): many misses, only a few hits and no false alarms
- Goal: decrease the number of misses and increase the number of hits, while keeping the number of false alarms low
- Means: a new objective probabilistic forecasting system

Model output statistics (MOS)
Aim:
- To determine a statistical relationship (mostly via regression) between a predictand (here, the occurrence of a thunderstorm) and predictors from NWP model forecasts (and possibly from observations)
Features:
- Forecasts are possible for predictands that are not available from direct model output
- (Reliable) probabilistic forecasts are possible, even when using output from a single model run
- A separate regression equation is fitted for each forecast projection (correcting systematic model errors)
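A minimal sketch of the MOS idea, assuming a single linear predictor and synthetic numbers (the operational system described here uses logistic regression with 2-5 predictors per equation, not this toy fit):

```python
def fit_mos_equation(model_values, observed_values):
    """Ordinary least-squares fit: observed = a + b * model_value.
    This is the simplest possible MOS equation; fitting one such
    equation per forecast projection corrects that projection's
    systematic model error."""
    n = len(model_values)
    mean_x = sum(model_values) / n
    mean_y = sum(observed_values) / n
    sxx = sum((x - mean_x) ** 2 for x in model_values)
    sxy = sum((x - mean_x) * (y - mean_y)
              for x, y in zip(model_values, observed_values))
    b = sxy / sxx
    a = mean_y - b * mean_x
    return a, b

# Synthetic training data: the model output has a bias and an amplitude
# error, so the "truth" here satisfies obs = 2 + 0.5 * model exactly.
model = [10.0, 14.0, 18.0, 22.0, 26.0]
obs = [2.0 + 0.5 * x for x in model]
a, b = fit_mos_equation(model, obs)  # recovers a = 2.0, b = 0.5
```

Applying the fitted equation to a new model value (`a + b * new_value`) then yields a bias-corrected forecast for that projection.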

MOS system for (severe) thunderstorms
- 3/2 years of data: 1 July 2002 until 1 July 2005 (warm half-years only, i.e. 16 April – 15 October)
- 2/3 of the data used for development, 1/3 for verification
- Predictands: reprocessed lightning data (Saskia Noteboom)
- Potential predictor set 1: radar data (0 to +6 h only)
- Potential predictor set 2: lightning data (0 to +6 h only)
- Potential predictor set 3: 17 thunderstorm indices, computed from weather model #1
- Potential predictor set 4: (derived) DMO (forecasts) from model #2
- Potential predictor set 5: (co)sine of the day of the year
- Regression equations contain at least 2 and at most 5 predictors
- Severe thunderstorms: all 12 regions pooled
- Run frequency: 8 times per day (every 3 hours)
- Forecast projections: 0 to +12 h (6-h periods)

MOS system for (severe) thunderstorms (schematic)
Inputs to the logistic regression (LR) model:
- Ensemble of advected radar data (0 to +6 h)
- (Ensemble of advected) lightning data (0 to +6 h)
- NWP model forecasts (0 to +12 h)
Output: probability of thunderstorms (0 to +6 h / +6 to +12 h)
Developing the LR model requires the 3/2-year data archive: 2/3 for development, 1/3 for verification.

Example of advection vectors and lightning data: 17 July 2004 (1140 UTC)
[Figure: radar image with advection vectors and lightning observations]

Example of a logistic regression equation using only the first predictor (region M-MS; period: UTC)
[Figure: logistic curve fitted to the binary predictand; x-axis: fraction of the ensemble with no. of flashes ≥ 4 (SAFIR); y-axis: probability of thunderstorms]
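The fitted curve on this slide can be reproduced in outline. The sketch below fits a one-predictor logistic regression by gradient ascent on the log-likelihood; the training sample, learning rate and the "≥ 4 flashes" framing are invented for illustration and are not the KNMI data:

```python
import math

def fit_logistic(x, y, lr=0.5, steps=5000):
    """Fit P(y=1 | x) = 1 / (1 + exp(-(a + b*x))) by gradient ascent on
    the mean log-likelihood. x: predictor values (e.g. the fraction of
    ensemble members with >= 4 flashes); y: binary predictand (1 if
    thunder was observed in the region/period, else 0)."""
    a, b = 0.0, 0.0
    n = len(x)
    for _ in range(steps):
        grad_a = grad_b = 0.0
        for xi, yi in zip(x, y):
            p = 1.0 / (1.0 + math.exp(-(a + b * xi)))
            grad_a += (yi - p) / n
            grad_b += (yi - p) * xi / n
        a += lr * grad_a
        b += lr * grad_b
    return a, b

def predict(a, b, x):
    """Evaluate the fitted logistic curve at predictor value x."""
    return 1.0 / (1.0 + math.exp(-(a + b * x)))

# Hypothetical training sample: thunder becomes more likely as the
# ensemble fraction increases (one deliberately noisy case included).
frac = [i / 10 for i in range(11)]
thunder = [0, 0, 0, 0, 0, 1, 0, 1, 1, 1, 1]
a, b = fit_logistic(frac, thunder)
```

The fitted slope `b` is positive, so the predicted thunderstorm probability rises monotonically with the ensemble fraction, which is the S-shaped curve shown on the slide.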

Where are we?
- Introduction: Weather alarm for severe thunderstorms
- Method: Model output statistics (MOS)
- Data used in the MOS system for (severe) thunderstorms
- Illustration of the statistical method
- Definitions of predictands
- Case (10 June 2007)
- Verification results
- Conclusions and outlook

Weather alarm for severe thunderstorms (II)
- Weather alarm: issued if the probability of ≥ 500 discharges/5 min. in a 50×50 km² area is ≥ 90% in the next 12 hours
- 'Climatology' on the basis of this criterion: met only about twice a year (between 30 April and 15 September)
- Statistical methods cannot handle such rare events directly; therefore, other predictand definitions have been used.

Predictand definitions
Predictand for thunderstorms:
- Probability of > 1 lightning discharge in a 6-h period (00-06, 03-09, 06-12, 09-15, 12-18, 15-21, or UTC) in a 90×80 km² region.
Predictands for severe thunderstorms:
- Conditional probability of ≥ X, ≥ Y or ≥ Z discharges/5 min. in a 6-h period in a 90×80 km² region, given > 1 discharge in the same 6-h period and region.
- Here X = 50 (all 6-h periods); Y = 100 and Z = 200 (12-18, and UTC).
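The "abs. prob." figures quoted on the case slides follow directly from these definitions: the unconditional probability of a severe event is the product of the thunderstorm probability and the conditional severe probability. A trivial sketch, with example numbers loosely taken from the case slides:

```python
def absolute_severe_prob(p_thunder, p_severe_given_thunder):
    """Combine the two predictands: P(severe) =
    P(>1 discharge) * P(severe | >1 discharge)."""
    return p_thunder * p_severe_given_thunder

# A ~20% thunderstorm probability and a 4% conditional probability of
# >= 200 discharges/5 min. give an absolute probability below 1%.
p_abs = absolute_severe_prob(0.20, 0.04)
```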

Case 17 July 2004 (12-18 UTC; 0 to +6 h)
- 1150 UTC run (based on SAFIR, H and EC)
- 'Clim.' probability of thunderstorms: 6-22%
- 'Clim.' conditional probability of severe thunderstorms (≥ 200 discharges/5 min.): 4% (absolute probability: < 1%)
[Maps: probability of thunderstorms; conditional probability of severe thunderstorms (≥ 50 and ≥ 200 discharges/5 min.); maximum 5-min. lightning intensity]

Case 25 June 2006 (15-21 UTC; +6 to +12 h)
- 08:50 UTC run
- 'Clim.' probability of thunderstorms: 5-19%
- 'Clim.' conditional probability of severe thunderstorms (≥ 200 discharges/5 min.): 5% (absolute probability: < 1%)
[Maps: probability of thunderstorms; conditional probability of severe thunderstorms (≥ 50 and ≥ 200 discharges/5 min.); maximum 5-min. lightning intensity]

Case 25 June 2006 (15-21 UTC; 0 to +6 h)
- 1450 UTC run (based on H and EC)
- 'Clim.' probability of thunderstorms: 5-19%
- 'Clim.' conditional probability of severe thunderstorms (≥ 200 discharges/5 min.): 5% (absolute probability: < 1%)
[Maps: probability of thunderstorms; conditional probability of severe thunderstorms (≥ 50 and ≥ 200 discharges/5 min.); maximum 5-min. lightning intensity]

Case 8 June 2007 (15-21 UTC; +6 to +12 h)
- 09 UTC run (based on H 0806 and EC 0712)
- 'Clim.' probability of thunderstorms: 5-19%
- 'Clim.' conditional probability of severe thunderstorms (≥ 200 discharges/5 min.): 5% (absolute probability: < 1%)
[Maps: probability of thunderstorms; conditional probability of severe thunderstorms (≥ 50 and ≥ 200 discharges/5 min.); maximum 5-min. lightning intensity]

Case 8 June 2007 (15-21 UTC; 0 to +6 h)
- 15 UTC run (based on H 0812 and EC 0712)
- 'Clim.' probability of thunderstorms: 5-19%
- 'Clim.' conditional probability of severe thunderstorms (≥ 200 discharges/5 min.): 5% (absolute probability: < 1%)
[Maps: probability of thunderstorms; conditional probability of severe thunderstorms (≥ 50 and ≥ 200 discharges/5 min.); maximum 5-min. lightning intensity]

Case 10 June 2007 (15-21 UTC; +6 to +12 h)
- 09 UTC run (based on H 1006 and EC 0912)
- 'Clim.' probability of thunderstorms: 5-19%
- 'Clim.' conditional probability of severe thunderstorms (≥ 200 discharges/5 min.): 5% (absolute probability: < 1%)
[Maps: probability of thunderstorms; conditional probability of severe thunderstorms (≥ 50 and ≥ 200 discharges/5 min.); maximum 5-min. lightning intensity]

Verification results 2006 (probability of > 1 discharge)
[Charts: Brier skill score (%) as a function of time (UTC), for the 0 to +6 h and +6 to +12 h projections]
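The Brier skill score shown in these charts measures the improvement of the probabilistic forecasts over a climatological reference. A generic sketch with made-up forecast/outcome pairs, not the verification data behind the slide:

```python
def brier_score(probs, outcomes):
    """Mean squared difference between the forecast probability and the
    binary outcome (0/1); lower is better."""
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

def brier_skill_score(probs, outcomes):
    """BSS = 1 - BS / BS_clim, where the climatological reference
    always forecasts the sample base rate. BSS > 0 means the forecast
    beats climatology; BSS = 1 is a perfect forecast."""
    base_rate = sum(outcomes) / len(outcomes)
    bs_clim = brier_score([base_rate] * len(outcomes), outcomes)
    return 1.0 - brier_score(probs, outcomes) / bs_clim

# Made-up verification sample: five forecast probabilities and the
# corresponding observed thunder/no-thunder outcomes.
outcomes = [1, 0, 0, 1, 0]
bss = brier_skill_score([0.9, 0.1, 0.2, 0.8, 0.1], outcomes)
```

Note that on the slides the BSS is quoted in percent, i.e. the value above multiplied by 100.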

Reliability diagrams ('05-'06; 12-18 UTC; 0 to +6 h)
[Figures: observed frequency vs. forecast probability for ≥ 50 and ≥ 100 discharges/5 min.]

Reliability diagrams ('05-'06; 15-21 UTC; 0 to +6 h)
[Figures: observed frequency vs. forecast probability for ≥ 50 and ≥ 100 discharges/5 min.]

Reliability diagrams ('05-'06; 18-00 UTC; 0 to +6 h)
[Figures: observed frequency vs. forecast probability for ≥ 50 and ≥ 100 discharges/5 min.]

Reliability diagram 1 ('05-'06): ≥ 50 discharges/5 min. (15-21 UTC; 0 to +6 h)
[Figure: observed frequency vs. forecast probability]

Reliability diagram 2 ('05-'06): ≥ 100 discharges/5 min. (15-21 UTC; 0 to +6 h)
[Figure: observed frequency vs. forecast probability]

Reliability diagram 3 ('05-'06): ≥ 50 discharges/5 min. (00-06 UTC; 0 to +6 h)
[Figure: observed frequency vs. forecast probability]

Reliability diagram 4 ('05-'06): ≥ 50 discharges/5 min. (06-12 UTC; 0 to +6 h)
[Figure: observed frequency vs. forecast probability]
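The reliability diagrams above plot, per forecast-probability bin, the observed relative frequency against the mean forecast probability; points on the diagonal indicate reliable forecasts. A generic binning sketch (the four sample forecasts are invented):

```python
def reliability_table(probs, outcomes, n_bins=10):
    """Group forecast/outcome pairs into equally wide probability bins
    and return, for each non-empty bin, a tuple of (mean forecast
    probability, observed relative frequency, sample count). Plotting
    column 1 against column 2 gives the reliability diagram."""
    bins = [[] for _ in range(n_bins)]
    for p, o in zip(probs, outcomes):
        i = min(int(p * n_bins), n_bins - 1)  # p == 1.0 goes to the last bin
        bins[i].append((p, o))
    rows = []
    for cell in bins:
        if cell:
            mean_p = sum(p for p, _ in cell) / len(cell)
            obs_freq = sum(o for _, o in cell) / len(cell)
            rows.append((mean_p, obs_freq, len(cell)))
    return rows

# Invented sample: two low-probability misses and two high-probability hits.
rows = reliability_table([0.05, 0.05, 0.95, 0.95], [0, 0, 1, 1])
```

The sample counts per bin (third column) matter in practice: bins with few cases, as for the rare severe-thunderstorm predictands here, give noisy observed frequencies.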

Conclusions and outlook
- Probabilistic forecasts for thunderstorms (> 1 discharge) are skilful with respect to climatology.
- Probabilistic forecasts for severe thunderstorms (≥ 50 / ≥ 100 discharges per 5 min.) are reasonably skilful with respect to climatology.
- The system has been pre-operational at KNMI since spring 2006 and will become fully operational later this year.
- This system is expected to help forecasters decide whether a weather alarm for severe thunderstorms should be issued.

Verification results (conditional probability of ≥ 50/100/200 discharges/5 min.)
[Charts: Brier skill score (%) as a function of time (UTC), for the 0 to +6 h and +6 to +12 h projections]

Reliability diagram 1: ≥ 50 discharges/5 min. (12-18 UTC; 0 to +6 h)
BSS = 30%, Bias = 0.2%, N = 235
[Figure: observed frequency vs. forecast probability]

Reliability diagram 2: ≥ 100 discharges/5 min. (12-18 UTC; 0 to +6 h)
BSS = 32%, Bias = 4.7%, N = 235
[Figure: observed frequency vs. forecast probability]

Reliability diagram 3: ≥ 200 discharges/5 min. (12-18 UTC; 0 to +6 h)
BSS = 62%, Bias = 1.3%, N = 235
[Figure: observed frequency vs. forecast probability]

Reliability diagram 4: ≥ 300 discharges/5 min. (12-18 UTC; 0 to +6 h)
BSS = 38%, Bias = 0.7%, N = 235
[Figure: observed frequency vs. forecast probability]

Reliability diagram 5: ≥ 400 discharges/5 min. (12-18 UTC; 0 to +6 h)
BSS = 26%, Bias = 0.1%, N = 235
[Figure: observed frequency vs. forecast probability]

Reliability diagram 6: ≥ 500 discharges/5 min. (12-18 UTC; 0 to +6 h)
BSS = 13%, Bias = 1.0%, N = 235
[Figure: observed frequency vs. forecast probability]