
Probability Forecasts from Ensembles and their Application at the SPC David Bright NOAA/NWS/Storm Prediction Center Norman, OK AMS Short Course on Probabilistic Forecasting January 9, 2005 San Diego, CA Where America's Climate and Weather Services Begin

Outline Motivation for ensemble forecasting Ensemble products and applications –Emphasis on probabilistic products Ensemble calibration (verification) Decision making using ensembles

Daily weather forecasts begin as an initial-value problem on large supercomputers To produce a skillful weather forecast requires: –An accurate initial state of the atmosphere to begin the model forecast –Computer models that realistically represent the evolution of the atmosphere (in a timely manner) "With a reasonably accurate initial analysis of the atmosphere, the state of the atmosphere at any subsequent time can be determined by a super-mathematician." (Bjerknes 1919)

Example: Determinism 60h Eta Forecast valid 00 UTC 27 Dec 2004 PMSL (solid); 10m Wind; 1000-500 mb thickness (dashed)

60h Eta Forecast valid 00 UTC 27 Dec 2004 PMSL (solid); 10m Wind; 1000-500 mb thickness (dashed) Precip amount (in) and type (blue=snow; green=rain) Example: Determinism

60h Eta Forecast valid 00 UTC 27 Dec 2004 “Truth” 00 UTC 27 Dec 2004 Example: Determinism

60h Eta Forecast valid 00 UTC 27 Dec 2004 vs. "Truth" 00 UTC 27 Dec 2004: Ignores forecast uncertainty; potentially misleading; oversells forecast capability Example: Determinism

- Ensemble forecasting can be traced back to the discovery of the "Butterfly Effect" (Lorenz 1963, 1965)… - The atmosphere, a non-linear, non-periodic dynamical system, causes even tiny errors to grow upscale... resulting in forecast uncertainty and eventually chaos The Butterfly Effect

Discovery of the "butterfly effect" (Lorenz 1963) Simplified climate model… When the integration was restarted with 3 (vs 6) digit accuracy, everything was going fine until…

Solutions began to diverge… The Butterfly Effect

Soon, two "similar" but clearly unique solutions The Butterfly Effect

Eventually, results revealed two uncorrelated and completely different solutions (i.e., chaos) The Butterfly Effect

Ensembles can be used to provide information on forecast uncertainty Information from the ensemble typically consists of… (1) Mean (2) Spread (3) Probability Ensembles useful in this range! The Butterfly Effect

Ensembles extend predictability… A deterministic solution is no longer skillful when its error variance exceeds climatic variance An ensemble remains skillful until error saturation (i.e., until chaos occurs) Ensembles especially useful in this range! The Butterfly Effect

- NWP models... - Doubling time of small initial errors ~ 1 to 2 days - Maximum large-scale (synoptic to planetary) predictability ~10 to 14 days It’s hard to get it right the first time!

Example: Synoptic Scale Variability 7 day forecast – NCEP MREF 500 MB Height [Panels: GFS "Control" Forecast; GFS -12h "Control"; GFS Perturbations; European Model]

Reveals forecast uncertainty, e.g., southeast U.S. precip Sensible weather often mesoscale dominated Example: Mesoscale Variability 1.5 day forecast – NCEP SREF Precipitation

Sources of Uncertainty in NWP Observations –Density –Error –Representativeness –QC Analysis Models LBCs, etc. [Figure: RMSD ECMWF-NCEP 500 mb Hght (5 winters)]

Schematic Illustration: Ensemble Concepts Analysis, Model, and Subgrid-scale Errors... All equally-likely solutions All plausible atmospheric states All equally-likely ICs

Error Growth with Time: Idealized Forecast Error [Schematic: forecast error vs. time, with limit of single model skill, limit of ensemble skill, and expected climate variability] We can use ensembles (e.g., probabilities, etc.) to extend predictability (~ 3 to 4 days for synoptic scale pattern) until the forecast becomes chaotic. No correlation to initial conditions…chaos!

Error Growth with Time: GFS 500 mb Hght (Dec. 2004; Greater U.S. Area), GFS Ens Means [RMSE (m) vs. days, with Climate SD and 1.41 x Climate SD reference lines] Limit of deterministic skill ~7.5 days Limit of ensemble skill ~10.5 days

Ensembles vs. Determinism: Evaluating Weather Forecasts (panels: Determinism, Ensemble)

Outline Motivation for ensemble forecasting Ensemble products and applications –Emphasis on probabilistic products Ensemble calibration (verification) Decision making using ensembles

Definitions SREF = NCEP Short Range Ensemble Forecast (5 Eta-BMJ; 5 Eta-KF; 5 RSM) MREF = NCEP Medium-Range Ensemble Forecast (GFS) Mean = Arithmetic average of members Spread = Variance or Standard Deviation Probability = % of members meeting some condition Calibrated Probability = As above, but adjusted to reflect expected frequency of occurrence
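The three basic quantities above are simple reductions across the member dimension. A minimal sketch (hypothetical arrays, not SPC code):

```python
import numpy as np

# Hypothetical stack of member forecasts: 15 members on a (ny, nx) grid,
# e.g., surface-based CAPE in J/kg
members = np.random.gamma(2.0, 500.0, size=(15, 50, 60))

mean = members.mean(axis=0)                  # arithmetic average of members
spread = members.std(axis=0, ddof=1)         # standard deviation across members
prob = (members >= 1000.0).mean(axis=0)      # fraction of members meeting a condition

print(prob.max())                            # uncalibrated probability, 0..1
```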

SPC Approach to Ensembles Develop customized products based on a particular application (severe, fire wx, etc.) Design operational guidance products that… –Help blend deterministic and ensemble approaches –Facilitate transition toward probabilistic thinking –Aid in critical decision making Increase confidence Alert for rare but significant events

F15 SREF MEAN 500 MB HGHT,TEMP,WIND Ensemble Means

Synoptic-Statistical Relationships Mean + Spread Examples of simple relationships between dispersion patterns and synoptic interpretation can be defined. Obtain a quick overview of the range of weather situations from ensemble statistics. [Schematic: spread in amplitude vs. spread in location]

F15 SREF MEAN/SD 500 MB HGHT Ensemble Mean + Spread

500 mb Mean Height (solid) and Standard Deviation (dashed/filled) at F000, F048, F096, F144 Increased spread => less predictability => less forecast confidence Ensemble Mean + Spread

500 mb Mean Height and Normalized Variance at F000, F048, F096, F144 Normalize the ensemble variance by climatic variance Values approaching 2 (dark color fill) => ensemble variance saturated relative to climo Ensemble Mean + Normalized Spread
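A sketch of the normalization (hypothetical fields; the saturation threshold of 2 is from the slide above):

```python
import numpy as np

# Hypothetical 500 mb height variances on a (ny, nx) grid
ens_var = np.random.rand(50, 60) * 15000.0    # ensemble variance (m^2)
climo_var = np.full((50, 60), 7000.0)         # climatological variance (m^2)

norm_var = ens_var / climo_var                # normalized spread
saturated = norm_var >= 2.0                   # ~2 => variance saturated vs. climo
```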

500 mb Member Height "Spaghetti" at F000, F048, F096, F144 (a selected height contour from each member) Another way to view uncertainty: Spaghetti

F63 SREF POSTAGE STAMP VIEW: PMSL, HURRICANE FRANCES Red = EtaBMJ; Yellow = EtaKF; Blue = RSM; White = OpEta

F15 SREF MEDIAN/RANGE CAPE At least 1 member has >= 500 J/kg All 16 members have >=500 J/kg CAPE Median Spatial Variability: Median + Range

F15 SREF MEDIAN/RANGE MLCAPE X 0-6 KM SHEAR Creation of Severe Wx Diagnostics - Calculated Craven-Brooks Significant Severe parameter for each member Median All 16 members have >= 10,000 m^3/s^3 At least 1 member has >= 10,000 m^3/s^3

Arithmetic mean… –Easy to compute and understand –Tends to increase coverage of light pcpn and decrease max values. 3-hr Total Pcpn NCEP SREF F63 Valid 09 Oct UTC Ways to view central value: Mean

Median… –If the majority of members produce no precipitation, the median will show large areas of no precip. Thus, often limited in areal extent. 3-hr Total Pcpn NCEP SREF Ways to view central value: Median

The blending of two PDFs, when one provides better spatial representation [e.g., ensemble mean QPF] and the other greater accuracy [e.g., QPF from all members]. See Ebert (MWR 2001) for more info. Rank in ens mean field -> Rank in pooled member QPF: 1 -> 1; 2 -> 16; 3 -> 32; … Ways to view central value: Probability Matching

Probability matching… –Ebert (2001) found it to be the best ensemble-averaged QPF –Max values restored; pattern from ens mean 3-hr Total Pcpn NCEP SREF Ways to view central value: Probability Matching
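A minimal sketch of the idea (in the spirit of Ebert 2001, not the operational SPC implementation): rank the ensemble-mean field, then reassign values by sampling roughly every n-th value from the pooled, sorted member QPF distribution, restoring the maxima while keeping the mean's spatial pattern.

```python
import numpy as np

def probability_match(members):
    """members: array (n_members, ny, nx) of member QPF."""
    n, ny, nx = members.shape
    mean = members.mean(axis=0)

    pooled = np.sort(members.ravel())[::-1]    # all member values, descending
    sampled = pooled[::n]                      # every n-th value: ny*nx values

    out = np.empty_like(mean)
    order = np.argsort(mean.ravel())[::-1]     # grid points ranked by the mean
    out.ravel()[order] = sampled               # k-th ranked point gets k-th sample
    return out

pm_qpf = probability_match(np.random.gamma(0.5, 0.05, size=(16, 40, 50)))
```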

Uncalibrated probabilities: Fraction of members meeting some condition

Probability 144h 2 meter Td <= 25 degF Probabilistic Output of Basic Products: 2 m Dewpoint

Probability 144h Haines Index > 5 Probabilistic Output of Derived Products: Haines Index

Probability Convective Pcpn >.01” Prob Conv Pcpn >.01” Valid 00 UTC 20 Sept 2003

Prob Conv Pcpn >.01” Valid 00 UTC 20 Sept 2003 Probability Convective Pcpn >.01”

Pcpn probs due to physics - No EtaBMJ members?! Red = EtaBMJ Yellow = EtaKF Blue = RSM Spaghetti: Different physics Note clustering by model

Spaghetti: Outliers F39 SREF SPAGHETTI (1000 J/KG) Red = EtaBMJ Yellow = EtaKF Blue = RSM White solid = 12 KM OpEta (12 UTC) 12 UTC operational Eta clearly an outlier from 09 UTC SREF - Is this the result of ICs or resolution? - Is this a better fcst (updated info) or an outlier?

Extreme Values: Lowest RH 144h Minimum RH from any ensemble member “Worst case scenario”

MINIMUM F15 SREF MINIMUM 2 METER RH MAXIMUM F15 SREF MAXIMUM FOSBERG FIREWX INDEX Any member can contribute to the max or min value at a grid point Extreme Values

Combined Probability Charts Probability surface CAPE >= 1000 J/kg –Generally low in this case –Ensemble mean < 1000 J/kg (no gold dashed line) CAPE (J/kg) Green solid= Percent Members >= 1000 J/kg ; Shading >= 50% Gold dashed = Ensemble mean (1000 J/kg) F036: Valid 21 UTC 28 May 2003

Probability deep layer shear >= 30 kts –Strong mid level jet through Iowa 10 m – 6 km Shear (kts) Green solid= Percent Members >= 30 kts ; Shading >= 50% Gold dashed = Ensemble mean (30 kts) F036: Valid 21 UTC 28 May 2003 Combined Probability Charts

Convection likely WI/IL/IN –Will the convection become severe? 3 Hour Convective Precipitation >= 0.01 (in) Green solid= Percent Members >= 0.01 in; Shading >= 50% Gold dashed = Ensemble mean (0.01 in) F036: Valid 21 UTC 28 May 2003 Combined Probability Charts

Combined probabilities very useful Quick way to determine juxtaposition of key parameters Not a true probability –Not independent –Different members contribute Prob Cape >= 1000 X Prob Shear >= 30 kts X Prob Conv Pcpn >=.01” F036: Valid 21 UTC 28 May 2003 Combined Probability Charts

Severe Reports Red=Tor; Blue=Wind; Green=Hail Prob Cape >= 1000 X Prob Shear >= 30 kts X Prob Conv Pcpn >=.01” F036: Valid 21 UTC 28 May 2003 Combined probabilities a quick way to determine juxtaposition of key parameters Not a true probability –Not independent –Different members contribute Fosters an ingredients-based approach on the fly Combined Probability Charts
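To make the caveat concrete, a small sketch (hypothetical member fields, not SPC code) contrasting the combined probability used on these charts, a product of member fractions that assumes independence, with the fraction of members meeting all conditions at once:

```python
import numpy as np

# Hypothetical 16-member fields for the three ingredients
cape  = np.random.rand(16, 40, 50) * 2500.0   # J/kg
shear = np.random.rand(16, 40, 50) * 50.0     # kt
pcpn  = np.random.rand(16, 40, 50) * 0.30     # in

# Combined probability as charted: product of member fractions
# (assumes independence, so not a true probability)
combined = ((cape >= 1000).mean(axis=0)
            * (shear >= 30).mean(axis=0)
            * (pcpn >= 0.01).mean(axis=0))

# Contrast: fraction of members meeting all three conditions simultaneously
joint = ((cape >= 1000) & (shear >= 30) & (pcpn >= 0.01)).mean(axis=0)
```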

F15 SREF PROBABILITY P12I x RH x WIND x TMPF ( 30 mph x > 60 F) Ingredients for extreme fire weather conditions over the Great Basin Combined or Joint Probabilities - Not a true probability - An ingredients-based, probabilistic approach - Useful for identifying key areas Combined Probability Charts

F15 SREF PROBABILITY TPCP x RH x WIND x TMPF ( 30 mph x > 60 F) Ingredients for extreme fire weather conditions over the Great Basin Combined Probability Charts

Elevated Instability – General Thunder NCEP SREF 30 Sept UTC F12 Mean MUCAPE/CIN (Sfc to 500 mb AGL); Mean LPL (Sfc to 500 mb AGL)

Parcel Equilibrium Level NCEP SREF 30 Sept UTC F12 Mean Temp (degC) MUEquilLvl (Sfc to 500 mb AGL); Prob Temp MUEquilLvl < -20 degC (Sfc to 500 mb AGL)

Lightning Verification Gridded Lightning Strikes UTC 30 Sept 2003 (40 km grid boxes)

Dendritic Growth Potential Find SREF members with: –Saturated layers > 50 mb deep –Temp range –11 to –17 degC –Omega <= -3 microbar/s NCEP SREF 7 Oct UTC F15 Probability dendritic conditions (solid/shaded); Mean PMSL (white solid); Mean thickness (dZ, dashed); Mean 10m Wind

Microphysical Example NCEP SREF 7 Oct UTC F15 Probability cloud top temps > -8 degC (Ice Crystals Unlikely) Probability cloud top temps < -12 degC (Ice Crystals Likely)

Banded Precipitation Combined Probabilities Probability MPV 1 NCEP SREF 7 Oct UTC F15

Banded Precipitation GOES 10 IR - 8 Oct UTC

Identifying Rare Events (Low end example: Wind/Small Craft Advisory) 9 Oct UTC F63 fcst Prob sfc winds > 30 mph (mean 10m wind vector shown) Difficult to forecast for every grid point Saturday afternoon forecast (11 Oct)

Identifying Rare Events (Low end example: Wind/Small Craft Advisory) 9 Oct UTC F63 fcst Now consider an area +/- 90 mi of a point (see Legg and Mylne 2004) 30% chance small craft advy winds over Monterey Bay and offshore waters Saturday afternoon

Mode Most Common Precip Type (Snow = Blue); Mean Precip (in); Mean 32°F Isotherm F015 SREF Valid: 00 UTC 21 December 2004

Probability Dendritic Layer > 50 mb F015 SREF Valid: 00 UTC 21 December 2004

Probability of Banded Precipitation Potential Probability MPV 1

Probability Omega <= -3 microbar/s Median Depth of Dendritic Layer F015 SREF Valid: 00 UTC 21 December 2004

Probability Omega <= -3 microbar/s F015 SREF Valid: 00 UTC 21 December 2004

Probability 6h Precip >=.25” F015 SREF Valid: 00 UTC 21 December 2004

Outline Motivation for ensemble forecasting Ensemble products and applications –Emphasis on probabilistic products Ensemble calibration (verification) Decision making using ensembles

Combine Thunderstorm Ingredients into Single Parameter Three first-order ingredients (readily available from NWP models): –Lifting condensation level > -10°C –Sufficient CAPE in the 0°C to -20°C layer –Equilibrium level temperature < -20°C Cloud Physics Thunder Parameter (CPTP): CPTP = (-19°C – T_EL)(CAPE_-20 – K) / K, where K = 100 J kg^-1 and CAPE_-20 is MUCAPE in the 0°C to -20°C layer
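A tiny sketch of this formula applied to hypothetical grid-point values (not the operational code):

```python
K = 100.0  # J/kg

def cptp(t_el_degc, cape_0_to_m20):
    """Cloud Physics Thunder Parameter; CPTP > 1 where grid-point
    soundings support lightning, given a convective updraft."""
    return (-19.0 - t_el_degc) * (cape_0_to_m20 - K) / K

# EL temperature of -45 degC with 400 J/kg of CAPE in the 0 to -20 degC layer
print(cptp(-45.0, 400.0))   # 78.0 -> supportive of lightning
```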

Example CPTP: One Member 18h Eta Forecast Valid 03 UTC 4 June 2003 Plan view chart showing where grid point soundings support lightning (given a convective updraft)

SREF Probability CPTP > 1 15h Forecast Ending: 00 UTC 01 Sept 2004 Uncalibrated probability: Solid/Filled; Mean CPTP = 1 (Thick dashed) 3 hr valid period: 21 UTC 31 Aug to 00 UTC 01 Sept 2004

SREF Probability Precip >.01” 15h Forecast Ending: 00 UTC 01 Sept 2004 Uncalibrated probability: Solid/Filled; Mean precip = 0.01” (Thick dashed) 3 hr valid period: 21 UTC 31 Aug to 00 UTC 01 Sept 2004

Joint Probability (Assumed Independence) 15h Forecast Ending: 00 UTC 01 Sept 2004 Uncalibrated probability: Solid/Filled P(CPTP > 1) x P(Precip >.01”) 3 hr valid period: 21 UTC 31 Aug to 00 UTC 01 Sept 2004

Uncalibrated Reliability (5 Aug to 5 Nov 2004) P(CPTP > 1) x P(P03I >.01”) [Reliability diagram: observed frequency vs. forecast probability (0%, 5%, …, 100%), with Perfect Forecast, No Skill, and Climatology reference lines]

Adjusting Probabilities 1)Calibrate based on the observed frequency of occurrence –Very useful, but may not provide information concerning rare or extreme (i.e., low probability) events 2)Use statistical techniques to estimate probabilities in the tails of the distribution (e.g., Hamill and Colucci 1998; Stensrud and Yussouf 2003)

Ensemble Calibration 1) Bin separately P(CPTP > 1) and P(P03M > 0.01”) into 11 bins (0-5%; 5-15%; …; 85-95%; 95-100%) 2) Combine the two binned probabilistic forecasts into one of 121 possible combinations (0%,0%); (0%,10%); … (100%,100%) 3) Use NLDN CG data over the previous 60 days to calculate the frequency of occurrence of CG strikes for each of the 121 binned combinations 4) Bin ensemble forecasts as described in steps 1 and 2 and assign the observed CG frequency (step 3) as the calibrated probability of a CG strike 5) Calibration is performed for each forecast cycle (09 and 21 UTC) and each forecast hour; domain is entire U.S. on 40 km grid
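A compact sketch of steps 1-4 (hypothetical flattened training arrays, not the operational code):

```python
import numpy as np

edges = np.array([0.05, 0.15, 0.25, 0.35, 0.45, 0.55, 0.65, 0.75, 0.85, 0.95])

def bin_index(p):
    # 0..10 for the bins 0-5%, 5-15%, ..., 85-95%, 95-100%
    return np.digitize(p, edges)

def build_table(p_cptp, p_pcpn, obs_cg):
    """Training data: 1-D probability arrays and matching 0/1 CG occurrence
    pooled over the previous 60 days. Returns the 11x11 observed CG
    frequency for each of the 121 binned combinations."""
    i, j = bin_index(p_cptp), bin_index(p_pcpn)
    hits = np.zeros((11, 11))
    total = np.zeros((11, 11))
    np.add.at(hits, (i, j), obs_cg)
    np.add.at(total, (i, j), 1.0)
    return np.where(total > 0, hits / total, 0.0)

# A new forecast pair is then assigned table[bin_index(p1), bin_index(p2)]
# as its calibrated thunder probability.
```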

Before Calibration

Joint Probability (Assumed Independence) P(CPTP > 1) x P(Precip >.01”) 3 hr valid period: 21 UTC 31 Aug to 00 UTC 01 Sept h Forecast Ending: 00 UTC 01 Sept 2004 Uncorrected probability: Solid/Filled

After Calibration

Calibrated Ensemble Thunder Probability 15h Forecast Ending: 00 UTC 01 Sept 2004 Calibrated probability: Solid/Filled 3 hr valid period: 21 UTC 31 Aug to 00 UTC 01 Sept 2004

Calibrated Ensemble Thunder Probability 15h Forecast Ending: 00 UTC 01 Sept 2004 Calibrated probability: Solid/Filled; NLDN CG Strikes (Yellow +) 3 hr valid period: 21 UTC 31 Aug to 00 UTC 01 Sept 2004

Calibrated Reliability (5 Aug to 5 Nov 2004) Calibrated Thunder Probability [Reliability diagram: observed frequency vs. forecast probability (0%, 5%, …, 100%), with Perfect Forecast, No Skill, and Climatology reference lines]

Adjusting Probabilities 1)Calibrate based on the observed frequency of occurrence –Very useful, but may not provide information concerning extreme (i.e., low probability) events 2)Use statistical techniques to estimate probabilities in the “tails” of the distribution (e.g., Hamill and Colucci 1998; Stensrud and Yussouf 2003)

Consider 2 meter temperature prediction from NCEP SREF –Construct a “rank histogram” of the ensemble members (also called Talagrand diagram) Rank individual members from lowest to highest Find the verifying rank position of “truth” (RUC 2 meter analysis temperature) Record the frequency with which truth falls in that position (for a 15 member ensemble there are 16 rankings) Adjusting Probabilities
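A minimal rank-histogram sketch under this setup (hypothetical member and analysis temperatures, not the SPC verification code):

```python
import numpy as np

members = np.random.normal(281.0, 2.0, size=(15, 5000))  # 15 members x cases (K)
truth = np.random.normal(280.0, 2.5, size=5000)          # verifying RUC analysis

ranks = (members < truth).sum(axis=0)     # position of truth: 0..15 (16 rankings)
hist = np.bincount(ranks, minlength=16)

# A flat histogram indicates a reliable, well-dispersed ensemble. Counts piled
# at rank 0 (truth colder than all members) indicate a warm bias; heavy counts
# at both ends indicate under-dispersion.
```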

Adjusting Probabilities: Uncorrected Talagrand Diagram 2m temperature, 15h fcst of 12 UTC NCEP SREF, ending 27 December 2004 Warm bias: truth is colder than all SREF members a disproportionate amount of time Under-dispersive Clustering by model [Reference line: uniform distribution]

Use a 14-day bias to account for bias in the forecast, applied to members 1 through 15 of the NCEP SREF

Adjusting Probabilities: Bias Adjusted Talagrand Diagram 2m temperature, 15h fcst of 12 UTC NCEP SREF, ending 27 December 2004 Near neutral bias: large bias eliminated but remains under-dispersive [Reference line: uniform distribution]

Build the pdf by using observed data to fit a statistical distribution (Gamma, Gumbel, or Gaussian) to the tails This produces a calibrated pdf based on past performance –“Past performance does not guarantee future results.” Adjusting Probabilities
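One way to realize this (a sketch under assumed Gaussian errors; scipy.stats also provides gamma and gumbel_r for the other distributions named on the slide):

```python
import numpy as np
from scipy import stats

# Hypothetical past exceedances of the ensemble envelope by the verifying
# analysis, collected over prior cases (degF)
past_exceedances = np.random.normal(0.5, 2.0, size=500)

mu, sigma = stats.norm.fit(past_exceedances)

# Calibrated probability that truth falls more than 3 degF beyond the
# warmest member, i.e., an event in the "tail" of the ensemble pdf
p_tail = stats.norm.sf(3.0, loc=mu, scale=sigma)
```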

Adjusting Probabilities: Corrected Talagrand Diagram 2m temperature, 15h fcst of 12 Z SREF, ending 27 December 2004 ~Uniform distribution: SREF probabilities now reflect expected occurrence of event even in the "tails"

Adjusted Temperature Fcst Max temp (50%) valid 12 UTC 5 Jan to 00 UTC 6 Jan 2004

Probabilistic Temperature Forecast Norman, OK (95% Confidence) [Meteogram: Norman, OK temp forecast from SREF, Dec 27 to Dec 29; quantile traces labeled 50.0% and 2.5%; actual mins & maxes indicated by red dots; Temp (degF) vs. local time]

Probabilistic Meteogram Probability of severe thunderstorm ingredients: OUN; Runtime: 09 UTC 21 April Information on how ingredients are evolving Viewing ingredients via probabilistic thinking

Outline Motivation for ensemble forecasting Ensemble products and applications –Emphasis on probabilistic products Ensemble calibration (verification) Decision making using ensembles

Decision Making Probabilities from an uncalibrated, under-dispersive ensemble system are still useful in quantifying uncertainty Trends in probabilities (dprog/dt) may indicate less spread among members as t -> 0

Trend over 5 days from NCEP MREF (Valid: 22 Dec 2004) Panels: 12h Prob Thunder; 12h Prob Severe; Prob Thunder; Prob Severe for Day 6 through Day 2 Increased probabilistic resolution as event approaches Run-to-run consistency Time-lagged members (weighted) add continuity to forecast

Results…

Decision Making Probabilities from an uncalibrated, under-dispersive ensemble system are still useful to quantify uncertainty Trends in probabilities (dprog/dt) may indicate less spread among members as t -> 0 Decision theory can be used with or without separate calibration

Decision Theory Example Consider the calibrated thunderstorm forecasts presented earlier [see Mylne (2002) for C/L model]… User: Electronics store Critical Event: Lightning strike/surge Cost to protect: $300 Expense of a Loss: $10,000
Costs (Forecast \ Observed): Yes/Yes = Hit $300; Yes/No = F.A. $300; No/Yes = Miss $10,000; No/No = C.R. $0
a = C/L = (F.A. - C.R.) / (Miss + F.A. - Hit - C.R.) = 0.03
If no forecast information is available, the user will always protect if a < o, and never protect if a > o, where o is the climatological frequency of the event

Decision Theory Example If the calibration were perfect, the user would take protective action whenever the forecast probability is > a. But the forecast is not perfectly reliable…

Apply a cost-loss model to assist in the decision (prior calibration is unnecessary) Define a cost-loss model as in Murphy (1977); Legg and Mylne (2004); Mylne (2002) –This can be done without probabilistic calibration as the technique implicitly calibrates based on past performance
V = (E_climate - E_forecast) / (E_climate - E_perfect)
Decision Theory Example

V = (E_climate - E_forecast) / (E_climate - E_perfect)
V is a general assessment of forecast value relative to the perfect forecast (i.e., basically a skill score). V = 1 indicates a perfect forecast system (i.e., action is taken only when necessary) V < 0 indicates a system of equal or lesser value than climatology

Decision Theory Example
V = (E_climate - E_forecast) / (E_climate - E_perfect)
E_climate = min[ (1 - o) F.A. + o Hit ; (1 - o) C.R. + o Miss ]
E_perfect = o Hit
E_forecast = h Hit + m Miss + f F.A. + r C.R.
o = climatological freq = h + m
Costs (Forecast \ Observed): Yes/Yes = Hit $300; Yes/No = F.A. $300; No/Yes = Miss $10,000; No/No = C.R. $0
Performance (Forecast \ Observed): Yes/Yes = Hit (h); Yes/No = F.A. (f); No/Yes = Miss (m); No/No = C.R. (r)
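Plugging the slide's definitions into a short sketch (the contingency fractions h, m, f, r are hypothetical):

```python
# Expenses per outcome, from the electronics-store example
hit, miss, fa, cr = 300.0, 10000.0, 300.0, 0.0

def value_score(h, m, f, r):
    """h, m, f, r: fractions of all forecasts that are hits, misses,
    false alarms, and correct rejections (h + m + f + r = 1)."""
    o = h + m                                   # climatological frequency
    e_climate = min((1 - o) * fa + o * hit,     # always protect
                    (1 - o) * cr + o * miss)    # never protect
    e_perfect = o * hit
    e_forecast = h * hit + m * miss + f * fa + r * cr
    return (e_climate - e_forecast) / (e_climate - e_perfect)

print(value_score(h=0.02, m=0.01, f=0.05, r=0.92))   # ~0.62 for this example
```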

Decision Theory Example Maximum Potential Value of the Forecast and its Associated Probability [Value curve: potential value V vs. a = Cost/Loss Ratio (log scale), spanning the Never Protect and Always Protect limits] Action probability for a = .03 is 7% with V = .64

Summary Ensembles provide information on mean, spread, and forecast uncertainty (probabilities) Derived products viewed in probability space have proven useful at the SPC Combined or joint probabilities very useful When necessary, ensembles can be calibrated to provide reliable estimates of probability and/or aid in decision making

SPC SREF Products on WEB

References
Bright, D.R., M. Wandishin, R. Jewell, and S. Weiss, 2005: A physically based parameter for lightning prediction and its calibration in ensemble forecasts. Preprints, Conf. on Meteor. Appl. of Lightning Data, AMS, San Diego, CA (CD-ROM 4.3).
Cheung, K.K.W., 2001: A review of ensemble forecasting techniques with a focus on tropical cyclone forecasting. Meteor. Appl., 8.
Ebert, E.E., 2001: Ability of a poor man's ensemble to predict the probability and distribution of precipitation. Mon. Wea. Rev., 129.
Hamill, T.M., and S.J. Colucci, 1998: Evaluation of Eta-RSM ensemble probabilistic precipitation forecasts. Mon. Wea. Rev., 126.
Legg, T.P., and K.R. Mylne, 2004: Early warnings of severe weather from ensemble forecast information. Wea. Forecasting, 19.
Mylne, K.R., 2002: Decision-making from probability forecasts based on forecast value. Meteor. Appl., 9.
Sivillo, J.K., and J.E. Ahlquist, 1997: An ensemble forecasting primer. Wea. Forecasting, 12.
Stensrud, D.J., and N. Yussouf, 2003: Short-range ensemble predictions of 2-m temperature and dewpoint temperature over New England. Mon. Wea. Rev., 131.
Wilks, D.S., 1995: Statistical Methods in the Atmospheric Sciences. International Geophysics Series, Vol. 59, Academic Press, 467 pp.