Application of object-oriented verification techniques to ensemble precipitation forecasts William A. Gallus, Jr. Iowa State University June 5, 2009.


Motivation Object-oriented techniques (e.g., CRA, MODE) have been developed to better evaluate fine-scale forecasts, but they have been applied almost exclusively to deterministic forecasts. How can they best be applied to ensembles?

How does CRA (Ebert and McBride 2000) work? Find Contiguous Rain Areas (CRAs) in the observed and forecast fields to be verified. Define a rectangular search box around the CRA in which to look for the best match between forecast and observations. The displacement is determined by shifting the forecast within the box until the MSE is minimized or the correlation coefficient is maximized.
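A minimal sketch of the CRA displacement step described above, assuming a brute-force search over integer grid shifts with MSE as the matching criterion; the function name, the max_shift search-box half-width, and the edge-wrapping np.roll shift are illustrative choices, not the original implementation.

```python
import numpy as np

def cra_displacement(fcst, obs, max_shift=10):
    """Brute-force CRA-style displacement search: shift the forecast
    within a +/- max_shift box and keep the shift that minimizes the
    MSE against the observations."""
    best_shift = (0, 0)
    best_mse = np.mean((fcst - obs) ** 2)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            # np.roll wraps at the edges; a real implementation would
            # mask points shifted outside the search box instead.
            shifted = np.roll(np.roll(fcst, dy, axis=0), dx, axis=1)
            mse = np.mean((shifted - obs) ** 2)
            if mse < best_mse:
                best_mse, best_shift = mse, (dy, dx)
    return best_shift, best_mse
```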

Method for Object-based Diagnostic Evaluation (MODE; Davis et al. 2006a,b). MODE identifies objects using convolution thresholding. Merging and matching are performed via fuzzy-logic approaches (systems do not have to be contiguous).
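A minimal sketch of convolution thresholding as the slide describes it, assuming a circular smoothing kernel and a fixed rain threshold; the radius and threshold values and the use of SciPy's ndimage routines are illustrative assumptions rather than MODE's actual configuration.

```python
import numpy as np
from scipy import ndimage

def identify_objects(precip, radius=5, threshold=5.0):
    """Convolution thresholding: smooth the precipitation field with a
    circular (disk) kernel, threshold the smoothed field, and label the
    resulting contiguous regions as objects."""
    y, x = np.ogrid[-radius:radius + 1, -radius:radius + 1]
    disk = (x ** 2 + y ** 2 <= radius ** 2).astype(float)
    disk /= disk.sum()
    smoothed = ndimage.convolve(precip, disk, mode="constant", cval=0.0)
    mask = smoothed >= threshold
    labels, n_objects = ndimage.label(mask)
    return labels, n_objects
```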

Methodology Precipitation forecasts from two sets of ensembles were used as input to MODE and CRA: 1) Two 8-member, 15-km grid spacing WRF ensembles (Clark et al. 2008), one with mixed physics alone (Phys) and the other with perturbed IC/LBCs alone (IC/LBC), both initialized at 00 UTC.

Two 8-member ensembles: 6-h precipitation forecasts from both ensembles over 60-h integration periods for 72 cases were evaluated. Clark et al. (2008) found that spread and skill initially were better in Phys than in IC/LBC, but after roughly 30 h the lack of perturbed LBCs hurt the Phys ensemble, and IC/LBC had faster growth of spread and better skill later. A diurnal signal was also noted in traditional skill/spread measures. Do these same trends show up in the object parameters?
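One plausible way to quantify the growth of spread for an object parameter, assuming spread is the standard deviation across members at each lead time and growth is the slope of a straight-line fit; the percentage normalization is only a guess at how the values quoted on the later result slides might have been derived.

```python
import numpy as np

def spread_growth(param, lead_hours):
    """param: array (n_members, n_lead_times) of one object parameter
    (e.g. areal coverage) per member and lead time.  Spread is the SD
    across members; growth is the slope of a linear trend, also expressed
    as a percentage of the mean spread per output interval."""
    spread = param.std(axis=0, ddof=1)
    slope, _ = np.polyfit(lead_hours, spread, 1)
    pct_per_interval = 100.0 * slope * np.diff(lead_hours).mean() / spread.mean()
    return spread, slope, pct_per_interval
```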

Methodology (cont.) 2) The second set included 5 members of a 10-member, 4-km ensemble (ENS4) run by CAPS with mixed IC/LBCs/physics, and 5 similar members of a 15-member, 20-km ensemble (ENS20), studied in Clark et al. (2009).

ENS4 vs. ENS20: 6-h precipitation was evaluated over 30-h integrations from 23 cases, with all precipitation remapped to a 20-km grid. Clark et al. (2009) found that spread growth was faster in ENS4 than in ENS20, and that ENS4 had a much better depiction of the diurnal cycle in precipitation. Question: Do these two results also show up in the object parameters?

Other forecasting questions: Is the mean of the ensemble’s distribution of object-based parameters (run MODE/CRA on each member) a better forecast than one from an ensemble mean (run MODE/CRA once)? Does an increase in spread imply less predictability?

Object-oriented technique parameters examined: areal coverage, mean rain rate, rain volume of the system, and displacement error.
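A hedged sketch of how these four parameters might be computed for one labeled object on a regular grid; the rain-volume definition (rate times area summed over the object), the default 15-km grid spacing, and the centroid-based displacement noted afterward are illustrative assumptions.

```python
import numpy as np

def object_parameters(precip, labels, obj_id, dx_km=15.0):
    """Areal coverage, mean rain rate, rain volume, and centroid for one
    labeled object on a grid with spacing dx_km (km)."""
    mask = labels == obj_id
    area = mask.sum() * dx_km ** 2            # areal coverage (km^2)
    mean_rate = precip[mask].mean()           # mean rain rate over the object
    volume = precip[mask].sum() * dx_km ** 2  # rain volume (rate x area)
    centroid = np.array(np.nonzero(mask)).mean(axis=1)  # (row, col), grid units
    return area, mean_rate, volume, centroid
```

The displacement error for a matched pair of objects could then be taken as the distance between the forecast and observed centroids multiplied by the grid spacing.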

Figure values: 0.32% C-Phys, 11.04% C-ICLBC, -0.02% M-Phys, 4.65% M-ICLBC (C = CRA, M = MODE). CRA results show statistically significant differences at some times (*); the slopes of the trend lines are statistically significantly different for both CRA and MODE.

Figure values: 0.75% C-Phys, 3.49% C-ICLBC, 0.83% M-Phys, 6.68% M-ICLBC. MODE still shows statistically significantly greater spread growth for IC/LBC.

Figure values: 4.91% C-Phys, 13.41% C-ICLBC, 1.31% M-Phys, 9.30% M-ICLBC. Both MODE and CRA show statistically significantly greater spread growth in IC/LBC.

Figure values: 4.96% C-Phys, 7.19% C-ICLBC, -0.75% M-Phys, 4.06% M-ICLBC. MODE still shows statistically significantly greater spread growth for IC/LBC.

Conclusions (8 members) Increased spread growth in IC/LBC compared to Phys does show up in the four object parameters, especially for areal coverage, and more so in the MODE results than in CRA. The diurnal signal (more precipitation at night) also shows up to some extent in the rate, volume, and areal coverage SDs.

Comparison of probability-matched (PM) ensemble mean (Ebert 2001) values to an average of the MODE output from each ensemble member. Note: PM may result in a better forecast for location (smaller displacements) but a much worse forecast for area (it is also not as good for volume and rate; not shown). Asterisks at the top (bottom) of the figure indicate a statistically significant difference for ENS20 (ENS4).
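For reference, a minimal sketch of a probability-matched ensemble mean in the spirit of Ebert (2001): the ensemble-mean field keeps its spatial pattern, but its values are replaced by values pooled from all members. The sorting and thinning details below are assumptions, not necessarily those used in the talk.

```python
import numpy as np

def probability_matched_mean(members):
    """members: array (n_members, ny, nx).  The ensemble-mean field keeps
    its spatial ranking, but its values are replaced by every n_members-th
    value of the pooled, sorted member distribution."""
    n_mem, ny, nx = members.shape
    mean_field = members.mean(axis=0)
    pooled = np.sort(members.ravel())[::-1][::n_mem]   # ny*nx values, largest first
    order = np.argsort(mean_field.ravel())[::-1]       # grid points, largest mean first
    pm = np.empty(ny * nx)
    pm[order] = pooled
    return pm.reshape(ny, nx)
```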

Figure: percentage of times the observed value fell within the min/max of the ensemble, for areal coverage, rate, and volume (CRA results for Phys and IC/LBC).

Figure: skill (MAE) as a function of spread (cases with spread > 1.5*SD vs. < 0.5*SD), for rate (x10), volume, and area (/1000); CRA applied to the Phys ensemble (IC/LBC similar).
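A sketch of the spread-skill stratification the figure describes, assuming "spread" is a per-case value (e.g. the SD of an object parameter across members) and that the 1.5*SD and 0.5*SD thresholds are multiples of the SD of the case-to-case spread distribution; both interpretations are assumptions.

```python
import numpy as np

def mae_by_spread(fcst, obs, spread, low=0.5, high=1.5):
    """fcst, obs, spread: one value per case for a given object parameter.
    Returns the MAE over low-spread and high-spread cases, stratified by
    the SD of the case-to-case spread distribution."""
    sd = spread.std(ddof=1)
    err = np.abs(fcst - obs)
    return err[spread < low * sd].mean(), err[spread > high * sd].mean()
```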

Conclusions (forecasting approaches) For some parameters, applying MODE or CRA to the PM mean might be fine; for others it is better to run MODE/CRA on each member. System rain volume and areal coverage show a clear signal of better skill when spread is smaller; this is not so true for rate. The average rain rate of systems is not as big a problem in the forecasts as areal coverage (and thus volume), which might explain the lack of a clear spread-skill relationship for rate. Areal coverage is especially poorly forecast.

Acknowledgments Funding was provided by WRF-DTC and through NSF grant ATM. Thanks to Adam Clark for providing the precipitation output, to Randy Bullock and John Halley-Gotway for MODE and R help at NCAR, and to Eric Aligo and Daryl Herzmann for assistance with Excel and computer issues.

April hour forecast from Mixed Physics/Dynamics ensemble

Same forecast but from mixed initial/boundary condition ensemble

Example from IHOP

Example of MODE output

Figure values: 10.94% ENS4; the ENS20 value is missing. Errors for ENS4 are statistically significantly less than those for ENS20; ENS4 has a slightly better depiction of the diurnal minimum.

Figure values: 4.08% ENS4, 5.33% ENS20. Notice that the SDs are a much smaller portion of the average values than for the other parameters.

Figure values: 10.29% ENS4, 1.08% ENS20. ENS4 has a slightly better depiction of the diurnal minimum.

Figure values: 0.50% ENS4, 0.75% ENS20. Small improvement in displacement error in ENS4.

Conclusions (ENS4 vs. ENS20) There is a hint of a better diurnal signal in ENS4 (area and volume). ENS4 seems more skillful, but the differences are not usually statistically significant. Volume best shows the faster spread growth in ENS4 compared to ENS20, but the results are not as significant as in the 8-member ensembles. Rain rate has less variability among members and is better forecast.