Verification and Calibration of Simulated Reflectivity Products During DWFE
Mark T. Stoelinga, University of Washington
Thanks to: Steve Koch, NOAA/ESRL/GSD; Brad Ferrier, NCEP

Hurricane WRF (Chen 2006, WRF Workshop)

2006 NOAA/SPC Spring Program

2005 DTC Winter Forecast Experiment (DWFE) (Koch et al. 2005) [Figure panels: WRF-ARW simulated reflectivity, WRF-ARW 3-h precipitation, observed composite reflectivity, WRF-ARW 700-hPa winds/RH]

Variational Data Assimilation: What is the best "forward operator" to use as a bridge between observed radar reflectivity and the model microphysics?

Forecaster Testimonials:
"…(we) liked the 4 km BAMEX model run and DON'T want it to go away. The reflectivity forecasts were really very helpful, and almost uncanny." - NWS forecaster after the BAMEX field study
"Love the reflectivity product!" - NWS forecaster after DWFE

However…
"Before any meaning can be ascribed to the Reflectivity Product for the purpose of interpreting mesoscale model forecasts, it is important to understand how it is determined." - Koch et al. (2005)

Study Goals
Using archived forecast model runs and observed reflectivity from DWFE, examine Simulated Reflectivity (SR) from two different perspectives:
1. Use statistics and direct examination to see where and why different SR products resemble or differ from observed reflectivity.
2. Consider the question: If it can be shown that there is a systematic error in a particular SR product, such that the SR product consistently produces too much or too little of a given reflectivity value, can the SR product be "calibrated" to more closely match the observed radar reflectivity?

Data Sources
- Archived gridded forecast model output from DWFE
- Archived observed and simulated composite reflectivity imagery
- 3-D gridded observed reflectivity from the National Mosaic and Multi-Sensor Quantitative Precipitation Estimation (NMQ) system
Thanks to the DTC and NSSL.

13 February 2005 Cyclonic Storm System [Map showing the analysis regions: Stratiform Area and Convective/Stratiform Area]

Stratiform Area: Composite Reflectivity [Panels: Observed, NMM consistent, ARW generic, ARW consistent]

Stratiform Area: CFADs (Yuter and Houze 1995) [Panels: Observed, NMM consistent, ARW generic, ARW consistent]
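For readers unfamiliar with the diagnostic, a CFAD (contoured frequency by altitude diagram; Yuter and Houze 1995) is simply a joint histogram of reflectivity value and height, normalized per level when plotted. The sketch below shows one way such a histogram could be assembled from a 3-D gridded reflectivity field; the array shapes, bin widths, and function name are illustrative assumptions, not the processing actually used in this study.

```python
import numpy as np

def compute_cfad(refl_dbz, heights_km, dbz_bins=np.arange(-10, 65, 5),
                 z_bins=np.arange(0, 16.5, 0.5)):
    """Build a CFAD: frequency of reflectivity values binned by altitude.

    refl_dbz   : 3-D array (nz, ny, nx) of reflectivity in dBZ (NaN = no echo)
    heights_km : 1-D array (nz,) of grid-level heights in km
    Returns a 2-D count array (n_height_bins, n_dbz_bins) plus the bin edges.
    """
    cfad = np.zeros((len(z_bins) - 1, len(dbz_bins) - 1))
    for k in range(refl_dbz.shape[0]):
        # Which height bin does this model/analysis level fall into?
        iz = np.searchsorted(z_bins, heights_km[k], side="right") - 1
        if iz < 0 or iz >= cfad.shape[0]:
            continue
        vals = refl_dbz[k][np.isfinite(refl_dbz[k])]
        counts, _ = np.histogram(vals, bins=dbz_bins)
        cfad[iz] += counts
    return cfad, dbz_bins, z_bins
```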

Stratiform Area: Frequency Distribution of Height of Maximum Reflectivity [Histogram: height above freezing level (km) vs. number of occurrences, for Observed, ARW generic, NMM consistent, and ARW consistent]
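A related per-column diagnostic used on this and a later slide is the height of maximum reflectivity relative to the freezing level. One straightforward way to compute it from a 3-D gridded reflectivity field is sketched below; the array names and the NaN-for-no-echo convention are assumptions for illustration, not the study's actual code.

```python
import numpy as np

def height_of_max_reflectivity(refl_dbz, heights_km, freezing_level_km):
    """Per-column height (km above the freezing level) of the reflectivity maximum.

    refl_dbz          : 3-D array (nz, ny, nx), NaN where there is no echo
    heights_km        : 1-D array (nz,) of level heights in km
    freezing_level_km : 2-D array (ny, nx) of freezing-level heights in km
    """
    filled = np.where(np.isfinite(refl_dbz), refl_dbz, -np.inf)
    k_max = np.argmax(filled, axis=0)          # level index of the column maximum
    h_max = heights_km[k_max]                  # height of the maximum (km)
    h_rel = h_max - freezing_level_km          # height relative to the freezing level
    has_echo = np.isfinite(refl_dbz).any(axis=0)
    return np.where(has_echo, h_rel, np.nan)   # mask echo-free columns
```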

Differences in ARW Reflectivity Products
- The real-time ARW post-processor used a "generic" SR that assumes a constant intercept parameter for the snow size distribution.
- The "consistent" ARW SR product uses a temperature-dependent intercept, consistent with the WSM5 microphysics used in ARW.
[Figures: snow intercept N0 (m^-4) vs. temperature (ºC), and snow particle size distributions vs. particle size (mm) for the same mixing ratio q_s = 0.1 g kg^-1]
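As a rough illustration of what a temperature-dependent snow intercept means in practice, the sketch below contrasts a constant N0s with the exponential temperature dependence commonly cited for WSM5-type schemes, N0s(T) ≈ 2×10^6 exp[0.12 (T0 − T)] m^-4. The constants, the cap value, and the example "constant" intercept are assumptions based on published descriptions of such schemes, not values taken from the DWFE configuration.

```python
import numpy as np

N0S_CONST = 2.0e7   # example constant intercept (m^-4); assumed, not the DWFE value
N0S_REF = 2.0e6     # assumed reference intercept at 0 degC (m^-4)
N0S_MAX = 1.0e11    # cap often applied at very cold temperatures (assumed)
T0 = 273.15         # freezing point (K)

def snow_intercept(temp_k):
    """Temperature-dependent snow intercept, WSM5-style exponential form."""
    return np.minimum(N0S_REF * np.exp(0.12 * (T0 - temp_k)), N0S_MAX)

temps = np.array([273.15, 263.15, 253.15, 243.15])   # 0, -10, -20, -30 degC
for t, n0s in zip(temps, snow_intercept(temps)):
    print(f"T = {t - T0:6.1f} degC : N0s = {n0s:.2e} m^-4 (constant = {N0S_CONST:.1e})")
```

The point of the contrast: for the same snow mixing ratio, a larger intercept at cold temperatures implies many small particles (lower reflectivity), while a smaller intercept near freezing implies fewer, larger particles (higher reflectivity).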

Differences in ARW Reflectivity Products
- The real-time ARW post-processor used a "generic" SR that does not account for the change in dielectric factor for wet snow (the "brightband").
- The "consistent" ARW SR product uses the liquid-water dielectric factor for snow at T ≥ 0 ºC.
→ Increases reflectivity by ~7 dBZ in the melting layer.
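The ~7 dBZ figure follows directly from the ratio of the dielectric factors of liquid water and ice. A back-of-the-envelope check, assuming the commonly used values |K_w|² ≈ 0.93 and |K_i|² ≈ 0.176:

```python
import math

K2_WATER = 0.93   # dielectric factor |K|^2 for liquid water (commonly used value)
K2_ICE = 0.176    # dielectric factor |K|^2 for ice, referenced to solid-ice density

# Switching the dielectric factor from ice to water for melting snow,
# with everything else held fixed, scales Z by K2_WATER / K2_ICE.
delta_dbz = 10.0 * math.log10(K2_WATER / K2_ICE)
print(f"Brightband enhancement from the dielectric factor alone: {delta_dbz:.1f} dB")
# -> roughly 7.2 dB, consistent with the ~7 dBZ quoted above
```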

Differences in ARW Reflectivity Products [Panels: (a) ARW generic, (b) ARW generic + variable N0s, and the difference (b) − (a)]

Differences in ARW Reflectivity Products [Panels: (a) ARW generic + variable N0s, (b) ARW generic + variable N0s + wet snow (= ARW consistent), and the difference (b) − (a)]

Differences between NMM and ARW Reflectivity Products [Panels: (a) ARW generic, (b) NMM consistent]

Stratiform Area: Composite Reflectivity Statistics [Panels: Observed, NMM consistent, ARW generic, ARW consistent]

Stratiform Area: Composite Reflectivity Frequency Distributions [Histogram: number of grid boxes vs. reflectivity (dBZ), for Observed, ARW generic, NMM consistent, and ARW consistent]

Calibration of Composite Simulated Reflectivity
Consider the question: If it can be shown that there is a systematic error in a particular SR product, such that the SR product consistently produces too much or too little of a given reflectivity value, can the SR product be "calibrated" to more closely match the observed radar reflectivity?
How would we do this?
- Use the bias? No. SR is too high in some places, too low in others.
- Use correlation/linear regression? No. Forecast and observed precipitation are not spatially well-correlated (Ebert and McBride 2000).
- How about matching the frequency distribution?

Calibration of Composite Simulated Reflectivity [Histogram: number of grid boxes vs. reflectivity (dBZ), for Observed, ARW generic, NMM consistent, and ARW consistent]

Calibration of Composite Simulated Reflectivity
We seek a "calibration function" Z_new = h(Z_m) such that

∫_{−∞}^{Z_m} f(Z) dZ = ∫_{−∞}^{h(Z_m)} g(Z) dZ,

where Z_m is the composite SR, and f(Z) and g(Z) are the frequency distributions of the simulated and observed composite reflectivity, respectively. That is, h maps each simulated reflectivity value to the observed value that has the same cumulative frequency.

While h(Z_m) is difficult to extract mathematically, there is a practical and simple way to arrive at it:
1. Start with a set of SR values that will be used to obtain the calibration equation (e.g., all the grid values of composite SR in a single plot).
2. Rank all the values in order from lowest to highest.
3. Do the same for the corresponding observed reflectivity set. It is important that the same number of points is used for both.
4. Align the two ranked sets (simulated and observed).
The full set of pairs of reflectivity values provides the precise calibration function needed to transform the SR plot into one that has the exact same frequency distribution as the corresponding observed reflectivity plot (a code sketch of this procedure is given below).
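The rank-and-align procedure above is, in effect, empirical quantile mapping. Below is a minimal numpy sketch of it, assuming the simulated and observed composite reflectivity have already been put on a common grid; the function and variable names are illustrative, not the actual DWFE processing code.

```python
import numpy as np

def build_calibration(sim_dbz, obs_dbz):
    """Derive the calibration function h by rank-aligning the two samples.

    sim_dbz, obs_dbz : 1-D arrays of simulated and observed composite
                       reflectivity (dBZ) with the same number of points.
    Returns (sim_sorted, obs_sorted): paired values defining Z_new = h(Z_m).
    """
    assert sim_dbz.size == obs_dbz.size, "both sets need the same number of points"
    return np.sort(sim_dbz), np.sort(obs_dbz)

def apply_calibration(sim_dbz, sim_sorted, obs_sorted):
    """Map simulated reflectivity onto the observed frequency distribution
    by interpolating along the rank-aligned calibration curve."""
    return np.interp(sim_dbz, sim_sorted, obs_sorted)

# Toy example: a simulated field that is systematically too weak at high reflectivity
rng = np.random.default_rng(0)
obs = 10.0 + 12.0 * rng.gamma(2.0, 1.0, size=10000)   # "observed" dBZ
sim = 10.0 + 8.0 * rng.gamma(2.0, 1.0, size=10000)    # "simulated" dBZ, compressed range
sim_sorted, obs_sorted = build_calibration(sim, obs)
calibrated = apply_calibration(sim, sim_sorted, obs_sorted)
print("obs 99th pct:", np.percentile(obs, 99).round(1),
      "| raw sim:", np.percentile(sim, 99).round(1),
      "| calibrated sim:", np.percentile(calibrated, 99).round(1))
```

Applied to the same sample it was trained on, the mapping reproduces the observed frequency distribution exactly by construction; the more interesting question, examined in the 4-week study later in the talk, is how well a curve trained on past cases transfers to independent forecasts.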

Calibration of Composite Simulated Reflectivity [Panels: Observed, NMM consistent, ARW generic, ARW consistent]

Calibration Curves for Stratiform Area [Plot: calibrated reflectivity (dBZ) vs. simulated reflectivity (dBZ) for ARW generic, NMM consistent, and ARW consistent, with a 1-to-1 reference line]

Uncalibrated Composite Simulated Reflectivity [Panels: Observed, NMM consistent, ARW generic, ARW consistent]

Calibrated Composite Simulated Reflectivity [Panels (a)–(d): Observed, NMM consistent, ARW generic, ARW consistent]

13 February 2005 Cyclonic Storm System [Map showing the analysis regions: Stratiform Area and Convective/Stratiform Area]

Convective/Stratiform Area: Composite Reflectivity [Panels: Observed, NMM consistent, ARW generic, ARW consistent]

Convective/Stratiform Area: CFADs [Panels: Observed, NMM consistent, ARW generic, ARW consistent]. Note the low observed frequency of 20–30 dBZ echoes aloft compared to all of the models.

Convective/Stratiform Area: Frequency Distribution of Height of Maximum Reflectivity [Histogram: height above freezing level (km) vs. number of occurrences, for Observed, ARW generic, NMM consistent, and ARW consistent]

Convective/Stratiform Area: Composite Reflectivity Frequency Distributions [Histogram: number of grid boxes vs. reflectivity (dBZ), for Observed, ARW generic, NMM consistent, and ARW consistent]

Calibration Curves for Convective/Stratiform Area [Plot: calibrated reflectivity (dBZ) vs. simulated reflectivity (dBZ) for ARW generic, NMM consistent, and ARW consistent, with a 1-to-1 reference line]

Uncalibrated Composite Simulated Reflectivity

Calibrated Composite Simulated Reflectivity

4-Week Study of Calibration of Composite Simulated Reflectivity
What about the mean behavior of the SR products over many different types and intensities of precipitation?
- 4-week study: 28 February – 24 March 2005 (sub-period of DWFE)
- Daily forecasts and observations of composite reflectivity at 18, 21, and 00 UTC (18-, 21-, and 24-h model forecasts)
- Area covering the CONUS from the Rocky Mountains eastward
- Used archived imagery – only 5 dBZ resolution (width of the color bands)

4-Week Study of Calibration of Composite Simulated Reflectivity

Frequency Distribution [Histogram: number of pixels vs. reflectivity (dBZ), for Observed, ARW generic, and NMM consistent]

4-Week Study of Calibration of Composite Simulated Reflectivity: Calibration Curves [Plot: calibrated reflectivity (dBZ) vs. simulated reflectivity (dBZ) for WRF-ARW (constant N0), WRF-ARW, and WRF-NMM, with a 1-to-1 reference line]
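Because the 4-week curves were derived from archived imagery with only 5-dBZ resolution, applying one to a new forecast amounts to interpolating within a short lookup table. A minimal sketch follows; the curve values below are invented purely for illustration and are not the curves shown on this slide.

```python
import numpy as np

# Hypothetical calibration curve sampled every 5 dBZ (values invented for
# illustration; real curves come from the rank-alignment procedure applied
# to the 4-week archive of simulated and observed composite reflectivity).
sim_bins = np.arange(5, 60, 5)                                     # simulated dBZ
cal_vals = np.array([3, 8, 14, 21, 27, 32, 36, 41, 46, 52, 57])    # calibrated dBZ

def calibrate_field(sr_field_dbz):
    """Apply the binned calibration curve to a composite SR field."""
    return np.interp(sr_field_dbz, sim_bins, cal_vals)

forecast = np.array([[12.0, 33.0], [47.0, 8.0]])                   # toy SR field
print(calibrate_field(forecast))
```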

Caveats of SR Calibration
1. Calibration of SR will not significantly improve the correlation of SR and observed reflectivity.
2. Calibration can only partially compensate for flaws in the model microphysics or SR algorithm.
3. Calibration functions should be based on sufficiently large data sets such that they are not influenced by a small number of bad forecasts, i.e., they should reflect the mean behavior of the model.
4. Calibration functions are dependent on many factors, including:
   - observational data quality
   - method of "cartesianizing" the observed reflectivity
   - precipitation type
   - geographic location and time of year
   - model resolution, physics, and forecast hour

Merits of SR Calibration
1. Calibration can remove systematic under- or overprediction of various reflectivity ranges and improve the "look" of SR products.
2. The process of determining the frequency distribution of SR vs. observed reflectivity, and deriving calibration functions, leads to insights into general flaws in model microphysics and SR algorithms.
3. Calibration functions may provide a more reasonable "forward operator" for assimilating observed reflectivity data into models than the straight D^6 function that is used.
4. There is potential to enhance the calibration functions by training them on more limited spatio-temporal windows, or by seeking dependencies on particular types of frequency distributions.

Recommendations
1. Model microphysics should be formulated not only to optimize QPF, but also to produce reasonable hydrometeor fields and size distributions that affect the model reflectivity.
2. To the extent possible, SR algorithms should be precisely consistent with all assumptions in the associated model microphysical scheme.
3. Ideally, SR should be calculated within the model as it runs, to take advantage of the increasingly complex and dynamic size distributions calculated by the schemes.
4. Real-time or operational SR products should be statistically examined (using CFADs and other frequency distribution tests) to understand how they behave relative to observations.
5. Real-time or operational SR products should be calibrated with observed reflectivity using the methods described herein.
6. Calibration functions should be used in forward operators for assimilating reflectivity data into models.

Finis