Real-time Verification of Operational Precipitation Forecasts using Hourly Gauge Data
Andrew Loughe, Judy Henderson, Jennifer Mahoney, Edward Tollerud

Presentation transcript:

Real-time Verification of Operational Precipitation Forecasts using Hourly Gauge Data
Andrew Loughe, Judy Henderson, Jennifer Mahoney, Edward Tollerud
Real-time Verification System (RTVS), NOAA / FSL, Boulder, Colorado, USA

Outline
- Some approaches to objective verification
- How we perform automated precipitation verification
- What we mean by "real-time"
- Forecasts + Obs --> Results disseminated over the web (the steps involved)
- QC, model comparisons, statistical displays
- Future direction

If you don't have objective data, you are just another person with an opinion.

Our Approach? Basically, we're gross! No really, we are...
- We process 4,500 gauge measurements each hour of every day.
- On average we retain 2,800 "good" reports per hour.
- That's roughly 67,000 observations per day, about 2 million per month, and over 6 million per season.

The Real-time Verification System
- An independent, real-time, automated data ingest and management system
- Gauge observations received each hour of every day (~4,500)
- Gross error check performed on the observations
- Model forecasts interpolated to the observation points (see the interpolation sketch below)
- Results stored in 2 x 2 contingency tables of forecast / observation pairs (YY, YN, NY, NN)
- Graphics, skill scores, and contingency information disseminated over the World Wide Web
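A minimal sketch of the interpolation step, assuming a regular lat/lon model grid; the function and variable names are illustrative, not RTVS source code. It samples a gridded precipitation forecast at the gauge locations using the nearest grid point.

```python
import numpy as np

def interpolate_to_gauges(precip_grid, grid_lats, grid_lons, gauge_lats, gauge_lons):
    """Sample a gridded forecast at gauge locations (nearest grid point).

    precip_grid          : 2-D array of forecast precipitation (nlat x nlon)
    grid_lats, grid_lons : 1-D arrays defining the regular model grid
    gauge_lats, gauge_lons : 1-D arrays of gauge coordinates
    Returns one forecast value per gauge.
    """
    # Index of the nearest grid latitude / longitude for every gauge
    i = np.abs(grid_lats[:, None] - gauge_lats[None, :]).argmin(axis=0)
    j = np.abs(grid_lons[:, None] - gauge_lons[None, :]).argmin(axis=0)
    return precip_grid[i, j]
```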

Alternative Approaches (should be objective)
- Grid-to-grid verification: we're game... but not yet!
- More fair to the modelers
- Less fair to the end-users of the forecast?
- More representative of the areal coverage of precipitation
- Can do pattern matching and partitioning of the error (Ebert et al.) or studies of representativeness error (Foufoula et al.)

What about Case Studies? Do you fish with a pole or do you fish with a net? We fish with a net.
- Case studies are insufficient for evaluating national-scale forecast systems
- Subjective analyses often focus on where forecasts work well, and not on where they work poorly
- There is a need to assess variability on many time and space scales (from daily to seasonal)
- Timely and objective information is needed for decision making

Realtime or Near-Realtime?
- Realtime processing... monthly and seasonal dissemination of results (for now)
- Gauge data stored in hourly bins
- Model data interpolated once the observations catch up (models initialized as late as 18Z, and then 24-h forecasts are made)
- Data collected over numerous accumulation periods

Go with the flow...
I. Obtain gauge data and collect it into hourly bins; match data with a list of "good" stations (QC'd list)
II. Interpolate model data to the "good" observation points
III. Accumulate precipitation over 3, 6, 12, and 24 hours
IV. Compute contingency pairs (YY, YN, NY, NN) (see the sketch below)
V. Process these contingency data to create plots of ESS and bias for the Eta and RUC2 models
VI. Make these displays and the associated statistical information available through the web
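A minimal sketch of steps III and IV, assuming the hourly gauge and interpolated forecast values are already matched by station and hour; the arrays and names are placeholders, not RTVS code. It accumulates hourly amounts over a window and counts the YY / YN / NY / NN pairs at one threshold.

```python
import numpy as np

def contingency_counts(fcst_accum, obs_accum, threshold):
    """Count YY, YN, NY, NN pairs for one precipitation threshold."""
    fcst_yes = fcst_accum >= threshold
    obs_yes = obs_accum >= threshold
    yy = int(np.sum(fcst_yes & obs_yes))    # forecast yes, observed yes (hits)
    yn = int(np.sum(fcst_yes & ~obs_yes))   # forecast yes, observed no  (false alarms)
    ny = int(np.sum(~fcst_yes & obs_yes))   # forecast no,  observed yes (misses)
    nn = int(np.sum(~fcst_yes & ~obs_yes))  # forecast no,  observed no  (correct negatives)
    return yy, yn, ny, nn

# Example: a 6-h accumulation from hourly arrays shaped (n_stations, n_hours)
hourly_obs = np.random.rand(2800, 6)    # placeholder hourly gauge amounts (inches)
hourly_fcst = np.random.rand(2800, 6)   # placeholder interpolated forecast amounts
obs_6h = hourly_obs.sum(axis=1)
fcst_6h = hourly_fcst.sum(axis=1)
print(contingency_counts(fcst_6h, obs_6h, threshold=0.10))
```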

A Point-Specific Approach (Eta at 40 km)

Gauge Data Checked for Accuracy
- Hourly gauge data are checked for accuracy against radar, 24-h totals, and nearest-neighbor reports (a simple consistency-check sketch follows)
- Further data are included through in-house QC efforts
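A hypothetical sketch of the kind of gross error / consistency check described above, not the actual RTVS QC code; the tolerance values are illustrative only. It flags an hourly report that is negative, implausibly large, larger than the station's 24-h total, or grossly inconsistent with its nearest neighbor.

```python
def passes_gross_check(hourly_amount, daily_total, neighbor_amount,
                       max_hourly=3.0, neighbor_tol=2.0):
    """Return True if an hourly gauge report looks physically plausible.

    hourly_amount   : this station's 1-h precipitation (inches)
    daily_total     : the same station's reported 24-h total (inches)
    neighbor_amount : 1-h amount at the nearest neighboring gauge (inches)
    """
    if hourly_amount < 0.0 or hourly_amount > max_hourly:
        return False                      # negative or implausibly large value
    if hourly_amount > daily_total:
        return False                      # hourly value exceeds the 24-h total
    if abs(hourly_amount - neighbor_amount) > neighbor_tol:
        return False                      # grossly inconsistent with neighbor
    return True
```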

Forecast / Observation Comparisons
- Comparisons made at numerous thresholds, from 0.1 to 5.0 inches
- Comparisons made over 3, 6, 12, and 24-h accumulation periods

2x2 Contingency Tables
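For reference (the slide's table graphic is not reproduced here), the standard 2 x 2 layout and the two scores named on the flow slide are given below; reading "ESS" as the equitable skill (threat) score is our assumption about the abbreviation.

                 Observed yes   Observed no
Forecast yes         YY             YN
Forecast no          NY             NN

Bias = (YY + YN) / (YY + NY)
ETS  = (YY - YYrand) / (YY + YN + NY - YYrand),  where  YYrand = (YY + YN)(YY + NY) / (YY + YN + NY + NN)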

Results Available over the Web: www-ad.fsl.noaa.gov/afra/rtvs/precip
- Specify parameters... obtain a graphical result
- View contingency tables stored on disk

The Future! Access and Displays via Database (Model Icing Forecasts)
- Specify parameters
- Display results (gnuplot) via database query (MySQL)
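An illustrative sketch of what such a query-and-plot step could look like; the table, column names, and values are hypothetical, not the actual RTVS schema, and sqlite3 stands in for MySQL so the sketch is self-contained and runnable.

```python
import sqlite3  # stand-in for MySQL to keep the sketch self-contained

# Hypothetical schema: one row per (model, threshold, valid_time) with YY/YN/NY/NN counts.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE contingency
                (model TEXT, threshold REAL, valid_time TEXT,
                 yy INTEGER, yn INTEGER, ny INTEGER, nn INTEGER)""")
conn.execute("INSERT INTO contingency VALUES ('RUC2', 0.10, '2001-04-01 12Z', 310, 120, 95, 2275)")

# Pull the counts for one model / threshold and compute the bias on the fly.
rows = conn.execute("""SELECT valid_time,
                              1.0 * (yy + yn) / (yy + ny) AS bias
                       FROM contingency
                       WHERE model = 'RUC2' AND threshold = 0.10
                       ORDER BY valid_time""").fetchall()

# Write a simple two-column file that gnuplot could plot, e.g.:
#   plot 'bias_ruc2.dat' using 0:2 with linespoints
with open("bias_ruc2.dat", "w") as f:
    for valid_time, bias in rows:
        f.write(f"{valid_time}\t{bias:.3f}\n")
```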

Are these methods sufficient?
- Trade-off between dealing with the specific and dealing with the general (rifle vs. shotgun)
- Method is not discretized by region or event
- Density of observations is not smooth
- Although the method is straightforward, there is still a lack of understanding of what the skill scores represent
- May tell you which forecast system is "better", but not why

Future Plans
- Add more models to this point-specific approach, and provide a measure of confidence
- Perform verification using a gridded, analyzed precipitation field (Stage IV precipitation)
- Verify the probabilistic forecasts of ensembles
- Move verification data into the relational database and compute results on the fly
- Relate verification results geographically
- Access verification results as soon as the forecast period ends (timeliness)

Cont'd... Test and extend QC of the observations. Currently we are:
- Assessing skill using East-only and West-only hourly station data
- Assessing skill using the full RFC and in-house QC methods
- Assessing skill using no QC methods whatsoever
- Comparing these four experimental results

Problem: Not Reporting "Zero" Precipitation?

The Effect on Precipitation Verification