Verification Summit
AMB verification: rapid feedback to guide model development decisions
Patrick Hofmann, Bill Moninger, Steve Weygandt, Curtis Alexander, Susan Sahm

Motivation
There is a critical need for both rapid and comprehensive statistical and graphical verification of model forecasts from the various AMB experimental models (RUC, RR, and HRRR):
- Real-time parallel cycles as well as retrospective runs
- Two primary types:
  -- Station verification: upper-air, surface, and clouds
  -- Gridded verification: precipitation, radar reflectivity, convective probabilities
- Illuminates model biases and error patterns
- Essential for evaluating model/assimilation configuration changes
Rapid verification feedback enables timely improvement in forecast skill.

Design Goals
- Fast computation and display of verification results (real time for real-time cycles, a day or two for retrospective runs)
- Simple procedures, but with sufficient options to elucidate key aspects (quantify visual impressions)
- Built-in capabilities to allow quick stratification by key parameters (metric, threshold, scale, valid time, initial time, forecast length, region)
- Easily accessible web-based presentation of verification results: the ability to quickly examine aggregate statistics and single-case plots in a complementary manner
Verification design is driven by the needs of forecast system developers.

Design Details
- Use modified NCEP IPOLATES routines for interpolation and upscaling of input fields to multiple common grids
- Calculate contingency table fields (YY, YN, NY, NN) for multiple scales, domains, and thresholds (see the sketch below):
  -- database storage for statistical aggregation
  -- graphics for each event for detailed evaluation
- Web-based interface for aggregate statistics and event graphics
- Apply to multiple gridded fields (reflectivity, precipitation, probabilities) and multiple model runs (several versions each of RUC, RR, and HRRR, as well as RCPF, HCPF, etc.)
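To make the contingency-table step concrete, here is a minimal Python sketch (not the operational AMB code) that thresholds a forecast and an observed field on a common grid, counts the YY/YN/NY/NN cells, and derives CSI and bias from the counts. The array shapes, threshold value, and function names are illustrative assumptions.

```python
import numpy as np

def contingency_counts(fcst, obs, thresh):
    """Count hits (YY), false alarms (YN), misses (NY), and correct nulls (NN)."""
    f = fcst >= thresh          # forecast event yes/no
    o = obs >= thresh           # observed event yes/no
    yy = np.sum(f & o)          # hit
    yn = np.sum(f & ~o)         # false alarm
    ny = np.sum(~f & o)         # miss
    nn = np.sum(~f & ~o)        # correct null
    return yy, yn, ny, nn

def csi_and_bias(yy, yn, ny):
    """Critical success index and frequency bias from contingency counts."""
    csi = yy / (yy + yn + ny) if (yy + yn + ny) else float("nan")
    bias = (yy + yn) / (yy + ny) if (yy + ny) else float("nan")
    return csi, bias

# Example: 25 dBZ composite reflectivity on a common 40-km grid (placeholder data)
fcst_refl = np.random.uniform(0, 60, (100, 150))
obs_refl = np.random.uniform(0, 60, (100, 150))
yy, yn, ny, nn = contingency_counts(fcst_refl, obs_refl, 25.0)
print(csi_and_bias(yy, yn, ny))
```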

Statistics Webpages
- Composite reflectivity: time series, valid times, lead times
- 24-hour precipitation: time series, thresholds
- Convective probabilities: time series, CSI vs. bias, reliability diagrams, ROC curves

Sample "time-series" stats interface: selectable options include model, region, scale, averaging period, metric, threshold, forecast length, valid time, and date range, covering many real-time runs and retrospective experiments.

Sample application of "time-series" stats (difference mode): CSI for reflectivity > 25 dBZ over the Eastern US on a 40-km grid (3-day average), with models HRRR-dev, HRRR, and RR-dev with pseudo-obs; shaded regions indicate where HRRR-dev or HRRR is better. Successive slides annotate when changes were introduced: a longer time-step implemented in RR-prim and HRRR-dev, and shorter vertical length-scales added in RR-dev/GSI and subsequently implemented in HRRR.

Sample "time-series" stats used to examine scatter in forecast differences: CSI for 25 dBZ reflectivity on a 40-km grid over the Eastern US, +6-h forecasts, 8-22 August, comparing HRRR with RUC and with RR (regions labeled where HRRR is better).

Sample application of "lead-time" stats illustrating the CSI and bias "die-off" with forecast length (hours) for different strengths of radar heating (CSI and bias plotted ×100).
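The "die-off" curves are built by aggregating stored contingency counts over many cases at each forecast length. A minimal pandas sketch of that aggregation follows; the made-up counts and column names are illustrative assumptions, not the AMB database schema.

```python
import pandas as pd

# One row of stored contingency counts per verified case
rows = pd.DataFrame({
    "fcst_len": [1, 1, 3, 3, 6, 6],   # forecast length (hours)
    "yy": [40, 55, 30, 35, 18, 22],   # hits
    "yn": [20, 25, 22, 30, 25, 28],   # false alarms
    "ny": [15, 18, 25, 28, 30, 33],   # misses
})

# Sum counts over all cases at each lead time, then compute scores from the sums
agg = rows.groupby("fcst_len").sum()
agg["csi"] = agg.yy / (agg.yy + agg.yn + agg.ny)
agg["bias"] = (agg.yy + agg.yn) / (agg.yy + agg.ny)
print(agg[["csi", "bias"]])   # CSI/bias "die-off" with forecast length
```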

Sample application of "valid time" stats illustrating diurnal variation in scale-dependent skill: CSI (×100) vs. valid time (GMT) for HRRR 25 dBZ, 6-h forecasts at 3-km, 20-km, 40-km, and 80-km scales. Upscaled verification (especially to 40 km and 80 km) reveals "neighborhood" skill in HRRR forecasts, especially around the time of convective initiation.
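A stand-in for the upscaling step behind the coarser-scale curves: aggregate the fine (e.g., 3-km) grid into coarse blocks before thresholding, so a forecast that places convection in the right neighborhood still scores a hit. The AMB system uses modified NCEP IPOLATES routines for this; the simple block aggregation below is only an illustrative assumption.

```python
import numpy as np

def upscale(field, factor, how="max"):
    """Aggregate a fine grid into coarse cells of size factor x factor."""
    ny, nx = field.shape
    ny_c, nx_c = ny // factor, nx // factor
    blocks = field[:ny_c * factor, :nx_c * factor].reshape(ny_c, factor, nx_c, factor)
    return blocks.max(axis=(1, 3)) if how == "max" else blocks.mean(axis=(1, 3))

fine = np.random.uniform(0, 60, (600, 900))   # placeholder 3-km reflectivity grid
refl_40km = upscale(fine, 13)                 # ~40-km cells (13 x 3 km)
refl_80km = upscale(fine, 27)                 # ~80-km cells
```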

Reflectivity Graphics Webpage

Single-case plots showing "neighborhood" skill: observed reflectivity vs. the HRRR forecast (12z + 6 hr) at 3-km and 40-km scales, with misses, false alarms, and hits indicated.

Sample application of "threshold" stats to show skill for a range of precipitation amounts: matched RR vs. RUC precipitation verification on the 13-km CONUS grid, 2 × 12-hr forecasts vs. the CPC 24-h analysis, 1-31 December 2010; CSI (×100) and bias (×100) are plotted against threshold (in.).
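The CSI-vs-threshold curves come from repeating the contingency-table calculation at each precipitation threshold. A self-contained sketch with placeholder fields and illustrative threshold values:

```python
import numpy as np

fcst = np.random.gamma(0.5, 0.3, (300, 500))   # placeholder 24-h totals (inches)
obs = np.random.gamma(0.5, 0.3, (300, 500))

for thresh in (0.01, 0.10, 0.25, 0.50, 1.00, 1.50):
    f, o = fcst >= thresh, obs >= thresh
    yy = np.sum(f & o)            # hits
    yn = np.sum(f & ~o)           # false alarms
    ny = np.sum(~f & o)           # misses
    csi = yy / (yy + yn + ny) if (yy + yn + ny) else float("nan")
    bias = (yy + yn) / (yy + ny) if (yy + ny) else float("nan")
    print(f"{thresh:5.2f} in.  CSI={csi:5.3f}  bias={bias:5.2f}")
```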

Precipitation Graphics Webpages

Single-case plots showing forecast skill for precipitation: RR vs. RUC 24-h precipitation verification, 2 × 12-h forecasts interpolated to a 20-km grid, against the observed CPC 24-h precipitation; CSI and bias are tabulated by threshold for each model.

Single-case plots showing forecast skill for precipitation (continued): RR vs. RUC 24-h precipitation verification, 2 × 12-h forecasts interpolated to a 20-km grid, against the observed CPC 24-h precipitation; misses, false alarms, and hits are shaded for a selected threshold, with CSI and bias tabulated by threshold.

Sample display of probability verification statistics: ROC curves and CSI vs. bias for 2-h, 4-h, and 6-h forecasts. This is a work in progress: displays exist for CCFP and CoSPA probabilities, with plans to add HCPF and RCPF and to expand to probabilities of other hazards (fog, high echo-tops, etc.).
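For reference, a ROC curve like the one displayed here can be traced by sweeping a probability threshold and computing the probability of detection (POD) and probability of false detection (POFD) at each step. A minimal sketch with placeholder data, not the operational code:

```python
import numpy as np

prob = np.random.uniform(0, 1, 5000)          # forecast convective probabilities
event = np.random.uniform(0, 1, 5000) < 0.2   # observed yes/no (placeholder)

pod, pofd = [], []
for p in np.arange(0.05, 1.0, 0.05):
    warn = prob >= p
    hits = np.sum(warn & event)
    misses = np.sum(~warn & event)
    fas = np.sum(warn & ~event)
    nulls = np.sum(~warn & ~event)
    pod.append(hits / (hits + misses))
    pofd.append(fas / (fas + nulls))
# Plotting pod vs. pofd (with the 1:1 no-skill line) gives the ROC curve.
```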

Sample reliability diagrams for 2-h, 4-h, and 6-h forecasts. All plots can be zoomed.
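A reliability diagram compares the mean forecast probability in each probability bin with the observed event frequency in that bin. A minimal sketch with placeholder data and illustrative bin edges:

```python
import numpy as np

prob = np.random.uniform(0, 1, 5000)          # forecast probabilities
event = np.random.uniform(0, 1, 5000) < 0.2   # observed occurrence (placeholder)

bins = np.arange(0.0, 1.01, 0.1)
which = np.digitize(prob, bins) - 1           # bin index for each forecast
for b in range(len(bins) - 1):
    in_bin = which == b
    if in_bin.any():
        fcst_mean = prob[in_bin].mean()       # x-axis: mean forecast probability
        obs_freq = event[in_bin].mean()       # y-axis: observed relative frequency
        print(f"{fcst_mean:4.2f} -> {obs_freq:4.2f}  (n={in_bin.sum()})")
```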

Conclusion
The verification system, including both the statistical and graphical webpages, greatly aids evaluation of model performance within AMB and facilitates rapid assessment of experimental configurations and improvements in real time. We are also able to verify retrospective cases of scientific interest very quickly for use in presentations, publications, and other outreach.