1
WRF Ensemble Model Performance during Atmospheric River Events in California
AGU 2010
Edward Tollerud (1,5), Tara Jensen (2,5), Huiling Yuan (1,3), John Halley Gotway (2,5), Paul Oldenburg (2,5), Isidora Jankov (1), Wally Clark (4), Ellen Sukovich (4), Gary Wick (4), and Randy Bullock (2,5)
1 ESRL/GSD, Boulder, CO; 2 NCAR/RAL, Boulder, CO; 3 CIRES, Boulder, CO; 4 ESRL/PSD, Boulder, CO; 5 Developmental Testbed Center
Acknowledgments to the USWRP for funding
2
Research Objectives: HMT and DTC
DTC – Evaluate present and future EMC operational models
HMT – Evaluate forecast value of a regional ensemble forecast system
Principal focus on QPF
Investigate impact of verification data choices
DTC – Build a demonstration real-time web display and examine standard and state-of-the-art verification methods
3
Precipitation during 1200 UTC 20 January – 21 January 2010
Four days of heavy rainfall: 1/17-1/21. (Note the poor observing network in Nevada.)
4
The Forecast Prospect: Dealing with uncertainty
Cumulative rainfall at ATA, 1200 UTC 17 Jan – 21 Jan: WRF ensemble member forecasts
5
The Forecast Prospect: Selecting NWP Guidance
WRF ensemble mean and GFS forecasts; observed total at ATA – 240 mm
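For reference on how an ensemble-mean QPF trace like the one above is formed, here is a minimal Python sketch. The member values are placeholders rather than actual HMT ensemble output; only the 240 mm observed total at ATA comes from the slide.

```python
import numpy as np

# Hypothetical cumulative QPF traces (mm) at ATA for four WRF members
# (placeholder numbers, not real HMT forecasts).
members = np.array([
    [10.0, 45.0, 120.0, 190.0, 230.0],
    [12.0, 50.0, 140.0, 210.0, 260.0],
    [ 8.0, 40.0, 110.0, 170.0, 205.0],
    [11.0, 48.0, 130.0, 200.0, 245.0],
])

ens_mean = members.mean(axis=0)   # ensemble-mean cumulative QPF at each time
obs_total = 240.0                 # observed storm total at ATA (mm), from the slide

print(f"ensemble-mean storm total: {ens_mean[-1]:.0f} mm (observed: {obs_total:.0f} mm)")
```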
6
FY 2010 HMT-West: Demonstration Website
Assessment of ensemble member QPF possible in near real time; verification dataset options available
Basin-specific and RFC-specific verification domains installed
30-day boxplots provide a statistical summary of model QPF performance (sketched below)
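A minimal sketch of the kind of 30-day boxplot summary the website provides, using hypothetical daily ETS values in place of real verification output (the model names and score values below are placeholders):

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical daily ETS values over a 30-day window for two models
# (synthetic placeholders, not HMT verification output).
rng = np.random.default_rng(0)
daily_ets = {
    "WRF ens mean": rng.normal(0.35, 0.08, 30).clip(0.0, 1.0),
    "GFS": rng.normal(0.30, 0.10, 30).clip(0.0, 1.0),
}

fig, ax = plt.subplots()
ax.boxplot(list(daily_ets.values()))
ax.set_xticklabels(daily_ets.keys())
ax.set_ylabel("ETS (24-h QPF)")
ax.set_title("30-day summary of daily ETS by model")
plt.show()
```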
7
Large rainfall means larger errors
Panels: ETS for January; RMSE for January (score definitions below)
Large day-to-day variability related to rainfall amount; extensive rain means better scores
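For reference, these are the standard definitions of the two scores, built from threshold-exceedance counts (hits a, false alarms b, misses c, correct negatives d, total n = a + b + c + d) and from N matched forecast/observation pairs (f_i, o_i); the thresholds and accumulation periods used in the HMT evaluation are as configured in MET and not restated here:

\[
\mathrm{ETS} = \mathrm{GSS} = \frac{a - a_r}{a + b + c - a_r},
\qquad
a_r = \frac{(a+b)(a+c)}{n},
\qquad
\mathrm{RMSE} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(f_i - o_i\right)^2}.
\]

Note that the Gilbert Skill Score on the next slide is the same quantity as the ETS.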
8
Full-season statistics – Gilbert Skill Score – All events
9
Score Idiosyncrasies and Data Impacts
30-day summary scoring for January, ETS
GFS degradation at higher thresholds
'Quirky' scores for verification using the Stage IV analysis: Nevada impact
10
GSS for QPF threshold > 0 inches, aggregated for January for lead times (h)
FAR for QPF threshold > 0.1 inches, aggregated for January for lead times (h)
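A minimal Python sketch of how such threshold-based scores are assembled from gridded QPF exceedances. The fields here are synthetic and the thresholds simply mirror the panels above; the actual HMT computation is done in MET, so this only illustrates the bookkeeping:

```python
import numpy as np

def contingency_counts(fcst, obs, thresh):
    """2x2 contingency counts from gridded QPF exceedances of `thresh`."""
    f, o = fcst > thresh, obs > thresh
    hits = np.sum(f & o)
    false_alarms = np.sum(f & ~o)
    misses = np.sum(~f & o)
    correct_negs = np.sum(~f & ~o)
    return hits, false_alarms, misses, correct_negs

def gss(hits, false_alarms, misses, correct_negs):
    """Gilbert Skill Score (equivalent to the equitable threat score)."""
    n = hits + false_alarms + misses + correct_negs
    hits_random = (hits + misses) * (hits + false_alarms) / n
    return (hits - hits_random) / (hits + false_alarms + misses - hits_random)

def far(hits, false_alarms):
    """False alarm ratio: fraction of forecast events that did not verify."""
    return false_alarms / (hits + false_alarms)

# Synthetic 24-h QPF grids in inches (placeholders, not HMT or Stage IV data).
rng = np.random.default_rng(1)
fcst = np.maximum(rng.normal(0.15, 0.30, size=(100, 100)), 0.0)
obs = np.maximum(rng.normal(0.15, 0.30, size=(100, 100)), 0.0)

for thresh in (0.0, 0.1):   # thresholds matching the two panels above
    h, fa, m, cn = contingency_counts(fcst, obs, thresh)
    print(f"thresh > {thresh} in: GSS = {gss(h, fa, m, cn):.3f}, FAR = {far(h, fa):.3f}")
```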
11
MODE/MET objects: Spatial Verification for ensemble QPF fields
In the pipeline: time series plots of quantitative attribute scores
See the Clark poster in the afternoon session for more MODE applications
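To make the object-based idea concrete, here is a conceptual Python sketch of convolution-threshold object identification in the spirit of MODE. This is not MET's implementation; the filter radius, threshold, and synthetic precipitation field are arbitrary illustrative choices:

```python
import numpy as np
from scipy import ndimage

def identify_objects(precip, conv_radius=5, conv_thresh=0.5):
    """Smooth a precip field with a disk filter, threshold it, and label
    contiguous regions (a simplified, MODE-like object identification)."""
    y, x = np.ogrid[-conv_radius:conv_radius + 1, -conv_radius:conv_radius + 1]
    disk = (x**2 + y**2) <= conv_radius**2
    smoothed = ndimage.convolve(precip, disk / disk.sum(), mode="constant")
    labels, n_objects = ndimage.label(smoothed >= conv_thresh)
    return labels, n_objects

# Synthetic QPF grid (inches): one rain area plus light noise.
rng = np.random.default_rng(2)
yy, xx = np.mgrid[0:200, 0:200]
rain_area = 2.0 * np.exp(-((yy - 80) ** 2 + (xx - 120) ** 2) / (2 * 15.0 ** 2))
noise = np.maximum(rng.normal(0.0, 0.2, size=(200, 200)), 0.0)
fcst = rain_area + noise

labels, n = identify_objects(fcst)
print(f"{n} rain object(s) identified")
for i in range(1, n + 1):           # simple per-object attributes
    area = int(np.sum(labels == i))
    intensity = fcst[labels == i].mean()
    print(f"object {i}: area = {area} grid squares, mean intensity = {intensity:.2f} in")
```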
12
Profiler Winds at Bodega Bay
Diurnal cycling of winds from southerly to westerly (upslope); how well do models perform? (Panels: January 18 and January 19)
Full explanation and diagnosis could use wind field verification as well as QPF verification (sketched below)
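As an example of the wind-field verification suggested here, a minimal sketch of a vector-wind RMSE computed from u/v components; the profiler observations and model values below are hypothetical placeholders, not Bodega Bay data:

```python
import numpy as np

def vector_wind_rmse(u_fcst, v_fcst, u_obs, v_obs):
    """RMSE of the vector wind error, from u and v components (m/s)."""
    err_sq = (u_fcst - u_obs) ** 2 + (v_fcst - v_obs) ** 2
    return np.sqrt(err_sq.mean())

# Hypothetical hourly winds at one profiler level (m/s): the observed wind
# turns from southerly (large v) to westerly (large u) through the period.
u_obs = np.array([0.5, 1.0, 2.5, 4.0, 6.0, 7.0])
v_obs = np.array([8.0, 7.0, 5.0, 3.0, 1.5, 0.5])
rng = np.random.default_rng(3)
u_fcst = u_obs + rng.normal(0.0, 1.5, size=u_obs.size)
v_fcst = v_obs + rng.normal(0.0, 1.5, size=v_obs.size)

print(f"vector wind RMSE: {vector_wind_rmse(u_fcst, v_fcst, u_obs, v_obs):.2f} m/s")
```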
13
Summary and Future DTC/HMT Plans
General assessment from one field season: the WRF ensemble mean, at higher resolution than GFS, performs better by most scoring metrics (not statistically scrutinized yet)
This season's real-time web display will include probabilistic scoring and other operational models (NMMB, RRR, SREF, …)
HMT hypotheses for DTC testing: best model for guidance, value of 'hotstart' analyses, value of ensemble forecast methods
New direction for 2011: assess operational and research model microphysical forecasts with HMT-West research observations
Final thought: UNCERTAINTY, UNCERTAINTY, UNCERTAINTY (forecasts) (observations) (verification)