
1 NFUSE Conference Call 4/11/07 How Good Does a Forecast Really Need To Be? David Myrick Western Region Headquarters Scientific Services Division

2 NFUSE Conference Call 4/11/07 Motivating Question Can we use uncertainty information as a threshold for gauging when a forecast is good enough? –This is an informal talk! –Lots of examples –Approach question from the viewpoint of observational uncertainty

3 NFUSE Conference Call 4/11/07 Points → Area (Grid Boxes) No longer forecasting for 8-10 CCF points Each CWA – thousands of 2.5 or 5 km grid boxes Twofold need for grid-based verification: –Forecaster feedback across the entire grid –Identifying ways to evolve our services to focus more attention on high-impact events

4 NFUSE Conference Call 4/11/07 WR Service Improvement Project Initially began as a grid-based verification project using BOIVerify Morphed into learning how we can evolve our services to focus more effort on high-impact events The project got us thinking: “What is a good forecast for a small area?”

5 NFUSE Conference Call 4/11/07 Observations Grid-based verification requires an objective analysis based on ASOS & non-ASOS observations Lots of known problems with surface & analysis data Ob = Value ± Uncertainty

6 NFUSE Conference Call 4/11/07 Observational Errors Instrument errors Gross errors Siting errors Errors of “representativeness” Photo: J. Horel

7 NFUSE Conference Call 4/11/07 Errors of “representativeness” Observation is accurate –Reflects synoptic & microscale conditions But… the microscale phenomenon it captures is not resolvable by the analysis or model Example: cold pool in a narrow valley –Observation on valley floor may be correct –Not captured by analysis system

8 NFUSE Conference Call 4/11/07 Representativeness Error Example [Figure: temperature (°C) observations in the Tooele Valley and Rush Valley illustrating a representativeness error; base map from www.topozone.com]

9 NFUSE Conference Call 4/11/07 Variability in Observations Examples from the WR/SSD RTMA Evaluation Comparing analysis solutions along a terrain profile near SLC, UT ~70 mesonet obs in a 60 x 60 km area [Figure: ~60 km cross section from the Great Salt Lake to the Wasatch Mountains]

10 NFUSE Conference Call 4/11/07 Large Spread in Observations >11°C spread between 1400 and 1700 m How do we analyze this?

11 NFUSE Conference Call 4/11/07 Objective Analysis 101 Analysis Value = Background Value + Observation Corrections Analysis Errors come from: –Errors in the background field –Observational errors A “good” analysis takes into account the uncertainty in the obs & background –A “best fit” to the obs –Won’t always match the obs
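As a point of reference, the schematic “Analysis Value = Background Value + Observation Corrections” can be written in the standard objective-analysis form below. The notation is added here for illustration and is not part of the original slides.

\[
  \mathbf{x}_a = \mathbf{x}_b + \mathbf{B}\mathbf{H}^{\mathsf{T}}\left(\mathbf{H}\mathbf{B}\mathbf{H}^{\mathsf{T}} + \mathbf{R}\right)^{-1}\left(\mathbf{y} - H(\mathbf{x}_b)\right)
\]

Here x_a is the analysis, x_b the background, y the observations, H the observation operator, B the background-error covariance, and R the observation-error covariance. Because the weights depend on the relative sizes of B and R, the analysis is a best fit to the obs and the background and will not, in general, match the obs exactly.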

12 NFUSE Conference Call 4/11/07 Forecast Verification Forecasters are comfortable with: –Verification against ASOS obs –Assessing forecast skill vs. MOS But is judging a forecast against a few points without any regard for observational and representativeness errors really the scientific way to verify forecasts? Is there a better way? Can we define a “good enough” forecast?

13 NFUSE Conference Call 4/11/07 Proposal Evaluate grid-based forecasts vs. RTMA Use RTMA to scientifically assign uncertainty Reward forecasts that are within the bounds of analysis uncertainty

14 NFUSE Conference Call 4/11/07 RTMA Uncertainty Estimates RTMA/AOR provides a golden opportunity to revamp the verification program Analysis uncertainty varies by location Techniques under development at EMC to assign analysis uncertainty to the RTMA –Backing out an estimate of the analysis error by taking the inverse of the Hessian of the analysis cost function –Cross validation (expensive)
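For context, a minimal sketch of the “inverse of the Hessian” idea in standard variational notation; this is added for illustration, is not taken from the slides, and assumes a linearized observation operator H:

\[
  J(\mathbf{x}) = \tfrac{1}{2}(\mathbf{x}-\mathbf{x}_b)^{\mathsf{T}}\mathbf{B}^{-1}(\mathbf{x}-\mathbf{x}_b) + \tfrac{1}{2}(\mathbf{y}-\mathbf{H}\mathbf{x})^{\mathsf{T}}\mathbf{R}^{-1}(\mathbf{y}-\mathbf{H}\mathbf{x})
\]
\[
  \mathbf{A} \approx \left(\nabla^2 J\right)^{-1} = \left(\mathbf{B}^{-1} + \mathbf{H}^{\mathsf{T}}\mathbf{R}^{-1}\mathbf{H}\right)^{-1}
\]

The diagonal of the analysis-error covariance A then provides a per-gridpoint estimate of the analysis uncertainty.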

15 NFUSE Conference Call 4/11/07 Example Verify forecasts based on the amount of uncertainty that exists in an analysis Example: –Forecast = 64°F –Analysis Value = 66°F –Analysis Uncertainty = +/- 3°F –No penalty for forecasts between 63 and 69°F (the forecast fell in the “good enough” range) –This is a “distributions-oriented” approach…
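A minimal Python sketch of the check described in this example (the function and variable names are mine, not from the presentation):

    def within_uncertainty(forecast, analysis, uncertainty):
        """Return True if the forecast falls inside analysis +/- uncertainty."""
        return abs(forecast - analysis) <= uncertainty

    # Numbers from the slide: forecast 64 F, analysis 66 F, uncertainty +/- 3 F
    print(within_uncertainty(64.0, 66.0, 3.0))  # True -> "good enough", no penalty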

16 NFUSE Conference Call 4/11/07 “Distributions-oriented” forecast verification Murphy and Winkler (1987) – original paper Brooks and Doswell (1996) – reduced the dimensionality problem by using wider bins

17 NFUSE Conference Call 4/11/07 Problem with “distributions” approach The Brooks and Doswell (1996) example used 5°F bins Set up bins: -5 to 0°F, 0 to 5°F, 5 to 10°F, etc. Forecast = 4.5°F Verification = 0.5°F = good forecast Verification = 5.5°F = bad forecast
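A short Python sketch of why fixed bins behave this way (a hypothetical illustration, not code from the presentation):

    import math

    def bin_index(value, width=5.0):
        """Assign a value (deg F) to a fixed 5-degree bin such as 0-5 or 5-10."""
        return math.floor(value / width)

    forecast = 4.5
    print(bin_index(forecast) == bin_index(0.5))  # True:  0.5 shares the 0-5 bin -> scored "good"
    print(bin_index(forecast) == bin_index(5.5))  # False: 5.5 falls in the 5-10 bin -> scored "bad"

The verifying value of 0.5°F is 4°F from the forecast yet counts as a hit, while 5.5°F is only 1°F away yet counts as a miss; this bin-boundary sensitivity is what the floating bins of Myrick and Horel (2006) avoid.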

18 NFUSE Conference Call 4/11/07 Myrick and Horel (2006) Verified NDFD grid-based forecasts using floating bins whose width was based on the observational uncertainty (~2.5°C)

19 NFUSE Conference Call 4/11/07 Forecast Example [Figure: temperature (°F) values of the Forecast, the RTMA, and the RTMA Uncertainty for grid boxes spanning a populated valley and the adjacent mountains] Green = Forecasts are “good enough” Red = abs(RTMA – Forecast) > Uncertainty
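A small NumPy sketch of the same green/red classification applied across a strip of grid boxes (the arrays below are hypothetical illustrations, not the values from the slide’s figure):

    import numpy as np

    # Hypothetical grid-box values (deg F); not the slide's actual data.
    forecast    = np.array([56.0, 58.0, 60.0, 58.0, 60.0, 62.0])
    rtma        = np.array([58.0, 60.0, 59.0, 60.0, 66.0, 61.0])
    uncertainty = np.array([ 2.0,  3.0,  2.0,  3.0,  4.0,  5.0])

    # True ("green") where the forecast is within the RTMA uncertainty,
    # False ("red") where abs(RTMA - Forecast) > Uncertainty.
    good_enough = np.abs(rtma - forecast) <= uncertainty
    print(good_enough)  # [ True  True  True  True False  True]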

20 NFUSE Conference Call 4/11/07 Summary Challenge: How do we define a “good enough” forecast? Proposal: –Verify against RTMA ± Uncertainty Uncertainty based on observational, representativeness, & analysis errors –Give the forecaster credit for forecast areas that are within the uncertainty Goal: Provide better feedback as to which forecast areas are “good enough” and which areas need more attention

21 NFUSE Conference Call 4/11/07 Special Thanks! Tim Barker (BOI WFO) Brad Colman (SEW WFO) Kirby Cook (SEW WFO) Andy Edman (WR/SSD) John Horel (Univ. Utah) Chad Kahler (WR/SSD) Mark Mollner (WR/SSD) Aaron Sutula (WR/SSD) Ken Pomeroy (WR/SSD) Manuel Pondeca (NCEP/EMC) Kevin Werner (WR/SSD)

22 NFUSE Conference Call 4/11/07 References
Brooks, H. E., and C. A. Doswell, 1996: A comparison of measures-oriented and distributions-oriented approaches to forecast verification. Wea. Forecasting, 11, 288–303.
Murphy, A. H., and R. L. Winkler, 1987: A general framework for forecast verification. Mon. Wea. Rev., 115, 1330–1338.
Myrick, D. T., and J. D. Horel, 2006: Verification of surface temperature forecasts from the National Digital Forecast Database over the Western United States. Wea. Forecasting, 21, 869–892.
Representativeness Errors – Western Region Training Module: http://ww2.wrh.noaa.gov/ssd/digital_services/training/Rep_Error_basics_final
Western Region Service Evolution Project Internal Page: http://ww2.wrh.noaa.gov/ssd/digital_services/

