Presentation transcript: "Toward Short-Range Ensemble Prediction of Mesoscale Forecast Skill" (Session 2: Mesoscale Predictability I, 10th Mesoscale Conference, Portland, OR, 23 June 2003)

1 Toward Short-Range Ensemble Prediction of Mesoscale Forecast Skill
Eric P. Grimit and Clifford F. Mass, University of Washington
Session 2: Mesoscale Predictability I, 10th Mesoscale Conference, Portland, OR; 23 June 2003, 4:30 PM
Supported by: NWS Western Region / UCAR-COMET Student Career Experience Program (SCEP); DoD Multi-Disciplinary University Research Initiative (MURI)

2 Forecasting Forecast Error
Like any other scientific prediction or measurement, weather forecasts should be accompanied by error bounds, or a statement of uncertainty.
Forecast error changes from day to day and depends on atmospheric predictability, a function of the sensitivity of the flow to:
- the magnitude and orientation of initial-state errors
- numerical model errors and deficiencies
Example: T_2m = 3 °C ± 2 °C gives P(T_2m < 0 °C) = 6.7%.
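As a quick illustration (not from the talk), the 6.7% figure follows if the ±2 °C bound is read as one standard deviation of a Gaussian forecast distribution; a minimal Python sketch:

```python
# Minimal sketch: P(T_2m < 0 degC) when the forecast is 3 degC with a
# +/- 2 degC (one standard deviation) uncertainty, assuming a Gaussian PDF.
from math import erf, sqrt

def prob_below(threshold, mean, std):
    """P(X < threshold) for X ~ N(mean, std**2), via the normal CDF."""
    z = (threshold - mean) / std
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

print(f"P(T_2m < 0 degC) = {prob_below(0.0, mean=3.0, std=2.0):.1%}")  # ~6.7%
```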

3 Value of Forecast Error Prediction
Operational forecasters need this crucial information to know how much to trust model forecast guidance; current uncertainty knowledge is partial and largely subjective.
End users could greatly benefit from knowing the expected forecast reliability:
- Sophisticated users can make optimal decisions in the face of uncertainty (economic cost-loss or utility models). Take protective action if P(T_2m < 0 °C) > cost/loss.
- Common users of weather forecasts can use a simple confidence index.
[graphic: two-day forecast with confidence index, e.g. FRI: Showers, Low 46 °F, High 54 °F, confidence 8; SAT: Showers, Low 47 °F, High 57 °F, confidence 5]
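A hedged sketch of the standard cost-loss rule referenced above; the dollar values are made up for illustration:

```python
# Sketch of the cost-loss decision rule: protect whenever the event
# probability exceeds the cost/loss ratio.
def take_protective_action(p_event, cost, loss):
    """Protecting costs `cost`; an unprotected event costs `loss`.
    Expected loss is minimized by protecting when p_event > cost / loss."""
    return p_event > cost / loss

# Freezing risk of 6.7%, protection costs 10, potential loss 300 -> protect.
print(take_protective_action(0.067, cost=10.0, loss=300.0))  # True
```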

4 Probabilistic Weather Forecasts
One approach to estimating forecast uncertainty is to use a collection of different forecasts: an ensemble. Ensemble weather forecasting diagnoses the sensitivity of the predicted flow to initial-state and model errors, provided those errors are well sampled.

5 Probabilistic Weather Forecasts
Agreement or disagreement among ensemble member forecasts provides information about forecast certainty or uncertainty: agreement suggests a better (more reliable) forecast, disagreement a worse one.
Idea: use ensemble forecast variance as a predictor of forecast error.

6 Observed Error Predictions: A Disappointment
- Tropical cyclone tracks (cf. Goerss 2000), SAMEX '98 SREFs (cf. Hou et al. 2001), and NCEP SREF precipitation (cf. Hamill and Colucci 1998): highly scattered spread-error relationships, and thus low correlations.
- UW MM5 SREF 10-m wind direction (cf. Grimit and Mass 2002): a unique 5-member short-range ensemble developed in 2000 showed promise, with spread-error correlations near 0.6, and higher for cases with extreme spread.

7 Why Forecast Error Prediction Is Limited
- Definition of forecast error: the error metric is user-dependent, as are the specifics of the forecast verification approach.
- Day-to-day forecast spread variability: an accurately forecast probability distribution is required, but in practice the PDF is not well forecast.
- Unaccounted-for sources of uncertainty: sub-grid-scale processes, under-sampling (distribution tails not well captured), and systematic forecast biases.
We must find ways to extract flow-dependent uncertainty information from current (suboptimal) ensembles.
ρ²(σ, |E|) = [1 - exp(-β²)] / [π/2 - exp(-β²)], where β = std(ln σ) (spread-skill correlation; Houtekamer 1993, detailed in the extra slides).

8 Project Goal
Develop a short-range forecast error prediction system using an imperfect mesoscale ensemble (short-range = 0-48 h; imperfect = suboptimal, i.e., unable to correctly forecast the true PDF).
- Estimate the upper bound of forecast error predictability using a simple statistical model.
- Use the existing UW MM5 SREF system, a unique resource: initialized from an international collection of large-scale analyses, with 12-km grid spacing.
- Include spatially and temporally dependent forecast bias correction.
- Use temporal ensemble spread as a secondary predictor of forecast error, if viable.
- Test a variety of metrics of spread and error.

9 Simple Statistical Model Spread-Error Correlations
[figure: STD-AEM and STD-RMS correlation curves]
Spread: STD = standard deviation. Error: RMS = root-mean-square error; AEM = absolute error of the ensemble mean.
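For concreteness, a small sketch (with made-up member values) of how the legend's spread and error metrics can be computed for a single case; whether RMS refers to member errors or to the ensemble-mean error is not stated here, so the member-error reading below is an assumption:

```python
# Spread and error metrics from the legend, computed for one (made-up) case.
import numpy as np

members = np.array([2.1, 3.4, 2.8, 3.9, 2.5, 3.1, 2.9, 3.6])  # ensemble members
obs = 4.2                                                      # verifying value

std = members.std(ddof=1)            # STD: ensemble standard deviation (spread)
errors = members - obs
rms = np.sqrt(np.mean(errors**2))    # RMS: root-mean-square member error (assumed reading)
aem = abs(members.mean() - obs)      # AEM: absolute error of the ensemble mean

print(f"STD={std:.2f}  RMS={rms:.2f}  AEM={aem:.2f}")
```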

10 UW's Ensemble of Ensembles
Homegrown:
- ACME: 17 members, SMMA; initial conditions = 8 independent analyses + 1 centroid + 8 mirrors; forecast model = "standard" MM5; 00Z cycle; 36-km and 12-km domains.
- ACME core: 8 members, SMMA; initial conditions = 8 independent analyses; "standard" MM5; 00Z cycle; 36-km and 12-km domains.
- ACME core+: 8 members, PMMA; same initial conditions; 8 MM5 physics variations; 00Z cycle; 36-km and 12-km domains.
Imported:
- PME: 8 members, MMMA; same initial conditions; 8 "native" large-scale models; 00Z and 12Z cycles; 36-km domain.
Abbreviations: ACME = Analysis-Centroid Mirroring Ensemble; PME = Poor Man's Ensemble; MM5 = PSU/NCAR Mesoscale Modeling System Version 5; SMMA = single-model multi-analysis; PMMA = perturbed-model multi-analysis; MMMA = multi-model multi-analysis.

11 Verification Data: Surface Observations
- Network of surface observations from many different agencies; observations are preferentially located at lower elevations and near urban centers.
- Focus is on 10-m wind direction: more extensive coverage and a greater number of reporting sites than for SLP; greatly influenced by regional orography, the mesoscale pressure pattern, and synoptic-scale changes; systematic forecast biases in the other near-surface variables can dominate the stochastic errors.
- Temperature and wind speed will also be used.

12 ACME core Spread-Error Correlations for WDIR
Setup: ensemble size = 8 members (AVN, CMC, ETA, GASP, JMA, NOGAPS, TCWB, UKMO); verification period = Oct 2002 to Mar 2003 (130 cases); verification strategy = interpolate model to observations; variable = 10-m wind direction.
- The latest spread-error correlations are lower than in the early MM5 ensemble work.
- Observed STD-RMS correlations are higher than STD-AEM correlations.
- ACME core forecast error predictability is comparable to the expected predictability of a perfect ensemble with the same spread variability.
- There is a clear diurnal variation; is it affected by IC and MM5 biases?
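Because the verification variable is 10-m wind direction, forecast-minus-observation differences have to be wrapped onto a circle before spread or error statistics are computed. The convention used in the study is not given here, so the following is only one common choice:

```python
# Wrap wind-direction differences onto [-180, 180) degrees before computing
# spread/error statistics.
def wdir_error(forecast_deg, observed_deg):
    """Smallest signed angular difference, in degrees."""
    return (forecast_deg - observed_deg + 180.0) % 360.0 - 180.0

print(wdir_error(350.0, 10.0))   # -20.0 rather than 340.0
```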

13 ACME core+ Spread-Error Correlations for WDIR
Setup: ensemble size = 8 members (PLUS01, PLUS02, PLUS03, PLUS04, PLUS05, PLUS06, PLUS07, PLUS08); verification period = Oct 2002 to Mar 2003 (130 cases); verification strategy = interpolate model to observations; variable = 10-m wind direction.
- Mixed physics adds some useful spread, increasing spread-error correlations slightly, even though temporal spread variability decreases.
- STD-RMS correlations are higher than, and improve more than, STD-AEM correlations.
- Exceedance of the expected and idealized correlations may be due to the simple model's assumptions and to domain averaging.

14 Spread-Error Correlations for Temperature
[figures: ACME core and ACME core+ results]

15 Summary
- Forecast error predictability depends largely on the definition of error itself (user-dependent needs); the spread-error correlation is sensitive to the spread and error metrics.
- For mesoscale wind and temperature forecast errors, the UW MM5 SREF spread appears to be a viable predictor, especially using the multi-analysis, mixed-physics ensemble (ACME core+).
- Incorporation of a simple method of forecast bias correction is expected to further improve spread-error correlations.
- Temporal ensemble spread has not proven to be a consistently skillful secondary predictor of forecast error.

16 "No forecast is complete without a forecast of forecast skill!" -- H. Tennekes, 1987
QUESTIONS?
Website: http://www.atmos.washington.edu/~emm5rt/ensemble.cgi
Email: epgrimit@atmos.washington.edu

17 Extra Slides

18 Spread-Skill Correlation Theory: The Original Simple Stochastic Model (Houtekamer 1993)
Notation: σ = ensemble standard deviation (spread); β = temporal spread variability; E = ensemble forecast error (skill).
ρ²(σ, |E|) = [1 - exp(-β²)] / [π/2 - exp(-β²)], where β = std(ln σ).
- The spread-skill correlation depends on the time variation of spread.
- For constant spread day to day (β = 0), ρ = 0.
- For large spread variability (β → ∞), ρ → sqrt(2/π) < 0.8.
- Assumes E is the ensemble-mean error and an infinite ensemble.
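A quick numerical check of the analytic correlation above (a sketch; the closed form is used as written, and it reproduces the two limits quoted on the slide, ρ = 0 at β = 0 and ρ → sqrt(2/π) < 0.8 for large β):

```python
# Evaluate the analytic spread-error correlation as a function of beta.
import math

def spread_error_corr(beta):
    """Correlation between spread (sigma) and |E| when ln(sigma) has std beta."""
    rho2 = (1.0 - math.exp(-beta**2)) / (math.pi / 2.0 - math.exp(-beta**2))
    return math.sqrt(rho2)

for beta in (0.0, 0.25, 0.5, 1.0, 2.0, 5.0):
    print(f"beta={beta:4.2f}  rho={spread_error_corr(beta):.3f}")
# beta=0 gives rho=0; large beta approaches sqrt(2/pi) ~= 0.80
```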

19 A Modified Simple Stochastic Model
Stochastically simulated ensemble forecasts at a single grid point with 50,000 realizations (cases); perfect ensemble forecasts and Gaussian statistics are assumed. Varied: (1) temporal spread variability (β), (2) finite ensemble size (M), (3) spread and skill metrics. The recipe (a sketch follows below):
1. Draw today's "forecast uncertainty" from a log-normal distribution (Houtekamer 1993 model): ln(σ) ~ N(ln(σ_f), β²).
2. Create synthetic ensemble forecasts by drawing M values from the "true" distribution (perfect ensemble): F_i ~ N(Z, σ²), i = 1, 2, ..., M.
3. Draw the verifying observation from the same "true" distribution: V ~ N(Z, σ²).
4. Calculate ensemble spread and skill using varying metrics.
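A minimal sketch of the four-step recipe (perfect ensemble, Gaussian statistics); σ_f, β, M, and the number of realizations below are illustrative choices, not the study's settings:

```python
# Monte Carlo sketch of the modified simple stochastic model (steps 1-4).
import numpy as np

rng = np.random.default_rng(0)
n_cases, M = 50_000, 8          # realizations and ensemble size
sigma_f, beta = 1.0, 0.5        # climatological spread and its ln-variability
Z = 0.0                         # "true" state (any constant works here)

sigma = np.exp(rng.normal(np.log(sigma_f), beta, n_cases))   # step 1
F = rng.normal(Z, sigma[:, None], (n_cases, M))              # step 2: members
V = rng.normal(Z, sigma)                                     # step 3: verification

spread = F.std(axis=1, ddof=1)                               # step 4: STD spread
err = F - V[:, None]
aem = np.abs(F.mean(axis=1) - V)          # absolute error of the ensemble mean
mae = np.abs(err).mean(axis=1)            # mean absolute member error
rms = np.sqrt((err**2).mean(axis=1))      # RMS member error

for name, skill in (("AEM", aem), ("MAE", mae), ("RMS", rms)):
    r = np.corrcoef(spread, skill)[0, 1]
    print(f"STD-{name} correlation: {r:.3f}")
```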

20 What Measure of Skill?
- STD is a better predictor of the average ensemble member error than of the ensemble-mean error: AEM = |mean(E_i)|, while MAE = mean(|E_i|).
- Different measures of ensemble variation may be required to predict other measures of skill.
Spread: STD = standard deviation. Error: RMS = root-mean-square error; MAE = mean absolute error; AEM = absolute error of the ensemble mean; AEC = absolute error of a control.
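Spelled out as a tiny sketch (the member errors below are hypothetical):

```python
# The two skill measures differ only in where the absolute value is taken.
import numpy as np

e = np.array([1.8, -0.6, 2.4, -1.1, 0.3, 1.5, -2.0, 0.9])  # member errors F_i - V
aem = abs(e.mean())        # AEM: absolute error of the ensemble mean
mae = np.abs(e).mean()     # MAE: mean absolute error of the members
print(aem, mae)            # AEM <= MAE always (triangle inequality)
```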

21 Multi-Analysis, Mixed Physics: ACME core+
See Eckel (2003) for further details.

22 Temporal (Lagged) Ensemble: Using Lagged-Centroid Forecasts
Advantages:
- Run-to-run consistency of the best deterministic forecast estimate of "truth" (without any weighting).
- Less sensitive to a single member's temporal variability.
- Yields mesoscale spread (equal weighting of lagged forecasts).
[schematic: lagged centroid and member trajectories from the analysis region to the 48-h forecast region]

23 Multiple (Combined) Spread-Skill Correlations

24 Multiple (Combined) Spread-Skill Correlations
Early results suggested that temporal spread would be a useful secondary predictor; the latest results suggest otherwise. The combined correlations are at or below the minimum useful correlation.

25 Verification Data: Mesoscale Gridded Analysis
- Concern about the impact of observational errors on the results is reduced if the observation-based and grid-based spread-skill relationships are qualitatively similar.
- Use the Rapid Update Cycle 20-km (RUC20) analysis as "gridded truth" for MM5 ensemble verification and calibration; smooth the 12-km MM5 ensemble forecasts to the RUC20 grid.
- An improved analysis could be used in the future.

26-28 [figure-only slides: no text content]

29 Simple Bias Correction
Overall goal: correct the majority of the bias in each member forecast while using the shortest possible training period. The correction will be performed separately using both observations and the RUC20 analysis as verification.
Notation: N = number of training forecast cases; f_i,j,t = forecast at location (i, j) and lead time t; o_i,j = verification; f*_i,j,t = bias-corrected forecast at location (i, j) and lead time t.
1. Calculate the bias at every location and lead time from the previous forecasts and verifications in the training period.
2. Post-process the current forecast using the calculated bias.
[schematic: a sliding November-March calendar split into a training period followed by a bias-corrected forecast period]
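A sketch of one plausible implementation; the formulas on the original slide were graphics, so the additive mean-bias form used below is an assumption:

```python
# Assumed additive form: bias(i, j, t) = mean over the N training cases of
# (forecast - verification); the corrected forecast is f* = f - bias.
import numpy as np

def bias_correct(current_fcst, train_fcsts, train_verif):
    """current_fcst: array over (location, lead time); train_fcsts and
    train_verif: matching arrays with a leading dimension of N past cases."""
    bias = np.mean(train_fcsts - train_verif, axis=0)   # per (i, j, t)
    return current_fcst - bias
```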

30 [metric legend] Spread: STD = standard deviation; ENT* = statistical entropy; MOD* = mode population. Error: AEM = absolute error of the ensemble mean; MAE = mean absolute error; IGN* = ignorance. (* = binned quantity)
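ENT and MOD are flagged as binned quantities; assuming the usual reading (bin the member forecasts, take the Shannon entropy of the bin fractions and the fraction in the most-populated bin), a sketch with made-up members:

```python
# Assumed definitions of the binned spread measures ENT and MOD.
import numpy as np

def binned_spread(members, bin_edges):
    counts, _ = np.histogram(members, bins=bin_edges)
    p = counts / counts.sum()
    p_nz = p[p > 0]
    ent = -(p_nz * np.log(p_nz)).sum()   # ENT: statistical entropy of bin fractions
    mod = p.max()                        # MOD: fraction in the most-populated bin
    return ent, mod

members = np.array([2.1, 3.4, 2.8, 3.9, 2.5, 3.1, 2.9, 3.6])
print(binned_spread(members, bin_edges=np.arange(0.0, 6.1, 1.0)))
```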

31 [metric legend] Spread: STD = standard deviation; ENT* = statistical entropy; MOD* = mode population. Skill: Success = 0 / 1. (* = binned quantity)

