Effective Mesoscale, Short-Range Ensemble Forecasting
Tony Eckel** and Clifford F. Mass
Presented by: Eric Grimit, University of Washington, Atmospheric Sciences Department
**AFWA, Offutt AFB, NE

This research was supported by:
- The United States Air Force
- The DoD Multidisciplinary University Research Initiative (MURI) program, administered by the Office of Naval Research under Grant N00014-01-10745
- The National Weather Service
Overview Questions
- Can an effective mesoscale, short-range ensemble forecast (SREF) system be designed? (Effective: skillful forecast probability)
- How should analysis uncertainty be accounted for?
- How much does model bias impact a SREF system, and can it be easily corrected?
- How should model uncertainty be accounted for?
Forecast Probability from an Ensemble
An ensemble forecast (EF) provides an estimate (a histogram) of truth's Probability Density Function (PDF). In a large, ideal EF system, Forecast Probability (FP) = Observed Relative Frequency (ORF).

In practice, things go awry from:
- Undersampling of the PDF (too few ensemble members)
- Poor representation of initial uncertainty
- Model deficiencies:
  -- Model bias causes a shift in the estimated mean
  -- Sharing of model errors between EF members leads to reduced variance
As a result, the EF's estimated PDF does not match truth's PDF, and FP ≠ ORF.

[Figure: histograms of surface wind speed (kt, 0-40) at the initial state and the 24-h and 48-h forecast states, compared with the true PDF; for the event "surface winds > 18.0 kt," the ideal case gives FP = ORF = 72%, while the ensemble estimate gives FP = 93%.]
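As a minimal numerical sketch (not from the original slides), the forecast probability of a threshold event can be estimated as the fraction of ensemble members exceeding the threshold; the member values below are toy numbers:

```python
import numpy as np

def forecast_probability(members, threshold):
    """Forecast probability of exceeding a threshold, estimated as the
    fraction of ensemble members that exceed it."""
    members = np.asarray(members, dtype=float)
    return float(np.mean(members > threshold))

# Hypothetical 8-member surface wind speed forecast (kt)
winds = [12.3, 19.1, 22.4, 15.0, 20.7, 17.8, 25.2, 14.6]
fp = forecast_probability(winds, 18.0)  # 4 of 8 members exceed 18 kt
print(fp)  # 0.5
```

With only 8 members the probability is quantized in steps of 1/8, which is one face of the undersampling problem noted above.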
UW's Ensemble of Ensembles

Name | # of EF Members | Type | Initial Conditions | Forecast Model(s) | Forecast Cycle | Domain
ACME | 17 | SMMA | 8 independent analyses, 1 centroid, 8 mirrors | "Standard" MM5 | 00Z | 36 km, 12 km
UWME | 8 | SMMA | 8 independent analyses | "Standard" MM5 | 00Z | 36 km, 12 km
UWME+ | 8 | PMMA | 8 independent analyses | 8 MM5 variations | 00Z | 36 km, 12 km
PME | 8 | MMMA | 8 independent analyses | 8 operational, large-scale models | 00Z, 12Z | 36 km

(ACME, UWME, and UWME+ are homegrown; PME is imported.)
ACME: Analysis-Centroid Mirroring Ensemble
PME: Poor Man's Ensemble
MM5: 5th-Generation PSU/NCAR Mesoscale Modeling System
SMMA: Single-Model Multi-Analysis
PMMA: Perturbed-Model Multi-Analysis
MMMA: Multi-Model Multi-Analysis
Research Dataset
Total of 129 48-h forecasts (31 Oct 2002 - 28 Mar 2003), all initialized at 00Z; incomplete forecast case days are excluded (shaded in the original November-March calendar graphic).

Parameters:
- 36-km domain: mean sea level pressure (MSLP), 500-mb geopotential height (Z500)
- 12-km domain: wind speed at 10 m (WS10), temperature at 2 m (T2)

Verification:
- 36-km domain: centroid analysis (mean of 8 independent analyses, available at 12-h increments)
- 12-km domain: RUC20 analysis (NCEP 20-km mesoscale analysis, available at 3-h increments)

MM5 nested model domains: 36-km domain (151 x 127 grid points), 12-km domain (103 x 100). Note: global PME data was fit to the 36-km domain.
Gridded, Mean Bias Correction
A running 2-week training period precedes each bias-corrected forecast period (repeated across the November-March dataset). For the current forecast cycle:

1) Calculate the bias at every grid point and lead time from the previous 2 weeks' forecasts:

   bias(i,j,t) = (1/N) * sum over the N training cases of [ f(i,j,t) - o(i,j) ]

2) Postprocess the current forecast to correct for bias:

   f*(i,j,t) = f(i,j,t) - bias(i,j,t)

where N is the number of forecast cases (14), f(i,j,t) is the forecast at grid point (i,j) and lead time t, o(i,j) is the verifying observation (centroid-analysis or RUC20 verification), and f*(i,j,t) is the bias-corrected forecast.
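The two steps above can be sketched in a few lines; the array shapes and the toy +1.5-degree bias are illustrative assumptions, not data from the study:

```python
import numpy as np

def mean_bias_correction(train_fcsts, train_obs, current_fcst):
    """Gridded mean bias correction sketch.

    train_fcsts:  (N, ny, nx) forecasts from the training period (e.g. N=14)
    train_obs:    (N, ny, nx) verifying analyses for the same cases
    current_fcst: (ny, nx)    forecast to be corrected (one lead time)
    """
    # Step 1: mean bias at every grid point over the training cases
    bias = np.mean(train_fcsts - train_obs, axis=0)
    # Step 2: subtract the bias from the current forecast
    return current_fcst - bias

# Toy example: a constant +1.5-degree warm bias on a 2x2 grid
rng = np.random.default_rng(0)
obs = rng.normal(0.0, 1.0, size=(14, 2, 2))
fcsts = obs + 1.5
corrected = mean_bias_correction(fcsts, obs, fcsts[-1])
print(np.allclose(corrected, obs[-1]))  # True
```

Because the bias is estimated separately at each grid point and lead time, spatially varying biases (e.g. terrain-dependent temperature errors) are corrected as well as domain-wide shifts.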
[Figure: average RMSE (°C) and, shaded, average bias of uncorrected UWME+ T2 at 12-, 24-, 36-, and 48-h lead times.]
[Figure: average RMSE (°C) and, shaded, average bias of bias-corrected UWME+ T2 at 12-, 24-, 36-, and 48-h lead times.]
Multimodel vs. Perturbed-Model: PME vs. UWME+ (36-km Domain)
Comparison of Verification Rank Histograms (VRHs)
A verification rank histogram records where the verification fell (i.e., its rank) among the ordered ensemble members:
- Flat: well-calibrated EF (truth's PDF matches the EF PDF)
- U-shaped: under-dispersive EF (truth "gets away" quite often)
- Humped: over-dispersive EF

For bias-corrected 36-h MSLP:
- *PME exhibits more dispersion than *UWME+ because *PME (a multi-model system) has more model diversity
- *PME is better at capturing the growth of synoptic-scale errors
- "Nudging" the MM5 outer domain may improve the SREF
(* denotes bias-corrected)
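The rank bookkeeping behind a VRH can be sketched as follows (a minimal version that ignores ties between the verification and a member value):

```python
import numpy as np

def verification_rank(members, verif):
    """Rank of the verification among the ordered ensemble members.

    Rank 1 means the verification lies below every member; rank K+1
    means it lies above all K members (an 'outside rank').
    """
    return int(np.sum(np.asarray(members) < verif)) + 1

def rank_histogram(member_matrix, verifs):
    """Counts of verification ranks over many cases.

    member_matrix: (ncases, K) ensemble values; verifs: (ncases,)
    Returns counts for ranks 1..K+1; a flat result indicates calibration.
    """
    k = member_matrix.shape[1]
    ranks = [verification_rank(m, v) for m, v in zip(member_matrix, verifs)]
    return np.bincount(ranks, minlength=k + 2)[1:]

# Toy example with a 4-member ensemble
members = np.array([[1.0, 2.0, 3.0, 4.0]])
print(verification_rank(members[0], 2.5))  # 3
```

A U-shaped pile-up in the first and last bins is exactly the "truth gets away" signature discussed on the later slides.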
Comparison of Skill
Brier Skill Score (BSS) vs. lead time for forecast probability of the event MSLP < 1001 mb, comparing UWME+, *UWME+, PME, and *PME (* denotes bias-corrected). BSS = 1 is a perfect forecast; BSS < 0 is worthless.
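The BSS used in these comparisons can be sketched as below; the reference forecast is assumed here to be sample climatology (the observed event frequency), and the probabilities are toy values:

```python
import numpy as np

def brier_score(fps, outcomes):
    """Brier score: mean squared error of the forecast probabilities."""
    fps = np.asarray(fps, dtype=float)
    outcomes = np.asarray(outcomes, dtype=float)
    return float(np.mean((fps - outcomes) ** 2))

def brier_skill_score(fps, outcomes):
    """BSS = 1 - BS/BS_ref, with climatology as the reference forecast.

    BSS = 1 is perfect; BSS <= 0 is no better than climatology.
    """
    outcomes = np.asarray(outcomes, dtype=float)
    clim = np.mean(outcomes)
    bs_ref = brier_score(np.full(len(outcomes), clim), outcomes)
    return 1.0 - brier_score(fps, outcomes) / bs_ref

fps = [0.9, 0.1, 0.8, 0.2]  # hypothetical event probabilities
outcomes = [1, 0, 1, 0]     # did MSLP < 1001 mb occur?
print(round(brier_skill_score(fps, outcomes), 2))  # 0.9
```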
Value of Model Diversity for a Mesoscale SREF: UWME vs. UWME+ (12-km Domain)
How Much & Where Does Truth "Get Away"?
Standardized verification (V_z) measures where the verification lies relative to the ensemble; the Verification Outlier Percentage (VOP) is the percentage of points with |V_z| > 3.
[Figure: *UWME 36-h MSLP verification rank histogram; MRE = 14.4%, VOP = 9.0%.]
Case Study: Initialized at 00Z, 20 Dec 2002
*UWME 36-h Z500 EF mean with standardized verification V_z (regions of V_z < -3 and V_z > 3 highlighted) and VRH outside ranks (rank 1 and rank 9): MRE = 25.7%, VOP = 18.5%.
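The outlier statistics above can be sketched numerically. The exact form of V_z is an assumption here: the verification's distance from the ensemble mean in units of the ensemble standard deviation. All values below are toy numbers:

```python
import numpy as np

def standardized_verification(members, verif):
    """V_z: how many ensemble standard deviations the verification lies
    from the ensemble mean (assumed form of the slides' V_z)."""
    members = np.asarray(members, dtype=float)
    return float((verif - members.mean()) / members.std(ddof=1))

def verification_outlier_percentage(member_matrix, verifs, thresh=3.0):
    """VOP: percentage of cases/grid points with |V_z| > thresh."""
    vz = np.array([standardized_verification(m, v)
                   for m, v in zip(member_matrix, verifs)])
    return float(100.0 * np.mean(np.abs(vz) > thresh))

# Toy example: the verification falls far outside a tight ensemble
# in one of two cases, so half the cases are outliers.
members = np.array([[10.0, 10.5, 9.5, 10.2],
                    [10.0, 10.5, 9.5, 10.2]])
verifs = np.array([10.1, 25.0])
print(verification_outlier_percentage(members, verifs))  # 50.0
```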
Comparison of 36-h VRHs: *UWME vs. *UWME+
VOP by variable:
(a) Z500 (synoptic variable; errors depend on analysis uncertainty): *UWME 5.0%, *UWME+ 4.2%
(b) MSLP: *UWME 9.0%, *UWME+ 6.7%
(c) WS10 (surface/mesoscale variable; errors depend on model uncertainty): *UWME 25.6%, *UWME+ 13.3%
(d) T2: *UWME 43.7%, *UWME+ 21.0%
Comparison of Skill
Brier Skill Score (BSS) vs. lead time for forecast probability of the event WS10 > 18 kt, comparing UWME, *UWME, UWME+, and *UWME+ (* denotes bias-corrected). BSS = 1 is a perfect forecast; BSS < 0 is worthless.
[Figure: uncertainty skill for P(WS10 > 18 kt), comparing UWME, *UWME, UWME+, and *UWME+.]
Conclusions
Bias Correction:
- Particularly important for mesoscale SREF, in which model biases are often large
- Significantly improves SREF utility by correctly adjusting the forecast PDF
- Allows for fair and accurate analysis

Multianalysis Approach for Representing Analysis Uncertainty:
- Contributes to a skillful SREF
- Analyses are too highly correlated at times and miss key features
- Limits EF size to the number of available analyses
  - Mirroring produces additional, valid samples of the approximate forecast PDF (i.e., from UWME) but cannot correct deficiencies in the original sample
- A more rigorous approach would be beneficial to SREF
Conclusions (continued)
Including model diversity in SREF is critical for representation of uncertainty:
- Provides the needed increase in spread
- Degree of importance depends on the variable or phenomenon considered
- Improves SREF utility through better estimation of the forecast PDF

The value of mesoscale SREF may be further increased by:
- Nudging MM5 outer-domain solutions toward the PME solutions (a PME, i.e. multimodel, system is good at capturing large-scale model uncertainty)
- Increasing the model resolution of SREF members to increase variance at small scales
- Introducing more thorough model perturbations
- Improving the model to reduce the need for model diversity
The End
Backup Slides
Operational Models/Analyses of the PME

Abbreviation, Model, Source | Type | Computational Resolution (~@45°N) | Distributed Resolution | Objective Analysis
avn, Global Forecast System (GFS), National Centers for Environmental Prediction | Spectral | T254/L64 (~55 km) | 1.0°/L14 (~80 km) | SSI 3D-Var
cmcg, Global Environmental Multi-scale (GEM), Canadian Meteorological Centre | Finite Diff. | 0.9° x 0.9°/L28 (~70 km) | 1.25°/L11 (~100 km) | 3D-Var
eta, limited-area mesoscale model, National Centers for Environmental Prediction | Finite Diff. | 32 km/L45 | 90 km/L37 | SSI 3D-Var
gasp, Global AnalysiS and Prediction model, Australian Bureau of Meteorology | Spectral | T239/L29 (~60 km) | 1.0°/L11 (~80 km) | 3D-Var
jma, Global Spectral Model (GSM), Japan Meteorological Agency | Spectral | T106/L21 (~135 km) | 1.25°/L13 (~100 km) | OI
ngps, Navy Operational Global Atmos. Pred. System, Fleet Numerical Meteorological & Oceanographic Cntr. | Spectral | T239/L30 (~60 km) | 1.0°/L14 (~80 km) | OI
tcwb, Global Forecast System, Taiwan Central Weather Bureau | Spectral | T79/L18 (~180 km) | 1.0°/L11 (~80 km) | OI
ukmo, Unified Model, United Kingdom Meteorological Office | Finite Diff. | 5/6° x 5/9°/L30 (~60 km) | same/L12 | 3D-Var
Design of UWME+
- Perturbed surface boundary parameters according to their suspected uncertainty: 1) albedo, 2) roughness length, 3) moisture availability
- Assumed that differences between model physics options approximate model error
How Much & Where Does Truth "Get Away"? (Case Study: Initialized at 00Z, 20 Dec 2002)
*PME 36-h Z500 EF mean with standardized verification V_z (regions of V_z < -3 and V_z > 3) and VRH outside ranks (rank 1 and rank 9): MR = 37.3%, MRE = 15.1%, VOP = 8.0%.
[Figure: *ACME core 36-h MSLP verification rank histogram; MR = 36.6%, MRE = 14.4%, VOP = 9.0%.]
Case Study: Initialized at 00Z, 20 Dec 2002
36-h forecast Z500 EF mean and standardized verification V_z (regions of V_z < -3 and V_z > 3), verified against the centroid analysis at 12Z, 21 Dec 2002: *ACME core+ VOP = 13.8%; *PME VOP = 8.0%.
[Figure: uncertainty skill for P(T2 < 0°C), comparing ACME core, *ACME core, ACME core+, and *ACME core+.]
Ensemble Dispersion (*bias-corrected)
[Figure: ensemble spread vs. MSE of the ensemble mean for WS10 (m²/s²) and T2 (°C²), comparing *ACME core spread, MSE of the *ACME core mean, *ACME core+ spread, MSE of the *ACME core+ mean, and the uncorrected ACME core and ACME core+ spreads; c² = 11.4 m²/s² for WS10 and c² = 13.2 °C² for T2.]
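The spread-vs-MSE comparison in the dispersion diagrams can be sketched as below. The definitions are assumptions consistent with common practice: spread as the mean ensemble variance, and MSE of the ensemble mean against the verification; for a well-calibrated ensemble the two should be comparable on average.

```python
import numpy as np

def spread_and_mse(member_matrix, verifs):
    """Ensemble spread and MSE of the ensemble mean.

    member_matrix: (ncases, K) ensemble values; verifs: (ncases,)
    Returns (mean ensemble variance, MSE of the ensemble mean).
    """
    spread = float(np.mean(np.var(member_matrix, axis=1, ddof=1)))
    mse = float(np.mean((np.mean(member_matrix, axis=1) - verifs) ** 2))
    return spread, mse

# Synthetic check: members and verification drawn from the same unit-variance
# distribution, so spread and MSE of the mean come out comparable.
rng = np.random.default_rng(1)
members = rng.normal(size=(5000, 8))
verifs = rng.normal(size=5000)
spread, mse = spread_and_mse(members, verifs)
```

An under-dispersive ensemble (the usual SREF failure mode shown in the slides) appears here as spread well below the MSE of the mean.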
Ensemble Dispersion at Smaller Scales
[Figure: UWME ensemble variance (m²/s²) of WS10 and 3-h cumulative precipitation at 36-km vs. 12-km grid spacing.]
Phase Space and Initial Condition Error
A point in phase space completely describes an instantaneous state of the atmosphere. For a model, a point is the vector of values for all parameters (pressure, temperature, etc.) at all grid points at one time. The true state of the atmosphere (T) exists as a single point in phase space that we never know exactly. An analysis is just one possible initial condition (IC) used to run a numerical model (such as NCEP's Eta).

Trajectories of truth and the model diverge because:
1) IC error grows nonlinearly
2) Model error adds to the problem

[Figure: phase-space schematic showing the Eta analysis trajectory diverging from truth through the 12-, 24-, 36-, and 48-h forecasts and their verifications.]
[Figure: phase-space schematic of the analysis region, with the 8 "core" analyses (e, a, u, c, j, t, g, n) surrounding truth T, and the 48-h forecast region with ensemble mean M.]
Plug each analysis into MM5 to create an ensemble of mesoscale forecasts (a cloud of future states encompassing truth), which:
1) Reveals the uncertainty in the forecast
2) Reduces error by averaging to the ensemble mean M
3) Yields probabilistic information
Success and Failure of ACME: ACME core vs. ACME (36-km and 12-km Domains)
The ACME Process
STEP 1: Calculate the best guess for truth (the centroid) by averaging all analyses.
STEP 2: Find the error vector in model phase space between one analysis and the centroid by differencing all state variables over all grid points.
STEP 3: Make a new IC by mirroring that error about the centroid.
[Figure: sea level pressure (994-1006 mb) over 135°W-170°W showing the eight analyses (eta, ngps, tcwb, gasp, avn, ukmo, cmcg), the centroid (cent), and the mirrored member cmcg*; the scale bar spans ~1000 km.]
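The three steps above can be sketched directly; the state vectors below are toy 2-point pressure fields, not real analyses:

```python
import numpy as np

def acme_mirrors(analyses):
    """Sketch of the ACME centroid-mirroring steps.

    analyses: (K, npoints) array, one row per analysis state vector.
    Returns the centroid and the K mirrored ICs, where each mirror
    reflects an analysis's error about the centroid:
        mirror_k = centroid - (analysis_k - centroid) = 2*centroid - analysis_k
    """
    analyses = np.asarray(analyses, dtype=float)
    centroid = analyses.mean(axis=0)   # STEP 1: best guess for truth
    errors = analyses - centroid       # STEP 2: error vectors vs. centroid
    mirrors = centroid - errors        # STEP 3: reflect about the centroid
    return centroid, mirrors

# Toy 2-point state vectors (mb) for 4 analyses
analyses = np.array([[1000.0, 998.0],
                     [1002.0, 996.0],
                     [1001.0, 997.0],
                     [1005.0, 993.0]])
centroid, mirrors = acme_mirrors(analyses)
print(centroid)    # [1002.  996.]
print(mirrors[0])  # [1004.  994.]
```

Note that the mirrors' mean equals the centroid, so mirroring doubles the sample of ICs without shifting the ensemble center, consistent with the conclusion that it adds valid samples but cannot fix deficiencies in the original sample.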
ACME's Centroid Analysis
[Figure: phase-space schematic showing the Eta analysis and the centroid analysis c within the analysis region around truth T, and the 48-h forecast region with ensemble mean M.]
ACME's Mirrored Analyses
[Figure: phase-space schematic of the mirrored analyses: each core analysis (e, a, u, c, j, t, g, n) is reflected about the centroid (e.g., the Eta analysis becomes Eta'), populating the analysis region around truth T and the 48-h forecast region with ensemble mean M.]
Comparison of Verification Rank Histograms
[Figure: verification rank histograms (probability vs. ranks 1-18) for *ACME core and *ACME, for 36-h T2 and 36-h MSLP.]
Verification Rank Histograms for 36-h forecasts (*ACME core vs. *ACME; probability vs. ranks 1-18):
- WS10: *ACME core MR = 52.5%, MRE = 30.3%, VOP = 25.9%; *ACME MR = 44.0%, MRE = 32.9%, VOP = 24.7%
- [unlabeled panel]: *ACME core MR = 68.1%, MRE = 45.9%, VOP = 44.4%; *ACME MR = 62.3%, MRE = 51.2%, VOP = 44.2%
- T2: MR = 36.5%, MRE = 14.2%, VOP = 9.1%
- MSLP: MR = 26.1%, MRE = 15.0%, VOP = 8.7%