
Effective Mesoscale, Short-Range Ensemble Forecasting
Tony Eckel** and Clifford F. Mass
Presented by: Eric Grimit
University of Washington, Atmospheric Sciences Department
**AFWA, Offutt AFB, NE
This research was supported by:
- The United States Air Force
- The DoD Multidisciplinary University Research Initiative (MURI) program, administered by the Office of Naval Research under Grant N
- The National Weather Service

Overview Questions
• Can an effective mesoscale, short-range ensemble forecast (SREF) system be designed? (Effective: skillful forecast probability)
• How should analysis uncertainty be accounted for?
• How much does model bias impact a SREF system, and can it be easily corrected?
• How should model uncertainty be accounted for?

Forecast Probability from an Ensemble
• An ensemble forecast (EF) provides an estimate (histogram) of truth's Probability Density Function (PDF).
• In a large, ideal EF system, Forecast Probability (FP) = Observed Relative Frequency (ORF).
• In practice, things go awry from:
  - Undersampling of the PDF (too few ensemble members)
  - Poor representation of initial uncertainty
  - Model deficiencies:
    -- Model bias causes a shift in the estimated mean
    -- Sharing of model errors between EF members leads to reduced variance
• The EF's estimated PDF then does not match truth's PDF, and FP ≠ ORF.
[Figure: frequency histograms of surface wind speed (kt) at the initial, 24-h, and 48-h forecast states vs. the true PDF; probability of surface winds > 18.0 kt, with FP = 93% at the event threshold vs. FP = ORF = 72% in the ideal case]
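As a concrete illustration of the FP calculation described above, here is a minimal Python sketch that counts the fraction of members past the event threshold (the simple "democratic voting" estimate; the eight wind values are invented for illustration):

```python
import numpy as np

def forecast_probability(members, threshold):
    """Fraction of ensemble members exceeding the event threshold,
    i.e., the raw (uncalibrated) forecast probability FP."""
    members = np.asarray(members)             # shape: (n_members, ...)
    return (members > threshold).mean(axis=0)

# Hypothetical 8-member surface wind speed forecast (kt) at one point:
winds = [21.3, 19.8, 17.2, 22.5, 18.9, 16.4, 20.1, 23.0]
fp = forecast_probability(winds, 18.0)        # 6 of 8 members -> FP = 0.75
```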

UW's Ensemble of Ensembles

Name    | # of EF Members | Type | Initial Conditions                     | Forecast Model(s)          | Forecast Cycle | Domain
ACME    | 17              | SMMA | 8 ind. analyses, 1 centroid, 8 mirrors | "standard" MM5             | 00Z            | 36 km, 12 km
UWME    | 8               | SMMA | 8 independent analyses                 | "standard" MM5             | 00Z            | 36 km, 12 km
UWME+   | 8               | PMMA | 8 independent analyses                 | 8 MM5 variations           | 00Z            | 36 km, 12 km
PME     | 8               | MMMA | 8 independent analyses                 | 8 operational, large-scale | 00Z, 12Z       | 36 km

(ACME, UWME, and UWME+ are homegrown; the PME is imported.)

ACME: Analysis-Centroid Mirroring Ensemble
PME: Poor Man's Ensemble
MM5: 5th-Generation PSU/NCAR Mesoscale Modeling System
SMMA: Single-Model Multi-Analysis
PMMA: Perturbed-Model Multi-Analysis
MMMA: Multi-Model Multi-Analysis

Research Dataset
• Total of 129 48-h forecasts (31 Oct 2002 – 28 Mar 2003), all initialized at 00Z.
  [Figure: November–March calendar; incomplete forecast case days are shaded]
• Parameters:
  36-km domain: mean sea level pressure (MSLP), 500-mb geopotential height (Z500)
  12-km domain: wind speed at 10 m (WS10), temperature at 2 m (T2)
• Verification:
  36-km domain: centroid analysis (mean of the 8 independent analyses, available at 12-h increments)
  12-km domain: RUC20 analysis (NCEP 20-km mesoscale analysis, available at 3-h increments)
• MM5 nested model domains: 36-km domain (151 × 127), 12-km domain (103 × 100)
Note: global PME data was fitted to the 36-km domain.

Gridded, Mean Bias Correction
For the current forecast cycle:
1) Calculate the bias at every grid point and lead time using the previous 2 weeks' forecasts:
     bias_{i,j,t} = (1/N) * Σ_{n=1..N} ( f_{i,j,t,n} − o_{i,j,n} )
2) Post-process the current forecast to correct for the bias:
     f*_{i,j,t} = f_{i,j,t} − bias_{i,j,t}
where
  N         = number of forecast cases (14)
  f_{i,j,t}  = forecast at grid point (i, j) and lead time (t)
  o_{i,j}    = observation (centroid-analysis or RUC20 verification)
  f*_{i,j,t} = bias-corrected forecast at grid point (i, j) and lead time (t)
[Figure: November–March calendar showing the rolling 2-week training periods and the bias-corrected forecast periods]
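A minimal NumPy sketch of this two-step procedure (the array shapes are assumptions; the slide itself only defines the symbols):

```python
import numpy as np

def mean_bias_correct(train_fcsts, train_obs, current_fcst):
    """Gridded mean bias correction.
    train_fcsts, train_obs: (N, n_leads, ny, nx) arrays holding the
    previous N (= 14) forecast cases and their verifying analyses.
    Step 1: bias = mean(f - o) at every grid point and lead time.
    Step 2: subtract that bias from the current forecast."""
    bias = (train_fcsts - train_obs).mean(axis=0)   # (n_leads, ny, nx)
    return current_fcst - bias                      # f* = f - bias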

Uncorrected UWME+ T2
[Figure: average RMSE (°C) and (shaded) average bias at 12-, 24-, 36-, and 48-h lead times]

Bias-Corrected UWME+ T2
[Figure: average RMSE (°C) and (shaded) average bias at 12-, 24-, 36-, and 48-h lead times]

Multimodel vs. Perturbed-Model: PME vs. UWME+ (36-km Domain)

Comparison of VRHs
A Verification Rank Histogram (VRH) records where the verification fell (i.e., its rank) among the ordered ensemble members:
  Flat: well-calibrated EF (truth's PDF matches the EF PDF)
  U-shaped: under-dispersive EF (truth "gets away" quite often)
  Humped: over-dispersive EF
• *PME exhibits more dispersion than *UWME+ because:
  - *PME (a multi-model system) has more model diversity
  - *PME is better at capturing the growth of synoptic-scale errors
• "Nudging" the MM5 outer domain may improve the SREF.
[Figure: probability vs. verification rank for *PME and *UWME+; *bias-corrected, 36-h MSLP]
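A sketch of how the ranks behind such a histogram can be computed (ties between the verification and a member are ignored here for simplicity):

```python
import numpy as np

def verification_ranks(members, verif):
    """Rank of the verification among the ordered ensemble members.
    members: (n_members, n_cases); verif: (n_cases,).
    Returns ranks in 1..n_members+1; rank 1 means the verification fell
    below every member, rank n_members+1 above every member."""
    return (np.asarray(members) < np.asarray(verif)).sum(axis=0) + 1

# Histogramming these ranks over many cases gives the VRH:
# flat = well calibrated, U-shaped = under-dispersive, humped = over-dispersive.
```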

Comparison of Skill
Skill vs. lead time for FP of the event: MSLP < 1001 mb.
BSS = 1: perfect; BSS < 0: worthless.
[Figure: BSS vs. lead time for *UWME+, UWME+, *PME, and PME; *bias-corrected]
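For reference, a sketch of the Brier Skill Score plotted here, using climatology as the reference forecast (a standard choice, though the slide does not state the reference):

```python
import numpy as np

def brier_skill_score(fp, outcomes, clim_freq):
    """BSS = 1 - BS / BS_ref.
    fp: forecast probabilities in [0, 1]; outcomes: 1 if the event
    (e.g., MSLP < 1001 mb) occurred, else 0; clim_freq: climatological
    event frequency used as the reference forecast."""
    fp, outcomes = np.asarray(fp), np.asarray(outcomes)
    bs = np.mean((fp - outcomes) ** 2)          # Brier score of the EF
    bs_ref = np.mean((clim_freq - outcomes) ** 2)  # Brier score of climatology
    return 1.0 - bs / bs_ref
```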

Value of Model Diversity for a Mesoscale SREF: UWME vs. UWME+ (12-km Domain)

How Much & Where Does Truth "Get Away"?
Verification Outlier Percentage (VOP): percentage of cases with standardized verification |Vz| > 3, where Vz is the verification minus the ensemble mean, divided by the ensemble spread.
[Figure: *UWME 36-h MSLP verification rank histogram; MRE = 14.4%, VOP = 9.0%]
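A sketch of the VOP computation under the Vz definition given above (using the sample standard deviation as "spread" is an assumption):

```python
import numpy as np

def verification_outlier_percentage(members, verif):
    """Percentage of cases with |Vz| > 3, where Vz standardizes the
    verification by the ensemble mean and ensemble spread."""
    members = np.asarray(members)                    # (n_members, n_cases)
    vz = (verif - members.mean(axis=0)) / members.std(axis=0, ddof=1)
    return 100.0 * np.mean(np.abs(vz) > 3)
```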

Case Study: Initialized at 00Z, 20 Dec 2002
[Figure: *UWME 36-h Z500 EF mean with standardized verification Vz (regions of Vz < −3 and Vz > 3), and Z500 EF mean with VRH outside ranks (rank 1, rank 9); MRE = 25.7%, VOP = 18.5%]

Comparison of 36-h VRHs: *UWME vs. *UWME+
Synoptic variables (errors depend on analysis uncertainty) vs. surface/mesoscale variables (errors depend on model uncertainty):
              *UWME VOP | *UWME+ VOP
(a) Z500        5.0 %   |   4.2 %
(b) MSLP        9.0 %   |   6.7 %
(c) WS10       25.6 %   |  13.3 %
(d) T2         43.7 %   |  21.0 %

Comparison of Skill
Skill vs. lead time for FP of the event: WS10 > 18 kt.
BSS = 1: perfect; BSS < 0: worthless.
[Figure: BSS vs. lead time for *UWME, UWME, *UWME+, and UWME+; *bias-corrected]

Uncertainty Skill for P(WS10 > 18 kt)
[Figure: curves for *UWME, UWME, *UWME+, and UWME+]

Conclusions  Bias Correction Particularly important for mesoscale SREF in which model biases are often large Significantly improves SREF utility by correctly adjusting the forecast PDF Allows for fair and accurate analysis  Multianalysis Approach for Representing Analysis Uncertainty Contributes to a skilled SREF Analyses too highly correlated at times—miss key features Limits EF size to number of available analyses - Mirroring produces additional, valid samples of the approximate forecast PDF (i.e., from UWME) but can not correct deficiencies in original sample More rigorous approach would be beneficial to SREF

Conclusions (cont.)
• Including Model Diversity in SREF is Critical for Representation of Uncertainty
  - Provides the needed increase in spread
  - Degree of importance depends on the variable or phenomenon considered
  - Improves SREF utility through better estimation of the forecast PDF
• The Value of Mesoscale SREF May Be Further Increased by...
  - Nudging MM5 outer-domain solutions toward the PME solutions
    -- A PME (multimodel) system is great at capturing large-scale model uncertainty
  - Increasing the model resolution of SREF members to increase variance at small scales
  - Introducing more thorough model perturbations
  - Improving the model to reduce the need for model diversity

The End

Backup Slides

Operational Models/Analyses of the PME
(Resolution at 45°N, computational and distributed; objective analysis type)

avn: Global Forecast System (GFS), National Centers for Environmental Prediction
     Spectral; T254 / L64 (~55 km); 1.0° / L14 (~80 km); SSI 3D-Var
cmcg: Global Environmental Multi-scale (GEM), Canadian Meteorological Centre
     Finite Diff.; 0.9° × 0.9° (~70 km); ~100 km / L11; 3D-Var
eta: limited-area mesoscale model, National Centers for Environmental Prediction
     Finite Diff.; 32 km / L45; 90 km / L37; SSI 3D-Var
gasp: Global AnalysiS and Prediction model, Australian Bureau of Meteorology
     Spectral; T239 / L29 (~60 km); 1.0° / L11 (~80 km); 3D-Var
jma: Global Spectral Model (GSM), Japan Meteorological Agency
     Spectral; T106 (~135 km); ~100 km / L13; OI
ngps: Navy Operational Global Atmos. Pred. System, Fleet Numerical Meteorological & Oceanographic Cntr.
     Spectral; T239 / L30 (~60 km); 1.0° / L14 (~80 km); OI
tcwb: Global Forecast System, Taiwan Central Weather Bureau
     Spectral; T79 / L18 (~180 km); 1.0° / L11 (~80 km); OI
ukmo: Unified Model, United Kingdom Meteorological Office
     Finite Diff.; 5/6° × 5/9° / L30 (~60 km); same / L12; 3D-Var

Design of UWME+
• Perturbed surface boundary parameters according to their suspected uncertainty:
  1) Albedo
  2) Roughness length
  3) Moisture availability
• Assumed differences between model physics options approximate model error

How Much & Where Does Truth "Get Away"? Case Study: Initialized at 00Z, 20 Dec 2002
Verification Outlier Percentage: % of cases with |Vz| > 3.
[Figure: 36-h Z500 EF mean and standardized verification Vz (regions of Vz < −3 and Vz > 3), and Z500 EF mean with VRH outside ranks (rank 1, rank 9);
 *PME: MR = 37.3%, MRE = 15.1%, VOP = 8.0%;
 *ACME core 36-h MSLP verification rank histogram: MR = 36.6%, MRE = 14.4%, VOP = 9.0%]

Case Study: Initialized at 00Z, 20 Dec 2002
[Figure: 36-h forecast Z500 EF mean and standardized verification Vz (regions of Vz < −3 and Vz > 3), verified against the Z500 centroid analysis at 12Z, 21 Dec 2002; VOP = 13.8% (*ACME core+) vs. VOP = 8.0% (*PME)]

Uncertainty Skill for P(T2 < 0°C)
[Figure: curves for *ACME core, ACME core, *ACME core+, and ACME core+]

(m 2 /s 2 ) (C2)(C2) WS 10 T2T2 *ACME core Spread MSE of *ACME core Mean *ACME core+ Spread MSE of *ACME core+ Mean  c 2  11.4 m 2 / s 2  c 2  13.2 ºC 2 Ensemble Dispersion *Bias Corrected ACME core Spread (Uncorrected) ACME core+ Spread (Uncorrected)

Ensemble Dispersion at Smaller Scales
[Figure: UWME ensemble variance (m²/s²) of WS10 for 36-km vs. 12-km grid spacing, and for 3-h cumulative precipitation]

Phase Space
• A point in phase space completely describes an instantaneous state of the atmosphere. For a model, a point is the vector of values for all parameters (pressure, temperature, etc.) at all grid points at one time.
• The true state of the atmosphere (T) exists as a single point in phase space that we never know exactly.
• An analysis is just one possible initial condition (IC) used to run a numerical model (such as NCEP's Eta).
• Trajectories of truth and model diverge:
  1) IC error grows nonlinearly
  2) Model error adds to the problem
[Figure: phase-space schematic of the Eta analysis forecast trajectory diverging from truth (T) at the 12-, 24-, 36-, and 48-h forecasts and verifications]

8 "Core" Analyses
Plug each of the 8 analyses (eta, avn, ukmo, cmcg, jma, tcwb, gasp, ngps) into MM5 to create an ensemble of mesoscale forecasts (a cloud of future states encompassing truth), which can:
1) Reveal uncertainty in the forecast
2) Reduce error by averaging to the ensemble mean (M)
3) Yield probabilistic information
[Figure: phase-space schematic of the analysis region (8 core analyses around truth, T) and the 48-h forecast region (forecasts around the mean, M)]

Success and Failure of ACME: ACME core vs. ACME (36-km and 12-km Domains)

The ACME Process
STEP 1: Calculate the best guess for truth (the centroid) by averaging all analyses.
STEP 2: Find the error vector in model phase space between one analysis and the centroid by differencing all state variables over all grid points.
STEP 3: Make a new IC by mirroring that error about the centroid.
[Figure: sea level pressure (mb) contours over the NE Pacific (170°W–135°W, ~1000 km scale) for the analyses avn, cmcg, eta, gasp, ngps, tcwb, ukmo, their centroid (cent), and the mirrored analysis cmcg*]
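The three steps reduce to simple vector arithmetic in model phase space; a minimal sketch (flattening each state to a 1-D vector is an assumption):

```python
import numpy as np

def acme_mirror(analyses):
    """ACME mirroring.
    STEP 1: centroid = mean of all analyses (best guess for truth).
    STEP 2: error vector = analysis - centroid.
    STEP 3: mirror = centroid - error = 2*centroid - analysis.
    analyses: (n_analyses, state_dim) array of flattened state vectors."""
    analyses = np.asarray(analyses)
    centroid = analyses.mean(axis=0)
    mirrors = 2.0 * centroid - analyses   # one mirrored IC per analysis
    return centroid, mirrors
```

With 8 core analyses this yields the 17-member ACME listed earlier: 8 analyses + 1 centroid + 8 mirrors.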

ACME's Centroid Analysis
[Figure: phase-space schematic of the analysis region (8 core analyses plus the centroid analysis, c, and the Eta analysis) and the 48-h forecast region (ensemble mean M), with truth T]

ACME's Mirrored Analyses
[Figure: phase-space schematic of the analysis region (8 core analyses, the centroid analysis, and the mirrored Eta′ analysis) and the 48-h forecast region (ensemble mean M), with truth T]

Comparison of Verification Rank Histograms
[Figure: probability vs. verification rank for *ACME core and *ACME; 36-h T2 and 36-h MSLP]

Verification Rank Histograms for...
36-h MSLP:  *ACME core: MR = 36.5%, MRE = 14.2%, VOP = 9.1%  |  *ACME: MR = 26.1%, MRE = 15.0%, VOP = 8.7%
36-h WS10:  *ACME core: MR = 52.5%, MRE = 30.3%, VOP = 25.9% |  *ACME: MR = 44.0%, MRE = 32.9%, VOP = 24.7%
36-h T2:    *ACME core: MR = 68.1%, MRE = 45.9%, VOP = 44.4% |  *ACME: MR = 62.3%, MRE = 51.2%, VOP = 44.2%
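These statistics can be reproduced as follows. MR is the missing rate (verification outside the ensemble envelope), and MRE subtracts the missing rate expected of a perfectly calibrated N-member ensemble, 2/(N+1); that baseline is inferred from the slide's numbers (e.g., for the 17-member *ACME, MR = 26.1% minus 2/18 = 11.1% gives MRE = 15.0%, matching the MSLP row):

```python
import numpy as np

def missing_rate_stats(members, verif):
    """MR: % of cases where the verification falls outside the ensemble
    envelope (below the lowest or above the highest member).
    MRE: MR minus the 2/(N+1) rate expected of a perfectly calibrated
    N-member ensemble (baseline inferred from the slide's numbers)."""
    members = np.asarray(members)                  # (n_members, n_cases)
    n = members.shape[0]
    outside = (verif < members.min(axis=0)) | (verif > members.max(axis=0))
    mr = 100.0 * outside.mean()
    mre = mr - 100.0 * 2.0 / (n + 1)
    return mr, mre
```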