
Zhan Zhang and HWRF Team

Outline
- Introduction to ensemble prediction systems: what and why; approaches to ensemble prediction; hurricane ensemble prediction
- HWRF-based EPS: methodology
- Verification I: ensemble vs. deterministic
- Statistical characteristics of the HWRF EPS
- Verification II: HWRF EPS vs. top-flight models
- How ensemble perturbations affect storm intensity forecasts
- Conclusions and future work

What is an ensemble forecast?
An ensemble forecast is simply a collection of two or more forecasts verifying at the same time. It aims to estimate the probability density function of forecast states.

Why do we need ensemble forecasts?
Uncertainties, even weak noise, acting on a numerical weather prediction (NWP) model system can have far-reaching consequences because of the system's chaotic, nonlinear nature (Lorenz, 1963, 1965).

What are the main sources of uncertainty?
- IC/BC uncertainties: observational errors, poor data coverage, and errors in the data assimilation (DA) system;
- Model uncertainties: misrepresentation of model dynamics/physics and the impact of sub-grid-scale features.

These uncertainties are inevitable. In a chaotic system such as an atmospheric model, errors grow nonlinearly, sometimes rapidly, until the forecast output becomes useless.

Track Prediction for Hurricane Debby
Multi-model ensembles: NCEP, ECMWF, CMC.
Large differences in the predicted storm tracks due to: 1. multi-model dynamics; 2. multi-physics; 3. multiple initial analyses.

Prediction for Hurricane Raymond
Large differences in predicted storm intensity due to sub-grid uncertainties in the model physics: the stochastically perturbed cumulus convection scheme in HWRF.
Dry air at mid-levels suppressed storm development in one member, while in another member active convective cells overcame the dry air and the storm intensified.
Two clusters: decaying and intensifying.

Approaches to Ensemble Prediction
- Monte Carlo approach (not practically possible): sample all sources of forecast error, perturbing every input variable and every model parameter that is not perfectly known; take into account as many sources of forecast error as possible.
- Reduced sampling (limited resources): sample the leading sources of forecast error; rank the sources, prioritize, and optimize the sampling, since growing components dominate forecast-error growth.
- Existing methods:
  - Initial uncertainties: SV-based ensemble (ECMWF), EnKF-based ensemble (MSC), BV-based ensemble (NCEP), ETKF-based ensemble (UK Met Office), ETR-based ensemble, EOF-based ensemble.
  - Model uncertainties: multi-model ensembles; a single model with multi-physics.
- Desired ensemble perturbations: growing modes, orthogonality, no bias, similar to analysis error.
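The payoff of the Monte Carlo idea can be shown with a toy system: sample initial states from the analysis-error distribution, run each through a nonlinear "model", and average. The ensemble mean beats a single deterministic run on average. This is a hedged sketch; the logistic-map model and the error statistics are invented purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

def model(x, steps=4, r=3.7):
    """Toy nonlinear 'forecast model': a few iterations of a chaotic logistic map."""
    for _ in range(steps):
        x = r * x * (1.0 - x)
    return x

truth0, analysis_err = 0.42, 0.01   # true initial state, assumed analysis-error std dev
verif = model(truth0)               # the verifying "observation"

n_trials, n_members = 2000, 20
single_err = np.empty(n_trials)
mean_err = np.empty(n_trials)
for t in range(n_trials):
    # Monte Carlo: sample initial states from the analysis-error distribution
    members0 = truth0 + rng.normal(0.0, analysis_err, size=n_members)
    fcsts = model(members0)
    single_err[t] = abs(fcsts[0] - verif)    # one deterministic forecast
    mean_err[t] = abs(fcsts.mean() - verif)  # the ensemble-mean forecast
print("single:", single_err.mean(), " ensemble mean:", mean_err.mean())
```

Averaging filters the unpredictable components of the error, which is why ensemble means typically verify better than any single member.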

HWRF-based Ensemble Prediction
Considerations for a hurricane EPS:
1. Uncertainties in the initial storm position, intensity, and structure;
2. Uncertainties in the large-scale flow (ICs/BCs);
3. The results of multi-scale interactions among sub-grid scales (~0-100 m), convective clouds, and the large-scale environment.

Background and Motivation
- Convective trigger function in the current HWRF cumulus parameterization scheme (SAS: Simplified Arakawa-Schubert):
  P_CSL - P_LFC <= DP(w): convection is triggered;
  P_CSL - P_LFC > DP(w): no sub-grid convection.
  Here P_CSL is the parcel pressure at the convection starting level, P_LFC is the parcel pressure at the level of free convection, and DP(w) is the convective trigger threshold, a function of the large-scale vertical velocity w, arbitrarily confined between 120 hPa and 180 hPa.
- Storm intensity (maximum wind speed) is found to be very sensitive to the convective trigger function;
- It is therefore necessary to introduce a fuzzy-logic trigger to represent sub-grid features.
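In code, the trigger condition reads as a simple predicate. This is a hedged sketch: the slide does not give the functional form of DP(w), so the linear ramp below (including the `w_ref` parameter) is an invented stand-in, clipped to the stated 120-180 hPa range:

```python
import numpy as np

def dp_threshold(w, dp_min=120.0, dp_max=180.0, w_ref=0.1):
    """Convective trigger threshold DP(w) in hPa.

    The operational SAS formula is not given in the talk; this assumes a
    simple monotonic dependence on the large-scale vertical velocity w (m/s),
    clipped to the 120-180 hPa range quoted on the slide.
    """
    return float(np.clip(dp_min + (dp_max - dp_min) * (w / w_ref), dp_min, dp_max))

def convection_triggered(p_csl, p_lfc, w):
    """SAS-style trigger: convection fires when the parcel reaches its level
    of free convection within DP(w) hPa of the convection starting level."""
    return (p_csl - p_lfc) <= dp_threshold(w)

# A parcel whose LFC sits 150 hPa above its starting level: whether it
# triggers depends entirely on where DP(w) lands in the 120-180 hPa window.
print(convection_triggered(p_csl=950.0, p_lfc=800.0, w=0.0))  # DP = 120 hPa
print(convection_triggered(p_csl=950.0, p_lfc=800.0, w=0.1))  # DP = 180 hPa
```

The borderline case makes the slide's point: a modest change in the threshold flips the convection decision on or off, which is why the intensity forecast is so sensitive to this function.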

Methodology
- Model physics perturbations (sub-grid scale): stochastic convective trigger,
  P_CSL - P_LFC <= DP(w) + R_r(n)
  where R_r is white noise ranging from -50 hPa to +50 hPa and n, the ensemble member index, is used as the random seed; no spatial or temporal correlations are imposed.
- IC/BC perturbations (large scale): 20-member GEFS.
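The stochastic version simply adds a per-member white-noise term to the trigger threshold. A sketch under the slide's description (uniform ±50 hPa noise, member index as seed, fresh draw per call so there is no spatial or temporal correlation); `dp_w` is a hypothetical stand-in for the unperturbed DP(w):

```python
import numpy as np

def perturbed_trigger(p_csl, p_lfc, dp_w, rng, noise_amp=50.0):
    """SAS trigger with a stochastic perturbation R_r drawn from white noise
    in [-noise_amp, +noise_amp] hPa; each call draws a fresh value, so no
    spatial or temporal correlation is imposed."""
    r = rng.uniform(-noise_amp, noise_amp)
    return (p_csl - p_lfc) <= dp_w + r

# One RNG per ensemble member, seeded by the member index n as on the slide.
members = {n: np.random.default_rng(n) for n in range(1, 21)}

# A borderline parcel (P_CSL - P_LFC = 150 hPa, with DP(w) = 150 hPa assumed)
# triggers convection in some members and not in others:
decisions = [perturbed_trigger(950.0, 800.0, 150.0, rng) for rng in members.values()]
print(sum(decisions), "of 20 members trigger convection")
```

Because each member makes a different convection decision at borderline points, the members' storm-scale heating fields, and hence their intensity forecasts, diverge.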

HWRF EPS vs. its Deterministic Versions
- HW01-HW20: individual ensemble members generated from the stochastic convective trigger + GEFS ICs/BCs;
- HWMN: mean of the HW01-HW20 ensemble;
- FY13: 2013 operational HWRF, the deterministic version of HWRF;
- H212: 2012 operational HWRF.
Sample: all storms from Aug. 1 to Oct. 31 (463 cycles), plus the 2011 storms 09L, 12L, 14L, and 16L (144 cycles): 607 cycles in total.

Error plots: ATL intensity, ~14% improvement; EP intensity, ~22% improvement at day 5; ATL track, comparable; EP track, ~23% improvement at day 4.

Central Pressure Verifications: Atlantic and E-Pac.

Frequency of Superior Performance Verifications (Atlantic Basin): track, FY13 vs. H212 and HWMN vs. FY13; intensity, FY13 vs. H212 and HWMN vs. FY13.

Frequency of Superior Performance Verifications (E-Pac Basin): track, FY13 vs. H212 and HWMN vs. FY13; intensity, FY13 vs. H212 and HWMN vs. FY13.

Track Probability Forecasts for Hurricane Sandy (FY13 track overlaid): few members turned west, then more members turned west, then all members turned west.

Ensemble Intensity Forecasts for Hurricane Sandy.

Spaghetti Plot Comparison for Track Forecasts (Sandy, 2012): FY13 vs. HWMN. Fewer outliers in the ensemble.

Spaghetti Plot Comparison for Intensity Forecasts (Sandy, 2012): FY13 vs. HWMN. Tighter clusters in the ensemble.

Statistical Characteristics of the HWRF EPS
- Ensemble spread and forecast error relationship: treating the ensemble spread and the forecast error as two random variables, their 1st and 2nd moments are evaluated;
- Spread/skill correlation: studies have shown that the spread/skill correlation is generally not as high as expected, less than 0.5 (Baker 1991; Houtekamer 1993; Whitaker and Loughe 1997);
- Spread evaluation: rank histograms of the ensemble forecasts.

1. The mean of the ensemble spread is close to the mean of the forecast errors;
2. The difference between the two lines indicates the degree of ensemble dispersion; the under-dispersion increases with forecast lead time.

Relationship between Ensemble Spread and Forecast Error (Track and Intensity)
Solid line: spread/error correlation; dashed line: |Es - Ee|, where Es is the standard deviation of the ensemble spread and Ee is the standard deviation of the forecast error.
1. The spread/skill correlations peak at day 3 for both track and intensity;
2. The difference Es - Ee measures the closeness of the 'climatological' values of the ensemble spread and the forecast errors; the ensemble spread has less predictive value (for forecast skill) when |Es - Ee| is small;
3. In the HWRF EPS, the intensity spread is a good indicator of forecast skill up to 72 h; the track spread is not.
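The spread/skill diagnostic itself is easy to state in code: correlate the per-cycle ensemble spread with the per-cycle forecast error. A synthetic-data sketch (the gamma-distributed spread and the error model proportional to it are invented for illustration, not HWRF statistics):

```python
import numpy as np

rng = np.random.default_rng(42)
n_cases = 463  # number of cycles, as in the verification sample

# Synthetic per-cycle ensemble spread (std dev about the ensemble mean)
spread = rng.gamma(shape=4.0, scale=2.0, size=n_cases)

# Synthetic forecast error: on average proportional to the spread plus noise,
# mimicking a flow-dependent error distribution
error = np.abs(rng.normal(loc=0.0, scale=spread))

# First moments: a well-calibrated ensemble has mean spread ~ mean error
print("mean spread:", spread.mean(), " mean error:", error.mean())

# Spread/skill correlation: well below 1 even for this perfectly calibrated
# synthetic ensemble, consistent with the < 0.5 values reported in the literature
corr = np.corrcoef(spread, error)[0, 1]
print("spread/skill correlation:", corr)
```

Even when the spread controls the error distribution exactly, a single realized error is a noisy draw, so the case-by-case correlation is modest; this is the standard explanation for the low spread/skill correlations cited on the slide.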

1. The correlation decreases with lead time;
2. The ensemble track spread is correlated with the intensity spread, even though there is no track/intensity error correlation.

Analysis of Rank Histograms for Track Forecasts (latitude and longitude panels: northward/southward bias, westward (fast)/eastward (slow) bias)
1. Only Atlantic-basin data are used;
2. The observed values are distributed evenly over the inner bins;
3. The N/S bias appears mostly at the longer lead times;
4. There is a westward (fast) track bias.

Analysis of Rank Histograms for Intensity Forecasts (panels: strong bias, weak bias)
1. The observed values are distributed evenly over the inner bins;
2. The deterministic model tends to have a positive bias for strong storms and a negative bias for weaker storms.
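A rank histogram is built by counting where each observation falls among the sorted ensemble members: a flat histogram indicates a well-dispersed ensemble, a U shape indicates under-dispersion, and a sloped shape indicates bias. A minimal sketch with synthetic 20-member data (the distributions are invented for illustration):

```python
import numpy as np

def rank_histogram(obs, ens):
    """obs: (n_cases,) observed values; ens: (n_cases, n_members) forecasts.
    Returns counts over the n_members + 1 rank bins."""
    # Rank of each observation among its own ensemble members
    ranks = (ens < obs[:, None]).sum(axis=1)
    return np.bincount(ranks, minlength=ens.shape[1] + 1)

rng = np.random.default_rng(0)
n_cases, n_members = 600, 20

# Statistically consistent ensemble: obs and members drawn from the same
# distribution -> approximately flat histogram
ens = rng.normal(size=(n_cases, n_members))
obs = rng.normal(size=n_cases)
flat = rank_histogram(obs, ens)

# Under-dispersive ensemble: members clustered too tightly -> U shape,
# with observations piling up in the outer bins
ens_narrow = 0.3 * rng.normal(size=(n_cases, n_members))
u_shaped = rank_histogram(obs, ens_narrow)
print(flat)
print(u_shaped)
```

The slide's reading follows the same logic: evenly populated inner bins indicate adequate dispersion there, while systematically loaded outer bins reveal the track and intensity biases.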

Comparison of the HWRF EPS with Last Year's Top-Flight Models
Track: AVNI, EMXI, GHMI. Intensity: DSHP, LGEM, GHMI.

AL track: statistically indistinguishable from EMXI and AVNI; statistically significant (SS) improvement over GHMI.
EP track: smallest errors of all models; SS improvements over the other models at some lead times.
AL and EP intensity: better skill than GHMI and DSHP; SS improvements over GHMI and DSHP at some lead times.

Pairwise Comparison between HWMI and Top-Flight Models: Intensity, Atlantic Basin (from TCMT verification).

Pairwise Comparison between HWMI and Top-Flight Models: Intensity, East Pacific Basin (from TCMT verification). The sample size is not large enough for the EP basin.

How the Stochastic Convective Trigger Perturbation Affects Storm Intensity Forecasts
1. What causes the storm intensity ensemble spread: the convective trigger perturbation or the initial storm cycling?
2. What is the impact of the stochastic convective trigger perturbations on D01 and D02? Cumulus convection is explicitly resolved in domain 3 (~3 km) in the current configuration.

Cold-Start Experiments with GFS
Several cold-start experiments indicate a large intensity forecast spread (~20 kt in maximum wind, ~15 hPa in central pressure) even without model cycling.

Impact of Cumulus Convection on the Large-Scale Flow (Idealized Experiment)
Solid line: SAS active in D01 and D02 (27 km and 9 km); dashed line: no SAS in D01 and D02. Sub-grid convection is explicitly resolved in D03 in both experiments.
The model storm does not develop when the SAS scheme is turned off in the 27- and 9-km domains, even though the 3-km resolution of domain 3 is high enough to resolve the convection explicitly.

10-m Wind Speed Forecasts at 24 h from Individual Ensemble Members (Isaac, 2012)
The intensity (maximum wind speed) has a random character.

10-m Wind Speed Forecasts at 48 h from Individual Ensemble Members (Isaac, 2012)
Individual ensemble members predict different tracks, and the differing land interaction causes intensity variations.

HWRF EPS Verifications for the 2013 Storms: AL, EP, and IO track; AL, EP, and IO intensity.

Summary  Ensemble forecast is necessary due to the chaotic and non- linear nature of the atmosphere;  Additional uncertainties, e.g. sub-grid scales, initial storm positions, needs to be take into account in EPS;  HWRF-based EPS includes perturbations from large scale flows (GEFS) and sub-grid scales (physics-based SCT);  HWRF EPS introduces no bias but inherits some biases from the deterministic model in terms of track/intensity forecasts;  Both HWMI track and intensity forecast skills are improved over its deterministic versions (H212 and FY13), with more improvements in intensity forecasts;  The HWMI forecasts are statistically undistinguishable from that of the top-flight models; The HWMI produces SS improvements over GHMI in both track and intensity forecasts at all lead times. 35

Summary (Continued)
- In the HWRF EPS, the intensity spread may be a good predictor of intensity forecast skill up to 72 h; the track spread is not.
Future work:
- Experiment with the SAS scheme turned on in the third domain;
- Develop an ensemble ranking system that identifies the ensemble member closest to the ensemble mean, based on the predicted storm centers, intensities, and storm structures.