The Expanded UW SREF System and Statistical Inference
STAT 592 Presentation
Eric Grimit

OUTLINE
1. Description of the Expanded UW SREF System (How is this thing created?)
2. Spread-Error Correlation Theory, Results, and Future Work
3. Forecast Verification Issues

Core Members of the Expanded UW SREF System
M = 7 + CENT-MM5. Is this enough?
[Schematic: multiple analyses/forecasts supply ICs and LBCs to MM5.]

Generating Additional Initial Conditions

POSSIBILITIES:
- Random Perturbations: simplistic approach (no one has tried it yet)
- Breeding Growing Modes (BGM) and Singular Vectors (SV): insufficient for short-range, inferior to PO, and computationally expensive
- Perturbed Obs (PO) / EnKF / EnSRF: may be the optimal approach (unproven)
- Ensembles of Initializations: uses Bayesian melding (under development)
- Linear Combinations* -> Selected Important Linear Combinations (SILC)?

Why Linear Combinations?
- Founded on the idea of "mirroring" (Tony Eckel): IC* = CENT + PF * (CENT - IC) ; PF = 1.0
- Computationally inexpensive (restricts dimensionality to M = 7)
- May be extremely cost effective; can test the method now
- Size of the perturbations is controlled by the spread of the core members

Illustration of "mirroring"
STEP 1: Calculate the best guess for truth (the centroid) by averaging all analyses.
STEP 2: Find the error vector in model phase space between one analysis and the centroid by differencing all state variables over all grid points.
STEP 3: Make a new IC by mirroring that error about the centroid: IC* = CENT + (CENT - IC).
[Figure: sea level pressure (mb) analyses over the NE Pacific (135°W-170°W, scale ~1000 km) from eta, ngps, tcwb, gasp, avn, ukmo, and cmcg, with the centroid (cent) and the mirrored member cmcg*.]
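The three mirroring steps above can be sketched numerically. This is a minimal illustration with synthetic state vectors; the member count, array shapes, and values are stand-ins, not UW SREF data.

```python
import numpy as np

rng = np.random.default_rng(0)
M = 7                                  # number of core analyses
state_dim = 1000                       # flattened grid points x state variables
analyses = rng.normal(size=(M, state_dim))

# STEP 1: best guess for truth = centroid (mean of all analyses)
cent = analyses.mean(axis=0)

# STEP 2: error vector between one analysis and the centroid
ic = analyses[0]
err = cent - ic

# STEP 3: mirror that error about the centroid (PF = 1.0)
PF = 1.0
ic_star = cent + PF * err              # IC* = CENT + PF * (CENT - IC)

# The mirrored IC sits opposite the original: the pair averages back to
# the centroid, and the perturbation sizes match.
print(np.linalg.norm(ic - cent), np.linalg.norm(ic_star - cent))
```

With PF = 1 the mirror is exact; PF > 1 would inflate the perturbation while keeping its direction.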

Two groups of "important" LCs:
(x) mirrors:
    X_m* = (2/M) SUM_{i=1..M} X_i - X_m ;  m = 1, 2, ..., M  (PF = 1)
(+) inflated sub-centroids:
    X_mn* = ((1+PF)/M) SUM_{i=1..M} X_i - (PF/2)(X_m + X_n) ;  m, n = 1, 2, ..., M ;  m != n
    with PF^2 = 2(M-1)/(M-2)

- Must restrict selection of LCs to physically/dynamically "important" ones
- At the same time, try for equally likely ICs
- Sample the "cloud" as completely as possible with a finite number (i.e., fill in the holes)
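A quick Monte Carlo sketch of why the inflation factor takes the value PF^2 = 2(M-1)/(M-2): with that choice, an inflated sub-centroid perturbation about the centroid has the same expected size as a raw member's perturbation, keeping the new ICs roughly equally likely. The members here are synthetic equal-variance draws, not UW analyses.

```python
import numpy as np

rng = np.random.default_rng(1)
M, dim, trials = 7, 500, 4000
pf = np.sqrt(2.0 * (M - 1) / (M - 2))   # inflation factor for sub-centroids

member_var, subcent_var = [], []
for _ in range(trials):
    x = rng.normal(size=(M, dim))       # M synthetic "analyses"
    cent = x.mean(axis=0)
    # raw member perturbation about the centroid
    member_var.append(np.mean((x[0] - cent) ** 2))
    # inflated sub-centroid perturbation: PF * (CENT - (X_m + X_n)/2)
    sub = 0.5 * (x[0] + x[1])
    subcent_var.append(np.mean((pf * (cent - sub)) ** 2))

# Both averages should approach (M-1)/M for unit-variance members.
print(np.mean(member_var), np.mean(subcent_var))
```

The agreement follows because Var(X_m - CENT) = sigma^2 (M-1)/M while Var(CENT - (X_m+X_n)/2) = sigma^2 (M-2)/(2M); PF^2 is exactly their ratio.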

Verification: Root Mean Square Error (RMSE) by Grid Point
[Figure: RMSE of MSLP (mb) at 12-, 24-, 36-, and 48-h leads for the 36-km outer domain and the 12-km inner domain, comparing each core member (cmcg, avn, eta, ngps, ukmo, tcwb) with its mirror (*) and the centroid (cent).]

Summary of Initial Findings
- The set of 15 ICs for UW SREF is not optimal, but may be good enough to represent important features of analysis error.
- The centroid may be the best-bet deterministic model run, in the big picture.
- Need further evaluation:
  - How often does the ensemble fail to capture the truth?
  - How reliable are the probabilities?
  - Does the ensemble dispersion represent forecast uncertainty?

Future Work
1. Evaluate the expanded UW MM5 SREF system and investigate multimodel applications
2. Develop a mesoscale forecast skill prediction system
3. Additional work:
   - mesoscale verification
   - probability forecasts
   - deterministic-style solutions
   - additional forecast products/tools (visualization)

Spread-Error Correlation Theory
Houtekamer 1993 (H93) model:
    Corr(S, |E|) = sqrt[ (1 - exp(-sigma^2)) / (pi/2 - exp(-sigma^2)) ] ;  log S ~ N(0, sigma^2),  E | S ~ N(0, S^2)
"This study neglects the effects of model errors. This causes an underestimation of the forecast error. This assumption probably causes a decrease in the correlation between the observed skill and the predicted spread."
...agrees with the Raftery BMA variance formula:
    Var[Q | D] = E_k[ Var(Q | D, M_k) ]  +  Var_k( E[Q | D, M_k] )
                 ("avg within-model variance")   ("between-model variance")

RESULTS: 10-m WDIR, Jan-Jun 2000 (Phase I)
Observed correlations are greater than those predicted by the H93 model.
Possible explanations:
- Artifact of the way spread and error are calculated!
- Accounting for some of the model error?
- Luck?

RESULTS: 2-m TEMP, Jan-Jun 2000 (Phase I)
What's happening here? Error saturation?
Differences in ICs are not as important for surface temperature.

Another Possible Predictor of Skill
Spread of a temporal ensemble ~ forecast consistency.
Temporal ensemble = lagged forecasts all verifying at the same time.
[Schematic: temporal short-range ensemble built from the centroid runs. CENT-MM5 runs initialized at 00 UTC T-48 h, 12 UTC T-36 h, 00 UTC T-24 h, and 12 UTC T-12 h provide F48, F36, F24, and F12 (M = 4), all valid at 00 UTC T. The verification is F00*, the "adjusted" CENT-MM5 analysis; the raw analysis does not have mesoscale features.]
BENEFITS:
- Yields mesoscale temporal spread
- Less sensitive to one synoptic-scale model's time variability
- Best forecast estimate of "truth"
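The temporal-spread computation itself is simple: stack the lagged forecasts valid at one time and take the per-gridpoint standard deviation. A sketch with synthetic fields (grid size and values are illustrative, not MM5 output):

```python
import numpy as np

rng = np.random.default_rng(3)
ny, nx = 50, 60

# Four lagged CENT-MM5-style forecasts (F12, F24, F36, F48), all valid
# at the same time; here they are independent synthetic fields.
lagged = np.stack([rng.normal(size=(ny, nx)) for _ in range(4)])

temporal_mean = lagged.mean(axis=0)
temporal_spread = lagged.std(axis=0, ddof=1)   # per-point forecast consistency

print(temporal_spread.shape, float(temporal_spread.mean()))
```

In practice the lagged runs share a model and initialization cycle, so their spread reflects run-to-run consistency rather than IC diversity, which is why it complements the IC-spread predictor.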

Future Investigation: Developing a Prediction System for Forecast Skill
- Are spread and skill well correlated for other parameters (i.e., wind speed & precipitation)?
  - Use sqrt or log to transform the data to be normally distributed.
- Do spread-error correlations improve after bias removal?
- What is "high" and what is "low" spread?
  - Need a spread climatology, i.e., a large data set.
- What are the synoptic patterns associated with "high" and "low" spread cases?
  - Use NCEP/NCAR reanalysis data and compositing software.
- How do the answers change for the expanded UW MM5 ensemble?
- Can a better single predictor of skill be formed from the two individual predictors (IC spread & temporal spread)?
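The sqrt/log transform suggestion above can be illustrated directly: skewed, wind-speed-like data (modeled here as gamma draws, an assumption for illustration only) become much closer to symmetric after a square-root transform.

```python
import numpy as np

def skew(x):
    """Sample skewness: third standardized central moment."""
    d = x - x.mean()
    return np.mean(d**3) / np.mean(d**2) ** 1.5

rng = np.random.default_rng(4)
# Skewed stand-in for wind speed (gamma-distributed; illustrative only)
speed = rng.gamma(shape=2.0, scale=3.0, size=100_000)

print(skew(speed), skew(np.sqrt(speed)))   # skewness before vs. after transform
```

Correlations (and the normal-theory H93 comparison) behave better once both spread and error are approximately Gaussian, which is the point of transforming first.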

Mesoscale Verification Issues
Will verify 2 ways:
- At the observation locations (as before)
- Using a gridded mesoscale analysis

SIMPLE possibilities for the gridded dataset:
- "Adjusted" centroid analysis (run MM5 for < 1 h)
  - Verification has the same scales as the forecasts
  - Useful for creating verification rank histograms
- Bayesian combination of the "adjusted" centroid with observations (e.g., Fuentes and Raftery 2001)
  - Accounts for scale differences (the change-of-support problem)
  - Can correct for MM5 biases
[Schematic, after Fuentes and Raftery 2001: true values are linked to observations through measurement error, and to "adjusted" CENT-MM5 output through bias parameters and noise, with separate large-scale and small-scale structure.]
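A verification rank histogram, mentioned above, is straightforward once forecasts and verification live on the same grid: rank the verifying value within the sorted ensemble at each point and histogram the ranks. A minimal sketch with synthetic data (a well-calibrated case, so the histogram should come out flat):

```python
import numpy as np

rng = np.random.default_rng(5)
M, npts = 15, 20_000                    # 15 members, as in the expanded UW SREF

# Synthetic stand-ins: ensemble and "truth" drawn from the same distribution,
# i.e., a perfectly calibrated ensemble.
ensemble = rng.normal(size=(M, npts))
truth = rng.normal(size=npts)

ranks = np.sum(ensemble < truth, axis=0)        # rank of truth: 0..M
hist = np.bincount(ranks, minlength=M + 1)      # M+1 bins

print(hist / npts)                              # each bin should be near 1/(M+1)
```

U-shaped histograms would signal under-dispersion (truth often falls outside the ensemble); a dome shape would signal over-dispersion.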

Limitations of Traditional Bulk Error Scores
- Biased toward the mean
- Can get spurious zero errors by coincidence, not skill
- Can also be blind to position, phase, and/or rotation errors
- This affects measurements of both spread & error!

Need to try new methods of verification...
1. Consider the gradient of a field, not just the magnitude
   - Addresses false zero errors / blindness to errors in the first derivative of a field
   - Still biased toward the mean
2. Pattern recognition software
   - Would penalize the mean for absence/smoothness of features
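The "biased toward the mean" point can be demonstrated in a few lines: a forecast with realistic structure but a phase error scores worse by RMSE than a featureless forecast of the mean. The 1-D field and feature here are purely illustrative.

```python
import numpy as np

x = np.arange(200.0)

def feature(center):
    """A Gaussian bump standing in for, e.g., a precipitation band."""
    return np.exp(-0.5 * ((x - center) / 5.0) ** 2)

truth = feature(100)
shifted = feature(115)                 # right structure, 15-point phase error
flat = np.full_like(x, truth.mean())   # smooth forecast with no feature at all

def rmse(f):
    return np.sqrt(np.mean((f - truth) ** 2))

# The structurally correct but displaced forecast is penalized more than
# the featureless one, despite being far more useful to a forecaster.
print(rmse(shifted), rmse(flat))
```

Gradient-based scores and pattern-recognition methods aim exactly at this failure: the flat forecast has zero gradient everywhere and no identifiable feature, so both alternatives would penalize it.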