
pg 1: Ensemble Data Assimilation and Uncertainty Quantification. Jeff Anderson, National Center for Atmospheric Research. SAMSI UQ Workshop, Sep. 2011.

pg 2: What is Data Assimilation? Observations combined with a model forecast to produce an analysis (best possible estimate).

pg 3: Example: Estimating the Temperature Outside. An observation has a value (∗).

pg 4: Example: Estimating the Temperature Outside. An observation has a value (∗), and an error distribution (red curve) that is associated with the instrument.

pg 5: Example: Estimating the Temperature Outside. Thermometer outside measures 1C. Instrument builder says the thermometer is unbiased with +/- 0.8C Gaussian error.

pg 6: Example: Estimating the Temperature Outside. Thermometer outside measures 1C. The red plot is P(T | T_o), the probability of temperature given that T_o was observed.

pg 7: Example: Estimating the Temperature Outside. We also have a prior estimate of temperature. The green curve is P(T | C), the probability of temperature given all available prior information C.

pg 8: Example: Estimating the Temperature Outside. Prior information can include: 1. Observations of things besides T; 2. A model forecast made using observations at earlier times; 3. A priori physical constraints (T > -273.15C); 4. Climatological constraints (-30C < T < 40C).

pg 9: Combining the Prior Estimate and Observation. Bayes Theorem: P(T | T_o, C) = P(T_o | T, C) P(T | C) / normalization. Posterior: the probability of T given the observation and prior; also called the update or analysis. Likelihood P(T_o | T, C): the probability that T_o is observed if T is the true value, given prior information C. Prior: P(T | C).

pg 10–14: Combining the Prior Estimate and Observation. P(T | T_o, C) = P(T_o | T, C) P(T | C) / normalization.

pg 15: Consistent Color Scheme Throughout Tutorial. Green = Prior, Red = Observation, Blue = Posterior, Black = Truth.

pg 16: Combining the Prior Estimate and Observation. P(T | T_o, C) = P(T_o | T, C) P(T | C) / normalization. Generally there is no analytic solution for the posterior.

pg 17: Combining the Prior Estimate and Observation. P(T | T_o, C) = P(T_o | T, C) P(T | C) / normalization. Gaussian prior and likelihood -> Gaussian posterior.

pg 18: Combining the Prior Estimate and Observation. For a Gaussian prior N(T_p, σ_p^2) and Gaussian likelihood N(T_o, σ_o^2), the posterior is Gaussian N(T_u, σ_u^2) with σ_u^2 = (σ_p^-2 + σ_o^-2)^-1 and T_u = σ_u^2 (T_p / σ_p^2 + T_o / σ_o^2).
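For concreteness, this Gaussian update can be written in a couple of lines. A minimal sketch in Python/NumPy (illustrative only, not DART code; the prior values below are made-up numbers):

```python
import numpy as np

def gaussian_posterior(prior_mean, prior_var, obs, obs_var):
    """Posterior mean and variance for a Gaussian prior and Gaussian observation likelihood."""
    post_var = 1.0 / (1.0 / prior_var + 1.0 / obs_var)
    post_mean = post_var * (prior_mean / prior_var + obs / obs_var)
    return post_mean, post_var

# Thermometer example from pg 5: observed 1C with 0.8C (Gaussian) error standard deviation;
# the prior mean and spread here are hypothetical, chosen only for illustration.
mean, var = gaussian_posterior(prior_mean=-1.0, prior_var=1.2**2, obs=1.0, obs_var=0.8**2)
print(mean, np.sqrt(var))
```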

pg 19: The One-Dimensional Kalman Filter. 1. Suppose we have a linear forecast model L. A. If the temperature at time t_1 = T_1, then the temperature at t_2 = t_1 + Δt is T_2 = L(T_1). B. Example: T_2 = T_1 + Δt T_1.

pg 20: The One-Dimensional Kalman Filter. 1. Suppose we have a linear forecast model L. A. If the temperature at time t_1 = T_1, then the temperature at t_2 = t_1 + Δt is T_2 = L(T_1). B. Example: T_2 = T_1 + Δt T_1. 2. If the posterior estimate at time t_1 is Normal(T_u,1, σ_u,1), then the prior at t_2 is Normal(T_p,2, σ_p,2), with T_p,2 = T_u,1 + Δt T_u,1 and σ_p,2 = (Δt + 1) σ_u,1.

pg 21: The One-Dimensional Kalman Filter. 1. Suppose we have a linear forecast model L. A. If the temperature at time t_1 = T_1, then the temperature at t_2 = t_1 + Δt is T_2 = L(T_1). B. Example: T_2 = T_1 + Δt T_1. 2. If the posterior estimate at time t_1 is Normal(T_u,1, σ_u,1), then the prior at t_2 is Normal(T_p,2, σ_p,2). 3. Given an observation at t_2 with distribution Normal(t_o, σ_o), the likelihood is also Normal(t_o, σ_o).

pg 22: The One-Dimensional Kalman Filter. 1. Suppose we have a linear forecast model L. A. If the temperature at time t_1 = T_1, then the temperature at t_2 = t_1 + Δt is T_2 = L(T_1). B. Example: T_2 = T_1 + Δt T_1. 2. If the posterior estimate at time t_1 is Normal(T_u,1, σ_u,1), then the prior at t_2 is Normal(T_p,2, σ_p,2). 3. Given an observation at t_2 with distribution Normal(t_o, σ_o), the likelihood is also Normal(t_o, σ_o). 4. The posterior at t_2 is Normal(T_u,2, σ_u,2), where T_u,2 and σ_u,2 come from page 18.
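Putting steps 1–4 together, one forecast/update cycle of this one-dimensional Kalman filter might look like the following sketch (Python, using the linear example model T_2 = T_1 + Δt T_1; the input numbers are illustrative):

```python
import numpy as np

def kf_cycle(T_u1, sd_u1, t_obs, sd_obs, dt=0.1):
    """One cycle of the 1D Kalman filter with the linear model T2 = T1 + dt*T1."""
    # Step 2: advance the posterior at t1 to the prior at t2 (pg 20).
    T_p2 = T_u1 + dt * T_u1
    sd_p2 = (dt + 1.0) * sd_u1
    # Steps 3-4: combine the prior with the observation likelihood (pg 18).
    var_u2 = 1.0 / (1.0 / sd_p2**2 + 1.0 / sd_obs**2)
    T_u2 = var_u2 * (T_p2 / sd_p2**2 + t_obs / sd_obs**2)
    return T_u2, np.sqrt(var_u2)

print(kf_cycle(T_u1=1.0, sd_u1=0.5, t_obs=1.2, sd_obs=0.8))
```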

pg 23: A One-Dimensional Ensemble Kalman Filter. Represent a prior pdf by a sample (ensemble) of N values.

pg 24: A One-Dimensional Ensemble Kalman Filter. Represent a prior pdf by a sample (ensemble) of N values. Use the sample mean and sample standard deviation to determine a corresponding continuous distribution.

pg 25: A One-Dimensional Ensemble Kalman Filter: Model Advance. If the posterior ensemble at time t_1 is T_1,n, n = 1, …, N …

pg 26: A One-Dimensional Ensemble Kalman Filter: Model Advance. If the posterior ensemble at time t_1 is T_1,n, n = 1, …, N, advance each member to time t_2 with the model: T_2,n = L(T_1,n), n = 1, …, N.

pg 27: A One-Dimensional Ensemble Kalman Filter: Model Advance. Same as advancing the continuous pdf at time t_1 …

pg 28: A One-Dimensional Ensemble Kalman Filter: Model Advance. Same as advancing the continuous pdf at time t_1 to time t_2 with the model L.

pg 29: A One-Dimensional Ensemble Kalman Filter: Assimilating an Observation.

pg 30: A One-Dimensional Ensemble Kalman Filter: Assimilating an Observation. Fit a Gaussian to the sample.

pg 31: A One-Dimensional Ensemble Kalman Filter: Assimilating an Observation. Get the observation likelihood.

pg 32: A One-Dimensional Ensemble Kalman Filter: Assimilating an Observation. Compute the continuous posterior PDF.

pg 33: A One-Dimensional Ensemble Kalman Filter: Assimilating an Observation. Use a deterministic algorithm to 'adjust' the ensemble.

pg 34: A One-Dimensional Ensemble Kalman Filter: Assimilating an Observation. First, 'shift' the ensemble to have the exact mean of the posterior.

pg 35: A One-Dimensional Ensemble Kalman Filter: Assimilating an Observation. First, 'shift' the ensemble to have the exact mean of the posterior. Second, linearly contract to have the exact variance of the posterior. Sample statistics are identical to the Kalman filter.
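The shift-and-contract adjustment can be sketched as follows (a simplified illustration of the ensemble adjustment idea, assuming a Gaussian fit to the prior sample; not the DART implementation, and the ensemble values are made up):

```python
import numpy as np

def eakf_update_1d(ens, obs, obs_var):
    """Deterministic 1D ensemble adjustment: shift to the posterior mean, contract to the posterior variance."""
    prior_mean = np.mean(ens)
    prior_var = np.var(ens, ddof=1)
    # Gaussian product gives the continuous posterior (pg 18).
    post_var = 1.0 / (1.0 / prior_var + 1.0 / obs_var)
    post_mean = post_var * (prior_mean / prior_var + obs / obs_var)
    # Shift and linearly contract the members so sample statistics match exactly.
    return post_mean + np.sqrt(post_var / prior_var) * (ens - prior_mean)

ens = np.array([0.2, 0.8, 1.1, 1.6, 2.3])   # illustrative prior ensemble
print(eakf_update_1d(ens, obs=1.0, obs_var=0.64))
```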

pg 36: UQ from an Ensemble Kalman Filter.
- The (ensemble) KF is optimal for a linear model and Gaussian likelihood.
- In the KF, only the mean and variance have meaning. The variance defines the uncertainty.
- An ensemble allows computation of many other statistics. What do they mean? Not entirely clear.
- Example: kurtosis. Completely constrained by the initial ensemble. It is problem specific whether this is even defined! (See example set 1 in the appendix.)

pg 37: The Rank Histogram: Evaluating Ensemble Performance. Draw 5 values from a real-valued distribution. Call the first 4 'ensemble members'.

pg 38: The Rank Histogram: Evaluating Ensemble Performance. These partition the real line into 5 bins.

pg 39: The Rank Histogram: Evaluating Ensemble Performance. Call the 5th draw the 'truth'. There is a 1/5 chance that it falls in any given bin.

pg 40–43: The Rank Histogram: Evaluating Ensemble Performance. The rank histogram shows the frequency of the truth in each bin over many assimilations.

pg 44: The Rank Histogram: Evaluating Ensemble Performance. Rank histograms for good ensembles should be uniform (caveat: sampling noise). We want the truth to look like a random draw from the ensemble.
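In practice the rank histogram is accumulated by counting, at each verification time, how many ensemble members fall below the truth. A small illustrative sketch (synthetic draws, not DART code):

```python
import numpy as np

def rank_histogram(ens_series, truth_series, ens_size):
    """Accumulate the rank of the truth within the ensemble over many assimilation times."""
    counts = np.zeros(ens_size + 1, dtype=int)   # N members define N+1 bins
    for ens, truth in zip(ens_series, truth_series):
        rank = np.sum(np.asarray(ens) < truth)   # bin index 0..N
        counts[rank] += 1
    return counts

# Synthetic check: a truth drawn from the same distribution as a 4-member ensemble
# should give a roughly uniform histogram over the 5 bins.
rng = np.random.default_rng(0)
draws = rng.standard_normal((10000, 5))
print(rank_histogram(draws[:, :4], draws[:, 4], ens_size=4))
```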

pg 45: The Rank Histogram: Evaluating Ensemble Performance. A biased ensemble leads to skewed histograms.

pg 46: The Rank Histogram: Evaluating Ensemble Performance. An ensemble with too little spread gives a U shape. This is the most common behavior in geophysics.

pg 47: The Rank Histogram: Evaluating Ensemble Performance. An ensemble with too much spread is peaked in the center. (See example set 1 in the appendix.)

pg 48: Back to the 1-Dimensional (Ensemble) Kalman Filter.
- All bets are off when the model is nonlinear or the likelihood is non-Gaussian.
- Must assess the quality of estimates for each case.
- Example: a weakly nonlinear 1D model, dx/dt = x + 0.4|x|x (sketched below; see example set 2 in the appendix).
- Ensemble statistics can become degenerate.
- Nonlinear ensemble filters may address this, but they are beyond today's scope.
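For reference, the weakly nonlinear model named above can be advanced with a simple forward Euler step; a rough sketch with an assumed time step (DART_LAB's oned_model uses its own discretization):

```python
def nonlinear_advance(x, a=0.4, dt=0.01, nsteps=100):
    """Forward Euler integration of dx/dt = x + a*|x|*x."""
    for _ in range(nsteps):
        x = x + dt * (x + a * abs(x) * x)
    return x

print(nonlinear_advance(1.0))
```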

pg 49: Ensemble Kalman Filter with Model Error.
- Welcome to the real world: models aren't perfect.
- Model (and other) error can corrupt the ensemble mean and variance. (See example set 3, part 1, in the appendix.)
- Adapt to this by increasing uncertainty in the prior.
- Inflation is a common method for ensembles.

pg 50: Dealing with systematic error: Variance Inflation. Observations + physical system => 'true' distribution.

pg 51: Dealing with systematic error: Variance Inflation. Observations + physical system => 'true' distribution. Model bias (and other errors) can shift the actual prior. The prior ensemble is too certain (needs more spread).

pg 52: Dealing with systematic error: Variance Inflation. Naïve solution: increase the spread in the prior. Give more weight to the observation, less to the prior.

pg 53: Ensemble Kalman Filter with Model Error.
- Errors in the model (and other things) result in too much certainty in the prior.
- Inflation can be used to correct for this (sketched below).
- It is generally tuned on a dependent data set.
- Adaptive methods exist but must be calibrated. (See example set 3, part 1, in the appendix.)
- Uncertainty errors depend on forecast length.
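Inflation is typically applied by expanding each prior member about the ensemble mean before the update. A minimal sketch (conventions differ; here the factor multiplies the ensemble variance, and the numbers are illustrative):

```python
import numpy as np

def inflate(ens, inflation=1.5):
    """Expand ensemble spread about its mean; 'inflation' multiplies the sample variance."""
    mean = np.mean(ens)
    return mean + np.sqrt(inflation) * (ens - mean)

ens = np.array([0.9, 1.0, 1.1, 1.3])
# Ratio of inflated to original sample variance is the inflation factor (about 1.5 here).
print(np.var(inflate(ens), ddof=1) / np.var(ens, ddof=1))
```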

pg 54: Multivariate Ensemble Kalman Filter. So far, we have an observation likelihood for a single variable. Suppose the model prior has additional variables. Use linear regression to update the additional variables.

pg 55: Ensemble filters: Updating additional prior state variables. Assume that all we know is the prior joint distribution. One variable is observed. What should happen to the unobserved variable?

pg 56: Ensemble filters: Updating additional prior state variables. Assume that all we know is the prior joint distribution. One variable is observed. Compute increments for the prior ensemble members of the observed variable.

pg 57: Ensemble filters: Updating additional prior state variables. Assume that all we know is the prior joint distribution. One variable is observed. Using only increments guarantees that if the observation had no impact on the observed variable, the unobserved variable is unchanged (highly desirable).

pg 58: Ensemble filters: Updating additional prior state variables. Assume that all we know is the prior joint distribution. How should the unobserved variable be impacted? First choice: least squares. Equivalent to linear regression. Same as assuming a binormal prior.

pg 59: Ensemble filters: Updating additional prior state variables. Have the joint prior distribution of two variables. How should the unobserved variable be impacted? First choice: least squares. Begin by finding the least squares fit.

pg 60–64: Ensemble filters: Updating additional prior state variables. Have the joint prior distribution of two variables. Next, regress the observed variable increments onto increments for the unobserved variable. Equivalent to first finding the image of the increment in the joint space.

pg 65–69: Ensemble filters: Updating additional prior state variables. Have the joint prior distribution of two variables. Regression: equivalent to first finding the image of the increment in the joint space, then projecting from the joint space onto the unobserved priors.
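The regression step described on these slides amounts to scaling the observed-variable increments by the sample covariance over the sample variance of the observed variable, then adding the result to the unobserved variable. An illustrative sketch (not DART's code; the ensembles and increments below are made up):

```python
import numpy as np

def update_unobserved(obs_prior, obs_increments, state_prior):
    """Regress observed-variable increments onto an unobserved state variable."""
    # Sample regression coefficient: cov(state, obs) / var(obs).
    cov = np.cov(state_prior, obs_prior, ddof=1)
    beta = cov[0, 1] / cov[1, 1]
    return state_prior + beta * obs_increments

# Illustrative prior ensembles and increments for the observed variable
# (the increments would come from the 1D observation-space update).
obs_prior = np.array([0.2, 0.8, 1.1, 1.6, 2.3])
obs_increments = np.array([0.3, 0.1, 0.0, -0.2, -0.5])
state_prior = np.array([0.5, 1.7, 2.1, 3.3, 4.5])
print(update_unobserved(obs_prior, obs_increments, state_prior))
```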

pg 70: The Lorenz63 3-Variable 'Chaotic' Model.
- Two-lobed attractor.
- The state orbits the lobes and switches lobes sporadically.
- Nonlinear if observations are sparse.
- Qualitative uncertainty information is available from the ensemble.
- Non-Gaussian distributions are sampled.
- Quantitative uncertainty requires careful calibration. (See example 4 in the appendix and the sketch below.)
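For reference, the Lorenz (1963) equations with the classical parameter values, advanced with a fourth-order Runge-Kutta step; a sketch only (run_lorenz_63 in DART_LAB has its own implementation):

```python
import numpy as np

def lorenz63_dxdt(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Classical Lorenz (1963) 3-variable model tendencies."""
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def rk4_step(state, dt=0.01):
    """One fourth-order Runge-Kutta step."""
    k1 = lorenz63_dxdt(state)
    k2 = lorenz63_dxdt(state + 0.5 * dt * k1)
    k3 = lorenz63_dxdt(state + 0.5 * dt * k2)
    k4 = lorenz63_dxdt(state + dt * k3)
    return state + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

state = np.array([1.0, 1.0, 1.0])
for _ in range(1000):            # orbit the two-lobed attractor
    state = rk4_step(state)
print(state)
```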

pg 71: Ensemble Kalman Filters with Large Models.
- The Kalman filter is too costly (time and storage) for large models.
- Ensemble filters can be used for very large applications.
- Sampling error when computing covariances may become large.
- Spurious correlations between unrelated observations and state variables contaminate results.
- This leads to further underestimates of uncertainty.
- It can be compensated by a priori limits on which state variables are impacted by an observation, called 'localization' (sketched below).
- Requires even more calibration.
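Localization is usually implemented by multiplying each regression (or sample covariance) by a taper that decays with the distance between the observation and the state variable. A simplified sketch using a cosine taper as a stand-in for the Gaspari-Cohn function commonly used in practice:

```python
import numpy as np

def localization_weight(dist, halfwidth):
    """Simple compactly supported taper: 1 at zero distance, 0 beyond 2*halfwidth.
    A stand-in for the Gaspari-Cohn function used in practice."""
    r = dist / (2.0 * halfwidth)
    return np.where(r < 1.0, 0.5 * (1.0 + np.cos(np.pi * np.minimum(r, 1.0))), 0.0)

# The regression coefficient for a distant state variable is damped toward zero.
beta = 0.8
for d in [0.0, 0.1, 0.2, 0.5]:
    print(d, beta * localization_weight(d, halfwidth=0.2))
```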

pg 72: The Lorenz96 40-Variable 'Chaotic' Model.
- Something like weather systems around a latitude circle.
- Has spurious correlations for small ensembles.
- A basic implementation of the ensemble filter leads to bad uncertainty estimates.
- Both inflation and localization can improve performance.
- Quantitative uncertainty requires careful calibration. (See example 5 in the appendix and the sketch below.)
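The Lorenz (1996) 40-variable model has a simple cyclic form; a sketch of its tendencies and a crude forward Euler advance (run_lorenz_96 in DART_LAB has its own integration):

```python
import numpy as np

def lorenz96_dxdt(x, forcing=8.0):
    """Lorenz (1996) tendencies: dx_i/dt = (x_{i+1} - x_{i-2}) * x_{i-1} - x_i + F, cyclic in i."""
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + forcing

# 40 variables around a 'latitude circle', perturbed slightly from the forcing value.
x = 8.0 * np.ones(40)
x[0] += 0.01
dt = 0.005
for _ in range(2000):          # simple forward Euler advance with a small step
    x = x + dt * lorenz96_dxdt(x)
print(x[:5])
```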

pg 73: Uncertainty and Ensemble Kalman Filters: Conclusions.
- The ensemble KF variance is an exact quantification of uncertainty when things are linear and Gaussian, the model is perfect, and the ensemble is large enough.
- Too little uncertainty is nearly universal for geophysical applications.
- Uncertainty errors are a function of forecast lead time.
- Adaptive algorithms like inflation improve uncertainty estimates.
- Quantitative uncertainty estimates require careful calibration.
- Out-of-sample estimates (e.g., climate change) are tenuous.

pg 74: Appendix: Matlab examples. This is an outline of the live Matlab demos from NCAR's DART system that were used to support this tutorial. The Matlab code is available by checking out the DART system from NCAR at: The example scripts can be found in the DART directory DART_LAB/matlab. Some features of the GUIs for the exercises do not work due to bugs in Matlab releases 2010a and 2010b, but should work in earlier and later releases. Many more features of these Matlab scripts are described in the tutorial files in the DART directory DART_LAB/presentation. These presentations are a complement and extension of the material presented in this tutorial.

pg 75: Appendix: Matlab examples. Example Set 1: Use the Matlab script oned_model. Change 'Ens. Size' to 10 and select 'Start Free Run'. Observe that the spread curve in the second diagnostic panel quickly converges to have the same prior and posterior values. The rms error varies with time due to observation error, but its time mean should be the same as the spread. Restart this exercise several times to see that the kurtosis curve in the third panel also converges immediately, but to values that change with every randomly selected initial ensemble. Also note the behavior of the rank histograms for this case: they may not be uniform even though the ensemble filter is the optimal solution; in fact, they can be made arbitrarily nonuniform by the selection of the initial conditions.

pg 78: Appendix: Matlab examples. Example Set 2: Use the Matlab script oned_model. Change 'Ens. Size' to 10, change 'Nonlin. A' to 0.4, and select 'Start Free Run'. This examines what happens when the model is nonlinear. Observe the evolution of the rms error and spread in the second panel and the kurtosis in the third panel. Also observe the ensemble prior distribution (green) in the first panel and in the separate plot on the control panel (where you changed the ensemble size). In many cases, the ensemble will become degenerate, with all but one of the ensemble members collapsing while the other stays separate. Observe the impact on the rank histograms as this evolves.

pg 81: Appendix: Matlab examples. Example Set 3: Use the Matlab script oned_model. Change 'Ens. Size' to 10, change 'Model Bias' to 1, and select 'Start Free Run'. This examines what happens when the model used to assimilate observations is biased compared to the model that generated the observations (an imperfect model study). The model for the assimilation adds 1 to what the perfect model would do at each time step. Observe the evolution of the rms error and spread in the second panel and the rank histograms. Now, try adding some uncertainty to the prior ensemble using inflation by setting 'Inflation' to 1.5. This should impact all aspects of the assimilation.

pg 84: Appendix: Matlab examples. Example Set 4: Use the Matlab script run_lorenz_63. Select 'Start Free Run'. This run without assimilation shows how uncertainty grows in an ensemble forecast in the 3-variable Lorenz model. After a few trips around the attractor, select 'Stop Free Run'. Then turn on an ensemble filter assimilation by changing 'No Assimilation' to 'EAKF'. Continue the run by selecting 'Start Free Run' and observe how the ensemble evolves. You can stop the advance again and use the Matlab rotation feature to study the structure of the ensemble.

pg 87: Appendix: Matlab examples. Example Set 5: Use the Matlab script run_lorenz_96. Select 'Start Free Run'. This run without assimilation shows how uncertainty grows in an ensemble forecast in the 40-variable model. After about time = 40, select 'Stop Free Run' and start an assimilation by changing 'No Assimilation' to 'EAKF' and selecting 'Start Free Run'. Exploring the use of a localization of 0.2 and/or an inflation of 1.1 helps to show how estimates of uncertainty can be improved by modifying the base ensemble Kalman filter.