2
Introduction to Data Assimilation
NCEO Data-assimilation training days, 5-7 July 2010
Peter Jan van Leeuwen
Data Assimilation Research Center (DARC), University of Reading
p.j.vanleeuwen@reading.ac.uk
3
Basic estimation theory
Observation: $T_0 = T + e_0$
First guess: $T_m = T + e_m$
with $E\{e_0\} = 0$, $E\{e_m\} = 0$, $E\{e_0^2\} = s_0^2$, $E\{e_m^2\} = s_m^2$, $E\{e_0 e_m\} = 0$.
Assume a linear best estimate: $T_n = a T_0 + b T_m$, with $T_n = T + e_n$.
Find $a$ and $b$ such that 1) $E\{e_n\} = 0$ and 2) $E\{e_n^2\}$ is minimal.
1) gives: $E\{e_n\} = E\{T_n - T\} = E\{a T_0 + b T_m - T\} = E\{a e_0 + b e_m + (a+b-1)T\} = (a+b-1)T = 0$.
Hence $b = 1 - a$.
4
Basic estimation theory
2) $E\{e_n^2\}$ minimal gives:
$E\{e_n^2\} = E\{(T_n - T)^2\} = E\{(a T_0 + b T_m - T)^2\} = E\{(a e_0 + b e_m)^2\} = a^2 E\{e_0^2\} + b^2 E\{e_m^2\} = a^2 s_0^2 + (1-a)^2 s_m^2$
This has to be minimal, so the derivative with respect to $a$ has to be zero:
$2 a s_0^2 - 2(1-a) s_m^2 = 0$, so $(s_0^2 + s_m^2) a - s_m^2 = 0$, hence:
$a = \frac{s_m^2}{s_0^2 + s_m^2}$ and $b = 1 - a = \frac{s_0^2}{s_0^2 + s_m^2}$
$s_n^2 = E\{e_n^2\} = \frac{s_m^4 s_0^2 + s_0^4 s_m^2}{(s_0^2 + s_m^2)^2} = \frac{s_0^2 s_m^2}{s_0^2 + s_m^2}$
5
Basic estimation theory
Solution:
$T_n = \frac{s_m^2}{s_0^2 + s_m^2} T_0 + \frac{s_0^2}{s_0^2 + s_m^2} T_m$
and
$\frac{1}{s_n^2} = \frac{1}{s_0^2} + \frac{1}{s_m^2}$
Note: $s_n$ is smaller than both $s_0$ and $s_m$!
This is the Best Linear Unbiased Estimate (BLUE): just least squares!
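To make the formula concrete, here is a minimal Python sketch of the BLUE combination; the observation, first guess and error standard deviations are assumed values for illustration only.

```python
# Minimal BLUE sketch: combine an observation and a first guess.
# All numbers are illustrative assumptions.
import numpy as np

T_0, s_0 = 21.0, 0.5   # observation and its error std dev
T_m, s_m = 19.5, 1.0   # first guess and its error std dev

a = s_m**2 / (s_0**2 + s_m**2)   # weight on the observation
b = s_0**2 / (s_0**2 + s_m**2)   # weight on the first guess
T_n = a * T_0 + b * T_m          # best linear unbiased estimate
s_n = np.sqrt(s_0**2 * s_m**2 / (s_0**2 + s_m**2))  # analysis error std dev

print(T_n, s_n)  # T_n lies between T_0 and T_m; s_n < min(s_0, s_m)
```

The estimate is pulled toward the more accurate input, and the analysis error is smaller than either input error, exactly as the slide notes.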
6
Can we generalize this?
- More dimensions
- Nonlinear estimates (why linear?)
- Observations that are not directly modeled
- Biases
(Is this reference to the truth necessary?)
7
The basics: probability density functions
[Figure: an example probability density function P(u) against wind speed u (m/s)]
8
The model pdf
$p(u(x_1), u(x_2), T(x_3), \ldots)$
[Figure: the joint model pdf over variables such as $u(x_1)$, $u(x_2)$ and $T(x_3)$]
9
Observations
- In situ observations: irregular in space and time, e.g. sparse hydrographic observations.
- Satellite observations: indirect, e.g. of the sea surface.
10
Data assimilation: general formulation
The solution is a pdf!
Bayes' theorem:
$p(\psi \mid d) = \frac{p(d \mid \psi)\, p(\psi)}{p(d)}$
11
Bayes' Theorem
Conditional pdf: $p(\psi \mid d) = \frac{p(\psi, d)}{p(d)}$
Similarly: $p(d \mid \psi) = \frac{p(\psi, d)}{p(\psi)}$
Combine: $p(\psi \mid d) = \frac{p(d \mid \psi)\, p(\psi)}{p(d)}$
Even better: $p(\psi \mid d) = \frac{p(d \mid \psi)\, p(\psi)}{\int p(d \mid \psi)\, p(\psi)\, d\psi}$
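As a toy numerical illustration of the last identity, the sketch below multiplies a prior by a likelihood on a one-dimensional grid and normalises by the integral in the denominator. The Gaussian shapes and every number are assumptions chosen only for illustration.

```python
# Bayes' theorem on a 1-D grid: posterior ∝ prior × likelihood.
# All shapes and numbers are illustrative assumptions.
import numpy as np

psi = np.linspace(-5.0, 5.0, 1001)                   # grid of state values
prior = np.exp(-0.5 * (psi - 1.0)**2 / 1.0**2)       # p(psi)
likelihood = np.exp(-0.5 * (0.2 - psi)**2 / 0.5**2)  # p(d|psi) for d = 0.2

posterior = prior * likelihood
posterior /= posterior.sum() * (psi[1] - psi[0])     # divide by ∫ p(d|psi) p(psi) dpsi
```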
12
Filters and smoothers
Filter: solve the 3D problem sequentially.
Smoother: solve the 4D problem once.
Note: I assumed a dynamical (= time-varying) model here.
13
The Gaussian assumption I
Most present-day data-assimilation methods assume that the model and observation pdfs are Gaussian.
Advantage: only the first two moments, mean and covariance, are needed.
Disadvantage: this is not always a good approximation.
Gaussian distribution:
$p(\psi) = \frac{1}{\sqrt{2\pi}\,\sigma} \exp\!\left(-\frac{(\psi - \mu)^2}{2\sigma^2}\right)$
Multivariate Gaussian for an n-dimensional model vector (state or parameters or a combination), with mean $\mu$ and covariance $P$:
$p(\psi) = \frac{1}{(2\pi)^{n/2} |P|^{1/2}} \exp\!\left(-\tfrac{1}{2} (\psi - \mu)^T P^{-1} (\psi - \mu)\right)$
14
The Gaussian assumption II
The pdf of the observations is actually (and luckily) the pdf of the observations given the model state. Hence we need the model equivalent of the observations. We can express each observation as:
$d = H(\psi) + \epsilon$
in which $d$ is an m-dimensional observation vector, $H$ is the measurement operator and $\epsilon$ the observation error. The observation pdf is now given by:
$p(d \mid \psi) \propto \exp\!\left(-\tfrac{1}{2} \big(d - H(\psi)\big)^T R^{-1} \big(d - H(\psi)\big)\right)$
with $R$ the observation error covariance.
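A small sketch of evaluating this observation pdf up to its normalisation constant, for a hypothetical linear H and an assumed error covariance R:

```python
# Unnormalised p(d | psi) for Gaussian observation errors.
# H, R and the test values are illustrative assumptions.
import numpy as np

H = np.array([[1.0, 0.0]])      # observe the first of two state components
R = np.array([[0.25]])          # observation error covariance
R_inv = np.linalg.inv(R)

def obs_likelihood(d, psi):
    inno = d - H @ psi          # innovation d - H(psi)
    return float(np.exp(-0.5 * inno @ R_inv @ inno))

print(obs_likelihood(np.array([1.8]), np.array([1.0, 0.0])))
```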
15
(Ensemble) Kalman Filter I
Use Gaussianity in Bayes at a specific time. Hence:
$p(\psi \mid d) \propto \exp\!\left(-\tfrac{1}{2} (\psi - \psi_f)^T P_f^{-1} (\psi - \psi_f)\right) \exp\!\left(-\tfrac{1}{2} (d - H\psi)^T R^{-1} (d - H\psi)\right)$
Complete the squares to find again a Gaussian (only for linear H!).
16
(Ensemble) Kalman Filter III
Two possibilities to find the expressions for the mean and covariance:
1) Completing the squares.
2) Writing the new solution as a linear combination of model and observations, assuming it is unbiased and has minimal trace of the error covariance.
Both lead to the Kalman filter equations, which again are just the least-squares solutions:
$\psi_a = \psi_f + K (d - H \psi_f)$, with gain $K = P_f H^T (H P_f H^T + R)^{-1}$
$P_a = (I - K H) P_f$
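A minimal sketch of the analysis step these equations describe, for a linear H; the two-dimensional state, the matrices and the observed value are all assumed for illustration.

```python
# Kalman filter analysis step for linear H.
# All matrices and values are illustrative assumptions.
import numpy as np

def kalman_update(psi_f, P_f, d, H, R):
    """Return analysis mean and covariance from forecast mean/covariance."""
    S = H @ P_f @ H.T + R                     # innovation covariance
    K = P_f @ H.T @ np.linalg.inv(S)          # Kalman gain
    psi_a = psi_f + K @ (d - H @ psi_f)       # update the mean
    P_a = (np.eye(len(psi_f)) - K @ H) @ P_f  # update the covariance
    return psi_a, P_a

psi_f = np.array([1.0, 0.0])                  # forecast mean
P_f = np.array([[1.0, 0.5], [0.5, 2.0]])      # forecast error covariance
H = np.array([[1.0, 0.0]])                    # observe the first component
R = np.array([[0.25]])
d = np.array([1.8])
psi_a, P_a = kalman_update(psi_f, P_f, d, H, R)
```

Note that the unobserved second component is updated as well, through the off-diagonal element of $P_f$; this is exactly the role of the error covariance shown on the next slide.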
17
The error covariance: tells us how model variables co-vary.
[Figure: spatial correlation of SSH and SST in the Indian Ocean; Haugen and Evensen, 2002]
Likewise you can find the covariance of a model parameter with any other model variable.
18
(Ensemble) Kalman Filter IV
Interestingly, the old solution reappears:
$T_n = \frac{s_m^2}{s_0^2 + s_m^2} T_0 + \frac{s_0^2}{s_0^2 + s_m^2} T_m$
Hence we find the same solution, but now generalized to more dimensions and to partial or indirect observations (the H operator)!
19
(Ensemble) Kalman Filter V
Operational steps:
1) Determine the mean and covariance from the prior pdf.
2) Use the Kalman filter equations to update the model variables.
3) Propagate the new mean forward in time using the model equations.
4) Propagate the new covariance in time using the linearized model equations.
5) Back to 2).
In the ensemble Kalman filter the mean and covariance are found from an ensemble of model states/runs, as sketched below.
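A schematic ensemble Kalman filter cycle following these steps, with the mean and covariance taken from the ensemble. The toy model, the perturbed-observation variant, and all sizes and noise levels are illustrative assumptions.

```python
# Schematic ensemble Kalman filter: forecast with the model, then update
# every member with perturbed observations. All choices are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n, m, N = 3, 1, 50                          # state dim, obs dim, ensemble size
H = np.zeros((m, n)); H[0, 0] = 1.0         # observe the first component
R = 0.1 * np.eye(m)

def M(psi):                                 # placeholder model: gentle decay
    return 0.95 * psi

ensemble = rng.normal(size=(N, n))          # initial ensemble
for d in [np.array([0.4]), np.array([0.1])]:          # two assimilation cycles
    ensemble = np.array([M(e) for e in ensemble])     # forecast step
    P_f = np.cov(ensemble.T)                          # ensemble covariance
    K = P_f @ H.T @ np.linalg.inv(H @ P_f @ H.T + R)  # Kalman gain
    for i in range(N):                                # analysis step per member
        d_pert = d + rng.multivariate_normal(np.zeros(m), R)
        ensemble[i] = ensemble[i] + K @ (d_pert - H @ ensemble[i])
```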
20
Propagation of pdf in time: ensemble or particle methods
21
And parameter estimation?
Several ways:
1) Augment the state vector with the parameters (i.e. add the parameters to the state vector). This makes the parameter estimates time-variable. How do we find the covariances? See Friday. (A sketch of this option follows below.)
2) Prescribe the covariances between parameters and observed variables.
3) Find best fits using Monte-Carlo runs (lots of tries, can be expensive).
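A minimal sketch of option 1, state augmentation: the parameter is appended to the state vector and evolves as a constant, so the ensemble covariance between the parameter and the observed state is what allows a Kalman-type update to correct it. The decay-rate model and all numbers are made-up examples.

```python
# State augmentation sketch: z = [x, alpha]. The ensemble covariance
# between x and alpha drives the parameter update. Model and numbers
# are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)

def M_aug(z, dt=0.1):
    x, alpha = z
    return np.array([x - dt * alpha * x,  # state evolves with the parameter
                     alpha])              # parameter is persistent in time

ensemble = np.stack([rng.normal(1.0, 0.1, 100),    # initial states
                     rng.normal(0.5, 0.2, 100)],   # prior parameter values
                    axis=1)
ensemble = np.array([M_aug(z) for z in ensemble])  # one forecast step
cov_x_alpha = np.cov(ensemble.T)[0, 1]  # covariance used in the update
```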
22
Nonlinear measurement operators H (but observations still Gaussian distributed)
1) Use the nonlinear H in the Kalman filter equations (no rigorous derivation possible).
2) Augment the state vector x with H(x) (still assuming a Gaussian pdf for the new model vector).
3) Solve the nonlinear problem. For large-scale problems variational methods tend to be efficient: 3DVar. (Sometimes also used when H is linear.)
23
Variational methods
Recall that for Gaussian distributed model and observations the posterior pdf can be written as:
$p(\psi \mid d) \propto \exp\!\left(-\tfrac{1}{2} (\psi - \psi_f)^T P^{-1} (\psi - \psi_f)\right) \exp\!\left(-\tfrac{1}{2} \big(d - H(\psi)\big)^T R^{-1} \big(d - H(\psi)\big)\right)$
A variational method looks for the most probable state, which is the maximum of this posterior pdf, also called the mode. Instead of looking for the maximum, one solves for the minimum of a so-called cost function.
24
Variational methods
The pdf can be rewritten as
$p(\psi \mid d) \propto \exp\!\big(-J(\psi)\big)$
in which
$J(\psi) = \tfrac{1}{2} (\psi - \psi_f)^T P^{-1} (\psi - \psi_f) + \tfrac{1}{2} \big(d - H(\psi)\big)^T R^{-1} \big(d - H(\psi)\big)$
J is the cost function or penalty function. Find the minimum of J from the variational derivative:
$\frac{\partial J}{\partial \psi} = 0$
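For a linear H both J and its gradient are easy to write down explicitly. The sketch below evaluates them and minimises J by plain gradient descent; the matrices, the observed value and the fixed step size are all illustrative assumptions.

```python
# 3DVar-style cost function and gradient for linear H, minimised by
# plain gradient descent. All matrices and values are assumptions.
import numpy as np

psi_f = np.array([1.0, 0.0])               # first guess (background)
P = np.array([[1.0, 0.3], [0.3, 1.0]])     # background error covariance
H = np.array([[1.0, 0.0]])                 # observe the first component
R = np.array([[0.25]])
d = np.array([1.8])
P_inv, R_inv = np.linalg.inv(P), np.linalg.inv(R)

def J(psi):
    dpsi, inno = psi - psi_f, d - H @ psi
    return 0.5 * dpsi @ P_inv @ dpsi + 0.5 * inno @ R_inv @ inno

def grad_J(psi):
    return P_inv @ (psi - psi_f) - H.T @ R_inv @ (d - H @ psi)

psi = psi_f.copy()
for _ in range(200):                       # simple fixed-step descent
    psi -= 0.1 * grad_J(psi)
```

Each iteration steps downhill along the negative gradient, which is exactly the picture on the next slide.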
25
Gradient descent methods
[Figure: cost function J against a model variable, with successive gradient-descent iterates 1-6 approaching the minimum]
26
3DVar
3DVar solves for the minimum of the cost function, which is the maximum of the posterior pdf.
Advantages:
1) When preconditioned appropriately it can handle very large dimensional systems (e.g. 10^8 variables).
2) It uses an adjoint, which tells us the linear sensitivity of each model variable to all other model variables. This is useful for e.g. observation impact.
Disadvantages:
1) The adjoint has to be coded.
2) It doesn't automatically provide an error estimate. For weakly nonlinear systems the inverse of the 2nd variational derivative, the Hessian, is equal to the covariance, but it comes at great cost.
27
4DVar
There is an interesting extension of this formulation to a smoother.
Filter: solve a 3D problem at each observation time.
4DVar: solve a 3D problem at the beginning of the time window using all observations.
Note: we get a new full 4D solution!
28
4DVar: the observation pdf
The part of the cost function related to the observations can again be written as:
$J_{obs} = \tfrac{1}{2} \big(d - H(\psi)\big)^T R^{-1} \big(d - H(\psi)\big)$
But now we have sets of observations at different times $t_{obs}$, and H is a complex operator that includes the forward model. If observation errors at different times are uncorrelated we find:
$J_{obs}(\psi_0) = \tfrac{1}{2} \sum_i \big(d_i - H_i(\psi_0)\big)^T R_i^{-1} \big(d_i - H_i(\psi_0)\big)$
29
4DVar: the dynamical model
The dynamical model is denoted by M:
$\psi_{i+1} = M_i(\psi_i)$
Using the model operator twice brings us two time steps forward:
$\psi_{i+2} = M_{i+1}\big(M_i(\psi_i)\big)$
And some short-hand notation:
$\psi_i = M_{0 \to i}(\psi_0)$
30
4DVar
The total cost function that we have to minimize now becomes:
$J(\psi_0) = \tfrac{1}{2} (\psi_0 - \psi_f)^T P^{-1} (\psi_0 - \psi_f) + \tfrac{1}{2} \sum_i \big(d_i - H_i(\psi_0)\big)^T R_i^{-1} \big(d_i - H_i(\psi_0)\big)$
in which the measurement operator $H_i$ contains the forward model, defined as
$H_i(\psi_0) = H\big(M_{0 \to i}(\psi_0)\big)$
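A schematic evaluation of this strong-constraint cost function, applying the forward model step by step inside $H_i$. The toy model, window length, observation times and all matrices are assumptions for illustration.

```python
# Schematic strong-constraint 4DVar cost: the forward model is applied
# inside H_i = H ∘ M_{0->i}. Everything here is an illustrative assumption.
import numpy as np

def M_step(psi):                       # toy forward model, one time step
    return 0.9 * psi + 0.1 * np.roll(psi, 1)

H = np.array([[1.0, 0.0, 0.0]])        # observe the first component
R_inv = np.array([[4.0]])              # inverse observation error covariance
P_inv = np.eye(3)                      # inverse background error covariance
psi_f = np.zeros(3)                    # first guess at the window start
obs = {2: np.array([0.5]), 5: np.array([0.3])}  # observations at steps 2, 5

def J_4dvar(psi0, n_steps=6):
    J = 0.5 * (psi0 - psi_f) @ P_inv @ (psi0 - psi_f)
    psi = psi0.copy()
    for i in range(n_steps):
        if i in obs:                   # H_i(psi0): run the model, then observe
            inno = obs[i] - H @ psi
            J += 0.5 * inno @ R_inv @ inno
        psi = M_step(psi)              # advance one time step
    return float(J)

print(J_4dvar(np.array([0.4, 0.0, 0.0])))
```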
31
Weak-constraint 4DVar
So far we have considered errors in the observations and in the model initial condition. What about the forward model? If the forward model has errors, i.e. if there are errors present in the model equations, the model pdf has to be extended. This leads to extra terms in the cost function, e.g. for Gaussian model errors $\eta_i$ with covariances $Q_i$:
$J = \ldots + \tfrac{1}{2} \sum_i \eta_i^T Q_i^{-1} \eta_i$
32
Parameter estimation
Bayes:
$p(\alpha \mid d) = \frac{p(d \mid \alpha)\, p(\alpha)}{p(d)}$
Looks simple, but we don't observe model parameters… We observe model fields, so:
$d = H\big(\psi(\alpha)\big) + \epsilon$
in which H has to be found from model integrations.
33
And what if the distributions are non-Gaussian?
Fully non-Gaussian data-assimilation methods exist, but none has been proven to work for large-scale systems. A few do have potential, among which the Particle Filter. It has been shown that the curse of dimensionality can be cured…
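For contrast with the Gaussian methods above, here is a minimal bootstrap particle filter sketch (propagate, weight by the likelihood, resample). The toy stochastic model and every number are assumptions, and this naive variant is precisely the one that struggles in high dimensions.

```python
# Minimal bootstrap particle filter: propagate, weight, resample.
# Model, likelihood and numbers are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
N = 500
particles = rng.normal(size=N)                    # sample the prior pdf

def M(psi):                                       # toy stochastic model
    return 0.9 * psi + 0.3 * rng.normal(size=psi.shape)

for d in [0.4, 0.1]:                              # two observation times
    particles = M(particles)                      # propagate the pdf
    w = np.exp(-0.5 * (d - particles)**2 / 0.25)  # Gaussian likelihood
    w /= w.sum()                                  # normalised weights
    idx = rng.choice(N, size=N, p=w)              # resample by weight
    particles = particles[idx]                    # equal-weight posterior sample
```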
34
Summary and outlook
1) We know how to formulate the data-assimilation problem using Bayes' Theorem.
2) We have derived the Kalman Filter and shown that it is the best linear unbiased estimator (BLUE).
3) We derived 3DVar and 4DVar and discussed some of their properties.
4) This forms the basis for what is to come in the next days!
5) Notation: when we talk about the state vector and its observations the notation will change: $\psi$ becomes x, and d becomes y. Beware!