
École Doctorale des Sciences de l'Environnement d'Île-de-France, Academic Year 2015-2016. Modélisation Numérique de l'Écoulement Atmosphérique et Assimilation de Données. Olivier Talagrand, Cours 7, 25 May 2016. (This is a test file, on which I am going to try to learn to use PowerPoint.)

In the linear case, the Kalman Filter and Variational Assimilation produce the same estimate at the end of the assimilation window, i.e., the BLUE (Best Linear Unbiased Estimate) of the state of the system from all available data. If in addition errors are Gaussian, both algorithms achieve Bayesian estimation.
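As a concrete illustration, here is a minimal numerical check of that statement, written as a sketch in Python/NumPy; the two-variable model, the operators and all numbers are invented for the example, not taken from the course.

```python
# Linear-Gaussian toy problem: the Kalman filter analysis at the end of the
# window equals the 4D-Var estimate propagated to that time.
import numpy as np

M = np.array([[1.0, 0.1], [0.0, 0.9]])   # linear model: x_{k+1} = M x_k
H = np.array([[1.0, 0.0]])               # observe the first component only
P0 = np.eye(2)                           # background error covariance
R = np.array([[0.5]])                    # observation error covariance
x0b = np.array([0.3, -0.2])              # background at time 0
y = [np.array([0.7]), np.array([0.1])]   # observations at times 0 and 1

# Kalman filter: analysis at time 0, forecast, analysis at time 1
def kf_analysis(x, P, yk):
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    return x + K @ (yk - H @ x), (np.eye(2) - K @ H) @ P

xa0, Pa0 = kf_analysis(x0b, P0, y[0])
xf, Pf = M @ xa0, M @ Pa0 @ M.T          # strong constraint: no model error
xa1, _ = kf_analysis(xf, Pf, y[1])

# 4D-Var: J(x0) is quadratic, so solve the normal equations directly
G = [H, H @ M]                           # observation operators composed with M
A = np.linalg.inv(P0)
b = np.linalg.inv(P0) @ x0b
for Gk, yk in zip(G, y):
    A += Gk.T @ np.linalg.inv(R) @ Gk
    b += Gk.T @ np.linalg.inv(R) @ yk
x0a = np.linalg.solve(A, b)              # 4D-Var estimate of the initial state

print(np.allclose(M @ x0a, xa1))         # True: same estimate at end of window
```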

Incremental Method Variational assimilation, as it has been described, requires the use of the adjoint of the full model. Simplifying the adjoint alone can be very dangerous: the computed gradient would not be exact, and experience shows that optimization algorithms (and especially efficient ones) are very sensitive to even slight misspecification of the gradient. Principle of the Incremental Method (Courtier et al., 1994, Q. J. R. Meteorol. Soc.): simplify simultaneously the (local tangent linear) dynamics and the corresponding adjoint.

Incremental Method (continuation 1)
- Basic (nonlinear) model: $\xi_{k+1} = M_k(\xi_k)$
- Tangent linear model: $\delta\xi_{k+1} = M_k' \delta\xi_k$, where $M_k'$ is the Jacobian of $M_k$ at point $\xi_k$
- Adjoint model: $\lambda_k = M_k'^T \lambda_{k+1} + \ldots$
Incremental Method: simplify $M_k'$ and $M_k'^T$.

Incremental Method (continuation 2)
More precisely, for a given solution $\xi_k^{(0)}$ of the nonlinear model, replace the tangent linear and adjoint models respectively by
$\delta\xi_{k+1} = L_k \delta\xi_k$   (2)
and $\lambda_k = L_k^T \lambda_{k+1} + \ldots$
where $L_k$ is an appropriate simplification of the Jacobian $M_k'$. It is then necessary, in order to ensure that the result of the adjoint integration is the exact gradient of the objective function, to modify the basic model in such a way that the solution emanating from $\xi_0^{(0)} + \delta\xi_0$ is equal to $\xi_k^{(0)} + \delta\xi_k$, where $\delta\xi_k$ evolves according to (2). This makes the basic dynamics exactly linear.

Incremental Method (continuation 3)
As concerns the observation operators in the objective function, a similar procedure can be implemented if those operators are nonlinear. This leads to replacing $H_k(\xi_k)$ by $H_k(\xi_k^{(0)}) + N_k \delta\xi_k$, where $N_k$ is an appropriate 'simple' linear operator (possibly, but not necessarily, the Jacobian of $H_k$ at point $\xi_k^{(0)}$). The objective function then depends only on the initial deviation $\delta\xi_0$ from $\xi_0^{(0)}$, and reads
$J_I(\delta\xi_0) = (1/2) (x_0^b - \xi_0^{(0)} - \delta\xi_0)^T [P_0^b]^{-1} (x_0^b - \xi_0^{(0)} - \delta\xi_0) + (1/2) \sum_k [d_k - N_k \delta\xi_k]^T R_k^{-1} [d_k - N_k \delta\xi_k]$
where $d_k \equiv y_k - H_k(\xi_k^{(0)})$ is the innovation at time $k$, and the $\delta\xi_k$ evolve according to $\delta\xi_{k+1} = L_k \delta\xi_k$ (2).
With the choices made here, $J_I(\delta\xi_0)$ is an exactly quadratic function of $\delta\xi_0$. The minimizing perturbation $\delta\xi_{0,m}$ defines a new initial state $\xi_0^{(1)} \equiv \xi_0^{(0)} + \delta\xi_{0,m}$, from which a new solution $\xi_k^{(1)}$ of the basic nonlinear equation is determined. The process is then restarted in the vicinity of that new solution.

Incremental Method (continuation 4)
This defines a system of two-level nested loops for minimization (sketched in the example below). The advantage is that many degrees of freedom are available for defining the simplified operators $L_k$ and $N_k$, and for striking an appropriate trade-off between practical implementability on the one hand, and physical usefulness and accuracy on the other. It is the incremental method which, together with the adjoint method, makes variational assimilation possible in practice.
First-Guess-At-the-right-Time 3D-Var (FGAT 3D-Var): corresponds to $L_k = I_n$. Assimilation is four-dimensional in that observations are compared to a first guess which evolves in time, but three-dimensional in that no dynamics other than the trivial dynamics expressed by the unit operator is present in the minimization.
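The outer/inner loop structure can be made concrete on a toy scalar model; the following sketch (my own construction, with an invented model and numbers) minimizes the exactly quadratic inner-loop function $J_I$ in closed form at each outer iteration.

```python
# Incremental 4D-Var on a toy scalar model: the outer loop updates the
# nonlinear reference trajectory, the inner loop minimizes the quadratic J_I.
import numpy as np

def M(x):    return x + 0.05 * np.sin(x)        # nonlinear model step M_k
def L(xref): return 1.0 + 0.05 * np.cos(xref)   # simplified Jacobian L_k
H, R, P0b = 1.0, 0.1, 1.0                       # obs operator, error variances
x0b, y = 0.5, [0.9, 1.1, 1.0]                   # background; obs at k = 0, 1, 2

x0 = x0b                                        # outer-loop reference state
for outer in range(5):
    # reference trajectory xi^(0) and innovations d_k = y_k - H(xi_k^(0))
    traj = [x0]
    for _ in range(len(y) - 1):
        traj.append(M(traj[-1]))
    d = [yk - H * xk for yk, xk in zip(y, traj)]

    # inner loop: dxi_k = G_k dxi_0, with G_0 = 1, G_{k+1} = L_k G_k (eq. 2);
    # J_I is exactly quadratic in dxi_0, so its minimizer solves a*dxi_0 = b
    G, Gk = [], 1.0
    for xk in traj:
        G.append(Gk)
        Gk = L(xk) * Gk
    a = 1.0 / P0b + sum((H * Gk) ** 2 / R for Gk in G)
    b = (x0b - x0) / P0b + sum(H * Gk * dk / R for Gk, dk in zip(G, d))

    x0 = x0 + b / a                             # restart near the new solution

print("analysed initial state:", x0)
```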

Buehner et al. (Mon. Wea. Rev., 2010): for the same numerical cost, and in meteorologically realistic situations, the Ensemble Kalman Filter and Variational Assimilation produce results of similar quality.

Two questions:
- How to propagate information backwards in time? (useful for reassimilation of past data)
- How to take into account possible temporal correlations in errors?
The Kalman Filter, whether in its standard linear form or in its Ensemble form, does neither.

Weak-constraint variational assimilation allows for errors in the assimilating model. Data:
- Background estimate at time 0: $x_0^b = x_0 + \zeta_0^b$, with $E(\zeta_0^b \zeta_0^{bT}) = P_0^b$
- Observations at times $k = 0, \ldots, K$: $y_k = H_k x_k + \varepsilon_k$, with $E(\varepsilon_k \varepsilon_k^T) = R_k$
- Model: $x_{k+1} = M_k x_k + \eta_k$, with $E(\eta_k \eta_k^T) = Q_k$, $k = 0, \ldots, K-1$
Errors are assumed to be unbiased and uncorrelated in time; $H_k$ and $M_k$ are linear.

The objective function is then, for the control variable $(\xi_0, \xi_1, \ldots, \xi_K)$:
$J(\xi_0, \xi_1, \ldots, \xi_K) = (1/2) (x_0^b - \xi_0)^T [P_0^b]^{-1} (x_0^b - \xi_0) + (1/2) \sum_{k=0}^{K} [y_k - H_k \xi_k]^T R_k^{-1} [y_k - H_k \xi_k] + (1/2) \sum_{k=0}^{K-1} [\xi_{k+1} - M_k \xi_k]^T Q_k^{-1} [\xi_{k+1} - M_k \xi_k]$
Can include nonlinear $M_k$ and/or $H_k$. Implemented operationally at ECMWF for assimilation in the stratosphere. Becomes singular in the strong-constraint limit $Q_k \to 0$.
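Written out in code, the objective function is a direct transcription of the three sums; the sketch below (a toy helper of my own, not an operational implementation) assumes the matrices and vectors are given as NumPy arrays.

```python
# Weak-constraint 4D-Var objective: background + observation + model-error terms.
import numpy as np

def weak_constraint_J(xi, x0b, P0b, ys, Hs, Rs, Ms, Qs):
    """xi: list of state vectors xi_0 ... xi_K (the control variable)."""
    dx0 = x0b - xi[0]
    J = 0.5 * dx0 @ np.linalg.solve(P0b, dx0)            # background term
    for k, (yk, Hk, Rk) in enumerate(zip(ys, Hs, Rs)):
        dk = yk - Hk @ xi[k]
        J += 0.5 * dk @ np.linalg.solve(Rk, dk)          # observation terms
    for k, (Mk, Qk) in enumerate(zip(Ms, Qs)):
        ek = xi[k + 1] - Mk @ xi[k]                      # model error eta_k
        J += 0.5 * ek @ np.linalg.solve(Qk, ek)          # model-error terms
    return J
```

Note that the model-error terms blow up as $Q_k \to 0$, which is the singularity of the strong-constraint limit mentioned above.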

Dual Algorithm for Variational Assimilation (also known as the Physical Space Analysis System, PSAS, pronounced 'pizzazz'; see in particular the book and papers by Bennett)
$x^a = x^b + P^b H^T [H P^b H^T + R]^{-1} (y - H x^b) = x^b + P^b H^T \Sigma^{-1} d = x^b + P^b H^T m$
where $\Sigma \equiv H P^b H^T + R$, $d \equiv y - H x^b$, and $m \equiv \Sigma^{-1} d$ maximises
$K(\mu) = -(1/2) \mu^T \Sigma \mu + d^T \mu$
Maximisation is performed in the (dual of the) observation space.
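The point of the dual formulation is that the linear system is solved in observation space (dimension $p$) rather than state space (dimension $n$). A small numerical check, with random toy matrices of my own choosing, that the dual analysis coincides with the primal variational analysis:

```python
# Primal (state-space) vs dual (observation-space) analysis for one time.
import numpy as np

rng = np.random.default_rng(1)
n, p = 6, 3                                  # state and observation dimensions
A = rng.standard_normal((n, n))
Pb = A @ A.T + n * np.eye(n)                 # SPD background error covariance
H = rng.standard_normal((p, n))
R = np.eye(p)
xb = rng.standard_normal(n)
y = rng.standard_normal(p)

# Dual (PSAS): solve for m = Sigma^{-1} d in p-dimensional observation space
d = y - H @ xb
Sigma = H @ Pb @ H.T + R
m = np.linalg.solve(Sigma, d)                # maximizer of K(mu)
xa_dual = xb + Pb @ H.T @ m

# Primal: minimize J(x) in n-dimensional state space (normal equations)
Pinv, Rinv = np.linalg.inv(Pb), np.linalg.inv(R)
xa_primal = np.linalg.solve(Pinv + H.T @ Rinv @ H, Pinv @ xb + H.T @ Rinv @ y)

print(np.allclose(xa_dual, xa_primal))       # True (Sherman-Morrison-Woodbury)
```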

Dual Algorithm for Variational Assimilation (continuation 2)
Extends to the time dimension, and to the weak-constraint case, by defining the state vector as $x \equiv (x_0^T, x_1^T, \ldots, x_K^T)^T$ or, equivalently but more conveniently, as $x \equiv (x_0^T, \eta_0^T, \ldots, \eta_{K-1}^T)^T$, where, as before, $\eta_k = x_{k+1} - M_k x_k$, $k = 0, \ldots, K-1$.
The background for $x_0$ is $x_0^b$; the background for $\eta_k$ is 0. The complete background is $x^b = (x_0^{bT}, 0^T, \ldots, 0^T)^T$. It is associated with the error covariance matrix $P^b = \mathrm{diag}(P_0^b, Q_0, \ldots, Q_{K-1})$.

  H = (u0T, …, uKT)T u0=H00 k+1 = Mkk + k uk+1 =Hk+1 k+1 Dual Algorithm for Variational Assimilation (continuation 3) For any state vector = (0T, 0T , …, K-1T)T, the observation operator H   H = (u0T, …, uKT)T is defined by the sequence of operations u0=H00 then for k = 0, …, K-1 k+1 = Mkk + k uk+1 =Hk+1 k+1 The observation error covariance matrix is equal to R = diag(R0, …, RK)

Dual Algorithm for Variational Assimilation (continuation 4)
Maximization of the dual objective function
$K(\mu) = -(1/2) \mu^T \Sigma \mu + d^T \mu$
requires explicit repeated computations of its gradient
$\nabla K = -\Sigma \mu + d = -(H P^b H^T + R) \mu + d$
Starting from $\mu = (\mu_0^T, \ldots, \mu_K^T)^T$ belonging to the (dual of the) observation space, this requires 5 successive steps.
- Step 1. Multiplication by $H^T$. This is done by applying the transpose of the process defined above, viz.,
set $\lambda_K = 0$;
then, for $k = K-1, \ldots, 0$: $\nu_k = \lambda_{k+1} + H_{k+1}^T \mu_{k+1}$ and $\lambda_k = M_k^T \nu_k$;
finally $\chi_0 = \lambda_0 + H_0^T \mu_0$.
The output of this step, which includes a backward integration of the adjoint model, is the vector $(\chi_0^T, \nu_0^T, \ldots, \nu_{K-1}^T)^T$.

Dual Algorithm for Variational Assimilation (continuation 5)
- Step 2. Multiplication by $P^b$. This reduces to $\chi_0 \leftarrow P_0^b \chi_0$ and $\nu_k \leftarrow Q_k \nu_k$, $k = 0, \ldots, K-1$.
- Step 3. Multiplication by $H$. Apply the process defined above to the vector $(\chi_0^T, \nu_0^T, \ldots, \nu_{K-1}^T)^T$, thereby producing the vector $(u_0^T, \ldots, u_K^T)^T$.
- Step 4. Addition of $R\mu$, i.e. compute $u_k \leftarrow u_k + R_k \mu_k$, $k = 0, \ldots, K$.
- Step 5. Change the sign of the resulting vector $(u_0^T, \ldots, u_K^T)^T$ and add the vector $d = y - Hx^b$, which yields the gradient $\nabla K = -\Sigma\mu + d$.
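The five steps translate directly into code. Below is a sketch on a toy system (scalar states, invented operators and numbers; the function names are mine), with the forward operator $H$ and its transpose implemented exactly as in the two preceding slides.

```python
# Gradient of the dual function, grad K = -(H P^b H^T + R) mu + d, computed
# by the 5-step procedure: H^T (adjoint sweep), P^b, H (forward sweep),
# addition of R mu, then sign change and addition of the innovations d.
import numpy as np

K_steps = 3
Ms = [np.array([[1.0]])] * K_steps            # model operators M_k
Hs = [np.array([[1.0]])] * (K_steps + 1)      # observation operators H_k
P0b = np.array([[1.0]])                       # background error covariance
Qs = [np.array([[0.1]])] * K_steps            # model error covariances Q_k
Rs = [np.array([[0.5]])] * (K_steps + 1)      # observation error covariances R_k

def apply_H(chi0, nus):                       # forward: (xi_0, eta_k) -> (u_k)
    us, xi = [Hs[0] @ chi0], chi0
    for k in range(K_steps):
        xi = Ms[k] @ xi + nus[k]
        us.append(Hs[k + 1] @ xi)
    return us

def apply_HT(mus):                            # transpose: backward adjoint sweep
    lam = np.zeros(Ms[-1].shape[0])
    nus = [None] * K_steps
    for k in range(K_steps - 1, -1, -1):
        nus[k] = lam + Hs[k + 1].T @ mus[k + 1]
        lam = Ms[k].T @ nus[k]
    chi0 = lam + Hs[0].T @ mus[0]
    return chi0, nus

def grad_K(mus, d):
    chi0, nus = apply_HT(mus)                 # Step 1: multiply by H^T
    chi0 = P0b @ chi0                         # Step 2: multiply by P^b ...
    nus = [Qs[k] @ nus[k] for k in range(K_steps)]
    us = apply_H(chi0, nus)                   # Step 3: multiply by H
    us = [u + Rs[k] @ mus[k] for k, u in enumerate(us)]  # Step 4: add R mu
    return [dk - uk for dk, uk in zip(d, us)]            # Step 5: -Sigma mu + d

d = [np.array([0.2])] * (K_steps + 1)         # innovations y_k - H_k x_k^b
mu = [np.array([0.0])] * (K_steps + 1)
print(grad_K(mu, d))                          # at mu = 0 the gradient equals d
```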

Dual Algorithm for Variational Assimilation (continuation 6)
The dual algorithm remains regular in the limit of vanishing model error, and can be used for both strong- and weak-constraint assimilation, with no significant increase of computing cost in comparison with standard strong-constraint variational assimilation (Courtier, Louvel).

[Figures: Louvel, Doctoral Dissertation, Université Paul-Sabatier, Toulouse, 1999]

Dual Algorithm for Variational Assimilation (continuation 7)
Requires:
- An explicit background (not much of a problem)
- Exact linearity (much more of a problem). The definition of iterative nonlinear procedures is being studied (Auroux, …)

[Figures: Auroux, Doctoral Dissertation, Université de Nice-Sophia Antipolis, Nice, 2003]

Variational assimilation has been extended to non-Gaussian probability distributions (lognormal distributions), the unknown being the mode of the conditional distribution (M. Zupanski, Fletcher).
Bayesian character of variational assimilation?
- If everything is linear and Gaussian, there is a ready recipe for obtaining a Bayesian sample: perturb the data (background, observations and model) according to their error probability distributions, do the variational assimilation, and repeat the process. The sample of system orbits thus obtained is Bayesian.
- If not, very little can be said at present.
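The recipe in the linear-Gaussian case can be checked numerically; the sketch below (a scalar example with numbers of my own choosing) verifies that the sample of perturbed analyses has the Bayesian posterior mean and variance.

```python
# Perturb background and observation according to their error statistics,
# redo the variational (here: exact) analysis, repeat; in the linear-Gaussian
# case the resulting sample is drawn from the Bayesian posterior.
import numpy as np

rng = np.random.default_rng(2)
xb, Pb = 0.0, 1.0                 # background and its error variance
y, R, H = 1.0, 0.5, 1.0           # one observation
w = 1.0 / Pb + H**2 / R
xa = (xb / Pb + H * y / R) / w    # variational analysis (posterior mean)
Pa = 1.0 / w                      # posterior variance

sample = []
for _ in range(100_000):
    xb_p = xb + rng.normal(0.0, np.sqrt(Pb))   # perturbed background
    y_p = y + rng.normal(0.0, np.sqrt(R))      # perturbed observation
    sample.append((xb_p / Pb + H * y_p / R) / w)

print(np.mean(sample), xa)        # sample mean   ~= posterior mean
print(np.var(sample), Pa)         # sample spread ~= posterior variance
```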

Time-correlated Errors
Example of time-correlated observation errors:
$z_1 = x + \varepsilon_1$, $z_2 = x + \varepsilon_2$, with $E(\varepsilon_1) = E(\varepsilon_2) = 0$; $E(\varepsilon_1^2) = E(\varepsilon_2^2) = s$; $E(\varepsilon_1 \varepsilon_2) = 0$
The BLUE of $x$ from $z_1$ and $z_2$ gives equal weights to $z_1$ and $z_2$.
An additional observation then becomes available: $z_3 = x + \varepsilon_3$, with $E(\varepsilon_3) = 0$; $E(\varepsilon_3^2) = s$; $E(\varepsilon_1 \varepsilon_3) = cs$; $E(\varepsilon_2 \varepsilon_3) = 0$
The BLUE of $x$ from $(z_1, z_2, z_3)$ has weights in the proportion $(1, 1+c, 1)$.
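This proportion is easy to verify numerically: for $z = x\mathbf{1} + \varepsilon$ the BLUE weights are proportional to $R^{-1}\mathbf{1}$. A quick check (arithmetic of my own, with arbitrary $s$ and $c$):

```python
# BLUE weights for (z1, z2, z3) with corr(eps1, eps3) = c, others uncorrelated.
import numpy as np

s, c = 2.0, 0.4
R = s * np.array([[1.0, 0.0, c],
                  [0.0, 1.0, 0.0],
                  [c,   0.0, 1.0]])       # observation error covariance
w = np.linalg.solve(R, np.ones(3))        # unnormalized BLUE weights
print(w / w[0])                           # -> [1.  1.4  1.], i.e. (1, 1+c, 1)
```

The positively correlated pair $(z_1, z_3)$ carries partly redundant information, so the uncorrelated observation $z_2$ is weighted more heavily.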

Time-correlated Errors (continuation 1)
Example of time-correlated model errors. Evolution equation: $x_{k+1} = x_k + \eta_k$, with $E(\eta_k^2) = q$. Observations: $y_k = x_k + \varepsilon_k$, $k = 0, 1, 2$, with $E(\varepsilon_k^2) = r$, errors uncorrelated in time.
Sequential assimilation: the weights given to $y_0$ and $y_1$ in the analysis at time 1 are in the ratio $r/(r+q)$. That ratio is then conserved in sequential assimilation, which is all right if model errors are uncorrelated in time.
Now assume $E(\eta_0 \eta_1) = cq$. The weights given to $y_0$ and $y_1$ in the optimal estimation of $x_2$ are then in a ratio that differs from $r/(r+q)$ as soon as $c \neq 0$, and a sequential algorithm that has already merged $y_0$ and $y_1$ with fixed weights cannot recover it.

Conclusion
Sequential assimilation, in which data are processed by batches and the data of one batch are discarded once that batch has been used, cannot be optimal if data in different batches are affected by correlated errors. This is so even if one keeps track of the correlations.
Solution: process all correlated data in the same batch (4D-Var, some smoothers).

Time-correlated Errors (continuation 3)
Moral: if data errors are correlated in time, it is not possible to discard observations as they are used. In particular, if model error is correlated in time, all observations are liable to be reweighted as assimilation proceeds.
Variational assimilation can take time-correlated errors into account. Example of time-correlated observation errors: define the global covariance matrix $R = (R_{kk'})$, with $R_{kk'} = E(\varepsilon_k \varepsilon_{k'}^T)$. The objective function then reads
$J(\xi_0) = (1/2) (x_0^b - \xi_0)^T [P_0^b]^{-1} (x_0^b - \xi_0) + (1/2) \sum_{kk'} [y_k - H_k \xi_k]^T [R^{-1}]_{kk'} [y_{k'} - H_{k'} \xi_{k'}]$
where $[R^{-1}]_{kk'}$ is the $(k,k')$ sub-block of the global inverse matrix $R^{-1}$. A similar approach can be used for time-correlated model error.
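In code, the correlated-error observation term amounts to stacking the innovations of the whole window and weighting them with the inverse of the global space-time covariance matrix. A minimal sketch (a helper of my own, assuming NumPy arrays):

```python
# Observation term of the objective function with a global, time-correlated R.
import numpy as np

def correlated_obs_term(xis, ys, Hs, R_global):
    """0.5 * sum_{k,k'} [y_k - H_k xi_k]^T [R^{-1}]_{kk'} [y_{k'} - H_{k'} xi_{k'}]"""
    d = np.concatenate([yk - Hk @ xik for yk, Hk, xik in zip(ys, Hs, xis)])
    return 0.5 * d @ np.linalg.solve(R_global, d)
```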

Time-correlated Errors (continuation 4)
Temporal correlation of observational error has been introduced at ECMWF (Järvinen et al., 1999) in the variational assimilation of high-frequency surface-pressure observations (the correlation originates in that case in representativeness error). The identification and quantification of time correlations of errors, especially model errors, remain open questions.

Conclusion on Sequential Assimilation
Pros:
- 'Natural', and well adapted to many practical situations
- Provides, at least relatively easily, an explicit estimate of the estimation error
Cons:
- Carries information only forward in time (of no importance if one is interested only in forecasting)
- In its present form, optimality is possible only if errors are independent in time

Conclusion on Variational Assimilation
Pros:
- Carries information both forward and backward in time (important for reassimilation of past data)
- Can easily take into account temporal statistical dependence (Järvinen et al.)
- Does not require explicit computation of the temporal evolution of the estimation error
- Very well adapted to some specific problems (e.g., identification of tracer sources)
Cons:
- Does not readily provide an estimate of the estimation error
- Requires the development and maintenance of adjoint codes; but the latter can have other uses (sensitivity studies)
The dual approach seems most promising, but still needs further development for application in not exactly linear cases. Is ensemble variational assimilation possible? Probably yes, but it also needs development.

Weak-constraint variational assimilation (recall) allows for errors in the assimilating model. Data:
- Background estimate at time 0: $x_0^b = x_0 + \zeta_0^b$, with $E(\zeta_0^b \zeta_0^{bT}) = P_0^b$
- Observations at times $k = 0, \ldots, K$: $y_k = H_k x_k + \varepsilon_k$, with $E(\varepsilon_k \varepsilon_k^T) = R_k$
- Model: $x_{k+1} = M_k x_k + \eta_k$, with $E(\eta_k \eta_k^T) = Q_k$, $k = 0, \ldots, K-1$
Errors are assumed to be unbiased and uncorrelated in time; $H_k$ and $M_k$ are linear.

Kalman smoother
Propagates information both forward and backward in time, as does 4D-Var, but uses Kalman-type formulæ. Various possibilities:
- Define a new state vector $x^T \equiv (x_0^T, \ldots, x_K^T)$ and use the Kalman formula from a background $x^b$ and the associated covariance matrix $P^b$. Can take temporal correlations into account.
- Update sequentially the vector $(x_0^T, \ldots, x_k^T)^T$ for increasing $k$. Cannot take temporal correlations into account.
Algorithms exist in ensemble form. The first possibility is illustrated below.
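A minimal sketch of the first possibility on a toy scalar random-walk model (all numbers invented): the states of the whole window are stacked into one vector, the corresponding background covariance is built explicitly, and a single Kalman/BLUE update smooths the whole window at once.

```python
# Fixed-interval smoothing by a single update of the stacked window state.
import numpy as np

M, q, r = 1.0, 0.1, 0.5                  # scalar model; model/obs error variances
xb0, p0 = 0.0, 1.0                       # background at time 0 and its variance
y = np.array([0.8, 1.0, 1.2])            # one observation per time k = 0, 1, 2

# Background over the window: x^b = (xb0, M xb0, M^2 xb0)
xb = np.array([xb0, M * xb0, M**2 * xb0])

# Background covariance of the stacked state, with model error accumulated
# along the window: Cov(x_i, x_j) = M^{|i-j|} Var(x_min(i,j))
var = [p0, M**2 * p0 + q, M**4 * p0 + M**2 * q + q]
Pb = np.empty((3, 3))
for i in range(3):
    for j in range(3):
        Pb[i, j] = M ** abs(i - j) * var[min(i, j)]

H = np.eye(3)                            # each state component observed directly
R = r * np.eye(3)
K = Pb @ H.T @ np.linalg.inv(H @ Pb @ H.T + R)
xa = xb + K @ (y - H @ xb)               # smoothed estimate over the whole window
print(xa)
```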

[Figures: E. Cosme, HDR, 2015, SEEK ensemble smoother (lissage d'ensemble SEEK)]

In the linear case, the Kalman Smoother and Variational Assimilation produce the same estimate over the whole assimilation window, i.e., the BLUE of the state of the system from all available data. If in addition errors are Gaussian, both algorithms achieve Bayesian estimation.