École Doctorale des Sciences de l'Environnement d’Île-de-France Année Universitaire 2014-2015 Modélisation Numérique de l’Écoulement Atmosphérique et Assimilation.


École Doctorale des Sciences de l'Environnement d’Île-de-France Année Universitaire Modélisation Numérique de l’Écoulement Atmosphérique et Assimilation de Données Olivier Talagrand Cours 7 4 Mai 2015

Variational assimilation (4DVar)

Available data:
- Background estimate at time 0: $x_0^b = x_0 + \zeta_0^b$, with $E(\zeta_0^b \zeta_0^{bT}) = P_0^b$
- Observations at times $k = 0, \dots, K$: $y_k = H_k x_k + \varepsilon_k$, with $E(\varepsilon_k \varepsilon_j^T) = R_k \delta_{kj}$
- Model (supposed for the time being to be exact): $x_{k+1} = M_k x_k$, $k = 0, \dots, K-1$

Errors are assumed to be unbiased and uncorrelated in time; $H_k$ and $M_k$ are linear.

Then the objective function, for $\xi_0 \in S$:

$$J(\xi_0) = \frac{1}{2} (x_0^b - \xi_0)^T [P_0^b]^{-1} (x_0^b - \xi_0) + \frac{1}{2} \sum_k [y_k - H_k \xi_k]^T R_k^{-1} [y_k - H_k \xi_k]$$

subject to $\xi_{k+1} = M_k \xi_k$, $k = 0, \dots, K-1$.
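As a concrete illustration, the strong-constraint cost function above can be evaluated for a toy scalar problem. Everything below (model factor, variances, observation values) is an illustrative assumption, not taken from the course:

```python
import numpy as np

# Toy scalar setup: names (Mk, Hk, P0b, Rk) mirror the slide's notation.
K = 3
Mk = 0.9          # model: x_{k+1} = Mk * x_k
Hk = 1.0          # observation operator
P0b = 0.5         # background error variance
Rk = 0.2          # observation error variance
x0b = 1.0                         # background estimate at time 0
yk = np.array([1.1, 0.95, 0.8])   # observations at k = 0, 1, 2

def cost(xi0):
    """J(xi0) = background term + sum of observation terms, with
    xi_{k+1} = Mk * xi_k enforced as a strong constraint."""
    J = 0.5 * (x0b - xi0) ** 2 / P0b
    xi = xi0
    for k in range(K):
        J += 0.5 * (yk[k] - Hk * xi) ** 2 / Rk
        xi = Mk * xi   # propagate the strong-constraint model
    return J

print(cost(1.0))   # approximately 0.0315
```

Note that $\xi_0$ is the only control variable: the whole trajectory is determined by the model constraint, which is exactly what makes this the strong-constraint formulation.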

ECMWF, Results on one FASTEX case (1997)

Strong Constraint 4D-Var is now used operationally at several meteorological centres (Météo-France, UK Meteorological Office, Canadian Meteorological Centre, Japan Meteorological Agency, …) and, until recently, at ECMWF. The latter now has a ‘weak constraint’ component in its operational system.

(Figure: forecast skill scores; persistence = 0; climatology = 50 at long range.)

(Figure: initial-state error reduction; 4DVar, EDA, reforecasts from reanalysis, operational forecasts. Credit: E. Källén, ECMWF.)

Incremental Method

Variational assimilation, as described so far, requires the use of the adjoint of the full model. Simplifying the adjoint alone can be very dangerous: the computed gradient would not be exact, and experience shows that optimization algorithms (especially efficient ones) are very sensitive to even slight misspecification of the gradient.

Principle of the incremental method (Courtier et al., 1994, Q. J. R. Meteorol. Soc.): simplify simultaneously the (local tangent linear) dynamics and the corresponding adjoint.

Incremental Method (continuation 1)

- Basic (nonlinear) model: $\xi_{k+1} = M_k(\xi_k)$
- Tangent linear model: $\delta\xi_{k+1} = M_k' \, \delta\xi_k$, where $M_k'$ is the Jacobian of $M_k$ at point $\xi_k$
- Adjoint model: $\lambda_k = M_k'^T \lambda_{k+1} + \dots$

Incremental method: simplify $M_k'$ and $M_k'^T$.

Incremental Method (continuation 2)

More precisely, for a given solution $\xi_k^{(0)}$ of the nonlinear model, replace the tangent linear and adjoint models respectively by

$$\delta\xi_{k+1} = L_k \, \delta\xi_k \qquad (2)$$

and $\lambda_k = L_k^T \lambda_{k+1} + \dots$, where $L_k$ is an appropriate simplification of the Jacobian $M_k'$. It is then necessary, in order to ensure that the result of the adjoint integration is the exact gradient of the objective function, to modify the basic model in such a way that the solution emanating from $\xi_0^{(0)} + \delta\xi_0$ is equal to $\xi_k^{(0)} + \delta\xi_k$, where $\delta\xi_k$ evolves according to (2). This makes the basic dynamics exactly linear.

Incremental Method (continuation 3)

As concerns the observation operators in the objective function, a similar procedure can be implemented if those operators are nonlinear. This leads to replacing $H_k(\xi_k)$ by $H_k(\xi_k^{(0)}) + N_k \, \delta\xi_k$, where $N_k$ is an appropriate 'simple' linear operator (possibly, but not necessarily, the Jacobian of $H_k$ at point $\xi_k^{(0)}$).

The objective function depends only on the deviation $\delta\xi_0$ of the initial state from $\xi_0^{(0)}$, and reads

$$J_I(\delta\xi_0) = \frac{1}{2} (x_0^b - \xi_0^{(0)} - \delta\xi_0)^T [P_0^b]^{-1} (x_0^b - \xi_0^{(0)} - \delta\xi_0) + \frac{1}{2} \sum_k [d_k - N_k \, \delta\xi_k]^T R_k^{-1} [d_k - N_k \, \delta\xi_k]$$

where $d_k \equiv y_k - H_k(\xi_k^{(0)})$ is the innovation at time $k$, and the $\delta\xi_k$ evolve according to $\delta\xi_{k+1} = L_k \, \delta\xi_k$ (2).

With the choices made here, $J_I(\delta\xi_0)$ is an exactly quadratic function of $\delta\xi_0$. The minimizing perturbation $\delta\xi_{0,m}$ defines a new initial state $\xi_0^{(1)} \equiv \xi_0^{(0)} + \delta\xi_{0,m}$, from which a new solution $\xi_k^{(1)}$ of the basic nonlinear equation is determined. The process is restarted in the vicinity of that new solution.

Incremental Method (continuation 4)

This defines a system of two-level nested loops for minimization. The advantage is that many degrees of freedom are available for defining the simplified operators $L_k$ and $N_k$, and for defining an appropriate trade-off between practical implementability and physical usefulness and accuracy. It is the incremental method which, together with the adjoint method, makes variational assimilation possible in practice.

First-Guess-At-the-right-Time 3D-Var (FGAT 3D-Var) corresponds to $L_k = I_n$. Assimilation is four-dimensional in that observations are compared to a first guess which evolves in time, but three-dimensional in that no dynamics other than the trivial dynamics expressed by the unit operator is present in the minimization.
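The nested loops above can be sketched for a scalar nonlinear model. This is a minimal illustration, with all names and values assumed: the outer loop recomputes the reference trajectory and innovations, and the inner loop minimizes the exactly quadratic $J_I$ (here in closed form, taking $L_k$ equal to the exact Jacobian):

```python
import numpy as np

K = 4
H = 1.0                       # linear observation operator (N_k = H)
P0b, R = 0.5, 0.1             # background and observation error variances
x0b = 0.8                     # background initial state
def M(x):  return x + 0.1 * np.sin(x)     # toy nonlinear model
def Mp(x): return 1.0 + 0.1 * np.cos(x)   # its Jacobian; here L_k = M'

# synthetic truth and (perfect, for simplicity) observations
truth = [1.0]
for k in range(K - 1):
    truth.append(M(truth[-1]))
y = np.array(truth)

xi0 = x0b  # outer-loop reference initial state xi_0^(0)
for outer in range(5):
    # outer loop: reference trajectory, innovations d_k, TL factors L_k
    xi, traj, L = xi0, [], []
    for k in range(K):
        traj.append(xi)
        L.append(Mp(xi))
        xi = M(xi)
    d = y - H * np.array(traj)
    # inner loop: J_I is exactly quadratic in dxi0, since
    # dxi_k = (L_0 ... L_{k-1}) dxi0 = G_k dxi0; minimize in closed form
    G = np.cumprod([1.0] + L[:-1])
    A = 1.0 / P0b + np.sum((H * G) ** 2 / R)        # d2 J_I / d dxi0^2
    b = (x0b - xi0) / P0b + np.sum(H * G * d / R)   # -d J_I / d dxi0 at 0
    xi0 = xi0 + b / A   # new reference initial state xi_0^(1)

print(xi0)  # close to the true initial state 1.0
```

In an operational system the inner minimization is of course iterative (conjugate gradient or quasi-Newton) and $L_k$ is a genuinely simplified, lower-resolution operator; the closed-form solve above only works because the problem is scalar.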

Weak constraint variational assimilation

Allows for errors in the assimilating model.

Data:
- Background estimate at time 0: $x_0^b = x_0 + \zeta_0^b$, with $E(\zeta_0^b \zeta_0^{bT}) = P_0^b$
- Observations at times $k = 0, \dots, K$: $y_k = H_k x_k + \varepsilon_k$, with $E(\varepsilon_k \varepsilon_k^T) = R_k$
- Model: $x_{k+1} = M_k x_k + \eta_k$, with $E(\eta_k \eta_k^T) = Q_k$, $k = 0, \dots, K-1$

Errors are assumed to be unbiased and uncorrelated in time; $H_k$ and $M_k$ are linear.

Then the objective function, for $(\xi_0, \xi_1, \dots, \xi_K)$:

$$J(\xi_0, \xi_1, \dots, \xi_K) = \frac{1}{2} (x_0^b - \xi_0)^T [P_0^b]^{-1} (x_0^b - \xi_0) + \frac{1}{2} \sum_{k=0}^{K} [y_k - H_k \xi_k]^T R_k^{-1} [y_k - H_k \xi_k] + \frac{1}{2} \sum_{k=0}^{K-1} [\xi_{k+1} - M_k \xi_k]^T Q_k^{-1} [\xi_{k+1} - M_k \xi_k]$$

Can include nonlinear $M_k$ and/or $H_k$. Implemented operationally at ECMWF for the assimilation in the stratosphere. Becomes singular in the strong constraint limit $Q_k \to 0$.
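Evaluating this cost is straightforward, since the model now appears only as a penalty term. A scalar sketch (all numerical values are illustrative assumptions):

```python
import numpy as np

K = 3                  # observations at k = 0..K
Mk, Hk = 0.9, 1.0
P0b, Rk, Qk = 0.5, 0.2, 0.05
x0b = 1.0
y = np.array([1.1, 0.95, 0.85, 0.7])   # K+1 observations

def cost(xi):
    """xi is the full sequence (xi_0, ..., xi_K): the model is a weak
    constraint, penalized through Q_k instead of being enforced."""
    J = 0.5 * (x0b - xi[0]) ** 2 / P0b
    J += 0.5 * np.sum((y - Hk * xi) ** 2) / Rk
    J += 0.5 * np.sum((xi[1:] - Mk * xi[:-1]) ** 2) / Qk
    return J
```

The singularity in the strong-constraint limit is visible here: as `Qk` goes to 0, the last term blows up for any trajectory that does not satisfy the model exactly.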

Dual Algorithm for Variational Assimilation (aka Physical Space Analysis System, PSAS, pronounced 'pizzazz'; see in particular the book and papers by Bennett)

$$x^a = x^b + P^b H^T [H P^b H^T + R]^{-1} (y - H x^b) = x^b + P^b H^T \Sigma^{-1} d = x^b + P^b H^T \mu$$

where $\Sigma \equiv H P^b H^T + R$, $d \equiv y - H x^b$, and $\mu \equiv \Sigma^{-1} d$ maximises

$$K(\mu) = -\frac{1}{2} \mu^T \Sigma \mu + d^T \mu$$

The maximisation is performed in the (dual of the) observation space.
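A numerical sanity check of the identity behind PSAS, on a random linear-Gaussian problem (dimensions and values are arbitrary assumptions): the dual solution $\mu$, computed in the low-dimensional observation space, yields the same analysis as the primal state-space formula $x^a = (P^{b-1} + H^T R^{-1} H)^{-1}(P^{b-1} x^b + H^T R^{-1} y)$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 5, 3                           # state and observation dimensions
H = rng.standard_normal((p, n))
A = rng.standard_normal((n, n))
Pb = A @ A.T + n * np.eye(n)          # SPD background covariance
R = 0.1 * np.eye(p)
xb = rng.standard_normal(n)
y = rng.standard_normal(p)

d = y - H @ xb                        # innovation
Sigma = H @ Pb @ H.T + R
mu = np.linalg.solve(Sigma, d)        # maximizer of K(mu): -Sigma mu + d = 0
xa_dual = xb + Pb @ H.T @ mu          # only a p-dimensional system solved

Pb_inv, R_inv = np.linalg.inv(Pb), np.linalg.inv(R)
xa_primal = np.linalg.solve(Pb_inv + H.T @ R_inv @ H,
                            Pb_inv @ xb + H.T @ R_inv @ y)
print(np.allclose(xa_primal, xa_dual))
```

The equivalence is the Sherman-Morrison-Woodbury identity; the practical point is that the dual problem has the dimension of the observation space ($p$), not of the state space ($n$).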

Dual Algorithm for Variational Assimilation (continuation 2)

Extends to the time dimension, and to the weak-constraint case, by defining the state vector as $x \equiv (x_0^T, x_1^T, \dots, x_K^T)^T$ or, equivalently but more conveniently, as $x \equiv (x_0^T, \eta_0^T, \dots, \eta_{K-1}^T)^T$ where, as before, $\eta_k = x_{k+1} - M_k x_k$, $k = 0, \dots, K-1$.

The background for $x_0$ is $x_0^b$; the background for $\eta_k$ is $0$. The complete background is $x^b = (x_0^{bT}, 0^T, \dots, 0^T)^T$, associated with the error covariance matrix $P^b = \mathrm{diag}(P_0^b, Q_0, \dots, Q_{K-1})$.

Dual Algorithm for Variational Assimilation (continuation 3)

For any state vector $\xi = (\xi_0^T, \eta_0^T, \dots, \eta_{K-1}^T)^T$, the observation operator $H$: $H\xi = (u_0^T, \dots, u_K^T)^T$ is defined by the sequence of operations

$$u_0 = H_0 \xi_0$$

then, for $k = 0, \dots, K-1$:

$$\xi_{k+1} = M_k \xi_k + \eta_k, \qquad u_{k+1} = H_{k+1} \xi_{k+1}$$

The observation error covariance matrix is $R = \mathrm{diag}(R_0, \dots, R_K)$.

Dual Algorithm for Variational Assimilation (continuation 4)

Maximization of the dual objective function

$$K(\mu) = -\frac{1}{2} \mu^T \Sigma \mu + d^T \mu$$

requires repeated explicit computations of its gradient

$$\nabla_\mu K = -\Sigma \mu + d = -(H P^b H^T + R)\,\mu + d$$

Starting from $\mu = (\mu_0^T, \dots, \mu_K^T)^T$ belonging to the (dual of the) observation space, this requires 5 successive steps.

- Step 1. Multiplication by $H^T$. This is done by applying the transpose of the process defined above, viz.: set $\lambda_K = 0$; then, for $k = K-1, \dots, 0$, compute $\nu_k = \lambda_{k+1} + H_{k+1}^T \mu_{k+1}$ and $\lambda_k = M_k^T \nu_k$; finally $\chi_0 = \lambda_0 + H_0^T \mu_0$. The output of this step, which includes a backward integration of the adjoint model, is the vector $(\chi_0^T, \nu_0^T, \dots, \nu_{K-1}^T)^T$.

Dual Algorithm for Variational Assimilation (continuation 5)

- Step 2. Multiplication by $P^b$. This reduces to $\xi_0 = P_0^b \chi_0$ and $\eta_k = Q_k \nu_k$, $k = 0, \dots, K-1$.
- Step 3. Multiplication by $H$. Apply the process defined above to the vector $(\xi_0^T, \eta_0^T, \dots, \eta_{K-1}^T)^T$, thereby producing the vector $(u_0^T, \dots, u_K^T)^T$.
- Step 4. Add the vector $R\mu$, i.e. compute $v_k = u_k + R_k \mu_k$, $k = 0, \dots, K$.
- Step 5. Change the sign of the vector $v = (v_0^T, \dots, v_K^T)^T$ and add the vector $d = y - H x^b$.
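The five steps can be sketched for a toy problem with scalar states. The operators `apply_H` and `apply_HT` below, and all numerical values, are illustrative assumptions; the final check verifies that the step-by-step gradient equals $-\Sigma\mu + d$ computed with the dense matrix:

```python
import numpy as np

K = 3
M = [0.9, 1.1, 0.8]            # model operators M_k, k = 0..K-1
H = [1.0, 1.0, 1.0, 1.0]       # observation operators H_k, k = 0..K
P0b = 0.5                      # background error variance for x_0
Q = [0.05, 0.05, 0.05]         # model error variances Q_k
R = [0.2, 0.2, 0.2, 0.2]       # observation error variances R_k

def apply_H(xi0, eta):
    """Generalized observation operator of 'continuation 3':
    maps (xi_0, eta_0..eta_{K-1}) to (u_0..u_K)."""
    u, xi = [H[0] * xi0], xi0
    for k in range(K):
        xi = M[k] * xi + eta[k]
        u.append(H[k + 1] * xi)
    return np.array(u)

def apply_HT(mu):
    """Step 1: multiplication by H^T, i.e. a backward integration of
    the adjoint model, returning (chi_0, nu_0..nu_{K-1})."""
    lam, nu = 0.0, [0.0] * K
    for k in range(K - 1, -1, -1):
        nu[k] = lam + H[k + 1] * mu[k + 1]
        lam = M[k] * nu[k]
    return lam + H[0] * mu[0], np.array(nu)

def grad_dual(mu, d):
    """Steps 1-5: gradient of K(mu) = -mu' Sigma mu / 2 + d' mu."""
    chi0, nu = apply_HT(mu)                     # step 1: H^T mu
    xi0, eta = P0b * chi0, np.array(Q) * nu     # step 2: multiply by P^b
    u = apply_H(xi0, eta)                       # step 3: multiply by H
    v = u + np.array(R) * mu                    # step 4: add R mu
    return -v + d                               # step 5: -Sigma mu + d

# dense cross-check: assemble H column by column, then Sigma
Hmat = np.column_stack([apply_H(1.0, np.zeros(K))] +
                       [apply_H(0.0, e) for e in np.eye(K)])
Sigma = Hmat @ np.diag([P0b] + Q) @ Hmat.T + np.diag(R)
mu = np.array([0.3, -0.2, 0.1, 0.4])
d = np.array([1.0, 0.5, -0.3, 0.2])
print(np.allclose(grad_dual(mu, d), -Sigma @ mu + d))
```

Note that no inverse of $P^b$, $Q_k$ or $M_k$ is ever needed, which is why the algorithm remains regular when model error vanishes.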

Dual Algorithm for Variational Assimilation (continuation 6)

The dual algorithm remains regular in the limit of vanishing model error. It can be used for both strong- and weak-constraint assimilation, with no significant increase in computing cost in comparison with standard strong-constraint variational assimilation (Courtier, Louvel).

Louvel, Doctoral Dissertation, Université Paul-Sabatier, Toulouse, 1999

Dual Algorithm for Variational Assimilation (continuation)

Requires:
- An explicit background (not much of a problem)
- Exact linearity (much more of a problem). The definition of iterative nonlinear procedures is being studied (Auroux, …)

Auroux, Doctoral Dissertation, Université de Nice-Sophia Antipolis, Nice, 2003

Variational assimilation has been extended to non-Gaussian probability distributions (lognormal distributions), the unknown being the mode of the conditional distribution (M. Zupanski, Fletcher).

Bayesian character of variational assimilation?
- If everything is linear and Gaussian, there is a ready recipe for obtaining a Bayesian sample: perturb the data (background, observations and model) according to their error probability distributions, do the variational assimilation, and repeat the process. The sample of system orbits thus obtained is Bayesian.
- If not, very little can be said at present.

Buehner et al. (Mon. Wea. Rev., 2010) For the same numerical cost, and in meteorologically realistic situations, Ensemble Kalman Filter and Variational Assimilation produce results of similar quality.

Kalman smoother

Propagates information both forward and backward in time, as does 4DVar, but uses Kalman-type formulæ.

Various possibilities:
- Define a new state vector $x \equiv (x_0^T, \dots, x_K^T)^T$ and use the Kalman formula from a background $x^b$ and its associated covariance matrix $P^b$. Can take into account temporal correlations.
- Update sequentially the vector $(x_0^T, \dots, x_k^T)^T$ for increasing $k$. Cannot take into account temporal correlations.

Algorithms exist in ensemble form.

E. Cosme, HDR, 2015, Lissage d’ensemble SEEK

Conclusion on Sequential Assimilation

Pros:
- 'Natural', and well adapted to many practical situations
- Provides, at least relatively easily, an explicit estimate of the estimation error

Cons:
- Carries information only forward in time (of no importance if one is interested only in forecasting)
- In its present form, optimality is possible only if errors are independent in time

Conclusion on Variational Assimilation

Pros:
- Carries information both forward and backward in time (important for reassimilation of past data)
- Can easily take into account temporal statistical dependence (Järvinen et al.)
- Does not require explicit computation of the temporal evolution of the estimation error
- Very well adapted to some specific problems (e.g., identification of tracer sources)

Cons:
- Does not readily provide an estimate of the estimation error
- Requires the development and maintenance of adjoint codes. But the latter can have other uses (sensitivity studies).

The dual approach seems most promising, but still needs further development for application in not exactly linear cases. Is ensemble variational assimilation possible? Probably yes, but it also needs development.

How to write the adjoint of a code?

Consider the operation $a = b \times c$, with input $b, c$ and output $a$, but also $b, c$. For clarity, we write

$$a = b \times c, \qquad b' = b, \qquad c' = c$$

$\partial J/\partial a$, $\partial J/\partial b'$ and $\partial J/\partial c'$ are available; we want to determine $\partial J/\partial b$ and $\partial J/\partial c$. By the chain rule,

$$\frac{\partial J}{\partial b} = \frac{\partial J}{\partial a}\frac{\partial a}{\partial b} + \frac{\partial J}{\partial b'}\frac{\partial b'}{\partial b} + \frac{\partial J}{\partial c'}\frac{\partial c'}{\partial b}$$

and since $\partial a/\partial b = c$, $\partial b'/\partial b = 1$ and $\partial c'/\partial b = 0$,

$$\frac{\partial J}{\partial b} = \frac{\partial J}{\partial a}\, c + \frac{\partial J}{\partial b'}$$

Similarly,

$$\frac{\partial J}{\partial c} = \frac{\partial J}{\partial a}\, b + \frac{\partial J}{\partial c'}$$
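In code, the rule above yields one adjoint statement per forward statement. A minimal sketch (the toy objective $J$ and all names are hypothetical), with a finite-difference check:

```python
def forward(b, c):
    """Forward statement: a = b * c."""
    return b * c

def adjoint(b, c, Ja, Jb, Jc):
    """Given dJ/da and the accumulated dJ/db', dJ/dc' (values *after*
    the statement), return dJ/db and dJ/dc (values *before* it)."""
    Jb = Jb + Ja * c     # dJ/db = (dJ/da) c + dJ/db'
    Jc = Jc + Ja * b     # dJ/dc = (dJ/da) b + dJ/dc'
    return Jb, Jc

# check on J(b, c) = (b*c)**2 + b, so dJ/da = 2a, dJ/db' = 1, dJ/dc' = 0
b, c = 1.5, -2.0
a = forward(b, c)
Jb, Jc = adjoint(b, c, 2 * a, 1.0, 0.0)

eps = 1e-6   # finite-difference verification of the adjoint gradients
fd_b = (((b + eps) * c) ** 2 + (b + eps) - ((b * c) ** 2 + b)) / eps
fd_c = ((b * (c + eps)) ** 2 + b - ((b * c) ** 2 + b)) / eps
assert abs(Jb - fd_b) < 1e-4 and abs(Jc - fd_c) < 1e-4
```

The systematic comparison against finite differences, as in the last lines, is the standard way of validating a hand-written adjoint code statement by statement.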

M. Jardak