Lecture 3: A Review of ADI and Operator-Splitting Methods for the Solution of Initial Value Problems


1. Preliminary Remarks

Operator-splitting methods have a somewhat controversial reputation: some practitioners see them as particular time-discretization methods, while others view them as iterative methods. They are clearly both, even if in this presentation we focus on the first point of view. Incidentally, let us mention that various numerical methods and algorithms are disguised operator-splitting methods, as we will see in a particular example. Their main feature is to take advantage of decomposition properties in the structure of the problem to be solved; some of these properties are obvious, while others are not.

2. Time-Discretization of Initial Value Problems by ADI and OS Schemes

Let us consider the following initial value problem (a flow, for the dynamical systems community):

(IVP) dφ/dt + A(φ) = 0 in (0, T), φ(0) = φ_0.

The operator A: V → V may be multivalued. Suppose that the decomposition property

(DP) A = Σ_{j=1}^{J} A_j

holds. A natural question is: can we take advantage of (DP) to solve (IVP)? As we all know here, the answer is yes!

2.1. ADI type schemes

We suppose for the moment that J = 2. A first candidate to take advantage of (DP) is provided by the Peaceman-Rachford scheme, namely (with τ > 0 a time-discretization step):

(1) φ^0 = φ_0; for n ≥ 0, φ^n being known, solve
(PRS)
(2) (φ^{n+1/2} − φ^n)/(τ/2) + A_1(φ^{n+1/2}) + A_2(φ^n) = 0,
(3) (φ^{n+1} − φ^{n+1/2})/(τ/2) + A_1(φ^{n+1/2}) + A_2(φ^{n+1}) = 0.

(PRS) is of the backward-Euler type for A_1 and of the forward-Euler type for A_2 on the time interval [t^n, t^{n+1/2}], the situation being reversed on [t^{n+1/2}, t^{n+1}].
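When A_1 and A_2 are linear and represented as matrices, each (PRS) step reduces to two linear solves. The following is a minimal sketch, assuming matrix operators and illustrative diagonal test data (none of it from the lecture):

```python
import numpy as np

def peaceman_rachford(A1, A2, phi0, tau, n_steps):
    """Integrate dphi/dt + (A1 + A2) phi = 0 by (PRS):
    implicit in A1 / explicit in A2 on [t^n, t^{n+1/2}],
    the roles reversed on [t^{n+1/2}, t^{n+1}]."""
    I = np.eye(len(phi0))
    phi = np.asarray(phi0, dtype=float)
    for _ in range(n_steps):
        # (2): (phi_half - phi)/(tau/2) + A1 phi_half + A2 phi = 0
        phi_half = np.linalg.solve(I + 0.5 * tau * A1,
                                   phi - 0.5 * tau * A2 @ phi)
        # (3): (phi_new - phi_half)/(tau/2) + A1 phi_half + A2 phi_new = 0
        phi = np.linalg.solve(I + 0.5 * tau * A2,
                              phi_half - 0.5 * tau * A1 @ phi_half)
    return phi
```

For commuting linear operators (e.g., both diagonal) the computed φ agrees with e^{−(A_1+A_2)T} φ_0 up to O(τ²), consistent with the accuracy remarks below.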

A second candidate is the Douglas-Rachford scheme (of the predictor-corrector type); if J = 2, the DR scheme reads as follows:

(1) φ^0 = φ_0; for n ≥ 0, φ^n being known, solve
(DRS)
(2) (φ^{n+1/2} − φ^n)/τ + A_1(φ^{n+1/2}) + A_2(φ^n) = 0,
(3) (φ^{n+1} − φ^n)/τ + A_1(φ^{n+1/2}) + A_2(φ^{n+1}) = 0.

These schemes, which have been around for about 50 years, have motivated a huge literature, either as methods to approximate the solution of time-dependent problems or as iterative methods to solve linear and nonlinear steady-state problems in finite or infinite dimension. In the second case a strategy of periodically varying τ is advocated (E. Wachspress and others). Further remarks are in order; among them:
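A matrix sketch of (DRS) along the same lines; note in the code that the correction step (3) restarts from φ^n, not from the predictor φ^{n+1/2} (matrices and test data are again illustrative):

```python
import numpy as np

def douglas_rachford(A1, A2, phi0, tau, n_steps):
    """Integrate dphi/dt + (A1 + A2) phi = 0 by (DRS):
    a predictor implicit in A1, then a corrector implicit in A2
    that restarts from phi^n."""
    I = np.eye(len(phi0))
    phi = np.asarray(phi0, dtype=float)
    for _ in range(n_steps):
        # (2): (phi_half - phi)/tau + A1 phi_half + A2 phi = 0
        phi_half = np.linalg.solve(I + tau * A1, phi - tau * A2 @ phi)
        # (3): (phi_new - phi)/tau + A1 phi_half + A2 phi_new = 0
        phi = np.linalg.solve(I + tau * A2, phi - tau * A1 @ phi_half)
    return phi
```

Halving τ roughly halves the error, in line with the O(τ) accuracy noted below.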

● If V is a Hilbert space and A_1 and A_2 are monotone operators (possibly multivalued), both schemes are unconditionally stable.
● The most general convergence results in the context of monotone operators in Hilbert spaces are those of P.L. Lions-B. Mercier (1979).
● The D-R scheme can be generalized to decompositions with more than two operators (J > 2).
● The particular case A = −Δ, A = A_1 + A_2 with A_1 = −∂²/∂x_1² and A_2 = −∂²/∂x_2², explains the terminology ADI, but the applicability of (PRS) and (DRS) goes far beyond this type of decomposition, since A_j can be, for example, the subgradient of a convex functional.
● The above schemes are O(τ), at best, in general; however, if A_1 and A_2 are both linear and commute, (PRS) is O(τ²); T. Arbogast, J. Douglas et al. have found a way to make (DRS) second-order accurate via a slight modification (if A_1, A_2 are smooth enough).
● (PRS) and (DRS) are not stiff A-stable; indeed, (PRS) is very close to the Crank-Nicolson scheme.
● Several French scientists have contributed to ADI; among them, J. Lieutaud, D. Gabay, P.L. Lions, E. Godlewski, M. Schatzman.

On the Russian side there are important contributions by Dyakonov (who passed away recently). What about Spain? Important contributions by E. Fernandez-Cara et al., Bermudez, Moreno, Pares, …

● In order to improve the A-stability properties of (PRS) we introduced (in the mid-1980s) the so-called (by us) θ-scheme (German scientists call it the Fractional Step θ-scheme). The idea is quite simple:
(i) Consider θ ∈ (0, ½) (0 < θ < 1/3, in practice).
(ii) Denote (n + α)τ by t^{n+α}.
(iii) Decompose [t^n, t^{n+1}] as [t^n, t^{n+1}] = [t^n, t^{n+θ}] ∪ [t^{n+θ}, t^{n+1−θ}] ∪ [t^{n+1−θ}, t^{n+1}].

Use the above decomposition of the time interval [t^n, t^{n+1}] to time-discretize the initial value problem (IVP) by the following variant of the Peaceman-Rachford scheme (where θ* = 1 − 2θ):

Description of the θ-Scheme

(1) φ^0 = φ_0; for n ≥ 0, φ^n being known, solve
(2) (φ^{n+θ} − φ^n)/(θτ) + A_1(φ^{n+θ}) + A_2(φ^n) = 0,
(θ-scheme)
(3) (φ^{n+1−θ} − φ^{n+θ})/(θ*τ) + A_1(φ^{n+θ}) + A_2(φ^{n+1−θ}) = 0,
(4) (φ^{n+1} − φ^{n+1−θ})/(θτ) + A_1(φ^{n+1}) + A_2(φ^{n+1−θ}) = 0.
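For linear operators the three fractional steps are three linear solves. A sketch with illustrative diagonal matrices; with θ = 1 − 1/√2 and commuting operators the observed error is indeed O(τ²):

```python
import numpy as np

def theta_scheme(A1, A2, phi0, tau, n_steps, theta=1 - 1/np.sqrt(2)):
    """Fractional-step theta-scheme (2)-(4) for dphi/dt + (A1 + A2) phi = 0,
    with theta* = 1 - 2*theta."""
    ts = 1.0 - 2.0 * theta
    I = np.eye(len(phi0))
    phi = np.asarray(phi0, dtype=float)
    for _ in range(n_steps):
        # (2): implicit in A1, explicit in A2, on [t^n, t^{n+theta}]
        p1 = np.linalg.solve(I + theta * tau * A1, phi - theta * tau * A2 @ phi)
        # (3): explicit in A1, implicit in A2, on [t^{n+theta}, t^{n+1-theta}]
        p2 = np.linalg.solve(I + ts * tau * A2, p1 - ts * tau * A1 @ p1)
        # (4): implicit in A1, explicit in A2, on [t^{n+1-theta}, t^{n+1}]
        phi = np.linalg.solve(I + theta * tau * A1, p2 - theta * tau * A2 @ p2)
    return phi
```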

Some Properties of the θ-Scheme

● Its properties are more problem dependent than those of P-R and D-R; basically, for 1/4 < θ < 1/3 the scheme is stiff A-stable. It is only O(τ), but there are situations where θ = 1 − 1/√2 makes it O(τ²) (actually, nearly O(τ³); it would be O(τ³) if √2 = 1.5).
● Quite good at capturing steady-state solutions (compared to (PRS) and (DRS)).
● Very popular in Germany (R. Rannacher, S. Turek, E. Bänsch, V. Heuveline and others) for the simulation of incompressible viscous flow. Also used in various places (Australia among them) for simulating non-Newtonian fluid flow.

2.2. LIE'S, STRANG'S and MARCHUK-YANENKO Schemes

Co-existing with ADI, there is another family of operator-splitting schemes going back (cf. Chorin, Hughes, Marsden et al., CPAM 1978) to S. Lie and widely used in CFD, physics, chemistry, and by the Russian school, among others. It is related to the Trotter formula in semi-group theory. The basic idea is very simple: suppose that A is linear in (IVP), namely that

(IVP) dφ/dt + Aφ = 0 in (0, T), φ(0) = φ_0.

We then have φ(t) = e^{−At} φ_0, and therefore:

(FSGR) φ(t + τ) = e^{−Aτ} φ(t), ∀ t ≥ 0.

We also have

(FOSR1) e^{−A_2 τ} e^{−A_1 τ} − e^{−(A_1 + A_2)τ} = ½(A_2 A_1 − A_1 A_2)τ² + O(τ³),

and

(FOSR2) e^{−(1/2)A_1 τ} e^{−A_2 τ} e^{−(1/2)A_1 τ} − e^{−(A_1 + A_2)τ} = O(τ³) (= 0 if [A_1, A_2] = 0).

Relations (FSGR) and (FOSR1) lead to the following OS scheme, known by some as Lie's scheme:

Description of Lie's Scheme (I)

Condensed form:

(1) φ^0 = φ_0; for n ≥ 0, φ^n → φ^{n+1/2} → φ^{n+1} as follows:
(LIE'S SCHEME)
(2) φ^{n+1/2} = e^{−A_1 τ} φ^n,
(3) φ^{n+1} = e^{−A_2 τ} φ^{n+1/2}.

Lie's scheme is exact if A_1 and A_2 commute.
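The exactness in the commuting case is easy to check numerically: for diagonal A_1, A_2 each sub-flow has the closed-form solution e^{−A_j τ}, and one Lie step reproduces e^{−(A_1+A_2)τ} to machine precision. A sketch with illustrative data:

```python
import numpy as np

def lie_step(phi, a1, a2, tau):
    """One Lie step (2)-(3) with diagonal operators a1, a2 (1-D arrays);
    each sub-initial value problem is solved in closed form."""
    phi = np.exp(-tau * a1) * phi   # (2): phi^{n+1/2} = e^{-A1 tau} phi^n
    phi = np.exp(-tau * a2) * phi   # (3): phi^{n+1}   = e^{-A2 tau} phi^{n+1/2}
    return phi
```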

Description of Lie's Scheme (II)

Practical and generalized form (with 0 ≤ α, β ≤ 1, α + β = 1):

(1) φ^0 = φ_0; for n ≥ 0, φ^n → φ^{n+1/2} → φ^{n+1} as follows:
(2) dφ/dt + A_1[φ, t^n + α(t − t^n)] = 0 on (t^n, t^{n+1}), φ(t^n) = φ^n; set φ^{n+1/2} = φ(t^{n+1}),
(3) dφ/dt + A_2[φ, t^{n+α} + β(t − t^n)] = 0 on (t^n, t^{n+1}), φ(t^n) = φ^{n+1/2}; set φ^{n+1} = φ(t^{n+1}).

Some Properties of Lie's Scheme

● Easily generalizable to J > 2 (when simulating particulate flow we may have J ≈ 10).
● Unconditionally stable if the A_j are monotone operators (possibly multivalued). Indeed, this scheme is quite robust.
● First-order accurate at most, in general.
● Different time and space discretizations can be used to discretize the sub-problems (2) and (3) (including closed-form solutions).
● Lie's scheme is asymptotically inconsistent: when applied as an iterative solver to compute a steady-state solution, φ^n and φ^{n+1/2} converge in general to different limits, whose distance to the exact solution is O(τ) at best [it may happen that ½(φ^n + φ^{n+1/2}) has better convergence properties]. This makes some practitioners worry (I was one of them) if steady-state solutions are required.

Description of the Strang Scheme (I)

Relations (FSGR) and (FOSR2) lead to the following OS scheme, known as the Strang scheme:

(1) φ^0 = φ_0; for n ≥ 0, φ^n → φ^{n+1/2} → φ^{#,n+1/2} → φ^{n+1} as follows:
(2) dφ/dt + A_1(φ, t) = 0 on (t^n, t^{n+1/2}), φ(t^n) = φ^n; set φ^{n+1/2} = φ(t^{n+1/2}),
(3) dφ/dt + A_2(φ, t^{n+1/2}) = 0 on (0, τ), φ(0) = φ^{n+1/2}; set φ^{#,n+1/2} = φ(τ),

Description of the Strang Scheme (II)

(4) dφ/dt + A_1(φ, t) = 0 on (t^{n+1/2}, t^{n+1}), φ(t^{n+1/2}) = φ^{#,n+1/2}; set φ^{n+1} = φ(t^{n+1}).

Properties of the Strang Scheme:
● Unconditionally stable if the A_j are monotone operators.
● O(τ²) if the A_j are smooth enough.
● Generalizable to J > 2.
● Asymptotically inconsistent, but provides steady-state solutions whose distance to the exact solution is O(τ²).
● The sub-problems (2)-(4) have to be solved by schemes which are themselves (at least) second-order accurate.
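The O(τ²) accuracy can be observed numerically. In the sketch below the sub-flows are integrated exactly via the eigendecomposition of (illustrative, non-commuting) symmetric matrices, so the only error is the splitting error; halving τ divides it by about 4:

```python
import numpy as np

def expm_sym(A):
    """Matrix exponential of a symmetric matrix via eigendecomposition."""
    w, V = np.linalg.eigh(A)
    return (V * np.exp(w)) @ V.T

def strang(A1, A2, phi0, tau, n_steps):
    """Strang splitting e^{-A1 tau/2} e^{-A2 tau} e^{-A1 tau/2} per step,
    each sub-flow integrated exactly (A1, A2 symmetric matrices here)."""
    half = expm_sym(-0.5 * tau * A1)
    full = expm_sym(-tau * A2)
    phi = np.asarray(phi0, dtype=float)
    for _ in range(n_steps):
        phi = half @ (full @ (half @ phi))
    return phi
```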

The following implicit Runge-Kutta scheme of order two (due to J. Cash) is well suited to such a task. When applied to the solution of

(IVP) dX/dt + f(X, t) = 0 on (t_0, t_f), X(t_0) = X_0,

the scheme reads as follows:

(1) X^0 = X_0; then for m ≥ 0,

(2) X^{m+θ} + θΔt f(X^{m+θ}, t^{m+θ}) = X^m,
(3) X^{m+1−θ} = ((1 − θ)/θ) X^{m+θ} + ((2θ − 1)/θ) X^m,
(4) X^{m+1} + θΔt f(X^{m+1}, t^{m+1}) = X^{m+1−θ}.

The above scheme is stiff A-stable and second-order accurate if θ = 1 − 1/√2 (actually, almost third-order accurate).
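A sketch of this scheme for a scalar problem. The implicit stages (2) and (4) are solved here by fixed-point iteration, a choice made for this illustration only (for stiff problems one would use Newton's method); the test problem dX/dt + X = 0 is likewise illustrative:

```python
import numpy as np

def cash_theta_step(f, x, t, dt, theta=1 - 1/np.sqrt(2), n_fp=60):
    """One step of the implicit scheme (2)-(4) for dX/dt + f(X, t) = 0.
    The implicit stages are solved by fixed-point iteration, which
    converges when theta*dt*L < 1 (L a Lipschitz constant of f)."""
    # (2): X^{m+theta} + theta*dt*f(X^{m+theta}, t^{m+theta}) = X^m
    t_th = t + theta * dt
    y = x
    for _ in range(n_fp):
        y = x - theta * dt * f(y, t_th)
    x_th = y
    # (3): linear extrapolation to X^{m+1-theta}
    x_ext = (1 - theta) / theta * x_th + (2 * theta - 1) / theta * x
    # (4): X^{m+1} + theta*dt*f(X^{m+1}, t^{m+1}) = X^{m+1-theta}
    t1 = t + dt
    y = x_ext
    for _ in range(n_fp):
        y = x_ext - theta * dt * f(y, t1)
    return y

def integrate(f, x0, t0, tf, n_steps):
    """March from t0 to tf with n_steps equal steps of the scheme."""
    dt = (tf - t0) / n_steps
    x, t = x0, t0
    for _ in range(n_steps):
        x = cash_theta_step(f, x, t, dt)
        t += dt
    return x
```

On dX/dt + X = 0 the scheme exhibits the expected second-order convergence (halving Δt divides the error by about 4).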

● There exist (e.g., M. Schatzman) variants of the Strang scheme which are O(τ⁴), but they are not unconditionally stable.

Description of the Marchuk-Yanenko Scheme

Suppose that, to implement Lie's scheme, we discretize the sub-initial value problems using just one step of the backward Euler scheme. We then obtain the following scheme:

(1) φ^0 = φ_0; for n ≥ 0, φ^n → φ^{n+1/J} → … → φ^{n+(J−1)/J} → φ^{n+1} as follows:
(2) (φ^{n+j/J} − φ^{n+(j−1)/J})/τ + A_j(φ^{n+j/J}, t^{n+1}) = 0, for j = 1, 2, …, J.

The above scheme is known as the Marchuk-Yanenko scheme. If the A_j are monotone, the scheme is unconditionally stable. It is O(τ) at best, robust, and relatively easy to implement.

3. ADI and Augmented Lagrangian Algorithms

(i) Consider the following functional J: V → R; assume that J is differentiable and that

(DP) J = J_1 + J_2,

J_1 and J_2 being both differentiable.
(ii) Consider now the minimization problem

(MP) Min_{v ∈ V} J(v).

(iii) Suppose that (MP) has a solution, denoted by u. We then have:

(OC) J'(u) = J_1'(u) + J_2'(u) = 0.

(iv) Problem (MP) is clearly equivalent to

(EMP) Min_{{v, q} ∈ W} j(v, q),

with W = {{v, q} ∈ V × V | v − q = 0} and j(v, q) = J_1(v) + J_2(q).
(v) Observe that {u, u} is a solution of problem (EMP).

(vi) Assuming that V is a real Hilbert space, we associate to (EMP) the following saddle-point problem: find {{u, p}, λ} ∈ (V × V) × V such that

(SDP) L_r(u, p; μ) ≤ L_r(u, p; λ) ≤ L_r(v, q; λ), ∀ {{v, q}, μ} ∈ (V × V) × V,

where r > 0 and

(LA) L_r(v, q; μ) = j(v, q) + (r/2) ‖v − q‖² + (μ, v − q).

(vii) If {{u, p}, λ} is a solution of (SDP), then p = u, where u is a solution of (MP).

(viii) To solve (SDP) we use the following relaxation/Uzawa algorithm ((ALG2) in various related references):

(1) {u^{−1}, λ^0} is given in V × V; for n ≥ 0, {u^{n−1}, λ^n} → p^n → u^n → λ^{n+1} as follows:
(2) J_2'(p^n) + r(p^n − u^{n−1}) − λ^n = 0,
(3) J_1'(u^n) + r(u^n − p^n) + λ^n = 0,
(4) λ^{n+1} = λ^n + r(u^n − p^n).

Comparing (3) and (4) shows that λ^{n+1} = −J_1'(u^n), which in turn implies:
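A scalar sketch of (ALG2) with the illustrative choices J_1(v) = ½(v − 1)² and J_2(v) = ½(v − 3)², so that the minimizer of J = J_1 + J_2 is u = 2; steps (2)-(4) then become explicit updates:

```python
def alg2(r=1.0, n_iter=200):
    """Relaxation/Uzawa algorithm (2)-(4) for J1(v) = (v-1)^2/2 and
    J2(v) = (v-3)^2/2, so J1'(v) = v - 1 and J2'(v) = v - 3."""
    u, lam = 0.0, 0.0   # u^{-1} and lambda^0
    for _ in range(n_iter):
        # (2): J2'(p) + r (p - u) - lam = 0
        p = (3.0 + r * u + lam) / (1.0 + r)
        # (3): J1'(u_new) + r (u_new - p) + lam = 0
        u = (1.0 + r * p - lam) / (1.0 + r)
        # (4): multiplier update
        lam = lam + r * (u - p)
    return u, p, lam
```

The iterates converge to u = p = 2 and λ = −J_1'(u) = −1, in agreement with the identity λ^{n+1} = −J_1'(u^n).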

AL ⇒ ADI

(a) r(p^n − u^{n−1}) + J_2'(p^n) + J_1'(u^{n−1}) = 0,
(b) r(u^n − u^{n−1}) + J_2'(p^n) + J_1'(u^n) = 0.

We have thus recovered the Douglas-Rachford scheme with τ = 1/r. One can recover the Peaceman-Rachford scheme by updating λ^n a first time between (2) and (3).

4. A Striking Application

Let A be a d × d matrix, symmetric and positive definite. We denote by λ_1 the smallest eigenvalue of A; we then have:

(EVM) λ_1 = Min_{v ∈ S} Av·v, with S = {v ∈ R^d | ‖v‖ = 1}.

(EVM) is equivalent to

(EVM-P) Min_{v ∈ R^d} [½ Av·v + I_S(v)],

with I_S(v) = 0 if v ∈ S and I_S(v) = +∞ if v ∈ R^d \ S; i.e., I_S(·) is the indicator functional of S. If u solves (EVM), we have the following optimality condition:

(NOC) Au + ∂I_S(u) = 0,

with ∂I_S(u) a generalized differential of I_S at u. To (NOC) we associate the following flow, which we time-discretize by the Marchuk-Yanenko scheme:

(NOC-F) du/dt + Au + ∂I_S(u) = 0, u(0) = u_0,

and we obtain:

(1) u^0 = u_0;
(M-Y) for n ≥ 0, u^n → u^{n+1/2} → u^{n+1} as follows:

(2) (u^{n+1/2} − u^n)/τ + Au^{n+1/2} = 0,
(3) (u^{n+1} − u^{n+1/2})/τ + ∂I_S(u^{n+1}) = 0.

Eq. (2) implies that:

(2)' u^{n+1/2} = (I + τA)^{−1} u^n.

Observing that I_S = τI_S, Eq. (3) implies that u^{n+1} = Arg max_{v ∈ S} u^{n+1/2}·v, namely

(3)' u^{n+1} = u^{n+1/2}/‖u^{n+1/2}‖.

We have thus reinvented the inverse power method with shift.
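A sketch of the resulting iteration for a small symmetric positive definite matrix (the matrix in the test is illustrative); it is literally the inverse power method applied to I + τA:

```python
import numpy as np

def smallest_eig_via_my(A, tau=1.0, n_iter=100):
    """(M-Y) steps (2)'-(3)': one backward-Euler step for Au,
    i.e. u <- (I + tau A)^{-1} u, followed by projection onto the
    unit sphere S -- the inverse power method with shift."""
    n = A.shape[0]
    u = np.ones(n) / np.sqrt(n)          # u^0 on S
    M = np.eye(n) + tau * A
    for _ in range(n_iter):
        u = np.linalg.solve(M, u)        # (2)': u^{n+1/2} = (I + tau A)^{-1} u^n
        u = u / np.linalg.norm(u)        # (3)': projection back onto S
    return u @ A @ u                     # Rayleigh quotient -> lambda_1
```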

5. Another Application: Bingham Flow in a Pipe

Participating in the Fall of 2005 in a conference on visco-plasticity in Banff (BC), we had the pleasant surprise to discover that the visco-plastic community was quite in favor of the Augmented Lagrangian (AL) approach for the numerical simulation of visco-plastic flow with a yield stress, like Bingham's. We are thus going to discuss the AL solution of a simple Bingham flow problem as an illustration of the above methodology. Let Ω be a bounded domain of R²; we denote by Γ the boundary of Ω. We then consider the following problem from the Calculus of Variations:

(BFP) u ∈ H¹₀(Ω), J(u) ≤ J(v), ∀ v ∈ H¹₀(Ω),

with

J(v) = (μ/2) ∫_Ω |∇v|² dx + g ∫_Ω |∇v| dx − C ∫_Ω v dx.

The idea here is to uncouple nonlinearity and derivatives; to do that we treat ∇v as an additional unknown q and force the relation ∇v − q = 0 by penalty and a Lagrange multiplier. To implement the above idea, we proceed as follows:

(i) Introduce Q = (L²(Ω))², W = {{v, q} | v ∈ H¹₀(Ω), q ∈ Q, q = ∇v}, and
j(v, q) = (μ/2) ∫_Ω |∇v|² dx + g ∫_Ω |q| dx − C ∫_Ω v dx.

(ii) Observe that (BFP) is equivalent to:

(BFP-E) {u, p} ∈ W, j(u, p) ≤ j(v, q), ∀ {v, q} ∈ W.

(iii) Introduce the following augmented Lagrangian L_r (with r > 0), from (H¹₀(Ω) × Q) × Q into R:

L_r({v, q}, μ) = j(v, q) + (r/2) ∫_Ω |∇v − q|² dx + ∫_Ω μ·(∇v − q) dx,

and observe that if {{u, p}, λ} is a saddle-point of L_r over the space (H¹₀(Ω) × Q) × Q, i.e., {{u, p}, λ} ∈ (H¹₀(Ω) × Q) × Q and

(SDPP) L_r({u, p}, μ) ≤ L_r({u, p}, λ) ≤ L_r({v, q}, λ), ∀ {{v, q}, μ} ∈ (H¹₀(Ω) × Q) × Q,

then {u, p} solves (BFP-E), which implies that u solves (BFP) and p = ∇u.

(iv) In order to solve (BFP-E) we advocate the following algorithm (a disguised Douglas-Rachford ADI algorithm):

(1) u^{−1} ∈ H¹₀(Ω) and λ^0 ∈ Q are given. For n ≥ 0, u^{n−1} and λ^n being known, solve
(2) p^n ∈ Q, L_r({u^{n−1}, p^n}, λ^n) ≤ L_r({u^{n−1}, q}, λ^n), ∀ q ∈ Q,
then
(3) u^n ∈ H¹₀(Ω), L_r({u^n, p^n}, λ^n) ≤ L_r({v, p^n}, λ^n), ∀ v ∈ H¹₀(Ω),
and finally
(4) λ^{n+1} = λ^n + r(∇u^n − p^n).

The convergence follows from, e.g., R. Glowinski & P. Le Tallec (1989).

The sub-problems (2) and (3) are simpler than they look, since:

(a) p^n(x) = (1/r)(1 − g/|X^n(x)|) X^n(x) if |X^n(x)| > g, and p^n(x) = 0 if |X^n(x)| ≤ g, with X^n = r∇u^{n−1} + λ^n.

(b) Problem (3) is equivalent to

(LVP) u^n ∈ H¹₀(Ω), (μ + r) ∫_Ω ∇u^n·∇v dx = C ∫_Ω v dx + ∫_Ω (r p^n − λ^n)·∇v dx, ∀ v ∈ H¹₀(Ω),

a 'simple' linear problem indeed.
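Step (a) is a pointwise shrinkage (soft-thresholding) operation on X^n = r∇u^{n−1} + λ^n. A vectorized sketch, with illustrative sample points:

```python
import numpy as np

def shrink(X, g, r):
    """Closed-form minimizer p of q -> g|q| + (r/2)|q|^2 - X.q at each
    point: p = (1/r)(1 - g/|X|) X if |X| > g, and p = 0 otherwise.
    X has shape (..., 2): one 2-D vector per grid point."""
    norm = np.linalg.norm(X, axis=-1, keepdims=True)
    safe = np.maximum(norm, g)           # avoids division by zero below
    return np.where(norm > g, (1.0 - g / safe) / r * X, 0.0)
```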

6. More applications

Above, we have visualized the behavior of a mixture of solid spherical particles with a Newtonian incompressible viscous fluid in a rotating cylinder, the number of particles being 160 here. When the angular velocity is sufficiently large, the particles cluster into 3 sub-populations of approximately equal size (this is reminiscent of the formation of Bénard rolls in heated flow). The simulation relies on Lie's scheme, with J of the order of 10. We could have illustrated our presentation with many more results from numerical experiments related to many areas of application. Indeed, new applications of ADI/OS are discovered almost every day (e.g., Monge-Ampère type equations), and we are not sure that the scientific community is fully aware of the capabilities of the ADI/OS methods.

7. Some References

[1] GLOWINSKI, R. and P. LE TALLEC, Augmented Lagrangians and Operator-Splitting Methods in Nonlinear Mechanics, SIAM, Philadelphia, PA, 1990.

[2] GLOWINSKI, R., Finite element methods for incompressible viscous flow. In Handbook of Numerical Analysis, Vol. IX, P.G. Ciarlet and J.L. Lions, eds., North-Holland, Amsterdam, 2003.
[3] MARCHUK, G.I., Splitting and alternating direction methods. In Handbook of Numerical Analysis, Vol. I, P.G. Ciarlet and J.L. Lions, eds., North-Holland, Amsterdam, 1990.
[4] DEAN, E.J. and R. GLOWINSKI, An augmented Lagrangian approach to the numerical solution of the Dirichlet problem for the elliptic Monge-Ampère equation in two dimensions, Electronic Transactions on Numerical Analysis, 22 (2006).

Additional references on ADI and OS can be found in refs. [1]-[3]. In particular, Chapters 2 and 6 of ref. [2] are almost self-contained introductions to ADI and OS methods. A recent reference concerning inverse problems (of 4th order) of elliptic nature is:

[5] DELBOS, F., J.CH. GILBERT, R. GLOWINSKI and D. SINOQUET, Constrained optimization in seismic reflection tomography: a Gauss-Newton augmented Lagrangian approach, Geophysical Journal International, 164 (2006).