MADRID LECTURE # 6 On the Numerical Solution of the Elliptic Monge-Ampère Equation in 2-D A Least Squares Approach

1. Introduction

Our goal here is to discuss the least-squares solution in H²(Ω) of some fully nonlinear elliptic equations of Monge-Ampère type in 2-D. Why H²(Ω) and why least-squares?
● Because, from a computational point of view, there is always an advantage in solving a given problem in a Hilbert space, and here H²(Ω) is a natural choice.
● Least-squares methods are well suited to Hilbert spaces and apparently provide an alternative to viscosity-solution-based methods.

Introduction (2)

We will focus only on the solution of the Dirichlet problem for the canonical Monge-Ampère equation
det D²ψ = f in Ω,  ψ = g on Γ,  (E-MA-D)
with Ω ⊂ R² and f > 0, but "our" methodology also applies (among other problems) to the Pucci-Dirichlet problem
αλ⁺ + λ⁻ = 0 in Ω,  ψ = g on Γ,  (PUC-D)

Introduction (3)

with λ⁺ (resp., λ⁻) the largest (resp., the smallest) eigenvalue of the matrix-valued function (Hessian) D²ψ, and α ∈ (1, +∞) (if α = 1, one recovers the linear Poisson-Dirichlet problem). The Gaussian curvature equation
det D²ψ = f (1 + |∇ψ|²)² in Ω
is also on our agenda.

Introduction (4)

The Mathematics of Monge-Ampère type equations has generated a large literature (Th. Aubin, L. A. Caffarelli, …). On the other hand (cf. Google Scholar), one cannot say the same of their Numerics, with some notable exceptions such as Oliker-Prussner and Benamou-Brenier, and more recently A. Oberman; indeed (from B-B): "It follows from this theoretical result that a natural computational solution of the L2 MKP is the numerical resolution of the Monge-Ampère equation (6). Unfortunately, this fully nonlinear second-order elliptic equation has not received much attention from numerical analysts and, to the best of our knowledge, there is no efficient finite-difference or finite-element methods, comparable to those developed for linear second-order elliptic equations (such as fast Poisson solvers, multigrid methods, preconditioned conjugate gradient methods, …)." Our goal is to show that several of the tools mentioned in the above statement concerning the solution of linear second-order elliptic problems still apply to these fully nonlinear elliptic equations.

2. A least-squares method for the elliptic Monge-Ampère equation in dimension 2

The Dirichlet problem for the prototypical Monge-Ampère equation reads as follows:
det D²ψ = f in Ω,  ψ = g on Γ.  (MA-D)
If f is positive, the above equation is elliptic (E-MA-D). This equation is somewhat tricky. Take Ω = (0, 1)² and consider the particular case of (E-MA-D) defined by
∂²ψ/∂x₁² ∂²ψ/∂x₂² – |∂²ψ/∂x₁∂x₂|² = 1 in Ω,  ψ = 0 on Γ.  (2.1)
Clearly, (2.1) cannot have smooth solutions, despite the smoothness of its data: at a corner of Ω, ψ = 0 on both adjacent edges forces both tangential second derivatives to vanish there, so det D²ψ ≤ 0 at the corner, contradicting det D²ψ = 1. The trouble lies with the non-strict convexity of Ω.

Section 2 (2)

From now on, we suppose that f > 0 and that {f, g} ∈ L¹(Ω) × H^{3/2}(Γ), implying that the following space and set are nonempty (for instance, √f I ∈ Q_f, since f ∈ L¹(Ω) implies √f ∈ L²(Ω)):
V_g = {φ | φ ∈ H²(Ω), φ = g on Γ},  Q_f = {q | q ∈ Q, det q = f},
with Q = {q | q ∈ (L²(Ω))^{2×2}, q = qᵗ}.
Solving the Monge-Ampère equation in H²(Ω) is equivalent to finding the intersection in Q of D²V_g and Q_f.

Section 2 (3)

Illustration: D²V_g and Q_f intersect, so (E-MA-D) has a solution in H²(Ω).

Section 2 (3)

Illustration: D²V_g and Q_f do not intersect, so (E-MA-D) has no solution in H²(Ω).

Section 2 (4)

In order to handle those situations where (E-MA-D) has no solution in H²(Ω), despite the fact that neither V_g nor Q_f is empty, we suggest solving the above problem via the following least-squares formulation:
Min_{φ,q} { ½ ∫_Ω |D²φ – q|² dx },  {φ, q} ∈ V_g × Q_f,  (LSQ)
with |q| = (q₁₁² + q₂₂² + 2q₁₂²)^{1/2}.

Section 2 (5)

In order to solve (LSQ) by operator-splitting techniques, we observe that (LSQ) is equivalent to
Min_{φ,q} { ½ ∫_Ω |D²φ – q|² dx + I_f(q) },  {φ, q} ∈ V_g × Q,  (LSQ-P)
with I_f(q) = 0 if q ∈ Q_f, and I_f(q) = +∞ if q ∈ Q \ Q_f; i.e., I_f is the indicator functional of Q_f.

Section 2 (6)

We can now solve (LSQ-P) by a block relaxation method alternating between V_g and Q_f. A closely related algorithm is obtained as follows:
(i) Derive the Euler-Lagrange equation of (LSQ-P), namely: find {ψ, p} ∈ V_g × Q such that
∫_Ω D²ψ : D²φ dx = ∫_Ω p : D²φ dx,  ∀φ ∈ V_0,
∫_Ω p : q dx + ⟨∂I_f(p), q⟩ = ∫_Ω D²ψ : q dx,  ∀q ∈ Q,

Section 2 (7)

with V_0 = H²(Ω) ∩ H₀¹(Ω) and ∂I_f(p) a generalized differential of I_f at p.
(ii) Associate with this Euler-Lagrange equation an initial value problem (flow) in V_g × Q.
(iii) Use operator splitting to time-discretize the above flow problem.

Section 2 (8)

The above program leads to the following algorithm:

Section 2 (9)

(1) {ψ⁰, p⁰} = {ψ₀, p₀}; for n ≥ 0, {ψⁿ, pⁿ} being known, solve for {ψⁿ⁺¹, pⁿ⁺¹}:
(2) (pⁿ⁺¹ – pⁿ)/τ + pⁿ⁺¹ + ∂I_f(pⁿ⁺¹) ∋ D²ψⁿ,
(3) ψⁿ⁺¹ ∈ V_g,  ∫_Ω Δ[(ψⁿ⁺¹ – ψⁿ)/τ] Δφ dx + ∫_Ω D²ψⁿ⁺¹ : D²φ dx = ∫_Ω pⁿ⁺¹ : D²φ dx,  ∀φ ∈ V_0,

Section 2 (10)

with r : s = r₁₁s₁₁ + r₂₂s₂₂ + 2r₁₂s₁₂ if r = rᵗ, s = sᵗ. Problem (2) can be solved point-wise, while problem (3) can be solved by a conjugate gradient algorithm operating in V_g and V_0 equipped with the scalar product {v, w} → ∫_Ω Δv Δw dx. Each iteration of the above c.g. algorithm requires the solution of two Poisson-Dirichlet problems.
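As an illustration (not part of the original slides), here is a minimal Python sketch of the outer splitting loop (1)-(3). The two callbacks are hypothetical placeholders for the problem-specific solvers described above: one solves the local algebraic problem (2) at each vertex, the other solves the linear fourth-order problem (3), e.g. by the conjugate gradient algorithm just mentioned.

```python
def ls_operator_splitting(psi0, p0, tau, n_steps, pointwise_step, variational_step):
    """Sketch of the splitting algorithm (1)-(3); both callbacks are placeholders.

    pointwise_step(psi_n, p_n, tau):
        solves (2) vertex by vertex; in effect, p_{n+1} is the symmetric 2x2 matrix
        with det = f closest (for the norm |.| above) to (p_n + tau * D2 psi_n) / (1 + tau).
    variational_step(psi_n, p_np1, tau):
        solves the linear variational problem (3) for psi_{n+1} in V_g, e.g. by
        conjugate gradient with two Poisson-Dirichlet solves per iteration.
    """
    psi, p = psi0.copy(), p0.copy()
    for n in range(n_steps):
        p = pointwise_step(psi, p, tau)       # step (2): nonlinear, local
        psi = variational_step(psi, p, tau)   # step (3): linear, global
    return psi, p
```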

3. Finite Element Approximation of (E-MA-D). Numerical Experiments

3.1. Finite Element Approximation

Suppose that T_h is a finite element triangulation of Ω; we approximate H²(Ω), V_g, V_0 (= H²(Ω) ∩ H₀¹(Ω)), Q and Q_f by:
(3.1) V_h = {φ | φ ∈ C⁰(Ω ∪ Γ), φ|_T ∈ P₁, ∀T ∈ T_h},
(3.2) V_gh = {φ | φ ∈ V_h, φ(P) = g(P), ∀P vertex of T_h on Γ},
(3.3) V_0h = {φ | φ ∈ V_h, φ = 0 on Γ},
(3.4) Q_h = {q | q ∈ (V_0h)^{2×2}, q = qᵗ},
(3.5) Q_fh = {q | q ∈ Q_h, (q₁₁q₂₂ – q₁₂²)(P) = f_h(P), ∀P vertex of T_h with P ∉ Γ},
with f_h a continuous approximation of f.

Section 3 (2)

Next, we approximate ∂²φ/∂x_i∂x_j by D_ijh(φ), defined as follows for 1 ≤ i, j ≤ 2:
D_ijh(φ) ∈ V_0h,  ∫_Ω D_ijh(φ) v dx = – ½ ∫_Ω [∂φ/∂x_i ∂v/∂x_j + ∂φ/∂x_j ∂v/∂x_i] dx,  ∀v ∈ V_0h,  ∀φ ∈ V_h.
This is a mixed finite element approximation of the second-order derivatives, classically used for solving linear and nonlinear biharmonic problems (Cahn-Hilliard, von Kármán equations for plates, Navier-Stokes equations in their {ψ, ω} formulation, etc.).
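In matrix form, the above definition amounts to one small linear solve per pair (i, j). The sketch below is an illustration, not taken from the slides; it assumes that a standard P1 code has already assembled the mass matrix M on V_0h and the "cross-stiffness" matrices K_ij, and the function name and arguments are hypothetical.

```python
import scipy.sparse.linalg as spla

def discrete_second_derivative(M, K_ij, K_ji, phi):
    """Nodal values of D_ijh(phi), the mixed FE approximation of d^2 phi / dx_i dx_j.

    M    : sparse mass matrix on V_0h,     M[k, l] = ∫ w_k w_l dx
    K_ij : sparse cross-stiffness matrix,  K_ij[k, l] = ∫ (∂φ_l/∂x_i)(∂w_k/∂x_j) dx,
           with φ_l the V_h basis and w_k the V_0h basis (so K_ij may be rectangular)
    K_ji : same matrix with i and j exchanged
    phi  : vector of nodal values of a function in V_h
    """
    rhs = -0.5 * (K_ij + K_ji) @ phi      # right-hand side of the variational identity
    return spla.spsolve(M.tocsc(), rhs)   # coefficients of D_ijh(phi) in the V_0h basis
```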

Section 3 (3)

Deriving a discrete analogue of the above least-squares formulation of (E-MA-D) is pretty obvious now.

3.2. Numerical Experiments

The first test problem is defined as follows:
(i) Ω = (0, 1) × (0, 1).
(ii) f(x) = 1/|x|, ∀x ∈ Ω.
(iii) g(x) = (2|x|)^{3/2}/3, ∀x ∈ Γ.
With these data one can easily show that the function ψ defined by ψ(x) = (2|x|)^{3/2}/3, ∀x ∈ Ω, is a solution of the corresponding (E-MA-D) problem. This function does not belong to C²(Ω ∪ Γ) but belongs to W^{2,p}(Ω) for p ∈ [1, 4); it has, in principle, enough regularity to be handled by our approach. We have used a uniform mesh like the one below.

Section 3 (4)

A uniform triangulation of Ω (h = 1/4).
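As a side check (not in the original slides), the stated exact solution of the first test problem can be verified symbolically: the SymPy snippet below confirms that ψ(x) = (2|x|)^{3/2}/3 satisfies det D²ψ = 1/|x|.

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2', positive=True)
r = sp.sqrt(x1**2 + x2**2)                  # r = |x|
psi = (2*r)**sp.Rational(3, 2) / 3          # exact solution of the first test problem

# Hessian of psi and the residual of the Monge-Ampere equation det D^2 psi = 1/|x|
H = sp.Matrix(2, 2, lambda i, j: sp.diff(psi, [x1, x2][i], [x1, x2][j]))
residual = sp.simplify(H.det() - 1/r)

print(residual)   # expect 0, i.e. det D^2 psi = 1/|x| as claimed
```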

Section 3 (5)

First Test Problem. [Table: for h = 1/32 and h = 1/64, the numbers of iterations nit, the least-squares residuals ||D²_h ψ^c_h – p^c_h||_Q and the errors ||ψ^c_h – ψ||_{L²(Ω)}, the latter being of the order of 10⁻⁴.] The above results suggest an approximation error of order O(h²) in the L²(Ω)-norm.
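A quick way to check the O(h²) claim from such a table (an aside, not from the slides): compare the errors at two successive mesh sizes. The variable names below are placeholders for the tabulated values.

```python
import math

def observed_order(err_h, err_h_half):
    """Observed convergence order from errors at mesh sizes h and h/2."""
    return math.log2(err_h / err_h_half)

# With the tabulated L2 errors at h = 1/32 and h = 1/64 (placeholders here),
# an observed order close to 2 is consistent with an O(h^2) error:
# observed_order(err_1_32, err_1_64)
```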

First test problem: Graph of f

First test problem: Graph of ψ^c_h

Section 3 (8)

Second Test Problem. Data: Ω = (0,1)×(0,1), f = 1, g = 0 (this is precisely problem (2.1) above, which has no smooth solution). Results:

Section 3 (9) Second Test Problem

Section 3 (10) Second Test Problem

4. Other Fully Nonlinear Elliptic Equations

With some subtle differences, the methodology we applied to the solution of the Monge-Ampère equation also applies to the solution of the following Pucci equation:
αλ⁺ + λ⁻ = 0 in Ω,  Ψ = g on ∂Ω,  (PE)
where:
(i) α ∈ (1, +∞).
(ii) λ⁺ and λ⁻ are the largest and smallest eigenvalues of the Hessian matrix D²Ψ of the function Ψ.
(iii) Ω ⊂ R².

Section 4 (2)

(PE) is equivalent to the following system:
α|ΔΨ|² + (α – 1)² det D²Ψ = 0 in Ω,  Ψ = g on ∂Ω,  ΔΨ ≤ 0 in Ω.  (PE)'
A note which appeared in the CRAS, Paris (2005) describes the LS/OS (least-squares/operator-splitting) solution of (PE)'.
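For completeness (this derivation is not spelled out on the slide), the equivalence follows from the elementary symmetric functions of the eigenvalues of D²Ψ:

```latex
% From lambda^+ + lambda^- = \Delta\Psi and lambda^+ lambda^- = \det D^2\Psi:
\[
\alpha\lambda^{+}+\lambda^{-}=0
\;\Longrightarrow\;
\lambda^{-}=-\alpha\lambda^{+},\qquad
\Delta\Psi=(1-\alpha)\lambda^{+},\qquad
\det D^{2}\Psi=-\alpha(\lambda^{+})^{2}.
\]
\[
\text{Hence }\;
\alpha|\Delta\Psi|^{2}+(\alpha-1)^{2}\det D^{2}\Psi
=\alpha(\alpha-1)^{2}(\lambda^{+})^{2}-\alpha(\alpha-1)^{2}(\lambda^{+})^{2}=0,
\]
\[
\text{and, since }\lambda^{-}\le\lambda^{+}\text{ forces }\lambda^{+}\ge 0\text{ while }\alpha>1,\qquad
\Delta\Psi=(1-\alpha)\lambda^{+}\le 0 .
\]
```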

5. Final Observations

Solving (E-MA-D) by a mixed method is really solving it via its equivalent Pfaff system, namely
D²ψ – p = 0 in Ω,  det p = f in Ω,
completed by: ψ = g on Γ. The "burden of nonlinearity" has been transferred from ψ to p.

Section 5 (2)

A natural question is the following: is our approach a (kind of) viscosity method? The answer is "yes", as we show below. The flow associated with the least-squares optimality conditions reads as follows: find {ψ(t), p(t)} ∈ V_g × Q, ∀t > 0, such that
∫_Ω ∂(Δψ)/∂t Δφ dx + ∫_Ω D²ψ : D²φ dx = ∫_Ω p : D²φ dx,  ∀φ ∈ V_0,
∫_Ω ∂p/∂t : q dx + ∫_Ω p : q dx + ⟨∂I_f(p), q⟩ = ∫_Ω D²ψ : q dx,  ∀q ∈ Q,  (FE)
{ψ(0), p(0)} = {ψ₀, p₀}.

Section 5 (3)

Assuming that Ω is simply connected, introduce:
u = {u₁, u₂} = {∂ψ/∂x₂, – ∂ψ/∂x₁},  v = {v₁, v₂} = {∂φ/∂x₂, – ∂φ/∂x₁},
ω = ∂u₂/∂x₁ – ∂u₁/∂x₂,  θ = ∂v₂/∂x₁ – ∂v₁/∂x₂,
V_g = {v | v ∈ (H¹(Ω))², ∇·v = 0, v·n = dg/ds on Γ},
V_0 = {v | v ∈ (H¹(Ω))², ∇·v = 0, v·n = 0 on Γ},
L = (0 1; –1 0) (so that L∇u = – D²ψ).
The formulation (FE) is equivalent to

Section 5 (4)

Find u(t) ∈ V_g, ∀t > 0, such that
∫_Ω ∂ω/∂t θ dx + ∫_Ω ∇u : ∇v dx = ∫_Ω Lp : ∇v dx,  ∀v ∈ V_0,
∂p/∂t + p + ∂I_f(p) + L∇u = 0,  (FE)*
{u(0), p(0), ω(0)} = {u₀, p₀, ω₀}.
Problem (FE)* has a visco-elasticity flavor, – Lp playing here the role of the so-called extra-stress tensor. As t → +∞, we thus obtain in the limit a viscosity solution, but in a sense different from that of M. Crandall and P. L. Lions.
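The equivalence of (FE) and (FE)* rests on a few identities that are easy to check from the definitions above (a verification sketch, not on the slide; it also explains the matrix L):

```latex
% With u = (\partial\psi/\partial x_2,\,-\partial\psi/\partial x_1) and L = \begin{pmatrix}0&1\\-1&0\end{pmatrix}:
\[
\nabla u=\begin{pmatrix}\psi_{x_1x_2}&\psi_{x_2x_2}\\[2pt]-\psi_{x_1x_1}&-\psi_{x_1x_2}\end{pmatrix},
\qquad
\omega=\frac{\partial u_2}{\partial x_1}-\frac{\partial u_1}{\partial x_2}=-\Delta\psi,
\qquad
L\,\nabla u=-D^{2}\psi .
\]
\[
\text{Hence }\;\nabla u:\nabla v=D^{2}\psi:D^{2}\varphi
\quad\text{and}\quad
L p:\nabla v=p:D^{2}\varphi,
\]
% so the two equations of (FE) turn into the two equations of (FE)*.
```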