Numerical Solution of a Non-Smooth Eigenvalue Problem: An Operator-Splitting Approach. A. Caboussat & R. Glowinski.



1. Formulation. Motivation. Our main objective is the numerical solution of the following problem from the Calculus of Variations: compute

γ = inf_{v ∈ Σ} ∫_Ω |∇v| dx, (NSEVP)

where Ω is a bounded domain of R^2 and Σ = {v | v ∈ H^1_0(Ω), ∫_Ω |v|^2 dx = 1}.

Actually, γ = 2√π, independently of the shape and size of Ω (this holds even for non-simply connected Ω and, in fact, for unbounded Ω) (G. Talenti). A natural question is then: why solve numerically a problem whose exact solution is known? (i) If I claim that it is a new method to compute π, nobody will believe me. (ii) (NSEVP) is a fun problem on which to test solution methods for non-smooth & non-convex optimization problems.

(iii) ∫_Ω |∇v| dx arises in a variety of problems from Image Processing and Plasticity. Actually, our motivation for investigating (NSEVP) comes from the following problem from visco-plasticity:

u ∈ L^2(0,T; H^1_0(Ω)) ∩ C^0([0,T]; L^2(Ω)); u(0) = u_0,

(BFP) ρ ∫_Ω (∂u/∂t)(t)(v − u(t)) dx + μ ∫_Ω ∇u(t)·∇(v − u(t)) dx + g [ ∫_Ω |∇v| dx − ∫_Ω |∇u(t)| dx ] ≥ C(t) ∫_Ω (v − u(t)) dx, ∀ v ∈ H^1_0(Ω), a.e. t ∈ (0, T),

with ρ > 0, μ > 0, g > 0, Ω a bounded domain of R^2, and u_0 ∈ L^2(Ω).

(BFP) models the flow of a Bingham visco-plastic fluid in an infinitely long cylinder of cross-section Ω, C being the pressure drop per unit length. Suppose that C = 0 and that T = +∞; we can show the following cut-off property:

(C-O.PR) u(t) = 0, ∀ t ≥ T_c, with T_c = (ρ/μλ_0) ln[1 + (μλ_0/γg) ||u_0||_{L^2(Ω)}],

λ_0 being the smallest eigenvalue of −∇^2 in H^1_0(Ω). A similar cut-off property holds if, after space discretization, we use the backward Euler scheme for the time discretization of (BFP), with λ_0 and γ replaced by their discrete analogues λ_0h and γ_h.
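As a quick numerical illustration of the cut-off formula, the snippet below evaluates T_c for made-up parameter values (ρ = μ = g = ||u_0||_{L^2} = 1, all illustrative, not data from the lecture); for λ_0 it takes the smallest Dirichlet eigenvalue of −∇^2 on the unit disk, λ_0 = j_{0,1}^2 ≈ 5.78, with j_{0,1} the first zero of the Bessel function J_0.

```python
import math

# Cut-off time from (C-O.PR):
#   T_c = (rho / (mu * lam0)) * ln(1 + (mu * lam0 / (gamma * g)) * ||u0||_{L^2})
# All parameter values below are illustrative sample choices.
def cutoff_time(rho, mu, g, lam0, norm_u0, gamma=2.0 * math.sqrt(math.pi)):
    return (rho / (mu * lam0)) * math.log(1.0 + (mu * lam0 / (gamma * g)) * norm_u0)

# Unit disk: lam0 = j_{0,1}^2, with j_{0,1} ≈ 2.4048 (first zero of J_0)
lam0_disk = 2.4048 ** 2
Tc = cutoff_time(rho=1.0, mu=1.0, g=1.0, lam0=lam0_disk, norm_u0=1.0)
```

Consistent with the formula, the extinction time grows (only logarithmically) with the size of the initial data and shrinks as the yield stress g increases.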

Suppose that the space discretization is achieved via C^0-piecewise linear finite element approximations; we then have |λ_0h − λ_0| = O(h^2). But what can we say about |γ_h − γ|? The main goal of this lecture is to look for answers to this question!
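The value γ = 2√π quoted earlier can be made plausible by hand: by the coarea formula and the isoperimetric inequality, the infimum is approached by (suitably smoothed) multiples of characteristic functions of disks, and for those the quotient is computable in closed form. A small illustrative check:

```python
import math

# For v = c * chi_{B_R} (characteristic function of a disk of radius R, suitably
# smoothed), TV(v) = c * perimeter = c * 2*pi*R and ||v||_{L^2}^2 = c^2 * pi * R^2.
# Normalizing ||v||_{L^2} = 1 gives c = 1/(R*sqrt(pi)), hence TV(v) = 2*sqrt(pi),
# independently of R: consistent with Talenti's value gamma = 2*sqrt(pi).
def tv_of_normalized_disk(R):
    c = 1.0 / (R * math.sqrt(math.pi))
    return c * 2.0 * math.pi * R

values = [tv_of_normalized_disk(R) for R in (0.1, 1.0, 7.3)]
```

The R-independence of the result mirrors the shape- and size-independence of γ stated above.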

2. Some regularization procedures. There are several ways to approximate (NSEVP), at the continuous level, by a better-posed and/or smoother variational problem. The most obvious candidate is clearly

γ_ε = inf_{v ∈ Σ} ∫_Ω (|∇v|^2 + ε^2)^{1/2} dx, (NSEVP.1)_ε

a regularization quite popular in Image Processing. Assuming that the above problem has a minimizer u_ε, this minimizer verifies the following Euler-Lagrange equation (reminiscent of the mean curvature equation):

First regularized problem:

(RP.1) −∇·( ∇u_ε / (|∇u_ε|^2 + ε^2)^{1/2} ) = γ_ε u_ε in Ω, u_ε = 0 on ∂Ω, ∫_Ω |u_ε|^2 dx = 1.

(RP.1) is clearly a nonlinear eigenvalue problem for a close variant of the mean curvature operator, the eigenvalue being γ_ε. Another regularization, more sophisticated in some sense, since this time the regularized problem has minimizers, is provided (with ε > 0) by

γ_ε = min_{v ∈ Σ} [ ½ ε ∫_Ω |∇v|^2 dx + ∫_Ω |∇v| dx ]. (NSEVP.2)_ε

An associated (multivalued) Euler-Lagrange equation reads as follows; it is also of the nonlinear (in fact, non-smooth) eigenvalue type (as above, the eigenvalue is γ_ε):

(RP.2) −ε ∇^2 u_ε + ∂j(u_ε) ∋ γ_ε u_ε in Ω, u_ε = 0 on ∂Ω, ∫_Ω |u_ε|^2 dx = 1;

in (RP.2), ∂j(u_ε) is the subgradient at u_ε of the functional j : H^1_0(Ω) → R defined by j(v) = ∫_Ω |∇v| dx. The solution of problems such as (RP.2) is discussed in GKM (2007); the method used in that reference is of the operator-splitting/inverse power method type.
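Both regularizations replace the non-smooth integrand by a better-behaved one; for (NSEVP.1)_ε this is visible pointwise, since |g| ≤ (g^2 + ε^2)^{1/2} ≤ |g| + ε, so the regularized and original integrands differ by at most ε everywhere. A minimal numerical check (illustrative values):

```python
import math

# Pointwise regularization of the absolute value used in (NSEVP.1)_eps:
# |g| <= sqrt(g^2 + eps^2) <= |g| + eps, so the integrands differ by O(eps),
# and far from g = 0 the gap is only O(eps^2 / |g|).
def reg_abs(g, eps):
    return math.sqrt(g * g + eps * eps)

eps = 1e-3
gaps = [reg_abs(g, eps) - abs(g) for g in (-2.0, -0.5, 0.5, 2.0)]
```

At g = 0 the gap equals ε exactly, which is where the regularization does its work: it removes the kink of |·| at the origin.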

In order to avoid handling two small parameters simultaneously, namely ε and h, we will address the solution of γ = inf_{v ∈ Σ} ∫_Ω |∇v| dx without using any regularization (unless we consider the space approximation itself as a kind of regularization, which it is, indeed).

3. Finite Element Approximation. (i) First, we introduce a family {Ω_h}_h of polygonal approximations of Ω such that lim_{h→0} Ω_h = Ω. (ii) With each Ω_h we associate a triangulation T_h verifying the usual assumptions of (a) compatibility between triangles and (b) regularity. (iii) With each T_h we associate the finite-dimensional space V_0h, defined (classically) as follows:

V_0h = {v | v ∈ C^0(Ω_h ∪ ∂Ω_h), v|_T ∈ P_1 ∀ T ∈ T_h, v = 0 on ∂Ω_h}.

(iv) We approximate γ = inf_{v ∈ Σ} ∫_Ω |∇v| dx (NSEVP) by

γ_h = min_{v ∈ Σ_h} ∫_{Ω_h} |∇v| dx (NSEVP)_h

with Σ_h = {v | v ∈ V_0h, ||v||_{L^2(Ω_h)} = 1}. It is easy to prove that: (i) problem (NSEVP)_h has a solution, that is, there exists u_h ∈ Σ_h such that ∫_{Ω_h} |∇u_h| dx = γ_h; (ii) lim_{h→0} γ_h = γ (= 2√π). We would like to investigate (computationally) the order of convergence of γ_h to γ. From the non-smoothness of the problem, we do not expect O(h^2).
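One standard way to read off an observed order of convergence from computed values of γ_h is a log-ratio of successive errors on nested meshes. The sketch below runs this on synthetic data γ_h = γ + C·h^α with a made-up constant and exponent; the actual γ_h values come from the numerical experiments of Section 5.

```python
import math

# Observed order of convergence from two mesh sizes:
#   alpha ≈ log(e1/e2) / log(h1/h2), where e_i = gamma_h(h_i) - gamma.
def observed_order(h1, e1, h2, e2):
    return math.log(e1 / e2) / math.log(h1 / h2)

gamma = 2.0 * math.sqrt(math.pi)
C, alpha = 0.7, 1.0                 # synthetic model gamma_h = gamma + C*h^alpha
hs = [0.1, 0.05, 0.025]
gamma_h = [gamma + C * h ** alpha for h in hs]
orders = [observed_order(hs[i], gamma_h[i] - gamma, hs[i + 1], gamma_h[i + 1] - gamma)
          for i in range(len(hs) - 1)]
```

With real data one checks whether the successive estimates stabilize; a plateau near some α < 2 would confirm the degraded rate expected from the non-smoothness.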

4. An iterative method for the solution of (NSEVP)_h. We are going to look for robustness, modularity and simplicity of programming rather than performance measured in number of elementary operations (this is not image processing and/or real time). At ADI 50 (December 2005, at Rice University), we showed that the inverse power method for eigenvalue computations has an operator-splitting interpretation; we also showed the equivalence between some augmented Lagrangian algorithms and ADI methods such as Douglas-Rachford's and Peaceman-Rachford's. For problem (NSEVP)_h we think it is simpler to take the AL approach, keeping in mind that it will lead to a 'disguised' ADI method.

For simplicity of formalism, we will use the continuous problem notation. We observe that γ = inf_{v ∈ Σ} ∫_Ω |∇v| dx is equivalent to

γ = inf_{{v, q, z} ∈ E} ∫_Ω |q| dx,

where E = {{v, q, z} | v ∈ H^1_0(Ω), q ∈ (L^2(Ω))^2, z ∈ L^2(Ω), ∇v − q = 0, v − z = 0, ||z||_{L^2(Ω)} = 1}.

The above equivalence suggests introducing the following augmented Lagrangian functional L_r : (H^1_0(Ω)×Q×L^2(Ω))×(Q×L^2(Ω)) → R, defined, with Q = (L^2(Ω))^2 and r = {r_1, r_2}, r_i > 0, by

L_r(v, q, z; μ_1, μ_2) = ∫_Ω |q| dx + ½ r_1 ∫_Ω |∇v − q|^2 dx + ½ r_2 ∫_Ω |v − z|^2 dx + ∫_Ω (∇v − q)·μ_1 dx + ∫_Ω (v − z) μ_2 dx.

We then consider the following saddle-point problem: find {{u, p, y}, {λ_1, λ_2}} ∈ (H^1_0(Ω)×Q×S)×(Q×L^2(Ω)) such that

(SDP-P) L_r(u, p, y; μ_1, μ_2) ≤ L_r(u, p, y; λ_1, λ_2) ≤ L_r(v, q, z; λ_1, λ_2), ∀ {{v, q, z}, {μ_1, μ_2}} ∈ (H^1_0(Ω)×Q×S)×(Q×L^2(Ω)),

with S = {z | z ∈ L^2(Ω), ||z||_{L^2(Ω)} = 1}. Suppose that the above saddle-point problem has a solution; we then have p = ∇u and y = u, u being a minimizer of the original minimization problem (the primal one).

To solve the above saddle-point problem, we advocate the algorithm below (called ALG2 by some practitioners (BB)):

(1) {u^{−1}, {λ_1^0, λ_2^0}} is given in H^1_0(Ω)×(Q×L^2(Ω)); for n ≥ 0, assuming that {u^{n−1}, {λ_1^n, λ_2^n}} is known, solve:

(2) {p^n, y^n} = arg min_{{q, z} ∈ Q×S} L_r(u^{n−1}, q, z; λ_1^n, λ_2^n), then

(3) u^n = arg min_{v ∈ H^1_0(Ω)} L_r(v, p^n, y^n; λ_1^n, λ_2^n),

(4) λ_1^{n+1} = λ_1^n + r_1(∇u^n − p^n), λ_2^{n+1} = λ_2^n + r_2(u^n − y^n).

The above algorithm is easy to implement since: (i) problem (3) is equivalent to the following linear variational problem in H^1_0(Ω):

u^n ∈ H^1_0(Ω), r_1 ∫_Ω ∇u^n·∇v dx + r_2 ∫_Ω u^n v dx = ∫_Ω (r_1 p^n − λ_1^n)·∇v dx + ∫_Ω (r_2 y^n − λ_2^n) v dx, ∀ v ∈ H^1_0(Ω).

The solution of the discrete analogue of the above problem is a simple task nowadays.

(ii) Problem (2) decouples as

(a) p^n = arg min_{q ∈ Q} [ ½ r_1 ∫_Ω |q|^2 dx + ∫_Ω |q| dx − ∫_Ω (r_1 ∇u^{n−1} + λ_1^n)·q dx ],

(b) y^n = arg min_{z ∈ S} [ ½ r_2 ∫_Ω |z|^2 dx − ∫_Ω (r_2 u^{n−1} + λ_2^n) z dx ].

Both problems have closed-form solutions; indeed, since ||z||_{L^2(Ω)} = 1 ∀ z ∈ S, one has

y^n = (r_2 u^{n−1} + λ_2^n) / ||r_2 u^{n−1} + λ_2^n||_{L^2(Ω)}.

Similarly, the minimization problem in (a) can be solved point-wise (in practice, one such elementary problem per triangle of T_h). We obtain, a.e. on Ω,

p^n(x) = (1/r_1) (1 − 1/|X^n(x)|)^+ X^n(x), where X^n(x) = r_1 ∇u^{n−1}(x) + λ_1^n(x).
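To make the loop concrete, here is a minimal, self-contained 1-D analogue of ALG2: minimize ∫_0^1 |v'| dx over v ∈ H^1_0(0,1) with ∫_0^1 |v|^2 dx = 1, discretized by piecewise-linear elements on a uniform mesh with a lumped mass matrix. Everything here (mesh size, r_1, r_2, iteration count, and the 1-D setting itself) is an illustrative choice, not the authors' 2-D implementation.

```python
import math

def thomas(a, b, c, d):
    """Solve a tridiagonal system (a = sub-diagonal, b = diagonal, c = super-diagonal)."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = (c[i] / m) if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def alg2_1d(n=40, r1=10.0, r2=10.0, iters=300):
    h = 1.0 / n
    u = [math.sin(math.pi * (i + 1) * h) for i in range(n - 1)]  # interior nodes
    lam1 = [0.0] * n           # multiplier for u' - p = 0 (one per element)
    lam2 = [0.0] * (n - 1)     # multiplier for u - y = 0 (one per interior node)
    p, y = [0.0] * n, list(u)
    for _ in range(iters):
        ue = [0.0] + u + [0.0]
        g = [(ue[i + 1] - ue[i]) / h for i in range(n)]          # u' per element
        # step (2a): pointwise shrinkage p = (1/r1)(1 - 1/|X|)^+ X
        for i in range(n):
            X = r1 * g[i] + lam1[i]
            p[i] = (1.0 / r1) * max(0.0, 1.0 - 1.0 / abs(X)) * X if X != 0.0 else 0.0
        # step (2b): y = (r2 u + lam2) / ||r2 u + lam2||_{L^2}
        w = [r2 * u[i] + lam2[i] for i in range(n - 1)]
        nw = math.sqrt(h * sum(wi * wi for wi in w))
        y = [wi / nw for wi in w]
        # step (3): (r1 * stiffness + r2 * lumped mass) u = rhs
        s = [r1 * p[i] - lam1[i] for i in range(n)]
        rhs = [(s[i] - s[i + 1]) + h * (r2 * y[i] - lam2[i]) for i in range(n - 1)]
        a = [-r1 / h] * (n - 1)
        b = [2.0 * r1 / h + r2 * h] * (n - 1)
        c = [-r1 / h] * (n - 1)
        u = thomas(a, b, c, rhs)
        # step (4): multiplier updates
        ue = [0.0] + u + [0.0]
        g = [(ue[i + 1] - ue[i]) / h for i in range(n)]
        lam1 = [lam1[i] + r1 * (g[i] - p[i]) for i in range(n)]
        lam2 = [lam2[i] + r2 * (u[i] - y[i]) for i in range(n - 1)]
    gamma_h = h * sum(abs(gi) for gi in g)   # approximate total variation of u^n
    norm_y = math.sqrt(h * sum(yi * yi for yi in y))
    return gamma_h, norm_y
```

In 2-D the elementwise shrinkage is identical except that |X| becomes the Euclidean norm of a vector per triangle, and the tridiagonal solve becomes the sparse linear system of step (3). (For this 1-D analogue the infimum can be checked by hand to be 2, approached by plateau-like minimizing sequences.)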

5. Numerical experiments. First Test Problem: Ω is the unit disk

Unit Disk Test Problem Variation of γ_h versus h

Unit Disk Test Problem Variation of γ_h − γ versus h

Unit Disk Test Problem Visualisation of the coarse mesh solution

Unit Disk Test Problem Visualisation of the fine mesh solution

Unit Disk Test Problem Coarse mesh solution contours

Unit Disk Test Problem Fine mesh solution contours

Unit Disk Test Problem Fine mesh solution contours (details)

Second Test Problem: Ω is the unit square Coarse mesh

Unit Square Test Problem Fine mesh

Unit Square Test Problem Variation of γ_h versus h

Unit Square Test Problem Variation of γ_h − γ versus h

Unit Square Test Problem Visualisation of the coarse mesh solution

Unit Square Test Problem Visualisation of the fine mesh solution

Unit Square Test Problem Contours of the coarse mesh solution

Unit Square Test Problem Contours of the fine mesh solution

Unit Square Test Problem Contours of the fine mesh solution (details)

Circular Ring Test Problem (coarse mesh)

Circular Ring Test Problem (fine mesh)

A GENERALIZATION. Compute, for Ω ⊂ R^2,

γ* = inf_{v ∈ Σ*} ∫_Ω |∇v| dx, with Σ* = {v | v ∈ (H^1_0(Ω))^2, ||v||_{(L^2(Ω))^2} = 1}.

Conjecture (unless it is a classical result):

Square (coarse mesh)

Square (fine mesh)

Disk (coarse mesh)

Disk (fine mesh)

The results of our numerical computations suggest very strongly that the value we conjectured for γ* is the correct one.

APPLICATION to a SEDIMENTATION PROBLEM. The following problem has been considered by C. Evans & L. Prigozhin:

(SP) ∂u/∂t + ∂I_K(u) ∋ f in Ω × (0, T), u(0) = u_0,

with Ω ⊂ R^2 and K = {v | v ∈ H^1(Ω), |∇v| ≤ C, v = g on Γ_0 (⊂ ∂Ω)}.

After time-discretization by the backward Euler scheme, we obtain

(1) u^0 = u_0; for n ≥ 1, u^{n−1} → u^n as follows:

(2) u^n − u^{n−1} + ∂I_K(u^n) ∋ Δt f^n.

"Equation" (2) is the Euler-Lagrange equation of the following problem from the Calculus of Variations:

(MP) u^n = arg min_{v ∈ K} [ ½ ∫_Ω v^2 dx − ∫_Ω (u^{n−1} + Δt f^n) v dx ].

The minimization problem (MP) is equivalent to

{u^n, p^n} = arg min_{{v, q} ∈ K̃} [ ½ ∫_Ω v^2 dx − ∫_Ω (u^{n−1} + Δt f^n) v dx ],

with K̃ = {{v, q} | v ∈ H^1(Ω), v = g on Γ_0, |q| ≤ C, ∇v − q = 0}. We can compute {u^n, p^n} via the following augmented Lagrangian:

L_r(v, q; μ) = ½ r ∫_Ω |∇v − q|^2 dx + ∫_Ω μ·(∇v − q) dx + ½ ∫_Ω v^2 dx − ∫_Ω (u^{n−1} + Δt f^n) v dx.
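In the augmented Lagrangian treatment of the gradient constraint, the q-subproblem reduces (this is our reading of the slides, by analogy with the shrinkage computation of Section 4, not the authors' stated implementation) to a pointwise projection: minimizing (r/2)|q|^2 − (r∇v + μ)·q over |q| ≤ C gives the projection of X = ∇v + μ/r onto the ball of radius C. A minimal sketch:

```python
import math

# Pointwise q-subproblem for the sand-pile AL (assumed reduction):
# project X = grad_u + mu/r onto the ball {|q| <= C}.
def project_ball(X, C):
    n = math.hypot(X[0], X[1])
    if n <= C:
        return X          # already feasible: unconstrained minimizer is kept
    return (C * X[0] / n, C * X[1] / n)
```

As with the shrinkage of Section 4, this closed form makes the q-update embarrassingly parallel over the triangles of the mesh.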

River sand pile: FE mesh

River sand pile (2)

River sand pile (3)

River sand pile (4)

Rectangular pond sand pile (1)

Rectangular pond sand pile (2)