Multigrid for Nonlinear Problems – Ferien-Akademie 2005, Sarntal – Christoph Scheit – FAS, Newton-MG, Multilevel Nonlinear Method

Outline
– Motivation
– Basic Idea of Multigrid
– Classical MG approaches for nonlinear problems
  – Newton-Multigrid
  – FAS
  – Properties of both approaches
– Multilevel Nonlinear Method
– Conclusions
– Bibliography

Motivation
Fast and robust solvers are needed for the (linear) systems of equations that arise from discretizing present-day engineering science problems. Many of these problems additionally contain nonlinearities. Example: the nonlinear diffusion equation.
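The equation on the original slide is not preserved in this transcript; as an illustrative assumption (not necessarily the slide's exact example), a typical nonlinear diffusion equation has the form

    -∇ · ( D(u) ∇u ) = f,

where the diffusion coefficient D depends on the unknown u itself, so the discretized operator depends on the solution.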

Basic Ideas of Multigrid
Many relaxation schemes smooth the error. Consider Jacobi relaxation:
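For a linear system A u = f with diagonal part D, the (weighted) Jacobi relaxation referred to here reads

    u^(k+1) = u^(k) + ω D^{-1} ( f - A u^(k) ),   0 < ω ≤ 1.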

Basic Ideas of Multigrid
From eigenvalue analysis (local mode analysis) or numerical experiments one sees that most relaxation schemes damp (smooth) only the high-frequency error components efficiently. The basic idea is therefore to smooth each error component on a grid on which it appears high-frequency: on a grid that is twice as coarse, the same mode appears with double the frequency relative to the number of grid points. Error components that are smooth on the fine grid therefore look oscillatory on the coarser grid and can be smoothed efficiently there.
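As a concrete instance of this local mode analysis: for weighted Jacobi applied to the 1D Poisson problem with mesh size h, the error mode sin(kπx) is damped per sweep by the factor

    λ_k = 1 - 2ω sin²(kπh/2),

which is close to 1 for smooth modes (small k) and small in magnitude for oscillatory modes (kh close to 1), confirming that only high-frequency error components are reduced efficiently.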

Basic Ideas of Multigrid
The residual equation: for A u = f with approximation v, residual r = f - A v, and error e = u - v, we have A e = r. Using the residual equation, it is possible to compute the error and update the approximate solution. Bringing it all together, one can use coarser grids to compute a correction to the current solution efficiently.
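Spelled out, the coarse-grid correction behind this idea consists of the following standard steps:

    r_h = f_h - A_h v_h              (compute the fine-grid residual)
    r_H = I_h^H r_h                  (restrict the residual to the coarse grid)
    A_H e_H = r_H                    (solve the residual equation on the coarse grid)
    v_h ← v_h + I_H^h e_H            (prolongate the correction and update)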

Basic Ideas of Multigrid
To use a sequence of grids, transfer operators are needed:
– Restriction, to transport the residual to a coarser grid
– Interpolation (or prolongation), to transport the correction back to the finer grid
Using these ideas, one can construct schemes like the well-known V-cycle.
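As an illustration (standard 1D choices, not specified on the slide), full weighting is commonly used for restriction and linear interpolation for prolongation:

    full weighting:        (I_h^H r)_j   = 1/4 r_{2j-1} + 1/2 r_{2j} + 1/4 r_{2j+1}
    linear interpolation:  (I_H^h e)_{2j} = e_j,   (I_H^h e)_{2j+1} = 1/2 ( e_j + e_{j+1} )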

Basic Algorithm

MGM_Basic(x, f, l, ν1, ν2, μ)
  if (l == 1)
    return exact_sol(x, f);
  end
  x_h = preSmoothing(x, f, ν1);
  r_h = f - A_h x_h;
  r_H = Restriction(r_h);
  e_H = 0;                               // zero initial guess for the coarse-grid error
  for (i = 1; i <= μ; i++)               // μ = 1: V-cycle, μ = 2: W-cycle
    e_H = MGM_Basic(e_H, r_H, l-1, ν1, ν2, μ);
  end
  e_h = Prolongate(e_H);
  x_h = x_h + e_h;
  x_h = postSmoothing(x_h, f, ν2);
  return x_h;
end
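To make the recursion concrete, here is a minimal, runnable Python sketch of the same cycle (μ = 1, i.e. a V-cycle) for the 1D Poisson problem -u'' = f with zero Dirichlet boundary conditions. All names (apply_A, smooth, restrict, prolongate, vcycle) and the weighted-Jacobi smoother are illustrative choices, not taken from the original slides.

    import numpy as np

    def apply_A(v, h):
        # Matrix-free 1D Laplacian (Dirichlet boundaries): (A v)_i = (-v_{i-1} + 2 v_i - v_{i+1}) / h^2
        av = np.zeros_like(v)
        av[1:-1] = (-v[:-2] + 2.0 * v[1:-1] - v[2:]) / (h * h)
        return av

    def smooth(v, f, h, nu, omega=2.0 / 3.0):
        # nu sweeps of weighted Jacobi; the diagonal of A is 2/h^2, so D^{-1} = h^2/2
        for _ in range(nu):
            r = f - apply_A(v, h)
            v[1:-1] += omega * (h * h / 2.0) * r[1:-1]
        return v

    def restrict(r):
        # Full weighting (1/4, 1/2, 1/4) onto the grid with twice the mesh size
        rc = np.zeros((r.size - 1) // 2 + 1)
        rc[1:-1] = 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2]
        return rc

    def prolongate(ec, n_fine):
        # Linear interpolation of the coarse-grid correction back to the fine grid
        ef = np.zeros(n_fine)
        ef[::2] = ec
        ef[1::2] = 0.5 * (ec[:-1] + ec[1:])
        return ef

    def vcycle(v, f, h, nu1=2, nu2=2):
        if v.size <= 3:                        # coarsest grid: one interior unknown, solve exactly
            v[1] = f[1] * h * h / 2.0
            return v
        v = smooth(v, f, h, nu1)               # pre-smoothing
        r = f - apply_A(v, h)                  # fine-grid residual
        rc = restrict(r)                       # restrict the residual
        ec = vcycle(np.zeros_like(rc), rc, 2.0 * h, nu1, nu2)   # coarse-grid correction
        v += prolongate(ec, v.size)            # prolongate and correct
        return smooth(v, f, h, nu2)            # post-smoothing

A typical use, on n = 2^k + 1 grid points with h = 1/(n-1), is to call v = vcycle(v, f, h) repeatedly until the norm of f - apply_A(v, h) falls below a tolerance.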

Nonlinear Problems
For an equation like the nonlinear diffusion equation above, the operator N of the discretized equation itself depends on the solution u. How does the algorithm have to be modified for nonlinear problems? This matters because for nonlinear problems the linear residual equation no longer holds; instead one has the nonlinear residual equation (defect equation):
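The defect equation referred to here is: for N(u) = f with approximation v, error e = u - v, and residual r = f - N(v),

    N(v + e) - N(v) = r.

Because N is nonlinear, this cannot be reduced to a linear equation A e = r for the error.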

Nonlinear Approaches for Multigrid
– Newton-Multigrid -> global linearization
– FAS (Full Approximation Scheme/Storage) -> local linearization
– MNM (Multilevel Nonlinear Method) -> combines the local and the global approach

Newton's Method
First consider Newton's method for scalar problems: if x + s is a solution, i.e. f(x + s) = 0, then
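expanding f around the current iterate and truncating after the linear term gives the standard scalar Newton step:

    f(x + s) ≈ f(x) + f'(x) s = 0   ⇒   s = - f(x) / f'(x),
    x_{k+1} = x_k - f(x_k) / f'(x_k).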

Newton's Method

How can Newton's method be used for nonlinear systems of equations? Analogous to the scalar case: if v + s is a solution, i.e. F(v + s) = F(u), where s is the error of the current approximation v, we get:
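Truncating the multivariate Taylor expansion after the linear term gives the standard Newton step for systems:

    F(v + s) ≈ F(v) + J(v) s   ⇒   J(v) s = F(u) - F(v),   v ← v + s,

where J(v) denotes the Jacobian of F evaluated at v.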

Newton's Method
What is grad(F(u))? It is the Jacobian of F(u), the matrix of all first partial derivatives: J(u)_{ij} = ∂F_i / ∂u_j.

Newton's Method
A concrete example for J(v): partial differentiation yields:

Newton-Multigrid
Back to Newton's method for multigrid: combining the nonlinear residual equation with a truncated Taylor series yields a linearized equation system, and it is this linearized system, rather than the original one, that is solved with multigrid methods.
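Written out, with r = f - N(v) and e = u - v:

    N(v + e) - N(v) = r,
    N(v + e) ≈ N(v) + J(v) e   ⇒   J(v) e = r.

The linear system J(v) e = r is what the inner (linear) multigrid cycles solve in Newton-MG.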

Newton-Multigrid – Algorithm

v = init_sol();
r = f - N(v);
while (||r|| > tol)
  compute J(v);
  e = 0;
  for (i = 0; i < numVCycles; i++)
    e = MGM_Basic(e, r, l, ν1, ν2, μ);     // linear multigrid for J(v) e = r
  end
  v = v + e;
  r = f - N(v);
end

FAS
Newton-Multigrid does not apply multigrid ideas to the nonlinear equation system itself; it uses a global linearization and an outer Newton iteration, with the basic multigrid method embedded as a solver for the linearized system. In contrast, FAS treats the nonlinear equation system directly, using a nonlinear smoother for local linearization, such as a Gauss-Seidel-Newton relaxation.

FAS
Back to the nonlinear residual equation (defect correction equation): we can formulate this equation on the coarse grid, where the current approximation is transferred by the injection operator (instead of full-weighted restriction).
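In standard form, the FAS coarse-grid equation meant here reads

    N_H(u_H) = N_H( Î_h^H v_h ) + I_h^H ( f_h - N_h(v_h) ),

where Î_h^H denotes injection of the current fine-grid approximation and I_h^H the (full-weighting) restriction of the residual. After solving (approximately) on the coarse grid, the fine-grid approximation is corrected by v_h ← v_h + I_H^h ( u_H - Î_h^H v_h ).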

FAS – Algorithm (V-cycle only)

FAS(x, f, l, ν1, ν2)
  if (l == 1)
    return exact_sol(x, f);
  end
  x_h = preSmoothing(x, f, ν1);
  f_H = restriction(f - N_h(x_h)) + N_H(injection(x_h));
  x_H = injection(x_h);                      // initial guess for the coarse grid
  x_H = FAS(x_H, f_H, l-1, ν1, ν2);
  x_h = x_h + prolongation(x_H - injection(x_h));
  x_h = postSmoothing(x_h, f, ν2);
  return x_h;
end

FAS – Nonlinear Relaxation
Instead of a global linearization, FAS uses a nonlinear smoother, which is obtained simply by applying Newton's method to one scalar equation at a time: consider again the nonlinear equation, discretize it, and apply Newton's method to each discrete equation; this yields the following iteration scheme:
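The slide's concrete formulas depend on the specific equation; in general form, the resulting Gauss-Seidel-Newton (pointwise Newton) sweep updates one unknown at a time by a single scalar Newton step on its own equation:

    v_i ← v_i - ( N(v)_i - f_i ) / ( ∂N(v)_i / ∂v_i ),   i = 1, …, n (e.g. in lexicographic order).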

FAS – Implementation Hints
– Start with a linear problem first; the FAS algorithm must then yield the same result as the standard MG algorithm (up to roundoff errors).
– For the nonlinear problem considered here, a standard Gauss-Seidel relaxation also works; in general, a nonlinear smoother like the one presented above has to be used.
– Since FAS does not approximate the error but directly improves the current solution on the different grid levels, do not forget to inject the boundary conditions as well (for the error in standard MG this was not necessary, since for Dirichlet b.c. the boundary condition for the error is always zero).

Properties of the Classical Approaches
Newton-MG:
– Fast convergence, often only a few Newton steps
– For each Newton step, the linearized equation system must be solved accurately
– A good initial guess is needed to ensure convergence (small basin of attraction)
– (Slow) backtracking may be needed to find a good initial guess
FAS:
– No global linearization is needed
– Converges even for a poor initial guess, provided a good approximation of the nonlinear operator is available (large basin of attraction)
– Converges more slowly to the solution than Newton-MG

Multilevel Nonlinear Method (MNM)
Newton-MG converges fast, but needs a good initial guess. FAS converges more slowly, but it converges even for a poor initial guess if a good approximation of the nonlinear operator is available. Idea: combine the properties of both algorithms so that the resulting method converges fast even for a poor initial guess -> MNM. Use a robust approximation for the dominating operator -> MNM with Galerkin coarsening.

MNM
Once again, back to the nonlinear residual equation: we now want to split this equation into a large linear part and a small nonlinear part. The linear part corresponds to Newton-MG, while the nonlinear part corresponds to FAS. The nonlinear part should be small, because then it matters little if its approximation is not very good (whereas FAS requires a good approximation of the whole nonlinear operator).

MNM
To obtain this splitting, we add and subtract J(v)e, with e = u - v, on the left-hand side of the nonlinear residual equation. Rearranging the terms yields a linear and a nonlinear term. Obviously, the linear part is O(e), but what about the nonlinear part?
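Written out, following the derivation in Yavneh and Dardyk: starting from N(v + e) - N(v) = r and adding and subtracting J(v)e,

    J(v) e + [ N(v + e) - N(v) - J(v) e ] = r.

The first term is linear in e; the bracketed term is the nonlinear remainder.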

MNM
Consider a Taylor series: N(v + e) = N(v) + J(v) e + O(e²). Hence the nonlinear part is O(e²), and we therefore obtain a splitting with a large linear part and a relatively small nonlinear part.

MNM
Back to the complete equation: there are two ways to bring the operators to the coarser grid:
– Rediscretization
– Galerkin coarsening
We use rediscretization only for the nonlinear part (although rediscretization may yield a bad approximation for a PDE with jumping coefficients, its influence in MNM is only O(e²)). Rediscretized operators are denoted by a hat, Â.

MNM
Now we obtain an iteration by defining the coarse-grid quantities accordingly. Substituting them into the original equation and rearranging yields the defect equation for MNM:

MNM
As one can see, several operators must be defined on the different levels. While the first two are simply rediscretized on each level, the third one is obtained by Galerkin coarsening. To bring the current approximation to the next coarser level, injection is used (as in FAS).
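Galerkin coarsening of a (linearized) fine-grid operator K_h means constructing the coarse-grid operator algebraically from the transfer operators:

    K_H = I_h^H K_h I_H^h.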

MNM – Algorithm

MNM(u, N, L, f, l, ν1, ν2)
  if (l == 0)
    solve N(u) + L u = f;  return u;
  end
  relax ν1 times the equation N(u) + L u = f;              // pre-smoothing
  compute the residual r = f - (N(u) + L u);
  construct the linearized operator K = L + Ĵ(u);
  initialize the coarse-grid solution u_H = injection(u);
  Galerkin coarsening of the linearized operator: K_H = galerkinCoarsening(K);

MNM – Algorithm (II)

  compute L for the coarse grid: L_H = K_H - Ĵ_H(u_H);     // so that Ĵ_H(u_H) + L_H reproduces the Galerkin-coarsened linearization K_H
  compute the RHS for the coarse grid: f_H = restriction(r) + N_H(u_H) + L_H u_H;
  recursive call: MNM(u_H, N_H, L_H, f_H, l-1, ν1, ν2);
  add the correction: u = u + prolongation(u_H - injection(u));
  post-smoothing of N(u) + L u = f;
end

where L denotes the linear correction to N.

MNM – Concrete Example
Consider the equation: For the approximated operators we obtain: Here B is a scaling that ensures compatibility with the linearized coarse-grid operator obtained by Galerkin coarsening.

MNM – Adaptive
Idea: use parameters to "control" how much of FAS and of Newton should be used. Consider the complete coarse-grid operator. Two points of view:
– The first term is the main term; the second and third terms are a nonlinear correction.
– The second term is the main term, while the first and third terms are a linear correction.

MNM – Adaptive
Now we can weight the operators with parameters a and b:
– a = 1, b = 0: Newton's method
– a = 0, b = 1: FAS
– a = 1, b = 1: MNM

MNM – Results 2-D diffusion problem where

MNM – Results
Convergence factors for varying P (rows) and α (columns):

  P \ α      MNM    FAS       MNM    FAS       MNM    FAS
  ?          0.18   0.26      0.26   0.35      0.97   0.64
  2          0.13   0.23      0.25   0.33      0.38   ?
  ?          0.12   0.21      0.25   0.33      0.41   0.42

MNM – Implementation Hints
– Due to the Galerkin coarsening, the restriction operator acts on the left-hand side as well as on the right-hand side and hence cancels out. The prolongation operator, however, appears only on the right-hand side, so a compatible scaling has to be introduced for the rediscretized operators as well.
– Since N + L is only an approximation of the (nonlinear) fine-grid operator, a nonlinear relaxation method is needed, as in FAS (e.g. Gauss-Seidel-Newton).

Conclusions (I)
– FAS and Newton-MG both have advantages and disadvantages.
– MNM combines the good properties of both methods, but introduces difficulties due to the scaling of the coarse-grid approximations of the operators.
– MNM usually yields the best convergence factor of the three approaches.
– Sometimes MNM does not converge; then backtracking can be used, but it yields poor convergence.

Conclusions (II)
– Adaptive MNM can be used instead of MNM with backtracking, yielding quite a good convergence factor.
– The computational cost per V-cycle is higher for MNM than for FAS or Newton-MG, but less than the sum of both.
– MNM is still a research topic.
– MNM is more complicated to implement.

Bibliography
– I. Yavneh and G. Dardyk, A Multilevel Nonlinear Method, Haifa, 2005.
– W. L. Briggs, V. E. Henson, and S. F. McCormick, A Multigrid Tutorial, 2nd ed., SIAM, Philadelphia, 2000.
– V. E. Henson, Multigrid for nonlinear problems: an overview, Center for Applied Scientific Computing, Lawrence Livermore National Laboratory, 2003.