CSE 245: Computer Aided Circuit Simulation and Verification


CSE 245: Computer Aided Circuit Simulation and Verification
Fall 2004, Oct 19
Lecture 7: Matrix Solver II - Iterative Methods

Outline (slides: Zhengyong (Simon) Zhu, UCSD)
- Iterative Methods
  - Stationary iterative methods (Jacobi, Gauss-Seidel, SOR)
  - Krylov subspace methods (CG, GMRES)
  - Multigrid method

Iterative Methods (courtesy Alessandra Nardi, UCB)
Stationary: x(k+1) = G x(k) + c, where G and c do not depend on the iteration count k.
Non-stationary: x(k+1) = x(k) + a_k p(k), where the computation involves information that changes at each iteration.

Stationary: Jacobi Method (courtesy Alessandra Nardi, UCB)
In the i-th equation, solve for the value of x_i while assuming the other entries of x remain fixed:
x_i(k+1) = ( b_i - sum_{j != i} M_ij x_j(k) ) / M_ii
In matrix terms the method becomes:
x(k+1) = D^-1 (L + U) x(k) + D^-1 b
where D, -L and -U represent the diagonal, the strictly lower-triangular and the strictly upper-triangular parts of M, with M = D - L - U.
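A minimal NumPy sketch of the Jacobi update (added here, not from the slides; it assumes the splitting M = D - L - U above and a small diagonally dominant test matrix):

```python
import numpy as np

def jacobi(M, b, x0, iters=100):
    """Jacobi iteration for M x = b with the splitting M = D - L - U."""
    D = np.diag(M)                     # diagonal entries of M
    R = M - np.diag(D)                 # off-diagonal part, i.e. -(L + U)
    x = x0.copy()
    for _ in range(iters):
        # x(k+1) = D^-1 (b - (M - D) x(k)) = D^-1 ((L + U) x(k) + b)
        x = (b - R @ x) / D
    return x

M = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])
b = np.array([1.0, 2.0, 3.0])
print(jacobi(M, b, np.zeros(3)))       # approaches np.linalg.solve(M, b)
```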

Stationary: Gauss-Seidel (courtesy Alessandra Nardi, UCB)
Like Jacobi, but now previously computed results are used as soon as they are available:
x_i(k+1) = ( b_i - sum_{j < i} M_ij x_j(k+1) - sum_{j > i} M_ij x_j(k) ) / M_ii
In matrix terms the method becomes:
x(k+1) = (D - L)^-1 ( U x(k) + b )
where D, -L and -U represent the diagonal, the strictly lower-triangular and the strictly upper-triangular parts of M, with M = D - L - U.
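A corresponding Gauss-Seidel sweep sketched in NumPy (added here, not from the slides); the only change from Jacobi is that updated entries of x are used immediately:

```python
import numpy as np

def gauss_seidel(M, b, x0, iters=100):
    """Gauss-Seidel for M x = b: use new values of x as soon as they are available."""
    n = len(b)
    x = x0.copy()
    for _ in range(iters):
        for i in range(n):
            s = M[i, :i] @ x[:i] + M[i, i+1:] @ x[i+1:]   # x[:i] already updated this sweep
            x[i] = (b[i] - s) / M[i, i]
    return x

M = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])
b = np.array([1.0, 2.0, 3.0])
print(gauss_seidel(M, b, np.zeros(3)))
```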

Stationary: Successive Overrelaxation (SOR) (courtesy Alessandra Nardi, UCB)
Derived by applying extrapolation to Gauss-Seidel in the form of a weighted average between the previous iterate and the Gauss-Seidel update:
x_i(k+1) = omega * x_i(GS) + (1 - omega) * x_i(k)
In matrix terms the method becomes:
x(k+1) = (D - omega L)^-1 ( omega U + (1 - omega) D ) x(k) + omega (D - omega L)^-1 b
where D, -L and -U represent the diagonal, the strictly lower-triangular and the strictly upper-triangular parts of M, with M = D - L - U.

SOR
Choose omega to accelerate convergence:
omega = 1: reduces to the underlying Gauss-Seidel (or Jacobi) iteration
1 < omega < 2: over-relaxation
omega < 1: under-relaxation
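An SOR sweep written as a weighted average of the old iterate and the Gauss-Seidel value; a sketch added for illustration (not the slides' code), with omega as the relaxation parameter discussed above:

```python
import numpy as np

def sor(M, b, x0, omega=1.5, iters=100):
    """SOR for M x = b: blend the Gauss-Seidel update with the previous iterate."""
    n = len(b)
    x = x0.copy()
    for _ in range(iters):
        for i in range(n):
            s = M[i, :i] @ x[:i] + M[i, i+1:] @ x[i+1:]
            x_gs = (b[i] - s) / M[i, i]                  # Gauss-Seidel value
            x[i] = (1.0 - omega) * x[i] + omega * x_gs   # omega = 1 reduces to Gauss-Seidel
    return x

M = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])
b = np.array([1.0, 2.0, 3.0])
print(sor(M, b, np.zeros(3), omega=1.1))
```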

Convergence of Stationary Methods
Linear equation: Mx = b
A sufficient condition for convergence of Jacobi and Gauss-Seidel is that the matrix M is diagonally dominant.
If M is symmetric positive definite, SOR converges for any omega with 0 < omega < 2.
A necessary and sufficient condition for convergence is that the magnitude of the largest eigenvalue (the spectral radius) of the iteration matrix G is smaller than 1.
Jacobi: G = D^-1 (L + U); Gauss-Seidel: G = (D - L)^-1 U; SOR: G = (D - omega L)^-1 ( omega U + (1 - omega) D ).
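The convergence condition can be checked numerically by forming the iteration matrix G of each method and computing its spectral radius; a sketch assuming the splitting M = D - L - U used above:

```python
import numpy as np

def iteration_matrices(M, omega=1.5):
    """Iteration matrices G for Jacobi, Gauss-Seidel and SOR (splitting M = D - L - U)."""
    D = np.diag(np.diag(M))
    L = -np.tril(M, -1)
    U = -np.triu(M, 1)
    G_jacobi = np.linalg.solve(D, L + U)
    G_gs = np.linalg.solve(D - L, U)
    G_sor = np.linalg.solve(D - omega * L, omega * U + (1.0 - omega) * D)
    return {"Jacobi": G_jacobi, "Gauss-Seidel": G_gs, "SOR": G_sor}

M = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])
for name, G in iteration_matrices(M).items():
    rho = max(abs(np.linalg.eigvals(G)))   # spectral radius; < 1 means the method converges
    print(name, rho)
```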

Outline
- Iterative Methods
  - Stationary iterative methods (SOR, GS, Jacobi)
  - Krylov methods (CG, GMRES)
    - Steepest Descent
    - Conjugate Gradient
    - Preconditioning
  - Multigrid method

Linear Equation: an Optimization Problem
Quadratic function of the vector x: f(x) = 1/2 x^T A x - b^T x + c
The matrix A is positive definite if x^T A x > 0 for any nonzero vector x.
If A is symmetric and positive definite, f(x) is minimized by the solution of Ax = b.

Linear Equation: an Optimization Problem
Quadratic function: f(x) = 1/2 x^T A x - b^T x + c
Derivative (gradient): f'(x) = 1/2 A^T x + 1/2 A x - b
If A is symmetric: f'(x) = A x - b
If A is positive definite: f(x) is minimized by setting f'(x) to 0, i.e. by solving Ax = b.
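A quick numerical check of this statement, using the 2x2 example system from Shewchuk's notes; the check itself is a sketch added here, not part of the slides:

```python
import numpy as np

A = np.array([[3.0, 2.0],
              [2.0, 6.0]])       # symmetric positive definite
b = np.array([2.0, -8.0])

f = lambda x: 0.5 * x @ A @ x - b @ x    # quadratic form (constant c dropped)
grad = lambda x: A @ x - b               # gradient, valid since A is symmetric

x_star = np.linalg.solve(A, b)           # solution of Ax = b
print(grad(x_star))                      # ~ [0, 0]: the gradient vanishes at x*
print(f(x_star) < f(x_star + np.array([0.1, -0.2])))   # True: nearby points have larger f
```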

For a symmetric positive definite matrix A, the graph of f(x) is a paraboloid bowl. (from J. R. Shewchuk, "painless CG")

Gradient of the Quadratic Form
The gradient f'(x) points in the direction of steepest increase of f(x). (from J. R. Shewchuk, "painless CG")

Symmetric Positive-Definite Matrix A
If A is symmetric positive definite, let p be an arbitrary point and x the solution point of Ax = b. Then
f(p) = f(x) + 1/2 (p - x)^T A (p - x),
so f(p) > f(x) whenever p != x: the solution is the unique minimizer of f.

If A is Not Positive Definite
a) positive-definite matrix: a bowl with a unique minimum
b) negative-definite matrix
c) singular (positive-indefinite) matrix: a whole line of minima
d) indefinite matrix: a saddle point
(from J. R. Shewchuk, "painless CG")

Non-Stationary Iterative Methods
Start from an initial guess x(0) and adjust it until it is close enough to the exact solution:
x(i+1) = x(i) + a_i p(i),  i = 0, 1, 2, 3, ...
where p(i) is the adjustment direction and a_i the step size.
How do we choose the direction and the step size?

Steepest Descent Method (1)
Choose the direction in which f decreases most quickly: the direction opposite the gradient,
-f'(x(i)) = b - A x(i),
which is also the direction of the residual r(i).

Steepest Descent Method (2)
How to choose the step size a_i? Line search: a_i should minimize f along the direction r(i), which means the new gradient is orthogonal to the search direction:
r(i+1)^T r(i) = 0  =>  a_i = ( r(i)^T r(i) ) / ( r(i)^T A r(i) )

Steepest Descent Algorithm
Given x(0), iterate until the residual is smaller than the error tolerance:
r(i) = b - A x(i)
a_i = ( r(i)^T r(i) ) / ( r(i)^T A r(i) )
x(i+1) = x(i) + a_i r(i)
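A NumPy sketch of the algorithm, added here for concreteness; it reuses the 2x2 Shewchuk example and the formulas on this slide:

```python
import numpy as np

def steepest_descent(A, b, x0, tol=1e-10, max_iter=10000):
    """Steepest descent for SPD A: step along the residual with an exact line search."""
    x = x0.copy()
    for _ in range(max_iter):
        r = b - A @ x                    # residual = direction of steepest decrease of f
        if np.linalg.norm(r) < tol:
            break
        alpha = (r @ r) / (r @ (A @ r))  # line-search step size
        x = x + alpha * r
    return x

A = np.array([[3.0, 2.0], [2.0, 6.0]])
b = np.array([2.0, -8.0])
print(steepest_descent(A, b, np.array([-2.0, -2.0])))   # converges to [2, -2]
```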

Steepest Descent Method: Example
a) Starting at (-2, -2), take the direction of steepest descent of f.
b) Find the point on the intersection of these two surfaces that minimizes f.
c) Intersection of surfaces.
d) The gradient at the bottommost point is orthogonal to the gradient of the previous step.
(from J. R. Shewchuk, "painless CG")

Iterations of the Steepest Descent Method (figure from J. R. Shewchuk, "painless CG")

Convergence of Steepest Descent - 1
Expand the error in the eigenvectors of A: e(i) = sum_j xi_j v_j, with eigenvectors v_j and eigenvalues lambda_j (A v_j = lambda_j v_j, j = 1, 2, ..., n).
Energy norm: ||e||_A = ( e^T A e )^(1/2)

Convergence of Steepest Descent - 2

Convergence Study (n = 2)
Assume lambda_1 >= lambda_2 > 0. Let the spectral condition number be kappa = lambda_1 / lambda_2, and let mu = xi_2 / xi_1 (the slope of the initial error in the eigenvector basis).

Plot of omega, the per-iteration convergence factor, as a function of kappa and mu. (from J. R. Shewchuk, "painless CG")

Case Study (from J. R. Shewchuk, "painless CG")

Bound of Convergence
It can be proved that the bound
||e(i)||_A <= ( (kappa - 1) / (kappa + 1) )^i ||e(0)||_A
is also valid for n > 2, where kappa = lambda_max / lambda_min.
(from J. R. Shewchuk, "painless CG")

Conjugate Gradient Method
Steepest descent repeats search directions (the zigzag path in the figure). Why not take exactly one step in each direction?
(figure from J. R. Shewchuk, "painless CG")

Orthogonal Directions
Pick mutually orthogonal search directions d(0), ..., d(n-1) and take exactly one step in each, eliminating one component of the error per step. Problem: the required step size depends on the error e(i), which we don't know!

Orthogonal -> A-orthogonal
Instead of orthogonal search directions, we make the search directions A-orthogonal (conjugate): d(i)^T A d(j) = 0 for i != j. (from J. R. Shewchuk, "painless CG")

Search Step Size
Requiring e(i+1) to be A-orthogonal to d(i) gives
a_i = ( d(i)^T r(i) ) / ( d(i)^T A d(i) )

Iteration Finishes in n Steps
Expand the initial error in the A-orthogonal directions: e(0) = sum_j delta_j d(j).
The error component along direction d(j) is eliminated at step j; after n steps, all error components are eliminated.

Conjugate Search Directions
How do we construct A-orthogonal search directions, given a set of n linearly independent vectors? Since the residual vectors of the steepest descent method are mutually orthogonal, the residuals are a good candidate set to start with.

Construct Search Direction - 1
In the steepest descent method, the new residual is just a linear combination of the previous residual and A r(i): r(i+1) = r(i) - a_i A r(i).
Let D_i = span{ r(0), A r(0), A^2 r(0), ..., A^(i-1) r(0) }.
This is a Krylov subspace: the space created by repeatedly applying a matrix to a vector.

Construct Search Direction - 2
Let d(0) = r(0). For i > 0, the residuals are mutually orthogonal: r(i)^T r(j) = 0 for i != j.

Construct Search Direction - 3
Each new direction can be obtained from the previous one, without saving them all:
let beta_i = ( r(i)^T r(i) ) / ( r(i-1)^T r(i-1) ), then d(i) = r(i) + beta_i d(i-1).

Conjugate Gradient Algorithm
Given x(0), iterate until the residual is smaller than the error tolerance:
d(0) = r(0) = b - A x(0)
a_i = ( r(i)^T r(i) ) / ( d(i)^T A d(i) )
x(i+1) = x(i) + a_i d(i)
r(i+1) = r(i) - a_i A d(i)
beta_(i+1) = ( r(i+1)^T r(i+1) ) / ( r(i)^T r(i) )
d(i+1) = r(i+1) + beta_(i+1) d(i)
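The recurrences above translate almost line for line into NumPy; a sketch added here (not from the slides), again using the Shewchuk example system:

```python
import numpy as np

def conjugate_gradient(A, b, x0, tol=1e-10, max_iter=None):
    """Conjugate gradient for SPD A, following the recurrences on the slide."""
    x = x0.copy()
    r = b - A @ x
    d = r.copy()                       # d(0) = r(0)
    rr = r @ r
    for _ in range(max_iter or len(b)):
        if np.sqrt(rr) < tol:
            break
        Ad = A @ d
        alpha = rr / (d @ Ad)          # step size a_i
        x = x + alpha * d
        r = r - alpha * Ad
        rr_new = r @ r
        d = r + (rr_new / rr) * d      # beta_(i+1) = rr_new / rr; next A-orthogonal direction
        rr = rr_new
    return x

A = np.array([[3.0, 2.0], [2.0, 6.0]])
b = np.array([2.0, -8.0])
print(conjugate_gradient(A, b, np.zeros(2)))   # exact (up to roundoff) after n = 2 steps
```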

Conjugate Gradient: Convergence
In exact arithmetic, CG converges in n steps (completely unrealistic!!)
Accuracy after k steps of CG is related to: consider polynomials of degree k that are equal to 1 at 0; how small can such a polynomial be at all the eigenvalues of A? Thus, eigenvalues close together are good.
Condition number: kappa(A) = ||A||_2 ||A^-1||_2 = lambda_max(A) / lambda_min(A)
The residual is reduced by a constant factor in O(kappa(A)^(1/2)) iterations of CG.
(courtesy J. R. Gilbert, UCSB)

Other Krylov Subspace Methods
Nonsymmetric linear systems:
GMRES: for i = 1, 2, 3, ..., find x_i in K_i(A, b) such that r_i = A x_i - b is orthogonal to K_i(A, b). But there is no short recurrence, so old vectors must be saved, costing much more space. (Usually "restarted" every k iterations to use less space.)
BiCGStab, QMR, etc.: two spaces K_i(A, b) and K_i(A^T, b) with mutually orthogonal bases. Short recurrences give O(n) space, but they are less robust; convergence and preconditioning are more delicate than for CG. An active area of current research.
Eigenvalues: Lanczos (symmetric), Arnoldi (nonsymmetric).
(courtesy J. R. Gilbert, UCSB)
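For nonsymmetric systems one would normally call a library implementation rather than hand-code GMRES; a minimal usage sketch, assuming SciPy is available (scipy.sparse.linalg.gmres; not part of the slides):

```python
import numpy as np
from scipy.sparse.linalg import gmres

A = np.array([[2.0, 1.0, 0.0],
              [0.0, 2.0, 1.0],
              [1.0, 0.0, 2.0]])        # nonsymmetric system
b = np.array([1.0, 2.0, 3.0])

x, info = gmres(A, b)                   # info == 0 indicates convergence
print(info, np.linalg.norm(A @ x - b))  # residual norm of the returned solution
```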

Preconditioners
Suppose you had a matrix B such that:
(1) the condition number kappa(B^-1 A) is small, and
(2) By = z is easy to solve.
Then you could solve (B^-1 A) x = B^-1 b instead of Ax = b.
B = A is great for (1), not for (2). B = I is great for (2), not for (1).
Domain-specific approximations sometimes work; B = diagonal of A sometimes works.
Better: blend in some direct-methods ideas. . .
(courtesy J. R. Gilbert, UCSB)

Preconditioned Conjugate Gradient Iteration
x_0 = 0,  r_0 = b,  d_0 = B^-1 r_0,  y_0 = B^-1 r_0
for k = 1, 2, 3, ...
    alpha_k = ( y_(k-1)^T r_(k-1) ) / ( d_(k-1)^T A d_(k-1) )   [step length]
    x_k = x_(k-1) + alpha_k d_(k-1)                             [approximate solution]
    r_k = r_(k-1) - alpha_k A d_(k-1)                           [residual]
    y_k = B^-1 r_k                                              [preconditioning solve]
    beta_k = ( y_k^T r_k ) / ( y_(k-1)^T r_(k-1) )              [improvement]
    d_k = y_k + beta_k d_(k-1)                                  [search direction]
One matrix-vector multiplication per iteration; one solve with the preconditioner per iteration.
(courtesy J. R. Gilbert, UCSB)
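A sketch of this iteration with a Jacobi (diagonal) preconditioner, B = diag(A); the helper name solve_B stands for the "preconditioning solve" step and is chosen here for illustration, not taken from the slides:

```python
import numpy as np

def pcg(A, b, solve_B, tol=1e-10, max_iter=None):
    """Preconditioned CG as on the slide; solve_B(r) applies B^-1 to a vector."""
    x = np.zeros(len(b))              # x0 = 0
    r = b.copy()                      # r0 = b
    y = solve_B(r)                    # y0 = B^-1 r0
    d = y.copy()                      # d0 = y0
    ry = r @ y
    for _ in range(max_iter or len(b)):
        if np.linalg.norm(r) < tol:
            break
        Ad = A @ d
        alpha = ry / (d @ Ad)         # step length
        x = x + alpha * d             # approximate solution
        r = r - alpha * Ad            # residual
        y = solve_B(r)                # preconditioning solve
        ry_new = r @ y
        d = y + (ry_new / ry) * d     # beta_k = ry_new / ry; search direction
        ry = ry_new
    return x

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
b = np.array([1.0, 2.0, 3.0])
x = pcg(A, b, solve_B=lambda r: r / np.diag(A))   # B = diagonal of A
print(np.linalg.norm(A @ x - b))
```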

Outline
- Iterative Methods
  - Stationary iterative methods (SOR, GS, Jacobi)
  - Krylov methods (CG, GMRES)
  - Multigrid method

What is Multigrid?
A multilevel iterative method to solve Ax = b.
Originated in PDEs on geometric grids.
Extending the multigrid idea to unstructured problems gives Algebraic Multigrid (AMG).
Geometric multigrid is used here to present the basic ideas of the multigrid method.

The Model Problem (figure): a circuit with node voltages v1 ... v8 and a source vs; its nodal equations form the linear system Ax = b.

Simple Iterative Method
x(0) -> x(1) -> ... -> x(k)
Jacobi iteration in matrix form: x(k) = R_J x(k-1) + C_J
General form: x(k) = R x(k-1) + C      (1)
Stationary point: x* = R x* + C        (2)

Error and Convergence
Definitions: error e = x* - x   (3);   residual r = b - Ax   (4)
Relation between e and r: Ae = r   (5)   (from (3) and (4))
e(1) = x* - x(1) = R x* + C - R x(0) - C = R e(0)
Error equation: e(k) = R^k e(0)   (6)   (from (1), (2), (3))
Convergence: e(k) -> 0 for any e(0) iff the spectral radius of R is smaller than 1.

Error of Different Frequencies
Wavenumber k and frequency theta = k*pi/n.
High-frequency error is more oscillatory between grid points. (figure: modes with k = 1, 2, 4)

Smoothing iterations reduce high-frequency error efficiently, but not low-frequency error.
(figure: error vs. iterations for the modes k = 1, 2, 4)

Multigrid - a First Glance
Two levels: coarse and fine grid.
Fine grid (spacing h, points 1 ... 8):    A^h x^h = b^h   (the original Ax = b)
Coarse grid (spacing 2h, points 1 ... 4): A^2h x^2h = b^2h

Idea 1: the V-cycle Iteration (Nested Iteration)
Start on the coarse grid 2h: iterate on A^2h x^2h = b^2h.
Prolongation maps the coarse result to the fine grid h (restriction maps fine-grid quantities back to 2h).
Iterate on the fine grid to get A^h x^h = b^h.
Question 1: Why do we need the coarse grid?

Prolongation
Prolongation (interpolation) operator I_2h^h:  x^h = I_2h^h x^2h
(figure: fine grid points 1 ... 8; coarse-grid values are interpolated to the fine grid)

Restriction
Restriction operator I_h^2h:  x^2h = I_h^2h x^h
(figure: fine grid points 1 ... 8 mapped to the coarse grid)
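For a 1D grid the prolongation and restriction operators can be written as small matrices; a sketch of the standard linear-interpolation / full-weighting pair, added for illustration (not from the slides):

```python
import numpy as np

def prolongation_1d(n_coarse):
    """Linear-interpolation prolongation I_2h^h for 1D interior points (n_fine = 2*n_coarse + 1)."""
    n_fine = 2 * n_coarse + 1
    P = np.zeros((n_fine, n_coarse))
    for j in range(n_coarse):
        i = 2 * j + 1          # fine-grid point coinciding with coarse point j
        P[i, j] = 1.0
        P[i - 1, j] += 0.5     # in-between fine points: average of neighboring coarse points
        P[i + 1, j] += 0.5
    return P

P = prolongation_1d(3)         # 3 coarse points -> 7 fine points
R = 0.5 * P.T                  # full-weighting restriction I_h^2h = (1/2) P^T
print(P)
print(R @ np.ones(7))          # a constant stays constant: each row of R sums to 1
```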

Smoothing
The basic iteration at each level: on grid ph, x_old^ph -> x_new^ph.
The iteration reduces the error and makes the error geometrically smooth; this is why the iteration is called smoothing.

Why Multilevel?
Coarse-level iteration is cheap. More than this: coarse-level smoothing reduces the error more efficiently than fine-level smoothing, in some sense. Why? (Question 2)

Error Restriction
Mapping the error to the coarse grid makes it more oscillatory: a mode with k = 4 has frequency theta = pi/2 on the fine grid but theta = pi on the coarse grid.

Idea 2: Residual Correction
Given the current solution x, solving Ax = b is equivalent to solving the residual equation Ae = r. MG does NOT map x directly between levels; it maps the residual equation to the coarse level:
Calculate r^h = b^h - A^h x^h
b^2h = I_h^2h r^h   (restriction)
Solve A^2h x^2h = b^2h on the coarse grid
e^h = I_2h^h x^2h   (prolongation)
x^h = x^h + e^h

Why Residual Correction?
The error is smooth at the fine level (after smoothing), but the actual solution may not be. Prolongation of the coarse correction produces a smooth function on the fine level, which is supposed to be a good approximation of the fine-level error. If the solution itself is not smooth at the fine level, prolongating it would introduce more high-frequency error.

Revised V-cycle with Idea 2
Fine grid h: smooth on x^h; calculate r^h; restrict b^2h = I_h^2h r^h.
Coarse grid 2h: smooth on x^2h (the coarse residual equation).
Prolongate e^h = I_2h^h x^2h; correct x^h = x^h + e^h.
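Putting the pieces together: a two-grid cycle for the 1D Poisson model problem, written in the order of the slide (smooth, restrict the residual, coarse solve, prolongate, correct, smooth). This is a sketch added for illustration, using weighted Jacobi as the smoother and a direct solve on the coarse level:

```python
import numpy as np

def poisson_1d(n):
    """Tridiagonal [-1, 2, -1] matrix: the 1D Poisson model problem on n interior points."""
    return 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

def prolongation_1d(n_coarse):
    """Linear-interpolation prolongation (same operator as in the earlier sketch)."""
    n_fine = 2 * n_coarse + 1
    P = np.zeros((n_fine, n_coarse))
    for j in range(n_coarse):
        i = 2 * j + 1
        P[i, j] = 1.0
        P[i - 1, j] += 0.5
        P[i + 1, j] += 0.5
    return P

def weighted_jacobi(A, x, b, sweeps=3, w=2.0/3.0):
    """Damped Jacobi smoothing: removes high-frequency error components."""
    d = np.diag(A)
    for _ in range(sweeps):
        x = x + w * (b - A @ x) / d
    return x

def two_grid_cycle(A, x, b, P):
    """One cycle of the revised V-cycle: smooth, restrict residual, coarse solve, correct, smooth."""
    R = 0.5 * P.T                       # restriction
    A2h = R @ A @ P                     # Galerkin coarse-grid operator
    x = weighted_jacobi(A, x, b)        # pre-smoothing on x^h
    r = b - A @ x                       # fine-grid residual r^h
    e2h = np.linalg.solve(A2h, R @ r)   # solve the coarse residual equation (or recurse)
    x = x + P @ e2h                     # prolongate and correct
    return weighted_jacobi(A, x, b)     # post-smoothing

n_coarse = 31
P = prolongation_1d(n_coarse)
A = poisson_1d(2 * n_coarse + 1)
b = np.random.rand(2 * n_coarse + 1)
x = np.zeros_like(b)
for _ in range(10):
    x = two_grid_cycle(A, x, b, P)
    print(np.linalg.norm(b - A @ x))    # residual shrinks by a large factor each cycle
```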

What is A^2h?
Galerkin condition: A^2h = I_h^2h A^h I_2h^h

Going to Multiple Levels
V-cycle and W-cycle; Full Multigrid V-cycle (FMG).
(figure: grid levels h, 2h, 4h and h, 2h, 4h, 8h)

Performance of Multigrid
Complexity comparison:
Gaussian elimination      O(N^2)
Jacobi iteration          O(N^2 log)
Gauss-Seidel              O(N^2 log)
SOR                       O(N^(3/2) log)
Conjugate gradient        O(N^(3/2) log)
Multigrid (iterative)     O(N log)
Multigrid (FMG)           O(N)

Summary of MG Ideas
Three important ideas of MG:
1. Nested iteration
2. Residual correction
3. Elimination of error by frequency: high-frequency error on the fine grid, low-frequency error on the coarse grid

AMG: for Unstructured Grids
Ax = b with no regular grid structure; the fine grid is defined from A. (figure: nodes 1 ... 6)

Three Questions for AMG
1. How to choose the coarse grid?
2. How to define the smoothness of errors?
3. How are prolongation and restriction done?

How to Choose the Coarse Grid
Idea: C/F splitting.
- As few coarse-grid points as possible.
- For each F-node, at least one of its neighbors is a C-node.
- Choose nodes with strong coupling to other nodes as C-nodes.
(figure: nodes 1 ... 6)

How to Define the Smoothness of Error
AMG fundamental concept: smooth error = small residuals, ||r|| << ||e||.

How are Prolongation and Restriction Done?
Prolongation is based on smooth error and strong connections. Common practice: I

AMG Prolongation (2)

AMG Prolongation (3)
Restriction: the transpose of the prolongation operator, I_h^2h = (I_2h^h)^T.

Summary
Multigrid is a multilevel iterative method. Its advantage: it is scalable. If no geometric grid is available, try the algebraic multigrid (AMG) method.

The Landscape of Ax = b Solvers (courtesy J. R. Gilbert, UCSB)
(figure: a chart with rows Direct (A = LU) / Iterative (y' = Ay) and columns Nonsymmetric / Symmetric positive definite; direct methods are more robust, iterative methods need less storage (if sparse), nonsymmetric solvers are more general)