CSE 245: Computer Aided Circuit Simulation and Verification
Matrix Computations: Iterative Methods (II)
Chung-Kuan Cheng

Outline
Introduction
Direct Methods
Iterative Methods
Formulations
Projection Methods
Krylov Space Methods
Preconditioned Iterations
Multigrid Methods
Domain Decomposition Methods

Introduction
Direct Method: LU Decomposition, Domain Decomposition. General and robust, but can be complicated if N >= 1M.
Iterative Methods: Jacobi, Gauss-Seidel, Conjugate Gradient, GMRES, Preconditioning, Multigrid. Excellent choice for SPD matrices; remains an art for arbitrary matrices.

Formulation
Error in A norm: matrix A is SPD (symmetric and positive definite)
Minimal residue: matrix A can be arbitrary

Formulation: Error in A norm
Min (x - x*)^T A (x - x*), where A x* = b and A is SPD.
Equivalently, min E(x) = (1/2) x^T A x - b^T x
Search space: x = x_0 + V y, where x_0 is an initial solution, the n x m matrix V holds m basis vectors of the subspace K, and the length-m vector y contains the m variables.

Solution: Error in A norm
Min E(x) = (1/2) x^T A x - b^T x, search space x = x_0 + V y, residual r = b - A x
V^T A V is nonsingular if A is SPD and V has full rank
y = (V^T A V)^{-1} V^T r_0
x = x_0 + V (V^T A V)^{-1} V^T r_0
At the minimizer, V^T r = 0
E(x) = E(x_0) - (1/2) r_0^T V (V^T A V)^{-1} V^T r_0
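To make the projection step concrete, here is a minimal NumPy sketch of the A-norm (Galerkin) projection onto a given basis V; the test matrix, basis, and starting point in the usage example are illustrative assumptions, not from the slides.

```python
import numpy as np

def a_norm_projection_step(A, b, x0, V):
    """One Galerkin projection step: minimize E(x) = 1/2 x^T A x - b^T x
    over x = x0 + V y, assuming A is SPD and V has full column rank."""
    r0 = b - A @ x0                     # initial residual
    H = V.T @ A @ V                     # m x m projected matrix (nonsingular for SPD A)
    y = np.linalg.solve(H, V.T @ r0)    # y = (V^T A V)^{-1} V^T r0
    return x0 + V @ y

# tiny usage example with an assumed SPD matrix
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x0 = np.zeros(2)
V = np.eye(2)                            # basis spans the full space, so one step solves exactly
x = a_norm_projection_step(A, b, x0, V)
print(np.allclose(A @ x, b))             # True
```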

Solution: Error in A norm
For any V' = V W, where V is n x m and W is a nonsingular m x m matrix, the solution x remains the same.
Search space: x = x_0 + V' y, r = b - A x
V'^T A V' is nonsingular if A is SPD and V has full rank
x = x_0 + V' (V'^T A V')^{-1} V'^T r_0 = x_0 + V (V^T A V)^{-1} V^T r_0
V'^T r = W^T V^T r = 0
E(x) = E(x_0) - (1/2) r_0^T V (V^T A V)^{-1} V^T r_0

Steepest Descent: Error in A norm
Min E(x) = (1/2) x^T A x - b^T x, gradient: A x - b = -r
Set x = x_0 + y r_0
r_0^T A r_0 is nonzero (positive) if A is SPD and r_0 != 0
y = (r_0^T A r_0)^{-1} r_0^T r_0
x = x_0 + r_0 (r_0^T A r_0)^{-1} r_0^T r_0
r_0^T r = 0
E(x) = E(x_0) - (1/2) (r_0^T r_0)^2 / (r_0^T A r_0)
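A minimal sketch of this steepest-descent iteration in the A norm, assuming A is SPD; the function name, tolerance, and iteration cap are illustrative choices.

```python
import numpy as np

def steepest_descent(A, b, x0, tol=1e-10, max_iter=1000):
    """Steepest descent for SPD A: step along r with the exact line search
    y = (r^T r) / (r^T A r), as on the slide."""
    x = x0.copy()
    for _ in range(max_iter):
        r = b - A @ x                    # residual = negative gradient of E(x)
        rr = r @ r
        if np.sqrt(rr) < tol:
            break
        y = rr / (r @ (A @ r))           # optimal step length in the A norm
        x = x + y * r
    return x
```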

Lanczos: Error in A norm
Min E(x) = (1/2) x^T A x - b^T x, set x = x_0 + V y
v_1 = r_0, v_i is in K(r_0, A, i)
V = [v_1, v_2, ..., v_m] is orthogonal
A V = V H_m + v_{m+1} e_m^T
V^T A V = H_m
Note: since A is SPD, H_m is tridiagonal (written T_m)

Conjugate Gradient: Error in A norm
Min E(x) = (1/2) x^T A x - b^T x, set x = x_0 + V y
v_1 = r_0, v_i is in K(r_0, A, i)
V = [v_1, v_2, ..., v_m] is orthogonal in the A norm, i.e. V^T A V = diag(v_i^T A v_i)
y_i = (v_i^T A v_i)^{-1} v_i^T r_0

Formulation: Residual
Min |r|_2 = |b - A x|_2, for an arbitrary square matrix A
Min R(x) = (b - A x)^T (b - A x)
Search space: x = x_0 + V y, where x_0 is an initial solution, the n x m matrix V holds m basis vectors of the subspace K, and the length-m vector y contains the m variables.

Solution: Residual
Min R(x) = (b - A x)^T (b - A x), search space: x = x_0 + V y
V^T A^T A V is nonsingular if A is nonsingular and V has full rank
y = (V^T A^T A V)^{-1} V^T A^T r_0
x = x_0 + V (V^T A^T A V)^{-1} V^T A^T r_0
V^T A^T r = 0
R(x) = R(x_0) - r_0^T A V (V^T A^T A V)^{-1} V^T A^T r_0

Steepest Descent: Residual
Min R(x) = (b - A x)^T (b - A x), gradient: -2 A^T (b - A x) = -2 A^T r
Let x = x_0 + y A^T r_0
V^T A^T A V is nonsingular if A is nonsingular, where V = A^T r_0
y = (V^T A^T A V)^{-1} V^T A^T r_0
x = x_0 + V (V^T A^T A V)^{-1} V^T A^T r_0
V^T A^T r = 0
R(x) = R(x_0) - r_0^T A V (V^T A^T A V)^{-1} V^T A^T r_0

GMRES: Residual
Min R(x) = (b - A x)^T (b - A x), gradient: -2 A^T (b - A x) = -2 A^T r
Set x = x_0 + V y
v_1 = r_0, v_i is in K(r_0, A, i)
V = [v_1, v_2, ..., v_m] is orthogonal
A V = V H_m + v_{m+1} e_m^T = V_{m+1} H̄_m, where H̄_m is the (m+1) x m Hessenberg matrix
x = x_0 + V (V^T A^T A V)^{-1} V^T A^T r_0 = x_0 + V (H̄_m^T H̄_m)^{-1} H̄_m^T e_1 |r_0|_2
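The following is a sketch of a plain, unrestarted GMRES built from the Arnoldi relation above (orthonormal Krylov basis V, Hessenberg H̄_m, small least-squares solve); the dense NumPy implementation and the restart-free loop are simplifying assumptions for illustration.

```python
import numpy as np

def gmres(A, b, x0, m=20):
    """Unrestarted GMRES sketch: build an orthonormal Krylov basis with Arnoldi,
    then minimize |b - Ax|_2 over x0 + span(V) via a small least-squares solve."""
    n = len(b)
    r0 = b - A @ x0
    beta = np.linalg.norm(r0)
    if beta == 0:
        return x0
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = r0 / beta
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):           # modified Gram-Schmidt orthogonalization
            H[i, j] = V[:, i] @ w
            w = w - H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-14:          # happy breakdown: exact solution is in the subspace
            m = j + 1
            break
        V[:, j + 1] = w / H[j + 1, j]
    e1 = np.zeros(m + 1)
    e1[0] = beta
    y, *_ = np.linalg.lstsq(H[:m + 1, :m], e1, rcond=None)   # min |beta e1 - H̄_m y|_2
    return x0 + V[:, :m] @ y
```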

Conjugate Residual: Residual
Min R(x) = (b - A x)^T (b - A x), gradient: -2 A^T (b - A x) = -2 A^T r
Set x = x_0 + V y
v_1 = r_0, v_i is in K(r_0, A, i)
(A V)^T A V = D, a diagonal matrix
x = x_0 + V (V^T A^T A V)^{-1} V^T A^T r_0 = x_0 + V D^{-1} V^T A^T r_0

Conjugate Gradient Method
The steepest-descent method can repeat search directions.
Why take exactly one step for each direction?
(figure: search directions of the steepest-descent method)

Orthogonal Direction
Pick orthogonal search directions and require the error of x_{i+1} to be orthogonal to the search direction d_i, i.e. d_i^T e_{i+1} = 0.
Problem: the resulting step length depends on the error e_i, which we don't know!

Orthogonal -> A-orthogonal
Instead of orthogonal search directions, we make the search directions A-orthogonal (conjugate): d_i^T A d_j = 0 for i != j.

Search Step Size
With A-orthogonal directions, the exact step along d_i is α_i = (d_i^T r_i) / (d_i^T A d_i).

The iteration finishes in n steps
Expand the initial error in the A-orthogonal directions.
The error component along direction d_j is eliminated at step j.
After n steps, all error components are eliminated.

Conjugate Search Directions
How to construct A-orthogonal search directions, given a set of n linearly independent vectors?
Since the residual vectors in the steepest-descent method are orthogonal, they are a good candidate to start with.

Construct Search Direction (1)
In the steepest-descent method the new residual is a linear combination of the previous residual and A r_{i-1}: r_i = r_{i-1} - α_{i-1} A r_{i-1}.
The residuals therefore span a Krylov subspace K_i = span{r_0, A r_0, ..., A^{i-1} r_0}: repeatedly applying a matrix to a vector.

Construct Search Direction (2)
Let d_0 = r_0. For i > 0, A-orthogonalize r_i against the previous directions (Gram-Schmidt conjugation): d_i = r_i + Σ_{k<i} β_{ik} d_k, with β_{ik} chosen so that d_i^T A d_k = 0.

Construct Search Direction (3)
We can get the next direction from the previous one, without saving them all:
let β_i = (r_i^T r_i) / (r_{i-1}^T r_{i-1}); then d_i = r_i + β_i d_{i-1}.

Conjugate Gradient Algorithm
Given x_0, iterate until the residual is smaller than the error tolerance.
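A sketch of the standard conjugate gradient loop the slide refers to, assuming A is SPD; parameter names and the stopping test are illustrative.

```python
import numpy as np

def conjugate_gradient(A, b, x0, tol=1e-10, max_iter=None):
    """Conjugate gradient for SPD A: start from the residual and take an exact
    step along each A-orthogonal search direction."""
    x = x0.copy()
    r = b - A @ x
    d = r.copy()
    rr = r @ r
    for _ in range(max_iter or len(b)):
        if np.sqrt(rr) < tol:
            break
        Ad = A @ d
        alpha = rr / (d @ Ad)            # exact step length along d
        x = x + alpha * d
        r = r - alpha * Ad
        rr_new = r @ r
        beta = rr_new / rr               # makes the new direction A-orthogonal to d
        d = r + beta * d
        rr = rr_new
    return x
```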

Conjugate gradient: Convergence
In exact arithmetic, CG converges in n steps (completely unrealistic!!)
Accuracy after k steps of CG is related to: consider polynomials of degree k that equal 1 at 0; how small can such a polynomial be at all the eigenvalues of A? Eigenvalues close together are good.
Condition number: κ(A) = ||A||_2 ||A^{-1}||_2 = λ_max(A) / λ_min(A)
The residual is reduced by a constant factor by O(κ^{1/2}(A)) iterations of CG.
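As a quick illustration of the κ^{1/2} dependence, this snippet computes the condition number of an assumed SPD test matrix and the classical per-step error-reduction factor (sqrt(κ) - 1) / (sqrt(κ) + 1).

```python
import numpy as np

A = np.diag(np.linspace(1.0, 100.0, 50))        # assumed SPD test matrix
eigs = np.linalg.eigvalsh(A)                    # eigenvalues in ascending order
kappa = eigs[-1] / eigs[0]                      # kappa(A) = lambda_max / lambda_min
rho = (np.sqrt(kappa) - 1) / (np.sqrt(kappa) + 1)   # classical CG error-reduction factor
print(kappa, rho)
```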

Other Krylov subspace methods
Nonsymmetric linear systems:
GMRES: for i = 1, 2, 3, ..., find x_i ∈ K_i(A, b) such that r_i = (A x_i - b) ⊥ K_i(A, b). But there is no short recurrence, so old vectors must be saved, which costs much more space. (Usually "restarted" every k iterations to use less space.)
BiCGStab, QMR, etc.: two spaces K_i(A, b) and K_i(A^T, b) with mutually orthogonal bases. Short recurrences give O(n) space, but the methods are less robust; convergence and preconditioning are more delicate than for CG. Active area of current research.
Eigenvalues: Lanczos (symmetric), Arnoldi (nonsymmetric)

Preconditioners
Suppose you had a matrix B such that:
(1) the condition number κ(B^{-1} A) is small
(2) By = z is easy to solve
Then you could solve (B^{-1} A) x = B^{-1} b instead of Ax = b.
B = A is great for (1), not for (2); B = I is great for (2), not for (1).
Domain-specific approximations sometimes work; B = diagonal of A sometimes works.
Better: blend in some direct-methods ideas...
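A minimal sketch of the simplest choice mentioned above, B = diag(A) (Jacobi preconditioning); the factory-function style is an illustrative design choice.

```python
import numpy as np

def jacobi_preconditioner(A):
    """B = diag(A): trivially easy to solve with, and it often reduces kappa(B^{-1} A)."""
    d = np.diag(A).copy()
    def solve(z):
        return z / d                     # solve By = z  ->  y = z / diag(A)
    return solve
```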

Preconditioned conjugate gradient iteration
x_0 = 0, r_0 = b, d_0 = B^{-1} r_0, y_0 = B^{-1} r_0
for k = 1, 2, 3, ...
  α_k = (y_{k-1}^T r_{k-1}) / (d_{k-1}^T A d_{k-1})   (step length)
  x_k = x_{k-1} + α_k d_{k-1}                          (approximate solution)
  r_k = r_{k-1} - α_k A d_{k-1}                        (residual)
  y_k = B^{-1} r_k                                     (preconditioning solve)
  β_k = (y_k^T r_k) / (y_{k-1}^T r_{k-1})              (improvement)
  d_k = y_k + β_k d_{k-1}                              (search direction)
One matrix-vector multiplication per iteration; one solve with the preconditioner per iteration.
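A NumPy sketch that follows the preconditioned CG pseudocode above line by line; `solve_B` stands for the preconditioner solve y = B^{-1} r (for example, the diagonal preconditioner sketched earlier).

```python
import numpy as np

def preconditioned_cg(A, b, solve_B, tol=1e-10, max_iter=1000):
    """Preconditioned CG: one A*d product and one preconditioner solve per iteration."""
    x = np.zeros(len(b))                 # x0 = 0
    r = b.copy()                         # r0 = b
    y = solve_B(r)                       # y0 = B^{-1} r0
    d = y.copy()                         # d0 = y0
    ry = r @ y
    for _ in range(max_iter):
        if np.linalg.norm(r) < tol:
            break
        Ad = A @ d
        alpha = ry / (d @ Ad)            # step length
        x = x + alpha * d                # approximate solution
        r = r - alpha * Ad               # residual
        y = solve_B(r)                   # preconditioning solve
        ry_new = r @ y
        beta = ry_new / ry               # improvement
        d = y + beta * d                 # search direction
        ry = ry_new
    return x

# usage with the diagonal preconditioner:  x = preconditioned_cg(A, b, jacobi_preconditioner(A))
```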

Outline
Iterative Methods
Stationary Iterative Methods (SOR, Gauss-Seidel, Jacobi)
Krylov Methods (CG, GMRES)
Multigrid Methods

What is multigrid?
A multilevel iterative method to solve Ax = b
Originated in PDEs on geometric grids
Extending the multigrid idea to unstructured problems gives Algebraic MG (AMG)
We use geometric multigrid to present the basic ideas of the multigrid method.

The model problem
(figure: a circuit with node voltages v1 ... v8 driven by a source vs, whose nodal equations give the linear system Ax = b)

Simple iterative method
x^(0) -> x^(1) -> ... -> x^(k)
Jacobi iteration, matrix form: x^(k) = R_J x^(k-1) + C_J
General form: x^(k) = R x^(k-1) + C   (1)
Stationary: x* = R x* + C   (2)
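A minimal Jacobi iteration sketch matching the matrix form above, with R_J = -D^{-1}(L + U) and C_J = D^{-1} b; dense NumPy and a fixed iteration count are simplifying assumptions.

```python
import numpy as np

def jacobi(A, b, x0, num_iters=100):
    """Jacobi iteration x^(k) = R_J x^(k-1) + C_J, written as x^(k) = D^{-1}(b - (L+U) x^(k-1))."""
    D = np.diag(A)                       # diagonal entries of A
    R = A - np.diag(D)                   # off-diagonal part L + U
    x = x0.copy()
    for _ in range(num_iters):
        x = (b - R @ x) / D
    return x
```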

Error and Convergence
Definitions: error e = x* - x   (3), residual r = b - Ax   (4)
Relation between e and r: A e = r   (5)   (from (3) and (4))
e^(1) = x* - x^(1) = R x* + C - R x^(0) - C = R e^(0)
Error equation: e^(k) = R^k e^(0)   (6)   (from (1), (2), (3))
Convergence: e^(k) -> 0 for every e^(0) iff the spectral radius ρ(R) < 1.

Error of different frequency
Wavenumber k and frequency θ = kπ/n
High-frequency error is more oscillatory between grid points.
(figure: error modes for k = 1, 2, 4)

Iterations reduce low-frequency error inefficiently
Smoothing iterations reduce high-frequency error efficiently, but not low-frequency error.
(figure: error vs. iterations for modes k = 1, 2, 4)

Multigrid: a first glance
Two levels: coarse and fine grid
Coarse grid (spacing 2h, points 1..4): A_2h x_2h = b_2h
Fine grid (spacing h, points 1..8): A_h x_h = b_h, i.e. the original Ax = b

Idea 1: the V-cycle iteration
Also called nested iteration
Start on the coarse grid (2h): iterate on A_2h x_2h = b_2h
Prolongation (coarse to fine) and restriction (fine to coarse) map vectors between levels
Iterate on the fine grid (h) to get A_h x_h = b_h
Question 1: why do we need the coarse grid?

Prolongation
The prolongation (interpolation) operator maps coarse-grid values to the fine grid: x_h = I_{2h}^h x_2h
(figure: fine-grid points 1..8)

Restriction
The restriction operator maps fine-grid values to the coarse grid: x_2h = I_h^{2h} x_h
(figure: fine-grid points 1..8)
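A sketch of the two operators on a 1-D grid, assuming linear interpolation for prolongation and full weighting for restriction (standard choices for the 1-D model problem with Dirichlet boundaries); grid sizes and naming are illustrative.

```python
import numpy as np

def prolongation_1d(nc):
    """Linear interpolation I_{2h}^{h}: nc coarse interior points map to
    nf = 2*nc + 1 fine interior points (Dirichlet boundaries assumed)."""
    nf = 2 * nc + 1
    P = np.zeros((nf, nc))
    for i in range(nc):
        P[2 * i,     i] = 0.5            # fine point to the left of coarse point i
        P[2 * i + 1, i] = 1.0            # fine point coinciding with coarse point i
        P[2 * i + 2, i] = 0.5            # fine point to the right of coarse point i
    return P

def restriction_1d(nc):
    """Full-weighting restriction I_h^{2h}, the scaled transpose of prolongation."""
    return 0.5 * prolongation_1d(nc).T

# x_h = P @ x_2h  (prolongation),  x_2h = R @ x_h  (restriction)
```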

Smoothing
The basic iterations at each level: on level ph, x_ph^old -> x_ph^new
The iteration reduces the error and makes it geometrically smooth, so the iteration is called smoothing.

Why multilevel?
Coarse-level iteration is cheap. But there is more:
coarse-level smoothing reduces the error more efficiently than fine-level smoothing, in some sense. Why? (Question 2)

Error restriction
Mapping the error to the coarse grid makes it more oscillatory:
a k = 4 mode has frequency θ = π/2 on the fine grid but θ = π on the coarse grid.

Idea 2: Residual correction
Given the current solution x, solving Ax = b is equivalent to solving the residual equation.
MG does NOT map x directly between levels; it maps the residual equation to the coarse level:
Calculate r_h
b_2h = I_h^{2h} r_h   (restriction)
e_h = I_{2h}^h x_2h   (prolongation)
x_h = x_h + e_h

Why residual correction?
The error is smooth at the fine level, but the actual solution may not be.
Prolongation then produces a smooth correction on the fine level, which is supposed to be a good approximation of the fine-level error.
If the solution is not smooth at the fine level, prolongating it directly would introduce more high-frequency error.

Revised V-cycle with idea 2
Level h: smooth on x_h; calculate r_h; restrict: b_2h = I_h^{2h} r_h
Level 2h: smooth on x_2h
Prolongate: e_h = I_{2h}^h x_2h; correct: x_h = x_h + e_h
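A two-level sketch of this revised V-cycle in NumPy. The weighted-Jacobi smoother, the exact solve on the coarse level (the slide smooths there instead; with more levels that step would recurse), and the Galerkin construction of A_2h = R A P are assumptions made for a compact illustration.

```python
import numpy as np

def two_grid_cycle(A, b, x, P, R, pre_smooth=3, post_smooth=3):
    """One two-grid cycle with residual correction: smooth, restrict the residual,
    solve the coarse problem, prolongate the correction, smooth again."""
    def smooth(A, b, x, iters):          # weighted Jacobi smoother (assumed choice, weight ~2/3)
        D = np.diag(A)
        for _ in range(iters):
            x = x + 0.67 * (b - A @ x) / D
        return x

    x = smooth(A, b, x, pre_smooth)      # pre-smoothing on the fine grid
    r = b - A @ x                        # fine-grid residual
    b2 = R @ r                           # restrict residual: b_2h = I_h^{2h} r_h
    A2 = R @ A @ P                       # Galerkin coarse operator A_2h = R A P
    e2 = np.linalg.solve(A2, b2)         # coarse-grid solve (exact here; recursive in full MG)
    x = x + P @ e2                       # prolongate and correct: x_h += I_{2h}^h e_2h
    x = smooth(A, b, x, post_smooth)     # post-smoothing
    return x
```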

What is A_2h?
Galerkin condition: A_2h = I_h^{2h} A_h I_{2h}^h

Going to multiple levels
V-cycle and W-cycle; Full Multigrid (FMG) V-cycle
(figure: cycle schedules across levels h, 2h, ...)

Performance of Multigrid
Complexity comparison (model problem with N unknowns):
Gaussian elimination      O(N^2)
Jacobi iteration          O(N^2 log)
Gauss-Seidel              O(N^2 log)
SOR                       O(N^(3/2) log)
Conjugate gradient        O(N^(3/2) log)
Multigrid (iterative)     O(N log)
Multigrid (FMG)           O(N)

Summary of MG ideas
Important ideas of MG:
Hierarchical iteration
Residual correction
Galerkin condition
Smoothing the error: high frequency on the fine grid, low frequency on the coarse grid

AMG: for unstructured grids
Ax = b with no regular grid structure; the fine grid is defined from A
(figure: graph with nodes 1..6)

Three questions for AMG
How to choose the coarse grid?
How to define the smoothness of errors?
How are prolongation and restriction done?

How to choose the coarse grid
Idea: C/F splitting
As few coarse-grid points as possible
For each F-node, at least one of its neighbors is a C-node
Choose nodes with strong coupling to other nodes as C-nodes
(figure: example graph with nodes 1..6)
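A naive greedy C/F-splitting sketch, for illustration only (it is not the full Ruge-Stueben AMG coarsening); the strength-of-connection threshold theta and the greedy ordering are assumed choices.

```python
import numpy as np

def cf_splitting(A, theta=0.25):
    """Naive greedy C/F splitting: nodes with many strong connections become
    C-nodes, and their still-unlabeled strong neighbors become F-nodes."""
    n = A.shape[0]
    # strong connection of i to j: |a_ij| >= theta * max_{k != i} |a_ik|
    strong = [set() for _ in range(n)]
    for i in range(n):
        off = np.abs(np.delete(A[i], i))
        thresh = theta * off.max() if off.size else 0.0
        for j in range(n):
            if j != i and A[i, j] != 0 and abs(A[i, j]) >= thresh:
                strong[i].add(j)
    label = {}                            # node -> 'C' or 'F'
    while len(label) < n:
        # pick the unlabeled node with the most strong connections as the next C-node
        i = max((k for k in range(n) if k not in label),
                key=lambda k: len(strong[k]))
        label[i] = 'C'
        for j in strong[i]:
            if j not in label:
                label[j] = 'F'
    return label
```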

How to define the smoothness of errors
AMG fundamental concept: smooth error means a small residual, ||r|| << ||e||

How are Prolongation and Restriction done Prolongation is based on smooth error and strong connections Common practice: I

AMG Prolongation (2)

AMG Prolongation (3) Restriction :

Summary
Multigrid is a multilevel iterative method.
Advantage: it is scalable.
If no geometric grid is available, try the algebraic multigrid (AMG) method.

The landscape of solvers
(figure: chart with axes Direct (A = LU) vs. Iterative (y' = Ay), and Nonsymmetric vs. Symmetric positive definite)
Toward direct methods: more robust; toward iterative methods: less storage (if sparse)
Toward symmetric positive definite: more robust; toward nonsymmetric: more general

References
G.H. Golub and C.F. Van Loan, Matrix Computations, Third Edition, Johns Hopkins University Press, 1996.
Y. Saad, Iterative Methods for Sparse Linear Systems, Second Edition, SIAM, 2003.