
ECE 552 Numerical Circuit Analysis Chapter Five RELAXATION OR ITERATIVE TECHNIQUES FOR THE SOLUTION OF LINEAR EQUATIONS Copyright © I. Hajj 2012 All rights reserved

Vector Norms
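The norm definitions on this slide are not in the transcript. As a reminder, the commonly used vector norms on R^n (presumably the ones shown) are

\[
\|x\|_1 = \sum_{i=1}^{n} |x_i|, \qquad
\|x\|_2 = \Big(\sum_{i=1}^{n} |x_i|^2\Big)^{1/2}, \qquad
\|x\|_\infty = \max_{1 \le i \le n} |x_i| .
\]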

Properties of Vector Norms
1. ||x|| > 0 for all x ≠ 0.
2. ||x|| = 0 iff x = 0.
3. ||αx|| = |α| ||x|| for any scalar α.
4. ||x + y|| ≤ ||x|| + ||y|| for any two vectors x and y.

Matrix Norms
Given y = Ax, the 'induced' norm of A is ||A|| = max over x ≠ 0 of ||Ax|| / ||x||.
The norm of a matrix measures the maximum “stretching” the matrix does to any vector, in the given vector norm.
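The particular induced norms used later in the chapter (maximum column sum, maximum row sum, and the spectral norm) are standard; for reference:

\[
\|A\|_1 = \max_j \sum_i |a_{ij}|, \qquad
\|A\|_\infty = \max_i \sum_j |a_{ij}|, \qquad
\|A\|_2 = \sqrt{\rho(A^T A)} .
\]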

Properties of Matrix Norms

Relaxation or Iterative Methods of Solving Ax = b
Aim: Generate a sequence of vectors x^(0), x^(1), ..., x^(t) that will "hopefully" converge to the solution x* = A⁻¹b (without finding A⁻¹ or the LU factors of A; only A itself is used).
Gauss-Jacobi
Gauss-Seidel

Point Gauss-Jacobi
Ax = b. Guess a solution x^(0), then iterate
x_i^(k+1) = (1/a_ii) ( b_i − Σ_{j≠i} a_ij x_j^(k) ),   i = 1, ..., n
Repeat until ||x^(k+1) − x^(k)|| < ε. (The n updates can be done in parallel.)
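A minimal Python/NumPy sketch of the point Gauss-Jacobi iteration described above (the function name, tolerance, and iteration cap are illustrative choices, not part of the slides):

```python
import numpy as np

def gauss_jacobi(A, b, x0=None, tol=1e-10, max_iter=1000):
    """Point Gauss-Jacobi: x_i^(k+1) = (b_i - sum_{j!=i} a_ij x_j^(k)) / a_ii."""
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.astype(float)
    D = np.diag(A)                      # diagonal entries a_ii
    R = A - np.diagflat(D)              # off-diagonal part L + U
    for _ in range(max_iter):
        x_new = (b - R @ x) / D         # all components updated "in parallel"
        if np.linalg.norm(x_new - x, ord=np.inf) < tol:
            return x_new
        x = x_new
    return x
```

For a strictly diagonally dominant A, the iterates converge to A⁻¹b from any starting point x0, as stated in the convergence theorem later in the chapter.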

Block Gauss-Jacobi
Start with an initial guess x^(0). Then solve, for i = 1, 2, ..., p,
A_ii x_i^(k+1) = b_i − Σ_{j≠i} A_ij x_j^(k)
Repeat until convergence.

Point Forward Gauss-Seidel
k = 0, initial guess x^(0). For i = 1, 2, ..., n compute
x_i^(k+1) = (1/a_ii) ( b_i − Σ_{j<i} a_ij x_j^(k+1) − Σ_{j>i} a_ij x_j^(k) )
Repeat until ||x^(k+1) − x^(k)|| < ε.
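A corresponding Python sketch of the point forward Gauss-Seidel sweep (names and defaults again illustrative), which reuses the freshly updated components within the same sweep:

```python
import numpy as np

def gauss_seidel(A, b, x0=None, tol=1e-10, max_iter=1000):
    """Point forward Gauss-Seidel: new values x_j^(k+1), j < i, are used immediately."""
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.astype(float)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # entries 0..i-1 already hold iterate k+1, entries i+1..n-1 still hold iterate k
            s = A[i, :i] @ x[:i] + A[i, i+1:] @ x[i+1:]
            x[i] = (b[i] - s) / A[i, i]
        if np.linalg.norm(x - x_old, ord=np.inf) < tol:
            return x
    return x
```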

Point Backward Gauss-Seidel: same as the forward sweep, but the unknowns are updated in the order i = n, n−1, ..., 1.

Block Forward Gauss-Seidel
Initial guess x^(0). Solve the block systems for i = 1, 2, ..., p:
A_ii x_i^(k+1) = b_i − Σ_{j<i} A_ij x_j^(k+1) − Σ_{j>i} A_ij x_j^(k)
Repeat until ||x^(k+1) − x^(k)|| < ε.

Block Backward Gauss-Seidel
Initial guess x^(0). Solve the block systems in reverse order, i = p, p−1, ..., 1:
A_ii x_i^(k+1) = b_i − Σ_{j>i} A_ij x_j^(k+1) − Σ_{j<i} A_ij x_j^(k)
Repeat until ||x^(k+1) − x^(k)|| < ε.

Symmetrical Gauss-Seidel Method A symmetrical G-S method performs a forward (point or block) G-S iteration followed by a backward G-S iteration

G-J and G-S for bordered-block-diagonal matrices

G-J: G-S:

Matrix Splitting
More formally, given Ax = b, let A = L + D + U (matrix splitting), where
L is strictly lower triangular,
U is strictly upper triangular,
D is a point- or block-diagonal nonsingular matrix.

Gauss-Jacobi
A = L + D + U. Solve
D x^(k+1) = b − (L + U) x^(k)
or
x^(k+1) = D⁻¹ b − D⁻¹(L + U) x^(k) = D⁻¹ b − M_GJ x^(k)
where M_GJ = D⁻¹(L + U) is called the Gauss-Jacobi companion matrix.

Gauss-Seidel
A = L + D + U. Solve
(L + D) x^(k+1) = b − U x^(k)
or
x^(k+1) = (L + D)⁻¹ b − (L + D)⁻¹ U x^(k) = (L + D)⁻¹ b − M_GS x^(k)
where M_GS = (L + D)⁻¹ U is called the Gauss-Seidel companion matrix.
If D is diagonal, the methods are referred to as point Gauss-Jacobi and point Gauss-Seidel.
If D is block-diagonal, they are referred to as block Gauss-Jacobi and block Gauss-Seidel.

Successive Over-Relaxation (SOR) (to accelerate convergence)
Ax = b  ⟹  ωAx = ωb, where ω is a scalar
ω(L + D + U) x = ωb
ω(L + D + U) x + Dx = ωb + Dx
ωLx + Dx = ωb + Dx − ωDx − ωUx
(ωL + D) x = ωb + [(1 − ω)D − ωU] x
As an iteration:
(ωL + D) x^(k+1) = ωb + [(1 − ω)D − ωU] x^(k)
M_SOR = (ωL + D)⁻¹ [(1 − ω)D − ωU]

If D is diagonal, then component-wise
x_i^(k+1) = (1 − ω) x_i^(k) + (ω / a_ii) ( b_i − Σ_{j<i} a_ij x_j^(k+1) − Σ_{j>i} a_ij x_j^(k) )
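A Python sketch of the point SOR sweep above (hypothetical function name and defaults; ω = 1 reduces it to Gauss-Seidel):

```python
import numpy as np

def sor(A, b, omega=1.5, x0=None, tol=1e-10, max_iter=1000):
    """Point SOR: blend the old value x_i^(k) with the Gauss-Seidel update using weight omega."""
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.astype(float)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            s = A[i, :i] @ x[:i] + A[i, i+1:] @ x[i+1:]
            x_gs = (b[i] - s) / A[i, i]                # plain Gauss-Seidel update
            x[i] = (1 - omega) * x_old[i] + omega * x_gs
        if np.linalg.norm(x - x_old, ord=np.inf) < tol:
            return x
    return x
```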

Convergence Theorem
The necessary and sufficient condition for the G-J and G-S iterates to converge to the solution for any initial guess is that the spectral radii of M_GJ and M_GS are strictly less than 1, i.e., all eigenvalues of M_GJ and M_GS lie inside the unit circle.
Definition: spectral radius of a matrix M: ρ(M) = max_i |λ_i(M)|
G-J convergence: ρ(M_GJ) < 1
G-S convergence: ρ(M_GS) < 1
SOR convergence: ρ(M_SOR) < 1

Eigenvalues and Eigenvectors
Let M ∈ R^{n×n}. λ ∈ C is an eigenvalue of M if there exists u ∈ C^n, u ≠ 0, such that Mu = λu, i.e., (M − λI)u = 0; u is an eigenvector.
The eigenvalues can be found by computing the roots of the characteristic polynomial of M:
φ(λ) = det(λI − M) = 0
φ(λ) is a polynomial of degree n with real coefficients. It has n (possibly complex) roots.

Eigenvalues and Eigenvectors (cont.)
1. If λ ∈ R, then u ∈ R^n.
2. If λ ∈ C, then u ∈ C^n, and the complex conjugate λ* is also an eigenvalue, with eigenvector u*.
3. If A = A^T (symmetric matrix), then all λ's are real and all u ∈ R^n.
4. The spectral radius of M is ρ(M) = max_i |λ_i|.
5. ρ(M) ≤ ||M||, i.e., 0 ≤ ρ(M) ≤ ||M||.
6. For symmetric matrices, ρ(M) = ||M||_2.

Eigenvalues and Eigenvectors (cont.)
Lemma 1: If ρ(M) < 1, then lim_{k→∞} M^k = Φ_{n×n}, where M^k = M·M·⋯·M (k factors) and Φ_{n×n} is the n × n zero matrix.
Lemma 2: If ρ(M) < 1, then (I − M)⁻¹ exists, and (I − M)⁻¹ = I + M + M² + ⋯
Scalar case a: ρ(a) = |a|; if |a| < 1, then lim_{k→∞} a^k = 0 and 1/(1 − a) = (1 − a)⁻¹ = 1 + a + a² + ⋯

Convergence Theorem
Given x = Mx + c with solution x*, then (I − M)⁻¹ exists and x* = (I − M)⁻¹ c.
Let x^(k+1) = M x^(k) + c. Then lim_{k→∞} x^(k) = x* iff ρ(M) < 1.
Proof:
x^(1) = M x^(0) + c
x^(2) = M x^(1) + c = M(M x^(0) + c) + c = M² x^(0) + Mc + c
⋮
x^(k) = M^k x^(0) + (I + M + M² + ⋯ + M^(k−1)) c
Since ρ(M) < 1, lim_{k→∞} M^k = Φ and (I + M + M² + ⋯ + M^(k−1)) → (I − M)⁻¹, so x^(k) → (I − M)⁻¹ c = x* as k → ∞.

Given: Ax = b, A = L + D + U
G-J:  x^(k+1) = D⁻¹ b − D⁻¹(L + U) x^(k),   M_GJ = D⁻¹(L + U)
G-S:  x^(k+1) = (L + D)⁻¹ b − (L + D)⁻¹ U x^(k),   M_GS = (L + D)⁻¹ U
SOR:  (ωL + D) x^(k+1) = ωb + [(1 − ω)D − ωU] x^(k),   M_SOR = (ωL + D)⁻¹ [(1 − ω)D − ωU]
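A small Python check of the convergence condition above: build the companion matrices from the point splitting A = L + D + U and compare their spectral radii to 1 (a sketch; forming M explicitly is only practical for small matrices, and the example matrix and ω are illustrative):

```python
import numpy as np

def companion_matrices(A, omega=1.2):
    """Return M_GJ, M_GS, M_SOR for the point splitting A = L + D + U."""
    D = np.diag(np.diag(A))
    L = np.tril(A, k=-1)                      # strictly lower triangular part
    U = np.triu(A, k=1)                       # strictly upper triangular part
    M_gj = np.linalg.solve(D, L + U)
    M_gs = np.linalg.solve(L + D, U)
    M_sor = np.linalg.solve(omega * L + D, (1 - omega) * D - omega * U)
    return M_gj, M_gs, M_sor

def spectral_radius(M):
    return max(abs(np.linalg.eigvals(M)))

A = np.array([[4.0, 1.0, 1.0],
              [1.0, 5.0, 2.0],
              [1.0, 2.0, 6.0]])               # strictly diagonally dominant example
for name, M in zip(("G-J", "G-S", "SOR"), companion_matrices(A)):
    print(name, spectral_radius(M) < 1.0)     # True => that iteration converges
```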

Convergence
Necessary and sufficient condition for convergence: ρ(M_GJ) < 1, ρ(M_GS) < 1.
However, ρ(M) ≤ ||M||.
Sufficient (but not necessary) condition for convergence: if ||M|| < 1 for some induced norm, then x^(k+1) = M x^(k) + c converges to the solution x*.
Proof: ρ(M) ≤ ||M|| < 1.
In practice, use ||M||_1 (maximum column sum) or ||M||_∞ (maximum row sum).

Convergence test using the original matrix A (sufficient conditions)
Given Ax = b.
Regular splitting: A = M − N, with M nonsingular, M⁻¹ ≥ 0, N ≥ 0.
M-matrix: A nonsingular, a_ii > 0, a_ij ≤ 0 for i ≠ j  ⟹  A⁻¹ ≥ 0.
The admittance matrix of a linear resistive circuit with no controlled sources and no voltage sources is an M-matrix.
If A is an M-matrix, then the G-J and G-S iterative methods converge for any initial point x^(0).

Convergence test using the original matrix A (sufficient conditions)
A is diagonally dominant (d.d.) if |a_ii| ≥ Σ_{j≠i} |a_ij| for all i.
A is strictly diagonally dominant (s.d.d.) if |a_ii| > Σ_{j≠i} |a_ij| for all i.
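A one-look Python check of strict diagonal dominance, matching the row-wise definition above (illustrative helper, not from the slides):

```python
import numpy as np

def is_strictly_diagonally_dominant(A):
    """True if |a_ii| > sum_{j != i} |a_ij| for every row i."""
    A = np.abs(np.asarray(A, dtype=float))
    diag = np.diag(A)
    off_diag_sums = A.sum(axis=1) - diag
    return bool(np.all(diag > off_diag_sums))

A = np.array([[4.0, 1.0, 1.0],
              [1.0, 5.0, 2.0],
              [1.0, 2.0, 6.0]])
print(is_strictly_diagonally_dominant(A))   # True => G-J and G-S converge for this A
```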

Convergence test using the original matrix A (sufficient conditions)
Irreducible: let G(A) be the directed graph of A (an edge from i to j whenever a_ij ≠ 0). If G(A) is strongly connected, then A is irreducible.
Strongly connected: for every pair of distinct vertices i and j in G(A), there exists a directed path from i to j and from j to i.
(The slide shows an example graph that is not strongly connected.)
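Irreducibility can be checked programmatically from the sparsity pattern; a sketch using SciPy's strongly connected components routine (assuming SciPy is available; the helper name and example matrix are illustrative):

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def is_irreducible(A):
    """A is irreducible iff the directed graph of its nonzero pattern is strongly connected."""
    graph = csr_matrix((np.abs(A) > 0).astype(int))
    n_components, _ = connected_components(graph, directed=True, connection='strong')
    return n_components == 1

A = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  2.0]])
print(is_irreducible(A))   # True: the tridiagonal pattern is strongly connected
```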

Convergence test using the original matrix A (sufficient conditions)
If A is reducible, then A can be reordered to have leading top right-hand zeros (a block lower-triangular form):

Convergence test using the original matrix A (sufficient conditions)
A is irreducibly diagonally dominant (i.d.d.) if it is irreducible and diagonally dominant, with at least one row strictly diagonally dominant.

Theorem
If A is s.d.d. (it does not have to be irreducible) or i.d.d., then ρ(M_GJ) < 1 and ρ(M_GS) < 1, and point G-J, block G-J, point G-S, and block G-S converge to the solution of Ax = b.
Remark: Diagonal dominance and, in general, the convergence of G-J and G-S depend on the matrix ordering. Convergence of block G-J and block G-S also depends on the matrix partitioning.

Convergence test using the original matrix A (sufficient conditions)
A is positive definite if u^T A u > 0 for all vectors u ≠ 0.
A symmetric positive definite matrix has real, positive eigenvalues.
If A is symmetric and strictly diagonally dominant with positive diagonal elements, then A is positive definite.
If A is symmetric and positive definite, then G-S converges; G-J also converges if, in addition, 2D − A is positive definite.

Convergence test for SOR methods
For a few structured problems, the optimal ω is determined by minimizing ρ(M⁻¹N), where M = (ωL + D) and N = [(1 − ω)D − ωU].
In general, the optimal ω may be expensive to find; it is usually determined from “experience” in solving a certain class of problems.
If A is symmetric and positive definite, then the SOR method
(ωL + D) x^(k+1) = ωb + [(1 − ω)D − ωU] x^(k)
converges for any initial guess and for 0 < ω < 2.

Example Ax = b 1. 2.

Example (cont.)

Example Ax = b

Convergence checks (cont.)

Block G-J and Block G-S (Examples)

Block G-J and Block G-S (Examples) (cont.)
How to check whether the roots of a polynomial lie within the unit circle without finding the roots (related to the stability of linear time-invariant discrete systems):
Use the bilinear transformation and the Routh-Hurwitz (R-H) test.

For the bilinear transformation, see D. Poularikas and S. Seely, Signals and Systems, PWS Publishers, Boston, MA, 1985, pp.
For the Routh-Hurwitz test, see E. W. Kamen, Introduction to Signals and Systems, 2nd Edition, Macmillan Pub. Co., New York, 1990, pp.

Example 1
Given: 4x² + 5x + 3 = 0. Are all the roots within the unit circle?
(* For a 2nd-order polynomial, if all coefficients are of the same sign and none is zero, then all roots are in the left half-plane.)
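The worked steps are not in the transcript; a sketch of how the check presumably goes, using the bilinear substitution x = (s+1)/(s−1), which maps the interior of the unit circle in x onto the left half of the s-plane:

\[
4\left(\frac{s+1}{s-1}\right)^{2} + 5\left(\frac{s+1}{s-1}\right) + 3 = 0
\;\Longrightarrow\;
4(s+1)^{2} + 5(s+1)(s-1) + 3(s-1)^{2} = 12s^{2} + 2s + 2 = 0 .
\]

All coefficients of the transformed polynomial are positive and nonzero, so (by the 2nd-order rule above) both roots lie in the left half-plane, and therefore both roots of the original polynomial lie inside the unit circle.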

Example 2
Given: 4x³ − 4x² − 7x − 3 = 0. Are all the roots within the unit circle?
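Again a sketch with the same (assumed) substitution x = (s+1)/(s−1):

\[
4(s+1)^{3} - 4(s+1)^{2}(s-1) - 7(s+1)(s-1)^{2} - 3(s-1)^{3}
= -10s^{3} + 24s^{2} + 14s + 4 = 0 .
\]

The coefficients of the transformed polynomial do not all have the same sign, so the Routh-Hurwitz necessary condition fails: not all roots lie in the left half-plane, and hence not all roots of the original polynomial lie inside the unit circle. (Numerically, the original cubic has a real root slightly greater than 2.)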

Another Example R-H Test

SUMMARY
To check the convergence of solving Ax = b by the G-J or G-S iterative methods:
Check if A is s.d.d. or i.d.d. If it is ⟹ convergence.
If A is reducible, then reorder A to have leading top right-hand zeros (block lower-triangular form):

Remark
The matrix representing the linearized circuit equations of a digital combinational circuit (with no feedback), with “simple” transistor models (no gate-to-drain or gate-to-source capacitance), is reducible and can be ordered into lower block-triangular form.

For each irreducible submatrix (if it is to be solved by G-J or G-S), construct the companion matrix M (if the dimension of M is small), where
M_GJ = D⁻¹(L + U)
M_GS = (L + D)⁻¹ U
Find the eigenvalues of M: the roots of φ(λ) = det(λI − M) = 0.
If the order of φ(λ) is three or more and it is hard to factorize, check whether the roots of φ(λ) are strictly inside the unit circle in the complex plane by applying the bilinear transformation and the Routh-Hurwitz test to the transformed equation.

Remark
Finding the eigenvalues, or checking whether they are within the unit circle, is very expensive for large matrices, so it is rarely done. In practice, convergence is monitored by checking whether the norm of the difference between successive iterates keeps decreasing:
||x^(k+1) − x^(k)|| < ||x^(k) − x^(k−1)|| < ||x^(k−1) − x^(k−2)|| < ⋯

Steepest Descent Method
Given Ax = b, with A symmetric and positive definite.
Residual (error) vector at the iterate x^(k): r^(k) = b − Ax^(k).
Define the quadratic function φ(x) = ½ xᵀAx − bᵀx; its unique minimizer is the solution x* = A⁻¹b, and −∇φ(x) = b − Ax = r.

Steepest Descent Method

Steepest Descent Algorithm
x^(0) = initial guess
r^(0) = b − A x^(0)
k = 0
while r^(k) ≠ 0
    k = k + 1
    α_k = (r^(k−1)ᵀ r^(k−1)) / (r^(k−1)ᵀ A r^(k−1))
    x^(k) = x^(k−1) + α_k r^(k−1)
    r^(k) = b − A x^(k)
end
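A runnable Python version of the steepest descent loop above (the residual tolerance and iteration cap are illustrative additions, since exact r = 0 is never reached in floating point):

```python
import numpy as np

def steepest_descent(A, b, x0=None, tol=1e-10, max_iter=10000):
    """Steepest descent for Ax = b with A symmetric positive definite."""
    x = np.zeros_like(b, dtype=float) if x0 is None else x0.astype(float)
    r = b - A @ x                                   # residual = -gradient of phi
    for _ in range(max_iter):
        if np.linalg.norm(r) < tol:
            break
        alpha = (r @ r) / (r @ (A @ r))             # exact line search along r
        x = x + alpha * r
        r = b - A @ x
    return x
```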

Steepest Descent is slow
It is better to choose a search direction p_k, not necessarily r^(k−1), such that φ(x^(k−1) + α_k p_k) is minimized along p_k. For progress to be made, p_k should not be orthogonal to the residual r^(k−1): if p_k is orthogonal to r^(k−1), then p_kᵀ r^(k−1) = 0 and φ cannot be decreased along p_k.

Algorithm
x^(0) = initial guess
r^(0) = b − A x^(0)
k = 0
while r^(k) ≠ 0
    k = k + 1
    choose a direction p_k such that p_kᵀ r^(k−1) ≠ 0
    x^(k) = x^(k−1) + α_k p_k
    r^(k) = b − A x^(k)
end

The search directions are linearly independent, and x^(k) solves the problem
min φ(x)  over  x ∈ x^(0) + span{p_1, p_2, …, p_k}
where x^(k) = x^(0) + α_1 p_1 + α_2 p_2 + … + α_k p_k.
Convergence is guaranteed in at most n steps.

Conjugate Gradient Algorithm
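The CG pseudocode on this slide is not in the transcript; the following Python sketch implements the standard conjugate gradient method for symmetric positive definite A, which is presumably what the slide shows (variable names and defaults are the usual textbook choices, not taken from the slide):

```python
import numpy as np

def conjugate_gradient(A, b, x0=None, tol=1e-10, max_iter=None):
    """Standard CG for Ax = b, A symmetric positive definite."""
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.astype(float)
    r = b - A @ x                       # residual
    p = r.copy()                        # first search direction = steepest descent direction
    rs_old = r @ r
    for _ in range(max_iter or n):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)       # exact line search along p
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p   # next direction, A-conjugate to the previous ones
        rs_old = rs_new
    return x
```

In exact arithmetic the directions p_1, ..., p_k are A-conjugate and linearly independent, so, as stated above, convergence is reached in at most n steps.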

Other Iterative Methods
Projection methods
Incomplete LU factorization (ILU)
Multigrid methods