
1 ECE 552 Numerical Circuit Analysis Chapter Five RELAXATION OR ITERATIVE TECHNIQUES FOR THE SOLUTION OF LINEAR EQUATIONS Copyright © I. Hajj 2012 All rights reserved

2 Vector Norms
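The norm definitions on this slide are images in the transcript; for reference, the standard vector norms on R^n are:
||x||_1 = Σ_i |x_i|
||x||_2 = (Σ_i x_i^2)^{1/2}
||x||_∞ = max_i |x_i|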

3 Properties of Vector Norms
1. ||x|| > 0 for all x ≠ 0.
2. ||x|| = 0 iff x = 0.
3. ||αx|| = |α| ||x|| for any scalar α.
4. ||x + y|| ≤ ||x|| + ||y|| for any two vectors x and y.

4 Matrix Norms
Given Ax = y, the 'induced' norm of A is
||A|| = max_{x ≠ 0} ||Ax|| / ||x||
The norm of a matrix measures the maximum “stretching” the matrix does to any vector, as measured in the given vector norm.
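For reference, the induced norms used later in this chapter have standard closed forms:
||A||_1 = max_j Σ_i |a_ij|   (maximum absolute column sum)
||A||_∞ = max_i Σ_j |a_ij|   (maximum absolute row sum)
||A||_2 = [ρ(A^T A)]^{1/2}   (spectral norm)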

5 Properties of Matrix Norms
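The property list on this slide is an image in the transcript; as a sketch, the standard properties of an induced matrix norm are:
1. ||A|| ≥ 0, and ||A|| = 0 iff A = 0.
2. ||αA|| = |α| ||A|| for any scalar α.
3. ||A + B|| ≤ ||A|| + ||B||.
4. ||Ax|| ≤ ||A|| ||x|| (consistency with the vector norm).
5. ||AB|| ≤ ||A|| ||B|| (submultiplicativity).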

6 Relaxation or Iterative Methods of Solving Ax = b
Aim: Generate a sequence of vectors x^0, x^1, ..., x^t that will "hopefully" converge to the solution x* = A^{-1}b, without forming A^{-1} or the LU factors of A (only A itself is used).
Gauss-Jacobi
Gauss-Seidel

7 Point Gauss-Jacobi
Ax = b. Guess a solution x^(0), then iterate, for i = 1, ..., n:
x_i^{k+1} = (1/a_ii) (b_i − Σ_{j≠i} a_ij x_j^k)
Repeat until ||x^{k+1} − x^k|| < ε. (The updates are independent of one another, so they can be done in parallel.)
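A minimal NumPy sketch of this update (the function name, tolerance, and iteration cap are illustrative assumptions, not from the slides):

import numpy as np

def gauss_jacobi(A, b, x0, eps=1e-10, max_iter=1000):
    # Point Gauss-Jacobi: every component of x^{k+1} is computed
    # from x^k only, so the n updates are independent (parallelizable).
    d = np.diag(A)
    x = x0.astype(float).copy()
    for _ in range(max_iter):
        # x_i <- (b_i - sum_{j != i} a_ij x_j) / a_ii
        x_new = (b - (A @ x - d * x)) / d
        if np.linalg.norm(x_new - x, np.inf) < eps:
            return x_new
        x = x_new
    return x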

8 Block Gauss-Jacobi
Partition A into blocks with square nonsingular diagonal blocks A_ii. Start with initial guess x^(0), then solve, for i = 1, 2, ..., p:
A_ii x_i^{k+1} = b_i − Σ_{j≠i} A_ij x_j^k
Repeat until convergence.

9 Point Forward Gauss-Seidel
k = 0, initial guess x^0. For i = 1, 2, ..., n:
x_i^{k+1} = (1/a_ii) (b_i − Σ_{j<i} a_ij x_j^{k+1} − Σ_{j>i} a_ij x_j^k)
Repeat until ||x^{k+1} − x^k|| < ε
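A corresponding NumPy sketch of the forward sweep (names and stopping test are again illustrative):

import numpy as np

def gauss_seidel(A, b, x0, eps=1e-10, max_iter=1000):
    # Point forward Gauss-Seidel: sweep i = 1..n, reusing the
    # components already updated in the current iteration.
    n = len(b)
    x = x0.astype(float).copy()
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            s = A[i, :i] @ x[:i] + A[i, i+1:] @ x_old[i+1:]
            x[i] = (b[i] - s) / A[i, i]
        if np.linalg.norm(x - x_old, np.inf) < eps:
            break
    return x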

10 Point Backward Gauss-Seidel
Same as the forward sweep, but with i taken in the order n, n−1, ..., 1:
x_i^{k+1} = (1/a_ii) (b_i − Σ_{j>i} a_ij x_j^{k+1} − Σ_{j<i} a_ij x_j^k)

11 Block Forward Gauss-Seidel
Initial guess x^(0). Solve, for i = 1, 2, ..., p:
A_ii x_i^{k+1} = b_i − Σ_{j<i} A_ij x_j^{k+1} − Σ_{j>i} A_ij x_j^k
Repeat until ||x^{k+1} − x^k|| < ε

12 Block Backward Gauss-Seidel
Initial guess x^(0). Solve, for i = p, p−1, ..., 1:
A_ii x_i^{k+1} = b_i − Σ_{j>i} A_ij x_j^{k+1} − Σ_{j<i} A_ij x_j^k
Repeat until ||x^{k+1} − x^k|| < ε

13 Symmetrical Gauss-Seidel Method A symmetrical G-S method performs a forward (point or block) G-S iteration followed by a backward G-S iteration

14 G-J and G-S for bordered-block-diagonal matrices

15 (The slide gives the corresponding G-J and G-S update equations for the bordered-block-diagonal form; they appear only as images in the transcript.)

16 Matrix Splitting
More formally, given Ax = b, let A = L + D + U (matrix splitting), where
L is strictly lower triangular,
U is strictly upper triangular, and
D is a point- or block-diagonal nonsingular matrix.

17 Gauss-Jacobi
A = L + D + U. Solve D x^{k+1} = b − (L + U) x^k, or
x^{k+1} = D^{-1} b − D^{-1}(L + U) x^k = D^{-1} b − M_GJ x^k
where M_GJ = D^{-1}(L + U) is called the Gauss-Jacobi companion matrix.

18 Gauss-Seidel
A = L + D + U. Solve (L + D) x^{k+1} = b − U x^k, or
x^{k+1} = (L + D)^{-1} b − (L + D)^{-1} U x^k = (L + D)^{-1} b − M_GS x^k
where M_GS = (L + D)^{-1} U is called the Gauss-Seidel companion matrix.
If D is diagonal, then the methods are referred to as point Gauss-Jacobi and point Gauss-Seidel. If D is block-diagonal, then they are referred to as block Gauss-Jacobi and block Gauss-Seidel.

19 Successive Over-Relaxation (SOR) (to accelerate convergence)
Ax = b ⇒ ωAx = ωb, ω a scalar
ω(L + D + U)x = ωb
ω(L + D + U)x + Dx = ωb + Dx
(ωL + D)x = ωb + Dx − ωDx − ωUx
(ωL + D)x = ωb + [(1 − ω)D − ωU]x
M_SOR = (ωL + D)^{-1} [(1 − ω)D − ωU]

20 If D is diagonal, then the update reduces to the point-SOR form
x_i^{k+1} = (1 − ω) x_i^k + (ω/a_ii) (b_i − Σ_{j<i} a_ij x_j^{k+1} − Σ_{j>i} a_ij x_j^k)

21 Convergence Theorem
The necessary and sufficient condition for the G-J and G-S iterates to converge to the solution for any initial guess is that the spectral radii of M_GJ and M_GS are strictly less than 1, i.e., all eigenvalues of M_GJ and M_GS lie inside the unit circle.
Definition: the spectral radius of a matrix M is ρ(M) = max_i |λ_i(M)|.
G-J convergence: ρ(M_GJ) < 1
G-S convergence: ρ(M_GS) < 1
SOR convergence: ρ(M_SOR) < 1

22 Eigenvalues and Eigenvectors
Let M ∈ R^{n×n}. λ ∈ C is an eigenvalue of M if there exists u ∈ C^n, u ≠ 0, such that Mu = λu, i.e., (M − λI)u = 0; u is an eigenvector.
The eigenvalues can be found by computing the roots of the characteristic polynomial of M: φ(λ) = det(λI − M) = 0.
φ(λ) is a polynomial of degree n with real coefficients; it has n (possibly complex) roots.

23 Eigenvalues and Eigenvectors (cont.)
1. If λ ∈ R, then u ∈ R^n.
2. If λ ∈ C, then u ∈ C^n, and the complex conjugate λ* is also an eigenvalue, with eigenvector u*.
3. If A = A^T (symmetric matrix), then all λ's are real, and all u ∈ R^n.
4. The spectral radius of M is ρ(M) = max_i |λ_i|.
5. ρ(M) ≤ ||M||, i.e., 0 ≤ ρ(M) ≤ ||M||.
6. For symmetric matrices, ρ(M) = ||M||_2.

24 Eigenvalues and Eigenvectors (cont.)
Lemma 1: If ρ(M) < 1, then lim_{k→∞} M^k = 0_{n×n} (the n×n zero matrix), where M^k = M·M···M (k factors).
Lemma 2: If ρ(M) < 1, then (I − M)^{-1} exists, and (I − M)^{-1} = I + M + M^2 + ...
Scalar case a: ρ(a) = |a|, so if |a| < 1, then lim_{k→∞} a^k = 0 and 1/(1 − a) = (1 − a)^{-1} = 1 + a + a^2 + ...

25 Convergence Theorem
Given x = Mx + c with solution x*, then (I − M)^{-1} exists and x* = (I − M)^{-1} c.
Let x^{k+1} = M x^k + c; then lim_{k→∞} x^k = x* iff ρ(M) < 1.
Proof:
x^(1) = M x^(0) + c
x^(2) = M x^(1) + c = M(M x^(0) + c) + c = M^2 x^(0) + Mc + c
...
x^(k) = M^k x^(0) + (I + M + M^2 + ... + M^{k−1}) c
Since ρ(M) < 1, lim M^k = 0, and (I + M + M^2 + ... + M^{k−1}) → (I − M)^{-1}, so x^k → (I − M)^{-1} c = x* as k → ∞.

26 Given Ax = b, A = L + D + U:
G-J: x^{k+1} = D^{-1} b − D^{-1}(L + U) x^k,  M_GJ = D^{-1}(L + U)
G-S: x^{k+1} = (L + D)^{-1} b − (L + D)^{-1} U x^k,  M_GS = (L + D)^{-1} U
SOR: (ωL + D) x^{k+1} = ωb + [(1 − ω)D − ωU] x^k,  M_SOR = (ωL + D)^{-1} [(1 − ω)D − ωU]

27 Convergence
Necessary and sufficient condition for convergence: ρ(M_GJ) < 1, ρ(M_GS) < 1.
However, ρ(M) ≤ ||M||, which gives a sufficient (but not necessary) condition: if ||M|| < 1 for some induced norm, then x^{k+1} = M x^k + c converges to the solution x*.
Proof: ρ(M) ≤ ||M|| < 1.
In practice, use ||M||_1 (maximum column sum) or ||M||_∞ (maximum row sum).
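As an illustration of these checks (the matrix below is a hypothetical s.d.d. example, not the one from the slides), the companion matrices and their norms can be computed directly:

import numpy as np

A = np.array([[4.0, 1.0, 1.0],
              [1.0, 5.0, 2.0],
              [1.0, 2.0, 6.0]])        # strictly diagonally dominant

D = np.diag(np.diag(A))
L = np.tril(A, -1)
U = np.triu(A, 1)

M_gj = np.linalg.solve(D, L + U)       # M_GJ = D^{-1}(L + U)
M_gs = np.linalg.solve(L + D, U)       # M_GS = (L + D)^{-1} U

for name, M in (("G-J", M_gj), ("G-S", M_gs)):
    rho = max(abs(np.linalg.eigvals(M)))            # spectral radius
    print(name, "rho =", rho,
          "||M||_1 =", np.linalg.norm(M, 1),        # max column sum
          "||M||_inf =", np.linalg.norm(M, np.inf)) # max row sum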

28 Convergence test using the original matrix A (sufficient conditions)
Given Ax = b.
Regular splitting: A = M − N, with M nonsingular, M^{-1} ≥ 0, N ≥ 0.
M-matrix: A nonsingular, a_ii > 0, a_ij ≤ 0 for i ≠ j, and A^{-1} ≥ 0.
The admittance matrix of a linear resistive circuit with no controlled sources and no voltage sources is an M-matrix.
If A is an M-matrix, then the G-J and G-S iterative methods converge for any initial point x^0.

29 Convergence test using the original matrix A (sufficient conditions)
A is diagonally dominant (d.d.) if |a_ii| ≥ Σ_{j≠i} |a_ij| for all i.
A is strictly diagonally dominant (s.d.d.) if |a_ii| > Σ_{j≠i} |a_ij| for all i.

30 Convergence test using the original matrix A (sufficient conditions)
Irreducible: Let G(A) be the directed graph of A. If G(A) is strongly connected, then A is irreducible.
Strongly connected: for every pair of distinct vertices i and j in G(A), there exists a directed path from i to j and from j to i.
(The slide's figure shows an example of a graph that is not strongly connected.)

31 Convergence test using the original matrix A (sufficient conditions)
If A is reducible, then A can be reordered to have leading top right-hand zeros, i.e., into block lower triangular form.

32 Convergence test using the original matrix A (sufficient conditions)
A is irreducibly diagonally dominant (i.d.d.) if it is irreducible and diagonally dominant, with at least one row strictly diagonally dominant.

33 Theorem
If A is s.d.d. (it does not have to be irreducible) or i.d.d., then ρ(M_GJ) < 1 and ρ(M_GS) < 1, and point G-J, block G-J, point G-S, and block G-S converge to the solution of Ax = b.
Remark: Diagonal dominance and, in general, the convergence of G-J and G-S depend on the matrix ordering. Convergence of block G-J and block G-S also depends on the matrix partitioning.

34 Convergence test using the original matrix A (sufficient conditions)
A is positive definite if u^T A u > 0 for all vectors u ≠ 0.
A symmetric positive definite matrix has real, positive eigenvalues.
If A is symmetric and strictly diagonally dominant with positive diagonal elements, then A is positive definite.
If A is symmetric and positive definite, then G-S converges (G-J converges if, in addition, 2D − A is positive definite).

35 Convergence test for SOR methods
For a few structured problems, ω is determined by minimizing ρ(M^{-1}N), where M = (ωL + D) and N = (1 − ω)D − ωU.
In general, the optimal ω may be expensive to find; it is usually determined from “experience” in solving a certain class of problems.
If A is symmetric and positive definite, then the SOR method
(ωL + D) x^{k+1} = ωb + [(1 − ω)D − ωU] x^k
converges for any initial guess and for any 0 < ω < 2.
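A small sketch of "scan ω and minimize the spectral radius" (the test matrix and the ω grid are illustrative assumptions):

import numpy as np

def sor_rho(A, w):
    # rho(M_SOR), where M_SOR = (wL + D)^{-1} [(1 - w)D - wU]
    D = np.diag(np.diag(A))
    L = np.tril(A, -1)
    U = np.triu(A, 1)
    M = np.linalg.solve(w * L + D, (1 - w) * D - w * U)
    return max(abs(np.linalg.eigvals(M)))

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
omegas = np.linspace(0.05, 1.95, 39)
w_best = min(omegas, key=lambda w: sor_rho(A, w))
print("best omega on grid:", w_best, "rho:", sor_rho(A, w_best))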

36 Example Ax = b

37 Example (cont.)

38 Example Ax = b

39 Convergence checks (cont.)


43 Block G-J and Block G-S (Examples)

44 Block G-J and Block G-S (Examples) (cont.)
How can one check whether the roots of a polynomial lie within the unit circle without finding the roots (a question related to the stability of linear time-invariant discrete systems)? Use the bilinear transformation and the Routh-Hurwitz (R-H) test.

45 For the bilinear transformation, see A. D. Poularikas and S. Seely, Signals and Systems, PWS Publishers, Boston, MA, 1985, pp. 499-501. For the Routh-Hurwitz test, see E. W. Kamen, Introduction to Signals and Systems, 2nd Edition, Macmillan Publishing Co., New York, 1990, pp. 261-265.

46 Example 1
Given 4x^2 + 5x + 3 = 0, are all the roots within the unit circle?
* For a 2nd-order polynomial, if all coefficients are of the same sign and none is zero, then all roots are in the left half-plane (lhp).
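A sketch of how the bilinear transformation settles Example 1: the substitution x = (1 + s)/(1 − s) maps the interior of the unit circle in the x-plane to the open left half of the s-plane. Substituting into 4x^2 + 5x + 3 = 0 and multiplying through by (1 − s)^2 gives
4(1 + s)^2 + 5(1 + s)(1 − s) + 3(1 − s)^2 = 2s^2 + 2s + 12 = 0
All coefficients of the transformed polynomial are of the same sign and none is zero, so by the 2nd-order rule above its roots lie in the lhp, and therefore all roots of 4x^2 + 5x + 3 = 0 lie inside the unit circle.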

47 Example 2
Given 4x^3 − 4x^2 − 7x − 3 = 0, are all the roots within the unit circle?

48 Another Example: R-H Test

49 SUMMARY
To check the convergence of solving Ax = b by the G-J or G-S iterative methods:
Check whether A is s.d.d. or i.d.d. If it is, convergence is guaranteed.
If A is reducible, then reorder A to have leading top right-hand zeros (block lower triangular form).

50 Remark
The matrix representing the linearized circuit equations of a digital combinational circuit (with no feedback), using “simple” transistor models (no gate-to-drain or gate-to-source capacitance), is reducible and can be ordered into lower block triangular form.

51 For each irreducible submatrix (if it is to be solved by G-J or G-S), construct the companion matrix M (if the dimension of M is small), where
M_GJ = D^{-1}(L + U)
M_GS = (L + D)^{-1} U
Find the eigenvalues of M, i.e., the roots of φ(λ) = det(λI − M) = 0.
If the order of φ(λ) is three or more and it is hard to factor, check whether the roots of φ(λ) are strictly inside the unit circle of the complex plane by applying the bilinear transformation and the Routh-Hurwitz test to the transformed equation.

52 Remark
Finding the eigenvalues, or checking whether they lie within the unit circle, is very expensive for large matrices, so it is rarely done. Convergence can instead be monitored by checking that the norm of the difference between successive iterates keeps decreasing:
||x^{k+1} − x^k|| < ||x^k − x^{k−1}|| < ||x^{k−1} − x^{k−2}|| < ...

53 Steepest Descent Method
Given Ax = b, with A symmetric and positive definite.
Residual (error) vector at the iterate x^k: r^k = b − Ax^k.
Define the functional φ(x) = (1/2) x^T A x − x^T b; its unique minimizer is x* = A^{-1} b, and the residual r^k = −∇φ(x^k) is the direction of steepest descent of φ at x^k.

54 Steepest Descent Method

55 Steepest Descent Algorithm
x^0 = initial guess
r^0 = b − A x^0
k = 0
while r^k ≠ 0
    k = k + 1
    α_k = (r^{k−1})^T r^{k−1} / (r^{k−1})^T A r^{k−1}
    x^k = x^{k−1} + α_k r^{k−1}
    r^k = b − A x^k
end
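A runnable NumPy version of this algorithm (an illustrative tolerance replaces the exact test r^k ≠ 0):

import numpy as np

def steepest_descent(A, b, x0, eps=1e-10, max_iter=10000):
    # Steepest descent for symmetric positive definite A.
    x = x0.astype(float).copy()
    r = b - A @ x
    for _ in range(max_iter):
        if np.linalg.norm(r) < eps:
            break
        alpha = (r @ r) / (r @ (A @ r))   # exact line search along r
        x = x + alpha * r
        r = b - A @ x
    return x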


58 Steepest Descent is slow
It is better to choose a direction p^k, not necessarily r^{k−1}, such that φ(x^{k−1} + α_k p^k) is minimized. For this to make progress, p^k must not be orthogonal to the residual r^{k−1}: if p^k is orthogonal to r^{k−1}, then (p^k)^T r^{k−1} = 0 and no descent along p^k occurs.

59 Algorithm
x^0 = initial guess
r^0 = b − A x^0
k = 0
while r^k ≠ 0
    k = k + 1
    choose a direction p^k such that (p^k)^T r^{k−1} ≠ 0
    α_k = (p^k)^T r^{k−1} / (p^k)^T A p^k   (exact line search along p^k)
    x^k = x^{k−1} + α_k p^k
    r^k = b − A x^k
end

60 The search directions p^1, ..., p^k are linearly independent, and x^k solves the problem min φ(x) over x^0 + span{p^1, p^2, ..., p^k}, where
x^k = x^0 + α_1 p^1 + α_2 p^2 + ... + α_k p^k
Convergence is guaranteed in at most n steps.

61 Conjugate Gradient Algorithm
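The algorithm on this slide is an image in the transcript; as a sketch, the standard conjugate gradient method for symmetric positive definite A is:

import numpy as np

def conjugate_gradient(A, b, x0, eps=1e-10):
    # Each search direction p is A-conjugate to the previous ones,
    # so in exact arithmetic CG converges in at most n steps.
    x = x0.astype(float).copy()
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(len(b)):
        Ap = A @ p
        alpha = rs / (p @ Ap)            # exact line search along p
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < eps:
            break
        p = r + (rs_new / rs) * p        # next A-conjugate direction
        rs = rs_new
    return x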

62 Other Iterative Methods
Projection methods
Incomplete LU factorization (ILU)
Multigrid methods

