ECE 552 Numerical Circuit Analysis


ECE 552 Numerical Circuit Analysis Chapter Three SOLUTION OF LINEAR ALGEBRAIC EQUATIONS Copyright © I. Hajj 2012 All rights reserved

References:
VLACH & SINGHAL, Chap. 2
PILLAGE, ROHRER, and VISWESWARIAH, Chaps. 3 and 7
F. NAJM, Chapter 3
Press, Teukolsky, Vetterling, Flannery, "Numerical Recipes in C," Cambridge University Press, 1995 (check for latest edition)
G. H. Golub & C. F. Van Loan, "Matrix Computations," Johns Hopkins Univ. Press, 3rd Edition, 1996
Y. Saad, "Iterative Methods for Sparse Linear Systems," 2nd Edition, SIAM, 2003

Linear Equation Solution Methods. Consider a set of n linear algebraic equations Ax = b, where A is a real or complex n × n nonsingular matrix. There are many ways of solving Ax = b: I. Direct methods: Cramer's rule, the inverse, Gaussian elimination or LU factorization, sparse matrix techniques. II. Iterative or relaxation methods: Gauss-Jacobi, Gauss-Seidel, projection methods, ...

(1) Cramer's Rule: computationally very expensive. (2) Compute the inverse A^-1; then for every b, x = A^-1 b. Note: the inverse can be obtained by transforming the augmented matrix [A | I] into [I | B]; then A^-1 ≡ B. Also computationally expensive, but much less so than Cramer's rule.

Using the inverse: x = A^-1 b. Computing A^-1 requires n^3 operations (we'll prove this later). The computation of x = A^-1 b requires n^2 operations. Total: n^3 + n^2 operations. Storage: n^2 for A, n^2 for A^-1, n for b, and n for x; total = 2(n^2 + n). An operation here means a multiplication or a division; additions and subtractions are not counted, although there are almost as many of them as multiplications and divisions in solving Ax = b.

Gaussian Elimination or LU Factorization. If A is sparse (i.e., the percentage of nonzero elements in A is small), then A^-1 is, in general, full, and the solution process still requires n^3 + n^2 operations and 2(n^2 + n) storage locations. A more efficient method, in both computation and storage, is Gaussian elimination or LU factorization.

Gaussian Elimination

Final result of elimination: Ux = z, with U upper triangular. Solution of Ux = z by backward substitution:
x_n = z_n
x_{n-1} = z_{n-1} - u_{n-1,n} x_n
...
x_1 = z_1 - u_{12} x_2 - u_{13} x_3 - ... - u_{1n} x_n
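The backward-substitution loop is short enough to state directly; a minimal pure-Python sketch (dense list-of-lists storage, explicit nonzero diagonal; function and variable names are chosen here for illustration):

```python
def backward_substitution(U, z):
    """Solve Ux = z for x, where U is upper triangular with nonzero diagonal."""
    n = len(z)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):          # last row first
        s = z[i]
        for j in range(i + 1, n):           # subtract the known terms u_ij * x_j
            s -= U[i][j] * x[j]
        x[i] = s / U[i][i]                  # divide by the pivot u_ii
    return x

# Example: U = [[2, 1], [0, 4]], z = [5, 8]  ->  x = [1.5, 2.0]
print(backward_substitution([[2.0, 1.0], [0.0, 4.0]], [5.0, 8.0]))
```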

Suppose b changes while A remains unchanged. Question: is it possible to solve for the new x without repeating all the elimination steps, since the U matrix remains the same and only b (and subsequently z) changes? The answer is YES: save the operations that operate on b to generate the new z. How? Save the multipliers in the lower triangular part of the matrix.
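In library terms this is exactly the factor-once, solve-many pattern; a sketch using SciPy's dense LU routines (assuming NumPy/SciPy are available; the matrix and right-hand sides are made up for illustration):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

A = np.array([[4.0, 3.0], [6.0, 3.0]])
lu, piv = lu_factor(A)          # one O(n^3) factorization; L and U packed in `lu`

b1 = np.array([10.0, 12.0])
b2 = np.array([1.0, 0.0])
x1 = lu_solve((lu, piv), b1)    # each new right-hand side costs only O(n^2)
x2 = lu_solve((lu, piv), b2)
print(x1, x2)
```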

More formally

Both L and U are stored in place of A

Summary of Solution Steps for Ax = b:
(1) Factorize A = LU (so LUx = b).
(2) Forward substitution: solve Lz = b, where Ux ≡ z.
(3) Backward substitution: solve Ux = z.
Factorization is always possible when A is nonsingular, provided pivoting is allowed (pivoting will be explained later). The factorization is not unique: there are different ways and different implementations, depending on computer architecture and on memory access. The solution, however, is unique.

Remarks. The matrices L and U can overwrite the entries a_ij of A; there is no need to provide separate storage locations for A and for its factors L and U. There is no need to store the unit diagonal entries of U or L. If only the right-hand-side vector b changes, there is no need to repeat the factorization step; only the forward and backward substitutions need to be performed.

Determinant of A: det A = det(LU) = (det L)(det U). If U has 1's on the diagonal, det A = l_11 × l_22 × ... × l_nn; if L has 1's on the diagonal, det A = u_11 × u_22 × ... × u_nn.
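As a quick check, the determinant can be read off the packed LU factors; a sketch using SciPy (in its convention L has the unit diagonal, and with row pivoting each row interchange flips the sign, which the pivot vector records):

```python
import numpy as np
from scipy.linalg import lu_factor

A = np.array([[4.0, 3.0], [6.0, 3.0]])
lu, piv = lu_factor(A)                     # U sits on and above the diagonal of `lu`
det = np.prod(np.diag(lu))                 # u_11 * u_22 * ... * u_nn
det *= (-1) ** np.sum(piv != np.arange(len(piv)))   # sign from the row interchanges
print(det, np.linalg.det(A))               # both print -6.0
```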

Transpose Systems. The solution of the transpose system A^T x = c can be found using the same LU factors of A: A^T x = (LU)^T x = U^T L^T x = c. Forward substitution: solve U^T z = c (U^T is lower triangular). Backward substitution: solve L^T x = z (L^T is upper triangular).

Remark Transpose systems are used in small-change sensitivity calculations and optimization.
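SciPy's lu_solve exposes this reuse directly through its trans argument; a brief sketch (same factor-once pattern as before, matrix made up for illustration):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

A = np.array([[4.0, 3.0], [6.0, 3.0]])
lu, piv = lu_factor(A)                 # factor A once
c = np.array([1.0, 2.0])
x = lu_solve((lu, piv), c, trans=1)    # trans=1 solves A^T x = c with the same factors
print(np.allclose(A.T @ x, c))         # True
```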

Gauss Algorithm. Continue the elimination until step n; U will have 1's on the diagonal.

Variation => L will have 1's on the diagonal

Gauss Algorithm Factorization. Given A, a nonsingular n × n matrix.
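The slide's algorithm boxes do not survive in this transcript, so here is a minimal in-place sketch of the Doolittle-style variant (L with unit diagonal, no pivoting; pure Python, names chosen for illustration). Both factors overwrite A, as in the Remarks above: the multipliers l_ik go below the diagonal, U on and above it.

```python
def lu_factor_in_place(A):
    """Overwrite A with its LU factors: L (unit diagonal, strictly below the
    diagonal) and U (on and above). No pivoting, so every a_kk must be nonzero."""
    n = len(A)
    for k in range(n):                         # k-th elimination step
        for i in range(k + 1, n):
            A[i][k] /= A[k][k]                 # multiplier l_ik, stored in place
            for j in range(k + 1, n):
                A[i][j] -= A[i][k] * A[k][j]   # update the reduced submatrix
    return A

A = [[4.0, 3.0], [6.0, 3.0]]
print(lu_factor_in_place(A))   # [[4.0, 3.0], [1.5, -1.5]]: l_21 = 1.5, u_22 = -1.5
```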

Forward Substitution: Lz = b, column-by-column. Let b^(1) ≡ b. For k = 1 to n: z_k = b_k^(k) / l_kk, then b_i^(k+1) = b_i^(k) - l_ik z_k for i = k+1, ..., n.
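A pure-Python sketch of that column sweep, assuming a unit-diagonal L (the Doolittle convention of the sketch above, so the division by l_kk drops out; for the Gauss variant it would be restored). Names are illustrative:

```python
def forward_substitution_columnwise(L, b):
    """Solve Lz = b column-by-column; L is lower triangular with unit diagonal."""
    n = len(b)
    z = b[:]                         # work on a copy of b, updated in place
    for k in range(n):
        # z[k] is now final (unit diagonal: no division needed)
        for i in range(k + 1, n):
            z[i] -= L[i][k] * z[k]   # subtract column k's contribution from the rest
    return z

# Example with L = [[1, 0], [1.5, 1]] and b = [10, 12]  ->  z = [10, -3]
print(forward_substitution_columnwise([[1.0, 0.0], [1.5, 1.0]], [10.0, 12.0]))
```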

Backward Substitution: Ux = z, row-by-row. For i = n down to 1: x_i = (z_i - Σ_{j=i+1..n} u_ij x_j) / u_ii.

Backward Substitution: Ux = z, column-by-column. For k = n down to 1: x_k = z_k / u_kk, then z_i ← z_i - u_ik x_k for i = 1, ..., k-1.
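The column-oriented version accesses U by columns, which matters when the matrix is stored column-wise; a sketch with illustrative names and an explicit diagonal:

```python
def backward_substitution_columnwise(U, z):
    """Solve Ux = z column-by-column; U is upper triangular, nonzero diagonal."""
    n = len(z)
    x = z[:]                         # overwrite a copy of z with the solution
    for k in range(n - 1, -1, -1):
        x[k] /= U[k][k]              # x_k is now final
        for i in range(k):
            x[i] -= U[i][k] * x[k]   # remove column k's contribution above the pivot
    return x

# Same system as the row-oriented example: U = [[2, 1], [0, 4]], z = [5, 8] -> [1.5, 2.0]
print(backward_substitution_columnwise([[2.0, 1.0], [0.0, 4.0]], [5.0, 8.0]))
```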

Other Factorization Procedures (Doolittle’s and Crout’s)

Gauss: the entire reduced submatrix A_r is updated at each step.

Doolittle (L has 1's on the diagonal): only the kth subrow and kth subcolumn are updated at each step.

Crout (U has 1's on the diagonal): for k = 1, ..., n, only the kth subrow and kth subcolumn are updated at each step.

Operation Count. Useful identities: Σ_{k=1..n} k = n(n+1)/2 and Σ_{k=1..n} k^2 = n(n+1)(2n+1)/6.

Number of operations in LU factorization (assume L and U are full). Count divisions and multiplications only; the number of additions and subtractions is almost equal to the number of divisions and multiplications. Summing the (n-k) divisions and (n-k)^2 multiplications over the steps k = 1, ..., n-1 gives n(n^2 - 1)/3 ≈ n^3/3 operations.

Forward and Backward Substitutions: each requires about n^2/2 operations, for a total of about n^2 per right-hand side.

Pivoting: (1) for numerical accuracy and stability; (2) for sparsity preservation.

(1) Pivoting for numerical stability. (a) To avoid division by zero. Example: if a_11 = 0, as in A = [[0, 1], [1, 1]], the first elimination step requires a row or column interchange.

Pivoting for numerical stability (cont.). (b) Pivoting also offsets computation errors caused by round-off due to the finite computer word length. WILKINSON: "To minimize round-off errors in Gaussian elimination it is important to avoid growth in the size of a_ij^(k+1) and b_i^(k+1)." That is, the pivot should not be 'too' small.
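The effect is easy to demonstrate in floating point; a sketch contrasting elimination without pivoting against NumPy's pivoted solver on the classic tiny-pivot example (the matrix is made up for illustration):

```python
import numpy as np

# Tiny pivot a_11 = 1e-20: the multiplier l_21 = 1e20 swamps everything it touches.
A = [[1e-20, 1.0], [1.0, 1.0]]
b = [1.0, 2.0]

# Gaussian elimination WITHOUT pivoting, in double precision.
m = A[1][0] / A[0][0]                 # huge multiplier: 1e20
u22 = A[1][1] - m * A[0][1]           # 1 - 1e20 rounds to -1e20
z2 = b[1] - m * b[0]                  # 2 - 1e20 rounds to -1e20
x2 = z2 / u22                         # = 1.0
x1 = (b[0] - A[0][1] * x2) / A[0][0]  # (1 - 1) / 1e-20 = 0.0  -- wrong!
print([x1, x2])                       # [0.0, 1.0]

# A pivoted solve gives the accurate answer, approximately [1, 1].
print(np.linalg.solve(np.array(A), np.array(b)))
```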

Different types of pivoting. (1) Complete pivoting, Ax = b: at the kth step, choose the largest-magnitude element in the entire reduced matrix as pivot, |a_pq^(k)| = max_{i,j ≥ k} |a_ij^(k)|, and bring it to the (k, k) position by row and column interchanges.

Complete pivoting can be used with GAUSS’ algorithm, but not with CROUT’S and DOOLITTLE’S. In complete pivoting the order of the variables and the rhs may change according to the reordering.

Partial Pivoting. (2) Row Pivoting: at the kth step in the factorization process choose |a_pk^(k)| = max_{i ≥ k} |a_ik^(k)|, i.e., choose the largest-magnitude element in column k (on or below the diagonal) as pivot by performing a row interchange.

Different types of pivoting (cont.) In row pivoting the order of the variables remains the same, while the order of the rhs entries changes accordingly. Row pivoting can be used with GAUSS', CROUT'S, and DOOLITTLE'S.
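A compact pure-Python sketch of in-place LU with row (partial) pivoting, recording the interchanges in a pivot vector (illustrative names; compare lu_factor_in_place above):

```python
def lu_factor_partial_pivot(A):
    """In-place LU with row pivoting. Returns the permutation as a list `perm`
    such that row perm[k] of the original A became row k. Raises if A is singular."""
    n = len(A)
    perm = list(range(n))
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(A[i][k]))   # largest pivot in column k
        if A[p][k] == 0.0:
            raise ValueError("matrix is singular")
        if p != k:
            A[k], A[p] = A[p], A[k]                        # row interchange
            perm[k], perm[p] = perm[p], perm[k]
        for i in range(k + 1, n):
            A[i][k] /= A[k][k]                             # multiplier l_ik
            for j in range(k + 1, n):
                A[i][j] -= A[i][k] * A[k][j]
    return perm

A = [[1e-20, 1.0], [1.0, 1.0]]
print(lu_factor_partial_pivot(A), A)   # perm [1, 0]; pivot 1.0 instead of 1e-20
```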

Different types of pivoting (cont.) (3) Column Pivoting: choose the largest-magnitude element in row k, |a_kq^(k)| = max_{j ≥ k} |a_kj^(k)|, as pivot by performing a column interchange.

Different types of pivoting (cont.) In column pivoting the order of the variables changes, while the order of the rhs remains the same. Column pivoting can be used with GAUSS’, CROUT’S, and DOOLITTLE’S.

Different types of pivoting (cont.) (4) Diagonal Pivoting: choose the pivot from among the diagonal elements of the reduced matrix, typically the one of largest magnitude, |a_pp^(k)| = max_{i ≥ k} |a_ii^(k)|, interchanging rows and columns symmetrically.

Different types of pivoting (cont.) Diagonal pivoting requires both row and column interchanges in a symmetrical way. It is used, for example, to preserve symmetry. Both the orders of the variables and the rhs change accordingly. Diagonal pivoting can be used with GAUSS’, but not with CROUT’S or DOOLITTLE’S. Diagonally dominant matrices require no pivoting to maintain numerical stability. The reduced matrix remains diagonally dominant. Pivoting, however, may be necessary to maintain sparsity (explained later).

Different types of pivoting (cont.) The nodal admittance matrix of a circuit with no controlled sources is diagonally dominant (e.g., power system load flow equations). Diagonal dominance depends on equation and variable ordering, i.e., diagonal dominance may be destroyed (or created) by pivoting (other than diagonal pivoting). Example:

Why is pivoting possible? Theorem: if at any step in the factorization process a column or a row of the reduced matrix is zero, then A is singular. In other words, if A is nonsingular, then it is always possible to carry out complete, row, or column pivoting (but not necessarily diagonal pivoting). In some cases all the nonzero elements in the reduced matrix are small => the matrix could be ill-conditioned. Advice: always use double precision.

Pivoting and Permutation Matrix. Row pivoting amounts to pre-multiplying matrix A by a permutation matrix P: PAx = Pb. P is a reordered identity matrix. Example: P can be encoded as a row vector pr = [2 4 1 3].

Pivoting and Permutation Matrix. Column pivoting amounts to post-multiplying matrix A by a permutation matrix Q: AQ. Q is a reordered identity matrix. Example: Q can be encoded as a row vector qc = [3 1 4 2].
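A small sketch of how such encoded permutation vectors act, using the two vectors from these slides converted to 0-based indices. The encoding assumed here (row i of PA is row pr[i] of A; column j of AQ is column qc[j] of A) is one common convention and is an assumption, since the slides' matrices do not survive in the transcript:

```python
import numpy as np

pr = [1, 3, 0, 2]    # slides' pr = [2 4 1 3], 0-based
qc = [2, 0, 3, 1]    # slides' qc = [3 1 4 2], 0-based

A = np.arange(16.0).reshape(4, 4)
PA = A[pr, :]        # row permutation: no explicit P matrix needed
AQ = A[:, qc]        # column permutation

# Check against the explicit permutation matrices (reordered identity):
P = np.eye(4)[pr, :]
Q = np.eye(4)[:, qc]
print(np.allclose(PA, P @ A), np.allclose(AQ, A @ Q))   # True True
```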

Pivoting and Permutation Matrix. Complete pivoting: PAQ. Diagonal pivoting: PAP^T.

Condition Number: κ(A) = ||A|| · ||A^-1||. If κ(A) is small, A is well-conditioned => the solution can be obtained accurately. If κ(A) is large, A is ill-conditioned => the solution may not be accurate.

Residual Error and Accuracy. Residual error: r = b - Ax^(s), where x^(s) is the computed solution => small ||r|| indicates convergence. Accuracy refers to the number of correct decimal digits in the solution and is related to the condition number. In circuit simulation a small ||r|| is more important, and Gaussian elimination with pivoting is adequate.

Improving Accuracy

If A is numerically symmetric, i.e., A = A^T, and positive definite, then A can be factorized as A = LL^T (Cholesky factorization) or A = LDL^T.

Factoring A = LDL^T avoids computing the square roots. Only the lower (or upper) part is stored. Storage = (n^2 + n)/2.

Solution of LDL^T x = b: solve Lz = b (forward substitution), then Dy = z (diagonal scaling), then L^T x = y (backward substitution).
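A compact pure-Python sketch of LDL^T factorization and solution for a symmetric positive definite A (illustrative names; only the lower triangle of A is actually referenced, and there is no pivoting):

```python
def ldlt_solve(A, b):
    """Solve Ax = b for symmetric positive definite A via A = L D L^T
    (unit-diagonal L, diagonal D). No square roots."""
    n = len(b)
    L = [[0.0] * n for _ in range(n)]
    d = [0.0] * n
    for j in range(n):                                    # factorization
        d[j] = A[j][j] - sum(L[j][k] ** 2 * d[k] for k in range(j))
        L[j][j] = 1.0
        for i in range(j + 1, n):
            L[i][j] = (A[i][j] - sum(L[i][k] * L[j][k] * d[k] for k in range(j))) / d[j]
    z = b[:]
    for i in range(n):                                    # Lz = b
        z[i] -= sum(L[i][k] * z[k] for k in range(i))
    y = [z[i] / d[i] for i in range(n)]                   # Dy = z
    x = y[:]
    for i in range(n - 1, -1, -1):                        # L^T x = y
        x[i] -= sum(L[k][i] * x[k] for k in range(i + 1, n))
    return x

print(ldlt_solve([[4.0, 2.0], [2.0, 3.0]], [8.0, 7.0]))  # [1.25, 1.5]
```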

Solution Updating (used in large-change sensitivity calculations and other applications). Suppose the LU factors of A and the solution x° of Ax = b are known, and A is "perturbed" (some entries changed); find the new solution from LU and x°.

Sherman-Morrison-Woodbury (Householder) Formula (we'll prove it later). If A is perturbed to A + PDQ^T, then
(A + PDQ^T)^-1 = A^-1 - A^-1 P (D^-1 + Q^T A^-1 P)^-1 Q^T A^-1.
New solution of (A + PDQ^T) x = b:
x_new = x° - A^-1 P (D^-1 + Q^T A^-1 P)^-1 Q^T x°.

Solution Updating Algorithm. Given x° and the LU factors of A:
(1) Solve AV = P => V = A^-1 P
(2) Form H ≡ D^-1 + Q^T V
(3) Form y = Q^T x°
(4) Solve Hz = y (a k × k system)
(5) x_new = x° - Vz
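A sketch of the updating algorithm with NumPy/SciPy, reusing a single factorization of A for both the nominal solve and step (1). The matrices here are made up for illustration; P and Q are n × k "connection" matrices and D is the k × k change:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def update_solution(lu_piv, x0, P, D, Q):
    """New solution of (A + P D Q^T) x = b from x0 = A^{-1} b and A's LU factors."""
    V = lu_solve(lu_piv, P)                      # (1) V = A^{-1} P: k cheap solves
    H = np.linalg.inv(D) + Q.T @ V               # (2) H = D^{-1} + Q^T V  (k x k)
    y = Q.T @ x0                                 # (3)
    z = np.linalg.solve(H, y)                    # (4) small k x k system
    return x0 - V @ z                            # (5)

A = np.array([[4.0, 1.0], [2.0, 3.0]])
b = np.array([5.0, 5.0])
lu_piv = lu_factor(A)
x0 = lu_solve(lu_piv, b)

# Rank-1 change: add 0.5 to entry a_12, i.e. P = e1, Q = e2, D = [[0.5]].
P = np.array([[1.0], [0.0]]); Q = np.array([[0.0], [1.0]]); D = np.array([[0.5]])
x_new = update_solution(lu_piv, x0, P, D, Q)
print(np.allclose((A + P @ D @ Q.T) @ x_new, b))  # True
```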

Proof of Householder Formula. Back-substitute:

Sensitivity Analysis of Linear Resistive Circuits and Linear RLC Circuits in the Frequency Domain: Linear Algebraic Systems. Applications: design, optimization, tolerance, statistical analysis, reliability studies, failure analysis, ...

Large-Change Sensitivity. Let F(p) represent a scalar output function of a system, where p is the set of system parameters, and let p° be the nominal value of p. The large-change sensitivity of F(p) with respect to p at p = p° is ∆F = F(p° + ∆p) - F(p°). Apply the solution updating method to compute it.

Solution Updating Algorithm

Example on Large-Change Sensitivity Calculation. Find the large-change sensitivity of V1 and V2 with respect to ∆g2. First, find the "nominal solution":

Example on Large-Change Sensitivity Calculation (cont.)

Example on Large-Change Sensitivity Calculation (cont.)

Example on Large-Change Sensitivity Calculation (cont.)

Small-Change Sensitivity. The small-change sensitivity of F(p) with respect to p at p = p° is ∂F/∂p evaluated at p°. If F(p) were available as an explicit function of p, then the sensitivities, both small and large, could be computed in a straightforward manner. Usually, such an explicit function is not available. Consider a linear algebraic system described by A(p)x = b, where A may be real or complex.

Small-Change Sensitivity (Cont.) Suppose the output is a linear combination of the system variables, x_out = e^T x; then ∂x_out/∂p = e^T (∂x/∂p), a linear combination of the rows of the sensitivity matrix ∂x/∂p. The output could also be a vector, x_out = E^T x.

Small-Change Sensitivity (Cont.) Differentiate the system equations A(p)x = b with respect to p: A (∂x/∂p) + (∂A/∂p) x = 0, so ∂x/∂p = -A^-1 (∂A/∂p) x° (evaluated at the nominal solution x = x°, p = p°). Suppose x_out = e^T x; then ∂x_out/∂p = -e^T A^-1 (∂A/∂p) x°.

Algorithm. (1) Solve Ax = b at p = p° to find the LU factors and x°. (2) Construct ∂A/∂p. (3) Solve the adjoint (or transpose) system A^T u = e → U^T L^T u = e (i.e., u^T = e^T A^-1). Then ∂x_out/∂p = -u^T (∂A/∂p) x°.
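A sketch of the adjoint computation with NumPy/SciPy for a conductance parameter g between two nodes, whose nodal stamp makes ∂G/∂g the constant pattern [[+1, -1], [-1, +1]] in the corresponding rows and columns. The 2-node circuit values below are made up for illustration and are not the slides' example:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def dA_dg(n, i, j):
    """∂G/∂g for a conductance between nodes i and j (0-based; -1 = ground)."""
    S = np.zeros((n, n))
    if i >= 0: S[i, i] += 1.0
    if j >= 0: S[j, j] += 1.0
    if i >= 0 and j >= 0: S[i, j] -= 1.0; S[j, i] -= 1.0
    return S

# Made-up nodal system G v = b; the output is v_2, so e = [0, 1].
G = np.array([[3.0, -1.0], [-1.0, 2.0]])   # g1 = 2 to ground, g2 = 1 between the nodes, g3 = 1 to ground
b = np.array([1.0, 0.0])
e = np.array([0.0, 1.0])

lu_piv = lu_factor(G)
v0 = lu_solve(lu_piv, b)                   # nominal solution x°
u = lu_solve(lu_piv, e, trans=1)           # adjoint: G^T u = e, same LU factors

dG = dA_dg(2, 0, 1)                        # stamp pattern of g2 between nodes 1 and 2
sens = -u @ dG @ v0                        # ∂v_out/∂g2 = -u^T (∂G/∂g2) x°  -> 0.08
print(sens)
```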

Example. Find the small-change sensitivity of v_out with respect to g1, g2, and g3. Suppose we use nodal analysis; then:

Example (cont.) (1) Find the nominal solution. (2) Construct ∂A/∂g_i, built from the element stamp connectivity.

Example (cont.) (3) Solve A^T u = e (adjoint system).

Nonlinear Performance Function. If the performance function is nonlinear, ϕ = f(x, p) (the system itself is still linear), then dϕ/dp = ∂f/∂p + (∂f/∂x)^T (∂x/∂p), where ∂f/∂p and ∂f/∂x are evaluated at x° and p°, and ∂x/∂p is computed by the small-change sensitivity algorithm.

Partitioning and Block Decomposition

Bordered-Block Diagonal Form. Node Tearing. In general, the equations take the form:
| A_1            B_1 | | x_1 |   | b_1 |
|      A_2       B_2 | | x_2 | = | b_2 |
|           ...  ... | | ... |   | ... |
| C_1  C_2  ...   D  | |  y  |   |  c  |
where the x_i are internal variables of subcircuit i and y collects the tearing (interconnection) variables.

Branch Tearing

Floating Subcircuit

Bordered-Block Diagonal Form

Solution of BBD Equations: Subcircuit Eqns. Factorization and Forward Substitution

Subcircuit Equations Terminal representation of the subcircuit at the tearing points
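The algorithm slides that follow survive only as titles, so here is a hedged NumPy sketch of the generic BBD solve implied by the structure above: factor each diagonal block A_i independently, accumulate the Schur complement on the interconnection block D, solve the small system for the tearing variables y, then back-substitute into each subcircuit. All matrices are made up for illustration:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def solve_bbd(blocks, D, c):
    """Solve the bordered-block-diagonal system
        A_i x_i + B_i y = b_i   (one triple per subcircuit),
        sum_i C_i x_i + D y = c.
    `blocks` is a list of (A_i, B_i, C_i, b_i)."""
    S, rhs, factored = D.copy(), c.copy(), []
    for A_i, B_i, C_i, b_i in blocks:
        lu_piv = lu_factor(A_i)               # independent (parallelizable) factorizations
        V_i = lu_solve(lu_piv, B_i)           # A_i^{-1} B_i
        w_i = lu_solve(lu_piv, b_i)           # A_i^{-1} b_i
        S -= C_i @ V_i                        # Schur complement D - sum_i C_i A_i^{-1} B_i
        rhs -= C_i @ w_i
        factored.append((V_i, w_i))
    y = np.linalg.solve(S, rhs)               # small interconnection system
    xs = [w_i - V_i @ y for V_i, w_i in factored]   # back-substitute per subcircuit
    return xs, y

# Tiny made-up instance: two 2x2 subcircuits, one tearing variable.
A1 = np.array([[2.0, 0.0], [0.0, 2.0]]); B1 = np.array([[1.0], [0.0]])
C1 = np.array([[1.0, 0.0]]);             b1 = np.array([1.0, 2.0])
A2 = np.array([[3.0, 1.0], [0.0, 3.0]]); B2 = np.array([[0.0], [1.0]])
C2 = np.array([[0.0, 1.0]]);             b2 = np.array([3.0, 3.0])
D  = np.array([[4.0]]);                  c  = np.array([1.0])
xs, y = solve_bbd([(A1, B1, C1, b1), (A2, B2, C2, b2)], D, c)
print(xs, y)
```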

Solution of BBD Equations: Alg. I

Solution of BBD Equations: Alg. I (cont.)

Solution of BBD Equations: Alg. II

Solution of BBD Equations: Alg. II (cont.)

Solution of BBD Equations: Alg. III

Solution of BBD Equations: Alg. III (cont.)

Nested Decomposition. Also known as: Nested Dissection, Domain Decomposition.
