EE616 Dr. Janusz Starzyk

Computer Aided Analysis of Electronic Circuits
Innovations in numerical techniques have had a profound impact on CAD:
–Sparse matrix methods.
–Multi-step methods for the solution of differential equations.
–Adjoint techniques for sensitivity analysis.
–Sequential quadratic programming in optimization.

Fundamental Concepts
NETWORK ELEMENTS:
–One-port:
Resistor (voltage controlled or current controlled)
Capacitor
Inductor
Independent voltage source
Independent current source
(Figure: one-port element with terminal voltage V and current i.)

Fundamental Concepts
–Two-port:
Voltage to voltage transducer (VVT)
Voltage to current transducer (VCT)
Current to voltage transducer (CVT)
Current to current transducer (CCT)
Ideal transformer (IT)
Ideal gyrator (IG)
(Figure: two-port with port voltages V1, V2 and port currents i1, i2.)

Fundamental Concepts
Positive impedance converter (PIC)
Negative impedance converter (NIC)
Ideal operational amplifier (OPAMP)
The OPAMP is equivalent to a nullor, constructed from two singular one-ports: the nullator and the norator.
(Figure: OPAMP and its nullor equivalent, with port voltages V1, V2 and port currents i1, i2.)

Network Scaling
A typical design deals with network elements having resistance from ohms to megohms, capacitance from fF to mF, and inductance from mH to H, over a wide frequency range. Consider an EXAMPLE: calculating a derivative with 6-digit accuracy. Because of roundoff errors in the subtraction of nearly equal values, the computed result can be in error by 16%.
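The effect of limited precision can be illustrated with a small Python sketch (not from the slides; the function f(x) = x^2 and the step sizes are illustrative assumptions): a forward difference computed with every value rounded to 6 significant digits gives the correct derivative for a moderate step, but loses the result entirely when the step is too small.

```python
def round_sig(x, digits=6):
    """Round x to `digits` significant digits (mimics 6-digit arithmetic)."""
    if x == 0.0:
        return 0.0
    return float(f"%.{digits - 1}e" % x)

def forward_diff(f, x, h, digits=6):
    """Forward difference (f(x+h) - f(x)) / h with every value rounded."""
    fx = round_sig(f(x), digits)
    fxh = round_sig(f(x + h), digits)
    return round_sig((fxh - fx) / h, digits)

f = lambda x: x * x                  # exact derivative at x = 1 is 2
good = forward_diff(f, 1.0, 1e-3)    # step large enough: correct
bad = forward_diff(f, 1.0, 1e-6)     # rounding wipes out the difference
```

With 6 significant digits, f(1.000001) rounds to the same value as f(1.0), so the difference between them, and with it the derivative, is lost completely.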

Scaling is used to bring network impedances close to unity.
Impedance scaling: design values have subscript d and scaled values subscript s. For a scaling factor K, every impedance is divided by K:
R_s = R_d / K,  L_s = L_d / K,  C_s = K C_d.
Frequency scaling (omega_s = omega_d / Omega) has an effect on the reactive elements only:
L_s = Omega L_d,  C_s = Omega C_d.

For both impedance and frequency scaling combined we have:
R_s = R_d / K,  L_s = (Omega / K) L_d,  C_s = K Omega C_d.
VVT, CCT, IT, PIC, NIC, and OPAMP remain unchanged. For the VCT the transconductance g is multiplied by K. For the CVT and IG the transresistance r is divided by K.
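As a sketch, the element transformations can be checked numerically in Python. The conventions used here (omega_s = omega_d / Omega, every impedance divided by K) and all numeric values are assumptions for illustration:

```python
def scale_elements(R, L, C, K, Omega):
    """Combined impedance (factor K) and frequency (factor Omega) scaling:
    every impedance is divided by K and every frequency by Omega."""
    return R / K, (Omega / K) * L, K * Omega * C

K, Omega = 1e3, 1e4
omega_d = 1e4                      # an assumed design frequency, rad/s
R_s, L_s, C_s = scale_elements(1e3, 1e-3, 1e-6, K, Omega)
omega_s = omega_d / Omega          # the corresponding scaled frequency

# each scaled impedance equals the design impedance divided by K:
zl_ok = abs(omega_s * L_s - (omega_d * 1e-3) / K) < 1e-12
zc_ok = abs(1 / (omega_s * C_s) - (1 / (omega_d * 1e-6)) / K) < 1e-9
```

The check confirms that after scaling, every element presents the same impedance (up to the factor 1/K) at the corresponding scaled frequency.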

NODAL EQUATIONS
For an (n+1)-terminal network:
Y V = J
where V = [V1, V2, ..., Vn+1]^T is the vector of node voltages and J = [J1, J2, ..., Jn+1]^T the vector of currents injected into the nodes.

Y is called the indefinite admittance matrix. For a network with R, L, C, and VCT elements, Y can be obtained directly from the network by inspection. For a VCT with controlling voltage V1 taken from node k to node m and output current gV1 flowing from node i to node j, the entries
+g at (i,k), -g at (i,m), -g at (j,k), +g at (j,m)
are added to Y.

When k = i and m = j we have a one-port with g = Y.
Linear Equations and Gaussian Elimination:
For a linear network the nodal equations are linear. Nonlinear networks can be solved by linearization about an operating point; thus the solution of linear equations is basic to many problems. Consider the system of linear equations A x = b (in circuit form, i = Yv).
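The by-inspection construction can be sketched in Python (a sketch, not the text's code; 0-based node numbering and the sign convention for the current direction are assumptions, since the slide's figure is not reproduced here):

```python
def stamp_vct(Y, g, k, m, i, j):
    """Add a VCT to the indefinite admittance matrix Y (list of lists).
    Controlling voltage V1 = V_k - V_m; output current g*V1 flows from
    node i to node j."""
    Y[i][k] += g
    Y[i][m] -= g
    Y[j][k] -= g
    Y[j][m] += g

n = 3
Y = [[0.0] * n for _ in range(n)]
# the one-port case k == i, m == j: a 2 S conductance between nodes 0 and 1
stamp_vct(Y, 2.0, 0, 1, 0, 1)
```

Every row and column of the resulting indefinite admittance matrix sums to zero, which is a useful sanity check on the stamps.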

The solution could be obtained by inverting the matrix, but this approach is not practical. Gaussian elimination: rewrite the equations in explicit form and denote b_i by a_i,n+1 to simplify the notation:
a_i1 x_1 + a_i2 x_2 + ... + a_in x_n = a_i,n+1,  i = 1, 2, ..., n.

How does Gaussian elimination start? Divide the first equation by a_11, obtaining
a'_1j = a_1j / a_11,  j = 1, ..., n+1.
Multiply this equation by a_21 and subtract it from the second. The coefficients of the new second equation are
a'_2j = a_2j - a_21 a'_1j,
and with this transformation a'_21 becomes zero. Similarly for the other equations, setting:

a'_ij = a_ij - a_i1 a'_1j,  i = 2, ..., n,
makes all coefficients of the first column zero with the exception of a'_11 = 1. We repeat this process, selecting the diagonal elements as divisors and obtaining the general formulas
a^(k)_kj = a^(k-1)_kj / a^(k-1)_kk,
a^(k)_ij = a^(k-1)_ij - a^(k-1)_ik a^(k)_kj,  i > k,
where the superscript shows how many elimination steps were made. The resulting equations have upper triangular form.

Back substitution is used to obtain the solution: the last variable x_n is obtained directly and is then used to obtain x_{n-1}, and so on. In general:
x_i = a^(n)_i,n+1 - sum_{j=i+1..n} a^(n)_ij x_j.
Gaussian elimination requires about n^3/3 operations. EXAMPLE: Example 2.5.b (p. 70).
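The elimination and back-substitution steps translate into a short Python sketch (no pivoting, dense storage; a didactic version, not production code):

```python
def gauss_solve(A, b):
    """Gaussian elimination with back substitution, as in the slides.
    A is an n x n list of lists, b a length-n list; both are left intact."""
    n = len(b)
    a = [row[:] + [b[i]] for i, row in enumerate(A)]  # b stored as column n+1
    for k in range(n):
        piv = a[k][k]                   # divide the pivot row by a_kk
        for j in range(k, n + 1):
            a[k][j] /= piv
        for i in range(k + 1, n):       # zero out column k below the pivot
            f = a[i][k]
            for j in range(k, n + 1):
                a[i][j] -= f * a[k][j]
    x = [0.0] * n                       # back substitution
    for i in range(n - 1, -1, -1):
        x[i] = a[i][n] - sum(a[i][j] * x[j] for j in range(i + 1, n))
    return x

x = gauss_solve([[2.0, 1.0], [1.0, 3.0]], [3.0, 5.0])   # -> [0.8, 1.4]
```

Because each pivot row is normalized to 1, the back substitution needs no division, matching the formula above.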

while back substitution requires about n^2/2.
Triangular decomposition:
Triangular decomposition has an advantage over Gaussian elimination: it gives a simple solution for systems with different right-hand-side vectors and for the transpose systems required in sensitivity computations. Assume we can factor the matrix as
A = L U, where

L stands for lower triangular and U for upper triangular. Replacing A by LU, the system of equations takes the form
L U X = b.
Define an auxiliary vector Z by U X = Z; then L Z = b, and Z can be found easily as
Z_1 = b_1 / l_11 and Z_i = (b_i - sum_{j=1..i-1} l_ij Z_j) / l_ii.

This is called forward elimination. The solution of U X = Z is called backward substitution. Since U has a unit diagonal, we have X_n = Z_n and
X_i = Z_i - sum_{j=i+1..n} u_ij X_j.
To find the LU decomposition, consider the product of L and U and equate it to A term by term:

From the first column we have l_i1 = a_i1; from the first row we find u_1j = a_1j / l_11; from the second column we have l_i2 = a_i2 - l_i1 u_12; and so on. In a machine implementation L and U overwrite A, with L occupying the lower and U the upper triangle of A. In general, the algorithm of LU decomposition can be written as (Crout algorithm):
1. Set k = 1 and go to step 3 (for k = 1 the column formula reduces to l_i1 = a_i1, already stored in A).
2. Compute column k of L using

l_ik = a_ik - sum_{m=1..k-1} l_im u_mk,  i = k, ..., n;
if k = n, stop.
3. Compute row k of U using
u_kj = (a_kj - sum_{m=1..k-1} l_km u_mj) / l_kk,  j = k+1, ..., n.
4. Set k = k+1 and go to step 2.
This technique is represented in the text by the CROUT subroutine. A modification which deals with rows only is LUROW; a modification of Gaussian elimination which gives the LU decomposition is realized by the LUG subroutine.
Features of LU decomposition:
1. Simple calculation of the determinant: det A = l_11 l_22 ... l_nn (since u_ii = 1).
2. If only the right-hand-side vector b is changed, there is no need to recalculate the decomposition; only the forward and backward substitutions are performed, which takes n^2 operations.
3. The transpose system A^T X = c required for sensitivity calculations can be solved easily, since A^T = U^T L^T.

4. The number of operations required for the LU decomposition is about n^3/3 (equivalent to Gaussian elimination).
Example 2.5.1
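A Python sketch of the Crout algorithm and of the forward/backward substitution that reuses its factors (a sketch under the conventions above: L keeps its diagonal, U has an implicit unit diagonal, and both overwrite A; the numeric example is illustrative):

```python
def crout_lu(A):
    """In-place Crout LU decomposition. On return, L (with its diagonal)
    occupies the lower triangle of A and U (unit diagonal implicit) the
    strict upper triangle."""
    n = len(A)
    for k in range(n):
        # column k of L: l_ik = a_ik - sum_{m<k} l_im * u_mk
        for i in range(k, n):
            A[i][k] -= sum(A[i][m] * A[m][k] for m in range(k))
        # row k of U: u_kj = (a_kj - sum_{m<k} l_km * u_mj) / l_kk
        for j in range(k + 1, n):
            A[k][j] = (A[k][j] - sum(A[k][m] * A[m][j] for m in range(k))) / A[k][k]
    return A

def lu_solve(LU, b):
    """Forward elimination (L Z = b) then backward substitution (U X = Z).
    Reusing the factors for a new b costs only about n^2 operations."""
    n = len(b)
    z = [0.0] * n
    for i in range(n):                   # z_i = (b_i - sum l_ij z_j) / l_ii
        z[i] = (b[i] - sum(LU[i][j] * z[j] for j in range(i))) / LU[i][i]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):       # x_i = z_i - sum u_ij x_j  (u_ii = 1)
        x[i] = z[i] - sum(LU[i][j] * x[j] for j in range(i + 1, n))
    return x

def det_from_lu(LU):
    """Feature 1: det A = product of the diagonal of L (since u_ii = 1)."""
    d = 1.0
    for i in range(len(LU)):
        d *= LU[i][i]
    return d

LU = crout_lu([[4.0, 2.0], [2.0, 3.0]])
x = lu_solve(LU, [8.0, 7.0])    # the same factors can be reused for any b
```

Once `crout_lu` has run, any number of right-hand sides can be processed by `lu_solve` alone, which is exactly the advantage listed as feature 2 above.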

2.6 PIVOTING: the element by which we divide in Gaussian elimination is called the pivot; it must not be zero. To improve accuracy, the pivot element should have a large absolute value.
Partial pivoting: search for the largest element in the column.
Full pivoting: search for the largest element in the matrix.
Example 2.6.1
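A small Python experiment (an illustration, not from the text; the 1e-20 pivot is an assumed extreme case) shows why a small pivot destroys accuracy in floating-point arithmetic and how a row interchange repairs it:

```python
def gauss(A, b, pivot=True):
    """Gaussian elimination; with pivot=True the largest |a_ik| in
    column k is brought to the diagonal (partial pivoting)."""
    n = len(b)
    a = [row[:] + [b[i]] for i, row in enumerate(A)]
    for k in range(n):
        if pivot:
            p = max(range(k, n), key=lambda i: abs(a[i][k]))
            a[k], a[p] = a[p], a[k]      # row interchange
        for i in range(k + 1, n):
            f = a[i][k] / a[k][k]
            for j in range(k, n + 1):
                a[i][j] -= f * a[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (a[i][n] - sum(a[i][j] * x[j] for j in range(i + 1, n))) / a[i][i]
    return x

# exact solution is very close to x = [1, 1]
A, b = [[1e-20, 1.0], [1.0, 1.0]], [1.0, 2.0]
bad = gauss(A, b, pivot=False)    # tiny pivot: x[0] comes out completely wrong
good = gauss(A, b, pivot=True)    # row interchange restores accuracy
```

Dividing by the tiny pivot produces a multiplier of 1e20, which swamps the other coefficients and loses x[0] entirely; with the rows interchanged the multiplier is 1e-20 and the answer is accurate.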

SPARSE MATRIX PRINCIPLES
When many coefficients of the matrix A are zero, sparse matrix techniques reduce the number of operations. This not only reduces the time required to solve the system of equations but also reduces the memory requirements, since zero coefficients are not stored at all. (Read section 2.7.) Pivot selection strategies are motivated mostly by the possibility of reducing the number of operations.
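A minimal Python sketch of the idea (coordinate storage in a dict is one simple scheme chosen for illustration; practical sparse solvers use more refined structures and orderings):

```python
def to_sparse(A):
    """Store only the nonzero entries of A as {(row, col): value}."""
    return {(i, j): v for i, row in enumerate(A)
            for j, v in enumerate(row) if v != 0.0}

def sparse_matvec(S, x, n):
    """y = A x computed by touching only the stored nonzeros."""
    y = [0.0] * n
    for (i, j), v in S.items():
        y[i] += v * x[j]
    return y

A = [[2.0, 0.0, 0.0],
     [0.0, 0.0, 3.0],
     [0.0, 1.0, 0.0]]
S = to_sparse(A)                             # 3 stored entries instead of 9
y = sparse_matvec(S, [1.0, 1.0, 1.0], 3)     # -> [2.0, 3.0, 1.0]
```

Both the storage and the multiply cost scale with the number of nonzeros rather than with n^2, which is the point of the sparse-matrix techniques above.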