EE616 Dr. Janusz Starzyk
Computer Aided Analysis of Electronic Circuits

Innovations in numerical techniques had a profound impact on CAD:
–Sparse matrix methods.
–Multi-step methods for the solution of differential equations.
–Adjoint techniques for sensitivity analysis.
–Sequential quadratic programming in optimization.
Fundamental Concepts

NETWORK ELEMENTS:
–One-port:
Resistor (voltage controlled or current controlled)
Capacitor
Inductor
Independent voltage source
Independent current source
(sign convention: voltage + V −, current i)
–Two-port (port variables V1, i1 and V2, i2):
Voltage to voltage transducer (VVT)
Voltage to current transducer (VCT)
Current to voltage transducer (CVT)
Current to current transducer (CCT)
Ideal transformer (IT)
Ideal gyrator (IG)
Positive impedance converter (PIC)
Negative impedance converter (NIC)
Ideal operational amplifier (OPAMP)
The OPAMP is equivalent to a nullor, constructed from two singular one-ports: the nullator (V = 0 and i = 0) and the norator (V and i arbitrary).
Network Scaling

A typical design deals with network elements having resistance from ohms to megohms, capacitance from fF to mF, and inductance from mH to H, over a wide range of frequencies in Hz. Such widely spread element values invite numerical trouble.

EXAMPLE: Calculate a derivative with 6-digit accuracy. When the derivative is approximated by a finite difference, rounding the intermediate values to 6 significant digits causes cancellation of the leading digits; in the slide's example the computed derivative is off by 16%.
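The slide's worked example did not survive extraction, but the effect is easy to reproduce. A minimal sketch (function names are illustrative) that differentiates f(x) = eˣ by a forward difference while rounding every value to 6 significant digits:

```python
import math

def round6(x):
    """Keep only 6 significant digits, mimicking 6-digit arithmetic."""
    return float(f"{x:.5e}")

x, h = 1.0, 1e-5
# forward difference for f(x) = e^x with every value rounded to 6 digits
fd = round6((round6(math.exp(x + h)) - round6(math.exp(x))) / h)
exact = math.exp(x)                       # 2.718281828...
rel_err = abs(fd - exact) / exact
print(fd, f"{rel_err:.1%}")               # -> 3.0 10.4%
```

The two function values agree in their first five digits, so the subtraction leaves almost no accurate digits; the roughly 10% error here is the same cancellation mechanism as the slide's 16% figure.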
Scaling is used to bring the network impedances close to unity. Design values carry the subscript d and scaled values the subscript s.

Impedance scaling: for a scaling factor K (impedances divided by K, consistent with the transducer rules below):
R_s = R_d / K, L_s = L_d / K, C_s = K C_d.

Frequency scaling: a frequency scaling factor Ω maps the design frequency to ω_s = ω_d / Ω and affects only the reactive elements:
L_s = Ω L_d, C_s = Ω C_d.

With both scalings applied: L_s = Ω L_d / K, C_s = K Ω C_d.
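Since the slide's scaling formulas were lost in extraction, here is a sketch under the assumed convention above (impedances divided by K, design frequency divided by Ω); `scale_elements` is an illustrative name, not from the text:

```python
def scale_elements(R, L, C, K=1.0, Omega=1.0):
    """Combined impedance scaling (by K) and frequency scaling (by Omega):
    R_s = R/K, L_s = Omega*L/K, C_s = K*Omega*C (assumed convention)."""
    return R / K, Omega * L / K, K * Omega * C

# bring a 1 kOhm / 1 mH / 1 nF design close to unity element values:
Rs, Ls, Cs = scale_elements(1e3, 1e-3, 1e-9, K=1e3, Omega=1e6)
print(Rs, Ls, Cs)  # -> 1.0 1.0 1.0
```

With K = 10³ and Ω = 10⁶ all three element values land at unity, which is exactly the point of scaling: the nodal matrix entries become well conditioned for fixed-precision arithmetic.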
For both impedance and frequency scaling we have:
–VVT, CCT, IT, PIC, NIC, OPAMP remain unchanged.
–VCT: the transconductance g is multiplied by K.
–CVT, IG: the transresistance r is divided by K.
NODAL EQUATIONS

For an (n+1)-terminal network the nodal equations are Y V = J, where V = (V1, …, Vn+1) collects the node voltages and J = (J1, …, Jn+1) the currents injected into the nodes.
Y is called the indefinite admittance matrix. For a network with R, L, C and VCT elements, Y can be obtained directly from the network by inspection. For a VCT whose controlling voltage V1 appears from node k to node m and whose output current gV1 flows from node i to node j, the entries are:
+g at (i,k), −g at (i,m), −g at (j,k), +g at (j,m).
When k = i and m = j we have a one-port with i = Yv, i.e. g = Y.

Linear Equations and Gaussian Elimination: for a linear network the nodal equations are linear. Nonlinear networks can be solved by linearization about an operating point. Thus the solution of linear equations is basic to many problems. Consider the system of linear equations A x = b.
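The one-port and VCT stamps described above can be sketched as follows (a minimal illustration with assumed names and sign conventions, not the textbook's code):

```python
import numpy as np

def stamp_admittance(Y, k, m, y):
    """Stamp a one-port admittance y between nodes k and m (0-based)."""
    Y[k, k] += y; Y[m, m] += y
    Y[k, m] -= y; Y[m, k] -= y

def stamp_vct(Y, k, m, i, j, g):
    """Stamp a VCT: output current g*V(k,m) flowing from node i to node j."""
    Y[i, k] += g; Y[i, m] -= g
    Y[j, k] -= g; Y[j, m] += g

Y = np.zeros((3, 3))              # indefinite Y of a 3-terminal network
stamp_admittance(Y, 0, 1, 2.0)    # 2 S conductance between nodes 1 and 2
stamp_vct(Y, 0, 1, 1, 2, 0.5)     # g = 0.5 S controlled by V(1,2), output 2 -> 3
print(Y)
print(Y.sum(axis=0))              # every row and column of an indefinite Y sums to zero
```

The zero row and column sums are a useful sanity check on any stamping code: they hold for every indefinite admittance matrix.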
The solution can be obtained by inverting the matrix, but this approach is not practical. Gaussian elimination: rewrite the equations in explicit form and denote b_i by a_{i,n+1} to simplify notation:

a_{11} x_1 + a_{12} x_2 + … + a_{1n} x_n = a_{1,n+1}
⋮
a_{n1} x_1 + a_{n2} x_2 + … + a_{nn} x_n = a_{n,n+1}
How do we start Gaussian elimination? Divide the first equation by a_{11}, obtaining coefficients a'_{1j} = a_{1j} / a_{11}. Multiply this equation by −a_{21} and add it to the second; the coefficients of the new second equation are a'_{2j} = a_{2j} − a_{21} a'_{1j}, and with this transformation a'_{21} becomes zero. Similarly for the other equations, setting a'_{ij} = a_{ij} − a_{i1} a'_{1j} for i = 2, …, n,
makes all coefficients of the first column zero, with the exception of a_{11}. We repeat this process, selecting the diagonal elements as dividers (pivots); a superscript shows how many elimination steps have been made:

a^{(k)}_{kj} = a^{(k−1)}_{kj} / a^{(k−1)}_{kk} (pivot row divided through),
a^{(k)}_{ij} = a^{(k−1)}_{ij} − a^{(k−1)}_{ik} a^{(k)}_{kj} for i > k.

The resulting equations have upper triangular form.
Back substitution is used to obtain the solution. The last equation gives x_n directly; x_n is then used to obtain x_{n−1}, and so on. In general:

x_i = a^{(i)}_{i,n+1} − Σ_{j=i+1..n} a^{(i)}_{ij} x_j.

Gaussian elimination requires about n³/3 multiplications. EXAMPLE: Example 2.5.b (p. 70)
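As a sketch of the procedure (illustrative code, not the book's implementation), forward elimination followed by back substitution:

```python
import numpy as np

def gauss_solve(A, b):
    """Solve A x = b by Gaussian elimination (no pivoting) followed
    by back substitution; about n^3/3 multiplications for large n."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    for k in range(n):                      # forward elimination
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]           # multiplier a_ik / a_kk
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):          # x_n first, then x_{n-1}, ...
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([3.0, 5.0])
x = gauss_solve(A, b)
print(x)  # -> [0.8 1.4]
```

Note the elimination loop updates only rows below the pivot row, so after the outer loop finishes, A holds the upper triangular system that back substitution consumes.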
Back substitution requires about n²/2 operations.

Triangular decomposition: triangular decomposition has an advantage over Gaussian elimination in that it gives a simple solution for systems with different right-hand-side vectors and for the transposed systems required in sensitivity computations. Assume that the matrix can be factored as A = LU, where:
L stands for lower triangular and U for upper triangular. Replacing A by LU, the system of equations takes the form L U x = b. Define an auxiliary vector z by U x = z; then L z = b, and z can be found easily as z_1 = b_1 / l_{11} and, in general,

z_i = (b_i − Σ_{j=1..i−1} l_{ij} z_j) / l_{ii}.
This is called forward elimination. The solution of U x = z is called backward substitution. Since U has a unit diagonal, x_n = z_n and

x_i = z_i − Σ_{j=i+1..n} u_{ij} x_j.

To find the LU decomposition, consider the product of L and U and equate it, entry by entry, to A:
From the first column of the product we have l_{i1} = a_{i1}; from the first row we find u_{1j} = a_{1j} / l_{11}; from the second column we have l_{i2} = a_{i2} − l_{i1} u_{12}, and so on. In a machine implementation L and U overwrite A, with L occupying the lower triangle and U the upper triangle of A. In general the algorithm of LU decomposition can be written as (Crout algorithm):
1. Set k = 1.
2. Compute column k of L using l_{ik} = a_{ik} − Σ_{j=1..k−1} l_{ij} u_{jk} for i = k, …, n. If k = n, stop.
3. Compute row k of U (with u_{kk} = 1) using u_{kj} = (a_{kj} − Σ_{m=1..k−1} l_{km} u_{mj}) / l_{kk} for j = k+1, …, n.
4. Set k = k + 1 and go to step 2.
This technique is implemented in the text by the CROUT subroutine; a modification that works with rows only is LUROW, and a modification of Gaussian elimination which gives the LU decomposition is realized by the LUG subroutine.
Features of LU decomposition:
1. Simple calculation of the determinant: det A = l_{11} l_{22} ⋯ l_{nn}.
2. If only the right-hand-side vector b is changed there is no need to recalculate the decomposition; only the forward and backward substitutions are performed, which takes n² operations.
3. The transposed system Aᵀ x = c required for sensitivity calculation can be solved easily, since Aᵀ = Uᵀ Lᵀ.
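A minimal realization of the Crout steps and the two substitutions (illustrative names; this is a sketch, not the text's CROUT subroutine):

```python
import numpy as np

def crout_lu(A):
    """Crout decomposition A = L U with unit diagonal on U.
    L and U overwrite a copy of A: L in the lower triangle
    (including the diagonal), U strictly above it."""
    A = A.astype(float).copy()
    n = A.shape[0]
    for k in range(n):
        for i in range(k, n):                   # column k of L
            A[i, k] -= A[i, :k] @ A[:k, k]      # l_ik = a_ik - sum l_ij u_jk
        for j in range(k + 1, n):               # row k of U
            A[k, j] = (A[k, j] - A[k, :k] @ A[:k, j]) / A[k, k]
    return A

def lu_solve(LU, b):
    """Forward elimination L z = b, then backward substitution U x = z."""
    n = len(b)
    z = np.zeros(n)
    for i in range(n):                          # z_1 = b_1 / l_11, ...
        z[i] = (b[i] - LU[i, :i] @ z[:i]) / LU[i, i]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):              # x_n = z_n since u_nn = 1
        x[i] = z[i] - LU[i, i + 1:] @ x[i + 1:]
    return x

LU = crout_lu(np.array([[4.0, 2.0], [2.0, 3.0]]))
x = lu_solve(LU, np.array([10.0, 9.0]))         # the same LU serves any new b
print(x)  # -> [1.5 2.0]
```

This illustrates feature 2 directly: once `LU` is computed, each new right-hand side costs only the two n²-operation substitutions.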
4. The number of operations required for the LU decomposition is about n³/3 (equivalent to Gaussian elimination). Example 2.5.1
2.6 PIVOTING: the element by which we divide in Gaussian elimination is called the pivot; it must not be zero. To improve accuracy the pivot should have a large absolute value. Partial pivoting: search for the largest element in the current column. Full pivoting: search for the largest element in the remaining matrix. Example 2.6.1
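A small sketch of why the pivot's magnitude matters (illustrative code, not from the text): a tiny but nonzero pivot produces a huge multiplier that swamps the other entries, while a row exchange restores accuracy.

```python
import numpy as np

def gauss_solve(A, b, pivot=True):
    """Gaussian elimination; with pivot=True, partial pivoting picks the
    largest-magnitude element of the current column as the pivot."""
    A = A.astype(float).copy(); b = b.astype(float).copy()
    n = len(b)
    for k in range(n):
        if pivot:
            p = k + np.argmax(np.abs(A[k:, k]))   # row with largest |a_ik|
            A[[k, p]] = A[[p, k]]; b[[k, p]] = b[[p, k]]
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

A = np.array([[1e-20, 1.0], [1.0, 1.0]])
b = np.array([1.0, 2.0])          # exact solution is very close to [1, 1]
x_bad = gauss_solve(A, b, pivot=False)
x_good = gauss_solve(A, b, pivot=True)
print(x_bad)    # tiny pivot: x_1 comes out as 0, completely wrong
print(x_good)   # partial pivoting recovers [1, 1]
```

Without pivoting the multiplier 10²⁰ turns 2 − 10²⁰ into −10²⁰, losing the right-hand side entirely; one row swap avoids the problem.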
SPARSE MATRIX PRINCIPLES

When many coefficients of the matrix A are zero, sparse matrix techniques reduce the number of operations. This not only reduces the time required to solve the system of equations but also reduces the memory requirements, since zero coefficients are not stored at all (read Section 2.7). Pivot selection strategies are motivated mostly by the possibility of reducing the number of operations, which means preserving sparsity during elimination.
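A toy illustration of the principle (names are illustrative; practical sparse solvers use more refined storage schemes): store only the nonzeros and operate only on them.

```python
def to_sparse(dense):
    """Keep only the nonzeros, stored as {(row, col): value}."""
    return {(i, j): v for i, row in enumerate(dense)
            for j, v in enumerate(row) if v != 0.0}

def sparse_matvec(A, x):
    """y = A x, touching only the stored nonzeros."""
    n = 1 + max(i for i, _ in A)
    y = [0.0] * n
    for (i, j), v in A.items():
        y[i] += v * x[j]
    return y

dense = [[ 4.0, 0.0, 0.0, -1.0],
         [ 0.0, 3.0, 0.0,  0.0],
         [ 0.0, 0.0, 2.0,  0.0],
         [-1.0, 0.0, 0.0,  5.0]]
A = to_sparse(dense)
print(len(A), "of 16 entries stored")
y = sparse_matvec(A, [1.0, 1.0, 1.0, 1.0])
print(y)  # -> [3.0, 3.0, 2.0, 4.0]
```

Both storage and work scale with the nonzero count rather than n², which is exactly the payoff claimed above for nodal matrices of large circuits, where most node pairs are unconnected.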