Computer-Aided Verification of Electronic Circuits and Systems EE219A – Fall 2002 Professor: Prof. Alberto Sangiovanni-Vincentelli Instructor: Alessandra Nardi
Major Verification Tasks
Design Concept → Design Description → Design Implementation (Synthesis)
Design Verification: is what I asked for what I want?
Implementation Verification: is what I asked for what I got?
Functional Verification
Specification Validation: are the specifications consistent? Are they complete, i.e. if the design satisfies them, are we sure it is correct?
Design Verification: is the “entry”-level description of the design correct? This is the most common reason for chip failure.
Implementation Verification: are the different levels of abstraction generated by the design process equivalent?
Multi-Million-Gate Verification
Moore’s Law:
– Faster and more complex designs
– Test-vector size grows even faster than design size
– Time-to-market pressures will certainly not abate
This clearly conflicts with the need to exhaustively verify a design before sign-off. Verification is the bottleneck… and can be a nightmare.
Verification Techniques
Simulation (FT): build a mathematical model of the components of the design, submit test vectors, and solve on a computer the equations that give the output as a function of the input and of the models.
Formal Verification (F): prove mathematically that:
– A description has a set of properties
– Two descriptions at different levels of abstraction are functionally equivalent
Goal: ensure the design meets its functional (F) and timing (T) requirements at each level of abstraction.
Verification Techniques
Static Timing Analysis (T): analyze the circuit’s topological paths, check their timing properties, and assess their impact on circuit delay.
Emulation (F): map the design onto the components of an emulation machine, submit test vectors, and check the machine’s outputs, possibly physically connecting it to a system.
Prototyping (F): build a hardware implementation of the design and operate it.
Goal: ensure the design meets its functional (F) and timing (T) requirements at each level of abstraction.
Simulation: Performance vs Abstraction
[Figure: performance and capacity vs. abstraction – SPICE (~0.001x), event-driven simulator (~1x), cycle-based simulator (~10x)]
Boolean Simulation: Single-Processor
Event-driven (“time-wheel” or statically ordered)
– Delay model emphasis (inertial or transport) is the major differentiator
– Today about 20–50K events/sec/MIP
Cycle-based
Cycle-based Simulation
Cycle-based simulators work off a control and data-flow representation. Everything in the design description is treated as either a clocked element or zero-delay combinational logic.
Advantages:
– Exceptionally fast
– Same internal representation for both simulation and synthesis
– Predicted results match the synthesized logic
Cycle-based Algorithm
The input design must be completely synchronous; evaluation happens only on the clock edge:
– First: evaluate all combinational logic
– Next: latch values into state registers
– Repeat on the next clock edge
[Figure: combinational logic between banks of state registers, driven by the clock]
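The evaluate-then-latch loop above can be sketched in a few lines. This is an illustrative toy, not any particular simulator; the 4-bit accumulator design is invented purely for the example:

```python
# A minimal cycle-based simulation loop: the design is reduced to state
# registers plus a zero-delay combinational next-state/output function,
# evaluated exactly once per clock edge.

def comb_logic(state, inp):
    """Zero-delay combinational block: a toy 4-bit accumulator (mod 16)."""
    nxt = (state + inp) & 0xF
    return nxt, nxt  # output equals the new accumulator value

def cycle_sim(comb, state0, inputs):
    state = state0
    outputs = []
    for inp in inputs:                 # one iteration per clock edge
        nxt, out = comb(state, inp)    # 1) evaluate combinational logic
        state = nxt                    # 2) latch into state registers
        outputs.append(out)
    return outputs

print(cycle_sim(comb_logic, 0, [3, 5, 9, 1]))  # [3, 8, 1, 2] (17 wraps to 1)
```

Because there is no event queue and no intra-cycle timing, each clock cycle costs one evaluation of the combinational function, which is where the ~10x speedup over event-driven simulation comes from.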
Boolean Simulation: Hardware Acceleration
Quickturn-IBM (Cobalt) type:
– ~1M events/sec
– Requires fairly long compilation time
Emulation
Based on re-programmable FPGA technology. Only functional verification (no timing verification yet). Close-to-implementation performance: can boot an operating system and give the look and feel of the final implementation. Allows hardware-software co-design.
“Prototyping” Techniques in Design Stages
[Figure: design stages over time – software simulation, emulation, prototype, replication – trading off flexibility against performance and cost as hardware design changes taper off]
Board-Level Rapid-Prototyping Environment
Early feedback on customer requirements; early system integration; in-field test on vehicle. Virtual prototyping (co-simulation) and physical prototyping (emulation board).
Simulation vs Formal Methods
The degree of confidence in simulation depends on the test vectors selected by the designers. Formal methods are most important for implementation verification. Simulation cannot be replaced by formal verification, especially for design verification: specifications are often not given in rigorous terms and are not complete.
Analog Circuits – A World Apart
Analog circuit behavior is specified in terms of complex functions: time domain, frequency domain, distortion, noise, power spectra…
The required accuracy of models is much higher than in digital.
An emerging paradigm: Field Programmable Analog Arrays for prototyping (and more).
Circuit Simulation
Formulation of circuit equations: STA, MNA
Solution of linear equations: LU factorization, QR factorization, Krylov methods
Solution of nonlinear equations: Newton’s method
Solution of ordinary differential equations: one-step and multi-step methods
Analog Circuit Simulation
AC analysis and noise
Simulation techniques for RF: shooting-Newton, harmonic balance
SPICE History
Prof. Pederson with “a cast of thousands”
1969–70: Prof. Rohrer and a class project – CANCER: Computer Analysis of Nonlinear Circuits, Excluding Radiation
1970–72: Prof. Rohrer and Nagel develop CANCER into a truly public-domain, general-purpose circuit simulator
1972: SPICE I released as public domain – SPICE: Simulation Program with Integrated Circuit Emphasis
1975: Cohen, following Nagel’s research – SPICE 2A released as public domain
1976: SPICE 2D, new MOS models
1979: SPICE 2E, device levels (R. Newton appears)
1980: SPICE 2G, pivoting (ASV appears)
Circuit Simulation
Simulator: solve dx/dt = f(x) numerically.
Flow: input and setup → circuit → output.
Types of analysis:
– DC analysis
– DC transfer curves
– Transient analysis
– AC analysis, noise, distortion, sensitivity
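As a toy illustration of “solve dx/dt = f(x) numerically”, here is a forward-Euler transient sweep on an RC discharge. Production simulators use the implicit one-step and multi-step methods listed in the outline; the time constant and step size here are invented for the example:

```python
# Transient analysis in miniature: integrate dx/dt = f(x) with explicit
# (forward) Euler. SPICE-class simulators use implicit methods because
# circuit equations are typically stiff; this sketch just shows the idea.

def simulate(f, x0, dt, steps):
    x, traj = x0, [x0]
    for _ in range(steps):
        x = x + dt * f(x)      # explicit Euler update
        traj.append(x)
    return traj

# Example: RC discharge, dv/dt = -v/(R*C), with R*C = 1 s, v(0) = 1 V.
R_C = 1.0
traj = simulate(lambda v: -v / R_C, 1.0, 0.001, 1000)
print(traj[-1])   # close to exp(-1) ~ 0.3679
```

After 1000 steps of 1 ms the simulated voltage is near e⁻¹, the exact value at t = 1 s; shrinking dt moves it closer, at the price of more steps.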
Ideal Elements: Reference Direction
Branch voltages and currents are measured according to the associated reference directions. Also define a reference node (ground).
[Figure: a two-terminal element with branch voltage v and current i; a two-port with (v1, i1) and (v2, i2)]
Branch Constitutive Equations (BCE) – Ideal Elements

Element           Branch equation
Resistor          v = R·i
Capacitor         i = C·dv/dt
Inductor          v = L·di/dt
Voltage source    v = vs, i = ?
Current source    i = is, v = ?
VCVS              vs = AV·vc, i = ?
VCCS              is = GT·vc, v = ?
CCVS              vs = RT·ic, i = ?
CCCS              is = AI·ic, v = ?
Conservation Laws
Determined by the topology of the circuit.
Kirchhoff’s Voltage Law (KVL): every circuit node has a unique voltage with respect to the reference node. The voltage across a branch b is the difference between the voltages of the positive- and negative-referenced nodes on which it is incident.
Kirchhoff’s Current Law (KCL): the algebraic sum of all currents flowing out of (or into) any circuit node is zero.
Nodal Analysis – Example
[Figure: circuit with ground node 0 and nodes 1, 2; elements R1, R3, R4, a VCCS G2·v3, and a current source Is5]
Nodal Analysis – Resistor “Stamp”
Spice input format: Rk N+ N- Rkvalue
KCL at node N+ and KCL at node N- give the stamp: add +1/Rk at positions (N+, N+) and (N-, N-), and -1/Rk at positions (N+, N-) and (N-, N+).
What if a resistor is connected to ground? It only contributes to the diagonal.
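The stamping rule can be sketched as a small assembly routine. This is a hypothetical helper, assuming node 0 is ground (so ground rows and columns are simply skipped) and 1-based node numbering:

```python
import numpy as np

def stamp_resistor(Y, n_plus, n_minus, R):
    """Add a resistor stamp to the nodal matrix Y.

    Node 0 is ground: its row/column are not stored, so any terminal
    equal to 0 contributes only to the other terminal's diagonal.
    """
    g = 1.0 / R
    if n_plus:
        Y[n_plus - 1, n_plus - 1] += g
    if n_minus:
        Y[n_minus - 1, n_minus - 1] += g
    if n_plus and n_minus:
        Y[n_plus - 1, n_minus - 1] -= g
        Y[n_minus - 1, n_plus - 1] -= g

# Two non-ground nodes: R = 2 between nodes 1 and 2, R = 1 from node 2 to ground.
Y = np.zeros((2, 2))
stamp_resistor(Y, 1, 2, 2.0)
stamp_resistor(Y, 2, 0, 1.0)
print(Y)   # Y == [[0.5, -0.5], [-0.5, 1.5]]
```

Note how each element touches at most four entries, which is why Yn can be assembled directly from the input data and stays sparse.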
Nodal Analysis – VCCS “Stamp”
Spice input format: Gk N+ N- NC+ NC- Gkvalue
The source pushes a current Gk·vc from N+ through the source to N-, where vc is the controlling voltage from NC+ to NC-. KCL at nodes N+ and N- gives the stamp: add +Gk at (N+, NC+) and (N-, NC-), and -Gk at (N+, NC-) and (N-, NC+).
Nodal Analysis – Current Source “Stamp”
Spice input format: Ik N+ N- Ikvalue
An independent current source only affects the right-hand side: for a current Ik flowing from N+ through the source to N-, subtract Ik from the RHS entry of node N+ and add Ik to the RHS entry of node N-.
Nodal Analysis (NA)
Advantages
– Yn is often diagonally dominant and symmetric
– Equations can be assembled directly from input data
– Yn has non-zero diagonal entries
– Yn is sparse
Limitations
– The conserved quantity must be a function of the node variables
– Cannot handle floating voltage sources, VCVS, CCCS, CCVS
Modified Nodal Analysis (MNA)
How do we deal with independent voltage sources? Consider a voltage source Ekl between nodes k and l:
– Its current ikl cannot be expressed explicitly in terms of the node voltages, so it has to be added as an unknown (a new column)
– ek and el are no longer independent variables, so a constraint (ek - el = Ekl) has to be added (a new row)
MNA – Voltage Source “Stamp”
Spice input format: ESk N+ N- Ekvalue
The branch current ik is a new unknown: its column gets +1 in the KCL row of N+ and -1 in the KCL row of N-, and the new branch row enforces e(N+) - e(N-) = Ek, i.e. +1 in column N+, -1 in column N-, with Ek on the RHS.
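Putting the resistor and voltage-source stamps together, the assembled MNA system for a small invented circuit (not the example in the slides: Vs = 5 V from node 1 to ground, R1 = 1 kΩ between nodes 1 and 2, R2 = 1 kΩ from node 2 to ground) might look like:

```python
import numpy as np

# Unknowns: [v1, v2, i_vs], where i_vs is the branch current of the
# voltage source, flowing from its + terminal (node 1) through the
# source to ground.
G1, G2, Vs = 1e-3, 1e-3, 5.0

A = np.array([
    [ G1,     -G1,      1.0],  # KCL at node 1 (+1: i_vs leaves node 1)
    [-G1,  G1 + G2,     0.0],  # KCL at node 2
    [1.0,     0.0,      0.0],  # branch row: v1 - 0 = Vs
])
b = np.array([0.0, 0.0, Vs])

v1, v2, i_vs = np.linalg.solve(A, b)
print(v1, v2, i_vs)   # ~ 5.0, 2.5, -0.0025
```

The upper-left 2×2 block is exactly the nodal matrix Yn; the extra row and column are the voltage source’s ±1 incidence entries, illustrating why the MNA matrix stays “close to Yn”.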
Modified Nodal Analysis (MNA)
In general, handling independent voltage sources this way augments the nodal matrix: the unknown vector contains the node voltages plus some branch currents, and Yn is bordered by the ±1 incidence entries of the voltage-source branches, with the source values appearing on the right-hand side.
MNA – General rules
A branch current is always introduced as an additional variable for a voltage source or an inductor.
For current sources, resistors, conductors and capacitors, the branch current is introduced only if:
– Some circuit element depends on that branch current, or
– That branch current is requested as an output
MNA – CCCS and CCVS “Stamp”
MNA – An example
[Figure: circuit with ground node 0 and nodes 1–4; elements R1, R4, R8, VCCS G2·v3, current source Is5, independent voltage source ES6, and VCVS E7·v3]
MNA – An example
Modified Nodal Analysis (MNA)
Advantages
– MNA can be applied to any circuit
– Equations can be assembled directly from input data
– The MNA matrix is close to Yn
Limitations
– Sometimes there are zeros on the main diagonal, and principal minors may also be singular
Systems of linear equations
Problem to solve: M x = b
– Is there a solution?
– Is the solution unique?
Systems of linear equations Find a set of weights x so that the weighted sum of the columns of the matrix M is equal to the right hand side b
Systems of linear equations – Existence
A solution exists when b is in the span of the columns of M, i.e. when there exist weights x1, …, xN such that:
x1·M1 + … + xN·MN = b
where Mi is the i-th column of M.
Systems of linear equations – Uniqueness
A solution is unique only if the columns of M are linearly independent. Suppose there exist weights y1, …, yN, not all zero, such that M y = 0. Then, if M x = b:
M(x + y) = M x + M y = b
so x + y is another solution and the solution is not unique.
Systems of linear equations – Square matrices
Given M x = b with M square: if a solution exists for any b, then the solution for a specific b is unique. For a solution to exist for any b, the columns of M must span the space of all N-length vectors; since M has only N columns, these columns must be linearly independent. A square matrix with linearly independent columns is said to be nonsingular.
Application problems
The matrix is n × n, often symmetric and diagonally dominant, nonsingular, with real entries.
Methods for solving linear equations
Direct methods: find the exact solution in a finite number of steps.
Iterative methods: produce a sequence of approximate solutions, hopefully converging to the exact solution.
Gaussian Elimination Basics
Gaussian elimination: a method for solving M x = b.
– A “direct” method: finite termination with the exact result (ignoring roundoff)
– Produces accurate results for a broad range of matrices
– Computationally expensive
Gaussian Elimination Basics
A reminder, by 3×3 example.
Gaussian Elimination Basics – Key idea
Use eqn 1 to eliminate x1 from eqn 2 and eqn 3.
GE Basics – Key idea in the matrix
M(1,1) is the pivot; the multipliers M(2,1)/M(1,1) and M(3,1)/M(1,1) remove x1 from eqn 2 and eqn 3.
GE Basics – Key idea in the matrix
The updated M(2,2) becomes the next pivot; the multiplier M(3,2)/M(2,2) removes x2 from eqn 3.
GE Basics – Simplify the notation
Remove x1 from eqn 2 and eqn 3.
GE Basics – Simplify the notation
With the new pivot and multiplier, remove x2 from eqn 3.
GE Basics – GE yields a triangular system
(The remaining entries are altered during GE.)
GE Basics – Backward substitution
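Forward elimination followed by backward substitution can be sketched as follows. This is a straightforward dense implementation without pivoting (pivoting is introduced later); the test matrix is invented for the example:

```python
import numpy as np

# Gaussian elimination: reduce M x = b to the triangular system U x = y,
# then solve by backward substitution.

def ge_solve(M, b):
    A = M.astype(float).copy()
    y = b.astype(float).copy()
    n = len(y)
    # forward elimination: for each source row i, zero column i below it
    for i in range(n - 1):
        for j in range(i + 1, n):
            m = A[j, i] / A[i, i]        # multiplier
            A[j, i:] -= m * A[i, i:]     # update target row
            y[j] -= m * y[i]             # matching RHS update
    # backward substitution on the resulting U x = y
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (y[i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
    return x

M = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])
b = np.array([3.0, 5.0, 5.0])
print(ge_solve(M, b))   # matches np.linalg.solve(M, b)
```

The RHS update inside the elimination loop is exactly the “RHS updates” step of the slides; separating it out is what leads to the LU view below.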
GE Basics – RHS updates
GE basics: summary
(1) M x = b → U x = y (equivalent system; U: upper triangular)
(2) Notice that: L y = b (L: unit lower triangular)
(3) U x = y and L y = b give L U x = b, i.e. M x = b
Efficient way of implementing GE: LU factorization
Gaussian Elimination Basics
Solve M x = b with M = L U:
Step 1: factor M = L U
Step 2: forward elimination, solve L y = b
Step 3: backward substitution, solve U x = y
Note: changing the RHS does not require recomputing the LU factorization.
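The note about the RHS is exactly why numerical libraries expose the factorization separately from the solve. For instance, with SciPy one can factor once and then run only the forward/backward solves for each new right-hand side (the matrix here is invented for the example):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

# Reuse one LU factorization for several right-hand sides.
M = np.array([[4.0, 1.0],
              [1.0, 3.0]])
lu, piv = lu_factor(M)            # Step 1: factor once (O(n^3))

for b in (np.array([1.0, 2.0]), np.array([5.0, -1.0])):
    x = lu_solve((lu, piv), b)    # Steps 2-3: triangular solves only (O(n^2))
    print(x, np.allclose(M @ x, b))
```

This pattern shows up directly in circuit simulation: during a transient run the same (or a similar) matrix is solved for many right-hand sides, so amortizing the factorization matters.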
LU Decomposition Code
% matrix dimension
DIM = 3;
% for now, generate a matrix of random numbers
M = rand([DIM DIM]);
% initialize L and U
L = zeros([DIM DIM]);
U = zeros([DIM DIM]);
% decomposition loop
for i = 1:DIM
    % i indexes the diagonal element of the matrix M
    % L(i,i) is normalized to 1
    L(i,i) = 1;
    % compute U(i,i)
    U(i,i) = M(i,i) - L(i,:)*U(:,i);
    for j = i+1:DIM
        % use the i-th row of M, starting from column i+1,
        % to compute U(i,:)
        U(i,j) = M(i,j) - L(i,:)*U(:,j);
        % analogously, use the i-th column of M, starting from
        % row i+1, to compute L(:,i)
        L(j,i) = (M(j,i) - L(j,:)*U(:,i))/U(i,i);
    end
end
LU – Source-row approach
[Figure: at step k, the factored portion holds the rows already processed and their multipliers; the source row k updates the active set (the trailing submatrix)]
LU Decomposition – Complexity
The decomposition loop above contains three nested levels of work over DIM (the outer diagonal loop, the inner target-row loop, and the row/column inner products), so the operation count grows as DIM³.
GE Basics – Fitting the pieces together
LU factorization Basics – Picture
LU Basics – Source-row oriented approach
For i = 1 to n-1 {           “For each source row”
    For j = i+1 to n {       “For each target row below the source”
        M(j,i) = M(j,i)/M(i,i)                  (multiplier)
        For k = i+1 to n {   “For each row element beyond Pivot”
            M(j,k) = M(j,k) - M(j,i)·M(i,k)
        }
    }
}
LU Basics – Target-row oriented approach
For i = 2 to n {             “For each target row”
    For j = 1 to i-1 {       “For each source row above the target”
        M(i,j) = M(i,j)/M(j,j)                  (multiplier)
        For k = j+1 to n {   “For each row element beyond Pivot”
            M(i,k) = M(i,k) - M(i,j)·M(j,k)
        }
    }
}
LU – Source-row and Target-row approaches
[Figure: in the source-row approach, step k updates the whole active set below and to the right of k; in the target-row approach, step k applies the stored multipliers to the single target row k]
LU Basics – Computational Complexity
For i = 1 to n-1 { “For each row” }: the target-row loop produces (n-i) multipliers and the inner loop performs (n-i)² multiply-adds. Summing over the steps gives about n²/2 multipliers and n³/3 multiply-adds, i.e. O(n³) overall.
LU Basics – Limitations of the naïve approach
– Zero pivots
– Small pivots (round-off error)
Both can be addressed with partial pivoting.
LU Basics – Partial pivoting for zero pivots
At step i, the multipliers and the factored portion (L) are already in place. What if M(i,i) = 0? Then the multipliers M(j,i)/M(i,i) cannot be formed. Simple fix (partial pivoting): if M(i,i) = 0, find a row j > i with M(j,i) ≠ 0 and swap row j with row i.
LU Basics – Partial pivoting for zero pivots
Two important theorems:
1) Partial pivoting (swapping rows) always succeeds if M is nonsingular.
2) LU factorization applied to a diagonally dominant matrix will never produce a zero pivot.
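A sketch of LU with partial pivoting: at each step the row with the largest-magnitude entry in the current column is swapped into the pivot position, which handles both zero pivots and the small-pivot round-off issue. The 2×2 test matrix is invented for the example:

```python
import numpy as np

# In-place LU with partial pivoting: after the loop, the strictly lower
# part of A holds the multipliers (L) and the upper part holds U, with
# perm recording the row permutation, so that M[perm] == L @ U.

def lu_partial_pivot(M):
    A = M.astype(float).copy()
    n = A.shape[0]
    perm = np.arange(n)
    for i in range(n - 1):
        p = i + np.argmax(np.abs(A[i:, i]))   # best pivot in column i
        if p != i:                            # swap rows i and p
            A[[i, p]] = A[[p, i]]
            perm[[i, p]] = perm[[p, i]]
        for j in range(i + 1, n):
            A[j, i] /= A[i, i]                # store multiplier in place
            A[j, i+1:] -= A[j, i] * A[i, i+1:]
    L = np.tril(A, -1) + np.eye(n)
    U = np.triu(A)
    return perm, L, U

# A matrix with a zero in position (1,1): naive LU would fail here.
M = np.array([[0.0, 2.0],
              [1.0, 1.0]])
perm, L, U = lu_partial_pivot(M)
print(np.allclose(L @ U, M[perm]))   # True: P M = L U
```

Choosing the largest available pivot (rather than just any nonzero one) keeps the multipliers at magnitude ≤ 1, which bounds element growth and the associated round-off error.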
Summary
– Existence and uniqueness review
– Gaussian elimination basics: GE, LU factorization, pivoting