1
Design for Testability
Raimund Ubar, Tallinn Technical University, D&T Laboratory, Estonia
2
Boolean Differential Analysis
Course map. Field: Test, Design & Test (D&T), Design. Models: high-level system modelling; logic level. Theory: Boolean differential analysis; decision diagrams (DD, BDD). Tools: fault modelling (from defect level to high level), fault simulation, test generation, fault diagnosis, BIST, DFT.
3
Motivation of the Course
The increasing complexity of VLSI circuits has made test generation one of the most complicated and time-consuming problems in digital design. The more complex systems become, the more important the problems of test and design for testability will be, because of the very high cost of testing electronic products. Engineers involved in SoC design and technology should be made better aware of the importance of test and of the very close relationship between design and test, and should be trained in test technology to enable them to design and produce high-quality, defect-free and fault-tolerant products.
4
Goals of the Course. The main goal of the course is to give the basic knowledge needed to answer the question: how to improve testing quality at the increasing complexity of today's systems? This knowledge includes: understanding how physical defects can influence the behavior of systems, and how fault modelling can be carried out; learning the basic techniques of fault simulation, test generation and fault diagnosis; understanding the meaning of testability, and how the testability of a system can be measured and improved; learning the basic methods of making systems self-testable. The goal is also to give some hands-on experience in solving test-related problems.
5
Objective of the Course
VLSI design flow: Specification: hardware description languages (VHDL). Implementation: full custom, standard cells, gate arrays. Verification: simulation, timing analysis, formal verification. Manufacturing: CMOS. Testing: automatic test equipment (ATE), structural scan testing, Built-in Self-Test.
6
Content of the Course Lecture course – 16 h. Laboratory work – 8 h.
Introduction (1 h) General philosophy of digital test. Fault coverage. Types of tests. Test application. Design for test. Economy of test and the quality of product. Overview of mathematical methods in testing (2 h) Boolean differential algebra for test generation and fault diagnosis Binary decision diagrams and digital circuits Generalization of decision diagrams for modeling digital systems Fault modeling (2 h) Faults, errors and defects. Classification of faults. Modeling defects by Boolean differential equations. Functional faults. Fault equivalence and fault dominance. Fault masking.
7
Content of the Course Lecture course (cont.):
Test generation for VLSI (3 h) Combinational circuits, sequential circuits, finite state machines, digital systems, microprocessors, memories. Delay testing. Defect-oriented test generation. Universal test sets Fault simulation and diagnostics (3 h) Test quality analysis. Simulation algorithms: parallel, deductive, concurrent, critical path tracing. Fault diagnosis: combinational and sequential methods. Fault tables and fault dictionaries Design for testability (2 h) Testability measures. Ad hoc testability improvement. Scan-path design. Boundary Scan standard Built-in Self-Test (3 h) Pseudorandom test generators and signature analysers. BIST methods: BILBO, circular self-test, store and generate, hybrid BIST, broadcasting BIST, embedding BIST
8
References: N. Nicolici, B. M. Al-Hashimi. Power-Constrained Testing of VLSI Circuits. Kluwer Academic Publishers, 2003, 178 p. R. Rajsuman. System-on-a-Chip: Design and Test. Artech House, Boston, London, 2000, 277 p. S. Mourad, Y. Zorian. Principles of Testing Electronic Systems. J. Wiley & Sons, New York, 2000, 420 p. M. L. Bushnell, V. D. Agrawal. Essentials of Electronic Testing. Kluwer Academic Publishers, 2000, 690 p. A. L. Crouch. Design for Test. Prentice Hall, 1999, 349 p. S. Minato. Binary Decision Diagrams and Applications for VLSI CAD. Kluwer Academic Publishers, 1996, 141 p. M. Abramovici et al. Digital Systems Testing and Testable Design. Computer Science Press, 1995, 653 p. D. Pradhan. Fault-Tolerant Computer System Design. Prentice Hall, 1995, 550 p.
9
Overview Introduction Theory: Boolean differential algebra
Theory: Decision diagrams Fault modelling Test generation Fault simulation Fault diagnosis Testability measuring Design for testability Built in Self-Test
10
Overview: Introduction
Role of testing How much to test? The problem is money Complexity vs. quality Hierarchy as a compromise Testability – another compromise Quality policy History of test Course map
11
Introduction: the Role of Test
Dependability: reliability, security, safety. "There is no security on this earth, there is only opportunity." (General Douglas MacArthur) Design for testability: test, diagnosis, fault tolerance (fault diagnosis, test, BIST).
12
Introduction: How Much to Test?
Amusing test: Paradox 1: the digital model is finite, the analog model is infinite. Nevertheless, the complexity problem was introduced by the digital world. Paradox 2: if I can show that the system works, then it should not be faulty. But what does "it works" mean? A 32-bit accumulator has 2^64 input combinations, which all should work. So you should test all of them! "All life is an experiment. The more experiments you make, the better." (American wisdom) [Figure: stimuli X applied to a system, response Y; in the analog case a few samples suffice, in the digital case you cannot extrapolate]
13
Introduction: How Much to Test?
Paradox: 2^64 input patterns (!) for a 32-bit accumulator will not be enough. A short can change the circuit into a sequential one, and you will then need 2^65 input patterns. Mathematicians calculated that exhaustive testing of the Intel 8080 would need 37 (!) years; the manufacturer did it in 10 seconds. The majority of functions will never be activated during the lifetime of the system. "Time can be your best friend or your worst enemy." (Ray Charles) [Figure: a bridging fault introduces a state q, turning Y = F(x1, x2, x3) into Y = F(x1, x2, x3, q)]
14
Introduction: the Problem is Money?
Cost of quality: "How to succeed? Try hard! How to fail? Try too hard!" (American wisdom) [Figure: cost of testing and cost of the fault as functions of test coverage; the optimum test/quality point lies between 0% and 100%] Conclusion: "The problem of testing can only be contained, not solved." (T. Williams)
15
Introduction: Hierarchy
Paradox: to generate a test for a block in a system, a computer needed 2 days and 2 nights; an engineer did it by hand in 15 minutes. So, why computers? "The best place to start is with a good title. Then build a song around it." (Wisdom of country music) [Figure: a 16-bit counter in a sea of gates needs a test sequence of 2^16 bits]
16
Introduction: Complexity vs. Quality
Problems: Traditional low-level test generation and fault simulation methods and tools for digital systems have lost their importance for complexity reasons. The traditional stuck-at fault (SAF) model does not guarantee quality for deep-submicron technologies. How to improve test quality at the increasing complexity of today's systems? Two main trends: defect-oriented test and high-level modelling. Both trends are caused by the increasing complexity of systems based on deep-submicron technologies.
17
Introduction: A Compromise
The complexity problem in testing digital systems is handled by raising the abstraction level from the gate level to the register-transfer level (RTL), instruction set architecture (ISA) or behavioral levels. But this moves us even further away from the real life of defects (!) To handle defects in circuits implemented in deep-submicron technologies, new defect-oriented fault models and defect-oriented test methods should be used. But this increases the complexity even more (!) A promising compromise and solution is to combine the hierarchical approach with defect orientation.
18
Introduction: Testability
Amusing testability: Theorem: you can test an arbitrary digital system with only 3 test patterns if you design it appropriately. [Figure: proof sketch on a NAND-based circuit with the patterns 011, 001, 101] Solution: system = FSM plus combinational circuitry (CC) of NAND gates, accessed via a scan path.
19
Introduction: Quality Policy
Quality policy: the yield Y (P - probability of a defect, n - number of defects) is the probability of producing a good product; the defect level DL (Pa) is the probability of accepting a bad product. Testing and design for testability implement the quality policy.
20
Introduction: Defect Level
Defect level as a function of yield and fault coverage: DL = 1 - Y^(1-T), where Y is the yield and T the fault coverage. [Figure/table: DL for yields Y = 90%, 50%, 10% at fault coverages T between 5% and 100%] Paradox: even a high fault coverage T leaves a noticeable defect level when the yield is low.
21
Introduction: History of Test
Historical test: 1960s: racks; functional testing; belle epoque for optimization... 1970s: boards; structural testing; complexities, automata, ... 1980s: VLSI; design for testability (DFT); interactivity vs. testability? hierarchy: top-down, bottom-up, yo-yo... 1990s: VLSI; self-test, fault tolerance; testability, Boundary Scan standard. 2000s: Systems on Chip (SoC); Built-in Self-Test (BIST). "The years teach much which the days never know." (Ralph Waldo Emerson)
22
Introduction: Test Tools
Test tools and the test experiment: test generation produces the test; fault simulation of the system model produces the fault dictionary; the test experiment applies the test to the system (or its BIST) and yields the test result; comparing the result with the fault dictionary gives the go/no-go decision and the located defect.
23
Introduction: Course Map
Course map. Field: Test, Design & Test (D&T), Design. Models: high-level system modelling; logic level. Theory: Boolean differential analysis; decision diagrams (DD, BDD). Tools: fault modelling (from defect level to high level), fault simulation, test generation, fault diagnosis, BIST, DFT.
24
Overview Theory: Boolean differential algebra Introduction
Theory: Decision diagrams Fault modelling Test generation Fault simulation Fault diagnosis Testability measuring Design for testability Built in Self-Test
25
Overview: Boolean Differential Algebra
Boolean derivatives Boolean vector derivatives Multiple Boolean derivatives Boolean derivatives of complex functions Overview of applications of Boolean derivatives Boolean derivatives and sequential circuits Boolean differentials and fault diagnosis Universal test equation
26
Boolean Derivatives. Traditional algebra deals with speed (dy/dx); Boolean algebra deals with change. y = F(X), with y, F(X) in {0,1}. If the Boolean derivative dF/dxi = 1, F(X) will change when xi changes; if dF/dxi = 0, F(X) will not change when xi changes.
27
Boolean Derivatives. Boolean function: y = F(x) = F(x1, x2, ..., xn). Boolean partial derivative: dy/dxi = F(x1, ..., xi = 0, ..., xn) XOR F(x1, ..., xi = 1, ..., xn).
28
Useful properties of Boolean derivatives: dy/dxi = 0 if F(x) is independent of xi; dy/dxi = 1 if F(x) always depends on xi. Test generation algorithm for a fault at xi: solve the differential equation dy/dxi = 1.
29
Useful properties of Boolean derivatives (continued): in particular, dy/dxi = 0 if F(x) is independent of xi. These properties allow simplifying the Boolean differential equation to be solved when generating a test pattern for a fault at xi; a small sketch follows.
30
Boolean Derivatives. Given a function y = F(x), transformations of the Boolean derivative can be applied before solving the equation.
31
Boolean Derivatives: calculation of the Boolean derivative (worked example on the given function).
32
Boolean Vector Derivatives
Boolean vector derivatives cover the case where multiple faults take place, i.e. several variables change simultaneously. Example:
33
Boolean Vector Derivatives
Interpretation of the vector derivative components: a component with value 1 may correspond to a single activated path or to two simultaneously activated paths. [Figure]
34
Boolean Vector Derivatives
Calculation of the vector derivatives by Karnaugh maps. [Figure: Karnaugh maps over x1, x2, x3, x4]
35
Multiple Boolean Derivatives
Multiple Boolean derivatives distinguish the no-masking and masking cases: under the test for x3 shown, a fault in x2 cannot mask the fault in x3. [Figure: gate-level circuit with the faults marked, without and with fault masking]
36
Derivatives for Complex Functions
Boolean derivative for a complex function: Example: Additional condition:
37
Overview about Applications of BDs
Fault simulation: calculate the value of the Boolean derivative. Test generation: single faults: find a solution of dy/dxi = 1; multiple faults: find a solution of the corresponding vector derivative equation; decompositional approach for complex functions; fault masking analysis. Defect modelling: finding logic constraints for defects.
38
Bool. Derivatives for Sequential Circuits
Boolean derivatives for the state transfer and output functions of an FSM: y(t) = lambda(x(t), q(t)), q(t+1) = delta(x(t), q(t)).
39
Bool. Derivatives for Sequential Circuits
Boolean derivatives for the JK flip-flop (inputs J, K, clock T, output Q): either the erroneous signal propagates from the inputs to the output, or the erroneous signal was stored in the previous clock cycle.
40
Boolean Differentials
dx - fault variable, dx in {0,1}; dx = 1 if the value of x has changed because of a fault. Partial Boolean differential: dy = (dy/dxi) AND dxi. Full Boolean differential: dy = F(x1, ..., xn) XOR F(x1 XOR dx1, ..., xn XOR dxn).
41
Boolean Differentials and Fault Diagnosis
Correct output signal: x1 = 0, x2 = 1, x3 = 1, dy = 0. Erroneous output signal: x1 = 0, x2 = 0, x3 = 0, dy = 1.
42
Boolean Differentials and Fault Diagnosis
Rule: if the differential evaluates to 0, the line x3 works correctly; otherwise there is a fault (here: the fault is missing).
43
Boolean Differentials and Fault Diagnosis
Fault diagnosis and test generation as direct and reverse mathematical tasks: dy = F(x1, ..., xn) XOR F(x1 XOR dx1, ..., xn XOR dxn) = F(X, dX). Direct task (test generation): dX and dy = 1 given, X = ? Reverse task (fault diagnosis): X and dy given, dX = ?
44
Test Tasks and Test Tools
Test tools and the test experiment: test generation produces the test; fault simulation of the system model produces the fault dictionary; the test experiment applies the test to the system and yields the test result; comparing the result with the fault dictionary gives the go/no-go decision and the located defect.
45
Universal Test Equation
Fault diagnosis and test generation are direct and reverse mathematical tasks. Model of the test experiment for a given system with possible faults: dy = F(x1, ..., xn) XOR F(x1 XOR dx1, ..., xn XOR dxn) = F(X, dX), where X is the test vector, dX the fault vector and dy the result of the test experiment. Direct task (test generation): dX and dy = 1 given, X = ? Reverse task (fault diagnosis): X and dy given, dX = ? Fault simulation is a special case of fault diagnosis. A small sketch of this model follows.
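A minimal Python sketch of the test-experiment model above (illustrative names, not from the slides); it evaluates dy = F(X) XOR F(X XOR dX) and enumerates solutions of the direct task:

```python
from itertools import product

def dy(F, X, dX):
    """Result of the test experiment: 1 iff fault vector dX is detected by X."""
    X_faulty = tuple(x ^ d for x, d in zip(X, dX))
    return F(*X) ^ F(*X_faulty)

F = lambda x1, x2, x3: (x1 & x2) | x3
dX = (1, 0, 0)                      # the fault flips x1
# Direct task (test generation): find all X with dy = 1 for the given dX.
print([X for X in product((0, 1), repeat=3) if dy(F, X, dX)])
# Reverse task (fault diagnosis) searches dX for given X and observed dy.
```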
46
Basics of Theory for Test and Diagnostics
Two basic tasks: 1. Which test patterns are needed to detect a fault (or all faults)? 2. Which faults are detected by a given test (or by all tests)? At the gate level the tools are Boolean differential algebra and BDDs; at the system level (ALU, multiplier), decision diagrams (DDs).
47
Overview Theory: Decision diagrams Introduction
Theory: Boolean differential algebra Theory: Decision diagrams Fault modelling Test generation Fault simulation Fault diagnosis Testability measuring Design for testability Built in Self-Test
48
Overview: Decision Diagrams
Binary Decision Diagrams (BDDs) Structurally Synthesized BDDs (SSBDDs) High Level Decision Diagrams (DD) DDs for Finite State Machines DDs for Digital Systems Vector DDs DDs for microprocessors DD synthesis from behavioral descriptions Example of DD synthesis from VHDL description
49
Binary Decision Diagrams
[Figure: functional BDD over variables x1, ..., x7 with terminal nodes 0 and 1] Simulation and the Boolean derivative are computed directly on the diagram.
50
Binary Decision Diagrams
Functional synthesis of BDDs. Shannon's expansion theorem: F = xk AND F(xk = 1) OR NOT(xk) AND F(xk = 0). Using the theorem recursively for BDD synthesis expands the function variable by variable. [Figure: BDD for y built by expansion on x1, x2, x3, x4] A small sketch of BDD evaluation follows.
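A minimal Python sketch of a BDD and its simulation (a hypothetical tuple-based encoding, not the course's data structure): evaluation traces a single root-to-terminal path selected by the input values:

```python
def bdd_eval(node, assignment):
    """Follow the path selected by the assignment (dict var -> 0/1)."""
    while node not in (0, 1):           # nonterminal nodes: (var, low, high)
        var, low, high = node
        node = high if assignment[var] else low
    return node

# Shannon expansion of y = x1 & x2 | ~x1 & x3, rooted at x1:
y = ('x1', ('x3', 0, 1), ('x2', 0, 1))
print(bdd_eval(y, {'x1': 1, 'x2': 0, 'x3': 1}))   # -> 0
```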
51
Binary Decision Diagrams
Elementary BDDs: [Figure: BDDs for the AND gate y = x1 x2 x3, the OR gate y = x1 OR x2 OR x3, the NOR gate, and an adder]
52
Binary Decision Diagrams
Elementary BDDs for flip-flops: D flip-flop, JK flip-flop and RS flip-flop; U denotes the unknown value. [Figure: BDDs over inputs D, C, J, K, R, S and states q, q']
53
Building a SSBDD for a Circuit
Structurally synthesized BDDs (SSBDDs) are built for a given circuit from a DD library by superposition of DDs, which corresponds to the superposition of Boolean functions. [Figure: given circuit, DD library, and the resulting SSBDD]
54
Representing by SSBDD a Circuit
Structurally synthesized BDD for a subcircuit (macro): to each node of the SSBDD a signal path in the circuit corresponds. [Figure: macro with lines 1, ..., 7, fan-out branches 71, 72, 73 and internal lines a, ..., e; the output function y is obtained by superposition of the line functions]
55
High-Level Decision Diagrams
Superposition of high-level DDs yields a single DD for a subcircuit. Instead of simulating all the components in the circuit, only a single path in the DD has to be traced. [Figure: RTL subcircuit with registers R1, R2, multiplexers, adder and multiplier, and its DD]
56
High-Level DDs for Finite State Machines
State Transition Diagram: DD:
57
High-Level DDs for Digital Systems
[Figure: digital system as control path (FSM with inputs x, outputs y, state q) and data path (registers, multiplexers, combinational blocks)]
58
High-Level DDs for Digital Systems
[Figure: behavioral flow chart with states s = 1, ..., 5 and assignments such as A = B + C, A = A + 1, B = B + C, C = A + B, A = A + B + C]
59
High-Level Vector Decision Diagrams
A system of 4 DDs (for A, B, C, q) can be joined into a single vector DD for M = A.B.C.q. [Figure: the four DDs and the resulting vector decision diagram]
60
Decision Diagrams for Microprocessors
High-level DDs for a microprocessor (example). Instruction set: I1: MVI A,D: A <- IN; I2: MOV R,A: R <- A; I3: MOV M,R: OUT <- R; I4: MOV M,A: OUT <- A; I5: MOV R,M: R <- IN; I6: MOV A,M: A <- IN; I7: ADD R: A <- A + R; I8: ORA R: A <- A OR R; I9: ANA R: A <- A AND R; I10: CMA: A <- NOT A. [Figure: DD model of the microprocessor with nonterminal nodes labelled by the instruction variable I and terminal nodes A, R, IN, OUT]
61
Decision Diagrams for Microprocessors
High-level DD-based structure of the microprocessor (example). [Figure: the same DD model as on the previous slide]
62
Vector DDs for Microprocessors
DDs for representing microprocessor output behaviour
63
DD Synthesis from Behavioral Descriptions
Procedural description of a microprocessor. Memory state: M; processor state: PC, AC, AX; internal state: TMP; instruction format: IR = OP.A.F0.F1.F2. Execution process: BEGIN EXEC: DECODE OP (0: AC <- AC + M[A]; 1: M[A] <- AC, AC <- 0; 2: M[A] <- M[A] + 1, IF M[A] = 0 THEN PC <- PC + 1; 3: PC <- A; 7: IF F0 THEN AC <- AC + 1; IF F1 THEN IF AC = 0 THEN PC <- PC + 1; IF F2 THEN (TMP <- AC, AC <- AX, AX <- TMP)) END
64
DD Synthesis from Behavioral Descriptions
[Figure: symbolic execution tree starting from Start, branching on OP = 0, 1, 2, 3, 7 and on F0, F1, F2, AC = 0, M[A] = 0; the leaves carry the corresponding assignments such as AC = AC + M[A], PC = A, AC = AX, AX = AC, PC = PC + 1]
65
DD Synthesis from Behavioral Descriptions
Generation of nonprocedural descriptions via symbolic execution Terminal contexts
66
DD Synthesis from Behavioral Descriptions
Decision diagram for AC. [Figure: DD with nonterminal nodes OP, F0, F2 and terminal nodes AC + M[A], AC, AC + 1, AX, #0]
67
DD Synthesis from VHDL Descriptions
VHDL description of 4 processes which represent a simple control unit
68
DD Synthesis from VHDL Descriptions
DDs for state, enable_in and nstate, obtained by superposition of DDs. [Figure: DDs with nodes rst, clk, state, rb0, enable]
69
DD Synthesis from VHDL Descriptions
DDs for the total VHDL model. [Figure: DDs for state, enable, outreg, fin, reg_cp, reg with terminal constants #0001, #0010, #0011, #0100, #1100]
70
DD Synthesis from VHDL Descriptions
Simulation and fault tracing on the DDs. [Figure: the same DD model with the simulated vector highlighted]
71
Overview Fault modelling Introduction
Theory: Boolean differential algebra Theory: Decision diagrams Fault modelling Test generation Fault simulation Fault diagnosis Testability measuring Design for testability Built in Self-Test
72
Overview: Fault Modelling
Faults, errors and defects Stuck-at-faults (SAF) Fault equivalence and fault dominance Redundant faults Transistor level physical defects Mapping transistor defects to logic level Fault modelling by Boolean differential equations Functional fault modelling Faults and test generation hierarchy High-level fault modelling Fault modelling with DDs
73
Fault and defect modeling
Defects, errors and faults: an instance of incorrect operation of the system being tested is referred to as an error. The causes of observed errors may be design errors or physical faults (defects). Physical faults do not allow a direct mathematical treatment of testing and diagnosis; the solution is to deal with fault models. [Figure: a defect in a component causes a fault, which causes an error at the system level]
74
Fault and defect modeling
Why logic fault models? The complexity of simulation reduces (many physical faults may be modeled by the same logic fault); one logic fault model is applicable to many technologies; logic fault tests may be used for physical faults whose effect is not completely understood; they give the possibility to move from the lower physical level to the higher logic level. Stuck-at fault model: two different defects (a broken line and a bridge to ground) are both covered by the single model stuck-at-0. [Figure]
75
Fault and defect modeling
Fault models are explicit or implicit: explicit faults may be enumerated; implicit faults are given by some characterizing properties. Fault models are structural or functional: structural faults are related to structural models and modify interconnections between components; functional faults are related to functional models and modify the functions of components. Examples for the circuit shown: structural faults: line a is broken, short between x2 and x3; functional fault: the component realizes another function instead of the specified one. [Figure]
76
Fault and defect modeling
Structural fault models assume that components are fault-free and only their interconnections are affected: a short is formed by connecting points not intended to be connected; an open results from the breaking of a connection. Structural fault models are: a line stuck at a fixed logic value v (v in {0,1}), examples: a short between ground or power and a signal line; an open on a unidirectional signal line; any internal fault in the component driving its output such that it keeps a constant value; bridging faults (shorts between signal lines) of two types: AND and OR bridging faults (depending on the technology).
77
Gate-level faults: breaks map to stuck-at faults. A break at point 1 gives stuck branches 1, 2, 3 (or a stuck stem); a break at point 2 gives stuck branches 2, 3; a break at point 3 gives stuck branch 3. [Figure: gate with fan-out points 1, 2, 3; broken lines modelled as stuck-at-0 or stuck-at-1]
78
Stuck-at Fault Properties
Fault equivalence and fault dominance (e.g. for a NAND gate with inputs A, B, C and output D): equivalence class: A/0, B/0, C/0, D/1; dominance classes: (A/1, D/0), (B/1, D/0), (C/1, D/0). Fault collapsing keeps one representative of each equivalence class and removes dominating faults. [Figure: fault collapsing on a small gate network] A small sketch of checking equivalence follows.
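A minimal Python check of these relations on a 2-input AND gate (an illustrative reduction of the slide's example, with faults written as (line, stuck value)): two faults are equivalent iff the faulty gates behave identically on all input patterns:

```python
from itertools import product

def response(fault, a, b):
    """Output of the AND gate c = a & b with the given stuck-at fault."""
    line, v = fault
    if line == 'a': a = v
    if line == 'b': b = v
    c = a & b
    return v if line == 'c' else c

def equivalent(f, g):
    return all(response(f, a, b) == response(g, a, b)
               for a, b in product((0, 1), repeat=2))

print(equivalent(('a', 0), ('c', 0)))   # True: a/0 and c/0 collapse together
print(equivalent(('a', 1), ('c', 1)))   # False: a/1 and c/1 are related only
                                        # by dominance, not equivalence
```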
79
Fault Redundancy Redundant gates (bad design): x1
Redundant gates (bad design) and internal signal dependencies create untestable faults: an impossible input pattern makes the faults at x2 not testable. [Figure: circuit with redundant gates and its optimized function]
80
Fault redundancy in hazard control and error control circuitry: a redundant AND gate added to suppress hazards makes the stuck-at-0 fault at its output not testable; in the error control circuit, E = 1 if the decoder is fault-free, so the fault E/0 is not testable. [Figure: hazard control circuit and decoder with error signal E]
81
Transistor Level Faults
Transistor-level physical defects: stuck-on and stuck-off (change the function), stuck-open (introduces a new state), bridging and shorts (change the function), stuck-at-0 and stuck-at-1. The SAF model is not able to cover all transistor-level defects. How should transistor defects be modeled?
82
Transistor Level Stuck-on Faults
Stuck-on fault in a CMOS NOR gate: for the input pattern "10" a conducting path forms from VDD to VSS, and the output voltage is determined by the divider RN/RP; the fault is detectable by monitoring VY or the quiescent current IDDQ. [Figure: NOR gate transistor schematic, fault-free and with the stuck-on transistor]
83
Transistor Level Stuck-off Faults
Stuck-off (stuck-open) fault in a CMOS NOR gate: for the input pattern "10" there is no conducting path from VDD to VSS, so the output keeps its previous value. A test sequence is needed: 00, 10. [Figure: NOR gate transistor schematic, fault-free and with the stuck-off transistor]
84
Bridging faults, wired-AND/wired-OR model: under W-AND both shorted lines x1' and x2' take the value x1 AND x2; under W-OR both take x1 OR x2. [Figure/table: fault-free, W-AND and W-OR values of x1', x2' for all combinations of x1, x2]
85
Bridging faults, dominant bridging model: under x1 dom x2 the line x1 drives both shorted lines; under x2 dom x1 the line x2 drives both. [Figure/table: fault-free, x1 dom x2 and x2 dom x1 values of x1', x2'] A small sketch of both models follows.
86
Delay Faults. Two models: gate delay and path delay. Test pattern pairs: the first pattern initializes the circuit, the second sensitizes the fault. Robust delay test: whenever L is faulty and the test pair is applied, the fault is detected independently of the delays along the path. A delay fault can be activated but still not detected by a non-robust pair. [Figure: waveforms of a delay fault activated but not detected, and of a robust delay test]
87
Mapping Transistor Faults to Logic Level
A transistor fault causes a change in the logic function that is not representable by the SAF model. Defect variable d: d = 0 - the defect is missing, d = 1 - the defect is present. Generic function with defect: y* = (NOT d AND F) OR (d AND Fd), where F is the fault-free and Fd the faulty function. The physical defect is mapped onto the logic level by solving the equation dy*/dd = 1. [Figure: CMOS complex gate with a short between internal lines]
88
Mapping Transistor Faults to Logic Level
Function, faulty function and the generic function with defect: the test is calculated by the Boolean derivative dy*/dd. [Figure: the same CMOS complex gate with the short]
89
Why Boolean Derivatives?
Given: the distinguishing function between the fault-free and faulty behaviour. BD-based approach: using the properties of Boolean derivatives, the procedure of solving the equation becomes easier.
90
Functional Fault vs. Stuck-at Fault
A full 100% stuck-at fault test is not able to detect the short: none of the patterns of the full SAF test coincides with a pattern able to detect the given transistor defect (the functional fault). [Figure/table: SAF test patterns for x1, ..., x5 vs. the patterns required by the defect]
91
Defect coverage for 100% Stuck-at Test
Results: the difference between stuck-at fault and physical defect coverage decreases when the complexity of the circuit increases (C2 is more complex than C1); the difference is higher when defect probabilities are taken into account, compared to the traditional method where all faults are assumed to have the same probability.
92
Generalization: Functional Fault Model
Constraints calculation: fault-free function F, faulty function Fd; d = 1 if the defect is present. Component with defect: y* = (NOT d AND F) OR (d AND Fd). The constraints Wd are the logical conditions under which the defect becomes visible at the component output. Fault model: (dy, Wd) or (dy, {Wkd}). A small sketch follows.
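A minimal Python sketch of extracting the constraint Wd by enumeration (F and Fd are illustrative stand-ins for a component and its defective version): Wd is exactly the solution set of dy*/dd = F XOR Fd = 1:

```python
from itertools import product

F  = lambda x1, x2: x1 & x2      # fault-free component function
Fd = lambda x1, x2: x1 | x2      # hypothetical function under defect d

# dy*/dd = F(x) XOR Fd(x); its solutions are the constraints W_d.
W_d = [x for x in product((0, 1), repeat=2) if F(*x) ^ Fd(*x)]
print(W_d)   # [(0, 1), (1, 0)]: only these patterns make d observable
```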
93
Functional Fault Model Examples
Constraints examples for a component with defect: the functional fault model pairs the faulty behaviour with its logical constraints, (dy, Wd) or (dy, {Wkd}). [Figure: component F(x1, ..., xn) with defect and example constraints]
94
Functional Fault Model for Stuck-ON
Functional fault model for stuck-on (NOR gate): the conducting path for input "10" makes the output voltage analog; the condition of potentially detecting the fault is the constraint Z: VY/IDDQ measured under the pattern "10". [Figure: NOR gate transistor schematic]
95
Functional Fault Model for Stuck-Open
Functional fault model for stuck-open (NOR gate): no conducting path from VDD to VSS for "10", so the output keeps its previous value; a test sequence 00, 10 is needed (the first pattern sets the output, the second exposes the memory effect). [Figure: NOR gate transistor schematic and the two-pattern sequence] A small sketch follows.
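A minimal Python sketch of why the two-pattern sequence works (a hypothetical behavioural model, assuming the broken transistor leaves the output floating for input 10 so it keeps its previous value):

```python
def nor_stuck_open(seq):
    """NOR gate whose x1 pull-down transistor is stuck-open: input 10
    then drives neither network, leaving the output floating."""
    y_prev, out = None, []
    for x1, x2 in seq:
        if (x1, x2) == (1, 0):
            y = y_prev               # floating output keeps its old value
        else:
            y = int(not (x1 or x2))  # normal NOR behaviour
        out.append(y)
        y_prev = y
    return out

print(nor_stuck_open([(0, 0), (1, 0)]))   # [1, 1] instead of [1, 0]: detected
```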
96
Functional Fault Model
Example: bridging fault between leads xk and xl (wired-AND model): xk* = f(xk, xl, d). The condition means that in order to detect the short between leads xk and xl on the lead xk, we have to assign the value 1 to xk and the value 0 to xl. [Figure: wired-AND bridge with defect variable d]
97
Functional Fault Model
Example: a bridging fault causes a feedback loop: a short between leads xk and xl changes the combinational circuit into a sequential one, so sequential constraints are needed. [Figure: circuit, equivalent faulty circuit with the feedback loop, and the constraint table over time t]
98
First step to quality: how to improve test quality at the increasing complexity of systems? First step of the solution: the functional fault model was introduced as a means for mapping physical defects from the transistor or layout level to the logic level. [Figure: low-level component with constraints WFk, WSk mapped to the high-level system, e.g. a bridging fault]
99
Fault Table: Mapping Defects to Faults
100
Probabilistic Defect Analysis
Probabilities of physical defects Effectiveness of input patterns in detecting real physical defects
101
Hierarchical Defect-Oriented Test Analysis
102
Faults and Test Generation Hierarchy
Functional and structural approaches across the test generation hierarchy: a test WS generated at a lower level is interpreted as a functional fault WF at the next higher level (gate in a circuit, module in a network of modules, component in a system). Interpretation of WFk: as a test on the lower level, and as a functional fault on the higher level. [Figure: hierarchy of system, network of modules, network of gates]
103
Register Level Fault Models
RTL statement: K: (IF T, C) RD <- F(RS1, RS2, ..., RSm), -> N. Components (variables) of the statement: K - label; T - timing condition; C - logical condition; RD - destination register; RS - source register; F - operation (microoperation); -> - data transfer; N - jump to the next statement. RT-level faults: K -> K' label faults; T -> T' timing faults; C -> C' logical condition faults; RD -> RD' register decoding faults; RS -> RS' data storage faults; F -> F' operation decoding faults; -> data transfer faults; N control faults; (F) -> (F)' data manipulation faults.
104
Fault Models for High-Level Components
Decoder: instead of the correct line, an incorrect one is activated; in addition to the correct line, an additional line is activated; no lines are activated. Multiplexer (n inputs, log2 n control lines): stuck-at-0 (1) on inputs; another input (instead of, or in addition); a value followed by its complement; a value followed by its complement on a line whose address differs in 1 bit. Memory fault models: one or more cells stuck-at-0 (1); two or more cells coupled.
105
Dedicated functional fault model for the multiplexer: stuck-at-0 (1) on inputs; another input (instead of, or in addition); a value followed by its complement; a value followed by its complement on a line whose address differs in one bit. [Table: functional fault model vs. test description]
106
Combinational Fault Models
Exhaustive combinational fault model: - exhaustive test patterns - pseudoexhaustive test patterns - exhaustive output line oriented test patterns - exhaustive module
107
Fault modeling on SSBDDs
The nodes of an SSBDD represent signal paths through gates; the two possible faults of a DD node represent all the stuck-at faults along the corresponding path. [Figure: macro with lines 1, ..., 7 and fan-out branches 71, 72, 73, and its SSBDD]
108
Fault Modeling on High Level DDs
High-level DDs (RT level): terminal nodes represent data storage, data transfer and data manipulation faults; nonterminal nodes represent label, timing condition, logical condition, register decoding, operation decoding and control faults.
109
Fault modeling on DDs. The fault model for DDs is defined as faulty behaviour of a node m labelled with a variable x(m): the output edge is always activated to x(m) = i; the output edge for x(m) = i is broken; instead of the given output edge for x(m) = i, another edge or a set of edges is activated. This fault model leads to an exhaustive test of the node.
110
Overview Test generation Introduction
Theory: Boolean differential algebra Theory: Decision diagrams Fault modelling Test generation Fault simulation Fault diagnosis Testability measuring Design for testability Built in Self-Test
111
Overview: Test Generation
Universal test sets exhaustive and pseudoexhaustive tests Structural gate level test generation methods Path activation principle Test generation algorithms: D-alg, Podem, Fan ENF-based test generation Multiple fault testing Defect-oriented test generation Test generation for sequential circuits Hierarchical test generation DD-based test generation SSBDDs and macro-level test generation RT-level test generation Microprocessor behavior test generation
112
Functional testing: universal test sets
1. Exhaustive test (trivial test). 2. Pseudo-exhaustive test. Properties of exhaustive tests. Advantages (concerning the stuck-at fault model): test pattern generation is not needed; fault simulation is not needed; no need for a fault model; the redundancy problem is eliminated; single and multiple stuck-at fault coverage is 100%; easily generated on-line by hardware. Shortcomings: long test length (2^n patterns are needed, where n is the number of inputs); the CMOS stuck-open fault problem.
113
Functional testing: universal test sets
Pseudo-exhaustive test sets: output function verification (maximal or partial parallel testability) and segment function verification. Example for a 16-input circuit partitioned into four 4-input segments: exhaustive test 2^16 patterns >> pseudo-exhaustive sequential 4 x 2^4 = 64 > pseudo-exhaustive parallel 2^4 = 16. [Figure: segment-wise testing of the circuit]
114
Functional testing: universal test sets
Output function verification (maximum parallelism): exhaustive test generation for an n-bit adder. Good news: the bit number n is arbitrary, and the test length is always 8 (!). Bad news: the method is correct only for the ripple-carry adder. [Figure: 0-bit, 1-bit, 2-bit, 3-bit testing] A verification sketch follows.
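A minimal Python check of the 8-pattern claim (the pattern set below is one plausible construction, not necessarily the slides' own): each full-adder slice of a ripple-carry adder must see all 2^3 local combinations (a_i, b_i, carry-in):

```python
def slice_inputs(a_bits, b_bits, c0):
    """Local (a, b, carry_in) seen by each slice of a ripple-carry adder."""
    seen, c = [], c0
    for a, b in zip(a_bits, b_bits):
        seen.append((a, b, c))
        c = (a & b) | (c & (a ^ b))          # ripple carry
    return seen

n = 4                                        # bits listed LSB first
patterns = [([0]*n, [0]*n, 0), ([1]*n, [1]*n, 1),
            ([1]*n, [0]*n, 0), ([0]*n, [1]*n, 1),
            ([1]*n, [0]*n, 1), ([0]*n, [1]*n, 0),
            ([1, 0]*(n//2), [1, 0]*(n//2), 0),
            ([0, 1]*(n//2), [0, 1]*(n//2), 1)]
per_slice = [set() for _ in range(n)]
for a, b, c0 in patterns:
    for i, combo in enumerate(slice_inputs(a, b, c0)):
        per_slice[i].add(combo)
print(all(len(s) == 8 for s in per_slice))   # True: every slice sees all 8
```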
115
Testing carry-lookahead adder
General expressions: n-bit carry-lookahead adder:
116
Testing carry-lookahead adder
For a 3-bit carry-lookahead adder, at least 9 test patterns are needed just for this part of the circuit (i.e. pseudoexhaustive testing will not work). An increase in speed implies worse testability. [Figure: testing the carry-lookahead logic for 0 and for 1]
117
Functional testing: universal test sets
Output function verification (partial parallelism): functions F1(x1, x2), F2(x1, x3), F3(x2, x3), F4(x2, x4), F5(x1, x4), F6(x3, x4) over inputs x1, ..., x4. Test lengths: exhaustive testing 16; pseudo-exhaustive, fully parallel 4; pseudo-exhaustive, partially parallel 6. [Figure: the three test sets]
118
Structural Test Generation
Structural gate-level testing, fault sensitization: a fault a/0 is sensitized by the value 1 on line a. A test t = 1101 is simulated both without and with the fault a/0; the fault is detected since the output values in the two cases differ. A path from the faulty line a is sensitized (bold lines) to the primary output.
119
Structural Test Generation
Structural gate-level testing, path activation: fault sensitization: x7,1 = D; fault propagation: x2 = 1, x1 = 1, b = 1, c = 1; line justification: x7 = D = 0: x3 = 1, x4 = 1; b = 1 and c = 1 are already justified. Symbolic fault modeling: D denotes the signal that takes one value if the fault is missing and the opposite value if the fault is present (D' is its complement). [Figure: macro with the activated path]
120
Structural Test Generation
Multiple path fault propagation: when single path activation is not possible, several paths are activated simultaneously (here three). [Figure: circuit with inputs x1, ..., x4 and the three activated paths carrying D]
121
Structural Test Generation Algorithms
D-algorithm (Roth, 1966): select a fault site and assign D; propagate D along all available paths using the D-cubes of gates; backtrack to find the needed input values. [Figure: fault site and propagation D-cubes for the AND gate]
122
Structural Test Generation Algorithms
D-algorithm notions for C = NAND(A, B): the singular cover is the compressed truth table of the gate (a = 0 gives c = 1; b = 0 gives c = 1; a = b = 1 gives c = 0); primitive D-cubes of a fault activate the fault (e.g. for c stuck-at-0); propagation D-cubes carry D through the gate: (1, D, D'), (D, 1, D'), (D, D, D'). Intersection of cubes A = (a1, ..., an) and B = (b1, ..., bn), with ai, bi in {0, 1, x, D, D'}: x intersected with ai gives ai; if ai and bi are both not x, their intersection is ai if bi = ai, otherwise empty; A and B do not intersect if any position intersects to empty. D-drive propagates D from gate to gate; the consistency operation justifies the required line values by intersecting with the singular covers. [Figure: example circuit with gates 1-6] A small sketch of the composite value algebra follows.
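A minimal Python sketch of Roth's composite value algebra (an illustrative encoding: each signal is a pair (good, faulty), so D = (1,0) and D' = (0,1)); applying a gate to both machines at once shows when D propagates:

```python
D, Dn = (1, 0), (0, 1)          # composite values D and D'
ZERO, ONE = (0, 0), (1, 1)

def AND(a, b):
    """Gate applied simultaneously to the good and the faulty machine."""
    return (a[0] & b[0], a[1] & b[1])

print(AND(D, ONE))    # (1, 0) = D: the fault effect propagates
print(AND(D, ZERO))   # (0, 0) = 0: a controlling 0 blocks propagation
```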
123
Structural Test Generation Algorithms
PODEM algorithm (Goel, 1981): 1. Controllability measures are used during backtracking: for a decision gate the "easiest" input is chosen first; for an imply gate, the "most difficult" input. 2. Backtracking always ends only at primary inputs. 3. D-propagation is guided by observability measures.
124
Structural Test Generation Algorithms
FAN algorithm (Fujiwara, 1983): 1. Special handling of fan-outs (by using counters): in PODEM, backtracking continues over fan-outs up to the inputs; in FAN, backtracking breaks off at fan-outs and the value is chosen on the basis of the counter values. 2. Heuristics are introduced into D-propagation: PODEM moves step by step (without predicting problems); FAN finds bottlenecks and makes the appropriate decisions at the beginning, before starting D-propagation. [Figure: fan-out node with controllability values]
125
Structural Test Generation Algorithms
Test generation by using disjunctive normal forms
126
Multiple Fault Testing
Multiple fault phenomena: the multiple stuck-fault (MSF) model is a straightforward extension of the single stuck-fault (SSF) model in which several lines can be simultaneously stuck. If n is the number of possible SSF sites, there are 2n possible SSFs, but 3^n - 1 possible MSFs. If we assume that the multiplicity of faults is no greater than k, the number of possible MSFs is the sum over i = 1, ..., k of C(n, i) * 2^i. The number of multiple faults is very large; however, they have to be considered because of possible fault masking.
127
Multiple Fault Testing
Fault masking: let Tg be a test that detects a fault g. A fault f functionally masks the fault g iff the multiple fault {f, g} is not detected by any pattern in Tg. Example: the test 011 is the only test that detects the fault c/0; the same test does not detect the multiple fault {c/0, a/1}; thus a/1 masks c/0. Let T'g, a subset of T, be the set of all tests in T that detect a fault g. A fault f masks the fault g under a test T iff the multiple fault {f, g} is not detected by any test in T'g. [Figure: circuit with the faults a/1 and c/0]
128
Multiple Fault Testing
Circular fault masking, example: the test T = {1111, 0111, 1110, 1001, 1010, 0101} detects every SSF. The only test in T that detects the single faults b/1 and c/1 is 1001. However, the multiple fault {b/1, c/1} is not detected, because under the test vector 1001, b/1 masks c/1 and c/1 masks b/1. A multiple fault F may remain undetected by a complete test T for single faults because of circular masking among the faults in F. [Figure: circuit with the faults b/1 and c/1]
129
Multiple Fault Testing
Testing multiple faults by pairs of patterns: to test a path under the condition of multiple faults, a two-pattern test is needed; as a result, either the faults on the path under test are detected or the masking fault is detected. Example: the lower path from b to the output is under test; a pair of patterns is applied on b; there is a masking fault c/1. Under the first pattern the fault on b is masked; under the second pattern the fault on c is detected. The possible results: 01 - no faults detected; 00 - either b/0 or c/1 detected; 11 - the fault b/1 detected. [Figure]
130
Multiple Fault Testing
Testing multiple faults by groups of patterns: an example where the method of test pairs does not help. Multiple fault: x1/1, x2/0, x3/1. [Table: fault masking vs. fault detection of x1/1, x2/0, x3/1 under the tests T1, T2, T3]
131
Defect-Oriented Test Generation
Defect-level constraints calculation: y* = F*(x1, x2, ..., xn, d) = (NOT d AND F) OR (d AND Fd), where d = 1 if the defect is present. The constraint Wd is the solution of dy*/dd = 1. [Figure: component with defect and its logic-level constraints]
132
Defect-Oriented Test Generation
Test generation for a bridging fault (bridge between leads 73 and 6): fault manifestation: Wd = NOT x6 AND x7 = 1: x6 = 0, x7 = 1, x7 = D; fault propagation: x2 = 1, x1 = 1, b = 1, c = 1; line justification: b = 1: x5 = 0. [Figure: macro with the activated path and the constraint Wd]
133
Test generation for Sequential Faults
Test generation for sequential faults: fault sensitization: a test pattern consists of an input pattern and a state; fault propagation: to propagate a fault to the output, an input pattern and a state are needed; line justification: to reach the needed state, an input sequence is needed. [Figure: time-frame model unrolling the combinational circuit CC with the feedback register R]
134
Hierarchical Test Generation
In high-level symbolic test generation the test properties of components are often described in the form of fault-propagation modes. These modes usually contain: a list of control signals such that the data on the input lines is reproduced without logic transformation at the output lines (an I-path), or a list of control signals that provide a one-to-one mapping between data inputs and data outputs (an F-path). The I-paths and F-paths constitute connections for propagating test vectors from input ports (or any controllable points) to the inputs of the module under test (MUT) and for propagating the test response to an output port (or any observable points). In the hierarchical approach, top-down and bottom-up strategies can be distinguished.
135
Hierarchical Test Generation Approaches
[Figure: bottom-up approach (a, c, D fixed; x free: A = ax, B = bx, C = cx) vs. top-down approach (a', c', D' fixed; x free: A = a'x, D' = d'x, C = c'x)]
136
Hierarchical Test Generation Approaches
Bottom-up approach: pre-calculated tests for components, generated on the low level, are assembled at a higher level. It fits well the uniform hierarchical approach to test, which covers both component testing and communication network testing. However, the bottom-up algorithms ignore the incompleteness problem: the constraints imposed by other modules and/or the network structure may prevent the local test solutions from being assembled into a global test. The approach works well only if the corresponding testability demands are fulfilled. [Figure]
137
Hierarchical Test Generation Approaches
Top-down approach: proposed to solve the test generation problem by deriving environmental constraints for low-level solutions. This method is more flexible, since it does not narrow the search for the global test solution to pregenerated patterns for the system modules. However, the method is of little use when the system is still under development in a top-down fashion, or when "canned" local tests for modules or cores have to be applied. [Figure]
138
Test Generation with SSBDDs
The nodes of the SSBDD represent signal paths through gates; the two possible faults of a DD node represent all the stuck-at faults along the signal path. A test pattern for node 71 activates the path to the node and distinguishes the fault-free traversal from the one under the node fault 71/0. [Figure: macro, its SSBDD, and the test pattern for node 71]
139
Structural Test Generation on SSBDDs
Multiple path fault propagation by DDs: a structural DD is used for testing paths, a functional DD for testing inputs. [Figure: structural DD over the branch variables x11, ..., x42 and functional DD over x1, ..., x4]
140
Example: Test Generation with SSBDDs
Testing stuck-at-0 faults on paths. [Figure: circuit, SSBDD with the activated path, and the test pattern x1 x2 x3 x4 and output y] Tested faults: x12/0, x21/0.
141
Example: Test Generation with SSBDDs
Testing stuck-at-0 faults on paths. [Figure: circuit, SSBDD with the activated path, and the test pattern x1 x2 x3 x4 and output y] Tested faults: x12/0, x31/0, x4/0.
142
Example: Test Generation with SSBDDs
Testing stuck-at-0 faults on paths. [Figure: circuit, SSBDD with the activated path, and the test pattern x1 x2 x3 x4 and output y] Tested faults: x13/1, x22/0, x32/0.
143
Example: Test Generation with SSBDDs
Testing stuck-at-1 faults on paths. [Figure: circuit, SSBDD with the activated path, and the test pattern x1 x2 x3 x4 and output y] Tested faults: x11/1, x12/1, x22/1.
144
Example: Test Generation with SSBDDs
Testing stuck-at-1 faults on paths. [Figure: circuit, SSBDD with the activated path, and the test pattern x1 x2 x3 x4 and output y] Tested faults: x21/1, x31/1, x13/0.
145
Example: Test Generation with SSBDDs
Testing stuck-at-1 faults on paths. [Figure: circuit, SSBDD with the activated path, and the test pattern x1 x2 x3 x4 and output y] Tested fault: x4/1. Not yet tested fault: x32/1.
146
Transformation of BDDs
Transformation of BDDs: from the SSBDD through a reduced BDD to an optimized BDD. [Figure: the three diagrams for the same function]
147
Example: Test Generation with BDDs
Testing stuck-at faults on inputs: with the test pair D = 0, 1 the faults x1/0 and x1/1 are tested. [Figure: SSBDD of the circuit vs. its BDD]
148
Multiple Fault Testing with SSBDDs
Method of pattern groups on SSBDDs: a test group tests a part of the circuit. Disjunctive normal forms tend to explode; DDs provide an alternative. [Figure: circuit, SSBDD and the test group table over x1, ..., x4]
149
Test Generation for Digital Systems
High-level test generation with DDs: conformity test. Multiple paths are activated in a single DD; the control function y3 is tested. Control part of the test: for D = 0, 1, 2, 3: y1 y2 y3 y4 = 00D2. Data part: data are chosen so that the results of the competing operations (R1 + R2, IN + R2, R1, IN, R1 * R2, IN * R2, ...) differ from each other. [Figure: data path, its decision diagram and the test program]
150
Test Generation for Digital Systems
High-level test generation with DDs: scanning test. A single path is activated in a single DD; the data function R1 * R2 is tested. Control part: y1 y2 y3 y4 = 0032. Data part: all specified pairs of (R1, R2). [Figure: data path, its decision diagram and the test program]
151
Test Generation for Digital Systems
High-level path activation on DDs, transparency functions on decision diagrams: Y = C: y3 = 2, R3' = 0 (C is to be tested); R1 = B: y1 = 2, R3' = 0 (R1 is to be justified).
152
Test Generation for Digital Systems
Modelling the control path by DDs: DD for the FSM, representing the state transitions and output functions. [Figure/table: FSM state transition and output table (q', q, y1 y2 y3) and the corresponding DD]
153
Test Generation for Digital Systems
System model: data path, control path and their DDs. [Figure: DDs for Y, R1, R2, R3 and the FSM DD of the control path]
154
Test Generation for Digital Systems
High-level test generation for the data path (example). Test generation steps: fault manifestation, fault-effect propagation, constraints justification. [Figure: time frames t, t-1, t-2, t-3 with the control values q', y and the symbolic data values D]
155
Test Generation for Digital Systems
Test generation step: fault-effect propagation. [Figure: propagation through the DDs over the time frames t, t-1, t-2, t-3]
156
Test Generation for Digital Systems
Test generation step: line justification. Path activation procedures on the DDs over the time frames. [Figure]
157
Test Generation for Digital Systems
High-level test generation example: the resulting symbolic test sequence over the time frames t, t-1, t-2, t-3, combining fault manifestation, fault propagation and constraints justification. [Figure]
158
Test Generation for Microprocessors
Modelling a microprocessor by high-level DDs (example): the instruction set I1-I10 (I1: MVI A,D: A <- IN; I2: MOV R,A: R <- A; I3: MOV M,R: OUT <- R; I4: MOV M,A: OUT <- A; I5: MOV R,M: R <- IN; I6: MOV A,M: A <- IN; I7: ADD R: A <- A + R; I8: ORA R: A <- A OR R; I9: ANA R: A <- A AND R; I10: CMA: A <- NOT A) and its DD model. [Figure]
159
Test Generation for Microprocessors
Test program synthesis (example): scanning test for the adder: instruction sequence I5, I1, I7, I4 for all needed pairs of (A, R). Over the time frames: load R (I5) and A (I1), apply the test (I7: A <- A + R), observe (I4: OUT <- A). [Figure: DD model of the microprocessor]
160
Test Generation for Microprocessors
Data generation for a test program (example): conformity test for the decoder: instruction sequence I5, I1, D, I4 for all D in {I1, ..., I10} at given A, R, IN. The data IN, A, R are generated so that the values of all functions are different. [Figure: DD model of the microprocessor]
161
Defect-Oriented Hierarchical Test Generation
Multi-Level approach with functional fault model:
162
Overview Fault simulation Introduction
Theory: Boolean differential algebra Theory: Decision diagrams Fault modelling Test generation Fault simulation Fault diagnosis Testability measuring Design for testability Built in Self-Test
163
Overview: Fault Simulation
Overview about methods Low (gate) level methods Parallel fault simulation Deductive fault simulation Gate-level fault lists propagation (library based) Boolean full differential based (general approach) SSBDD based (tradeoff possibility) Concurrent fault simulation Critical path tracing Parallel critical path tracing Hierarchical fault simulation
164
Fault simulation Goals:
Evaluation (grading) of a test T (fault coverage); guiding the test generation process; constructing fault tables (dictionaries); fault diagnosis. [Figure: test generation loop: generate initial T, evaluate T by fault simulation, check whether the fault coverage is sufficient, select a target fault, generate a test for it, discard detected faults, update T]
165
Fault simulation techniques: serial, parallel, deductive, concurrent fault simulation; critical path analysis; parallel critical path analysis. Common concepts: fault specification (fault collapsing), fault insertion, fault effect propagation, fault discarding (dropping). Comparison of methods is based on the fault table: faults Fi, test patterns Tj; entry (i, j) = 1 (0) if Fi is detectable (not detectable) by Tj.
166
Parallel Fault Simulation
Parallel patterns and parallel faults: one computer word holds the values of a signal under several test patterns; a fault is inserted by forcing the corresponding bits; a detected error is any difference between the good and faulty output words. [Figure: fault-free circuit and two faulty circuits (inserted stuck-at-1 and stuck-at-0) simulated over three packed patterns]
167
Parallel Simulation of Faults
Interpretation of the values in a computer word: one bit position holds the value in the good circuit, the other positions hold the values in the faulty circuits #1, ..., #31. Fault insertion on line z uses a mask and the stuck values: z after insertion = (z before AND NOT mask) OR (stuck values AND mask). Three-valued logic (values 0, 1, u) is encoded by pairs A = (A1, A2); Boolean operations: C = A AND B: C1 = A1 AND B1, C2 = A2 AND B2; C = NOT A: C1 = NOT A2, C2 = NOT A1. A small sketch follows.
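A minimal Python sketch of the word-parallel idea (toy circuit and names invented for illustration): three patterns are packed into one integer, so the good and faulty machines are compared with single word operations:

```python
MASK = 0b111                     # three test patterns packed into one word
x1, x2, x3 = 0b001, 0b011, 0b101 # bit k = value of the input in pattern k

def simulate(z_fault=None):
    z = x1 & x2                  # AND gate evaluated for all patterns at once
    if z_fault is not None:      # insert a stuck-at fault on internal line z
        z = MASK if z_fault else 0
    return z | x3                # OR gate

errors = simulate() ^ simulate(z_fault=1)   # good XOR faulty output words
print(bin(errors))               # 0b010: only the second pattern detects z/1
```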
168
Deductive Fault Simulation
Gate-level fault list propagation with a library of formulas for gates (set operations on the fault lists): La = L4 AND L5, Lb = L1 OR L2, Lc = L3 OR La, Ly = Lb - Lc, i.e. Ly = (L1 OR L2) - (L3 OR (L4 AND L5)). La is the set of faults causing an erroneous signal on node a; Ly is the set of faults causing an erroneous signal on the output node y. [Figure: example circuit]
169
Deductive Fault Simulation
Macro-level fault propagation: the fault list Ly = (L1 OR L2) - (L3 OR (L4 AND L5)) is calculated in one step for the whole macro by solving a Boolean differential equation: a fault list Lk contributes to Ly iff the corresponding Boolean derivative equals 1 under the applied pattern. A sketch of the underlying gate-level rule follows.
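A minimal Python sketch of the deductive rule for a single 2-input AND gate (fault lists as Python sets; the gate's own output faults would be added separately); the set operations mirror the formulas above:

```python
def and_fault_list(a, b, La, Lb):
    """Faults producing an erroneous value on c = a & b, given the input
    values a, b and the input fault lists La, Lb."""
    if a == 1 and b == 1:
        return La | Lb            # flipping either input flips c
    if a == 0 and b == 0:
        return La & Lb            # both inputs must flip together
    # exactly one controlling 0: its faults must flip it while the
    # non-controlling input stays unchanged
    return (La - Lb) if a == 0 else (Lb - La)

La, Lb = {'x1/0', 'd/0'}, {'x2/0', 'd/0'}
print(and_fault_list(1, 1, La, Lb))   # union of the three faults
print(and_fault_list(0, 0, La, Lb))   # {'d/0'}: the only common fault
```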
170
Deductive Fault Simulation with DDs
Fault list propagation on the DD: faults on the activated path: Ly = L1 OR L2; with the first-order fault masking effect: Ly = (L1 OR L2) - L3; with the second-order masking effect (tradeoff): Ly = (L1 OR L2) - (L3 OR (L4 AND L5)). There is a tradeoff between speed and accuracy: when the speed of simulation is increased, the results become pessimistic (fewer faults reported as detected than in reality). [Figure: the macro and its DD]
171
Concurrent Fault Simulation
A good circuit N is simulated. For every faulty circuit NF, only those elements in NF are simulated that differ from the corresponding ones in N. These differences are maintained for every element x in N in the form of a concurrent fault list. Example: a gate concurrently simulated. [Figure]
172
Concurrent Fault Simulation
Example: a simple circuit simulated. A fault is said to be visible on a line when its values for N and NF differ. In deductive simulation only visible faults belong to the fault lists; in concurrent simulation, faults are excluded (dropped) from the fault list only when the element in NF is equivalent to the corresponding element of N. [Figure]
173
Critical Path Tracing. Problems: the critical path is not continuous, and it breaks on fan-out points. [Figure: two example circuits showing a broken critical path]
174
Parallel Critical Path Tracing
Parallel critical path tracing handles fan-out points by combining parallel pattern fault simulation with the Boolean differential calculus. Example detected-faults vector: T1: no faults detected; T2: x1/1 detected; T3: x1/0 detected; T4: no faults detected. [Figure: circuit with packed pattern words]
175
Hierarchical Concurrent Fault Simulation
Hierarchical concurrent fault simulation: sets of patterns with faults, P; P1(R1), ..., Pn(Rn), are propagated through the high-level components. [Figure: sequence of patterns entering and leaving high-level components]
176
Hierarchical fault simulation
Main ideas of the procedure: a target block is chosen and represented at the gate level; fault simulation is carried out at the low level; the faults are propagated through the other blocks at the high level.
177
Hierarchical fault simulation
178
Hierarchical fault simulation
Definition of the complex pattern: D = {P, (P1, R1), ..., (Pk, Rk)}: P is the fault-free pattern (value); Pi (i = 1, 2, ..., k) are faulty patterns caused by the sets of faults Ri; all simulated faults causing the same faulty pattern Pi are put together into one group Ri; R1, ..., Rk are the propagated fault groups causing, correspondingly, the faulty patterns P1, ..., Pk.
179
Fault Simulation with DD-s
Fault propagation through a complex RT-level component. Complex vectors at the inputs of the decision diagram: Dq = {1, 0(1,2,5), 4(3,4)}, DxA = {0, 1(3,5)}, DxC = {1, 0(4,6)}, DA = {7, 3(4,5,7), 4(1,3,9), 8(2,8)}, DB = {8, 3(4,5), 4(3,7), 6(2,8)}, DC = {4, 1(1,3,4), 2(2,6), 5(6,7)}. A new DA is to be calculated. [Figure: DD for A with nodes q, xA, xC and terminal expressions B + C, A + 1, A + C, A - 1, A + B, A]
180
Fault Simulation with DD-s
Example of high-level fault simulation with the complex vectors Dq = {1, 0(1,2,5), 4(3,4)}, DxA = {0, 1(3,5)}, DxC = {1, 0(4,6)}, DA = {7, 3(4,5,7), 4(1,3,9), 8(2,8)}, DB = {8, 3(4,5), 4(3,7), 6(2,8)}, DC = {4, 1(1,3,4), 2(2,6), 5(6,7)}. Final complex vector for A: DA = {8, 3(4), 4(3,7), 5(9), 7(5), 9(1,8)}.
181
Overview Fault diagnosis Introduction
Theory: Boolean differential algebra Theory: Decision diagrams Fault modelling Test generation Fault simulation Fault diagnosis Testability measuring Design for testability Built in Self-Test
182
Overview: Fault Diagnosis
Overview of the methods Combinational methods of diagnosis Fault table based methods Fault Dictionary based methods Minimization of diagnostic data in fault tables Methods for improving the diagnostic resolution Sequential methods of diagnosis Edge-Pin testing Guided Probe fault location Design error diagnosis
183
Fault diagnosis methods. Combinational methods: the process of fault localization is carried out after the whole testing experiment is finished, by combining all the gathered experimental data; the diagnosis is made using fault tables or fault dictionaries. Sequential methods (adaptive testing): the process of fault location is carried out step by step, where each step depends on the result of the diagnostic experiment at the previous step. Sequential fault diagnosis can be carried out either by observing only the output responses of the UUT, or by pinpointing internal control points of the UUT with a special probe (guided probing).
184
Combinational Fault diagnosis
Fault localization by fault tables: the observed result vector is compared with the columns of the fault table. A unique match locates the fault (e.g. F5); faults with identical columns (e.g. F1 and F4) are not distinguishable; if no column matches, diagnosis is not possible. [Figure: fault table over faults F1, ..., F7 and tests T1, ...] A small sketch follows.
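A minimal Python sketch of table-based localization (a hypothetical fault table; columns are pass/fail signatures over the test set): the diagnosis is the set of faults whose column equals the observed signature:

```python
fault_table = {                 # entry = 1 iff the fault fails that test
    'F1': (1, 0, 1),
    'F2': (0, 1, 1),
    'F3': (0, 1, 1),            # same column as F2: not distinguishable
    'F4': (1, 0, 1),            # same column as F1: not distinguishable
}
observed = (0, 1, 1)            # which tests actually failed on the device
diagnosis = [f for f, col in fault_table.items() if col == observed]
print(diagnosis)                # ['F2', 'F3']: located up to an equivalence class
```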
185
Combinational Fault Diagnosis
Fault localization by fault dictionaries: fault dictionaries contain the same data as the fault tables, with the difference that the data is reorganised. The column bit vectors can be represented by ordered decimal codes or by some kind of compressed signature.
186
Combinational Fault Diagnosis
Minimization of diagnostic data: to reduce the cost of building a fault table, detected faults may be dropped from simulation. All the faults detected for the first time by the same vector produce the same column vector in the table and will be included in the same equivalence class of faults. Testing can stop after the first failing test; no information from the following tests can be used. With fault dropping, only 19 faults need to be simulated, compared to all 42 faults. The following faults remain not distinguishable: {F2, F3}, {F1, F4}. A tradeoff between computing time and diagnostic resolution can be achieved by dropping faults only after k > 1 detections.
187
Improving Diagnostic Resolution
Generating tests to distinguish faults
To improve the fault resolution of a given test set T, it is necessary to generate tests that distinguish among faults equivalent under T
Consider the problem of distinguishing between faults F1 and F2: a test must be found which detects one of these faults but not the other
The following cases are possible:
F1 and F2 do not influence the same outputs - a test should be generated for F1 (F2) using only the subcircuit feeding the outputs influenced by F1 (F2)
F1 and F2 influence the same set of outputs - a test should be generated for F1 (F2) without activating F2 (F1)
How can a fault be activated without activating another one?
188
Improving Diagnostic Resolution
Generating tests to distinguish faults
Case: the faults influence different outputs. F1: x3,1/0, F2: x4/1
Method: F1 may influence both outputs, F2 may influence only x8
The test pattern 0010 activates F1 towards both outputs, and F2 only towards x8
If both outputs are wrong, F1 is present; if only x8 is wrong, F2 is present
189
Improving Diagnostic Resolution
Generating tests to distinguish faults
How to activate a fault without activating another one? F1: x3,2/0, F2: x5,2/1
Method: both faults influence the same output of the circuit
The test pattern 0100 activates fault F2; F1 is not activated, since line x3,2 has the same value it would have if F1 were present
The test pattern 0110 also activates fault F2; F1 is now activated at its site but not propagated through the AND gate
190
Improving Diagnostic Resolution
Generating tests to distinguish faults
Case: both faults may influence only the same output. F1: x3,1/1, F2: x3,2/1
Both faults are activated to the same OR gate (neither of them is blocked)
However, the faults produce different values at the inputs of the gate, so they are distinguished:
if x8 = 0, F1 is present
otherwise, either F2 is present or neither of the faults F1 and F2 is present
191
Sequential Fault Diagnosis
Sequential fault diagnosis by edge-pin testing
Diagnostic tree (figure omitted): two faults, F1 and F4, remain indistinguishable
Not all test patterns used in the fault table are needed
Different faults require test sequences of different lengths for their identification: the shortest test contains two patterns, the longest four
192
Sequential Fault Diagnosis
Guided-probe testing at the gate level: search tree and faulty circuit (figure omitted)
193
Sequential Fault Diagnosis
Guided-probe testing at the macro level
Rules on DDs:
only the nodes whose leaving direction coincides with the leaving direction from the DD should be pinpointed
if simulation shows that these nodes cannot explain the faulty behavior, they can be dropped
In the example there is a fault on line 71
Nodes to be pinpointed:
gate level: c, e, d, 1, a, 71 (6 probing attempts)
macro level (DD): 1, 71 (2 probing attempts)
194
Design error diagnosis
Design error sources:
manual interference of the designer with the design during synthesis
bugs in CAD software
Design error types:
gate replacements
extra or missing inverters
extra or missing wires
incorrectly placed wires
extra or missing gates
Main approaches to design error diagnosis:
error model (design error types) explicitly described, e.g. single gate replacement on the basis of {AND, OR, NAND, NOR}
diagnosis without an error model
195
Design Error Diagnosis
Single gate design error model
Basic idea: to detect a design error in the implementation at an arbitrary gate sk = gk(s1, s2, ..., sh), it is sufficient to apply a pair of test patterns which detect the faults si/1 and si/0 at one of the gate inputs si, i = 1, 2, ..., h
The stuck-at fault model is used, with subsequent translation of the diagnosis into the design error domain
This makes it possible to exploit standard gate-level ATPGs for verification and design error diagnosis
A hierarchical approach is used for generating test patterns which first localize the faulty macro (a tree-like subcircuit) and then localize the erroneous gate within the faulty macro
196
Design Error Diagnosis
Mapping stuck-at faults into design errors (worked example):
path-level diagnosis: x1/1, x2,2/0, x2,1/1, x3,1/1
gate-level diagnosis: suspected gates g6, g8, g9, g12
additional test T8 = (00011): x3,1/1 is missing, so g9 and g12 are removed
x2,2/0 suspected, x2,2/1 is missing, so g6 is correct
diagnosis: g8 (x1/1, x2,1/1) - an AND8/OR8 gate replacement
197
Overview: Testability Measuring
Course map: Introduction · Theory: Boolean differential algebra · Theory: Decision diagrams · Fault modelling · Test generation · Fault simulation · Fault diagnosis · Testability measuring · Design for testability · Built-in Self-Test
198
Overview: Testability Measuring
Quality policy of electronic design
Tradeoffs of design for testability
Testability criteria
Testability measures: heuristic measures, probabilistic measures
Calculation of signal probabilities: Parker-McCluskey method, cutting method, conditional-probabilities-based method
199
Design for Testability
The problem is QUALITY:
Y - yield, the probability of producing a good product
P - probability of a defect; n - number of defects
DL - defect level; Pa - probability of accepting a bad product
Quality policy: yield and defect level are linked through testing and design for testability
200
Design for Testability
The problem is QUALITY (notation):
P - probability of a defect; n - number of defects
m - number of faults covered by the test; T - test coverage
Pa - probability of accepting a bad product; DL - defect level
201
Design for Testability
Defect level DL as a function of test coverage T and yield Y (values in %, reconstructed from the garbled table):
Y = 90%: DL = 8, 5, 1 for T = 10, 50, 90
Y = 10%: DL = 81, 45, 9 for T = 10, 50, 90
Testability ↑ ⇒ DL ↓: the lower the yield, the more the defect level depends on test coverage
202
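As an editorial reconstruction (the slide's own formula was lost in extraction): the tabulated percentages coincide with the simple product below, while the more widely cited Williams-Brown model DL = 1 - Y^(1-T) gives nearly the same numbers at high yield.

\[ DL = (1 - Y)(1 - T), \qquad \text{e.g. } Y = 0.1,\ T = 0.5 \;\Rightarrow\; DL = 0.45 \]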
Design for Testability
Tradeoffs of DFT (resynthesis or adding extra hardware):
gains: testability ↑, defect level DL ↓, test coverage T ↑
costs: performance, logic complexity, area, number of I/O pins, power consumption, yield
Economic tradeoff: C(Design + Test) < C(Design) + C(Test)
203
Design for Testability
Economic tradeoff:
C(Design + Test) < C(Design) + C(Test), i.e. C(DFT) + C(Test') < C(Design) + C(Test)
C(Test) = C_TGEN + (C_APLIC + (1 - Y)·C_TS)·Q
C(DFT) = (C_D + ΔC_D) + Q·(C_P + ΔC_P)
where C_TGEN - test generation cost, C_APLIC - test application cost, C_TS - troubleshooting cost, Q - production volume, C_D - design cost, C_P - product cost
204
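A quick numeric illustration of the test cost formula, with made-up values for the cost constants (a sketch, not data from the course):

# Illustrative evaluation of C(Test) = C_TGEN + (C_APLIC + (1-Y)*C_TS)*Q
C_TGEN  = 100_000.0   # one-off test generation cost
C_APLIC = 2.0         # test application cost per unit
C_TS    = 50.0        # troubleshooting cost per failing unit
Y, Q    = 0.9, 100_000

C_test = C_TGEN + (C_APLIC + (1 - Y) * C_TS) * Q
print(C_test)         # 800000.0 - per-unit costs dominate at high volume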
Testability Criteria Qualitative criteria for Design for testability:
Testing cost:
test generation time
test application time
fault coverage
test storage cost (test length)
availability of automatic test equipment
Redesign-for-testability cost:
performance degradation
area overhead
I/O pin demand
205
Testability of Design Types
General important relationships:
T(sequential logic) < T(combinational logic) - solution: scan-path design strategy
T(control logic) < T(data path) - solutions: data-flow design, scan-path design strategies
T(random logic) < T(structured logic) - solutions: bus-oriented design, core-oriented design
T(asynchronous design) < T(synchronous design)
206
Testability Estimations for Circuit Types
Circuits that are less controllable: decoders, circuits with feedback, counters, clock generators, oscillators, self-timing circuits, self-resetting circuits
Circuits that are less observable: circuits with feedback, embedded RAMs, ROMs, PLAs, error-checking circuits, circuits with redundant nodes
207
Testability Measures
Evaluation of testability: controllability C0(i), C1(j); observability OY(k), OZ(k); testability
(Example circuit omitted: it shows a defect buried deep in the logic whose probability of detection by a random pattern is vanishingly small, motivating quantitative measures.)
208
Heuristic Testability Measures
Controllability calculation:
value: the minimum number of nodes that must be set in order to produce 0 or 1
for inputs: C0(x) = C1(x) = 1
for other signals, recursive calculation rules:
AND: C0(y) = min[C0(x1), C0(x2)] + 1; C1(y) = C1(x1) + C1(x2) + 1
NOT: C0(y) = C1(x) + 1; C1(y) = C0(x) + 1
OR: C1(y) = min[C1(x1), C1(x2)] + 1; C0(y) = C0(x1) + C0(x2) + 1
XOR: C0(y) = min[C0(x1) + C0(x2), C1(x1) + C1(x2)] + 1; C1(y) = min[C0(x1) + C1(x2), C1(x1) + C0(x2)] + 1
209
Heuristic Testability Measures
Observability calculation:
value: the minimum number of nodes which must be set in order to propagate a fault
for primary outputs: O(y) = 1
for other signals, recursive calculation rules:
AND: O(x1) = O(y) + C1(x2) + 1
OR: O(x1) = O(y) + C0(x2) + 1
NOT: O(x) = O(y) + 1
fanout branch: O(x1) = O(y) + 1
210
Heuristic Testability Measures
Controllability and observability values annotated on the example macro circuit (figure omitted)
211
Heuristic Testability Measures
Testability calculation:
T(x/0) = C1(x) + O(x)
T(x/1) = C0(x) + O(x)
(Values for the example macro circuit omitted.)
212
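The recursive rules above are easy to run in code. A minimal Python sketch for the tiny assumed circuit y = AND(x1, NOT(x2)):

def and2(ca, cb):
    """Controllability (C0, C1) of a 2-input AND output."""
    return (min(ca[0], cb[0]) + 1, ca[1] + cb[1] + 1)

def inv(cx):
    """Controllability of an inverter output."""
    return (cx[1] + 1, cx[0] + 1)

C = {"x1": (1, 1), "x2": (1, 1)}       # primary inputs: C0 = C1 = 1
C["n1"] = inv(C["x2"])                 # n1 = NOT(x2) -> (2, 2)
C["y"]  = and2(C["x1"], C["n1"])       # y = AND(x1, n1) -> (2, 4)

O = {"y": 1}                           # primary output
O["x1"] = O["y"] + C["n1"][1] + 1      # side input n1 must be set to 1
O["n1"] = O["y"] + C["x1"][1] + 1
O["x2"] = O["n1"] + 1                  # through the inverter

print("T(x2/0) =", C["x2"][1] + O["x2"])   # set x2 = 1 and observe: 5
print("T(x2/1) =", C["x2"][0] + O["x2"])   # set x2 = 0 and observe: 5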
Probabilistic Testability Measures
Controllability calculation (probabilistic):
value: the probability of producing 0 or 1 at the given signal
for inputs: C0(i) = p(xi = 0); C1(i) = p(xi = 1) = 1 - p(xi = 0)
for other signals, recursive calculation rules:
AND: py = px1 · px2 (for n inputs: py = px1 · ... · pxn)
NOT: py = 1 - px
OR: py = 1 - (1 - px1)(1 - px2) (similarly for n inputs)
213
Probabilistic Testability Measures
Probabilities at reconverging fanouts:
straightforward calculation gives py = 1 - (1 - pa)(1 - pb) = 1 - (1 - px1(1 - px2))(1 - px2(1 - px1)), which is wrong, since a and b are correlated
correction of signal correlations: for one and the same signal x, px · px = px (not px²)
214
Calculation of Signal Probabilities
Straightforward methods (for all inputs pk = 1/2):
calculation gate by gate: pa = 1 - p1p2 = 0.75, pb = 0.75, pc = 0.4375, py = 0.22 - wrong, because the signals are correlated
Parker-McCluskey algorithm: py = pc·p2 = (1 - pa·pb)·p2 = (1 - (1 - p1p2)(1 - p2p3))·p2 = p1p2² + p2²p3 - p1p2³p3, and after reducing the exponents (pk^m → pk): py = p1p2 + p2p3 - p1p2p3 = 0.38
215
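The exponent-reduction trick is the heart of the Parker-McCluskey method and is compact to implement: represent each signal probability as a polynomial whose terms are sets of input variables, so multiplying a variable by itself cannot raise its exponent. A sketch (the gate functions of the example circuit are assumed from the reconstructed probabilities above):

import math
from itertools import product

def const(c):   return {frozenset(): c}
def var(name):  return {frozenset([name]): 1.0}

def add(p, q):
    r = dict(p)
    for t, c in q.items():
        r[t] = r.get(t, 0.0) + c
    return {t: c for t, c in r.items() if abs(c) > 1e-12}

def mul(p, q):
    r = {}
    for (t1, c1), (t2, c2) in product(p.items(), q.items()):
        t = t1 | t2                    # set union = exponent reduction p^m -> p
        r[t] = r.get(t, 0.0) + c1 * c2
    return r

def inv(p):                            # probability of the inverted signal
    return add(const(1.0), {t: -c for t, c in p.items()})

def evaluate(p, probs):
    return sum(c * math.prod(probs[x] for x in t) for t, c in p.items())

x1, x2, x3 = var("x1"), var("x2"), var("x3")
a = inv(mul(x1, x2))                   # a = NAND(x1, x2)
b = inv(mul(x2, x3))                   # b = NAND(x2, x3)
c = inv(mul(a, b))                     # c = NAND(a, b)
y = mul(c, x2)                         # y = AND(c, x2)
print(evaluate(y, {"x1": .5, "x2": .5, "x3": .5}))   # 0.375, i.e. the slide's 0.38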
Probabilistic Testability Measures
Parker-McCluskey (for all inputs pk = 1/2):
observability: p(∂y/∂a = 1) = pb·p2 = (1 - p2p3)·p2 = p2 - p2²p3 → p2 - p2p3 = 0.25
testability: p(a/1 detected) = p(∂y/∂a = 1)·(1 - pa) = (p2 - p2p3)·(p1p2) = p1p2² - p1p2²p3 → p1p2 - p1p2p3 = 0.125
216
Calculation of fault testabilities
Calculating the probability of detecting the fault b/1 in the example circuit (figure omitted)
217
Calculation of Signal Probabilities
Using BDDs (for all inputs pk = 1/2):
py = p(L1) + p(L2) = p1·p21·p23 + (1 - p1)·p22·p3·p23 = p1p2 + p2p3 - p1p2p3 = 0.38
L1 and L2 are the paths of the BDD leading to the terminal node 1; p21, p22, p23 are the fanout branches of x2, and products of branches of the same stem reduce (p2i·p2j → p2)
218
Heuristic Testability Measures
Using BDDs for controllability calculation:
the BDD-based algorithm for the heuristic measure is the same as for the probabilistic measure, with gate-level calculation replaced by path traversal
C1(y) = min[C1(L1), C1(L2)] + 1 = min[C1(x1) + C1(x2), C0(x1) + C1(x2) + C1(x3)] + 1 = min[2, 3] + 1 = 3
219
Probabilistic Testability Measures
Using BDDs (for all inputs pk = 1/2):
observability: p(∂y/∂x21 = 1) = p(L1)·p(L2)·p(L3) = p1·p23·(1 - p3) = 0.125
testability: p(x21/0 detected) = p21·p(∂y/∂x21 = 1) = p21·p1·p23·(1 - p3) = p1·p2·(1 - p3) = 0.125
Why does multiplying by p21 not change the value? Because branches 21 and 23 of the same fanout stem are fully correlated: p21·p23 = p2
220
Calculation of Signal Probabilities
Cutting method (for all inputs pk = 1/2):
the complexity of exact calculation is reduced by using lower and upper bounds of probabilities
all reconvergent fanout branches except one are cut
the probability range [0, 1] is assigned to all the cut lines
the bounds are propagated by straightforward calculation
(Example macro circuit omitted.)
221
Calculation of Signal Probabilities
Method of conditional probabilities (for all inputs pk = 1/2):
NB! The probabilities Pk = [Pk* = p(xk | x7 = 0), Pk** = p(xk | x7 = 1)] are propagated, not bounds as in the cutting method
py = p(y | x7 = 0)·(1 - p7) + p(y | x7 = 1)·p7 = (1/2 × 1/4) + (11/16 × 3/4) = 41/64
222
Calculation of Signal Probabilities
Combining BDDs and conditional probabilities:
using BDDs gives correct results only inside the blocks, not for the whole system
new method: at the block level, use BDDs and straightforward calculation; at the system level, use conditional probabilities
223
Overview: Design for Testability
Course map: Introduction · Theory: Boolean differential algebra · Theory: Decision diagrams · Fault modelling · Test generation · Fault simulation · Fault diagnosis · Testability measuring · Design for testability · Built-in Self-Test
224
Overview: Design for Testability
Ad hoc design for testability techniques: method of test points; multiplexing and demultiplexing of test points; time-sharing of I/O for normal working and testing modes; partitioning of registers and large combinational circuits
Scan-path design: scan-path design concept; controllability and observability by means of scan-path; full and partial serial scan-paths; non-serial scan design; classical scan designs
Boundary Scan standard
Synthesis of testable circuits
225
Ad Hoc Design for Testability Techniques
Method of test points: Block 1 is not observable, Block 2 is not controllable
Observability is improved by adding an observation point OP at the output of Block 1
1-controllability: a control point CP drives an OR gate between the blocks; CP = 0 - normal working mode, CP = 1 - controlling Block 2 with signal 1
0-controllability: a control point CP drives an AND gate; CP = 1 - normal working mode, CP = 0 - controlling Block 2 with signal 0
226
Ad Hoc Design for Testability Techniques
Method of test points: Block 1 is not observable, Block 2 is not controllable
Improving controllability with an AND-OR gate pair (CP1, CP2):
normal working mode: CP1 = 0, CP2 = 1
controlling Block 2 with 1: CP1 = 1, CP2 = 1
controlling Block 2 with 0: CP2 = 0
Improving controllability with a multiplexer (CP1 = data, CP2 = select):
normal working mode: CP2 = 0
controlling Block 2 with 1: CP1 = 1, CP2 = 1
controlling Block 2 with 0: CP1 = 0, CP2 = 1
227
Ad Hoc Design for Testability Techniques
Multiplexing monitor points:
to reduce the number of output pins for observing monitor points, a multiplexer can be used: 2^n observation points are replaced by a single output OUT and n inputs that address the selected observation point
disadvantage: only one observation point can be observed at a time
228
Ad Hoc Design for Testability Techniques
Multiplexing monitor points (continued):
to reduce the number of inputs as well, a counter (or a shift register) can be used to drive the address lines of the multiplexer
disadvantage: only one observation point can be observed at a time
229
Ad Hoc Design for Testability Techniques
Demultiplexer for implementing control points:
to reduce the number of input pins for controlling test points, a demultiplexer and a latch register can be used: N = 2^n control points CP1 ... CPN are driven from one data input through n address lines
N clock times are required between test vectors to set up the proper control values
230
Ad Hoc Design for Testability Techniques
Demultiplexer for implementing control points (continued):
to reduce the number of addressing inputs as well, a counter (or a shift register) can be used to drive the address lines of the demultiplexer
231
Time-sharing of outputs for monitoring
To reduce the number of output pins for observing monitor points, time-sharing of the normal working outputs can be introduced: the monitor points are multiplexed onto the existing outputs, so no additional outputs are needed
To reduce the number of inputs, again a counter or shift register can be used
Disadvantage: only one observation point can be observed at a time
232
Time-sharing of inputs for controlling
To reduce the number of input pins for controlling test points CP1 ... CPN, time-sharing of the normal working input lines can be introduced (through a demultiplexer)
To reduce the number of inputs for driving the address lines of the demultiplexer, a counter or shift register can be used
233
Ad Hoc Design for Testability Techniques
Examples of good candidates for control points:
control, address and data bus lines on bus-structured designs
enable/hold inputs of microprocessors
enable and read/write inputs to memory devices
clock and preset/clear inputs to memory devices (flip-flops, counters, ...)
data select inputs to multiplexers and demultiplexers
control lines on tristate devices
Examples of good candidates for observation points:
stem lines associated with signals having high fanout
global feedback paths
redundant signal lines
outputs of logic devices having many inputs (multiplexers, parity generators)
outputs from state devices (flip-flops, counters, shift registers)
address, control and data busses
234
Ad Hoc Design for Testability Techniques
Logical redundancy should be avoided:
if a redundant fault occurs, it may invalidate some tests for nonredundant faults
redundant faults cause difficulty in calculating fault coverage
much test generation time can be spent trying to generate a test for a redundant fault
Redundancy may be intentionally added:
to eliminate hazards in combinational circuits (hazard control circuitry)
to achieve high reliability (using error detecting circuits)
Example: a redundant AND gate added to suppress a hazard makes its stuck-at-0 fault untestable
235
Ad Hoc Design for Testability Techniques
Fault redundancy in error control circuitry:
in the plain decoder-based checker the fault E/0 is not testable, since E = 1 whenever the decoder is fault-free
testable version: a test input T is added, with T = 0 - normal working mode, T = 1 - testing mode
236
Ad Hoc Design for Testability Techniques
Partitioning of registers (counters):
a 16-bit counter divided into two 8-bit counters: instead of 2^16 = 65 536 clocks, 2 × 2^8 = 512 clocks are needed
if the two halves are tested in parallel, only 256 clocks are needed
control points are added for tester data, data inhibit, clock inhibit and tester clock
237
Ad Hoc Design for Testability Techniques
Partitioning of large combinational circuits:
the time complexity of test generation and fault simulation grows faster than a linear function of circuit size
partitioning of large circuits reduces these costs
I/O sharing between the normal and testing modes is used (multiplexers and demultiplexers around the partitions C1 and C2)
three modes can be chosen: normal mode, testing C1, testing C2
238
Scan-Path Design
The complexity of testing is a function of the number of feedback loops and their length: the longer a feedback loop, the more clock cycles are needed to initialize and sensitize patterns
A scan register is a register with both shift and parallel-load capability
T = 0 - normal working mode: the flip-flops are connected to the combinational circuit
T = 1 - scan mode: the flip-flops are disconnected from the combinational circuit and connected to each other to form a shift register from Scan-IN to Scan-OUT
239
Scan-Path Design and Testability
Two possibilities for improving controllability and observability: observing internal lines through a multiplexer to a SCANOUT output, and controlling internal lines through a demultiplexer from a SCANIN input (figure omitted)
240
Boundary Scan Standard
241
Boundary Scan Architecture
Boundary scan cells (BSC) sit between the chip pins (Data_in, Data_out) and the internal logic; the TAP (Test Access Port) controller drives the ring through the signals TDI, TDO, TMS and TCK (figure omitted)
242
Boundary Scan Architecture
The scan path between TDI and TDO comprises the instruction register (IR) and the data registers: the boundary scan registers, the bypass register and the device ID register, selected according to the current instruction (figure omitted)
243
Boundary Scan Cell
The cell, used at the input or output pins, contains a shift stage and an update stage (D flip-flops with Shift-DR, Clock-DR and Update-DR controls) plus a Test/Normal multiplexer; serially it connects from the last cell and to the next cell, in parallel it sits between the system pin and the system logic (figure omitted)
244
Boundary Scan Working Modes
SAMPLE mode: Get snapshot of normal chip output signals
245
Boundary Scan Working Modes
PRELOAD mode: Put data on boundary scan chain before next instruction
246
Boundary Scan Working Modes
EXTEST instruction: tests off-chip circuits and board-level interconnections
247
Boundary Scan Working Modes
INTEST instruction: feeds externally supplied test patterns to the on-chip logic and shifts the responses out
248
Boundary Scan Working Modes
BYPASS instruction: bypasses the corresponding chip using a 1-bit register between TDI and TDO
249
Boundary Scan Working Modes
IDCODE instruction:
connects the component device identification register serially between TDI and TDO in the Shift-DR TAP controller state
allows a board-level test controller or external tester to read out the component ID
required whenever a JEDEC identification register is included in the design
ID register layout (TDI → TDO): version (4 bits, any format) | part number (16 bits, any format) | manufacturer ID (11 bits, coded form of JEDEC) | fixed 1
250
Fault Diagnosis with Boundary Scan
Example board with a short between two nets (assumed to behave as wired AND) and an open net (assumed to behave as stuck-at-0) (figure omitted)
251
Fault Diagnosis with Boundary Scan
Each net carries its own code word across the test patterns (assume wired AND for shorts, stuck-at-0 for opens)
Kautz showed in 1974 that a sufficient condition to detect any pair of short-circuited nets is that the "horizontal" codes be unique for all nets; therefore the test length is ⌈log2(N)⌉
252
Fault Diagnosis with Boundary Scan
All-0 and all-1 are forbidden codes because of stuck-at faults (a stuck net would pass them unchanged); therefore the final test length is ⌈log2(N + 2)⌉
(Example code assignment omitted.)
253
Fault Diagnosis with Boundary Scan
To improve the diagnostic resolution, one more bit is added to each code (example omitted)
254
Synthesis of Testable Circuits
Test generation example: for the given two-level implementation of y(x1, x2, x3), 4 test patterns are needed (circuit figure omitted)
255
Synthesis of Testable Circuits
Two implementations of the same circuit: for the first only 3 test patterns are needed, for the second 4 test patterns are needed (figures omitted)
256
Synthesis of Testable Circuits
Calculation of the constants from the truth-table values fi of y(x1, x2, x3):
c0 = f0
c1 = f0 ⊕ f1
c2 = f0 ⊕ f2
c3 = f0 ⊕ f1 ⊕ f2 ⊕ f3
c4 = f0 ⊕ f4
c5 = f0 ⊕ f1 ⊕ f4 ⊕ f5
c6 = f0 ⊕ f2 ⊕ f4 ⊕ f6
c7 = f0 ⊕ f1 ⊕ f2 ⊕ f3 ⊕ f4 ⊕ f5 ⊕ f6 ⊕ f7
257
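Assuming that reading of the garbled slide, the constants are the positive-polarity Reed-Muller spectrum of y, computable with an XOR butterfly over the truth table; a sketch:

def reed_muller(f):
    """Positive-polarity Reed-Muller coefficients of a truth table f
    (length 2^n, entries in {0,1}), via an in-place XOR butterfly."""
    c = list(f)
    n = len(c).bit_length() - 1
    for k in range(n):
        step = 1 << k
        for i in range(len(c)):
            if i & step:
                c[i] ^= c[i ^ step]
    return c

f = [0, 1, 1, 0, 1, 0, 0, 1]      # example: y = x1 xor x2 xor x3
print(reed_muller(f))             # [0, 1, 1, 0, 1, 0, 0, 0]: c1 = c2 = c4 = 1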
Synthesis of Testable Circuits
Test generation method for the AND-XOR implementation, with the test patterns annotated on the circuit (figure omitted)
258
Testability as a Trade-off
Amusing testability:
Theorem: you can test an arbitrary digital system with only 3 test patterns if you design it appropriately
Proof sketch: implement the combinational logic with NAND gates only and make all flip-flops scannable; three patterns suffice to exercise every NAND gate for all stuck-at faults (figure omitted)
Solution: System = FSM = Scan-Path + CC(NAND)
259
Overview: Built-In Self-Test
Course map: Introduction · Theory: Boolean differential algebra · Theory: Decision diagrams · Fault modelling · Test generation · Fault simulation · Fault diagnosis · Testability measuring · Design for testability · Built-in Self-Test
260
Overview: Built-In Self-Test
Motivation for BIST Test generation in BIST Pseudorandom test generation with LFSR Weighted pseudorandom test Response compaction Signature analyzers BIST implementation BIST architectures Hybrid BIST Test broadcasting in BIST Embedding BIST Testing of NoC IEEE P1500 Standard
261
Built-In Self-Test
Motivations for BIST:
need for cost-efficient testing
doubts about the stuck-at fault model
increasing difficulties with TPG (test pattern generation)
growing volume of test pattern data
cost of ATE (automatic test equipment)
test application time
gap between tester and UUT (unit under test) speeds
Drawbacks of BIST:
additional pins and silicon area needed
decreased reliability due to increased silicon area
performance impact due to additional circuitry
additional design time and cost
262
BIST techniques are classified into:
on-line BIST - includes concurrent and nonconcurrent techniques
off-line BIST - includes functional and structural approaches
On-line BIST: testing occurs during normal functional operation
concurrent on-line BIST - testing occurs simultaneously with the normal operation mode; usually coding techniques or duplication and comparison are used
nonconcurrent on-line BIST - testing is carried out while the system is in an idle state, often by executing diagnostic software or firmware routines
Off-line BIST: the system is not in its normal working mode; usually on-chip test generators and output response analyzers or microdiagnostic routines are used
functional off-line BIST is based on a functional description of the component under test (CUT) and uses functional high-level fault models
structural off-line BIST is based on the structure of the CUT and uses structural fault models (e.g. SAF)
263
Built-in Self-Test in SoC
System-on-Chip
Advances in microelectronics technology have introduced a new paradigm in IC design: the System-on-Chip (SoC)
SoCs are designed by embedding predesigned and preverified complex functional blocks (cores) into one single die
Such a design style allows designers to reuse previous designs and leads to shorter time-to-market and reduced cost
Testing of SoCs, on the other hand, is a problematic and time-consuming task, mainly due to the resulting complexity and high integration density
On-chip test solutions (BIST) are becoming a mainstream technology for testing such SoC-based systems
264
Built-In Self-Test in SoC
System-on-chip testing
Test architecture components: test pattern source and sink, test access mechanism (TAM), core test wrapper
Solutions:
off-chip solution - needs external ATE
combined solution - mostly on-chip, ATE needed for control
on-chip solution - BIST
265
Built-In Self-Test in SoC
Embedded tester for testing multiple cores
266
Built-In Self-Test Components
BIST components:
test pattern generator (TPG)
test response analyzer (TRA)
TPG and TRA are usually implemented as linear feedback shift registers (LFSR)
Two widespread schemes: test-per-scan and test-per-clock
267
BIST: Test-per-Scan
Initial test set: T1: 1100, T2: 1010, T3: 0101, T4: 1001
Test application: each pattern is shifted into the scan path and then applied, so a 4-bit scan chain needs 4 shift clocks plus 1 capture clock per pattern
Number of clocks = 4 × 5 = 20
Assumes an existing scan architecture
Drawback: long test application time
268
BIST: Test-per-Clock
Initial test set: T1: 1100, T2: 1010, T3: 0101, T4: 1001
Test application: the combinational circuit under test receives a new pattern from the scan-path register at every clock; the patterns overlap in one sequence (T1 ... T4 ... T2)
Number of clocks = 10 (instead of 16)
269
Test Generation in BIST
Pseudorandom test generation by LFSR, using special LFSR-based registers
Several proposals: BILBO, CSTP
Main characteristics of an LFSR: polynomial, initial state, test length
270
Linear Feedback Shift Register (LFSR)
Pseudorandom test generation by LFSR:
standard LFSR - the feedback, an XOR of the tapped stages, is fed back into the first stage
modular LFSR - the XOR gates are placed between the stages
Polynomial: P(x) = 1 + x³ + x⁴
271
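A standard (external-XOR) LFSR for P(x) = 1 + x³ + x⁴ is a few lines of Python; the seed is an arbitrary non-zero state. Since this polynomial is primitive, the sequence runs through all 2^4 - 1 = 15 non-zero states before repeating.

def lfsr(seed, taps, length):
    """Standard LFSR: feedback = XOR of the tapped stages (1-indexed)."""
    state, out = list(seed), []
    for _ in range(length):
        out.append(state[-1])                    # output = last stage
        fb = 0
        for t in taps:
            fb ^= state[t - 1]
        state = [fb] + state[:-1]                # shift, feedback into stage 1
    return out

print(lfsr([1, 0, 0, 0], taps=(3, 4), length=15))   # one full period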
Built-In Self-Test with LFSR
Pseudorandom test generation by LFSR:
The main motivations for using random patterns are low generation cost and high initial efficiency
Reasons for the high initial efficiency:
an n-input circuit may implement any of 2^(2^n) functions
a test vector partitions the functions into 2 equal-sized equivalence classes (the correct circuit is in one of them)
the second vector partitions them into 4 classes, etc.
after m patterns the fraction of functions distinguished from the correct function is 1 - 2^(-m)
272
Built-In Self-Test with LFSR
Pseudorandom test generation by LFSR:
Full identification is achieved only after all 2^n input combinations have been tried (exhaustive test)
A better fault model (stuck-at-0/1) may limit the number of partitions necessary, leaving only low-probability faults in an equivalence class with the fault-free circuit
Pseudorandom testing of sequential circuits - suggested rules:
clock signals should not be random
control signals, such as reset, should be activated with low probability
data signals may be chosen randomly
Microprocessor testing: a test generator randomly picks an instruction and generates random data patterns; repeating this sequence a specified number of times produces a test program which tests the microprocessor by randomly exercising its logic
273
Pseudorandom Test Length
Problems: very long test application time, low fault coverage, area overhead, additional delay
(The motivations for random patterns remain their low generation cost and high initial efficiency.)
Possible solutions: weighted pseudorandom test, combining pseudorandom test with deterministic test, multiple seeds, bit flipping, hybrid BIST
274
BIST: Weighted pseudorandom test
Hardware implementation of a weight generator: successive LFSR stages are ANDed to produce signals with probabilities of 1 equal to 1/2, 1/4, 1/8 and 1/16; a weight-select multiplexer picks the desired weighted value, which feeds Scan-IN
275
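The idea behind the weight generator is that ANDing k independent equiprobable bits yields a bit that is 1 with probability 2^-k. A sketch using Python's PRNG in place of LFSR stages:

import random

def weighted_bit(k):
    """Return a bit with p(1) = 1/2^k (k = 1 gives the unweighted 1/2)."""
    bit = 1
    for _ in range(k):
        bit &= random.getrandbits(1)
    return bit

stream = [weighted_bit(3) for _ in range(100_000)]
print(sum(stream) / len(stream))     # close to 1/8 = 0.125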
BIST: Weighted pseudorandom test
Problem: random-pattern-resistant faults. Solution: weighted pseudorandom testing
The probabilities of the pseudorandom signals are weighted; the weights are determined by circuit analysis
Notation:
PI - primary inputs; SRL - scan register latch
NDI - for each gate, the number of PIs or SRLs in its backtrace cone
NCV - noncontrolling value of a gate
NDIG/NDII - a relative measure of the number of faults to be detected through the gate
The more faults that must be tested through a gate input, the more the other inputs should be weighted towards NCV
276
BIST: Weighted pseudorandom test
RI = NDIG / NDII - the desired ratio of the noncontrolling value (NCV) to the controlling value for each input I of gate G
The more faults that must be tested through a gate input, the more the other inputs should be weighted towards NCV
277
BIST: Weighted pseudorandom test
Example (gate G with inputs PI1, PI2, PI3):
R1 = NDIG / NDI1 = 6/1 = 6; R2 = NDIG / NDI2 = 6/2 = 3; R3 = NDIG / NDI3 = 6/3 = 2
More faults must be detected through the third input than through the others; this results in the other inputs being weighted more heavily towards NCV
278
BIST: Weighted pseudorandom test
Calculation of signal weights:
WV - the value towards which the input is biased; W0, W1 - the 0-weight and 1-weight of the signal
WV = 0 if W0 > W1, else WV = 1
Calculation of W0, W1 starts from the gate output with W0G = W1G = 1; the ratios R1 = 6, R2 = 3, R3 = 2 scale the input weights
279
BIST: Weighted pseudorandom test
Calculation of signal weights:
weights are calculated by backtracing from all the outputs to all the inputs of the given cone, for all gates and PIs
example (with W0G = W1G = 1): for PI1, RG = 1 gives W0 = W1 = 1; for PI2 and PI3, RG = 2 gives W0 = W1 = 3; for PI4-PI6, RG = 3 gives W0 = W1 = 2; the resulting input weights are W01 = W11 = 6, W02 = W12 = 3, W03 = W13 = 2
280
BIST: Weighted pseudorandom test
Calculation of signal probabilities:
WF - weighting factor indicating the amount of biasing towards the weighted value: WF = max{W0, W1} / min{W0, W1}
probability of the weighted value: P = WF / (WF + 1)
for PI1: W0 = 6, W1 = 1, WV = 0, WF = 6, P1 = 1 - 6/7 ≈ 0.15
for PI2 and PI3: W0 = 2, W1 = 3, WV = 1, WF = 1.5, P1 = 0.6
for PI4-PI6: W0 = 3, W1 = 2, WV = 0, WF = 1.5, P1 = 0.4
281
BIST: Weighted pseudorandom test
Probability of detecting the fault stuck-at-1 at input 3 of gate G:
1) equal probabilities (p = 0.5): P = 0.5 × (1 - 0.5²) × 0.5³ = 0.5 × 0.75 × 0.125 ≈ 0.046
2) weighted probabilities (P1(PI1) = 0.15, P1(PI2) = P1(PI3) = 0.6, P1(PI4-PI6) = 0.4): P = 0.85 × (1 - 0.4²) × 0.6³ = 0.85 × 0.84 × 0.22 ≈ 0.16
282
BIST: Weighted pseudorandom test
Hardware implementation of a weight generator (repeated): LFSR stages ANDed pairwise give probabilities 1/2, 1/4, 1/8, 1/16; a weight-select multiplexer feeds the desired weighted value to Scan-IN
283
BIST: Response Compression
Response compression methods:
1. parity checking - the response stream ri of the UUT is folded into a single parity bit
2. ones counting - a counter counts the 1s in the response stream
3. zeros counting - a counter counts the 0s
284
BIST: Response Compression
4. transition counting - a counter counts the transitions between consecutive response bits ri-1 and ri: a) 0→1 transitions, b) 1→0 transitions
5. signature analysis
285
BIST: Signature Analyser
The response is compacted by an LFSR (standard or modular); the content of the LFSR after the test is called the signature
Polynomial: P(x) = 1 + x³ + x⁴
286
BIST: Signature Analysis
The principles of CRC (Cyclic Redundancy Coding) are used in LFSR-based test response compaction
Coding theory treats binary strings as polynomials:
R = r(m-1) r(m-2) … r1 r0 - an m-bit binary sequence
R(x) = r(m-1)·x^(m-1) + r(m-2)·x^(m-2) + … + r1·x + r0 - a polynomial in x
Example: the string 11001 gives R(x) = x⁴ + x³ + 1
Only the coefficients are of interest, not the actual value of x; however, for x = 2, R(x) is the decimal value of the bit string
287
BIST: Signature Analysis
Arithmetic of coefficients - linear algebra over the field of 0 and 1:
all integers are mapped into either 0 or 1
mapping: any integer n is represented by the remainder of dividing n by 2: n = 2m + r, r ∈ {0, 1}, i.e. n ≡ r (modulo 2)
"linear" refers to the arithmetic unit (the modulo-2 adder) used in the CRC generator: linear, since each bit has equal weight upon the output
Example of modulo-2 addition: (x⁴ + x³ + x + 1) + (x³ + x) = x⁴ + 1
288
BIST: Signature Analysis
Division of one polynomial P(x) by another, G(x), produces a quotient polynomial Q(x) and, if the division is not exact, a remainder polynomial R(x): P(x) = Q(x)·G(x) + R(x)
The remainder R(x) is used as a check word in data transmission: the transmitted code consists of the unaltered message P(x) followed by the check word R(x)
Upon receipt the reverse process occurs: the message P(x) is divided by the known G(x), and a mismatch between R(x) and the remainder from the division indicates an error
289
BIST: Signature Analysis
In signature testing, CRC encoding with the compressor polynomial G(x) is applied to the test response string P(x) from the UUT, and the remainder R(x) is used as the signature
The signature is the CRC code word
Example (partially lost in extraction): quotient Q(x) = x² + 1, remainder (signature) R(x) = x³ + x² + 1
290
BIST: Signature Analysis
The division process can be mechanized using an LFSR with stages x0 … x4
The divisor polynomial G(x) is defined by the feedback connections: each shift creates x⁵, which is replaced by x⁵ = x³ + x + 1 (i.e. G(x) = x⁵ + x³ + x + 1)
The response string P(x) is shifted into the LFSR; after the last bit the LFSR contains the remainder, e.g. R(x) = x³ + x² + 1 - the signature
291
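The mechanized division is a short routine; here is a sketch for the modular LFSR above with G(x) = x⁵ + x³ + x + 1 (the response string fed in is an assumed example, since the slide's P(x) was lost):

def lfsr_signature(bits, g_low, n):
    """Remainder of P(x) / G(x) computed bit-serially, highest power first.
    g_low[i] = coefficient of x^i in G(x) - x^n (the feedback taps)."""
    state = [0] * n                       # state[i] = coefficient of x^i
    for b in bits:
        msb = state[-1]                   # x^(n-1) term about to become x^n
        state = [b] + state[:-1]          # multiply by x, add the next bit
        if msb:                           # replace x^n by G(x) - x^n
            state = [s ^ gi for s, gi in zip(state, g_low)]
    return state

g_low = [1, 1, 0, 1, 0]                   # x^5 -> x^3 + x + 1
print(lfsr_signature([1, 0, 1, 1, 1, 0, 0, 1], g_low, 5))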
BIST: Signature Analysis
Aliasing:
L - test length (length of the response string); N - number of stages in the signature analyzer (SA); N << L
The SA maps all 2^L possible responses onto only 2^N possible signatures, so some faulty responses map onto the correct signature
292
BIST: Signature Analysis
The number of different possible responses is 2^L, of which 2^(L-N) - 1 erroneous ones produce the correct signature
No aliasing is possible for strings with L - N leading zeros, since they are represented by polynomials of degree less than N, which are not divisible by the characteristic polynomial of the LFSR; there are 2^N - 1 such strings
Probability of aliasing (all erroneous strings equally likely): P ≈ 2^(-N)
293
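Written out as a formula (a standard result for LFSR compaction, assuming all erroneous response strings are equally likely):

\[ P_{\text{alias}} = \frac{2^{\,L-N} - 1}{2^{\,L} - 1} \;\approx\; 2^{-N} \qquad (L \gg N) \]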
BIST: Signature Analysis
Single-input signature analyzer: one UUT output string is compacted by the LFSR (standard or modular)
Multiple-input signature analyzer (MISR): several UUT output strings are compacted in parallel by one LFSR
294
Signature calculation for multiple outputs:
an LFSR test pattern generator drives the combinational circuit; a multiplexer selects among the circuit outputs, which are compacted by an LFSR signature analyzer
295
LFSR: Signature Analyser
The same LFSR can play both roles: it supplies stimuli as test patterns (when generating tests) and compacts the UUT response string into a signature (when analyzing test responses)
296
Test-per-Clock BIST Architectures
BILBO - Built-In Logic Block Observer: LFSRs around the combinational circuit act as test pattern generator and signature analyzer
CSTP - Circular Self-Test Path: the test pattern generation and signature analysis functions are merged into one circular register around the circuit
297
BIST: BILBO
Working modes (B1 B2): 0 0 - reset; 0 1 - normal mode; the remaining two codes select scan mode and test mode
Testing modes:
testing CC1: LFSR1 works as TPG, LFSR2 as SA
testing CC2: LFSR2 works as TPG, LFSR1 as SA
298
BIST: Circular Self-Test
(Figure: the flip-flops of the circuit under test are chained into a circular path.)
299
Functional Self-Test
Traditional BIST solutions use special hardware for pattern generation on the chip; this may introduce area overhead and performance degradation
New methods have been proposed which exploit specific functional units, like arithmetic blocks or processor cores, for on-chip test generation
It has been shown that adders can be used as test generators for pseudorandom and deterministic patterns
Today there is no general method for using arbitrary functional units for built-in test generation
300
BIST Embedding Example
(Figure: modules M1-M6 share the BIST resources LFSR1, LFSR2, CSTP, BILBO, MISR1 and MISR2 through multiplexers; e.g. M2 is tested concurrently from the LFSR/CSTP with its responses compacted in MISR1 - functional BIST.)
301
Test-per-Scan BIST Architectures
STUMPS - Self-Testing Unit using MISR and Parallel Shift register sequence generator: a test pattern generator feeds the scan chains R1 ... Rn of circuits CC1 ... CCn in parallel, and the chain outputs are compacted in a MISR
LOCST - LSSD On-Chip Self-Test: TPG and SA are placed on the scan path around the CUT (with boundary scan cells), controlled by a test controller with an error output
302
Software BIST - software-based test generation:
To reduce the hardware overhead of BIST, the hardware LFSR can be replaced by software
Software BIST is especially attractive for testing SoCs, because computing resources are directly available in the system (a typical SoC usually contains at least one processor core)
The TPG software is the same for all cores and is stored as a single copy
The characteristics of the LFSR are specific to each core and are stored in ROM; they are loaded upon request
For each additional core, only the BIST characteristics of that core have to be stored
303
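A sketch of the scheme: one generic TPG routine (the single stored code copy) is driven by a per-core parameter table standing in for the ROM; the core names, polynomials and seeds are invented for illustration.

CORE_CFG = {                                 # per-core LFSR characteristics
    "core_a": {"taps": (3, 4), "seed": [1, 0, 0, 0],    "length": 15},
    "core_b": {"taps": (2, 5), "seed": [0, 1, 0, 0, 1], "length": 31},
}

def tpg(taps, seed, length):
    """Generic software LFSR: the single code copy shared by all cores."""
    state = list(seed)
    for _ in range(length):
        yield state[-1]
        fb = 0
        for t in taps:
            fb ^= state[t - 1]
        state = [fb] + state[:-1]

for core, cfg in CORE_CFG.items():           # characteristics "loaded upon request"
    bits = list(tpg(cfg["taps"], cfg["seed"], cfg["length"]))
    print(core, bits[:8], "...")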
Problems with BIST: very long test application time, low fault coverage, area overhead, additional delay
Possible solutions: weighted pseudorandom test, combining pseudorandom test with deterministic test, multiple seeds, bit flipping, hybrid BIST
304
Store-and-Generate test architecture
The ROM contains test patterns for hard-to-test faults
Each pattern Pk in the ROM serves as an initial state (seed) of the LFSR for test pattern generation (TPG)
Counter 1 counts the number of pseudorandom patterns generated starting from Pk
After the cycle for Pk finishes, Counter 2 is incremented to read the next pattern Pk+1 from the ROM
305
Hybrid BIST
A hybrid test set contains a limited number of pseudorandom and deterministic vectors
The pseudorandom test vectors can be generated either by hardware or by software
The pseudorandom test is improved by a stored test set which is specially generated to shorten the on-line pseudorandom test cycle and to target the random-resistant faults
The problem is to find a trade-off between the on-line generated pseudorandom test and the stored test
306
Optimization of Hybrid BIST
Cost curves for hybrid BIST as functions of the pseudorandom test length L:
C_GEN - cost of the on-line generated pseudorandom patterns (grows with L)
C_MEM - cost of the stored deterministic test (shrinks with L, following r_NOT(k), the number of faults remaining after k pseudorandom patterns)
C_TOTAL = C_GEN + C_MEM, minimized at the optimal length L_OPT
307
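Finding L_OPT reduces to a one-dimensional sweep once r_NOT(k) is known from fault simulation. A sketch with an assumed coverage curve and assumed unit costs:

def total_cost(L, r_not, c_gen=1.0, c_mem=20.0):
    """C_TOTAL = C_GEN + C_MEM: pseudorandom clocks plus one stored
    deterministic pattern per remaining random-resistant fault."""
    return c_gen * L + c_mem * r_not(L)

def r_not(L):                       # assumed decaying coverage curve
    return max(0, round(100 * 0.99 ** L))

L_opt = min(range(0, 2001, 10), key=lambda L: total_cost(L, r_not))
print(L_opt, total_cost(L_opt, r_not))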
Hybrid BIST for Multiple Cores
Embedded tester for testing multiple cores
308
Hybrid BIST for Multiple Cores
309
Multi-Core Hybrid BIST Optimization
Cost functions for hybrid BIST and their iterative optimization (formulas lost in extraction)
310
Optimized Multi-Core Hybrid BIST
The pseudorandom test is carried out for all cores in parallel; the deterministic tests are applied sequentially, one core at a time
311
Test-per-Scan Hybrid BIST
Every core's BIST logic is capable of producing an independent set of pseudorandom tests
The pseudorandom tests of all the cores can be carried out simultaneously
Deterministic tests can be carried out for only one core at a time
Only one test access bus is needed at the system level
312
Broadcasting Test Patterns in BIST
Concept of test pattern sharing via a novel scan structure, to reduce test application time: instead of the traditional single-scan design, the scan chains of CUT 1 and CUT 2 are fed from the same source (broadcast test architecture)
While one module is tested by its test patterns, the same patterns are applied simultaneously to the other modules in the manner of pseudorandom testing
313
Broadcasting Test Patterns in BIST
Examples of connection possibilities in broadcasting BIST: j-to-j connections (scan cell j of every CUT shares the same source) or random connections
314
Broadcasting Test Patterns in BIST
Scan configurations in broadcasting BIST, between Scan-In and Scan-Out: a common MISR for all CUTs, or individual (multiple) MISRs, one per CUT
315
Hybrid BIST with Test Broadcasting
SoC with multiple cores to be tested:
not for all cores can 100% fault coverage be achieved by a pure pseudorandom test
additional deterministic tests have to be applied to achieve 100% coverage
the deterministic test patterns are precomputed and stored in the system
316
Hybrid BIST with Test Broadcasting
A hybrid test consists of a pseudorandom test of length LP and a deterministic test of length LD
LDk - length of the deterministic test set dedicated to core Ck
Deterministic test patterns are moved between the pseudorandom part and the deterministic part when the split is optimized
317
Testing of Networks-on-Chip (NoC)
Consider a mesh-like NoC topology consisting of switches (routers), the wire connections between them, and slots for SoC resources, also referred to as tiles
Other topological architectures, e.g. honeycomb and torus, may be implemented; the choice depends on the constraints for low power, area, speed and testability
A resource can be a processor, a memory, an ASIC core, etc.
A network switch contains buffers (queues) for the incoming data and selection logic to determine the output direction in which the data is passed (to the upward, downward, leftward or rightward neighbour)
318
Testing of Networks-on-Chip
Useful knowledge for testing NoC network structures can be obtained from interconnect testing of other regular topological structures:
the test of wires and switches is to some extent analogous to testing the interconnects of an FPGA
a switch in a mesh-like communication structure can be tested using only three different configurations
319
Testing of Networks-on-Chip
Concatenated bus concept:
an arbitrary short or open in an n-bit bus can be tested with ⌈log2(n)⌉ test patterns
when testing the NoC interconnects, the different paths through the interconnect structure can be regarded as one single concatenated bus
assuming a NoC whose mesh consists of m × m switches, the test paths through the matrix can be viewed as a wide bus of 2mn wires
320
Testing of Networks-on-Chip
Concatenated bus concept (continued):
stuck-at-1 and stuck-at-0 faults are modeled as shorts to Vdd and ground, so two extra wires are needed, making the total bus width 2mn + 2 wires
hence 3·⌈log2(2mn + 2)⌉ test patterns are needed in order to test the switches and the wiring of the NoC
321
Testing of Networks-on-Chip
3·⌈log2(2mn + 2)⌉ test patterns are needed
Example for a 6-wire bus: the wires get the unique 3-bit codes 001, 010, 011, 100, 101, 110; the all-0 and all-1 codes are excluded so that stuck-at-0 and stuck-at-1 faults are also detected
The three test patterns (the bit columns of the codes) detect all opens and shorts
322
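Generating the counting-sequence test is mechanical; a sketch that reproduces the 6-wire example (wire codes 001 ... 110, three patterns as the bit columns):

from math import ceil, log2

def interconnect_test(n_wires):
    """Counting-sequence test for an n-wire concatenated bus: each wire
    gets a unique code, skipping all-0 and all-1 so stuck-at faults are
    detected too; returns one test pattern per code bit."""
    width = ceil(log2(n_wires + 2))
    codes = [format(i, f"0{width}b") for i in range(1, n_wires + 1)]
    return ["".join(code[j] for code in codes) for j in range(width)]

for pattern in interconnect_test(6):
    print(pattern)        # 3 patterns, 6 bits each (one bit per wire)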
IEEE P1500 standard for core test
The following components are generally required to test embedded cores:
a source for the application of test stimuli and a sink for observing the responses
test access mechanisms (TAM) to move the test data from the source to the core inputs and from the core outputs to the sink
a wrapper around the embedded core
323
IEEE P1500 standard for core test
The two most important components of the P1500 standard are the Core Test Language (CTL) and the scalable core test architecture
Core Test Language: its purpose is to standardize core test knowledge transfer
The CTL file of a core must be supplied by the core provider; it contains information on how to instantiate a wrapper, map core ports to wrapper ports, and reuse core test data
324
IEEE P1500 standard for core test
Core test architecture: the standard defines only the wrapper and the interface between the wrapper and the TAM, called the Wrapper Interface Port (WIP)
The P1500 TAM interface and wrapper can be viewed as an extension of the IEEE boundary scan standard, since the TAP controller is a P1500-compliant TAM interface and the boundary-scan register is a P1500-compliant wrapper
The wrapper contains an instruction register (WIR), a wrapper boundary register consisting of wrapper cells, a bypass register and some additional logic
The wrapper has to allow normal functional operation of the core and has to include a 1-bit serial TAM; in addition to serial test access, parallel TAMs may be used
325
IEEE P1500 standard for core test
326
Practical works to the course “Design for Testability”
Artur Jutman Tallinn Technical University Estonia
327
Practical Works
There are two practical works within this course: Test Generation and Built-In Self-Test
We provide only brief descriptions of these works in this handout; they give a short overview of what should be done during the exercises
The full descriptions of the works are available on the Web at the following URL:
All the laboratory works are based on the Turbo Tester (TT) tool set, which will be preinstalled in the computer classes; it is freeware, and if you wish to have a copy of TT at your own disposal, do not hesitate to download it from:
328
Practical Work on Test Generation
Overview
The objectives of this practical work are:
to practice manual and automatic test pattern generation
to perform fault simulation and analyze the simulation information
to compare the efficiency of different methods and approaches
There are three types of circuits to practice with; the main difference between them is their size
The gate-level schematic is available for the smallest one; based on that schematic, test vectors should be generated manually
For the second circuit its function is known: it is an adder, and a functional test should be generated manually for it
In addition, automatic test pattern generators (ATPG) with different settings should be run on the adder; the best settings should be found during the work
The third circuit is too large to analyze its function or schematic, so the test for this circuit should be generated automatically
329
Practical Work on Test Generation
Workflow of manual & automatic test pattern generation
330
Practical Work on Test Generation
Steps
1. Manually apply as many random vectors as you think is enough for the first circuit; remember that the goal is a test with the best possible fault coverage using the smallest possible number of test patterns.
2. Using a certain algorithm, prepare a test which is better than that.
3. Repeat steps 1 and 2 for the adder, using functional methods instead.
4. Run the ATPGs with default settings.
5. Try different settings of the "genetic" and "random" ATPGs to obtain a shorter test. Run the deterministic ATPG without tuning again and perform test compaction using the optimize tool.
6. Compare the results and decide which test generation method (or combination of methods) is the best for the given circuit. Why?
7. Repeat steps 4-6 for the third circuit.
8. Calculate the cost of testing for all the methods you used.
331
Practical Work on Built-In Self-Test
Overview
The objectives of this practical work are:
to explore and compare different built-in self-test techniques
to learn how to find the best LFSR architectures for the BILBO and CSTP methods
to study the hybrid BIST approach
Let us have a system-on-chip (SoC) with several cores which have to be tested by a single BIST device using the broadcasting BIST method; the task is then to minimize the time requirements by selecting a BIST configuration suitable for all the cores simultaneously
We solve this problem by simulating different configurations and selecting the best one
There are three combinational circuits (cores) in our SoC: first we select the best configuration for each circuit separately, and then the best one for the SoC as a whole
Another problem to be solved here is the search for an optimal combination of stored and generated vectors in the hybrid BIST approach
332
Practical Work on Built-In Self-Test
Selection of the Best Configuration for Broadcasting BIST (a workflow)
333
Practical Work on Built-In Self-Test
Steps
1. Choose a proper length of the TPG (test pattern generator) and SA (signature analyzer) for BILBO; it depends on the parameters of the selected circuits.
2. For each circuit, find the configuration that gives the best fault coverage and test length. Run the BIST emulator with at least 15 different settings. You have to obtain three best configurations (one for each circuit).
3. Take the first configuration and apply it to the second and third circuits. Do the same with the 2nd and 3rd configurations. Choose the best configuration.
4. Repeat steps 1-3 in CSTP mode. Be sure to select a proper TPG/SA length. Compare the efficiency of the CSTP and BILBO methods.
5. Write the schedule for the hybrid BIST device. The target is to reduce the initial test lengths by a factor of 2. The main task is to find the minimal number of stored seeds that approaches the target test length.
6. Answer the following question: which method is better, hybrid BIST or BILBO, if each stored seed costs as much as 50 generated test vectors?
334
Contact Data Prof. Raimund Ubar Artur Jutman
Tallinn Technical University Computer Engineering Department Address: Raja tee 15, Tallinn, Estonia Tel.: , Fax: