Generalization of BDDs


1 Generalization of BDDs
Binary DD: two terminal nodes and two edges from each node. General case of DD: n ≥ 2 terminal nodes and n ≥ 2 edges from each node.
[Figure: a binary DD node for variable y with two outgoing edges, and a general DD node for variable Y with edges labelled 0 … m leading to sub-diagrams G0 … Gm.]
Novelty: Boolean methods can be generalized in a straightforward way to higher functional levels.
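As an illustration of what this generalization implies, here is a minimal Python sketch (not from the original slides; all names are hypothetical): a node tests a variable and may have n ≥ 2 outgoing edges, with the binary DD as the special case of a two-valued domain.

```python
from dataclasses import dataclass
from typing import Dict, Union

# Hypothetical sketch of a generalized decision-diagram node.
# A binary DD node is the special case where the variable domain is {0, 1}.

@dataclass
class Node:
    var: str                               # variable (or sub-function) tested in this node
    edges: Dict[int, Union["Node", int]]   # edge label -> successor node or terminal value

def evaluate(node: Union[Node, int], assignment: Dict[str, int]) -> int:
    """Follow the path activated by the given variable assignment."""
    while isinstance(node, Node):
        node = node.edges[assignment[node.var]]
    return node

# Example: a node with three outgoing edges (not expressible as a single BDD node).
y = Node("Y", {0: 5, 1: 7, 2: Node("y", {0: 0, 1: 1})})
print(evaluate(y, {"Y": 2, "y": 1}))   # -> 1
```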

2 Faults and High-Level Decision Diagrams
RTL statement: K: (if T, C) RD ← F(RS1, RS2, …, RSm), → N
Non-terminal node RTL-statement faults: label, timing condition, logical condition, register decoding, operation decoding, control faults.
Terminal node RTL-statement faults: data storage, data transfer, data manipulation faults.
Testing concept on the DD model (uniform for all nodes): 1) exhaustive testing, 2) optimization.
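A minimal sketch (hypothetical field names, not from the original slides) of the RTL statement structure, showing how each syntactic part corresponds to one of the fault classes listed above:

```python
from dataclasses import dataclass
from typing import List

# Hypothetical sketch of the RTL statement  K: (if T, C) RD <- F(RS1..RSm), -> N
@dataclass
class RTLStatement:
    label: str            # K  - statement label (label faults)
    timing_cond: str      # T  - timing condition (timing faults)
    logic_cond: str       # C  - logical condition (logical condition faults)
    dest: str             # RD - destination register (register decoding / data storage faults)
    op: str               # F  - operation (operation decoding / data manipulation faults)
    sources: List[str]    # RS1..RSm - source registers (data transfer faults)
    next_label: str       # N  - control transfer to the next statement (control faults)

stmt = RTLStatement("K1", "clk", "C = 1", "R3", "ADD", ["R3", "R2"], "K2")
```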

3 From Trad. MP Fault Model to HLDD Faults
Each path in a DD represents a working mode of the system. The faults that have an effect on the behaviour can be associated with the nodes along the path:
D1: a node output is always activated (SAF-1)
D2: a node output is broken (SAF-0)
D3: instead of the given output, another output (or set of outputs) is activated
Instead of the 14 fault classes of the traditional fault model, the HLDD-based fault model includes only 3 fault classes.
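A minimal sketch (hypothetical names, single-output simplification of D3) of the three HLDD node fault classes as they could be modelled in a simulator:

```python
from enum import Enum
from typing import Optional

class NodeFault(Enum):
    D1_OUTPUT_ALWAYS_ACTIVATED = 1   # analogous to SAF-1
    D2_OUTPUT_BROKEN = 2             # analogous to SAF-0
    D3_WRONG_OUTPUT_ACTIVATED = 3    # another output taken instead of the intended one

def taken_edge(intended: int, fault: Optional[NodeFault], affected: int, n_edges: int) -> int:
    """Edge actually taken by a (possibly faulty) node."""
    if fault is NodeFault.D1_OUTPUT_ALWAYS_ACTIVATED:
        return affected                            # same edge regardless of the control value
    if fault is NodeFault.D2_OUTPUT_BROKEN and intended == affected:
        raise RuntimeError("the intended output is broken")
    if fault is NodeFault.D3_WRONG_OUTPUT_ACTIVATED and intended == affected:
        return (intended + 1) % n_edges            # some other edge is activated instead
    return intended

print(taken_edge(intended=2, fault=NodeFault.D1_OUTPUT_ALWAYS_ACTIVATED, affected=0, n_edges=4))  # 0
```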

4 Test Program Synthesis with HLDDs
Test algorithm:
Control: for D = 0, 1, 2, 3: y1 y2 y3 y4 = 0 0 D 2
Data: solution of R1 + R2 ≠ IN ≠ R1 ≠ R1 * R2 (the data rule is simplified)
Test program:
For D = 0, 1, 2, 3
Begin
  Load R1 = IN1          (initialization)
  Load R2 = IN2          (initialization)
  Apply IN = IN3, y1 y2 y3 y4 = 0 0 D 2   (test)
  Read R2                (observation)
End
Data constraint: (IN1 + IN2) ≠ IN3 ≠ IN1 ≠ (IN1 * IN2)
[Figure: HLDD of register R2 with control variables y1 … y4 and terminal nodes R1 + R2, IN, R1, R1 * R2.]
Advantages: straightforward synthesis procedure; compactness of the cycle-based test program.
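A minimal executable sketch (hypothetical helper, illustrative operand values) of the cycle-based test program synthesised from this HLDD; IN1, IN2, IN3 are assumed to satisfy the data constraint above:

```python
# Generates the cycle-based test program: one cycle per control value D of the node under test.
def synthesize_test(in1, in2, in3):
    program = []
    for d in range(4):
        program += [
            f"Load R1 = {in1}",                        # initialization
            f"Load R2 = {in2}",                        # initialization
            f"Apply IN = {in3}, y1..y4 = 0 0 {d} 2",   # test: activate one terminal node
            "Read R2",                                 # observation
        ]
    return program

# 3, 5, 9 satisfy (IN1+IN2) != IN3 != IN1 != (IN1*IN2): 8, 9, 3, 15 are pairwise distinct.
for line in synthesize_test(3, 5, 9):
    print(line)
```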

5 HLDDs of Microprocessors
Instruction set semantics (RT-level operations):

OP | B | Semantic          | RT-level operations
 0 | 0 | READ memory       | R(A1) = M(A), PC = PC + 2
 0 | 1 | WRITE memory      | M(A) = R(A2), PC = PC + 2
 1 | 0 | Transfer          | R(A1) = R(A2), PC = PC + 1
 1 | 1 | Complement        | R(A1) = ¬R(A2), PC = PC + 1
 2 | 0 | Addition          | R(A1) = R(A1) + R(A2), PC = PC + 1
 2 | 1 | Subtraction       | R(A1) = R(A1) - R(A2), PC = PC + 1
 3 | 0 | Jump              | PC = A
 3 | 1 | Conditional jump  | IF C = 1 THEN PC = A, ELSE PC = PC + 2

Example instruction code: ADD A1 A2 with OP = 2, B = 0, A1 = 3, A2 = 2 executes R3 = R3 + R2, PC = PC + 1.
[Figure: HLDD model of the processor - decision diagrams for PC and R(A1) with non-terminal nodes OP, B, C, A1, A2.]
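To make the RT-level semantics concrete, a minimal executable sketch (not from the original slides): state and function names are hypothetical, and the 8-bit wrap-around and the WRITE-memory PC update (PC = PC + 2, as the PC diagram implies for OP = 0) are assumptions.

```python
# Hypothetical simulator of the instruction table above.
def step(state, OP, B, A1=0, A2=0, A=0):
    R, M = state["R"], state["M"]
    if OP == 0 and B == 0:   R[A1] = M[A];                  state["PC"] += 2  # READ memory
    elif OP == 0 and B == 1: M[A] = R[A2];                  state["PC"] += 2  # WRITE memory
    elif OP == 1 and B == 0: R[A1] = R[A2];                 state["PC"] += 1  # Transfer
    elif OP == 1 and B == 1: R[A1] = ~R[A2] & 0xFF;         state["PC"] += 1  # Complement
    elif OP == 2 and B == 0: R[A1] = (R[A1] + R[A2]) & 0xFF; state["PC"] += 1 # Addition
    elif OP == 2 and B == 1: R[A1] = (R[A1] - R[A2]) & 0xFF; state["PC"] += 1 # Subtraction
    elif OP == 3 and B == 0: state["PC"] = A                                  # Jump
    elif OP == 3 and B == 1: state["PC"] = A if state["C"] == 1 else state["PC"] + 2  # Cond. jump

# Example from the slide: ADD with A1 = 3, A2 = 2  ->  R3 = R3 + R2, PC = PC + 1
state = {"R": [0, 1, 2, 3], "M": {}, "PC": 0, "C": 0}
step(state, OP=2, B=0, A1=3, A2=2)
print(state["R"][3], state["PC"])   # 5 1
```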

6 Uniform Conditional Node Fault Model
Test data calculation rules:
∀ mT ∈ MT(OP): f(mT) ≠ ZERO
∀ mi, mj ∈ MT(OP): (f(mi) & f(mj)) ≠ f(mi)
Terminal nodes: MT(OP) = {M(A), R(A2), R(A1) + R(A2), R(A1)}
Explanation of the meaning of the constraints (rules): for OP = 0, B = 0 the intended behaviour is R3 = M(A); in case of a fault another terminal node is activated in parallel and the result becomes R3 = M(A) & R(A2), so the chosen data must guarantee that this differs from M(A).
[Figure: multiplexer structure gating M(A), R(A2), R(A1) + R(A2) and R(A1) onto R3 through AND gates.]
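A minimal sketch (assumed 8-bit data, illustrative operand values) that checks the two test-data rules for the activated terminal nodes of this example:

```python
# Rule (1): f(mT) != 0 for every activated terminal node.
# Rule (2): f(mi) & f(mj) != f(mi) for every pair, so a parallel (ANDed) activation
#           of a wrong terminal node changes the observed result.

def terminal_values(M_A, R1, R2):
    # MT(OP) = { M(A), R(A2), R(A1)+R(A2), R(A1) }
    return [M_A, R2, (R1 + R2) & 0xFF, R1]

def satisfies_rules(values):
    if any(v == 0 for v in values):                       # rule (1)
        return False
    return all((vi & vj) != vi                            # rule (2)
               for i, vi in enumerate(values)
               for j, vj in enumerate(values) if i != j)

print(satisfies_rules(terminal_values(M_A=0b0011, R1=0b0101, R2=0b1001)))   # True
print(satisfies_rules(terminal_values(M_A=0b0011, R1=0b0101, R2=0b0110)))   # False: 0011 is covered by 1011
```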

7 Testing of Multiple Faults in Several Nodes
Fault model for non-terminal nodes. Test data synthesis rules:
∀ mT ∈ MT(OP): f(mT) ≠ ZERO
∀ mi, mj ∈ MT(OP): (f(mi) & f(mj)) ≠ f(mi)
∀ mi, mj ∈ MT(OP): ¬[f(mi) < f(mj)]
Activated terminal nodes: MT(OP) = {M(A), R(A2), R(A1) + R(A2), R(A1)}
Node A1 activated by the test: A1 = 0, expected result R0 = DATA. Activation caused by a fault: non-intended action R1 = DATA.
[Figure: HLDD with non-terminal nodes OP, B, A1 and terminal nodes M(A), R(A2), R(A1) - R(A2), R(A1) + R(A2), R(A1).]

8 Hard-to-Test Faults in Microprocessors
Instruction set: I0: C = A ∧ B; I1: C = ¬(A ∧ B); I2: D = A ∨ B; I3: D = ¬(A ∨ B).
[Figure: instruction decoder DC selecting I0 … I3 over operands A, B with result registers C and D, and the corresponding fault table.]
New fault type: added non-intended functionality, e.g. an OR-type short between the outputs 1 and 2 of the decoder, i.e. the instruction I1 additionally implies I2. Normally, when testing I1, we read only register C, but we do not read register D.
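A minimal sketch (hypothetical function, assumed 8-bit operands) of this decoder-short example, showing why reading only C after I1 cannot detect the added non-intended functionality:

```python
# OR-type short between decoder outputs 1 and 2: activating I1 also activates I2.
def run(instr, A, B, short_1_2=False):
    C = D = None
    active = {instr}
    if short_1_2 and 1 in active:
        active.add(2)
    if 0 in active: C = A & B                 # I0
    if 1 in active: C = ~(A & B) & 0xFF       # I1
    if 2 in active: D = A | B                 # I2
    if 3 in active: D = ~(A | B) & 0xFF       # I3
    return C, D

A, B = 0b0101, 0b0011
good = run(1, A, B, short_1_2=False)
bad  = run(1, A, B, short_1_2=True)
print(good[0] == bad[0])   # True  -> reading only C after I1 does not expose the fault
print(good[1] == bad[1])   # False -> the fault is visible only if D is read as well
```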

9 Mapping of Structural and Functional Faults
Mapping low-level structural faults into high-level functional faults. Covered structural fault classes: local SAF, global SAF-1, global SAF-0, OR bridge, AND bridge.

Fault fi | Control word | Covered structural faults
f0       | 000          | (0,1), (0,2), (0,4); 1, 2, 4
f1       | 001          | (1,0), (1,3), (1,5); 3, 5
f2       | 010          | (2,0), (2,3), (2,6); 3, 6
f3       | 011          | (3,1), (3,2), (3,7); 7; 1, 2
f4       | 100          | (4,0), (4,5), (4,6); 5, 6
f5       | 101          | (5,1), (5,4), (5,7); 1, 4
f6       | 110          | (6,2), (6,4), (6,7); 2, 4
f7       | 111          | (7,3), (7,5), (7,6); 3, 5, 6

10 Fault Coverage Table and FC Measure
Functional fault model: ∀j [fj ≠ ZERO]; ∀i,j: ∃k [fi,k < fj,k].
Fault coverage measure: percentage of 1s in the fault coverage table, where a 1 means that the constraint fi,k < fj,k is satisfied by at least one pair of data operands.
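A minimal sketch (hypothetical table values) of the fault coverage measure described above:

```python
# The table holds a 1 wherever a constraint is satisfied by at least one pair of operands.
def fault_coverage(table):
    """Percentage of 1s in the fault coverage table."""
    cells = [cell for row in table for cell in row]
    return 100.0 * sum(cells) / len(cells)

# Illustrative table only: rows = functional faults, columns = constraints.
table = [
    [1, 1, 0, 1],
    [1, 0, 1, 1],
    [1, 1, 1, 0],
]
print(f"{fault_coverage(table):.1f} %")   # 75.0 %
```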

11 Testing of ALU Operations
Testing of terminal nodes of ALU operations.
[Figure: HLDD of the ALU with control variable ctrALU selecting among the terminal nodes D1, D1 + D2, D1 - D2, D1 & D2, INs; table of test data rows No 1 … 7 with operand columns D1, D2 for MOV, ADD (D1 + D2), SUB (D1 - D2), AND, ….]
Test program:
WR (initialization of all R)
FOR ctrALU ∈ {0, …, 4}
  FOR data ∈ DATA set
    WR(data), op(ctrALU), RD(ALU)
  END FOR data
END FOR ctrALU
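A minimal executable sketch of this test-program skeleton (hypothetical helper names; wr/op/rd only record the generated instruction sequence):

```python
def alu_test_program(data_set):
    prog = ["WR  (initialization of all R)"]
    for ctr_alu in range(5):                 # ctrALU in {0, ..., 4}
        for d1, d2 in data_set:              # one data pattern per iteration
            prog += [f"WR  D1={d1:#04x}, D2={d2:#04x}",
                     f"OP  ctrALU={ctr_alu}",
                     "RD  (ALU result)"]
    return prog

# Illustrative data set only.
for line in alu_test_program([(0x55, 0x33), (0xAA, 0xCC)]):
    print(line)
```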

12 Test Data Generation for ADD Operation
Testing the ALU operations (bit slices: 0-bit, 1-bit, 2-bit, 3-bit).
[Figure: the same ALU HLDD as on the previous slide; table of ADD test data rows No 1 … 7 with operand columns D1, D2.]
Test program:
WR (initialization of all R)
FOR ctrALU ∈ {0, …, 4}
  FOR data ∈ DATA set
    WR(data), op(ctrALU), RD(ALU)
  END FOR data
END FOR ctrALU

13 Experiments with an 8-bit ALU (16 instructions)

No | Test generation method              | Test length (#data / #instr) | Fault coverage (high-level / SAF, %)
M1 | Deterministic ATPG gate-level test  |    - / 68                    | 57.23 / 100
M2 | Random                              | 1000 / 16000                 | 98.85 / 100
M3 | Pseudo-exhaustive control part test |    4 / 64                    | 85.0 / 99.9

M1: Because of the low high-level fault coverage, the test gives no information about the quality of multiple fault detection.

14 Experiments with an 8-bit ALU (16 instructions)

No | Test generation method              | Test length (#data / #instr) | Fault coverage (high-level / SAF, %)
M1 | Deterministic ATPG gate-level test  |    - / 68                    | 57.23 / 100
M2 | Random                              | 1000 / 16000                 | 98.85 / 100
M3 | Pseudo-exhaustive control part test |    4 / 64                    | 85.0 / 99.9

Pseudo-exhaustive control part test: for two operands, all combinations 00, 01, 10, 11 are applied to each data bit in parallel (data patterns No 1 … 4 for D1, D2).
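A minimal sketch of the pseudo-exhaustive data for the control-part test: applying the combinations 00, 01, 10, 11 to every bit in parallel needs only 4 operand pairs, which with 16 instructions gives the 64 test instructions listed for M3 in the table.

```python
WIDTH = 8
ALL_ONES = (1 << WIDTH) - 1

operand_pairs = [
    (0, 0),                  # every bit pair sees 0,0
    (0, ALL_ONES),           # every bit pair sees 0,1
    (ALL_ONES, 0),           # every bit pair sees 1,0
    (ALL_ONES, ALL_ONES),    # every bit pair sees 1,1
]
for d1, d2 in operand_pairs:
    print(f"D1={d1:08b}  D2={d2:08b}")
```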

15 Experiments with an 8-bit ALU (16 instructions)

No | Test generation method              | Test length (#data / #instr) | Fault coverage (high-level / SAF, %)
M1 | Deterministic ATPG gate-level test  |    - / 68                    | 57.23 / 100
M2 | Random                              | 1000 / 16000                 | 98.85 / 100
M3 | Pseudo-exhaustive control part test |    4 / 64                    | 85.0 / 99.9
M4 | Pseudo-exhaustive data part test    |   77 / 95                    | 90.3 / 99.1

M4 is the classical approach: each instruction is exercised with dedicated test data (operands). The result of 90.3% (not 100%) is the answer to the question: "Is the proposed approach reasonable?"

16 Experiments with an 8-bit ALU (16 instructions)

No | Test generation method              | Test length (#data / #instr) | Fault coverage (high-level / SAF, %)
M1 | Deterministic ATPG gate-level test  |    - / 68                    | 57.23 / 100
M2 | Random                              | 1000 / 16000                 | 98.85 / 100
M3 | Pseudo-exhaustive control part test |    4 / 64                    | 85.0 / 99.9
M4 | Pseudo-exhaustive data part test    |   77 / 95                    | 90.3 / 99.1
M5 | Combined (M3 + M4)                  |   81 / 159                   | 97.7 / 100

Test M5 is twice as long as M1, but its confidence of detecting multiple faults (97.7%) is considerably higher than M1's 57%.

17 Fault Table: Pseudo-Exhaustive Test (M4)
M4: It was possible to prove the redundancy of 287 high-level faults among all 1920 faults.
[Figure: fragment of the fault table with data patterns 00, 01, 10, 11 against the OR, AND, XOR operations.]
Functional fault model: ∀j [fj ≠ ZERO]; ∀i,j: ∃k [fi,k < fj,k].

18 Why Fault Masking Is an Important Issue
[Figure: diagnosis flow. The fault table and the test result (passed/failed tested faults) feed the diagnosis. Under the single-fault assumption (devil's advocate approach) the fault candidates follow directly from the fault table; when multiple faults are allowed, fault masking leaves the candidate set uncertain, and the angel's advocate approach instead proves parts of the design OK in order to narrow the fault candidates.]

19 HLDD for a Data-Path
[Figure: data path with registers R0 … R3, ALU inputs D1 and D2, input IN, output OUT, control signals RD, WR, ctrALU, ctr1, ctr2, d, and a +1 incrementer, together with the corresponding (read) and (write) HLDDs.]

20 Multi-Cycle HLDD for a Data-Path
[Figure: multi-cycle HLDD built over the same data path (ALU, D2, IN, OUT, RD, WR, ctrALU, ctr1, ctr2, d, +1).]

21 Fault Candidates Reasoning
WRITE and READ activate a multi-cycle path. Fault candidates: the nodes along the activated (red) path.

22 Fault Candidates and Fault Masking
WRITE and READ activate a path. The red part is exercised, but the blue part may also be affected by erroneous accesses. [Figure labels: WRITE faults at node d, READ faults at node ctr1.]

23 Test Data Generation for READ-WRITE Test
Two tests for the node ctr1.
(1) Initialization (WR): WR(R0 = 0001), WR(R1 = 0010), WR(R2 = 0100), WR(R3 = 1000); then READ all.
Functional fault model: ∀j [fj ≠ ZERO]; ∀i,j: ∃k [fi,k < fj,k].

24 Test Data Generation for READ-WRITE Test
Two tests for the node ctr1.
(2) Initialization (WR): WR(R0 = 1110), WR(R1 = 1101), WR(R2 = 1011), WR(R3 = 0111); then READ all.
Functional fault model: ∀j [fj ≠ ZERO]; ∀i,j: ∃k [fi,k < fj,k].
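A minimal sketch (hypothetical helper name) of how the two initialization patterns for n registers can be generated: test (1) uses a walking-one pattern, test (2) its bitwise complement, exactly the values listed above for n = 4.

```python
def walking_patterns(n_regs):
    width = n_regs                                         # one distinguishing bit per register
    ones = [1 << i for i in range(n_regs)]                 # 0001, 0010, 0100, 1000
    zeros = [p ^ ((1 << width) - 1) for p in ones]         # 1110, 1101, 1011, 0111
    return ones, zeros

ones, zeros = walking_patterns(4)
print([f"{p:04b}" for p in ones])    # ['0001', '0010', '0100', '1000']
print([f"{p:04b}" for p in zeros])   # ['1110', '1101', '1011', '0111']
```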

25 Test Group for READ
Initialization (WR): WR(R0 = 0001), WR(R1 = 0010), WR(R2 = 0100), WR(R3 = 1000).
High-level single node test (test group for a single node), reading the registers in ascending order:
RD(R0 = 0001)
RD(R1 = 0010) + non-intended WR(R0 = 0010) - fault F*
RD(R2 = 0100)
RD(R3 = 1000)
F* was overwritten: the corrupted R0 is not read again in this group, so the fault stays masked.
High-level single node test pair (a pair of test groups), reading the registers in the opposite order:
RD(R3 = 1000), …, RD(R0 = 0010) - F* is now detected, because R0 is read after the non-intended write.

26 Test Groups for READ & WRITE
Angel’s approach to prove the correctness of Write ja READ (ctr1 & d): ctr1: (WR, RD) For data: / (WR, RD) / d: (WR, RD) The next step is to test ALU (ctrAlu; D1, D1+ D2, etc.) as described before Next step: ALU test (ctrAlu) 0001 0010 0100 1000 And the same for data: 1110 1101 1011 0111 Test length for R&W: 12n n – number of registers

