Embedded Systems: Testing. Combinational Logic Main issues in testing embedded systems hardware: The world is ANALOG, not digital; even in designing.


1 Embedded Systems: Testing

2 Combinational Logic

3 Main issues in testing embedded systems hardware:
The world is ANALOG, not digital; even in designing combinational logic, we need to take analog effects into account.
Software doesn't change, but hardware does:
--different manufacturers of the same component
--different batches from the same manufacturer
--environmental effects
--aging
--noise
Main areas of concern:
--signal levels ("0", "1": min, typical, and max values; fan-in, fan-out)
--timing: rise and fall times; propagation delay; hazards and race conditions
--how to deal with the effects of unwanted resistance, capacitance, and inductance

4 Testing combinational circuits
Fault: an unsatisfactory condition or state; can be constant, intermittent, or transient; can be static or dynamic.
Error: static; inherent in the system; most can be caught.
Failure: an undesired dynamic event occurring at a specific time, typically random; often caused by breakage or aging; not all failures can be designed away.
Physical faults: single-fault model
Logical faults:
--Structural: arise from interconnections
--Functional: arise within a component

5 Structural faults: Stuck-at model (a single-fault model): s-a-1, s-a-0; may be caused by an open circuit or a short

6 fig_02_39 Testing combinational circuits: s-a-0 fault

7 fig_02_40 Modeling s-a-0 fault:

8 fig_02_41 S-a-1 fault

9 fig_02_42 Modeling s-a-1 fault:

10 fig_02_43 Open circuit fault; appears as a s-a-0 fault
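The stuck-at faults on the slides above can be mimicked in software. Below is a minimal sketch, with illustrative function names not taken from the slides, that injects a stuck-at fault into a 2-input AND gate and checks whether a given test vector detects it, i.e., whether the good and faulty outputs differ:

```python
# A minimal sketch (names are illustrative): inject a stuck-at fault
# into a 2-input AND gate and test whether a vector detects it.

def and_gate(a, b, fault=None):
    """2-input AND; 'fault' is (node, value), e.g. ('a', 0) = input a stuck-at-0."""
    if fault is not None and fault[0] == 'a':
        a = fault[1]
    if fault is not None and fault[0] == 'b':
        b = fault[1]
    out = a & b
    if fault is not None and fault[0] == 'out':
        out = fault[1]
    return out

def detects(vector, fault):
    """A vector detects a fault iff the good and faulty outputs differ."""
    a, b = vector
    return and_gate(a, b) != and_gate(a, b, fault)

print(detects((1, 1), ('a', 0)))  # True: good output 1, faulty output 0
print(detects((0, 1), ('a', 1)))  # True: good output 0, faulty output 1
print(detects((0, 0), ('a', 0)))  # False: both outputs 0, fault masked
```

A full test set would cover every s-a-0 and s-a-1 fault on every node; the point of the single-fault model is that this set stays small.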

11 fig_02_49, fig_02_50 Functional faults
Example: hazards, race conditions
Two possible modeling methods:
--Method A: consider devices to be delay-free and add a spike generator
--Method B: add delay elements on paths
As frequencies increase, eliminating hazards through good design becomes even more important.

12 The above examples refer to physical faults or performance faults; these are less common in FPGAs. Our main concern here is with SPECIFICATION and DESIGN/IMPLEMENTATION faults, which are similar to software problems. The main source of error is human error, and software testing strategies are therefore applicable.

13 Testing for: Storage Elements; Finite State Machines; Sequential Logic

14 fig_03_30, 03_31, 03_32, 03_33 Johnson counter (2-bit): shift register + feedback input; often used in embedded applications. Its states form a Gray code, so states can be decoded using combinational logic without race conditions or hazards.

15 fig_03_34 3-stage Johnson counter:
--Output is a Gray sequence; no decoding spikes
--not all 2^3 (2^n) states are legal; the period is 2n (here 2*3 = 6)
--unused states are illegal; the circuit must be prevented from ever entering these states
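The 3-stage counter's behavior can be checked with a short simulation. This sketch shows the 2n-state period and finds the 2^n − 2n unused states:

```python
# Sketch: a 3-stage Johnson counter in software, showing the 2n-state
# period and the unused (illegal) states.

def johnson_step(state):
    """Shift right, feeding the inverted last bit back into the first stage."""
    return (1 - state[-1],) + state[:-1]

state = (0, 0, 0)
cycle = []
for _ in range(6):          # the period is 2n = 6, not 2^3 = 8
    cycle.append(state)
    state = johnson_step(state)

print(cycle)                # consecutive states differ in exactly one bit
assert state == (0, 0, 0)   # back to the start after 2n steps

all_states = {(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)}
illegal = all_states - set(cycle)
print(sorted(illegal))      # the two states the circuit must never enter
```

Because adjacent states differ in one bit (a Gray sequence), decoding logic sees no spikes; the illegal states are why real designs add reset or self-correcting logic.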

16 Making actual working circuits: must consider
--timing in latches and flip-flops
--clock distribution
--how to test sequential circuits (with n flip-flops there are potentially 2^n states, a large number; access to individual flip-flops for testing must also be carefully planned)

17 fig_03_36, 03_37 Timing in latches and flip-flops:
Setup time: how long must inputs be present and stable before the gate or clock changes state?
Hold time: how long must inputs remain stable after the gate or clock has changed state?
Metastable oscillations can occur if timing requirements are not met.
(Figures: setup and hold times for a gated latch enabled by a logical 1 on the gate)
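The setup/hold rule can be stated as a one-line check. In this sketch the function name and the hold-time value are illustrative (the slides give a 16 ns setup time later; the 5 ns hold time is assumed):

```python
# Sketch: is a data transition safe relative to a clock edge?
# Data must be stable t_su before the edge and t_h after it.
# Times in ns; the 5 ns hold time is an assumed illustrative value.

def timing_ok(data_change_ns, clock_edge_ns, t_su_ns, t_h_ns):
    """True if the data change lies outside the setup/hold window."""
    return (data_change_ns <= clock_edge_ns - t_su_ns or
            data_change_ns >= clock_edge_ns + t_h_ns)

# clock edge at t = 100 ns, setup 16 ns, hold 5 ns
print(timing_ok(80.0, 100.0, 16.0, 5.0))   # True: changed 20 ns before the edge
print(timing_ok(90.0, 100.0, 16.0, 5.0))   # False: inside the setup window
print(timing_ok(103.0, 100.0, 16.0, 5.0))  # False: inside the hold window
```

A change inside the forbidden window is exactly the condition under which metastable oscillation can occur.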

18 fig_03_38 Example: positive edge triggered FF; 50% point of each signal

19 fig_03_39, 03_40 Propagation delay: minimum, typical, and maximum values, specified with respect to the causative edge of the clock. For a latch, the delay when the gate is enabled must also be specified.

20 fig_03_41, 03_42 Timing margins. Example: increasing the frequency of a 2-stage Johnson counter; the output from either FF is 00110011….
Assume t_PDLH = 5-16 ns, t_PDHL = 7-18 ns, t_su = 16 ns

21 Case 1: L-to-H transition of Q_A
Clock period = t_PDLH + t_su + slack, with slack ≥ 0, so the period must be at least t_PDLH + t_su.
With min t_PDLH: F_max = 1/[(5 + 16) × 10^-9 s] ≈ 48 MHz; with max t_PDLH: F_max = 1/[(16 + 16) × 10^-9 s] ≈ 31.3 MHz.
Case 2: H-to-L transition: similar calculations give F_max = 43.5 MHz or 29.4 MHz.
Conclusion: F_max cannot exceed 29.4 MHz if correct behavior is to be guaranteed in all cases.

22 Clocks and clock distribution: --frequency and frequency range --rise times and fall times --stability --precision

23 fig_03_43 Clocks and clock distribution:
For a frequency lower than the input's, we can use a divider circuit (above).
For a higher frequency, we can use a phase-locked loop:

24 fig_03_44 Selecting portion of clock: rate multiplier

25 fig_03_46 Note: delays can accumulate

26 fig_03_47 Clock design and distribution: Need precision Need to decide on number of phases Distribution: need to be careful about delays Example: H-tree / buffers

27 fig_03_48 Testing: the scan path is the basic tool

28 fig_03_56 Testing FSMs:
Real-world FSMs are often only weakly connected, i.e., we cannot get from any state S1 to any state S2 (though we could if we treated the transition diagram as an UNDIRECTED graph).
Strongly connected: we can get from any initial state S_initial to any state S_j; a sequence of inputs which accomplishes this is called a transfer sequence.
Homing sequence: produces a unique, known destination state after it is applied.
Inputs: I_test = I_homing + I_transfer
Finding a fault requires a distinguishing sequence.
(Figures: strongly connected and weakly connected examples)
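A transfer sequence is just a shortest path in the state-transition graph. This sketch finds one by breadth-first search; the three-state machine is illustrative, not the one in the slides:

```python
# Sketch: finding a transfer sequence (input string driving the machine
# from one state to another) by BFS over a small, illustrative FSM.
from collections import deque

# next_state[state][input] -> state
next_state = {
    'A': {0: 'A', 1: 'B'},
    'B': {0: 'C', 1: 'B'},
    'C': {0: 'A', 1: 'C'},
}

def transfer_sequence(start, goal):
    """Shortest input sequence from 'start' to 'goal', or None."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        state, inputs = queue.popleft()
        if state == goal:
            return inputs
        for sym, nxt in next_state[state].items():
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, inputs + [sym]))
    return None  # unreachable: the machine is not strongly connected

print(transfer_sequence('A', 'C'))  # [1, 0]: A --1--> B --0--> C
```

If `transfer_sequence` returns None for some pair of states, the machine is only weakly connected and that pair cannot be exercised directly during test.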

29 fig_03_57 Basic testing setup:

30 fig_03_58

31 fig_03_59 Example: machine specified by the table below, with its successor tree

32 fig_03_63 Example: recognize 1010
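A "recognize 1010" machine can be written as a small state table. This sketch uses the standard construction where each state records the length of the matched prefix, allowing overlaps; it may differ in detail from the machine in fig_03_63:

```python
# Sketch: a recognizer for the pattern 1010 as a state table, where
# state = length of the currently matched prefix. Overlaps are allowed.

TRANSITIONS = [
    {1: 1, 0: 0},  # state 0: nothing matched
    {1: 1, 0: 2},  # state 1: matched "1"
    {1: 3, 0: 0},  # state 2: matched "10"
    {1: 1, 0: 4},  # state 3: matched "101"; a 0 completes "1010"
    {1: 3, 0: 0},  # state 4: accepted; on a 1 the suffix "101" is matched
]

def recognize(bits):
    """Return the indices at which an occurrence of 1010 completes."""
    state, hits = 0, []
    for i, b in enumerate(bits):
        state = TRANSITIONS[state][b]
        if state == 4:
            hits.append(i)
    return hits

print(recognize([1, 0, 1, 0, 1, 0]))  # [3, 5]: the second match overlaps the first
print(recognize([1, 1, 0, 0]))        # []: no occurrence
```

Tables like this are also the starting point for deriving homing and distinguishing sequences, since every transition is explicit.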

33 fig_03_65 Scan path

34 fig_03_66 Standardized boundary scan architecture Architecture and unit under test

35 Testing: General Requirements DFT Multilevel Testing-- System, Black Box, White Box Tests

36 Testing--General Requirements
Testing must be:
--thorough
--ongoing
--DEVELOPED WITH DESIGN (DFT--design for test); note: this implies that several LEVELS of testing will be carried out
--efficient

37 Good, Bad, and Successful Tests good test: has a high probability of finding an error ("bad test": not likely to find anything new) successful test: finds a new error

38 Most Effective Testing Is Independent most effective testing: by an "independent” third party Question: what does this imply about your team testing strategy for the quarter project?

39 How Thoroughly Can We Test?
Example: a VLSI chip with 200 inputs and 2000 flip-flops (one-bit memory cells).
How many exhaustive tests are needed? What is the overall time to test at 1 test/ms? 1 test/µs? 1 test/ns?
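The arithmetic behind the question can be sketched as follows. The per-test rates (1 test/ms, µs, ns) are assumed, since the units were garbled in the transcript:

```python
# Back-of-envelope arithmetic for the slide's question. Rates are
# assumed (the original slide's units were garbled in transcription).
import math

inputs, flipflops = 200, 2000
log10_tests = (inputs + flipflops) * math.log10(2)  # log10 of 2^2200
print(f"exhaustive tests ~ 10^{log10_tests:.0f}")   # ~10^662 patterns

seconds_per_year = 3600.0 * 24 * 365
for name, tests_per_sec in [("1 test/ms", 1e3),
                            ("1 test/us", 1e6),
                            ("1 test/ns", 1e9)]:
    log10_years = (log10_tests - math.log10(tests_per_sec)
                   - math.log10(seconds_per_year))
    print(f"{name}: ~10^{log10_years:.0f} years")
```

Even at a billion tests per second the time is on the order of 10^646 years, which is why exhaustive testing of sequential hardware is hopeless and DFT, scan paths, and fault models are needed.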

40 Design for Testability (DFT)--what makes a component "testable"?
--operability: few bugs, incremental test
--observability: you can see the results of the test
--controllability: you can control state + input to test
--decomposability: you can decompose into smaller problems and test each separately
--simplicity: you choose the "simplest solution that will work"
--stability: the same test will give the same results each time
--understandability: you understand the component, its inputs, and its outputs

41 Testing strategies:
Verification--are the functions correctly implemented?
Validation--are we implementing the correct functions (according to requirements)?

42 Spiral design/testing strategy
A general design/testing strategy can be described as a "spiral":
requirements → design → code
system test (system) ← module/integration tests (black box) ← unit test (white box)
When is testing complete? One model is the "logarithmic Poisson model":
f(t) = (1/p) ln(I_0 p t + 1)
f(t) = cumulative expected failures at time t
I_0 = failures per time unit at the beginning of testing
p = reduction rate in failure intensity
(Spiral diagram labels, START to END: Requirements, Specs/System Tests; Design/Integration Tests; Design/Module Tests; Implement/Unit Tests)
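The logarithmic Poisson model above is easy to evaluate directly. In this sketch the parameter values are illustrative, not from the slides:

```python
# The model quoted above, f(t) = (1/p) * ln(I0*p*t + 1), as code.
# Parameter values below are illustrative, not from the slides.
import math

def cumulative_failures(t, i0, p):
    """Expected cumulative failures by time t under the log-Poisson model."""
    return (1.0 / p) * math.log(i0 * p * t + 1.0)

i0, p = 10.0, 0.05  # assumed: 10 failures/unit time initially, 5% reduction rate
for t in (0, 10, 100, 1000):
    print(t, cumulative_failures(t, i0, p))
# Failures accumulate ever more slowly: the failure intensity
# f'(t) = I0 / (I0*p*t + 1) decays toward zero as testing proceeds.
```

A team can fit I_0 and p to its own failure log and use the flattening of f(t) as one signal that testing is approaching completeness.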

43 Types of testing:
--white box--"internals" (also called "glass box")
--black box--modules and their "interfaces" (also called "behavioral")
--system--"functionality" (can be based on specs, use cases)
--(application-specific)

44 Good testing strategy
Steps in a good test strategy:
--quantified requirements and test objectives
--explicit, clear user requirements
--use "rapid cycle testing"
--build self-testing software
--filter errors by technical reviews
--also review test cases and strategy formally
--continually improve the testing process

45 Black box testing guidelines
General guidelines:
--test BOUNDARIES
--test output also
--choose "orthogonal" cases if possible


