Presentation on theme: "Embedded Systems: Testing. Testing needs to occur at many levels and at many stages in the development process. Testing is needed to verify adherence."— Presentation transcript:

1 Embedded Systems: Testing

2 Testing needs to occur at many levels and at many stages in the development process. Testing is needed to verify adherence to specifications, fault-free execution, reliability, safety, security, etc. Here we will focus on testing hardware for functionality and timing. Later on we will address additional aspects of testing, including software and system testing. Before addressing hardware testing specifically, we need to look at some general principles of testing.

3 What exactly are we "testing"? It might be one or more of:
--correct specification
--correct "behavior" (functionality)
--timing
--desired power management
--meets physical / environmental specifications
--reliability
--safety
--security
--user-friendliness / usability
--etc.
Different areas require different types of analysis, but some basic principles apply in more than one area.

4 Important general concepts: DFT; multilevel testing--system, black box, and white box tests.

5 Testing--general requirements:
--thorough
--ongoing
--DEVELOPED WITH THE DESIGN (DFT--design for test); note: this implies that several LEVELS of testing will be carried out
--efficient

6 Good, bad, and successful tests:
--good test: has a high probability of finding an error (a "bad test" is not likely to find anything new)
--successful test: finds a new error

7 Most effective testing: by an "independent" third party. Question: what does this imply about your team's testing strategy for class projects?

8 How thoroughly can we test? Example: a VLSI chip with 200 inputs and 2000 flip-flops (one-bit memory cells). How many exhaustive tests are needed? What is the overall time to test if we can do 1 test/msec? 1 test/nsec?
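As a rough back-of-the-envelope sketch (the arithmetic below is an illustration, not part of the slides), exhaustive testing of 200 inputs plus 2000 state bits needs 2^2200 test patterns, which is hopeless at any test rate:

```python
# Back-of-the-envelope: exhaustive testing of a circuit with 200 inputs
# and 2000 flip-flops means one test per input/state combination.
tests = 2 ** (200 + 2000)                      # 2^2200 combinations

SECONDS_PER_YEAR = 3600 * 24 * 365
for tests_per_sec, label in [(10**3, "1 test/msec"), (10**9, "1 test/nsec")]:
    years = tests // tests_per_sec // SECONDS_PER_YEAR
    print(f"{label}: about 10^{len(str(years)) - 1} years")
# Either way the answer dwarfs the ~1.4e10-year age of the universe,
# which is why exhaustive testing is abandoned in favor of DFT.
```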

9 Design for Testability (DFT)--what makes a component "testable"?
--operability: few bugs, incremental test
--observability: you can see the results of the test
--controllability: you can control state + input to test
--decomposability: you can decompose into smaller problems and test each separately
--simplicity: you choose the "simplest solution that will work"
--stability: the same test will give the same results each time
--understandability: you understand the component, its inputs, and its outputs

10 Testing strategies:
--verification: the functions are correctly implemented
--validation: we are implementing the correct functions (according to requirements / specifications)

11 Spiral design/testing strategy: a general design/testing strategy can be described as a "spiral" that pairs each development stage with a level of testing--requirements and specs with system tests, design with integration ("black box") tests and module tests, and implementation (code) with unit ("white box") tests.
When is testing complete? One model is the "logarithmic Poisson model":
f(t) = (1/p) ln(I_0 * p * t + 1)
where f(t) = cumulative expected failures at time t, I_0 = failures per time unit at the beginning of testing, and p = reduction rate in failure intensity.
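A small numeric sketch of this model (the parameter values below are made-up examples, not from the slides):

```python
import math

def expected_failures(t, I0, p):
    """Logarithmic Poisson model: cumulative expected failures after t
    units of testing, given initial failure intensity I0 (failures per
    time unit) and intensity-reduction rate p."""
    return (1.0 / p) * math.log(I0 * p * t + 1.0)

# illustrative (assumed) parameters: 10 failures/week initially, p = 0.05
for week in (1, 4, 16, 64):
    print(week, round(expected_failures(week, I0=10.0, p=0.05), 1))
# The curve flattens: each additional week exposes fewer new failures,
# which is one way to judge when testing is "complete".
```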

12 Types of testing:
--white box: component "internals" (also called "glass box")
--black box: modules / components and their "interfaces" (also called "behavioral")
--system: "functionality" (can be based on specs, use cases)
--(application-specific)

13 Steps in a good test strategy:
--requirements quantified
--test objectives explicit
--user requirements clear
--use "rapid cycle testing"
--build self-testing software
--filter errors by technical reviews
--formally review test cases and the test strategy as well
--continually improve the testing process

14 Black box testing--general guidelines:
--test BOUNDARIES ("corner cases")
--test the output also
--choose "orthogonal" cases if possible
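For instance, a minimal sketch of boundary-value test selection for a hypothetical 8-bit saturating adder (the function sat_add8 and the cases below are illustrative assumptions, not from the slides):

```python
# Hypothetical unit under test: an 8-bit saturating add (illustration only).
def sat_add8(a, b):
    return min(a + b, 255)

# Black-box boundary ("corner") cases: exercise the edges of the 8-bit
# range and the point where saturation kicks in, and check the OUTPUT too.
cases = [
    (0,   0,   0),     # lower corner
    (0,   255, 255),   # one operand at the upper corner
    (254, 1,   255),   # just reaches the limit
    (255, 1,   255),   # just past the limit -> must saturate
    (255, 255, 255),   # both operands at the upper corner
]
for a, b, expected in cases:
    assert sat_add8(a, b) == expected, (a, b, sat_add8(a, b))
print("all boundary cases pass")
```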

15 Hardware [& software] testing. Goals: reliability / fault-free / fault-tolerant operation. Sources of "error" span the development process (requirements, specification, design, implementation, test, maintenance): "human" error can enter at every one of these steps, while "other causes" include fabrication, measurement, aging, environment, operating conditions, and misuse. Testing is needed at the hardware level, the software level, and at mixed hardware/software levels.

16 Testing: Combinational Logic

17 Main issues in testing embedded systems hardware: the world is ANALOG, not digital; even in designing combinational logic, we need to take analog effects into account. Software doesn't change but hardware does:
--different manufacturers of the same component
--different batches from the same manufacturer
--environmental effects
--aging
--noise
Main areas of concern:
--signal levels ("0", "1"--min, typical, and max values; fan-in, fan-out)
--timing--rise & fall times; propagation delay; hazards and race conditions
--how to deal with the effects of unwanted resistance, capacitance, and inductance

18 Testing combinational circuits:
--Fault: an unsatisfactory condition or state; can be constant, intermittent, or transient; can be static or dynamic
--Error: static; inherent in the system; most can be caught
--Failure: an undesired dynamic event occurring at a specific time--typically random--often caused by breakage or age; failures cannot all be designed away
Physical faults: one-fault model. Logical faults: structural (from interconnections) or functional (within a component).

19 Structural faults--the stuck-at model (a single-fault model): s-a-1 or s-a-0; the cause may be an open circuit or a short.
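As a rough illustration of how a single stuck-at fault is detected (a sketch using a made-up two-gate circuit, not the circuit in the figures that follow): a test vector detects the fault if it makes the faulty circuit's output differ from the good circuit's output.

```python
from itertools import product

# Hypothetical circuit for illustration: f = (a AND b) OR c.
def good(a, b, c):
    return (a & b) | c

def faulty_ab_sa0(a, b, c):
    ab = 0                      # internal node (a AND b) stuck at 0
    return ab | c

detecting = [v for v in product((0, 1), repeat=3)
             if good(*v) != faulty_ab_sa0(*v)]
print(detecting)                # -> [(1, 1, 0)]: a=b=1 activates the fault,
                                #    c=0 propagates it to the output
```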

20 fig_02_39 Testing combinational circuits: s-a-0 fault

21 fig_02_40 Modeling s-a-0 fault:

22 fig_02_41 S-a-1 fault

23 fig_02_42 Modeling s-a-1 fault:

24 fig_02_43 Open circuit fault; appears as a s-a-0 fault

25 fig_02_49, fig_02_50 Functional faults. Example: hazards and race conditions. Two possible modeling methods: (A) consider the devices to be delay-free and add a spike generator; (B) add delay elements on the paths. As frequencies increase, eliminating hazards through good design becomes even more important.
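A small sketch of method B's idea--adding explicit delays to expose a hazard (the circuit f = A·B + A'·C and the unit gate delays are illustrative assumptions, not the circuit in the figures):

```python
# Static-1 hazard in f = (A AND B) OR (NOT A AND C), every gate modeled
# with one unit of delay.  With B = C = 1, f should stay 1 when A
# switches, but the inverter's delay lets both AND terms be 0 briefly.
T = 12
A = [1 if t < 5 else 0 for t in range(T)]          # A falls at t = 5
B = C = 1
notA, and1, and2, f = [0]*T, [0]*T, [0]*T, [0]*T
notA[0], and1[0], and2[0] = 1 - A[0], A[0] & B, (1 - A[0]) & C
f[0] = and1[0] | and2[0]
for t in range(1, T):
    notA[t] = 1 - A[t-1]                           # inverter, 1 unit delay
    and1[t] = A[t-1] & B                           # AND gate, 1 unit delay
    and2[t] = notA[t-1] & C                        # AND gate, 1 unit delay
    f[t]    = and1[t-1] | and2[t-1]                # OR gate, 1 unit delay
print(f)   # -> [1,1,1,1,1,1,1,0,1,1,1,1]: a transient 0 (the hazard)
```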

26 The above examples refer to physical faults or performance faults. These are less common in FPGAs. Our main concern here is with SPECIFICATION and DESIGN/IMPLEMENTATION faults--these are similar to software problems: the main source of error is "human error", and software testing strategies are applicable.

27 Testing for: Storage Elements; Finite State Machines; Sequential Logic

28 fig_03_30, 03_31, 03_32, 03_33 Johnson counter (2-bit): a shift register with a feedback input; often used in embedded applications. The states form a Gray code; thus the states can be decoded using combinational logic, and there will not be any race conditions or hazards.

29 fig_03_34 3-stage Johnson counter:
--the output is a Gray sequence--no decoding spikes
--not all 2^3 (2^n) states are legal--the period is 2n (here 2*3 = 6)
--the unused states are illegal; the circuit must be prevented from ever entering these states
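A quick sketch that enumerates the 3-stage sequence and the two illegal states (the bit-ordering convention below is an assumption made for illustration):

```python
from itertools import product

def johnson_next(state):
    """One clock of an n-stage Johnson (twisted-ring) counter: shift the
    register and feed the INVERTED last stage back into the first."""
    return (1 - state[-1],) + state[:-1]

state, seen = (0, 0, 0), []
while state not in seen:
    seen.append(state)
    state = johnson_next(state)
print(seen)        # 6 legal states (period 2n = 6); each step changes 1 bit
print([s for s in product((0, 1), repeat=3) if s not in seen])
                   # -> the 2^n - 2n = 2 illegal states: (0,1,0) and (1,0,1)
```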

30 Making actual working circuits--must consider:
--timing in latches and flip-flops
--clock distribution
--how to test sequential circuits (with n flip-flops there are potentially 2^n states, a large number; access to individual flip-flops for testing must also be carefully planned)

31 fig_03_36, 03_37 Timing in latches and flip-flops. Setup time: how long must the inputs be present and stable before the gate or clock changes state? Hold time: how long must the input remain stable after the gate or clock has changed state? Metastable oscillations can occur if the timing is not correct. (Figures: setup and hold times for a gated latch enabled by a logical 1 on the gate.)
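A tiny sketch of checking a data edge against the setup/hold window (the function name and the example numbers are assumptions for illustration):

```python
def violates_setup_hold(data_edge_ns, clock_edge_ns, t_su_ns, t_h_ns):
    """True if the data transition lands inside the window from t_su
    before the clock edge to t_h after it (all times in ns)."""
    return clock_edge_ns - t_su_ns < data_edge_ns < clock_edge_ns + t_h_ns

# example: clock edge at 100 ns, setup 16 ns, hold 5 ns
print(violates_setup_hold(90, 100, 16, 5))   # True  -> risk of metastability
print(violates_setup_hold(80, 100, 16, 5))   # False -> safely before the window
```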

32 fig_03_38 Example: positive edge triggered FF; 50% point of each signal

33 fig_03_39, 03_40 Propagation delay: minimum, typical, and maximum values, with respect to the causative edge of the clock. For a latch we must also specify the delay when the gate is enabled.

34 fig_03_41, 03_42 Timing margins--example: increasing the frequency of the 2-stage Johnson counter (the output from either FF is 00110011...). Assume t_PDLH = 5-16 ns, t_PDHL = 7-18 ns, t_su = 16 ns.

35 Case 1: L to H transition of Q_A. Clock period = t_PDLH + t_su + slack, with slack >= 0, so the period must be at least t_PDLH + t_su. Using the minimum t_PDLH, F_max = 1/[(5 + 16)*10^-9 s] = 48 MHz; using the maximum, F_max = 1/[(16 + 16)*10^-9 s] = 31.3 MHz. Case 2: H to L transition: similar calculations give F_max = 43.5 MHz or 29.4 MHz. Conclusion: F_max cannot be larger than 29.4 MHz to guarantee correct behavior.
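The same worst-case arithmetic as a quick sketch (using the delay and setup values assumed on the previous slide):

```python
def f_max_hz(t_pd_ns, t_su_ns):
    """Maximum clock frequency when the period must cover the flip-flop
    propagation delay plus the setup time of the next stage."""
    return 1.0 / ((t_pd_ns + t_su_ns) * 1e-9)

t_su = 16  # ns
for label, t_pd in [("L->H, min t_PDLH", 5), ("L->H, max t_PDLH", 16),
                    ("H->L, min t_PDHL", 7), ("H->L, max t_PDHL", 18)]:
    print(f"{label}: {f_max_hz(t_pd, t_su) / 1e6:.2f} MHz")
# ~48, 31.3, 43.5 and 29.4 MHz; the worst case (H->L with maximum delay)
# is the binding limit on the usable clock rate.
```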

36 Clocks and clock distribution:
--frequency and frequency range
--rise times and fall times
--stability
--precision

37 fig_03_43 Clocks and clock distribution: for a frequency lower than the input, we can use a divider circuit (above); for a higher frequency, we can use a phase-locked loop:
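A behavioral sketch of the divider idea--toggle the output every N input edges (the function and the value N = 3 are illustrative assumptions, not the circuit in the figure):

```python
def divide_clock(input_edges, n):
    """Behavioral divide-by-2n: toggle the output once every n rising
    edges of the input clock, giving an output at 1/(2n) the frequency."""
    out, level, count = [], 0, 0
    for _ in range(input_edges):
        count += 1
        if count == n:
            level ^= 1
            count = 0
        out.append(level)
    return out

print(divide_clock(12, 3))   # -> [0,0,1,1,1,0,0,0,1,1,1,0]: input f / 6
```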

38 fig_03_44 Selecting portion of clock: rate multiplier

39 fig_03_46 Note: delays can accumulate

40 fig_03_47 Clock design and distribution:
--need precision
--need to decide on the number of phases
--distribution: need to be careful about delays (example: H-tree with buffers)

41 fig_03_48 Testing sequential circuits: we must know the "initial state" (the contents of all storage elements) and the input. The scan path is the basic tool. A scan path requires extra circuitry and thus adds to the size and cost of the circuit; it may also affect power usage. The scan path replaces the older notion of a "homing sequence", which was used to determine the initial state for testing.
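A toy behavioral sketch of the scan-path idea (the class, its methods, and the 3-bit example are assumptions for illustration, not a real boundary-scan interface):

```python
class ScanChain:
    """In scan mode the flip-flops form one long shift register, so any
    initial state can be shifted in and the captured state shifted out."""
    def __init__(self, n):
        self.ff = [0] * n                     # the n storage elements

    def shift(self, bits):                    # scan mode: serial access
        out = []
        for b in bits:
            out.append(self.ff[-1])           # scan-out from the last FF
            self.ff = [b] + self.ff[:-1]      # scan-in into the first FF
        return out

    def capture(self, next_state_fn):         # one functional clock tick
        self.ff = next_state_fn(self.ff)

chain = ScanChain(3)
chain.shift([1, 0, 1])                        # load a known initial state
chain.capture(lambda q: [1 - q[-1]] + q[:-1]) # clock the (example) logic once
print(chain.shift([0, 0, 0]))                 # unload and inspect the response
```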

42 fig_03_65 Scan path: example

43 fig_03_66 Standardized boundary scan architecture Architecture and unit under test

