System Test Planning and the usefulness of a "Safety Checklist" (ECEN5543)
September 2006, R. Dameron, University of Colorado, ECEN5033

Presentation transcript:

1. System Test Planning and the usefulness of a "Safety Checklist" (ECEN5543)

2. Plan for class period
– Additional notations to analyze requirements, prepare for design, and prepare for the system test plan:
  – Event tables
  – State transition tables
  – Decision tables (plus an old but useful shorthand), aka condition tables
– Overview of system test categories
– What to use to determine tests in each category
  – Includes analysis of the Safety Checklist with respect to stand-alone software (do the Safety Checklist on a use case as an experiment)

3. More Requirements Analysis Tools Useful for System Testing
Examples from Software Engineering Concepts, Richard Fairley, McGraw-Hill, 1985. Out of print.

4. System Sequence Diagram (SSD)
– External actors' interaction with the system
– Must define scope to know what is external
– Actors can be other programs, other products, or people
– High-level view
– Sequence diagrams are part of UML; the same rules are used to create SYSTEM sequence diagrams

5. SSD
(Figure: an example system sequence diagram.)

6. State Transition Table for the "split" routine

Present state | Input | Actions            | Outputs   | Next state
S0            | DS    | Open F6; Open F7   | –         | S1
S1            | D11   | Write F6           | D11 → F6  | S1
S1            | D12   | Close F6; Write F7 | D12 → F7  | S2
S1            | DE    | Close F6; Close F7 | –         | S0
S2            | D12   | Write F7           | D12 → F7  | S2
S2            | DE    | Close F7           | –         | S0
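A transition table like this can be driven directly as code, which is one way to prototype the behavior before writing system tests. Below is a minimal Python sketch (not from the slides); file operations are logged as strings rather than actually performed, and the state and input names follow the table.

```python
# Table-driven implementation of the "split" routine's state machine.
# Key: (present state, input) -> (list of actions, next state).
TRANSITIONS = {
    ("S0", "DS"):  (["open F6", "open F7"], "S1"),
    ("S1", "D11"): (["write D11 to F6"], "S1"),
    ("S1", "D12"): (["close F6", "write D12 to F7"], "S2"),
    ("S1", "DE"):  (["close F6", "close F7"], "S0"),
    ("S2", "D12"): (["write D12 to F7"], "S2"),
    ("S2", "DE"):  (["close F7"], "S0"),
}

def run(inputs):
    """Feed a sequence of inputs through the table; return the action log
    and the final state. Raises KeyError on an unspecified (state, input)
    pair, which itself flags a hole in the table."""
    state, log = "S0", []
    for symbol in inputs:
        actions, state = TRANSITIONS[(state, symbol)]
        log.extend(actions)
    return log, state

log, final = run(["DS", "D11", "D12", "DE"])
```

A useful side effect of this style: any (state, input) pair the table does not specify raises immediately, exposing incompleteness during testing.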

7. Draw the state transition diagram for this state transition table
Notation reminders:
– Circle = state
– Arc = transition
– x/y = input/output

8. State transition table for an ATM (part 1)

      Present state                            | Input or event          | Action            | Next state
ST1.  Idle                                     | card inserted           | request for PIN   | Waiting for PIN
ST2.  Waiting for PIN                          | PIN entered             | display asterisks | Validating PIN
ST3.  Waiting for PIN                          | cancel                  | display msg       | Ejecting
ST4.  Validating PIN                           | indicates "valid"       | display choices   | Waiting for customer transaction choice
ST5.  Validating PIN                           | indicates "stolen"      | display "stolen"  | Confiscating
ST6.  Validating PIN                           | indicates "invalid"     | display "invalid" | Waiting for PIN
ST7.  Waiting for customer transaction choice  | cancel                  | display "cancel"  | Ejecting
ST8.  Waiting for customer transaction choice  | Balance Query selected  | –                 | Processing query
(continued on next slide)

9. State transition table for an ATM (part 2)

      Present state                            | Input or event            | Action                       | Next state
ST9.  Waiting for customer transaction choice  | Withdrawal selected       | –                            | Processing withdrawal
ST10. Confiscating                             | card confiscated          | –                            | Terminating
ST11. Processing query                         | rejected for this user    | display "rejected"           | Ejecting
ST12. Processing query                         | query OK                  | display printing             | Printing
ST13. Processing withdrawal                    | OK amount                 | display OK msg               | Dispensing
ST14. Processing withdrawal                    | not OK amount             | display refusal              | Ejecting
ST15. Printing                                 | transaction complete      | print receipt                | Ejecting
ST16. Dispensing                               | sufficient cash in ATM    | cash                         | Printing
ST17. Dispensing                               | insufficient cash in ATM  | display "insufficient cash"  | Ejecting
ST18. Ejecting                                 | card ejection started     | display msg to take card     | Terminating
ST19. Terminating                              | card ejection complete    | display ending msg           | Idle
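One cheap static check a test planner can run over a table like this: every state that can be entered must also appear somewhere as a source state, otherwise the table has a "dead end" and is incomplete. A Python sketch (state names abbreviated from the slides):

```python
# Each tuple is (present state, event, next state), transcribed from
# the ATM table above with shortened state names.
ATM = [
    ("idle", "card inserted", "waiting for PIN"),
    ("waiting for PIN", "PIN entered", "validating PIN"),
    ("waiting for PIN", "cancel", "ejecting"),
    ("validating PIN", "valid", "waiting for choice"),
    ("validating PIN", "stolen", "confiscating"),
    ("validating PIN", "invalid", "waiting for PIN"),
    ("waiting for choice", "cancel", "ejecting"),
    ("waiting for choice", "balance query", "processing query"),
    ("waiting for choice", "withdrawal", "processing withdrawal"),
    ("confiscating", "card confiscated", "terminating"),
    ("processing query", "rejected", "ejecting"),
    ("processing query", "OK", "printing"),
    ("processing withdrawal", "amount OK", "dispensing"),
    ("processing withdrawal", "amount not OK", "ejecting"),
    ("printing", "complete", "ejecting"),
    ("dispensing", "sufficient cash", "printing"),
    ("dispensing", "insufficient cash", "ejecting"),
    ("ejecting", "ejection started", "terminating"),
    ("terminating", "ejection complete", "idle"),
]

sources = {s for s, _, _ in ATM}   # states with at least one exit
targets = {t for _, _, t in ATM}   # states that can be entered
dangling = targets - sources       # enterable states with no way out
```

For this table `dangling` is empty, which is one of the properties the later Robustness checklist item 14 (reachability) asks you to confirm.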

10. State Transition Diagram (incomplete)
(Figure: a partial state transition diagram showing the Idle, Waiting for PIN, Validating PIN, Ejecting, Confiscating, Waiting for customer transaction, and Terminating states, with transitions such as card inserted, "cancel", "invalid", "stolen", "valid", card confiscated, and card ejection complete.)

11. Data flow diagram for "split"
Notation:
– Circle = action
– Arc = data direction; arc label = data label
– Labeled box = data sink or data source
– Parallel bars = data store
(Diagram: the "split" process reads DS, D11*, D12*, and DE, and writes D11* to file F6 and D12* to file F7.)

12. Data Flow Diagram for ATM
(Diagram: the ATM customer inserts a card and, in response to a request for a PIN, enters the PIN; a "Validate User" process passes the validated user to a dispatch process, which displays transactions and accepts transaction requests.)

13. 2-dimensional event table

Mode       | E13 | E37    | E45      | …
Start-up   | A16 | ---    | A14; A32 |
Steady     | X   | A6, A2 | ---      |
Shut-down  | --- |        |          |
Alarm      | --- |        |          |

Action; action = sequential actions. Action, action = concurrent actions. X = impossible. --- = no action required.
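An event table maps naturally onto a dispatch function. The sketch below (my own encoding, not from the slides) keeps the table's conventions: list elements run sequentially, a tuple inside the list marks concurrent actions, "X" marks an impossible mode/event pairing, and a missing entry means no action required.

```python
# Partial encoding of the event table above.
EVENT_TABLE = {
    ("start-up", "E13"): ["A16"],
    ("start-up", "E45"): ["A14", "A32"],   # "A14; A32" = sequential
    ("steady",   "E13"): "X",              # impossible combination
    ("steady",   "E37"): [("A6", "A2")],   # "A6, A2" = concurrent
}

def dispatch(mode, event):
    """Return the action plan for (mode, event), per the table's rules."""
    entry = EVENT_TABLE.get((mode, event))
    if entry is None:
        return []                          # "---": no action required
    if entry == "X":
        raise ValueError(f"event {event} cannot occur in mode {mode}")
    return entry

plan = dispatch("start-up", "E45")
```

Raising on "X" is deliberate: an impossible cell that is nevertheless reached at runtime is exactly the kind of discrepancy a stress test should surface.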

14. Decision table

                                 Rules:  1   2   3   4
Conditions:
  Credit limit is satisfactory           Y   N   N   N
  Pay experience is favorable            --  Y   N   N
  Special clearance obtained             --  --  Y   N
Actions:
  Perform the approve-order              X   X   X
  Reject order                                       X

X = do that action; Y = yes; N = no; -- = don't care
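Each rule column is a pattern over the conditions, so the table can be evaluated directly. A Python sketch (my encoding, not from the slides), with None standing in for the "--" don't-care entries:

```python
# The order-approval decision table: (credit_ok, pay_ok, clearance) -> action.
RULES = [
    ((True,  None,  None),  "approve"),   # rule 1
    ((False, True,  None),  "approve"),   # rule 2
    ((False, False, True),  "approve"),   # rule 3
    ((False, False, False), "reject"),    # rule 4
]

def decide(credit_ok, pay_ok, clearance):
    """Return the action for the first rule whose pattern matches."""
    facts = (credit_ok, pay_ok, clearance)
    for pattern, action in RULES:
        if all(p is None or p == f for p, f in zip(pattern, facts)):
            return action
    # A complete table never reaches this point.
    raise LookupError("incomplete decision table")
```

The trailing LookupError doubles as an incompleteness detector: if any combination of condition values falls through all rules, the table failed to specify an action for it.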

15. Decision tables
– Ambiguous = two identical rule columns with different actions
– Redundant = identical rule columns and identical actions
– Incomplete = failure to specify an action for some rule column
– A Karnaugh map is more succinct

16. Incomplete and multiply-specified decision table as a Karnaugh map
(Figure: a Karnaugh map over conditions C1, C2, and C3, with cells marked A1, A2, "A1 A2", and "A1 A3".)
Example from Software Engineering Concepts, Richard Fairley, McGraw-Hill, 1985. Out of print.

17. Incomplete and multiply-specified decision table
(Figure: the same Karnaugh map as the previous slide.)
Example from Software Engineering Concepts, Richard Fairley, McGraw-Hill, 1985. Out of print.

18. Equivalent to this decision table (incomplete and multiply-specified):

      Rules:  1  2  3
  C1          Y  d  N
  C2          d  Y  N
  C3          d  d  Y
  A1          X
  A2             X
  A3                X

d = don't care. Example from Software Engineering Concepts, Richard Fairley, McGraw-Hill, 1985. Out of print.
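The three defect kinds (ambiguous, redundant, incomplete) can be found mechanically by expanding each rule's don't-cares into concrete condition rows. A Python sketch of that check, applied to the table on this slide (my encoding, with None for "d"):

```python
from itertools import product

# The slide's table: rule pattern over (C1, C2, C3) -> action.
RULES = [
    ((True,  None,  None), "A1"),   # rule 1: C1=Y, C2=d, C3=d
    ((None,  True,  None), "A2"),   # rule 2: C1=d, C2=Y, C3=d
    ((False, False, True), "A3"),   # rule 3: C1=N, C2=N, C3=Y
]

def expand(pattern):
    """All concrete True/False rows a pattern matches; None covers both."""
    axes = [(p,) if p is not None else (True, False) for p in pattern]
    return product(*axes)

def check(rules, n_conditions):
    seen = {}                       # concrete row -> action already assigned
    ambiguous, redundant = [], []
    for pattern, action in rules:
        for row in expand(pattern):
            if row in seen:
                (redundant if seen[row] == action else ambiguous).append(row)
            else:
                seen[row] = action
    incomplete = [row for row in product((True, False), repeat=n_conditions)
                  if row not in seen]
    return ambiguous, redundant, incomplete

ambiguous, redundant, incomplete = check(RULES, 3)
# Rules 1 and 2 overlap (multiply-specified), and no rule covers
# C1=N, C2=N, C3=N (incomplete).
```

Running the check confirms the slide's title: two rows are claimed by conflicting actions, and exactly one row has no action at all.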

19. General Test Categories
– Functional
– Performance
– Stress
(Not system test: glass-box, sometimes called white-box.)

20. Functional
– Success Scenario paths of Use Cases
– All alternate paths of Use Cases; if some are intentionally not implemented in a particular release, how is their absence handled?

21. Performance
– How does the system perform under normal conditions?
– Is it adequate?
– What "performance" means depends on the application
– Can be extended to include those quality ranges that can be tested

22. Stress
– How does the system behave under unreasonable conditions?
– Evaluates robustness

23. Performance vs. Stress
– Specified performance criteria are tested as performance tests (duh!)
– Unspecified performance criteria are tested as stress conditions
– Stress tests also include conditions outside the specified performance criteria

24. How do we decide what to test for performance and stress conditions?
Targeting Safety-Related Errors During Software Requirements Analysis, Robyn R. Lutz, JPL/Caltech, Proceedings of the ACM SIGSOFT Symposium on the Foundations of Software Engineering, 1993.
The requirements discussed in this paper provide excellent guidelines.

25. Safety Checklist for safety-critical embedded systems
The two most common causes of safety-related software errors:
– Inadequate interface requirements
– Robustness issues: discrepancies between the documented requirements and the requirements actually needed for correct functioning
Use of the checklist reduces safety-related software errors.

26. Earlier Study of Causes of S-R SW Errors (ref. 11 in the paper)
– Errors identified as potentially hazardous to a system tend to be produced by different error mechanisms than non-safety-related software errors
– S-R software errors found during integration and testing:
  – Misunderstandings of the software's interfaces to the rest of the system
  – Discrepancies between documented requirements and the requirements necessary for correct functioning, i.e., robustness
  – (In other words, the documented requirements are inadequate and therefore wrong.)

27. What's an error?
A discrepancy between a computed, observed, or measured value or condition and the true, specified, or theoretically correct value or condition.
Lutz: an error is "safety-related" if the system's safety analyst determines, during the standard error-correction process, that the error represents potentially significant or catastrophic failure effects.

28. Effort to target known causes during the requirements phase
Interface challenge:
– Correctly specify the software/system interfaces in complex embedded systems with software distributed among various hardware components, some of which are not yet determined
Robustness challenge:
– Many s-r software errors involve inadequate software responses to extreme conditions or extreme values (stress conditions)
– For "extreme", read "boundary and beyond"

29. Formal vs. Informal
– Lutz's Safety Checklist is OK for a development process without formal specification languages (whew!) and without (complete) finite-state-machine modeling
– It DOES focus on software/system interfaces, failure modes, timing, and boundary conditions and values
– Informal practices → future formal analysis

30. Two key components of any software requirements methodology
– Aid in determining the requirements
– Represent the software requirements in specifications
This Safety Checklist focuses on the first; the results must still be incorporated into the specification. The determination technique does not preclude any specification technique.

31. NOTE: Benefit of Formal Inspections
– The example given is a formal inspection of 2,000,000 lines of code (seismic processing)
– The Safety Checklist can be added to any other checklists being used in requirements reviews, which are the first stage of system testing (prior to architecture design)

32. The Formal Approach
– Build a formal, finite-state model of the requirements specifications
– Analyze the model to ensure its properties match the desired behavior
  – State criteria (as formal predicates, i.e., logical relationships) that must hold in the model

33. Lutz's Less Formal Approach
– Translate the criteria into an informal, natural-language format
– Formatting the concerns as a checklist avoids the need to build the complete finite-state model

34. Less Is More: the informal process extended to embedded systems
The Safety Checklist allows for:
– Multiple processors
– Concurrently executing processes
– Redundant resources to manage
– Externally commanded state changes
– State changes that are not visible
These are typical of many complex, embedded systems with timing constraints and safety-critical functions. Even a stand-alone system can have an operating-system environment, "apparent concurrency", externally commanded state changes, and invisible state changes.

35. Checklist regarding Interfaces

36. Interfaces: is the software's response specified…
1. For every input, what is the response to out-of-range values?
2. For not receiving an expected input?
   – If it is appropriate to time out: a. how long, b. when to start counting, c. latency
3. If an input arrives when it shouldn't?
4. On a given input, will the software always follow the same path (deterministic)?
   – If not, is that a problem?
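Checklist item 1 converts almost directly into a system test: drive each input below, at, and above its valid range and assert the specified response. A minimal Python sketch; `read_sensor` and its valid range are hypothetical stand-ins for whatever input handler the system under test actually exposes.

```python
# Hypothetical input handler: out-of-range values are flagged explicitly
# rather than silently clamped, so the specified response is observable.
VALID_RANGE = (0.0, 100.0)

def read_sensor(raw):
    lo, hi = VALID_RANGE
    if raw < lo or raw > hi:
        return {"value": None, "error": "out-of-range"}
    return {"value": raw, "error": None}

# Test cases straight from the checklist: below range, at both
# boundaries, and above range ("boundary and beyond").
for raw, expect_error in [(-0.1, True), (0.0, False), (100.0, False), (100.1, True)]:
    result = read_sensor(raw)
    assert (result["error"] is not None) == expect_error
```

The point of the sketch is the test case selection, not the handler: the four probe values exercise exactly the boundary conditions the checklist says are historically error-prone.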

37. Interfaces, continued
5. Is each input bounded in time?
   – Specify the earliest time to accept input
   – Specify the latest time at which the data is still considered valid
6. Are minimum and maximum arrival rates specified for each input? For each communication path?
7. If interrupts are masked or disabled, can events be lost? Include cooperating processes in "apparent concurrency".

38. Interfaces, continued (2)
8. Can output be produced faster than it can be used (absorbed) by the interfacing module? Is overload behavior specified?
9. Is all data output [to the buses from the sensors] used by the software? If not, is some required function omitted from the spec?
10. Can input received before startup, while offline, or after shutdown influence the next software startup behavior? Are any values retained in hardware or software? The earliest? The most recent?

39. Checklist regarding Robustness

40. Robustness
11. If performance degradation is the chosen response, is the degradation predictable?
12. Are sufficient delays incorporated into the error-recovery responses? E.g., don't return to the normal state before the error has been handled.
13. Are feedback loops specified where appropriate to compare the actual effects of outputs on the system with the predicted effects?

41. Robustness, continued
14. Are all modes and modules reachable (used in some path through the code)? If not, are they superfluous, or is there some other logic error?
15. If a hazards analysis has been done, does every path from a hazardous state (a failure mode) lead to a low-risk state?
16. Are the inputs identified which, if not received, can lead to a hazardous state or can prevent recovery (single-point failures)?
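Item 15 is a graph property, so if the hazards analysis yields a state graph it can be checked by traversal: from a hazardous state, every reachable terminal state must be low-risk. A Python sketch over a small, entirely hypothetical failure-mode graph (the state names are illustrative, not from the slides):

```python
# Hypothetical failure-mode graph: state -> list of successor states.
GRAPH = {
    "overpressure": ["vent", "alarm"],   # the hazardous state
    "vent": ["safe-idle"],
    "alarm": ["operator-ack"],
    "operator-ack": ["safe-idle"],
    "safe-idle": [],                     # terminal, low-risk
}
LOW_RISK = {"safe-idle"}

def terminals_from(state, graph):
    """Depth-first search: all terminal states reachable from `state`."""
    seen, stack, terminals = set(), [state], set()
    while stack:
        s = stack.pop()
        if s in seen:
            continue
        seen.add(s)
        successors = graph.get(s, [])
        if not successors:
            terminals.add(s)
        stack.extend(successors)
    return terminals

# Checklist item 15: every path out of the hazard ends low-risk.
assert terminals_from("overpressure", GRAPH) <= LOW_RISK
```

The same traversal, run from the start state instead, answers item 14: any state never visited is unreachable and either superfluous or evidence of a logic error.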

42. Usefulness?
– The Safety Checklist has been demonstrated to "ask the right questions"
– It is not sufficient to preclude introducing errors
– Necessary, although not sufficient

43. May I have the envelope, please…
The most common findings (ranked #1–#5 on the slide):
– Not every hazardous state led to a low-risk state.
– Error-recovery responses incorporated insufficient delays.
– Input arrived when it shouldn't, and no response was specified; the response defaulted to unintended behavior.
– A response was not specified for out-of-range values that were possible for some inputs.
– Output was produced too fast for the interfacing module.

44. Allows tailoring
– Focuses on historically troublesome aspects of safety-critical, embedded software
– Avoids over-specification of well-understood or low-risk requirements
– Tailor to the level of technical or historical risk

45. First step toward safety constraints
– Many of the items it identifies are system hazards
– Can be used to identify safety constraints
– Not yet ready for formal prediction
  – How to use it for informal prediction of error-prone factors

46. After Requirements Are Improved…
How do we ensure that requirements are implemented and maintained?
– After code is written (new code or bug fixes); note: these issues are difficult to unit test
– After new requirements are added
– After old requirements are modified
The role of reviews. Code the invariants where appropriate. System tests to test the use cases and the safety checklist.

47. Create a system test plan (IEEE Std. 829)
1. Test the Success Scenario and the conditions that lead to alternative paths of use cases.
2. If possible, test to verify the relevant safety checklist items; "safety" may not be the main concern, but correct interfaces and robustness are.
3. If any resources are shared among processes, review and test for correctness of mutual exclusion. (SW Eng of Multi-program Sys)
4. If there are "cooperating processes", verify that suspension happens correctly, that a suspended process is restored when appropriate, and that the restoration is correct.

48. IEEE 829 Standard Test Plan Outline (1)
1.0 Introduction
2.0 Test Items
3.0 Tested Features
4.0 Features Not Tested (per cycle)
[Repeat 5.0 for each system-level test.]
5.0 Testing Strategy and Approach
  5.1 Syntax
  5.2 Description of functionality
  5.3 Arguments for Test
  5.4 Expected Output
  5.5 Specific Exclusions
  5.6 Dependencies
  5.7 Test Case Success/Failure Criteria

49. IEEE 829 Standard Test Plan Outline (2)
6.0 Pass/Fail Criteria for the Complete Test Cycle
7.0 Entrance Criteria/Exit Criteria
8.0 Test-Suspension Criteria and Resumption Requirements
9.0 Test Deliverables/Status Communications Vehicles
10.0 Testing Tasks
11.0 Hardware and Software Requirements (for testing)
12.0 Problem Determination and Correction Responsibilities

50. IEEE 829 Standard Test Plan Outline (3)
13.0 Staffing and Training Needs/Assignments
14.0 Test Schedules
15.0 Risks and Contingencies
16.0 Approvals

51. Glass-box, briefly (implementation details are needed)
– Test module/process/object-cluster interfaces (the process level can be system test)
– Test object/object-cluster contracts
– Create test data to force certain code paths
– Note: if the team is doing incremental development, you can begin glass-box testing early

52. More to consider
If the system is too large to test thoroughly, what tests should you emphasize? Stay tuned…

