Lecturer: Dr. AJ Bieszczad, Chapter 8

Slide 1: How does software fail?
– Wrong requirement: not what the customer wants
– Missing requirement
– Requirement impossible to implement
– Faulty design
– Faulty code
– Improperly implemented design
Slide 2: Terminology
– Fault identification: what fault caused the failure?
– Fault correction: change the system
– Fault removal: take out the fault
Slide 3: Types of faults
– Algorithmic fault
– Syntax fault
– Computation and precision fault
– Documentation fault
– Stress or overload fault
– Capacity or boundary fault
– Timing or coordination fault
– Throughput or performance fault
– Recovery fault
– Hardware and system software fault
– Standards and procedures fault
Slide 4: Table 8.1. IBM orthogonal defect classification.

Fault type            Meaning
Function              Fault that affects capability, end-user interfaces, product interfaces, interface with hardware architecture, or global data structure
Interface             Fault in interacting with other components or drivers via calls, macros, control blocks, or parameter lists
Checking              Fault in program logic that fails to validate data and values properly before they are used
Assignment            Fault in data structure or code block initialization
Timing/serialization  Fault that involves timing of shared and real-time resources
Build/package/merge   Fault that occurs because of problems in repositories, management changes, or version control
Documentation         Fault that affects publications and maintenance notes
Algorithm             Fault involving efficiency or correctness of algorithm or data structure but not design
Slide 5: HP fault classification (figure not reproduced)
Slide 6: Fault distribution (HP) (figure not reproduced)
Slide 7: Testing steps (diagram; each stage is checked against the artifact shown)
– Unit test (one per component, against design specifications)
– Integration test (against design specifications)
– Function test (against system functional requirements)
– Performance test (against other software requirements)
– Acceptance test (against customer requirements specifications)
– Installation test (in the user environment)
– SYSTEM IN USE!
Slide 8: Views of test objects
– Closed box (black box)
  – the object seen from outside
  – tests developed without knowledge of the internal structure
  – focus on inputs and outputs
– Open box (clear box)
  – the object seen from inside
  – tests developed so that the internal structure is tested
  – focus on algorithms, data structures, etc.
– Mixed model
– Selection factors:
  – the number of possible logical paths
  – the nature of input data
  – the amount of computation involved
  – the complexity of algorithms
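The distinction can be made concrete with a small sketch (the `clamp` function and its test inputs are hypothetical, not from the slides): black-box test cases come from the specification alone, while clear-box cases are chosen by reading the code and exercising every branch.

```python
# Hypothetical component under test: clamps a value into [lo, hi].
def clamp(x, lo, hi):
    if x < lo:
        return lo
    if x > hi:
        return hi
    return x

# Black-box tests: derived from the spec only (inputs and outputs).
assert clamp(5, 0, 10) == 5     # value already in range
assert clamp(-3, 0, 10) == 0    # value below range
assert clamp(99, 0, 10) == 10   # value above range

# Clear-box tests: chosen so every branch in the code is taken
# (x < lo true, x > hi true, both comparisons false).
assert clamp(-1, 0, 1) == 0
assert clamp(2, 0, 1) == 1
assert clamp(0, 0, 1) == 0
```

A mixed model, as the slide notes, would combine both: spec-driven cases first, then extra cases added until the internal structure is covered.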
Slide 9: Examining code
– Code walkthrough
  – the developer presents the code and the documentation to a review team
  – the team comments on their correctness
– Code inspection
  – the review team checks the code and documentation against a prepared list of concerns
  – more formal than a walkthrough
Inspections criticize the code, not the developer.
Slide 10: Table 8.2. Typical inspection preparation and meeting times.

Development artifact      Preparation time            Meeting time
Requirements document     25 pages per hour           12 pages per hour
Functional specification  45 pages per hour           15 pages per hour
Logic specification       50 pages per hour           20 pages per hour
Source code               150 lines of code per hour  75 lines of code per hour
User documents            35 pages per hour           20 pages per hour

Table 8.3. Faults found during discovery activities.

Discovery activity   Faults found per thousand lines of code
Requirements review  2.5
Design review        5.0
Code inspection      10.0
Integration test     3.0
Acceptance test      2.0
Slide 11: Proving code correct
– Formal proof techniques
  + formal understanding of the program
  + mathematical correctness
  - difficult to build a model
  - usually takes more time to prove correctness than to write the code itself
– Symbolic execution
  + reduces the number of cases for the formal proof
  - difficult to build a model
  - may take more time to prove correctness than to write the code itself
– Automated theorem proving
  + extremely convenient
  - tools exist only for toy problems at research centers
  - an ideal prover that can infer program correctness from arbitrary code can never be built (the problem is equivalent to the halting problem of Turing machines, which is unsolvable)
Slide 12: Testing program components
– Test point or test case: a particular choice of input data to be used in testing a program
– Test: a finite collection of test cases
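As a minimal illustration of these two definitions (the `is_leap_year` component and its inputs are hypothetical, not from the slides), a test is simply a finite collection of test points, each pairing one choice of input data with its expected result:

```python
# Hypothetical component under test.
def is_leap_year(y):
    return y % 4 == 0 and (y % 100 != 0 or y % 400 == 0)

# Each (input, expected) pair is one test point (test case);
# the list as a whole is a test: a finite collection of test cases.
test = [
    (2000, True),    # divisible by 400
    (1900, False),   # divisible by 100 but not 400
    (2024, True),    # divisible by 4 only
    (2023, False),   # not divisible by 4
]

for y, expected in test:
    assert is_leap_year(y) == expected
```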
Slide 13: Test thoroughness
– Statement testing
– Branch testing
– Path testing
– Definition-use testing
– All-uses testing
– All-predicate-uses/some-computational-uses testing
– All-computational-uses/some-predicate-uses testing
Slide 14: Example: array arrangement
– Statement testing: 1-2-3-4-5-6-7
– Branch testing: 1-2-3-4-5-6-7, 1-2-4-5-6-1
– Path testing: 1-2-3-4-5-6-7, 1-2-3-4-5-6-1, 1-2-4-5-6-7, 1-2-4-5-6-1
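The slide's flowgraph lives in a figure not reproduced here, but the gap between branch testing and path testing can be sketched with a hypothetical function containing two consecutive decisions: two test cases suffice to take every branch, yet they exercise only two of the four paths.

```python
# Hypothetical function with two independent decisions; it returns the
# path taken so that coverage can be observed directly.
def f(x, y):
    path = []
    if x > 0:                 # decision A
        path.append("A-true")
    else:
        path.append("A-false")
    if y > 0:                 # decision B
        path.append("B-true")
    else:
        path.append("B-false")
    return path

# Branch coverage with just two cases: every branch is taken at least once.
assert f(1, 1) == ["A-true", "B-true"]
assert f(-1, -1) == ["A-false", "B-false"]

# But the paths ["A-true", "B-false"] and ["A-false", "B-true"] remain
# untested; path testing would require all four input combinations.
```

The same effect drives the slide's example: branch testing needs only two traversals of the flowgraph, while path testing must also cover the loop's remaining entry/exit combinations.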
Slide 15: Relative strength of test strategies (the original slide shows these as a subsumption lattice; listed here roughly from strongest to weakest)
– All paths
– All definition-use paths
– All uses
– All computational/some predicate uses
– All predicate/some computational uses
– All computational uses
– All definitions
– All predicate uses
– Branch
– Statement
Slide 16: Comparing techniques

Table 8.5. Fault discovery percentages by fault origin.

Discovery technique  Requirements  Design  Coding  Documentation
Prototyping          40            35      –       15
Requirements review  40            15      0       5
Design review        15            55      0       15
Code inspection      20            40      65      25
Unit testing         1             5       20      0

Table 8.6. Effectiveness of fault discovery techniques. (Jones 1991)

                    Requirements faults  Design faults  Code faults  Documentation faults
Reviews             Fair                 Excellent      Excellent    Good
Prototypes          Good                 Fair           Fair         Not applicable
Testing             Poor                 Poor           Good         Fair
Correctness proofs  Poor                 Poor           Fair         Fair
Slide 17: Integration testing
– Bottom-up
– Top-down
– Big-bang
– Sandwich testing
– Modified top-down
– Modified sandwich
Slide 18: Component hierarchy for the examples (figure not reproduced)

Slide 19: Bottom-up testing (figure not reproduced)

Slide 20: Top-down testing (figure not reproduced)

Slide 21: Modified top-down testing (figure not reproduced)

Slide 22: Big-bang integration (figure not reproduced)

Slide 23: Sandwich testing (figure not reproduced)

Slide 24: Modified sandwich testing (figure not reproduced)

Slide 25: Comparison of integration strategies (figure not reproduced)

Slide 26: Sync-and-stabilize approach (Microsoft) (figure not reproduced)

Slide 27: Testing OO vs. testing other systems (figure not reproduced)

Slide 28: Where is testing OO harder? (figure not reproduced)
Slide 29: Test planning
– Establish test objectives
– Design test cases
– Write test cases
– Test the test cases
– Execute tests
– Evaluate test results
Slide 30: Test plan
– Purpose: to organize testing activities
– Describes the way in which we will show our customers that the software works correctly
– Developed in parallel with system development
– Contents:
  – test objectives
  – methods for test execution for each stage of testing
  – tools
  – exit criteria
  – system integration plan
  – source of test data
  – detailed list of test cases
Slide 31: Automated testing tools
– Code analysis
  – Static analysis
    – code analyzer
    – structure checker
    – data analyzer
    – sequence checker
  – Dynamic analysis
    – program monitor
– Test execution
  – Capture and replay
  – Stubs and drivers
  – Automated testing environments
– Test case generators
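Stubs and drivers can be sketched briefly (all names here are hypothetical, not from the slides): a stub is a canned stand-in for a lower-level component that is not yet integrated, and a driver is throwaway code that exercises the unit under test from above.

```python
# Stub: replaces a not-yet-integrated component with a canned answer.
def fetch_total_stub(account_id):
    return 100  # fixed value so the unit above it can be tested in isolation

# Unit under test: depends on a lower-level component passed in as a function.
def report(account_id, fetch_total):
    return f"account {account_id}: total {fetch_total(account_id)}"

# Driver: calls the unit under test with the stub wired in.
def driver():
    return report(42, fetch_total_stub)

assert driver() == "account 42: total 100"
```

In top-down integration the stubs stand in for the components below; in bottom-up integration the drivers stand in for the components above.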
Slide 32: Sample output from static code analysis (figure not reproduced)
Slide 33: Probability of finding faults during development (figure not reproduced)
Slide 34: When to stop testing
– Coverage criteria
– Fault seeding
  – Estimate of remaining faults: seed S faults; if testing discovers s of the seeded faults and n indigenous (non-seeded) faults, then, assuming seeded and indigenous faults are equally detectable, the estimated number of actual faults is N = S*n/s
– Two-test-team testing
  – x = faults found by group 1, y = faults found by group 2, q = faults discovered by both (q <= x, q <= y)
  – effectiveness: E1 = x/n, E2 = y/n
  – also E1 = q/y (group 1 found q of group 2's y faults)
  – so y = q/E1 and n = y/E2, therefore n = q/(E1*E2)
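Both estimators can be checked numerically; this is a minimal sketch with made-up counts (the numbers are hypothetical, not from the slide).

```python
# Fault seeding: seed S faults; testing finds s of the seeded faults and
# n indigenous ones. Assuming equal detectability, estimate N = S*n/s.
S, s, n = 20, 10, 12          # hypothetical counts
N_est = S * n / s             # -> 24.0 estimated actual faults
assert N_est == 24.0

# Two-test-team estimate: x found by group 1, y by group 2, q by both.
x, y, q = 25, 20, 10          # hypothetical counts
E1 = q / y                    # group 1's effectiveness: found q of group 2's y
n_est = x / E1                # since E1 = x/n  =>  n = x/E1 = q/(E1*E2) = x*y/q
assert n_est == 50.0
```

The overlap q drives the two-team estimate: the smaller the overlap relative to each group's haul, the larger the inferred total fault population n.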
Slide 35: When to stop testing (continued)
– Coverage criteria
– Fault seeding
– Confidence in the software: seed S faults and test until all of them are recovered; let n be the number of indigenous faults discovered along the way. The confidence that the software contains no more than N indigenous faults is
  C = 1 if n > N
  C = S/(S - N + 1) if n <= N
  (e.g., with N = 0 and all S = 10 seeded faults recovered, C = 10/11, about 91%; with S = 49, C = 49/50 = 98%)
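The slide's two examples can be checked directly; this is a minimal sketch of the n <= N case, assuming all seeded faults have been recovered.

```python
# Confidence after fault seeding: having recovered all S seeded faults,
# the confidence that the software holds no more than N indigenous
# faults is C = S/(S - N + 1)  (for the case n <= N).
def confidence(S, N):
    return S / (S - N + 1)

# The slide's examples: claiming N = 0 remaining faults.
assert round(confidence(10, 0), 2) == 0.91   # 10/11, about 91%
assert confidence(49, 0) == 0.98             # 49/50 = 98%
```

Note how slowly confidence grows: going from 91% to 98% confidence in a fault-free program requires seeding 49 faults instead of 10.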
Slide 36: Classification tree to identify fault-prone components (figure not reproduced)