Published by Cecily Stephens. Modified over 9 years ago.
Testing the programs In this part we look at:
–classification of faults
–the purpose of testing
–unit testing
–integration testing strategies
–when to stop testing
Concept change!! Many programmers view testing as a demonstration that their program performs properly. The idea of demonstrating correctness is really the reverse of what testing is all about. We test a program to demonstrate the existence of a fault! Because our goal is to discover faults, we consider a test successful only when a fault is discovered or a failure occurs as a result of our testing procedures.
Classification of faults In an ideal world, we would produce programs where everything works flawlessly every time. Unfortunately this is not the case! We say that our software has failed when its behaviour deviates from that described in the requirements. First we identify the fault, i.e. determine what fault or faults caused the failure. Next we correct the fault by making changes to the system so that the fault is removed.
Classification of faults Why do we classify faults? –In order to improve our development process! We would like to match a fault to a specific area of our development process. –In other words, we would like our classification scheme to be orthogonal.
IBM Orthogonal Defect Classification
Fault type – Meaning
Function – Fault that affects capability, end-user interfaces, product interfaces, interface with hardware architecture, or global data structure
Interface – Fault in interacting with other components or drivers via calls, macros, control blocks or parameter lists
Checking – Fault in program logic that fails to validate data and values properly before they are used
Assignment – Fault in data structure or code block initialization
IBM Orthogonal Defect Classification (continued)
Timing/serialization – Fault that involves timing of shared and real-time resources
Build/package/merge – Fault that occurs because of problems in repositories, management changes, or version control
Documentation – Fault that affects publications and maintenance notes
Algorithm – Fault involving efficiency or correctness of algorithm or data structure but not design
Hewlett-Packard fault classification
Testing steps
Views of Test Objects Black (opaque) box –In this type of testing, the test object is viewed from the outside and its contents are unknown. –Testing consists of feeding input to the object and noting what output is produced. –The test's goal is to be sure that every kind of input is submitted and that the observed output matches the expected output.
Black Box (example) Suppose we have a component that accepts as input the three numbers a, b, c and outputs the two roots of the equation ax² + bx + c = 0, or the message “no real roots”. It is impossible to test the component by submitting every possible triple of numbers (a, b, c). Representative cases may be chosen so that we cover all combinations of positive, negative and zero for each of a, b, and c. Additionally we may select values that ensure that the discriminant, b² − 4ac, is positive, zero, or negative.
Black Box (example) If the tests reveal no faults, we have no guarantee that the component is fault-free! There are other reasons why failure may occur. For some components, it is impossible to generate a set of test cases to demonstrate correct functionality for all cases.
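The quadratic-root component above can be sketched in Python; the function name and the tuple return shape are assumptions for illustration. Note how a black-box case outside our representative set, such as a = 0, would immediately expose a fault (a division by zero):

```python
import math

def quadratic_roots(a, b, c):
    """Hypothetical component: return the two real roots of
    ax^2 + bx + c = 0, or the message "no real roots"."""
    disc = b * b - 4 * a * c          # the discriminant b^2 - 4ac
    if disc < 0:
        return "no real roots"
    r1 = (-b + math.sqrt(disc)) / (2 * a)
    r2 = (-b - math.sqrt(disc)) / (2 * a)
    return (r1, r2)

# One black-box case per sign of the discriminant:
assert quadratic_roots(1, -3, 2) == (2.0, 1.0)      # disc > 0: two roots
assert quadratic_roots(1, 2, 1) == (-1.0, -1.0)     # disc = 0: repeated root
assert quadratic_roots(1, 0, 1) == "no real roots"  # disc < 0
# A case with a = 0 would raise ZeroDivisionError -- exactly the kind
# of failure the "all combinations of sign" cases are meant to expose.
```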
White box testing White (transparent) box –In this type of testing, we use the structure of the test object to test in different ways. –For example, we can devise test cases that execute all the statements or all the control paths within the component(s). –Sometimes with many branches and loops it may be impractical to use this kind of approach.
Unit testing Our goal is to find faults in components. There are several ways to do this:
–Examining the code (code walkthroughs, code inspections)
–Proving code correct
–Testing components
Examining the Code Code walkthroughs are an informal type of review. Your code and documentation are presented to a review team, and the team comments on their correctness. You lead and control the discussion. The focus is on the code, not the programmer.
Examining the Code A code inspection is similar to a walkthrough but is more formal. Here the review team checks the code and documentation against a prepared list of concerns.
–For example, the team may examine the definition and use of data types and structures to see if their use is consistent with the design and with system standards and procedures.
–The team may review algorithms and computations for their correctness and efficiency.
Proving code correct Proof techniques are not widely used. It is difficult to create proofs; they can sometimes be longer than the program itself! Additionally, customers require a demonstration that the program works correctly. Whereas a proof tells us how a program will work in a hypothetical environment described by the design and requirements, testing gives us information about how the program works in its actual operating environment.
Testing components Choosing test cases: to test a component, we select input data and conditions and observe the output. A test point or test case is a particular choice of test data; a test is a finite collection of test cases. We want to create tests that convince ourselves and our customers that the program works correctly, not only for the test cases but for all input.
–We start by defining test objectives and design tests to meet each objective.
–One objective may be that all statements execute correctly; another may be that every function performed by the code is done correctly.
Testing components As seen before, we view the component as either a “white” or a “black” box.
–If we use “black” box testing, we supply representative input and compare the output with what was expected.
–For example, with the quadratic equation seen earlier we can choose values for the coefficients that range over combinations of positive, zero and negative numbers.
–Or select combinations based on their relative sizes, e.g. a > b > c, b > c > a, c > b > a, etc.
Testing components We can go further and select values based upon the discriminant. We can even supply non-numeric input to determine the program's response. In total we have four mutually exclusive types of test input. We thus use the test objective to help us separate the input into equivalence classes.
Equivalence classes Every possible input belongs to one of the classes; that is, the classes cover the entire set of input data. No input datum belongs to more than one class; that is, the classes are disjoint. If the executing code demonstrates a fault when a particular class member is used as input, then the same fault can be detected using any other member of the class as input; that is, any element of the class represents all elements of that class.
Equivalence classes It is not always easy or feasible to tell whether the third restriction can be met, so it is usually weakened to say:
–if a class member is used to detect a fault, then the probability is high that the other elements of the class will reveal the same fault.
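The weakened criterion suggests a practical recipe, sketched below for the quadratic component: partition the input domain by the sign of the discriminant, then test one representative per class (the triples chosen here are illustrative):

```python
def discriminant_class(a, b, c):
    """Map an input triple to its equivalence class, determined by
    the sign of the discriminant b^2 - 4ac."""
    disc = b * b - 4 * a * c
    if disc > 0:
        return "two real roots"
    if disc == 0:
        return "one repeated root"
    return "no real roots"

# One representative per class; under the weakened assumption it
# stands in, with high probability, for every member of its class.
representatives = {
    "two real roots":    (1, -3, 2),
    "one repeated root": (1, 2, 1),
    "no real roots":     (1, 0, 1),
}
for expected, triple in representatives.items():
    assert discriminant_class(*triple) == expected
```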
Common Practice Usually “white” box and “black” box testing are combined. Suppose we have a component that expects a positive input value. Then, using “black” box testing, we can have a test case for each of the following:
–a very large positive integer
–a positive integer
–a positive, fixed-point decimal
–a number greater than 0 but less than 1
–a negative number
–a non-numeric character
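A sketch of such a test, assuming a hypothetical component `reciprocal` that rejects non-positive and non-numeric input (the function name and error types are assumptions for illustration):

```python
def reciprocal(x):
    """Hypothetical component that expects a positive numeric input."""
    if isinstance(x, bool) or not isinstance(x, (int, float)):
        raise TypeError("numeric input required")
    if x <= 0:
        raise ValueError("positive input required")
    return 1 / x

# One black-box test case per category listed above:
cases = [
    (10**12, "ok"),        # a very large positive integer
    (7,      "ok"),        # a positive integer
    (3.25,   "ok"),        # a positive, fixed-point decimal
    (0.5,    "ok"),        # greater than 0 but less than 1
    (-4,     "rejected"),  # a negative number
    ("x",    "rejected"),  # a non-numeric character
]
for value, expected in cases:
    try:
        reciprocal(value)
        outcome = "ok"
    except (ValueError, TypeError):
        outcome = "rejected"
    assert outcome == expected
```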
Common Practice Using “white” box testing we can choose one or more of the following:
–Statement testing: every statement in the component is executed at least once in some test.
–Branch testing: for every decision point in the code, each branch is chosen at least once in some test.
–Path testing: every distinct path through the code is executed at least once in some test.
White box testing Statement testing
–choose X > K so that a positive result is produced: 1-2-3-4-5-6-7
Branch testing
–choose two test cases to traverse each branch of the decision points: 1-2-3-4-5-6-7 and 1-2-4-5-6-1
Path testing
–four test cases needed: 1-2-3-4-5-6-7, 1-2-3-4-5-6-1, 1-2-4-5-6-7, 1-2-4-5-6-1
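The three criteria can be contrasted on a much smaller routine than the slide's flow graph (this function is illustrative, not the graph shown):

```python
def clamp_then_double(x, k):
    """One decision point: cap x at k, then double it."""
    if x > k:          # decision point
        x = k          # executed only on the "true" branch
    return 2 * x

# Statement testing: a single case with x > k executes every statement.
assert clamp_then_double(10, 5) == 10
# Branch testing: a second case is needed to exercise the "false" branch.
assert clamp_then_double(3, 5) == 6
# With only one decision and no loop, these two cases also cover both
# paths; each extra decision or loop multiplies the paths required.
```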
Integration testing When each component has been completed and tested, we can then combine them into a working system. This integration must be planned and coordinated so that in the case of a failure, we would be able to determine what may have caused it. Suppose we view the system as a hierarchy of components (shown on the following slide).
Integration testing
Bottom-up integration
Top-down integration
Big-bang integration
Sandwich integration
Comparison of integration strategies
When to Stop Testing?
Fault Seeding We intentionally insert, or “seed”, a known number of faults into a program. Another member of the team then locates as many faults as possible. The number of undiscovered seeded faults acts as an indicator of the total number of faults (seeded and unseeded) remaining in the program. We say:
detected seeded faults / total seeded faults = detected non-seeded faults / total non-seeded faults
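The proportion of seeded faults found gives a direct estimate of the real faults remaining, sketched here in Python (the function name and sample figures are illustrative):

```python
def estimate_total_faults(total_seeded, detected_seeded, detected_nonseeded):
    """Assume seeded faults are detected at the same rate as real ones:
        detected_seeded / total_seeded = detected_nonseeded / N
    and solve for N, the estimated number of real faults."""
    return total_seeded * detected_nonseeded / detected_seeded

# 20 of 25 seeded faults found, alongside 12 real faults:
# estimate 25 * 12 / 20 = 15 real faults, so about 3 remain undetected.
assert estimate_total_faults(25, 20, 12) == 15.0
```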
Fault Seeding Problems: it is assumed that the seeded faults are of the same kind and complexity as the actual faults in the program.
–This is difficult to ensure, since we do not know what the typical faults are until we have found them.
We can attempt to overcome this by basing the seeded faults on historical data about previous faults.
–This, however, requires that we have built similar systems before.
Fault Seeding Solution: use two independent groups, Test Group 1 and Test Group 2. Let x be the number of faults detected by Group 1 and y the number detected by Group 2. Some faults, say q, will be detected by both groups, so q ≤ x and q ≤ y. Finally, let n be the total number of faults in the program, which we want to estimate. The effectiveness of each group is given by E1 = x/n and E2 = y/n.
Fault Seeding The group effectiveness measures the group's ability to detect faults from among a set of existing faults. If we assume that Group 1 is just as effective at finding faults in any part of the program as in any other part, we can estimate E1 by the fraction of Group 2's faults that Group 1 also found:
–E1 = x/n = q/y
–E2 = y/n = q/x
–which gives n = q/(E1 × E2) = (xy)/q
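The derivation above translates directly into code (the sample figures are illustrative):

```python
def estimate_faults_two_groups(x, y, q):
    """Estimate the total number of faults n from two independent
    test groups: x found by Group 1, y by Group 2, q by both.
    Since E1 = q/y and E2 = q/x, n = q / (E1 * E2) = x * y / q."""
    return x * y / q

# Group 1 finds 25 faults, Group 2 finds 30, and 15 are common:
assert estimate_faults_two_groups(25, 30, 15) == 50.0
```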
Confidence in the Software Suppose we seeded a program with S faults and we claim that the code has only N actual faults. If we test until all S seeded faults have been found, along with n non-seeded faults, a confidence level can be calculated as:
C = 1, if n > N
C = S/(S + N + 1), if n ≤ N
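A minimal sketch of the confidence computation, assuming all S seeded faults have already been detected:

```python
def confidence(S, N, n):
    """Confidence that the program holds no more than the claimed N
    actual faults, once all S seeded faults and n non-seeded faults
    have been found."""
    if n > N:
        return 1.0
    return S / (S + N + 1)

# Seeding 9 faults, finding all 9 and none besides, and claiming the
# program is fault-free (N = 0) gives confidence 9/10 = 0.9.
assert confidence(9, 0, 0) == 0.9
```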
Confidence in the Software With that approach we cannot predict the level of confidence until all the seeded faults are detected. Richards (1974) suggests a modification, where the confidence level can be estimated whether or not all the seeded faults have been located. If s of the S seeded faults have been found:
C = 1, if n > N
C = (S choose s−1) / (S+N+1 choose N+s), if n ≤ N
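Richards' variant can be computed with binomial coefficients (`math.comb`); the example checks that it reduces to the earlier S/(S+N+1) when all seeded faults have been found (s = S):

```python
from math import comb

def confidence_partial(S, N, n, s):
    """Richards (1974): confidence that at most N actual faults remain
    when only s of the S seeded faults have been located so far."""
    if n > N:
        return 1.0
    return comb(S, s - 1) / comb(S + N + 1, N + s)

# With s = S the ratio collapses to S / (S + N + 1), matching the
# earlier formula: here 9 / 10 = 0.9.
assert confidence_partial(9, 0, 0, 9) == 0.9
```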
Other stopping criteria We can use the test strategy to determine when to stop.
–If we are doing statement, branch or path testing, we can track how many statements, branches or paths still need to be executed, and gauge our progress in terms of those statements, branches or paths left to test. There are many tools that can calculate these coverage values for us.