
1 Lecture 2 Software Testing 2.1 System Testing 2.2 Component Testing (Unit Testing) 2.3 Test Case Design 2.4 Test Automation

2 The Testing Process
- System testing
  - Testing of groups of components integrated to create a system or sub-system
  - The responsibility of an independent testing team
  - Tests are based on a system specification
- Component testing (unit testing)
  - Testing of individual program components
  - Usually the responsibility of the component developer (except sometimes for critical systems)
  - Tests are derived from the developer's experience

3 The software testing process

4 2.1 System Testing
- Involves integrating components to create a system or sub-system
- May involve testing an increment to be delivered to the customer
- Two phases:
  - Integration testing: the test team has access to the system source code; the system is tested as components are integrated
  - Release testing: the test team tests the complete system to be delivered, as a black box

5 2.1.1 Incremental Integration Testing

6 2.1.2 Release Testing
- The process of testing a release of a system that will be distributed to customers
- The primary goal is to increase the supplier's confidence that the system meets its requirements
- Release testing is usually black-box (functional) testing:
  - Based on the system specification only;
  - Testers do not have knowledge of the system implementation

7 Black-box testing

8 Testing scenario example: the frontier mentality (a kind of trial without judges or courts) was the dominant situation in early America

9 Testing scenario example (what to test?)

10 Use Cases
- Use cases can be a basis for deriving the tests for a system
- They help identify operations to be tested and help design the required test cases
- From an associated sequence diagram, the inputs and outputs to be created for the tests can be identified
- Example (ATM system): visit http://www.math-cs.gordon.edu/courses/cs211/ATMExample/UseCases.html#Startup

11 Performance Testing
- Part of release testing may involve testing the emergent properties of a system, such as performance and reliability
- Performance tests usually involve planning a series of tests where the load is steadily increased until the system performance becomes unacceptable

12 Stress Testing
- Exercises the system beyond its maximum design load; stressing the system often causes defects to come to light
- Stress testing checks the system's failure behaviour: systems should not fail disastrously, so stress tests look for unacceptable loss of service or data (e.g. server recovery time)
- Stress testing is particularly applicable to distributed systems, which can exhibit severe degradation as a network becomes overloaded

13 2.2 Component Testing (Unit Testing)
- Component or unit testing is the process of testing individual components in isolation
- It is a defect testing process
- Components may be:
  - Individual functions or methods within an object;
  - Object classes with several attributes and methods;
  - Composite components with defined interfaces used to access their functionality
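As a minimal sketch of unit testing in practice, here is a JUnit 4 test that exercises one component in isolation; the Calculator class is hypothetical, invented for illustration, and the second test is a defect test for invalid input:

    import org.junit.Test;
    import static org.junit.Assert.*;

    // Hypothetical component under test (not from the lecture).
    class Calculator {
        int divide(int a, int b) {
            return a / b;   // throws ArithmeticException when b == 0
        }
    }

    public class CalculatorTest {
        @Test
        public void divideReturnsQuotient() {
            assertEquals(4, new Calculator().divide(8, 2));
        }

        @Test(expected = ArithmeticException.class)
        public void divideByZeroIsRejected() {
            new Calculator().divide(8, 0);   // defect test: invalid input
        }
    }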

14 2.2.1 Object Class Testing
- Complete test coverage of a class involves:
  - Testing all operations associated with an object;
  - Setting and interrogating all object attributes;
  - Exercising the object in all possible states
- Inheritance makes it more difficult to design object class tests, as the information to be tested is not localised

15 Weather Station Software Example
- A package of software-controlled instruments which collects data, performs some data processing and transmits the data for further processing
- The instruments include air and ground thermometers, an anemometer, a wind vane, a barometer and a rain gauge; data is collected periodically
- When a command is issued to transmit the weather data, the weather station processes and summarises the collected data; the summarised data is transmitted to the mapping computer when a request is received

16 Weather station object interface example

17 Weather station testing example (cont.)
- Need to define test cases for the operations: reportWeather, calibrate, test, startup, shutdown
- Using a state model, identify:
  - the sequences of state transitions to be tested;
  - the event sequences that cause these transitions (see the sketch below)
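As a sketch of deriving a test from a state model, the state names and transitions below are assumed for illustration only (they are not taken from the slides); a stub stands in for the real station:

    import org.junit.Test;
    import static org.junit.Assert.*;

    public class WeatherStationStateTest {
        enum State { SHUTDOWN, RUNNING, TRANSMITTING }

        // Hypothetical stub of the weather station's state machine.
        static class WeatherStation {
            State state = State.SHUTDOWN;
            void startup()       { state = State.RUNNING; }
            void reportWeather() { state = State.TRANSMITTING; }
            void shutdown()      { state = State.SHUTDOWN; }
        }

        @Test
        public void startupReportShutdownSequence() {
            WeatherStation ws = new WeatherStation();
            ws.startup();
            assertEquals(State.RUNNING, ws.state);       // Shutdown -> Running
            ws.reportWeather();
            assertEquals(State.TRANSMITTING, ws.state);  // Running -> Transmitting
            ws.shutdown();
            assertEquals(State.SHUTDOWN, ws.state);      // back to Shutdown
        }
    }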

18 2.2.2 Interface Testing
- Objectives are to detect faults due to interface errors or invalid assumptions about interfaces
- Particularly important for object-oriented development, as objects are defined by their interfaces

19 Interface testing (cont.): interactions between classes cannot be tested directly, but their net result can be observed via the main interface

20 Interface Errors
- Interface misuse
  - A calling component calls another component and makes an error in its use of its interface (e.g. parameters in the wrong order; see the sketch below)
- Interface misunderstanding
  - A calling component embeds incorrect assumptions about the behaviour of the called component (which may have been implemented by a different person)
- Timing errors
  - The called and the calling component operate at different speeds, and out-of-date information is accessed
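As a minimal sketch of interface misuse, the Accounts API below is hypothetical; the two int parameters are easy to swap, the compiler cannot catch the mistake, and only a test at the interface would reveal it:

    class Accounts {
        // Transfer 'amount' cents from account 'fromId' to account 'toId'.
        static void transfer(int fromId, int toId, int amount) {
            // ... real implementation elided ...
        }
    }

    class Client {
        void payInvoice() {
            // Interface misuse: the caller passes (from, amount, to) instead
            // of (from, to, amount), transferring 42 cents to account 500
            // instead of 500 cents to account 42.
            Accounts.transfer(17, 500, 42);   // intended: transfer(17, 42, 500)
        }
    }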

21 2.3 Test Case Design
- Involves designing the test cases (inputs and outputs) used to test the system
- The goal of test case design is to create a set of tests that are effective in validation and defect testing
- Design approaches:
  - Requirements-based testing;
  - Partition testing;
  - Structural testing

22 2.3.1 Requirements-Based Testing
- A general principle of requirements engineering is that requirements should be testable
- Requirements-based testing is a validation testing technique where you consider each requirement and derive a set of tests for that requirement

23 LIBSYS requirements example (figure: requirements R1, R2, R3)

24 LIBSYS requirements-based test cases example (cont.; figure: test cases 1 to 5)

25 2.3.2 Partition Testing
- Input data and output results often fall into different classes where all members of a class are related
- Each of these classes is an equivalence partition or domain where the program behaves in an equivalent way for each class member
- Test cases should be chosen from each partition (see the sketch below)

26 Equivalence partitioning

27 Equivalence partitions
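As a minimal sketch of partition testing (the 4-to-10-digit input rule below is assumed for illustration), one JUnit test value is drawn from each equivalence partition, plus the two boundaries:

    import org.junit.Test;
    import static org.junit.Assert.*;

    public class PartitionTest {
        // Hypothetical input rule: the field accepts 4 to 10 digits.
        static boolean valid(String s) {
            return s.matches("\\d{4,10}");
        }

        @Test public void tooShortPartition() { assertFalse(valid("123")); }          // fewer than 4 digits
        @Test public void validPartition()    { assertTrue(valid("1234567")); }       // inside 4..10
        @Test public void tooLongPartition()  { assertFalse(valid("12345678901")); }  // more than 10 digits
        @Test public void lowerBoundary()     { assertTrue(valid("1234")); }          // exactly 4
        @Test public void upperBoundary()     { assertTrue(valid("1234567890")); }    // exactly 10
    }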

28 Search routine specification

procedure Search (Key: ELEM; T: SEQ of ELEM;
                  Found: in out BOOLEAN; L: in out ELEM_INDEX);

Pre-condition:
-- the sequence has at least one element
T'FIRST <= T'LAST

Post-condition:
-- the element is found and is referenced by L
(Found and T(L) = Key)
or
-- the element is not in the array
(not Found and not (exists i, T'FIRST <= i <= T'LAST, T(i) = Key))
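A minimal Java rendering of the specified routine (the slide's version is Ada-like; here the Found flag and index L are collapsed into a single return value, with -1 meaning "not found"):

    class Search {
        // Returns the index of key in t, or -1 when key is absent.
        static int search(int[] t, int key) {
            assert t.length >= 1 : "pre-condition: at least one element";
            for (int i = 0; i < t.length; i++) {
                if (t[i] == key) {
                    return i;      // post-condition: found, and t[i] == key
                }
            }
            return -1;             // post-condition: key occurs nowhere in t
        }
    }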

29 Search routine: input partitions
- Inputs which conform to the pre-conditions
- Inputs where a pre-condition does not hold
- Inputs where the key element is a member of the array
- Inputs where the key element is not a member of the array

30 Testing guidelines (sequences)
- Test software with sequences which have only a single value
- Use sequences of different sizes in different tests
- Derive tests so that the first, middle and last elements of the sequence are accessed
- Test with sequences of zero length (null); example data below
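As a sketch of test data chosen to follow these guidelines (the values are illustrative, not from the slides):

    class SequenceTestData {
        static final int[][] SEQUENCES = {
            {},                           // zero length (null case)
            {17},                         // a single value
            {17, 29, 21, 23},             // even number of values
            {41, 18, 9, 31, 30, 16, 45},  // different size, odd number of values
        };
        // Choose search keys 41, 31 and 45 so that the first, middle and
        // last elements of the longest sequence are each accessed.
    }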

31 Search routine: input partitions (6 test cases)

32 2.3.3 Structural Testing
- Sometimes called white-box testing
- Derivation of test cases according to program structure
- Knowledge of the program is used to identify additional test cases
- Objective is to exercise all program statements (not all path combinations)

33 Binary search: equivalence partitions
- Pre-conditions satisfied, key element in array
- Pre-conditions satisfied, key element not in array
- Pre-conditions unsatisfied, key element in array
- Pre-conditions unsatisfied, key element not in array
- Input array has a single value
- Input array has an even number of values
- Input array has an odd number of values

34 Binary search equivalence partitions

35 Binary search (8 test cases)

36 2.3.3.1 Path Testing
- A white-box testing strategy
- The objective of path testing is to ensure that the set of test cases is such that each path through the program is executed at least once
- The starting point for path testing is a program flow graph (not a DFD) that shows nodes representing program decisions and arcs representing the flow of control
- Statements with conditions are therefore nodes in the flow graph

37 Binary search flow graph example (figure: nodes are program decisions, arcs are the flow of control)

38 Binary search flow graph example (cont.): independent paths
- 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 14
- 1, 2, 3, 4, 5, 14
- 1, 2, 3, 4, 5, 6, 7, 11, 12, 5, …
- 1, 2, 3, 4, 6, 7, 11, 13, 5, …
- Test cases should be derived so that all of these paths are executed (see the sketch below)
- A dynamic program analyser may be used to check that paths have been executed
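As a sketch of path-driven test cases, assuming a conventional iterative binary search (the node numbers above belong to the slide's own flow graph, so the comments below name the branches instead):

    import org.junit.Test;
    import static org.junit.Assert.*;

    class BinarySearch {
        static int find(int[] a, int key) {
            int lo = 0, hi = a.length - 1;
            while (lo <= hi) {
                int mid = (lo + hi) / 2;
                if (a[mid] == key) return mid;   // found
                if (a[mid] < key) lo = mid + 1;  // search upper half
                else hi = mid - 1;               // search lower half
            }
            return -1;                           // not found
        }
    }

    public class BinarySearchPathTest {
        int[] a = {17, 21, 23, 29};

        @Test public void keyAtFirstProbe() { assertEquals(1, BinarySearch.find(a, 21)); }  // hit without looping
        @Test public void keyInLowerHalf()  { assertEquals(0, BinarySearch.find(a, 17)); }  // takes the lower-half arc
        @Test public void keyInUpperHalf()  { assertEquals(3, BinarySearch.find(a, 29)); }  // takes the upper-half arc
        @Test public void keyAbsent()       { assertEquals(-1, BinarySearch.find(a, 25)); } // loop exits unfound
    }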

39 2.4 Test Automation
- Testing is an expensive process phase; testing workbenches provide a range of tools to reduce the time required and total testing costs
- Systems such as JUnit support the automatic execution of tests (have a look at http://junit.sourceforge.net/)
- Most testing workbenches are open systems because testing needs are organisation-specific
- They are sometimes difficult to integrate with closed design and analysis workbenches

40 Topics in Metrics for Software Testing

41 Quantification
- One of the characteristics of a maturing discipline is the replacement of art by science; quantification was impossible until the right questions were asked
- Computer Science is slowly following the quantification path; there is skepticism because so much of what we want to quantify is tied to erratic human behavior

42 Software quantification
- Software engineers are still counting lines of code; this popular metric is highly inaccurate when used to predict:
  - costs
  - resources
  - schedules

43 Software quantification (cont.)
- Lines of Code (LOC) is not a good measure of software size
- In software testing we need a notion of size when comparing two testing strategies; the number of tests should be normalized to software size, for example:
  - Strategy A needs 1.4 tests per unit size

44 Asking the right questions
- When can we stop testing?
- How many bugs can we expect?
- Which testing technique is more effective?
- Are we testing hard or smart?
- Do we have a strong program or a weak test suite?
- Currently, we are unable to answer these questions satisfactorily

45 Metrics taxonomy
- Linguistic metrics: based on measuring properties of program text without interpreting what the text means
  - E.g., LOC
- Structural metrics: based on structural relations between the objects in a program
  - E.g., number of nodes and links in a control flowgraph

46 Lines of code (LOC)
- LOC is used as a measure of software complexity
- This metric is just as good as source-listing weight, if we assume consistency with respect to paper and font size
- It makes as much sense (or nonsense) to say "this is a 2 pound program" as it does to say "this is a 100,000 line program"

47 Lines of code paradox
- Paradox: if you unroll a loop, you reduce the complexity of your software...
- Studies show that there is a linear relationship between LOC and error rates for small programs (i.e., LOC < 100)
- The relationship becomes non-linear as programs increase in size

48 Halstead’s program length

49 Example of program length


53 Structural metrics
- Linguistic complexity is ignored
- Attention is focused on control-flow and data-flow complexity
- Structural metrics are based on the properties of flowgraph models of programs

54 Cyclomatic complexity
McCabe's cyclomatic complexity is defined as:
M = L - N + 2P
where:
- L = number of links in the flowgraph
- N = number of nodes in the flowgraph
- P = number of disconnected parts of the flowgraph

55 Examples of cyclomatic complexity
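As a minimal worked example (the routine below is illustrative, not from the slides), M can be counted either from the flow graph or from the number of binary decisions:

    static String classify(int x) {
        if (x < 0) {            // decision 1
            return "negative";
        }
        while (x > 9) {         // decision 2
            x /= 10;            // reduce x to its leading digit
        }
        if (x % 2 == 0) {       // decision 3
            return "even digit";
        }
        return "odd digit";
    }
    // Flow graph (with a single exit node): N = 8 nodes, L = 10 links,
    // P = 1 connected part, so M = L - N + 2P = 10 - 8 + 2 = 4.
    // Equivalently, M = (number of binary decisions) + 1 = 3 + 1 = 4.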


60 Pitfalls
- if...then...else has the same M as a loop!
- case statements, which are highly regular structures, have a high M
- Warning: McCabe's metric should be used as a rule of thumb at best

61 Rules of thumb based on M
- Bugs/LOC increases discontinuously for M > 10
- M is better than LOC in judging life-cycle efforts
- Routines with a high M (say > 40) should be scrutinized
- M establishes a useful lower-bound rule of thumb for the number of test cases required to achieve branch coverage

62 Assignment 2
- Halstead's software science metrics were developed by Maurice Halstead, who claimed they could be used to evaluate the mental effort (E), the time required to create a program (T), and the predicted number of bugs (B)
- These metrics are based on the four primitives listed below:
  - n1 = number of unique operators
  - n2 = number of unique operands
  - N1 = total occurrences of operators
  - N2 = total occurrences of operands

63 Assignment 2 (cont.)
The software science metrics are listed below:
- Vocabulary: n = n1 + n2
- Estimated occurrence counts: N1 = n1 log2 n1 and N2 = n2 log2 n2
- Program length: N = N1 + N2
- Program volume: V = N log2 n = (N1 + N2) log2 (n1 + n2)
- Effort required to generate the program (as a measure of text complexity): E = (n1 N2 N log2 n) / (2 n2)
- Time required to generate the program: T = E / 18
- Predicted theoretical initial number of bugs: B = E^(2/3) / 3000
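As a minimal sketch (the operator and operand counts below are illustrative, deliberately not the assignment's values), these formulas translate directly into code:

    public class Halstead {
        public static void main(String[] args) {
            int n1 = 10, n2 = 8;    // unique operators / operands (assumed)
            int N1 = 50, N2 = 40;   // total occurrences (assumed)

            int n = n1 + n2;                               // vocabulary
            int N = N1 + N2;                               // program length
            double log2n = Math.log(n) / Math.log(2);
            double V = N * log2n;                          // volume
            double E = (n1 * N2 * N * log2n) / (2.0 * n2); // effort
            double T = E / 18.0;                           // time, in seconds
            double B = Math.pow(E, 2.0 / 3.0) / 3000.0;    // predicted bugs

            System.out.printf("n=%d N=%d V=%.1f E=%.1f T=%.1fs B=%.4f%n",
                              n, N, V, E, T, B);
        }
    }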

64 Assignment 2 (cont.)
Given that a software application contains 2048 operators and 1024 operands:
(a) Compute V, E, T and the theoretical B.
(b) Assume that three different practical feedbacks indicated the number of bugs actually found was as follows:
  - 2000 bugs in the first feedback
  - 825 bugs in the second feedback
  - 100 bugs in the third feedback
Give your comment on each feedback.


