
Lecture 2 Software Testing 2.1 System Testing 2.2 Component Testing (Unit testing) 2.3 Test Case Design 2.4 Test Automation

The Testing Process  System testing  Testing of groups of components integrated to create a system or sub-system  The responsibility of an independent testing team  Tests are based on a system specification  Component testing (Unit testing)  Testing of individual program components  Usually the responsibility of the component developer (except sometimes for critical systems)  Tests are derived from the developer’s experience

The software testing process

2.1 System testing  Involves integrating components to create a system or sub-system  May involve testing an increment to be delivered to the customer  Two phases:  Integration testing - the test team have access to the system source code. The system is tested as components are integrated  Release testing - the test team test the complete system to be delivered as a black-box

2.1.1 Incremental Integration Testing

2.1.2 Release Testing  The process of testing a release of a system that will be distributed to customers  Primary goal is to increase the supplier’s confidence that the system meets its requirements  Release testing is usually black-box (functional) testing  Based on the system specification only;  Testers do not have knowledge of the system implementation

Black-box testing

Testing scenario-example A frontier mentality, a kind of trial without judges or courts, was once the dominant situation in America.

Testing scenario-example (what to test?)

Use cases  Use cases can be a basis for deriving the tests for a system  They help identify operations to be tested and help design the required test cases  From an associated sequence diagram, the inputs and outputs to be created for the tests can be identified.  Example (ATM system): visit http://cs.gordon.edu/courses/cs211/ATMExample/UseCases.html#Startup

Performance testing  Part of release testing may involve testing the emergent properties of a system, such as performance and reliability  Performance tests usually involve planning a series of tests where the load is steadily increased until the system performance becomes unacceptable
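As a concrete illustration, here is a minimal Java sketch of such a load-stepping test; callSystemUnderTest is a hypothetical stand-in for a real request, and all numbers are arbitrary:

// Minimal sketch of a performance test: step the load up until the
// measured response time per request exceeds an acceptable limit.
public class PerformanceTestSketch {
    static final long LIMIT_MS = 200;   // acceptable response time per request

    public static void main(String[] args) throws Exception {
        for (int load = 100; load <= 1000; load += 100) {   // steadily increase load
            long start = System.nanoTime();
            for (int i = 0; i < load; i++) {
                callSystemUnderTest();
            }
            long perCallMs = (System.nanoTime() - start) / 1_000_000 / load;
            System.out.println("load " + load + ": " + perCallMs + " ms per request");
            if (perCallMs > LIMIT_MS) {
                System.out.println("performance unacceptable at load " + load);
                break;
            }
        }
    }

    // placeholder for a real request to the system under test
    static void callSystemUnderTest() throws InterruptedException {
        Thread.sleep(1);
    }
}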

Stress testing  Exercises the system beyond its maximum design load. Stressing the system often causes defects to come to light  Stressing the system tests its failure behaviour. Systems should not fail disastrously; stress testing checks for unacceptable loss of service or data (e.g. server recovery time)  Stress testing is particularly applicable to distributed systems that can exhibit severe degradation as a network becomes overloaded.

2.2 Component testing (Unit testing)  Component or unit testing is the process of testing individual components in isolation.  It is a defect testing process.  Components may be:  Individual functions or methods within an object;  Object classes with several attributes and methods;  Composite components with defined interfaces used to access their functionality.

2.2.1 Object class testing  Complete test coverage of a class involves  Testing all operations associated with an object;  Setting and interrogating all object attributes;  Exercising the object in all possible states.  Inheritance makes it more difficult to design object class tests as the information to be tested is not localised.

Weather Station Software Example  It is a package of software-controlled instruments which collects data, performs some data processing and transmits this data for further processing.  The instruments include air and ground thermometers, an anemometer, a wind vane, a barometer and a rain gauge. Data is collected periodically.  When a command is issued to transmit the weather data, the weather station processes and summarises the collected data. The summarised data is transmitted to the mapping computer when a request is received.

Weather station object interface-example

Weather station testing example-cont  Need to define test cases for the operations: reportWeather, calibrate, test, startup, shutdown  Using a state model: identify the sequences of state transitions to be tested and the event sequences that cause these transitions (a test sketch follows)
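A minimal JUnit sketch of such state-transition tests; the WeatherStation stub below is an assumption standing in for the real interface, with invented state names:

import org.junit.Test;
import static org.junit.Assert.*;

// Hypothetical stand-in for the weather station interface of the example.
class WeatherStation {
    private String state = "SHUTDOWN";
    void startup()  { state = "RUNNING"; }
    void shutdown() { state = "SHUTDOWN"; }
    String reportWeather() { return state.equals("RUNNING") ? "summary" : null; }
    String state()  { return state; }
}

public class WeatherStationTest {
    @Test
    public void startupThenShutdownReturnsToShutdownState() {
        WeatherStation ws = new WeatherStation();
        ws.startup();    // transition: Shutdown -> Running
        ws.shutdown();   // transition: Running -> Shutdown
        assertEquals("SHUTDOWN", ws.state());
    }

    @Test
    public void reportWeatherProducesASummaryWhenRunning() {
        WeatherStation ws = new WeatherStation();
        ws.startup();
        assertNotNull(ws.reportWeather());   // operation under test
    }
}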

2.2.2 Interface testing  Objectives are to detect faults due to interface errors or invalid assumptions about interfaces.  Particularly important for object-oriented development as objects are defined by their interfaces.

Interface testing-cont Interactions between classes cannot be tested directly, but their net result can be observed via the main interface

Interface errors  Interface misuse  A calling component calls another component and makes an error in its use of its interface (e.g. parameters in the wrong order).  Interface misunderstanding  A calling component embeds assumptions about the behaviour of the called component which are incorrect (the called component may have been implemented by a different person).  Timing errors  The called and the calling component operate at different speeds and out-of-date information is accessed.
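A small illustration of interface misuse (all names invented): because both parameters have the same type, swapping them compiles cleanly and fails only at run time:

public class InterfaceMisuseExample {
    // contract: transfer(amountInCents, accountId)
    static void transfer(int amountInCents, int accountId) {
        System.out.println("moving " + amountInCents + " cents to account " + accountId);
    }

    public static void main(String[] args) {
        transfer(42, 5000);   // correct use of the interface
        transfer(5000, 42);   // misuse: arguments in the wrong order, no compiler error
    }
}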

2.3 Test Case Design  Involves designing the test cases (inputs and outputs) used to test the system  The goal of test case design is to create a set of tests that are effective in validation and defect testing  Design approaches:  Requirements-based testing;  Partition testing;  Structural testing

2.3.1 Requirements Based Testing  A general principle of requirements engineering is that requirements should be testable.  Requirements-based testing is a validation testing technique where you consider each requirement and derive a set of tests for that requirement.

LIBSYS requirements-example: R1, R2, R3

LIBSYS requirements-based test cases example-cont

2.3.2 Partition Testing  Input data and output results often fall into different classes where all members of a class are related  Each of these classes is an equivalence partition or domain where the program behaves in an equivalent way for each class member  Test cases should be chosen from each partition
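For example, a sketch with a hypothetical isValidPercentage routine that accepts 0..100: one JUnit test per partition, plus the boundaries where partitions meet:

import org.junit.Test;
import static org.junit.Assert.*;

public class PartitionTest {
    // hypothetical routine under test: valid inputs are 0..100
    static boolean isValidPercentage(int x) { return x >= 0 && x <= 100; }

    @Test public void belowRangePartition() { assertFalse(isValidPercentage(-7)); }
    @Test public void inRangePartition()    { assertTrue(isValidPercentage(50)); }
    @Test public void aboveRangePartition() { assertFalse(isValidPercentage(333)); }
    @Test public void lowerBoundary()       { assertTrue(isValidPercentage(0)); }
    @Test public void upperBoundary()       { assertTrue(isValidPercentage(100)); }
}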

Equivalence partitioning

Equivalence partitions

Search routine specification

procedure Search (Key : ELEM; T : SEQ of ELEM;
                  Found : in out BOOLEAN; L : in out ELEM_INDEX);

Pre-condition
-- the sequence has at least one element
T'FIRST <= T'LAST

Post-condition
-- the element is found and is referenced by L
(Found and T(L) = Key)
or
-- the element is not in the array
(not Found and not (exists i, T'FIRST <= i <= T'LAST, T(i) = Key))
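A plain-Java sketch of the same routine (the Ada in/out parameters are returned as a small result record instead; records need Java 16+):

public class Search {
    record Result(boolean found, int index) {}

    static Result search(int key, int[] t) {
        assert t.length >= 1;   // pre-condition: at least one element
        for (int i = 0; i < t.length; i++) {
            if (t[i] == key) {
                return new Result(true, i);    // post: found and t[index] == key
            }
        }
        return new Result(false, -1);          // post: not found, key not in t
    }
}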

Search routine - input partitions  Inputs which conform to the pre-conditions  Inputs where a pre-condition does not hold  Inputs where the key element is a member of the array  Inputs where the key element is not a member of the array

Testing guidelines (sequences)  Test software with sequences which have only a single value  Use sequences of different sizes in different tests  Derive tests so that the first, middle and last elements of the sequence are accessed  Test with sequences of zero length (null)

Search routine - input partitions (6 test cases)
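One possible realisation of these six cases as JUnit tests, reusing the Search sketch above (the concrete values are illustrative):

import org.junit.Test;
import static org.junit.Assert.*;

public class SearchPartitionTest {
    int[] single = {17};
    int[] many   = {17, 29, 21, 23};

    @Test public void singleElementKeyPresent() { assertTrue(Search.search(17, single).found()); }
    @Test public void singleElementKeyAbsent()  { assertFalse(Search.search(0, single).found()); }
    @Test public void firstElementFound()       { assertEquals(0, Search.search(17, many).index()); }
    @Test public void middleElementFound()      { assertEquals(2, Search.search(21, many).index()); }
    @Test public void lastElementFound()        { assertEquals(3, Search.search(23, many).index()); }
    @Test public void keyNotInSequence()        { assertFalse(Search.search(99, many).found()); }
}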

2.3.3 Structural Testing  Sometimes called white-box testing  Derivation of test cases according to program structure  Knowledge of the program is used to identify additional test cases  Objective is to exercise all program statements (not all path combinations)

Binary search - equivalence partitions  Pre-conditions satisfied, key element in array  Pre-conditions satisfied, key element not in array  Pre-conditions unsatisfied, key element in array  Pre-conditions unsatisfied, key element not in array  Input array has a single value  Input array has an even number of values  Input array has an odd number of values

Binary search equivalence partitions

Binary search - (8 test cases)
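For reference, a standard Java binary search (a sketch, not the lecture's exact code); the mid-point split is what makes the even/odd-length and boundary partitions above worth testing:

public class BinarySearch {
    // pre-condition: a is sorted in ascending order
    static int search(int key, int[] a) {
        int lo = 0, hi = a.length - 1;
        while (lo <= hi) {
            int mid = (lo + hi) >>> 1;          // unsigned shift avoids int overflow
            if      (a[mid] < key) lo = mid + 1;
            else if (a[mid] > key) hi = mid - 1;
            else return mid;                    // key found
        }
        return -1;                              // key not present
    }
}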

Path testing  It is a white-box testing strategy.  The objective of path testing is to ensure that the set of test cases is such that each path through the program is executed at least once  The starting point for path testing is a program flow graph (not a DFD) that shows nodes representing program decisions and arcs representing the flow of control  Statements with conditions are therefore nodes in the flow graph

Binary search flow graph-example (figure: the program flow graph, with nodes for program decisions and arcs for flow of control)

Binary search flow graph example- cont (Independent paths)  1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 14  1, 2, 3, 4, 5, 14  1, 2, 3, 4, 5, 6, 7, 11, 12, 5, …  1, 2, 3, 4, 6, 7, 11, 13, 5, …  Test cases should be derived so that all of these paths are executed  A dynamic program analyser may be used to check that paths have been executed

2.4 Test Automation  Testing is an expensive process phase. Testing workbenches provide a range of tools to reduce the time required and total testing costs.  Systems such as JUnit support the automatic execution of tests (a minimal example follows).  Most testing workbenches are open systems because testing needs are organisation-specific.  They are sometimes difficult to integrate with closed design and analysis workbenches.
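A minimal JUnit 4 example of an automatically executed test (illustrative; the code under test is invented):

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class PriceCalculatorTest {
    static int totalInCents(int unitPriceCents, int quantity) {
        return unitPriceCents * quantity;      // invented code under test
    }

    @Test
    public void totalIsUnitPriceTimesQuantity() {
        assertEquals(600, totalInCents(200, 3));   // the runner reports pass/fail
    }
}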

Topics in Metrics for Software Testing

Quantification One of the characteristics of a maturing discipline is the replacement of art by science. Quantification was impossible until the right questions were asked.  Computer Science is slowly following the quantification path. There is skepticism because so much of what we want to quantify is tied to erratic human behavior.

Software quantification  Software Engineers are still counting lines of code. This popular metric is highly inaccurate when used to predict: – costs – resources – schedules

Software quantification (cont.) Lines of Code (LOC) is not a good measure of software size. In software testing we need a notion of size when comparing two testing strategies. The number of tests should be normalized to software size, for example: – Strategy A needs 1.4 tests/unit size.

Asking the right questions  When can we stop testing? How many bugs can we expect? Which testing technique is more effective? Are we testing hard or smart? Do we have a strong program or a weak test suite? Currently, we are unable to answer these questions satisfactorily.

Metrics taxonomy  Linguistic Metrics: Based on measuring properties of program text without interpreting what the text means. – E.g., LOC. Structural Metrics: Based on structural relations between the objects in a program. – E.g., number of nodes and links in a control flowgraph.

Lines of code (LOC)  LOC is used as a measure of software complexity. This metric is just as good as source listing weight if we assume consistency w.r.t. paper and font size. Makes as much sense (or nonsense) to say: – “This is a 2 pound program” as it is to say: – “This is a 100,000 line program.”

Lines of code paradox  Paradox: If you unroll a loop, you reduce the complexity of your software... Studies show that there is a linear relationship between LOC and error rates for small programs (i.e., LOC < 100). The relationship becomes non-linear as programs increase in size.

Halstead’s program length

Example of program length

Structural metrics  Linguistic complexity is ignored.  Attention is focused on control-flow and data-flow complexity.  Structural metrics are based on the properties of flowgraph models of programs.

Cyclomatic complexity  McCabe’s cyclomatic complexity is defined as M = L - N + 2P, where L = number of links in the flowgraph, N = number of nodes in the flowgraph, and P = number of disconnected parts of the flowgraph.
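The formula transcribed directly into Java; the sample flowgraph numbers are illustrative, not taken from a particular diagram in the lecture:

public class CyclomaticComplexity {
    // M = L - N + 2P
    static int m(int links, int nodes, int parts) {
        return links - nodes + 2 * parts;
    }

    public static void main(String[] args) {
        // e.g. a connected flowgraph (P = 1) with 9 links and 7 nodes:
        System.out.println(m(9, 7, 1));   // prints 4
    }
}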

Examples of cyclomatic complexity

Pitfalls  An if... then... else has the same M as a loop!  Case statements, which are highly regular structures, have a high M.  Warning: McCabe’s metric should be used as a rule of thumb at best.

Rules of thumb based on M  Bugs/LOC increases discontinuously for M > 10  M is better than LOC in judging life-cycle efforts.  Routines with a high M (say > 40) should be scrutinized.  M establishes a useful lower-bound rule of thumb for the number of test cases required to achieve branch coverage.

Assignment 2  Halstead software science metrics were developed by Maurice Halstead, who claimed they could be used to evaluate the mental effort (E), the time required to create a program (T), and the predicted number of bugs (B).  These metrics are based on the four primitives listed below: n1 = number of unique operators, n2 = number of unique operands, N1 = total occurrences of operators, N2 = total occurrences of operands

Assignment 2-cont Software science metrics are listed below:  Vocabulary: n = n1 + n2  Estimated occurrences: N1 ≈ n1 log2 n1 and N2 ≈ n2 log2 n2  Program length: N = N1 + N2  Program volume: V = N log2 n = (N1 + N2) log2 (n1 + n2)  Effort required to generate the program (as a measure of text complexity): E = (n1 N2 N log2 n) / (2 n2)  Time required to generate the program: T = E / 18  Predicted theoretical initial number of bugs: B = E^(2/3) / 3000
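The slide's formulas transcribed into Java for checking hand calculations; the example primitives are invented, not the assignment's data:

public class HalsteadMetrics {
    public static void main(String[] args) {
        double n1 = 10, n2 = 20, N1 = 100, N2 = 80;    // invented primitives
        double n = n1 + n2;                            // vocabulary
        double N = N1 + N2;                            // program length
        double log2n = Math.log(n) / Math.log(2);
        double V = N * log2n;                          // program volume
        double E = (n1 * N2 * N * log2n) / (2 * n2);   // effort
        double T = E / 18;                             // time (in the slide's units)
        double B = Math.pow(E, 2.0 / 3.0) / 3000;      // predicted initial bugs
        System.out.printf("V=%.1f E=%.1f T=%.1f B=%.3f%n", V, E, T, B);
    }
}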

Assignment 2-cont  Given that a software application contains 2048 operators and 1024 operands: (a) Compute V, E, T, and the theoretical B. (b) Assume that three different practical feedbacks indicated that the numbers of bugs actually found were as follows: 2000 bugs in the first feedback, 825 bugs in the second feedback, 100 bugs in the third feedback. Give your comment on each feedback.
