Software Testing
Reference: Software Engineering, Ian Sommerville, 6th edition, Chapter 20.

Topics Covered
- Defect testing
  - Black box vs. white box
  - Equivalence partitions
  - Path testing
- Integration testing
  - Top-down vs. bottom-up
  - Interface testing
  - Stress testing
- Object-oriented testing

The Testing Process
- Component testing
  - Testing of individual program components
  - Usually the responsibility of the component developer (except sometimes for critical systems)
  - Tests are derived from the developer's experience
- Integration testing
  - Testing of groups of components integrated to create a system or sub-system
  - The responsibility of an independent testing team
  - Tests are based on a system specification

Defect Testing
- The goal of defect testing is to discover defects in programs.
- A successful defect test is a test that causes a program to behave in an anomalous way.
- Tests show the presence, not the absence, of defects.

Testing Priorities
- Only exhaustive testing can show a program is free from defects; however, exhaustive testing is impossible.
- Tests should exercise a system's capabilities rather than its components.
- Testing old capabilities is more important than testing new capabilities.
- Testing typical situations is more important than testing boundary value cases.

Test Cases and Test Data
- Test cases: inputs to test the system and the predicted outputs from these inputs if the system operates according to its specification
- Test data: inputs which have been devised to test the system

The Defect Testing Process

Black Box Testing
- An approach to testing where the program is considered as a "black box" (i.e., one cannot "see" inside of it)
- The program test cases are based on the system specification, not the internal workings (e.g., algorithms) of the program.
- Test planning can begin early in the software process.

Equivalence Partitioning
- Input data and output results often fall into different classes where all members of a class are related.
- Each of these classes is an equivalence partition where the program behaves in an equivalent way for each class member.
- Test cases should be chosen from each partition.

Equivalence Partitioning
- Partition system inputs and outputs into equivalence sets
  - If the input is a 5-digit integer between 10,000 and 99,999, the equivalence partitions are < 10,000, 10,000-99,999 and > 99,999
- Choose test cases at the boundaries of these sets
  - 9,999, 10,000, 99,999 and 100,000
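To make the boundary choices concrete, here is a minimal sketch (not part of the original slides) that exercises a hypothetical isValidFiveDigit check with one value from each partition and each boundary value listed above:

// Hypothetical validator used only to illustrate equivalence partitioning;
// the original slides do not define this class.
public class FiveDigitValidatorTest {

    // Accepts only 5-digit integers in the range 10,000..99,999.
    static boolean isValidFiveDigit(int value) {
        return value >= 10_000 && value <= 99_999;
    }

    public static void main(String[] args) {
        // One test case from the middle of each equivalence partition...
        check(!isValidFiveDigit(5_000), "< 10,000 partition");
        check(isValidFiveDigit(50_000), "10,000-99,999 partition");
        check(!isValidFiveDigit(150_000), "> 99,999 partition");

        // ...plus the boundary values suggested on the slide.
        check(!isValidFiveDigit(9_999), "just below the lower bound");
        check(isValidFiveDigit(10_000), "lower bound");
        check(isValidFiveDigit(99_999), "upper bound");
        check(!isValidFiveDigit(100_000), "just above the upper bound");

        System.out.println("All partition and boundary checks passed.");
    }

    private static void check(boolean condition, String description) {
        if (!condition) {
            throw new AssertionError("Failed: " + description);
        }
    }
}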

Equivalence Partitions

Example - Search Routine

procedure Search (Key : ELEM; T : ELEM_ARRAY;
                  Found : in out BOOLEAN; L : in out ELEM_INDEX);

-- Pre-condition
-- the array has at least one element
T'FIRST <= T'LAST

-- Post-condition
-- the element is found and is referenced by L
(Found and T(L) = Key)
or
-- the element is not in the array
(not Found and
 not (exists i, T'FIRST <= i <= T'LAST, T(i) = Key))

Search Routine Input Partitions
- Inputs that conform to the pre-conditions
- Inputs where a pre-condition does not hold
- Inputs where the key element is a member of the array
- Inputs where the key element is not a member of the array

Testing Guidelines (Sequences)
- Test software with sequences which have only a single value
- Use sequences of different sizes in different tests
- Derive tests so that the first, middle and last elements of the sequence are accessed
- Test with sequences of zero length

Search Routine Test Cases

White Box Testing
- Sometimes called structural testing
- Derivation of test cases according to program structure
- Objective is to exercise all program statements
- Usually applied to relatively small program units such as subroutines or class member functions

Binary search (Java)
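The code figure from this slide did not transcribe; what follows is a sketch of the standard iterative binary search in Java, offered as a stand-in rather than Sommerville's exact listing.

// Binary search sketch: reports whether the key occurs in a sorted array and,
// if so, at which index. Pre-condition: elems is sorted in ascending order.
public class BinarySearch {

    /** Result of a search: whether the key was found and, if so, where. */
    public static final class Result {
        public final boolean found;
        public final int index;   // meaningful only when found is true
        Result(boolean found, int index) {
            this.found = found;
            this.index = index;
        }
    }

    public static Result search(int key, int[] elems) {
        int bottom = 0;
        int top = elems.length - 1;
        while (bottom <= top) {                 // loop while the search range is non-empty
            int mid = (bottom + top) >>> 1;     // midpoint; >>> avoids int overflow
            if (elems[mid] == key) {
                return new Result(true, mid);   // key found
            } else if (elems[mid] < key) {
                bottom = mid + 1;               // continue in the upper half
            } else {
                top = mid - 1;                  // continue in the lower half
            }
        }
        return new Result(false, -1);           // key not present
    }
}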

Binary Search - Equivalence Partitions
- Pre-conditions satisfied, key element in array
- Pre-conditions satisfied, key element not in array
- Pre-conditions unsatisfied, key element in array
- Pre-conditions unsatisfied, key element not in array
- Input array has a single value
- Input array has an even number of values
- Input array has an odd number of values
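One way these partitions might translate into executable test cases for the sketch above; the arrays and keys are illustrative values only, not Sommerville's test-case table:

// Illustrative test cases derived from the partitions above.
public class BinarySearchTest {

    public static void main(String[] args) {
        int[] single = {17};                            // single value
        int[] evenLength = {17, 21, 23, 29};            // even number of values
        int[] oddLength = {9, 16, 18, 30, 31, 41, 45};  // odd number of values

        // Pre-conditions satisfied, key element in the array
        expect(BinarySearch.search(17, single), true, 0);
        expect(BinarySearch.search(23, evenLength), true, 2);
        expect(BinarySearch.search(9, oddLength), true, 0);   // first element
        expect(BinarySearch.search(30, oddLength), true, 3);  // middle element
        expect(BinarySearch.search(45, oddLength), true, 6);  // last element

        // Pre-conditions satisfied, key element not in the array
        expect(BinarySearch.search(45, single), false, -1);
        expect(BinarySearch.search(25, evenLength), false, -1);

        // Cases with unsatisfied pre-conditions (e.g., an unsorted array) are
        // omitted here, since the routine's behaviour is then unspecified.
        System.out.println("All binary search test cases passed.");
    }

    private static void expect(BinarySearch.Result result, boolean found, int index) {
        if (result.found != found || (found && result.index != index)) {
            throw new AssertionError(
                "Unexpected result: found=" + result.found + ", index=" + result.index);
        }
    }
}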

Binary Search Test Cases

Path Testing
- The objective of path testing is to ensure that the set of test cases is such that each path through the program is executed at least once.
- The starting point for path testing is a program flow graph.

Binary search flow graph

Independent Paths
- 1, 2, 3, 8, 9
- 1, 2, 3, 4, 6, 7, 2
- 1, 2, 3, 4, 5, 7, 2
- 1, 2, 3, 4, 6, 7, 2, 8, 9
- Test cases should be derived so that all of these paths are executed.
- A dynamic program analyser (test coverage tool) may be used to check that paths have been executed.

Cyclomatic Complexity
- The minimum number of tests needed to test all statements equals the cyclomatic complexity.
- Cyclomatic complexity equals one more than the number of conditions in a program (assuming no goto's).
- Useful if used with care. Does not imply adequacy of testing.
- Although all paths are executed, all combinations of paths are not executed.
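As a rough worked example, based on the binary search sketch earlier in these notes rather than on Sommerville's listing: that search method contains three conditions (the while-loop test and the two if/else-if tests), so its cyclomatic complexity is 3 + 1 = 4. This matches the four independent paths listed on the previous slide and suggests that at least four test cases are needed to execute every statement.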

Integration Testing
- Tests complete systems or subsystems composed of integrated components
- Integration testing should be black box testing with tests derived from the specification
- Main difficulty is localizing errors
- Incremental integration testing reduces this problem.

Incremental Integration Testing

Approaches to Integration Testing
- Top-down testing
  - Start with the high-level system and integrate from the top down, replacing individual components by stubs where appropriate (see the stub sketch below)
- Bottom-up testing
  - Integrate individual components in levels until the complete system is created
- In practice, most integration involves a combination of both of these strategies.
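A minimal sketch of what "replacing a component by a stub" can look like in practice; the ReportGenerator and DataStore names are invented for illustration and do not appear in the original slides:

// Hypothetical top-down integration test: the high-level ReportGenerator is
// exercised before the real DataStore exists, using a stub with canned data.
import java.util.List;

interface DataStore {                  // lower-level component, not yet integrated
    List<Integer> readings();
}

class ReportGenerator {                // high-level component under test
    private final DataStore store;

    ReportGenerator(DataStore store) {
        this.store = store;
    }

    String summaryReport() {
        int max = store.readings().stream().mapToInt(Integer::intValue).max().orElse(0);
        return "max=" + max;
    }
}

public class TopDownStubExample {
    public static void main(String[] args) {
        // The stub stands in for the missing DataStore implementation.
        DataStore stub = () -> List.of(3, 9, 4);
        ReportGenerator generator = new ReportGenerator(stub);

        if (!"max=9".equals(generator.summaryReport())) {
            throw new AssertionError("ReportGenerator failed against the stub");
        }
        System.out.println("Top-down test of ReportGenerator passed using a stub.");
    }
}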

Top-down Testing

Bottom-up Testing

Testing Approaches
- Architectural validation
  - Top-down integration testing is better at discovering errors in the system architecture
- System demonstration
  - Top-down integration testing allows a limited demonstration at an early stage in the development
- Test implementation
  - Often easier with bottom-up integration testing
- Test observation
  - Problems with both approaches; extra code may be required to observe tests

Interface Testing
- Takes place when modules or sub-systems are integrated to create larger systems
- Objectives are to detect faults due to interface errors or invalid assumptions about interfaces
- Particularly important for object-oriented development, as objects are defined by their interfaces

Interface Types
- Parameter interfaces
  - Data passed from one procedure to another
- Shared memory interfaces
  - A block of memory is shared between procedures
- Procedural interfaces
  - A sub-system encapsulates a set of procedures to be called by other sub-systems
- Message passing interfaces
  - Sub-systems request services from other sub-systems

Some Interface Errors
- Interface misuse
  - A calling component calls another component and makes an error in its use of its interface; e.g., parameters in the wrong order or of the wrong type
- Interface misunderstanding
  - A calling component embeds assumptions about the behavior of the called component which are incorrect
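A small sketch of the "parameters in the wrong order" flavour of interface misuse; countInRange and its caller are invented for illustration:

// Illustrative interface misuse: two parameters of the same type are easy to
// pass in the wrong order, and the compiler cannot catch the mistake.
public class InterfaceMisuseExample {

    /** Returns how many readings fall inside the inclusive range [low, high]. */
    static int countInRange(int[] readings, int low, int high) {
        int count = 0;
        for (int r : readings) {
            if (r >= low && r <= high) {
                count++;
            }
        }
        return count;
    }

    public static void main(String[] args) {
        int[] readings = {5, 12, 18, 25};

        int correct = countInRange(readings, 10, 20); // correct call: low, high
        int misused = countInRange(readings, 20, 10); // misuse: arguments swapped

        // The misuse compiles and runs but silently returns 0; an interface test
        // that includes reversed or extreme parameter values would expose it.
        System.out.println("correct = " + correct + ", misused = " + misused);
    }
}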

Interface Testing Guidelines
- Design tests so that parameters to a called procedure are at the extreme ends of their ranges.
- Always test pointer parameters with null pointers.
- Design tests that cause the component to fail.
- Use stress testing in message passing systems.
- In shared memory systems, vary the order in which components are activated.

Stress Testing
- Exercises the system beyond its maximum design load; stressing the system often causes defects to come to light.
- Systems should not fail catastrophically; stress testing checks for unacceptable loss of service or data.
- Particularly relevant to distributed systems, which can exhibit severe degradation as a network becomes overloaded.

Object-oriented Testing
- The components to be tested are object classes that are instantiated as objects.
- Larger grain than individual functions, so approaches to white box testing have to be extended.
- No obvious "top" or "bottom" to the system for top-down or bottom-up integration and testing.

Testing Levels
- Testing operations associated with objects
- Testing object classes
- Testing clusters of cooperating objects
- Testing the complete OO system

Object Class Testing
- Complete test coverage of a class involves
  - Testing all operations associated with an object
  - Setting and interrogating all object attributes
  - Exercising the object in all possible states
- Inheritance makes it more difficult to design object class tests, as the information to be tested is not localized.

Weather Station Object Interface
- Test cases are needed for all operations
- Use a state model to identify state transitions for testing
- Examples of testing sequences:
  - Shutdown -> Waiting -> Shutdown
  - Waiting -> Calibrating -> Testing -> Transmitting -> Waiting
  - Waiting -> Collecting -> Waiting -> Summarising -> Transmitting -> Waiting
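A hedged sketch of how the second sequence might be driven in a test; the WeatherStation operations and the State enum below are assumptions made for illustration, since the slide does not give the actual method signatures:

// Hypothetical state-transition test; the WeatherStation stand-in, its
// operations and its State enum are not defined in the original slides.
public class WeatherStationSequenceTest {

    enum State { SHUTDOWN, WAITING, CALIBRATING, TESTING, TRANSMITTING, COLLECTING, SUMMARISING }

    /** Minimal stand-in for the weather station so the sequence test can run. */
    static class WeatherStation {
        private State state = State.WAITING;
        State state() { return state; }
        void calibrate() { state = State.CALIBRATING; }
        void test() { state = State.TESTING; }
        void transmit() { state = State.TRANSMITTING; }
        void finishTransmission() { state = State.WAITING; }
    }

    public static void main(String[] args) {
        WeatherStation station = new WeatherStation();

        // Drive the sequence Waiting -> Calibrating -> Testing -> Transmitting -> Waiting
        expect(station.state(), State.WAITING);
        station.calibrate();
        expect(station.state(), State.CALIBRATING);
        station.test();
        expect(station.state(), State.TESTING);
        station.transmit();
        expect(station.state(), State.TRANSMITTING);
        station.finishTransmission();
        expect(station.state(), State.WAITING);

        System.out.println("State sequence exercised as expected.");
    }

    private static void expect(State actual, State expected) {
        if (actual != expected) {
            throw new AssertionError("Expected " + expected + " but was " + actual);
        }
    }
}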

Object Integration
- Levels of integration are less distinct in object-oriented systems.
- Cluster testing is concerned with integrating and testing clusters of cooperating objects.
- Identify clusters using knowledge of the operation of objects and the system features that are implemented by these clusters.

Approaches to Cluster Testing
- Use case or scenario testing
  - Testing is based on user interactions with the system
  - Has the advantage that it tests system features as experienced by users
- Thread testing
  - Tests the system's response to events as processing threads through the system
- Object interaction testing
  - Tests sequences of object interactions that stop when an object operation does not call on services from another object

Scenario-based Testing
- Identify scenarios from use cases and supplement these with sequence diagrams that show the objects involved in the scenario
- Consider the scenario in the weather station system where a report is generated

Collect Weather Data

Weather Station Testing
- Thread of methods executed: CommsController:request -> WeatherStation:report -> WeatherData:summarise
- Inputs and outputs
  - Input of a report request with an associated acknowledgement, and a final output of a report
  - Can be tested by creating raw data and ensuring that it is summarised properly
  - Use the same raw data to test the WeatherData object
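A minimal sketch of the "create raw data and check it is summarised properly" step; the WeatherData stand-in and its summarise() behaviour are assumptions for illustration, not the system from the book:

// Hypothetical cluster-test fragment: feed known raw readings to WeatherData
// and check that summarise() reduces them to the expected max/min/average.
import java.util.List;
import java.util.Locale;

public class WeatherDataSummariseTest {

    /** Minimal stand-in for the WeatherData object used in this scenario. */
    static class WeatherData {
        private final List<Double> temperatures;

        WeatherData(List<Double> temperatures) {
            this.temperatures = temperatures;
        }

        String summarise() {
            double max = temperatures.stream().mapToDouble(Double::doubleValue).max().orElse(0);
            double min = temperatures.stream().mapToDouble(Double::doubleValue).min().orElse(0);
            double avg = temperatures.stream().mapToDouble(Double::doubleValue).average().orElse(0);
            return String.format(Locale.ROOT, "max=%.1f min=%.1f avg=%.1f", max, min, avg);
        }
    }

    public static void main(String[] args) {
        // Raw data chosen so the expected summary can be computed by hand.
        WeatherData data = new WeatherData(List.of(10.0, 14.0, 12.0));

        String report = data.summarise();
        if (!"max=14.0 min=10.0 avg=12.0".equals(report)) {
            throw new AssertionError("Unexpected summary: " + report);
        }
        System.out.println("WeatherData summarised the raw data correctly.");
    }
}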