Defect testing Testing programs to establish the presence of system defects.

Presentation transcript:

Defect testing Testing programs to establish the presence of system defects

The testing process
Component testing: testing of individual program components. Usually the responsibility of the component developer (except sometimes for critical systems); tests are derived from the developer's experience.
Integration testing: testing of groups of components integrated to create a system or sub-system. The responsibility of an independent testing team; tests are based on a system specification. Includes integration of subsystems and of systems, and acceptance testing.
In OO development, the boundary between component and integration testing is less clear.

20.1 Defect testing
The goal of defect testing is to discover defects in programs. A successful defect test is one that causes a program to behave in an anomalous way. Tests show the presence, not the absence, of defects. <repeat from chapter 19 – briefly mention>

The defect testing process
Only exhaustive testing can show a program is free from defects; however, exhaustive testing is impossible.
Test cases: inputs to test the system and the predicted outputs from these inputs if the system operates according to its specification.
Test data: inputs which have been devised to test the system.
It is important to test typical situations as well as boundary-value cases.
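The distinction between test data (inputs) and test cases (inputs plus predicted outputs) can be made concrete in code. A minimal sketch, assuming a hypothetical absolute-value function as the system under test; the typical and boundary cases follow the guideline above:

```java
public class TestCaseDemo {
    // A test case pairs an input with the output predicted by the spec;
    // the bare input on its own would just be test data.
    record TestCase(int input, int expectedOutput) {}

    // Hypothetical system under test: integer absolute value.
    static int abs(int x) {
        return x < 0 ? -x : x;
    }

    public static void main(String[] args) {
        TestCase[] cases = {
            new TestCase(5, 5),    // typical situation
            new TestCase(-5, 5),   // typical situation
            new TestCase(0, 0),    // boundary value
            // Overflow boundary: -Integer.MIN_VALUE wraps back to itself.
            new TestCase(Integer.MIN_VALUE, Integer.MIN_VALUE)
        };
        for (TestCase c : cases) {
            if (abs(c.input()) != c.expectedOutput())
                throw new AssertionError("failed for input " + c.input());
        }
        System.out.println("all test cases passed");
    }
}
```

The last case shows why boundary values matter: a naive reading of the spec would predict a positive result for `Integer.MIN_VALUE`, but two's-complement arithmetic makes that impossible.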

Black-box testing
An approach to testing where the program is treated as a 'black box'. The program test cases are based on the system specification, so test planning (developing test cases) can begin early in the software process. … not based on any consideration of how the code is written – what ifs and loops it contains – only on what the system is supposed to do.

Equivalence partitioning
Input data and output results often fall into different classes where all members of a class are related. Each of these classes is an equivalence partition: the program behaves in an equivalent way for every class member. Test cases should be chosen from each partition.

E.g. equivalence partitioning
Partition system inputs and outputs into 'equivalence sets'. If the input is a 5-digit integer between 10,000 and 99,999, the equivalence partitions are < 10,000, 10,000–99,999, and > 99,999.
Choose test cases around the midpoint of each partition (5,000, 50,000, 150,000) and at the boundaries of these sets (00000, 09999, 10000, 99999, 100000). … partitions could come from the spec <See next slide for illustration> … checking easy cases and cases likely to be mishandled (including invalid values).
If a value in a partition produces bad results, it is likely that other values in the partition will produce bad results. If a value in a partition produces correct results, it is likely that other values in the partition will produce correct results (provided the partitions have been correctly identified).
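The 5-digit example can be turned directly into executable checks. A sketch, assuming a hypothetical `isValid` routine as the program under test; the midpoint and boundary values are the ones listed on the slide:

```java
public class PartitionDemo {
    // Hypothetical program under test: accepts only 5-digit integers
    // in the valid partition [10000, 99999].
    static boolean isValid(int value) {
        return value >= 10000 && value <= 99999;
    }

    static void check(boolean condition, String label) {
        if (!condition) throw new AssertionError("failed: " + label);
        System.out.println("ok: " + label);
    }

    public static void main(String[] args) {
        // One midpoint-style case per partition: below, inside, above.
        check(!isValid(5000),   "< 10,000 partition rejected");
        check(isValid(50000),   "10,000-99,999 partition accepted");
        check(!isValid(150000), "> 99,999 partition rejected");
        // Boundary-value cases at the partition edges.
        check(!isValid(9999),   "09999 rejected");
        check(isValid(10000),   "10000 accepted");
        check(isValid(99999),   "99999 accepted");
        check(!isValid(100000), "100000 rejected");
    }
}
```

Note that one value per partition plus the values either side of each boundary is enough to cover all three partitions; more values inside a partition add little, by the argument above.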

Equivalence partitions (the spec says the program will receive 4–10 values, each a 5-digit number between 10,000 and 99,999 – check the partitions for each)

Search routine - input partitions
Inputs which conform to the pre-conditions. Inputs where a pre-condition does not hold (e.g. an empty sequence). Inputs where the key element is a member of the array (found). Inputs where the key element is not a member of the array (not found).

Other testing guidelines (sequences)
Test software with sequences which have only a single value. Use sequences of different sizes in different tests. Derive tests so that the first, middle and last elements of the sequence are accessed. The spec doesn't say that the sequence is ordered, so don't assume that it is.
… the single-value sequence is the boundary condition on whether the precondition (a non-empty sequence) is met … size is not really a partition, but programmer's intuition says length could matter even though there is no explicit partition; we don't want to "luck" into the one and only size that works … first/middle/last considers processing – the code could be bumping against boundaries.

Search routine - input partitions
Combine the partitions in various combinations … single-value sequence – found; single-value sequence – not found; etc. (Missing here – put in the empty sequence.)
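These combinations can be sketched as tests against a simple search routine. The linear search below is hypothetical (the slides do not give its code); the tests exercise single-value sequences, first/interior/last positions, a not-found case, and the empty sequence the note says is missing:

```java
public class SearchPartitionTests {
    // Hypothetical search routine: returns the index of key in seq,
    // or -1 if the key is not present.
    static int search(int[] seq, int key) {
        for (int i = 0; i < seq.length; i++) {
            if (seq[i] == key) return i;
        }
        return -1;
    }

    public static void main(String[] args) {
        // Single-value sequence: found / not found.
        if (search(new int[]{17}, 17) != 0) throw new AssertionError();
        if (search(new int[]{17}, 45) != -1) throw new AssertionError();
        // Larger sequence: key at the first, an interior, and the last
        // position, plus a key that is absent.
        int[] seq = {17, 29, 21, 23};
        if (search(seq, 17) != 0) throw new AssertionError();
        if (search(seq, 29) != 1) throw new AssertionError();
        if (search(seq, 23) != 3) throw new AssertionError();
        if (search(seq, 25) != -1) throw new AssertionError();
        // Empty sequence: the precondition boundary case.
        if (search(new int[]{}, 17) != -1) throw new AssertionError();
        System.out.println("all search partition tests passed");
    }
}
```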

Structural testing
Sometimes called white-box testing (or glass-box or clear-box testing). Derivation of test cases according to program structure: knowledge of the program is used to identify additional test cases. The objective is to exercise all program statements (not all path combinations). … done on small programs, modules, or functions – the bigger the program, the harder it is to hit all of the statements.

Binary search (Java) <just comment that the code searches 3 places – in the middle, above, or below – so three equivalence classes to be tested – and that in this algorithm the sequence must be ordered>
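The transcript refers to Java code that is not reproduced here. The following is a sketch of a conventional binary search with the structure the note describes – the comparison sends the search to the middle, above, or below – not necessarily the exact code from the slide:

```java
public class BinSearch {
    // Precondition: elemArray is sorted in ascending order.
    // Returns the index of key in elemArray, or -1 if key is absent.
    static int search(int key, int[] elemArray) {
        int bottom = 0;
        int top = elemArray.length - 1;
        while (bottom <= top) {
            int mid = (bottom + top) / 2;
            if (elemArray[mid] == key) {
                return mid;           // found at the middle position
            } else if (elemArray[mid] < key) {
                bottom = mid + 1;     // key, if present, is above mid
            } else {
                top = mid - 1;        // key, if present, is below mid
            }
        }
        return -1;                    // not found
    }

    public static void main(String[] args) {
        int[] sorted = {17, 21, 23, 29, 41, 45};
        System.out.println(search(23, sorted));  // prints 2
        System.out.println(search(25, sorted));  // prints -1
    }
}
```

The three branches of the comparison are exactly the three equivalence classes the note mentions, and the `while` condition gives the not-found exit.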

Binary search equiv. partitions

Binary search - equiv. partitions Pre-conditions satisfied, key element in array Pre-conditions satisfied, key element not in array Pre-conditions unsatisfied, key element in array Pre-conditions unsatisfied, key element not in array Key found in center, top, or bottom Input array has a single value Input array has an even number of values Input array has an odd number of values (really, in this case, there is search to some depth, we ought to search for some items found at middling depth, and some found essentially at leaves)

Binary search - test cases <this misses the preconditions-not-satisfied cases (e.g. empty sequence, or sequence not in order), and it really ought to try some bigger sequences so we can check nodes in the middle and at the leaves>

Path testing
The objective of path testing is to ensure that the set of test cases is such that each path through the program is executed at least once. The starting point for path testing is a program flow graph that shows nodes representing program decisions and arcs representing the flow of control. Statements with conditions are therefore nodes in the flow graph.
Not practical with anything of significant size (and here they are not even talking about each combination of paths – which is exponential in the size of the program).

Binary search flow graph
A program flow graph describes the program control flow. Each branch is shown as a separate path, and loops are shown by arcs looping back to the loop-condition node.

Cyclomatic complexity
The number of tests needed to test all control statements equals the cyclomatic complexity. Cyclomatic complexity = number of conditions in a program + 1; equivalently, cyclomatic complexity = number of edges - number of nodes + 2 (these calculations come out the same if there are no gotos). The program flow graph is used as a basis for computing the cyclomatic complexity.
Useful if used with care, but it does not imply adequacy of testing: although all paths are executed, all combinations of paths are not executed. … (I don't think that is right; I think we could go through all paths here in 2 tests) (I don't think this is adequate)
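As a worked example of the two formulas, under the assumption (hypothetical – the flow graph itself is only described, not reproduced in this transcript) that the binary search flow graph has 9 nodes and 11 edges:

```java
public class CyclomaticDemo {
    // Cyclomatic complexity from the flow graph: V(G) = edges - nodes + 2.
    static int complexity(int edges, int nodes) {
        return edges - nodes + 2;
    }

    public static void main(String[] args) {
        // Assumed counts for a binary-search-like flow graph with nodes
        // 1..9 and arcs 1-2, 2-3, 2-8, 3-4, 3-8, 4-5, 4-6, 5-7, 6-7,
        // 7-2, 8-9 (11 edges).
        int nodes = 9;
        int edges = 11;
        System.out.println(complexity(edges, nodes));  // prints 4
        // The other formula agrees: a while, an if, and an else-if give
        // 3 conditions, so complexity = 3 + 1 = 4.
    }
}
```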

Independent paths
Test cases should be derived so that all of these paths are executed:
1, 2, 3, 8, 9
1, 2, 3, 4, 6, 7, 2, 8, 9
1, 2, 3, 4, 5, 7, 2, 8, 9
A dynamic program analyser may be used to check that paths have been executed … may be difficult. <I couldn't see any way that a test could end at 2, so I covered these in 3 test cases – but if we needed to search deeper, we could have gone up on one loop and down on the next loop, so only two cases were needed: a found case that goes on the 3-8 arc and a not-found case that goes on the 2-8 arc> A dynamic program analyser is a testing tool that tells you which statements have been executed (and how many times) by your test cases.

Integration testing Tests complete systems or subsystems composed of integrated components Integration testing should be black-box testing with tests derived from the specification Main difficulty is localizing errors Incremental integration testing reduces this problem … start testing as soon as usable versions of some of the components are available. Add components one at a time (if possible) to isolate problems. (see next slide)

Incremental integration testing
Run tests 1, 2, 3 on modules A & B; once that works, rerun the tests on the combined modules A, B & C (plus other new tests for the new capabilities), etc.

Approaches to integration testing
Top-down testing: start with the high-level system and integrate from the top down, replacing individual components by stubs where appropriate.
Bottom-up testing: integrate individual components in levels until the complete system is created.
In practice, most integration involves a combination of these strategies … pieces are integrated as they become available.

Top-down testing

Bottom-up testing

Comparing testing approaches
Architectural validation: top-down integration testing is better at discovering errors in the system architecture … allowing problems to be fixed when they are less costly to fix.
System demonstration: top-down integration testing allows a limited demonstration at an early stage in the development.
Test implementation: often easier with bottom-up integration testing.
Test observation: problems with both approaches; extra code may be required to observe tests … top-down testing needs stubs to give some simulation of lower-level function, while in bottom-up testing, test drivers set a context and request services from the lower-level code.

Interface testing Takes place when modules or sub-systems are integrated to create larger systems Objectives are to detect faults due to interface errors or invalid assumptions about interfaces Particularly important for object-oriented development as objects are defined by their interfaces

Interface testing

Common interface errors
Interface misuse: a calling component calls another component and makes an error in its use of the interface, e.g. passing parameters in the wrong order.
Interface misunderstanding: a calling component embeds incorrect assumptions about the behaviour of the called component – this causes a problem in the caller because it didn't get what it needed.
Timing errors (when real-time, shared-memory, or MPI parallel): the called and the calling component operate at different speeds and out-of-date information is accessed.
Interface testing is challenging because problems may only show up in unusual situations, and may show up in the component that is not at fault. Static techniques (e.g. inspections) may be more effective – make sure all those involved in the interfaces are there, to clear up bad assumptions about what components do.
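Interface misuse with same-typed parameters is easy to demonstrate. A hypothetical sketch – the `area` functions and the `Width`/`Height` wrapper types are invented for illustration – showing why the compiler cannot catch swapped arguments, and one defensive design that makes it able to:

```java
public class InterfaceMisuseDemo {
    // Hypothetical called component: area of a window, width then height.
    // Both parameters are int, so nothing detects swapped arguments --
    // the classic interface misuse.
    static int area(int width, int height) {
        return width * height;
    }

    // One defence: a distinct value type per parameter makes the compiler
    // reject calls with the arguments in the wrong order.
    record Width(int w) {}
    record Height(int h) {}
    static int safeArea(Width width, Height height) {
        return width.w() * height.h();
    }

    public static void main(String[] args) {
        int w = 640, h = 480;
        // Swapped by mistake: compiles and runs; here the product is the
        // same, so the misuse would go completely unnoticed.
        System.out.println(area(h, w));
        System.out.println(safeArea(new Width(w), new Height(h)));
        // safeArea(new Height(h), new Width(w)) would not compile.
    }
}
```

This also illustrates why the slide recommends static techniques: a test can only catch the misuse if the swapped call happens to produce a visibly wrong result, whereas an inspection (or a stricter interface) catches it directly.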

Stress testing
Exercises the system up to and beyond its maximum design load; stressing the system often causes defects to come to light. Stress testing also tests failure behaviour: systems should not fail catastrophically, so stress testing checks for unacceptable loss of service or data. Particularly relevant to distributed systems, which can exhibit severe degradation as a network becomes overloaded.
… see when it becomes no longer usable – test whether it handles the expected load (probably needs some kind of simulation to simulate all of those users!)

20.3 Object-oriented testing The components to be tested are object classes that are instantiated as objects Larger grain than individual functions No obvious ‘top’ to the system for top-down integration and testing

Testing levels
Testing operations associated with objects … like testing non-OO functions (except functions tend to be smaller in OO); regular black-box or white-box testing can be done.
Testing object classes … equivalence-class testing must be enhanced to consider related OPERATION SEQUENCES – a client program may typically use several methods in a row (structural testing also needs changes – section 20.3.1).
Testing clusters of cooperating objects … scenario-based testing (section 20.3.2).
Testing the complete OO system … same as in other systems.

Object class testing Complete test coverage of a class involves Testing all operations associated with an object Setting and interrogating all object attributes Exercising the object in all possible states Inheritance makes it more difficult to design object class tests as the information to be tested is not localized … all operations done with the object in all possible states (impossible)

Weather station object interface
Test cases are needed for all operations. Use a state model to identify state transitions for testing. Examples of testing sequences:
Shutdown → Waiting → Shutdown
Waiting → Calibrating → Testing → Transmitting → Waiting
Waiting → Collecting → Waiting → Summarising → Transmitting → Waiting
… at least go on every arc in the state model (but that is probably not enough – we can probably determine "equivalence classes" based on the attributes accessed, as opposed to equivalence classes based on values of variables in regular testing)
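The testing sequences above can be checked mechanically against an encoding of the state model. The transition table below is an assumption reconstructed from the example sequences only; the actual state diagram may contain more arcs:

```java
import java.util.Map;
import java.util.Set;

public class WeatherStationStates {
    // Assumed transition table for the weather station state model,
    // reconstructed from the example testing sequences.
    static final Map<String, Set<String>> TRANSITIONS = Map.of(
        "Shutdown",     Set.of("Waiting"),
        "Waiting",      Set.of("Shutdown", "Calibrating", "Collecting", "Summarising"),
        "Calibrating",  Set.of("Testing"),
        "Testing",      Set.of("Transmitting"),
        "Transmitting", Set.of("Waiting"),
        "Collecting",   Set.of("Waiting"),
        "Summarising",  Set.of("Transmitting")
    );

    // Checks that a testing sequence only follows arcs in the state model.
    static boolean isValidSequence(String... states) {
        for (int i = 0; i + 1 < states.length; i++) {
            Set<String> next = TRANSITIONS.get(states[i]);
            if (next == null || !next.contains(states[i + 1])) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        // The three sequences from the slide are all valid.
        System.out.println(isValidSequence("Shutdown", "Waiting", "Shutdown"));
        System.out.println(isValidSequence("Waiting", "Calibrating", "Testing",
                "Transmitting", "Waiting"));
        System.out.println(isValidSequence("Waiting", "Collecting", "Waiting",
                "Summarising", "Transmitting", "Waiting"));
        // A sequence using an arc not in the model is rejected.
        System.out.println(isValidSequence("Shutdown", "Testing"));
    }
}
```

A table like this also makes it easy to check the "go on every arc" criterion: collect the arcs traversed by the test sequences and compare against the table's entries.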

Weather station state diagram

Object integration Levels of integration are less distinct in object-oriented systems Cluster testing is concerned with integrating and testing clusters of cooperating objects Identify clusters using knowledge of the operation of objects and the system features that are implemented by these clusters …. There’s not really an issue of top-down vs bottom-up testing

Approaches to cluster testing
Use-case or scenario testing: testing is based on user interactions with the system – tests are organized around the use-cases (or scenarios) defined for the system. Has the advantage that it tests system features as experienced by users.
Thread testing: tests the system's response to events as processing threads through the system; an appropriate match for event-driven systems.
Object interaction testing: tests sequences of object interactions that stop when an object operation does not call on services from another object.

Scenario-based testing
Identify scenarios from use-cases and supplement these with interaction (sequence) diagrams that show the objects involved in the scenario. Consider the scenario in the weather station system where a report is generated. … A useful approach – test the most likely / most important scenarios first.

Collect weather data … This would then become a test case … We want to test enough scenarios that all methods are tested.

20.4 Testing workbenches
Testing is an expensive process phase. Testing workbenches provide a range of tools to reduce the time required and the total testing costs. Most testing workbenches are open systems because testing needs are organization-specific – the tools need to be adaptable to your organization and project. It is difficult to integrate a testing workbench with closed design and analysis workbenches.

A testing workbench
Tools in a testing workbench:
Test manager – keeps track of test data, expected results, and the program facilities tested.
Test data generator – may generate data randomly, by pulling from a DB, or … based on the code.
Oracle – generates predictions of expected test results.
File comparator – compares test results with expected test results (particularly useful for regression testing, where the previous version's results make accurate expected results).
Report generator – shows the results of testing.
Dynamic analyser – shows which statements have been run, and how many times, during testing – useful for determining coverage of the program by the test cases.
Simulator – simulates multiple users or … other things.