Testing the Programs
In this part we look at: classification of faults; the purpose of testing; unit testing; integration testing strategies; when to stop testing.

Concept change!! Many programmers view testing as a demonstration that their program performs properly. The idea of demonstrating correctness is really the reverse of what testing is all about. We test a program to demonstrate the existence of a fault! Because our goal is to discover faults, we consider a test successful only when a fault is discovered or a failure occurs as a result of our testing procedures.

Classification of faults In an ideal world, we would produce programs where everything works flawlessly every time. Unfortunately, this is not the case! We say that our software has failed when its behaviour deviates from that described in the requirements. First we identify the fault, i.e. determine which fault or faults caused the failure. Next we correct the fault by making changes to the system so that the fault is removed.

Classification of faults Why do we classify faults? –In order to improve our development process! We would like to match a fault to a specific area of our development process. –In other words, we would like our classification scheme to be orthogonal.

IBM Orthogonal Defect Classification
–Function: fault that affects capability, end-user interfaces, product interfaces, interface with hardware architecture, or global data structure
–Interface: fault in interacting with other components or drivers via calls, macros, control blocks or parameter lists
–Checking: fault in program logic that fails to validate data and values properly before they are used
–Assignment: fault in data structure or code block initialization

IBM Orthogonal Defect Classification (continued)
–Timing/serialization: fault that involves timing of shared and real-time resources
–Build/package/merge: fault that occurs because of problems in repositories, management changes, or version control
–Documentation: fault that affects publications and maintenance notes
–Algorithm: fault involving efficiency or correctness of algorithm or data structure, but not design

Hewlett-Packard fault classification

Testing steps

Views of Test Objects Black (opaque) box –In this type of testing, the test object is viewed from the outside and its contents are unknown. –Testing consists of feeding input to the object and noting what output is produced. –The test's goal is to be sure that every kind of input is submitted and that the observed output matches the expected output.

Black Box (example) Suppose we have a component that accepts as input the three numbers a, b, c and outputs the two roots of the equation ax² + bx + c = 0, or the message “no real roots”. It is impossible to test the component by submitting every possible triple of numbers (a, b, c). Representative cases may be chosen so that we have all combinations of positive, negative and zero for each of a, b, and c. Additionally, we may select values that ensure that the discriminant, b² – 4ac, is positive, zero, or negative.

Black Box (example) If the tests reveal no faults, we have no guarantee that the component is fault- free! There are other reasons why failure may occur. For some components, it is impossible to generate a set of test cases to demonstrate correct functionality for all cases.
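The quadratic-equation example can be sketched as a small black-box test, with one representative case per discriminant class. The `roots` function here is a hypothetical stand-in for the component under test; its name and interface are assumptions for illustration, not part of the original slides.

```python
import math

def roots(a, b, c):
    """Hypothetical component under test: solve ax^2 + bx + c = 0."""
    if a == 0:
        raise ValueError("not a quadratic")
    d = b * b - 4 * a * c       # the discriminant decides the output class
    if d < 0:
        return "no real roots"
    return ((-b + math.sqrt(d)) / (2 * a), (-b - math.sqrt(d)) / (2 * a))

# Black-box cases chosen so the discriminant is positive, zero, or negative.
assert roots(1, -3, 2) == (2.0, 1.0)        # d > 0: two distinct roots
assert roots(1, 2, 1) == (-1.0, -1.0)       # d = 0: one repeated root
assert roots(1, 0, 1) == "no real roots"    # d < 0: no real roots
```

As the next slide notes, passing these cases gives no guarantee that the component is fault-free; each case only represents its class.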

White box testing White (transparent) box –In this type of testing, we use the structure of the test object to test in different ways. –For example, we can devise test cases that execute all the statements or all the control paths within the component(s). –Sometimes with many branches and loops it may be impractical to use this kind of approach.

Unit testing Our goal is to find faults in components. There are several ways to do this: Examining the code –Code walkthroughs –Code inspections Proving code correct Testing components

Examining the Code Code walkthroughs are an informal type of review. Your code and documentation are presented to a review team, and the team comments on their correctness. You lead and control the discussion. The focus is on the code, not the programmer.

Examining the Code A code inspection is similar to a walkthrough but is more formal. Here the review team checks the code and documentation against a prepared list of concerns. –For example, the team may examine the definition and use of data types and structures to see if their use is consistent with the design and with system standards and procedures. The team may also review algorithms and computations for their correctness and efficiency.

Proving code correct Proof techniques are not widely used. It is difficult to create proofs; these can sometimes be longer than the program itself! Additionally, customers require a demonstration that the program is working correctly. Whereas a proof tells us how a program will work in a hypothetical environment described by the design and requirements, testing gives us information about how the program works in its actual operating environment.

Testing components Choosing test cases: To test a component, we select input data and conditions and observe the output. A test point or test case is a particular choice of test data; a test is a finite collection of test cases. We create tests that can convince ourselves and our customers that the program works correctly, not only for the test cases but for all input. –We start by defining test objectives and design tests to meet each objective. –One objective can be that all statements should execute correctly; another can be that every function performed by the code is done correctly.

Testing components As seen before, we view the component as either a “white” or “black” box. –If we use “black” box testing, we supply representative inputs and compare the output with what was expected. –For example, with the quadratic equation seen earlier we can choose values for the coefficients that range over combinations of positive, zero and negative numbers. –Or select combinations based on their relative sizes, e.g. a > b > c, b > c > a, c > b > a, etc.

Testing components We can go further and select values based upon the discriminant. We even supply non-numeric input to determine the program's response. In total we have four mutually exclusive types of test input. We thus use the test objective to help us separate the input into equivalence classes.

Equivalence classes Every possible input belongs to one of the classes. That is, the classes cover the entire set of input data. No input datum belongs to more than one class. That is, the classes are disjoint. If the executing code demonstrates a fault when using a particular class member is used as input, then the same fault can be detected using any other member of the class as input. That is, any element of the class represents all elements of that class.

Equivalence classes It is not always easy or feasible to tell if the third restriction can be met, so it is usually rewritten to say: –if a class member is used to detect a fault, then the probability is high that the other elements in the class will reveal the same fault.
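The four mutually exclusive input classes for the quadratic component can be made explicit in code. This partitioning function is a sketch, assuming the component takes three values a, b, c; by construction every input falls into exactly one class, so the classes cover the input space and are disjoint.

```python
def equivalence_class(a, b, c):
    """Map a test input for the quadratic component to one of four
    mutually exclusive classes: non-numeric input, or one of the
    three possible signs of the discriminant."""
    for v in (a, b, c):
        if not isinstance(v, (int, float)) or isinstance(v, bool):
            return "non-numeric"
    d = b * b - 4 * a * c
    if d > 0:
        return "two real roots"
    if d == 0:
        return "one repeated root"
    return "no real roots"

# One representative per class stands in for every member of that class.
assert equivalence_class(1, -3, 2) == "two real roots"
assert equivalence_class(1, 2, 1) == "one repeated root"
assert equivalence_class(1, 0, 1) == "no real roots"
assert equivalence_class("x", 2, 1) == "non-numeric"
```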

Common Practice Usually “white” box and “black” box testing are combined. Suppose we have a component that expects a positive input value. Then, using “black” box testing, we can have a test case for each of the following: –a very large positive integer –a positive integer –a positive, fixed-point decimal –a number greater than 0 but less than 1 –a negative number –a non-numeric character
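The six black-box cases listed above can be written directly as a test. `accepts_positive` is a hypothetical validator standing in for the component that expects a positive input value.

```python
def accepts_positive(value):
    """Hypothetical component under test: accept only positive numbers."""
    if not isinstance(value, (int, float)) or isinstance(value, bool):
        return False                      # non-numeric input rejected
    return value > 0

# One black-box test case per input class from the slide:
assert accepts_positive(10**12)      # a very large positive integer
assert accepts_positive(7)           # a positive integer
assert accepts_positive(3.25)        # a positive, fixed-point decimal
assert accepts_positive(0.5)         # a number between 0 and 1
assert not accepts_positive(-4)      # a negative number
assert not accepts_positive("x")     # a non-numeric character
```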

Common Practice Using “white” box testing we can choose one or more of the following: –Statement testing: every statement in the component is executed at least once in some test. –Branch testing: for every decision point in the code, each branch is chosen at least once in some test. –Path testing: every distinct path through the code is executed at least once in some test.
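The three criteria can be illustrated on a toy component with two decision points; the function and its inputs are invented for illustration. Two test cases already achieve statement and branch coverage here, but all four distinct paths must be executed for path coverage.

```python
def classify(x, k):
    """Toy component with two decision points, giving four paths."""
    if x > k:
        sign = "above"
    else:
        sign = "not above"
    if x % 2 == 0:
        parity = "even"
    else:
        parity = "odd"
    return sign, parity

# Statement and branch testing: these two cases execute every statement
# and take each branch of each decision point at least once.
assert classify(5, 2) == ("above", "odd")
assert classify(2, 4) == ("not above", "even")
# Path testing: the remaining two of the four distinct paths.
assert classify(4, 2) == ("above", "even")
assert classify(1, 4) == ("not above", "odd")
```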

White box testing Statement testing –choose a value X > K that produces a positive result Branch testing –choose two test cases to traverse each branch of the decision points Path testing –four test cases are needed, one per distinct path

Integration testing When each component has been completed and tested, we can combine the components into a working system. This integration must be planned and coordinated so that, in the case of a failure, we can determine what may have caused it. Suppose we view the system as a hierarchy of components (shown on the following slide).

Integration testing

Bottom-up integration

Top-down integration

Big-bang integration

Sandwich integration

Comparison of integration strategies

When to Stop Testing?

Fault Seeding We intentionally insert or “seed” a known number of faults in a program. Then another member of the team locates as many faults as possible. The number of undiscovered seeded faults acts as an indicator of the total number of faults (seeded and unseeded) remaining in the program. We say: detected seeded faults / total seeded faults = detected nonseeded faults / total nonseeded faults.
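The seeding estimate above can be sketched in a few lines, under its central assumption: the team finds seeded and real faults at the same rate, so the fraction of seeded faults found estimates the fraction of real faults found.

```python
def estimate_total_faults(seeded, seeded_found, nonseeded_found):
    """Estimate the total number of real (non-seeded) faults, assuming
    seeded_found / seeded == nonseeded_found / total_nonseeded."""
    detection_rate = seeded_found / seeded
    return nonseeded_found / detection_rate

# If 50 of 100 seeded faults were found alongside 20 real faults, the
# detection rate is 0.5, so we estimate about 40 real faults in total
# (i.e. roughly 20 still undiscovered).
assert estimate_total_faults(100, 50, 20) == 40.0
```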

Fault Seeding Problems: It is assumed that the seeded faults are of the same kind and complexity as the actual faults in the program. –This is difficult to do, since we do not know what the typical faults are until we have found them. We can attempt to overcome this by basing the seeded faults on historical data about previous faults. –This, however, requires that we have built similar systems before.

Fault Seeding Solution Use two independent groups, Test Group 1 and Test Group 2. Let x be the number of faults detected by Group 1 and y the number detected by Group 2. Some faults will be detected by both groups, say q, such that q ≤ x and q ≤ y. Finally, let n be the total number of faults in the program, which we want to estimate. The effectiveness of each group is given by E1 = x/n and E2 = y/n.

Fault Seeding The group effectiveness measures the group's ability to detect faults from among a set of existing faults. If we assume that Group 1 is just as effective at finding faults in any part of the program as in any other part, –we can look at the ratio of faults found by Group 1 from the set of faults found by Group 2: –E1 = x/n = q/y –E2 = y/n = q/x –which gives n = (x × y)/q = q/(E1 × E2).
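The two-group estimate n = (x × y)/q reduces to one line of code; the example numbers below are invented for illustration.

```python
def estimate_faults_two_groups(x, y, q):
    """Estimate the total number of faults n from two independent
    test groups: Group 1 found x faults, Group 2 found y faults,
    and q faults were found by both.  n = x * y / q."""
    return x * y / q

# Group 1 found 25 faults, Group 2 found 30, and 15 were found by both,
# so we estimate 25 * 30 / 15 = 50 faults in total.
assert estimate_faults_two_groups(25, 30, 15) == 50.0
```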

Confidence in the Software Suppose we seed a program with S faults and claim that the code has only N actual faults. If we test until all S seeded faults have been found, as well as n non-seeded faults, then a confidence level can be calculated as: C = 1, if n > N; C = S/(S + N + 1), if n ≤ N.

Confidence in the Software With that approach we cannot predict the level of confidence until all the seeded faults are detected. Richards (1974) suggests a modification, where the confidence level can be estimated whether or not all the seeded faults have been located: C = 1, if n > N; C = C(S, s−1) / C(S+N+1, N+s), if n ≤ N, where s is the number of seeded faults detected so far and C(a, b) denotes the binomial coefficient “a choose b”.
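Richards's formula can be computed directly with `math.comb`. Note that when all seeded faults have been found (s = S), it reduces to the earlier expression S/(S + N + 1), which is a useful consistency check.

```python
from math import comb

def confidence(S, s, N, n):
    """Richards (1974) confidence estimate: S faults seeded, s of them
    found so far, n non-seeded faults found, N the claimed number of
    actual faults remaining."""
    if n > N:
        return 1.0
    return comb(S, s - 1) / comb(S + N + 1, N + s)

# With all 10 seeded faults found, no real faults found, and a claim of
# N = 0 actual faults, this reduces to S/(S + N + 1) = 10/11.
assert confidence(10, 10, 0, 0) == 10 / 11
# Finding more real faults than claimed (n > N) forces full confidence
# that the claim was wrong, i.e. C = 1.
assert confidence(10, 10, 0, 5) == 1.0
```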

Other stopping criteria We can use the test strategy to determine when to stop. –If we are doing statement, branch or path testing, we can track how many statements, branches or paths have yet to be executed, and gauge our progress in terms of those left to test. There are many tools that can calculate these coverage values for us.