White-Box Testing Techniques IV

White-Box Testing Techniques IV
Software Testing and Verification, Lecture 10
Prepared by Stephen M. Thebaut, Ph.D., University of Florida

White-Box Testing Topics
- Logic coverage (lecture I)
- Dataflow coverage (lecture II)
- Path conditions and symbolic evaluation (lecture III)
- Other white-box testing strategies (e.g., "fault-based testing") (lecture IV)

Other white-box testing strategies
- Program instrumentation
- Boundary value analysis (revisited)
- Fault-based testing:
  - Mutation analysis
  - Error seeding

Program Instrumentation
- Allows for the measurement of white-box coverage during program execution.
- Code is inserted into a program to record the cumulative execution of statements, branches, du-paths, etc.
- Execution takes longer and program timing may be altered.
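As an illustration, here is a minimal sketch (hypothetical code, not from the lecture) of what manually instrumented code might look like: probe statements are inserted on each branch to accumulate execution counts.

    from collections import Counter

    coverage = Counter()   # cumulative branch-execution counts

    def classify(x, y):
        if x < y:
            coverage["branch_1_true"] += 1    # probe inserted by the instrumenter
            return "A"
        else:
            coverage["branch_1_false"] += 1   # probe inserted by the instrumenter
            return "B"

    classify(1, 2)
    classify(5, 2)
    print(coverage)   # both branches covered; counts accumulate as more tests run

In practice a coverage tool inserts such probes automatically, which is why instrumented runs are slower and may perturb timing-sensitive behavior.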

Boundary Value Analysis

    (1) if (X<Y) then
            A
    (2) else
            B
        end_if_else

- Applies to both program control and data structures.
- Strategies are analogous to black-box boundary value analysis.
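A minimal sketch (made-up test values) of white-box boundary value tests for the predicate X<Y above: the chosen inputs sit just below, on, and just above the boundary X = Y.

    def branch(x, y):
        # Mirrors the fragment above: take A when X < Y, otherwise B.
        return "A" if x < y else "B"

    assert branch(4, 5) == "A"   # X = Y - 1  -> true branch
    assert branch(5, 5) == "B"   # X = Y      -> the boundary itself (false branch)
    assert branch(6, 5) == "B"   # X = Y + 1  -> false branch
    print("boundary tests passed")

An off-by-one fault such as writing X<=Y instead of X<Y is revealed only by the X = Y case, which is why the boundary point itself must appear in the test set.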

Fault-Based Testing
- Suppose a test case set reveals NO program errors – should you celebrate or mourn the event?
- Answer: it depends on whether you're the developer or the tester...
- Serious answer: it depends on the error-revealing capability of your test set.
- Mutation Analysis attempts to measure test case set sufficiency.

Mutation Analysis Procedure
1. Generate a large number of "mutant" programs by replicating the original program except for one small change (e.g., change the "+" in line 17 to a "-"; change the "<" in line 132 to a "<="; etc.).
2. Compile and run each mutant program against the test set. (cont'd)

Mutation Analysis Procedure (cont'd)
3. Compute the ratio of mutants "killed" (i.e., revealed) by the test set to the number of "survivors": the higher the "kill ratio," the better the test set.
Question: What are some of the potential drawbacks of this approach?
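A minimal sketch (hypothetical program and test set, not from the lecture) of the procedure: each mutant differs from the original by one operator, and the kill ratio is the fraction of mutants the test set distinguishes from the original.

    def original(x, y):
        return x + y if x < y else x - y

    # Hand-written mutants, each containing exactly one small change.
    def mutant_plus_to_minus(x, y):      # "+" changed to "-"
        return x - y if x < y else x - y

    def mutant_lt_to_le(x, y):           # "<" changed to "<="
        return x + y if x <= y else x - y

    mutants = [mutant_plus_to_minus, mutant_lt_to_le]

    # The test set being evaluated: (inputs, expected output of the original).
    test_set = [((1, 2), 3), ((5, 2), 3)]

    killed = sum(
        any(mutant(*args) != expected for args, expected in test_set)
        for mutant in mutants
    )
    print(f"kill ratio: {killed}/{len(mutants)}")   # 1/2 for this test set

    # The second mutant survives because no test case has x == y;
    # adding ((2, 2), 0) to the test set would kill it as well.

Surviving mutants point to inputs the test set never exercises; note, though, that a mutant which is semantically equivalent to the original can never be killed, which is one practical drawback of the approach.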

Error Seeding
- A similar approach, Error Seeding, has been used to estimate the "number of errors" remaining in a program.
- But such metrics are inherently problematic. For example, how many "errors" are in the following Quick Sort program?

    QSORT(X,N)
    Return(X)
    END

Error Seeding Procedure
1. Before testing, "seed" the program with a number of "typical errors," keeping careful track of the changes made.
2. After a period of testing, compare the number of seeded and non-seeded errors detected. (cont'd)

Error Seeding Procedure (cont'd)
3. If N is the total number of errors seeded, n is the number of seeded errors detected, and x is the number of non-seeded errors detected, then the number of remaining (non-seeded) errors in the program is approximately x(N/n – 1).
Question: What assumptions underlie this formula? Consider its derivation…
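A worked example with made-up numbers:

    def remaining_errors(N, n, x):
        # Estimated non-seeded errors still undetected: x * (N/n - 1)
        return x * (N / n - 1)

    # 20 errors seeded; testing found 16 of them plus 8 genuine (non-seeded) errors.
    print(remaining_errors(N=20, n=16, x=8))   # -> 2.0 errors estimated to remain

Intuitively, testing found 16/20 = 80% of the seeded errors, so the 8 genuine errors found are taken to be 80% of a genuine total of 10, leaving about 2 undetected.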

Derivation of Error Seeding Formula
- Let X be the total number of NON-SEEDED errors in the program.
- Assuming seeded and non-seeded errors are equally easy/hard to detect, after some period of testing, x:n ≈ X:N.
- So X ≈ xN/n, and therefore X – x ≈ xN/n – x = x(N/n – 1), as claimed.
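The key assumption is the equal detectability of seeded and genuine errors. A small simulation sketch (made-up parameters) shows that when both kinds of errors share a common detection probability, the estimate tracks the true number of undetected genuine errors:

    import random

    random.seed(1)
    p_detect = 0.7              # common detection probability (the key assumption)
    N_seeded, X_real = 50, 30   # seeded errors, and real errors (unknown in practice)

    n = sum(random.random() < p_detect for _ in range(N_seeded))  # seeded errors found
    x = sum(random.random() < p_detect for _ in range(X_real))    # real errors found

    estimate = x * (N_seeded / n - 1)
    print(f"found {n} seeded and {x} real errors; "
          f"estimate {estimate:.1f} vs. actual remaining {X_real - x}")

If seeded errors were systematically easier to find than genuine ones (as artificial seeds often are), n/N would overstate the detection rate and the formula would underestimate the errors remaining.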
