White-Box Testing Techniques IV


White-Box Testing Techniques IV
Software Testing and Verification, Lecture 10
Prepared by Stephen M. Thebaut, Ph.D., University of Florida

White-Box Testing Topics

- Logic coverage (lecture I)
- Dataflow coverage (lecture II)
- Path conditions and symbolic evaluation (lecture III)
- Other white-box testing strategies (e.g., "fault-based testing") (lecture IV)

Other White-Box Testing Strategies

- Program instrumentation
- Boundary value analysis (revisited)
- Fault-based testing
  - Mutation analysis
  - Error seeding

Program Instrumentation

Allows for the measurement of white-box coverage during program execution. Code is inserted into the program to record the cumulative execution of statements, branches, du-paths, etc. Execution takes longer, and program timing may be altered.

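The idea can be sketched in a few lines of Python. This is a hand-instrumented toy, not the output of any real tool; the probe function `hit` and the statement labels `S1`–`S3` are hypothetical names chosen for illustration.

```python
# Minimal sketch of statement-level instrumentation: probes inserted by
# hand record how often each labeled statement executes.
coverage = {}  # statement label -> cumulative execution count

def hit(label):
    """Inserted probe: record one execution of the labeled statement."""
    coverage[label] = coverage.get(label, 0) + 1

def max_of_two(x, y):  # program under test, with probes inserted
    hit("S1")
    if x > y:
        hit("S2")
        return x
    hit("S3")
    return y

max_of_two(3, 1)
max_of_two(1, 3)
print(coverage)  # {'S1': 2, 'S2': 1, 'S3': 1}
```

A statement with a zero (or missing) count after the test run is one the test set never reached, which is exactly the coverage shortfall instrumentation is meant to expose. Note the extra probe calls are also why instrumented runs take longer.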
Boundary Value Analysis

(1) if (X < Y) then A
(2) else B
    end_if_else

Applies to both program control and data structures. Strategies are analogous to black-box boundary value analysis.

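For the two-line branch above, a boundary-value test set would exercise values on and immediately adjacent to the X == Y boundary of the predicate X < Y. A small sketch (the helper `branch` is a stand-in for the code fragment, not part of the slides):

```python
# White-box boundary value tests for the predicate "X < Y": pick points
# on the boundary (X == Y) and just on either side of it.
def branch(x, y):
    """Stand-in for the slide's fragment: branch A if X < Y, else B."""
    return "A" if x < y else "B"

cases = [(4, 5, "A"),   # just inside the boundary: X < Y, takes A
         (5, 5, "B"),   # on the boundary: X == Y, takes B
         (6, 5, "B")]   # just outside: X > Y, takes B
for x, y, expected in cases:
    assert branch(x, y) == expected
print("all boundary cases pass")
```

The on-boundary case (5, 5) is the one most likely to expose a common off-by-one fault such as writing `<=` where `<` was intended.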
Fault-Based Testing

Suppose a test case set reveals NO program errors – should you celebrate or mourn the event? Answer: it depends on whether you're the developer or the tester... Serious answer: it depends on the error-revealing capability of your test set. Mutation analysis attempts to measure test case set sufficiency.

Mutation Analysis Procedure

Generate a large number of "mutant" programs by replicating the original program except for one small change (e.g., change the "+" in line 17 to a "-", change the "<" in line 132 to a "<=", etc.). Compile and run each mutant program against the test set. (cont'd)

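The procedure can be sketched end-to-end in miniature. Everything below is an illustrative assumption: the one-line program under test, the two hand-written mutants, and the test sets stand in for what a real mutation tool would generate automatically.

```python
# Toy mutation-analysis run: two mutants of a one-line program, each
# differing from the original by a single operator change.
def original(a, b):
    return a + b

mutants = [lambda a, b: a - b,   # "+" mutated to "-"
           lambda a, b: a * b]   # "+" mutated to "*"

def kill_ratio(tests):
    """Fraction of mutants 'killed', i.e. distinguished from the
    original by at least one test case."""
    killed = sum(1 for m in mutants
                 if any(m(a, b) != original(a, b) for a, b in tests))
    return killed / len(mutants)

print(kill_ratio([(0, 0)]))  # weak test set: 0.0, both mutants survive
print(kill_ratio([(2, 3)]))  # stronger set: 1.0, both mutants killed
```

The contrast between the two test sets is the point of the analysis: the input (0, 0) satisfies the original and both mutants alike, so it tells us nothing, while (2, 3) separates all three programs.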
Mutation Analysis Procedure (cont'd)

Compare the ratio of mutants "killed" (i.e., revealed) by the test set to the number of "survivors": the higher the "kill ratio," the better the test set. What are some of the potential drawbacks of this approach?

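One well-known drawback worth illustrating: some mutants are semantically equivalent to the original program, so no test case can ever kill them, and they depress the kill ratio through no fault of the test set. A minimal example (the functions are hypothetical, chosen only to exhibit equivalence):

```python
# An "equivalent mutant": the mutation changes the text of the program
# but not its behavior, so it can never be killed.
def original(x):
    return x * 2

def mutant(x):
    # "*" mutated to "+", but x + x == x * 2 for every number,
    # so the mutant is indistinguishable from the original.
    return x + x

tests = [-3, 0, 7, 2.5]
assert all(mutant(t) == original(t) for t in tests)
print("mutant survives every test: it is equivalent to the original")
```

Detecting equivalent mutants is undecidable in general, which is why real mutation tools report them as survivors and leave the human to triage; the cost of compiling and running the many non-equivalent mutants is the other frequently cited drawback.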
Error Seeding

A similar approach, error seeding, has been used to estimate the number of errors remaining in a program. But such metrics are inherently problematic. For example, how many "errors" are in the following Quick Sort program?

QSORT(X,N)
Return(X)
END

Error Seeding Procedure

Before testing, "seed" the program with a number of "typical errors," keeping careful track of the changes made. After a period of testing, compare the number of seeded and non-seeded errors detected. (cont'd)

Error Seeding Procedure (cont'd)

If N is the total number of errors seeded, n is the number of seeded errors detected, and x is the number of non-seeded errors detected, then the number of remaining (non-seeded) errors in the program is about x(N/n − 1). What assumptions underlie this formula? Consider its derivation…

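Plugging made-up numbers into the formula makes it concrete. Suppose we seed N = 20 errors, and testing then uncovers n = 15 of them along with x = 6 genuine (non-seeded) errors:

```python
# The error-seeding estimate x * (N/n - 1); the specific N, n, x values
# below are invented for illustration.
def remaining_estimate(N, n, x):
    """Estimated non-seeded errors still in the program."""
    return x * (N / n - 1)

# Seed 20 errors; testing finds 15 of them plus 6 real errors.
print(round(remaining_estimate(N=20, n=15, x=6), 2))  # 2.0
```

Intuitively: the test effort found 15/20 = 75% of the seeded errors, so the 6 real errors found are presumed to be 75% of the real total of 8, leaving about 2 undetected. This leans entirely on the assumption, made explicit in the derivation below, that seeded and genuine errors are equally hard to find.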
Derivation of the Error Seeding Formula

Let X be the total number of NON-SEEDED errors in the program. Assuming seeded and non-seeded errors are equally easy/hard to detect, after some period of testing x : n ≈ X : N. So X ≈ xN/n, and therefore X − x ≈ xN/n − x = x(N/n − 1), as claimed.
