Testing Techniques Mary Jean Harrold, Georgia Institute of Technology

Outline: Introduction; Black-box coverage measures (equivalence partitioning, boundary-value analysis); White-box coverage measures (control flow, condition, data-flow, fault-based); Integration techniques; Regression testing techniques

Verification and Validation (V&V) Verification answers the question: Are we building the product right? It refers to the testing we do to assure that the product resulting from one phase is a correct translation of the product resulting from the previous phase. For example, we attempt to verify that the implementation is consistent with the design. Validation answers the question: Are we building the right product? It refers to the testing we do to assure that the resulting software is traceable to customer requirements.

Software Development Phases and Testing. Requirements Analysis Phase: develop test plan and system tests; perform technical review. Design Phase: develop integration tests; perform technical review. Implementation Phase: develop and run unit tests; perform technical review. Integration Phase: run integration tests; run system tests. Maintenance Phase: run regression tests.

Software Development Phases and Testing (Graphical View): requirements analysis pairs with system testing; high-level design with integration testing; low-level design and coding with unit testing; delivery with acceptance testing; maintenance with regression testing.

Some Important Facts. Fact 1: It's cheaper to find and fix faults at the unit level -- most faults are incorrect computations within subroutines. Fact 2: It's good to do as much testing as early as you can -- you can't test quality in, you have to build it in.

Testing Terminology P is a program with input domain D, and S is the specification for P. P is correct on d in D if P(d) = S(d). A d in D is called a test case; a finite subset of D is called a test suite or test set.

Testing Terminology* Failure: an erroneous result - incorrect outputs/responses for given inputs/stimuli, or failure to meet real-time constraints. Error: an incorrect concept; may cause failures if not corrected. Fault: the cause of one or more failures discovered after release. *Additional definitions in the "Testing Definitions" handout.

Testing Terminology IMPORTANT: Fault doesn't imply failure - a program can be coincidentally correct in that it executes the fault but does not fail. SQUARE(z) // finds square of z: y = 2 * z; print y. Fault: line "y = 2 * z" should be "y = z * z". Coincidentally correct: for input z = 2 (and z = 0), where 2 * z = z * z. Failure: for all other inputs (e.g., z = 3 prints 6 instead of 9).

Coverage Criteria. A test selection (coverage) criterion C is a rule for selecting the d in D to place in T. We want a C that gives test suites that guarantee correctness; we settle for a C that gives test suites that improve confidence. Limitation of coverage criteria: achieving coverage doesn't show correctness. Types of criteria: black-box (specification, state) and white-box (control flow, condition, data-flow, fault-based).

Coverage Criteria. A coverage criterion C is used in two ways: 1. To measure the quality of a test suite T (i.e., to help us evaluate T): the adequacy score is the percentage of "coverable" items identified using C that are covered by T. 2. To guide the selection of a C-adequate test suite T (i.e., to tell us when to stop testing): adequacy requires that some percentage of the "coverable" items be covered by tests in T.
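
A small worked example of an adequacy score (the numbers are illustrative, not from the slides): if C identifies 20 coverable branches in the program and the tests in T exercise 16 of them, T's adequacy score is 16/20 = 80%; a 100% C-adequate suite would need tests for the remaining 4 branches, or an argument that they are infeasible.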

Black-Box Testing Selection of the test suite is based on some aspect (such as requirements, specification, function) other than the code. We won't discuss it further.

White-Box Testing Selection of the test suite is based on some aspect of the code. We'll consider several examples: control flow (statement, branch, basis path, path), condition (simple, multiple), loop, dataflow (all-uses, all-du-paths), and fault-based (mutation).

Some Terminology Test Requirements: those aspects of the program that must be covered according to the coverage criterion. Test Specification: constraints on the input that will cause a test requirement to be satisfied. Test Case: an input that satisfies a test specification; a Test Suite is a set of such test cases.

Sum: Example Program
1. read i
2. read j
3. sum = 0
4. while (i > 0) and (i <= 10) do
5.   if (j > 0)
6.     sum = sum + j
     endif
7.   i = i + 1
8.   read j
   endwhile
9. print sum
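
The listing above is pseudocode; below is a minimal runnable C sketch of it (the function name sum_prog, the array-based read_j helper, and the main driver are illustrative additions, not part of the slides). The slide's statement numbers appear as comments so the coverage discussion that follows can be traced in the code.

#include <stdio.h>

/* Stand-in for "read j": returns successive values from a test-supplied
   array instead of reading from the user (one value is consumed before
   the loop and one at the end of each iteration). */
static const int *j_input;
static int j_index;
static int read_j(void) { return j_input[j_index++]; }

int sum_prog(int i, const int *js)       /* 1. read i (passed as argument) */
{
    int sum, j;
    j_input = js;
    j_index = 0;
    j = read_j();                        /* 2. read j          */
    sum = 0;                             /* 3. sum = 0         */
    while (i > 0 && i <= 10) {           /* 4. while ... do    */
        if (j > 0)                       /* 5. if (j > 0)      */
            sum = sum + j;               /* 6. sum = sum + j   */
        i = i + 1;                       /* 7. i = i + 1       */
        j = read_j();                    /* 8. read j          */
    }
    return sum;                          /* 9. print sum       */
}

int main(void)
{
    int js[] = {4, 10};                  /* j1 = 4, j2 = 10 */
    printf("sum = %d (expected 4)\n", sum_prog(10, js));
    return 0;
}

Statement coverage of a test like this can then be measured with standard tooling, for example by compiling with gcc --coverage, running the program, and running gcov on the source file.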

White-Box Testing Statement Coverage. In general: test requirements are the statements in the program; test specifications are constraints on inputs that cause the statements to be executed; test cases are inputs that satisfy the constraints in the test specifications. For Sum: test requirements: statements 1-9; test specification: 0 < i <= 10 and j > 0 at least once; test suite: i = 10, j1 = 4, j2 = 10; expect sum = 4.

White-Box Testing Branch Coverage. Test requirements: branches in the program; test specifications: constraints on inputs that cause the branches to be executed; test cases: inputs that satisfy the constraints in the test specifications. For branch coverage, use a control-flow graph (CFG): nodes represent statements (or sequences of statements) in the program; edges represent flow of control between statements.

Sum: Control-Flow Graph. Nodes: 1-2-3, 4, 5, 6, 7-8, 9. Edges: (1-2-3, 4); (4, 5) labeled T; (4, 9) labeled F; (5, 6) labeled T; (5, 7-8) labeled F; (6, 7-8); (7-8, 4). Some CFG edges are labeled (e.g., (4,5) and (5,7-8)); some CFG edges are backedges (e.g., (7-8, 4)).

White-Box Testing Branch Coverage. For Sum: test requirements: 4T, 4F, 5T, 5F. Test specifications (examples): for 4T, 0 < i <= 10 (the j's can be any integers); for 5F, 0 < i <= 10 and at least one of the (used) j's <= 0. Test suite: i = 10, j1 = 4, j2 = 10 (expect sum = 4); i = 10, j1 = -3, j2 = 5 (expect sum = 0).

White-Box Testing Basis-Path Coverage. Test requirements: a set of basis paths in the program; test specifications: constraints on inputs that cause these basis paths to be executed; test cases: inputs that satisfy the constraints in the test specifications. For basis-path coverage we need the cyclomatic complexity, which defines the maximum number of independent paths in the program.

Cyclomatic Complexity; Basis Paths. Cyclomatic complexity (CC) can be computed in three ways: 1. e - n + 2p, where e is the number of edges in the CFG, n is the number of nodes, and p is the number of connected components; 2. the number of predicates + 1, for structured programs; 3. the number of regions in the planar drawing of the CFG. CC is an upper bound on the number of basis paths.

Sum: Cyclomatic Complexity and Basis Paths. CC by the three methods: 1. e = 7, n = 6, p = 1, so 7 - 6 + 2 * 1 = 3; 2. two predicates (nodes 4 and 5), so 2 + 1 = 3; 3. three regions: one inside 5, 6, 7-8; one inside 4, 5, 7-8; one outside the graph.

Sum: Cyclomatic Complexity and Basis Paths. Basis paths: there can be many sets, but no set can be bigger than CC. Two example sets are: 1. {1-2-3, 4, 9}; {1-2-3, 4, 5, 6, 7-8, 4, 9}; {1-2-3, 4, 5, 7-8, 4, 9}. 2. {1-2-3, 4, 5, 6, 7-8, 4, 9}.

Sum2: Example Program
main( ) {
3.   while (i <= 5) {
4.     scanf("%d", &j);
5.     if (j < 0)
6.       continue;
7.     sum = sum + j;
8.     if (sum > 10)
9.       break;
10.    i = i + 1;
     }
11.  printf("sum is %d", sum);
}
(The CFG groups the statements into nodes 1-2, 3, 4-5, 6, 7-8, 9, 10, and 11, with branches at statements 3, 5, and 8.)

White-Box Testing Basis-Path Coverage. For Sum: test requirements: a set of basis paths; test specifications: for example, (you complete); test suite: (you complete).

White-Box Testing Condition Coverage. Test requirements: each condition must evaluate to T and to F under test; test specifications: constraints on inputs that cause each condition to be true and each condition to be false; test cases: inputs that satisfy the constraints in the test specifications.

Sum: Example Program (Condition Coverage). For Sum: test requirements: in statement 4, i > 0 must be T and F, and i <= 10 must be T and F; in statement 5, j > 0 must be T and F. Test specifications: (you complete). Test suite: (you complete).

White-Box Testing Multiple Condition Coverage. Test requirements: every possible combination of truth values of the conditions must be exercised; test specifications: constraints on inputs that cause the requirements to hold; test cases: inputs that satisfy the constraints in the test specifications.

Sum: Example Program (Multiple Condition Coverage). For Sum: test requirements: in statement 4, the conditions (i > 0, i <= 10) must take the combinations TT, TF, FT, FF; in statement 5, the condition must be T and F. Test specifications: (you complete). Test suite: (you complete).
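
A worked set of input constraints for these requirements (an illustrative completion of the slide's exercise): for statement 4, TT needs 0 < i <= 10 (e.g., i = 5); TF needs i > 0 and i > 10 (e.g., i = 20); FT needs i <= 0 (e.g., i = 0, where i <= 10 still holds); FF would need i <= 0 and i > 10 at the same time, which is infeasible, so at most three of the four combinations can be covered. For statement 5, j > 0 must be made true (e.g., j = 3) and false (e.g., j = -3) on some iteration, which in turn requires 0 < i <= 10 so that the loop body is reached.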

White-Box Testing Loop Coverage. Test requirements: skip the loop; once through the loop; twice through the loop; maximum number of times through the loop. Test specifications: constraints on inputs that cause these loop iteration counts to be executed; test cases: inputs that satisfy the constraints in the test specifications. For Sum: (you complete).

White-Box Testing Data-flow Coverage: All Uses. Test requirements: sets of definition-use pairs (du-pairs) with respect to a variable v: a definition of v is a reference to v where the value of v is changed (e.g., an assignment to v, or input of a value into v); a use of v is a reference to v where the value of v is fetched but not changed (e.g., v on the right-hand side of an assignment, or v being output); a definition-clear subpath for a definition d of v and a use u of v is a subpath in the CFG from d to u on which v is not redefined. Test specifications and test cases follow as in the previous criteria.

Definition-Clear Paths; All-Uses Coverage. For Sum (CFG nodes annotated with definitions and uses): node 1-2-3: defs of i, j, sum; node 4: uses of i; node 5: use of j; node 6: uses of sum and j, def of sum; node 7-8: use of i, defs of i and j; node 9: use of sum. Def-use pairs for i: (1, (4,5)), (1, (4,9)), (1, 7), (7, (4,5)), (7, (4,9)).
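
Continuing the example (an illustrative completion; the slide lists only the pairs for i): the du-pairs for j are (2, 5), (2, 6), (8, 5), and (8, 6), since j is defined at statements 2 and 8 and used at 5 and 6; the du-pairs for sum are (3, 6), (3, 9), (6, 6), and (6, 9), since sum is defined at 3 and 6 and used at 6 and 9 (the pair (6, 6) pairs the definition in one iteration with the use in the next). All-uses coverage requires a test suite that exercises a definition-clear subpath for every such pair.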

Definition-Clear Paths; All-Uses Coverage (Sum with an added output statement 6a)
1. read i
2. read j
3. sum = 0
4. while (i > 0) and (i <= 10) do
5.   if (j > 0)
6.     sum = sum + j
6a.    print sum
     endif
7.   i = i + 1
8.   read j
   endwhile
9. print sum
Annotations for sum: defs at statements 3 and 6; uses at 6, 6a, and 9.

White-Box Testing Mutation Analysis/Testing. Major premise: the quality of a test set is related to the ability of that test set to differentiate the program being tested from a set of marginally different alternative programs. Differentiating programs: a test case differentiates two programs if it causes the two programs to produce different results.

White-Box Testing Mutation Analysis/Testing -- Terminology. Mutate: make a small syntactic change. Base program: the Sum listing shown above (statements 1-9, with 6a. print sum).

White-Box Testing Mutation Analysis/Testing -- Terminology. Mutate: make a small syntactic change; here statement 6 becomes sum = sum - j (the rest of the program is unchanged).

White-Box Testing Mutation Analysis/Testing -- Terminology. Mutate: make a small syntactic change; here statement 6 becomes sum = sum * j.

White-Box Testing Mutation Analysis/Testing -- Terminology. Mutate: make a small syntactic change; here statement 6 becomes sum = sum + i.

White-Box Testing Mutation Analysis/Testing -- Terminology. Mutate: make a small syntactic change; the mutant shown differs from the original at statement 4 (i < 10 instead of i <= 10) and at statement 6 (sum + i instead of sum + j).

White-Box Testing Mutation Analysis/Testing -- Terminology. Mutate: make a small syntactic change; the mutant shown differs from the original at statement 4 (i > 10 instead of i <= 10) and at statement 6 (sum + i instead of sum + j). Mutation: the changed statement. Mutant: the program with a mutated statement.

Sum: Example Mutations (for the Sum program above). 1. SVR -- scalar variable replacement: replace each occurrence of a variable v with all other variables in scope; e.g., change statement 1 to read j or to read sum. 2. AOR -- arithmetic operator replacement: replace each occurrence of an arithmetic operator with all possibilities; e.g., change statement 6 to sum = sum - j, etc.
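
A minimal sketch of how a single AOR mutant of Sum could be exercised (the compile-time switch MUTANT_AOR_6, the helper arrangement, and the killing test are illustrative, not from the slides):

#include <stdio.h>

/* Sum with an optional AOR mutant at statement 6 (+ replaced by -). */
int sum_prog(int i, const int *js)
{
    int k = 0, sum = 0;
    int j = js[k++];                     /* 2. read j   */
    while (i > 0 && i <= 10) {           /* 4.          */
        if (j > 0)                       /* 5.          */
#ifdef MUTANT_AOR_6
            sum = sum - j;               /* mutated 6   */
#else
            sum = sum + j;               /* original 6  */
#endif
        i = i + 1;                       /* 7.          */
        j = js[k++];                     /* 8. read j   */
    }
    return sum;                          /* 9.          */
}

int main(void)
{
    int js[] = {4, 10};
    int got = sum_prog(10, js);
    /* The original returns 4; the mutant returns -4, so this test kills it. */
    printf("sum = %d\n", got);
    return got == 4 ? 0 : 1;
}

Compiling once without and once with -DMUTANT_AOR_6 and comparing the results is the essence of mutation analysis: a suite's mutation score is the fraction of (non-equivalent) mutants it kills.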

Sum3: Example Program. We now have three functions: Main, Sum, and Add.
int Sum(int i, int j)
   int sum
1. sum = 0
2. while (i > 0) and (i <= 10) do
3.   if (j > 0)
4.     sum = Add(sum, j)
     endif
7.   i = Add(i, 1)
8.   read j
   endwhile
9. return sum
How do we develop and test a system with many modules? Stubs need to simulate the modules used by the current module; drivers need to simulate the modules that use the current module.
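
A minimal C sketch of the stub and driver idea for Sum3 (all names and the stub's behavior are illustrative): to test Sum before the real Add exists, Add is replaced by a stub; to exercise Sum before Main exists, a driver plays the role of its caller.

#include <stdio.h>

/* Stub simulating the not-yet-implemented Add module. */
int Add(int a, int b)
{
    printf("stub: Add(%d, %d)\n", a, b);  /* log the call for inspection */
    return a + b;                         /* simplest plausible behavior  */
}

/* Module under test: Sum3's Sum, with the "read j" values supplied by the caller. */
int Sum(int i, const int *js)
{
    int k = 0, sum = 0;                   /* 1. sum = 0            */
    int j = js[k++];
    while (i > 0 && i <= 10) {            /* 2. while ... do       */
        if (j > 0)                        /* 3. if (j > 0)         */
            sum = Add(sum, j);            /* 4. sum = Add(sum, j)  */
        i = Add(i, 1);                    /* 7. i = Add(i, 1)      */
        j = js[k++];                      /* 8. read j             */
    }
    return sum;                           /* 9. return sum         */
}

/* Driver simulating the not-yet-implemented Main module. */
int main(void)
{
    int js[] = {4, 10};
    printf("Sum = %d (expected 4)\n", Sum(10, js));
    return 0;
}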

System Integration, Build Plan, Integration Testing. Development or integration strategies: decide the order in which components of the system will be developed.

Order of Integration: USES Relation For any two distinct modules Mi and Mj, we say that Mi USES Mj if and only if correct execution of Mj is necessary for Mi to complete the task described in its specification. If Mi USES Mj, we also say that Mi is a client of Mj because Mi requires the services that Mj provides

Integration: Uses Graph over the modules Play Game, Collect Cards, Determine Winner, Play Hand, Deal, Announce Winner, Compare Hands, and Get Hand Value.

Common build strategies. Big Bang: all coding precedes all integration. Bottom Up: start at low-level utility modules. Top Down: start at high-level control modules. Incremental/Sandwich: integrate control modules top down and utility modules bottom up.

Big Bang Integration: all modules (Play Game, Collect Cards, Determine Winner, Play Hand, Deal, Announce Winner, Compare Hands, Get Hand Value) are integrated at once.

Big Bang Integration. Advantages: no need to write stubs or drivers. Disadvantages: difficult to identify the units causing errors; critical modules receive no extra testing; major design flaws are discovered very late; no flexibility in scheduling.

Bottom Up Integration: implement a module; implement a test driver (often a quick prototype); execute tests; replace the test driver with its implementation; implement a test driver for the new module; repeat until all modules are integrated.

Bottom Up Integration (progression): first the lowest-level modules Announce Winner, Compare Hands, and Get Hand Value are tested under a driver; then Determine Winner replaces the driver; then a new driver exercises Determine Winner; then Collect Cards, Play Hand, and Deal are added under the driver; finally Play Game replaces the driver and the system is complete.

Bottom Up Integration. Advantages: no need to write stubs; low-level utilities are well tested; high-level (often simple) drivers are not as well tested. Disadvantages: not all low-level utilities are important; emphasis on low-level functionality; major design flaws are discovered late.

Top Down Integration: implement the highest-level component; create stubs for the called components; test the component with the stubs; implement and test the stubs one by one, using stubs for any components they call; repeat until all stubs are implemented.

Top Down Integration (progression): first Play Game is tested over stubs; then Collect Cards, Determine Winner, Play Hand, and Deal replace the stubs (with new stubs below them); finally Announce Winner, Compare Hands, and Get Hand Value complete the system.

Top Down Integration. Advantages: no need to write drivers; high-level (control) modules are well tested; unimportant low-level utilities are not as well tested. Disadvantages: important low-level utilities are not tested as well; simple high-level control modules may not need the extra testing.

Incremental Top Down Integration: identify the most important path through the uses graph; implement and test the highest module in the path, creating drivers and stubs as necessary; select the next module on that path; repeat until the path is done.

Incremental Top Down Integration (cont.): select the next most important path and follow the same process; repeat until all paths have been implemented. Paths do not have to start at the root or end with leaves.

Incremental Integration: an example path through the uses graph, Play Game - Determine Winner - Compare Hands, is integrated first; Collect Cards, Play Hand, Deal, Announce Winner, and Get Hand Value follow in later paths.

Incremental Top Down Integration. Advantages: test the most important functionality first; easy to isolate errors; find major design flaws early; have an incomplete but "working" system early (good for morale, visibility, and early assessment); minimizes the need for drivers and stubs.

Developing An Incremental Build Plan: identify the most important paths through the uses graph (most-used functionality, centrality, complex components that play a key role); save "frills" for later paths; produce something that works early, and add features incrementally for easier testing and debugging.

Components of Test Plan 1. Scope of testing 2. Test plan A. Test phases and builds B. Schedule C. Overhead software D. Environment and resources 3. Test procedure n (description of test for build n) A. Order of integration B. Unit tests for modules in build C. Test environment D. Test case data E. Expected results for build n 4. Actual test results 5. References 6. Appendices

Software Maintenance Process: identify changes to P; modify P to get P'; select T' in T for P'; if P' is not correct for T', find (and fix) the faults in P'; once P' is correct for T', create T'' for the untested parts of P'; if P' is not correct for T'', again find and fix the faults; finally create T''' from T + T'' for future use.

Selective Retest P’ correct for T’? P’ correct for T’’? Identify changes to P Modify P to get P’ Select T’ in T for P’ P’ correct for T’? Find faults in P’ F F T Create T’’ for untested parts of P’ T P’ correct for T’’? Create T’’’ T + T’’

Selective Retest: given a test suite T for program P and a modified version P', which tests in T should be rerun on P'?

Selective Retest. Regression test selection problem: Q1: Which tests do not need to be rerun? Q2: Which tests need to be rerun unchanged? Q3: Which tests must be rerun in modified form? Q4: How can we prioritize the selected tests? Test suite management problem: Q5: Which tests are obsolete? Q6: Which tests are redundant? Q7: How can we minimize test suites? Coverage identification problem: Q8: What new tests are needed?

Regression Test Selection (Q1, Q2): given program P, its test suite T, and the modified version P', select the subset T' of T to rerun on P'.

Sample Program
Procedure AVG
S1   count = 0
S2   fread(fptr, n)
S3   while (not EOF) do
S4     if (n < 0)
S5       return (error)
       else
S6       nums[count] = n
S7       count ++
       endif
S8     fread(fptr, n)
     endwhile
S9   avg = mean(nums, count)
S10  return (avg)
(CFG nodes: entry, S1, S2, S3, S4, S5, S6, S7, S8, S9, S10, exit; branches at S3 and S4.)

Execution Tracing: each test's execution through AVG is traced.
Test  Input       Output
t1    empty file  0
t2    -1          error
t3    1 2 3       2
For example, t1 traces entry, S1, S2, S3 (F), S9, S10, exit; t2 takes the S3 (T), S4 (T), S5 path; t3 iterates the loop through S6, S7, S8 and then exits through S9 and S10.

Test History Information: for each test, record which parts of the program it executed.
Test  Input       Output
t1    empty file  0
t2    -1          error
t3    1 2 3       2
Branch history (which tests traverse each CFG edge): entry-S1-S2-S3: {t1, t2, t3}; S3 to S4 (T): {t2, t3}; S3 to S9 (F): {t1, t3}; S4 to S5 (T): {t2}; S4 to S6 (F): {t3}.
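
A minimal sketch of how this history could drive selection (the data layout and driver are illustrative, and the per-statement history below was obtained by hand-tracing t1-t3 through AVG; the actual algorithm works on CFG edges during the walk shown later): for each statement, record which tests executed it, and when statements change, select the union of their test sets.

#include <stdio.h>

#define NUM_TESTS 3              /* t1, t2, t3 */
#define NUM_STMTS 10             /* S1 .. S10  */

/* history[s][t] == 1 if test t executed statement S(s+1) in the old version. */
static const int history[NUM_STMTS][NUM_TESTS] = {
    {1, 1, 1},   /* S1  */
    {1, 1, 1},   /* S2  */
    {1, 1, 1},   /* S3  */
    {0, 1, 1},   /* S4  */
    {0, 1, 0},   /* S5  */
    {0, 0, 1},   /* S6  */
    {0, 0, 1},   /* S7  */
    {0, 0, 1},   /* S8  */
    {1, 0, 1},   /* S9  */
    {1, 0, 1},   /* S10 */
};

int main(void)
{
    /* Statements affected by the change from AVG to the modified AVG:
       the predicate at S4 changed, and S5a was inserted just before S5. */
    const int changed[] = {4, 5};
    const int num_changed = 2;
    int selected[NUM_TESTS] = {0};

    for (int c = 0; c < num_changed; c++)
        for (int t = 0; t < NUM_TESTS; t++)
            if (history[changed[c] - 1][t])
                selected[t] = 1;

    printf("rerun:");
    for (int t = 0; t < NUM_TESTS; t++)
        if (selected[t])
            printf(" t%d", t + 1);
    printf("\n");                /* prints: rerun: t2 t3 */
    return 0;
}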

Regression Test Selection Algorithm (applied to Procedure AVG, whose listing and CFG are shown above).

Modified Version of AVG
Procedure AVG (modified)
S1'  count = 0
S2'  fread(fptr, n)
S3'  while (not EOF) do
S4'    if (n <= 0)
S5a      print ("bad input")
S5'      return (error)
       else
S6'      nums[count] = n
       endif
S8'    fread(fptr, n)
     endwhile
S9'  avg = mean(nums, count)
S10' return (avg)
(The original Procedure AVG is shown above.)

CFG Walk: the selection algorithm walks the CFGs of AVG and the modified AVG in parallel (entry with entry', S1 with S1', and so on), carrying the test-history sets {t1, t2, t3}, {t1, t3}, {t2, t3}, {t2}, {t3} on the edges. When the walk reaches a point where the two graphs differ (the modified predicate at S4' and the inserted S5a), the tests recorded on the edge leading into the change are selected; here the S3-to-S4 edge carries {t2, t3}, so t2 and t3 must be rerun.

Prototype Implementation: the DejaVu tool takes program P, modified program P', and test suite T as input, and outputs the selected test suite T'.

Study of C Programs: seven C programs (~300 LOC each) from Siemens Labs, 7-42 versions, 1000-5000 tests; Microsoft NT Calc, a C program (2 KLOC), 9 versions, 388 tests; a European Space Agency C program for satellite antennae adjustment (11 KLOC), 33 versions, 10,000 tests; Internet Empire, a C program (50 KLOC), 5 versions, no tests.

Test Selection for C Programs