Handouts: Software Testing and Quality Assurance: Theory and Practice, Chapter 2: Theory of Program Testing


Outline of the Chapter
- Basic Concepts in Testing Theory
- Theory of Goodenough and Gerhart
- Theory of Weyuker and Ostrand
- Theory of Gourlay
- Adequacy of Testing
- Limitations of Testing
- Summary

Basic Concepts in Testing Theory

Testing theory puts emphasis on:
- Detecting defects through program execution
- Designing test cases from different sources: requirement specification, source code, and the input and output domains of programs
- Selecting a subset of test cases from the entire input domain [1,2]
- Effectiveness of test selection strategies [3-5]
- Test oracles used during testing [6,7]
- Prioritizing the execution of test cases [8]
- Adequacy analysis of test cases [9-15]

Theory of Goodenough and Gerhart

The theory of Goodenough and Gerhart concerns selecting test data from the input domain of a program.

Fundamental concepts:
- Let P be a program and D its input domain. Let T ⊆ D. P(d) is the result of executing P with input d. (Figure 2.1: Executing a program with a subset of the input domain.)
- OK(d): represents the acceptability of P(d). OK(d) = true iff P(d) is an acceptable outcome.
- SUCCESSFUL(T): T is a successful test iff ∀t ∈ T, OK(t).
- Ideal test: T is an ideal test if (∀t ∈ T, OK(t)) ⇒ (∀d ∈ D, OK(d)). That is, if from the successful execution of a sample of the input domain we can conclude that the program contains no errors, then the sample constitutes an ideal test.

Note: "iff" means "if and only if" (⇔).
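The definitions above can be sketched directly over a finite input domain. This is a minimal illustration, not from the text: the program, the oracle, and the domain are invented for the example.

```python
# Hypothetical sketch of OK(d), SUCCESSFUL(T), and an ideal test.
# The program, specification, and domain are made up for illustration.

def program(d):
    """P: the program under test, intended to compute the absolute value."""
    return d if d >= 0 else -d

def spec(d):
    """Reference behavior acting as the oracle."""
    return abs(d)

def OK(d):
    """OK(d) is True iff P(d) is an acceptable outcome."""
    return program(d) == spec(d)

def successful(T):
    """SUCCESSFUL(T): every test in T produces an acceptable outcome."""
    return all(OK(t) for t in T)

D = range(-100, 101)   # a finite input domain for the sketch
T = [-5, 0, 7]         # a selected test set, T a subset of D

print(successful(T))             # True
print(all(OK(d) for d in D))     # True: P is correct on all of D,
                                 # so here any successful T is "ideal"
```

Note the asymmetry the theory captures: successful(T) is cheap to check, but only the (usually infeasible) check over all of D justifies calling T ideal.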

Theory of Goodenough and Gerhart (contd.)

Fundamental concepts (contd.):
- Reliable criterion: a test selection criterion C is reliable iff either every test set selected by C is successful, or no selected test set is successful. Reliability refers to consistency.
- Valid criterion: a test selection criterion C is valid iff, whenever P is incorrect, C selects at least one test set T that is not successful for P. Validity refers to the ability to produce meaningful results.
- COMPLETE defines how a test selection criterion C is used in selecting a particular set of test data T from D. Let C denote a set of test predicates. If d ∈ D satisfies test predicate c ∈ C, then c(d) is said to be true. With this in mind, for T ⊆ D:
  COMPLETE(T, C) ≡ (∀c ∈ C)(∃t ∈ T) c(t) ∧ (∀t ∈ T)(∃c ∈ C) c(t)
  That is, for every test predicate we select a test that satisfies it, and every selected test satisfies some test predicate.

Fundamental theorem:
(∃T ⊆ D)(COMPLETE(T, C) ∧ RELIABLE(C) ∧ VALID(C) ∧ SUCCESSFUL(T)) ⇒ (∀d ∈ D) OK(d)
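The COMPLETE(T, C) predicate translates almost literally into code. A minimal sketch, with invented test predicates (the three-way sign split is an assumption, not from the text):

```python
# Sketch of COMPLETE(T, C): every predicate in C is satisfied by some
# test in T, AND every test in T satisfies some predicate in C.
# The predicate set C below is invented for illustration.

def complete(T, C):
    # (for all c in C)(exists t in T) c(t)
    covers_every_predicate = all(any(c(t) for t in T) for c in C)
    # (for all t in T)(exists c in C) c(t)
    no_useless_test = all(any(c(t) for c in C) for t in T)
    return covers_every_predicate and no_useless_test

C = [lambda d: d < 0,     # test predicate: negative input
     lambda d: d == 0,    # test predicate: zero input
     lambda d: d > 0]     # test predicate: positive input

print(complete([-5, 0, 7], C))   # True: all predicates covered
print(complete([-5, 7], C))      # False: no test satisfies d == 0
```

The second conjunct is easy to overlook: a test set that covers every predicate but also contains tests satisfying no predicate is not COMPLETE.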

Theory of Goodenough and Gerhart (contd.)

Program faults occur due to:
- our inadequate understanding of all conditions that a program must deal with;
- our failure to realize that certain combinations of conditions require special care.

Kinds of program faults:
- Logic fault
  - Requirement fault: our failure to capture the real requirements of the customer
  - Design fault: our failure to satisfy an understood requirement
  - Construction fault: our failure to satisfy a design (example: see the blackboard)
- Performance fault
- Missing control-flow paths (example: see the blackboard)
- Inappropriate path selection (example: see the blackboard)
- Inappropriate or missing action (example: see the blackboard)

Test predicate: a description of conditions and combinations of conditions relevant to the correct operation of the program.

Theory of Weyuker and Ostrand

Weyuker and Ostrand provide a modified theory in which the validity and reliability of test selection criteria depend only on the program specification, rather than on a particular program.

- Let d ∈ D, the input domain of program P, and T ⊆ D.
- OK(P, d) = true iff P(d) is an acceptable outcome of program P.
- SUCC(P, T): T is a successful test for program P iff ∀t ∈ T, OK(P, t).
- Uniformly valid criterion: criterion C is uniformly valid iff
  (∀P)[(∃d ∈ D)(¬OK(P, d)) ⇒ (∃T ⊆ D)(C(T) ∧ ¬SUCC(P, T))]
  That is, over all programs for a given specification: whenever P is incorrect, C selects at least one test set T that is not successful for P.
- Uniformly reliable criterion: criterion C is uniformly reliable iff
  (∀P)(∀T1, T2 ⊆ D)[(C(T1) ∧ C(T2)) ⇒ (SUCC(P, T1) ⇔ SUCC(P, T2))]
  That is, over all programs for a given specification: either every test set selected by C is successful, or no selected test set is successful.
- Uniformly ideal test selection: a uniformly ideal test selection criterion for a given specification is both uniformly valid and uniformly reliable.
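Because the definitions quantify over all programs and all selected test sets, they can be checked by brute force on a small finite example. The following sketch is entirely invented (the specification, the family of candidate programs, and the criterion C are assumptions made for illustration):

```python
# Brute-force check of uniform reliability of a criterion C over a
# small family of programs for one specification. Everything here is
# an invented illustration, not from the text.
from itertools import combinations

D = range(-10, 11)
spec = abs                              # the specification's intended output

programs = [abs,                        # a correct program
            lambda d: d,                # incorrect for every d < 0
            lambda d: max(d, -d)]       # correct, different construction

def succ(p, T):
    """SUCC(P, T): P(t) is acceptable for every t in T."""
    return all(p(t) == spec(t) for t in T)

def C(T):
    """Criterion: T contains at least one negative and one non-negative input."""
    return any(t < 0 for t in T) and any(t >= 0 for t in T)

# All two-element test sets selected by C:
test_sets = [T for T in combinations(D, 2) if C(T)]

uniformly_reliable = all(
    succ(p, T1) == succ(p, T2)
    for p in programs
    for T1 in test_sets
    for T2 in test_sets)

print(uniformly_reliable)   # True for this toy family
```

It comes out reliable here only because the one incorrect program fails on every negative input and C forces a negative input into every selected set; change the faulty program to fail on a single value and reliability is lost, which is exactly why uniformly ideal criteria are so hard to achieve.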

Theory of Gourlay

The theory establishes a relationship between three sets of entities: specifications, programs, and tests.

Notation:
- 𝒫: the set of all programs (p ∈ P ⊆ 𝒫)
- 𝒮: the set of all specifications (s ∈ S ⊆ 𝒮)
- 𝒯: the set of all tests (t ∈ T ⊆ 𝒯)
- "p ok(t) s" means the result of testing p with t is judged to be acceptable by s.
- "p ok(T) s" means "p ok(t) s" for all t ∈ T.
- "p corr s" means p is correct with respect to the specification s.
- A testing system is a collection ⟨𝒫, 𝒮, 𝒯, corr, ok⟩, where corr ⊆ 𝒫 × 𝒮, ok ⊆ 𝒯 × 𝒫 × 𝒮, and ∀p ∀s ∀t (p corr s ⇒ p ok(t) s).

Theory of Gourlay (contd.)

A test method is a function M: 𝒫 × 𝒮 → 𝒯; in the general case, a test method takes a specification S and a program P and produces test cases. In practice, test methods are predominantly:
- Program dependent: T = M(P). Test cases are derived from the source code only; this is called white-box testing.
- Specification dependent: T = M(S). Test cases are derived from the specification only; this is called black-box testing.
- Expectation dependent: test cases are derived from customer expectations of the product.

Power of test methods: let M and N be two test methods. For M to be at least as good as N, we want the following: whenever N finds an error, so does M. Let FM and FN be the sets of faults discovered by the test sets produced by methods M and N, respectively, and let TM and TN be the test sets produced by M and N. Two cases:
1. TN ⊆ TM: method M is clearly at least as good as method N, because M produces test cases that reveal all the faults revealed by N's test cases (Figure 2.3a).
2. TM and TN overlap: M is at least as good as N if, when the program p is executed under both test sets TM and TN, the faults they reveal satisfy FN ⊆ FM (Figure 2.3b).

Figure 2.3: Different ways of comparing the power of test methods.
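The "at least as good as" relation is just a subset check on fault sets. A minimal sketch, with invented fault identifiers:

```python
# Sketch of the "at least as good as" comparison from the text.
# Fault identifiers are invented for illustration.

def at_least_as_good(F_M, F_N):
    """M is at least as good as N iff every fault N reveals, M also
    reveals: F_N is a subset of F_M."""
    return F_N <= F_M

F_N = {"f1", "f2"}          # faults revealed by method N's test set
F_M = {"f1", "f2", "f3"}    # faults revealed by method M's test set

print(at_least_as_good(F_M, F_N))   # True:  F_N is a subset of F_M
print(at_least_as_good(F_N, F_M))   # False: M reveals f3, which N misses
```

Note the relation is a partial order: when neither fault set contains the other, the two methods are simply incomparable.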

Adequacy of Testing

Reality: new test cases, in addition to the planned test cases, are designed while performing testing. Let the test set be T. If T does not reveal any more faults, we face a dilemma:
- P is fault-free (not an acceptable conclusion), or
- T is not good enough to reveal (more) faults.

Evaluating the adequacy of T means determining whether T is "good enough" to reveal the faults in P.

Some ad hoc stopping criteria:
- The time allocated for testing is over.
- It is time to release the product.
- Test cases no longer reveal faults.

Adequacy of Testing (contd.)

Figure 2.4: Context of applying test adequacy.

Adequacy of Testing (contd.)

Two practical methods for evaluating test adequacy:
- Fault seeding: implant a certain number (say, X) of known faults in P, and test P with T. If k% of the X seeded faults are revealed, T is assumed to have revealed k% of the unknown faults as well. So, if 100% of the implanted faults have been revealed by T, we feel more confident about the adequacy of T. (More in Chapter 13.)
- Program mutation: a mutation of P is obtained by making a small change to P. Some mutations are faulty, whereas others are equivalent to P. T is said to be adequate if it causes every faulty mutation to produce an unexpected result. (More in Chapter 3.)
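The fault-seeding argument also yields an estimate of how many faults remain. A small sketch of that arithmetic; the counts are invented for illustration:

```python
# Sketch of the fault-seeding estimate: if T reveals k% of the X
# seeded faults, assume it also reveals about k% of the real ones.
# All counts below are invented for illustration.

def estimated_total_faults(seeded, seeded_found, unseeded_found):
    """Estimate total real faults: unseeded_found / (seeded_found / seeded)."""
    reveal_ratio = seeded_found / seeded   # fraction of seeded faults revealed
    return unseeded_found / reveal_ratio   # scale up the real-fault count

# 25 faults seeded, 20 of them revealed by T, plus 8 real faults found:
# reveal ratio 0.8, so an estimated 8 / 0.8 = 10 real faults in total,
# i.e. about 2 real faults likely remain undetected.
print(estimated_total_faults(25, 20, 8))   # 10.0
```

The estimate is only as good as the assumption that seeded faults are as hard to find as real ones, which is the main practical criticism of the technique.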

Limitations of Testing

- Dijkstra's famous observation: testing can reveal the presence of faults, but not their absence.
- Faults are detected by running P with a small test set T, where |T| << |D| ("<<" denotes "much smaller than").
  - Testing with a small test set raises the concern of testing efficacy.
  - Testing with a small test set is less expensive.
- The result of each test must be verified with a test oracle, and verifying a program's output is not a trivial task.
- There are non-testable programs. A program is non-testable if:
  - there is no test oracle for the program, or
  - it is too difficult to determine the correct output.

Summary

- Theory of Goodenough and Gerhart: ideal test, test selection criteria, program faults, test predicates
- Theory of Weyuker and Ostrand: uniformly ideal test selection
- Theory of Gourlay: testing system; power of test methods (the "at least as good as" relation)
- Adequacy of testing: need for evaluating adequacy; methods for evaluating adequacy (fault seeding and program mutation)
- Limitations of testing: testing is performed with a test set T such that |T| << |D|; Dijkstra's observation; the test oracle problem