Case Study: Black-Box Testing


Case Study: Black-Box Testing Software Testing and Verification Lecture 6.1 Prepared by Stephen M. Thebaut, Ph.D. University of Florida

Boom! Bang! Pow! The following case study illustrates the application of black-box test case design techniques to a relatively small but “real-world” specification for a standard mathematical library function: pow(). (Slightly simplified versions of actual MAN pages are used.)

Assumptions We will assume that the function in question is currently under development. You are part of the test design team in an independent, system-level testing group. In addition to draft user MAN pages, relevant mathematical reference material and software specification documents are available.

Draft pow MAN page

Mathematical Library                                              pow

NAME
     pow - power function

SYNOPSIS
     pow(x,y);

DESCRIPTION
     The pow() function computes the value of x raised to the power y,
     x**y. If x is negative, y must be an integer value.

Draft pow MAN page (cont’d)

Mathematical Library                                              pow

RETURN VALUES
     Upon successful completion, pow() returns the value of x raised
     to the power y.
     If x is 0 and y is 0, 1 is returned.
     If x or y is non-numeric, NaN is returned.
     If x is 0 and y is negative, +HUGE_VAL is returned.
     If the correct value would cause overflow, +HUGE_VAL or -HUGE_VAL
     is returned.
     If the correct value would cause underflow, +TINY_VAL or -TINY_VAL
     is returned.
     For exceptional cases, matherr is called and returns a value (via
     errno) describing the type of exception that has occurred.

Draft pow MAN page (cont’d)

Mathematical Library                                              pow

USAGE
     An application wishing to check for error situations should set
     errno to 0 before calling pow(). If errno is non-zero on return,
     or if the return value of pow() is NaN, an error has occurred.

Issues

Are the requirements reflected in the draft MAN page for pow complete? consistent? unambiguous? verifiable?
Should matherr and the value of errno be taken into account when designing tests for pow?
Is it clear what pow does in “error cases”? (E.g., is matherr called when x or y is not a number? Is 0**0 an “error case”?)

Excerpts from the matherr MAN page

Mathematical Library                                          matherr

NAME
     matherr - math library exception-handling function

SYNOPSIS
     matherr;

DESCRIPTION
     Certain math functions call matherr when exceptions are detected.
     matherr stores an integer in errno describing the type of
     exception that has occurred, from the following list of constants:

     DOMAIN      argument domain exception
     OVERFLOW    overflow range exception
     UNDERFLOW   underflow range exception

Excerpts from the matherr MAN page (cont’d)

Mathematical Library                                          matherr

STANDARD CONFORMANCE
     The following table summarizes the pow exceptions detected, the
     values returned, and the constants stored in errno by matherr.

                         DOMAIN       OVERFLOW      UNDERFLOW
     usual cases         -----        +/-HUGE_VAL   +/-TINY_VAL
     0**0                1            -----         -----
     (x<0)**(not int)    0            -----         -----
     0**(y<0)            +HUGE_VAL    -----         -----

     The constants HUGE_VAL and TINY_VAL represent the maximum and
     minimum single-precision floating-point numbers for the given
     environment, respectively.

Notes from the software specification documents… Overflow occurs if and only if the actual value of x**y is larger than +/-HUGE_VAL. Similarly, underflow occurs if and only if the actual non-zero value of x**y is smaller than +/-TINY_VAL. (Note that +HUGE_VAL is distinct from -HUGE_VAL; similarly for +TINY_VAL and -TINY_VAL.) Like DOMAIN, OVERFLOW, and UNDERFLOW, “NaN” (Not a Number) is a system-defined constant.

Black-box design strategies Which black-box design strategies are appropriate? Equivalence Partitioning? Cause-Effect Analysis? Boundary Value Analysis? Intuition and Experience? In what order should those that are appropriate be applied?

A possible scheme… Develop a simple Cause-Effect Model based on the MAN pages and other available documentation. Identify Causes and Effects. Describe relationships and constraints deducible from the specification via Cause-Effect graphs that could serve as a basis for test case design.

A possible scheme… (cont’d) Identify specific Boundary Values that should/could be covered using additional test cases. Identify other specific conditions or usage scenarios that may be worthwhile to cover by applying Intuition & Experience.

Developing a Cause-Effect Model What Effects should be modeled for pow()? Should matherr be considered? Should they be mutually exclusive or not? What Causes should be modeled? Should they be based on pairwise (x,y) value classes, or independent x and y value classes? (E.g., should "x=0 & y<0" be a single Cause or should "x=0" and "y<0" be two separate Causes?)

Some Non-Mutually Exclusive Effects…

21. pow returns x**y
22. pow returns 0
23. pow returns 1
24. pow returns NaN
25. pow returns +/-HUGE_VAL
26. pow returns +/-TINY_VAL
27. matherr returns OVERFLOW
28. matherr returns UNDERFLOW
29. matherr returns DOMAIN

And Some Corresponding Causes

1. x=0
2. x<0
3. x>0
4. y=0
5. y<0
6. y>0
7. y is an integer
8. x not a number (NaN)
9. y not a number (NaN)
10. x**y > +HUGE_VAL
11. x**y < -HUGE_VAL
12. 0 < x**y < +TINY_VAL
13. 0 > x**y > -TINY_VAL

Alternative Approach: Mutually Exclusive Effects

Values RETURNED by pow() and SET (via errno) by matherr:

21. pow: NaN, matherr: not called
22. pow: +HUGE_VAL, matherr: OVERFLOW
23. pow: -HUGE_VAL, matherr: OVERFLOW
24. pow: +TINY_VAL, matherr: UNDERFLOW
25. pow: -TINY_VAL, matherr: UNDERFLOW
26. pow: 1, matherr: DOMAIN
27. pow: 0, matherr: DOMAIN
28. pow: +HUGE_VAL, matherr: DOMAIN
31. pow: returns x**y (???), matherr: not called

Non-error Effects The draft MAN page primarily describes exceptional/error behaviors of pow(). What non-error Effect(s) should be modeled? An implicit part of the specification for pow() includes the mathematical definition of exponentiation. Consider, for example, the following excerpts from a popular math dictionary:

Excerpts from a mathematical definition of pow() "If the exponent (y) is a positive integer, then x**y MEANS x if y=1, and it MEANS the product of y factors each equal to x if y>1." In other words, when y is an integer ≥ 1, the function associated with x**y is DEFINED to be the y-fold product

     x * x * ... * x     (y factors)

Excerpts from a mathematical definition of pow() (cont’d) Similarly, the dictionary states: "If x is a nonzero number, the value of x**0 is DEFINED to be 1." ... "A negative exponent indicates that...the quantity is to be reciprocated. ...If the exponent is a fraction p/q, then x**(p/q) is DEFINED as (x**(1/q))**p, where x**(1/q) is the positive qth root of x if x is positive..."

Non-error Effects (cont’d) Thus, x**y really MEANS different functions depending on what region (point, line, etc.) in the x,y plane we are talking about. Clearly, these differences could be reflected in separate non-error Effects. For example, you might define an Effect to be: "pow returns the product of y factors, each equal to x, and matherr is not called"

Non-error Effects (cont’d) The Causes for this Effect could be modeled as "y is a whole number ≥ 1" (a single Cause) AND'ed with an intermediate node representing "NOT any of the error conditions". We can therefore expand our single non-error Effect to:

31. pow: the product of y factors, each equal to x, matherr: not called
32. pow: 1 by defn. of (x<>0)**0, matherr: not called
33. pow: the reciprocal of x**(-y), matherr: not called
34. pow: a positive root of x is computed, matherr: not called

Corresponding Causes

pow() is called with arguments x,y such that...

1. x not a number (NaN)
2. y not a number (NaN)
3. x**y > +HUGE_VAL - the mathematical value of input x raised to the power of input y exceeds the constant +HUGE_VAL
4. x**y < -HUGE_VAL - (similar to above defn.)
5. 0 < x**y < +TINY_VAL - the mathematical value of input x raised to the power of input y is greater than 0 and less than +TINY_VAL
6. 0 > x**y > -TINY_VAL - (similar to above defn.)

Corresponding Causes (cont’d)

7. x=0, y=0
8. x<0, y not an integer
9. x=0, y<0
10. y an integer >= 1
11. x<>0, y=0
12. y<0
13. x>=0, y not an integer

Cause-Effect Graphs

Error Situations:
[Graph: Causes 1-9 are combined through OR/AND nodes into the error Effects 21-28.]

Cause-Effect Graphs (cont’d)

Non-Error Situations (for use with Effects 31-34):
[Graph: Causes 1-9 are OR'ed into an intermediate error node (NaN/exception); its negation is AND'ed to form the intermediate node NES, "no error situation".]

Cause-Effect Graphs (cont’d)

[Graph: NES AND'ed with Causes 10-13 yields Effects 31-34, respectively.]

Identifying Boundary Values

x,y input boundary points/lines/regions:

x=0, y=0 (pow: 1, matherr: DOMAIN)
x<0, y not an integer (pow: 0, matherr: DOMAIN)
x=0, y<0 (pow: +HUGE_VAL, matherr: DOMAIN)
y=1 (pow: x)
y an integer > 1 (pow: the product of y factors, each equal to x)
x≠0, y=0 (pow: 1)
x≠0, y<0 (pow: reciprocal of x**(-y))

Identifying Boundary Values (cont’d)

pow() output boundaries: +/-HUGE_VAL, +/-TINY_VAL, +/-0

[Number line: results beyond +HUGE_VAL or -HUGE_VAL are overflow; non-zero results between -TINY_VAL and +TINY_VAL, on either side of 0.0, are underflow.]

Exploring Boundaries

[Graph: the x,y input plane with the boundary point x=0, y=0 highlighted.]

Exploring Boundaries (cont’d)

[Graph: the x,y input plane with the boundary line x≠0, y=0 (the x-axis, excluding the origin) highlighted.]

Exploring Boundaries (cont’d)

[Graph: the x,y input plane with the overflow boundary curve x**y = +HUGE_VAL highlighted.]

Intuition and Experience

Various input TYPE errors (non-numeric chars, blanks, pointers, functions, etc.), including a mix of: no arguments, 1 argument, > 2 arguments
Various combinations of system-defined constant inputs: +/-HUGE_VAL, +/-TINY_VAL, +/-INF, +/-0, NaN
All documented examples (including invalid or singular inputs)
System rounding mode variations, etc.
