
1 jpf@fe.up.pt www.fe.up.pt/~jpf
TQS - Teste e Qualidade de Software (Software Testing and Quality) Test Case Design – Black Box Testing João Pascoal Faria

2 Index Introduction Black box testing techniques

3 Development of test cases
Complete testing is impossible => testing cannot guarantee the absence of faults => how to select a subset of all possible test cases with a high chance of detecting most faults? => test case design strategies and techniques. Because: if we have a good test suite, then we can have more confidence in a product that passes that test suite.

4 Quality attributes of “good” test cases
- Capability to find defects, particularly defects with higher risk (risk = frequency of failure (manifestation to users) * impact of failure; cost of post-release failure ≈ risk)
- Capability to exercise multiple aspects of the system under test (reduces the number of test cases required and the overall cost)
- Low cost of development (specify, design, code), execution, and result analysis (pass/fail analysis, defect localization)
- Easy to maintain, reducing whole life-cycle cost (maintenance cost ≈ size of test artefacts)

5 Adequacy criteria
Criteria to decide if a given test suite is adequate, i.e., gives us "enough" confidence that "most" of the defects are revealed. "Enough" and "most" depend on the criticality of the product (or product component). In practice, reduced to coverage criteria:
- Requirements / specification coverage: at least one test case for each requirement; cover all cases described in an informal specification; cover all statements in a formal specification
- Model coverage: state-transition coverage (cover all states, all transitions, etc.)
- Use-case and scenario coverage: at least one test case for each use case or scenario
- Code coverage: cover 100% of methods, cover 90% of instructions, etc.

6 Adequacy criteria “Coverage-based testing strategy based on our finding: For normal operational testing: specification-based, regardless of code coverage For exceptional testing: code coverage is an important metrics for testing capability” Xia Cai and Michael R. Lyu, “The Effect of Code Coverage on Fault Detection under Different Testing Profiles”, ICSE 2005 Workshop on Advances in Model-Based Software Testing (A-MOST)

7 How do we know what percentage of defects has been revealed?
Goals that require this information: “Find and fix 99% of the defects” “Reduce the number of defects to less than 10 defects / KLOC” Difficulty: the kind of coverage we can measure (statements covered, requirements covered) is not the kind of coverage we would like to measure (defects found and missed) … Techniques: Past experience Mutation testing (defect injection) Static analysis of code samples

8 What is "confidence"?
[Chart: estimated quality level (1 / defect density) vs. precision of estimate (knowledge). Zones: low confidence / high risk (before testing, based on past experience); medium confidence / medium risk (after testing: product above or below average, with the same number of bugs detected!); TARGET ZONE of high confidence / low risk (after testing more thoroughly and fixing bugs); beyond that zone, testing becomes too expensive or too late.]

9 Misperceived product quality
                          Product quality
                          Low                                       High
Test suite quality High   Many bugs found                           Some bugs found ("... and think you are here")
Test suite quality Low    Some bugs found ("You may be here ...")   Very few bugs found

10 The optimal amount of testing
(source: "Software Testing", Ron Patton)

11 Stop criteria
Criteria to decide when to stop testing (and fixing bugs), i.e., when to stop test-fix iterations and release the product.

12 Stop criteria
[Graph: number of bugs found (benefit) vs. test effort (cost); the curve flattens at a saturation point, where testing should stop.] "90% of the bugs can be found with 10% of the test effort"

13 Stop criteria
[Graph: value of bugs found and fixed vs. test & repair effort (cost). Marked points: minimum quality achieved; cost = benefit; end of business opportunity.]

14 Stop criteria What bugs should be fixed?
If the bugs found are only 50% of the total number of bugs, it makes little difference whether 95% or 100% of the bugs found are fixed. Some bugs are very difficult to fix: difficult to locate (example: random bugs) or difficult to correct (example: in a third-party component). To reduce overall risk: fix all high severity bugs; fix (the easiest) 95% of the medium severity bugs; fix (the easiest) 70% of the low severity bugs.

15 Stop criteria
[Graph: number of bugs vs. time; the "bugs found" and "bugs fixed" curves converge over time, leaving X% of bugs found and not fixed.]

16 Improve versus assess
Different test techniques for different purposes:
- Find defects and improve the software under test (defect testing) - the focus here. Choose test cases that have a higher probability of finding defects, particularly defects with higher risk, and provide better hints on how to improve the system: boundary cases, frequent errors, code coverage, stress testing, profiling.
- Assess software quality (statistical testing). Estimate quality metrics (reliability, availability, efficiency, usability, ...) with test cases that represent typical use cases (based on a usage model/profile).

17 Test iterations: test to pass and test to fail
First test iterations: test-to-pass. Check if the software fundamentally works, with valid inputs, without stressing the system. Subsequent test iterations: test-to-fail. Try to "break" the system, with valid inputs at the operational limits, and with invalid inputs. (source: Ron Patton)

18 Benefits of test automation
Automatic (manual) test execution Requires that test cases are written in some executable language No impact on defect finding capability Increases test development costs (coding) May increase maintenance effort/cost (~ size of test artifacts) Dramatically reduces execution and result analysis (pass/fail analysis) costs Testing can be repeated more frequently Pays-off when testing is repeated (test iterations, regression testing) Automatic (manual) test case generation Test goals are provided to a tool Requires a formal specification / model of the system (if test outputs are to be generated automatically, and not only test inputs) Reduces design costs Much more test cases can be generated (than manually) Inferior capability to find defects (per test case), but overall capability may be similar or higher

19 Test harness
Auxiliary code developed to support testing:
- Test drivers: call the target code, simulating calling units or a user; test procedures and test cases are coded in the driver (for automatic test execution), or a user interface is created (for manual test execution)
- Test stubs: simulate modules/units/systems called by the target code
A minimal sketch of both follows.
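For illustration, a hand-rolled harness in Java; AlarmLogic, TemperatureSensor and FixedSensorStub are hypothetical names, not part of the original slides:

```java
// Unit under test and its dependency (hypothetical names).
interface TemperatureSensor {
    double read();
}

class AlarmLogic {                     // target code
    private final TemperatureSensor sensor;
    AlarmLogic(TemperatureSensor sensor) { this.sensor = sensor; }
    boolean overheated() { return sensor.read() > 100.0; }
}

// Test stub: simulates the called unit (the real sensor hardware).
class FixedSensorStub implements TemperatureSensor {
    private final double value;
    FixedSensorStub(double value) { this.value = value; }
    public double read() { return value; }
}

// Test driver: calls the target code with coded test cases.
public class AlarmLogicDriver {
    public static void main(String[] args) {
        check(new AlarmLogic(new FixedSensorStub(101.0)).overheated(), true);
        check(new AlarmLogic(new FixedSensorStub(99.0)).overheated(), false);
    }
    private static void check(boolean actual, boolean expected) {
        System.out.println(actual == expected ? "PASS" : "FAIL");
    }
}
```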

20 Exhaustive, random, point and parameterized testing
- Exhaustive testing: in general, impossible or infeasible, but some parts/features of the system might be tested this way
- Random testing: useful to generate many test cases, particularly if the test oracle is automated, but low probability of exercising boundary cases; repeatability is important
- "Point" testing: with very specific, well designed test data ("intelligent" testing) - the focus here
- Parameterized testing: reuse the same test procedure for different test data (passed as parameters); tests serve as (better) specifications (see the sketch below)
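A sketch of parameterized testing using JUnit 5 (assuming the junit-jupiter-params module is on the classpath); one test procedure is reused for several (input, expected) pairs, here for the absolute-value example used on later slides:

```java
import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvSource;
import static org.junit.jupiter.api.Assertions.assertEquals;

class AbsParameterizedTest {
    @ParameterizedTest
    @CsvSource({ "-10, 10",    // arbitrary negative value
                 "-1, 1",      // just below the boundary
                 "0, 0",       // on the boundary
                 "100, 100" }) // arbitrary positive value
    void absReturnsMagnitude(int input, int expected) {
        // Same test procedure, different test data passed as parameters.
        assertEquals(expected, Math.abs(input));
    }
}
```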

21 Test case design strategies and techniques
Black-box testing (not code-based; sometimes called functional testing) - the tester's view is inputs and outputs only:
- Knowledge sources: requirements document, user manual, specifications, models, domain knowledge, defect analysis data, intuition, experience, heuristics
- Techniques / methods: equivalence class partitioning, boundary value analysis, cause-effect graphing, error guessing, state-transition testing, random testing, scenario-based testing
White-box testing (also called code-based or structural testing):
- Knowledge sources: program code, control flow graphs, data flow graphs, cyclomatic complexity, high-level design, detailed design
- Techniques / methods: control flow testing/coverage (statement coverage, branch (or decision) coverage, condition coverage, branch and condition coverage, modified condition/decision coverage, multiple condition coverage, independent path coverage, path coverage), data flow testing/coverage, class testing/coverage, mutation testing
(adapted from: I. Burnstein, pg. 65)

22 Index Introduction Black box testing techniques

23 Equivalence class partitioning
Divide all possible inputs into classes (partitions) such that:
- There is a finite number of input equivalence classes
- You may reasonably assume that the program behaves analogously for inputs in the same class, so one test with a representative value from a class is sufficient: if the representative detects a defect, other class members would detect the same defect
(Can also be applied to outputs.)
[Diagram: the set of all inputs partitioned into classes i1 .. i4.]

24 Equivalence class partitioning
Strategy:
- Identify (valid and invalid) input equivalence classes, based on (test) conditions on inputs/outputs in the specification/description, and on heuristics and experience:
  "input x in [1..10]" => classes: x < 1, 1 <= x <= 10, x > 10
  "enumeration A, B, C" => classes: A, B, C, not in {A, B, C}
  "input integer n" => classes: n not an integer, n < min, min <= n < 0, 0 <= n <= max, n > max
- Define one (or a couple of) test cases for each class: test cases that cover valid classes (1 test case for 1 or more valid classes); test cases that cover at most one invalid class (1 test case for 1 invalid class)
- Usually useful to test for 0 / null / empty and other special cases

25 Equivalence class partitioning Example 1
Test a function for calculation of the absolute value of an integer x. Equivalence classes:

  Criteria       Valid eq. classes        Invalid eq. classes
  nr of inputs   1 (1)                    < 1 (2), > 1 (3)
  input type     integer (4)              non-integer (5)
  particular     x < 0 (6), x >= 0 (7)    -

Test cases (representative values):
  x = -10 covers classes (1, 4, 6)
  x = 100 covers classes (1, 4, 7)
  no input covers class (2)
  x = 10 20 covers class (3)
  x = "XYZ" covers class (5)
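As a rough illustration, a JUnit 5 sketch covering these classes; the absOf entry point is hypothetical, introduced only so the non-integer class (5) can be exercised from Java, where the compiler already rules out the wrong-arity classes (2) and (3):

```java
import static org.junit.jupiter.api.Assertions.*;
import org.junit.jupiter.api.Test;

class AbsEquivalenceTest {
    // Hypothetical string-parsing entry point for the function under test.
    static int absOf(String raw) {
        return Math.abs(Integer.parseInt(raw.trim()));
    }

    @Test void negativeInteger() { assertEquals(10, absOf("-10")); }    // classes 1, 4, 6
    @Test void nonNegativeInteger() { assertEquals(100, absOf("100")); } // classes 1, 4, 7
    @Test void nonIntegerInput() {                                       // class 5
        assertThrows(NumberFormatException.class, () -> absOf("XYZ"));
    }
}
```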

26 Equivalence class partitioning Example 2
Test a program that computes the sum of the first N integers, as long as this sum is less than maxint; otherwise an error should be reported. If N is negative, the program takes its absolute value |N|. Formally: given integer inputs N and maxint, compute result = Σ k (k = 0 .. |N|), if this sum <= maxint; error otherwise.

27 Equivalence class partitioning Example 2
Equivalence classes:

  Condition       Valid eq. classes              Invalid eq. classes
  nr of inputs    2                              < 2, > 2
  type of input   int int                        int no-int, no-int int
  abs(N)          N >= 0, N < 0                  -
  maxint          Σ k <= maxint, Σ k > maxint    -

Test cases (representative values):

  maxint   N     result
  100      10    55       (valid)
  100      -10   55       (valid, N < 0)
  10       5     error    (valid, Σ k = 15 > maxint)
  10             error    (invalid: one input only)
  "XYZ"    10    error    (invalid: non-integer maxint)
  10       E4    error    (invalid: non-integer N)
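A sketch of the derived tests in JUnit 5; SumToN.compute is a hypothetical implementation of the specified function, with the error outcome represented as an exception:

```java
import static org.junit.jupiter.api.Assertions.*;
import org.junit.jupiter.api.Test;

class SumToN {
    // Sketch of the specified function: Σ k for k = 0 .. |N|,
    // reported as an error (here: an exception) when the sum exceeds maxint.
    static long compute(long maxint, long n) {
        long sum = 0;
        for (long k = 0; k <= Math.abs(n); k++) sum += k;
        if (sum > maxint) throw new ArithmeticException("sum exceeds maxint");
        return sum;
    }

    @Test void validSumWithinLimit() { assertEquals(55, compute(100, 10)); }
    @Test void negativeNUsesAbsoluteValue() { assertEquals(55, compute(100, -10)); }
    @Test void sumAboveMaxintIsError() {
        assertThrows(ArithmeticException.class, () -> compute(10, 5)); // 15 > 10
    }
}
```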

28 Boundary value analysis
Based on experience / heuristics: testing boundary conditions of equivalence classes is more effective, i.e., values directly on, above, and beneath the edges of classes. If a system behaves correctly at boundary values, then it probably will work correctly at "middle" values.
- Choose input boundary values as tests in input classes, instead of, or in addition to, arbitrary values
- Choose also inputs that invoke output boundary values (values on the boundary of output classes)
Example strategy, as an extension of equivalence class partitioning:
- choose one (or more) arbitrary value(s) in each eq. class
- choose values exactly on the lower and upper boundaries of each eq. class
- choose values immediately below and above each boundary (if applicable)

29 Boundary value analysis
“Bugs lurk in corners and congregate at boundaries.” [Boris Beizer, "Software Testing Techniques"]

30 Boundary value analysis Example 1
Test a function for calculation of the absolute value of an integer. Valid equivalence classes:

  Condition    Valid eq. classes    Invalid eq. classes
  particular   x < 0, x >= 0        -

Test cases (see the sketch below):
  class x < 0, arbitrary value: x = -10
  class x >= 0, arbitrary value: x = 100
  class x >= 0, on boundary: x = 0
  class x < 0, on boundary: x = -1
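A JUnit 5 sketch of these boundary tests against java.lang.Math.abs; note that the *type* boundary Integer.MIN_VALUE adds a genuinely surprising case (abs overflows and stays negative), illustrating why boundaries deserve their own tests:

```java
import static org.junit.jupiter.api.Assertions.*;
import org.junit.jupiter.api.Test;

class AbsBoundaryTest {
    @Test void arbitraryNegative() { assertEquals(10, Math.abs(-10)); }
    @Test void justBelowBoundary() { assertEquals(1, Math.abs(-1)); }
    @Test void onBoundary()        { assertEquals(0, Math.abs(0)); }
    @Test void arbitraryPositive() { assertEquals(100, Math.abs(100)); }

    @Test void typeBoundaryOverflows() {
        // A bug lurking in a corner: -Integer.MIN_VALUE does not fit in an int,
        // so Math.abs(Integer.MIN_VALUE) returns a negative number.
        assertTrue(Math.abs(Integer.MIN_VALUE) < 0);
    }
}
```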

31 Boundary value analysis A self-assessment test 1 [Myers]
“A program reads three integer values. The three values are interpreted as representing the lengths of the sides of a triangle. The program prints a message that states whether the triangle is scalene (all lengths are different), isosceles (two lengths are equal), or equilateral (all lengths are equal).” Write a set of test cases to test this program. Inputs: l1, l2, l3, integers, with li > 0 and li < lj + lk (triangle inequality). Output: error, scalene, isosceles or equilateral.

32 Boundary value analysis A self-assessment test 1 [Myers]
Test cases for valid inputs:
- valid scalene triangle?
- valid equilateral triangle?
- valid isosceles triangle?
- 3 permutations of the previous?
Test cases for invalid inputs:
- side = 0?
- negative side?
- one side equal to the sum of the others?
- 3 permutations of the previous?
- one side larger than the sum of the others?
- all sides = 0?
- non-integer input?
- wrong number of values?
"Bugs lurk in corners and congregate at boundaries." (A sketch of such a test suite follows.)
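A possible JUnit 5 answer sheet; classify is a hypothetical implementation written here only so the tests are self-contained:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

class TriangleTest {
    // Hypothetical implementation under test.
    static String classify(int a, int b, int c) {
        if (a <= 0 || b <= 0 || c <= 0) return "error";
        if (a >= b + c || b >= a + c || c >= a + b) return "error"; // triangle inequality
        if (a == b && b == c) return "equilateral";
        if (a == b || b == c || a == c) return "isosceles";
        return "scalene";
    }

    @Test void scalene()      { assertEquals("scalene", classify(3, 4, 5)); }
    @Test void equilateral()  { assertEquals("equilateral", classify(7, 7, 7)); }
    @Test void isosceles()    { assertEquals("isosceles", classify(5, 5, 8)); }
    @Test void permutation()  { assertEquals("isosceles", classify(8, 5, 5)); }
    @Test void zeroSide()     { assertEquals("error", classify(0, 4, 5)); }
    @Test void negativeSide() { assertEquals("error", classify(-3, 4, 5)); }
    @Test void degenerate()   { assertEquals("error", classify(1, 2, 3)); } // side = sum of others
}
```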

33 Boundary value analysis Example 2
Given inputs maxint and N, compute result = Σ k (k = 0 .. |N|), if this sum <= maxint; error otherwise.
Valid equivalence classes and boundary values:

  Condition   Valid eq. classes              Boundary values
  abs(N)      N >= 0, N < 0                  N = (-2), -1, 0, 1
  maxint      Σ k <= maxint, Σ k > maxint    Σ k = maxint-1, maxint, maxint+1, (maxint+2)

34 Boundary value analysis Example 2
Test cases: combine the boundary values of N with the boundary values of Σ k relative to maxint.

  maxint   N    result
  ...      ...  ...

[Diagram: the (N, maxint) plane, with the curve maxint = Σ k (k = 0 .. |N|) separating the result region from the error regions; N = 0 marked on the axis.]
How to combine the boundary conditions of different inputs? Take all possible boundary combinations? This may blow up ......

35 Boundary value analysis Example 3: search routine specification
procedure Search (Key : ELEM; T : ELEM_ARRAY; Found : out BOOLEAN; L : out ELEM_INDEX);
-- Pre-condition: the array has at least one element
T'FIRST <= T'LAST
-- Post-condition: the element is found and is referenced by L
(Found and T(L) = Key)
-- or the element is not in the array
(not Found and not (exists i, T'FIRST <= i <= T'LAST, T(i) = Key))
(source: Ian Sommerville)

36 Boundary value analysis Example 3 - input partitions
P1 - Inputs which conform to the pre-condition (valid): array with 1 value (boundary); array with more than one value (different sizes from test case to test case)
P2 - Inputs where the pre-condition does not hold (invalid): array with zero length
P3 - Inputs where the key element is a member of the array: first, last and middle positions, in different test cases
P4 - Inputs where the key element is not a member of the array

37 Boundary value analysis Example 3 – test cases (valid cases only; see the sketch below)
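A JUnit 5 sketch of such test cases; the linear search below is a hypothetical stand-in for the specified Search procedure, returning -1 instead of a Found flag:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

class SearchTest {
    // Hypothetical stand-in for the specified Search procedure.
    static int search(int key, int[] t) {  // returns index, or -1 if not found
        for (int i = 0; i < t.length; i++) if (t[i] == key) return i;
        return -1;
    }

    // P1 boundary: single-element array, key present and absent.
    @Test void singleElementFound()    { assertEquals(0, search(17, new int[]{17})); }
    @Test void singleElementNotFound() { assertEquals(-1, search(0, new int[]{17})); }

    // P3 boundaries: key at first, middle and last positions.
    @Test void keyAtFirstPosition() { assertEquals(0, search(17, new int[]{17, 29, 21, 23})); }
    @Test void keyInMiddle()        { assertEquals(2, search(45, new int[]{41, 18, 45, 9, 52})); }
    @Test void keyAtLastPosition()  { assertEquals(4, search(25, new int[]{21, 23, 29, 33, 25})); }

    // P4: key not a member of the array.
    @Test void keyAbsent() { assertEquals(-1, search(99, new int[]{21, 23, 29, 33, 38})); }
}
```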

38 Cause-effect graphing
Black-box technique to analyze combinations of input conditions:
- Identify causes and effects in the specification: causes are conditions on inputs / the initial state; effects are conditions on outputs / the final state
- Make a Boolean graph linking causes and effects
- Annotate impossible combinations of causes and effects
- Develop a decision table from the graph, with a particular combination of inputs and outputs in each column
- Transform each column into a test case

39 Cause-effect graphing Example 2
Causes (input conditions): Σ k <= maxint; Σ k > maxint; N >= 0; N < 0. Effects (output conditions): Σ k; error.
Boolean graph: "Σ k <= maxint" and "Σ k > maxint" are linked by xor, as are "N >= 0" and "N < 0"; combinations of causes are connected to the effects "Σ k" and "error" through and/xor nodes.
Decision table ("truth table"): one column per feasible combination of causes, with the resulting effects; each column becomes a test case.
(Note: equivalence class partitioning alone would already suffice here.)

40 Cause-effect graphing
Systematic method for generating test cases representing combinations of conditions. Unlike equivalence class partitioning, a test case is defined for each possible combination of conditions. Drawback: combinatorial explosion of the number of possible combinations.

41 Independent effect testing
With multiple input parameters, show that each input parameter affects the outcome (the value of at least one output parameter) independently of the other input parameters. At most two test cases are needed for each parameter, with different outcomes, different values of that parameter, and equal values of all the other parameters. Avoids the combinatorial explosion of the number of test cases. (A sketch follows.)
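A minimal JUnit 5 sketch; discount is a hypothetical two-parameter function, and each pair of tests varies one parameter while holding the other fixed, expecting different outcomes:

```java
import static org.junit.jupiter.api.Assertions.*;
import org.junit.jupiter.api.Test;

class IndependentEffectTest {
    // Hypothetical function under test.
    static boolean discount(boolean member, int amount) {
        return member && amount >= 100;
    }

    // 'member' affects the outcome independently (amount fixed at 100):
    @Test void memberTrueGrants()  { assertTrue(discount(true, 100)); }
    @Test void memberFalseDenies() { assertFalse(discount(false, 100)); }

    // 'amount' affects the outcome independently (member fixed at true):
    @Test void highAmountGrants() { assertTrue(discount(true, 100)); }
    @Test void lowAmountDenies()  { assertFalse(discount(true, 99)); }
}
```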

42 Domain partitioning
Partition the domain of each input or output parameter into equivalence classes: Boolean type: true, false; enumeration type: value1, value2, … (if few values); integer type: <0, 0, >0; sequence type (array, string, list): empty, not empty, with/without repetitions, …; set type: empty, not empty; etc. A special case of equivalence class partitioning; in the general case, analyze combinations of parameters.

43 Error guessing Just ‘guess’ where the errors are ……
Based on the intuition and experience of the tester. Ad hoc, not really a technique, but can be quite effective. Strategy: make a list of possible errors or error-prone situations (often related to boundary conditions), then write test cases based on this list.

44 Risk based testing More sophisticated ‘error guessing’
Try to identify critical parts of the program (high-risk code sections): parts with unclear specifications; parts developed by a junior programmer while his wife was pregnant ……; complex code (white box?): measure code complexity - tools available (McCabe, Logiscope, …). High-risk code will be more thoroughly tested (or be rewritten immediately ……)

45 Testing for race conditions
Also called bad timing and concurrency problems: problems that occur in multitasking systems (with multiple threads or processes). A kind of boundary analysis related to the dynamic views of a system (state-transition view and process-communication view). Examples of situations that may expose race conditions:
- problems with shared resources: saving and loading the same document at the same time with different programs; sharing the same printer, communications port or other peripheral; using different programs (or instances of a program) to simultaneously access a common database
- problems with interruptions: pressing keys or sending mouse clicks while the software is loading or changing states
- other problems: shutting down or starting two or more instances of the software at the same time
Knowledge used: dynamic models (state-transition models, process models). (source: Ron Patton)

46 Random testing
Input values are (pseudo-)randomly generated.
(-) Needs some automatic way to check the outputs (for functional / correctness testing): by comparing the actual output with the output produced by a "trusted" implementation, or by checking the result with some kind of procedure or expression. Sometimes it is much easier to check a result than to compute it. Example: sorting - O(n log n) to perform; O(n) to check (final array is sorted and has the same elements as the initial array).
(+) Many test cases may be generated.
(+) Repeatable (*): pseudo-random generators produce the same sequence of values when started with the same initial value. (*) essential to check if a bug was corrected.
(-) May not cover special cases that are discovered by "manual" techniques; combine with "manual" generation of test cases for boundary values.
(+) Particularly adequate for performance testing (it is not necessary to check the correctness of outputs).
A sketch of a random test with an automated output check follows.
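A plain-Java sketch of random testing with the cheap output check described above; insertionSort is a stand-in unit under test, and the fixed seed makes the run repeatable:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Random;

public class RandomSortTest {

    // Stand-in unit under test.
    static void insertionSort(int[] a) {
        for (int i = 1; i < a.length; i++) {
            int v = a[i], j = i - 1;
            while (j >= 0 && a[j] > v) { a[j + 1] = a[j]; j--; }
            a[j + 1] = v;
        }
    }

    // Element counts, used to check "same elements as the input".
    static Map<Integer, Integer> counts(int[] a) {
        Map<Integer, Integer> m = new HashMap<>();
        for (int x : a) m.merge(x, 1, Integer::sum);
        return m;
    }

    public static void main(String[] args) {
        Random rnd = new Random(42);  // fixed seed => repeatable test run
        for (int run = 0; run < 1000; run++) {
            int[] input = rnd.ints(rnd.nextInt(100), -50, 50).toArray();
            int[] output = input.clone();
            insertionSort(output);
            // Check 1: output is sorted (O(n)).
            for (int i = 1; i < output.length; i++)
                if (output[i - 1] > output[i]) throw new AssertionError("not sorted");
            // Check 2: output has the same elements as the input.
            if (!counts(input).equals(counts(output)))
                throw new AssertionError("elements differ");
        }
        System.out.println("1000 random test cases passed");
    }
}
```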

47 Requirements based testing
You have a list (or tree) of requirements (or features or properties). Define at least one test case for each requirement. Build and maintain a traceability matrix from tests to requirements. Particularly adequate for system and acceptance testing, but applicable in other situations. Hints:
- Write test cases as specifications by examples
- Write distinctive test cases (examples); it is not enough that Requirement1 or Requirement2 => TestCase1Behaviour
- Test separately and in combination
- Write "positive" (examples in favor) and "negative" (examples against) test cases

48 Model based testing
Model = visual model; semi-formal specifications. Behavioral UML models: use case models; interaction models (sequence diagrams, collaboration diagrams); state machine (state-transition) models; activity models. The same coverage techniques as in white box testing apply to the model, which can be used to generate test cases in a more or less automated way.

49 State-transition testing
Construct a state-transition model (state machine view) of the item to be tested, from the perspective of a user / client, e.g., with a state diagram in UML. Define test cases to exercise all states and all transitions between states; usually not all possible paths (sequences of states and transitions), because of combinatorial explosion. Each test case describes a sequence of inputs and outputs (including input and output states), and may cover several states and transitions. Also test to fail, with unexpected inputs for a particular state. Particularly useful for:
- testing user interfaces: a state is a particular form/screen/page or a particular mode (inspect, insert, modify, delete; draw line mode, brush mode, etc.); a transition is a navigation between forms/screens/pages or a transition between modes
- testing object oriented software
- situations where a state-transition model already exists as part of the specification
A minimal sketch follows.
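A minimal JUnit 5 sketch on a hypothetical two-state machine (a turnstile), covering all states and all transitions, plus one test-to-fail with an unexpected event:

```java
import static org.junit.jupiter.api.Assertions.*;
import org.junit.jupiter.api.Test;

class Turnstile {
    enum State { LOCKED, UNLOCKED }
    private State state = State.LOCKED;
    State state() { return state; }

    void coin() { state = State.UNLOCKED; }  // coin always unlocks
    void push() {                            // push is only allowed when unlocked
        if (state == State.LOCKED) throw new IllegalStateException("locked");
        state = State.LOCKED;
    }
}

class TurnstileTest {
    @Test void coversAllStatesAndTransitions() {
        Turnstile t = new Turnstile();                                 // start: LOCKED
        t.coin(); assertEquals(Turnstile.State.UNLOCKED, t.state());  // LOCKED -> UNLOCKED
        t.coin(); assertEquals(Turnstile.State.UNLOCKED, t.state());  // UNLOCKED -> UNLOCKED
        t.push(); assertEquals(Turnstile.State.LOCKED, t.state());    // UNLOCKED -> LOCKED
    }

    @Test void testToFailUnexpectedInput() {
        // Unexpected input for the LOCKED state.
        assertThrows(IllegalStateException.class, () -> new Turnstile().push());
    }
}
```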

50 Use case and scenario testing
Particularly adequate for system and integration testing. Use cases capture functional requirements. Each use case is described by one or more normal flows of events and zero or more exceptional flows of events (also called scenarios). Define at least one test case for each scenario. Build and maintain a traceability matrix from tests to use cases. Example: library (MFES).

51 Formal specification based testing
Formal specification = formal model.
- Non-executable formal specifications: constraint language; operations pre/post-conditions (restrictions/effects); can be expressed in OCL (Object Constraint Language); post-conditions can be used to check outcomes (test oracle)
- Executable formal specifications: action language; executable test oracle for conformance testing
Example: "Colocação de professores"

52 Black box testing: which one?
Black box testing techniques: equivalence class partitioning, boundary value analysis, cause-effect graphing, error guessing, ... Which one to use? None of them is complete; all are based on some kind of heuristics; they are complementary.

53 Black box testing: which one?
Always use a combination of techniques:
- When a formal specification is available, try to use it
- Identify valid and invalid input equivalence classes
- Identify output equivalence classes
- Apply boundary value analysis on the valid equivalence classes
- Guess about possible errors
- Use cause-effect graphing for linking inputs and outputs

54 References and further reading
- Practical Software Testing, Ilene Burnstein, Springer-Verlag, 2003
- Software Testing, Ron Patton, SAMS, 2001
- The Art of Software Testing, Glenford J. Myers, John Wiley & Sons (Chapter 4 - Test Case Design)
- Software Testing Techniques, Boris Beizer, Van Nostrand Reinhold, 2nd Ed., 1990 (the classic "bible")
- Testing Computer Software, 2nd Edition, Cem Kaner, Jack Falk, Hung Nguyen, John Wiley & Sons, 1999 (practical, black box only)
- Software Engineering, Ian Sommerville, 6th Edition, Addison-Wesley, 2000
- Guide to the Software Engineering Body of Knowledge (SWEBOK), IEEE Computer Society

