ECE 355: Software Engineering


ECE 355: Software Engineering CHAPTER 11 Part II

Course outline
- Unit 1: Software Engineering Basics
- Unit 2: Process Models and Software Life Cycles
- Unit 3: Software Requirements
- Unit 4: Unified Modeling Language (UML)
- Unit 5: Design Basics and Software Architecture
- Unit 6: OO Analysis and Design
- Unit 7: Design Patterns
- Unit 8: Testing and Reliability
- Unit 9: Software Engineering Management and Economics

Course Outline
- Introduction to software engineering
- Requirements Engineering
- Design Basics
- Traditional Design
- OO Design
- Design Patterns
- Software Architecture
- Design Documentation
- Verification & Validation
- Software Process Management and Economics

These slides are based on:
- Lecture slides by Ian Sommerville, see http://www.comp.lancs.ac.uk/computing/resources/ser/
- ECE 355 lecture slides by Sagar Naik
- Lecture notes from Bernd Bruegge, Allen H. Dutoit, "Object-Oriented Software Engineering - Using UML, Patterns and Java"

Overview
- Basics of Testing
- Testing & Debugging Activities
- Testing Strategies: Black-Box Testing, White-Box Testing
- Testing in the Development Process: Unit Test, Integration Test, System Test, Acceptance Test, Regression Test
- Practical Considerations

Static and dynamic V&V
[Diagram] Static verification examines the requirements specification, architecture, detailed design, and implementation; dynamic V&V (testing) exercises a prototype or the implementation. Special cases: executable specifications, animation of formal specs.

Program testing
- Can reveal the presence of errors, NOT their absence
- Only exhaustive testing can show a program is free from defects; however, exhaustive testing is impossible for all but trivial programs
- A successful test is a test which discovers one or more errors
- Should be used in conjunction with static verification
- Run all tests after modifying a system
©Ian Sommerville 1995

Testing in the V-Model
[Diagram] Each development phase pairs with a test phase: requirements with acceptance test (customer side), architectural design with system test, detailed design with integration test, and module implementation with unit test (functional/black-box and structural/white-box). Tests are written as each design artifact is produced and run after implementation.

Testing stages
- Unit testing: testing of individual components
- Integration testing: testing to expose problems arising from the combination of components
- System testing: testing the complete system prior to delivery
- Acceptance testing: testing by users to check that the system satisfies requirements; sometimes called alpha testing

Types of testing
- Statistical testing: tests designed to reflect the frequency of user inputs; used for reliability estimation (covered later in the section on software reliability)
- Defect testing: tests designed to discover system defects; a successful defect test is one which reveals the presence of defects in a system
©Ian Sommerville 1995

Some Terminology
- Failure: occurs whenever the external behavior does not conform to the system spec
- Error: a state of the system which, in the absence of any corrective action, could lead to a failure
- Fault: an adjudged cause of an error

Some Terminology
[Diagram] A fault (bug, defect) is there in the program; it can put the program state in error; an error, once observed externally, is a failure.

Testing Activities
[Diagram] Each subsystem code unit is unit-tested against the requirements analysis document; the tested subsystems are combined in integration testing; the integrated subsystems then undergo functional testing against the requirements analysis document, system design document, and user manual, yielding a functioning system. All of these tests are performed by the developer.

Testing Activities continued
[Diagram] The functioning system then passes through performance test (against global requirements, by the developer), acceptance test (against the client's understanding of the requirements, by the client), and installation test (in the user environment), producing a validated, accepted, usable system in use, tested (?) by the user.


Testing and debugging
- Defect testing and debugging are distinct processes
- Defect testing is concerned with confirming the presence of errors
- Debugging is concerned with locating and repairing these errors
- Debugging involves formulating a hypothesis about program behaviour, then testing this hypothesis to find the system error
©Ian Sommerville 1995

Debugging Activities
Locate error & fault → Design fault repair → Repair fault → Re-test program
©Ian Sommerville 1995

Testing Activities
- Identify: test conditions ("what"): an item or event to be verified
- Design: how the "what" can be tested (realization)
- Build: test cases (implementation scripts, data)
- Execute: run the system
- Compare: test case outcome with expected outcome → test result

Testing Activities
Test condition ("what"): a description of a circumstance that could be examined (an event or item)
- Categories: functionality, performance, stress, robustness, …
- Derived using testing techniques (to be discussed); refer to the V-Model

Testing Activities
Design test cases: the details
- Input values
- Expected outcomes: things created (output), things changed/updated (database?), things deleted, timing, …
  - Known, or unknown (examine the first actual outcome)
- Environment prerequisites: files, network connection, …

Testing Activities
Build test cases (implement)
- Implement the preconditions (set up the environment)
- Prepare test scripts (may use test automation tools)
- Structure of a test case: simple linear (I, EO), or a tree {(I1, EO1), (I2, EO2), …}

Testing Activities
Scripts contain data and instructions for testing
- Comparison information: what screen data to capture
- When/where to read input
- Control information: repeat a set of inputs; make a decision based on output
- Testing concurrent activities; synchronization: when to enter input

Testing Activities
Compare (test outcomes, expected outcomes)
- Simple/complex (known differences)
- Different types of outcomes: variable values (in memory); disk-based (textual, non-textual, database, binary); screen-based (characters, GUI, images); others (multimedia, communicating applications)

Testing Activities
Compare: actual output == expected output?
- Yes → Pass (assumption: the test case was "instrumented")
- No → Fail (assumption: no error in the test case or its preconditions)


Goodness of test cases
Execution of a test case against a program P:
- covers certain requirements of P;
- covers certain parts of P's functionality;
- covers certain parts of P's internal logic.
The idea of coverage guides test case selection.


Black-box Testing
- Focus: I/O behavior. If, for any given input, we can predict the output, then the module passes the test.
- It is almost always impossible to generate all possible inputs ("test cases")
- Goal: reduce the number of test cases by equivalence partitioning:
  - Divide input conditions into equivalence classes
  - Choose test cases for each equivalence class (example: if an object is supposed to accept a negative number, testing one negative number is enough)

Black-box Testing (Continued)
Selection of equivalence classes (no rules, only guidelines):
- Input is valid across a range of values: select test cases from 3 equivalence classes: below the range, within the range, above the range
- Input is valid if it is from a discrete set: select test cases from 2 equivalence classes: valid discrete value, invalid discrete value
Another way to select only a limited number of test cases: get knowledge about the inner workings of the unit being tested → white-box testing

Equivalence partitioning ©Ian Sommerville 1995

Equivalence partitioning
- Partition system inputs and outputs into "equivalence sets"
- If input is a 5-digit integer between 10000 and 99999, the equivalence partitions are <10000, 10000-99999, and >99999
- If you can predict that a certain set of inputs will be treated differently in processing than another, put them into separate partitions, e.g., Canadian and US addresses (state/province will be validated) vs. addresses in other countries (no state validation)
- Two test case selection strategies:
  - choose one random test case from each partition
  - choose test cases at the boundaries of the partitions: 09999, 10000, 99999, 100000
©Ian Sommerville 1995
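The 5-digit example above can be sketched in C++; `partition_of` is a hypothetical helper, used only to make the three partitions and their boundary values concrete.

```cpp
#include <string>

// Hypothetical classifier for the 5-digit integer example:
// the three equivalence partitions are <10000, 10000..99999, >99999.
std::string partition_of(long n) {
    if (n < 10000)  return "below";   // invalid: too small
    if (n > 99999)  return "above";   // invalid: too large
    return "within";                  // valid 5-digit input
}
// Boundary-value strategy: test 09999, 10000, 99999, 100000.
```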

Equivalence partitions
[Figure] Number line showing the three partitions: less than 10000 (e.g., 09999), between 10000 and 99999 (e.g., 10000, 50000, 99999), more than 99999 (e.g., 100000). ©Ian Sommerville 1995

Search routine specification
procedure Search (Key : ELEM ; T : ELEM_ARRAY ;
                  Found : in out BOOLEAN ; L : in out ELEM_INDEX) ;
Pre-condition
    -- the array has at least one element
    T'FIRST <= T'LAST
Post-condition
    -- the element is found and is referenced by L
    ( Found and T (L) = Key )
    or
    -- the element is not in the array
    ( not Found and
      not (exists i, T'FIRST <= i <= T'LAST, T (i) = Key ))
©Ian Sommerville 1995

Strategy for input partitions (I) (General)
- Inputs which conform to the pre-conditions
- Inputs where a pre-condition does not hold
- Inputs derived from the post-condition:
  - inputs where the key element is a member of the array
  - inputs where the key element is not a member of the array
©Ian Sommerville 1995 [modified]

Strategy for input partitions (II) (Arrays)
- Test software with arrays which have only a single value
- Test with arrays of zero length (if allowed by the programming language)
- Use arrays of different sizes in different tests
- Derive tests so that the first, middle, and last elements of the array are accessed
©Ian Sommerville 1995

Search routine - input partitions
[Table: input partitions for the search routine, e.g., array with a single value, key element in the array / not in the array.] ©Ian Sommerville 1995 [modified]

Search routine - test cases
[Table: concrete test cases for each input partition, with the expected Found and L outputs.] ©Ian Sommerville 1995 [modified]


White-box Testing
- Statement testing (algebraic testing): test single statements (choice of operators in polynomials, etc.)
- Loop testing:
  - cause execution of the loop to be skipped completely (exception: repeat loops)
  - loop to be executed exactly once
  - loop to be executed more than once
- Path testing: make sure all paths in the program are executed
- Branch testing (conditional testing): make sure that each possible outcome of a condition is tested at least once, e.g.:
    if (i == TRUE) printf("YES\n"); else printf("NO\n");
  Test cases: 1) i = TRUE; 2) i = FALSE
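The branch-testing example above can be made directly checkable; returning the string instead of printing it lets each branch outcome be asserted.

```cpp
#include <string>

// Branch (conditional) testing: each outcome of the condition is
// exercised by at least one test case.
std::string answer(bool i) {
    if (i) return "YES";   // test case 1: i = true
    else   return "NO";    // test case 2: i = false
}
```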

Binary search (Ada) ©Ian Sommerville 1995

Binary search (C++)
void Binary_search (elem key, elem* T, int size, boolean &found, int &L)
{
    int bott, top, mid;
    bott = 0;
    top = size - 1;
    L = (top + bott) / 2;
    if (T[L] == key)
        found = true;
    else
        found = false;
    while (bott <= top && !found) {
        mid = (top + bott) / 2;   /* parentheses needed: / binds tighter than + */
        if (T[mid] == key) {
            found = true;
            L = mid;
        }
        else if (T[mid] < key)
            bott = mid + 1;
        else
            top = mid - 1;
    } // while
} // binary_search
©Ian Sommerville 1995

Binary search - equiv. partitions
- Pre-conditions satisfied, key element in array
- Pre-conditions satisfied, key element not in array
- Pre-conditions unsatisfied, key element in array
- Pre-conditions unsatisfied, key element not in array
- Input array has a single value
- Input array has an even number of values
- Input array has an odd number of values
©Ian Sommerville 1995
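A self-contained re-implementation of the search (the name `bsearch_idx` and the vector-based signature are illustrative, not Sommerville's) makes the partitions above directly testable:

```cpp
#include <vector>

// Iterative binary search over a sorted array; reports whether key
// was found and, if so, returns its index through L.
bool bsearch_idx(const std::vector<int>& T, int key, int& L) {
    int bott = 0, top = static_cast<int>(T.size()) - 1;
    while (bott <= top) {
        int mid = (bott + top) / 2;
        if (T[mid] == key) { L = mid; return true; }
        if (T[mid] < key) bott = mid + 1;
        else              top  = mid - 1;
    }
    return false;
}
```

Test cases can then be drawn one per partition: a single-value array, arrays of even and odd length, and keys that are or are not in the array.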

Binary search equiv. partitions ©Ian Sommerville 1995

Binary search - test cases ©Ian Sommerville 1995

Code Coverage
Statement coverage
- Elementary statements: assignment, I/O, call
- Select a test set T such that, by executing P for each case in T, each statement of P is executed at least once.
Example:
    read(x); read(y);
    if x > 0 then write("1"); else write("2");
    if y > 0 then write("3"); else write("4");
    T: { <x = -13, y = 51>, <x = 2, y = -3> }
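A C++ rendering of the fragment (returning the digits rather than printing them, so coverage is checkable): with T = {<x=-13, y=51>, <x=2, y=-3>} every statement runs at least once.

```cpp
#include <string>

// Each branch appends the digit the slide's write() would print.
std::string run(int x, int y) {
    std::string out;
    if (x > 0) out += "1"; else out += "2";
    if (y > 0) out += "3"; else out += "4";
    return out;
}
// Between them, "23" and "14" cover all four write statements.
```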

White-box Testing: Determining the Paths
FindMean (FILE ScoreFile)
{
    float SumOfScores = 0.0;
    int NumberOfScores = 0;
    float Mean = 0.0;
    float Score;
    Read(ScoreFile, Score);
    while (!EOF(ScoreFile)) {
        if (Score > 0.0) {
            SumOfScores = SumOfScores + Score;
            NumberOfScores++;
        }
        Read(ScoreFile, Score);   /* read the next score */
    }
    /* Compute the mean and print the result */
    if (NumberOfScores > 0) {
        Mean = SumOfScores / NumberOfScores;
        printf("The mean score is %f\n", Mean);
    } else
        printf("No scores found in file\n");
}
(The digits 1-9 on the original slide label the nodes of the flow graph built on the next slide.)

Constructing the Logic Flow Diagram

Code Coverage
- Construct a control-flow graph of the program (module): nodes for elementary statements (assignment, I/O, call) and for decisions (if c then S1 else S2; if c then S1; while c do S1)
Edge coverage
- Select a test set T such that, by executing P for each member of T, each edge of P's control-flow graph is traversed at least once.

Code Coverage
Condition coverage
- Edge coverage, plus: all possible values of the constituents of compound conditions are exercised at least once (constituents: atomic formulas over relational/Boolean variables)
- Example: while (!found && counter <= NumOfItems)

Code Coverage
Path coverage
- Focuses on executing distinct paths rather than just edges or statements
- Paths through a program can be feasible or infeasible (e.g., due to contradicting conditions)
- Ideally, one would like to cover all feasible paths, but the number of feasible paths usually explodes very quickly with the size of the program
- Therefore, there are various path coverage strategies targeting specific subsets of the feasible paths

Code Coverage
Path coverage
- Select a test set T such that, by executing P for each member of T, all paths leading from the initial node to the final node of P's control-flow graph are traversed.
[Diagram] Control-flow graph building blocks: sequence (S1; S2), if c then S, if c then S1 else S2, while c do S, do S until c. Note: use a separate node for each member of a compound condition. How many paths?

Code Coverage
Path testing based on cyclomatic complexity (McCabe's)
- Cyclomatic complexity:
    V(G) = E - N + 2 (E: number of edges, N: number of nodes)
    V(G) = p + 1 (p: number of predicate nodes)
    V(G) = number of regions (areas surrounded by nodes/edges, including the outer region)
- V(G) is an upper bound on the number of independent paths (an independent path contains at least one new node/edge)
- Example: V(G) = E - N + 2 = 17 - 13 + 2 = 6; V(G) = p + 1 = 5 + 1 = 6; V(G) = number of regions = 6
- Advantage: the number of test cases is proportional to the size of the program
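Two of the three formulas above can be sketched directly, confirming they agree on the slide's example graph (17 edges, 13 nodes, 5 predicate nodes):

```cpp
// Cyclomatic complexity, computed two of the slide's three ways.
int vg_from_graph(int edges, int nodes) { return edges - nodes + 2; }
int vg_from_predicates(int predicates)  { return predicates + 1; }
```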

Code Coverage
Path coverage: example
    I = 1; TI = TV = 0; sum = 0;
    DO WHILE (value[I] <> -999 AND TI < 100)
        TI++;
        IF (value[I] >= min AND value[I] <= max) THEN
            TV++;
            sum = sum + value[I];
        ENDIF
        I++;
    ENDDO
    IF TV > 0 THEN
        av = sum / TV;
    ELSE
        av = -999;
    ENDIF
[Figure] The accompanying control-flow graph has nodes 1-13 (node 13 is the final node) and regions R1-R6 plus the outer region.

V(G) as An Upper Bound
[Figure] A small flow graph illustrating V(G) = 3 as an upper bound on the number of independent paths.

V(G)-Preserving Transformation
[Figure] A SWITCH statement (V(G) = 3) and its equivalent nested IFs (V(G) = 3): the transformation preserves cyclomatic complexity.

Code Coverage
- More complete path testing strategies target loops, e.g.:
  - 0 and 1 passes through every loop are tested
  - multiple passes exercising different paths through the body of a loop
  - n passes (max = n)
- Trying to be more complete on loop testing quickly becomes infeasible
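The 0-pass / 1-pass / n-pass loop cases can be illustrated on any loop-bearing unit; `sum` here is just a stand-in example:

```cpp
#include <vector>

// A unit whose single loop is the target of loop testing.
int sum(const std::vector<int>& v) {
    int s = 0;
    for (int x : v) s += x;   // exercised 0, 1, or n times below
    return s;
}
// Loop test cases: empty input (loop skipped), one element
// (exactly one pass), several elements (more than one pass).
```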

Code Coverage
Data flow testing (D = Definition, U = Use)
- Assign a unique number to each statement
- DEF(S): set of all variables defined in statement number S
- USE(S): set of all variables used in statement number S
- Live variable: a definition of variable X at S is live at S' if there is a path from S to S' that contains no other definition of X
- DU chain [X, S, S']: X is in DEF(S) and in USE(S'), and the definition of X at S is live at S'
- DU coverage: every DU chain is covered at least once.
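For straight-line code, the DU chains can be enumerated directly from the DEF/USE sets. This sketch (types and names hypothetical) records a chain [X, S, S'] for each later use of X until another definition of X kills it:

```cpp
#include <set>
#include <string>
#include <tuple>
#include <vector>

// One statement's DEF and USE sets, as defined on the slide.
struct Stmt { std::set<std::string> def, use; };

// All DU chains [X, S, S'] of a straight-line program: the def of X
// at S reaches each later use until another def of X intervenes.
std::vector<std::tuple<std::string, int, int>>
du_chains(const std::vector<Stmt>& prog) {
    std::vector<std::tuple<std::string, int, int>> chains;
    for (int s = 0; s < static_cast<int>(prog.size()); ++s)
        for (const auto& x : prog[s].def)
            for (int t = s + 1; t < static_cast<int>(prog.size()); ++t) {
                if (prog[t].use.count(x)) chains.emplace_back(x, s, t);
                if (prog[t].def.count(x)) break;  // def at t kills the chain
            }
    return chains;
}
```

For the program x=1; y=x+1; x=2; z=x+y this yields the chains [x,0,1], [y,1,3], and [x,2,3].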


Unit Testing
- Objective: find differences between specified units and their implementations
- Unit: a component (module, function, class, object, …)
- Unit test environment: a driver feeds test cases to the unit under test and collects test results; stubs (dummy modules) stand in for the modules the unit calls
- Effectiveness: guided by partitioning and code coverage
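A minimal sketch of the driver/stub arrangement (all names hypothetical): the stub returns a canned answer, so the unit can be tested before the real module it depends on exists.

```cpp
// Stub: stands in for a not-yet-integrated pricing module.
int stub_lookup_price(int /*item_id*/) { return 10; }   // canned value

// Unit under test: depends on the pricing module via a function
// pointer, so a driver can inject either the stub or the real thing.
int total_cost(int item_id, int qty, int (*lookup_price)(int)) {
    return qty * lookup_price(item_id);
}
```

The driver is then just the code that calls `total_cost` with the chosen test cases and compares the results against expected outcomes.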


Integration Testing
Objectives:
- To expose problems arising from the combination of components
- To quickly obtain a working solution from the components
Problem areas:
- Internal: between components
  - Invocation: call / message passing / …
  - Parameters: type, number, order, value
  - Invocation return: identity (who?), type, sequence
- External: interrupts (wrong handler?), I/O timing
- Interaction

Integration Testing
Types of integration:
- Structural:
  - "Big bang": no error localization
  - Bottom-up: start with terminal modules, each exercised by a driver; drivers are then replaced by the real calling modules
  - Top-down: start with the top module, with stubs for lower modules; stubs are then replaced by the real modules; allows an early demo
- Behavioral (next slide)

Integration Testing (Behavioral: Path-Based)
- MM-path: interleaved sequence of module execution paths and messages
- Module execution path: an entry-exit path in the same module
- Atomic System Function (ASF): port input, …, {MM-paths}, …, port output
- Test cases: exercise ASFs


System Testing
- Concerned with the application's externals
- Much more than functional testing: load/stress testing, usability testing, performance testing, resource testing

System Testing
Functional testing
- Objective: assess whether the application does what it is supposed to do
- Basis: behavioral/functional specification
- Test case: a sequence of ASFs (a thread)
(Refer to pages 22-24 of the ECE 355 PBX Project Description.)

System Testing
Functional testing: coverage
- Event-based coverage:
  - PI1: each port input event occurs
  - PI2: common sequences of port input events occur
  - PI3: each port input event occurs in every relevant data context
  - PI4: for a given context, all possible input events
  - PO1: each port output event occurs
  - PO2: each port output event occurs for each cause
- Data-based coverage:
  - DM1: exercise the cardinality of every relationship
  - DM2: exercise (functional) dependencies among relationships

System Testing
Stress testing: push the system to its limit, and beyond
- Volume: application (system), users
- Response rate
- Resources: physical + logical

System Testing
Performance testing
- Performance as seen by users: delay, throughput
- Performance as seen by the system owner: memory, CPU, communication
- Explicitly specified, or expected to do well
- Unspecified: find the limit
Usability testing
- The human element in system operation: GUI, messages, reports, …

Test Stopping Criteria
- Meeting the deadline, exhausting the budget, … (management decision)
- Achieving the desired coverage
- Achieving the desired level of failure intensity


Acceptance Testing
- Purpose: ensure that end users are satisfied
- Basis: user expectations (documented or not)
- Environment: real
- Performed for and by end users (commissioned projects)
- Test cases: may be reused from system test, or designed by end users


Regression Testing
- Whenever a system is modified (fixing a bug, adding functionality, etc.), the entire test suite needs to be rerun
- Make sure that features that already worked are not affected by the change
- Automatic re-testing before checking changes into a code repository
- Incremental testing strategies for big systems
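The rerun-and-compare core of a regression suite can be sketched in a few lines (the shape is hypothetical): every stored (input, expected) pair is replayed against the unit, and failures are counted.

```cpp
#include <utility>
#include <vector>

// Replay a stored suite of (input, expected) pairs; a nonzero result
// means the latest change broke something that used to pass.
int failed_cases(int (*unit)(int),
                 const std::vector<std::pair<int, int>>& suite) {
    int failures = 0;
    for (const auto& [input, expected] : suite)
        if (unit(input) != expected) ++failures;
    return failures;
}

int square(int x) { return x * x; }   // stand-in unit under test
```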


How to Design Practical Test Cases (Experience of Hitachi Software Engineering)
[Figure] Bug detection rate (0-100%) per development phase: desk check, unit test, integration test, system test; the field failure rate is 0.02%.
- 10 engineers x 12 months → 100,000 LOC, ~1000 bugs → fewer than 1 bug appears at the customer's site

Testing effectiveness
- In an experiment, black-box testing was found to be more effective than structural testing in discovering defects
- Static code reviewing was less expensive and more effective in discovering program faults

Practical test cases
- Document test cases: a new viewpoint on the functional spec
- Validation of test cases: well-balanced mix of normal, abnormal, and boundary cases; correctness of the test cases themselves
- Estimating quality: if 100 out of 1000 test cases reveal 4 bugs, extrapolate to roughly 40 bugs in total

Practical test cases
Schedule (rough idea): 10 engineers, 12 months, 100,000 LOC in C
- Apportioning the 12 months: SRS: 2; design: 3; coding: 2; debugging (unit + integration + system tests): 3; QA testing: 2 (shippable?; redesign tests, no reuse)
- How many test cases (30-year empirical study): 1 test case per 10-15 LOC; 100,000 LOC → 1000 test cases
- 2 weeks to design 1000 test cases; 2 months to execute 1000 test cases (25/day)

Comparison of White & Black-box Testing (25.1.2002)
White-box testing:
- A potentially infinite number of paths has to be tested
- Often tests what is done, instead of what should be done
- Cannot detect missing use cases
Black-box testing:
- Potential combinatorial explosion of test cases (valid & invalid data)
- Often not clear whether the selected test cases uncover a particular error
- Does not discover extraneous use cases ("features")
Both types of testing are needed. White-box and black-box testing are the extreme ends of a testing continuum. Any choice of test cases lies in between and depends on: the number of possible logical paths, the nature of the input data, the amount of computation, and the complexity of algorithms and data structures.

The 4 Testing Steps
1. Select what has to be measured: analysis (completeness of requirements), design (tested for cohesion), implementation (code tests)
2. Decide how the testing is done: code inspection, proofs (design by contract), black-box / white-box testing, integration testing strategy (big bang, bottom-up, top-down, sandwich)
3. Develop test cases: a test case is a set of test data or situations that will be used to exercise the unit (code, module, system) being tested or the attribute being measured
4. Create the test oracle: an oracle consists of the predicted results for a set of test cases; the test oracle has to be written down before the actual testing takes place

Guidance for Test Case Selection
- Use analysis knowledge about functional requirements (black-box testing): use cases, expected input data, invalid input data
- Use design knowledge about system structure, algorithms, data structures (white-box testing): control structures (test branches, loops, …), data structures (test record fields, arrays, …)
- Use implementation knowledge about algorithms, e.g.: force division by zero; use a sequence of test cases for an interrupt handler

Unit-testing Heuristics
1. Create unit tests as soon as the object design is completed: black-box test (test the use cases & functional model), white-box test (test the dynamic model), data-structure test (test the object model)
2. Develop the test cases: the goal is to find the minimal number of test cases that cover as many paths as possible
3. Cross-check the test cases to eliminate duplicates: don't waste your time!
4. Desk-check your source code: reduces testing time
5. Create a test harness: test drivers and test stubs are needed for integration testing
6. Describe the test oracle: often the result of the first successfully executed test
7. Execute the test cases: don't forget regression testing; re-execute test cases every time a change is made
8. Compare the results of the test with the test oracle: automate as much as possible

Practical test cases
Designing test cases (contd.)
- Distribution of test cases: normal: 60%; boundary: 10%; error: 15%; environmental (platform + performance): 15%
- Finishing touch: a 48-hour continuous operation test (basic functions) catches memory leaks, deadlocks, and connection time-outs
