ECE 453 – CS 447 – SE 465 Software Testing & Quality Assurance Instructor Kostas Kontogiannis
These slides are based on:
Lecture slides by Ian Sommerville (see ECE 355); lecture slides by Sagar Naik; and lecture notes by Bernd Bruegge and Allen H. Dutoit, "Object-Oriented Software Engineering: Using UML, Patterns, and Java".
Overview
Basics of Testing
Testing & Debugging Activities
Testing Strategies: Black-Box Testing, White-Box Testing
Testing in the Development Process: Unit Test, Integration Test, System Test, Acceptance Test, Regression Test
Practical Considerations
Static and dynamic V&V
Static verification applies to non-executable artifacts: the requirements specification, architecture, detailed design, and each implementation version (V1, V2). Dynamic V&V applies to executable artifacts: prototypes and implementations. Special cases of dynamic V&V include executable specifications and the animation of formal specifications.
Program testing
Testing can reveal the presence of errors, NOT their absence. Only exhaustive testing can show a program is free from defects; however, exhaustive testing is impossible for all but trivial programs. A successful test is one which discovers one or more errors. Testing should be used in conjunction with static verification. Run all tests after modifying a system. ©Ian Sommerville 1995
Testing in the V-Model
Each development level on the left of the V has a corresponding test level on the right: requirements (customer side) pair with the acceptance test; architectural design with the system test; detailed design with the integration test; and module implementation with the unit test, both functional (black-box) and structural (white-box). Tests are written on the way down the V and run on the way up.
Testing stages
Unit testing: testing of individual components.
Integration testing: testing to expose problems arising from the combination of components.
System testing: testing the complete system prior to delivery.
Acceptance testing: testing by users to check that the system satisfies their requirements; sometimes called alpha testing.
Types of testing
Statistical testing: tests designed to reflect the frequency of user inputs; used for reliability estimation (covered in the section on software reliability).
Defect testing: tests designed to discover system defects; a successful defect test is one which reveals the presence of defects in a system.
Some Terminology
Failure: a failure occurs whenever the external behavior does not conform to the system specification.
Error: an error is a state of the system which, in the absence of any corrective action, could lead to a failure.
Fault: an adjudged cause of an error.
Some Terminology (continued)
A fault (also called a bug or defect) is present in the program text; when executed, it can corrupt the program state (an error), which may surface as an observed failure.
Testing Activities
[Diagram] Each subsystem's code is unit-tested against the requirements analysis document, yielding tested subsystems. Tested subsystems are combined and integration-tested against the system design document, yielding integrated subsystems. The integrated subsystems are then functional-tested against the requirements analysis document and the user manual, yielding a functioning system. All of these tests are performed by the developer.
Testing Activities continued
[Diagram] The functioning system undergoes a performance test against the global requirements (by the developer), yielding a validated system; an acceptance test against the client's understanding of the requirements (by the client), yielding an accepted system; and an installation test in the user environment, yielding a usable system. The system in use is then exercised against the user's understanding (tests, if any, by the user).
Testing and debugging
Defect testing and debugging are distinct processes. Defect testing is concerned with confirming the presence of errors; debugging is concerned with locating and repairing them. Debugging involves formulating a hypothesis about program behaviour and then testing that hypothesis to find the system error.
Debugging Activities
1. Locate the error and fault
2. Design the fault repair
3. Repair the fault
4. Re-test the program
Testing Activities
Identify: test conditions (the "what"): an item or event to be verified.
Design: how the "what" can be tested (its realization).
Build: build the test cases (implementation scripts, data).
Execute: run the system.
Compare: compare each test case outcome with the expected outcome, producing the test result.
Goodness of test cases
Execution of a test case against a program P:
covers certain requirements of P;
covers certain parts of P's functionality;
covers certain parts of P's internal logic.
The idea of coverage guides test case selection.
Black-box Testing
Focus: I/O behavior. If, for every given input, the module produces the predicted output, the module passes the test. It is almost always impossible to generate all possible inputs ("test cases").
Goal: reduce the number of test cases by equivalence partitioning:
Divide the input conditions into equivalence classes.
Choose test cases for each equivalence class. (Example: if a module is supposed to accept negative numbers, testing one negative number is enough.)
White-box Testing
Statement testing (algebraic testing): test single statements (choice of operators in polynomials, etc.).
Loop testing: cause execution of the loop to be skipped completely (exception: repeat loops); cause the loop to be executed exactly once; cause the loop to be executed more than once.
Path testing: make sure all paths in the program are executed.
Branch testing (conditional testing): make sure that each possible outcome of a condition is tested at least once, e.g.:

    if (i == TRUE) printf("YES\n");
    else printf("NO\n");

Test cases: 1) i = TRUE; 2) i = FALSE
Code Coverage
Statement coverage: elementary statements are assignments, I/O, and calls. Select a test set T such that, by executing P on all cases in T, each statement of P is executed at least once.

    read(x);
    read(y);
    if x > 0 then write("1");
    else write("2");
    if y > 0 then write("3");
    else write("4");

T: {<x = -13, y = 51>, <x = 2, y = -3>}
White-box Testing: Determining the Paths

    FindMean (FILE ScoreFile)
    {
        float SumOfScores = 0.0;
        int NumberOfScores = 0;
        float Mean = 0.0;
        float Score;
        Read(ScoreFile, Score);
        while (!EOF(ScoreFile)) {
            if (Score > 0.0) {
                SumOfScores = SumOfScores + Score;
                NumberOfScores++;
            }
            Read(ScoreFile, Score);
        }
        /* Compute the mean and print the result */
        if (NumberOfScores > 0) {
            Mean = SumOfScores / NumberOfScores;
            printf("The mean score is %f\n", Mean);
        } else
            printf("No scores found in file\n");
    }

(The statement labels 1-9 on the original slide mark the nodes of the flow graph constructed on the next slide.)
Constructing the Logic Flow Diagram
Unit Testing
Objective: find differences between specified units and their implementations.
Unit: a component (module, function, class, object, ...).
Unit test environment: a driver feeds test cases to the unit under test and collects the test results; stubs (dummy modules) stand in for the units it depends on.
Effectiveness: judged by partitioning and code coverage.
Integration Testing
Objectives: to expose problems arising from the combination of components, and to quickly obtain a working solution from those components.
Problem areas:
Internal (between components): invocation (call / message passing / ...); parameters (type, number, order, value); invocation return (identity: who?, type, sequence).
External: interrupts (wrong handler?), I/O timing.
Interaction.
Integration Testing: Types of Integration
Structural:
"Big bang": no error localization.
Bottom-up: start from the terminal modules; each module under test needs a driver.
Top-down: start from the top; called modules are replaced by stubs; allows an early demo.
Behavioral: (next slide).
Integration Testing (Behavioral: Path-Based)
MM-path: an interleaved sequence of module execution paths and messages.
Module execution path: an entry-exit path within the same module.
Atomic System Function (ASF): port input, ..., {MM-paths}, ..., port output.
Test cases: exercise ASFs.
System Testing
Concerned with the application's externals; much more than functional:
Load/stress testing
Usability testing
Performance testing
Resource testing
System Testing: Functional Testing
Objective: assess whether the application does what it is supposed to do.
Basis: the behavioral/functional specification.
Test case: a sequence of ASFs (a thread).
System Testing: Functional Test Coverage
Event-based coverage:
PI1: each port input event occurs.
PI2: common sequences of port input events occur.
PI3: each port input event occurs in every relevant data context.
PI4: for a given context, all possible input events occur.
PO1: each port output event occurs.
PO2: each port output event occurs for each cause.
Data-based coverage:
DM1: exercise the cardinality of every relationship.
DM2: exercise the (functional) dependencies among relationships.
System Testing: Stress Testing
Push the system to its limit and beyond, in volume of:
application (system) load;
users (response rate);
resources, physical and logical.
System Testing: Performance and Usability Testing
Performance testing:
Performance seen by users: delay, throughput.
Performance seen by the system owner: memory, CPU, communication.
Performance is either explicitly specified or simply expected to be good; where unspecified, find the limit.
Usability testing:
The human element in system operation: GUI, messages, reports, ...
Test Stopping Criteria
Management: the deadline is met, the budget is exhausted, ...
The desired coverage has been achieved.
The desired level of failure intensity has been achieved.
Acceptance Testing
Purpose: ensure that end users are satisfied.
Basis: user expectations (documented or not).
Environment: the real one.
Performed: for and by end users (in commissioned projects).
Test cases: may be reused from the system test, or designed by the end users.
Regression Testing
Whenever a system is modified (fixing a bug, adding functionality, etc.), the entire test suite needs to be rerun to make sure that features which already worked are not affected by the change. Re-test automatically before checking changes into the code repository; use incremental testing strategies for big systems.
Comparison of White-box & Black-box Testing (25.1.2002)
White-box testing: a potentially infinite number of paths has to be tested; it often tests what is done instead of what should be done; it cannot detect missing use cases.
Black-box testing: a potential combinatorial explosion of test cases (valid & invalid data); it is often unclear whether the selected test cases uncover a particular error; it does not discover extraneous use cases ("features").
Both types of testing are needed. White-box and black-box testing are the extreme ends of a testing continuum; any choice of test cases lies in between and depends on: the number of possible logical paths, the nature of the input data, the amount of computation, and the complexity of algorithms and data structures.
The 4 Testing Steps
1. Select what has to be measured:
Analysis: completeness of requirements.
Design: cohesion.
Implementation: code tests.
2. Decide how the testing is done:
Code inspection; proofs (design by contract); black-box and white-box testing; the integration testing strategy (big bang, bottom-up, top-down, sandwich).
3. Develop test cases:
A test case is a set of test data or situations that will be used to exercise the unit (code, module, system) being tested or the attribute being measured.
4. Create the test oracle:
An oracle contains the predicted results for a set of test cases; the test oracle has to be written down before the actual testing takes place.
Guidance for Test Case Selection
Use analysis knowledge about the functional requirements (black-box testing): use cases, expected input data, invalid input data.
Use design knowledge about system structure, algorithms, and data structures (white-box testing): control structures (test branches, loops, ...), data structures (test record fields, arrays, ...).
Use implementation knowledge about the algorithms: for example, force a division by zero, or use a sequence of test cases for an interrupt handler.
Unit-testing Heuristics
1. Create unit tests as soon as the object design is completed: black-box tests exercise the use cases and the functional model; white-box tests exercise the dynamic model; data-structure tests exercise the object model.
2. Develop the test cases. Goal: find the minimal number of test cases that cover as many paths as possible.
3. Cross-check the test cases to eliminate duplicates; don't waste your time!
4. Desk-check your source code; this reduces testing time.
5. Create a test harness: test drivers and test stubs are needed for integration testing.
6. Describe the test oracle (often the result of the first successfully executed test).
7. Execute the test cases. Don't forget regression testing: re-execute the test cases every time a change is made.
8. Compare the results of the test with the test oracle; automate as much as possible.