Software Testing
Software Testing
‘The process of executing a program... with the intention of finding errors’ (Myers, 1976)
Example: Test a module that inputs three integers (i1, i2, and i3) and outputs whether they:
- make an equilateral triangle
- make an isosceles triangle
- make a scalene triangle
- do not make a triangle (an error)
Adapted from Binder (1999); the triangle problem was originally proposed by Myers (1979).
Testing example
Take the time now to work out what tests need to be done, for example:
- i1 not > 0 (a non-positive side)
- i1 + i2 <= i3 (triangle inequality violated)
- i1, i2, i3 all > 0 and all equal (equilateral)
- etc.
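Before writing test cases, it helps to see what the module under test might look like. The following is a minimal sketch in Python; the function name, return strings, and the decision to treat degenerate triangles (i1 + i2 == i3) as errors are assumptions, not part of the original exercise:

```python
def classify_triangle(i1: int, i2: int, i3: int) -> str:
    """Classify three integer side lengths (illustrative module under test)."""
    sides = sorted((i1, i2, i3))
    # A valid triangle needs all positive sides and must strictly satisfy
    # the triangle inequality (the degenerate case a + b == c is rejected).
    if sides[0] <= 0 or sides[0] + sides[1] <= sides[2]:
        return "not a triangle"
    if i1 == i2 == i3:
        return "equilateral"
    if i1 == i2 or i2 == i3 or i1 == i3:
        return "isosceles"
    return "scalene"
```

Keep this sketch in mind when counting your test cases: each branch above, plus its boundary, needs at least one test.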
Testing example (2) Use this space to write some test cases
Testing example (3)
How many cases did you identify?
- If you thought that 10 or fewer tests were enough, you were missing essential cases.
- If you thought that 20 or fewer tests were enough, you were making assumptions about the implementation.
Testing example (4)
To test the module thoroughly (exhaustively), we would need to test every possible input:
- For 32-bit integers, there are about 4 x 10^9 possible values.
- Three integers give (4 x 10^9)^3 = 6.4 x 10^28 combinations.
- At 1000 cases per second, exhaustive testing would take about 2 x 10^18 years, roughly 150 million times the age of the universe!
Clearly, we will need to be more selective in the way we test!
Finding Errors
‘The process of executing a program... with the intention of finding errors’ (Myers, 1976)
There are two kinds of errors:
- Errors of omission: we have left something out that we should put in. Test that all the requirements and specifications are satisfied.
- Errors of commission: we have put something in that we should leave out. It is impractical to test that only the requirements and specifications are satisfied, i.e. that the product does nothing else.
Context of testing
Since we cannot test exhaustively, we complement our testing with Quality Assurance techniques such as:
- code walkthroughs
- inspections
- reviews
Equivalence Classes
Many exhaustive tests would exercise the same logical path, so we assume these can be combined. Such a set of cases is called an equivalence class. These equivalence classes can be derived from the requirements or the specification.
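As a sketch, the triangle module's input space partitions into a handful of equivalence classes, with one representative test per class. The classifier below is a minimal assumed implementation (repeated so the example is self-contained); the class names in the comments are illustrative:

```python
def classify_triangle(i1, i2, i3):
    # Minimal assumed implementation of the triangle module.
    sides = sorted((i1, i2, i3))
    if sides[0] <= 0 or sides[0] + sides[1] <= sides[2]:
        return "not a triangle"
    if i1 == i2 == i3:
        return "equilateral"
    if i1 == i2 or i2 == i3 or i1 == i3:
        return "isosceles"
    return "scalene"

# One representative input per equivalence class, derived from the spec alone.
equivalence_classes = [
    ((5, 5, 5),  "equilateral"),
    ((5, 5, 3),  "isosceles"),
    ((3, 4, 5),  "scalene"),
    ((0, 4, 5),  "not a triangle"),   # non-positive side
    ((-1, 4, 5), "not a triangle"),   # negative side
    ((1, 2, 10), "not a triangle"),   # triangle inequality violated
    ((1, 2, 3),  "not a triangle"),   # boundary: i1 + i2 == i3
]

results = [classify_triangle(*inputs) == expected
           for inputs, expected in equivalence_classes]
```

Seven classes, seven tests: far fewer than 6.4 x 10^28, yet each logical path is exercised at least once.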
Hazards
We test for known or likely causes of error:
- Test every line of code
- Test every path in a module
- Test equivalence class boundary conditions
- Test complex logic
- Test interfaces
- Re-test super-class in sub-class
- Test object life cycles
- Test using “real” data volumes
- etc.
Selecting tests
From experience we have learned that we can get good results by:
- Testing for known hazards
- Testing every known “business scenario” and “use case”
Major decisions: WHO tests, WHAT to test, and HOW to test.
Who does the Testing?
- The programmer?
- An independent team?
- The users?
- Everyone!
What to Test
- Unit testing (procedural function, OO method)
- Module testing (procedural program, OO class)
- Subsystem testing
- Integration testing
- Acceptance testing
What to Test ... Unit Testing
Testing is not debugging: if our software fails a test, we do debugging to find the cause.
Test:
- the lowest-level functions and procedures in isolation
- each logic path in the component specifications
- every line of code (white box)
What features of the Eiffel language assist with testing? What do preconditions do? What do postconditions do?
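Eiffel expresses preconditions and postconditions directly in the language. In Python, the same contract idea can be sketched with assert statements; the integer_sqrt function below is illustrative, not from the course material:

```python
def integer_sqrt(n: int) -> int:
    """Illustrative unit with Eiffel-style contracts emulated via assert."""
    # Precondition: reject invalid input before the body runs.
    assert isinstance(n, int) and n >= 0, "precondition: n must be a non-negative int"
    r = 0
    while (r + 1) * (r + 1) <= n:
        r += 1
    # Postcondition: the result is the floor of the square root.
    assert r * r <= n < (r + 1) * (r + 1), "postcondition violated"
    return r
```

A unit test that passes valid and invalid inputs then exercises both the function's logic and its contracts, which is exactly what makes contracts a testing aid: every call checks itself.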
What to Test ... Module Testing
- Tests the interaction of all the related components of a module / class
- Tests the module as a stand-alone entity
What do Eiffel invariants do? In an OO system, how would you create a ‘test harness’ or ‘test driver’ for a module?
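A minimal test harness for a class-level module might look like the following sketch, using unittest from the Python standard library. The BankAccount class, its methods, and its invariant are all invented for illustration; the check_invariant call after each mutation mimics an Eiffel class invariant:

```python
import unittest

class BankAccount:
    """Illustrative module; check_invariant mimics an Eiffel class invariant."""
    def __init__(self) -> None:
        self.balance = 0

    def check_invariant(self) -> None:
        assert self.balance >= 0, "invariant: balance must never go negative"

    def deposit(self, amount: int) -> None:
        assert amount > 0                   # precondition
        self.balance += amount
        self.check_invariant()

    def withdraw(self, amount: int) -> None:
        assert 0 < amount <= self.balance   # precondition
        self.balance -= amount
        self.check_invariant()

class BankAccountHarness(unittest.TestCase):
    """Test driver: exercises the module as a stand-alone entity."""
    def test_deposit_then_withdraw(self):
        acc = BankAccount()
        acc.deposit(100)
        acc.withdraw(30)
        self.assertEqual(acc.balance, 70)

    def test_overdraw_rejected(self):
        acc = BankAccount()
        with self.assertRaises(AssertionError):
            acc.withdraw(1)
```

Running `python -m unittest` in the file's directory executes the harness; no other scaffolding is needed.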
What to Test ... Subsystem Testing
- Tests the interfaces between the modules
- Scenarios or Use Cases are employed to test module interaction
What to Test ... Integration (System) Testing
Tests:
- interactions between sub-systems and components
- system performance (stress testing, volume testing)
Uses Scenarios or Use Cases specified in the requirements.
What to Test ... Acceptance Testing
- Tests the whole system with Scenarios or Use Cases specified in the requirements
- Tests the whole system with live data
- Establishes the ‘validity’ of the system
How to Test Software
1. Use a Test Plan
2. Establish a test strategy
- Top-Down / Bottom-Up
- Techniques: White Box / Black Box
3. Use appropriate tools; automate as much testing as possible
How to Test Software... 1. Use a Test Plan
The Test Plan establishes:
- Standards, deliverables
- Major testing phases
- Traceability of testing to requirements
- Test schedule and resource allocation
- Test record documentation
- Testing strategies
Examples of Test Plans are provided in the tutorial notes and the assignment.
How to Test Software ... Assignment Testing Documents
1. To be completed from the requirement specifications before any code is written (i.e. expected results):
- Test Plan
- Test Summary
2. To be completed during testing (i.e. expected + actual results):
- Test Report
How to Test Software ... Assignment Testing Documents ...
Test Plan:
- Describe how you are going to verify that your system modifications will work as specified
- Only include tests for your modifications (+ any missing tests from Stage 0)
Test Summary:
- Show all the tests, including regression tests, i.e. show all Stage 1 tests as well as your own tests
- Use as a checklist of all tests
How to Test Software … Assignment Testing Documents … Test Report
Includes black and white box testing:
- Test Plan showing actual results for your tests
- Test Summary of all tests with an indication of all working tests (i.e. a checklist ‘tick’ for each working test)
How to Test Software... 2. Establish a Test Strategy
Top-Down and Bottom-Up Testing
Top-Down Testing, major features:
- Control program is tested first
- Modules are integrated one at a time
- Major emphasis is on interface testing
- Starts at sub-system level with modules as ‘stubs’
- Then tests modules with functions as ‘stubs’
- Used in conjunction with top-down program development
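A sketch of the top-down idea: the control-level logic is real, while the lower-level module is replaced by a stub that returns canned data. All names here (fetch_orders_stub, customer_total, the order records) are hypothetical:

```python
def fetch_orders_stub(customer_id: int) -> list:
    # Stub standing in for an unwritten database module: returns canned data.
    return [{"id": 1, "total": 40.0}, {"id": 2, "total": 60.0}]

def customer_total(customer_id: int, fetch_orders=fetch_orders_stub) -> float:
    # Control-level logic under test. Integration proceeds one module at a
    # time: later, the real fetch_orders is passed in place of the stub.
    orders = fetch_orders(customer_id)
    return sum(order["total"] for order in orders)
```

Because the stub's output is known, any wrong total must come from the control logic or the interface, which is exactly where top-down testing focuses.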
How to Test Software... 2. Establish a Test Strategy
Bottom-Up Testing, major features:
- Allows early testing aimed at proving feasibility of particular modules
- Modules can be integrated in various clusters as desired
- Major emphasis is on module functionality and performance
How to Test Software... Testing Strategies
Top-Down Testing
Advantages:
- Design errors are trapped earlier
- A working (prototype) system is available early
- Enables early validation of the design
Disadvantages:
- Difficult to produce a stub for a complex component
- Difficult to observe test output from top-level modules
How to Test Software... Testing Strategies ... Bottom-Up Testing
Advantages:
- Easier to create test cases and observe output
- Uses simple drivers for low-level modules to provide data and the interface
- Natural fit with OO development techniques
Disadvantages:
- No ‘program’ until all modules are tested
- High-level errors may cause changes in lower modules
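Bottom-up testing works the other way around: a simple driver feeds known inputs to a real low-level module and checks each output. The record format and function names below are invented for illustration:

```python
def parse_record(line: str) -> tuple:
    """Low-level module under test: parses 'name,quantity' records."""
    name, qty = line.split(",")
    return name.strip(), int(qty)

def driver() -> int:
    # Test driver: supplies data and checks the module's outputs directly.
    cases = [
        ("widget, 3", ("widget", 3)),
        ("bolt,10",   ("bolt", 10)),
    ]
    for given, expected in cases:
        assert parse_record(given) == expected
    return len(cases)
```

Note how the driver is trivial to write and the output trivial to observe, which is precisely the advantage claimed above; the cost is that no complete program exists until the higher layers arrive.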
How to Test Software... Testing Strategies … Advantages (again)
Top-Down:
- No test drivers are needed
- The control program plus a few modules forms a basic early prototype
- Interface errors are discovered early
- Modular features aid debugging
Bottom-Up:
- No test stubs are needed
- Easier to adjust manpower needs
- Errors in critical modules are found early
- Natural fit with OO development techniques
How to Test Software... Testing Strategies … Disadvantages (again)
Top-Down:
- Test stubs are needed
- Extended early phases dictate a slow manpower build-up
- Errors in critical modules at low levels are found late
Bottom-Up:
- Test drivers are needed
- Many modules must be integrated before a working program is available
- Interface errors are discovered late
How to Test Software... 2. Establish a Test Strategy ...
Testing Techniques:
- Structural (White Box) Testing
- Functional (Black Box) Testing
- Equivalence Partitioning
- Mathematical / Formal Verification
- Paradigm / Language hazards
How to Test Software... Testing Techniques
Structural (White Box) Testing:
- Use the code to derive a program flow-graph which shows all logic paths
- The maximum number of test paths = the program’s ‘Cyclomatic Complexity’ (McCabe)
- Use a dynamic program analyser to identify code not executed
White Box Testing attempts to test every line of code.
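For example, a function with two independent decision points has cyclomatic complexity V(G) = 2 + 1 = 3, so three basis-path tests suffice to cover its flow-graph. The shipping function below is invented for illustration:

```python
def shipping_cost(weight_kg: float, express: bool) -> float:
    # Two decision nodes -> cyclomatic complexity V(G) = 2 + 1 = 3,
    # so three linearly independent paths cover the flow-graph.
    cost = 5.0
    if weight_kg > 10:
        cost += 2.0
    if express:
        cost *= 2
    return cost

# Basis-path tests: across the set, each decision is taken both ways.
white_box_tests = [
    ((5, False),  5.0),   # both decisions false
    ((15, False), 7.0),   # first decision true
    ((5, True),  10.0),   # second decision true
]
```

Note that full path coverage would need 2^2 = 4 tests here; McCabe's number gives the smaller basis set from which all paths can be composed.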
How to Test Software... Testing Techniques ...
Functional (Black Box) Testing:
- Test cases are based on functions as outlined in the specification
- Can be developed when program specifications are complete, i.e. before coding starts
- The Test Plan with the test cases can be developed from the functional requirements
Black Box testing only checks the outputs for accuracy; not every line of code is tested.
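A black-box sketch: the test cases below are derived purely from the stated leap-year rule, without consulting the implementation (here the standard library's calendar.isleap stands in for the code under test):

```python
import calendar

# Spec: years divisible by 4 are leap years, except centuries,
# unless the century is divisible by 400. Cases chosen from the spec alone.
spec_cases = [
    (2024, True),    # divisible by 4
    (2023, False),   # not divisible by 4
    (1900, False),   # century, not divisible by 400
    (2000, True),    # century divisible by 400
]

black_box_results = [calendar.isleap(year) == expected
                     for year, expected in spec_cases]
```

Because the cases come from the requirements, this table could have been written before any code existed, which is the practical advantage claimed above.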
How to Test Software... Testing Techniques ...
- White Box testing should only be used for Unit testing
- White Box and Black Box testing can be used for Module testing
- Only use Black Box testing for Subsystem testing, Integration testing, and Acceptance testing
How to Test Software... 3. Use Appropriate Testing Tools
- Test data generators
- Test control / evaluation environments
- Program analysers (static and dynamic)
- File comparators
- Simulators
- Symbolic program dumps
- Trace packages
- Interactive debugging environments
Regression Testing
Maintenance/Development Testing Differences:
- Only changes need to be reviewed
- Only new test cases that exercise the change need to be developed
- Test results are compared against previous test results (Regression Testing)
- Data collected during impact analysis identifies testing at each level
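The comparison against previous results can be sketched as diffing a fresh run of the suite against a recorded baseline. The test names and outcomes below are invented; run_suite is a stand-in for actually re-executing the tests:

```python
# Baseline: results recorded before the maintenance change was made.
baseline = {"test_login": "pass", "test_report": "pass", "test_export": "pass"}

def run_suite() -> dict:
    # Stand-in for re-executing the full test suite after the change.
    return {"test_login": "pass", "test_report": "pass", "test_export": "pass"}

# Any test whose outcome differs from the baseline is a regression.
regressions = sorted(name for name, result in run_suite().items()
                     if baseline.get(name) != result)
```

An empty regressions list means the change broke nothing that previously worked; anything listed points directly at the affected area, which is why the baseline is kept under version control alongside the code.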
When to Stop Testing
When do you stop testing? NOT when the delivery date arrives (which is, unfortunately, what usually happens)!
It should be: when the test goals have been reached, e.g.
- code coverage
- case coverage
- risk coverage
- defect density (defects / LOC)
- defect arrival rate
Testing Summary
When testing, always:
- Use a Test Plan
- Use testing strategies and techniques: Top-Down / Bottom-Up, White Box / Black Box
- Use appropriate tools; automate tests
- For maintenance, use Regression Testing