TQS - Teste e Qualidade de Software (Software Testing and Quality)
Software Testing Concepts
João Pascoal Faria
jpf@fe.up.pt
www.fe.up.pt/~jpf
Software testing
"Software testing consists of the dynamic (1) verification of the behavior of a program on a finite (2) set of test cases, suitably selected (3) from the usually infinite executions domain, against the specified expected (4) behavior." [source: SWEBOK]
(1) Testing always implies executing the program on some inputs.
(2) Even for simple programs, so many test cases are theoretically possible that exhaustive testing is infeasible; there is a trade-off between limited resources and schedules and inherently unlimited test requirements.
(3) See the test case design techniques later on how to select the test cases.
(4) It must be possible to decide whether the observed outcomes of the program are acceptable or not; the pass/fail decision is commonly referred to as the oracle problem.
Purpose of software testing
"Program testing can be used to show the presence of bugs, but never to show their absence!” [Dijkstra, 1972] Because exhaustive testing is usually impossible "The goal of a software tester is to find bugs, find them as early as possible, and make sure that they get fixed“ (Ron Patton) A secondary goal is to assess software quality Defect testing – find defects, using test data and test cases that have higher probability of finding defects Statistical testing – estimate the value of software quality metrics, using representative test cases and test data
Test cases
A test case specifies the inputs to test the system and the expected outputs (or predicted results) from these inputs, if the system operates correctly under specified execution conditions.
Inputs may include an initial state of the system; outputs may include a final state of the system.
When test cases are executed, the system is provided with the specified inputs and the actual outputs are compared with the expected outputs.
Example for a calculator: (input) should give 8 (output).
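To make the structure of a test case concrete, here is a minimal JUnit 4 sketch, assuming a hypothetical Calculator class with an add method and illustrative inputs 3 and 5 (not taken from the slide); the assertion plays the role of the comparison between actual and expected outputs.

```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;

// Hypothetical class under test, used only for illustration.
class Calculator {
    int add(int a, int b) {
        return a + b;
    }
}

public class CalculatorTest {

    // One test case: specified inputs (3, 5) and expected output 8.
    @Test
    public void addReturnsExpectedSum() {
        Calculator calc = new Calculator(); // establish the initial state
        int actual = calc.add(3, 5);        // provide the specified inputs
        assertEquals(8, actual);            // compare actual output with expected output
    }
}
```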
Test types
Test types can be organized along three dimensions (diagram):
Level or phase: unit, integration, system.
Accessibility (test case design strategy/technique): white box (or structural), black box (or functional).
Quality attributes: functional behaviour (the focus here), security, robustness, performance, usability, reliability.
Test levels or phases (1)
Unit testing
Testing of individual program units or components.
Usually the responsibility of the component developer (except sometimes for critical systems).
Tests are based on experience, specifications and code.
A principal goal is to detect functional and structural defects in the unit.
Test levels or phases (2)
Integration testing
Testing of groups of components integrated to create a sub-system.
Usually the responsibility of an independent testing team (except sometimes in small projects).
Tests are based on a system specification (technical specifications, designs).
A principal goal is to detect defects that occur on the interfaces of units and in their common behavior.
Test levels or phases (3)
System testing
Testing the system as a whole.
Usually the responsibility of an independent testing team.
Tests are usually based on a requirements document (functional requirements/specifications and quality requirements).
A principal goal is to evaluate attributes such as usability, reliability and performance (assuming unit and integration testing have been performed).
(source: I. Sommerville)
Test levels or phases (4)
Acceptance testing
Testing the system as a whole.
Usually the responsibility of the customer.
Tests are based on a requirements specification or a user manual.
A principal goal is to check whether the product meets customer requirements and expectations.
Regression testing
Repetition of tests at any level after a software change.
(source: I. Sommerville)
Test levels and the extended V-model of software development
[Diagram: extended V-model of software development (source: I. Burnstein, pg. 15). Each development phase on the left is paired with test activities on the right:
Specify requirements - requirements review; specify/design/code system and acceptance tests; system/acceptance test plan & test cases review/audit; execute system tests; execute acceptance tests.
Design - design review; specify/design/code integration tests; integration test plan & test cases review/audit; execute integration tests.
Code - code reviews; specify/design/code unit tests; unit test plan & test cases review/audit; execute unit tests.]
What is a good test case?
Capability to find defects, particularly defects with higher risk:
Risk = frequency of failure (manifestation to users) * impact of failure.
Cost (of post-release failure) ≈ risk.
Capability to exercise multiple aspects of the system under test: reduces the number of test cases required and the overall cost.
Low cost of development (specify, design, code), execution, and result analysis (pass/fail analysis, defect localization).
Easy to maintain, to reduce whole life-cycle cost; maintenance cost ≈ size of test artefacts.
(See also an article with this title by Cem Kaner.)
The importance of good test cases
[Diagram: number of bugs found as a function of test suite quality and product quality.
High test suite quality: many bugs found when product quality is low; some bugs found when product quality is high.
Low test suite quality: some bugs found when product quality is low; very few bugs found when product quality is high.
Annotations "You may be here ..." and "... and think you are here": a weak test suite that finds few bugs can be mistaken for evidence of a high-quality product.]
Test selection/adequacy/coverage/stop criteria
"A selection criterion can be used for selecting the test cases or for checking whether or not a selected test suite is adequate, that is, to decide whether or not the testing can be stopped." [SWEBOK 2004]
Adequacy criteria - criteria to decide if a given test suite is adequate, i.e., to give us "enough" confidence that "most" of the defects are revealed; in practice, reduced to coverage criteria.
Coverage criteria:
Requirements/specification coverage: at least one test case for each requirement; cover all statements in a formal specification.
Model coverage: state-transition coverage; use-case and scenario coverage.
Code coverage: statement coverage; data flow coverage; ...
Fault coverage.
"Although it is common in current software testing practice that the test processes at both the higher and lower levels stop when money or time runs out, there is a tendency towards the use of systematic testing methods with the application of test adequacy criteria." (Software Unit Test Coverage and Adequacy, Hong Zhu et al., ACM Computing Surveys, December 1997)
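As a small illustration of the difference between two code coverage criteria (statement vs. branch coverage), consider the following sketch; the function and the test inputs are assumptions introduced for the example, not part of the original slides.

```java
public class CoverageExample {

    // Hypothetical function under test.
    static int abs(int x) {
        if (x < 0) {   // decision with a true and a false outcome
            x = -x;
        }
        return x;
    }

    public static void main(String[] args) {
        // The single input -3 executes every statement (100% statement coverage)
        // but exercises only the true outcome of the decision.
        System.out.println(abs(-3)); // prints 3

        // Adding the input 4 also exercises the false outcome,
        // reaching 100% branch (decision) coverage.
        System.out.println(abs(4));  // prints 4
    }
}
```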
Test case design strategies and techniques
Black-box testing (not code-based; sometimes called functional testing)
Tester's view: inputs and outputs.
Knowledge sources: requirements document, specifications, user manual, models, domain knowledge, defect analysis data, intuition, experience.
Techniques/methods: equivalence class partitioning, boundary value analysis, cause-effect graphing, error guessing, random testing, state-transition testing, scenario-based testing.
White-box testing (also called code-based or structural testing)
Tester's view (knowledge sources): program code, control flow graphs, data flow graphs, cyclomatic complexity, high-level design, detailed design.
Techniques/methods: control flow testing/coverage (statement coverage, branch (or decision) coverage, condition coverage, branch and condition coverage, modified condition/decision coverage, multiple condition coverage, independent path coverage, path coverage), data flow testing/coverage, class testing/coverage, mutation testing.
(adapted from: I. Burnstein, pg. 65)
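As an illustration of two of the black-box techniques listed above (equivalence class partitioning and boundary value analysis), here is a small JUnit 4 sketch; the grade-validation function and the 0..20 range are assumptions introduced for the example.

```java
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;
import org.junit.Test;

public class GradeValidatorTest {

    // Hypothetical function under test: grades are valid in the range 0..20.
    static boolean isValidGrade(int g) {
        return g >= 0 && g <= 20;
    }

    // Equivalence class partitioning: one representative value per class.
    @Test
    public void representativeOfEachPartition() {
        assertFalse(isValidGrade(-5)); // invalid partition: below the range
        assertTrue(isValidGrade(10));  // valid partition: within the range
        assertFalse(isValidGrade(35)); // invalid partition: above the range
    }

    // Boundary value analysis: values at and just outside the partition boundaries.
    @Test
    public void boundaryValues() {
        assertFalse(isValidGrade(-1));
        assertTrue(isValidGrade(0));
        assertTrue(isValidGrade(20));
        assertFalse(isValidGrade(21));
    }
}
```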
Test iterations: test to pass and test to fail
First test iterations: test-to-pass
Check if the software fundamentally works, with valid inputs, without stressing the system.
Subsequent test iterations: test-to-fail
Try to "break" the system, with valid inputs but at the operational limits, and with invalid inputs.
(source: Ron Patton)
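A minimal JUnit 4 sketch of the two kinds of iterations, assuming a hypothetical divide function that rejects division by zero: the test-to-pass case uses ordinary valid inputs, while the test-to-fail cases push the operational limits and supply an invalid input.

```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class DivisionTest {

    // Hypothetical function under test.
    static int divide(int a, int b) {
        if (b == 0) {
            throw new IllegalArgumentException("division by zero");
        }
        return a / b;
    }

    // Test-to-pass: ordinary valid inputs, no stress on the system.
    @Test
    public void dividesValidInputs() {
        assertEquals(4, divide(8, 2));
    }

    // Test-to-fail: valid inputs at the operational limits.
    @Test
    public void handlesExtremeValues() {
        assertEquals(Integer.MAX_VALUE, divide(Integer.MAX_VALUE, 1));
    }

    // Test-to-fail: invalid input; the system should reject it cleanly.
    @Test(expected = IllegalArgumentException.class)
    public void rejectsDivisionByZero() {
        divide(5, 0);
    }
}
```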
Test automation
Automatic test case execution
Requires that test cases are written in some executable language.
Increases test development costs (coding) but practically eliminates test (re)execution costs, which is particularly important in regression testing.
Unit testing frameworks and tools for API testing; capture/replay tools for GUI testing.
Automatic test case generation
Automatic generation of test inputs is easier than automatic generation of test outputs (which usually requires a formal specification).
Reduces test development costs.
Usually, an inferior capability to find defects per test case, but the overall capability may be higher because many more test cases can be generated than manually (see the sketch below).
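A sketch of the idea that generating inputs automatically is easier than generating expected outputs: without a formal specification, a property that every correct output must satisfy can serve as a partial oracle. The abs function, the chosen property and the use of java.util.Random are assumptions made for illustration.

```java
import java.util.Random;

public class RandomTestSketch {

    // Hypothetical function under test.
    static int abs(int x) {
        return x < 0 ? -x : x;
    }

    public static void main(String[] args) {
        Random rnd = new Random(42); // fixed seed so any failure is reproducible

        // Inputs are generated automatically; the exact expected outputs are unknown,
        // so a property that must always hold acts as a partial oracle.
        for (int i = 0; i < 1000; i++) {
            int x = rnd.nextInt();
            int result = abs(x);
            if (result < 0) { // the property fails for Integer.MIN_VALUE, if it is ever drawn
                System.out.println("Possible defect for input " + x + ": result " + result);
            }
        }
    }
}
```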
Some good practices
Test as early as possible.
Write the test cases before the software to be tested:
applies to any level: unit, integration or system;
helps getting insight into the requirements (see the sketch after this list).
Code the test cases, because of the frequent need for regression testing (repetition of testing each time the software is modified).
The more critical the system, the more independent the tester should be: a colleague, another department, another company.
Be conscious about cost.
Derive expected test outputs from the specification (formal/informal, explicit/implicit), not from the code.
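One possible shape of the "write the test cases before the software" practice, sketched with JUnit 4: the test is written against a minimal stub, fails at first, and later doubles as a regression test. The BoundedStack class and its interface are assumptions for the example.

```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;

// Minimal stub: just enough for the test to compile; the real behaviour
// is implemented afterwards, driven by the failing test.
class BoundedStack {
    void push(int value) { /* to be implemented */ }
    int pop() { return 0; /* placeholder, to be implemented */ }
}

public class BoundedStackTest {

    // Written before the implementation: it states the expected behaviour,
    // fails against the stub above, and later serves as a regression test.
    @Test
    public void pushThenPopReturnsLastElement() {
        BoundedStack stack = new BoundedStack();
        stack.push(7);
        assertEquals(7, stack.pop());
    }
}
```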
References and further reading
Practical Software Testing, Ilene Burnstein, Springer-Verlag, 2003
Software Testing, Ron Patton, SAMS, 2001
Testing Computer Software, 2nd Edition, Cem Kaner, Jack Falk, Hung Nguyen, John Wiley & Sons, 1999
Guide to the Software Engineering Body of Knowledge (SWEBOK), IEEE Computer Society, 2004
Software Engineering, Ian Sommerville, 6th Edition, Addison-Wesley, 2000
What Is a Good Test Case?, Cem Kaner, Florida Institute of Technology, 2003
Software Unit Test Coverage and Adequacy, Hong Zhu et al., ACM Computing Surveys, December 1997