Testing Techniques Software Testing Module ( ) Dr. Samer Hanna
Testing Techniques Testing consists of the following steps (Harrold, 2000): (1) designing test cases; (2) executing the system under test with those test cases; (3) examining the results of the execution and comparing them with the expected results.
Test Data and Test Cases The program to be tested is executed using representative data samples, or test data, and the results are compared with the expected results. A test case consists of the input test data and the expected output for that input.
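As a minimal sketch of this idea, the following tests a hypothetical `absolute` function (the function and the chosen values are illustrative, not from the slides):

```python
def absolute(x):
    """Function under test: returns the absolute value of x."""
    return x if x >= 0 else -x

# Each test case pairs input test data with the expected output.
test_cases = [
    (5, 5),    # positive input
    (-3, 3),   # negative input
    (0, 0),    # boundary input
]

for test_input, expected in test_cases:
    actual = absolute(test_input)
    # Compare the actual result of execution with the expected result.
    assert actual == expected, f"absolute({test_input}) = {actual}, expected {expected}"
```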
Exhaustive Testing Exhaustive testing means executing a piece of code, such as a method or function, with every possible input to check that it produces the expected output. For all but trivial programs this is impossible. Instead, testing techniques such as boundary value testing and equivalence partitioning are used to design a manageable set of test data.
Testing Techniques Categories Testing techniques can be categorized along several dimensions: 1) The availability of the source code Testing techniques can be categorized as black-box or white-box testing according to the availability of the source code:
White-Box testing If the source code of the system under test is available then the test data is based on the structure of this source code (Jorgensen, 2002). Examples of white-box testing are: path testing and data flow testing (Jorgensen, 2002).
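A small sketch of white-box path testing: the function below is hypothetical, and the test data are chosen by looking at its two decisions so that every feasible path is executed at least once.

```python
def classify(score):
    """Function under test: two decisions give the possible paths."""
    grade = "pass" if score >= 50 else "fail"
    honors = score >= 85
    return grade, honors

# White-box test data: derived from the code's structure so that each
# feasible path through the two decisions is executed at least once.
# (The combination "fail" with honors is an infeasible path: a score
# cannot be both below 50 and at least 85.)
path_tests = [
    (40, ("fail", False)),  # path: score < 50,  score < 85
    (60, ("pass", False)),  # path: score >= 50, score < 85
    (90, ("pass", True)),   # path: score >= 50, score >= 85
]

for data, expected in path_tests:
    assert classify(data) == expected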
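A small sketch of white-box path testing: the function below is hypothetical, and the test data are chosen by looking at its two decisions so that every feasible path is executed at least once.

```python
def classify(score):
    """Function under test: two decisions give the possible paths."""
    grade = "pass" if score >= 50 else "fail"
    honors = score >= 85
    return grade, honors

# White-box test data: derived from the code's structure so that each
# feasible path through the two decisions is executed at least once.
# (The combination "fail" with honors is an infeasible path: a score
# cannot be both below 50 and at least 85.)
path_tests = [
    (40, ("fail", False)),  # path: score < 50,  score < 85
    (60, ("pass", False)),  # path: score >= 50, score < 85
    (90, ("pass", True)),   # path: score >= 50, score >= 85
]

for data, expected in path_tests:
    assert classify(data) == expected
```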
Black-Box testing If the source code is not available then test data is based on the function of the software without regard to how it was implemented (Jorgensen, 2002). Examples of black-box testing are: boundary value testing (Jorgensen, 2002) and equivalence partitioning (Myers, 1979).
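A sketch of both black-box techniques against a hypothetical validator whose specification is "accept ages from 1 to 120"; the test data come only from that specification, not from the implementation.

```python
def valid_age(age):
    """Function under test; specification: accept ages 1..120."""
    return 1 <= age <= 120

# Equivalence partitioning: one representative from each partition
# (below the range, inside the range, above the range).
partition_tests = [(0, False), (60, True), (150, False)]

# Boundary value testing: values at and adjacent to the partition edges.
boundary_tests = [(0, False), (1, True), (2, True),
                  (119, True), (120, True), (121, False)]

for age, expected in partition_tests + boundary_tests:
    assert valid_age(age) == expected
```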
2) The role of testing Testing techniques can also be categorized according to the type of testing (Sommerville, 2004), which is based on the role or goal of the test: some testing techniques belong to validation testing and others to defect or fault-based testing:
Validation testing This kind of testing is intended to show that the software meets the customer requirements. In validation testing, each requirement must be tested by at least one test case. An example of a testing technique that belongs to this type of testing is specification-based testing, where test data are generated from state-based specifications that describe what functions the software is supposed to provide.
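As an illustrative sketch (the account class and the requirement wording are hypothetical), a requirement such as "a withdrawal must not exceed the balance" is validated by at least one test case derived directly from that requirement:

```python
class Account:
    """Minimal account used to illustrate the requirement under test."""
    def __init__(self, balance):
        self.balance = balance

    def withdraw(self, amount):
        # Requirement: a withdrawal must not exceed the current balance.
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount

# Validation test cases derived from the requirement, not from the code.
acct = Account(100)
acct.withdraw(40)
assert acct.balance == 60  # normal withdrawal succeeds

try:
    acct.withdraw(1000)
    raise AssertionError("expected the requirement to be enforced")
except ValueError:
    pass  # requirement satisfied: overdraft rejected
```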
Validation testing If the specification is expressed as a model, for example in UML, and the test cases are generated from that model, then the testing is called model-based testing (Toth et al., 2003). Model-based testing also belongs to validation testing.
Defect testing (fault-based testing or negative testing) This type of testing is intended to detect faults (bugs or defects) in the software system rather than testing the functional use of the system as validation testing does (Sommerville, 2004). Examples of testing techniques that belong to this type of testing include: fault injection (Voas and McGraw, 1998), boundary value based robustness testing (Jorgensen, 2002), and syntax testing (Beizer, 1990).
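A sketch of defect (negative) testing: the parser below is hypothetical, and each test deliberately feeds it malformed input, expecting the fault to be detected and rejected rather than silently accepted.

```python
def parse_percentage(text):
    """Function under test: parse a string such as '75%' to an int."""
    if not text.endswith("%"):
        raise ValueError("missing % sign")
    value = int(text[:-1])
    if not 0 <= value <= 100:
        raise ValueError("out of range")
    return value

# Negative test data: malformed inputs intended to expose faults.
bad_inputs = ["", "75", "abc%", "101%", "-1%"]
for bad in bad_inputs:
    try:
        parse_percentage(bad)
        raise AssertionError(f"no error raised for {bad!r}")
    except ValueError:
        pass  # the defect-handling behavior we wanted to confirm
```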
Defect testing Defect testing contributes to the assessment of the following quality attributes: robustness, fault-tolerance, and security.
3) The level of testing Testing techniques can be distinguished according to the scope or level of a test: Unit testing Testing an individual or independent software unit (IEEE, 1990). A unit is defined as the smallest piece of software that can be independently tested (Beizer, 1990).
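A minimal unit-testing sketch using Python's standard `unittest` framework (the `average` function is illustrative): the unit is exercised in isolation, with no other parts of a system involved.

```python
import unittest

def average(values):
    """Unit under test: arithmetic mean of a non-empty list."""
    return sum(values) / len(values)

class TestAverage(unittest.TestCase):
    """Unit test: the unit is exercised in isolation."""
    def test_typical_values(self):
        self.assertEqual(average([2, 4, 6]), 4)

    def test_single_value(self):
        self.assertEqual(average([7]), 7)

# Run the unit tests programmatically and record the outcome.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestAverage)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```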
The level of testing Integration testing This kind of testing is used to test the interaction between units that were already tested using unit testing (IEEE, 1990). System testing This kind of testing is conducted on a complete and integrated software system to evaluate its compliance with its specified requirements (IEEE, 1999).
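An integration-testing sketch with two hypothetical units: each is assumed to pass its own unit tests, and the integration test checks their interaction, for example that the second unit passes its arguments to the first in the form the first expects.

```python
def to_cents(amount_str):
    """Unit 1: parse an amount such as '12.34' into whole cents."""
    dollars, cents = amount_str.split(".")
    return int(dollars) * 100 + int(cents)

def add_amounts(a_str, b_str):
    """Unit 2: add two string amounts by delegating to Unit 1."""
    return to_cents(a_str) + to_cents(b_str)

# Integration test: exercises the interaction between the two units,
# not either unit in isolation.
assert add_amounts("1.50", "2.25") == 375
```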
4) The quality attribute or system behavior Testing techniques can be distinguished according to the quality attribute or system behavior being tested such as performance, robustness, and correctness. Examples of these kinds of testing are:
Performance testing Used to assess the performance quality attribute of a system or component and is defined as: “Testing conducted to evaluate the compliance of a system or component with specified performance requirements” (IEEE, 1990). A performance requirement may include the speed with which a given function must be performed (IEEE, 1990).
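A performance-testing sketch: the operation and the one-second requirement below are illustrative, but the pattern, measuring execution time and asserting it against a stated performance requirement, is the general one.

```python
import time

def build_index(n):
    """Operation under test: build a lookup table of n entries."""
    return {i: str(i) for i in range(n)}

# Illustrative performance requirement: the operation must complete
# within 1 second for 100,000 entries.
start = time.perf_counter()
index = build_index(100_000)
elapsed = time.perf_counter() - start

assert elapsed < 1.0, f"performance requirement violated: {elapsed:.3f}s"
```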
Robustness testing Robustness testing is used to assess the robustness quality attribute of a software system. Robustness testing includes testing techniques such as boundary value based robustness testing (Jorgensen, 2002) and Interface Propagation Analysis (IPA). Robustness testing is defined as: testing how a system or software component reacts when the environment shows unexpected behavior.
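A robustness-testing sketch (the component and the injected fault are hypothetical): the test simulates an environment fault, a failing file system, and checks that the component reacts gracefully instead of crashing.

```python
def read_config(open_file=open):
    """Component under test: falls back to a default value when the
    environment (the file system) misbehaves."""
    try:
        with open_file("config.txt") as f:
            return f.read().strip()
    except OSError:
        return "default"

def failing_open(path):
    """Simulated environment fault: every file access fails."""
    raise OSError("disk unavailable")

# Robustness test: the component must survive the environment fault.
assert read_config(open_file=failing_open) == "default"
```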
Security testing Used to assess the security quality attribute of a system or component by testing whether an intruder can read or modify the system's data or functionality.
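A minimal security-testing sketch (the access-control function is hypothetical): the test plays the role of an intruder and checks that the data cannot be read without authorization.

```python
def read_record(user, owner, data):
    """Component under test: only the owner may read the data."""
    if user != owner:
        raise PermissionError("access denied")
    return data

# Security test: an intruder must not be able to read the data.
try:
    read_record(user="intruder", owner="alice", data="secret")
    raise AssertionError("intruder was able to read the data")
except PermissionError:
    pass  # access correctly denied

# The legitimate owner must still be able to read it.
assert read_record(user="alice", owner="alice", data="secret") == "secret"
```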
Load testing Used to test if a system or component can cope with heavy loads such as being used by many users at the same time.
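A load-testing sketch: many simulated users (threads) exercise a shared resource at once, and the test checks that the system still behaves correctly under that load. The counter-based "system" here is illustrative.

```python
import threading

counter = 0
lock = threading.Lock()

def simulated_user(requests):
    """Each thread plays the role of one concurrent user."""
    global counter
    for _ in range(requests):
        with lock:  # the shared resource placed under load
            counter += 1

# Load test: 20 simulated users, 1000 requests each, all at once.
threads = [threading.Thread(target=simulated_user, args=(1000,))
           for _ in range(20)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Under load, the system must still produce the correct result.
assert counter == 20 * 1000
```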
Relation between categories The above four categories or dimensions (the availability of source code, the role of testing, the level of testing, and the quality attribute or system behavior) of testing techniques are not disjoint; for example, boundary value based robustness testing (Jorgensen, 2002) belongs to black-box testing, unit testing, and fault-based testing at the same time. Other black-box testing techniques, such as syntax testing (Beizer, 1990), can also be considered fault-based testing techniques.