
1 TQS - Teste e Qualidade de Software (Software Testing and Quality)
Software Testing Concepts
João Pascoal Faria
jpf@fe.up.pt | www.fe.up.pt/~jpf

2 Software testing
Software testing consists of the dynamic (1) verification of the behavior of a program on a finite (2) set of test cases, suitably selected (3) from the usually infinite executions domain, against the specified expected (4) behavior. [SWEBOK]
(1) Testing always implies executing the program on some inputs.
(2) Even for simple programs, so many test cases are theoretically possible that exhaustive testing is infeasible; there is a trade-off between limited resources and schedules and inherently unlimited test requirements.
(3) See the test case design techniques later on how to select test cases.
(4) It must be possible to decide whether the observed outcomes of the program are acceptable or not; the pass/fail decision is commonly referred to as the oracle problem.

3 Purpose of software testing
"Program testing can be used to show the presence of bugs, but never to show their absence!” [Dijkstra, 1972] … because exhaustive testing is usually impossible The primary goal of software testing is to find defects "The goal of a software tester is to find bugs, find them as early as possible, and make sure that they get fixed“ [Ron Patton] Defect testing – find defects, using test data and test cases that have higher probability of finding defects A secondarily goal is to increase the confidence on the software correctness and to assess software quality Statistical testing – estimate the value of software quality metrics, using representative test cases and test data

4 Test cases
Test case: (1) A set of test inputs, execution conditions, and expected results developed for a particular objective, such as to exercise a particular program path or to verify compliance with a specific requirement. (2) (IEEE Std ) Documentation specifying inputs, predicted results, and a set of execution conditions for a test item. [IEEE Standard Glossary of Software Engineering Terminology]
A test case may be accompanied by: an identifier, a description, the test item under test, a test procedure, and dependencies with respect to other test cases.
Inputs / outputs may include an initial / final state of the system.
A test case may also be defined by a sequence of inputs and outputs.
When test cases are executed, the system is provided with the specified inputs and the actual outputs are compared with the expected outputs.

Test case | Input data: a | Input data: b | Expected output: mdc(a, b)
1         |               |               |
2         |               |               |
3         |               |               |
4         |               |               |
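As an illustration only (the rows above are left blank as an exercise), the sketch below shows how such test cases could be realised as executable JUnit 5 tests, assuming mdc(a, b) computes the greatest common divisor; the class names and the chosen input/expected-output values are hypothetical, not taken from the slide.

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// Hypothetical system under test: mdc(a, b) is assumed here to compute the
// greatest common divisor of a and b (not stated on the slide).
class Mdc {
    static int mdc(int a, int b) {
        return b == 0 ? a : mdc(b, a % b);
    }
}

// Each test method realises one row of the test case table above:
// the specified inputs are supplied and the actual output is compared
// with the expected output.
class MdcTest {
    @Test void bothArgumentsPositive() { assertEquals(6, Mdc.mdc(12, 18)); }
    @Test void coprimeArguments()      { assertEquals(1, Mdc.mdc(7, 9)); }
    @Test void secondArgumentZero()    { assertEquals(5, Mdc.mdc(5, 0)); }
    @Test void equalArguments()        { assertEquals(4, Mdc.mdc(4, 4)); }
}
```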

5 Test types
[Diagram: test types classified along several dimensions]
Level or phase: unit, integration, system, acceptance.
Quality attributes: functionality, reliability, usability, performance, robustness, security.
Test case design strategy/technique: black box (or functional), white box (or structural).
Other criteria: API / GUI testing, developer / customer tests, statistical / defect testing, manual / automated.

6 Test phases and the extended V-model of software development
[Figure: the extended V-model (source: I. Burnstein, pg. 15)]
Development side: specify requirements (requirements review) -> design (design review) -> code (code reviews).
In parallel, for each test level, the test plan & test cases are specified/designed, reviewed/audited, and coded: unit tests, integration tests, system/acceptance tests.
Execution side: execute unit tests -> execute integration tests -> execute system and acceptance tests.
And with TDD?

7 Test phases – Unit Testing
Testing of individual (hardware or) software units or groups of related units. [IEEE Standard Glossary of Software Engineering Terminology]
Usually API testing.
Usually the responsibility of the developer -> developer tests.
Tests are usually based on experience, specifications and code.
A principal goal is to detect functional and structural defects in the unit.

8 Test phases – Integration testing
Testing in which software components, hardware components, or both are combined and tested to evaluate the interaction between them. [IEEE Standard Glossary of Software Engineering Terminology]
Usually the responsibility of an independent test team (except sometimes in small projects).
Tests are usually based on a system specification (technical specifications, designs).
A principal goal is to detect defects that occur on the interfaces of units (interaction testing).
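A minimal sketch of what an interaction-focused test might look like, using two hypothetical units (DiscountPolicy and PriceCalculator) invented for this example; the point is that the test exercises the interface between the two combined units rather than either unit in isolation.

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// Two hypothetical units: a discount policy and a calculator that delegates to it.
class DiscountPolicy {
    double discountFor(int quantity) { return quantity >= 10 ? 0.10 : 0.0; }
}

class PriceCalculator {
    private final DiscountPolicy policy;
    PriceCalculator(DiscountPolicy policy) { this.policy = policy; }
    double total(double unitPrice, int quantity) {
        return unitPrice * quantity * (1.0 - policy.discountFor(quantity));
    }
}

// Integration test: both real units are combined, so a mismatch on the
// interface (e.g. discount expressed as a percentage vs. a fraction)
// would be detected here even if each unit passes its own unit tests.
class PriceCalculatorIntegrationTest {
    @Test void discountIsAppliedThroughThePolicy() {
        PriceCalculator calculator = new PriceCalculator(new DiscountPolicy());
        assertEquals(90.0, calculator.total(10.0, 10), 1e-9);
    }
}
```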

9 Test phases – System testing
Testing conducted on a complete, integrated system to evaluate the system's compliance with its specified requirements. [IEEE Standard Glossary of Software Engineering Terminology]
Usually GUI testing.
Usually the responsibility of an independent test team.
Tests are usually based on a requirements document.
The goal is to ensure that the system performs according to its requirements, by evaluating both functional behavior and quality requirements such as reliability, usability, performance and security.

10 Test phases – Acceptance testing
(1) Formal testing conducted to determine whether or not a system satisfies its acceptance criteria and to enable the customer to determine whether or not to accept the system. (2) Formal testing conducted to enable a user, customer, or other authorized entity to determine whether to accept a system or component. [IEEE Standard Glossary of Software Engineering Terminology]
Usually the responsibility of the customer -> customer tests.
Tests are usually based on a requirements document or a user manual.
A principal goal is to check whether customer requirements and expectations are met.

11 Test phases – Regression testing
Selective retesting of a system or component to verify that modifications have not caused unintended effects and that the system or component still complies with its specified requirements. [IEEE Standard Glossary of Software Engineering Terminology]

12 What is a good test case?
Capability to find defects
  Particularly defects with higher risk: Risk = frequency of failure * impact of failure ≈ cost (of post-release failure)
Capability to exercise multiple aspects of the system under test
  Reduces the number of test cases required and the overall cost
Low cost
  Development: specify, design, code
  Execution (fast)
  Result analysis: pass/fail analysis, defect localization
Easy to maintain
  Reduces whole life-cycle cost
  Maintenance cost ≈ size of test artefacts
(See also: "What Is a Good Test Case?", Cem Kaner, Florida Institute of Technology, 2003)

13 Test adequacy/coverage criteria
Adequacy criteria – criteria to decide if a given test suite is adequate, i.e., to give us "enough" confidence that "most" of the defects are revealed.
Used in the evaluation and in the design/selection of test cases.
In practice, reduced to coverage criteria.
Coverage criteria:
  Requirements/specification coverage (black-box): at least one test case for each requirement / specification statement.
  Code coverage (white-box): control flow coverage (statement, decision, MC/DC coverage, ...); data flow coverage.
  Model coverage: state-transition coverage; use case and scenario coverage.
  Fault coverage.
See also: "Software Unit Test Coverage and Adequacy", Hong Zhu et al., ACM Computing Surveys, December 1997.
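To make the difference between statement and decision (branch) coverage concrete, here is a small hypothetical example (not from the slides): a single test with a positive input already executes every statement, but decision coverage additionally requires a test that takes the false outcome of the condition.

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// Hypothetical unit used only to contrast statement and decision coverage.
class Classifier {
    static String classify(int x) {
        String label = "non-positive";
        if (x > 0) {             // decision with a true and a false outcome
            label = "positive";  // statement reached only on the true outcome
        }
        return label;
    }
}

class ClassifierTest {
    // This test alone executes every statement (100% statement coverage) ...
    @Test void positiveInput()    { assertEquals("positive", Classifier.classify(5)); }

    // ... but decision (branch) coverage also requires the false outcome of x > 0.
    @Test void nonPositiveInput() { assertEquals("non-positive", Classifier.classify(0)); }
}
```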

14 Example: test coverage requirements in the aerospace sector
DAL: development assurance level (determined by the consequences of a failure in the component).
Source: Nuno Silva (Critical Software), FEUP.

15 Test case design strategies and techniques
Black-box testing (not code-based; sometimes called functional testing)
  Tester's view: inputs and outputs (the program is seen as a black box).
  Knowledge sources: requirements document, specifications, user manual, models, domain knowledge, defect analysis data, intuition, experience.
  Techniques / methods: equivalence class partitioning, boundary value analysis, cause-effect graphing, error guessing, random testing, state-transition testing, scenario-based testing.
White-box testing (also called code-based or structural testing)
  Tester's view: the internal structure of the program.
  Knowledge sources: program code, control flow graphs, data flow graphs, cyclomatic complexity, high-level design, detailed design.
  Techniques / methods: control flow testing/coverage (statement coverage, branch (or decision) coverage, condition coverage, branch and condition coverage, modified condition/decision coverage, multiple condition coverage, independent path coverage, path coverage); data flow testing/coverage; class testing/coverage; mutation testing.
(adapted from: I. Burnstein, pg. 65)
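As a sketch of the first two black-box techniques listed above, assume a hypothetical unit that accepts marks in the range 0..100 (the unit and the range are inventions for this example): equivalence class partitioning picks one representative per class (below range, in range, above range), and boundary value analysis adds values at and immediately around each boundary.

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.*;

// Hypothetical unit: accepts a mark in 0..100 and rejects everything else.
class Grading {
    static boolean isValidMark(int mark) { return mark >= 0 && mark <= 100; }
}

class GradingTest {
    // Equivalence class partitioning: one representative per class.
    @Test void belowRangeIsInvalid() { assertFalse(Grading.isValidMark(-10)); }
    @Test void inRangeIsValid()      { assertTrue(Grading.isValidMark(50)); }
    @Test void aboveRangeIsInvalid() { assertFalse(Grading.isValidMark(150)); }

    // Boundary value analysis: values on and immediately around each boundary.
    @Test void lowerBoundary() {
        assertFalse(Grading.isValidMark(-1));
        assertTrue(Grading.isValidMark(0));
        assertTrue(Grading.isValidMark(1));
    }
    @Test void upperBoundary() {
        assertTrue(Grading.isValidMark(99));
        assertTrue(Grading.isValidMark(100));
        assertFalse(Grading.isValidMark(101));
    }
}
```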

16 Test automation
Automatic test case execution
  Requires that test cases are written in some executable language.
  Increases test development costs (coding) but practically eliminates test (re)execution costs, which is particularly important in regression testing.
  Unit testing frameworks and tools for API testing; capture/replay tools for GUI testing; performance testing tools.
Automatic test case generation
  Automatic generation of test inputs is easier than automatic generation of expected test outputs (which usually requires a formal specification).
  May reduce test development costs.
  Usually, inferior capability to find defects per test case, but the overall capability may be higher because many more test cases can be generated (than manually).
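A minimal sketch of automatic test input generation with a partial oracle, reusing the hypothetical mdc function from the earlier slide: the inputs are generated randomly, and instead of predicting each exact output, the test checks a property any greatest common divisor must satisfy (it is positive and divides both arguments).

```java
import java.util.Random;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertTrue;

class MdcRandomTest {
    static int mdc(int a, int b) { return b == 0 ? a : mdc(b, a % b); }

    // Inputs are generated automatically; the exact expected output is not computed.
    // A property-based (partial) oracle replaces the exact expected value.
    @Test void resultDividesBothArguments() {
        Random random = new Random(42); // fixed seed keeps the run reproducible
        for (int i = 0; i < 1000; i++) {
            int a = random.nextInt(10_000) + 1;
            int b = random.nextInt(10_000) + 1;
            int d = mdc(a, b);
            assertTrue(d > 0 && a % d == 0 && b % d == 0,
                       "mdc(" + a + ", " + b + ") = " + d);
        }
    }
}
```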

17 Testing best practices (1)
Test as early as possible.
Automate test case execution (JUnit/NUnit, etc.), because of the frequent need for regression testing (repetition of testing each time the software is modified).
Write the test cases before the software to be tested (TDD; FIT):
  applies to any level: unit, integration or acceptance/system;
  helps to gain insight into the requirements;
  test cases are verifiable partial specifications.
The more critical the system, the more independent the tester should be: a peer, another department, another company (*).
(*) ISVV - Independent Software V&V - assures technical, managerial and financial independence.
Be conscious about cost.

18 Testing best practices (2)
Use test cases to objectively measure project progress.
Combine tests with reviews.
Start by designing test cases based on the specification (black-box) and subsequently refine them to cover the code (white-box).
Test-to-pass (1) in the first test iterations, and test-to-fail (2) in subsequent test iterations:
  (1) check whether the software fundamentally works, with valid input, without stressing the system;
  (2) try to "break" the system, with valid inputs at the operational limits or with invalid inputs.
(source: Ron Patton)
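A small sketch of the test-to-pass / test-to-fail distinction, using a hypothetical withdraw operation invented for this example: the test-to-pass exercises an ordinary valid scenario, while the test-to-fail operates at the operational limit and with invalid input, expecting the system to reject it cleanly.

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.*;

// Hypothetical unit used only to contrast test-to-pass with test-to-fail.
class Account {
    private double balance;
    Account(double initial) { balance = initial; }
    void withdraw(double amount) {
        if (amount <= 0 || amount > balance)
            throw new IllegalArgumentException("invalid amount: " + amount);
        balance -= amount;
    }
    double balance() { return balance; }
}

class AccountTest {
    // Test-to-pass: valid input, no stress on the system.
    @Test void ordinaryWithdrawal() {
        Account account = new Account(100.0);
        account.withdraw(30.0);
        assertEquals(70.0, account.balance(), 1e-9);
    }

    // Test-to-fail: operate at the limit and with invalid input, trying to "break" it.
    @Test void withdrawalAtTheLimitAndBeyond() {
        Account account = new Account(100.0);
        account.withdraw(100.0);                    // exactly at the limit
        assertEquals(0.0, account.balance(), 1e-9);
        assertThrows(IllegalArgumentException.class,
                     () -> account.withdraw(0.01)); // invalid: exceeds the balance
    }
}
```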

19 References and further reading
Practical Software Testing, Ilene Burnstein, Springer-Verlag, 2003
Software Testing, Ron Patton, SAMS, 2001
Testing Computer Software, 2nd Edition, Cem Kaner, Jack Falk, Hung Nguyen, John Wiley & Sons, 1999
Guide to the Software Engineering Body of Knowledge (SWEBOK), IEEE Computer Society
Software Engineering, Ian Sommerville, 6th Edition, Addison-Wesley, 2000
What Is a Good Test Case?, Cem Kaner, Florida Institute of Technology, 2003
Software Unit Test Coverage and Adequacy, Hong Zhu et al., ACM Computing Surveys, December 1997

