Published by Blake Sherman. Modified over 8 years ago.
Chapter 3: BASIC CONCEPTS OF TESTING
- Why software can never be perfect
- The terms commonly used by software testers
QUALITY REVOLUTION
- Global competition, outsourcing, off-shoring, and rising customer expectations, combined with tighter schedules
- Traditionally, quality was assessed only at the end of the product development cycle
- The new approach builds quality into all phases of the product development process
SOFTWARE QUALITY
- User view: quality is fitness for purpose
- Manufacturing view: quality is conformance to the specification
- Value-based view: quality depends on the amount a customer is willing to pay for it
- Product view: quality is tied to the inherent characteristics of the product
SOFTWARE QUALITY
A quality factor represents a behavioral characteristic of a system. Examples of high-level quality factors: correctness, reliability, efficiency, testability, maintainability, and reusability.
ROLE OF TESTING
Testing is a verification process for software quality assessment and improvement (Friedman and Voas, 1995). It is the process consisting of all lifecycle activities concerned with planning, preparing, and evaluating software products and related work products, in order to determine that they satisfy specified requirements, to demonstrate that they are fit for purpose, and to detect defects.
Stakeholders
Stakeholders in a test process are:
- programmers
- test engineers
- project managers
- software engineers
- customers
VERIFICATION AND VALIDATION
Activities for software quality assessment are divided into two broad categories:
- Verification: confirm that a product of a given development phase satisfies the requirements established before the start of that phase
- Validation: confirm that a product meets its intended use (its customer's expectations)
Software Testing Definitions
In April 1990, the Hubble Space Telescope was launched into orbit around the Earth. Testing of its mirror was difficult, since the telescope was designed for use in space. Tests were made to measure all of the mirror's attributes, and the measurements were compared with what was specified. Hubble was declared fit for launch. After it was put into operation, the images it returned were found to be out of focus: the specification itself was wrong. Verification against that specification passed, but no one confirmed that the telescope met the original requirement; validation was missing.
Static vs. Dynamic Analysis
- Static analysis: based on the examination of documents, namely requirements documents, software models, design documents, and source code
- Dynamic analysis: involves actual program execution in order to expose possible program failures
- Static analysis and dynamic analysis are complementary
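The contrast can be sketched in a few lines of Python. This is a minimal, illustrative example (the `mean` function and the "flag every division" rule are assumptions, not anything from the slides): the static step inspects the source text without running it, while the dynamic step executes the code to expose an actual failure.

```python
import ast

# --- Static analysis: examine the source code without running it ---
source = """
def mean(values):
    return sum(values) / len(values)
"""
tree = ast.parse(source)
# Flag every division operator; each one could fail at run time.
divisions = [node for node in ast.walk(tree)
             if isinstance(node, ast.BinOp) and isinstance(node.op, ast.Div)]
print("static: found", len(divisions), "division(s) to review")

# --- Dynamic analysis: execute the program to expose a failure ---
namespace = {}
exec(source, namespace)
try:
    namespace["mean"]([])          # actual execution with an empty list
except ZeroDivisionError:
    print("dynamic: failure exposed (division by zero)")
```

Note how the two are complementary: static analysis points at risky code but cannot tell whether it actually fails, while dynamic analysis demonstrates a concrete failure but only for the inputs that were tried.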
Static and Dynamic Analysis
(figure) Static verification applies to the requirements specification, the architecture, the detailed design, and the implementation code; dynamic V&V applies to the prototype and the implementation code.
OBJECTIVES OF TESTING
- It does work: in normal circumstances, the system performs its basic functions
- It does not work: the idea is to try to make the unit (or system) fail
- Reduce the risk of failure: as faults are discovered and fixed, the failure rate of a system generally decreases
- Reduce the cost of testing: produce low-risk software with fewer test cases, i.e., improve the effectiveness of the test cases
OBJECTIVES OF TESTING
- Measure the system quality
- Conform to a given standard: assess conformance to a specification (requirements, design, or product claims)
- Quality Assurance (QA): the responsibility of QA is to create and enforce standards and methods that improve the development process and prevent bugs from ever occurring
Testing Principles
1. Testing shows the presence of bugs, but cannot prove that there are no bugs
2. Exhaustive testing is impossible
3. Early testing: start testing activities early in the SDLC
4. Defect clustering
5. The pesticide paradox
6. Not all the bugs found will be fixed
CONCEPT OF COMPLETE TESTING
Complete, or exhaustive, testing means there are no undiscovered faults at the end of the test phase. For most systems, complete testing is nearly impossible. Why?
- The input domain of a system can be very large
- There are both valid and invalid inputs
- The program may have a large number of states
- There may be timing constraints on the inputs
CONCEPT OF COMPLETE TESTING
Example, the Windows Calculator:
- The calculator accepts numbers of up to 32 digits
- Test addition using both the mouse and the keyboard
- Test alphabetic input
- Test the Backspace and Delete keys
If you decide to eliminate any of these test conditions because you feel they are redundant or unnecessary, or just to save time, you have decided not to test the program completely.
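A quick back-of-the-envelope calculation makes the point concrete. Assuming only non-negative integer operands of up to 32 digits and testing nothing but addition (both simplifying assumptions), the input domain is already astronomically large:

```python
# Size of the input domain for just the "+" function of a calculator
# that accepts numbers of up to 32 digits (signs and decimals ignored).
digits = 32
values_per_operand = 10 ** digits        # 0 .. 10^32 - 1
pairs = values_per_operand ** 2          # every (a, b) combination: 10^64

# Even at a billion tests per second, exhaustive testing is hopeless:
seconds = pairs / 1e9
years = seconds / (60 * 60 * 24 * 365)
print(f"{pairs:.1e} test cases, about {years:.1e} years to run them all")
```

And this counts only one operation with valid inputs; invalid inputs, other operations, and key sequences like Backspace multiply the total further.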
CONCEPT OF COMPLETE TESTING
In practice, we select a subset of the input domain to test a program. The subset must be chosen in a systematic manner, so that from the behavior of program P on the subset D1 we can deduce properties of P's behavior on the entire input domain.
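One common systematic way to pick such a subset (a sketch, not the only technique) is to take one representative per equivalence class plus the boundary values; the class names and the 32-digit limit below are illustrative assumptions:

```python
# Systematic subset selection: instead of all 10**64 operand values,
# pick one representative per equivalence class plus the boundaries.
MAX = 10 ** 32 - 1                 # largest 32-digit operand (assumed limit)

equivalence_classes = {
    "zero":           0,
    "small positive": 7,           # any mid-range value stands in here
    "maximum":        MAX,
}
boundaries = [0, 1, MAX - 1, MAX]

subset_D1 = sorted(set(equivalence_classes.values()) | set(boundaries))
print(subset_D1)   # a handful of values standing in for the whole domain
```

The bet behind this selection is that the program treats every member of a class alike, so one representative per class (plus the edges, where faults cluster) tells us about the behavior on the whole domain.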
How Much Testing is Enough?
- If you decide not to test every possible test scenario, you have chosen to take on risk
- If you attempt to test everything, the costs go up dramatically, and the number of missed bugs declines to the point where it is no longer cost effective to continue
- The goal is to hit the optimal amount of testing, so that you test neither too much nor too little
How Much Testing is Enough?
(figure: the optimal amount of testing, between under-testing and over-testing)
Testing Principles
4. Defect clustering
- Defects tend to be found in clusters
- 20% (or less) of the modules account for 80% (or more) of the defects
- Programmers have bad days, and programmers often make the same mistake
- Testers should use this knowledge as a guide for input selection
Testing Principles
5. The pesticide paradox
- The more you test software, the more immune it becomes to your tests
- In iterative development models, the test process repeats in each iteration
- Software testers must continually write new and different tests to exercise different parts of the program and find more bugs
Testing Principles
6. Not all the bugs found will be fixed. Reasons not to fix a bug:
- There is not enough time
- It is really not a bug, it is a feature
- It is too risky to fix
- Bugs that would occur infrequently, or that appear in little-used features, may be dismissed
The decision-making process usually involves the software testers, the project managers, and the programmers (the Intel Pentium bug).
WHAT IS A TEST CASE?
Basic form: a pair <input, expected outcome>.
- In stateless systems, the outcome depends solely on the current input, so a test case is a single such pair
- In state-oriented systems, the program outcome depends both on the current state of the system and on the current input, so a test case is a sequence of <input, expected outcome> pairs
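The two forms can be sketched directly in code. This is a minimal illustration (the `TestCase` class, the addition function, and the stack operations are all made up for the example): a stateless test is one pair, while a state-oriented test is a sequence of pairs executed in order.

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    """Basic form of a test case: <input, expected outcome>."""
    input: object
    expected: object

# Stateless system: the outcome depends only on the current input.
tc = TestCase(input=(2, 3), expected=5)
assert (lambda a, b: a + b)(*tc.input) == tc.expected

# State-oriented system: the outcome depends on state *and* input,
# so a test case is a sequence of <input, expected outcome> pairs.
stack = []
sequence = [("push 1", None), ("push 2", None), ("pop", 2)]
for op, expected in sequence:
    if op.startswith("push"):
        stack.append(int(op.split()[1]))   # no observable outcome to check
    else:
        assert stack.pop() == expected     # outcome reflects earlier pushes
```

The final `pop` illustrates the state dependence: its expected outcome (2) is meaningful only because the two pushes came before it.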
TESTING ACTIVITIES
1. Identify an objective to be tested: a clear purpose must be associated with every test case
2. Select inputs: inputs are selected based on the test objective
3. Compute the expected outcome
TESTING ACTIVITIES
4. Set up the execution environment of the program: network connections, the right database, remote sites, etc.
5. Execute the program
6. Analyze the test result: compare the actual outcome of program execution with the expected outcome
7. Write a test report after the analysis
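The seven activities can be sketched for a single hypothetical test; everything here (the objective, the input, the use of `sorted` as the program under test) is an illustrative assumption, and the "environment" step is trivial because only the standard library is needed:

```python
def run_test(objective, test_input, expected, program):
    actual = program(test_input)                          # 5. execute
    verdict = "PASS" if actual == expected else "FAIL"    # 6. analyze
    return {"objective": objective, "input": test_input,  # 7. report
            "expected": expected, "actual": actual, "verdict": verdict}

report = run_test(
    objective="sorting preserves duplicate elements",  # 1. objective
    test_input=[3, 1, 3],                              # 2. selected input
    expected=[1, 3, 3],                                # 3. expected outcome
    program=sorted,                                    # 4. environment: stdlib only
)
print(report["verdict"])
```

In a real test process step 4 (databases, network connections, remote sites) usually dominates the effort, which is why it appears as its own activity.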
Testing Activities
(figure) The flow of activities: Identify -> Design -> Build -> Execute -> Compare.
- Identify: what item is to be verified? What are the objectives?
- Design: how can the "what" be tested? (identify the inputs and the expected outcomes)
- Build: build the test cases (implement scripts, data)
- Execute: run the system
- Compare: compare the test case outcome with the expected outcome, producing the test result
SOURCES OF INFORMATION FOR TEST CASE SELECTION
- Requirements and functional specifications
- Source code
- Input and output domains
WHITE-BOX AND BLACK-BOX TESTING
White-box (structural) testing techniques: procedures to derive and/or select test cases based on an analysis of the internal structure of a component or system. White-box testing examines source code with a focus on control flow and data flow. Control flow refers to the flow of control from one instruction to another; data flow refers to the propagation of values from one variable or constant to another variable.
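A minimal white-box sketch (the `classify` function is invented for the example): the tester reads the code, identifies its control-flow branches, and picks one input per branch so that every path is exercised.

```python
def classify(n):
    if n < 0:              # branch 1
        return "negative"
    if n == 0:             # branch 2
        return "zero"
    return "positive"      # branch 3 (fall-through)

# White-box test selection: one input per control-flow branch,
# chosen by reading the code above, not a specification.
assert classify(-5) == "negative"   # covers the n < 0 branch
assert classify(0) == "zero"        # covers the n == 0 branch
assert classify(9) == "positive"    # covers the fall-through branch
```

Three inputs suffice here precisely because the selection was driven by the code's structure; a tester who could not see the code would have no way to know that three branches exist.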
WHITE-BOX AND BLACK-BOX TESTING
Black-box (functional) testing techniques: procedures to derive and/or select test cases based on an analysis of the specification, either functional or non-functional, of a component or system, without reference to its internal structure.
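By contrast, a black-box sketch derives every test case from the specification alone. Here the (assumed) specification is the Gregorian leap-year rule, and the test inputs come straight from its clauses, with no reference to how `is_leap` is written:

```python
def is_leap(year):
    # Internal structure: irrelevant to the black-box tester.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Black-box test selection: one case per clause of the specification
# ("divisible by 4, except centuries, except every fourth century").
assert is_leap(1996)          # divisible by 4
assert not is_leap(2019)      # not divisible by 4
assert not is_leap(1900)      # century not divisible by 400
assert is_leap(2000)          # century divisible by 400
```

The same four cases would be equally valid against any other implementation of the rule, which is exactly the point of functional testing.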
WHITE-BOX AND BLACK-BOX TESTING
Scope of white-box and black-box testing:
- Structural testing techniques are applied to individual units of a program
- Functional testing techniques can be applied to both an entire system and individual program units
Who performs white-box and black-box testing?
- Structural testing: the programmer
- Functional testing: a separate testing team
WHITE-BOX AND BLACK-BOX TESTING
A combination of structural and functional testing techniques must be used in program testing.
Fundamental Test Process
TEST PLANNING AND CONTROL
The purpose of test planning is to get ready and organized for test execution. A test plan provides a framework, the scope, details of the resources needed, the effort required, a schedule of activities, and a budget.
- Define the test exit criteria: exit criteria are how we know when testing is finished
- A framework is a set of ideas, facts, or circumstances within which the tests will be conducted
TEST ANALYSIS AND DESIGN
Test design is a phase of software testing in which:
- The system requirements are critically studied
- System features to be tested are identified
- The objectives of test cases and the detailed behavior of test cases are defined
- A set of inputs is created that will produce a set of expected outputs
- Test steps are written
Test Implementation and Execution
- Developing and prioritizing test cases, creating test data, writing test procedures and, optionally, preparing test harnesses and writing automated test scripts
- Collecting test cases into test suites
- Checking that the test environment is set up correctly
- Running test cases
- Keeping a log of testing activities, including the outcomes
- Comparing actual results with expected results
- Reporting discrepancies as incidents
- Repeating test activities when changes have been made: retesting and regression testing
Regression testing
- Performed whenever a component of the system is modified
- Ascertains (determines) that the modification has not introduced any new faults (e.g., after fixing code)
- A subset of the existing test cases is repeated
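Selecting that subset can be sketched as follows; the test names, the component names, and the test-to-component mapping are all hypothetical, and real regression selection is usually derived from coverage or dependency data rather than a hand-written table:

```python
# Regression sketch: after modifying one component, repeat only the
# existing test cases that exercise it (the mapping below is assumed).
test_suite = {
    "test_login":    ["auth"],
    "test_checkout": ["cart", "auth"],
    "test_search":   ["catalog"],
}

modified = {"auth"}   # the component that was just changed

regression_subset = [name for name, components in test_suite.items()
                     if modified & set(components)]
print(regression_subset)   # only the tests touching "auth" are repeated
```

Running `test_search` here would add cost without adding information, which is the economic argument for repeating a subset rather than the whole suite.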
Evaluating Exit Criteria and Reporting
- Checking whether the previously determined exit criteria have been met
- Writing up the results of the testing activities
Test Closure Activities
- Ensuring that the documentation is in order
- Closing down and archiving the test environment
TEST LEVELS
A software system goes through four stages of testing before it is actually deployed:
- Unit testing: programmers test individual program units, such as procedures, functions, methods, or classes, in isolation
- Integration testing: performed jointly by software developers and integration test engineers; modules are assembled to construct larger subsystems by following integration testing techniques
TEST LEVELS
- System-level testing: includes functionality testing, security testing, robustness testing, load testing, stability testing, stress testing, performance testing, and reliability testing
- Acceptance testing: done by the customer to measure the quality of the product
TEST TEAM ORGANIZATION AND MANAGEMENT
Testing is a distributed activity, conducted at different levels throughout the life cycle of a software system:
- Unit-level tests are developed and executed by the programmers
- System integration testing is performed by the system integration test engineers
- A separate team, overseen by a testing manager, performs system-level testing