Software Engineering Principle (SEP521)


1 Software Engineering Principle (SEP521)
Software Testing - I Jongmoon Baik

2 Objectives
To define and understand what software testing is
To understand software testing strategies
To describe software testing processes
To understand different software testing approaches
Different testing levels and approaches
White-box vs. black-box tests

3 What is Software Testing?
"[Software testing] is the design and implementation of a special kind of software system: one that exercises another software system with the intent of finding bugs."
Robert V. Binder, Testing Object-Oriented Systems: Models, Patterns, and Tools (1999)

"[Software testing] is the technical operation that consists of the determination of one or more characteristics of a given product, process, or service according to a specified procedure."
ISO/IEC 1991

4 What is Software Testing? (Cont.)
Software testing typically involves:
Execution of the software with representative inputs under actual operational conditions
Comparison of predicted outputs with actual outputs
Comparison of expected states with resulting states
Measurement of execution characteristics (execution time, memory usage, etc.)
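The activities above can be sketched in a few lines of Python. This is a minimal illustration, not code from the slides: `word_count` is a hypothetical component under test, each input is paired with a predicted output, and `time.perf_counter` stands in for measuring an execution characteristic.

```python
import time

# Hypothetical component under test (stands in for any software unit).
def word_count(text):
    return len(text.split())

# Representative inputs paired with predicted outputs.
test_data = [
    ("hello world", 2),
    ("", 0),
    ("  spaced   out  ", 2),
]

for given, predicted in test_data:
    start = time.perf_counter()
    actual = word_count(given)              # execute the software under test
    elapsed = time.perf_counter() - start   # measure an execution characteristic
    status = "PASS" if actual == predicted else "FAIL"
    print(f"{status}: word_count({given!r}) = {actual}, "
          f"expected {predicted}, took {elapsed:.6f}s")
```

In a real test harness the comparison of expected states (not just outputs) would also be checked, for example by inspecting the component's internal data after each call.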

5 Goals of Software Testing
Immediate goal
Identify discrepancies between actual results and expected behavior
Ultimate goal
Demonstrate conformance to the specification
Guaranteeing correctness is not realistic in general; provide an indication of correctness, reliability, safety, security, performance, usability, etc.
Practical goal
Improve software reliability at an acceptable cost
Increase the software's usefulness

6 Software Testing Issues
Is an exhaustive test possible?
How much testing is required?
How do we find a few inputs that represent the entire input domain?
When can we stop testing?
Which procedure is appropriate?
How do we generate test data?

7 Test Case, Test Oracle, & Test Bed
Test Case
A test-related item that contains the following information:
A set of test inputs
Execution conditions
Expected outputs
Test Oracle
A document or piece of software that allows the tester to determine whether a test has passed or failed
e.g.: a specification (pre- and postconditions), a design document, a set of requirements
Test Bed
An environment that contains all the hardware and software needed to test a software component or a software system
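These three concepts can be made concrete with Python's `unittest` module. In this sketch (the `isqrt` function is invented for illustration), each test method is a test case with inputs, execution conditions, and an expected result, while the specification's pre- and postconditions act as the oracle:

```python
import unittest

# Hypothetical component under test. Its "specification":
# precondition n >= 0; postcondition r*r <= n < (r+1)*(r+1).
def isqrt(n):
    if n < 0:
        raise ValueError("n must be non-negative")
    r = 0
    while (r + 1) * (r + 1) <= n:
        r += 1
    return r

class IsqrtTestCase(unittest.TestCase):
    def test_typical_input(self):
        # test input: 10; expected output judged by the spec's
        # postcondition, which serves as the oracle
        r = isqrt(10)
        self.assertTrue(r * r <= 10 < (r + 1) * (r + 1))

    def test_precondition_violation(self):
        # the spec (oracle) says negative input must be rejected
        with self.assertRaises(ValueError):
            isqrt(-1)

if __name__ == "__main__":
    unittest.main(argv=["isqrt_tests"], exit=False, verbosity=2)
```

The test bed here is trivially the Python interpreter; for an embedded component it would include target hardware, simulators, and supporting tools.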

8 Software Testing Principles (I)
Testing is the process of exercising a software component using a selected set of test cases, with the intent of
Revealing defects
Evaluating quality
When the test objective is to detect defects, a good test case is one that has a high probability of revealing an as-yet-undetected defect
Test results should be inspected meticulously
A test case must contain the expected output or result
Test cases should be developed for both valid and invalid input conditions
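The last two principles can be illustrated with a table-driven test. The `validate_age` function and its 0..120 range are invented for this sketch; the point is that every case, valid or invalid, carries its expected result so pass/fail can be judged mechanically:

```python
# Hypothetical component: accept integer ages in the range 0..120.
def validate_age(value):
    return isinstance(value, int) and 0 <= value <= 120

cases = [
    # (input, expected) -- valid input conditions
    (0, True), (35, True), (120, True),
    # invalid input conditions
    (-1, False), (121, False), ("35", False), (None, False),
]

for given, expected in cases:
    actual = validate_age(given)
    assert actual == expected, f"validate_age({given!r}) = {actual}, expected {expected}"
print("all cases passed")
```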

9 Software Testing Principles (II)
The probability of the existence of additional defects in a software component is proportional to the number of defects already detected in that component
Testing should be carried out by a group that is independent of the development group
Tests must be repeatable and reusable
Testing should be planned
Testing activities should be integrated into the software life cycle
Testing is a creative and challenging task
Ilene Burnstein, "Practical Software Testing", 2003

10 Exhaustive Testing
How many possible paths? About 10^14 paths (in the classic example, a small program whose loop may execute up to 20 times)
Assume one test case per millisecond
How long would it take to execute all the test cases? About 3,170 years (running 24 hours a day, 7 days a week)
Go with SELECTIVE TESTING
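The slide's figure can be checked with simple arithmetic:

```python
# Back-of-the-envelope check of the slide's numbers: ~10**14 paths,
# one test case per millisecond, running around the clock.
paths = 10 ** 14
ms_per_test = 1
seconds = paths * ms_per_test / 1000
years = seconds / (365 * 24 * 3600)
print(f"{years:,.0f} years")  # roughly 3,170 years
```

Even under these generous assumptions, exhaustive path coverage of a tiny program is infeasible, which is why selective testing is the only practical option.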

11 What is S/W Testing Strategy?
"An elaborate and systematic plan of a series of actions to be taken in software testing, to locate defects and remove them before the software system is released"
"There is no universal testing strategy"

12 Questions to be Answered
How do you conduct the tests?
Should you develop a formal plan for your tests? If so, when?
Should you test the entire program as a whole, or run tests only on a small part of it?
Should you rerun tests you've already conducted as you add new components to a large system?
When should you involve the customer?

13 A Testing Strategy
Figure: software test levels paired with development artifacts (V-model)
Unit Test <-> Low-Level Design / Code (Implementation)
Integration Test <-> High-Level Design
System Test <-> Software Requirements
Acceptance Test <-> System Requirements
Verification dominates the low-level tests; validation dominates the high-level tests.
Testing is a series of four steps, shown in the figure. To develop the software, we move downward from system engineering to code by decreasing the level of abstraction.
Unit testing begins at the lowest level, focusing on each unit of the software as implemented in source code, to ensure that it functions properly as a unit. It makes heavy use of techniques that exercise specific paths in a component's control structure, to ensure complete coverage and maximum error detection.
We then progress upward to integration testing, focusing on design and the construction of the software architecture. After components are assembled into a complete software package, integration testing addresses the dual problems of verification and program construction.
Next comes validation testing, where the requirements established during analysis are validated against the software. It provides final assurance that the software meets all functional, behavioral, and performance requirements.
Finally, in system testing, the software and other system elements are tested as a whole. After the software is combined with the other system elements (hardware, people, databases), system testing verifies that all elements mesh properly and that overall system function and performance are achieved.
Test level: a group of test activities that are organized and directed collectively.

14 Testing Scope
"Testing in the small": exercising the smallest executable unit of the software system
"Testing in the large": putting the entire system to the test
Begin with "testing in the small" and then progress to "testing in the large"
Early testing focuses on a single component or a small group of related components, and applies tests to uncover errors in the data and processing logic that have been encapsulated by the component(s).
The components are then integrated until the complete system is constructed. At this point, a series of high-order tests is executed to uncover errors in meeting customer requirements.
Test specification: defines a plan that describes an overall strategy, and a procedure that defines specific testing steps and the tests that will be conducted.

15 Characteristics for S/W Testing Strategies
To perform effective testing, a software team should conduct effective formal technical reviews
Testing begins at the component level and works "outward"
Different testing techniques are appropriate at different points in time
Testing is conducted by the developer and by an independent test group
Testing and debugging are different activities, but debugging must be accommodated in any testing strategy

16 A Way of Thinking about Testing
Constructive tasks: software analysis, design, coding, documentation
Destructive tasks: testing, which is seen as an attempt to "break" the software
Developers tend to treat testing lightly, designing and executing tests that demonstrate that the program works rather than tests that find errors
Misconceptions:
The developer of the software should do no testing at all
The software should be tossed over the wall to strangers, who will test it mercilessly
Testers get involved with the project only when the testing steps are about to begin

17 Software Testing Organization
Software Developers
Responsible for testing the individual units of the program
Also conduct integration testing
Correct uncovered errors
ITG (Independent Test Group)
Removes the inherent problems of developers testing their own programs
Reports to the SQA organization (independence)
Close collaboration between the development group and the ITG is required for successful testing throughout the life cycle

18 Strategic Issues
Specify product requirements in a quantifiable manner long before testing commences.
State testing objectives explicitly.
Understand the users of the software and develop a profile for each user category.
Develop a testing plan that emphasizes "rapid cycle testing".
Although the overriding objective of testing is to find errors, a good testing strategy also assesses other quality characteristics such as portability, maintainability, and usability. These should be specified in a measurable way so that testing results are unambiguous.
The specific objectives of testing should be stated in measurable terms. For example, test effectiveness, test coverage, mean time to failure, the cost to find and fix defects, remaining defect density or frequency of occurrence, and test work-hours per regression test should all be stated within the test plan.
Use cases that describe the interaction scenario for each class of user can reduce overall testing effort by focusing testing on actual use of the product.
A software engineering team "learn[s] to test in rapid cycles (2 percent of project effort) of customer-useful, at least 'trialable', increments of functionality and/or quality improvement". The feedback generated from these rapid-cycle tests can be used to control quality levels and the corresponding test strategies.
Tom Gilb '85
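As a small illustration of stating objectives in measurable terms, the snippet below computes a few of the metrics named above. All the figures are invented for the example; on a real project they would come from the defect-tracking and time-reporting systems:

```python
# Illustrative project figures (made up) to show how test objectives
# such as defect density and cost-to-fix become measurable quantities.
defects_found = 46
defects_fixed = 40
kloc = 12.5            # thousands of lines of code under test
fix_hours = 120.0      # total work-hours spent fixing defects
test_hours = 300.0     # total work-hours spent testing

defect_density = defects_found / kloc        # defects per KLOC
cost_per_fix = fix_hours / defects_fixed     # work-hours per fixed defect
yield_rate = defects_found / test_hours      # defects found per test-hour

print(f"defect density       : {defect_density:.2f} defects/KLOC")
print(f"cost per fixed defect: {cost_per_fix:.1f} work-hours")
print(f"detection yield      : {yield_rate:.2f} defects/test-hour")
```

A test plan would state target values for such metrics in advance, so that the test results are unambiguous.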

19 Strategic Issues (Cont.)
Build "robust" software that is designed to test itself
Use effective formal technical reviews as a filter prior to testing
Conduct formal technical reviews to assess the test strategy and the test cases themselves
Develop a continuous improvement approach for the testing process
Software should be designed in a manner that uses anti-bugging techniques; that is, software should be capable of diagnosing certain classes of errors. In addition, the design should accommodate automated testing and regression testing.
Formal technical reviews can be as effective as testing in uncovering errors. For this reason, reviews can reduce the amount of testing effort required to produce a high-quality product.
Formal technical reviews can uncover inconsistencies, omissions, and outright errors in the testing approach. This saves time and also improves product quality.
The test strategy should be measured. The metrics collected during testing should be used as part of a statistical process control approach for software testing.
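One common form of anti-bugging is to have the component check its own pre- and postconditions at run time, so certain classes of errors are diagnosed at the point of failure rather than propagating silently. A minimal sketch (the `binary_search` function is a stand-in, not code from the slides):

```python
# "Anti-bugging" sketch: the component asserts its own invariants,
# diagnosing misuse (unsorted input) and internal errors immediately.

def binary_search(items, target):
    # precondition check: the algorithm is only correct on sorted input
    assert items == sorted(items), "precondition violated: input not sorted"
    lo, hi = 0, len(items)
    while lo < hi:
        mid = (lo + hi) // 2
        if items[mid] < target:
            lo = mid + 1
        else:
            hi = mid
    found = lo < len(items) and items[lo] == target
    # postcondition check: a reported hit really is the target
    assert not found or items[lo] == target
    return lo if found else -1

print(binary_search([1, 3, 5, 7], 5))   # 2
print(binary_search([1, 3, 5, 7], 4))   # -1
```

Such checks also make automated regression testing easier, since a violated assumption fails loudly instead of producing a subtly wrong answer.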

20 Characteristics of “Testability”
Operability: it operates cleanly. "The better it works, the more efficiently it can be tested." If a system is designed and implemented with quality in mind, relatively few bugs will block the execution of tests, allowing testing to progress without fits and starts.
Observability: the results of each test case are readily observed. "What you see is what you test." Inputs provided as part of testing produce distinct outputs; system states and variables are visible or queryable during execution.
Controllability: the degree to which testing can be automated and optimized. "The better we can control the software, the more the testing can be automated and optimized." Software and hardware states and variables can be controlled directly by the test engineer; tests can be conveniently specified, automated, and reproduced.
Decomposability: testing can be targeted. "By controlling the scope of testing, we can more quickly isolate problems and perform smarter retesting." The software system is built from independent modules that can be tested independently.
Simplicity: reduce complex architecture and logic to simplify tests. "The less there is to test, the more quickly we can test it." The program should exhibit functional simplicity (the feature set is the minimum necessary to meet requirements), structural simplicity (the architecture is modularized to limit the propagation of faults), and code simplicity (a coding standard is adopted for ease of inspection and maintenance).
Stability: few changes are requested during testing. "The fewer the changes, the fewer the disruptions to testing." Changes to the software are infrequent and controlled when they do occur, they do not invalidate existing tests, and the software recovers well from failures.
Understandability: of the design. "The more information we have, the smarter we will test." The architectural design and the dependencies between internal, external, and shared components are well understood. Technical documentation is instantly accessible, well organized, specific, detailed, and accurate. Changes to the design are communicated to testers.
James Bach, 1994

21 What is a good test?
A high probability of finding an error
Not redundant
"Best of breed"
Neither too simple nor too complex
To achieve the first goal, the tester must understand the software and attempt to develop a mental picture of how the software might fail. Ideally, the classes of failure are probed.
Testing time and resources are limited. There is no point in conducting a test that has the same purpose as another test; every test should have a different purpose.
In a group of tests with a similar intent, time and resource limitations may permit executing only a subset. In such cases, the test with the highest likelihood of uncovering a whole class of errors should be used.
Although it is sometimes possible to combine a series of tests into one test case, the possible side effects of this approach may mask errors. In general, tests should be executed separately.

22 When to Stop Testing?
There is no definitive answer to the question.
A few pragmatic answers:
You're never done testing
Stop when you're out of time, money, or schedule
Stop based on statistical criteria (e.g., statistical modeling, software reliability theory)
Rule of thumb: stop when the error-detection rate drops
Basic questions: When are we done testing? How do we know that we have tested enough?
"You're never done": the burden simply shifts from you to your customer. Every time the customer or user executes the program, the program is being tested. This sobering fact underlines the importance of other software quality assurance activities.
"Stop when you're out of budget" is somewhat cynical but nonetheless accurate.
By collecting metrics during software testing and making use of existing software reliability models, it is possible to develop meaningful guidelines for answering the question "When are we done testing?" These empirical approaches are considerably better than raw intuition.
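The rule of thumb can be sketched as a simple stopping criterion over per-period defect counts. The weekly numbers and the two-week threshold below are invented for illustration; a real project would calibrate both from its own history:

```python
# Stop when the defect-detection rate stays at or below a threshold
# for two consecutive weeks (illustrative data and threshold).
weekly_defects = [14, 11, 9, 5, 2, 1, 1]
threshold = 2  # defects/week considered "rate has dropped"

stop_week = None
for week in range(1, len(weekly_defects)):
    if weekly_defects[week - 1] <= threshold and weekly_defects[week] <= threshold:
        stop_week = week + 1  # 1-based week at which the criterion is met
        break
print(f"stopping criterion met at week {stop_week}")
```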

23 Example: Reliability Growth Model
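The slide's figure is not reproduced in this transcript. As an illustration of the idea, the Goel-Okumoto model (a widely used software reliability growth model, chosen here as an example rather than taken from the slide) predicts the expected cumulative number of defects found after t hours of testing as mu(t) = a * (1 - exp(-b*t)), where a is the total expected defect count and b the per-defect detection rate. Both parameters would be fitted from the project's own failure data; the values below are invented:

```python
import math

# Goel-Okumoto reliability growth model (illustrative parameters):
# a = total expected defects, b = per-defect detection rate per hour.
a, b = 120.0, 0.02

def mu(t):
    """Expected cumulative defects found after t hours of testing."""
    return a * (1 - math.exp(-b * t))

for t in (10, 50, 100, 200):
    remaining = a - mu(t)
    print(f"t={t:4d}h  found = {mu(t):6.1f}  remaining = {remaining:5.1f}")
```

The curve flattens as t grows, and the estimated remaining defects give a quantitative handle on the "when to stop" question from the previous slide.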

24 Software Test Process
The test process is embedded in the entire software development process.
Test planning starts with requirements analysis.
The test plan and test procedures are developed systematically and in parallel with software development.
Process stages (figure): Test Case Design -> Test Data Selection -> Test Execution -> Evaluation
Associated artifacts (figure): Test Plan, Test Cases, Selection Criteria, Test Oracles, Target Software, Input Data Domain, Specification, Completion Criteria, Test Results, Test Reports

25 Test Planning
Test planning starts after the requirements specification and the development plan are made.
Goal of the test plan:
Define the scope, approach, resources, and schedule of the intended testing activities
Activities of test planning:
Definition of what will be tested and the approach that will be used
Mapping of tests to the specifications
Definition of the entry and exit criteria for each phase of testing
Organization of a test team
Estimation of the time and effort needed
Schedule of major milestones
Definition of the test environment (hardware and software)
Definition of the work products for each phase of testing
An assessment of test-related risks and a plan for their mitigation

26 Test Design
The test design phase includes the following activities:
Testability review of the test basis
Definition of testing units and pass/fail criteria
Allocation of test techniques
Specification of the testing environment
Generation of test procedures
Production of test cases/scripts/scenarios
Preparation of the test specification

27 Test Execution
Begins with the first delivery of testable component(s)
Check the completeness of the test object and the test environment
Install the test object into the environment
Execute (re-)tests
Compare and analyze the test results

28 Test Evaluation
Evaluate the test object
Evaluate the test process
Produce an evaluation report
Preserve testware
Discharge the test team

29 Software Test Management Practices
Initial test planning activities, including planning for V&V activities at each life-cycle phase
Defining and managing test objectives
Test requirements include the functions and logic to be tested, program constraints, software states, input and output data conditions, and usage scenarios; they can be selected based on concerns, risks, and business logic
Test progress monitoring activities
Product and test quality tracking activities
Test configuration control activities
CROSSTALK, June 1996

30 Q & A

