Unit 7 Chapter 8 Testing the Programs
Unit 7 Requirements: Read Chapters 8 and 9. Respond to the Unit 7 Discussion Board (25 points). Attend seminar/take the seminar quiz (20 points). Complete your assignment (50 points). Complete your learning journal (12 points).
Chapter 8 Objectives Types of faults and how to classify them The purpose of testing Unit testing Integration testing strategies Test planning When to stop testing
8.1 Software Faults and Failures Why Does Software Fail? Wrong requirement: not what the customer wants Missing requirement Requirement impossible to implement Faulty design Faulty code Improperly implemented design
8.1 Software Faults and Failures Objective of Testing Objective of testing: discover faults A test is successful only when a fault is discovered ◦Fault identification is the process of determining what fault caused the failure ◦Fault correction is the process of making changes to the system so that the faults are removed
8.1 Software Faults and Failures Types of Faults Algorithmic fault Computation and precision fault ◦a formula’s implementation is wrong Documentation fault ◦Documentation doesn’t match what the program does Capacity or boundary faults ◦System’s performance not acceptable when certain limits are reached Timing or coordination faults Performance faults ◦System does not perform at the speed prescribed Standard and procedure faults
8.1 Software Faults and Failures Typical Algorithmic Faults An algorithmic fault occurs when a component’s algorithm or logic does not produce proper output ◦Branching too soon ◦Branching too late ◦Testing for the wrong condition ◦Forgetting to initialize a variable or set loop invariants ◦Forgetting to test for a particular condition ◦Comparing variables of inappropriate data types Syntax faults
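To make this concrete, here is a hypothetical sketch (invented for these notes, not from the text) of one such fault: a loop that tests the wrong condition, exposed by a test case chosen at the boundary.

```python
# Invented example: an algorithmic fault in which the loop tests the
# wrong condition, so the boundary value is skipped.

def sum_through(n):
    """Intended to return 0 + 1 + ... + n, inclusive."""
    total = 0
    i = 0
    while i < n:        # FAULT: should be i <= n, so n itself is never added
        total += i
        i += 1
    return total

# A test case chosen at the boundary makes the fault visible:
expected, actual = 6, sum_through(3)
print(f"sum_through(3): expected {expected}, got {actual}")   # got 3 -- fault found
```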
8.2 Testing Issues Testing Organization Module testing, component testing, or unit testing Integration testing Function testing Performance testing Acceptance testing Installation testing
8.2 Testing Issues Attitude Toward Testing Egoless programming: programs are viewed as components of a larger system, not as the property of those who wrote them
8.2 Testing Issues Who Performs the Test? Independent test team ◦avoid conflict ◦improve objectivity ◦allow testing and coding concurrently
8.2 Testing Issues Views of the Test Objects Closed box or black box: functionality of the test objects Clear box or white box: structure of the test objects
8.2 Testing Issues Black Box Advantage ◦free of the constraints imposed by the internal structure Disadvantage ◦not possible to run a complete test, since every possible input cannot be tried
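A minimal sketch of the two views (the function and its tests are invented for illustration): black-box cases come from the specification alone, while white-box cases are chosen to exercise every branch of the code.

```python
# Invented example: the same function seen from both views.

def absolute(x):
    if x < 0:
        return -x
    return x

# Black-box view: cases derived from the specification alone
# ("returns the magnitude of x"), with no knowledge of the code.
assert absolute(5) == 5
assert absolute(-5) == 5
assert absolute(0) == 0

# White-box view: cases chosen so that each branch of the
# if-statement is executed at least once.
assert absolute(-1) == 1    # exercises the x < 0 branch
assert absolute(2) == 2     # exercises the fall-through branch
```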
8.2 Testing Issues Sidebar 8.2 Box Structures Black box: external behavior description State box: black box with state information White box: state box with a procedure
8.2 Testing Issues Factors Affecting the Choice of Test Philosophy The number of possible logical paths The nature of the input data The amount of computation involved The complexity of algorithms
8.3 Unit Testing Code Review Code walkthrough Code inspection
8.3 Unit Testing Sidebar 8.3 The Best Team Size for Inspections The preparation rate, not the team size, determines inspection effectiveness The team’s effectiveness and efficiency depend on their familiarity with their product
8.3 Unit Testing Testing versus Proving Proving: hypothetical environment Testing: actual operating environment
8.3 Unit Testing Steps in Choosing Test Cases Determining test objectives Selecting test cases Defining a test
8.3 Unit Testing Test Thoroughness Statement testing Branch testing Path testing Definition-use testing All-uses testing All-predicate-uses/some-computational-uses testing All-computational-uses/some-predicate-uses testing
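The sketch below (an invented example) contrasts the two weakest levels: a single test case can execute every statement yet leave a branch untried, so branch testing demands an extra case.

```python
# Invented example contrasting statement and branch testing.

def apply_discount(price, is_member):
    discount = 0
    if is_member:
        discount = price * 0.25
    return price - discount

# Statement testing: this one case executes every statement,
# yet the non-member branch is never taken.
assert apply_discount(100, True) == 75

# Branch testing: requires both outcomes of the if, forcing this case too.
assert apply_discount(100, False) == 100
```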
8.4 Integration Testing Bottom-up Top-down Big-bang Sandwich testing Modified top-down Modified sandwich
8.4 Integration Testing Terminology Component Driver: a routine that calls a particular component and passes a test case to it Stub: a special-purpose program to simulate the activity of the missing component
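A minimal sketch of both terms, with invented component names: the stub stands in for an unwritten low-level component (as in top-down integration), and the driver plays the role of the not-yet-built caller (as in bottom-up integration).

```python
# Stub: a special-purpose stand-in for a component that is not yet written.
def get_tax_rate_stub(region):
    return 0.25    # fixed, exactly representable value for predictable tests

# Component under test, wired to the stub instead of the real lookup.
def price_with_tax(price, tax_lookup=get_tax_rate_stub):
    return price * (1 + tax_lookup("default"))

# Driver: a throwaway routine that calls the component with test cases
# and checks the results, standing in for the not-yet-built caller.
def driver():
    assert price_with_tax(100.0) == 125.0
    assert price_with_tax(0.0) == 0.0
    print("all cases passed")

if __name__ == "__main__":
    driver()
```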
8.5 Testing Object-Oriented Systems Questions at the Beginning of Testing OO System Is there a path that generates a unique result? Is there a way to select a unique result? Are there useful cases that are not handled?
8.5 Testing Object-Oriented Systems Easier and Harder Parts of Testing OO Systems OO unit testing is less difficult, but integration testing is more extensive
8.6 Test Planning Establish test objectives Design test cases Write test cases Test test cases Execute tests Evaluate test results
8.6 Test Planning Purpose of the Plan Test plan explains ◦who does the testing ◦why the tests are performed ◦how tests are conducted ◦when the tests are scheduled
8.6 Test Planning Contents of the Plan What the test objectives are How the test will be run What criteria will be used to determine when the testing is complete
8.7 Automated Testing Tools Code analysis ◦Static analysis: code analyzer, structure checker, data analyzer, sequence checker ◦Output from static analysis
8.7 Automated Testing Tools (continued) Dynamic analysis ◦program monitors: watch and report program’s behavior Test execution ◦Capture and replay ◦Stubs and drivers ◦Automated testing environments Test case generators
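As a toy illustration of the program-monitor idea (not any specific tool), a decorator can watch a function as it runs and report each call and result, the kind of behavioral record a dynamic-analysis tool gathers automatically.

```python
import functools

def monitor(func):
    """A toy program monitor: report each call and its result."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        result = func(*args, **kwargs)
        print(f"{func.__name__}{args} -> {result!r}")
        return result
    return wrapper

@monitor
def interest(balance, rate):
    return balance * rate

interest(1000, 0.1)    # prints: interest(1000, 0.1) -> 100.0
```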
8.8 When to Stop Testing Identifying Fault-Prone Code Track the number of faults found in each component during development Collect measurements (e.g., size, number of decisions) about each component Classification trees: a statistical technique that sorts through large arrays of measurement information and creates a decision tree to show the best predictors ◦A tree helps in deciding which components are likely to have a large number of faults
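One way to run the classification-tree idea is sketched below with scikit-learn's DecisionTreeClassifier (an assumption of these notes, not the book's tooling); the component measurements and fault labels are invented for illustration.

```python
# Sketch assuming scikit-learn is available; data is invented.
from sklearn.tree import DecisionTreeClassifier

# Each row: [size in lines of code, number of decisions]
measurements = [
    [120, 5], [800, 40], [90, 3], [1500, 75], [300, 12], [1100, 60],
]
fault_prone = [0, 1, 0, 1, 0, 1]    # 1 = many faults found, 0 = few

tree = DecisionTreeClassifier(max_depth=2)
tree.fit(measurements, fault_prone)

# Predict whether a new 950-LOC component with 50 decisions is likely
# to be fault-prone, to focus the remaining testing effort there.
print(tree.predict([[950, 50]]))    # [1] for this toy data
```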
8.11 What this Chapter Means for You It is important to understand the difference between faults and failures The goal of testing is to find faults, not to prove correctness
Chapter 9 Testing the System
Chapter 9 Objectives Function testing Performance testing Acceptance testing Software reliability, availability, and maintainability Installation testing Test documentation Testing safety-critical systems
9.1 Principles of System Testing System Testing Process Function testing: does the integrated system perform as promised by the requirements specification? Performance testing: are the nonfunctional requirements met? Acceptance testing: is the system what the customer expects? Installation testing: does the system run at the customer site(s)?
9.1 Principles of System Testing Techniques Used in System Testing Build or integration plan Regression testing Configuration management ◦versions and releases ◦production system vs. development system ◦deltas, separate files and conditional compilation ◦change control
9.1 Principles of System Testing Build or Integration Plan Define the subsystems (spins) to be tested Describe how, where, when, and by whom the tests will be conducted
9.1 Principles of System Testing Regression Testing Identifies new faults that may have been introduced as current ones are being corrected Verifies that a new version or release still performs the same functions in the same manner as an older version or release
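A minimal regression-check sketch (the function and recorded outputs are invented): the saved cases pin down the old behavior, so a fix that silently changes an existing answer fails immediately.

```python
# Invented function and recorded outputs, for illustration only.
REGRESSION_CASES = [
    # (inputs, expected output recorded from the prior release)
    ((2, 3), 5),
    ((0, 0), 0),
    ((-1, 1), 0),
]

def add(a, b):          # the component that was just modified
    return a + b

def run_regression():
    for args, expected in REGRESSION_CASES:
        actual = add(*args)
        assert actual == expected, f"regression: add{args} = {actual}, expected {expected}"
    print("no regressions detected")

run_regression()
```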
9.1 Principles of System Testing Configuration Management Versions and releases Production system vs. development system Deltas, separate files and conditional compilation Change control
9.1 Principles of System Testing Sidebar 9.3 Microsoft’s Build Control The developer checks out a private copy The developer modifies the private copy A private build with the new or changed features is tested The code for the new or changed features is placed in the master version A regression test is performed
9.1 Principles of System Testing Test Team Professional testers: organize and run the tests Analysts: those who created the requirements System designers: understand the proposed solution Configuration management specialists: to help control fixes Users: to evaluate issues that arise
9.2 Function Testing Purpose and Roles Compares the system’s actual performance with its requirements Develops test cases based on the requirements document
9.3 Performance Tests Types of Performance Tests Stress tests, volume tests, configuration tests, compatibility tests, regression tests, security tests, timing tests (a sketch follows), environmental tests, quality tests, recovery tests, maintenance tests, documentation tests, and human factors (usability) tests
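As one concrete instance, a timing test can be sketched with nothing beyond the standard library; the 0.1-second budget below is an invented requirement.

```python
import time

def timing_test(func, budget_seconds, *args):
    """Fail if func exceeds its required response time."""
    start = time.perf_counter()
    func(*args)
    elapsed = time.perf_counter() - start
    assert elapsed <= budget_seconds, (
        f"{func.__name__} took {elapsed:.3f}s, budget was {budget_seconds}s")

# Invented requirement: sorting this workload must finish within 0.1 s.
timing_test(sorted, 0.1, list(range(100_000)))
```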
9.4 Reliability, Availability, and Maintainability Definitions Software reliability: operating without failure under given conditions for a given time interval Software availability: operating successfully according to specification at a given point in time Software maintainability: for given conditions of use, a maintenance activity can be carried out within a stated time interval, using stated procedures and resources
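One common textbook quantification of these three attributes (an assumption of these notes, using mean time to failure and mean time to repair in consistent units; the figures are invented) is sketched below.

```python
# Invented figures; time measured in hours throughout.
MTTF = 1000.0           # mean time to failure
MTTR = 2.0              # mean time to repair
MTBF = MTTF + MTTR      # mean time between failures

reliability     = MTTF / (1 + MTTF)    # ~0.9990
availability    = MTBF / (1 + MTBF)    # ~0.9990
maintainability = 1 / (1 + MTTR)       # ~0.3333

print(f"R = {reliability:.4f}, A = {availability:.4f}, M = {maintainability:.4f}")
```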
9.4 Reliability, Availability, and Maintainability Different Levels of Failure Severity Catastrophic: causes death or system loss Critical: causes severe injury or major system damage Marginal: causes minor injury or minor system damage Minor: causes no injury or system damage
9.4 Reliability, Availability, and Maintainability Sidebar 9.4 Difference Between Hardware and Software Reliability Complex hardware fails when a component breaks and no longer functions as specified Software faults can exist in a product for a long time, activated only when certain conditions exist that transform the fault into a failure
9.5 Acceptance Tests Purpose and Roles Enable the customers and users to determine if the built system meets their needs and expectations Written, conducted and evaluated by the customers
9.5 Acceptance Tests Types of Acceptance Tests Pilot test: install on experimental basis Alpha test: in-house test Beta test: customer pilot Parallel testing: new system operates in parallel with old system
9.6 Installation Testing Before the testing ◦Configure the system ◦Attach the proper number and kind of devices ◦Establish communication with other systems The testing ◦Regression tests: to verify that the system has been installed properly and works
9.7 Automated System Testing Simulator Presents to a system all the characteristics of a device or system without actually having the device or system available Looks like the other systems with which the test system must interface Provides the necessary information for testing without duplicating the entire other system
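A sketch of the simulator idea with an invented device protocol: a stand-in object that answers like the real device, so the system can be tested without it.

```python
class ThermometerSimulator:
    """Stands in for a hardware thermometer the test site lacks.

    Presents the same read() interface as the real device and returns
    scripted values, so the controller can be system-tested without it.
    (The protocol is invented for illustration.)
    """
    def __init__(self, scripted_readings):
        self.readings = iter(scripted_readings)

    def read(self):
        return next(self.readings)

def check_overheat(device, limit=100.0):
    return device.read() > limit

sim = ThermometerSimulator([95.0, 101.5])
assert check_overheat(sim) is False    # 95.0 is under the limit
assert check_overheat(sim) is True     # 101.5 trips the alarm
```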
9.8 Test Documentation Test Plan The plan begins by stating its objectives, which should ◦guide the management of testing ◦guide the technical effort required during testing ◦establish test planning and scheduling ◦explain the nature and extent of each test ◦explain how the test will completely evaluate system function and performance ◦document test input, specific test procedures, and expected outcomes
9.8 Test Documentation Sidebar 9.8 Measuring Test Effectiveness and Efficiency Test effectiveness can be measured by dividing the number of faults found in a given test by the total number of faults found Test efficiency is computed by dividing the number of faults found in testing by the effort needed to perform testing
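A worked instance of both measures, with invented numbers:

```python
# Invented numbers: of 50 faults found over the whole project,
# integration testing found 20, using 40 person-hours of effort.
faults_found_in_test = 20
total_faults_found = 50
effort_hours = 40

effectiveness = faults_found_in_test / total_faults_found    # 0.4 -> 40%
efficiency = faults_found_in_test / effort_hours             # 0.5 faults/hour

print(f"effectiveness = {effectiveness:.0%}, efficiency = {efficiency} faults per person-hour")
```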
9.8 Test Documentation Test Description Including ◦the means of control ◦the data ◦the procedures
9.8 Test Documentation Test Analysis Report Documents the results of a test Provides the information needed to duplicate the failure and to locate and fix the source of the problem Provides the information necessary to determine if the project is complete Establishes confidence in the system’s performance
9.8 Test Documentation Problem Report Forms Location: Where did the problem occur? Timing: When did it occur? Symptom: What was observed? End result: What were the consequences? Mechanism: How did it occur? Cause: Why did it occur? Severity: How much was the user or business affected? Cost: How much did it cost?
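The eight questions map naturally onto a record structure; a sketch follows (the field names are invented, not a prescribed form).

```python
from dataclasses import dataclass

@dataclass
class ProblemReport:
    """One field per question on the form (field names invented)."""
    location: str      # where the problem occurred
    timing: str        # when it occurred
    symptom: str       # what was observed
    end_result: str    # what the consequences were
    mechanism: str     # how it occurred
    cause: str         # why it occurred
    severity: str      # how much the user or business was affected
    cost: str          # how much it cost
```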
9.9 Testing Safety-Critical Systems Design diversity: use different kinds of designs and designers Software safety cases: make explicit the ways the software addresses possible problems ◦failure modes and effects analysis ◦hazard and operability studies (HAZOPS) Cleanroom: certifying software with respect to the specification
9.9 Testing Safety-Critical Systems Sidebar 9.9 Software Quality Practices at Baltimore Gas and Electric To ensure high reliability ◦checking the requirements definition thoroughly ◦performing quality reviews ◦testing carefully ◦documenting completely ◦performing thorough configuration control
9.9 Testing Safety-Critical Systems Sidebar 9.10 Suggestions for Building Safety-Critical Software Recognize that testing cannot remove all faults or risks Do not confuse safety, reliability, and security Tightly link the organization’s software and safety organizations Build and use a safety information system Instill a management culture of safety Assume that every mistake users can make will be made Do not assume that low-probability, high-impact events will not happen Emphasize requirements definition, testing, code and specification reviews, and configuration control Do not let short-term considerations overshadow long-term risks and costs
9.10 Information Systems Example The Piccadilly System Many variables, many different test cases to consider ◦An automated testing tool may be useful
9.10 Information Systems Example Things to Consider in Selecting a Test Tool Capability Reliability Capacity Learnability Operability Performance Compatibility Nonintrusiveness
9.10 Information Systems Example Sidebar 9.13 Why Six-Sigma Efforts Do Not Apply to Software A six-sigma quality constraint says that in a million parts, we can expect only 3.4 to be outside the acceptable range It does not apply to software because ◦People are variable, so the software process inherently contains a large degree of uncontrollable variation ◦Software either conforms or it does not; there are no degrees of conformance ◦Software is not the result of a mass-production process
9.12 What This Chapter Means for You Should anticipate testing from the very beginning of the system life cycle Should think about system functions during requirements analysis Should use fault-tree analysis and failure modes and effects analysis during design Should build a safety case during design and code reviews Should consider all possible test cases during testing