An Introduction to Object-Oriented Systems Analysis and Design with UML and the Unified Process
Stephen R. Schach (srs@vuse.vanderbilt.edu)
McGraw-Hill, 2004
Copyright © 2004 by The McGraw-Hill Companies, Inc. All rights reserved.
CHAPTER 13: TESTING
Chapter Overview
- Introduction to Testing
- Quality Issues
- Nonexecution-Based Testing
- Execution-Based Testing
- The Two Basic Types of Test Cases
- What Execution-Based Testing Should Test
- Who Should Perform Execution-Based Testing?
- When Testing Stops
Introduction to Testing
- Traditional life-cycle models usually include a separate testing phase, after implementation and before maintenance
  - This cannot lead to high-quality information systems
- Testing is an integral component of the information system development process
  - An activity that must be carried out throughout the life cycle
Introduction to Testing (contd)
- It is insufficient to test the artifacts of a workflow merely at the end of that workflow
- Continual testing carried out by the development team while it performs each workflow is essential
  - In addition to more methodical testing at the end of each workflow
Introduction to Testing (contd)
- Verification
  - The process of determining whether a specific workflow has been correctly carried out
  - This takes place at the end of each workflow
- Validation
  - The intensive evaluation process that takes place just before the information system is delivered to the client
  - Its purpose is to determine whether the information system as a whole satisfies its specifications
- The term V & V is often used to denote testing
Introduction to Testing (contd)
- The words verification and validation are used as little as possible in this book
  - The phrase verification and validation (or V & V) implies that the process of checking a workflow can wait until the end of that workflow
  - On the contrary, this checking must be carried out in parallel with all information system development and maintenance activities
- To avoid the undesirable implications of the phrase V & V, the term testing is used instead
  - This terminology is consistent with the Unified Process, which uses the term "test workflow"
Introduction to Testing (contd)
- There are two types of testing
  - Execution-based testing of an artifact means running ("executing") the artifact on a computer and checking the output
  - However, a written specification, for example, cannot be run on a computer
    - The only way to check it is to read through it as carefully as possible
    - This type of checking is termed nonexecution-based testing
    - (Unfortunately, the term verification is sometimes also used to mean nonexecution-based testing, which can also cause confusion)
Introduction to Testing (contd)
- Clearly, computer code can be tested both ways
  - It can be executed on a computer, or
  - It can be carefully reviewed
- Reviewing code is at least as good a method of testing code as executing it on a computer
Quality Issues
- The quality of an information system is the extent to which it satisfies its specifications
- The term quality does not imply "excellence" in the information systems context
  - Excellence is generally an order of magnitude more than what is possible with our technology today
Quality Issues (contd)
- The task of every information technology professional is to ensure a high-quality information system at all times
  - However, the information system quality assurance group has additional responsibilities with regard to information system quality
Quality Assurance Terminology
- A fault is the standard IEEE term for what is popularly called a "bug"
- A failure is the observed incorrect behavior of the information system as a consequence of a fault
- An error is the mistake made by the programmer
- In other words, a programmer makes an error that results in a fault in the information system, which is observed as a failure
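The error/fault/failure chain can be made concrete with a small sketch (the function, its purpose, and the off-by-one mistake are all hypothetical illustrations, not taken from the text):

```python
# The programmer's ERROR: believing the discount starts at 10 items,
# but writing a strict comparison.
def bulk_discount(quantity, unit_price):
    # FAULT: the condition should be `quantity >= 10`;
    # this incorrect line is what sits in the code
    if quantity > 10:
        return quantity * unit_price * 0.9
    return quantity * unit_price

# FAILURE: the observed incorrect behavior when the fault is executed
print(bulk_discount(10, 1.0))  # observed 10.0, but the spec expects 9.0
```

The error exists only in the programmer's head; the fault is the incorrect line in the artifact; the failure is what a test actually observes.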
Managerial Independence
- It is important to have managerial independence between
  - The development team, and
  - The quality assurance group
Managerial Independence (contd)
- Serious faults are often found in an information system as the delivery deadline approaches
  - The information system can be released on time but full of faults
    - The client then struggles with a faulty information system, or
  - The developers can fix the information system but deliver it late
- Either way, the client will lose confidence in the information system development organization
Managerial Independence (contd)
- A senior manager should decide when to deliver the information system
  - Both the manager responsible for development and the quality assurance manager should report to this more senior manager
- The senior manager can decide which of the two choices would be in the best interests of both the development organization and the client
Managerial Independence (contd)
- A separate quality assurance group appears to add greatly to the cost of information system development
  - In fact, the additional cost is one manager to lead the quality assurance group
- The advantage is a quality assurance group consisting of independent specialists
- In a development organization with fewer than six employees
  - Ensure that each artifact is checked by someone other than the person responsible for producing it
Nonexecution-Based Testing
- When we give a document we have prepared to someone else to check, he or she often immediately finds a mistake that we did not see
- It is therefore a bad idea if the person who draws up a document is the only one who reviews it
  - The review task must be assigned to someone other than the author of the document
  - Better still, it should be assigned to a team
Nonexecution-Based Testing (contd)
- This is the principle underlying the inspection
  - A review technique used to check artifacts of all kinds
  - In this form of nonexecution-based testing, an artifact is carefully checked by a team of information technology professionals with a broad range of skills
Nonexecution-Based Testing (contd)
- Advantages:
  - The different skills of the participants increase the chances of finding a fault
  - A team often generates a synergistic effect
    - When people work together as a team, the result is often more effective than if the team members work independently as individuals
Principles of Inspections
- An inspection team should consist of from four to six individuals
  - Example: an analysis workflow inspection team includes
    - At least one systems analyst
    - The manager of the analysis team
    - A representative of the next team (the design team)
    - A client representative
    - A representative of the quality assurance group
- An inspection team should be chaired by the quality assurance representative
  - He or she has the most to lose if the inspection is performed poorly and faults slip through
Principles of Inspections (contd)
- The inspection leader guides the other members of the team through the artifact to uncover any faults
  - The team does not correct faults
  - It records them for later correction
Principles of Inspections (contd)
- There are four reasons for this:
  - A correction produced by a committee is likely to be lower in quality than a correction produced by a specialist
  - A correction produced by a team of (say) five individuals will take at least as much time as a correction produced by one person and therefore costs five times as much
  - Not all items flagged as faults actually are incorrect
    - It is better for possible faults to be examined carefully at a later time and then corrected only if there really is a problem
  - There is not enough time to both detect and correct faults
    - No inspection should last longer than two hours
Principles of Inspections (contd)
- During an inspection, a person responsible for the artifact walks the participants through that artifact
  - Reviewers interrupt when they think they detect a fault
  - However, the majority of faults at an inspection are spontaneously detected by the presenter
Principles of Inspections (contd)
- The primary task of the inspection leader is to encourage questions about the artifact being inspected and to promote discussion
- It is absolutely essential that the inspection not be used as a means of evaluating the participants
- If that happens
  - The inspection degenerates into a point-scoring session
  - Faults are not detected
Principles of Inspections (contd)
- The sole aim of an inspection is to highlight faults
  - Performance evaluations of participants should not be based on the quality of the artifact being inspected
  - If this happens, the participants will try to prevent any faults from coming to light
Principles of Inspections (contd)
- The manager who is responsible for the artifact being reviewed should be a member of the inspection team
  - This manager should not be responsible for evaluating members of the inspection team (and particularly the presenter)
  - If this happens, the fault-detection capabilities of the team will be fatally weakened
How Inspections Are Performed
- An inspection consists of five steps
- First, an overview of the artifact to be inspected is given
  - Then the artifact is distributed to the team members
- In the second step, preparation, the participants try to understand the artifact in detail
  - Lists of fault types found in recent inspections, ranked by frequency, help team members concentrate on the areas where the most faults have occurred
How Inspections Are Performed (contd)
- In the third step, the inspection itself, one participant walks through the artifact with the inspection team
  - Fault finding now commences
  - The purpose is to find and document faults, not to correct them
  - Within one day, the leader of the inspection team (the moderator) produces a written report of the inspection
- The fourth step is the rework
  - The individual responsible for the artifact resolves all faults and problems noted in the written report
How Inspections Are Performed (contd)
- In the fifth step, the follow-up, the moderator ensures that every single issue raised has been resolved satisfactorily
  - By either fixing the artifact, or
  - Clarifying items incorrectly flagged as faults
- If more than 5 percent of the material inspected has been reworked, the team must reconvene for a 100 percent reinspection
How Inspections Are Performed (contd)
- Input to the inspection:
  - The checklist of potential faults for artifacts of that type
- Output from the inspection:
  - The record of fault statistics, recorded by severity (major or minor) and by fault type
- The fault statistics can be used in a number of different ways
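Tallying the fault record by severity and by fault type is straightforward; the sketch below is illustrative only (the fault records and the type names are invented for the example):

```python
from collections import Counter

# Hypothetical fault records from one inspection: (severity, fault_type)
faults = [
    ("major", "interface"),
    ("minor", "logic"),
    ("major", "logic"),
    ("minor", "documentation"),
    ("major", "logic"),
]

# The two statistics the moderator's report records
by_severity = Counter(severity for severity, _ in faults)
by_type = Counter(fault_type for _, fault_type in faults)

print(by_severity["major"], by_severity["minor"])  # 3 2
print(by_type.most_common(1))  # [('logic', 3)]
```

A disproportionate count for one fault type in `by_type` is exactly the early-warning signal the next slides describe.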
Use of Fault Statistics
- The number of faults observed can be compared with the average number of faults detected in the same artifact types in comparable information systems
  - This gives management an early warning that something is wrong, and
  - Allows timely corrective action to be taken
- If a disproportionate number of faults of one type is observed, management can take corrective action
Use of Fault Statistics (contd)
- If the detailed design inspection of a module reveals far more faults than in any other module
  - That module should be redesigned from scratch
- Information regarding the number and types of faults detected at a detailed design inspection will aid the team performing the code inspection of that module at a later stage
Principles of Inspections (contd)
- The results of experiments on inspections have been overwhelmingly positive
- Typically, 75 percent or more of all the faults detected over the lifetime of an information system are detected during inspections, before execution-based testing of the modules is started
- Savings of up to $25,000 per inspection have been reported
- Inspections lead to early detection of faults
Execution-Based Testing
- The artifacts of the requirements, analysis, and design workflows are diagrams and documents
  - Testing of these artifacts therefore has to be nonexecution-based
- Why, then, do systems analysts need to know about execution-based testing?
The Relevance of Execution-Based Testing
- Not all information systems are developed from scratch
  - The client's needs may be met at lower cost by a COTS (commercial off-the-shelf) package
- In order to provide the client with adequate information about a COTS package, the systems analyst has to know about execution-based testing
Principles of Execution-Based Testing
- Claim: testing is a demonstration that faults are not present
- Fact:
  - Execution-based testing can be used to show the presence of faults
  - It can never be used to show the absence of faults
Principles of Execution-Based Testing (contd)
- Run an information system with a specific set of test data
  - If the output is wrong, then the information system definitely contains a fault, but
  - If the output is correct, there still may be a fault in the information system
    - All that is known from that test is that the information system runs correctly on that specific set of test data
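A minimal sketch of this point (the function and its fault are hypothetical): the faulty module produces the correct output on one set of test data, so a correct result proves nothing about other inputs.

```python
# Hypothetical module: intended to return the largest of three numbers.
def max_of_three(a, b, c):
    if a > b and a > c:
        return a
    # FAULT: ignores c on this path, so the result is wrong
    # whenever c is the largest value
    return b

print(max_of_three(3, 1, 2))  # 3 -- correct on this test data
print(max_of_three(1, 2, 5))  # 2 -- the fault shows only on other data
```

The first test run "passes" even though the fault is present; only the cleverly chosen second input highlights it.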
Principles of Execution-Based Testing (contd)
- If test data are chosen cleverly, faults will be highlighted
- If test data are chosen poorly, nothing will be learned about the information system
The Two Basic Types of Test Cases
- Black-box test cases
  - Drawn up by looking only at the specifications
    - The code is treated as a "black box" (in the engineering sense)
- Glass-box test cases
  - Drawn up by carefully examining the code and finding a set of test cases that, when executed, will together ensure that every line of code is executed at least once
    - They are called glass-box test cases because now we look inside the "box" and examine the code itself to draw up the test cases
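The distinction can be sketched with a hedged example (the leap-year function and the chosen test values are assumptions for illustration, not taken from the text):

```python
def is_leap_year(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Black-box cases: derived from the specification alone
# ("divisible by 4, except centuries not divisible by 400"),
# without looking at the code.
assert is_leap_year(2024) is True
assert is_leap_year(1900) is False

# Glass-box cases: chosen by reading the code so that every
# part of the boolean expression is exercised at least once.
assert is_leap_year(2023) is False  # year % 4 != 0 short-circuits
assert is_leap_year(2000) is True   # century divisible by 400
```

The same function is under test both times; only the source of the test cases differs.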
What Execution-Based Testing Should Test
- Correctness is by no means enough
- Four other qualities need to be tested:
  - Utility
  - Reliability
  - Robustness
  - Performance
What Execution-Based Testing Should Test (contd)
- Utility is a measure of the extent to which an information system meets the user's needs
  - Is it easy to use?
  - Does it perform useful functions?
  - Is it cost-effective?
What Execution-Based Testing Should Test (contd)
- Reliability is a measure of the frequency and criticality of information system failure
  - How often does the information system fail? (mean time between failures)
  - How bad are the effects of that failure?
  - How long does it take to repair the system? (mean time to repair)
  - How long does it take to repair the results of the failure?
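The two measures named here combine into the standard steady-state availability formula (the formula is a well-known reliability identity; the numeric values below are purely illustrative):

```python
# Availability: the fraction of time the system is operational,
# from mean time between failures (MTBF) and mean time to repair (MTTR).
def availability(mtbf_hours, mttr_hours):
    return mtbf_hours / (mtbf_hours + mttr_hours)

# Illustrative values: fails every 990 hours on average,
# takes 10 hours to repair.
print(availability(990.0, 10.0))  # 0.99 -- up 99% of the time
```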
What Execution-Based Testing Should Test (contd)
- Robustness is a measure of a number of factors, including
  - The range of operating conditions
  - The possibility of unacceptable results with valid input, and
  - The acceptability of effects when the information system is given invalid input
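A robustness test deliberately supplies invalid input and checks that the effect is acceptable. The sketch below assumes one reasonable convention (reject bad input with a clear exception rather than computing a wrong result); the function is hypothetical:

```python
def parse_quantity(text):
    # Acceptable effect for invalid input (an assumed convention):
    # a clean, descriptive error instead of a crash or a silently
    # wrong answer.
    value = int(text)  # raises ValueError on non-numeric input
    if value <= 0:
        raise ValueError("quantity must be positive")
    return value

# The robustness test: invalid input must produce the clean error.
try:
    parse_quantity("-3")
except ValueError as exc:
    print("rejected:", exc)
```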
What Execution-Based Testing Should Test (contd)
- Performance constraints must be met
  - Are average response times met?
    - (Hard real-time constraints rarely apply to information systems)
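Checking an average response-time constraint can be sketched as a timed loop; the operation, the number of runs, and the 50 ms constraint are all illustrative assumptions:

```python
import time

def measured_operation():
    # Stand-in for a request handler; the sleep is illustrative.
    time.sleep(0.01)

# Average response time over repeated runs, compared against an
# assumed constraint of 50 ms average.
runs = 20
start = time.perf_counter()
for _ in range(runs):
    measured_operation()
average = (time.perf_counter() - start) / runs

assert average < 0.05, f"average response time {average:.3f}s exceeds constraint"
```

Averaging over many runs is the point: a single measurement says little, just as the slide's contrast with hard real-time constraints suggests.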
What Execution-Based Testing Should Test (contd)
- An information system is correct if it satisfies its specifications
- Every information system has to be correct
- But in addition, it must pass execution-based testing of
  - Utility
  - Reliability
  - Robustness, and
  - Performance
Who Should Perform Execution-Based Testing?
- If a test case executes correctly, nothing is learned
- If there is a failure, there is no doubt that there is a fault
- The aim of testing is to come up with test cases that will highlight faults
- Testing is therefore a destructive process
Who Should Perform Execution-Based Testing? (contd)
- Programming is a creative process
- Asking a programmer to test a module he or she has implemented means asking him or her to execute the module in such a way that a failure (incorrect behavior) ensues
- This goes against the creative instincts of programmers
Who Should Perform Execution-Based Testing? (contd)
- Programmers should not test their own modules
  - Testing a module requires the creator to perform a destructive act and attempt to destroy that creation
  - Also, the programmer may have misunderstood some aspect of the design or specification document
    - If testing is done by someone else, such faults may be discovered
Who Should Perform Execution-Based Testing? (contd)
- The programmer desk checks the design before coding it
- Then, he or she executes the module using test data
  - Probably the same test data that were used to desk check the design
- Next, the programmer tests the robustness of the module by running it on incorrect data
Who Should Perform Execution-Based Testing? (contd)
- When the programmer is satisfied that the module is operating correctly, systematic execution-based testing commences
- This systematic testing should not be performed by the programmer
- Independent testing must be performed by the quality assurance group
  - Quality assurance professionals must report to their own managers and thus protect their independence
Who Should Perform Execution-Based Testing? (contd)
- Performing systematic execution-based testing
  - An essential part of a test case is a statement of the expected output before the test is executed
    - Both the test data and the expected results of that test must be recorded
  - After the test has been performed, the actual results should be recorded and compared with the expected results
  - Recording must be done in machine-readable form
    - For later regression testing during maintenance
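A machine-readable test record of this kind might look as follows; the function under test, the JSON layout, and the test cases are illustrative assumptions:

```python
import json

# Each record states the test data AND the expected result
# before the test is executed.
cases = json.loads(
    '[{"input": 3, "expected": 9}, {"input": -2, "expected": 4}]'
)

# Hypothetical module under test.
def under_test(x):
    return x * x

# Run the tests, then record actual results alongside expected ones.
for case in cases:
    case["actual"] = under_test(case["input"])
    case["passed"] = (case["actual"] == case["expected"])

# Re-serializing keeps the record machine-readable, so the same
# cases can be replayed as regression tests during maintenance.
record = json.dumps(cases)
print(record)
```

Because both inputs and expected outputs live in the data file rather than in the code, a maintenance change to `under_test` can be re-verified by simply rerunning the recorded cases.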
When Testing Stops
- After many years of maintenance, an information system may lose its usefulness
  - It is then decommissioned and removed from service
- Only at that point, when the information system has been irrevocably discarded, can testing stop