Slide 13.1 Copyright © 2004 by The McGraw-Hill Companies, Inc. All rights reserved.
An Introduction to Object-Oriented Systems Analysis and Design with UML and the Unified Process, McGraw-Hill, 2004, Stephen R. Schach

Slide 13.2 CHAPTER 13 TESTING

Slide 13.3 Chapter Overview
- Introduction to Testing
- Quality Issues
- Nonexecution-Based Testing
- Execution-Based Testing
- The Two Basic Types of Test Cases
- What Execution-Based Testing Should Test
- Who Should Perform Execution-Based Testing?
- When Testing Stops

Slide 13.4 Introduction to Testing
- Traditional life-cycle models usually include a separate testing phase, after implementation and before maintenance
  – This cannot lead to high-quality information systems
- Testing is an integral component of the information system development process
  – An activity that must be carried out throughout the life cycle

Slide 13.5 Introduction to Testing (contd)
- It is insufficient to test the artifacts of a workflow merely at the end of that workflow
- Continual testing carried out by the development team while it performs each workflow is essential
  – In addition to more methodical testing at the end of each workflow

Slide 13.6 Introduction to Testing (contd)
- Verification
  – The process of determining whether a specific workflow has been correctly carried out
  – This takes place at the end of each workflow
- Validation
  – The intensive evaluation process that takes place just before the information system is delivered to the client
  – Its purpose is to determine whether the information system as a whole satisfies its specifications
- The term V & V is often used to denote testing

Slide 13.7 Introduction to Testing (contd)
- The words verification and validation are used as little as possible in this book
  – The phrase verification and validation (or V & V) implies that the process of checking a workflow can wait until the end of that workflow
  – On the contrary, this checking must be carried out in parallel with all information system development and maintenance activities
- To avoid the undesirable implications of the phrase V & V, the term testing is used instead
  – This terminology is consistent with the Unified Process, which uses the term “test workflow”

Slide 13.8 Introduction to Testing (contd)
- There are two types of testing
  – Execution-based testing of an artifact means running (“executing”) the artifact on a computer and checking the output
  – However, a written specification, for example, cannot be run on a computer
    » The only way to check it is to read through it as carefully as possible
    » This type of checking is termed nonexecution-based testing
    » (Unfortunately, the term verification is sometimes also used to mean nonexecution-based testing. This can also cause confusion.)

Slide 13.9 Introduction to Testing (contd)
- Clearly, computer code can be tested both ways
  – It can be executed on a computer, or
  – It can be carefully reviewed
- Reviewing code is at least as good a method of testing code as executing it on a computer

Quality Issues
- The quality of an information system is the extent to which it satisfies its specifications
- The term quality does not imply “excellence” in the information systems context
  – Excellence is generally an order of magnitude more than what is possible with our technology today

Quality Issues (contd)
- The task of every information technology professional is to ensure a high-quality information system at all times
  – However, the information system quality assurance group has additional responsibilities with regard to information system quality

Quality Assurance Terminology (contd)
- A fault is the standard IEEE terminology for what is popularly called a “bug”
- A failure is the observed incorrect behavior of the information system as a consequence of the fault
- An error is the mistake made by the programmer
- In other words:
  – A programmer makes an error that results in a fault in the information system that is observed as a failure
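The error/fault/failure chain above can be illustrated with a short sketch; the function and values below are invented for illustration, not taken from the slides.

```python
# Hypothetical illustration of the IEEE terminology:
# the programmer makes an ERROR (a mistaken divisor), which leaves a
# FAULT in the code; executing the code produces a FAILURE (observed
# incorrect behavior).

def average(values):
    """Intended to return the arithmetic mean of a non-empty list."""
    # ERROR by the programmer: dividing by len(values) - 1 instead of
    # len(values). This mistake is now a FAULT in the code.
    return sum(values) / (len(values) - 1)

result = average([2, 4, 6])   # observed output: 6.0
expected = 4.0                # output required by the specification
# The mismatch between observed and expected output is the FAILURE.
print(result, expected)       # 6.0 4.0
```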

Managerial Independence (contd)
- It is important to have managerial independence between
  – The development team and
  – The quality assurance group

Managerial Independence (contd)
- Serious faults are often found in an information system as the delivery deadline approaches
  – The information system can be released on time but full of faults
    » The client then struggles with a faulty information system, or
  – The developers can fix the information system but deliver it late
- Either way, the client will lose confidence in the information system development organization

Managerial Independence (contd)
- A senior manager should decide when to deliver the information system
  – Both the manager responsible for development and the quality assurance manager should report to this more senior manager
- The senior manager can decide which of the two choices would be in the best interest of both the development organization and the client

Managerial Independence (contd)
- A separate quality assurance group appears to add greatly to the cost of information system development
  – The additional cost is one manager to lead the quality assurance group
- The advantage is a quality assurance group consisting of independent specialists
- In a development organization with under six employees
  – Ensure that each artifact is checked by someone other than the person responsible for producing that artifact

Nonexecution-Based Testing
- When we give a document we have prepared to someone else to check
  – He or she immediately finds a mistake that we did not see
- It is therefore a bad idea if the person who draws up a document is the only one who reviews it
  – The review task must be assigned to someone other than the author of the document
  – Better still, it should be assigned to a team

Nonexecution-Based Testing (contd)
- This is the principle underlying the inspection
  – A review technique used to check artifacts of all kinds
  – In this form of nonexecution-based testing, an artifact is carefully checked by a team of information technology professionals with a broad range of skills

Nonexecution-Based Testing (contd)
- Advantages:
  – The different skills of the participants increase the chances of finding a fault
  – A team often generates a synergistic effect
    » When people work together as a team, the result is often more effective than if the team members work independently as individuals

Principles of Inspections (contd)
- An inspection team should consist of 4 to 6 individuals
  – Example: an analysis workflow inspection team includes
    » At least one systems analyst
    » The manager of the analysis team
    » A representative of the next team (the design team)
    » A client representative
    » A representative of the quality assurance group
- An inspection team should be chaired by the quality assurance representative
  – He or she has the most to lose if the inspection is performed poorly and faults slip through

Principles of Inspections (contd)
- The inspection leader guides the other members of the team through the artifact to uncover any faults
  – The team does not correct faults
  – It records them for later correction

Principles of Inspections (contd)
- There are four reasons for this:
  – A correction produced by a committee is likely to be lower in quality than a correction produced by a specialist
  – A correction produced by a team of (say) five individuals will take at least as much time as a correction produced by one person and, therefore, costs five times as much
  – Not all items flagged as faults actually are incorrect
    » It is better for possible faults to be examined carefully at a later time and then corrected only if there really is a problem
  – There is not enough time to both detect and correct faults
    » No inspection should last longer than 2 hours

Principles of Inspections (contd)
- During an inspection, a person responsible for the artifact walks the participants through that artifact
  – Reviewers interrupt when they think they detect a fault
  – However, the majority of faults at an inspection are spontaneously detected by the presenter

Principles of Inspections (contd)
- The primary task of the inspection leader is to encourage questions about the artifact being inspected and promote discussion
- It is absolutely essential that the inspection not be used as a means of evaluating the participants
- If that happens
  – The inspection degenerates into a point-scoring session
  – Faults are not detected

Principles of Inspections (contd)
- The sole aim of an inspection is to highlight faults
  – Performance evaluations of participants should not be based on the quality of the artifact being inspected
  – If this happens, the participants will try to prevent any faults from coming to light

Principles of Inspections (contd)
- The manager who is responsible for the artifact being reviewed should be a member of the inspection team
  – This manager should not be responsible for evaluating members of the inspection team (and particularly the presenter)
  – If this happens, the fault detection capabilities of the team will be fatally weakened

How Inspections are Performed
- An inspection consists of five steps
- In the first step, an overview of the artifact to be inspected is given
  – Then the artifact is distributed to the team members
- In the second step, preparation, the participants try to understand the artifact in detail
  – Lists of fault types found in recent inspections, ranked by frequency, help team members concentrate on areas where the most faults have occurred

How Inspections are Performed (contd)
- In the third step, the inspection, one participant walks through the artifact with the inspection team
  – Fault finding now commences
  – The purpose is to find and document faults, not to correct them
  – Within one day, the leader of the inspection team (the moderator) produces a written report of the inspection
- The fourth step is the rework
  – The individual responsible for the artifact resolves all faults and problems noted in the written report

How Inspections are Performed (contd)
- In the follow-up, the moderator ensures that every single issue raised has been resolved satisfactorily
  – By either fixing the artifact or
  – Clarifying items incorrectly flagged as faults
  – If more than 5 percent of the material inspected has been reworked, the team must reconvene for a 100 percent reinspection

How Inspections are Performed (contd)
- Input to the inspection:
  – The checklist of potential faults for artifacts of that type
- Output from the inspection:
  – The record of fault statistics
    » Recorded by severity (major or minor), and
    » By fault type
- The fault statistics can be used in a number of different ways
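As a rough sketch of the inspection's output, the fault record above can be tallied by severity and by type; the record format and the fault types below are assumptions invented for illustration.

```python
# Hypothetical fault record from one inspection: each entry pairs a
# severity (major or minor) with a fault type. Tallying the record
# gives the fault statistics the slides describe.
from collections import Counter

faults = [
    ("major", "interface"),
    ("minor", "documentation"),
    ("major", "logic"),
    ("minor", "interface"),
]

by_severity = Counter(sev for sev, _ in faults)       # major/minor counts
by_type = Counter(ftype for _, ftype in faults)       # counts per fault type

print(by_severity["major"], by_type["interface"])     # 2 2
```

A disproportionate count for one fault type (see the next slide) would show up directly in `by_type`.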

Use of Fault Statistics
- The number of faults observed can be compared with averages of faults detected in those same artifact types in comparable information systems
  – This gives management an early warning that something is wrong, and
  – Allows timely corrective action to be taken
- If a disproportionate number of faults of one type are observed, management can take corrective action

Use of Fault Statistics (contd)
- If the detailed design of a module reveals far more faults than in any other module
  – That module should be redesigned from scratch
- Information regarding the number and types of faults detected at a detailed design inspection will aid the team performing the code inspection of that module at a later stage

Principles of Inspections (contd)
- The results of experiments on inspections have been overwhelmingly positive
- Typically, 75 percent or more of all the faults detected over the lifetime of an information system are detected during inspections, before execution-based testing of the modules is started
- Savings of up to $25,000 per inspection have been reported
- Inspections lead to early detection of faults

Execution-Based Testing
- Requirements workflow, analysis workflow, design workflow
  – The artifacts of these workflows are diagrams and documents
  – Testing of them therefore has to be nonexecution-based
- Why then do systems analysts need to know about execution-based testing?

The Relevance of Execution-Based Testing
- Not all information systems are developed from scratch
- The client’s needs may be met at lower cost by a COTS package
- In order to provide the client with adequate information about a COTS package, the systems analyst has to know about execution-based testing

Principles of Execution-Based Testing
- Claim:
  – Testing is a demonstration that faults are not present
- Fact:
  – Execution-based testing can be used to show the presence of faults
  – It can never be used to show the absence of faults

Principles of Execution-Based Testing (contd)
- Run an information system with a specific set of test data
  – If the output is wrong, then the information system definitely contains a fault, but
  – If the output is correct, then there still may be a fault in the information system
    » All that is known from that test is that the information system runs correctly on that specific set of test data

Principles of Execution-Based Testing (contd)
- If test data are chosen cleverly, faults will be highlighted
- If test data are chosen poorly, nothing will be learned about the information system
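The principle above can be made concrete with a deliberately faulty function; the function and test values are hypothetical, invented to illustrate the point.

```python
# A fault-laden function can still give correct output on a poorly
# chosen test case, so passing one test proves nothing.

def absolute_value(x):
    # FAULT: negative inputs are returned unchanged
    return x

# Poorly chosen test data: the output happens to be correct,
# so nothing is learned about the function.
print(absolute_value(5) == 5)    # True, yet the fault is still present

# Cleverly chosen test data: the failure highlights the fault.
print(absolute_value(-5) == 5)   # False
```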

The Two Basic Types of Test Cases
- Black-box test cases
  – Drawn up by looking at only the specifications
    » The code is treated as a “black box” (in the engineering sense)
- Glass-box test cases
  – Drawn up by carefully examining the code and finding a set of test cases that, when executed, will together ensure that every line of code is executed at least once
    » These are called glass-box test cases because now we look inside the “box” and examine the code itself to draw up the test cases
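The two kinds of test case can be sketched for a small hypothetical function; the function, its specification, and the chosen values are assumptions for illustration.

```python
# Specification (assumed): classify(age) returns "minor" for ages
# under 18 and "adult" otherwise.

def classify(age):
    if age < 18:
        return "minor"
    return "adult"

# Black-box test cases: derived from the specification alone
# (typical and boundary values), without reading the code.
black_box = [(0, "minor"), (17, "minor"), (18, "adult"), (65, "adult")]

# Glass-box test cases: derived by examining the code, chosen so that
# both branches (every line) execute at least once.
glass_box = [(10, "minor"), (30, "adult")]

for age, expected in black_box + glass_box:
    assert classify(age) == expected
```

Note that the glass-box set is smaller: it only has to exercise every line, whereas the black-box set probes the specification's boundaries.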

What Execution-Based Testing Should Test
- Correctness is by no means enough
- Four other qualities need to be tested:
  – Utility
  – Reliability
  – Robustness
  – Performance

What Execution-Based Testing Should Test (contd)
- Utility is the measure of the extent to which an information system meets the user’s needs
  – Is it easy to use?
  – Does it perform useful functions?
  – Is it cost effective?

What Execution-Based Testing Should Test (contd)
- Reliability is a measure of the frequency and criticality of information system failure
  – How often does the information system fail?
    » (Mean time between failures)
  – How bad are the effects of that failure?
  – How long does it take to repair the system?
    » (Mean time to repair)
  – How long does it take to repair the results of the failure?
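The two measures named above are commonly combined into an availability figure, availability = MTBF / (MTBF + MTTR); this formula is standard reliability practice rather than something stated on the slide, and the figures below are hypothetical.

```python
# Hypothetical reliability figures for an information system.
mtbf_hours = 500.0   # mean time between failures
mttr_hours = 2.0     # mean time to repair

# Fraction of time the system is up: MTBF / (MTBF + MTTR).
availability = mtbf_hours / (mtbf_hours + mttr_hours)

print(round(availability, 4))   # 0.996
```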

What Execution-Based Testing Should Test (contd)
- Robustness is a measure of a number of factors, including
  – The range of operating conditions
  – The possibility of unacceptable results with valid input, and
  – The acceptability of effects when the information system is given invalid input

What Execution-Based Testing Should Test (contd)
- Performance constraints must be met
  – Are average response times met?
    » (Hard real-time constraints rarely apply to information systems)

What Execution-Based Testing Should Test (contd)
- An information system is correct if it satisfies its specifications
- Every information system has to be correct
- But in addition, it must pass execution-based testing of
  – Utility
  – Reliability
  – Robustness, and
  – Performance

Who Should Perform Execution-Based Testing?
- If a test case executes correctly, nothing is learned
- If there is a failure, there is no doubt there is a fault
- The aim of testing is to come up with test cases that will highlight faults
- Testing is therefore a destructive process

Who Should Perform Execution-Based Testing? (contd)
- Programming is a creative process
- Asking a programmer to test a module he or she has implemented means asking him or her to execute the module in such a way that a failure (incorrect behavior) ensues
- This goes against the creative instincts of programmers

Who Should Perform Execution-Based Testing? (contd)
- Programmers should not test their own modules
  – Testing that module requires the creator to perform a destructive act and attempt to destroy that creation
  – Also, the programmer may have misunderstood some aspect of the design or specification document
    » If testing is done by someone else, such faults may be discovered

Who Should Perform Execution-Based Testing? (contd)
- The programmer desk checks the design before coding it
- Then, he or she executes the module using test data
  – Probably the same test data that were used to desk check the design
- Next, the programmer tests the robustness of the module by running it on incorrect data

Who Should Perform Execution-Based Testing? (contd)
- When the programmer is satisfied that the module is operating correctly, systematic execution-based testing commences
- This systematic testing should not be performed by the programmer
- Independent testing must be performed by the quality assurance group
  – Quality assurance professionals must report to their own managers and thus protect their independence

Who Should Perform Execution-Based Testing? (contd)
- Performing systematic execution-based testing:
  – An essential part of a test case is a statement of the expected output before the test is executed
    » Both the test data and the expected results of that test must be recorded
  – After the test has been performed, the actual results should be recorded and compared with the expected results
  – Recording must be done in machine-readable form
    » For later regression testing during maintenance
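A minimal sketch of the recording practice above, assuming a simple JSON record format (the format, the `discount` module, and the values are all invented for illustration): test data and expected results are written down in machine-readable form before the test, then replayed later as regression tests.

```python
# Record test data and expected output in machine-readable form,
# then replay the recorded cases (e.g. during maintenance).
import json

def discount(total):
    # Hypothetical module under test: 10% off orders of 100 or more.
    return total * 0.9 if total >= 100 else total

# Expected output is stated BEFORE the test is executed.
test_cases = [
    {"input": 50,  "expected": 50},
    {"input": 100, "expected": 90.0},
    {"input": 200, "expected": 180.0},
]

recorded = json.dumps(test_cases)        # machine-readable record

for case in json.loads(recorded):        # regression replay
    actual = discount(case["input"])
    assert actual == case["expected"], case
```

Because the cases are recorded rather than typed in ad hoc, exactly the same tests can be rerun after every change to the module.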

When Testing Stops
- After many years of maintenance, an information system may lose its usefulness
  – It is decommissioned and removed from service
- Only at that point, when the information system has been irrevocably discarded, can testing stop