Software Testing © Dr. David A. Workman, School of EE and Computer Science, March 19, 2007.



October 2, 2006 (c) Dr. David A. Workman

Software Testing
Reference: Software Engineering: Theory and Practice by Shari Lawrence Pfleeger, Prentice Hall, © 2001, ISBN =

Software Faults & Failures

Why Does Software Fail?
– Useful software systems are large and complex and require complicated processes to build. Complexity increases the likelihood that errors will be made during development and that faults will be introduced.
– The customer and users of a system may not be clear on what they want or need, or may simply change their minds. Uncertainty and confusion increase the likelihood that errors will be made and faults introduced. Changes to the requirements, design, and code increase the opportunities for errors and the introduction of faults.

What do we mean by "Software Failure"? We usually mean that the software does not do what the requirements describe.

Software Faults & Failures

Reasons for Software Failures:
– The specification may be wrong or misleading; it may not reflect the actual customer or user needs.
– The specification may contain a requirement that is impossible to meet with the prescribed hardware and/or software.
– The system design may contain a fault.
– The software design may contain a fault.
– The program code may incorrectly implement a requirement.

Purpose of Software Testing:
– Fault Identification: the process of determining what fault or faults caused an observed failure.
– Fault Correction & Removal: the process of making changes to the software to remove identified faults.

Software Faults & Failures

Types of Faults
– Algorithm Faults (logic errors): the algorithm does not give the correct output when presented with a given input. Examples:
  - Branching in the wrong place
  - Testing the wrong condition
  - Forgetting to initialize variables
  - Forgetting to check for data and parameters outside design limits
  - Comparing values of incompatible types
– Computation & Precision Faults: the implementation of a formula is wrong or does not compute the result with sufficient accuracy; e.g., truncation, rounding, or use of real data when integers are called for.
– Documentation Faults: comments do not describe what the code is doing or should be doing; requirements are poorly or ambiguously stated, or perhaps even wrong.
– Stress & Overload Faults: data structures are filled beyond their capacity; e.g., array index out of bounds.
– Capacity or Boundary Faults: system performance deteriorates as design limits are approached.
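To make one of these fault types concrete, here is a small sketch (the function and names are invented for illustration, not taken from the slides) of an algorithm fault of the "forgetting to check for data outside design limits" variety, together with its corrected form:

```python
def mean_faulty(values):
    """Algorithm fault: no check for input outside the design limit
    (an empty list), so division by zero occurs."""
    return sum(values) / len(values)

def mean_fixed(values):
    """Corrected version: validates the input before computing."""
    if not values:
        raise ValueError("mean requires at least one value")
    return sum(values) / len(values)

# A test case built around the design limit exposes the fault:
try:
    mean_faulty([])
except ZeroDivisionError:
    print("fault exposed: unchecked empty input")

print(mean_fixed([2, 4, 6]))  # 4.0
```

Note that the faulty version is correct on all typical inputs; only a test case aimed at the design limit reveals the fault.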

Software Faults & Failures

Types of Faults (continued)
– Timing & Coordination Faults: errors in synchronizing processing steps and/or events in time.
– Throughput & Performance Faults: the system does not perform at the speed specified by the requirements; insufficient work is accomplished per unit time.
– Recovery Faults: the system does not behave as prescribed by the requirements when execution failures are detected; e.g., Word does not correctly recover when the power goes out.
– Hardware & System Software Faults: third-party hardware or software components (reusable components, COTS (Commercial Off The Shelf) or GOTS (Government Off The Shelf) products) do not actually work according to their documented operating conditions and procedures.
– Standards and Procedure Faults: failing to follow prescribed standards may foster an environment where faults are more likely.

Software Faults and Failures

Orthogonal Defect Classification
IBM, Hewlett-Packard, and others capture, classify, and track various types of software faults. Historical information about faults can help predict what faults are likely to occur. This information helps focus testing efforts and makes the overall testing process more efficient and effective. Fault patterns and fault frequencies may indicate deficiencies in the development process.

Hewlett-Packard defect classification (Grady 1997):

ORIGIN: Specification/Requirements | Design | Code | Environment/Support | Documentation | Other

TYPE (by origin):
– Specification/Requirements: Requirements Specifications, Functionality, HW Interface, SW Interface, User Interface, Functional Description
– Design: Inter-process Communication, Data Definition, Module Design, Logic Description, Error Checking, Standards
– Code: Logic, Computation, Data Handling, Module Interface/Implementation, Standards
– Environment/Support: Test HW, Test SW, Integration SW, Development Tools

MODE: Missing | Unclear | Wrong | Changed | Better-Way

Software Faults and Failures

Testing Issues: Test Organization & Stages
1. Module (Component/Unit) Testing – testing the smallest software building blocks in a controlled environment to verify that they meet functional requirements.
2. Integration Testing – testing component aggregates to ensure interface requirements are met and that inter-module communication works according to design.
3. Function Testing – testing system functionality (use cases) to ensure it meets the system requirements specifications.
4. Performance Testing – testing the speed and capacity of the system to verify that it meets non-functional execution requirements and constraints. (Validation)
5. Acceptance Testing – customer testing to ensure that end users are satisfied that the system meets quality requirements. (Validation)
6. Installation Testing – making sure the system runs in the target environment.

Testing Goals
– Black Box Testing: treat the test object as a "black box" with inputs and outputs; internal structure and logic pathways are not a consideration in designing tests.
– Clear or White Box Testing: design tests to exercise internal components and execution pathways. (cf. McCabe's metric)
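The black-box/white-box distinction can be sketched on a single routine (the function and its test cases are invented for this example): black-box cases are derived only from the specification, while white-box cases are chosen so that every branch in the implementation executes.

```python
def classify_triangle(a, b, c):
    """Spec: return 'equilateral', 'isosceles', or 'scalene'
    for positive side lengths; reject non-positive sides."""
    if a <= 0 or b <= 0 or c <= 0:
        raise ValueError("sides must be positive")
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

# Black-box cases: chosen from the specification alone, without
# looking at the implementation.
assert classify_triangle(3, 3, 3) == "equilateral"
assert classify_triangle(5, 5, 3) == "isosceles"
assert classify_triangle(3, 4, 5) == "scalene"

# White-box cases: chosen so every branch above executes,
# including the error-checking path.
try:
    classify_triangle(0, 1, 1)
except ValueError:
    print("error branch covered")
```

Here the two approaches happen to need similar cases; on larger units, white-box analysis typically uncovers pathways that specification-driven cases never reach.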

Software Faults and Failures

1. Unit Testing
Basic steps:
(1) Write code from the design or a unit specification.
(2) Manually review the code to make sure it agrees with the specification. (Verification)
(3) Compile the code to eliminate syntax and some semantic errors.
(4) Design test cases to achieve the unit testing goals.
(5) Run the test cases.
(6) Remove faults and repeat (5) until the testing goals are met.

Manual Code Reviews: form a team consisting of the author and three technical experts – people in the developer's organization who are technically qualified to conduct the review.
– Code Walkthroughs: code and documentation are presented to the review team, who comment on correctness; the author presents the code and supporting documentation; the process is informal; the focus is on finding faults, not fixing them; the discovery of faults should not reflect on the author's competence.
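Steps (4) through (6) can be sketched with Python's standard unittest module; the unit under test (a hypothetical discount function) and its specification are invented for illustration:

```python
import unittest

def apply_discount(price, percent):
    """Unit under test. Spec: reduce price by percent (0-100),
    rejecting percentages outside that design limit."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    # Step (4): test cases designed against the unit specification,
    # including values at and beyond the design limits.
    def test_typical_value(self):
        self.assertEqual(apply_discount(80.0, 25), 60.0)

    def test_boundary_values(self):
        self.assertEqual(apply_discount(50.0, 0), 50.0)
        self.assertEqual(apply_discount(50.0, 100), 0.0)

    def test_out_of_range_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(50.0, 150)

if __name__ == "__main__":
    # Step (5): run the test cases. Step (6) is the fix-and-rerun
    # loop: remove any fault found and run the suite again.
    unittest.main(exit=False)
```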

Software Faults and Failures

1. Unit Testing (continued)
Manual Code Reviews (continued)
– Code Inspections: more formal than walkthroughs; typically performed in three steps:
(1) The author presents the code and documentation to the inspection team, much like a walkthrough, except the focus is more on informing the inspection team and making them familiar with the code and documentation.
(2) Inspection team members individually scrutinize the inspection materials and form two lists: a list of discovered faults, and a list of concerns or points of confusion that may or may not be faults – items that need clarification.
(3) The author meets with the inspection team to discuss the lists of actual and potential faults. A scribe documents the identified faults for future tracking. The author and inspection team sign off on the outcome of the meeting – that is, the meeting and the findings of the inspection team are formally recorded and tracked.

Software Faults and Failures

1. Unit Testing (continued)

Inspection Preparation and Meeting Times (Capers Jones 1991)

  Development Artifact     | Preparation Time | Meeting Time
  -------------------------+------------------+-------------
  Requirements Document    | 25 pages/hr      | 12 pages/hr
  Functional Specification | 45 pages/hr      | 15 pages/hr
  Logic Specification      | 50 pages/hr      | 20 pages/hr
  Source Code              | 150 SLOC/hr      | 75 SLOC/hr
  User Documents           | 35 pages/hr      | 20 pages/hr

Faults Found During Discovery (Capers Jones 1991)

  Discovery Activity  | Faults/KSLOC
  --------------------+-------------
  Requirements review | 2.5
  Design review       | 5.0
  Code inspections    | 10.0
  Integration tests   | 3.0
  Acceptance tests    | 2.0

Software Faults and Failures

Unit Testing Strategies
Definitions:
– Test Case: a particular choice of input data that has a predictable output.
– Test Objective: a well-defined outcome that demonstrates the presence or absence of a particular type of fault.
– Test: a collection of test cases relating to a single test objective.

Relative Strengths of Test Strategies, listed roughly from strongest to weakest (Beizer 1990):
  All Paths
  All Def-Use Paths
  All Use Paths
  All Defs
  All Computational & Some Predicate Uses
  All Computational Uses
  All Predicate & Some Computational Uses
  All Predicate Uses
  All Branches
  All Statements
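The gap between the two weakest strategies can be seen in a small sketch (the function is invented for illustration): a single test case can execute every statement yet never exercise the branch in which the condition is false.

```python
def clamp_nonnegative(x):
    """Spec: return x unchanged when x >= 0, otherwise return 0."""
    if x < 0:
        x = 0
    return x

# All Statements: this single case executes every statement,
# because both the 'if' body and the return are reached.
assert clamp_nonnegative(-5) == 0

# All Branches is stronger: it also demands a case where the
# condition is false, i.e. the path that skips the 'if' body.
assert clamp_nonnegative(3) == 3
```

If a fault lurked on the condition-false path, a statement-coverage suite built from the first case alone would never detect it.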

Software Faults and Failures

Integration Testing
– Bottom-up Integration: components at the lowest level of the system call hierarchy are tested individually first, using specially written test drivers; then those that make immediate calls to already-tested components are tested, for which test drivers may again have to be written.
  Useful when: many low-level routines are general-purpose and are called often by others; when the design is object-oriented; or when the system integrates a large number of standalone reusable components.
  Advantages: most appropriate for OO designs.
  Disadvantages: the most important modules are tested last; faults at the top levels may indicate design flaws, and these should be detected sooner rather than later.
– Top-Down Integration: components at the top of the call hierarchy are tested first, replacing any modules they call with temporary stubs; then modules at the next call level are integrated, replacing their stubs but including stubs for the modules they call, and so on.
  Advantages: special test drivers need not be written, since the modules themselves serve as drivers; the highest-level modules tend to be more control-oriented and less data-oriented, so design flaws in system-level processing and timing are detected early; development and testing can focus on delivering complete use cases, typically sooner than with the bottom-up approach.
  Disadvantages: many stubs have to be written, and these may not be trivial to write.
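The driver and stub roles described above can be sketched as follows (all module and function names are hypothetical, invented for illustration). In bottom-up integration a driver calls the unit under test directly; in top-down integration a stub stands in for a lower-level module that has not yet been integrated:

```python
# Bottom-up: a low-level unit is exercised by a purpose-built driver.
def parse_record(line):
    """Low-level unit under test: parse 'name, amount' into a dict."""
    name, amount = line.split(",")
    return {"name": name.strip(), "amount": float(amount)}

def driver_for_parse_record():
    """Test driver: feeds the unit inputs and checks its outputs."""
    rec = parse_record("alice, 42.5")
    assert rec == {"name": "alice", "amount": 42.5}
    print("parse_record passed")

# Top-down: a high-level module is tested before its callee exists,
# so the callee is replaced by a stub returning a canned response.
def fetch_balance_stub(account_id):
    """Stub: stands in for the real, not-yet-integrated data layer."""
    return 100.0

def report_balance(account_id, fetch_balance=fetch_balance_stub):
    """High-level module: its control logic is tested via the stub."""
    balance = fetch_balance(account_id)
    return f"account {account_id}: {balance:.2f}"

driver_for_parse_record()
print(report_balance("A-17"))  # exercises top-level logic via the stub
```

As integration proceeds, the stub is replaced by the real module and the driver is discarded; only the test cases themselves carry forward.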