CS3773 Software Engineering, Lecture 9: Software Testing


Software Verification and Validation
 • Software verification and validation (V&V) techniques are applied to improve the quality of software
 • V&V takes place at each stage of the software process
   – Requirements analysis
   – Design analysis
   – Implementation checking: inspection, testing

Goal of Verification and Validation
 • Establish confidence that the software system is “fit for purpose”
   – Software function
   – User expectations
   – Marketing environment
 • Two verification and validation (V&V) approaches
   – Software inspections or peer reviews: manual or automated
   – Software testing: exercising the program using data and discovering defects through output

V&V versus Debugging
 • V&V processes are interleaved with debugging
 • V&V processes are intended to establish the existence of defects in a software system
 • Debugging is a process that locates and corrects these defects
   – Look for patterns in the test output
   – Design additional tests
   – Trace the program manually
   – Use debugging tools

Software Inspection
 • Software inspection is a static process
 • Compared with testing, software inspections have some advantages
   – During testing, errors can hide other errors; during inspection, a single session can discover many errors
   – During testing, you have to develop test harnesses to test available parts; during inspection, an incomplete system can be checked without additional cost
   – During testing, only program defects are disclosed; during inspection, broader quality attributes are considered

Program Inspection
 • Manual program inspection detects defects; other types of inspections may be concerned with schedule, costs, etc.
 • Manual program inspection is carried out by a team
 • Program inspection activities
   – Planning
   – Overview
   – Individual preparation
   – Inspection meeting
   – Rework
   – Follow-up

Issues of Program Inspection
 • Have a precise specification of the code to be inspected
 • Inspection team members should be familiar with the organizational standards
 • A compilable version of the code has to be distributed to all team members
 • Program inspection is driven by a checklist of errors
 • Program inspection should focus on defect detection, standards conformance, and poor-quality programming

Automated Static Analysis
 • Automated static analysis tools scan the source code and detect possible faults and anomalies, such as variables used without initialization
   – Check that statements are well-formed
   – Make inferences about the control flow
   – Compute the set of all possible values for program data
 • Static analysis complements the error detection facilities provided by the compiler

Activities in Static Analysis
 • Control flow analysis: loop identification
   – e.g., unreachable code
 • Data use analysis: highlighting variables
   – e.g., variables declared but never used
 • Type checking
 • Information flow analysis: detecting dependencies between input and output variables
 • Path analysis: path examination

Verification and Formal Methods
 • Formal methods are mathematical notations and analysis techniques for enhancing the quality of systems
 • Confidence in software can be obtained by using formal methods
   – Formal methods are rigorous means for specification and verification
   – Formal requirements models can be automatically analyzed, and requirement errors are easier and cheaper to fix at the requirements stage
   – Powerful tools (e.g., model checkers) have been increasingly applied in modeling and reasoning about computer-based systems

Software Testing
 • Testing is the most commonly used validation technique
 • Testing is an important part of the software lifecycle
 • Testing is the process of devising a set of inputs to a given piece of software that will cause the software to exercise some portion of its code
 • The developer of the software can then check that the results produced by the software are in accord with his or her expectations

Testing Levels Based on Test Process Maturity
 • Level 0: There is no difference between testing and debugging
 • Level 1: The purpose of testing is to show correctness
 • Level 2: The purpose of testing is to show that the software doesn’t work
 • Level 3: The purpose of testing is not to prove anything specific, but to reduce the risk of using the software
 • Level 4: Testing is a mental discipline that helps all IT professionals develop higher quality software

Software Testing Objectives
 • Find as many defects as possible
 • Find important problems fast
 • Assess perceived quality risks
 • Advise about perceived project risks
 • Certify to a given standard
 • Assess conformance to a specification (requirements, design, or product claims)

Software Testing Stages
 • Unit testing
   – Testing of individual components
 • Integration testing
   – Testing to expose problems arising from the combination of components
 • System testing
   – Testing the overall functionality of the system
 • Acceptance testing
   – Testing by users to check that the system satisfies requirements; sometimes called alpha testing

Software Testing Activities
 • Test planning: design the test strategy and test plan
 • Test development: develop test procedures, test scenarios, test cases, and test scripts to use in testing the software
 • Test execution: execute the software based on the plans and test cases, and report any errors found to the development team
 • Test reporting: generate metrics and make final reports on the test effort and whether or not the software tested is ready for release
 • Retesting the revised software

Test Case
 • Input values
 • Expected outcomes
   – Things created (output)
   – Things changed/updated (e.g., a database)
   – Things deleted
   – Timing …
 • Environment prerequisites: files, network connections, …

Design Test Case
 • Build test cases (implement)
   – Implement the preconditions (set up the environment)
   – Prepare test scripts (may use test automation tools)
 • Structure of a test case
   – Simple linear: (I, EO), or a sequence {(I1, EO1), (I2, EO2), …}
   – Tree: one input I with alternative expected outcomes EO1, EO2

Test Script
 • Scripts contain data and instructions for testing
   – Comparison information
   – What screen data to capture
   – When/where to read input
   – Control information
     Repeat a set of inputs
     Make a decision based on output
   – Testing concurrent activities

Test Results
 • Compare test outcomes with expected outcomes
   – Simple/complex (known differences)
   – Different types of outcomes
     Variable values (in memory)
     Disk-based (textual, non-textual, database, binary)
     Screen-based (characters, GUI, images)
     Others (multimedia, communicating applications)

Software Testing Techniques
 • Functional testing is applied to demonstrate that the system meets its requirements; it is also called black-box testing
   – Testers are only concerned with the functionality, performance, and dependability
   – The system is treated as a black box that takes input and produces output
 • Structural testing is applied to expose defects; tests are derived from knowledge of the internal workings of the items under test
   – Testers understand the algorithms and the structure of the system
   – The system is treated as a white box

Functional Testing
 • Boundary value testing
   – Boundary value analysis
   – Robustness testing
   – Worst case testing
   – Special value testing
 • Equivalence class testing
 • Decision table based testing

Boundary Value Analysis
 • Errors tend to occur near the extreme values of an input variable
 • Boundary value analysis focuses on the boundary of the input space to identify test cases
 • Boundary value analysis selects input variable values at their
   – Minimum
   – Just above the minimum
   – A nominal value
   – Just below the maximum
   – Maximum

Example of Boundary Value Analysis
 • Assume a program accepting two inputs y1 and y2, such that a < y1 < b and c < y2 < d
 (figure not reproduced in the transcript)

Single Fault Assumption for Boundary Value Analysis
 • Boundary value analysis is also augmented by the single fault assumption principle: “Failures occur rarely as the result of the simultaneous occurrence of two (or more) faults”
 • In this respect, boundary value analysis test cases can be obtained by holding the values of all but one variable at their nominal values, and letting that variable assume its extreme values

Generalization of Boundary Value Analysis
 • The basic boundary value analysis can be generalized in two ways:
   – By the number of variables: (4n + 1) test cases for n variables
   – By the kinds of ranges of the variables
     Programming language dependent
     Bounded discrete
     Unbounded discrete (no upper or lower bounds clearly defined)
     Logical variables
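
The (4n + 1) count above can be sketched in code. The following C fragment is illustrative and not from the slides: it assumes bounded integer variables, takes the midpoint of each range as the nominal value, and holds all but one variable at nominal while the remaining one takes its four extreme values.

```c
/* Illustrative sketch (not from the slides): generate the (4n + 1)
 * boundary value analysis test cases for n bounded integer variables. */
#define MAX_VARS 4

typedef struct { int min, max; } range_t;

static int nominal(range_t r) { return (r.min + r.max) / 2; }

/* Fills tests[][MAX_VARS] with test cases; returns the count (4n + 1). */
int bva_cases(const range_t *vars, int n, int tests[][MAX_VARS]) {
    int count = 0;
    /* One all-nominal case. */
    for (int j = 0; j < n; j++) tests[count][j] = nominal(vars[j]);
    count++;
    /* For each variable: min, min+1, max-1, max (others stay nominal). */
    for (int i = 0; i < n; i++) {
        int extremes[4] = { vars[i].min, vars[i].min + 1,
                            vars[i].max - 1, vars[i].max };
        for (int k = 0; k < 4; k++) {
            for (int j = 0; j < n; j++) tests[count][j] = nominal(vars[j]);
            tests[count][i] = extremes[k];
            count++;
        }
    }
    return count;
}
```

For two variables this yields 4·2 + 1 = 9 test cases, matching the slide's formula.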

Limitations of Boundary Value Analysis
 • Boundary value analysis works well when the program to be tested is a function of several independent variables that represent bounded physical quantities
 • Boundary value analysis selects test data with no consideration of the function of the program, nor of the semantic meaning of the variables
 • We can distinguish between physical and logical types of variables as well (e.g., temperature, pressure, and speed versus PIN numbers, telephone numbers, etc.)

Robustness Testing
 • Robustness testing is a simple extension of boundary value analysis
 • In addition to the five boundary value analysis values of a variable, we add a value slightly greater than the maximum (max+) and a value slightly less than the minimum (min−)
 • The main value of robustness testing is to force attention on exception handling
 • In some strongly typed languages, values beyond the predefined range will cause a run-time error
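
For a single bounded integer variable, the seven robustness values can be listed directly. This C fragment is an illustrative sketch (not from the slides); it assumes the nominal value is the midpoint of the range.

```c
/* Illustrative sketch (not from the slides): the seven robustness
 * testing values for a variable in [min, max]:
 * min-1 (min-), min, min+1, nominal, max-1, max, max+1 (max+). */
void robustness_values(int min, int max, int out[7]) {
    int nominal = (min + max) / 2;
    int vals[7] = { min - 1, min, min + 1, nominal, max - 1, max, max + 1 };
    for (int i = 0; i < 7; i++) out[i] = vals[i];
}
```

The two added values (min−1 and max+1) are exactly the inputs that should drive the program into its exception-handling code.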

Example of Robustness Testing
 (figure not reproduced in the transcript)

Worst Case Testing
 • In worst case testing we reject the single fault assumption and are interested in what happens when more than one variable has an extreme value
 • Considering that there are five values to consider for each variable in boundary value analysis, we now take the Cartesian product of these possible values for 2, 3, …, n variables
 • We can have 5^n test cases for n input variables
 • The best application of worst case testing is where physical variables have numerous interactions
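
The Cartesian product above can be enumerated with odometer-style counting. This C fragment is an illustrative sketch (not from the slides); a real harness would execute the program under test on each combination rather than just counting them.

```c
/* Illustrative sketch (not from the slides): visit all 5^n worst case
 * combinations of the five boundary values of n bounded variables. */
#define WC_MAX_VARS 4

typedef struct { int min, max; } wc_range_t;

/* Returns how many combinations were visited (5^n). */
long worst_case_count(const wc_range_t *vars, int n) {
    int idx[WC_MAX_VARS] = {0};
    long count = 0;
    for (;;) {
        int combo[WC_MAX_VARS];
        for (int j = 0; j < n; j++) {
            int nominal = (vars[j].min + vars[j].max) / 2;
            int vals[5] = { vars[j].min, vars[j].min + 1, nominal,
                            vars[j].max - 1, vars[j].max };
            combo[j] = vals[idx[j]];   /* one concrete test case */
        }
        (void)combo;  /* a real harness would run the program on combo */
        count++;
        /* Advance the odometer over the 5 values of each variable. */
        int j = 0;
        while (j < n && ++idx[j] == 5) { idx[j] = 0; j++; }
        if (j == n) break;
    }
    return count;
}
```

For two variables this produces 5^2 = 25 combinations, versus the 9 cases of plain boundary value analysis, which is why worst case testing is reserved for highly interacting variables.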

Example of Worst Case Testing
 (figure not reproduced in the transcript)

Special Value Testing
 • Special value testing is probably the most widely practiced form of functional testing; it is the most intuitive and the least uniform
 • It utilizes domain knowledge and engineering judgment about a program’s “soft spots” to devise test cases
 • Even though special value testing is very subjective in the generation of test cases, it is often more effective in revealing program faults

Equivalence Class Testing
 • The use of equivalence class testing has two motivations:
   – A sense of complete testing
   – Avoiding redundancy
 • Equivalence classes form a partition of a set, that is, a collection of mutually disjoint subsets whose union is the entire set
 • Two important implications for testing:
   – The fact that the entire set is represented provides a form of completeness
   – The disjointness assures a form of non-redundancy

Example of Equivalence Class Testing
 • The program P has 3 inputs a, b, and c, and the corresponding input domains are A, B, and C

Example of Equivalence Class Testing
 • Define a1, a2, and a3 as follows:
   – let ai be a “representative” or “typical” value within its respective equivalence class (e.g., the midpoint of a linear equivalence class)
   – similarly define bi and ci
 • Test cases can be stated for the inputs in terms of the representative points
 • The basic idea behind the technique is that one point within an equivalence class is just as good as any other point within the same class
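
The representative-point idea can be sketched in code. This C fragment is illustrative and not from the slides; it assumes each equivalence class is a contiguous integer interval and takes the midpoint as its representative, as the slide suggests for linear classes.

```c
/* Illustrative sketch (not from the slides): pick one representative
 * test value per equivalence class of an integer input domain. */
typedef struct { int lo, hi; } eq_class_t;

static int representative(eq_class_t c) {
    return (c.lo + c.hi) / 2;   /* any point in the class would do */
}

/* Picks one representative per class into out[]; returns class count. */
int eq_class_cases(const eq_class_t *classes, int n, int *out) {
    for (int i = 0; i < n; i++) out[i] = representative(classes[i]);
    return n;
}
```

Because the classes partition the domain, the n representatives cover every class exactly once: completeness without redundancy.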

Decision Table
 • Decision tables make it easy to observe that all possible conditions are accounted for
 • Decision tables can be used for:
   – Specifying complex program logic
   – Generating test cases (also known as logic-based testing)
 • Logic-based testing is considered:
   – structural testing when applied to structure, i.e., the control flow graph of an implementation
   – functional testing when applied to a specification

Decision Table Usage
 • The use of the decision-table model is applicable when:
   – The specification is given or can be converted to a decision table
   – The order in which the predicates are evaluated does not affect the interpretation of the rules or the resulting action
   – The order of rule evaluation has no effect on the resulting action
   – Once a rule is satisfied and the action selected, no other rule need be examined
   – The order of executing actions in a satisfied rule is of no consequence

Example of Decision Table: Printer Troubleshooting

 Conditions                              1  2  3  4  5  6  7  8
   Printer does not print                Y  Y  Y  Y  N  N  N  N
   A red light is flashing               Y  Y  N  N  Y  Y  N  N
   Printer is unrecognized               Y  N  Y  N  Y  N  Y  N
 Actions
   Check the power cable                       X
   Check the printer-computer cable      X     X
   Ensure printer software is installed  X     X     X     X
   Check/replace ink                     X  X        X  X
   Check for paper jam                      X     X
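
A decision table like this maps directly onto code, and each of the 8 rules becomes a test case. The following C fragment is an illustrative sketch (not from the slides); the condition-to-action logic follows the table above.

```c
/* Illustrative sketch (not from the slides): the printer
 * troubleshooting decision table encoded as a function, so each of
 * the 8 condition combinations (rules) can serve as a test case. */
typedef struct {
    int check_power_cable;
    int check_printer_cable;
    int ensure_software_installed;
    int check_ink;
    int check_paper_jam;
} actions_t;

/* Conditions are booleans (1 = Y, 0 = N). */
actions_t troubleshoot(int does_not_print, int red_light, int unrecognized) {
    actions_t a = {0, 0, 0, 0, 0};
    if (does_not_print && !red_light && unrecognized)
        a.check_power_cable = 1;               /* rule 3 */
    if (does_not_print && unrecognized)
        a.check_printer_cable = 1;             /* rules 1, 3 */
    if (unrecognized)
        a.ensure_software_installed = 1;       /* rules 1, 3, 5, 7 */
    if (red_light)
        a.check_ink = 1;                       /* rules 1, 2, 5, 6 */
    if (does_not_print && !unrecognized)
        a.check_paper_jam = 1;                 /* rules 2, 4 */
    return a;
}
```

Exercising all 8 input combinations against the expected action columns gives complete coverage of the table's logic.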

Structural Testing
 • Program flow graph testing
   – Basis path testing
   – Decision-to-decision path
   – Test coverage metrics
 • Data flow testing

Program Flow Graph
 • “Given a program written in an imperative programming language, its program graph is a directed labeled graph in which nodes are either groups of entire statements or fragments of a statement, and edges represent flow of control” – P. Jorgensen
 • If i and j are nodes (basic blocks) in the program graph, there is an edge from node i to node j if and only if the statement corresponding to node j can be executed immediately after the last statement of the group of statement(s) that correspond to node i

Determine the Basic Blocks

    FindMean(FILE ScoreFile)
    {
        float SumOfScores = 0.0;
        int NumberOfScores = 0;
        float Mean = 0.0;
        float Score;

        Read(ScoreFile, Score);
        while (!EOF(ScoreFile)) {
            if (Score > 0.0) {
                SumOfScores = SumOfScores + Score;
                NumberOfScores++;
            }
            Read(ScoreFile, Score);
        }

        /* Compute the mean and print the result */
        if (NumberOfScores > 0) {
            Mean = SumOfScores / NumberOfScores;
            printf("The mean score is %f\n", Mean);
        } else
            printf("No scores found in file\n");
    }

Example of Program Flow Graph
 (figure not reproduced in the transcript)

Path Testing
 • Path testing focuses on test techniques that are based on the selection of test paths through a program graph; if the set of paths is properly chosen, then we can claim that we have achieved a measure of test thoroughness
 • The fault assumption for path testing techniques is that something has gone wrong with the software that makes it take a different path than the one intended
 • Structurally, a path is a sequence of statements in a program unit; semantically, a path is an execution instance of the program unit; for software testing we are interested in entry-exit paths

Path Testing Process
 • Input:
   – Source code and a path selection criterion
 • Process:
   – Generation of a program flow graph (PFG)
   – Selection of paths
   – Generation of test input data
   – Feasibility test of a path
   – Evaluation of the program’s output for the selected test cases

Example for a Simple PFG
 (figure not reproduced in the transcript)

Decision-to-Decision Path
 • A DD-path is a chain obtained from a program graph, where a chain is a path in which the initial and terminal nodes are distinct, and every interior node has indegree = 1 and outdegree = 1
 • Every internal node is 2-connected to every other node in the chain, and there are no instances of 1- or 3-connected nodes
 • DD-paths are used to create DD-path graphs

Decision-to-Decision Path Graph
 • Given a program written in an imperative language, its DD-path graph is a labeled directed graph in which nodes are DD-paths of its program graph, and edges represent control flow between successor DD-paths
 • In this respect, a DD-path graph is a condensation graph; for example, 2-connected program graph nodes are collapsed to a single DD-path graph node

Example of Path Graph
 (graph figure not reproduced in the transcript)

 Program Graph Nodes   DD-Path Name   Case #
 4                     first          1
 5-8                   A              5
 9                     B              4
 10                    C              4
 11                    D
                       E              5
 15                    F              4
 16                    G              3
 17                    H              4
 18                    I              3
 19                    J              4
 20                    K              3
 21                    L              4
 22                    last           2

Test Coverage
 • The motivation for using DD-paths is that they enable very precise descriptions of test coverage
 • In our quest to identify gaps and redundancy in our test cases, as these are used to exercise (test) different aspects of a program, we use formal models of the program structure to reason about testing effectiveness
 • Test coverage metrics are a device to measure the extent to which a set of test cases covers a program

Test Coverage Metrics

 Metric   Description of Coverage
 C0       Every statement
 C1       Every DD-path
 C1p      Every predicate to each outcome
 C2       C1 coverage + loop coverage
 Cd       C1 coverage + every dependent pair of DD-paths
 CMCC     Multiple condition coverage
 Cik      Every program path that contains up to k repetitions of a loop (usually k = 2)
 Cstat    “Statistically significant” fraction of paths
 C∞       All possible execution paths
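
The difference between the weaker and stronger metrics shows up even in tiny functions. This C fragment is an illustrative sketch (not from the slides): a single test with a negative input executes every statement (C0), but covering both outcomes of the predicate (C1p) requires a second, non-negative input.

```c
/* Illustrative sketch (not from the slides): one test input x = -5
 * executes every statement below (C0), yet C1p additionally demands
 * a test where the condition x < 0 is false, e.g. x = 3. */
int abs_value(int x) {
    int result = x;
    if (x < 0)          /* C1p: needs both a true and a false outcome */
        result = -x;
    return result;
}
```

This is why statement coverage alone can leave an entire branch outcome, and the faults hiding behind it, untested.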

Data Flow Testing
 • Data flow testing refers to a category of structural testing techniques that focus on the points of the code where variables obtain values (are defined) and the points of the program where these variables are referenced (are used)
   – Targets faults that may occur when a variable is defined and referenced in an improper way
     A variable is defined but never used
     A variable is used but never defined
     A variable is defined twice (or more times) before it is used
   – Parts of a program that constitute a slice: a subset of program statements that comply with a specific slicing criterion (e.g., all program statements that are affected by variable x at point P)
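
The define/use anomalies listed above can be seen in a small function. This C fragment is an illustrative sketch (not from the slides), with the anomalies marked in comments rather than detected automatically.

```c
/* Illustrative sketch (not from the slides): define/use patterns that
 * data flow testing is designed to expose, annotated in comments. */
int anomalies_demo(int a) {
    int unused = a + 1;   /* anomaly: defined but never used */
    int x;
    x = 1;                /* first definition of x ... */
    x = a * 2;            /* ... redefined before any use: anomaly */
    /* "used but never defined" would be reading x (or any variable)
     * before either assignment above -- an uninitialized read */
    return x;             /* use of x; the def-use pair to cover */
}
```

A data-flow criterion such as all-uses would require test paths covering each reaching definition together with each of its uses, which is what makes the redundant definition of x visible.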

Data Flow Testing Process
 • Data flow testing involves selecting entry/exit paths with the objective of covering certain data definition and use patterns, commonly known as data flow criteria
 • An outline of data flow testing is as follows:
   – Draw a data flow graph for the program
   – Select data flow testing criteria
   – Identify paths in the data flow graph to satisfy the selection criteria
   – Produce test cases for the selected paths

Reading Assignments
 • Sommerville’s book, 8th edition
   – Chapter 22, “Software Inspection”
   – Chapter 23, “Software Testing”
 • Sommerville’s book, 9th edition
   – Chapter 8, “Software Testing”