Software Test Metrics
"When you can measure what you are speaking about and express it in numbers, you know something about it; but when you cannot measure, when you cannot express it in numbers, your knowledge is of a meager and unsatisfactory kind: it may be the beginning of knowledge, but you have scarcely, in your thoughts, advanced to the stage of science." – Lord Kelvin

Measure – a quantitative indication of the extent, amount, dimension, capacity, or size of some attribute of a product or process.
Metric – a quantitative measure of the degree to which a system, component, or process possesses a given attribute; "a handle or guess about a given attribute."

Why Do We Need Metrics? "You cannot improve what you cannot measure." "You cannot control what you cannot measure." Test metrics help us to:
– decide on the next phase of activities
– provide evidence for a claim or prediction
– understand the type of improvement required
– decide on process or technology changes
– improve software quality
– anticipate and reduce future maintenance needs

Metric Classification

Products – explicit results of software development activities: deliverables, documentation, by-products
Processes – activities related to the production of software
Resources – inputs into the software development activities: hardware, knowledge, people

Product vs. Process
Process Metrics
– give insight into the process paradigm, software engineering tasks, work products, or milestones
– lead to long-term process improvement
Product Metrics
– assess the state of the project
– track potential risks
– uncover problem areas
– adjust workflow or tasks
– evaluate the team's ability to control quality

Types of Measures
Direct measures (internal attributes) – cost, effort, LOC, speed, memory
Indirect measures (external attributes) – functionality, quality, complexity, efficiency, reliability, maintainability

Size-Oriented Metrics Based on the size of the software produced:
– LOC – Lines of Code
– KLOC – thousands of Lines of Code
– SLOC – Statement Lines of Code (whitespace lines ignored)
Typical measures: errors or defects per KLOC, cost per LOC, documentation pages per KLOC
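For illustration, here is a minimal sketch of how such size-oriented figures are computed; the line count, defect count, and cost below are invented for the example, not taken from the slides:

    // Sketch only: size-oriented metrics from assumed project figures.
    public class SizeMetrics {
        public static void main(String[] args) {
            int linesOfCode = 24_500;   // assumed size of the delivered code
            int defectsFound = 86;      // assumed number of defects recorded
            double cost = 168_000;      // assumed development cost

            double defectsPerKloc = defectsFound / (linesOfCode / 1000.0);
            double costPerLoc = cost / linesOfCode;

            System.out.printf("Defects/KLOC: %.2f%n", defectsPerKloc);
            System.out.printf("Cost/LOC: %.2f%n", costPerLoc);
        }
    }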

McCabe’s Complexity Measures McCabe’s metrics are based on a control flow representation of the program. A program graph is used to depict control flow. Nodes represent processing tasks (one or more code statements); edges represent control flow between nodes.

Steps for CC:
1. Draw a control flow graph.
2. Calculate the cyclomatic complexity (CC).
3. Choose a "basis set" of paths – a basis path is a unique path through the software where no iterations are allowed; all possible paths through the system are linear combinations of the basis paths.
4. Generate test cases to exercise each path.

Control Flow Graph
– Any procedural design can be translated into a control flow graph
– Lines (or arrows) called edges represent the flow of control
– Circles called nodes represent one or more actions
– Areas bounded by edges and nodes are called regions
– A predicate node is a node containing a condition

Control Flow Graph Structures Basic control flow graph structures (figure not reproduced in this transcript).

Cyclomatic Complexity The cyclomatic complexity V(G) gives the size of a set of independent paths through the graph (the basis set):
– V(G) = E – N + 2, where E is the number of flow graph edges and N is the number of nodes
– V(G) = P + 1, where P is the number of predicate nodes
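As a small sketch (not part of the original slides), both formulas can be checked mechanically once a flow graph is written down as a list of directed edges; the graph below is the one from Example One on the next slide, with nodes numbered 1 to 5:

    // Sketch only: evaluating V(G) = E - N + 2 and V(G) = P + 1 for a small flow graph.
    import java.util.*;

    public class CyclomaticComplexity {
        public static void main(String[] args) {
            // Directed edges of the Example One flow graph (nodes 1..5).
            int[][] edges = { {1, 2}, {1, 3}, {2, 3}, {3, 4}, {3, 5}, {4, 5} };

            Set<Integer> nodes = new HashSet<>();
            for (int[] e : edges) { nodes.add(e[0]); nodes.add(e[1]); }

            int e = edges.length;   // E = 6
            int n = nodes.size();   // N = 5
            int p = 2;              // predicate nodes: the two if statements (nodes 1 and 3)

            System.out.println("V(G) = E - N + 2 = " + (e - n + 2));  // 3
            System.out.println("V(G) = P + 1     = " + (p + 1));      // 3
        }
    }

Both forms give 3 here because each predicate node is a simple two-way branch.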

Example One

    int example1(int value, boolean cond1, boolean cond2) {
        if (cond1)          // node 1
            value++;        // node 2
        if (cond2)          // node 3
            value--;        // node 4
        return value;       // node 5
    }

V(G) = E – N + 2 = 6 – 5 + 2 = 3, so the complexity is 3.
Basis paths and test data (cond1, cond2):
– path 1, 3, 5: false, false
– path 1, 2, 3, 5: true, false
– path 1, 2, 3, 4, 5: true, true
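A minimal sketch of step 4 (one test case per basis path) for this example; plain prints are used rather than any particular test framework, and the method is repeated so the snippet stands alone:

    // Sketch only: one test input per basis path of example1.
    public class Example1BasisPathTests {

        static int example1(int value, boolean cond1, boolean cond2) {
            if (cond1)          // node 1
                value++;        // node 2
            if (cond2)          // node 3
                value--;        // node 4
            return value;       // node 5
        }

        static void check(String path, int actual, int expected) {
            System.out.println(path + ": " + (actual == expected ? "pass" : "FAIL"));
        }

        public static void main(String[] args) {
            check("path 1-3-5     (false, false)", example1(10, false, false), 10);
            check("path 1-2-3-5   (true, false)",  example1(10, true,  false), 11);
            check("path 1-2-3-4-5 (true, true)",   example1(10, true,  true),  10);
        }
    }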

Cyclomatic Complexity – Exercise Calculate the cyclomatic complexity for the following control flow graphs.

Solution Cyclomatic complexity for the two control flow graphs:
– Graph 1: 12 edges, 9 nodes, V(G) = 12 – 9 + 2 = 5
– Graph 2: 13 edges, 10 nodes, V(G) = 13 – 10 + 2 = 5

Predicate Nodes Formula One possible way of calculating V(G) is to use the predicate nodes formula: V(G) = number of predicate nodes + 1. Thus, for a graph with a single predicate node: V(G) = 1 + 1 = 2.

A limitation of the predicate nodes formula is that it assumes there are only two outgoing flows from each such node.

Basis Path Testing During Integration Basis path testing can also be applied to integration testing, when units/components are integrated together. McCabe's Design Predicate approach:
1. Draw a "structure chart".
2. Calculate the integration complexity.
3. Select tests to exercise every "type" of interaction, not every combination.

Types of Interactions There are four types of unit/component interactions:

– Unconditional: unit/component A always calls unit/component B. A calling tree always has an integration complexity of at least one; the integration complexity is NOT incremented for each occurrence of an unconditional interaction.
– Conditional: unit/component A calls unit/component B only if certain conditions are met. The integration complexity is incremented by one for each occurrence of a conditional interaction in the structure chart.
– Mutually Exclusive Conditional: unit/component A calls either unit/component B or unit/component C (but not both), based upon certain conditions being met. The integration complexity is incremented by one for each occurrence of a mutually exclusive conditional interaction in the structure chart.
– Iterative: unit/component A calls unit/component B one or more times, based upon certain conditions being met. The integration complexity is incremented by one for each occurrence of an iterative interaction in the structure chart.
The counting rule is illustrated in the sketch below.
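A rough sketch of the counting rule above, assuming the call sites of a structure chart have already been classified by interaction type (the enum and the example tallies are illustrative assumptions, chosen to match the later Integration Basis Path Testing slide):

    // Sketch only: integration complexity = 1 + one increment per conditional,
    // mutually exclusive conditional, and iterative call site.
    import java.util.*;

    public class IntegrationComplexity {
        enum Interaction { UNCONDITIONAL, CONDITIONAL, MUTUALLY_EXCLUSIVE, ITERATIVE }

        static int complexity(List<Interaction> callSites) {
            int v = 1;  // every calling tree has an integration complexity of at least one
            for (Interaction i : callSites) {
                if (i != Interaction.UNCONDITIONAL) v++;  // unconditional calls add nothing
            }
            return v;
        }

        public static void main(String[] args) {
            // 2 conditional + 1 mutually exclusive conditional + 1 iterative -> complexity 5.
            List<Interaction> chart = List.of(
                Interaction.CONDITIONAL, Interaction.CONDITIONAL,
                Interaction.MUTUALLY_EXCLUSIVE, Interaction.ITERATIVE,
                Interaction.UNCONDITIONAL);
            System.out.println("Integration complexity: " + complexity(chart));  // 5
        }
    }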

Integration Basis Path Test Cases For an example structure chart (not reproduced here), the integration complexity is 5, giving a basis set of five paths:
1. A, B, C, E, and then A calls I
2. A, B, C, F, and then A calls I
3. A, B, D, G (only once), and then A calls I
4. A, B, D, G (more than once), and then A calls I
5. A, B, D, G, H, and then A calls I

Integration Basis Path Testing For a second example structure chart (not reproduced here), the interaction types are:
– 2 conditional
– 1 mutually exclusive conditional
– 1 iterative
– plus 1 simply because we have a graph
giving an integration complexity of 5 and the following basis set of paths:
1. A, C, F (only once)
2. A, C, F (more than once)
3. A, B, D, G, then B calls E, then A calls C
4. A, B, D, H, then B calls E, then A calls C, F
5. A, B, D, H, then B calls E and I, then A calls C, F

Test Case Defect Density The total number of errors found in test scripts versus the number developed and executed:
Test case defect density = (Defective Test Scripts / Total Test Scripts) × 100
Example: total test scripts developed 1360, total test scripts executed 1280, total test scripts passed 1065, total test scripts failed 215.
Test case defect density = (215 / 1280) × 100 = 16.8%
This 16.8% value can also be called the test case efficiency %, since it depends on the number of test cases that uncovered defects (1280 – 1065 = 215).
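A direct translation of the calculation into code, using the figures from the example above (a sketch only):

    // Sketch only: test case defect density from the example figures above.
    public class TestCaseDefectDensity {
        public static void main(String[] args) {
            int executed = 1280;
            int passed = 1065;
            int failed = executed - passed;              // 215 defective test scripts
            double density = 100.0 * failed / executed;  // (215 / 1280) * 100
            System.out.printf("Test case defect density: %.1f%%%n", density);  // 16.8%
        }
    }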

Review Efficiency The review efficiency is a metric that offers insight into the quality of reviews and testing. Some organizations also use the term "static testing" efficiency and aim to find a minimum of 30% of defects in static testing.
Review efficiency = 100 × (Total number of defects found by reviews / Total number of project defects)
Example: a project found a total of 269 defects in the various reviews, which were fixed, and the test team then reported 476 valid defects.
Review efficiency = [269 / (269 + 476)] × 100 = 36.1%
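The same calculation as a short sketch, using the example figures above:

    // Sketch only: review efficiency from the example figures above.
    public class ReviewEfficiency {
        public static void main(String[] args) {
            int foundByReviews = 269;
            int foundByTesting = 476;
            int totalProjectDefects = foundByReviews + foundByTesting;  // 745
            double efficiency = 100.0 * foundByReviews / totalProjectDefects;
            System.out.printf("Review efficiency: %.1f%%%n", efficiency);  // 36.1%
        }
    }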

Defect Removal Effectiveness
DRE = (Defects removed during a development phase / Defects latent in the product) × 100%
Defects latent in the product = defects removed during the development phase + defects found later (e.g., by the user)
Example (figures taken from a defect count table that is not reproduced here): the Defect Removal Effectiveness for each of the phases would be as follows:
– Requirements DRE = 10 / 16 × 100% = 63%
– Design DRE = (3 + 18) / 35 × 100% = 60%
– Coding DRE = (0 + 4 + 26) / 55 × 100% = 55%
– Testing DRE = (2 + 5 + 8) / 25 × 100% = 60%
– Development DRE = (Pre-release Defects) / (Total Defects) × 100% = 76 / 86 × 100% = 88%
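The phase figures can be checked with a short calculation; the denominators used below are the ones shown above, which were reconstructed from the stated percentages because the slide's defect table is not reproduced, so treat them as assumptions:

    // Sketch only: DRE per phase, using reconstructed (assumed) denominators.
    public class DefectRemovalEffectiveness {
        static double dre(int removed, int latent) {
            return 100.0 * removed / latent;
        }

        public static void main(String[] args) {
            System.out.printf("Requirements DRE: %.0f%%%n", dre(10, 16));          // 63%
            System.out.printf("Design DRE:       %.0f%%%n", dre(3 + 18, 35));      // 60%
            System.out.printf("Coding DRE:       %.0f%%%n", dre(0 + 4 + 26, 55));  // 55%
            System.out.printf("Testing DRE:      %.0f%%%n", dre(2 + 5 + 8, 25));   // 60%
            System.out.printf("Development DRE:  %.0f%%%n", dre(76, 86));          // 88%
        }
    }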

According to Capers Jones, world-class organizations have a Development DRE greater than 95%. The longer a defect exists in a product before it is detected, the more expensive it is to fix. Knowing the DRE for each phase can help an organization target its process improvement efforts so that defect detection methods are strengthened where they can be most effective.