Software Testing and Quality Assurance: Software Quality Metrics

Reading Assignment
Stephen H. Kan, "Metrics and Models in Software Quality Engineering", Addison-Wesley, Second Edition.
◦ Chapter 4: Sections 1, 2, 3, and 4.

Objectives
◦ Software metrics classification
◦ Examples of metrics programs

Software Metrics
Software metrics can be classified into three categories:
◦ Product metrics: describe the characteristics of the product, such as size, complexity, design features, performance, and quality level.
◦ Process metrics: used to improve software development and maintenance. Examples include the effectiveness of defect removal during development, the pattern of testing defect arrivals, and the response time of the fix process.
◦ Project metrics: describe the project characteristics and execution. Examples include the number of software developers, the staffing pattern over the life cycle of the software, cost, schedule, and productivity.
Some metrics belong to multiple categories. For example, the in-process quality metrics of a project are both process metrics and project metrics.

Product Quality Metrics
◦ Mean time to failure
◦ Defect density
◦ Customer problems
◦ Customer satisfaction

Product Quality Metrics
Mean time to failure (MTTF)
◦ Used most often with safety-critical systems such as airline traffic control systems, avionics, and weapons.
◦ For example, the U.S. government mandates that its air traffic control system cannot be unavailable for more than three seconds per year.
Defect density
◦ Related to MTTF, but different.
◦ Can be looked at from the development team perspective (discussed here) or from the customer perspective (discussed in the book).
◦ To define a rate, we first have to operationalize the numerator and the denominator, and specify the time frame.
 Observed failures can be used to approximate the number of defects in the software (the numerator).
 The denominator is the size of the software, usually expressed in thousand lines of code (KLOC) or in number of function points.
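The rate defined above can be sketched as a tiny calculation; the defect and size figures below are hypothetical:

```python
# Minimal sketch of defect density: observed defects over size in KLOC.

def defect_density_per_kloc(defects_observed: int, size_loc: int) -> float:
    """Defects per thousand lines of code (KLOC)."""
    return defects_observed / (size_loc / 1000)

# 120 defects observed against a 250,000-LOC product:
density = defect_density_per_kloc(120, 250_000)
print(density)  # 0.48 defects per KLOC
```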

Product Quality Metrics
Defect density (cont.)
◦ How should the lines of code (LOC) metric be counted? Common variations include:
 executable lines
 executable lines plus data definitions
 executable lines, data definitions, and comments
 executable lines, data definitions, comments, and job control language
 physical lines on an input screen
 lines terminated by logical delimiters

Product Quality Metrics
Defect density (cont.)
◦ Function points: based on the notion of a function, a collection of executable statements that performs a certain task, together with declarations of the formal parameters and local variables manipulated by those statements.
 The defect rate metric is indexed to the number of functions the software provides.
 Arguably the ultimate measure of software productivity.
 Measuring functions is theoretically promising but realistically very difficult.

Product Quality Metrics
Defect density (cont.)
 At IBM Rochester, lines of code data are based on instruction statements (logical LOC) and include executable code and data definitions but exclude comments.
 Because the LOC count is based on source instructions, the two size metrics are called shipped source instructions (SSI) and new and changed source instructions (CSI), respectively.
 The relationship between SSI and CSI (per Kan): SSI (current release) = SSI (previous release) + CSI (new and changed code of the current release) − deleted code − changed code (changed code is counted in both the previous SSI and the CSI, so it is subtracted once to avoid double counting).
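The SSI/CSI relationship above can be sketched as follows; all figures are hypothetical:

```python
# Sketch of the SSI update for a new release, per the relationship above.

def current_ssi(prev_ssi: int, csi: int, deleted: int, changed: int) -> int:
    # Changed code is already counted in both the previous SSI and the CSI,
    # so it is subtracted once to avoid double counting.
    return prev_ssi + csi - deleted - changed

print(current_ssi(prev_ssi=100_000, csi=20_000, deleted=1_000, changed=5_000))
# 114000
```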

Product Quality Metrics
Defect density (cont.)
◦ Several postrelease defect rate metrics, per thousand SSI (KSSI) or per thousand CSI (KCSI), can be defined:
 Total defects per KSSI (a measure of code quality of the total product)
 Field defects per KSSI (a measure of defect rate in the field)
 Release-origin defects (field and internal) per KCSI (a measure of development quality)
 Release-origin field defects per KCSI (a measure of development quality per defects found by customers)

Product Quality Metrics
Customer problems
◦ Measures the problems customers encounter when using the product.
◦ From the customers' standpoint, all problems they encounter while using the software product, not just the valid defects, are problems with the software.
 For example: usability problems, unclear documentation or information, duplicates of valid defects, or user errors.

Product Quality Metrics
Customer problems (cont.)
Problems per user month (PUM):
 PUM = Total problems that customers reported for a time period ÷ Total number of license-months of the software during the period
where
 Number of license-months = Number of installed licenses of the software × Number of months in the calculation period
◦ Approaches to achieve a low PUM include:
 Improve the development process and reduce the product defects.
 Reduce the non-defect-oriented problems by improving all aspects of the products (such as usability and documentation), customer education, and support.
 Increase sales (the number of installed licenses) of the product.
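The PUM calculation above can be sketched directly; the counts here are hypothetical:

```python
# Sketch of problems per user-month (PUM): reported problems divided by
# license-months (installed licenses x months in the period).

def problems_per_user_month(total_problems: int,
                            installed_licenses: int,
                            months_in_period: int) -> float:
    license_months = installed_licenses * months_in_period
    return total_problems / license_months

# 250 problems reported over a year against 1,000 installed licenses:
pum = problems_per_user_month(250, 1_000, 12)
print(round(pum, 4))  # 0.0208
```

Note how increasing the number of installed licenses lowers PUM even if the absolute number of reported problems stays the same, which is why "increase sales" appears among the approaches above.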

Product Quality Metrics
Customer problems (cont.)
 The customer problems metric can be regarded as an intermediate measurement between defects measurement and customer satisfaction: Defects → Customer Problems → Customer Satisfaction.
Comparing the defect rate and PUM metrics:
◦ Numerator: the defect rate counts valid and unique product defects; PUM counts all customer problems (defects and non-defects, first time and repeated).
◦ Denominator: the defect rate uses the size of the product (KLOC or function points); PUM uses customer usage of the product (user-months).
◦ Measurement perspective: the defect rate reflects the producer (the software development organization); PUM reflects the customer.
◦ Scope: the defect rate covers intrinsic product quality; PUM covers intrinsic product quality plus other factors.

Product Quality Metrics
Customer satisfaction
◦ Measured by customer survey data via the five-point scale: Very satisfied, Satisfied, Neutral, Dissatisfied, and Very dissatisfied.
◦ Specific parameters of customer satisfaction in software are monitored:
 IBM uses the CUPRIMDSO categories (capability/functionality, usability, performance, reliability, installability, maintainability, documentation/information, service, and overall).
 Hewlett-Packard uses FURPS (functionality, usability, reliability, performance, and supportability).
◦ Based on the five-point-scale data, several metrics with slight variations can be constructed and used:
 Percent of completely satisfied customers
 Percent of satisfied customers (satisfied and completely satisfied)
 Percent of dissatisfied customers (dissatisfied and completely dissatisfied)
 Percent of nonsatisfied customers (neutral, dissatisfied, and completely dissatisfied)

Product Quality Metrics
Customer satisfaction (cont.)
 Some companies use the net satisfaction index (NSI) to facilitate comparisons across products.
 The NSI uses the following weighting factors:
 Completely satisfied = 100%
 Satisfied = 75%
 Neutral = 50%
 Dissatisfied = 25%
 Completely dissatisfied = 0%
 This weighting approach may mask the satisfaction profile of one's customer set.
 It is therefore inferior to the simple approach of calculating the percentages of specific categories.
 A weighted index is best reserved for data summary when multiple indicators are too cumbersome to show.
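The masking effect noted above is easy to demonstrate: two very different customer sets can yield the same NSI. The counts below are hypothetical:

```python
# Sketch showing how the NSI weighting can mask the satisfaction profile.

WEIGHTS = {
    "completely satisfied": 1.00,
    "satisfied": 0.75,
    "neutral": 0.50,
    "dissatisfied": 0.25,
    "completely dissatisfied": 0.00,
}

def nsi(counts: dict) -> float:
    """Weighted average of responses, using the NSI weighting factors."""
    total = sum(counts.values())
    return sum(WEIGHTS[category] * n for category, n in counts.items()) / total

polarized = {"completely satisfied": 50, "completely dissatisfied": 50}
all_neutral = {"neutral": 100}

# Identical index, radically different customer sets:
print(nsi(polarized), nsi(all_neutral))  # 0.5 0.5
```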

In-Process Quality Metrics
In-process quality metrics are less formally defined than end-product metrics, and their practices vary greatly among software developers.
◦ They range from tracking defect arrival during formal machine testing to covering various parameters in each phase of the development cycle.
In-process quality metrics:
◦ Defect density during machine testing
◦ Defect arrival pattern during machine testing
◦ Phase-based defect removal pattern
◦ Defect removal effectiveness

Defect Density During Machine Testing
A higher defect rate found during testing indicates that either:
◦ the software experienced higher error injection during its development process, or
◦ extraordinary testing effort was exerted, owing to
 additional testing, or
 a new testing approach that was deemed more effective at detecting defects.
This simple metric of defects per KLOC or function point is a good indicator of quality while the software is still being tested.
◦ It is also useful for monitoring subsequent releases of a product in the same development organization.

Defect Density During Machine Testing
The development team or the project manager can use the following scenarios to judge the release quality:
◦ If the defect rate during testing is the same as or lower than that of the previous release (or a similar product), ask: did testing for the current release deteriorate?
 If the answer is no, the quality perspective is positive; otherwise, more testing is needed (e.g., add test cases to increase coverage, customer testing, stress testing, etc.).
◦ If the defect rate during testing is substantially higher than that of the previous release (or a similar product), ask: did we plan for and actually improve testing effectiveness?
 If the answer is no, the quality perspective is negative, implying the need for more testing (which can itself result in even higher defect rates!). Otherwise, the quality perspective is the same or positive.
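The two scenarios above can be sketched as a small decision function; the boolean inputs stand in for the answers to the two questions, and all names are hypothetical:

```python
# Hypothetical sketch of the release-quality reasoning above.

def judge_release_quality(defect_rate: float, baseline_rate: float,
                          testing_deteriorated: bool,
                          testing_improved: bool) -> str:
    if defect_rate <= baseline_rate:
        # Scenario 1: rate same or lower than the baseline release.
        return "positive" if not testing_deteriorated else "more testing needed"
    # Scenario 2: rate substantially higher than the baseline release.
    return "same or positive" if testing_improved else "negative: more testing needed"

print(judge_release_quality(0.4, 0.5,
                            testing_deteriorated=False,
                            testing_improved=False))  # positive
```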

Defect Arrival Pattern During Machine Testing
The pattern of defect arrivals (or, for that matter, times between failures) gives more information than defect density during testing.
The objective is always to look for defect arrivals that stabilize at a very low level, or times between failures that are far apart, before ending the testing effort and releasing the software to the field.


Defect Arrival Pattern During Machine Testing
Three different quality metrics need to be looked at simultaneously:
◦ The defect arrivals (defects reported) during the testing phase by time interval (e.g., week).
◦ The pattern of valid defect arrivals, i.e., the true defect pattern after problem determination on the reported problems.
◦ The pattern of the defect backlog over time.

Phase-Based Defect Removal Pattern
An extension of the test defect density metric.
In addition to testing, it requires tracking of defects at all phases of the development cycle, including design reviews, code inspections, and formal verifications before testing.
The pattern of phase-based defect removal reflects the overall defect removal ability of the development process.
Related quality metrics include defect rates, inspection coverage, and inspection effort.

Phase-Based Defect Removal Pattern
Phase abbreviations:
 I0: high-level design review
 I1: low-level design review
 I2: code inspection
 UT: unit test
 CT: component test
 ST: system test

Defect Removal Effectiveness
Defect removal effectiveness (DRE) for a development phase:
 DRE = (Defects removed during the phase ÷ Defects latent in the product at that phase) × 100%
Because the total number of latent defects in the product at any given phase is not known, the denominator can only be approximated. It is usually estimated by:
 Defects removed during the phase + Defects found later
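With the denominator approximated as described above, the estimate reduces to a one-line calculation; the counts below are hypothetical:

```python
# Sketch of the defect removal effectiveness (DRE) estimate, with the
# latent-defect denominator approximated by
# (defects removed during the phase + defects found later).

def defect_removal_effectiveness(removed_in_phase: int, found_later: int) -> float:
    return removed_in_phase / (removed_in_phase + found_later) * 100

# 85 defects removed in code inspection, 15 related defects found later:
print(defect_removal_effectiveness(85, 15))  # 85.0
```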

Defect Removal Effectiveness
Phase abbreviations:
 I0: high-level design review
 I1: low-level design review
 I2: code inspection
 UT: unit test
 CT: component test
 ST: system test

Metrics for Software Maintenance
During the maintenance phase, defect arrivals by time interval and customer problem calls by time interval are the de facto metrics.
◦ These are largely determined by the development process before the maintenance phase.
◦ Hence, not much can be done about the quality of the product itself during the maintenance phase.
◦ What is needed is a measure of how quickly and efficiently defects are fixed.
Metrics for software maintenance:
◦ Fix backlog and backlog management index
◦ Fix response time and fix responsiveness
◦ Percent delinquent fixes
◦ Fix quality

Fix Backlog and Backlog Management Index
Fix backlog: a simple count of reported problems that remain open at the end of each month or each week.
Backlog management index (BMI):
 BMI = (Number of problems closed during the month ÷ Number of problem arrivals during the month) × 100%
◦ If BMI is larger than 100%, the backlog is reduced; if it is less than 100%, the backlog increases.


Fix Response Time and Fix Responsiveness
Usually measured as the mean time of all problems from open to closed.
The allotted time usually depends on the severity of the problems:
◦ shorter for severe problems, longer for minor problems.

Percent Delinquent Fixes
Captures the fixes whose latency exceeded the time allotted.
Accounts only for closed problems.
◦ What about problems that are still open?
 The active backlog refers to all open problems for the week:
 the sum of the existing backlog at the beginning of the week and the new problem arrivals during the week.
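A minimal sketch of the closed-problem version of this metric; each fix is paired with its allotted limit (e.g., set by severity), and all figures are hypothetical:

```python
# Sketch of percent delinquent fixes over a set of closed fixes.

def percent_delinquent_fixes(fixes: list) -> float:
    """fixes: list of (turnaround_days, allotted_days) tuples for closed fixes."""
    delinquent = sum(1 for turnaround, allotted in fixes if turnaround > allotted)
    return delinquent / len(fixes) * 100

closed_fixes = [(3, 7), (10, 7), (2, 2), (30, 14)]  # two exceeded their limit
print(percent_delinquent_fixes(closed_fixes))  # 50.0
```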

Fix Quality
A defect found by the customer is bad; however, receiving a defective fix, or a fix that introduces a defect in another component, is even worse.
The metric of percent defective fixes is the percentage of all fixes in a time interval (e.g., one month) that are defective.
Discussion: when might the absolute number of defective fixes matter more than the percentage?

Examples of Metrics Programs
The book presents three sample programs:
◦ Motorola
◦ Hewlett-Packard (HP)
◦ IBM Rochester
We will only look at one, viz., Motorola.

Motorola’s Software Metrics Program
Followed the Goal/Question/Metric (GQM) paradigm of Basili and Weiss:
◦ goals were identified,
◦ questions were formulated in quantifiable terms, and
◦ metrics were established.

Motorola’s Software Metrics Program
Goal 1: Improve project planning
 Question 1.1: What was the accuracy of estimating the actual value of project schedule?
 Metric 1.1: Schedule Estimation Accuracy (SEA)
 Question 1.2: What was the accuracy of estimating the actual value of project effort?
 Metric 1.2: Effort Estimation Accuracy (EEA)

Motorola’s Software Metrics Program
Goal 2: Increase defect containment
 Question 2.1: What is the currently known effectiveness of the defect detection process prior to release?
 Metric 2.1: Total Defect Containment Effectiveness (TDCE)
 Question 2.2: What is the currently known containment effectiveness of faults introduced during each constructive phase of software development for a particular software product?
 Metric 2.2: Phase Containment Effectiveness for phase i (PCEi)
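TDCE measures the share of all known defects that were caught before release; a hedged sketch with hypothetical counts:

```python
# Sketch of Total Defect Containment Effectiveness (TDCE): prerelease
# defects over all known defects (prerelease + postrelease), as a percentage.

def total_defect_containment_effectiveness(prerelease_defects: int,
                                           postrelease_defects: int) -> float:
    total_known = prerelease_defects + postrelease_defects
    return prerelease_defects / total_known * 100

# 950 defects found before release, 50 found by customers afterward:
print(total_defect_containment_effectiveness(950, 50))  # 95.0
```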

Motorola’s Software Metrics Program
Goal 3: Increase software reliability
 Question 3.1: What is the rate of software failures, and how does it change over time?
 Metric 3.1: Failure Rate (FR)

Motorola’s Software Metrics Program
Goal 4: Decrease software defect density
 Question 4.1: What is the normalized number of in-process faults, and how does it compare with the number of in-process defects?
 Metric 4.1a: In-process Faults (IPF)
 Metric 4.1b: In-process Defects (IPD)

Motorola’s Software Metrics Program
 Question 4.2: What is the currently known defect content of software delivered to customers, normalized by Assembly-equivalent size?
 Metric 4.2a: Total Released Defects (TRD) total
 Metric 4.2b: Total Released Defects (TRD) delta

Motorola’s Software Metrics Program
 Question 4.3: What is the currently known customer-found defect content of software delivered to customers, normalized by Assembly-equivalent source size?
 Metric 4.3a: Customer-Found Defects (CFD) total
 Metric 4.3b: Customer-Found Defects (CFD) delta

Motorola’s Software Metrics Program
Goal 5: Improve customer service
 Question 5.1: What is the number of new problems opened during the month?
 Metric 5.1: New Open Problems (NOP)
 Question 5.2: What is the total number of open problems at the end of the month?
 Metric 5.2: Total Open Problems (TOP)
 Question 5.3: What is the mean age of open problems at the end of the month?
 Metric 5.3: Mean Age of Open Problems (AOP)
 Question 5.4: What is the mean age of the problems that were closed during the month?
 Metric 5.4: Mean Age of Closed Problems (ACP)

Motorola’s Software Metrics Program
Goal 6: Reduce the cost of nonconformance
 Question 6.1: What was the cost to fix postrelease problems during the month?
 Metric 6.1: Cost of Fixing Problems (CFP)
Goal 7: Increase software productivity
 Question 7.1: What was the productivity of software development projects (based on source size)?
 Metric 7.1a: Software Productivity total (SP total)
 Metric 7.1b: Software Productivity delta (SP delta)

Other In-Process Metrics
◦ Life-cycle phase and schedule tracking metric: track schedule based on life-cycle phase and compare actual to plan.
◦ Cost/earned value tracking metric: track the actual cumulative cost of the project versus the budgeted cost, with continuous updates throughout the project.
◦ Requirements tracking metric: track the number of requirements changes at the project level.
◦ Design tracking metric: track the number of requirements implemented in design versus the number of requirements written.
◦ Fault-type tracking metric: track the causes of faults.
◦ Remaining defect metric: track faults per month for the project and use a Rayleigh curve to project the number of faults in the months ahead during development.
◦ Review effectiveness metric: track error density by stages of review and use control chart methods to flag exceptionally high or low data points.

Key Points
Software quality metrics can be grouped, according to the software life cycle, into end-product, in-process, and maintenance quality metrics.
Product quality metrics:
◦ Mean time to failure
◦ Defect density
◦ Customer-reported problems
◦ Customer satisfaction
In-process quality metrics:
◦ Phase-based defect removal pattern
◦ Defect removal effectiveness
◦ Defect density during formal machine testing
◦ Defect arrival pattern during formal machine testing
Maintenance quality metrics:
◦ Fix backlog
◦ Backlog management index
◦ Fix response time and fix responsiveness
◦ Percent delinquent fixes
◦ Defective fixes