
Software Quality Metrics III

Software Quality Metrics  The subset of metrics that focus on quality  Software quality metrics can be divided into: End-product quality metrics In-process quality metrics  The essence of software quality engineering is to investigate the relationships among in- process metric, project characteristics, and end-product quality, and, based on the findings, engineer improvements in quality to both the process and the product.

Three Groups of Software Quality Metrics  Product quality  In-process quality  Maintenance quality

Product Quality Metrics  Intrinsic product quality Mean time to failure Defect density  Customer related Customer problems Customer satisfaction

Intrinsic Product Quality  Intrinsic product quality is usually measured by: the number of “bugs” (functional defects) in the software (defect density), or how long the software can run before “crashing” (MTTF – mean time to failure)  The two metrics are correlated but different

Difference Between Errors, Defects, Faults, and Failures (IEEE/ANSI)  An error is a human mistake that results in incorrect software.  The resulting fault is an accidental condition that causes a unit of the system to fail to function as required.  A defect is an anomaly in a product.  A failure occurs when a functional unit of a software-related system can no longer perform its required function or cannot perform it within specified limits.

What’s the Difference between a Fault and a Defect?

Defect Rate for New and Changed Lines of Code  Depends on having LOC counts for both the entire product and the new and changed code  Depends on tracking defects to the release origin (the portion of code that contains the defects) and at what release that code was added, changed, or enhanced
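As a minimal sketch of this metric (function name and figures are assumptions for illustration): defects traced back to a release, divided by the thousand new and changed source lines (KCSI) shipped in that release.

```python
# Sketch of the defect rate for new and changed code: defects traced to
# a release, per thousand new/changed source lines (KCSI). Illustrative.

def defect_rate_new_changed(defects_traced, kcsi):
    """Defects per KCSI for one release."""
    return defects_traced / kcsi

# 45 defects traced back to release R2, which shipped 30 KCSI
rate = defect_rate_new_changed(defects_traced=45, kcsi=30.0)
print(rate)  # 1.5 defects per KCSI
```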

Function Points  A function can be defined as a collection of executable statements that performs a certain task, together with declarations of the formal parameters and local variables manipulated by those statements.  In practice functions are measured indirectly.  Many of the problems associated with LOC counts are addressed.

Measuring Function Points  The number of function points is a weighted total of five major components that comprise an application. Number of external inputs x 4 Number of external outputs x 5 Number of logical internal files x10 Number of external interface files x 7 Number of external inquiries x 4

Measuring Function Points (Cont’d)  The function count (FC) is a weighted total of five major components that comprise an application. Number of external inputs x (3 to 6) Number of external outputs x (4 to 7) Number of logical internal files x (7 to 15) Number of external interface files x (5 to 10) Number of external inquiries x (3 to 6) the weighting factor depends on complexity

Measuring Function Points (Cont’d)  Each number is multiplied by the weighting factor and then they are summed.  This weighted sum (FC) is further refined by multiplying it by the Value Adjustment Factor (VAF).  Each of 14 general system characteristics are assessed on a scale of 0 to 5 as to their impact on (importance to) the application.

The 14 System Characteristics 1. Data Communications 2. Distributed functions 3. Performance 4. Heavily used configuration 5. Transaction rate 6. Online data entry 7. End-user efficiency

The 14 System Characteristics (Cont’d) 8. Online update 9. Complex processing 10. Reusability 11. Installation ease 12. Operational ease 13. Multiple sites 14. Facilitation of change

The 14 System Characteristics (Cont’d)  VAF is the sum of these 14 characteristics divided by 100 plus  Notice that if an average rating is given each of the 14 factors, their sum is 35 and therefore VAF =1  The final function point total is then the function count multiplied by VAF  FP = FC x VAF

Customer Problems Metric  Customer problems are all the difficulties customers encounter when using the product.  They include: Valid defects Usability problems Unclear documentation or information Duplicates of valid defects (problems already fixed but not known to customer) User errors  The problem metric is usually expressed in terms of problems per user month (PUM)

Customer Problems Metric (Cont’d)  PUM = Total problems that customers reported for a time period Total number of license-months of the software during the period where Number of license-months = Number of the install licenses of the software x Number of months in the calculation period

Approaches to Achieving a Low PUM  Improve the development process and reduce the product defects.  Reduce the non-defect-oriented problems by improving all aspects of the products (e.g., usability, documentation), customer education, and support.  Increase sales (the number of installed licenses) of the product.

Customer Satisfaction Metrics Customer Satisfaction Issues Customer Problems Defects

Customer Satisfaction Metrics (Cont’d)  Customer satisfaction is often measured by customer survey data via the five-point scale: Very satisfied Satisfied Neutral Dissatisfied Very dissatisfied

IBM Parameters of Customer Satisfaction  CUPRIMDSO Capability (functionality) Usability Performance Reliability Installability Maintainability Documentation Service Overall

HP Parameters of Customer Satisfaction  FURPS Functionality Usability Reliability Performance Service

Examples Metrics for Customer Satisfaction 1. Percent of completely satisfied customers 2. Percent of satisfied customers (satisfied and completely satisfied) 3. Percent of dissatisfied customers (dissatisfied and completely dissatisfied) 4. Percent of nonsatisfied customers (neutral, dissatisfied, and completely dissatisfied)
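The four percentages above can be computed from five-point survey counts; a sketch (response labels and counts are assumptions):

```python
# Sketch: the four satisfaction metrics from five-point survey responses.
from collections import Counter

def satisfaction_metrics(responses):
    n = len(responses)
    c = Counter(responses)
    return {
        "completely_satisfied": c["very satisfied"] / n,
        "satisfied": (c["very satisfied"] + c["satisfied"]) / n,
        "dissatisfied": (c["dissatisfied"] + c["very dissatisfied"]) / n,
        "nonsatisfied": (c["neutral"] + c["dissatisfied"]
                         + c["very dissatisfied"]) / n,
    }

survey = (["very satisfied"] * 40 + ["satisfied"] * 30 + ["neutral"] * 15
          + ["dissatisfied"] * 10 + ["very dissatisfied"] * 5)
m = satisfaction_metrics(survey)
print(m["satisfied"], m["nonsatisfied"])  # 0.7 0.3
```

Note that "satisfied" and "nonsatisfied" partition the sample, so the two printed values sum to 1.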

In-Process Quality Metrics  Defect density during machine testing  Defect arrival pattern during machine testing  Phase-based defect removal pattern  Defect removal effectiveness

Defect Density During Machine Testing  Defect rate during formal machine testing (testing after code is integrated into the system library) is usually positively correlated with the defect rate in the field.  The simple metric of defects per KLOC or function point is a good indicator of quality while the product is being tested.

Defect Density During Machine Testing (Cont’d)  Scenarios for judging release quality: If the defect rate during testing is the same or lower than that of the previous release, then ask: Does the testing for the current release deteriorate?  If the answer is no, the quality perspective is positive.  If the answer is yes, you need to do extra testing.

Defect Density During Machine Testing (Cont’d)  Scenarios for judging release quality (cont’d): If the defect rate during testing is substantially higher than that of the previous release, then ask: Did we plan for and actually improve testing effectiveness?  If the answer is no, the quality perspective is negative.  If the answer is yes, then the quality perspective is the same or positive.

Defect Arrival Pattern During Machine Testing  The pattern of defect arrivals gives more information than defect density during testing.  The objective is to look for defect arrivals that stabilize at a very low level, or times between failures that are far apart before ending the testing effort and releasing the software.

Two Contrasting Defect Arrival Patterns During Testing

Three Metrics for Defect Arrival During Testing  The defect arrivals during the testing phase by time interval (e.g., week). These are raw arrivals, not all of which are valid.  The pattern of valid defect arrivals – when problem determination is done on the reported problems. This is the true defect pattern.  The pattern of defect backlog over time. This is needed because development organizations cannot investigate and fix all reported problems immediately. This metric is a workload statement as well as a quality statement.
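The third metric above, the backlog over time, is simply cumulative arrivals minus cumulative closures; a sketch with invented weekly counts:

```python
# Defect backlog over time: cumulative arrivals minus cumulative
# closures, week by week (counts are invented for illustration).
from itertools import accumulate

weekly_arrivals = [30, 45, 50, 35, 20, 10, 5]
weekly_closures = [10, 25, 40, 45, 30, 20, 10]

backlog = list(accumulate(a - c for a, c in
                          zip(weekly_arrivals, weekly_closures)))
print(backlog)  # [20, 40, 50, 40, 30, 20, 15]
```

Here arrivals tail off while closures catch up, so the backlog peaks mid-cycle and then drains, which is the shape one hopes to see before release.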

Phase-Based Defect Removal Pattern  This is an extension of the test defect density metric.  It requires tracking defects in all phases of the development cycle.  The pattern of phase-based defect removal reflects the overall defect removal ability of the development process.
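A phase-based removal pattern can be expressed as defects removed per KLOC at each phase; a sketch (phase names and counts are invented):

```python
# Phase-based defect removal pattern sketch: defects removed per KLOC
# at each development phase (figures are illustrative).
kloc = 100.0
removed = {"requirements review": 30, "design inspection": 120,
           "code inspection": 180, "unit test": 90, "system test": 40}

pattern = {phase: count / kloc for phase, count in removed.items()}
for phase, rate in pattern.items():
    print(f"{phase}: {rate:.2f} defects/KLOC")
```

A process with strong front-end removal shows most of the pattern's mass in the inspection phases rather than in test, as in this example.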

Defect Removal by Phase for Two Products

Examples of Metrics Programs  Motorola Follows the Goal/Question/Metric paradigm of Basili and Weiss Goals: 1. Improve project planning 2. Increase defect containment 3. Increase software reliability 4. Decrease software defect density 5. Improve customer service 6. Reduce the cost of nonconformance 7. Increase software productivity

Examples of Metrics Programs (Cont’d)  Motorola (cont’d) Measurement Areas  Delivered defects and delivered defects per size  Total effectiveness throughout the process  Adherence to schedule  Accuracy of estimates  Number of open customer problems  Time that problems remain open  Cost of nonconformance  Software reliability

Examples of Metrics Programs (Cont’d)  Motorola (cont’d) For each goal the questions to be asked and the corresponding metrics were formulated:  Goal 1: Improve Project Planning  Question 1.1: What was the accuracy of estimating the actual value of project schedule?  Metric 1.1: Schedule Estimation Accuracy (SEA) SEA = (Actual project duration)/(Estimated project duration)

Examples of Metrics Programs (Cont’d)  Hewlett-Packard The software metrics program includes both primitive and computed metrics. Primitive metrics are directly measurable Computed metrics are mathematical combinations of primitive metrics  (Average fixed defects)/(working day)  (Average engineering hours)/(fixed defect)  (Average reported defects)/(working day)  Bang – A quantitative indicator of net usable function from the user’s point of view

Examples of Metrics Programs (Cont’d)  Hewlett-Packard (cont’d) Computed metrics are mathematical combinations of primitive metrics (cont’d)  (Branches covered)/(total branches)  Defects/KNCSS (thousand noncomment source statements)  Defects/LOD (lines of documentation not included in program source code)  Defects/(testing time)  Design weight – sum of module weights (function of token and decision counts) over the set of all modules in the design

Examples of Metrics Programs (Cont’d)  Hewlett-Packard (cont’d) Computed metrics are mathematical combinations of primitive metrics (cont’d)  NCSS/(engineering month)  Percent overtime – (average overtime)/(40 hours per week)  Phase – (engineering months)/(total engineering months)

Examples of Metrics Programs (Cont’d)  IBM Rochester Selected quality metrics  Overall customer satisfaction  Postrelease defect rates  Customer problem calls per month  Fix response time  Number of defect fixes  Backlog management index  Postrelease arrival patterns for defects and problems

Examples of Metrics Programs (Cont’d)  IBM Rochester (cont’d) Selected quality metrics (cont’d)  Defect removal model for the software development process  Phase effectiveness  Inspection coverage and effort  Compile failures and build/integration defects  Weekly defect arrivals and backlog during testing  Defect severity

Examples of Metrics Programs (Cont’d)  IBM Rochester (cont’d) Selected quality metrics (cont’d)  Defect cause and problem component analysis  Reliability (mean time to initial program loading during testing)  Stress level of the system during testing  Number of system crashes and hangs during stress testing and system testing  Various customer feedback metrics  S curves for project progress

Collecting Software Engineering Data  The challenge is to collect the necessary data without placing a significant burden on development teams.  Limit metrics to those necessary to avoid collecting unnecessary data.  Automate the data collection whenever possible.

Data Collection Methodology (Basili and Weiss) 1. Establish the goal of the data collection 2. Develop a list of questions of interest 3. Establish data categories 4. Design and test data collection forms 5. Collect and validate data 6. Analyze data

Reliability of Defect Data  Testing defects are generally more reliable than inspection defects since inspection defects are more subjective  An inspection defect is a problem found during the inspection process that, if not fixed, would cause one or more of the following to occur: A defect condition in a later inspection phase

Reliability of Defect Data  An inspection defect is one which would cause: (cont’d) : A defect condition during testing A field defect Nonconformance to requirements and specifications Nonconformance to established standards

An Inspection Summary Form