Software Metrics.

Engineering
Engineering is a discipline based on quantitative approaches. All branches of engineering rely on the use of measurements, and software engineering is no exception: it uses measurements to evaluate the quality of software products.

Software Metrics
Software metrics are an attempt to quantify the quality of software. Measures are expressed as computable formulas or expressions whose elements are derived from source code, designs, and other artifacts used in software development.

Quality Measurement
Direct measurements are values calculated directly from known sources (source code, design diagrams, etc.), e.g. the number of software failures over some time period. Indirect measurements cannot be so precisely defined or directly computed, e.g. reliability, maintainability, and usability.

Advantages of Metrics
Metrics are a tool for comparing certain aspects of software. Sometimes metrics provide guidance for estimating the complexity, time, and cost of a future product. Metrics can be used to evaluate software engineers, and can even be applied to the formation of development teams.

Example
A general rule for allocating project time: 25% analysis & design, 50% coding & unit test, 25% integration/system/acceptance test.

What makes a software metric effective?
It must be easy to derive, calculate, and understand; it must be useful for improving some aspect of software engineering; and it should be independent of any programming language, platform, and environment.

Calculation of the ‘Lack of Cohesion’ Metric
Lack of Cohesion (LCOM) for a class is defined as follows. Let C denote the class for which LCOM is computed, m the number of methods in C, and a the number of attributes of C. For each attribute, let m(a) denote the number of methods of C that access that attribute, and let Sum(m(a)) denote the sum of m(a) over all attributes of C. Then:

LCOM(C) = (m – Sum(m(a)) / a) / (m – 1)
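
As an illustrative sketch (not part of the original slides), the definition above can be computed from a map of each method to the attributes it accesses. The helper and the method names below are assumptions; the attribute-access counts mirror the ‘Account’ (with constructor) example worked later in these slides:

```python
def lcom(method_attrs, attributes):
    """Lack of Cohesion: (m - Sum(m(a)) / a) / (m - 1).

    method_attrs: maps each method name to the set of attributes it accesses.
    attributes: the class's attributes.
    """
    m = len(method_attrs)
    a = len(attributes)
    if m <= 1 or a == 0:
        return 0.0  # undefined; set to zero for computational purposes
    # Sum(m(a)): methods accessing each attribute, summed over all attributes
    sum_ma = sum(
        sum(1 for accessed in method_attrs.values() if attr in accessed)
        for attr in attributes
    )
    return (m - sum_ma / a) / (m - 1)

# Hypothetical 'Account' class: a constructor plus four accessors/mutators.
methods = {
    "Account":          {"Account#", "UserID", "Balance"},  # constructor
    "getAccountNumber": {"Account#"},
    "getUserID":        {"UserID"},
    "deposit":          {"Balance"},
    "withdraw":         {"Balance"},
}
print(round(lcom(methods, ["Account#", "UserID", "Balance"]), 3))  # 0.667
```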

Cohesion Metrics
If class C has at most one method, or has no attributes, LCOM is undefined; in both situations, for computational purposes, LCOM is set to zero. Normally the LCOM value is expected to lie between 0 and 2; a value greater than 1 is an indication that the class should be redesigned. The higher the value of LCOM, the lower the cohesion, and hence the greater the need for redesign.

Example
(A class diagram for the ‘Account’, ‘LoginAccount’, ‘AccountsDatabase’, and ‘LoginAccountsDatabase’ classes used in the following calculations appeared here.)

Calculation
The class ‘Account’:
Without constructor: m = 4, a = 3; m(Account#) = 1, m(UserID) = 1, m(Balance) = 3; sum(m(a)) = 5; LCOM(Account) = (4 – 5/3) / (4 – 1) = 0.778.
With constructor: m = 5, a = 3; m(Account#) = 2, m(UserID) = 2, m(Balance) = 3; sum(m(a)) = 7; LCOM(Account) = (5 – 7/3) / (5 – 1) = 0.667.

Calculation
The class ‘LoginAccount’:
Without constructor: m = 1, a = 2; m(UserID) = 0, m(Password) = 1; sum(m(a)) = 1; LCOM(LoginAccount) = (1 – 1/2) / (1 – 1) = 0 (set to zero because m = 1).
With constructor: m = 2, a = 2; m(UserID) = 1, m(Password) = 2; sum(m(a)) = 3; LCOM(LoginAccount) = (2 – 3/2) / (2 – 1) = 0.5.

Calculation
The class ‘AccountsDatabase’:
Without constructor: m = 4, a = 1; m(Accounts) = 4; sum(m(a)) = 4; LCOM(AccountsDatabase) = (4 – 4/1) / (4 – 1) = 0 / 3 = 0.
With constructor: m = 5, a = 1; m(Accounts) = 5; sum(m(a)) = 5; LCOM(AccountsDatabase) = (5 – 5/1) / (5 – 1) = 0.

Calculation
The class ‘LoginAccountsDatabase’ has exactly the same structure as ‘AccountsDatabase’, and hence LCOM(LoginAccountsDatabase) = 0, both with and without the constructor.

Problems with Metrics
Defining metric formulas is difficult: there are no standards and no guidelines, and each equation defines a metric for a particular application. The correctness of a metric's formula must be established before using it. Moreover, metrics measure a finished product, which is too late to predict the complexity of the product.

Applying and Interpreting Object Oriented Metrics
Dr. Linda H. Rosenberg and Mr. Larry Hyatt, Software Assurance Technology Center (SATC), NASA, 1998

Dr. Linda H. Rosenberg
Dr. Rosenberg is an Engineering Section Head at Unisys Government Systems in Lanham, MD. She is contracted to manage the Software Assurance Technology Center (SATC) through the System Reliability and Safety Office in the Flight Assurance Division at Goddard Space Flight Center, NASA, in Greenbelt, MD. The SATC has four primary responsibilities: Metrics, Standards and Guidance, Assurance tools and techniques, and Outreach programs. Although she oversees all work areas of the SATC, Dr. Rosenberg's area of expertise is metrics. She is responsible for overseeing metric programs to establish a basis for numerical guidelines and standards for software developed at NASA, and to work with project managers to use metrics in the evaluation of the quality of their software. Dr. Rosenberg’s work in software metrics outside of NASA includes work with the Joint Logistics Command’s efforts to establish a core set of process, product and system metrics with guidelines published in the Practical Software Measurement. In addition, Dr. Rosenberg worked with the Software Engineering Institute to develop a risk management course. She is now responsible for risk management training at all NASA centers, and the initiation of software risk management at NASA Goddard.

Mr. Larry Hyatt
Mr. Larry Hyatt is retired from the Systems Reliability and Safety Office at NASA’s Goddard Space Flight Center, where he was responsible for the development of software implementation policy and requirements. He founded and led the Software Assurance Technology Center, which is dedicated to making measured improvements in software developed for GSFC and NASA. Mr. Hyatt has over 35 years of experience in software development and assurance, 25 with the government at GSFC and at NOAA. Early in his career, while with IBM Federal Systems Division, he managed the contract support staff that developed science data analysis software for GSFC space scientists. He then moved to GSFC, where he was responsible for the installation and management of the first large-scale IBM System 360 at GSFC. At NOAA, he was awarded the Department of Commerce Silver Medal for his management of the development of the science ground system for the first TIROS-N spacecraft. He then headed the Satellite Service Applications Division, which developed and implemented new uses for meteorological satellite data in weather forecasting. Moving back to NASA/GSFC, Mr. Hyatt developed GSFC’s initial programs and policies in software assurance and was active in the development of similar programs for wider agency use. For this he was awarded the NASA Exceptional Service Medal in 1990. He founded the SATC in 1992 as a center of excellence in software assurance. The SATC carries on a program of research and development in software assurance, develops software assurance guidance and standards, and assists GSFC and NASA software development projects and organizations in improving software processes and products.

Key Points
Which object-oriented metrics should a project use, and can any of the traditional metrics be adapted to the object-oriented environment? Because object-oriented technology uses objects, not algorithms, as its fundamental building blocks, the approach to software metrics for object-oriented programs must differ from the standard metrics set.

Approach
First, identify the attributes associated with object-oriented development, then select metrics that evaluate the primary critical constructs. The goal is to help project managers choose a comprehensive set of metrics, not by default, but based on the attributes and features of object-oriented technology. Nine object-oriented metrics were selected: three traditional metrics adapted for an OO environment, and six “new” metrics to evaluate the principal OO structures and concepts.

SATC Metrics for Object Oriented Systems

Cyclomatic Complexity
A method with low cyclomatic complexity is generally better. This may imply decreased testing and increased understandability, or it may mean that decisions are deferred through message passing, not that the method is not complex. Cyclomatic complexity cannot be used to measure the complexity of a class, because of inheritance, but the cyclomatic complexity of individual methods can be combined with other measures to evaluate the complexity of the class.
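
As a rough sketch of how such a per-method value can be computed (one plus the number of branch points), here is an approximation for Python source; this is an assumption, not the SATC's measurement tool:

```python
import ast

def cyclomatic_complexity(source: str) -> int:
    """Approximate McCabe cyclomatic complexity: 1 + number of branch points."""
    complexity = 1
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.If, ast.For, ast.While, ast.ExceptHandler)):
            complexity += 1
        elif isinstance(node, ast.BoolOp):
            complexity += len(node.values) - 1  # each extra and/or adds a path
    return complexity

src = """
def classify(x):
    if x < 0:
        return "negative"
    elif x == 0:
        return "zero"
    return "positive"
"""
print(cyclomatic_complexity(src))  # 3: two decisions plus one
```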

Size
The size of a class is used to evaluate how easily the code can be understood by developers and maintainers. Size can be measured in a variety of ways, and thresholds for interpreting size measures vary depending on the coding language used and the complexity of the method. However, since size affects ease of understanding, classes and methods of large size always pose a higher risk.

Comment Percentage
The comment percentage is calculated as the total number of comments divided by the total lines of code less the number of blank lines. Since comments assist developers and maintainers, higher comment percentages increase understandability and maintainability.
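
The calculation just described can be sketched as follows (assuming '#'-style line comments; the helper and sample lines are illustrative):

```python
def comment_percentage(lines):
    """CP = comment lines / (total lines - blank lines)."""
    non_blank = [ln for ln in lines if ln.strip()]
    comments = [ln for ln in non_blank if ln.lstrip().startswith("#")]
    return len(comments) / len(non_blank) if non_blank else 0.0

code = [
    "# compute the running total",
    "total = 0",
    "",                  # blank line: excluded from the denominator
    "for x in data:",
    "    total += x",
]
print(comment_percentage(code))  # 0.25: one comment over four non-blank lines
```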

Object Oriented Structures
A class is a template from which objects can be created; this set of objects shares a common structure and a common behavior manifested by the set of methods. A method is an operation upon an object, defined in the class declaration. A message is a request that an object makes of another object to perform an operation; the operation executed as a result of receiving a message is called a method. Cohesion is the degree to which methods within a class are related to one another and work together to provide well-bounded behavior; effective object-oriented designs maximize cohesion, because cohesion promotes encapsulation. Coupling is a measure of the strength of association established by a connection from one entity to another; classes (objects) are coupled when a message is passed between objects, or when methods declared in one class use methods or attributes of another class. Inheritance is the hierarchical relationship among classes that enables programmers to reuse previously defined objects, including variables and operators.

Weighted Methods per Class (WMC)
WMC is either a count of the methods implemented within a class or the sum of the complexities of those methods (method complexity being measured by cyclomatic complexity). The second measurement is difficult to implement, since not all methods are accessible within the class hierarchy due to inheritance. The number of methods and their complexity is a predictor of how much time and effort is required to develop and maintain the class. The larger the number of methods in a class, the greater the potential impact on children, since children inherit all of the methods defined in the parent class. Classes with large numbers of methods are also likely to be more application-specific, limiting the possibility of reuse.
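
Both counting variants described above can be sketched with one helper; the class name and per-method complexity values below are made up for illustration:

```python
def wmc(method_complexities):
    """WMC: sum of per-method cyclomatic complexities.
    With every complexity set to 1, this reduces to a plain method count."""
    return sum(method_complexities.values())

# Hypothetical 'Account' class with measured method complexities.
account = {"deposit": 2, "withdraw": 3, "getBalance": 1}
print(wmc(account))                  # 6: the weighted form
print(wmc({m: 1 for m in account}))  # 3: the plain method count
```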

Response for a Class (RFC)
RFC is the count of the set of all methods that can be invoked in response to a message to an object of the class, or by some method in the class, including all methods accessible within the class hierarchy. This metric looks at the combination of the complexity of a class (through the number of methods) and the amount of communication with other classes. The larger the number of methods that can be invoked from a class through messages, the greater the complexity of the class. If a large number of methods can be invoked in response to a message, testing and debugging the class becomes complicated, since a greater level of understanding is required on the part of the tester. A worst-case value for possible responses assists in the appropriate allocation of testing time.
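
A minimal sketch of the response-set count, given a class's own methods and a (hypothetical) map of which other methods each one invokes:

```python
def rfc(own_methods, calls):
    """RFC: size of the response set, i.e. the class's own methods
    plus every distinct method they invoke on other objects."""
    invoked = set()
    for method in own_methods:
        invoked |= calls.get(method, set())
    return len(set(own_methods) | invoked)

# Hypothetical 'Account' class and its outgoing calls.
own = {"open", "close", "transfer"}
calls = {
    "transfer": {"Ledger.debit", "Ledger.credit"},
    "open": {"Audit.log"},
}
print(rfc(own, calls))  # 6: three own methods plus three invoked methods
```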

Lack of Cohesion (LCOM) Lack of Cohesion (LCOM) measures the dissimilarity of methods in a class by instance variable or attributes. A highly cohesive module should stand alone; high cohesion indicates good class subdivision. Lack of cohesion or low cohesion increases complexity, thereby increasing the likelihood of errors during the development process. High cohesion implies simplicity and high reusability. High cohesion indicates good class subdivision. Lack of cohesion or low cohesion increases complexity, thereby increasing the likelihood of errors during the development process. Classes with low cohesion could probably be subdivided into two or more subclasses with increased cohesion.

Coupling Between Object Classes (CBO)
CBO is a count of the number of other classes to which a class is coupled, measured by counting the number of distinct non-inheritance-related class hierarchies on which the class depends. Excessive coupling is detrimental to modular design and prevents reuse: the more independent a class is, the easier it is to reuse in another application. The larger the number of couples, the higher the sensitivity to changes in other parts of the design, and therefore the more difficult maintenance becomes. Strong coupling complicates a system, since a class is harder to understand, change, or correct by itself if it is interrelated with other classes. Complexity can be reduced by designing systems with the weakest possible coupling between classes; this improves modularity and promotes encapsulation.
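
Given a dependency map extracted from the code, the count described above can be sketched as follows (class names are hypothetical; this counts only outgoing dependencies, as in the slide's measurement):

```python
def cbo(cls_name, uses):
    """CBO sketch: distinct other classes on which this class depends
    (message sends, attribute/method use), inheritance excluded."""
    return len(uses.get(cls_name, set()) - {cls_name})

# Hypothetical non-inheritance dependencies: class -> classes it uses.
uses = {
    "Account": {"Ledger", "Audit"},
    "Teller": {"Account"},
}
print(cbo("Account", uses))  # 2: Ledger and Audit
print(cbo("Teller", uses))   # 1: Account
```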

Depth of Inheritance Tree (DIT)
The depth of a class within the inheritance hierarchy is the maximum number of steps from the class node to the root of the tree, measured by the number of ancestor classes. The deeper a class is within the hierarchy, the greater the number of methods it is likely to inherit, making its behavior more complex to predict. Deeper trees imply greater design complexity, since more methods and classes are involved, but also greater potential for reuse of inherited methods. A supporting metric for DIT is the number of methods inherited (NMI).
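
The ancestor count can be sketched by walking up the base-class chain (the classes below are illustrative, and single inheritance is assumed):

```python
def dit(cls, root=object):
    """DIT: number of ancestor classes between cls and the root of the tree."""
    depth = 0
    while cls is not root:
        cls = cls.__bases__[0]  # single inheritance assumed in this sketch
        depth += 1
    return depth

class Shape: pass
class Polygon(Shape): pass
class Triangle(Polygon): pass

print(dit(Triangle, root=Shape))  # 2: Polygon and Shape are ancestors
```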

Number of Children (NOC)
The number of children is the number of immediate subclasses subordinate to a class in the hierarchy. It is an indicator of the potential influence a class has on the design and on the system. The greater the number of children, the greater the likelihood of improper abstraction of the parent, and a high count may indicate misuse of subclassing. On the other hand, the greater the number of children, the greater the reuse, since inheritance is a form of reuse. If a class has a large number of children, more testing of that class's methods may be required, increasing testing time.
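
Counting immediate subclasses can be sketched directly in Python via the runtime class hierarchy (the classes below are illustrative):

```python
def noc(cls):
    """NOC: number of immediate subclasses of a class."""
    return len(cls.__subclasses__())

class Account: pass
class Savings(Account): pass
class Checking(Account): pass
class YouthSavings(Savings): pass  # a grandchild, so not counted for Account

print(noc(Account))  # 2: Savings and Checking
print(noc(Savings))  # 1: YouthSavings
```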