
1 COSC 4406 Software Engineering
Haibin Zhu, Ph.D., Dept. of Computer Science and Mathematics, Nipissing University, 100 College Dr., North Bay, ON P1B 8L7, Canada. haibinz@nipissingu.ca, http://www.nipissingu.ca/faculty/haibinz

2 Lecture 12: Product Metrics for Software

3 McCall’s Triangle of Quality
PRODUCT REVISION: Maintainability, Flexibility, Testability
PRODUCT TRANSITION: Portability, Reusability, Interoperability
PRODUCT OPERATION: Correctness, Reliability, Efficiency, Integrity, Usability

4 McCall’s Quality Factors
- Correctness: The extent to which a program satisfies its specification and fulfils the customer’s mission objectives.
- Reliability: The extent to which a program can be expected to perform its intended function with required precision.
- Efficiency: The amount of computing resources and code required by a program to perform its functions.
- Integrity: The extent to which access to software or data by unauthorized persons can be controlled.
- Usability: The effort required to learn, operate, prepare input for, and interpret output of a program.
- Maintainability: The effort required to locate and fix an error in a program.

5 McCall’s Quality Factors
- Flexibility: The effort required to modify an operational program.
- Testability: The effort required to test a program to ensure that it performs its intended functions.
- Portability: The effort required to transfer the program from one hardware and/or software system environment to another.
- Reusability: The extent to which a program (or part of it) can be reused in other applications.
- Interoperability: The effort required to couple one system to another.

6 A Comment
McCall’s quality factors were proposed in the early 1970s. They are as valid today as they were then. It is likely that software built to conform to these factors will exhibit high quality well into the 21st century, even if there are dramatic changes in technology.

7 ISO 9126 Quality Factors
- Functionality: suitability, accuracy, interoperability, compliance, security.
- Reliability: maturity, fault tolerance, recoverability.
- Usability: understandability, learnability, operability.
- Efficiency: time behavior, resource behavior.
- Maintainability: analyzability, changeability, stability, testability.
- Portability: adaptability, installability, conformance, replaceability.

8 Measures, Metrics and Indicators
- A measure provides a quantitative indication of the extent, amount, dimension, capacity, or size of some attribute of a product or process.
- The IEEE glossary defines a metric as “a quantitative measure of the degree to which a system, component, or process possesses a given attribute.”
- An indicator is a metric or combination of metrics that provides insight into the software process, a software project, or the product itself.

9 Measurement Principles
- The objectives of measurement should be established before data collection begins.
- Each technical metric should be defined in an unambiguous manner.
- Metrics should be derived based on a theory that is valid for the domain of application (e.g., metrics for design should draw upon basic design concepts and principles and attempt to provide an indication of the presence of an attribute that is deemed desirable).
- Metrics should be tailored to best accommodate specific products and processes. [BAS84]

10 Measurement Process
- Formulation. The derivation of software measures and metrics appropriate for the representation of the software that is being considered.
- Collection. The mechanism used to accumulate the data required to derive the formulated metrics.
- Analysis. The computation of metrics and the application of mathematical tools.
- Interpretation. The evaluation of metrics results in an effort to gain insight into the quality of the representation.
- Feedback. Recommendations derived from the interpretation of product metrics, transmitted to the software team.

11 Goal-Oriented Software Measurement
The Goal/Question/Metric paradigm:
(1) establish an explicit measurement goal that is specific to the process activity or product characteristic to be assessed;
(2) define a set of questions that must be answered in order to achieve the goal; and
(3) identify well-formulated metrics that help to answer these questions.
Goal definition template:
Analyze {the name of the activity or attribute to be measured}
for the purpose of {the overall objective of the analysis}
with respect to {the aspect of the activity or attribute that is considered}
from the viewpoint of {the people who have an interest in the measurement}
in the context of {the environment in which the measurement takes place}.

12 Metrics Attributes
- Simple and computable. It should be relatively easy to learn how to derive the metric, and its computation should not demand inordinate effort or time.
- Empirically and intuitively persuasive. The metric should satisfy the engineer’s intuitive notions about the product attribute under consideration.
- Consistent and objective. The metric should always yield results that are unambiguous.
- Consistent in its use of units and dimensions. The mathematical computation of the metric should use measures that do not lead to bizarre combinations of units.
- Programming-language independent. Metrics should be based on the analysis model, the design model, or the structure of the program itself.
- An effective mechanism for quality feedback. That is, the metric should provide a software engineer with information that can lead to a higher-quality end product.

13 Collection and Analysis Principles
- Whenever possible, data collection and analysis should be automated.
- Valid statistical techniques should be applied to establish relationships between internal product attributes and external quality characteristics.
- Interpretative guidelines and recommendations should be established for each metric.

14 Analysis Metrics
- Function-based metrics: use the function point as a normalizing factor or as a measure of the “size” of the specification.
- Function points are a measure of the size of computer applications and the projects that build them. The size is measured from a functional, or user, point of view. It is independent of the computer language, development methodology, technology, or capability of the project team used to develop the application. (See http://ourworld.compuserve.com/homepages/softcomp/fpfaq.htm#WhatAreFunctionPoints)
- Specification metrics: used as an indication of quality by measuring the number of requirements by type.

15 Function-Based Metrics
- The function point metric (FP), first proposed by Albrecht [ALB79], can be used effectively as a means for measuring the functionality delivered by a system.
- Function points are derived using an empirical relationship based on countable (direct) measures of the software’s information domain and assessments of software complexity.
- Information domain values are defined in the following manner:
  - number of external inputs (EIs)
  - number of external outputs (EOs)
  - number of external inquiries (EQs)
  - number of internal logical files (ILFs)
  - number of external interface files (EIFs)

16 Function Points

Information Domain Value          Count     Weighting factor           Total
                                            simple  average  complex
External Inputs (EIs)             ___   x      3       4        6    = ___
External Outputs (EOs)            ___   x      4       5        7    = ___
External Inquiries (EQs)          ___   x      3       4        6    = ___
Internal Logical Files (ILFs)     ___   x      7      10       15    = ___
External Interface Files (EIFs)   ___   x      5       7       10    = ___
                                                         Count total = ___

FP = count total * [0.65 + 0.01 * Sum(Fi)]
Fi: the 14 value adjustment factors; see http://www.ifpug.com/fpafund.htm

17 Function Points (example)

Information Domain Value          Count    Weight (simple)    Total
External Inputs (EIs)               3   x        3         =     9
External Outputs (EOs)              2   x        4         =     8
External Inquiries (EQs)            2   x        3         =     6
Internal Logical Files (ILFs)       1   x        7         =     7
External Interface Files (EIFs)     4   x        5         =    20
                                                Count total =    50

Suppose Sum(Fi) = 46; then FP = count total * [0.65 + 0.01 * Sum(Fi)] = 50 * [0.65 + 0.01 * 46] = 55.5, rounded to 56.
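The computation above can be sketched in Python. The weights come from the function point table; the counts and Sum(Fi) = 46 mirror the worked example, while the function name and dictionary layout are my own:

```python
# Sketch of the function point computation; weights are the
# simple/average/complex values from the table on slide 16.

WEIGHTS = {  # (simple, average, complex)
    "EI":  (3, 4, 6),
    "EO":  (4, 5, 7),
    "EQ":  (3, 4, 6),
    "ILF": (7, 10, 15),
    "EIF": (5, 7, 10),
}

def function_points(counts, complexity="simple", sum_fi=0):
    """counts: dict like {"EI": 3, ...}; sum_fi: the sum of the 14
    value adjustment factors, each rated 0-5."""
    col = {"simple": 0, "average": 1, "complex": 2}[complexity]
    count_total = sum(n * WEIGHTS[kind][col] for kind, n in counts.items())
    return count_total * (0.65 + 0.01 * sum_fi)

# The worked example: 3 EIs, 2 EOs, 2 EQs, 1 ILF, 4 EIFs, all simple.
fp = function_points({"EI": 3, "EO": 2, "EQ": 2, "ILF": 1, "EIF": 4},
                     sum_fi=46)
print(round(fp))  # count total = 50, FP = 50 * 1.11 = 55.5 -> 56
```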

18 General System Characteristics (each rated 0-5)
1. Data communications: How many communication facilities are there to aid in the transfer or exchange of information with the application or system?
2. Distributed data processing: How are distributed data and processing functions handled?
3. Performance: Was response time or throughput required by the user?
4. Heavily used configuration: How heavily used is the current hardware platform where the application will be executed?
5. Transaction rate: How frequently are transactions executed (daily, weekly, monthly, etc.)?
6. On-line data entry: What percentage of the information is entered on-line?
7. End-user efficiency: Was the application designed for end-user efficiency?
8. On-line update: How many ILFs are updated by on-line transactions?
9. Complex processing: Does the application have extensive logical or mathematical processing?
10. Reusability: Was the application developed to meet one or many users’ needs?
11. Installation ease: How difficult is conversion and installation?
12. Operational ease: How effective and/or automated are start-up, back-up, and recovery procedures?
13. Multiple sites: Was the application specifically designed, developed, and supported to be installed at multiple sites for multiple organizations?
14. Facilitate change: Was the application specifically designed, developed, and supported to facilitate change?

19 Metrics for Specification Quality
- Number of requirements: n_r = n_f + n_nf, where n_f is the number of functional requirements and n_nf is the number of non-functional requirements.
- Specificity: Q1 = n_ui / n_r, where n_ui is the number of requirements for which all reviewers have the same interpretation.
- Completeness: Q2 = n_u / [n_i * n_s], where n_u is the number of unique functional requirements, n_i is the number of inputs defined (or implied) by the requirements, and n_s is the number of states defined.
- Degree of validation: Q3 = n_c / [n_c + n_nv], where n_c is the number of requirements that have been validated as correct and n_nv is the number of requirements that have not yet been validated.
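A minimal Python sketch of the three quality indices above; the requirement counts in the usage lines are hypothetical:

```python
# Specification quality indices Q1, Q2, Q3 as defined on this slide.

def specificity(n_ui, n_r):
    """Q1: fraction of requirements interpreted identically by all reviewers."""
    return n_ui / n_r

def completeness(n_u, n_i, n_s):
    """Q2: unique functional requirements relative to inputs x states."""
    return n_u / (n_i * n_s)

def degree_of_validation(n_c, n_nv):
    """Q3: validated-correct requirements relative to all requirements reviewed."""
    return n_c / (n_c + n_nv)

n_f, n_nf = 120, 30                    # hypothetical requirement counts
n_r = n_f + n_nf                       # 150 requirements in total
print(specificity(135, n_r))           # 0.9
print(completeness(110, 40, 5))        # 0.55
print(degree_of_validation(100, 25))   # 0.8
```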

20 Architectural Design Metrics
- Structural complexity: S(i) = [f_out(i)]^2, where f_out(i) is the fan-out of module i.
- Data complexity: D(i) = v(i) / [f_out(i) + 1], where v(i) is the number of input and output variables passed to and from module i.
- System complexity: C(i) = S(i) + D(i).
- Morphology metrics: a function of the number of modules and the number of interfaces between modules:
  - Size = n + a, where n is the number of nodes (modules) and a is the number of arcs (interfaces).
  - Arc-to-node ratio: r = a / n.
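The three complexity measures can be sketched directly; the module names and their v / fan-out values below are hypothetical:

```python
# Structural, data, and system complexity per module, as defined above.

def structural_complexity(fan_out):
    return fan_out ** 2                  # S(i) = [f_out(i)]^2

def data_complexity(v, fan_out):
    return v / (fan_out + 1)             # D(i) = v(i) / [f_out(i) + 1]

def system_complexity(v, fan_out):
    return structural_complexity(fan_out) + data_complexity(v, fan_out)

# Hypothetical modules: name -> (v, fan_out)
modules = {"parser": (12, 3), "report": (8, 1)}
for name, (v, f) in modules.items():
    print(name, system_complexity(v, f))
# parser: 9 + 12/4 = 12.0; report: 1 + 8/2 = 5.0
```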

21 Metrics for OO Design (I)
Whitmire [WHI97] describes nine distinct and measurable characteristics of an OO design:
- Size: defined in terms of four views: population, volume, length, and functionality.
- Complexity: how the classes of an OO design are interrelated to one another.
- Coupling: the physical connections between elements of the OO design.
- Sufficiency: “the degree to which an abstraction possesses the features required of it, or the degree to which a design component possesses features in its abstraction, from the point of view of the current application.”
- Completeness: an indirect implication about the degree to which the abstraction or design component can be reused.

22 Metrics for OO Design (II)
- Cohesion: the degree to which all operations work together to achieve a single, well-defined purpose.
- Primitiveness: applied to both operations and classes; the degree to which an operation is atomic.
- Similarity: the degree to which two or more classes are similar in terms of their structure, function, behavior, or purpose.
- Volatility: measures the likelihood that a change will occur.

23 Distinguishing Characteristics
Berard [BER95] argues that the following characteristics require that special OO metrics be developed:
- Localization: the way in which information is concentrated in a program.
- Encapsulation: the packaging of data and processing.
- Information hiding: the way in which information about operational details is hidden by a secure interface.
- Inheritance: the manner in which the responsibilities of one class are propagated to another.
- Abstraction: the mechanism that allows a design to focus on essential details.

24 Object-Oriented Metrics (proposed by Lorenz in 1993)
1) Average method size (lines of code, LOC): should be less than 8 LOC for Smalltalk and 24 LOC for C++.
2) Average number of methods per class: should be less than 20. Bigger averages indicate too much responsibility in too few classes.
3) Average number of instance variables per class: should be less than 6. More instance variables indicate that one class is doing more than it should.
4) Class hierarchy nesting level (depth of inheritance, DIT): should be less than 6, starting from the framework classes or the root class.
5) Number of subsystem/subsystem relationships: should be less than the number in metric 6.
6) Number of class/class relationships in each subsystem: should be relatively high. This relates to high cohesion of classes in the same subsystem; if one or more classes in a subsystem do not interact with many of the other classes, they might be better placed in another subsystem.
7) Instance variable usage: if groups of methods in a class use different sets of instance variables, examine the design closely to see if the class should be split into multiple classes along those “service” lines.
8) Average number of comment lines per method: should be greater than 1.
9) Number of problem reports: should be low.
10) Number of times a class is reused: a class should be redesigned if it is not reused in a different application.
11) Number of classes and methods thrown away: this should occur at a steady rate throughout most of the development process. If it is not occurring, the team is probably doing incremental development rather than iterative OO development.
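Several of these thresholds lend themselves to an automated check. The sketch below uses hypothetical project statistics and my own metric keys, and assumes the C++ limits for metric 1:

```python
# Check hypothetical project averages against a few of the Lorenz
# thresholds above (metrics 1-4, C++ limits assumed for method size).

THRESHOLDS = {
    "avg_method_loc": 24,          # metric 1: < 24 LOC for C++
    "avg_methods_per_class": 20,   # metric 2: < 20
    "avg_instance_vars": 6,        # metric 3: < 6
    "max_dit": 6,                  # metric 4: < 6
}

def flag_violations(stats):
    """Return the metric keys whose values meet or exceed the limit."""
    return [k for k, limit in THRESHOLDS.items() if stats.get(k, 0) >= limit]

stats = {"avg_method_loc": 31, "avg_methods_per_class": 12,
         "avg_instance_vars": 6, "max_dit": 4}
print(flag_violations(stats))  # ['avg_method_loc', 'avg_instance_vars']
```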

25 Some OO Metrics for Six Projects

Metric              A (C++)  B (C++)  C (C++)  D (IBM Smalltalk)  E (OTI Smalltalk)  F (Digital Smalltalk)
Number of classes     5,741    2,513    3,000        100                566                 492
Methods per class         8        3        7         17                 36                  21
LOC per method           21       19       15        5.3                5.2                 5.7
LOC per class           207       60      100         97                188                 117
Max DIT                   6       NA        5          6                  8                   ?
Average DIT              NA        3      4.8        2.8                 NA                   ?
(? = value not recoverable from the source)

26 The CK OO Metrics Suite (proposed by Chidamber and Kemerer, 1994)
- Metric 1: Weighted Methods per Class (WMC). WMC = Sum(c_i), where c_i is the complexity of method i.
- Metric 2: Depth of Inheritance Tree (DIT).
- Metric 3: Number Of Children (NOC).
- Metric 4: Coupling Between Object classes (CBO).
- Metric 5: Response For a Class (RFC).
- Metric 6: Lack of Cohesion in Methods (LCOM).
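Two of the CK metrics, DIT and NOC, can be illustrated directly on a hypothetical Python class hierarchy via introspection:

```python
# DIT and NOC computed by introspection over a small, made-up hierarchy.

class Shape: pass
class Polygon(Shape): pass
class Circle(Shape): pass
class Triangle(Polygon): pass

def dit(cls):
    """Depth of Inheritance Tree: longest path from cls up to the root."""
    return max((dit(base) + 1 for base in cls.__bases__ if base is not object),
               default=0)

def noc(cls):
    """Number Of Children: count of immediate subclasses."""
    return len(cls.__subclasses__())

print(dit(Triangle), noc(Shape))  # 2 2
```

With every method complexity c_i taken as 1, WMC degenerates to a simple method count per class, which is how it is often applied in practice.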

27 Productivity in Terms of Number of Classes per Person-Year (PY)

Metric            A (C++)  B (C++)  C (C++)  D (IBM Smalltalk)  F (Digital Smalltalk)
PY                    100       35       90          2                  10
Classes per PY       57.4     71.8     33.3         50                49.2
Classes per PM        4.8        6      2.8        4.2                 4.1
Methods per PM          8       18       20         71                  86
LOC per PM            168      342      300        376                 456
LOC per PH           1.05     2.14     1.88       2.35                2.85

28 Component-Level Design Metrics
- Cohesion metrics: a function of data objects and the locus of their definition.
- Coupling metrics: a function of input and output parameters, global variables, and modules called.
- Complexity metrics: hundreds have been proposed (e.g., cyclomatic complexity).

29 Interface Design Metrics
- Layout appropriateness: a function of layout entities, their geographic position, and the “cost” of making transitions among entities.
- Cohesion metric: measures the relative connection of an item of on-screen content to other on-screen content.
  - If the data presented on a screen belong to a single major data object, UI cohesion for that screen is high.
  - If many different types of data or content are presented and these data are related to different data objects, UI cohesion is low.

30 Code Metrics
- Halstead’s Software Science: a comprehensive collection of metrics, all predicated on the number (count and occurrence) of operators and operands within a component or program.
- It should be noted that Halstead’s “laws” have generated substantial controversy, and many believe that the underlying theory has flaws. However, experimental verification for selected programming languages has been performed (e.g., [FEL89]).

31 Code
- n_1: the number of distinct operators that appear in a program.
- n_2: the number of distinct operands.
- N_1: the total number of operator occurrences.
- N_2: the total number of operand occurrences.
- Length: N = n_1 log2 n_1 + n_2 log2 n_2.
- Volume: V = N log2 (n_1 + n_2).
- Volume ratio: L = (2 / n_1) * (n_2 / N_2).
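The three formulas can be sketched in Python exactly as the slide states them (note the slide's "Length" is the estimated length; classically the actual length is N_1 + N_2). The counts n_1 = 10, n_2 = 16, N_2 = 40 are hypothetical, as if tallied from a small function:

```python
import math

def halstead(n1, n2, N2):
    """Halstead length, volume, and volume ratio per the slide's formulas."""
    N = n1 * math.log2(n1) + n2 * math.log2(n2)   # Length
    V = N * math.log2(n1 + n2)                    # Volume
    L = (2 / n1) * (n2 / N2)                      # Volume ratio
    return N, V, L

N, V, L = halstead(n1=10, n2=16, N2=40)
print(round(N, 1), round(V, 1), round(L, 2))
```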

32 Metrics for Testing
- Testing effort can also be estimated using metrics derived from Halstead measures.
- Binder [BIN94] suggests a broad array of design metrics that have a direct influence on the “testability” of an OO system:
  - Lack of cohesion in methods (LCOM)
  - Percent public and protected (PAP)
  - Public access to data members (PAD)
  - Number of root classes (NOR)
  - Fan-in (FIN) (multiple inheritance)
  - Number of children (NOC) and depth of the inheritance tree (DIT)

33 Metrics for Maintenance
Module counts across releases:
- M_T: the number of modules in the current release
- F_c: the number of modules that have been changed in the current release
- F_a: the number of modules that have been added in the current release
- F_d: the number of modules that have been deleted from the preceding release
Software maturity index: SMI = [M_T - (F_c + F_a + F_d)] / M_T
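A one-line sketch of the index; the module counts below are hypothetical. As the product stabilizes from release to release, fewer modules change and SMI approaches 1.0:

```python
# Software maturity index, as defined on this slide.

def smi(m_t, f_c, f_a, f_d):
    """SMI = [M_T - (F_c + F_a + F_d)] / M_T."""
    return (m_t - (f_c + f_a + f_d)) / m_t

# Hypothetical release: 940 modules, of which 90 changed, 40 added, 12 deleted.
print(smi(m_t=940, f_c=90, f_a=40, f_d=12))  # ~0.85
```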

34 Summary
- Software quality
- Measure, measurement, and metrics
- Metrics for analysis
- Metrics for design
- Metrics for source code
- Metrics for testing
- Metrics for maintenance

