Software Metrics Presented By Dr. Iyad Alazzam


1 Software Metrics Presented By Dr. Iyad Alazzam
Department of Computer Information Systems Faculty of Information Technology and Computer Science Yarmouk University

2 A Quote on Measurement “When you can measure what you are speaking about and express it in numbers, you know something about it; but when you cannot measure, when you cannot express it in numbers, your knowledge is of a meager and unsatisfactory kind; it may be the beginning of knowledge, but you have scarcely, in your thoughts, advanced to the stage of science.” LORD WILLIAM KELVIN (1824 – 1907)

3 Software Quality Metrics
“If you can’t measure it, you can’t manage it” Tom DeMarco, 1982

4 Metrics Express in Numbers
Measurement provides a mechanism for objective evaluation.

5 Uses of Measurement
Can be applied to the software process with the intent of improving it on a continuous basis
Can be used throughout a software project to assist in estimation, quality control, productivity assessment, and project control
Can be used to help assess the quality of software work products and to assist in tactical decision making as a project proceeds

6 Reasons to Measure
To characterize, in order to gain an understanding of processes, products, resources, and environments, and to establish baselines for comparisons with future assessments
To evaluate, in order to determine status with respect to plans
To predict, in order to gain understanding of relationships among processes and products and to build models of these relationships
To improve, in order to identify roadblocks, root causes, inefficiencies, and other opportunities for improving product quality and process performance

7 Metrics of Project Management
Areas tracked: budget, schedule/resource management, risk management, project goals met or exceeded, customer satisfaction.
Earned-value measures:
Budgeted Cost of Work Scheduled (BCWS)
Budgeted Cost of Work Performed (BCWP)
Actual Cost of Work Performed (ACWP)
Budget at Completion: BAC = sum of BCWS over all tasks
Cost Variance: CV = BCWP - ACWP
Cost Performance Index: CPI = BCWP / ACWP
Estimate at Completion: EAC = BAC / CPI
Estimate to Completion: ETC = EAC - ACWP
Schedule measures: effort/time per SE task, distribution of effort per SE task, schedule vs. actual milestones, errors uncovered per review hour, percentage slippage per time period.
Risk measures: risk type (technical, personnel, vendor, ...), probability, impact, overall risk, risk management strategy.
Productivity measures: effort expended (force), schedule expended (distance), productivity rates (pages/hour, delivered documents, SLOC/day).
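A minimal sketch of the earned-value arithmetic above. The task figures are invented purely for illustration; the variable names mirror the acronyms on the slide.

```python
# Illustrative earned-value calculation (all figures are invented).
bcws = [10_000, 15_000, 20_000]   # Budgeted Cost of Work Scheduled, one entry per task
bcwp = 38_000                     # Budgeted Cost of Work Performed (earned value)
acwp = 42_000                     # Actual Cost of Work Performed

bac = sum(bcws)                   # Budget at Completion = sum of BCWS over all tasks
cv = bcwp - acwp                  # Cost Variance
cpi = bcwp / acwp                 # Cost Performance Index
eac = bac / cpi                   # Estimate at Completion
etc = eac - acwp                  # Estimate to Completion

print(f"BAC={bac}  CV={cv}  CPI={cpi:.2f}  EAC={eac:.0f}  ETC={etc:.0f}")
```

A CPI below 1.0 signals that the project is getting less work done per unit of cost than planned, which is what drives EAC above BAC in this example.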

8 Metrics of the Software Product
Focus on deliverable quality:
Analysis products
Design product: complexity (algorithmic, architectural, data flow)
Code products
Production system
Size (function points, SLOC, modules, subsystems, pages, documents)
Error density (errors/KSLOC)
Defect density (defects/KSLOC or defects/FP)
Quality (correctness, maintainability, reliability, efficiency, integrity, usability, flexibility, testability, portability, reusability, availability, complexity, understandability, modifiability)

9 Goal of Metrics
To improve product quality and development-team productivity. Concerned with productivity and quality measures: measures of SW development output as a function of effort and time, and measures of usability.

10 Definitions Measure - quantitative indication of extent, amount, dimension, capacity, or size of some attribute of a product or process. E.g., Number of errors Metric - quantitative measure of degree to which a system, component or process possesses a given attribute. “A handle or guess about a given attribute.” E.g., Number of errors found per person hours expended

11 Why Measure Software? Determine the quality of the current product or process Predict qualities of a product/process Improve quality of a product/process

12 Motivation for Metrics
Estimate the cost & schedule of future projects
Evaluate the productivity impacts of new tools and techniques
Establish productivity trends over time
Improve software quality
Forecast future staffing needs
Anticipate and reduce future maintenance needs

13 CK Metrics: Objective
CK metrics were designed:
To measure unique aspects of the OO approach
To measure the complexity of the design
To improve the development of the software

14 CK Metrics: Objective - SW Development Improvement
Managers can improve the development of the SW by:
Analysing CK metrics through the identification of outlying values (extreme deviations), which may be a signal of high complexity and/or possible design violations.
Taking managerial decisions, such as re-designing and/or assigning extra or higher-skilled resources (to develop, to test, and to maintain the SW).

15 CK Metrics: Definition - WMC (Weighted Methods per Class)
WMC is the sum of the complexities of the methods of a class. WMC = Number of Methods (NOM) when every method's complexity is taken as unity.
Viewpoints:
WMC is a predictor of how much TIME and EFFORT is required to develop and maintain the class.
The larger the NOM, the greater the impact on children.
Classes with a large NOM are likely to be more application specific, limiting the possibility of RE-USE and making the EFFORT expended a one-shot investment.
Objective: Low
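As a rough illustration (not part of the original CK tooling), WMC with unit method complexity reduces to counting methods per class, which can be sketched with Python's ast module:

```python
import ast

def wmc_per_class(source: str) -> dict:
    """WMC with every method weighted 1 (i.e. WMC == NOM) for each class in `source`."""
    tree = ast.parse(source)
    counts = {}
    for node in ast.walk(tree):
        if isinstance(node, ast.ClassDef):
            methods = [n for n in node.body
                       if isinstance(n, (ast.FunctionDef, ast.AsyncFunctionDef))]
            counts[node.name] = len(methods)
    return counts

sample = "class Account:\n    def deposit(self): pass\n    def withdraw(self): pass\n"
print(wmc_per_class(sample))   # {'Account': 2}
```

Assigning a real complexity weight (for example cyclomatic complexity) to each method instead of 1 recovers the general form of WMC.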

16 CK Metrics: Definition - DIT (Depth of Inheritance Tree)
The maximum length from the node to the root of the inheritance tree.
Viewpoints:
The greater the value of DIT, the greater the number of methods the class is likely to inherit, making its behaviour more COMPLEX to predict.
The greater the value of DIT, the greater the potential RE-USE of inherited methods.
Small values of DIT in most of the system's classes may be an indicator that designers are forsaking RE-USABILITY for simplicity of UNDERSTANDING.
Objective: Trade-off
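As a rough runtime approximation for Python classes (an illustration only, not the original measurement procedure, which works on the design model), DIT can be taken as the longest chain of base classes down to the implicit root `object`:

```python
def dit(cls: type) -> int:
    """Depth of Inheritance Tree: longest base-class chain, with `object` as the root."""
    if cls is object:
        return 0
    return 1 + max(dit(base) for base in cls.__bases__)

class A: pass
class B(A): pass
class C(B): pass

print(dit(A), dit(C))   # 1 3  (object counted as the root of the tree)
```

Whether the language's universal base class is counted as the root is a convention that should be fixed before comparing DIT values across systems.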

17 CK Metrics: Definition - NOC (Number of Children)
The number of immediate subclasses subordinated to a class in the class hierarchy.
Viewpoints:
The greater the NOC, the greater the RE-USE.
The greater the NOC, the greater the probability of improper abstraction of the parent class.
The greater the NOC, the greater the TESTING requirements for the methods of that class.
Small values of NOC may be an indicator of a lack of communication between different class designers.
Objective: Trade-off

18 CK Metrics: Definition - CBO (Coupling Between Objects)
A count of the number of other classes to which a class is coupled.
Viewpoints:
Small values of CBO improve MODULARITY and promote ENCAPSULATION.
Small values of CBO indicate independence of the class, making its RE-USE easier.
Small values of CBO make a class easier to MAINTAIN and to TEST.
Objective: Low

19 CK Metrics: Definition - RFC (Response for Class)
The number of methods of the class plus the number of methods called by any of those methods.
Viewpoints:
If a large number of methods can be invoked from a class (RFC is high), TESTING and MAINTENANCE of the class become more COMPLEX.
Objective: Low

20 CK Metrics: Definition - LCOM (Lack of Cohesion of Methods)
Measures the dissimilarity of methods in a class via its instance variables.
Viewpoints:
High values of LCOM increase COMPLEXITY.
High values of LCOM do not promote ENCAPSULATION and imply that the class should probably be split into two or more subclasses.
LCOM helps to identify low-quality design.
Objective: Low

21 CK Metrics: Guidelines
Metric | Goal level
WMC    | Low
DIT    | Trade-off
NOC    | Trade-off
CBO    | Low
RFC    | Low
LCOM   | Low
Quality concerns covered: COMPLEXITY (to develop, to test, and to maintain), RE-USABILITY, ENCAPSULATION and MODULARITY.

22 CK Metrics: Thresholds
Thresholds of the CK metrics [2,3,4]:
Cannot be determined before their use.
Should be derived and used locally for each dataset.
The 80th and 20th percentiles of the distributions can be used to determine high and low values of the metrics.
Are not indicators of "badness" but indicators of differences that need to be investigated.
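A hedged sketch of the local-threshold idea: derive the 20th and 80th percentiles of one metric's distribution for one dataset (the class names and WMC values below are invented) and flag classes above the high cut-off for investigation.

```python
# Derive local thresholds for one metric (here WMC) from one dataset's distribution.
# The class names and WMC values are invented purely for illustration.
wmc = {"Order": 4, "Invoice": 7, "Parser": 31, "Cache": 9, "Report": 12,
       "Session": 5, "Router": 18, "Auth": 6, "Ledger": 25, "Mailer": 8}

def percentile(values, p):
    """Nearest-rank percentile; good enough for a quick threshold."""
    ordered = sorted(values)
    k = max(0, min(len(ordered) - 1, round(p / 100 * (len(ordered) - 1))))
    return ordered[k]

low = percentile(wmc.values(), 20)
high = percentile(wmc.values(), 80)
to_investigate = {name: v for name, v in wmc.items() if v > high}
print(f"low={low} high={high} classes to investigate: {to_investigate}")
```

Consistent with the slide, classes above the 80th percentile are not automatically "bad"; they are simply different enough from the rest of the dataset to deserve a closer look.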

23 CK in the Literature: CK Metrics & Other Managerial Performance Indicators
Chidamber and Kemerer study the relation of CK metrics with:
Productivity: SIZE [LOC] / EFFORT of development [hours]
Rework effort for re-using classes
Effort to specify the high-level design of classes

24 CK in the Literature: CK Metrics & Maintenance Effort
Li and Henry (1993) use CK metrics (among others) to predict maintenance effort, measured by the number of lines changed in a class during the three years over which they collected the measurements.

25 CK in the Literature: DIT & Maintenance Effort
Daly et al. (1996) conclude in their study that subjects maintaining OO SW with three levels of inheritance depth performed maintenance tasks significantly more quickly than those maintaining an equivalent OO SW with no inheritance.

26 CK in the Literature: DIT & Maintenance Effort
However, Harrison et al. (2000) used the DIT metric to demonstrate that systems without inheritance are easier to understand and modify than systems with three or five levels of inheritance.

27 CK in the Literature: DIT & Maintenance Effort
Poels (2001) uses the DIT metric and demonstrates that extensive use of inheritance leads to modules that are more difficult to modify.

28 CK in the Literature: DIT & Maintenance Effort
Prechelt (2003) concludes that programs with less inheritance were faster to maintain, and that code maintenance effort is hardly correlated with inheritance depth; rather, it depends on other factors such as the number of relevant methods.

29 CK in the Literature: CK Metrics & Fault-Proneness Prediction
Year | Study | Input: design complexity metrics | Output | Prediction technique
1996 | Basili et al. | CK metrics among others | Fault-prone classes | Multivariate logistic regression
2000 | Briand et al. | | |
2004 | Kanmani et al. | | Fault ratio | General regression neural network
2005 | Nachiappan et al. | | | Multiple linear regression
2007 | Olague et al. | CK, QMOOD | |
(Blank cells repeat the value above.)
CK: Chidamber & Kemerer; QMOOD: Quality Metrics for Object Oriented Design

30 Product Metrics Landscape
Metrics for the analysis model Metrics for the design model Metrics for the source code Metrics for testing

31 Metrics for Requirements Quality
Let nr = nf + nnf, where
nr = number of requirements
nf = number of functional requirements
nnf = number of nonfunctional requirements
Specificity (lack of ambiguity): Q = nui / nr, where nui = number of requirements for which all reviewers had identical interpretations
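A worked instance of the specificity measure, with invented review figures:

```python
# Invented figures: 38 functional + 12 nonfunctional requirements were reviewed,
# and 40 of them were interpreted identically by every reviewer.
nf, nnf = 38, 12
nr = nf + nnf          # total number of requirements
nui = 40               # requirements with identical interpretations across reviewers
Q = nui / nr           # specificity (lack of ambiguity)
print(f"Q = {Q:.2f}")  # 0.80 -- the closer to 1, the less ambiguous the specification
```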

32 High-Level Design Metrics
Structural complexity: S(i) = fout(i)^2, where fout(i) is the fan-out of module i (the number of modules called by module i; in Ada this corresponds to the number of with clauses).
Data complexity: D(i) = v(i) / [fout(i) + 1], where v(i) is the number of input and output variables to and from module i.
System complexity: C(i) = S(i) + D(i)
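A small sketch of the three formulas, given per-module fan-out and I/O variable counts. The module names and numbers are invented for the example.

```python
# Invented module data: fan-out (modules called) and number of I/O variables.
modules = {
    "ui":      {"fan_out": 3, "io_vars": 8},
    "billing": {"fan_out": 1, "io_vars": 12},
    "db":      {"fan_out": 0, "io_vars": 5},
}

for name, m in modules.items():
    s = m["fan_out"] ** 2                    # structural complexity S(i) = fout(i)^2
    d = m["io_vars"] / (m["fan_out"] + 1)    # data complexity D(i) = v(i) / (fout(i) + 1)
    c = s + d                                # system complexity C(i) = S(i) + D(i)
    print(f"{name}: S={s} D={d:.1f} C={c:.1f}")
```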

33 High-Level Design Metrics
Morphology metrics:
size = n + a, where n = number of modules and a = number of arcs (lines of control)
arc-to-node ratio: r = a / n
depth = longest path from the root to a leaf
width = maximum number of nodes at any level
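Given a structure chart expressed as a parent-to-children mapping, the four morphology measures can be computed directly. This is a sketch over an invented tree, not the example on the next slide.

```python
# Structure chart as parent -> children (an illustrative tree, not the slide's example).
tree = {"a": ["b", "c", "d"], "b": ["e", "f"], "c": [], "d": ["g"],
        "e": [], "f": [], "g": []}

n = len(tree)                                   # number of modules (nodes)
a = sum(len(kids) for kids in tree.values())    # number of arcs (lines of control)
size = n + a
ratio = a / n                                   # arc-to-node ratio

def depth(node):
    """Longest root-to-leaf path, counting levels (the root itself is level 1)."""
    kids = tree[node]
    return 1 if not kids else 1 + max(depth(k) for k in kids)

def width(root):
    """Maximum number of nodes on any level."""
    level, widest = [root], 1
    while level:
        widest = max(widest, len(level))
        level = [k for node in level for k in tree[node]]
    return widest

print(size, depth("a"), width("a"), round(ratio, 2))   # 13 3 3 0.86
```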

34 Morphology Metrics: Example
[Figure: a module hierarchy with nodes a through r]
size:  depth: 4  width: 6  arc-to-node ratio: 1

35 Component-Level Design Metrics
Cohesion metrics
Coupling metrics: data and control flow coupling, global coupling, environmental coupling
Complexity metrics: cyclomatic complexity (experience shows that if this exceeds 10, the module is very difficult to test; a rough sketch of the computation follows)
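As a rough sketch (not McCabe's original tooling), cyclomatic complexity of a single Python function can be approximated as one plus the number of decision points, counted with the ast module:

```python
import ast

def cyclomatic_complexity(func_source: str) -> int:
    """Approximate McCabe complexity: 1 + number of decision points in the source."""
    tree = ast.parse(func_source)
    decisions = 0
    for node in ast.walk(tree):
        if isinstance(node, (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.IfExp)):
            decisions += 1
        elif isinstance(node, ast.BoolOp):      # each extra and/or adds a branch
            decisions += len(node.values) - 1
    return decisions + 1

src = """
def grade(score):
    if score >= 90 and score <= 100:
        return "A"
    elif score >= 80:
        return "B"
    return "C"
"""
print(cyclomatic_complexity(src))   # 4: two if tests, one boolean operator, plus 1
```

Modules whose value climbs past the guideline of 10 are candidates for splitting before testing begins.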

36 Coupling Metrics
Data and control flow coupling:
di = number of input data parameters
ci = number of input control parameters
do = number of output data parameters
co = number of output control parameters
Global coupling:
gd = number of global variables used as data
gc = number of global variables used as control
Environmental coupling:
w = number of modules called (fan-out)
r = number of modules calling the module under consideration (fan-in)
Module coupling: mc = 1 / (di + 2*ci + do + 2*co + gd + 2*gc + w + r)
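A hedged sketch of the module-coupling formula with invented interface counts for one module:

```python
# Invented interface counts for a single module under review.
di, ci = 3, 1    # input data / input control parameters
do, co = 2, 1    # output data / output control parameters
gd, gc = 1, 0    # global variables used as data / as control
w, r   = 2, 4    # fan-out (modules called) / fan-in (modules calling this one)

mc = 1 / (di + 2*ci + do + 2*co + gd + 2*gc + w + r)
print(f"module coupling mc = {mc:.3f}")   # values closer to 1 indicate lower coupling
```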

37 Metrics for Source Code
Maurice Halstead's Software Science:
n1 = the number of distinct operators
n2 = the number of distinct operands
N1 = the total number of operator occurrences
N2 = the total number of operand occurrences
Length: N = N1 + N2
Volume: V = N * log2(n1 + n2)
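A worked instance of Halstead length and volume, with invented token counts for a small module:

```python
import math

# Invented token counts for a small module.
n1, n2 = 12, 18    # distinct operators / distinct operands
N1, N2 = 70, 95    # total operator occurrences / total operand occurrences

N = N1 + N2                    # program length
V = N * math.log2(n1 + n2)     # program volume
print(f"length N = {N}, volume V = {V:.1f}")
```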

38 Metrics for Maintenance
Software Maturity Index (SMI):
MT = number of modules in the current release
Fc = number of modules in the current release that have been changed
Fa = number of modules in the current release that have been added
Fd = number of modules from the preceding release that were deleted in the current release
SMI = [MT - (Fc + Fa + Fd)] / MT
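A worked instance of the Software Maturity Index with invented release figures:

```python
# Invented figures for one release.
MT = 120   # modules in the current release
Fc = 14    # modules changed
Fa = 6     # modules added
Fd = 3     # modules deleted from the preceding release

SMI = (MT - (Fc + Fa + Fd)) / MT
print(f"SMI = {SMI:.2f}")   # approaches 1.0 as the product stabilizes
```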

39 Process Metrics
Average find-fix cycle time
Number of person-hours per inspection
Number of person-hours per KLOC
Average number of defects found per inspection
Number of defects found during inspections in each defect category
Average amount of rework time
Percentage of modules that were inspected

40 Testing Metrics

41 Defect Category An attribute of the defect in relation to the quality attributes of the product. Quality attributes of a product include functionality, usability, documentation, performance, installation, stability, compatibility, internationalization etc. This metric can provide insight into the different quality attributes of the product. This metric can be computed by dividing the defects that belong to a particular category by the total number of defects.

42 Defect Removal Efficiency
The number of defects that are removed per time unit (hours/days/weeks). Indicates the efficiency of defect removal methods, as well as providing an indirect measurement of the quality of the product. Computed by dividing the effort required for defect detection, defect resolution, and retesting by the number of defects. This is calculated per test type, during and across test phases.

43 Defect Severity The severity level of a defect indicates the potential business impact for the end user (business impact = effect on the end user x frequency of occurrence). Provides indications about the quality of the product under test. High-severity defects mean low product quality, and vice versa. At the end of this phase, this information is useful to make the release decision based on the number of defects and their severity levels. Every defect has severity levels attached to it. Broadly, these are Critical, Serious, Medium and Low.

44 Defect Severity Index
An index representing the average severity of the defects. Provides a direct measurement of the quality of the product, specifically reliability, fault tolerance, and stability. Two measures are required to compute the defect severity index: a number assigned to each severity level, 4 (Critical), 3 (Serious), 2 (Medium), 1 (Low), and the count of defects at each level. Multiply the count at each level by that level's number, add the totals, and divide by the total number of defects to obtain the defect severity index.
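A worked instance of the severity index computation with an invented defect breakdown:

```python
# Invented defect counts per severity level.
weights = {"Critical": 4, "Serious": 3, "Medium": 2, "Low": 1}
defects = {"Critical": 2, "Serious": 5, "Medium": 12, "Low": 9}

total_defects = sum(defects.values())
weighted_sum = sum(weights[level] * count for level, count in defects.items())
severity_index = weighted_sum / total_defects
print(f"defect severity index = {severity_index:.2f}")   # 2.00 for these figures
```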

45 Test Case Effectiveness
The extent to which test cases are able to find defects. This metric provides an indication of the effectiveness of the test cases and the stability of the software. Ratio of the number of test cases that resulted in logging defects vs. the total number of test cases.

46 Test Coverage Defined as the extent to which testing covers the product’s complete functionality. This metric is an indication of the completeness of the testing. It does not indicate anything about the effectiveness of the testing. This can be used as a criterion to stop testing. Coverage could be with respect to requirements, functional topic list, business flows, use cases, etc. It can be calculated based on the number of items that were covered vs. the total number of items.

47 Test Effort Percentage
The effort spent in testing, in relation to the effort spent in the development activities, will give us an indication of the level of investment in testing. This information can also be used to estimate similar projects in the future. This metric can be computed by dividing the overall test effort by the total project effort.

48 Time to Fix a Defect Effort required to resolve a defect (diagnosis and correction). Provides an indication of the maintainability of the product and can be used to estimate projected maintenance costs. Divide the number of hours spent on diagnosis and correction by the number of defects resolved during the same period.

49 Effort Adherence
The actual effort as a percentage of what was committed in the contract. Provides a measure of what was estimated at the beginning of the project vs. the actual effort taken. Useful for understanding the variance (if any) and for estimating future similar projects.

50 Number of Defects The total number of defects found in a given time period/phase/test type that resulted in software or documentation modifications. Only accepted defects that resulted in modifying the software or the documentation are counted.

51 Test Case Execution Productivity
Review Efficiency: # of defects detected per LOC or pages reviewed per day
Test Case Execution Productivity: # of test cases executed per day per person

52 Test Case Execution Statistics
This metric provides an overall summary of test execution activities. It can be categorized by build or release, by module, or by platform (OS, browser, locale, etc.).

53 Traceability Matrix Traceability is the ability to determine that each feature has a source in requirements and each requirement has a corresponding implemented feature. This is useful in assessing the test coverage details.

54 Schedule Adherence The actual schedule as a percentage of what was committed in the contract. Provides the variance between the planned and the actual schedule followed. Useful for understanding the variance (if any) and for estimating future similar projects.

55 Scope Changes The number of changes that were made to the test scope (scope creep). Indicates requirements stability or volatility, as well as process stability. Ratio of the number of changed items in the test scope to the total number of items.

56 Defects/ KLOC or FP The number of defects per 1,000 lines of code or Function Points. This metric indicates the quality of the product under test. It can be used as a basis for estimating defects to be addressed in the next phase or the next version. Ratio of the number of defects found vs. the total number of lines of code (thousands) or Function Points.

57 Closed Defect Distribution
This chart gives information on how defects with closed status are distributed.

58 Defect Cause Distribution Chart
This chart gives information on the causes of defects.

59 Defect Distribution Across Components
This chart gives information on how defects are distributed across various components of the system.

60 Defect Finding Rate This chart gives information on how many defects are found across a given period. This can be tracked on a daily or weekly basis

61 Metric Tools

62 McCabe & Associates (founded by Tom McCabe, Sr.)
Tools: The Visual Quality ToolSet, The Visual Testing ToolSet, The Visual Reengineering ToolSet
Metrics calculated: McCabe cyclomatic complexity, McCabe essential complexity, module design complexity, integration complexity, lines of code, Halstead metrics

63 CCCC
A metric analyser for C, C++, Java, Ada-83, and Ada-95 (by Tim Littlefair of Edith Cowan University, Australia)
Metrics calculated: Lines Of Code (LOC), McCabe's cyclomatic complexity, C&K suite (WMC, NOC, DIT, CBO)
Generates HTML and XML reports
Freely available

64 Jmetric
An OO metric calculation tool for Java code (by Cain and Vasa for a project at COTAR, Australia)
Requires Java 1.2 (or a JDK with special extensions)
Metrics: Lines Of Code per class (LOC), cyclomatic complexity, LCOM (by Henderson-Sellers)
Availability: distributed under the GPL

65 GEN++ (University of California, Davis and Bell Laboratories)
GEN++ is an application generator for creating code analyzers for C++ programs.
It simplifies the task of creating analysis tools for C++.
Several tools have been created with GEN++ and come with the package; these can be used directly and as a springboard for other applications.
Freely available

66 More Tools on the Internet
A Source of Information for Mission Critical Software Systems, Management Processes, and Strategies
Defense Software Collaborators (by DACS)
Object-oriented metrics
Software Metrics Sites on the Web (Thomas Fetcke)
Metrics tools for C/C++ (Christopher Lott)

67 References
[1] Shyam Chidamber and Chris Kemerer, "A Metrics Suite for Object Oriented Design", IEEE Transactions on Software Engineering, June 1994.
[2] Shyam Chidamber, Chris Kemerer, and David Darcy, "Managerial Use of Metrics for Object-Oriented Software: An Exploratory Analysis", IEEE Transactions on Software Engineering, August.
[3] Linda Rosenberg, "Applying and Interpreting Object Oriented Metrics", Software Assurance Technology Conference, Utah.
[4] Stephen H. Kan, "Metrics and Models in Software Quality Engineering", Addison-Wesley.
[5] Marcela Genero, Mario Piattini, and Coral Calero, "A Survey of Metrics for UML Class Diagrams", Journal of Object Technology, Nov.-Dec. 2005.

68 References
[6] Victor R. Basili, Lionel C. Briand, and Walcelio L. Melo, "A Validation of Object-Oriented Design Metrics as Quality Indicators", IEEE Transactions on Software Engineering, Piscataway, NJ, USA, October 1996.
[7] Lionel C. Briand, Jurgen Wust, John W. Daly, and D. Victor Porter, "Exploring the Relationships Between Design Measures and Software Quality in Object-Oriented Systems", Journal of Systems and Software, 2000.
[8] S. Kanmani and V. Rhymend Uthariaraj, "Object Oriented Software Quality Prediction Using General Regression Neural Networks", SIGSOFT Software Engineering Notes, New York, NY, USA, 2004.
[9] Nachiappan Nagappan and Laurie Williams, "Early Estimation of Software Quality Using In-Process Testing Metrics: A Controlled Case Study", Proceedings of the Third Workshop on Software Quality, St. Louis, Missouri, USA, 2005.
[10] Hector M. Olague, Sampson Gholston, and Stephen Quattlebaum, "Empirical Validation of Three Software Metrics Suites to Predict Fault-Proneness of Object-Oriented Classes Developed Using Highly Iterative or Agile Software Development Processes", IEEE Transactions on Software Engineering, Piscataway, NJ, USA, 2007.

69 References
[11] William E. Lewis, "Software Testing and Continuous Quality Improvement", Third Edition, CRC Press.
[12] K. Naik and P. Tripathy, "Software Testing and Quality Assurance", Wiley.
[13] Ian Sommerville, "Software Engineering", 8th Edition.
[14] Aditya P. Mathur, "Foundations of Software Testing", Pearson Education.
[15] D. Galin, "Software Quality Assurance: From Theory to Implementation", Pearson Education, 2004.
[16] David Gustafson, "Theory and Problems of Software Engineering", Schaum's Outline Series, McGraw-Hill, 2002.

70 References: Textbooks
S. R. Schach, Classical and Object-Oriented Software Engineering
P. Jalote, An Integrated Approach to Software Engineering

71 Q & A

