Prediction of fault-proneness at early phase in object-oriented development Toshihiro Kamiya †, Shinji Kusumoto † and Katsuro Inoue †‡ † Osaka University ‡ Nara Institute of Science and Technology

1 Background Complexity metrics are used to estimate the fault-proneness of software components. Based on the metric values, we can allocate review/testing effort to the fault-prone components. Chidamber and Kemerer's metrics are representative complexity metrics for object-oriented software.

2 Chidamber and Kemerer's metrics [1] C&K metrics evaluate the complexity of classes from the following three viewpoints: Inheritance complexity ・ DIT (Depth of inheritance tree) ・ NOC (Number of children) Coupling complexity ・ RFC (Response for a class) ・ CBO (Coupling between object classes) Class internal complexity ・ WMC (Weighted methods per class) ・ LCOM (Lack of cohesion in methods) [1] S.R. Chidamber and C.F. Kemerer, A Metrics Suite for Object Oriented Design, IEEE Trans. on Software Eng., Vol. 20, No. 6 (1994)

3 Evaluation of C&K metrics Several studies have evaluated the usefulness of the C&K metrics. ・ Chidamber and Kemerer confirmed that the C&K metrics satisfy Weyuker's properties [1]. ・ Basili et al. empirically showed that the C&K metrics suite is a better predictor of class fault-proneness than traditional code metrics [2]. ・ Briand et al. discussed several design metrics that include the C&K metrics [3]. [2] Basili, V. R., Briand, L. C., and Mélo, W. L., A Validation of Object-Oriented Design Metrics as Quality Indicators, IEEE Trans. on Software Eng., Vol. 22, No. 10 (1996) [3] Briand, L. C., Daly, J. W., and Wüst, J. K., A Unified Framework for Coupling Measurement in Object-Oriented Systems, IEEE Trans. on Software Eng., Vol. 25, No. 1 (1999)

4 Difficulty in applying C&K metrics to design In previous research, the C&K metrics were applied to source code, because some of the metrics need information, such as algorithms or call relationships, that is only determined late in the design phase. However, in order to allocate review and testing effort efficiently, early estimation of the fault-prone classes (components) is preferable.

5 Proposed method We propose a method to predict fault-proneness at an early phase of object-oriented development. 1. Introduce four checkpoints into the design/implementation phases. 2. Determine the available metric set at each checkpoint. 3. Estimate the fault-proneness of the classes (components) at each checkpoint by multivariate logistic regression analysis. We empirically evaluate how well the metric sets predict fault-prone classes at each checkpoint.

6 Introduced checkpoints CP1: Associations and attributes of classes are determined. CP2: Derivation, interfaces (methods), and reused classes are determined. CP3: The algorithm of each method is developed. CP4: Source code is written. (Timeline: Analysis → System Design → Object Design → Implementation)

7 Metrics We use the following metrics in this study. C&K metrics ・ DIT, NOC, RFC, CBO, WMC, and LCOM ・ CBON (Coupling to newly developed classes) ・ CBOR (Coupling to reused classes), where CBO = CBON + CBOR Other metrics ・ NIV (Number of instance variables) ・ SLOC (Source lines of code)

8 Checkpoints and metric sets CP1: Associations and attributes of classes are determined. { CBON, NIV } CP2: Derivation, interfaces (methods), and reused classes are determined. { CBON, NIV, CBOR, CBO, WMC, DIT, NOC } CP3: The algorithm of each method is developed. { CBON, NIV, CBOR, CBO, WMC, DIT, NOC, RFC, LCOM } CP4: Source code is written. { CBON, NIV, CBOR, CBO, WMC, DIT, NOC, RFC, LCOM, SLOC } (Timeline: Analysis → System Design → Object Design → Implementation)
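The metric sets above are cumulative: everything measurable at one checkpoint stays available at the next. This can be sketched as a small lookup table (checkpoint names and metric groupings are taken from the slide; the code itself is illustrative):

```python
# Metrics that first become measurable at each checkpoint (per the slide).
NEW_METRICS = {
    "CP1": ["CBON", "NIV"],
    "CP2": ["CBOR", "CBO", "WMC", "DIT", "NOC"],
    "CP3": ["RFC", "LCOM"],
    "CP4": ["SLOC"],
}

CHECKPOINT_ORDER = ["CP1", "CP2", "CP3", "CP4"]

def available_metrics(checkpoint):
    """Return every metric measurable at the given checkpoint.

    The sets are cumulative: a metric available at CP1 is still
    available at CP2, CP3, and CP4.
    """
    metrics = []
    for cp in CHECKPOINT_ORDER[: CHECKPOINT_ORDER.index(checkpoint) + 1]:
        metrics.extend(NEW_METRICS[cp])
    return metrics

print(available_metrics("CP2"))
# ['CBON', 'NIV', 'CBOR', 'CBO', 'WMC', 'DIT', 'NOC']
```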

9 Estimation of fault-proneness of classes “Multivariate logistic regression is a standard technique, based on maximum likelihood estimation, to analyze the relationships between measures and fault-proneness of classes.” P1 = 1 / (1 + e^-(C0 + C1·CBO + C2·NIV)) P1: fault-proneness (probability that a fault is detected) CBO, NIV: metric values C0, C1, C2: coefficients If P1 of the target class > 0.5, then the class is predicted to be faulty.
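The prediction rule above can be sketched in a few lines. The coefficient values below are made-up placeholders for illustration, not the coefficients fitted in the experiment:

```python
import math

def fault_proneness(cbo, niv, c0, c1, c2):
    """Logistic model: P1 = 1 / (1 + e^-(C0 + C1*CBO + C2*NIV))."""
    return 1.0 / (1.0 + math.exp(-(c0 + c1 * cbo + c2 * niv)))

def is_predicted_faulty(cbo, niv, c0, c1, c2):
    """A class is predicted faulty when its fault-proneness exceeds 0.5."""
    return fault_proneness(cbo, niv, c0, c1, c2) > 0.5

# Hypothetical coefficients, for illustration only.
c0, c1, c2 = -2.0, 0.3, 0.1
print(fault_proneness(cbo=10, niv=5, c0=c0, c1=c1, c2=c2))  # ≈ 0.82 → predicted faulty
```

Note that P1 > 0.5 exactly when the linear term C0 + C1·CBO + C2·NIV is positive, so the threshold of 0.5 splits the metric space along a line.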

10 Outline of the experiment We empirically evaluate the proposed method using data collected from an experimental project. ・ The experimental project was performed at a computer company for five days in August. ・ The developers were new employees who had finished on-the-job training in object-oriented design and C++ programming. ・ Each developer team developed the same delivery system using C++.

11 Experimental data Fault tracking data ・ Location ・ Type ・ Effort to fix Metric data ・ Metric values of the developed classes As a result, 80 faults were collected from 141 classes.

12 Statistics of empirical data

13 Prediction by metrics (1/2) With the collected data, we estimate fault-prone classes at each checkpoint. Prediction at CP1

14 Prediction by metrics (2/2)

15 Indicators for evaluation To illustrate the precision of the estimation, two indicators are used [2]. Completeness: the percentage of actually faulty classes that are correctly predicted as faulty. Correctness: the percentage of predicted-faulty classes that are actually faulty.

16 Precision of estimation On the whole, the precision of the estimation improves as the process progresses. –Correctness is relatively high at all checkpoints, so the estimation can be used to ‘seed’ the faulty classes. –Completeness becomes better at later checkpoints. The estimation at CP2 does well.

17 Conclusion We have proposed a method to predict fault-proneness at an early phase of object-oriented development, and evaluated the method empirically. As future work, we are going to: ・ Use other metrics in the proposed method. ・ Develop a tool that supports the proposed method.

18 Weyuker’s properties [4] Let μ(c) denote the measurement of metric μ for class c, and let p + q denote the combined class of classes p and q. W1: ∃p ∃q, μ(p) ≠ μ(q). W2: ∃p ∃q, μ(p) = μ(q), and p differs from q. W3: ∃p ∃q, μ(p) ≠ μ(q), and p’s functionality is equal to q’s (but p’s design differs from q’s). W4: ∀p ∀q, μ(p) ≤ μ(p + q), and μ(q) ≤ μ(p + q). W5: ∃p ∃q ∃r, μ(p) = μ(q), and μ(p + r) ≠ μ(q + r). W6: ∃p ∃q, μ(p) + μ(q) < μ(p + q). Chidamber and Kemerer proved that each of WMC, DIT, NOC, CBO, RFC, and LCOM satisfies W1, ..., W6, except that NOC and LCOM do not satisfy W4. [4] Weyuker, E. J., Evaluating Software Complexity Measures, IEEE Trans. on Software Eng., Vol. 14, No. 9 (1988)
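As an illustration of how such a property can be checked on concrete examples, the sketch below tests W4 (monotonicity) for a toy WMC-like metric, where a class is represented as a list of per-method complexities and p + q is their concatenation. This toy setup is an assumption for illustration, not the formal treatment in [1]:

```python
def wmc(cls):
    """Toy WMC: sum of non-negative per-method complexities."""
    return sum(cls)

def combine(p, q):
    """The combined class p + q: all methods of both classes."""
    return p + q

def satisfies_w4(metric, p, q):
    """W4 (monotonicity): mu(p) <= mu(p+q) and mu(q) <= mu(p+q)."""
    combined = metric(combine(p, q))
    return metric(p) <= combined and metric(q) <= combined

p = [1, 2, 3]  # a class with three methods
q = [4, 1]     # a class with two methods
print(satisfies_w4(wmc, p, q))  # True: a sum of non-negative values is monotone
```

A single passing pair of course only fails to refute the universally quantified W4; showing a metric violates it takes one counterexample pair.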

19 Coefficients at each checkpoint