Defect Prediction using SMOTE & GA
Dr. Abdul Rauf
Introduction
Software fault prediction estimates the probability that a module contains defects. It is a classification problem: each software module is predicted to be faulty or non-faulty. Two things are needed for fault prediction:
– Raw data -> source code
– Software metrics -> static code attributes
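As a minimal sketch of this classification view, the snippet below trains a classifier on static code attributes to label modules as faulty or non-faulty; the metric names and values are illustrative assumptions, not data from this study:

```python
# A minimal sketch of fault prediction as binary classification.
# X holds static code metrics per module; y marks faulty (1) or
# non-faulty (0) modules. All values here are made up for illustration.
import numpy as np
from sklearn.naive_bayes import GaussianNB

# Each row: [lines_of_code, cyclomatic_complexity, operand_count]
X = np.array([[120, 8, 40], [30, 2, 10], [450, 25, 160], [60, 4, 22]])
y = np.array([1, 0, 1, 0])  # 1 = faulty, 0 = non-faulty

model = GaussianNB().fit(X, y)
print(model.predict([[200, 12, 70]]))  # fault-proneness of a new module
```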
Defect Prediction Techniques
Machine Learning Approaches: ANN, PSO, SVM, Naive Bayes
Statistical Models: Poisson regression, binomial regression model, residual analysis, multivariate model
Problem Statement
Currently, most existing techniques do not deal with:
– Outlier detection [6]
– Appropriate selection of software metrics [5,6]
Research Questions
RQ1: How to effectively deal with poor-quality data sets used for fault-proneness prediction?
RQ2: How to efficiently select an optimized set of software metrics based on program features?
RQ3: How to ensure the optimization capability of defect prediction?
Proposed Solution: Experimental Setup
Step 1: Apply Correlation-based Feature Selection (CFS) for the appropriate selection of software metrics.
Step 2: Apply the Synthetic Minority Over-sampling Technique (SMOTE) to deal with class imbalance.
Step 3: After preprocessing, divide the data set into two parts, one for training and one for testing, in a 30/70 ratio.
Step 4: Train the base classifiers on the training set and assign each classifier a weight based on its output, i.e., classifiers that give high accuracy receive high weights. A sketch of Steps 1-4 follows.
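A minimal sketch of Steps 1-4, assuming a metrics matrix `X` and fault labels `y`. scikit-learn has no built-in CFS, so a univariate SelectKBest filter stands in for it here, and the three base classifiers are illustrative choices, not the exact ensemble used in the study:

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from imblearn.over_sampling import SMOTE

def preprocess_and_train(X, y, n_metrics=8, seed=42):
    # Step 1: correlation-style metric selection (stand-in for CFS)
    X_sel = SelectKBest(f_classif, k=n_metrics).fit_transform(X, y)
    # Step 2: SMOTE to re-balance the faulty (minority) class
    X_bal, y_bal = SMOTE(random_state=seed).fit_resample(X_sel, y)
    # Step 3: 30/70 training/testing split, as described above
    X_tr, X_te, y_tr, y_te = train_test_split(
        X_bal, y_bal, train_size=0.3, random_state=seed)
    # Step 4: weight each base classifier by its accuracy on held-out data
    clfs = [GaussianNB(), DecisionTreeClassifier(), KNeighborsClassifier()]
    weights = [clf.fit(X_tr, y_tr).score(X_te, y_te) for clf in clfs]
    return clfs, weights, (X_tr, X_te, y_tr, y_te)
```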
Proposed Solution: Experimental Setup (cont.)
Step 5: Optimize the weights of the majority-voting classifier using a genetic algorithm (GA).
Step 6: The GA starts with the outputs of the N selected classifiers.
Step 7: The initial population of the GA is evaluated by the fitness function, which is based on the error rate.
Step 8: In each new generation of the GA, the best-fitted chromosomes are selected, so the best error rate achieved is maintained throughout the evolution process.
Step 9: The GA terminates when the maximum number of generations is reached or the population converges to a satisfactory solution. A sketch of this loop follows.
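A hand-rolled sketch of Steps 5-9; the population size, crossover, and mutation settings are illustrative assumptions, and `fitness` is any function that returns the ensemble error rate for a weight vector (one concrete choice appears on the fitness-function slide below):

```python
import numpy as np

def evolve_weights(fitness, n_classifiers, pop_size=30, max_gen=100,
                   mut_rate=0.1, tol=1e-3, seed=0):
    rng = np.random.default_rng(seed)
    # Step 6: initial population, one weight per classifier (gene)
    pop = rng.random((pop_size, n_classifiers))
    for _ in range(max_gen):                          # Step 9: generation cap
        errors = np.array([fitness(w) for w in pop])  # Step 7: evaluate
        pop = pop[np.argsort(errors)]                 # Step 8: keep the fittest
        if errors.min() <= tol:                       # Step 9: convergence
            break
        parents = pop[: pop_size // 2]
        # uniform crossover between randomly paired parents
        pairs = rng.integers(0, len(parents), size=(pop_size - len(parents), 2))
        mask = rng.random((len(pairs), n_classifiers)) < 0.5
        children = np.where(mask, parents[pairs[:, 0]], parents[pairs[:, 1]])
        # mutation: small Gaussian perturbation on a fraction of the genes
        mut = rng.random(children.shape) < mut_rate
        children = np.clip(children + mut * rng.normal(0, 0.1, children.shape), 0, 1)
        pop = np.vstack([parents, children])
    return pop[0]  # best weight vector found
```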
SMOTE: A State-of-the-Art Resampling Approach
SMOTE stands for Synthetic Minority Over-sampling Technique, a technique designed by Chawla, Bowyer, Hall, & Kegelmeyer in 2002. It combines informed oversampling of the minority class with random undersampling of the majority class.
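As a usage sketch, the imbalanced-learn library implements both halves of this combination; the dataset is synthetic and the sampling ratios are illustrative assumptions:

```python
# Informed oversampling (SMOTE) followed by random undersampling.
from collections import Counter
from imblearn.over_sampling import SMOTE
from imblearn.under_sampling import RandomUnderSampler
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=200, weights=[0.9, 0.1], random_state=0)
# grow the minority class to half the size of the majority class
X_over, y_over = SMOTE(sampling_strategy=0.5, random_state=0).fit_resample(X, y)
# then shrink the majority class until the two classes are balanced
X_bal, y_bal = RandomUnderSampler(random_state=0).fit_resample(X_over, y_over)
print(Counter(y), Counter(y_over), Counter(y_bal))
```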
SMOTE's Informed Oversampling Procedure
For each minority sample:
– Find its k nearest minority neighbours
– Randomly select j of these neighbours
– Randomly generate synthetic samples along the lines joining the minority sample and its j selected neighbours (j depends on the amount of oversampling desired)
A sketch of this interpolation follows.
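A minimal sketch of the procedure above, assuming `X_min` holds only the minority-class samples; each synthetic sample lies on the line segment between a minority sample and one of its k nearest minority neighbours:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def smote_samples(X_min, n_new, k=5, seed=0):
    rng = np.random.default_rng(seed)
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X_min)  # +1 skips the point itself
    _, idx = nn.kneighbors(X_min)
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))           # pick a minority sample
        j = idx[i][rng.integers(1, k + 1)]     # pick one of its k neighbours
        gap = rng.random()                     # random point along the segment
        synthetic.append(X_min[i] + gap * (X_min[j] - X_min[i]))
    return np.array(synthetic)
```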
[Diagram: minority samples and the synthetic samples generated between them. But what if there is a majority sample nearby?]
SMOTE's Shortcomings
Lack of flexibility: the number of synthetic samples generated by SMOTE is fixed in advance, thus not allowing for any flexibility in the re-balancing rate.
Proposed Solution: GA Configuration
Fitness function: set on the minimum error rate.
Chromosome representation: the length of a chromosome is equal to the number of classifiers used. A sketch follows.
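A sketch of one concrete fitness function matching this encoding: a chromosome carries one weight per classifier, and fitness is the error rate of the weighted majority vote. The names `clfs`, `X_val`, and `y_val` are assumed to come from the training step sketched earlier:

```python
import numpy as np

def fitness(weights, clfs, X_val, y_val):
    votes = np.zeros((len(X_val), 2))           # columns: non-faulty, faulty
    for w, clf in zip(weights, clfs):
        preds = clf.predict(X_val).astype(int)
        votes[np.arange(len(X_val)), preds] += w  # weighted vote per class
    y_hat = votes.argmax(axis=1)
    return np.mean(y_hat != y_val)              # error rate; the GA minimizes this

# usage with the GA loop sketched earlier:
# best = evolve_weights(lambda w: fitness(w, clfs, X_val, y_val), len(clfs))
```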
Experiments
In this study, three experiments have been performed on five datasets from a white-goods manufacturer that operates in the embedded-systems industry.
In the first experiment, classifiers were trained without applying any filters.
In the second experiment, we employed the Correlation-based Feature Selection (CFS) approach for appropriate selection of metrics and used the reduced datasets for the prediction process.
In the third experiment, we applied CFS for feature selection and SMOTE to all the data sets. One way to organize these three settings is sketched below.
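A sketch of the three experimental settings as pipelines, reusing the stand-ins from earlier (SelectKBest in place of CFS, a single Naive Bayes classifier in place of the full weighted ensemble, and k=8 as an assumed metric count):

```python
from imblearn.pipeline import Pipeline
from imblearn.over_sampling import SMOTE
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.naive_bayes import GaussianNB

experiments = {
    # Experiment 1: no filtering, raw metrics straight into the classifier
    "exp1_no_filter": Pipeline([("clf", GaussianNB())]),
    # Experiment 2: correlation-based metric selection only
    "exp2_cfs": Pipeline([("fs", SelectKBest(f_classif, k=8)),
                          ("clf", GaussianNB())]),
    # Experiment 3: metric selection plus SMOTE re-balancing
    "exp3_cfs_smote": Pipeline([("fs", SelectKBest(f_classif, k=8)),
                                ("smote", SMOTE(random_state=0)),
                                ("clf", GaussianNB())]),
}
# for name, pipe in experiments.items(): pipe.fit(X_train, y_train)
```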
Empirical Data Collection
The datasets used in this research study are available in the PROMISE data repository (http://promisedata.org or http://www.tunedit.org). We employ five datasets; they come from a white-goods manufacturer that operates in the embedded-systems industry and have been employed in earlier research studies [5],[16],[8].

Sr#  Project  Language  Faulty modules  Non-faulty modules  Attributes
1    AR1      C         9               112                 29
2    AR3      C         8               55                  29
3    AR4      C         19              88                  29
4    AR5      C         8               28                  29
5    AR6      C         15              86                  29
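A sketch of loading one of these datasets, assuming it has been downloaded from the repository as an ARFF file; the file name and the class-column encoding are assumptions about the local copy:

```python
import pandas as pd
from scipy.io import arff

data, meta = arff.loadarff("ar1.arff")  # hypothetical local path
df = pd.DataFrame(data)
# nominal ARFF attributes load as byte strings, e.g. b'true' / b'false'
y = (df.iloc[:, -1] == b"true").astype(int)   # 1 = faulty module
X = df.iloc[:, :-1].to_numpy(dtype=float)     # the static code metrics
print(X.shape, int(y.sum()), "faulty of", len(y))
```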
Evaluation Measures
TP: number of true positives
FP: number of false positives
FN: number of false negatives
TN: number of true negatives
Probability of detection: pd = TP / (TP + FN)
Probability of false alarm: pf = FP / (FP + TN)
Balance = 1 − √((0 − pf)² + (1 − pd)²) / √2
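A direct translation of these measures, assuming `y_true` and `y_pred` are binary arrays with 1 marking faulty modules:

```python
import numpy as np
from sklearn.metrics import confusion_matrix

def pd_pf_balance(y_true, y_pred):
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    pd_ = tp / (tp + fn)                   # probability of detection
    pf = fp / (fp + tn)                    # probability of false alarm
    balance = 1 - np.sqrt((0 - pf) ** 2 + (1 - pd_) ** 2) / np.sqrt(2)
    return pd_, pf, balance
```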
Experiments on the AR3 Dataset
Overall Experimental Results of the AR3 Dataset

Evaluation   NB Threshold       NB and VFI Classifier   Proposed Paper Results
Measure      Optimization [5]   Ensemble [16]           EXP1   EXP2   EXP3
Pd           0.8                0.62                    0.75   0.8    1
Pf           0.13               0.13                    0.05   0.02   0.07
Balance      0.76               0.72                    0.81   0.85   0.94
Overall Comparison Chart of AR3 Dataset
Experiments on the AR4 Dataset
Overall Experimental Results of the AR4 Dataset

Evaluation   NB Threshold       NB and VFI Classifier   Proposed Methodology
Measure      Optimization [5]   Ensemble [16]           EXP1   EXP2   EXP3
Pd           0.7                0.7                     0.75   0.75   0.75
Pf           0.31               0.29                    0.12   0.1    0.11
Balance      0.57               0.7                     0.8    0.82   0.85
Overall Comparison Chart of AR4 Dataset
Result Discussion
In Experiment#1 we trained the classifiers using all the metrics available in a dataset. The results show that the ensemble of classifiers improves fault prediction compared to the techniques proposed in [5] and [7].
By applying CFS in Experiment#2, we observed that the Pd rate improved: the average Pd rate increased by 10 percentage points while maintaining the balance and the Pf rate.
In Experiment#3, by removing outliers from the dataset and choosing only appropriate metrics for fault-proneness prediction, the results improved significantly: the Pd rate increased by 13 percentage points and the balance by 22 percentage points, while the Pf rate was maintained.
Result Discussion (cont.)
Experimental results of Experiment#1, Experiment#2 and Experiment#3:

Data set   Experiment#1        Experiment#2        Experiment#3
           Pd    Pf    Bal     Pd    Pf    Bal     Pd    Pf    Bal
ar1        0.85  0.1   0.78    1     0.06  0.95    1     0.06  0.95
ar3        0.75  0.05  0.81    0.8   0.02  0.85    1     0.07  0.94
ar4        0.75  0.12  0.8     0.75  0.1   0.82    0.83  0.11  0.93
ar5        0.8   0.05  0.85    1     0.05  0.96    1     0.05  0.96
ar6        0.57  0.12  0.68    0.66  0.11  0.74    1     0.1   0.92
Avg        0.74  0.09  0.78    0.84  0.07  0.72    0.97  0.08  0.94

The results show that our proposed technique improves the software fault prediction process. To endorse the effectiveness of our proposed approach, we have also performed a comparison with two other techniques proposed in the literature.
Conclusion
An ensemble of classifiers improves the fault-proneness prediction process.
Results show that our proposed model increases the probability of detection and improves the balance rate while greatly reducing the probability of false alarms.
Our proposed technique can be used to remove the manual effort required for fault detection in software testing.
The results also reveal an improvement in the early fault-prediction rate in comparison with other existing techniques.
References
[1] N. E. Fenton et al., Predicting software defects in varying development lifecycles using Bayesian nets, Information and Software Technology (2007).
[2] I. Gondra, Applying machine learning to software fault-proneness prediction, The Journal of Systems and Software (2008).
[3] C. Catal, Software fault prediction: A literature review and current trends, Expert Systems with Applications (2011).
[4] D. Gray et al., The misuse of the NASA Metrics Data Program data sets for automated software defect prediction (2010).
[5] A. Tosun et al., Reducing false alarms in software defect prediction by decision threshold optimization, Third International Symposium on Empirical Software Engineering and Measurement (2009).
[6] T. M. Khoshgoftaar et al., Attribute selection and imbalanced data: Problems in software defect prediction, 22nd International Conference on Tools with Artificial Intelligence (2010).
[7] Q. Wang et al., Feature selection and clustering in software quality prediction, Evaluation and Assessment in Software Engineering (2007).
[8] N. E. Fenton, A critique of software defect prediction models, IEEE Transactions on Software Engineering (1999).
References (cont.)
[9] Z. Li et al., A practical method for software fault prediction, International Conference on Information Reuse and Integration (2007).
[10] K. El Emam et al., The prediction of faulty classes using object-oriented design metrics, Journal of Systems and Software (2001).
[11] J. Zheng, Cost-sensitive boosting neural networks for software defect prediction, Expert Systems with Applications (2010).
[12] A. Carvalho et al., A symbolic fault-prediction model based on multiobjective particle swarm optimization, The Journal of Systems and Software (2010).
[13] D. Rodriguez et al., Searching for rules to find defective modules in unbalanced data sets, Proceedings of the 1st International Symposium on Search Based Software Engineering (2009).
[14] P. S. Sandhu et al., A genetic algorithm based classification approach for finding fault prone classes, Proceedings of the World Academy of Science, Engineering and Technology (2009).
[15] A. T. Mısırlı et al., An industrial case study of classifier ensembles for locating software defects, Software Quality Journal, DOI 10.1007/s11219-010-9128-1 (2011).
[16] A. T. Mısırlı et al., An industrial case study of classifier ensembles for locating software defects, Software Quality Journal, Volume 19, Issue 3, September 2011, pp. 515-536.
Q & A
THANK YOU