1
Effort Estimation Based on Collaborative Filtering
Naoki Ohsugi, Masateru Tsunoda, Akito Monden, and Ken-ichi Matsumoto
Graduate School of Information Science, Nara Institute of Science and Technology
Profes 2004, Wed. 7 April 2004
2
Software Development Effort Estimation
There are methods for estimating the effort required to complete an ongoing software development project. Such estimation can be conducted based on data from past projects.
3
Problems in Estimating Effort
Past projects' data usually contain many Missing Values (MVs).
– Briand, L., Basili, V., and Thomas, W.: A Pattern Recognition Approach for Software Engineering Data Analysis. IEEE Trans. on Software Eng., vol.18, no.11, pp.931-942 (1992)
MVs degrade the accuracy of the estimation.
– Kromrey, J., and Hines, C.: Nonrandomly Missing Data in Multiple Regression: An Empirical Comparison of Common Missing-Data Treatments. Educational and Psychological Measurement, vol.54, no.3, pp.573-593 (1994)
4
Goal and Approach
Goal: to achieve accurate estimation using data with many MVs.
Approach: to employ Collaborative Filtering (CF), a technique for estimating user preferences from data with many MVs (used, for example, by Amazon.com).
5
CF-based User Preference Estimation
Evaluating similarities between the target user and the other users.
Estimating the target preference using the other users' preferences.
[Figure: a user-book rating matrix (Users A-D × Books 1-5) with entries such as 5 (prefer), 3 (so so), 1 (not prefer), and ? (MV). The target user's missing rating is estimated from the ratings of similar users, while dissimilar users contribute little.]
6
CF-based Effort Estimation
Evaluating similarities between the target project and the past projects.
Estimating the target effort using the other projects' efforts.
[Figure: a project-metric matrix (Projects A-D × project type, # of faults, design cost, coding cost, testing cost) with many ? (MV) entries. The target project's effort is estimated from the efforts of similar projects, which receive larger weights than dissimilar ones.]
7
Step 1. Evaluating Similarities
Each project is represented as a vector of normalized metrics. A smaller angle between two vectors denotes a higher similarity between the two projects.
[Figure: two projects (A and B) plotted as vectors over normalized metrics such as project type, # of faults, and coding cost; the angle between the example vectors gives a similarity of 0.71.]
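As a concrete illustration of this step, the sketch below computes the cosine of the angle between two project vectors. The handling of MVs (comparing only metrics observed in both projects) and the names are our assumptions, not the paper's exact implementation.

```python
import math

def cosine_similarity(p, q):
    """Cosine of the angle between two normalized metric vectors.

    p and q hold normalized metric values in [0, 1]; None marks a
    missing value (MV). Assumption: only metrics observed in BOTH
    projects contribute to the similarity.
    """
    pairs = [(a, b) for a, b in zip(p, q) if a is not None and b is not None]
    if not pairs:
        return 0.0  # no shared metrics: treat the projects as dissimilar
    dot = sum(a * b for a, b in pairs)
    norm_p = math.sqrt(sum(a * a for a, _ in pairs))
    norm_q = math.sqrt(sum(b * b for _, b in pairs))
    if norm_p == 0.0 or norm_q == 0.0:
        return 0.0
    return dot / (norm_p * norm_q)

# Illustrative vectors (project type, # of faults, coding cost, testing cost)
project_a = [1.0, 1.0, 0.0625, 0.1667]
project_b = [1.0, None, 0.375, 0.0]
print(round(cosine_similarity(project_a, project_b), 2))
```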
8
Step 2. Calculating the Estimated Value
Choosing the k most similar projects.
– k is called the Neighborhood Size.
Calculating the estimated value from a weighted sum of the observed values on the k similar projects.
[Figure: the target project's missing value estimated with k = 2 from past Projects A-D, whose similarities to the target (e.g. 0.71 and 0.062) weight their observed values.]
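The second step can then be sketched as follows, reusing cosine_similarity from the Step 1 sketch. Interpreting the weighted sum as a similarity-weighted average is our assumption; the paper's exact weighting scheme may differ.

```python
def estimate_value(target, past_projects, past_values, k=2):
    """Estimate the target project's missing value (e.g. testing cost).

    Picks the k most similar past projects (the neighborhood) and
    returns the similarity-weighted average of their observed values.
    Requires cosine_similarity from the Step 1 sketch above.
    """
    neighbors = sorted(
        zip(past_projects, past_values),
        key=lambda pv: cosine_similarity(target, pv[0]),
        reverse=True,
    )[:k]
    weights = [cosine_similarity(target, p) for p, _ in neighbors]
    total = sum(weights)
    if total == 0.0:
        return sum(v for _, v in neighbors) / len(neighbors)  # unweighted fallback
    return sum(w * v for w, (_, v) in zip(weights, neighbors)) / total
```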
9
Case Study
We evaluated the proposed method using data collected from a large software development company (over 7,000 employees).
– The data were collected from 1,081 projects over a decade.
13% of the projects developed new products.
36% of the projects customized ready-made products.
51% of the projects were of unknown type.
– The data contained 14 kinds of metrics: design cost, coding cost, testing cost, # of faults, etc.
10
Unevenly Distributed Missing Values

Metric                                                   Rate of MVs
Mainframe or not                                         75.76%
New development or not                                    7.49%
Total design cost (DC)                                    0.00%
Total coding cost (CC)                                    0.00%
DC for regular staff of a company                        86.68%
DC for dispatched staff from other companies             86.68%
DC for subcontract companies                             86.59%
CC for regular staff                                     86.68%
CC for dispatched staff                                  86.68%
CC for subcontract companies                             86.59%
# of faults found in the review of conceptual design     83.53%
# of faults found in the review of functional design     70.77%
# of faults found in the review of program design        80.20%
Testing cost                                              0.00%
Total                                                    59.83%
11
Evaluation Procedure
1. We randomly divided the data 50-50 into two datasets: a Fit Dataset (541 projects) and a Test Dataset (540 projects).
2. We estimated the Testing Costs of the projects in the Test Dataset using the Fit Dataset.
3. We compared the estimated costs with the actual costs.
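A minimal sketch of this split; the seed and variable names are ours, as the paper does not specify its random procedure.

```python
import random

random.seed(2004)                  # arbitrary seed for a reproducible split
projects = list(range(1081))       # stand-ins for the 1,081 project records
random.shuffle(projects)
fit_dataset = projects[:541]       # used to build the estimators
test_dataset = projects[541:]      # 540 projects whose Testing Costs are estimated
```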
12
Regression Model We Used
We employed stepwise metric selection, together with the following Missing Data Treatments:
– Listwise Deletion
– Pairwise Deletion
– Mean Imputation
13
Relationships Between the Estimated Costs and the Actual Costs
[Figure: two log-log scatter plots of Estimated Costs (x-axis, 0.001-100) versus Actual Costs (y-axis, 0.0001-100), one for CF (k = 22) and one for Regression (Listwise Deletion).]
14
Evaluation Criteria of Accuracy
MAE: Mean Absolute Error
VAE: Variance of Absolute Error
MRE: Mean Relative Error
VRE: Variance of Relative Error
Pred25: ratio of the projects whose Relative Errors are under 0.25.

Absolute Error = |Estimated Cost − Actual Cost|
Relative Error = |Estimated Cost − Actual Cost| / Actual Cost
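These criteria can be computed as in the sketch below. Whether the paper uses population or sample variance is not stated; population variance is assumed here.

```python
from statistics import mean, pvariance

def accuracy_criteria(estimated, actual):
    """MAE, VAE, MRE, VRE, and Pred25 from the definitions on this slide."""
    abs_err = [abs(e - a) for e, a in zip(estimated, actual)]
    rel_err = [abs(e - a) / a for e, a in zip(estimated, actual)]
    return {
        "MAE": mean(abs_err),
        "VAE": pvariance(abs_err),   # assumption: population variance
        "MRE": mean(rel_err),
        "VRE": pvariance(rel_err),
        "Pred25": sum(r < 0.25 for r in rel_err) / len(rel_err),
    }
```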
15
Accuracy of Each Neighborhood Size
The most accurate estimation was observed at k = 22 (MRE = 0.82).
[Figure: Mean Relative Error (y-axis, 0.81-0.89) plotted against Neighborhood Size (x-axis, 1-50); the curve reaches its minimum, MRE = 0.82, at k = 22.]
16
Accuracies of CF and Regression Models
All evaluation criteria indicated that CF (k = 22) was the most effective for our data.

Method                           MAE     VAE        MRE      VRE            Pred25
CF (k = 22)                      0.21    2.2        0.82     3.45           36%
Regression (Listwise Deletion)   0.71    6.45       30.22    287581.18      10%
Regression (Pairwise Deletion)   52.75   97171.84   6344.27  18937623098    12%
Regression (Mean Imputation)     1.33    5.07       331.69   24208218.49    4%
17
Related Work
Analogy-based Estimation
– It estimates effort using the values of similar projects.
Shepperd, M., and Schofield, C.: Estimating Software Project Effort Using Analogies. IEEE Trans. on Software Eng., vol.23, no.11, pp.736-743 (1997)
– They took a different approach to evaluating similarities between projects.
– They did not address missing values.
18
Summary
We proposed a method for estimating software development effort using Collaborative Filtering.
We evaluated the proposed method.
– The results suggest that the proposed method can make good estimations from data containing many MVs.
19
Future Work
Designing a method to find an appropriate neighborhood size automatically.
Improving the accuracy of estimation with other similarity evaluation algorithms.
Comparing the accuracy with that of other methods (e.g. analogy-based estimation).
23
Step 1. Normalizing Metrics
To unify each metric's influence on the similarity computation, each value is normalized with the following equation:

normalized value = (value − min) / (max − min)

where min and max are the smallest and largest observed values of that metric across the projects.
[Figure: the Projects A-D metric matrix before and after normalization; for example, a testing cost of 30 with observed minimum 20 and maximum 80 maps to (30 − 20)/(80 − 20) = 0.1667.]
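A sketch of this normalization; taking min and max over only the observed (non-missing) values is our assumption.

```python
def normalize_metric(values):
    """Min-max normalize one metric across projects; None marks an MV."""
    observed = [v for v in values if v is not None]
    lo, hi = min(observed), max(observed)
    if hi == lo:
        return [0.0 if v is not None else None for v in values]
    return [(v - lo) / (hi - lo) if v is not None else None for v in values]

# Testing costs 30, 20, 80, MV, 25, 40 -> 0.1667, 0.0, 1.0, MV, 0.0833, 0.3333
print(normalize_metric([30, 20, 80, None, 25, 40]))
```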
24
Comparison with the Stepwise Regression Model
1. We applied the following Missing Data Treatments (MDTs) for regression:
– Listwise Deletion
– Pairwise Deletion
– Mean Imputation
2. We made regression models using the observed data.
– e.g. Testing Cost = 5.5 × Design Cost − 2.5 × Coding Cost
3. We estimated the Testing Costs by assigning the observed values of the target projects.
– e.g. Testing Cost = 5.5 × 30 (Design Cost) − 2.5 × 40 (Coding Cost) = 65
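For comparison, a minimal sketch of the Mean Imputation variant; stepwise metric selection is omitted for brevity, and the NumPy usage and names are ours.

```python
import numpy as np

def fit_with_mean_imputation(X, y):
    """Fit a linear regression after replacing MVs (np.nan) with column means."""
    X = np.asarray(X, dtype=float)
    col_means = np.nanmean(X, axis=0)           # per-metric means over observed values
    X = np.where(np.isnan(X), col_means, X)     # Mean Imputation
    X = np.column_stack([np.ones(len(X)), X])   # intercept term
    coef, *_ = np.linalg.lstsq(X, np.asarray(y, dtype=float), rcond=None)
    return coef                                  # [intercept, b1, b2, ...]
```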