
Evaluating Software Clustering Algorithms


1 Evaluating Software Clustering Algorithms

2 The problem
How do we know that a particular decomposition of a software system is good? What does 'good' mean anyway?
We can compare against a:
Mental model
Benchmark standard
The comparison can be done either manually or automatically

3 Manual evaluation
Have experts evaluate automatic decompositions
Very time-consuming, impractical
Also quite subjective
Need an automatic, objective way of doing it

4 Automatic evaluation
Usually measures the similarity of an automatic decomposition A to an authoritative decomposition M (prepared manually)
Major drawback: assumes that there exists one 'correct' decomposition
Other evaluation approaches are possible, such as measuring the stability of a software clustering algorithm (SCA)

5 Precision/Recall
Standard Information Retrieval measures
Applied in a software clustering context by Anquetil
Definitions:
Intra pair: a pair of software entities in the same cluster
Inter pair: a pair of entities in different clusters

6 Precision/Recall
Precision: percentage of intra pairs in A that are also intra pairs in M
Recall: percentage of intra pairs in M that are also found in A
A good algorithm should exhibit high values for both measures
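Not part of the original deck: a minimal Python sketch of how these two measures could be computed, assuming decompositions are represented as dicts mapping cluster names to collections of entity ids (a representation chosen here purely for illustration).

```python
from itertools import combinations

def intra_pairs(decomposition):
    """Set of unordered entity pairs that share a cluster.

    `decomposition` maps cluster names to collections of entity ids
    (a representation chosen for this sketch, not prescribed by the deck).
    """
    pairs = set()
    for entities in decomposition.values():
        for a, b in combinations(sorted(entities), 2):
            pairs.add((a, b))
    return pairs

def precision_recall(A, M):
    """Precision and recall of decomposition A against the authoritative M."""
    intra_A, intra_M = intra_pairs(A), intra_pairs(M)
    common = intra_A & intra_M
    precision = len(common) / len(intra_A) if intra_A else 1.0
    recall = len(common) / len(intra_M) if intra_M else 1.0
    return precision, recall

# Toy usage: both values come out to 0.5 for this input.
A = {"A1": [1, 2, 3], "A2": [4, 5]}
M = {"M1": [1, 2], "M2": [3, 4, 5]}
print(precision_recall(A, M))
```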

7 Precision / Recall example
[Figure: decompositions A and B of entities 1-10, omitted]
Pair 1-5: intra pair in A but not in B
Precision: 16/20 = 80%
Recall: 16/21 = 76.2%

8 Problems with P/R
Sensitive to the size and number of clusters
Differences are exaggerated if the clusters are small or numerous
Having two values makes comparison harder
The two values are interchangeable if there is no "reference" decomposition

9 Koschke-Eisenbarth measure
Loosely based on Precision/Recall
Attempts to be less strict
Definitions:
GOOD match: two clusters (one in A, one in M) with both precision and recall larger than a threshold p (typical value 70%)
OK match: two clusters with only one of the two measures larger than p
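Not from the slides: a small Python sketch of the GOOD/OK classification described above, taking the overlap of a single cluster pair as its precision and recall. This is one reading of the slide's definition, not necessarily Koschke and Eisenbarth's exact formulation.

```python
def cluster_match(cluster_a, cluster_m, p=0.7):
    """Classify the match between one cluster of A and one cluster of M.

    Precision and recall are taken here per cluster pair: the overlap
    relative to each cluster's size. This is one reading of the slide's
    definition, not necessarily the authors' exact formulation.
    """
    a, m = set(cluster_a), set(cluster_m)
    overlap = len(a & m)
    precision = overlap / len(a) if a else 0.0
    recall = overlap / len(m) if m else 0.0
    if precision > p and recall > p:
        return "GOOD"
    if precision > p or recall > p:
        return "OK"
    return None

# Example: 4 of 5 entities shared in both directions -> GOOD at p = 0.7.
print(cluster_match([1, 2, 3, 4, 5], [2, 3, 4, 5, 6]))
```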

10 Koschke-Eisenbarth metric
[Figure: GOOD and OK matches between clusters A1-A5 and B1-B5, omitted]

11 Koschke-Eisenbarth metric
Does not take edges into account
In extreme situations (e.g. when each cluster contains only one element) it may produce strange results, such as a similarity of 100%
No penalty for joining clusters

12 MoJo distance
The distance between two different partitions of the same software system is defined as the minimum number of Move and Join operations needed to transform one into the other
Move: remove an object from a cluster and put it in a different cluster
Join: merge two clusters into one
Split: has to be simulated by Move operations

13 MoJo example
[Figure: decompositions A and B of objects 1-8, omitted]

14 Why only Move and Join?
Two clusters can be joined in only one way; one cluster can be split in two in an exponential number of ways
Joining two clusters only means that we performed more detailed clustering than required
Splitting is effectively assigned a weight equal to the cardinality of the smaller of the two resulting clusters

15 Computing MoJo distance
Give each object in partition A the tag j if it belongs to cluster Mj in partition M
Assign each cluster in A to group Gk if it contains more objects with tag k than with any other tag; to resolve ties, use the maximum bipartite matching method to maximize the number of groups
Join the clusters within each group, then move all objects with tag k to the clusters in group Gk

16 MoJo Distance Calculation
MoJo distance is M + J, where M is the number of Moves and J is the number of Joins
The number of Moves for cluster Ai is |Ai| - max_k |Ai ∩ Mk|, so the total number of Moves is M = Σ_i (|Ai| - max_k |Ai ∩ Mk|)
Assume the number of non-empty groups is g, from G1 to Gg. Each group Gk requires |Gk| - 1 Joins, so the total number of Joins is J = Σ_{k=1..g} (|Gk| - 1)
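Not from the slides: a sketch of the move/join counting outlined on slides 15 and 16, assuming the same dict-of-clusters representation as earlier and breaking tag ties arbitrarily rather than with the maximum bipartite matching mentioned on slide 15 (so it may report slightly more operations than the true MoJo distance in tie-heavy cases).

```python
from collections import Counter, defaultdict

def mojo_distance(A, M):
    """Heuristic MoJo computation following slides 15 and 16.

    A and M map cluster names to collections of object ids. Tag ties are
    broken arbitrarily here instead of via maximum bipartite matching, so
    the result may overcount Joins when several tags are equally frequent.
    """
    # Step 1: tag every object with the M-cluster it belongs to.
    tag = {obj: m_name for m_name, objs in M.items() for obj in objs}

    moves = 0
    group_sizes = defaultdict(int)   # tag k -> |Gk|, A-clusters assigned to group Gk
    for objs in A.values():
        counts = Counter(tag[o] for o in objs)
        best_tag, best_count = counts.most_common(1)[0]
        moves += len(objs) - best_count   # objects carrying other tags must be moved
        group_sizes[best_tag] += 1

    joins = sum(size - 1 for size in group_sizes.values())   # |Gk| - 1 Joins per group
    return moves + joins
```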

17 Outline of Proof
Move and Join operations are commutative
The algorithm uses the minimum number of Moves
Any algorithm that also uses the minimum number of Moves cannot beat it
For any algorithm that does not use the minimum number of Moves, there exists another one that is better or equal and uses the minimum number of Moves

18 MoJoFM Effectiveness Metric
Assumes the existence of a 'correct' authoritative partition
(plain MoJo distance, by contrast, can be used to measure the distance between two arbitrary partitions)
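The formula itself does not survive in this transcript. As usually stated in the MoJoFM literature (a reconstruction from memory, not taken from this deck):

MoJoFM(A, M) = (1 - mno(A, M) / max over all partitions A' of mno(A', M)) x 100%

where mno(A, M) is the minimum number of Move and Join operations needed to transform A into M, so 100% means A is identical to M and 0% means A is as far from M as possible.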

19 MeCl and EdgeSim
The other measures do not consider edges, yet edges might convey important information
EdgeSim: rewards clustering algorithms for preserving edge types, penalizes them for changing edge types
MeCl: rewards the clustering algorithm for creating cohesive "subclusters"

20 EdgeSim
Inter-edge: an edge between clusters
Intra-edge: an edge within a cluster
Y: the set of edges that are of the same type (inter or intra) in both A and B
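Not from the slides: a sketch of how EdgeSim might be computed from these definitions. The deck omits the formula; treating EdgeSim as |Y| divided by the total number of edges, expressed as a percentage, is an assumption that is at least consistent with the 52.6% figure on the next slide.

```python
def edge_type(edge, decomposition):
    """'intra' if both endpoints are in the same cluster, otherwise 'inter'."""
    cluster_of = {obj: name for name, objs in decomposition.items() for obj in objs}
    a, b = edge
    return "intra" if cluster_of[a] == cluster_of[b] else "inter"

def edge_sim(edges, A, B):
    """Percentage of edges whose type (intra/inter) agrees in A and B.

    Assumes EdgeSim = |Y| / |edges| * 100; the deck does not show the formula.
    """
    Y = [e for e in edges if edge_type(e, A) == edge_type(e, B)]
    return 100.0 * len(Y) / len(edges) if edges else 100.0
```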

21 EdgeSim example
[Figure omitted] In this example, EdgeSim(A,B) = 52.6%

22 EdgeSim counterexample
[Figure omitted]

23 EdgeMoJo philosophy
A similarity measure cannot make assumptions about which cluster a particular object should belong to
Dissimilarity between decompositions should increase if the misplacement of an object results in the misplacement of a large number of edges

24 EdgeMoJo calculation
Apply MoJo and obtain a series of Move and Join operations
Perform all Join operations
The cost of each Move operation increases from 1 to 1 + |e1 - e2| / (e1 + e2), where e1 and e2 are the numbers of edges connecting the moved object to the two clusters involved (cf. the example on the next slide)
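Not from the slides: a hypothetical sketch of the edge-weighted move cost, reverse-engineered from the m(5) = 1 + |4-1|/(4+1) example on the next slide; the exact meaning of the two edge counts is not given in this transcript.

```python
def edge_mojo_move_cost(edges_to_first, edges_to_second):
    """Hypothetical EdgeMoJo move cost, reverse-engineered from the
    m(5) = 1 + |4 - 1| / (4 + 1) = 1.6 example on the next slide.
    The precise meaning of the two edge counts is not given in the deck.
    """
    total = edges_to_first + edges_to_second
    if total == 0:
        return 1.0   # an object with no incident edges keeps the plain MoJo cost
    return 1.0 + abs(edges_to_first - edges_to_second) / total

print(edge_mojo_move_cost(4, 1))   # 1.6, matching the slide's example
```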

25 EdgeMoJo example
[Figure: decompositions A and B of objects 1-8, omitted]
m(5) = 1 + |4-1|/(4+1) = 1.6

26 Real data experiments

27 Synthetic data experiments

