The Theory-Practice Gap
Clustering is one of the most widely used tools for exploratory data analysis. The social sciences, biology, astronomy, computer science, and many other fields all apply clustering to gain a first understanding of the structure of large data sets.
"While the interest in and application of cluster analysis has been rising rapidly, the abstract nature of the tool is still poorly understood" (Wright, 1973)
"There has been relatively little work aimed at reasoning about clustering independently of any particular algorithm, objective function, or generative data model" (Kleinberg, 2002)
Both statements still apply today.
Inherent Obstacles: Clustering is Ill-Defined
Clustering aims to assign data into groups of similar items. Beyond that, there is very little consensus on the definition of clustering.
Inherent Obstacles
Clustering is inherently ambiguous:
– There may be multiple reasonable clusterings
– There is usually no ground truth
There are many clustering algorithms with different (often implicit) objective functions.
Outline
Previous work
Clustering algorithm selection
Characterization of linkage-based clustering
– Sketch of proof
– Hierarchical algorithms that are not linkage-based
Conclusions and future work
Previous Work Towards a General Theory: Axiomatizing Clustering
Clustering in the weighted setting (Wright, '73)
Axioms of clustering distance functions (Meila, ACM '05)
Impossibility result (Kleinberg, NIPS '02)
Rebuttal to the impossibility result (Ackerman & Ben-David, NIPS '08)
Previous Work Towards a General Theory: Clusterability
Conditions for efficiently uncovering the target clustering (Balcan, Blum, and Vempala, STOC '08; Balcan, Blum, and Gupta, SODA '09)
Theoretical study of clusterability (Ackerman & Ben-David, AISTATS '09):
– Notions of clusterability are pairwise distinct
– Data sets that are more clusterable are computationally easier to cluster well
Outline
Previous work
Clustering algorithm selection
Characterization of linkage-based clustering
– Sketch of proof
– Hierarchical algorithms that are not linkage-based
Conclusions and future work
Clustering Algorithm Selection
There is a wide variety of clustering algorithms, which often produce very different clusterings.
How should a user decide which algorithm to use for a given application?
Clustering Algorithm Selection
Users rely on cost-related considerations: running times, space usage, software purchasing costs, etc.
There is inadequate emphasis on input-output behaviour.
Radical Differences in Input/Output Behavior of Clustering Algorithms
[Two figure slides showing different algorithms producing very different clusterings of the same data.]
Our Framework for Clustering Algorithm Selection
We propose a framework that lets a user utilize prior knowledge to select an algorithm: identify properties that distinguish the input-output behaviour of different clustering paradigms.
The properties should be:
1) Intuitive and "user-friendly"
2) Useful for distinguishing clustering algorithms
Our Framework for Clustering Algorithm Selection
The long-term goal is to construct a large property-based classification of many useful clustering algorithms. This would facilitate the application of prior knowledge, enabling users to identify a suitable algorithm without the overhead of executing many algorithms. The framework also helps in understanding the behaviour of existing and new algorithms.
Taxonomy of Partitional Algorithms (Ackerman, Ben-David, Loker, NIPS 2010)
[Table: rows are the algorithms single linkage, average linkage, complete linkage, k-means, k-median, min-sum, ratio-cut, and normalized cut; columns are the properties Locality, Outer Consistency, Inner Consistency, Refinement Preservation, Order Invariance, Outer Richness, Scale Invariance, and Isomorphism Invariance; each cell records whether the algorithm satisfies the property.]
Axioms vs Properties
[The same table, with its columns divided into axioms (satisfied by all of the algorithms) and properties (which distinguish between them).]
Characterization of Linkage-Based Clustering (Ackerman, Ben-David, Loker, COLT 2010)
[The same table; the characterization concerns the linkage-based algorithms: single, average, and complete linkage.]
Characterization of Linkage-Based Clustering (Ackerman, Ben-David, Loker, COLT 2010)
The 2010 characterization applies in the partitional setting, by using the k-stopping criterion. It distinguishes the linkage-based algorithms (single, average, and complete linkage) from other partitional algorithms.
Characterizing Linkage-Based Clustering in the Hierarchical Setting (Ackerman and Ben-David, IJCAI 2011)
We propose two intuitive properties that uniquely identify hierarchical linkage-based clustering algorithms.
We show that common hierarchical algorithms, including bisecting k-means, cannot be simulated by any linkage-based algorithm.
Outline
Previous work
Clustering algorithm selection
Characterization of linkage-based clustering
– Sketch of proof
– Hierarchical algorithms that are not linkage-based
Conclusions and future work
Formal Setup: Dendrograms and Clusterings
C_i is a cluster in a dendrogram D if there exists a node in D whose set of leaf descendants is exactly C_i.
Formal Setup: Dendrograms and Clusterings
C = {C_1, …, C_k} is a clustering in a dendrogram D if
– C_i is a cluster in D for all 1 ≤ i ≤ k, and
– the clusters are disjoint.
Formal Setup: Hierarchical Clustering Algorithms
A hierarchical clustering algorithm A maps
Input: a data set X with a dissimilarity function d, denoted (X,d),
to
Output: a dendrogram of X.
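To make the setup concrete, here is a minimal Python sketch, with names of my own choosing (Node, clusters): a dendrogram is a tree whose leaves are the elements of X, and a cluster is the leaf-descendant set of any node.

class Node:
    def __init__(self, points, children=()):
        self.points = frozenset(points)   # the leaf descendants of this node
        self.children = children          # () for a leaf

def clusters(D):
    """All clusters of dendrogram D: the leaf-descendant set of every node."""
    result = [D.points]
    for child in D.children:
        result.extend(clusters(child))
    return result

A clustering in D is then any pairwise-disjoint selection from clusters(D).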
Linkage-Based Algorithm
Create a leaf node for every element of X.
Repeat the following until a single tree remains:
– Consider the clusters represented by the remaining root nodes.
– Merge the closest pair of clusters by assigning them a common parent node.
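A minimal sketch of this generic procedure, reusing the Node class above (the function names are illustrative, and which pair counts as "closest" is delegated to the linkage argument, defined on the next slide):

import itertools

def linkage_based(X, d, linkage):
    """X: data set; d: dissimilarity function d(x, y);
    linkage: maps (cluster, cluster, d) to a non-negative value."""
    roots = [Node([x]) for x in X]               # a leaf node for every element of X
    while len(roots) > 1:
        # merge the pair of remaining root clusters with the smallest linkage value
        a, b = min(itertools.combinations(roots, 2),
                   key=lambda pair: linkage(pair[0].points, pair[1].points, d))
        roots.remove(a)
        roots.remove(b)
        roots.append(Node(a.points | b.points, children=(a, b)))  # common parent
    return roots[0]                              # root of the resulting dendrogram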
Examples of Linkage-Based Algorithms
The choice of linkage function distinguishes between different linkage-based algorithms. Examples of common linkage functions:
– Single linkage: shortest between-cluster distance
– Average linkage: average between-cluster distance
– Complete linkage: maximum between-cluster distance
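Under the same assumptions as the sketch above, the three linkage functions can be written as follows; for example, linkage_based(X, d, single_linkage) runs single linkage.

def single_linkage(A, B, d):
    return min(d(x, y) for x in A for y in B)    # shortest between-cluster distance

def average_linkage(A, B, d):
    return sum(d(x, y) for x in A for y in B) / (len(A) * len(B))

def complete_linkage(A, B, d):
    return max(d(x, y) for x in A for y in B)    # maximum between-cluster distance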
Locality
Informal definition: if we select a set of disjoint clusters from a dendrogram and run the algorithm on the union of these clusters, we obtain a result that is consistent with the original dendrogram.
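In symbols (my paraphrase of the informal statement above, not the paper's exact formulation):

\[
\text{for every clustering } \{C_1,\dots,C_k\} \text{ in } D = A(X,d) \text{ and } X' = \bigcup_{i=1}^{k} C_i,\quad
A(X', d|_{X'}) \text{ agrees with } D \text{ on } X'.
\]

In particular, each C_i is again a cluster of A(X', d|_{X'}), and the sub-dendrogram over each C_i is unchanged.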
Outer Consistency
Let C be a clustering on the data set (X,d), and let (X,d') be the result of an outer-consistent change of d with respect to C. If A is outer-consistent and C is a clustering in A(X,d), then A(X,d') will also include the clustering C.
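The "outer-consistent change" shown on the slide can be spelled out as follows (my paraphrase; notation mine): moving from d to d' keeps within-cluster distances fixed and can only increase between-cluster distances,

\[
d'(x,y) = d(x,y) \text{ if } x, y \text{ are in the same cluster of } C, \qquad
d'(x,y) \ge d(x,y) \text{ otherwise.}
\]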
Characterization of Linkage-Based Clustering
Theorem (Ackerman & Ben-David, IJCAI 2011): A hierarchical clustering algorithm is linkage-based if and only if it is local and outer-consistent.
Outline
Previous work
Clustering algorithm selection
Characterization of linkage-based clustering
– Sketch of proof
– Hierarchical algorithms that are not linkage-based
Conclusions and future work
Easy Direction of Proof
Every linkage-based hierarchical clustering algorithm is local and outer-consistent. The proof is quite straightforward.
Interesting Direction of Proof
If A is local and outer-consistent, then A is linkage-based. To prove this direction, we first need to formalize linkage-based clustering by formally defining what a linkage function is.
What Do We Expect From Linkage Functions?
A linkage function is a function
ℓ : {(X_1, X_2, d) : d is a distance function over X_1 ∪ X_2} → R^+
that satisfies the following:
– Representation independence: ℓ doesn't change if we re-label the data.
– Monotonicity: if we increase the distances of edges that go between X_1 and X_2, then ℓ(X_1, X_2, d) doesn't decrease.
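The monotonicity condition in symbols (my paraphrase): if d' agrees with d inside X_1 and inside X_2 and only increases the distances between them, the linkage value cannot drop,

\[
\Big( d'|_{X_1} = d|_{X_1},\;\; d'|_{X_2} = d|_{X_2},\;\; d'(x,y) \ge d(x,y) \text{ for all } x \in X_1,\, y \in X_2 \Big)
\;\Longrightarrow\;
\ell(X_1, X_2, d') \ge \ell(X_1, X_2, d).
\]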
Sketch of Proof
Recall the direction: if A satisfies outer consistency and locality, then A is linkage-based.
Goal: define a linkage function ℓ so that the linkage-based clustering based on ℓ outputs A(X,d) (for every X and d).
Sketch of Proof
Define an operator <_A: (X,Y,d_1) <_A (Z,W,d_2) if, when we run A on (X ∪ Y ∪ Z ∪ W, d), where d extends d_1 and d_2, X and Y are merged before Z and W.
– Prove that <_A can be extended to a partial ordering.
– Use the ordering to define ℓ.
Sketch of Proof (continued)
Show that <_A is a partial ordering: we show that <_A is cycle-free.
Lemma: Given a hierarchical algorithm A that is local and outer-consistent, there exists no finite sequence so that (X_1,Y_1,d_1) <_A … <_A (X_n,Y_n,d_n) <_A (X_1,Y_1,d_1).
Sketch of Proof (continued)
By the above lemma, the transitive closure of <_A is a partial ordering. This implies that there exists an order-preserving function ℓ that maps pairs of data sets to R^+. It can be shown that ℓ satisfies the properties of a linkage function.
Outline
Previous work
Clustering algorithm selection
Characterization of linkage-based clustering
– Sketch of proof
– Hierarchical algorithms that are not linkage-based
Conclusions and future work
Hierarchical but Not Linkage-Based
P-divisive algorithms construct dendrograms top-down, using a partitional 2-clustering algorithm P to split nodes (e.g. k-means with k = 2).
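A minimal sketch of a P-divisive algorithm, reusing the Node class from the formal setup (split2 stands for an arbitrary partitional 2-clustering routine; the names are mine):

def p_divisive(X, d, split2):
    """Top-down: split every node using the partitional 2-clustering routine split2."""
    if len(X) <= 1:
        return Node(X)
    A, B = split2(X, d)                       # partition X into two clusters
    return Node(X, children=(p_divisive(A, d, split2),
                             p_divisive(B, d, split2)))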
Hierarchical but Not Linkage-Based
A partitional 2-clustering algorithm P is context sensitive if there exist d ⊂ d' so that
P({x,y,z}, d) = {{x}, {y,z}} and P({x,y,z,w}, d') = {{x,y}, {z,w}}.
Ex. k-means, min-sum, min-diameter.
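As an illustration, here is a brute-force check that min-diameter 2-clustering is context sensitive. The placement of x, y, z, w on a line at 0, 4, 5, 9 is my own example, not from the slides.

from itertools import combinations

def diameter(C, d):
    return max((d(x, y) for x, y in combinations(C, 2)), default=0.0)

def min_diameter_2clustering(X, d):
    """Brute force: the 2-partition minimizing the larger cluster diameter."""
    candidates = ((set(A), set(X) - set(A))
                  for r in range(1, len(X))
                  for A in combinations(X, r))
    return min(candidates, key=lambda p: max(diameter(p[0], d), diameter(p[1], d)))

d = lambda x, y: abs(x - y)
print(min_diameter_2clustering([0, 4, 5], d))      # ({0}, {4, 5}),    i.e. {x}, {y,z}
print(min_diameter_2clustering([0, 4, 5, 9], d))   # ({0, 4}, {5, 9}), i.e. {x,y}, {z,w}

Extending the distance function with the single point w regroups the original three points, which is exactly the behaviour the definition captures.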
Hierarchical but Not Linkage-Based
The input-output behaviour of some natural divisive algorithms is distinct from that of all linkage-based algorithms: the bisecting k-means algorithm, and other natural divisive algorithms, cannot be simulated by any linkage-based algorithm.
Conclusions
– We present a new framework for clustering algorithm selection
– Provide a property-based classification of common clustering algorithms
– Characterize linkage-based clustering in terms of two natural properties
– Show that no linkage-based algorithm can simulate some natural divisive algorithms
What's Next?
Our approach to selecting clustering algorithms can be applied to any clustering application (e.g. phylogeny).
Classify applications in terms of their clustering needs:
– Target research on common clustering needs or on specific applications
– Identify when results are relevant to specific applications
Bridge the gap in other clustering settings (e.g. clustering with a "noise cluster").
Axioms of clustering algorithms.