
1 Flat Clustering. Adapted from slides by Prabhakar Raghavan, Christopher Manning, Ray Mooney and Soumen Chakrabarti.

2 Today's topic: clustering. Document clustering: motivations, document representations, success criteria. Clustering algorithms: partitional (flat) and hierarchical (tree).

3 What is clustering? Clustering is the process of grouping a set of objects into classes of similar objects: documents within a cluster should be similar, and documents from different clusters should be dissimilar. It is the most common form of unsupervised learning, i.e., learning from raw data, as opposed to supervised learning, where a classification of the examples is given. Clustering is a common and important task with many applications in IR and elsewhere.

4 A data set with clear cluster structure How would you design an algorithm for finding the three clusters in this case?

5 Classification vs. clustering. Classification is supervised learning: classes are human-defined and part of the input to the learning algorithm. Clustering is unsupervised learning: clusters are inferred from the data without human input. However, there are many ways of influencing the outcome of clustering: the number of clusters, the similarity measure, the representation of documents, ...

6 Why cluster documents? Whole-corpus analysis/navigation: better user interface. Improving recall in search applications: better search results (pseudo relevance feedback). Better navigation of search results: effective "user recall" will be higher. Speeding up vector space retrieval: faster search.

7 Applications of clustering in IR. Search result clustering (clusters search results): more effective information presentation to the user. Scatter-Gather (clusters subsets of the collection): alternative user interface, "search without typing". Collection clustering (clusters the whole collection): effective information presentation for exploratory browsing. Cluster-based retrieval (clusters the whole collection): higher efficiency, faster search.

8 The Yahoo! hierarchy isn't clustering, but it is the kind of output you want from clustering. [Figure: a slice of the www.yahoo.com/Science hierarchy, with top-level categories such as agriculture, biology, physics, CS, and space, and subcategories such as agronomy, forestry, dairy, crops, botany, evolution, cell, magnetism, relativity, AI, HCI, craft, and missions.]

9 Global navigation: Yahoo

10 Global navigation: MeSH (upper level)

11 Global navigation: MeSH (lower level)

12 Navigational hierarchies: manual vs. automatic creation. Note: Yahoo and MeSH are not examples of clustering, but they are well-known examples of using a global hierarchy for navigation. Some examples of global navigation/exploration based on clustering: Cartia, Themescapes, Google News.

13 Search result clustering for better navigation

16 For better navigation of search results: grouping search results thematically, e.g., clusty.com / Vivisimo.

17 Google News: automatic clustering gives an effective news presentation metaphor

18 Selection metrics: Google News taps into its own ranking signals, which include user clicks, the estimated authority of a publication on a particular topic (possibly taking location into account), freshness/recency, geography, and more.

19 Scatter/Gather: Cutting, Karger, and Pedersen

20 Scatter/Gather (“Star”)

21 S/G example: query on "star" over encyclopedia text. [Screenshot: clusters such as sports, symbols, film/tv, music, astrophysics, astronomy, stellar phenomena, flora/fauna, galaxies and stars, constellations, and miscellaneous, each with its document count.] Clustering and re-clustering is entirely automated.

22 Scatter/Gather (Cutting, Pedersen, Tukey & Karger 92, 93; Hearst & Pedersen 95). How it works: cluster sets of documents into general "themes", like a table of contents; display the contents of the clusters by showing topical terms and typical titles; the user chooses subsets of the clusters and re-clusters the documents within; the resulting new groups have different "themes".

23 For visualizing a document collection and its themes: Wise et al., "Visualizing the non-visual", PNNL; ThemeScapes, Cartia. [Mountain height = cluster size]

24 For improving search recall. Cluster hypothesis: documents in the same cluster behave similarly with respect to relevance to information needs. Therefore, to improve search recall: cluster the docs in the corpus a priori; when a query matches a doc D, also return other docs in the cluster containing D. Example: the query "car" will also return docs containing "automobile", because clustering grouped docs containing "car" together with those containing "automobile". Why might this happen?

25 Issues for clustering. Representation for clustering: document representation (vector space? normalization?) and a notion of similarity/distance. How many clusters? Fixed a priori, or completely data-driven? Avoid "trivial" clusters that are too large or too small.

26 What makes docs "related"? Ideal: semantic similarity. Practical: statistical similarity, with docs as vectors. For many algorithms it is easier to think in terms of a distance (rather than a similarity) between docs. We can use cosine similarity (or, alternatively, Euclidean distance).
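
A minimal NumPy sketch of these two notions of relatedness (the vectors and function names are illustrative, not from the slides):

import numpy as np

def cosine_similarity(x, y):
    # Cosine of the angle between two document vectors (higher = more similar).
    return float(np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y)))

def euclidean_distance(x, y):
    # Straight-line distance between the vectors (lower = more similar).
    return float(np.linalg.norm(x - y))

d1 = np.array([2.0, 0.0, 1.0])   # e.g., term weights over a tiny vocabulary
d2 = np.array([1.0, 1.0, 0.0])
print(cosine_similarity(d1, d2))
print(euclidean_distance(d1, d2))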

27 More applications of clustering. Image processing: cluster images based on their visual content. Web: cluster groups of users based on their access patterns on webpages; cluster webpages based on their content. Bioinformatics: cluster similar proteins together (similarity w.r.t. chemical structure and/or functionality, etc.).

28 Outliers. Outliers are objects that do not belong to any cluster, or that form clusters of very small cardinality. In some applications we are interested in discovering the outliers, not the clusters (outlier analysis).

29 Nature of observations to cluster, beyond text. Real-valued attributes/variables, e.g., salary, height. Binary attributes, e.g., gender (M/F), has_cancer (T/F). Nominal (categorical) attributes, e.g., religion (Christian, Muslim, Buddhist, Hindu, etc.). Ordinal/ranked attributes, e.g., military rank (soldier, sergeant, lieutenant, captain, etc.). Variables of mixed types: multiple attributes with various types.

30 Distance functions for binary vectors. Example: X = (1, 0, 0, 1, 1, 1) and Y = (0, 1, 1, 0, 1, 0) over attributes Q1..Q6. Jaccard similarity between binary vectors X and Y: JSim(X, Y) = |X ∩ Y| / |X ∪ Y|, i.e., the number of attributes that are 1 in both, divided by the number that are 1 in at least one. Jaccard distance: Jdist(X, Y) = 1 − JSim(X, Y). Here JSim = 1/6 and Jdist = 5/6.
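
A short Python sketch (names are mine) of Jaccard similarity and distance on binary vectors, reproducing the 1/6 and 5/6 of the example:

def jaccard_similarity(x, y):
    # x, y are equal-length sequences of 0/1 attribute values.
    both = sum(1 for a, b in zip(x, y) if a == 1 and b == 1)
    either = sum(1 for a, b in zip(x, y) if a == 1 or b == 1)
    return both / either

X = [1, 0, 0, 1, 1, 1]
Y = [0, 1, 1, 0, 1, 0]
jsim = jaccard_similarity(X, Y)   # 1/6
jdist = 1 - jsim                  # 5/6
print(jsim, jdist)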

31 Distance functions for real-valued vectors. L_p norms, or Minkowski distance: L_p(x, y) = (Σ_i |x_i − y_i|^p)^(1/p), where p is a positive integer. If p = 1, L_1 is the Manhattan (or city block) distance: L_1(x, y) = Σ_i |x_i − y_i|.

32 Distance functions for real-valued vectors (continued). If p = 2, L_2 is the Euclidean distance: L_2(x, y) = sqrt(Σ_i (x_i − y_i)^2). One can also use a weighted distance: L_p(x, y) = (Σ_i w_i |x_i − y_i|^p)^(1/p). Very often L_p^p is used instead of L_p (why?).
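
A minimal NumPy sketch (function and variable names are mine) covering the Minkowski family, including the weighted variant:

import numpy as np

def minkowski(x, y, p=2, w=None):
    # L_p distance; w is an optional vector of per-dimension weights.
    diff = np.abs(np.asarray(x) - np.asarray(y)) ** p
    if w is not None:
        diff = np.asarray(w) * diff
    return float(np.sum(diff) ** (1.0 / p))

x, y = [1.0, 2.0, 3.0], [4.0, 0.0, 3.0]
print(minkowski(x, y, p=1))  # Manhattan: 5.0
print(minkowski(x, y, p=2))  # Euclidean: about 3.61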

33 Clustering algorithms. Partitional (flat) algorithms usually start with a random (partial) partition and refine it iteratively: K-means clustering, (model-based clustering). Hierarchical (tree) algorithms: bottom-up agglomerative, (top-down divisive).

34 Hard vs. soft clustering. Hard clustering: each document belongs to exactly one cluster; more common and easier to do. Soft clustering: a document can belong to more than one cluster; makes more sense for applications like creating browsable hierarchies. For example, you may want to put a pair of sneakers in two clusters: (i) sports apparel and (ii) shoes.

35 Partitioning algorithms. Partitioning method: construct a partition of n documents into a set of K clusters. Given: a set of documents and the number K. Find: a partition into K clusters that optimizes the chosen partitioning criterion. Globally optimal: exhaustively enumerate all partitions. Effective heuristic methods: K-means and K-medoids algorithms.

36 K-Means. Assumes documents are real-valued vectors. Clusters are based on centroids (aka the center of gravity, or mean) of the points in a cluster c: μ(c) = (1/|c|) Σ_{x ∈ c} x. Reassignment of instances to clusters is based on distance to the current cluster centroids (or, equivalently, it can be phrased in terms of similarities).

37 K-Means algorithm. Select K random docs {s_1, s_2, ..., s_K} as seeds. Until clustering converges (or another stopping criterion is met): for each doc d_i, assign d_i to the cluster c_j such that dist(d_i, s_j) is minimal; then update the seeds to the centroid of each cluster, i.e., for each cluster c_j, s_j = μ(c_j).
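
A compact NumPy sketch of this algorithm (not the slides' code; the initialization, convergence test, and empty-cluster handling are my own choices):

import numpy as np

def kmeans(X, K, max_iters=100, seed=0):
    # X: (n, m) array of n docs as m-dimensional vectors.
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=K, replace=False)]  # K random docs as seeds
    for _ in range(max_iters):
        # Assignment step: each doc goes to its nearest centroid.
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Update step: move each centroid to the mean of its cluster.
        new_centroids = np.array([
            X[labels == k].mean(axis=0) if np.any(labels == k) else centroids[k]
            for k in range(K)
        ])
        if np.allclose(new_centroids, centroids):  # centroids stopped moving
            break
        centroids = new_centroids
    return labels, centroids

X = np.array([[1.0, 1.0], [1.2, 0.8], [5.0, 5.0], [5.2, 4.9]])
labels, centroids = kmeans(X, K=2)
print(labels)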

38 K-means algorithm [pseudocode figure]

39 K-Means example (K=2): pick seeds; reassign clusters; compute centroids; reassign clusters; compute centroids; reassign clusters; converged! [Animation figure; x marks the centroids.]

40 Worked example: the set of points to be clustered.

41 Worked example: random selection of initial centroids. Exercise: (i) guess what the optimal clustering into two clusters is in this case; (ii) compute the centroids of the clusters.

42-62 Worked example: the algorithm then repeatedly assigns each point to its closest centroid and recomputes the cluster centroids, over several iterations.

63 Worked example: centroids and assignments after convergence.

64 Two different K-means clusterings. [Figure: the original points, a sub-optimal clustering, and the optimal clustering.]

65 Termination conditions. Several possibilities, e.g.: a fixed number of iterations; the doc partition is unchanged; the centroid positions don't change. Does the last condition mean that the docs in a cluster are unchanged?

66 Convergence. Why should the K-means algorithm ever reach a fixed point, i.e., a state in which the clusters don't change? K-means is a special case of a general procedure known as the Expectation Maximization (EM) algorithm, and EM is known to converge, although the number of iterations could be large.

67 Convergence of K-Means. Define a goodness measure of cluster k as the sum of squared distances from the cluster centroid: G_k = Σ_i (d_i − c_k)^2, summed over all d_i in cluster k, and G = Σ_k G_k. Intuition: reassignment monotonically decreases G, since each vector is assigned to its closest centroid.

68 Why K-means converges. Whenever an assignment is changed, the sum of squared distances of the data points from their assigned cluster centers is reduced. Whenever a cluster center is moved, the sum of squared distances of the data points from their currently assigned cluster centers is reduced. However, there is nothing to prevent K-means from getting stuck at local minima; we could try many random starting points. Test for convergence: if reassignment does not change the centroids, we have converged.

69 Convergence of K-Means. Recomputation monotonically decreases each G_k, since (with m_k the number of members in cluster k) Σ_i (d_i − a)^2 reaches its minimum when Σ_i −2(d_i − a) = 0, i.e., Σ_i d_i = m_k a, which gives a = (1/m_k) Σ_i d_i = c_k, the centroid. K-means typically converges quickly. (Sec. 16.4)
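
A tiny numeric check of this derivation (the values are arbitrary): the centroid minimizes the summed squared distance, and any other candidate center does worse.

import numpy as np

points = np.array([1.0, 2.0, 6.0])   # a 1-D "cluster"
centroid = points.mean()             # 3.0

def ssd(a):
    # Sum of squared distances of the points from a candidate center a.
    return float(np.sum((points - a) ** 2))

print(ssd(centroid))        # 14.0, the minimum
print(ssd(centroid + 0.5))  # 14.75, worse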

70 Time complexity. Computing the distance between two docs is O(m), where m is the dimensionality of the vectors. Reassigning clusters: O(Kn) distance computations, i.e., O(Knm). Computing centroids: each doc gets added once to some centroid, O(nm). Assuming these two steps are each done once per iteration, for I iterations: O(IKnm).

71 Optimality of K-means. Convergence does not mean that we converge to the optimal clustering! This is the great weakness of K-means: if we start with a bad set of seeds, the resulting clustering can be horrible.

72 Seed choice. Results can vary based on random seed selection: some seeds can result in a poor convergence rate, or convergence to sub-optimal clusterings. Remedies: select good seeds using a heuristic (e.g., a doc least similar to any existing mean); try out multiple starting points (see the sketch below); initialize with the results of another method. Example showing sensitivity to seeds: in the figure, if you start with B and E as centroids you converge to {A,B,C} and {D,E,F}; if you start with D and F you converge to {A,B,D,E} and {C,F}.
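
A hedged sketch of the "multiple starting points" remedy, assuming the kmeans function sketched after slide 37; it keeps the run with the lowest sum of squared distances (all names are mine):

import numpy as np

def kmeans_restarts(X, K, n_restarts=10):
    best_labels, best_centroids, best_g = None, None, np.inf
    for seed in range(n_restarts):
        labels, centroids = kmeans(X, K, seed=seed)   # kmeans as sketched earlier
        g = sum(np.sum((X[labels == k] - centroids[k]) ** 2) for k in range(K))
        if g < best_g:                                # keep the best run
            best_labels, best_centroids, best_g = labels, centroids, g
    return best_labels, best_centroids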

73 K-means issues, variations, etc. Recomputing the centroid after every assignment (rather than after all points are reassigned) can improve the speed of convergence of K-means. K-means assumes clusters are spherical in vector space and is sensitive to coordinate changes, weighting, etc.

74 How many clusters? Sometimes the number of clusters K is given: partition n docs into a predetermined number of clusters. Otherwise, finding the "right" number of clusters is part of the problem; e.g., for query results the ideal value of K is not known up front, though the UI may impose limits.

75 Hierarchical Clustering. Adapted from slides by Prabhakar Raghavan, Christopher Manning, Ray Mooney and Soumen Chakrabarti.

76 "The curse of dimensionality", or why document clustering is difficult. While clustering looks intuitive in 2 dimensions, many of our applications involve 10,000 or more dimensions. High-dimensional spaces look different: the probability of random points being close drops quickly as the dimensionality grows, and furthermore, random pairs of vectors are almost always nearly perpendicular.
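
A quick NumPy illustration of the near-perpendicularity claim (the dimensionalities and sample size are arbitrary choices of mine):

import numpy as np

rng = np.random.default_rng(0)

def mean_abs_cosine(dim, n_pairs=1000):
    # Average |cosine| between random pairs of vectors in `dim` dimensions.
    x = rng.standard_normal((n_pairs, dim))
    y = rng.standard_normal((n_pairs, dim))
    cos = np.sum(x * y, axis=1) / (np.linalg.norm(x, axis=1) * np.linalg.norm(y, axis=1))
    return float(np.mean(np.abs(cos)))

print(mean_abs_cosine(2))       # roughly 0.64: random 2-D vectors are often far from perpendicular
print(mean_abs_cosine(10000))   # close to 0: random high-dimensional vectors are nearly perpendicular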

77 Hierarchical clustering. Build a tree-based hierarchical taxonomy (dendrogram) from a set of documents. One approach: recursive application of a partitional clustering algorithm. [Figure: an example taxonomy in which animal splits into vertebrate (fish, reptile, amphibian, mammal) and invertebrate (worm, insect, crustacean).]

78 Dendrogram: hierarchical clustering. A clustering is obtained by cutting the dendrogram at a desired level: each connected component forms a cluster.

79 Hierarchical clustering algorithms. Agglomerative (bottom-up): precondition: start with each document as a separate cluster; postcondition: eventually all documents belong to the same cluster. Divisive (top-down): precondition: start with all documents belonging to the same cluster; postcondition: eventually each document forms a cluster of its own. Hierarchical clustering does not require the number of clusters k in advance, but it needs a termination/readout condition.

80 Hierarchical agglomerative clustering (HAC). HAC creates a hierarchy in the form of a binary tree. It assumes a similarity measure for determining the similarity of two clusters; up to now, our similarity measures were for documents. We will look at four different cluster similarity measures.

81 Hierarchical Agglomerative Clustering (HAC) algorithm. Start with all instances in their own cluster. Until there is only one cluster: among the current clusters, determine the two clusters c_i and c_j that are most similar; replace c_i and c_j with a single cluster c_i ∪ c_j.
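
A small sketch of HAC using SciPy (the data and the linkage choice are illustrative; this shows one standard library implementation, not the slides' code):

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

X = np.array([[1.0, 1.0], [1.1, 0.9], [5.0, 5.0], [5.1, 4.8], [9.0, 0.5]])

# 'single', 'complete', 'average', and 'centroid' correspond to the cluster
# similarity measures discussed in the following slides.
Z = linkage(X, method='average', metric='euclidean')

# Cut the dendrogram to obtain a flat clustering with 2 clusters.
labels = fcluster(Z, t=2, criterion='maxclust')
print(labels)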

82 A dendrogram. The history of mergers can be read off from bottom to top. The horizontal line of each merger tells us what the similarity of the merger was. We can cut the dendrogram at a particular point (e.g., at similarity 0.1 or 0.4) to get a flat clustering.

83 Dendrogram: document example. As clusters agglomerate, docs are likely to fall into a hierarchy of "topics" or concepts. [Figure: d1..d5 merge as {d1,d2}, {d4,d5}, then {d3,d4,d5}.]

84 Key notion: cluster representative. We want a notion of a representative point in a cluster, to represent the location of each cluster. The representative should be some sort of "typical" or central point in the cluster, e.g., the point inducing the smallest radius (or smallest squared distances, etc.) to the docs in the cluster, or the point that is the "average" of all docs in the cluster: the centroid, or center of gravity. We can then measure intercluster distances by the distances between centroids.

85 Example: n=6, k=3, closest pair of centroids. [Figure: points d1..d6, with the centroid after the first step and the centroid after the second step.]

86 Outliers in centroid computation. We can ignore outliers when computing the centroid. What is an outlier? There are many statistical definitions, e.g., the moment of a point to the centroid is greater than M times some cluster moment, for M of, say, 10.

87 Closest pair of clusters. There are many variants of defining the closest pair of clusters. Single-link: similarity of the most cosine-similar pair of points. Complete-link: similarity of the "furthest" points, i.e., the least cosine-similar pair. Centroid: clusters whose centroids (centers of gravity) are the most cosine-similar. Average-link: average cosine between pairs of elements.

88 Key question: how to define cluster similarity. Single-link: maximum similarity of any two documents. Complete-link: minimum similarity of any two documents. Centroid: average "intersimilarity", i.e., the average similarity of all document pairs excluding pairs of docs in the same cluster; this is equivalent to the similarity of the centroids. Group-average: average "intrasimilarity", i.e., the average similarity of all document pairs, including pairs of docs in the same cluster.

89 Cluster similarity: example

90 Single-link: maximum similarity

91 Complete-link: minimum similarity

92 Centroid: average intersimilarity (intersimilarity = similarity of two documents in different clusters)

93 Group average: average intrasimilarity (intrasimilarity = similarity of any pair, including cases where the two documents are in the same cluster)

94 Cluster similarity: larger example

95 Single-link: maximum similarity

96 Complete-link: minimum similarity

97 Centroid: average intersimilarity

98 Group average: average intrasimilarity

99 Single-link agglomerative clustering. Use the maximum similarity of pairs: sim(c_i, c_j) = max over x in c_i, y in c_j of sim(x, y). After merging c_i and c_j, the similarity of the resulting cluster to another cluster c_k is: sim(c_i ∪ c_j, c_k) = max(sim(c_i, c_k), sim(c_j, c_k)).

100 Single-link example

101 Single-link: chaining. Single-link clustering often produces long, straggly clusters. For most applications, these are undesirable.

102 Complete-link agglomerative clustering. Use the minimum similarity of pairs: sim(c_i, c_j) = min over x in c_i, y in c_j of sim(x, y). This makes "tighter", more spherical clusters that are typically preferable. After merging c_i and c_j, the similarity of the resulting cluster to another cluster c_k is: sim(c_i ∪ c_j, c_k) = min(sim(c_i, c_k), sim(c_j, c_k)).
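
A naive sketch (my own, not the slides') of agglomerative clustering on a precomputed similarity matrix, using exactly these merge-update rules for single-link and complete-link:

import numpy as np

def hac(sim, linkage='single'):
    # sim: symmetric (n, n) similarity matrix; returns the merge history.
    combine = max if linkage == 'single' else min   # 'complete' uses min
    clusters = {i: [i] for i in range(len(sim))}
    S = {(i, j): sim[i][j] for i in clusters for j in clusters if i < j}
    merges = []
    while len(clusters) > 1:
        (i, j), s = max(S.items(), key=lambda kv: kv[1])   # most similar pair
        merges.append((clusters[i], clusters[j], s))
        clusters[i] = clusters[i] + clusters[j]            # merge c_j into c_i
        del clusters[j]
        for k in clusters:
            if k != i:
                a, b = (min(i, k), max(i, k)), (min(j, k), max(j, k))
                S[a] = combine(S[a], S[b])                 # single: max, complete: min
        S = {p: v for p, v in S.items() if j not in p}
    return merges

sim = np.array([[1.0, 0.9, 0.2],
                [0.9, 1.0, 0.3],
                [0.2, 0.3, 1.0]])
print(hac(sim, linkage='single'))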

103 Complete-link example

104 Complete-link: sensitivity to outliers. The complete-link clustering of this set splits d_2 from its right neighbors, which is clearly undesirable. The reason is the outlier d_1. This shows that a single outlier can negatively affect the outcome of complete-link clustering. Single-link clustering does better in this case.

105 Computational complexity. In the first iteration, all HAC methods need to compute the similarity of all pairs of n individual instances, which is O(n^2). In each of the subsequent n − 2 merging iterations, compute the distance between the most recently created cluster and all other existing clusters. To maintain an overall O(n^2) performance, computing the similarity to each cluster must be done in constant time. HAC is often O(n^3) if done naively, or O(n^2 log n) if done more cleverly.

106 Comparison of HAC algorithms (method / combination similarity / time complexity / optimal? / comment):
single-link / max intersimilarity of any 2 docs / Θ(N^2) / yes / chaining effect.
complete-link / min intersimilarity of any 2 docs / Θ(N^2 log N) / no / sensitive to outliers.
group-average / average of all sims / Θ(N^2 log N) / no / best choice for most applications.
centroid / average intersimilarity / Θ(N^2 log N) / no / inversions can occur.

107 Group-average agglomerative clustering. Use the average similarity across all pairs within the merged cluster to measure the similarity of two clusters; a compromise between single and complete link. Two options: average across all (ordered) pairs in the merged cluster, or average over all pairs between the two original clusters. There is no clear difference in efficacy.

108 Computing group-average similarity. Assume cosine similarity and normalized vectors of unit length. Always maintain the sum of vectors in each cluster, s(c) = Σ_{d in c} d. The group-average similarity of the merged cluster can then be computed in constant time: sim-GA(c_i ∪ c_j) = [ (s(c_i) + s(c_j)) · (s(c_i) + s(c_j)) − (n_i + n_j) ] / [ (n_i + n_j)(n_i + n_j − 1) ], where n_i = |c_i| and n_j = |c_j|.
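
A sketch of this constant-time computation (assuming unit-length vectors; the toy data and the brute-force check against the average pairwise dot product are mine):

import numpy as np

def group_average_sim(sum_i, n_i, sum_j, n_j):
    # Constant-time group-average similarity of the merged cluster,
    # given each cluster's vector sum and size (unit-length vectors assumed).
    s = sum_i + sum_j
    n = n_i + n_j
    return float((np.dot(s, s) - n) / (n * (n - 1)))

# Numeric check on a tiny merged cluster of unit vectors.
docs = np.random.default_rng(0).standard_normal((5, 4))
docs /= np.linalg.norm(docs, axis=1, keepdims=True)
ci, cj = docs[:2], docs[2:]
fast = group_average_sim(ci.sum(axis=0), len(ci), cj.sum(axis=0), len(cj))
pairs = [np.dot(docs[a], docs[b]) for a in range(5) for b in range(5) if a != b]
print(fast, np.mean(pairs))   # the two values agree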

109 Medoid as cluster representative. The centroid does not have to be a document. A medoid is a cluster representative that is one of the documents (e.g., the document closest to the centroid). One reason this is useful: consider the representative of a large cluster (>1K docs); the centroid of this cluster will be a dense vector, while the medoid will be a sparse vector. Compare: mean/centroid vs. median/medoid.

110 Efficiency: using approximations. In the standard algorithm, we must find the closest pair of centroids at each step. Approximation: instead, find a nearly closest pair, using some data structure that makes this approximation easier to maintain. A simplistic example: maintain the closest pair based on distances in the projection onto a random line.

111 Major issue: labeling. After the clustering algorithm finds clusters, how can they be made useful to the end user? We need a pithy label for each cluster: in search results, say, "Animal" or "Car" in the jaguar example; in topic trees (Yahoo), we need navigational cues. This is often done by hand, a posteriori.

112 How to label clusters. Show titles of typical documents: authors create titles for quick scanning, but you can only show a few titles, which may not fully represent the cluster (e.g., our NLDB-08 paper synthesizes titles for news clusters). Show words/phrases prominent in the cluster: more likely to fully represent the cluster (tag cloud). Use distinguishing words/phrases: differential/contrast labeling.

113 Labeling. Common heuristic: list the 5-10 most frequent terms in the centroid vector (drop stop-words; stem). Differential labeling by frequent terms: within a collection on "Computers", all clusters have the word "computer" as a frequent term, so use discriminant analysis of the centroids. Perhaps better: a distinctive noun phrase.
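
A hedged scikit-learn sketch of the "top terms in the centroid" heuristic (the toy corpus, the use of K-means, and the parameter choices are mine, not the slides'):

import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

docs = ["stars and galaxies in astrophysics", "telescope observations of stars",
        "football match results", "basketball and football scores"]

vec = TfidfVectorizer(stop_words='english')   # drops English stop-words
X = vec.fit_transform(docs)
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

terms = np.array(vec.get_feature_names_out())
for k, centroid in enumerate(km.cluster_centers_):
    top = terms[np.argsort(centroid)[::-1][:3]]   # 3 highest-weighted terms
    print(f"cluster {k} label candidates: {', '.join(top)}")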

114 What is a good clustering? Internal criterion: a good clustering will produce high-quality clusters in which the intra-class (that is, intra-cluster) similarity is high and the inter-class similarity is low. The measured quality of a clustering depends on both the document representation and the similarity measure used.

115 External criteria for clustering quality. Quality is measured by the ability to discover some or all of the hidden patterns or latent classes in gold-standard data; this assesses a clustering with respect to ground truth and therefore requires labeled data. Assume the documents have C gold-standard classes, while our clustering algorithm produces K clusters ω_1, ω_2, ..., ω_K, with n_i members in ω_i.

116 External evaluation of cluster quality. Simple measure: purity, the ratio between the size of the dominant class π_i in cluster ω_i and the size of cluster ω_i: Purity(ω_i) = (1/n_i) max_j |ω_i ∩ c_j|. Other measures are the entropy of classes in clusters (or the mutual information between classes and clusters).

117 Purity example. [Figure: three clusters of points drawn from three classes.] Cluster I: purity = (1/6) * max(5, 1, 0) = 5/6. Cluster II: purity = (1/6) * max(1, 4, 1) = 4/6. Cluster III: purity = (1/5) * max(2, 0, 3) = 3/5.
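
A short sketch (the counts are taken from the example above) computing purity from per-cluster class counts:

def purity(cluster_class_counts):
    # cluster_class_counts[i][j] = number of docs of class j in cluster i.
    total = sum(sum(row) for row in cluster_class_counts)
    return sum(max(row) for row in cluster_class_counts) / total

counts = [[5, 1, 0],   # Cluster I
          [1, 4, 1],   # Cluster II
          [2, 0, 3]]   # Cluster III
print(purity(counts))                       # overall purity = (5+4+3)/17, about 0.71
print([max(r) / sum(r) for r in counts])    # per-cluster: [5/6, 4/6, 3/5]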

118 Rand Index. Count pairs of points in a 2x2 contingency table: A = pairs in the same cluster in the clustering and in the same class in the ground truth; B = same cluster but different classes; C = different clusters but the same class; D = different clusters and different classes.

119 Rand index (symmetric version): RI = (A + D) / (A + B + C + D). Compare with standard precision, P = A / (A + B), and recall, R = A / (A + C).

120 Rand index example: with A = 20, B = 20, C = 24, D = 72, RI = (20 + 72) / (20 + 20 + 24 + 72) = 92/136 ≈ 0.68.
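
A sketch computing the Rand index over all pairs of points; the (cluster, class) assignments below follow the counts of the purity example, and the class symbols x, o, d are placeholders for the three classes in the figure:

from itertools import combinations

# One (cluster, class) pair per document.
points = ([("I", "x")] * 5 + [("I", "o")] * 1 +
          [("II", "x")] * 1 + [("II", "o")] * 4 + [("II", "d")] * 1 +
          [("III", "x")] * 2 + [("III", "d")] * 3)

A = B = C = D = 0
for (cl1, gt1), (cl2, gt2) in combinations(points, 2):
    if cl1 == cl2 and gt1 == gt2:   A += 1   # same cluster, same class
    elif cl1 == cl2:                B += 1   # same cluster, different class
    elif gt1 == gt2:                C += 1   # different cluster, same class
    else:                           D += 1   # different cluster, different class

print(A, B, C, D)                    # 20 20 24 72
print((A + D) / (A + B + C + D))     # Rand index, about 0.68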

121 Final word and resources. In clustering, clusters are inferred from the data without human input (unsupervised learning). However, in practice it is a bit less clear-cut: there are many ways of influencing the outcome of clustering, such as the number of clusters, the similarity measure, and the representation of documents.

