
1 Cluster Analysis, CS240B. Lecture notes based on those by © Tan, Steinbach, Kumar, Introduction to Data Mining, 4/18/2004

2 What is Cluster Analysis?
- Finding groups of objects such that the objects in a group will be similar (or related) to one another and different from (or unrelated to) the objects in other groups
- Intra-cluster distances are minimized; inter-cluster distances are maximized

3 Cluster Analysis: Many and Diverse Applications
- Understanding
  – Group related documents for browsing, group genes and proteins that have similar functionality, or group stocks with similar price fluctuations
- Summarization
  – Reduce the size of large data sets (e.g., clustering precipitation in Australia)

4 Types of Clusterings
- A clustering is a set of clusters
- Important distinction between hierarchical and partitional sets of clusters
- Partitional clustering
  – A division of data objects into non-overlapping subsets (clusters) such that each data object is in exactly one subset
- Hierarchical clustering
  – A set of nested clusters organized as a hierarchical tree

5 Partitional Clustering. Figure: the original points and a partitional clustering of them.

6 Hierarchical Clustering. Figures: a traditional hierarchical clustering with its traditional dendrogram, and a non-traditional hierarchical clustering with its non-traditional dendrogram.

7 Other Distinctions Between Sets of Clusters
- Exclusive versus non-exclusive
  – In non-exclusive clusterings, points may belong to multiple clusters
  – Can represent multiple classes or ‘border’ points
- Fuzzy versus non-fuzzy
  – In fuzzy clustering, a point belongs to every cluster with some weight between 0 and 1
  – Weights must sum to 1
  – Probabilistic clustering has similar characteristics
- Partial versus complete
  – In some cases, we only want to cluster some of the data
- Heterogeneous versus homogeneous
  – Clusters of widely different sizes, shapes, and densities

8 Clustering Algorithms
- K-means and its variants
- Hierarchical clustering
- Density-based clustering

9 K-means Clustering
- Partitional clustering approach
- Each cluster is associated with a centroid (center point)
- Each point is assigned to the cluster with the closest centroid
- Number of clusters, K, must be specified
- The basic algorithm is very simple (a sketch follows below)
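The basic loop alternates assigning points to their closest centroid and recomputing each centroid as the mean of its points. A minimal sketch in Python/NumPy, assuming Euclidean distance and random initialization from the data points; the function and variable names are illustrative, not from the notes:

```python
import numpy as np

def kmeans(X, k, max_iter=100, seed=0):
    """Basic K-means sketch: X is an (n, d) array of points, k the number of clusters."""
    rng = np.random.default_rng(seed)
    # Choose k distinct data points as the initial centroids.
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(max_iter):
        # Assignment step: each point goes to the cluster with the closest centroid.
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Update step: each centroid becomes the mean of the points assigned to it.
        new_centroids = np.array([
            X[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
            for j in range(k)
        ])
        if np.allclose(new_centroids, centroids):   # stop when the centroids no longer move
            break
        centroids = new_centroids
    return centroids, labels
```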

10 K-means Clustering: Details
- Initial centroids are often chosen randomly
  – Clusters produced vary from one run to another
- The centroid is (typically) the mean of the points in the cluster
- ‘Closeness’ is measured by Euclidean distance, cosine similarity, correlation, etc.
- K-means will converge for the common similarity measures mentioned above
- Most of the convergence happens in the first few iterations
  – Often the stopping condition is changed to ‘until relatively few points change clusters’
- Complexity is O(n * K * I * d)
  – n = number of points, K = number of clusters, I = number of iterations, d = number of attributes
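These knobs (K, the distance measure, the iteration cap, repeated random starts) are exposed directly by common library implementations. For illustration only, a scikit-learn version; the library is not part of the notes, and the parameter values below are arbitrary:

```python
import numpy as np
from sklearn.cluster import KMeans

X = np.random.rand(500, 2)      # n = 500 points, d = 2 attributes
km = KMeans(
    n_clusters=3,               # K must be specified up front
    init="random",              # random initial centroids (the library default is "k-means++")
    n_init=10,                  # 10 independent runs; the lowest-SSE result is kept
    max_iter=300,               # cap on I, the number of iterations per run
    random_state=0,
).fit(X)

print(km.inertia_)              # SSE of the returned clustering
print(km.cluster_centers_)      # final centroids
```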

11 Two Different K-means Clusterings. Figure: the original points, an optimal clustering, and a sub-optimal clustering.

12 K-means Clustering Example. Figure: initial centroids, Case 1 (scatter plot, x vs. y).

13 Importance of Choosing Initial Centroids (figure)

14 Importance of Choosing Initial Centroids (figure)

15 K-means Clusterings: Initial Centers. Figure: initial centers, Case 1 (scatter plot, x vs. y).

16 Importance of Choosing Initial Centroids … (figure)

17 Importance of Choosing Initial Centroids … (figure)

18 Problems with Selecting Initial Points
- If there are 2 ‘real’ clusters, then any two initial centroids produce an optimal clustering, i.e., each initial centroid ends up as the centroid of one of the two clusters
- But if we know that there are K > 2 ‘real’ clusters, the chance of selecting one initial centroid from each of them becomes small as K grows
  – For K equal-size clusters this probability is K!/K^K; for example, if K = 10, then the probability is 10!/10^10 ≈ 0.00036
  – Sometimes the initial centroids will readjust themselves in the ‘right’ way, and sometimes they don’t
  – This is an obvious limitation of random initial assignments (also, we normally do not know K)
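A quick sanity check of the quoted number (pure arithmetic, not code from the notes):

```python
from math import factorial

K = 10
p = factorial(K) / K**K   # probability of getting one random initial centroid per cluster
print(p)                  # 0.00036288, i.e., roughly 0.00036
```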

19 Evaluating K-means Clusters
- The most common measure is the Sum of Squared Error (SSE)
  – For each point, the error is the distance to the nearest cluster centroid; to get SSE, we square these errors and sum them:
    SSE = Σ_{i=1..K} Σ_{x ∈ C_i} dist(m_i, x)^2
  – x is a data point in cluster C_i and m_i is the representative point (centroid) for cluster C_i
  – One easy way to reduce SSE is to increase K, the number of clusters
  – But in general, the fewer the clusters the better
    · A good clustering with smaller K can have a lower SSE than a poor clustering with higher K
    · How do we minimize both K and SSE?
    · For a given K we can try different sets of initial random points (see the sketch below)
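A minimal SSE helper to go with the kmeans sketch above (the names are illustrative assumptions):

```python
import numpy as np

def sse(X, centroids, labels):
    """Sum of squared Euclidean distances from each point to its cluster centroid."""
    diffs = X - centroids[labels]
    return float(np.sum(diffs ** 2))

# For a fixed K, try several random initializations and keep the lowest-SSE result:
# best_centroids, best_labels = min(
#     (kmeans(X, k, seed=s) for s in range(10)),
#     key=lambda result: sse(X, *result),
# )
```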

20 10 Clusters Example (5 pairs). Starting with two initial centroids in one cluster of each pair of clusters.

21 10 Clusters Example. Starting with two initial centroids in one cluster of each pair of clusters.

22 10 Clusters Example. Starting with some pairs of clusters having three initial centroids, while others have only one.

23 10 Clusters Example. Starting with some pairs of clusters having three initial centroids, while others have only one.

24 Solutions to the Initial Centroids Problem
- Multiple runs
  – Helps, but probability is not on your side
- Start with more than K initial centroids and then select K centroids from the most widely separated resulting clusters (a sketch follows below)
- Use hierarchical clustering on a small sample of the data to determine initial centroids
- Bisecting K-means
  – Not as susceptible to initialization issues
- Postprocessing
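A hedged sketch of the over-provisioning idea above: run K-means with more centroids than needed, then greedily keep the K that are most widely separated. The greedy farthest-point selection rule is an assumption; the notes do not specify how the separated centroids are chosen:

```python
import numpy as np

def pick_separated(centroids, k):
    """Greedily select k centroids that are far apart from one another."""
    chosen = [0]                                   # start from an arbitrary centroid
    while len(chosen) < k:
        # Distance from every centroid to the closest already-chosen centroid.
        d = np.min(
            np.linalg.norm(centroids[:, None, :] - centroids[chosen][None, :, :], axis=2),
            axis=1,
        )
        d[chosen] = -1.0                           # never re-pick an already chosen centroid
        chosen.append(int(d.argmax()))             # add the farthest remaining centroid
    return centroids[chosen]

# Usage with the kmeans sketch from earlier (3*k is an arbitrary over-provisioning factor):
# many_centroids, _ = kmeans(X, 3 * k)
# init_centroids = pick_separated(many_centroids, k)
```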

25 Improvements
- Eliminate outliers (preprocessing) and very small clusters that may represent outliers
- Outliers are defined as points that have large distances from every centroid (they are harmful because, through the squared error, they exercise an excessive quadratic effect)
- On the other hand, many outliers in the same vicinity could indicate the need to generate a new cluster
- Split ‘loose’ clusters, i.e., clusters with relatively high SSE
- Merge clusters that are ‘close’ and that have relatively low SSE
  – Some clusters could turn out empty
- These steps can also be used during the clustering process
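A minimal sketch of the outlier-elimination step; the notes do not give a threshold, so the mean-plus-three-standard-deviations cutoff below is purely an illustrative assumption:

```python
import numpy as np

def drop_outliers(X, centroids, labels):
    """Remove points whose distance to their own centroid is unusually large."""
    d = np.linalg.norm(X - centroids[labels], axis=1)   # distance to the assigned centroid
    keep = d < d.mean() + 3 * d.std()                   # assumed cutoff: mean + 3*std
    return X[keep], labels[keep]
```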

26 Bisecting K-means
- Bisecting K-means algorithm
  – Variant of K-means that can produce a partitional or a hierarchical clustering (a sketch follows below)
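The notes do not spell out the algorithm here, so the following is a hedged sketch of the usual formulation: start with one all-inclusive cluster and repeatedly bisect a chosen cluster with 2-means until K clusters are obtained. Choosing the cluster with the highest SSE and trying several trial bisections per split are assumptions; the sketch reuses the kmeans helper from earlier:

```python
import numpy as np

def bisecting_kmeans(X, k, trials=5):
    clusters = [X]                                        # start with one all-inclusive cluster
    while len(clusters) < k:
        # Pick the cluster with the highest SSE around its own mean.
        sses = [np.sum((c - c.mean(axis=0)) ** 2) for c in clusters]
        target = clusters.pop(int(np.argmax(sses)))
        # Bisect it several times with 2-means and keep the lowest-SSE split.
        best_split, best_sse = None, None
        for s in range(trials):
            centroids, labels = kmeans(target, 2, seed=s)
            split = [target[labels == 0], target[labels == 1]]
            split_sse = sum(np.sum((c - m) ** 2) for c, m in zip(split, centroids))
            if best_sse is None or split_sse < best_sse:
                best_split, best_sse = split, split_sse
        clusters.extend(c for c in best_split if len(c))  # keep non-empty halves
    return clusters
```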

27 Bisecting K-means Example (figure)

28 Limitations of K-means
- K-means has problems when clusters are of differing
  – Sizes
  – Densities
  – Non-globular shapes
- K-means has problems when the data contains outliers

29 Limitations of K-means: Differing Sizes. Figure: original points vs. K-means (3 clusters).

30 Limitations of K-means: Differing Density. Figure: original points vs. K-means (3 clusters).

31 Limitations of K-means: Non-globular Shapes. Figure: original points vs. K-means (2 clusters).

32 Overcoming K-means Limitations. Figure: original points vs. K-means clusters. One solution is to use many clusters: K-means then finds parts of the natural clusters, which need to be put back together.

33 Overcoming K-means Limitations. Figure: original points vs. K-means clusters.

34 Overcoming K-means Limitations. Figure: original points vs. K-means clusters.

35 Hierarchical Clustering
- Produces a set of nested clusters organized as a hierarchical tree
- Can be visualized as a dendrogram
  – A tree-like diagram that records the sequences of merges or splits

36 Strengths of Hierarchical Clustering
- Do not have to assume any particular number of clusters
  – Any desired number of clusters can be obtained by ‘cutting’ the dendrogram at the proper level
- The clusters may correspond to meaningful taxonomies
  – Examples in the biological sciences (e.g., animal kingdom, phylogeny reconstruction, …)

37 Hierarchical Clustering
- Two main types of hierarchical clustering
  – Agglomerative:
    · Start with the points as individual clusters
    · At each step, merge the closest pair of clusters, until only one cluster (or k clusters) is left
  – Divisive:
    · Start with one, all-inclusive cluster
    · At each step, split a cluster, until each cluster contains a single point (or there are k clusters)
- Traditional hierarchical algorithms use a similarity or distance matrix
  – Merge or split one cluster at a time

38 Agglomerative Clustering Algorithm
- The more popular hierarchical clustering technique
- The basic algorithm is straightforward (a sketch follows below):
  1. Compute the proximity matrix
  2. Let each data point be a cluster
  3. Repeat
  4.   Merge the two closest clusters
  5.   Update the proximity matrix
  6. Until only a single cluster remains
- Key operation is the computation of the proximity of two clusters
  – Different approaches to defining the distance between clusters distinguish the different algorithms
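A minimal Python sketch of this loop, assuming single-link (MIN) proximity between clusters; the function name and the deliberately naive implementation are illustrative only:

```python
import numpy as np

def agglomerative(X, k=1):
    """Naive agglomerative clustering; returns the clusters as lists of point indices."""
    clusters = [[i] for i in range(len(X))]                       # each point starts as its own cluster
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)     # point-to-point proximity matrix
    while len(clusters) > k:
        # Find the two closest clusters under single-link (MIN) proximity.
        best = None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                d = D[np.ix_(clusters[a], clusters[b])].min()
                if best is None or d < best[0]:
                    best = (d, a, b)
        _, a, b = best
        clusters[a] += clusters.pop(b)                            # merge the two closest clusters
    return clusters
```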

39 How to Define Inter-Cluster Similarity
- Given two clusters of points (p1 … p5 in the figure) and their proximity matrix, how do we measure their similarity? Common choices:
  – MIN (single link)
  – MAX (complete link)
  – Group average
  – Distance between centroids
  – An objective function, e.g., Ward’s method
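These definitions map directly onto the `method` argument of SciPy's hierarchical clustering routines. SciPy is not part of the notes; this is just an illustrative way to compare them:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

X = np.random.rand(50, 2)

# 'single' = MIN, 'complete' = MAX, 'average' = group average,
# 'centroid' = distance between centroids, 'ward' = Ward's method.
for method in ("single", "complete", "average", "centroid", "ward"):
    Z = linkage(X, method=method)                      # the sequence of merges (dendrogram data)
    labels = fcluster(Z, t=3, criterion="maxclust")    # cut the tree into 3 clusters
    print(method, np.bincount(labels)[1:])             # cluster sizes under this linkage
```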

40 Cluster Similarity: Ward’s Method
- Similarity of two clusters is based on the increase in squared error when the two clusters are merged
  – Similar to group average if the distance between points is the squared distance
- Less susceptible to noise and outliers
- Biased towards globular clusters
- Hierarchical analogue of K-means
  – Can be used to initialize K-means

41 Recent Hierarchical Clustering Methods
- Major weaknesses of agglomerative clustering methods
  – They do not scale well: time complexity of at least O(n^2), where n is the total number of objects
  – They can never undo what was done previously
- Integration of hierarchical with distance-based clustering
  – BIRCH (1996): uses a CF-tree and incrementally adjusts the quality of sub-clusters
  – ROCK (1999): clustering categorical data by neighbor and link analysis
  – CHAMELEON (1999): hierarchical clustering using dynamic modeling

42 Cluster Validity
- For supervised classification we have a variety of measures to evaluate how good our model is
  – Accuracy, precision, recall
- For cluster analysis, the analogous question is: how do we evaluate the “goodness” of the resulting clusters?
- But “clusters are in the eye of the beholder”!

43 Using the Similarity Matrix for Cluster Validation
- Order the similarity matrix with respect to cluster labels and inspect it visually

44 Using the Similarity Matrix for Cluster Validation
- Clusters in random data are not so crisp (figure: similarity matrix for K-means on random data; a sketch of the visual check follows below)
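A hedged sketch of this visual check, assuming a Euclidean-distance-based similarity and matplotlib for the display (neither choice is specified in the notes):

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_similarity(X, labels):
    """Reorder the pairwise similarity matrix by cluster label and display it.
    Well-separated clusters show up as bright blocks along the diagonal."""
    order = np.argsort(labels)                                    # group rows/columns by cluster
    D = np.linalg.norm(X[order, None, :] - X[None, order, :], axis=2)
    S = 1.0 / (1.0 + D)                                           # turn distances into similarities
    plt.imshow(S, cmap="viridis")
    plt.colorbar(label="similarity")
    plt.show()
```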

45 K-means on Data Streams
- The stream is partitioned into windows
- Initially: for each new point P, a new centroid is created with probability d/D, where D is proportional to the size of the domain and d is the actual distance of P to the closest centroid [Callaghan ’02] (a sketch follows below)
- Later windows can use the centroids of the previous window, improved by
  – Eliminating empty clusters
  – Splitting large sparse ones
  – Merging large clusters
- Large changes in cluster position/population denote concept changes
- Maintenance on sliding windows, or window panes, is harder
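A hedged sketch of the per-point rule within one window: the d/D probability comes from the slide, while the window handling, the choice of D, and the post-pass are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def process_window(points, centroids, D):
    """Process one window of the stream, starting from the previous window's centroids."""
    centroids = [np.asarray(c) for c in centroids]
    counts = [0] * len(centroids)                      # points assigned to each centroid
    for p in points:
        if not centroids:                              # very first point of the stream
            centroids.append(p)
            counts.append(1)
            continue
        dists = [np.linalg.norm(p - c) for c in centroids]
        i = int(np.argmin(dists))
        d = dists[i]
        if rng.random() < min(1.0, d / D):             # create a new centroid with probability d/D
            centroids.append(p)
            counts.append(1)
        else:                                          # otherwise assign p to the closest centroid
            counts[i] += 1
    # Post-pass (per the slide): drop empty clusters; splitting sparse ones and
    # merging others would go here as well.
    centroids = [c for c, n in zip(centroids, counts) if n > 0]
    return centroids
```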

