
1 Cluster Analysis: Finding Groups in Data. C. Taillie, February 19, 2004

2 Atypical Example of Clustering: Digitized Photographs of Ten Natural Scenes. Can we group them into clusters so that photos within each cluster are similar as images? (Figure: the ten photos, numbered 1 through 10.)

3 Dendrogram for Atypical Example. (Figure: dendrogram of the ten photos; the vertical axis is dissimilarity, the horizontal axis labels the photos 1 through 10.)

4 Dendrogram = Many Groupings. (Figure: the dendrogram of photos 1 through 10, showing the different groupings obtained by cutting it at different heights.)

5 Splitogram: Choosing Number of Clusters. (Figure: the dendrogram for photos 1 through 10 alongside its splitogram, which plots dissimilarity against splitting node.)

6 Atypical Example. What makes this example atypical?
- The splitogram levels off at a dissimilarity of about 0.6 (max = 1.4); typically, the splitogram levels off to zero.
- The number of objects to cluster is small (only 10); typically, the number of objects is in the hundreds or thousands, sometimes millions.
- The number of measurements per object is immense (one grey-scale value for each of several million pixels); typically, the number of measurements per object ranges from a few to a few hundred.
- There is complicated spatial structure among the measurements, and a specialized dissimilarity measure tailored to that spatial structure; typically, one uses generic dissimilarity measures like Euclidean distance.

7 Etymology of "Dendrogram". Statisticians did not invent dendrograms. dendron = tree (Greek); gramma = diagram (Greek); dendrogram = tree diagram (specifically, one for classification or grouping of objects). Dendrograms have been around for aeons, especially in taxonomy.

8 Taxonomic Dendrogram for Primates. Artistic issues:
- Is the root at the top or the bottom of the tree? (Don Knuth ……)
- Ordering of the leaf nodes? Not significant, except that each cluster represents a consecutive sequence of leaves. However, the ordering of leaf nodes can influence the reader's perception. Try interchanging "human" with the two "chimp" groupings!

9 Classification vs Clustering: Classification
- An a priori set of labels (categories) with meaningful descriptions/interpretations, so that, with sufficient effort, an "expert" could assign a given object to its true category. Examples: landcover categories (forest, urban, desert, grassland, etc.); taxonomic categories (species of moths).
- A set of "training" data whose objects have been expertly classified.
- Goal in statistical classification: develop rules so that new objects can be classified without the need for an expert (a sketch follows below).
(Table: objects with measurements X1–X4 and a Label column; the training objects have known labels such as C, B, …, A, while the new objects' labels are "?".)
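
Not from the slides: a minimal sketch of the workflow this slide describes, using scikit-learn's nearest-neighbour classifier as one possible "rule" (the data values and the choice of classifier are assumptions made for illustration).

```python
# A minimal classification sketch: train a rule on expertly labeled objects,
# then classify new objects without an expert.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Training objects: rows are objects, columns are the measurements X1..X4.
X_train = np.array([[0.2, 1.1, 3.0, 0.7],
                    [2.5, 0.3, 1.2, 1.8],
                    [0.1, 1.0, 2.8, 0.9],
                    [2.7, 0.4, 1.1, 1.6]])
y_train = ["C", "B", "C", "B"]            # expert-assigned labels

# New objects whose labels are "?" in the slide's table.
X_new = np.array([[0.15, 1.05, 2.9, 0.8],
                  [2.6,  0.35, 1.15, 1.7]])

rule = KNeighborsClassifier(n_neighbors=1).fit(X_train, y_train)
print(rule.predict(X_new))                # e.g. ['C' 'B'] -- no expert needed
```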

10 Classification. (Figure: three panels in the (X1, X2) plane — training data with three categories, a simple classification rule, and a complex classification rule — together with a new object to be classified.)

11 Classification. Essential feature of classification: each object has a TRUE category, so the performance of any classification rule can be assessed (doing so may be expensive). (Figure: inaccuracy, from good to bad, plotted against rule complexity, from simple to complex, for both training data and test data, with the best rule marked.) A sketch of this assessment follows below.
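
Not from the slides: a sketch of assessing rules of increasing complexity on training vs test data, the curve the slide's figure shows. Decision-tree depth stands in for "rule complexity", and the data set is synthetic; both are assumptions made for illustration.

```python
# Compare training and test accuracy as the classification rule gets more complex.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=600, n_features=2, n_informative=2,
                           n_redundant=0, n_classes=3, n_clusters_per_class=1,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

for depth in (1, 3, 10, None):                 # simple -> complex rules
    rule = DecisionTreeClassifier(max_depth=depth, random_state=0).fit(X_tr, y_tr)
    print(depth, round(rule.score(X_tr, y_tr), 2), round(rule.score(X_te, y_te), 2))
# Training accuracy keeps improving with complexity; the test accuracy picks out the best rule.
```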

12 Classification vs Clustering: Clustering
- A collection of objects and a set of measurements on each object.
- Goal in statistical clustering: divide the objects into groups so that objects in the same group are "similar" and objects in different groups are "dissimilar".
- Assign an identifying group label to each object. Objects with the same label belong to the same group, but a clustering algorithm attaches no other meaning or interpretation to the labels; interpretation of the labels is up to the user.
- The clustering algorithm itself does not provide a way of assigning new objects to the clusters.
(Table: objects with measurements X1–X4 and a Label column; every label is "?" because no labels are given in advance.)

13 Clustering. (Figure: the same scatter of points in the (X1, X2) plane shown three ways — the original ungrouped data, a grouping into three groups, and a grouping into four groups.) Which of the two groupings is "correct"?

14 Clustering. Cluster analysis is an exploratory tool: there is no notion of a TRUE cluster, and therefore no way of evaluating the performance of a particular clustering algorithm. The only criterion is whether it yields anything useful for you.

15 Applications of Cluster Analysis
- Performance evaluation (the first few slides --- an unusual application).
- Precursor to classification: assign meaningful interpretations to the cluster labels and obtain an initial training set.
- Reduce/simplify a large database: databases in economics and commerce can be megabytes or gigabytes in size, and even something as simple as computing a mean can consume huge amounts of computer time.

16 Database Reduction - 1. There may be millions of records (rows) in the database and hundreds or thousands of variables. After clustering, get rid of the variables and keep only the cluster labels. (Tables: the full table of objects with measurements X1–X4 and cluster labels C, A, B, C, …, E is reduced to a two-column table of Object and Label.)

17 Database Reduction - 2. Collapse every cluster to its centroid and create an auxiliary table with one row for each cluster, e.g.:
Label  Cluster Size
A      11
B      14
C      12
(Figure: the clusters in the (X1, X2) plane collapsed to their centroids A, B, C.)
A minimal sketch of this reduction follows below.
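
Not from the slides: a sketch of the reduction this slide describes, using pandas (an assumption — any tool with group-by aggregation would do; the data values are invented for illustration).

```python
# Collapse each cluster to its centroid plus its size, giving one row per cluster.
import pandas as pd

df = pd.DataFrame({
    "X1": [0.1, 0.2, 2.4, 2.6, 5.0, 5.1],
    "X2": [1.0, 1.1, 0.3, 0.4, 2.0, 2.2],
    "Label": ["A", "A", "B", "B", "C", "C"],   # cluster labels from some clustering run
})

aux = df.groupby("Label").agg(
    cluster_size=("X1", "size"),   # how many objects fell in each cluster
    X1=("X1", "mean"),             # centroid coordinates
    X2=("X2", "mean"),
)
print(aux)
```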

18 Clustering Algorithms

19 Ward’s Method Ward, J.H. (1963). Hierarchical groupings to optimize an objective function. J. American Statistical Society, 58, 236-244. Within- and Between Group Sum of Squares SS Total = SS Within + SS Between A “good” clustering should have:  Small within-group sum of squares (objects in a group should be similar)  Large between group sum of squares (objects in different groups should be dissimilar) Bottom-up (agglomerative) hierarchical approach:  Start with each object in its own separate cluster (bottom of the dendrogram --- SS Within = 0)  Combine clusters two at a time; at each step combine the pair of clusters that gives the smallest increase in SS Within

20 Dendrogram for Ward's Method. (Figure: a dendrogram whose vertical axis is the within-group SS, running from 0 to SS_Total; the height of each fusion is the value of SS_Within just after the fusion.)

21 Other Agglomerative Hierarchical Methods. Choose a dissimilarity measure D(a,b) between individual objects a and b:
- Euclidean distance
- Manhattan distance
- Minkowski distance
Extend D to a measure of dissimilarity D(A,B) between groups of objects A and B. This is called the linkage method (a sketch follows below):
- single linkage
- complete linkage
- average linkage
- centroid linkage
- Ward's linkage (should be called Wishart's linkage)
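
Not from the slides: a sketch of the two ingredients this slide lists, using SciPy (an assumption). pdist computes object-to-object dissimilarities; linkage extends them to group-to-group dissimilarities.

```python
# Object-to-object dissimilarities, then several linkage methods on the same data.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage

X = np.random.default_rng(1).normal(size=(30, 4))

d_euc = pdist(X, metric="euclidean")
d_man = pdist(X, metric="cityblock")         # Manhattan distance
d_min = pdist(X, metric="minkowski", p=3)    # Minkowski distance with exponent 3
print(d_euc.mean(), d_man.mean(), d_min.mean())

for method in ("single", "complete", "average", "centroid", "ward"):
    Z = linkage(X, method=method)            # centroid/ward assume Euclidean geometry
    print(method, Z[-1, 2])                  # dissimilarity at the final fusion
```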

22 Linkage Methods
- Single linkage: D(A,B) is the shortest link between A and B.
- Complete linkage: D(A,B) is the longest link between A and B.
- Average linkage: D(A,B) is the average of all the links between A and B.
- Centroid linkage: D(A,B) is the distance between the centroids of A and B.
- Ward's linkage (should really be called Wishart's linkage): centroid linkage weighted by the cluster sizes.
(Figure: two groups A and B, with a link drawn between an object a in A and an object b in B.)

23 Ward's Linkage. Wishart showed that when groups A and B (of sizes n_A and n_B) are fused, the increase in the within-group sum of squares is

  [n_A n_B / (n_A + n_B)] D^2(centroid of A, centroid of B),

where D^2 is the squared Euclidean distance. So it is a weighted form of centroid linkage. The weight can be rewritten as

  n_A n_B / (n_A + n_B) = 1 / (1/n_A + 1/n_B) = (1/2) x (harmonic mean of n_A and n_B).
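
Not from the slides: a numeric check of the fusion formula above (the data values are invented for illustration). Merging clusters A and B increases the total within-group sum of squares by exactly n_A n_B / (n_A + n_B) times the squared distance between the two centroids.

```python
# Verify the increase-in-SS formula on random data.
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal([0, 0], 1.0, size=(10, 2))
B = rng.normal([4, 1], 1.0, size=(50, 2))

def within_ss(G):
    return ((G - G.mean(axis=0)) ** 2).sum()

merged = np.vstack([A, B])
direct = within_ss(merged) - (within_ss(A) + within_ss(B))        # actual increase
nA, nB = len(A), len(B)
formula = nA * nB / (nA + nB) * ((A.mean(0) - B.mean(0)) ** 2).sum()
print(direct, formula)   # the two numbers agree up to floating-point rounding
```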

24 Why is Weighting Desirable? (Figure: two pairs of groups at the same centroidal distance; the pair on the left has 10 objects per group, the pair on the right has 50 objects per group.) Which pair of groups would you prefer to fuse, the pair on the left or the pair on the right? The "sample sizes" on the right are larger, so there is stronger evidence that the groups on the right are really different. We can achieve this choice by weighting the centroidal distance by the average of the group sizes.

25 Why is the Harmonic Mean Better than the Arithmetic Mean? (Figure: two pairs of groups at the same centroidal distance; the pair on the left has 1 object and 99 objects, the pair on the right has 50 objects and 50 objects.) Which pair of groups would you prefer to fuse, the pair on the left or the pair on the right? The total sample sizes are the same (100) for each pair, but the "sample sizes" on the right are balanced, so there is stronger evidence that the groups on the right are really different. We can achieve this choice by weighting the centroidal distance by the harmonic mean of the group sizes instead of the arithmetic mean.
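
For concreteness, working out the slide's own numbers: the arithmetic mean of 1 and 99 is 50, exactly the same as the arithmetic mean of 50 and 50, so arithmetic-mean weighting cannot distinguish the two pairs. The harmonic mean of 1 and 99 is 2(1)(99)/(1+99) = 1.98, while the harmonic mean of 50 and 50 is 50, so harmonic-mean weighting gives the unbalanced pair on the left a much smaller weighted distance and fuses it first.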

26 Classification of Clustering Methods. (Diagram: clustering methods divide into Hierarchical and Partitional; hierarchical methods are further divided into Agglomerative and Divisive.)

27 Agglomerative vs Divisive: Practicalities. How much computer time is required with N objects?
Agglomerative: there are N(N-1)/2 pairs of objects, so computer time is O(N^2).
- With N = 100, N^2 is about 10,000.
- With N = 1,000, N^2 is about one million.
- With N = 10,000, N^2 is about one hundred million.
Divisive: there are 2^(N-1) - 1 ways to split a set of N objects into two nonempty subsets, so computer time is O(2^N).
- With N = 10, 2^N is about 1,000.
- With N = 20, 2^N is about one million.
- With N = 30, 2^N is about one billion.
Examination of all possible subsets is hopeless unless N is very small.

28 Partitional Methods. The desired number of clusters, k, is specified beforehand. Three best-known methods:
- k-means (moving-centroid method)
- k-means (Hartigan's method)
- ISODATA (k-means with many embellishments)
Available in many GIS and image analysis packages, e.g. ENVI.

29 k-Means (Moving Centroids)
- Specify the value of k.
- Specify k points (called "seeds") in measurement space. The algorithm moves the seeds around until they are the centroids of the k desired groups.
- Make a pass through the data points, assigning each data point to its closest seed. This determines a partition of the data into k groups, each labeled by its seed. However, a group's centroid may fail to coincide with the group's seed.
- For each group, compute its centroid. Use these centroids as the seeds in the next iteration.
- Keep iterating until the centroids and seeds coincide (to a user-specified degree of accuracy).
A minimal sketch of these steps follows below.
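
Not from the slides: a minimal NumPy sketch of the moving-centroid loop described above. The choice of initial seeds (k random data points) and the synthetic data are assumptions made for illustration.

```python
# Moving-centroid k-means: assign points to their closest seed, recompute
# centroids, repeat until seeds and centroids coincide.
import numpy as np

def kmeans(X, k, tol=1e-6, seed=0):
    rng = np.random.default_rng(seed)
    seeds = X[rng.choice(len(X), size=k, replace=False)]      # initial seeds
    while True:
        # Assign each data point to its closest seed.
        d = ((X[:, None, :] - seeds[None, :, :]) ** 2).sum(axis=2)
        groups = d.argmin(axis=1)
        # Recompute each group's centroid; these become the next seeds.
        # (Empty groups are not handled in this sketch.)
        centroids = np.array([X[groups == j].mean(axis=0) for j in range(k)])
        if np.abs(centroids - seeds).max() < tol:             # seeds = centroids: done
            return groups, centroids
        seeds = centroids

X = np.vstack([np.random.default_rng(3).normal(c, 0.4, (30, 2))
               for c in ([0, 0], [3, 3], [0, 3])])
labels, centers = kmeans(X, k=3)
print(centers)
```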

30 k-Means (Hartigan's Method)
- Specify the value of k.
- Specify k nonempty starting groups.
- Make a pass through the data points. For each data point, ask whether the within-group sum of squares could be reduced by moving that data point to another group. If yes, move it; otherwise proceed to the next data point.
- Keep iterating until no data point can be moved.
Hartigan discovered some computationally simple rules for deciding whether a data point should be moved and for finding the best group to move it to (a sketch of one such move test follows below).
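
The slides do not spell the rules out; the sketch below uses standard centroid-update identities on which such a move test can be built (an assumption, not necessarily Hartigan's exact rule): removing a point x from a group of size n with centroid c lowers that group's within-SS by n/(n-1)·||x - c||^2, and adding x to a group of size m with centroid c' raises that group's within-SS by m/(m+1)·||x - c'||^2.

```python
# A Hartigan-style move test under the assumptions stated above: move the point
# only if the total within-group SS decreases.
import numpy as np

def gain_from_move(x, centroid_A, n_A, centroid_B, n_B):
    """Decrease in total within-group SS if x moves from group A to group B."""
    loss_A = n_A / (n_A - 1) * np.sum((x - centroid_A) ** 2)   # SS removed from A
    gain_B = n_B / (n_B + 1) * np.sum((x - centroid_B) ** 2)   # SS added to B
    return loss_A - gain_B            # positive means the move is worthwhile

x = np.array([2.0, 2.0])
print(gain_from_move(x, np.array([0.0, 0.0]), 10, np.array([2.5, 2.1]), 12) > 0)
```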

31 ISODATA. Basically the same as the moving-centroid version of k-means, except that the user specifies a range of acceptable values for the desired number of clusters. After each iteration of moving centroids, the current groups are examined to see whether any should be split into two subgroups or fused into a larger group. These decisions are reached using a complicated set of rules based on the within-group standard deviations along each coordinate axis. Caution: the inner workings of ISODATA tend to be very specific to the particular implementation.

32 Every Pathology Exists. No matter what clustering method you propose, someone will manage to come up with a data set (usually artificial) for which your method produces a foolish clustering.

33 Example of a Pathology: 3-Dimensional Chain Link. (Figure: two interlocking rings of points in three dimensions, like the links of a chain.) k-means is a disaster: the centroid of each group is close to many members of the other group. Single linkage does quite well; in general, single linkage is good at finding "snake-like" clusters. A sketch of the comparison follows below.
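
Not from the slides: a sketch of the chain-link pathology using scikit-learn (an assumption — the slides do not name a package, and the ring construction is invented for illustration). Two interlocked rings are generated, then clustered with k-means and with single-linkage agglomeration.

```python
# k-means vs single linkage on two interlocked rings ("chain link").
import numpy as np
from sklearn.cluster import KMeans, AgglomerativeClustering
from sklearn.metrics import adjusted_rand_score

t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
ring1 = np.c_[np.cos(t), np.sin(t), np.zeros_like(t)]           # ring in the xy-plane
ring2 = np.c_[np.cos(t) + 1.0, np.zeros_like(t), np.sin(t)]     # interlocked ring in the xz-plane
X = np.vstack([ring1, ring2])
truth = np.r_[np.zeros(len(t)), np.ones(len(t))]

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
sl = AgglomerativeClustering(n_clusters=2, linkage="single").fit_predict(X)

print("k-means agreement with the two rings:       ", adjusted_rand_score(truth, km))
print("single linkage agreement with the two rings:", adjusted_rand_score(truth, sl))
# k-means cuts across both rings; single linkage recovers them exactly.
```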

