
1 Clustering Algorithms. Presented by Michael Smaili, CS 157B, Spring 2009

2 Terminology
- Cluster: a small group or bunch of similar items
- Clustering: the unsupervised classification of patterns (observations, data items, or feature vectors) into clusters; also referred to as data clustering, cluster analysis, or typological analysis
- Centroid: the center of a cluster
- Distance measure: determines how the similarity between two elements is calculated
- Dendrogram: a tree diagram frequently used to illustrate the arrangement of the clusters

3 Applications: Data Mining, Pattern Recognition, Machine Learning, Image Analysis, Bioinformatics, and many more…

4 Simple Example (figure)

5 Idea Behind Unsupervised Learning
- You walk into a bar.
- A stranger approaches and tells you: "I've got data from k classes. Each class produces observations with a normal distribution and variance σ²I. Standard simple multivariate Gaussian assumptions. I can tell you all the P(ωi)'s."
- So far, this looks straightforward: "I need a maximum likelihood estimate of the μi's."
- No problem. "There's just one thing. None of the data are labeled. I have datapoints, but I don't know what class they're from (any of them!)"

6 Classifications: Exclusive Clustering, Overlapping Clustering, Hierarchical Clustering, Probabilistic Clustering

7 Exclusive Clustering - Data is grouped in an exclusive way, so that if a particular data point belongs to one cluster it cannot be included in any other cluster. Separation of the points can be achieved, for example, by a straight line on a two-dimensional plane.

8 Overlapping Clustering - Uses fuzzy sets to cluster data, so that each point may belong to two or more clusters with different degrees of membership. In other words, individual data points can belong to multiple clusters at once.

9 Hierarchical Clustering - Builds up (agglomerative) or breaks apart (divisive) a hierarchy of clusters. Agglomerative algorithms begin at the leaves of the tree, whereas divisive algorithms begin at the root. (Figure: agglomerative clustering)

10 Probabilistic Clustering - The data are assumed to follow a model, and the goal is to optimize the fit between the data and the model using a probabilistic approach. Each cluster can be represented by a parametric distribution, such as a Gaussian (continuous) or a Poisson (discrete), and the entire data set is therefore modeled by a mixture of these distributions. (Figure: mixture of multivariate Gaussians)

11 Distance Measure - Common measures:
- Euclidean distance
- Manhattan distance
- Maximum norm
- Mahalanobis distance
- Hamming distance
- Minkowski distance (for higher-dimensional data)
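To make these measures concrete, here is a minimal NumPy sketch of a few of the distances listed above (our own illustration; the function names are not from the deck, and the Mahalanobis example assumes the caller supplies a covariance matrix S):

```python
import numpy as np

def euclidean(x, y):
    # Straight-line distance: sqrt(sum of squared coordinate differences)
    return np.sqrt(np.sum((x - y) ** 2))

def manhattan(x, y):
    # City-block distance: sum of absolute coordinate differences
    return np.sum(np.abs(x - y))

def maximum_norm(x, y):
    # Chebyshev / maximum norm: largest single coordinate difference
    return np.max(np.abs(x - y))

def minkowski(x, y, p):
    # Generalizes Euclidean (p = 2) and Manhattan (p = 1)
    return np.sum(np.abs(x - y) ** p) ** (1.0 / p)

def mahalanobis(x, y, S):
    # Accounts for correlations between dimensions via the covariance matrix S
    d = x - y
    return np.sqrt(d @ np.linalg.inv(S) @ d)

x = np.array([1.0, 2.0])
y = np.array([4.0, 6.0])
print(euclidean(x, y), manhattan(x, y), maximum_norm(x, y))  # 5.0 7.0 4.0
```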

12 Four Most Commonly Used Algorithms: K-means, Fuzzy C-means, Hierarchical, Mixture of Gaussians

13 K-means - An exclusive clustering algorithm whose steps are as follows:
1) Let k be the number of clusters
2) Randomly generate k clusters and determine the cluster centers, or generate k random points as temporary cluster centers
3) Assign each point to the nearest cluster center
4) Recompute the new cluster centers
5) Repeat steps 3 and 4 until the centroids no longer move

14 K-means - Suppose we have n vectors x1, x2, ..., xn that fall into k compact clusters, k < n. Let mi be the mean of the vectors in cluster i. Make initial guesses for the means m1, m2, ..., mk, then, until there are no changes in any mean:
- Use the estimated means to classify the samples into clusters
- For every cluster i, replace mi with the mean of all of the samples assigned to cluster i
(Figure: sample points m1 and m2 moving towards the centers of two clusters)
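As a rough illustration of the two k-means slides above, here is a minimal NumPy sketch (our own code, not the presenter's): it picks k random data points as temporary centers, assigns each point to its nearest center, and recomputes the means until the centroids stop moving.

```python
import numpy as np

def kmeans(X, k, max_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    # Step 2: use k randomly chosen data points as temporary cluster centers
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(max_iter):
        # Step 3: assign each point to the nearest cluster center
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Step 4: recompute each center as the mean of its assigned points
        new_centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                                else centers[j] for j in range(k)])
        # Step 5: stop once the centroids no longer move
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return centers, labels

# Toy usage: two well-separated blobs
X = np.vstack([np.random.randn(50, 2), np.random.randn(50, 2) + 5])
centers, labels = kmeans(X, k=2)
```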

15 Fuzzy C-means - An overlapping clustering algorithm whose steps are as follows:
1) Initialize the membership matrix U = [uij], U(0)
2) At step k: calculate the center vectors C(k) = [cj] using U(k)
3) Update U(k) to U(k+1)
4) Repeat steps 2 and 3 until ||U(k+1) - U(k)|| < ε, where ε is the given sensitivity threshold

16 Fuzzy C-means - Suppose 20 data points and 3 clusters are used to initialize the algorithm and to compute the U matrix. The color of each data point in the graph matches that of its nearest cluster. Assume a fuzziness coefficient m = 2 and ε = 0.3. (Figures: initial graph; final condition, reached after 8 steps)

17 Fuzzy C-means - Can we do better? Yes, but at the cost of more computation. Assume the same conditions as before except ε = 0.01. The result? The final condition is reached only after 37 steps!
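A minimal sketch of the update loop implied by the steps on slide 15, assuming the standard fuzzy c-means center and membership formulas with fuzziness coefficient m (our own illustration; the 20-point, 3-cluster example above would correspond to calling it with c=3, m=2):

```python
import numpy as np

def fuzzy_cmeans(X, c, m=2.0, eps=0.01, max_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    n = len(X)
    # Step 1: initialize the membership matrix U(0), rows summing to 1
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(max_iter):
        # Step 2: compute the center vectors C(k) from U(k)
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        # Step 3: update U(k) -> U(k+1) with the standard membership formula
        dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U_new = 1.0 / np.sum((dist[:, :, None] / dist[:, None, :]) ** (2.0 / (m - 1)), axis=2)
        # Step 4: stop when ||U(k+1) - U(k)|| < eps
        if np.linalg.norm(U_new - U) < eps:
            return centers, U_new
        U = U_new
    return centers, U
```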

18 Hierarchical - A hierarchical clustering algorithm whose steps are as follows (agglomerative):
1) Assign each item to its own cluster
2) Find the closest (most similar) pair of clusters and merge them into a single cluster
3) Compute distances (similarities) between the new cluster and the old clusters using single-linkage, complete-linkage, or average-linkage
4) Repeat steps 2 and 3 until there is just a single cluster
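For reference, a short sketch using SciPy's agglomerative routines, which support the single-, complete-, and average-linkage rules listed above (the toy data here are made up for illustration, not the deck's example):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

# Toy data: two loose groups of 2-D points
X = np.array([[0.0, 0.0], [0.2, 0.1], [0.1, 0.3],
              [5.0, 5.0], [5.2, 4.9], [4.8, 5.1]])

# Condensed pairwise distance matrix, then agglomerative merging.
# method can be 'single', 'complete', or 'average' (the three linkage rules above).
Z = linkage(pdist(X), method='single')

# Cut the resulting tree into 2 flat clusters
labels = fcluster(Z, t=2, criterion='maxclust')
print(labels)  # e.g. [1 1 1 2 2 2]
```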

19 Hierarchical - There is also divisive hierarchical clustering, which does the reverse: it starts with all objects in one cluster and subdivides them. However, divisive methods are not generally available and have rarely been applied.

20 Single-Linkage - Definitions: D = [d(i,j)] is the proximity matrix, L(k) is the level of the kth clustering, and d[(r),(s)] is the proximity between clusters (r) and (s). Follow these steps:
1) Begin with level L(0) = 0 and sequence number m = 0
2) Find a pair of clusters (r), (s) such that d[(r),(s)] = min d[(i),(j)], where the minimum is taken over all pairs of clusters in the current clustering
3) Increment the sequence number, m = m + 1, and merge clusters (r) and (s) into a single cluster at level L(m) = d[(r),(s)]
4) Update D by deleting the rows and columns for clusters (r) and (s) and adding a row and column for the newly formed cluster; the proximity between the new cluster (r,s) and an old cluster (k) is d[(k),(r,s)] = min{ d[(k),(r)], d[(k),(s)] }
5) Repeat steps 2 through 4 until all objects are in one cluster
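The same procedure written out directly on a proximity matrix, following steps 1-5 above (our own illustrative sketch; the small 4x4 matrix in the usage example is hypothetical, not the Italian-cities data from the next slides):

```python
import numpy as np

def single_linkage(D, names):
    """Agglomerative single-linkage on a symmetric proximity matrix D."""
    D = D.astype(float)
    clusters = [[name] for name in names]
    levels = []                      # L(m): distance at which each merge happens
    np.fill_diagonal(D, np.inf)      # ignore self-distances when searching for the minimum
    while len(clusters) > 1:
        # Step 2: find the closest pair of clusters (r), (s)
        r, s = np.unravel_index(np.argmin(D), D.shape)
        r, s = min(r, s), max(r, s)
        # Step 3: merge them and record the level L(m) = d[(r),(s)]
        levels.append((clusters[r] + clusters[s], D[r, s]))
        # Step 4: the new row/column is the element-wise minimum of rows r and s
        merged_row = np.minimum(D[r], D[s])
        D[r], D[:, r] = merged_row, merged_row
        D[r, r] = np.inf
        D = np.delete(np.delete(D, s, axis=0), s, axis=1)
        clusters[r] = clusters[r] + clusters[s]
        del clusters[s]
    return levels

# Hypothetical 4x4 proximity matrix (not the Italian-cities distances)
D = np.array([[0, 2, 6, 10],
              [2, 0, 5,  9],
              [6, 5, 0,  4],
              [10, 9, 4, 0]])
print(single_linkage(D, ["a", "b", "c", "d"]))
```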

21 Single-Linkage - Suppose we want a hierarchical clustering of the distances between some Italian cities using single-linkage. The nearest pair is MI and TO at distance 138, so they are merged into the cluster "MI/TO" with level L(MI/TO) = 138 and sequence number m = 1.

22 Single-Linkage - Next we compute the distance from this new compound object to all other objects. In single-linkage clustering the rule is that the distance from the compound object to another object equals the shortest distance from any member of the cluster to the outside object. The distance from "MI/TO" to RM is therefore 564, the distance from MI to RM.

23 Single-Linkage - After some further computation the remaining clusters are merged in the same way, until finally the last two clusters are merged. (Figure: the resulting hierarchical tree)

24 Mixture of Gaussians - A probabilistic clustering algorithm whose steps are as follows:
1) Assume there are k components, where ωi represents the i'th component and has a mean vector μi with covariance matrix σ²I

25 Mixture of Gaussians
2) Select a random component i with probability P(ωi)
3) A datapoint can then be generated by drawing from N(μi, σ²I)
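A minimal sketch of this generative story (our own illustration): pick a component with probability P(ωi), then draw a point from N(μi, σ²I).

```python
import numpy as np

def sample_mixture(mus, priors, sigma, n, seed=0):
    """Draw n points from a mixture of spherical Gaussians N(mu_i, sigma^2 I)."""
    rng = np.random.default_rng(seed)
    mus = np.asarray(mus, dtype=float)
    d = mus.shape[1]
    # Step 2: choose a component i with probability P(w_i) for each point
    comps = rng.choice(len(mus), size=n, p=priors)
    # Step 3: generate each point from N(mu_i, sigma^2 I)
    return mus[comps] + sigma * rng.standard_normal((n, d)), comps

X, comps = sample_mixture(mus=[[0, 0], [4, 4], [8, 0]],
                          priors=[0.5, 0.3, 0.2], sigma=1.0, n=500)
```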

26 Mixture of Gaussians
4) More generally, each component generates data from a Gaussian with mean μi and covariance matrix Σi, so a datapoint is distributed as N(μi, Σi)
5) Next, we are interested in the probability that an observation from class ωi would produce the data x, given the means μ1, ..., μk: P(x | ωi, μ1, ..., μk)

27 Mixture of Gaussians
6) Goal: maximize the probability of the data given the centers of the Gaussians:
P(x | μ1, ..., μk) = Σi P(ωi) P(x | ωi, μ1, μ2, ..., μk)
P(data | μ1, ..., μk) = ∏ (j = 1 to N) Σi P(ωi) P(xj | ωi, μ1, μ2, ..., μk)
The most popular and simplest algorithm used for this is the Expectation-Maximization (EM) algorithm.

28 Expectation-Maximization (EM) - An iterative algorithm for finding maximum likelihood estimates of parameters in probabilistic models, whose steps are as follows:
1) Initialize the distribution parameters
2) Estimate the expected value of the unknown variables (E-step)
3) Re-estimate the distribution parameters to maximize the likelihood of the data (M-step)
4) Repeat steps 2 and 3 until convergence
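A compact sketch of EM for a Gaussian mixture following steps 1-4 above, restricted to the spherical σ²I case from slide 24 (our own illustrative implementation, estimating the means μi and the mixing probabilities P(ωi) with σ held fixed):

```python
import numpy as np

def em_gmm_spherical(X, k, sigma=1.0, n_iter=50, seed=0):
    """EM for a mixture of spherical Gaussians N(mu_i, sigma^2 I) with fixed sigma."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    # Step 1: initialize the distribution parameters
    mus = X[rng.choice(n, size=k, replace=False)]
    priors = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: expected component memberships (responsibilities) for each point
        sq = ((X[:, None, :] - mus[None, :, :]) ** 2).sum(axis=2)   # (n, k)
        log_p = -0.5 * sq / sigma**2 + np.log(priors)               # up to a shared constant
        log_p -= log_p.max(axis=1, keepdims=True)                   # numerical stability
        resp = np.exp(log_p)
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: re-estimate parameters to maximize the expected likelihood
        Nk = resp.sum(axis=0)
        mus = (resp.T @ X) / Nk[:, None]
        priors = Nk / n
    return mus, priors

# Toy usage on synthetic data drawn from two clusters
X = np.vstack([np.random.randn(100, 2), np.random.randn(100, 2) + 4])
mus, priors = em_gmm_spherical(X, k=2)
```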

29 Expectation-Maximization (EM) - Given the probabilities of grades in a class: P(A) = 1/2, P(B) = μ, P(C) = 2μ, P(D) = 1/2 - 3μ, where 0 ≤ μ ≤ 1/6, what is the maximum likelihood estimate of μ? We begin with a guess for μ and iterate between the E and M steps to improve our estimates of μ and of a and b (a = the number of A's, b = the number of B's). Define μ(t) as the estimate of μ on the t'th iteration and b(t) as the estimate of b on the t'th iteration.
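A tiny sketch of these iterations, assuming (as in the tutorial this example appears to follow) that only the combined count h of A's and B's is observed while the counts of C's and D's are known; the class counts used below are hypothetical, purely for illustration.

```python
# Illustrative EM for the grades example. Assumption: only h = (# of A's + # of B's)
# is observed, along with c = # of C's and d = # of D's. Counts below are hypothetical.
def em_grades(h, c, d, mu=0.05, n_iter=20):
    for t in range(n_iter):
        # E-step: expected number of B's, b(t), given the current mu(t)
        b = h * mu / (0.5 + mu)
        # M-step: ML estimate of mu given b(t), c, d
        # (maximizes b*log(mu) + c*log(2*mu) + d*log(1/2 - 3*mu))
        mu = (b + c) / (6.0 * (b + c + d))
    return mu

print(em_grades(h=14, c=6, d=10))  # hypothetical counts of A-or-B, C, D
```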

30 EM Convergence - Prob(data | μ) must increase or remain the same between iterations, but it can never exceed 1; therefore it must converge.

31 Mixture of Gaussians - Now let us see the effects of the probabilistic approach over several iterations of the EM algorithm.

32 Mixture of Gaussians - After the first iteration (figure)

33 Mixture of Gaussians - After the second iteration (figure)

34 Mixture of Gaussians - After the third iteration (figure)

35 Mixture of Gaussians - After the fourth iteration (figure)

36 Mixture of Gaussians - After the fifth iteration (figure)

37 Mixture of Gaussians - After the sixth iteration (figure)

38 Mixture of Gaussians - After the 20th iteration (figure)

39 Questions?

40 References
- "A Tutorial on Clustering Algorithms", http://home.dei.polimi.it/matteucc/Clustering/tutorial_html/
- "Cluster Analysis", http://en.wikipedia.org/wiki/Data_clustering
- "Clustering", http://www.cs.cmu.edu/afs/andrew/course/15/381-f08/www/lectures/clustering.pdf
- "Clustering with Gaussian Mixtures", http://autonlab.org/tutorials/gmm14.pdf
- "Data Clustering: A Review", http://www.cs.rutgers.edu/~mlittman/courses/lightai03/jain99data.pdf
- "Finding Communities by Clustering a Graph into Overlapping Subgraphs", www.cs.rpi.edu/~magdon/talks/clusteringIADIS05.ppt

