
Clustering.


1 Clustering

2 What is Cluster Analysis
k-Means
Adaptive Initialization
EM
Learning Mixture Gaussians
E-step
M-step
k-Means vs Mixture of Gaussians

3 What is Cluster Analysis?
Cluster: a collection of data objects
- Similar to one another within the same cluster
- Dissimilar to the objects in other clusters
Cluster analysis: grouping a set of data objects into clusters
Clustering is unsupervised classification: no predefined classes
Typical applications:
- As a stand-alone tool to get insight into the data distribution
- As a preprocessing step for other algorithms

4 k-Means Clustering

5 Feature space Sample

6 Norm
||x|| ≥ 0, with equality only if x = 0
||λx|| = |λ| · ||x||
||x1 + x2|| ≤ ||x1|| + ||x2||
lp norm: ||x||p = ( ∑i |xi|^p )^(1/p)

7 Metric
d(x,y) ≥ 0, with equality only if x = y
d(x,y) = d(y,x)
d(x,y) ≤ d(x,z) + d(z,y)
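As a small illustration (a numpy sketch; the sample vectors and the choice p = 2 are arbitrary), the lp norm induces a metric d(x,y) = ||x − y||p that satisfies the properties above:

```python
import numpy as np

def lp_norm(x, p=2):
    """l_p norm: (sum_i |x_i|^p)^(1/p)."""
    return np.sum(np.abs(x) ** p) ** (1.0 / p)

def lp_metric(x, y, p=2):
    """Metric induced by the l_p norm: d(x, y) = ||x - y||_p."""
    return lp_norm(x - y, p)

x = np.array([1.0, -2.0, 3.0])
y = np.array([0.5, 1.0, -1.0])
z = np.array([2.0, 2.0, 2.0])

# Norm properties
assert lp_norm(np.zeros(3)) == 0.0                       # ||x|| = 0 iff x = 0
assert np.isclose(lp_norm(3.0 * x), 3.0 * lp_norm(x))    # ||lambda x|| = |lambda| ||x||
assert lp_norm(x + y) <= lp_norm(x) + lp_norm(y)         # triangle inequality

# Metric properties
assert np.isclose(lp_metric(x, y), lp_metric(y, x))                # symmetry
assert lp_metric(x, y) <= lp_metric(x, z) + lp_metric(z, y)        # triangle inequality
```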

8 k-means Clustering: Cluster centers c1, c2, …, ck with clusters C1, C2, …, Ck

9 Error: The error function is E = ∑j ∑(x ∈ Cj) ||x − cj||², the sum of squared distances of each point to its cluster center. It has a local minimum when each center cj is the centroid (mean) of its cluster Cj.

10 k-means Example (K=2): Pick seeds; reassign clusters; compute centroids; reassign clusters; compute centroids; reassign clusters; converged!

11 Algorithm
Random initialization of k cluster centers
do {
- assign each xi in the dataset to the nearest cluster center (centroid) cj according to d²
- compute all new cluster centers
} until ( |Enew − Eold| < ε or number of iterations ≥ max_iterations )
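A minimal numpy sketch of this loop (the function name, the tolerance eps, and max_iterations are illustrative choices, not from the slides):

```python
import numpy as np

def k_means(X, k, eps=1e-6, max_iterations=100, rng=None):
    """Batch k-means: X is an (n, d) data matrix; returns centers and labels."""
    rng = np.random.default_rng(rng)
    # Random initialization of k cluster centers (k distinct data points)
    centers = X[rng.choice(len(X), size=k, replace=False)].copy()
    E_old = np.inf
    for _ in range(max_iterations):
        # Assign each x_i to the nearest center according to squared distance d^2
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        # Error function E = sum of squared distances to the assigned centers
        E_new = d2[np.arange(len(X)), labels].sum()
        # Compute all new cluster centers (centroids of the assigned points)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
        if abs(E_new - E_old) < eps:
            break
        E_old = E_new
    return centers, labels
```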

12 Adaptive k-means learning (batch mode) for large datasets
Random initialization of cluster centers
do {
- choose xi from the dataset
- cj* ← the nearest cluster center (centroid) cj according to d²
} until ( |Enew − Eold| < ε or number of iterations ≥ max_iterations )
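A sketch of this sample-at-a-time variant (the slide fragment only says that the nearest center cj* is selected; the update rule cj* ← cj* + η(xi − cj*) and the learning rate eta are assumptions made for the sketch):

```python
import numpy as np

def adaptive_k_means(X, k, eta=0.05, n_steps=10000, rng=None):
    """Sequential k-means sketch for large datasets: one sample per step."""
    rng = np.random.default_rng(rng)
    centers = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(n_steps):
        x = X[rng.integers(len(X))]                    # choose x_i from the dataset
        j = ((centers - x) ** 2).sum(axis=1).argmin()  # nearest center c_j* according to d^2
        centers[j] += eta * (x - centers[j])           # assumed update: move c_j* toward x_i
    return centers
```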


14 How to choose k? You have to know your data! Repeated runs of k-means clustering on the same data can lead to quite different partition results. Why? Because we use random initialization.


16 Adaptive Initialization
Choose a maximum radius within which every data point should have a cluster seed after completion of the initialization phase. In a single sweep, go through the data and assign the cluster seeds according to the chosen radius. A data point becomes a new cluster seed if it is not covered by the spheres of the chosen radius around the already assigned seeds. K-MAI clustering (Wichert et al. 2003)
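A sketch of this single-sweep seeding (the function name and the radius value are illustrative; see Wichert et al. 2003 for the actual K-MAI procedure):

```python
import numpy as np

def radius_init(X, radius):
    """Single sweep over the data: a point becomes a new seed if it is not
    within `radius` of any already assigned seed."""
    seeds = [X[0]]
    for x in X[1:]:
        dists = np.linalg.norm(np.array(seeds) - x, axis=1)
        if np.all(dists > radius):      # not covered by any existing sphere
            seeds.append(x)
    return np.array(seeds)              # len(seeds) gives k; seeds serve as initial centers
```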

17 EM Expectation Maximization Clustering

18 Feature space, sample, Mahalanobis distance
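As a small illustration of the Mahalanobis distance d(x, y) = √((x − y)ᵗ ∑⁻¹ (x − y)) in feature space (the random sample and the choice of measuring distance to the sample mean are illustrative):

```python
import numpy as np

def mahalanobis(x, mu, cov):
    """Mahalanobis distance: sqrt((x - mu)^T Sigma^{-1} (x - mu))."""
    diff = x - mu
    return float(np.sqrt(diff @ np.linalg.inv(cov) @ diff))

X = np.random.default_rng(0).normal(size=(200, 2))   # sample from the feature space
mu = X.mean(axis=0)                                  # sample mean
cov = np.cov(X, rowvar=False)                        # sample covariance matrix
print(mahalanobis(X[0], mu, cov))
```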

19 Bayes' rule: After the evidence is obtained, the posterior probability P(a|b) is the probability of a given that all we know is b (Reverend Thomas Bayes):
P(a|b) = P(b|a) P(a) / P(b)

20 Covariance: Measures the tendency of two features xi and xj to vary in the same direction. The covariance between features xi and xj is estimated over n patterns.
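A minimal numpy check of this estimate (the 1/(n − 1) normalization used here and by np.cov is one common convention; the slides may use 1/n):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
x_i = rng.normal(size=n)
x_j = 0.8 * x_i + 0.2 * rng.normal(size=n)   # x_j tends to vary with x_i

# Sample covariance over n patterns: sum_k (x_ik - mean_i)(x_jk - mean_j) / (n - 1)
cov_ij = np.sum((x_i - x_i.mean()) * (x_j - x_j.mean())) / (n - 1)
print(cov_ij, np.cov(x_i, x_j)[0, 1])        # the two estimates agree
```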


22 Learning Mixture Gaussians
What kind of probability distribution might have generated the data? Clustering presumes that the data are generated from a mixture distribution P.

23 The Normal Density
Univariate density: p(x) = (1 / (√(2π) σ)) exp( −(x − µ)² / (2σ²) )
The density is analytically tractable and continuous, and a lot of processes are asymptotically Gaussian.
Where: µ = mean (or expected value) of x, σ² = expected squared deviation or variance


25 Example: Mixture of 2 Gaussians

26 Multivariate density
The multivariate normal density in d dimensions is:
p(x) = (1 / ((2π)^(d/2) |∑|^(1/2))) exp( −½ (x − µ)ᵗ ∑⁻¹ (x − µ) )
where:
x = (x1, x2, …, xd)ᵗ (t stands for the transpose vector form)
µ = (µ1, µ2, …, µd)ᵗ is the mean vector
∑ is the d×d covariance matrix
|∑| and ∑⁻¹ are its determinant and inverse respectively
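A direct numpy evaluation of this density (the test point and the parameters are arbitrary; scipy.stats.multivariate_normal.pdf gives the same values):

```python
import numpy as np

def mvn_density(x, mu, cov):
    """Multivariate normal density in d dimensions:
    p(x) = exp(-0.5 (x-mu)^T Sigma^{-1} (x-mu)) / ((2 pi)^{d/2} |Sigma|^{1/2})"""
    d = len(mu)
    diff = x - mu
    norm_const = (2 * np.pi) ** (d / 2) * np.sqrt(np.linalg.det(cov))
    return float(np.exp(-0.5 * diff @ np.linalg.inv(cov) @ diff) / norm_const)

mu = np.array([0.0, 0.0])
cov = np.array([[1.0, 0.3], [0.3, 2.0]])
print(mvn_density(np.array([0.5, -1.0]), mu, cov))
```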


28 Example: Mixture of 3 Gaussians

29 A mixture distribution has k components, each of which is a distribution in its own right.
A data point is generated by first choosing a component and then generating a sample from that component.

30 Let C denote the component, with values 1, …, k.
The mixture distribution is given by P(x) = ∑i wi P(x | C=i), where:
x refers to the data point
wi = P(C=i) is the weight of each component
µi is the mean (vector) of each component
∑i is the covariance (matrix) of each component
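A small numpy sketch of exactly this generative process: choose component C = i with probability wi, then sample from that component's Gaussian (the weights, means, and covariances below are arbitrary examples):

```python
import numpy as np

rng = np.random.default_rng(0)

# Mixture parameters: weights w_i, means mu_i, covariances Sigma_i (k = 2, d = 2)
w = np.array([0.3, 0.7])
mu = np.array([[0.0, 0.0], [4.0, 4.0]])
cov = np.array([np.eye(2), 0.5 * np.eye(2)])

# Generate n data points: first choose a component, then sample from its Gaussian
n = 1000
components = rng.choice(len(w), size=n, p=w)
X = np.array([rng.multivariate_normal(mu[c], cov[c]) for c in components])
```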

31 If we knew which component generated each data point, then it would be easy to recover the component Gaussians: we could fit the parameters of a Gaussian to a data set.

32 Basic EM idea
Pretend that we know the parameters of the model.
Infer the probability that each data point belongs to each component.
Refit the components to the data, where each component is fitted to the entire data set and each point is weighted by the probability that it belongs to that component.

33 Algorithm
We initialize the mixture parameters arbitrarily.
E-step (expectation): compute the probabilities pij = P(C=i | xj), the probability that xj was generated by component i. By Bayes' rule, pij ∝ P(xj | C=i) P(C=i), normalized over the components i. P(xj | C=i) is just the probability (density) at xj of the ith Gaussian; P(C=i) is just the weight parameter of the ith Gaussian.
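A numpy/scipy sketch of this E-step (the function name e_step and the (k, n) layout of the pij matrix are implementation choices):

```python
import numpy as np
from scipy.stats import multivariate_normal

def e_step(X, w, mu, cov):
    """E-step: p_ij = P(C=i | x_j) for every component i and data point x_j.

    By Bayes' rule p_ij is proportional to P(x_j | C=i) * P(C=i), normalized
    over the components i so that each column sums to 1.
    """
    k, n = len(w), len(X)
    p = np.zeros((k, n))
    for i in range(k):
        # P(x_j | C=i) is the density of the i-th Gaussian; P(C=i) is its weight w_i
        p[i] = w[i] * multivariate_normal.pdf(X, mean=mu[i], cov=cov[i])
    p /= p.sum(axis=0, keepdims=True)   # normalize over the components
    return p
```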

34 M-step (maximization): re-estimate the parameters, using the pij as weights:
wi = P(C=i) = ∑j pij / N
µi = ∑j pij xj / ∑j pij
∑i = ∑j pij (xj − µi)(xj − µi)ᵗ / ∑j pij
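A matching M-step sketch that re-estimates wi, µi, and ∑i from the responsibilities pij; the e_step referred to in the commented loop is the one sketched above, and the small ridge added to each covariance is a stability assumption, not something stated on the slides:

```python
import numpy as np

def m_step(X, p):
    """M-step: re-fit w_i, mu_i, Sigma_i using the responsibilities p_ij (shape (k, n))."""
    k, n = p.shape
    w = p.sum(axis=1) / n                               # w_i = P(C=i)
    mu = (p @ X) / p.sum(axis=1, keepdims=True)         # weighted means
    cov = []
    for i in range(k):
        diff = X - mu[i]
        cov_i = (p[i][:, None] * diff).T @ diff / p[i].sum()
        cov.append(cov_i + 1e-6 * np.eye(X.shape[1]))   # small ridge for numerical stability
    return w, mu, np.array(cov)

# EM loop (parameters initialized arbitrarily, e_step as sketched above):
# for _ in range(100):
#     p = e_step(X, w, mu, cov)
#     w, mu, cov = m_step(X, p)
```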


42 Problems: A Gaussian component can shrink so that it covers just a single point; its variance goes to zero and the likelihood goes to infinity. Two components can "merge", acquiring identical means and variances and sharing their data points. These are serious problems, especially in high dimensions. It helps to initialize the parameters with reasonable values.

43 k-Means vs Mixture of Gaussians
Both are iterative algorithms that assign points to clusters. k-Means minimizes the squared-error function E; the mixture of Gaussians maximizes the likelihood, built from P(x | C=i). The mixture of Gaussians is the more general formulation; it becomes equivalent to k-Means when all ∑i = σ²I, in the limit of hard assignments.

44 What is Cluster Analysis
k-Means
Adaptive Initialization
EM
Learning Mixture Gaussians
E-step
M-step
k-Means vs Mixture of Gaussians

45 Tree Clustering COBWEB

