DATA MINING: from data to information. Ronald Westra, Dept. of Mathematics, Maastricht University
CLUSTERING AND CLUSTER ANALYSIS. Data Mining Lecture IV [Chapter 8, section 8.4, and Chapter 9 of Principles of Data Mining by Hand, Mannila, Smyth]
1. Clustering versus Classification
Classification: assign a pre-determined label to a sample.
Clustering: derive the relevant labels for classification from the structure of a given dataset.
Clustering aims at maximal intra-cluster similarity and maximal inter-cluster dissimilarity.
Objectives: 1. segmentation of the space; 2. finding natural subclasses.
Examples of Clustering and Classification: 1. Computer Vision
Examples of Clustering and Classification: 2. Types of chemical reactions
Voronoi Clustering. Georgy Fedoseevich Voronoy (1868 - 1908)
Voronoi Clustering. A Voronoi diagram (also called a Voronoi tessellation, Voronoi decomposition, or Dirichlet tessellation) is a special kind of decomposition of a metric space determined by distances to a specified discrete set of objects in the space, e.g., a discrete set of points.
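As a concrete illustration, the short sketch below labels points by their nearest seed, which is exactly the Voronoi cell they fall into (the seed coordinates and data are made up for the example):

```python
import numpy as np

seeds = np.array([[0.2, 0.3], [0.7, 0.8], [0.5, 0.1]])   # the discrete set of objects
points = np.random.rand(1000, 2)                          # points to partition

# distance from every point to every seed, then pick the closest seed
dists = np.linalg.norm(points[:, None, :] - seeds[None, :, :], axis=2)
cells = dists.argmin(axis=1)   # cells[i] = index of the Voronoi cell containing points[i]
```

The same nearest-center rule reappears below in the assignment step of k-means.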
Partitional Clustering [book section 9.4]: score functions, centroid, intra-cluster distance, inter-cluster distance, C-means [book page 303].
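For reference, a common pair of score functions along these lines (a standard formulation sketched here, since the slide's own formulas are not included) measures within-cluster and between-cluster distance, with r_k the centroid of cluster C_k:

```latex
wc(C) = \sum_{k=1}^{K} \sum_{x \in C_k} \lVert x - r_k \rVert^2 ,
\qquad
bc(C) = \sum_{1 \le j < k \le K} \lVert r_j - r_k \rVert^2
```

Partitional methods such as C-means search, for a fixed number of clusters K, for a partition with small wc(C) (and correspondingly large bc(C)).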
k-means clustering (also: C-means). The k-means algorithm assigns each point to the cluster whose center (also called centroid) is nearest. The center is the average of all the points in the cluster, i.e., its coordinates are the arithmetic mean, for each dimension separately, over all the points in the cluster.
k-means clustering (also: C-means). Example: the data set has three dimensions and the cluster has two points, X = (x1, x2, x3) and Y = (y1, y2, y3). Then the centroid Z becomes Z = (z1, z2, z3), where z1 = (x1 + y1)/2, z2 = (x2 + y2)/2 and z3 = (x3 + y3)/2.
k-means clustering (also: C-means). This is the basic structure of the algorithm (J. MacQueen, 1967):
1. Randomly generate k clusters and determine the cluster centers, or directly generate k seed points as cluster centers.
2. Assign each point to the nearest cluster center.
3. Recompute the new cluster centers.
4. Repeat until some convergence criterion is met (usually that the assignment hasn't changed).
C-means [book page 303]

while changes in clusters C_k
   % form clusters
   for k = 1,…,K do
      C_k = { x : ||x − r_k|| < ||x − r_l|| for all l ≠ k }
   end
   % compute new cluster centroids
   for k = 1,…,K do
      r_k = mean({ x : x ∈ C_k })
   end
end
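A minimal runnable version of this loop, sketched in NumPy (the function name, the random initialisation and the defaults are illustrative choices, not the book's):

```python
import numpy as np

def kmeans(X, k, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    # initialise the centroids with k randomly chosen data points
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # assignment step: each point goes to the nearest centroid
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # update step: each centroid becomes the mean of its assigned points
        new_centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                                else centers[j] for j in range(k)])
        if np.allclose(new_centers, centers):
            break          # centroids (and hence assignments) no longer change
        centers = new_centers
    return labels, centers

# usage on toy data: labels, centers = kmeans(np.random.rand(200, 2), k=3)
```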
k-means clustering (also: C-means). The main advantages of this algorithm are its simplicity and speed, which allow it to run on large datasets. Yet it does not systematically yield the same result with each run of the algorithm: the resulting clusters depend on the initial assignments. The k-means algorithm minimizes intra-cluster variance (equivalently, it maximizes inter-cluster variance), but does not ensure that the solution found is a global minimum; it may be only a local minimum of the variance.
Fuzzy c-means. One of the problems of the k-means algorithm is that it gives a hard partitioning of the data, that is to say, each point is attributed to one and only one cluster. But points on the edge of a cluster, or near another cluster, may belong to it less strongly than points in the center of the cluster.
Fuzzy c-means. Therefore, in fuzzy clustering, each point does not belong to exactly one cluster, but has a degree of belonging to each cluster, as in fuzzy logic. For each point x we have a coefficient u_k(x) giving the degree of being in the k-th cluster. Usually, the sum of those coefficients has to be one, so that u_k(x) can be interpreted as a probability of belonging to a certain cluster:
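Written out (the slide showed this formula as an image), the normalisation is:

```latex
\sum_{k=1}^{K} u_k(x) = 1 \qquad \text{for every point } x .
```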
Fuzzy c-means. With fuzzy c-means, the centroid of a cluster is computed as the mean of all points, weighted by their degree of belonging to the cluster, that is:
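In the standard fuzzy c-means form (reconstructed here, with the fuzzifier m that the next slides introduce), the centroid reads:

```latex
c_k = \frac{\sum_{x} u_k(x)^m \, x}{\sum_{x} u_k(x)^m}
```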
Fuzzy c-means. The degree of being in a certain cluster is related to the inverse of the distance to the cluster center; the coefficients are then normalized and fuzzified with a real parameter m > 1 so that their sum is 1. So:
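The resulting membership update, in its standard form (the slide's formula was an image):

```latex
u_k(x) = \frac{1}{\sum_{j=1}^{K} \left( \lVert x - c_k \rVert / \lVert x - c_j \rVert \right)^{2/(m-1)}}
```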
Fuzzy c-means. For m equal to 2, this is equivalent to normalising the coefficients linearly to make their sum 1. When m is close to 1, the cluster center closest to the point is given much more weight than the others, and the algorithm becomes similar to k-means.
Fuzzy c-means. The fuzzy c-means algorithm is very similar to the k-means algorithm:
Fuzzy c-means
1. Choose a number of clusters.
2. Assign randomly to each point coefficients for being in the clusters.
3. Repeat until the algorithm has converged (that is, the coefficients' change between two iterations is no more than ε, the given sensitivity threshold):
   - Compute the centroid for each cluster, using the formula above.
   - For each point, compute its coefficients of being in the clusters, using the formula above.
Fuzzy C-means
u_jk is the membership of sample j in cluster k; c_k is the centroid of cluster k.

while changes in memberships u_jk
   % compute new memberships
   for k = 1,…,K do
      for j = 1,…,N do
         u_jk = f(x_j − c_k)
      end
   end
   % compute new cluster centroids (weighted means)
   for k = 1,…,K do
      c_k = SUM_j u_jk x_j / SUM_j u_jk
   end
end
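A runnable NumPy sketch of the same loop (illustrative: the fuzzifier m = 2, the tolerance and the random initialisation are assumed defaults, not values prescribed by the book):

```python
import numpy as np

def fuzzy_cmeans(X, K, m=2.0, eps=1e-5, max_iter=200, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), K))
    U /= U.sum(axis=1, keepdims=True)        # each row of memberships sums to 1
    for _ in range(max_iter):
        Um = U ** m
        # centroids: weighted means of the data, weights = memberships^m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        # distances d[j, k] = ||x_j - c_k|| (small constant avoids division by zero)
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        # membership update: inverse distance, normalised and fuzzified
        ratio = d[:, :, None] / d[:, None, :]            # ratio[j, k, l] = d_jk / d_jl
        U_new = 1.0 / (ratio ** (2.0 / (m - 1.0))).sum(axis=2)
        converged = np.abs(U_new - U).max() < eps
        U = U_new
        if converged:
            break                                        # coefficients changed less than eps
    return U, centers

# usage on toy data: U, centers = fuzzy_cmeans(np.random.rand(300, 2), K=3)
```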
Fuzzy c-means. The fuzzy c-means algorithm minimizes intra-cluster variance as well, but it has the same problems as k-means: the minimum found may be only a local minimum, and the results depend on the initial choice of weights.
Hierarchical Clustering [book section 9.5]
One major problem with partitional clustering is that the number of clusters (= #classes) must be pre-specified!!!
This poses the question: what IS the real number of clusters in a given set of data? Answer: it depends!
Agglomerative methods: bottom-up. Divisive methods: top-down.
Hierarchical Clustering: agglomerative hierarchical clustering.
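A short agglomerative-clustering sketch using SciPy (an assumed tool choice for illustration; the slides do not prescribe a library). Note that the number of clusters is chosen only when the tree is cut, which is exactly the flexibility mentioned above:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

X = np.random.rand(50, 2)                        # toy data
Z = linkage(X, method='average')                 # bottom-up (agglomerative) merging
labels = fcluster(Z, t=3, criterion='maxclust')  # cut the resulting tree into 3 clusters
```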
Example of Clustering and Classification
DATA ANALYSIS AND UNCERTAINTY. Data Mining Lecture V [Chapter 4, Hand, Mannila, Smyth]
RANDOM VARIABLES [4.3]
- multivariate random variables
- marginal density
- conditional density & dependency: p(x|y) = p(x,y) / p(y)
- example: supermarket purchases
RANDOM VARIABLES. Example: supermarket purchases.
X is an n customers × p products matrix; X(i,j) is a Boolean variable: “Has customer #i bought a product of type j?”
nA = sum(X(:,A)) is the number of customers that bought product A
nB = sum(X(:,B)) is the number of customers that bought product B
nAB = sum(X(:,A).*X(:,B)) is the number of customers that bought both product A and product B
*** Demo: matlab: conditionaldensity
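With these counts, the conditional-density definition p(x|y) = p(x,y) / p(y) gives the empirical estimate (written out here for concreteness, with n the total number of customers):

```latex
\hat p(B \mid A) = \frac{\hat p(A,B)}{\hat p(A)} = \frac{n_{AB}/n}{n_{A}/n} = \frac{n_{AB}}{n_{A}}
```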
RANDOM VARIABLES. (Conditionally) independent: p(x,y) = p(x) p(y), i.e.: p(x|y) = p(x).
SAMPLING
ESTIMATION
Maximum Likelihood Estimation
BAYESIAN ESTIMATION
PROBABILISTIC MODEL-BASED CLUSTERING USING MIXTURE MODELS. Data Mining Lecture VI [4.5, 8.4, 9.2, 9.6, Hand, Mannila, Smyth]
Probabilistic Model-Based Clustering using Mixture Models. A probability mixture model: a mixture model is a formalism for modeling a probability density function as a sum of parameterized functions. In mathematical terms:
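The formula itself was shown as an image on the slide; in the notation of the following slides it is the standard mixture form:

```latex
p_X(x) = \sum_{k=1}^{K} a_k \, h(x \mid \lambda_k)
```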
A probability mixture model
where p_X(x) is the modeled probability distribution function, K is the number of components in the mixture model, and a_k is the mixture proportion of component k. By definition, 0 < a_k < 1 for all k = 1…K, and:
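The constraint referred to (also an image in the original) is:

```latex
\sum_{k=1}^{K} a_k = 1 .
```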
A probability mixture model
h(x | λ_k) is a probability distribution parameterized by λ_k. Mixture models are often used when we know h(x) and we can sample from p_X(x), but we would like to determine the a_k and λ_k values. Such situations can arise in studies in which we sample from a population that is composed of several distinct subpopulations.
A common approach for ‘decomposing’ a mixture model It is common to think of mixture modeling as a missing data problem. One way to understand this is to assume that the data points under consideration have "membership" in one of the distributions we are using to model the data. When we start, this membership is unknown, or missing. The job of estimation is to devise appropriate parameters for the model functions we choose, with the connection to the data points being represented as their membership in the individual model distributions.
Probabilistic Model-Based Clustering using Mixture Models: the EM-algorithm [book section 8.4]
Mixture Decomposition: The ‘Expectation-Maximization’ Algorithm. The Expectation-Maximization algorithm computes the missing memberships of data points in our chosen distribution model. It is an iterative procedure, where we start with initial parameters for our model distribution (the a_k's and λ_k's of the model listed above). The estimation process proceeds iteratively in two steps: the Expectation Step and the Maximization Step.
The ‘Expectation-Maximization’ Algorithm. The expectation step: with initial guesses for the parameters in our mixture model, we compute “partial membership” of each data point in each constituent distribution. This is done by calculating expectation values for the membership variables of each data point.
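Concretely, in the standard mixture-model notation of the previous slides (reconstructed here, since the slide's own formulas were images), the expected membership of data point x_i in component k under the current parameter guesses is:

```latex
\hat w_{ik} = \frac{a_k \, h(x_i \mid \lambda_k)}{\sum_{j=1}^{K} a_j \, h(x_i \mid \lambda_j)}
```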
The ‘Expectation-Maximization’ Algorithm. The maximization step: with the expectation values in hand for group membership, we can recompute plug-in estimates of our distribution parameters. For the mixing coefficient a_k, this is simply the fractional membership of all data points in the k-th distribution.
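In formula form (again a standard reconstruction), with N data points:

```latex
a_k = \frac{1}{N} \sum_{i=1}^{N} \hat w_{ik}
```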
EM-algorithm for Clustering. Suppose we have data D and a model with parameters θ and hidden parameters H. Interpretation: H = the class label. Log-likelihood of the observed data:
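Reconstructed in standard notation (the slide's formula was an image; θ denotes the model parameters):

```latex
l(\theta) = \log p(D \mid \theta) = \log \sum_{H} p(D, H \mid \theta)
```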
EM-algorithm for Clustering. With p the probability over the data D, let Q be the unknown distribution over the hidden parameters H. Then the log-likelihood is:
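In the standard derivation (the slide showed it as an image), multiplying and dividing by Q(H) and applying Jensen's inequality [*] gives the lower bound F(Q, θ) used on the following slides:

```latex
l(\theta) = \log \sum_{H} Q(H) \, \frac{p(D, H \mid \theta)}{Q(H)}
\;\ge\; \sum_{H} Q(H) \log \frac{p(D, H \mid \theta)}{Q(H)} \;=\; F(Q, \theta)
```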
[*Jensen’s inequality]
Jensen's inequality: for a concave-down function, the expected value of the function is less than the function of the expected value. The gray rectangle along the horizontal axis represents the probability distribution of x, which is uniform for simplicity, but the general idea applies for any distribution.
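In symbols, for a concave function f:

```latex
f\big(\mathbb{E}[x]\big) \;\ge\; \mathbb{E}\big[f(x)\big]
```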
EM-algorithm
So: F(Q, θ) is a lower bound on the log-likelihood function l(θ). EM alternates between:
E-step: maximising F with respect to Q with fixed θ, and
M-step: maximising F with respect to θ with fixed Q.
EM-algorithm: the E-step and M-step updates.
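The update equations were shown as images on the slide; in the notation above they read:

```latex
\text{E-step:}\quad Q^{(t+1)} = \arg\max_{Q} F\big(Q, \theta^{(t)}\big),
\qquad
\text{M-step:}\quad \theta^{(t+1)} = \arg\max_{\theta} F\big(Q^{(t+1)}, \theta\big)
```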
Probabilistic Model-Based Clustering using Gaussian Mixtures
Gaussian Mixture Decomposition. Gaussian mixture decomposition is a good classifier. It allows supervised as well as unsupervised learning (finding how many classes are optimal and how they should be defined, ...). But training is iterative and time-consuming. The idea is to set the position and width of the Gaussian distribution(s) so as to optimize the coverage of the learning samples.
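As a concrete illustration, a Gaussian mixture can be fitted by EM with scikit-learn (an assumed, illustrative tool choice; the toy data below are made up):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# toy data drawn from two Gaussian subpopulations
X = np.vstack([rng.normal(0.0, 1.0, size=(200, 2)),
               rng.normal(4.0, 0.5, size=(200, 2))])

gmm = GaussianMixture(n_components=2, random_state=0).fit(X)  # EM runs under the hood
print(gmm.weights_)           # estimated mixing proportions a_k
print(gmm.means_)             # estimated component centers
labels = gmm.predict(X)       # hard cluster assignment (unsupervised use)
probs = gmm.predict_proba(X)  # soft memberships, cf. the E-step responsibilities
```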
The End