Unsupervised Learning and Clustering
- k-means clustering
- Sum-of-Squared Errors
- Competitive Learning
- SOM
- Pre-processing and Post-processing techniques
K-means clustering
This is an elementary but very popular method for clustering. The goal is to find the k mean vectors, or "cluster centers":
- Initialize k, m1, m2, …, mk
- Repeat:
  - Classify each sample according to its nearest mi
  - Recompute each mi
- Until there is no change in any mi
- Return m1, m2, …, mk
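A minimal NumPy sketch of this loop (the function name, the random initialization from data points, and the convergence test are our own choices, not taken from the slides):

```python
import numpy as np

def kmeans(X, k, max_iter=100, seed=0):
    """Classify samples to their nearest mean, then recompute the means."""
    rng = np.random.default_rng(seed)
    means = X[rng.choice(len(X), size=k, replace=False)]  # initial m1..mk
    for _ in range(max_iter):
        # Classify each sample according to its nearest mean mi.
        dists = np.linalg.norm(X[:, None, :] - means[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Recompute each mean mi from the samples assigned to it.
        new_means = np.array([X[labels == i].mean(axis=0) if np.any(labels == i)
                              else means[i] for i in range(k)])
        if np.allclose(new_means, means):  # stop when no mean changes
            break
        means = new_means
    return means, labels
```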
Complexity
The computational complexity of the algorithm is O(n d c T), where d is the number of features, n is the number of examples, c is the number of clusters, and T is the number of iterations. The number of iterations is normally much smaller than the number of examples.
Figure 10.3
K-means clustering
- Disadvantage 1: prone to falling into local minima. This can be mitigated, at the cost of more computation, by running the algorithm many times with different initial means and keeping the best result.
- Disadvantage 2: susceptible to outliers. One solution is to replace the mean with the median.
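As an illustration of the restart strategy (assuming scikit-learn is available; the toy data and parameter values below are arbitrary), KMeans runs the algorithm n_init times from different initial means and keeps the solution with the lowest sum-of-squared error:

```python
import numpy as np
from sklearn.cluster import KMeans

X = np.random.default_rng(0).normal(size=(300, 2))  # toy data

# n_init=10 restarts k-means from 10 different initializations and keeps
# the run with the lowest sum-of-squared error (exposed as inertia_).
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print(km.inertia_)
print(km.cluster_centers_)
```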
K-means clustering: Hugo Steinhaus
Born on January 14, 1887 (Austria-Hungary). Professor at the University of Wroclaw, Notre Dame, and Sussex. Authored over 170 works in mathematics. The first to use k-means clustering.
Unsupervised Learning and Clustering
- k-means clustering
- Sum-of-Squared Errors
- Competitive Learning
- SOM
- Pre-processing and Post-processing techniques
The Sum-of-Squared Error
We can now define the goal of clustering. Goal: divide a dataset of examples into c disjoint subsets D1, D2, …, Dc, so that the distance between examples within the same partition is small compared to the distance between examples in different partitions. To achieve this, we define the c means so as to minimize a metric.
Metric
Let mi be the mean of the examples in partition Di:
  mi = (1 / ni) Σ_{x ∈ Di} x
Then the metric to minimize is the sum-of-squared errors:
  Je = Σ_{i=1..c} Σ_{x ∈ Di} || x − mi ||²
where the index i runs over the clusters.
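A direct NumPy translation of Je (a sketch; X is assumed to be an n × d array of examples and labels assigns each row to one of the c partitions):

```python
import numpy as np

def sum_of_squared_error(X, labels):
    """Je = sum over partitions Di of sum over x in Di of ||x - mi||^2."""
    Je = 0.0
    for i in np.unique(labels):
        Di = X[labels == i]            # examples in partition Di
        mi = Di.mean(axis=0)           # mean mi of partition Di
        Je += np.sum((Di - mi) ** 2)   # squared distances to the mean
    return Je
```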
Figure 10.10
Others
- Hierarchical clustering: clusters have subclusters, which in turn have subclusters, and so on (see the sketch below).
- Online clustering: as time goes on, new information may call for restructuring the clusters (plasticity), but we do not want this to happen very often (stability).
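A brief illustration of the hierarchical case (assuming SciPy; the dataset, linkage method, and cut levels are arbitrary choices): agglomerative linkage builds the nested cluster/subcluster structure, which can then be cut at different depths.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

X = np.random.default_rng(0).normal(size=(50, 2))   # toy data

# Build the hierarchy by repeatedly merging the two closest clusters.
Z = linkage(X, method='ward')

# Cutting the tree at different depths exposes clusters and their subclusters.
coarse = fcluster(Z, t=2, criterion='maxclust')   # 2 top-level clusters
fine   = fcluster(Z, t=6, criterion='maxclust')   # 6 subclusters
```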
Figure 10.11
Unsupervised Learning and Clustering
- k-means clustering
- Sum-of-Squared Errors
- Competitive Learning
- SOM
- Pre-processing and Post-processing techniques
Vector Quantisation Data will be represented with prototype vectors.
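A small sketch of the idea (NumPy; the codebook below is chosen arbitrarily for illustration): each data vector is replaced by, and can be reconstructed from, its nearest prototype.

```python
import numpy as np

def quantise(X, prototypes):
    """Return the index of the nearest prototype for each data vector."""
    dists = np.linalg.norm(X[:, None, :] - prototypes[None, :, :], axis=2)
    return dists.argmin(axis=1)

X = np.random.default_rng(0).normal(size=(100, 4))
prototypes = X[:8]                    # e.g. 8 prototype vectors (the codebook)
codes = quantise(X, prototypes)       # compact representation of the data
reconstructed = prototypes[codes]     # each x approximated by its prototype
```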
Feature Mapping
[Diagram: input nodes carrying x = [x1, x2, x3, x4]^T; each map neuron holds a weight vector w = [w1, w2, w3, w4]^T]
The weight vector will be mapped into the feature space.
SOM Algorithm
Initialization:
- Select the number of neurons in the map
- Choose random values for all weights
Learning:
- Repeat: for each example, find the neuron closest to the point: min || x - w ||
SOM Algorithm: winner takes all
Update the weights of the winning neuron only (and its neighbors).
SOM Algorithm: update weights
Update the weights for the closest neuron and its neighbors:
  w(t+1) = w(t) + η A(x, w) (x − w)
where η is the learning rate and the function A defines a neighborhood.
SOM Algorithm
[Figure: the neighboring function A]
SOM Algorithm: usage
For every test point, select the closest neuron using the minimum Euclidean distance: min || x - w ||
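Putting the preceding slides together, here is a compact NumPy sketch of a 1-D map (the Gaussian form of the neighborhood A and the linear decay schedules are common choices assumed here, not taken from the slides):

```python
import numpy as np

def train_som(X, n_neurons=10, epochs=20, eta=0.5, sigma=2.0, seed=0):
    """1-D self-organizing map: winner-takes-all plus neighborhood updates."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(n_neurons, X.shape[1]))  # random initial weights
    grid = np.arange(n_neurons)                   # neuron positions on the map
    for t in range(epochs):
        # Shrink the learning rate and neighborhood as training progresses.
        eta_t = eta * (1 - t / epochs)
        sigma_t = sigma * (1 - t / epochs) + 1e-3
        for x in X:
            winner = np.argmin(np.linalg.norm(x - W, axis=1))  # min ||x - w||
            # Neighborhood A centered on the winner (Gaussian, assumed form).
            A = np.exp(-((grid - winner) ** 2) / (2 * sigma_t ** 2))
            # w(t+1) = w(t) + eta * A(x, w) * (x - w)
            W += eta_t * A[:, None] * (x - W)
    return W

def map_point(x, W):
    """Usage: assign a test point to its closest neuron."""
    return int(np.argmin(np.linalg.norm(x - W, axis=1)))
```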
Mapping a Grid to a Grid
SOM Algorithm: comments
- Neighborhoods should be large at the beginning but shrink as the nodes gain a specific ordering
- Global ordering comes naturally (complexity theory)
- Architecture of the map: few nodes lead to underfitting; many nodes lead to overfitting
Teuvo Kohonen
Born in 1934, Finland. Author of several books and over 300 papers. His most famous work is the Self-Organizing Map. Member of the Academy of Finland. Awards:
- IEEE Neural Networks Council Pioneer Award, 1991
- Technical Achievement Award of the IEEE, 1995
- Frank Rosenblatt Technical Field Award, 2008
Unsupervised Learning and Clustering
- k-means clustering
- Sum-of-Squared Errors
- Competitive Learning
- SOM
- Pre-processing and Post-processing techniques
Cluster Tendency
Cluster tendency is a preprocessing step that indicates whether the data objects exhibit a clustering structure. It precludes using clustering when the data appear to be randomly generated from a uniform distribution over a sampling window of interest in the attribute space.
Example: Cluster Tendency
Two contrasting cases: (a) clustering captures inherent data groups; (b) clustering does not capture groups, and the results come from random variations.
Example: Cluster Tendency
Problem: how do we choose the sampling window?
Rule of thumb: create a window centered at the mean that captures half the total number of examples.
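One way to act on this (a sketch of the idea only, not a procedure given in the slides; it assumes scikit-learn and uses the k-means error as the quality measure) is to build the window from the rule of thumb, then compare the clustering error on the real data with the error on uniform random data drawn from the same window. Data with genuine structure should score noticeably better than the uniform reference.

```python
import numpy as np
from sklearn.cluster import KMeans

def sampling_window(X):
    """Window centered at the mean capturing roughly half the examples per feature."""
    center = X.mean(axis=0)
    half_width = np.quantile(np.abs(X - center), 0.5, axis=0)
    return center - half_width, center + half_width

def tendency_ratio(X, k=3, seed=0):
    lo, hi = sampling_window(X)
    inside = X[np.all((X >= lo) & (X <= hi), axis=1)]
    uniform = np.random.default_rng(seed).uniform(lo, hi, size=inside.shape)
    err_real = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(inside).inertia_
    err_unif = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(uniform).inertia_
    return err_real / err_unif   # well below 1 suggests a clustering structure
```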
Cluster Validation
Cluster validation is used to assess the value of the output of a clustering algorithm.
- Internal: statistics are devised to capture the quality of the induced clusters using only the available data objects.
- External: the validation is performed by gathering statistics that compare the induced clusters against an external and independent classification of the objects.
Example: Cluster Validation
Metrics for Cluster Validation
One type of statistical metric is defined in terms of a 2 x 2 table, where each entry counts the number of object pairs that agree or disagree on the class and the cluster to which they belong:

                    Same cluster    Different cluster
  Same class            E11               E12
  Different class       E21               E22
Example Metrics for Cluster Validation
- Rand index: [ E11 + E22 ] / [ E11 + E12 + E21 + E22 ]
- Jaccard index: E11 / [ E11 + E12 + E21 ]
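These two indices translate directly into code. A small sketch (standard library only; the function names are our own) that counts the four pair types and evaluates both metrics:

```python
from itertools import combinations

def pair_counts(classes, clusters):
    """Count object pairs by class/cluster agreement: E11, E12, E21, E22."""
    E11 = E12 = E21 = E22 = 0
    for i, j in combinations(range(len(classes)), 2):
        same_class = classes[i] == classes[j]
        same_cluster = clusters[i] == clusters[j]
        if same_class and same_cluster:
            E11 += 1
        elif same_class:
            E12 += 1          # same class, different cluster
        elif same_cluster:
            E21 += 1          # different class, same cluster
        else:
            E22 += 1          # different class, different cluster
    return E11, E12, E21, E22

def rand_index(classes, clusters):
    E11, E12, E21, E22 = pair_counts(classes, clusters)
    return (E11 + E22) / (E11 + E12 + E21 + E22)

def jaccard_index(classes, clusters):
    E11, E12, E21, _ = pair_counts(classes, clusters)
    return E11 / (E11 + E12 + E21)
```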