Selected Topics in AI: Data Clustering


1 Selected Topics in AI: Data Clustering
ECOM 6349 Selected Topics in AI: Data Clustering. Most of these slides are adapted from materials available on the internet by Tan, Steinbach, and Kumar, and by Jiawei Han.

2 Syllabus

3 Syllabus

4 What is Cluster Analysis?
Clustering is unsupervised learning: there are no predefined classes. A cluster is a collection of data objects; objects are similar to objects in the same cluster and dissimilar to objects in other clusters. Cluster analysis: finding groups of objects such that the objects in a group are similar (or related) to one another and different from (or unrelated to) the objects in other groups.

5 What is Cluster Analysis?
Inter-cluster distances are maximized; intra-cluster distances are minimized.

6 What is Cluster Analysis?
How many clusters? The same points can reasonably be grouped as two, four, or six clusters.

7 What is Cluster Analysis?
Exclusive versus non-exclusive: in non-exclusive clustering, points may belong to multiple clusters. Partial versus complete: in some cases, we only want to cluster some of the data. Heterogeneous versus homogeneous: clusters of widely different sizes, shapes, and densities.

8 What is Cluster Analysis?
Hard clustering versus fuzzy clustering: in fuzzy clustering, a point belongs to every cluster with some weight between 0 and 1, and the weights for each point must sum to 1. Probabilistic clustering has similar characteristics.

9 What Is Good Clustering?
A good clustering method will produce high-quality clusters with high intra-class similarity and low inter-class similarity. The quality of a clustering result depends on the similarity measure used by the method. The quality of a clustering method is also measured by its ability to discover some or all of the hidden patterns.

10 Vocabulary of Clustering
Records, data points, samples, items, objects, patterns… Attributes, features, variables… Similarity, dissimilarity, distance. Centre, centroid, prototype. Hard clustering (crisp clustering).

11 Requirements of Clustering
Scalability. Ability to deal with different types of attributes. Discovery of clusters with arbitrary shape. Minimal requirements for domain knowledge to determine input parameters. Ability to deal with noise and outliers. Insensitivity to the order of input records. Insensitivity to initial conditions. Ability to handle high dimensionality.

12 Clustering Algorithms

13 Clustering Algorithms

14 Data Representation Data matrix (two-mode): N objects with p attributes
Dissimilarity matrix (one-mode): d(i,j) is the dissimilarity between objects i and j over the p attributes
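To make the two representations concrete, here is a minimal NumPy sketch; the small 3 × 2 data matrix is made-up, and Euclidean distance is assumed purely for illustration as the dissimilarity d(i,j):

```python
import numpy as np

# Data matrix (two-mode): N objects as rows, p attributes as columns.
X = np.array([[1.0, 2.0],
              [2.0, 4.0],
              [5.0, 1.0]])

# Dissimilarity matrix (one-mode): d(i, j) for every pair of objects.
# Euclidean distance is assumed here purely for illustration.
diff = X[:, None, :] - X[None, :, :]    # shape (N, N, p): pairwise differences
D = np.sqrt((diff ** 2).sum(axis=-1))   # N x N, symmetric, zero diagonal

print(np.round(D, 3))
```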

15 How to deal with missing values?

16 Types of Clusters: Well-Separated
Well-separated clusters: a cluster is a set of points such that any point in a cluster is closer (or more similar) to every other point in the cluster than to any point not in the cluster. (Figure: 3 well-separated clusters.)

17 Types of Clusters: Center-Based
A cluster is a set of objects such that an object in a cluster is closer (more similar) to the “center” of its cluster than to the center of any other cluster. The center of a cluster is often a centroid, the average of all the points in the cluster, or a medoid, the most “representative” point of the cluster. (Figure: 4 center-based clusters.)

18 Types of Clusters: Contiguity-Based
Contiguous clusters (nearest neighbor or transitive): a cluster is a set of points such that a point in a cluster is closer (or more similar) to one or more other points in the cluster than to any point not in the cluster. (Figure: 8 contiguous clusters.)

19 Types of Clusters: Density-Based
A cluster is a dense region of points, separated from other regions of high density by low-density regions. Used when the clusters are irregular or intertwined, and when noise and outliers are present. (Figure: 6 density-based clusters.)

20 Types of Clusters: Conceptual Clusters
Shared property or conceptual clusters: finds clusters that share some common property or represent a particular concept. (Figure: 2 overlapping circles.)

21 Types of Clusters: Objective Function
Clusters defined by an objective function: finds clusters that minimize or maximize an objective function. Enumerate all possible ways of dividing the points into clusters and evaluate the 'goodness' of each potential set of clusters using the given objective function.

22 Type of data in clustering analysis

23 Symbol Table

24 Symbol Table

25 Frequency Table

26 Frequency Table

27 Frequency Table

28 Frequency Table

29 Type of data in clustering analysis
Binary variables Nominal variables Ordinal variables Interval-scaled variables Ratio variables Variables of mixed types

30 Binary variables The binary variable is symmetric (Simple match coefficient) The binary variable is asymmetric (Jaccard coefficient) Object j Object i November 9, 2018 30 30

31 Binary variables

32 Dissimilarity between Binary Variables
Example: gender is a symmetric attribute; the remaining attributes are asymmetric binary. Let the values Y and P be set to 1 and the value N be set to 0.
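A small sketch of both coefficients under the slide's coding (Y and P as 1, N as 0). The two records are illustrative values in the spirit of the textbook's patient example, and `binary_dissim` is just an assumed helper name:

```python
import numpy as np

def binary_dissim(x, y, asymmetric=False):
    """Dissimilarity between two binary vectors.

    symmetric  -> simple matching: (b + c) / (a + b + c + d)
    asymmetric -> Jaccard:         (b + c) / (a + b + c)
    where a = #(1,1), b = #(1,0), c = #(0,1), d = #(0,0).
    """
    x, y = np.asarray(x), np.asarray(y)
    a = np.sum((x == 1) & (y == 1))
    b = np.sum((x == 1) & (y == 0))
    c = np.sum((x == 0) & (y == 1))
    d = np.sum((x == 0) & (y == 0))
    return (b + c) / (a + b + c) if asymmetric else (b + c) / (a + b + c + d)

# Illustrative asymmetric attributes, coded with Y/P as 1 and N as 0.
jack = [1, 0, 1, 0, 0, 0]
mary = [1, 0, 1, 0, 1, 0]
print(binary_dissim(jack, mary, asymmetric=True))  # (0 + 1)/(2 + 0 + 1) = 0.33
```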

33 Nominal Variables A generalization of the binary variable in that it can take more than 2 states, e.g., red, yellow, blue, green Method 1: Simple matching m: # of matches, p: total # of variables Method 2: use a large number of binary variables creating a new binary variable for each of the M nominal states November 9, 2018 33

34 Nominal Variables
Examples: eye color, days of the week, religion, seasons, job title

35 Nominal Variables Find the Proximity Matrix?

36 Ordinal Variables Order is important, e.g., rank
Can be treated like interval-scaled variables: replace x_if by its rank r_if in {1, …, M_f}; map the range of each variable onto [0, 1] by replacing the i-th object in the f-th variable with z_if = (r_if − 1)/(M_f − 1); then compute the dissimilarity using methods for interval-scaled variables.
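A minimal sketch of the rank-mapping step, assuming the levels are supplied in their natural order; `ordinal_to_interval` is an illustrative name:

```python
import numpy as np

def ordinal_to_interval(values, ordered_levels):
    """Map ordinal values onto [0, 1]: z = (r - 1) / (M - 1),
    where r is the rank of the value and M the number of levels."""
    M = len(ordered_levels)
    rank = {level: r for r, level in enumerate(ordered_levels, start=1)}
    return np.array([(rank[v] - 1) / (M - 1) for v in values])

z = ordinal_to_interval(["fair", "good", "excellent"],
                        ordered_levels=["fair", "good", "excellent"])
print(z)  # [0.  0.5 1. ] -- now usable with interval-scaled measures
```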

37 Ordinal Variables Find the Proximity Matrix?

38 Interval-valued variables
Examples: temperature, weight, time, age, length

39 Interval-valued variables
Standardize the data. Calculate the mean absolute deviation: s_f = (1/n)(|x_1f − m_f| + |x_2f − m_f| + … + |x_nf − m_f|), where m_f = (1/n)(x_1f + x_2f + … + x_nf). Then calculate the standardized measurement (z-score): z_if = (x_if − m_f)/s_f. Using the mean absolute deviation is more robust than using the standard deviation.
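A minimal sketch of this standardization, following the formulas above; `standardize_mad` is an assumed helper name and the data is made-up:

```python
import numpy as np

def standardize_mad(X):
    """Standardize columns with the mean absolute deviation:
    s_f = mean(|x_if - m_f|),  z_if = (x_if - m_f) / s_f.
    More robust to outliers than the standard deviation,
    because deviations are not squared."""
    m = X.mean(axis=0)
    s = np.abs(X - m).mean(axis=0)
    return (X - m) / s

X = np.array([[1.0, 200.0], [2.0, 400.0], [3.0, 900.0]])
print(standardize_mad(X))
```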

40 Ratio-Scaled Variables
Ratio-scaled variable: a positive measurement on a nonlinear scale, approximately at exponential scale, such as Ae^(Bt) or Ae^(−Bt). Methods: treat them like interval-scaled variables (not a good choice! why?); apply a logarithmic transformation, y_if = log(x_if); treat them as continuous ordinal data and treat their rank as interval-scaled.
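A quick illustration of the logarithmic transformation (base 10 is an arbitrary choice):

```python
import numpy as np

x = np.array([2.0, 20.0, 200.0, 2000.0])  # growth on a roughly exponential scale
y = np.log10(x)                           # -> [0.301, 1.301, 2.301, 3.301]
# After the transform, equal steps in y correspond to equal ratios in x,
# so interval-scaled dissimilarity measures become reasonable.
```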

41 Ratio-Scaled Variables
Find the Proximity Matrix?

42 Variables of Mixed Types
A database may contain all six types of variables: symmetric binary, asymmetric binary, nominal, ordinal, interval, and ratio. One may use a weighted formula to combine their effects: d(i,j) = Σ_f δ_ij^(f) d_ij^(f) / Σ_f δ_ij^(f), where the indicator δ_ij^(f) is 0 when a measurement is missing (or for a 0–0 match on an asymmetric binary attribute) and 1 otherwise. If f is binary or nominal: d_ij^(f) = 0 if x_if = x_jf, and d_ij^(f) = 1 otherwise. If f is interval-based: use the normalized distance. If f is ordinal or ratio-scaled: compute ranks r_if and z_if = (r_if − 1)/(M_f − 1), and treat z_if as interval-scaled.
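A Gower-style sketch of the weighted formula, simplified by assuming no missing values (every δ_ij^(f) = 1) and only nominal and interval attributes; names and data are illustrative:

```python
def mixed_dissim(x, y, types, ranges):
    """Combined dissimilarity over attribute types:
    d(i,j) = sum_f d_ij^(f) / p, with each per-attribute term in [0, 1].
    `types` gives 'nominal' or 'interval' per attribute; `ranges` gives
    max - min per interval attribute for normalization."""
    terms = []
    for xf, yf, t, R in zip(x, y, types, ranges):
        if t == "nominal":            # also covers binary attributes
            terms.append(0.0 if xf == yf else 1.0)
        else:                         # interval: range-normalized distance
            terms.append(abs(xf - yf) / R)
    return sum(terms) / len(terms)

d = mixed_dissim(x=("red", 1.70), y=("blue", 1.80),
                 types=("nominal", "interval"), ranges=(None, 0.5))
print(d)  # (1 + 0.2) / 2 = 0.6
```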

43 Variables of Mixed Types
Find the Proximity Matrix?

44 Scale Conversion - Interval to Ordinal
(a) No substantive change necessary. (b) Substitution. (c) Equal-length categories. (d) Equal-membership categories. (g) One-dimensional hierarchical linkage methods. (h) Ward’s hierarchical clustering method. (i) Iterative improvement of a partition.

45 Scale Conversion - Interval to Nominal - Ordinal to Nominal
- Nominal to Ordinal: 1. Correlation with an interval variable. Let X be a nominal variable with g classes and Y be an interval variable, with n_i the number of observations in the i-th class and y_ij the j-th observation in the i-th class.

46 Scale Conversion - Nominal to Ordinal (continued):
2. Rank correlation and mean ranks. By Spearman's formula, r = 1 − 6Σd^2/(n(n^2 − 1)), so maximizing the rank correlation r is equivalent to minimizing the sum of squared rank differences Σd^2.

47 Scale Conversion - Ordinal to Interval
1- Mapping (Normalization) 2- Class ranks 3- Correlation

48 Categorization of Numerical Data
- Direct Categorization

49 Categorization of Numerical Data
- Direct Categorization (continued)

50 Categorization of Numerical Data
- Direct Categorization (continued)

51 Categorization of Numerical Data
- Direct Categorization (continued)

52 Categorization of Numerical Data
- Direct Categorization (continued)

53 Categorization of Numerical Data
- Cluster-based Categorization 1- k-means–based categorization

54 Categorization of Numerical Data
- Cluster-based Categorization (continued) 1- k-means–based categorization

55 Categorization of Numerical Data
- Cluster-based Categorization (continued) 1- k-means–based categorization

56 Categorization of Numerical Data
- Cluster-based Categorization (continued) 2- Least squares–based categorization

57 Categorization of Numerical Data
2- Least squares–based categorization (continued)

58 Categorization of Numerical Data
2- Least squares–based categorization (continued) Cutting indices:

59 Categorization of Numerical Data
Automatic Categorization

60 Categorization of Numerical Data
Automatic Categorization (continued)

61 Categorization of Numerical Data
Automatic Categorization (continued)

62 Categorization of Numerical Data
Automatic Categorization (continued)

63 Similarity and Dissimilarity Measures
Proximity Matrix

64 Similarity and Dissimilarity Measures
Proximity Graph

65 Similarity and Dissimilarity Measures
Scatter Matrix The trace of the scatter matrix

66 Similarity and Dissimilarity Measures
Scatter Matrix

67 Similarity and Dissimilarity Measures
Covariance Matrix

68 Similarity and Dissimilarity Measures
Covariance Matrix

69 Similarity and Dissimilarity Measures
Covariance Matrix The sample covariance matrix of D is defined to be the d × d matrix S = (1/(n − 1)) Σ_{i=1}^{n} (x_i − x̄)(x_i − x̄)^T

70 Similarity and Dissimilarity Measures
Measures for Numerical Data 1- Euclidean Distance

71 Similarity and Dissimilarity Measures
Measures for Numerical Data 2- Manhattan Distance

72 Similarity and Dissimilarity Measures
Measures for Numerical Data 2- Manhattan Distance

73 Similarity and Dissimilarity Measures
Measures for Numerical Data 3- Maximum Distance

74 Similarity and Dissimilarity Measures
Measures for Numerical Data 4- Minkowski Distance
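The Minkowski distance generalizes the three preceding measures, so one hedged sketch covers slides 70 through 74 (r = 1 gives Manhattan, r = 2 Euclidean, r → ∞ the maximum distance); the function name is illustrative:

```python
import numpy as np

def minkowski(x, y, r):
    """Minkowski distance: d(x, y) = (sum_k |x_k - y_k|^r)^(1/r).
    r = 1: Manhattan; r = 2: Euclidean; r -> inf: maximum distance."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    if np.isinf(r):
        return np.abs(x - y).max()
    return (np.abs(x - y) ** r).sum() ** (1.0 / r)

x, y = [0, 0], [3, 4]
print(minkowski(x, y, 1), minkowski(x, y, 2), minkowski(x, y, np.inf))
# 7.0 5.0 4.0
```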

75 Similarity and Dissimilarity Measures
Measures for Numerical Data 5- Mahalanobis Distance 6- Average Distance

76 Similarity and Dissimilarity Measures
Measures for Numerical Data 7- Chord Distance

77 Similarity and Dissimilarity Measures
Measures for Categorical Data 1- The Simple Matching Distance

78 Similarity and Dissimilarity Measures
Measures for Categorical Data 2- Other Matching Coefficients These coefficients also take into account the number of attributes on which the two records match in a "not applicable" category.

79 Similarity and Dissimilarity Measures
Measures for Categorical Data 2- Other Matching Coefficients

80 Similarity and Dissimilarity Measures
Measures for Binary Data

81 Similarity and Dissimilarity Measures
Measures for Binary Data

82 Similarity and Dissimilarity Measures
Measures for Binary Data

83 Similarity and Dissimilarity Measures
Measures for Mixed-type Data 1- A General Similarity Coefficient

84 Similarity and Dissimilarity Measures
Measures for Mixed-type Data 1- A General Similarity Coefficient

85 Similarity and Dissimilarity Measures
Measures for Mixed-type Data 1- A General Similarity Coefficient (see pages 80 and 85)

86 Similarity and Dissimilarity Measures
Measures for Mixed-type Data 2- A General Distance Coefficient Sections 6.6 and 6.7 are not required. From Section 6.9, we will take only 6.9.1.

87 Similarity and Dissimilarity Measures
Similarity and Dissimilarity Measures between Clusters 1- The Mean-based Distance

88 Similarity and Dissimilarity Measures
Similarity and Dissimilarity Measures between Clusters 2- The Nearest Neighbor Distance

89 Similarity and Dissimilarity Measures
Similarity and Dissimilarity Measures between Clusters 2- The Nearest Neighbor Distance

90 Similarity and Dissimilarity Measures
Similarity and Dissimilarity Measures between Clusters 3- The Farthest Neighbor Distance

91 Similarity and Dissimilarity Measures
Similarity and Dissimilarity Measures between Clusters 3- The Farthest Neighbor Distance

92 Similarity and Dissimilarity Measures
Similarity and Dissimilarity Measures between Clusters 4- The Average Neighbor Distance

93 Similarity and Dissimilarity Measures
Similarity and Dissimilarity Measures between Clusters 5- Lance-Williams Formula

94 Similarity and Dissimilarity Measures
Similarity and Dissimilarity between Variables 1- Pearson’s Correlation Coefficients

95 Hierarchical Clustering Techniques

96 Hierarchical Clustering Techniques
Representations of Hierarchical Clustering 1- n-tree

97 Hierarchical Clustering Techniques
Representations of Hierarchical Clustering 2- Dendrogram

98 Hierarchical Clustering Techniques
Representations of Hierarchical Clustering 2- Dendrogram

99 Hierarchical Clustering Techniques
Agglomerative Hierarchical Methods

100 Hierarchical Clustering Techniques

101 Hierarchical Clustering Techniques
For step 3 we have:

102 Hierarchical Clustering Techniques
How to Define Inter-Cluster Similarity (for points p1…p5 with their proximity matrix): MIN, MAX, Group Average, Distance Between Centroids, or other methods driven by an objective function (Ward’s Method uses squared error).
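A sketch of these options using SciPy's agglomerative clustering, on made-up 2-D points; the method names follow SciPy's convention ('single' = MIN, 'complete' = MAX, 'average' = group average, plus 'centroid' and 'ward'):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Two tight groups of made-up 2-D points.
X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.5],
              [5.0, 5.0], [5.0, 6.0], [6.0, 5.5]])

# Each method defines inter-cluster similarity differently:
# 'single' = MIN, 'complete' = MAX, 'average' = group average,
# 'centroid' = distance between centroids, 'ward' = squared-error criterion.
for method in ["single", "complete", "average", "centroid", "ward"]:
    Z = linkage(X, method=method)                    # agglomerative merge tree
    labels = fcluster(Z, t=2, criterion="maxclust")  # cut into two clusters
    print(f"{method:9s} -> {labels}")
```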

107 Hierarchical Clustering Techniques
Single-Linkage versus Complete-Linkage

108 Hierarchical Clustering Techniques
Single-Linkage versus Complete-Linkage

109 Hierarchical Clustering Techniques
Strength of MIN (figure: original points and the two clusters found): can handle non-elliptical shapes

110 Hierarchical Clustering Techniques
Limitations of MIN (figure: original points and the two clusters found): sensitive to noise and outliers

111 Hierarchical Clustering Techniques
Strength of MAX (figure: original points and the two clusters found): less susceptible to noise and outliers

112 Hierarchical Clustering Techniques
Limitations of MAX (figure: original points and the two clusters found): tends to break large clusters; biased towards globular clusters

113 Hierarchical Clustering Techniques
1- The Single-link Method

114 Hierarchical Clustering Techniques
1- The Single-link Method

115 Hierarchical Clustering Techniques
1- The Single-link Method

116 Hierarchical Clustering Techniques
1- The Single-link Method

117 Hierarchical Clustering Techniques
1- The Single-link Method

118 Hierarchical Clustering Techniques
1- The Single-link Method The dendrogram of this clustering is shown in the figure.

119 Hierarchical Clustering Techniques
2- The Complete Link Method

120 Hierarchical Clustering Techniques
2- The Complete Link Method

121 Hierarchical Clustering Techniques
2- The Complete Link Method

122 Hierarchical Clustering Techniques
2- The Complete Link Method

123 Hierarchical Clustering Techniques
2- The Complete Link Method The dendrogram of this clustering is shown in the figure.

124 Hierarchical Clustering Techniques
3- The Group Average Method

125 Hierarchical Clustering Techniques
3- The Group Average Method

126 Hierarchical Clustering Techniques
3- The Group Average Method

127 Hierarchical Clustering Techniques
3- The Group Average Method

128 Hierarchical Clustering Techniques
3- The Group Average Method The dendrogram of this clustering is shown in the figure.

129 Hierarchical Clustering Techniques
4- The Weighted Group Average Method

130 Hierarchical Clustering Techniques
4- The Weighted Group Average Method

131 Hierarchical Clustering Techniques
4- The Weighted Group Average Method The dendrogram of this clustering is shown in the figure.

132 Hierarchical Clustering Techniques
5- The Centroid Method

133 Hierarchical Clustering Techniques
5- The Centroid Method

134 Hierarchical Clustering Techniques
5- The Centroid Method

135 Hierarchical Clustering Techniques
5- The Centroid Method

136 Hierarchical Clustering Techniques
5- The Centroid Method The dendrogram of this clustering is shown in the figure.

137 Hierarchical Clustering Techniques
6- The Median Method

138 Hierarchical Clustering Techniques
6- The Median Method

139 Hierarchical Clustering Techniques
6- The Median Method The dendrogram of this clustering is shown in the figure.

140 Hierarchical Clustering Techniques
7- Ward’s Method

141 Hierarchical Clustering Techniques
7- Ward’s Method

142 Hierarchical Clustering Techniques
7- Ward’s Method

143 Hierarchical Clustering Techniques
7- Ward’s Method

144 Hierarchical Clustering Techniques
7- Ward’s Method

145 Hierarchical Clustering Techniques
7- Ward’s Method

146 Hierarchical Clustering Techniques
7- Ward’s Method

147 Hierarchical Clustering Techniques
7- Ward’s Method

148 Hierarchical Clustering Techniques
7- Ward’s Method

149 Hierarchical Clustering Techniques
7- Ward’s Method

150 Hierarchical Clustering Techniques
7- Ward’s Method The dendrogram of this clustering is shown in the figure.

151 Hierarchical Clustering Techniques
Problems and Limitations O(N3) time in many cases There are N steps and at each step the size, N2, proximity matrix must be updated and searched Complexity can be reduced to O(N2 log(N) ) time for some approaches 151

152 Hierarchical Clustering Techniques
Once a decision is made to combine two clusters, it cannot be undone. No objective function is directly minimized. Different schemes have problems with one or more of the following: sensitivity to noise and outliers; difficulty handling different-sized clusters and convex shapes; breaking large clusters.

153 K-means Clustering Partitional clustering approach
Each cluster is associated with a centroid (center point). Each point is assigned to the cluster with the closest centroid. The number of clusters, K, must be specified. The basic algorithm is very simple.
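A minimal NumPy sketch of the basic algorithm, assuming random initialization and Euclidean distance; empty clusters are sidestepped here by keeping the old centroid (a later slide discusses that case properly):

```python
import numpy as np

def kmeans(X, k, n_iter=100, seed=0):
    """Basic K-means: pick K initial centroids, then repeat
    (1) assign each point to the cluster with the closest centroid and
    (2) recompute each centroid as the mean of its assigned points,
    until the centroids stop moving."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]  # random init
    for _ in range(n_iter):
        # distances from every point to every centroid, shape (n, k)
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=-1)
        labels = dists.argmin(axis=1)                          # assignment step
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centroids[j]   # keep old centroid if a cluster empties
                        for j in range(k)])
        if np.allclose(new, centroids):                        # converged
            break
        centroids = new
    return labels, centroids

# Made-up data: two well-separated blobs.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.5, (20, 2)), rng.normal(5, 0.5, (20, 2))])
labels, centroids = kmeans(X, k=2)
```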

154 K-means Clustering – Details
Initial centroids are often chosen randomly; the clusters produced vary from one run to another. The centroid is (typically) the mean of the points in the cluster. ‘Closeness’ is measured by Euclidean distance, cosine similarity, correlation, etc. K-means will converge for the common similarity measures mentioned above, and most of the convergence happens in the first few iterations; often the stopping condition is relaxed to ‘until relatively few points change clusters’. Complexity is O(n * K * I * d), where n = number of points, K = number of clusters, I = number of iterations, d = number of attributes.

155 Two different K-means Clusterings
(figure: original points, an optimal clustering, and a sub-optimal clustering)

156 Importance of Choosing Initial Centroids

157 Importance of Choosing Initial Centroids

158 Evaluating K-means Clusters
The most common measure is the Sum of Squared Errors (SSE). For each point, the error is the distance to the nearest cluster centroid; to get SSE, we square these errors and sum them: SSE = Σ_{i=1}^{K} Σ_{x∈C_i} dist(m_i, x)^2, where x is a data point in cluster C_i and m_i is the representative point for cluster C_i. One can show that m_i corresponds to the center (mean) of the cluster. Given two clusterings, we can choose the one with the smallest error. One easy way to reduce SSE is to increase K, the number of clusters; yet a good clustering with smaller K can have a lower SSE than a poor clustering with higher K.
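A one-function sketch of SSE matching the formula above; it can be paired with the `kmeans` sketch from the earlier slide:

```python
import numpy as np

def sse(X, labels, centroids):
    """SSE = sum over clusters i of sum over x in C_i of dist(m_i, x)^2."""
    return sum(((X[labels == i] - m) ** 2).sum()
               for i, m in enumerate(centroids))

# e.g., with the kmeans sketch from the earlier slide:
# labels, centroids = kmeans(X, k=2)
# print(sse(X, labels, centroids))
```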

159 Importance of Choosing Initial Centroids …

160 Importance of Choosing Initial Centroids …

161 Problems with Selecting Initial Points
If there are K ‘real’ clusters, then the chance of selecting one centroid from each cluster is small, and it is relatively smaller when K is large. If the clusters are the same size, n, then the probability is K!n^K/(Kn)^K = K!/K^K. For example, if K = 10, then the probability is 10!/10^10 ≈ 0.00036. Sometimes the initial centroids will readjust themselves in the ‘right’ way, and sometimes they don’t. Consider an example of five pairs of clusters.

162 10 Clusters Example Starting with two initial centroids in one cluster of each pair of clusters

163 10 Clusters Example Starting with two initial centroids in one cluster of each pair of clusters

164 10 Clusters Example Starting with some pairs of clusters having three initial centroids, while others have only one.

165 10 Clusters Example Starting with some pairs of clusters having three initial centroids, while others have only one.

166 Solutions to Initial Centroids Problem
Multiple runs: helps, but probability is not on your side. Sample the data and use hierarchical clustering to determine initial centroids. Select more than k initial centroids and then select among these initial centroids the most widely separated. Postprocessing. Bisecting K-means: not as susceptible to initialization issues.

167 Handling Empty Clusters
The basic K-means algorithm can yield empty clusters. Several replacement strategies: choose the point that contributes most to SSE, or choose a point from the cluster with the highest SSE. If there are several empty clusters, the above can be repeated several times.

168 Pre-processing and Post-processing
Pre-processing: normalize the data; eliminate outliers. Post-processing: eliminate small clusters that may represent outliers; split ‘loose’ clusters, i.e., clusters with relatively high SSE; merge clusters that are ‘close’ and that have relatively low SSE.

169 Bisecting K-means Bisecting K-means algorithm
Variant of K-means that can produce a partitional or a hierarchical clustering

170 Bisecting K-means Example

171 Limitations of K-means
K-means has problems when clusters are of differing sizes, differing densities, or non-globular shapes. K-means also has problems when the data contains outliers.

172 Limitations of K-means: Differing Sizes
(figure: original points and the K-means result with 3 clusters)

173 Limitations of K-means: Differing Density
(figure: original points and the K-means result with 3 clusters)

174 Limitations of K-means: Non-globular Shapes
(figure: original points and the K-means result with 2 clusters)

175 Overcoming K-means Limitations
(figure: original points and the K-means clusters) One solution is to use many clusters: this finds parts of clusters, which then need to be put together.

176 Overcoming K-means Limitations
(figure: original points and the K-means clusters)

177 Overcoming K-means Limitations
(figure: original points and the K-means clusters)

178 K-means++
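The slide's figure is not reproduced in this transcript; as a reference point, here is a sketch of the standard k-means++ seeding (Arthur and Vassilvitskii), in which each new centroid is sampled with probability proportional to its squared distance from the nearest centroid chosen so far:

```python
import numpy as np

def kmeans_pp_init(X, k, seed=0):
    """k-means++ seeding: choose the first centroid uniformly at random,
    then sample each subsequent centroid with probability proportional
    to D(x)^2, the squared distance to the nearest centroid so far."""
    rng = np.random.default_rng(seed)
    centroids = [X[rng.integers(len(X))]]
    for _ in range(1, k):
        d2 = np.min([((X - c) ** 2).sum(axis=1) for c in centroids], axis=0)
        probs = d2 / d2.sum()                      # sampling weights D(x)^2
        centroids.append(X[rng.choice(len(X), p=probs)])
    return np.array(centroids)
```

These seeds would then replace the random initialization in the basic K-means sketch from slide 153.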

179 PAM Clustering

180 PAM Clustering

181 PAM Clustering (figure: the cost cases for swapping a medoid i with a non-medoid h, evaluated with respect to the other points j and t)

182 Soft K-means

183 Fuzzy C-means
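The slide's details are likewise not reproduced; as a reference point, a sketch of one update step of the standard fuzzy c-means formulation with fuzzifier m, where the membership of point i in cluster j is u_ij = 1 / Σ_k (d_ij/d_ik)^(2/(m−1)):

```python
import numpy as np

def fcm_step(X, C, m=2.0):
    """One fuzzy c-means update: recompute memberships u_ij from the
    current centroids C, then recompute centroids as u^m-weighted means."""
    # d[i, j] = distance from point i to centroid j (small guard avoids /0)
    d = np.linalg.norm(X[:, None, :] - C[None, :, :], axis=-1) + 1e-12
    # u_ij = 1 / sum_k (d_ij / d_ik)^(2/(m-1)); rows of U sum to 1
    U = 1.0 / ((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1))).sum(axis=2)
    Um = U ** m
    C_new = (Um.T @ X) / Um.sum(axis=0)[:, None]   # weighted centroids
    return U, C_new

# One iteration from two arbitrary starting centroids on made-up data:
rng = np.random.default_rng(0)
X = rng.normal(size=(10, 2))
U, C = fcm_step(X, X[:2].copy())
print(U.sum(axis=1))   # each point's memberships sum to 1
```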

184 Kernel K-means

185 Kernel K-means

186 Kernel K-means

187 Kernel K-means

188 Graph-Based Clustering Methods

189 Grid-Based Clustering Methods

190 Density-Based Clustering Methods

191 Model-Based Clustering Methods

192 Self-Organizing Map (SOM)

193 Self-Organizing Map (SOM)

194 Self-Organizing Map (SOM)

195 Self-Organizing Map (SOM)

196 Generative Topographic Mapping (GTM)
The GTM provides a principled alternative to the self-organizing map, resolving many of its associated theoretical problems. Defining a probability distribution over the latent space induces a corresponding probability distribution in the data space. Latent points are non-linearly projected into the data space.

197 Student Assignments. All the best!

