Presentation on theme: "Clustering Algorithms Sunida Ratanothayanon. What is Clustering?" — Presentation transcript:

1 Clustering Algorithms Sunida Ratanothayanon

2 What is Clustering?

3 Clustering Clustering is a classification technique that divides data into groups in a meaningful and useful way. It is an unsupervised classification technique: the groups are discovered from the data rather than from predefined class labels.

5 Outline K-Means Algorithm Hierarchical Clustering Algorithm

6 K-Means Algorithm A partitional clustering algorithm that produces k clusters (the number k is specified by the user). Each cluster has a cluster center called the centroid. The algorithm iteratively groups the data into k clusters based on a distance function.

7 K-Means Algorithm The centroid is obtained from the mean of all data points in the cluster. The algorithm stops when the centers no longer change.
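
Before the worked example, here is a minimal sketch of this loop in Python (numpy assumed; the function name kmeans and its signature are ours, not from the slides):

    import numpy as np

    def kmeans(points, centers, max_iters=100):
        # Minimal k-means sketch: assign each point to its nearest center,
        # recompute each center as the mean of its cluster, and stop when
        # the centers no longer change.
        points = np.asarray(points, dtype=float)
        centers = np.asarray(centers, dtype=float)
        for _ in range(max_iters):
            # Euclidean distance of every point to every center.
            dists = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
            labels = dists.argmin(axis=1)
            # New center = mean of the points assigned to it (assumes no
            # cluster goes empty, which holds for the example below).
            new_centers = np.array([points[labels == k].mean(axis=0)
                                    for k in range(len(centers))])
            if np.allclose(new_centers, centers):
                break
            centers = new_centers
        return centers, labels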

8 A numerical example

9 K-Means example We have five data points with two attributes and want to group them into k = 2 clusters.

Data Point   x1   x2
    1        22   21
    2        19   20
    3        18   22
    4         1    3
    5         4    2

10 K-Means example Plotting the five data points shows two visually separate groups: three points near (20, 21) and two near (2.5, 2.5).

11 K-Means example (1st iteration)
Step 1: Choose k and the initial centers. With k = 2, we take C1 = (18, 22) and C2 = (4, 2) (data points 3 and 5) as the initial centers.
Step 2: Compute the cluster centers. For the first iteration, C1 and C2 are already defined.
Step 3: Find the Euclidean distance of each data point from each center and assign each data point to a cluster.

12 K-Means example (1st iteration)
Step 3 (cont.): Distance table for all data points:

Data Point   C1 = (18, 22)   C2 = (4, 2)
(22, 21)          4.12          26.17
(19, 20)          2.24          23.43
(18, 22)          0.00          24.41
(1, 3)           25.50           3.16
(4, 2)           24.41           0.00

Then we assign each data point to a cluster by comparing its distances to the centers; each point goes to its closest cluster. Here points 1-3 go to cluster 1 and points 4-5 go to cluster 2.
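
This table is quick to verify in code (a check only; numpy assumed):

    import numpy as np

    points = np.array([[22, 21], [19, 20], [18, 22], [1, 3], [4, 2]], dtype=float)
    centers = np.array([[18, 22], [4, 2]], dtype=float)
    dists = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
    print(dists.round(2))
    # [[ 4.12 26.17]
    #  [ 2.24 23.43]
    #  [ 0.   24.41]
    #  [25.5   3.16]
    #  [24.41  0.  ]]
    print(dists.argmin(axis=1))  # [0 0 0 1 1] -> points 1-3 in cluster 1, points 4-5 in cluster 2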

13 K-Means example (2nd iteration)
Step 2: Compute the new cluster centers. The members of cluster 1 are (22, 21), (19, 20), and (18, 22); averaging them gives C1 = (19.7, 21). The members of cluster 2 are (1, 3) and (4, 2); averaging them gives C2 = (2.5, 2.5).
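
The new centers are just the column means of each cluster's members (numpy assumed):

    import numpy as np

    cluster1 = np.array([[22, 21], [19, 20], [18, 22]], dtype=float)
    cluster2 = np.array([[1, 3], [4, 2]], dtype=float)
    print(cluster1.mean(axis=0))  # [19.66666667 21.] -> approximately (19.7, 21)
    print(cluster2.mean(axis=0))  # [2.5 2.5]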

14 K-Means example (2nd iteration)
Step 3: Find the Euclidean distance of each data point from the new centers and assign each data point to a cluster. Distance table for all data points with the new centers:

Data Point   C1' = (19.7, 21)   C2' = (2.5, 2.5)
(22, 21)           2.30              26.88
(19, 20)           1.22              24.05
(18, 22)           1.97              24.91
(1, 3)            25.96               1.58
(4, 2)            24.65               1.58

Each data point is again assigned to its closest cluster. The assignments do not change (points 1-3 in cluster 1, points 4-5 in cluster 2), but the centers moved in this iteration, so we repeat steps 2 and 3.

15 K-Means example (3rd iteration)
Step 2: Compute the new cluster centers. The members of cluster 1 are still (22, 21), (19, 20), and (18, 22), so C1 = (19.7, 21); the members of cluster 2 are still (1, 3) and (4, 2), so C2 = (2.5, 2.5). The centers are unchanged.

16 K-Means example (3rd iteration)
Step 3: The distance table with C1'' = (19.7, 21) and C2'' = (2.5, 2.5) is identical to the previous one, so every point keeps its assignment and the centers remain the same. Stop the algorithm. The final clusters are {(22, 21), (19, 20), (18, 22)} and {(1, 3), (4, 2)}.
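
Running the sketch from slide 7 on this data reproduces the walk-through (variable names are ours):

    import numpy as np

    data = np.array([[22, 21], [19, 20], [18, 22], [1, 3], [4, 2]], dtype=float)
    init = np.array([[18, 22], [4, 2]], dtype=float)
    centers, labels = kmeans(data, init)  # kmeans() as sketched after slide 7
    print(centers.round(2))  # [[19.67 21.  ]
                             #  [ 2.5   2.5 ]]
    print(labels)            # [0 0 0 1 1]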

17 Hierarchical Clustering Algorithm Produces a nested sequence of clusters, organized like a tree, which allows clusters to have subclusters. The individual data points at the bottom of the tree are called "singleton clusters".

18 Hierarchical Clustering Algorithm Agglomerative method: the tree is built from the bottom level up, merging the nearest pair of clusters at each level to go one level up. This continues until all the data points are merged into a single cluster.
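
A minimal sketch of this agglomerative loop in Python (numpy assumed; the function name and the label-joining scheme are ours). The distance between a merged cluster and any other cluster is taken as the average of the two merged clusters' distances, which is the update rule the worked example below uses (WPGMA, scipy's method='weighted'):

    import numpy as np

    def agglomerative(points, labels):
        # Pairwise Euclidean distances; the diagonal is set to infinity so a
        # cluster is never matched with itself.
        d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
        np.fill_diagonal(d, np.inf)
        clusters = list(labels)
        merges = []
        while len(clusters) > 1:
            i, j = divmod(int(d.argmin()), d.shape[0])  # closest pair
            i, j = min(i, j), max(i, j)
            merges.append((clusters[i], clusters[j], float(d[i, j])))
            # Distances from the merged cluster: average of the two old rows.
            row = (d[i] + d[j]) / 2
            d[i], d[:, i] = row, row
            d = np.delete(np.delete(d, j, axis=0), j, axis=1)
            clusters[i] += "&" + clusters[j]
            del clusters[j]
        return merges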

19 A numerical example

20 Hierarchical Clustering example We have five data points with 3 attributes:

Data Point   x1   x2   x3
    A         9    3    7
    B        10    2    9
    C         1    9    4
    D         6    5    5
    E         1   10    3

21 Hierarchical Clustering example (1st iteration)
Step 1: Calculate the Euclidean distance between each pair of points. We obtain the following distance table:

             A (9,3,7)   B (10,2,9)   C (1,9,4)   D (6,5,5)   E (1,10,3)
A (9,3,7)       0           2.45        10.44        4.12        11.36
B (10,2,9)      -            0          12.45        6.40        13.45
C (1,9,4)       -            -            0          6.48         1.41
D (6,5,5)       -            -            -            0          7.35
E (1,10,3)      -            -            -            -            0
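
The full distance matrix is easy to verify (numpy assumed; scipy's pdist/squareform would work equally well):

    import numpy as np

    pts = np.array([[9, 3, 7], [10, 2, 9], [1, 9, 4], [6, 5, 5], [1, 10, 3]],
                   dtype=float)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
    print(d.round(2))
    # [[ 0.    2.45 10.44  4.12 11.36]
    #  [ 2.45  0.   12.45  6.4  13.45]
    #  [10.44 12.45  0.    6.48  1.41]
    #  [ 4.12  6.4   6.48  0.    7.35]
    #  [11.36 13.45  1.41  7.35  0.  ]]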

22 Hierarchical Clustering example (1st iteration)
Step 2: Form the tree. Consider the most similar pair of data points in the distance table: C and E, at distance 1.41. Merging them gives the first cluster, C&E. Repeat steps 1 and 2 until all data points are merged into a single cluster.

23 Hierarchical Clustering example (2nd iteration)
Step 1: Redraw the distance table, treating the merged pair C&E as a single entity. The distance from C&E to A is the average of the C-to-A and E-to-A distances in the previous table: avg(10.44, 11.36) = 10.90. The other C&E entries are obtained the same way. (C&E has centroid (1, 9.5, 3.5), but the distances here are averages of the original pairwise distances, not centroid distances.)

             A (9,3,7)   B (10,2,9)   D (6,5,5)   C&E
A (9,3,7)       0           2.45         4.12      10.90
B (10,2,9)      -            0           6.40      12.95
D (6,5,5)       -            -            0         6.92
C&E             -            -            -          0
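
The same update in code, continuing from the distance matrix d above (indices 2 and 4 are C and E, index 0 is A):

    # Distance from the merged cluster C&E to A: average of the C-to-A
    # and E-to-A entries of the original table.
    print(round((d[2, 0] + d[4, 0]) / 2, 2))  # 10.9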

24 Hierarchical Clustering example (2nd iteration)
Step 2: Form the tree. The most similar pair in the table is now A and B, at distance 2.45. Merging them gives the second cluster, A&B. Repeat steps 1 and 2 until all data points are merged into a single cluster.

25 Hierarchical Clustering example (3rd iteration)
Step 1: Redraw the distance table with the merged entities C&E and A&B. From the previous table we obtain the distances for the new table: A&B to D is avg(4.12, 6.40) = 5.26, and A&B to C&E is avg(10.90, 12.95) = 11.93.

         A&B   D (6,5,5)   C&E
A&B       0      5.26      11.93
D         -       0         6.92
C&E       -       -          0

26 Hierarchical Clustering example (3rd iteration)
Step 2: Form the tree. The most similar pair is A&B and D, at distance 5.26; merging them gives the new cluster A&B&D. Repeat steps 1 and 2 until all data points are merged into a single cluster.

27 Hierarchical Clustering example (4th iteration)
Step 1: Redraw the distance table with the merged entities A&B&D and C&E. From the previous table, the distance from A&B&D to C&E is avg(11.93, 6.92) = 9.42.

         A&B&D   C&E
A&B&D      0     9.42
C&E        -      0

28 Hierarchical Clustering example (4th iteration)
Step 2: Form the tree. Only one pair remains, so no further recalculation is needed: we merge all data points into the single cluster A&B&D&C&E, complete the tree, and stop the algorithm.
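
Running the agglomerative sketch from slide 18 on the pts array above reproduces the whole merge sequence; scipy.cluster.hierarchy.linkage(pts, method='weighted') should yield the same merge heights:

    for a, b, dist in agglomerative(pts, ["A", "B", "C", "D", "E"]):
        print(f"merge {a} + {b} at distance {dist:.2f}")
    # merge C + E at distance 1.41
    # merge A + B at distance 2.45
    # merge A&B + D at distance 5.26
    # merge A&B&D + C&E at distance 9.42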

29 Conclusion Two major clustering algorithms. K-Means algorithm: iteratively groups data into k clusters based on a distance function; the number k is specified by the user. Hierarchical Clustering algorithm: produces a nested sequence of clusters, organized like a tree; the tree is built from the bottom level up, merging the nearest pair of clusters at each level, until all the data points are merged into a single cluster.


31 Thank you

