Hierarchical and Ensemble Clustering

Presentation transcript:

Hierarchical and Ensemble Clustering Ke Chen Reading: [7.8-7.10, EA], [25.5, KPM], [Fred & Jain, 2005] COMP24111 Machine Learning

Outline
- Introduction
- Cluster Distance Measures
- Agglomerative Algorithm
- Example and Demo
- Key Concepts in Hierarchical Clustering
- Clustering Ensemble via Evidence Accumulation
- Summary

Introduction
Hierarchical clustering approach
- A typical cluster analysis approach that partitions the data set sequentially
- Constructs nested partitions layer by layer by grouping objects into a tree of clusters, without needing to know the number of clusters in advance
- Uses a (generalised) distance matrix as the clustering criterion
Agglomerative vs. divisive
- Agglomerative: a bottom-up strategy; initially each data object is its own (atomic) cluster, then these atomic clusters are merged into larger and larger clusters
- Divisive: a top-down strategy; initially all objects are in one single cluster, which is then subdivided into smaller and smaller clusters
Clustering ensemble
- Uses multiple clustering results for robustness, overcoming the weaknesses of single clustering algorithms

Introduction: Illustration
Illustrative example: agglomerative vs. divisive clustering on the data set {a, b, c, d, e}.
[Figure: agglomerative clustering runs bottom-up from Step 0 (five singleton clusters) to Step 4 (one cluster containing all objects), while divisive clustering runs the same sequence top-down; the cluster distance used for merging and the termination condition are indicated on the diagram.]

Cluster Distance Measures
- Single link (min): the smallest distance between an element in one cluster and an element in the other, i.e. d(Ci, Cj) = min{ d(xip, xjq) }
- Complete link (max): the largest distance between an element in one cluster and an element in the other, i.e. d(Ci, Cj) = max{ d(xip, xjq) }
- Average: the average distance between elements in one cluster and elements in the other, i.e. d(Ci, Cj) = avg{ d(xip, xjq) }
Define d(C, C) = 0: the distance between a cluster and itself is zero.
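As a minimal sketch (not part of the lecture materials), all three linkage measures can be computed directly from the pairwise distances between two clusters; the function name and arguments below are illustrative.

```python
import numpy as np

def cluster_distance(Ci, Cj, link="single"):
    """Distance between two clusters, each given as an (m, d) array of points."""
    # All pairwise Euclidean distances between elements of Ci and elements of Cj.
    pair = np.linalg.norm(Ci[:, None, :] - Cj[None, :, :], axis=-1)
    if link == "single":       # smallest element-to-element distance
        return pair.min()
    if link == "complete":     # largest element-to-element distance
        return pair.max()
    return pair.mean()         # average over all element pairs
```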

Cluster Distance Measures
Example: given a data set of five objects characterised by a single continuous feature, with values a = 1, b = 2, c = 4, d = 5, e = 6, assume two clusters C1 = {a, b} and C2 = {c, d, e}. Use the Minkowski distance for the distance matrix; with a single feature, (|x - y|^p)^(1/p) = |x - y| regardless of p.
1. The distance matrix:
   a  b  c  d  e
a  0  1  3  4  5
b  1  0  2  3  4
c  3  2  0  1  2
d  4  3  1  0  1
e  5  4  2  1  0
2. The three cluster distances between C1 and C2: single link = 2 (between b and c), complete link = 5 (between a and e), average = 3.5. (A short check script follows below.)
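A quick check of the worked example, assuming the feature values a = 1, b = 2, c = 4, d = 5, e = 6 from the slide's table:

```python
import numpy as np

points = {"a": 1, "b": 2, "c": 4, "d": 5, "e": 6}   # single continuous feature
C1 = np.array([points["a"], points["b"]], dtype=float)
C2 = np.array([points["c"], points["d"], points["e"]], dtype=float)

# All pairwise distances between C1 and C2 (|x - y| for a single feature).
pair = np.abs(C1[:, None] - C2[None, :])

print("single link  :", pair.min())    # 2.0  (between b = 2 and c = 4)
print("complete link:", pair.max())    # 5.0  (between a = 1 and e = 6)
print("average      :", pair.mean())   # 3.5
```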

Agglomerative Algorithm
The agglomerative algorithm is carried out in three steps (a naive sketch follows below):
1. Convert all object features into a distance matrix.
2. Set each object as a cluster (so with N objects we start with N clusters).
3. Repeat until the number of clusters is one (or a known number of clusters is reached):
   - Merge the two closest clusters.
   - Update the distance matrix.
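The three steps map onto a straightforward, if inefficient, implementation. The sketch below is an illustration rather than the lecture's code; `link=min` gives single link and `link=max` complete link.

```python
import numpy as np

def agglomerative(X, k=1, link=min):
    """Naive agglomerative clustering sketch (illustrative, not the lecture's code).

    X    : (n, d) array of object features
    k    : stop when this many clusters remain
    link : min for single link, max for complete link
    """
    # Step 1: convert object features into a distance matrix.
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    # Step 2: start with each object in its own cluster.
    clusters = [[i] for i in range(len(X))]
    # Step 3: repeat until the requested number of clusters remains.
    while len(clusters) > k:
        # Find the two closest clusters under the chosen linkage.
        best = None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                d = link(D[i, j] for i in clusters[a] for j in clusters[b])
                if best is None or d < best[0]:
                    best = (d, a, b)
        _, a, b = best
        # Merge the two closest clusters; linkage distances are recomputed from
        # the original pairwise distances on the next pass (the "update" step).
        clusters[a] += clusters.pop(b)
    return clusters
```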

Example
Problem: cluster analysis with the agglomerative algorithm. The object features (data matrix) are converted into a distance matrix using the Euclidean distance. [The data matrix and the resulting distance matrix are shown as figures on the slide.]
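The slide's data and distance matrices appear only as figures. The coordinates below are an assumption, chosen so that the single-link merge distances match those quoted later (0.50, 0.71, 1.00, 1.41, 2.50); the snippet builds the Euclidean distance matrix with SciPy.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

# Assumed 2-D coordinates for objects A-F (not reproduced from the slide figure).
X = np.array([[1.0, 1.0],    # A
              [1.5, 1.5],    # B
              [5.0, 5.0],    # C
              [3.0, 4.0],    # D
              [4.0, 4.0],    # E
              [3.0, 3.5]])   # F

# Euclidean distance matrix used as the clustering criterion.
D = squareform(pdist(X, metric="euclidean"))
print(np.round(D, 2))
```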

Example: merge the two closest clusters (iteration 1)

Example: update the distance matrix (iteration 1)

Example: merge the two closest clusters (iteration 2)

Example: update the distance matrix (iteration 2)

Example: merge the two closest clusters and update the distance matrix (iteration 3)

Example: merge the two closest clusters and update the distance matrix (iteration 4)

Example: final result (the termination condition is met)

Key Concepts in Hierarchical Clustering
Dendrogram tree representation
- In the beginning we have 6 clusters: A, B, C, D, E and F.
- We merge clusters D and F into (D, F) at distance 0.50.
- We merge clusters A and B into (A, B) at distance 0.71.
- We merge clusters E and (D, F) into ((D, F), E) at distance 1.00.
- We merge clusters ((D, F), E) and C into (((D, F), E), C) at distance 1.41.
- We merge clusters (((D, F), E), C) and (A, B) into ((((D, F), E), C), (A, B)) at distance 2.50.
- The last cluster contains all the objects, which concludes the computation.
In a dendrogram, the horizontal axis indexes all objects in the data set, while the vertical axis expresses the lifetime of every possible cluster formation. The lifetime of an individual cluster is the distance interval from the moment the cluster is created to the moment it disappears by merging with other clusters.
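Under the same assumed coordinates as in the earlier sketch, SciPy reproduces this merge sequence and draws the dendrogram (an illustration; the lecture's own figure is not shown here).

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import linkage, dendrogram
from scipy.spatial.distance import pdist

X = np.array([[1, 1], [1.5, 1.5], [5, 5], [3, 4], [4, 4], [3, 3.5]], dtype=float)
Z = linkage(pdist(X), method="single")   # merges at 0.50, 0.71, 1.00, 1.41, 2.50
dendrogram(Z, labels=list("ABCDEF"))
plt.ylabel("cluster distance (lifetime)")
plt.show()
```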

Key Concepts in Hierarchical Clustering
Lifetime vs. K-cluster lifetime
- Lifetime (of an individual cluster): the distance between the point at which a cluster is created and the point at which it disappears (merges with other clusters). E.g. the lifetimes of A, B, C, D, E and F are 0.71, 0.71, 1.41, 0.50, 1.00 and 0.50 respectively, and the lifetime of (A, B) is 2.50 - 0.71 = 1.79.
- K-cluster lifetime: the distance from the point at which K clusters emerge to the point at which they vanish (due to the reduction to K-1 clusters). E.g. the 5-cluster lifetime is 0.71 - 0.50 = 0.21, the 4-cluster lifetime is 1.00 - 0.71 = 0.29, the 3-cluster lifetime is 1.41 - 1.00 = 0.41 and the 2-cluster lifetime is 2.50 - 1.41 = 1.09.
Note the difference between the lifetime of an individual cluster and the K-cluster lifetime.
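The K-cluster lifetimes can be read off the successive merge heights in the linkage matrix. A minimal sketch, again assuming the coordinates used above:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage
from scipy.spatial.distance import pdist

X = np.array([[1, 1], [1.5, 1.5], [5, 5], [3, 4], [4, 4], [3, 3.5]], dtype=float)
Z = linkage(pdist(X), method="single")
heights = Z[:, 2]            # merge distances: 0.50, 0.71, 1.00, 1.41, 2.50
n = len(X)

# K-cluster lifetime: from the merge at which K clusters appear (n-K merges done)
# to the merge at which they vanish (n-K+1 merges done).
for K in range(n - 1, 1, -1):
    print(f"{K}-cluster lifetime = {heights[n - K] - heights[n - K - 1]:.2f}")
# Output: 0.21, 0.29, 0.41, 1.09 -> maximum at K = 2
```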

Demo
Agglomerative clustering demo.

Relevant Issues
How to determine the number of clusters
- If the number of clusters is known, the termination condition is given directly.
- The K-cluster lifetime is the range of threshold values on the dendrogram that leads to the identification of exactly K clusters.
- Heuristic rule: cut the dendrogram at the maximum K-cluster lifetime to find a "proper" K (see the sketch below).
Major weaknesses of agglomerative clustering methods
- Can never undo what was done previously.
- Sensitive to the cluster distance measure and to noise/outliers.
- Less efficient: O(n^2 log n), where n is the total number of objects.
Several variants attempt to overcome these weaknesses
- BIRCH: scalable to large data sets.
- ROCK: clustering categorical data.
- CHAMELEON: hierarchical clustering using dynamic modelling.
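Cutting the tree anywhere inside the widest K-cluster lifetime interval yields the chosen K; with SciPy this is a single `fcluster` call. Continuing with the assumed example, the widest interval is 1.41 to 2.50, i.e. K = 2.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

X = np.array([[1, 1], [1.5, 1.5], [5, 5], [3, 4], [4, 4], [3, 3.5]], dtype=float)
Z = linkage(pdist(X), method="single")

# Any threshold between 1.41 and 2.50 (the maximum K-cluster lifetime) gives K = 2.
labels = fcluster(Z, t=2.0, criterion="distance")
print(labels)   # two flat clusters: {A, B} and {C, D, E, F}
```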

Clustering Ensemble
Motivation
- A single clustering algorithm may be affected by various factors:
  - Sensitive to initialisation and to noise/outliers, e.g. K-means is sensitive to its initial centroids.
  - Sensitive to the distance metric, yet a proper one is hard to find.
  - Hard to decide on a single best algorithm that can handle all types of cluster shapes and sizes.
- An effective treatment: a clustering ensemble, which utilises the results obtained by multiple clustering analyses for robustness.

Clustering Ensemble
Clustering ensemble via evidence accumulation (Fred & Jain, 2005): a simple clustering ensemble algorithm that overcomes the main weaknesses of individual clustering methods by exploiting their synergy via evidence accumulation.
Algorithm summary (a sketch follows below):
1. Initial clustering analysis using either different clustering algorithms or a single clustering algorithm run under different conditions, leading to multiple partitions; e.g. K-means with various initial centroid settings and different K, or the agglomerative algorithm with different distance metrics, forced to terminate with different numbers of clusters.
2. Convert the clustering results on the different partitions into binary "distance" matrices.
3. Evidence accumulation: form a collective "distance" matrix from all the binary "distance" matrices.
4. Apply a hierarchical clustering algorithm (with a proper cluster distance measure) to the collective "distance" matrix and use the maximum K-cluster lifetime to decide K.
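A compact sketch of the four steps, assuming scikit-learn's K-means for the initial partitions and SciPy for the final hierarchical step; the function name, parameters and defaults below are illustrative rather than the paper's reference implementation.

```python
import numpy as np
from sklearn.cluster import KMeans
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def evidence_accumulation(X, ks=range(2, 12), runs=3):
    """Clustering ensemble via evidence accumulation (sketch after Fred & Jain, 2005)."""
    n = len(X)
    co_assoc = np.zeros((n, n))
    n_partitions = 0
    # Step 1: multiple partitions from K-means run under different conditions.
    for k in ks:
        for seed in range(runs):
            labels = KMeans(n_clusters=k, n_init=1, random_state=seed).fit_predict(X)
            # Steps 2-3: accumulate "same cluster" evidence over all partitions.
            co_assoc += labels[:, None] == labels[None, :]
            n_partitions += 1
    # Collective "distance": fraction of partitions separating each pair of objects.
    D = 1.0 - co_assoc / n_partitions
    np.fill_diagonal(D, 0.0)
    # Step 4: single-link agglomerative clustering on the collective matrix,
    # choosing K by the maximum K-cluster lifetime over the merge heights.
    Z = linkage(squareform(D, checks=False), method="single")
    gaps = np.diff(Z[:, 2])
    K = n - 1 - int(np.argmax(gaps))
    return fcluster(Z, t=K, criterion="maxclust")
```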

Clustering Ensemble
Example: converting a clustering result into a binary "distance" matrix. [Figure: four objects A, B, C, D grouped into Cluster 1 (C1) and Cluster 2 (C2), with the corresponding binary "distance" matrix; an entry is 0 for a pair of objects in the same cluster and 1 otherwise.]

Clustering Ensemble
Example: converting a second clustering result into a binary "distance" matrix. [Figure: the same objects A, B, C, D grouped into Cluster 1 (C1), Cluster 2 (C2) and Cluster 3 (C3), with the corresponding binary "distance" matrix.]

Clustering Ensemble
Evidence accumulation: form the collective "distance" matrix by combining (averaging) the binary "distance" matrices of all partitions.
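As an illustration of the conversion and accumulation steps (the cluster memberships below are hypothetical, not those in the slide figures):

```python
import numpy as np

def binary_distance(labels):
    """Binary 'distance' matrix of one partition: 0 if two objects share a cluster, 1 otherwise."""
    labels = np.asarray(labels)
    return (labels[:, None] != labels[None, :]).astype(float)

# Two hypothetical partitions of the objects A, B, C, D.
P1 = binary_distance([1, 1, 2, 2])      # partition 1: {A, B}, {C, D}
P2 = binary_distance([1, 2, 2, 3])      # partition 2: {A}, {B, C}, {D}

# Evidence accumulation: the collective "distance" matrix is their average.
collective = (P1 + P2) / 2
print(collective)
```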

Clustering Ensemble
Application to a "non-convex" data set (a usage sketch follows below):
- Data set of 400 data points.
- Initial clustering analysis: K-means with K = 2, ..., 11 and 3 initial settings per K, giving 30 partitions in total.
- Convert the clustering results to binary "distance" matrices and form the collective "distance" matrix.
- Apply the agglomerative algorithm (single link) to the collective "distance" matrix.
- Cut the dendrogram at the maximum K-cluster lifetime to decide K.
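A usage sketch of this pipeline on a non-convex data set, reusing the `evidence_accumulation` function sketched earlier; `make_moons` is a stand-in for the slide's 400-point data set, which is not reproduced here.

```python
import numpy as np
from sklearn.datasets import make_moons

# 400-point non-convex stand-in data set (two interleaved half-moons).
X, _ = make_moons(n_samples=400, noise=0.05, random_state=0)

# 30 K-means partitions (K = 2..11, 3 seeds each), then single link on the
# collective "distance" matrix, with K chosen by maximum K-cluster lifetime.
labels = evidence_accumulation(X, ks=range(2, 12), runs=3)
print(np.bincount(labels)[1:])   # sizes of the recovered clusters
```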

Summary
- The hierarchical (agglomerative) algorithm is a sequential clustering algorithm: it uses a distance matrix to construct a tree of clusters (dendrogram) and gives a hierarchical representation without needing to know the number of clusters in advance (a termination condition can be set when the number of clusters is known).
- Major weaknesses of agglomerative clustering methods: it can never undo what was done previously, it is sensitive to the cluster distance measure and to noise/outliers, and it is less efficient, O(n^2 log n), where n is the total number of objects.
- Clustering ensemble based on evidence accumulation: initial clustering under different conditions (e.g. K-means with different K and initialisations), evidence accumulation into a collective "distance" matrix, then the agglomerative algorithm applied to the collective "distance" matrix with the maximum K-cluster lifetime used to choose K.
Online tutorial on how to use hierarchical clustering functions in Matlab: https://www.youtube.com/watch?v=aYzjenNNOcc