Clustering
Paolo Ferragina
Dipartimento di Informatica, Università di Pisa
This is a mix of slides taken from several presentations, plus my touch!

Objectives of Cluster Analysis
 Finding groups of objects such that the objects in a group will be similar (or related) to one another and different from (or unrelated to) the objects in other groups
 Inter-cluster distances are maximized; intra-cluster distances are minimized (competing objectives)
 The commonest form of unsupervised learning

Yahoo! Hierarchy isn’t clustering but is the kind of output you want from clustering
[Figure: a slice of the Yahoo! directory tree — top-level categories such as agriculture, biology, physics, CS, space; lower levels such as dairy, crops, agronomy, forestry, AI, HCI, craft, missions, botany, evolution, cell, magnetism, relativity, courses]

Google News: automatic clustering gives an effective news presentation metaphor

For improving search recall
 Cluster hypothesis: documents in the same cluster behave similarly with respect to relevance to information needs
 Therefore, to improve search recall:
 Cluster the docs in the corpus a priori
 When a query matches a doc D, also return other docs in the cluster containing D
 The hope: the query “car” will also return docs containing “automobile” (Sec. 16.1)
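
A minimal sketch of how this could look at query time; the search function, the cluster maps, and their names are hypothetical stand-ins for an existing IR system, not anything defined in these slides.

```python
# Sketch: expand the result set of a query with the cluster mates of each hit.
# `search(query)` is a hypothetical function returning matching doc ids;
# `cluster_of[d]` maps a doc id to its precomputed cluster id;
# `docs_in_cluster[c]` lists the doc ids in cluster c.

def search_with_clusters(query, search, cluster_of, docs_in_cluster):
    hits = search(query)                                      # e.g., docs containing "car"
    expanded = set(hits)
    for d in hits:
        expanded |= set(docs_in_cluster[cluster_of[d]])       # cluster mates may contain "automobile"
    return expanded
```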

For better visualization/navigation of search results (Sec. 16.1)

Issues for clustering (Sec. 16.2)
 Representation for clustering
 Document representation: vector space? normalization?
 Need a notion of similarity/distance
 How many clusters?
 Fixed a priori? Completely data driven?

Notion of similarity/distance
 Ideal: semantic similarity
 Practical: term-statistical similarity
 Docs as vectors
 We will use cosine similarity
 For many algorithms, it is easier to think in terms of a distance (rather than a similarity) between docs
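
A small sketch of the cosine similarity (and a derived distance) assumed in the rest of these slides; the helper names and the toy vectors are mine, not from the original deck.

```python
import numpy as np

def cosine_sim(u, v):
    """Cosine similarity between two term-weight vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def cosine_dist(u, v):
    """A distance-like quantity derived from the cosine (0 for identical directions)."""
    return 1.0 - cosine_sim(u, v)

# Toy tf-idf-like vectors for two docs
d1, d2 = np.array([0.2, 0.0, 0.7]), np.array([0.1, 0.5, 0.6])
print(cosine_sim(d1, d2), cosine_dist(d1, d2))
```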

Clustering Algorithms
 Flat algorithms
 Create a set of clusters
 Usually start with a random (partial) partitioning and refine it iteratively
 Example: K-means clustering
 Hierarchical algorithms
 Create a hierarchy of clusters (dendrogram)
 Bottom-up, agglomerative
 Top-down, divisive

Hard vs. soft clustering
 Hard clustering: each document belongs to exactly one cluster
 More common and easier to do
 Soft clustering: each document can belong to more than one cluster
 Makes more sense for applications like creating browsable hierarchies
 News is a good example; search results are another

Flat & Partitioning Algorithms  Given: a set of n documents and the number K  Find: a partition in K clusters that optimizes the chosen partitioning criterion  Globally optimal  Intractable for many objective functions  Ergo, exhaustively enumerate all partitions  Locally optimal  Effective heuristic methods: K-means and K-medoids algorithms

K-Means (Sec. 16.4)
 Assumes documents are real-valued vectors
 Clusters are based on centroids (aka the center of gravity, or mean) of the points in a cluster c
 Reassignment of instances to clusters is based on distance to the current cluster centroids
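
The centroid formula on this slide did not survive extraction; the standard definition, consistent with the μ(c_j) update used in the algorithm below, is reconstructed here.

```latex
\mu(c) \;=\; \frac{1}{|c|} \sum_{d \in c} \vec{d}
\qquad \text{(the centroid of cluster } c \text{ is the mean of its document vectors)}
```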

K-Means Algorithm (Sec. 16.4)
Select K random docs {s_1, s_2, …, s_K} as seeds.
Until clustering converges (or other stopping criterion):
  For each doc d_i:
    Assign d_i to the cluster c_r such that dist(d_i, s_r) is minimal.
  For each cluster c_j:
    s_j = μ(c_j)
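
A compact, runnable sketch of the pseudocode above using Euclidean distance; the seed handling, the empty-cluster rule, and the convergence test are my own choices, not prescribed by the slides.

```python
import numpy as np

def kmeans(X, K, max_iters=100, seed=0):
    """Plain K-means. X: (N, M) array of doc vectors. Returns (assignments, centroids)."""
    X = np.asarray(X, dtype=float)
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=K, replace=False)]   # K random docs as seeds
    assign = None
    for _ in range(max_iters):
        # Assignment step: each doc goes to its closest centroid
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)   # (N, K)
        new_assign = dists.argmin(axis=1)
        if assign is not None and np.array_equal(new_assign, assign):
            break                                               # partition unchanged: converged
        assign = new_assign
        # Update step: each centroid becomes the mean of its cluster
        for j in range(K):
            if np.any(assign == j):                             # keep the old centroid if empty
                centroids[j] = X[assign == j].mean(axis=0)
    return assign, centroids
```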

K-Means Example (K=2) (Sec. 16.4)
[Figure: successive iterations on a small 2-D point set — pick seeds, reassign clusters, compute centroids (marked x), reassign clusters, compute centroids, reassign clusters … converged!]

Termination conditions (Sec. 16.4)
 Several possibilities, e.g.:
 A fixed number of iterations
 Doc partition unchanged
 Centroid positions don’t change

Convergence (Sec. 16.4)
 Why should the K-means algorithm ever reach a fixed point?
 K-means is a special case of a general procedure known as the Expectation Maximization (EM) algorithm
 EM is known to converge
 The number of iterations could be large, but in practice it usually isn’t

Convergence of K-Means (Sec. 16.4)
 Define the goodness measure of cluster c as the sum of squared distances from the cluster centroid:
 G(c, s) = Σ_{d_i ∈ c} (d_i − s_c)²
 G(C, s) = Σ_c G(c, s)
 Reassignment monotonically decreases G: it is a coordinate-descent algorithm (optimize one component at a time)
 At any step we have some value for G(C, s):
 1) Fix s, optimize C: assign each d to its closest centroid, so G(C′, s) < G(C, s)
 2) Fix C′, optimize s: take the new centroids, so G(C′, s′) < G(C′, s) < G(C, s)
 The new cost is smaller than the original one ⇒ convergence to a local minimum
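
A tiny helper, added here as an illustration, that computes G(C, s) so one can check on real data that it never increases between iterations of the K-means sketch above.

```python
import numpy as np

def goodness(X, assign, centroids):
    """G(C, s): sum, over clusters, of squared distances from each doc to its centroid."""
    return float(sum(np.sum((X[assign == j] - centroids[j]) ** 2)
                     for j in range(len(centroids))))
```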

Time Complexity (Sec. 16.4)
 There are K centroids; each doc/centroid consists of M dimensions
 Computing the distance between two vectors takes O(M) time
 Reassigning clusters: each doc is compared with all centroids, O(KNM) time
 Computing centroids: each doc gets added once to some centroid, O(NM) time
 Assuming these two steps are each done once per iteration, for I iterations: O(IKNM)

Seed Choice (Sec. 16.4)
 Results can vary based on random seed selection
 Some seeds can result in a poor convergence rate, or convergence to sub-optimal clusterings
 Select good seeds using a heuristic:
 Pick the doc least similar to any existing centroid
 Or pick according to a probability distribution that favors docs far from the current centroids (as in k-means++ seeding)
[Figure: example showing sensitivity to seeds, with points A–F — if you start with B and E as centroids you converge to {A,B,C} and {D,E,F}; if you start with D and F you converge to {A,B,D,E} and {C,F}]
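
A sketch of the distance-weighted seeding heuristic in the spirit of k-means++; the exact weighting (squared distance to the nearest already-chosen seed) is an assumption I am making, since the slide does not pin it down.

```python
import numpy as np

def pick_seeds(X, K, seed=0):
    """Choose K seeds: the first at random, the rest weighted by distance to chosen seeds."""
    X = np.asarray(X, dtype=float)
    rng = np.random.default_rng(seed)
    seeds = [X[rng.integers(len(X))]]
    for _ in range(K - 1):
        # Squared distance of every doc to its nearest already-chosen seed
        d2 = np.min([np.sum((X - s) ** 2, axis=1) for s in seeds], axis=0)
        probs = d2 / d2.sum()                        # far-away docs are more likely picks
        seeds.append(X[rng.choice(len(X), p=probs)])
    return np.array(seeds)
```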

How Many Clusters?
 If the number of clusters K is given: partition the n docs into the predetermined number of clusters
 Otherwise, finding the “right” number of clusters is part of the problem
 Can usually take an algorithm for one flavor and convert it to the other

Bisecting K-means
 A variant of K-means that can produce either a partitional or a hierarchical clustering

Bisecting K-means Example
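
The example figure for this slide did not survive; as a stand-in, here is a rough sketch of bisecting K-means built on the `kmeans` function sketched earlier. Splitting the largest cluster first is one common rule, assumed here rather than taken from the slide.

```python
import numpy as np

def bisecting_kmeans(X, K):
    """Start with one cluster and repeatedly 2-split the largest one until K clusters exist."""
    X = np.asarray(X, dtype=float)
    clusters = [np.arange(len(X))]                     # each cluster is an array of doc indices
    while len(clusters) < K:
        biggest = max(range(len(clusters)), key=lambda i: len(clusters[i]))
        idx = clusters.pop(biggest)
        assign, _ = kmeans(X[idx], 2)                  # bisect with the plain K-means sketch above
        clusters.append(idx[assign == 0])
        clusters.append(idx[assign == 1])
    return clusters                                    # the split history also yields a hierarchy
```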

K-means
Pros
 Simple
 Fast for low-dimensional data
 Can find pure sub-clusters if a large number of clusters is specified (but this over-partitions)
Cons
 Cannot handle non-globular data, or clusters of different sizes and densities
 Will not identify outliers
 Restricted to data for which there is a notion of a center (centroid)

Hierarchical Clustering (Ch. 17)
 Build a tree-based hierarchical taxonomy (dendrogram) from a set of documents
 One approach: recursive application of a partitional clustering algorithm
[Figure: example taxonomy — animal → vertebrate (fish, reptile, amphib., mammal) and invertebrate (worm, insect, crustacean)]

Strengths of Hierarchical Clustering
 No assumption of any particular number of clusters: any desired number of clusters can be obtained by ‘cutting’ the dendrogram at the proper level
 The clusters may correspond to meaningful taxonomies
 Examples in the biological sciences (e.g., animal kingdom, phylogeny reconstruction, …)

Hierarchical Agglomerative Clustering (HAC) (Sec. 17.1)
 Starts with each doc in a separate cluster
 Then repeatedly joins the closest pair of clusters, until there is only one cluster
 The history of merges forms a binary tree or hierarchy
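
A naive, quadratic-per-merge sketch of HAC as described above, using single-link distance (defined on the next slide) just to make it concrete; the function and variable names are mine.

```python
import numpy as np

def hac_single_link(X):
    """Naive single-link HAC: repeatedly merge the closest pair of clusters; return merge history."""
    X = np.asarray(X, dtype=float)
    clusters = [[i] for i in range(len(X))]            # each doc starts in its own cluster
    merges = []                                        # the merge history is the dendrogram
    while len(clusters) > 1:
        best_d, best_pair = np.inf, None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                # Single-link: distance of the closest pair of points across the two clusters
                d = min(np.linalg.norm(X[i] - X[j]) for i in clusters[a] for j in clusters[b])
                if d < best_d:
                    best_d, best_pair = d, (a, b)
        a, b = best_pair
        merges.append((clusters[a], clusters[b], best_d))
        clusters = [c for k, c in enumerate(clusters) if k not in (a, b)] + [clusters[a] + clusters[b]]
    return merges
```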

Closest pair of clusters (Sec. 17.2)
 Single-link: similarity of the closest points, i.e., the most cosine-similar pair
 Complete-link: similarity of the farthest points, i.e., the least cosine-similar pair
 Centroid: clusters whose centroids are the closest (or most cosine-similar)
 Average-link: clusters whose average similarity between pairs of elements is the largest (equivalently, whose average distance is the smallest)
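
For experiments, SciPy implements these linkage criteria directly; a brief usage sketch on toy data (the random data and the choice of 3 clusters are arbitrary placeholders).

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

X = np.random.default_rng(0).normal(size=(20, 5))       # toy "doc" vectors

for method in ("single", "complete", "centroid", "average"):
    Z = linkage(X, method=method)                        # full merge history (dendrogram)
    labels = fcluster(Z, t=3, criterion="maxclust")      # cut the dendrogram into 3 clusters
    print(method, labels)
```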

How to Define Inter-Cluster Similarity
[Figure, repeated across several slides: a proximity matrix over points p1…p5, with each candidate definition highlighted in turn — single link (MIN), complete link (MAX), centroids, average]

Starting Situation
 Start with clusters of individual points and a proximity matrix
[Figure: singleton clusters p1…p5 and their proximity matrix]

Intermediate Situation  After some merging steps, we have some clusters C1 C4 C2 C5 C3 C2C1 C3 C5 C4 C2 C3C4C5 Proximity Matrix

Intermediate Situation  We want to merge the two closest clusters (C2 and C5) and update the proximity matrix. C1 C4 C2 C5 C3 C2C1 C3 C5 C4 C2 C3C4C5 Proximity Matrix

After Merging
 The question is: “How do we update the proximity matrix?”
[Figure: clusters C1, C2 ∪ C5, C3, C4, with the proximity-matrix entries involving C2 ∪ C5 marked “?”]

Cluster Similarity: MIN or Single Link
 Similarity of two clusters is based on the two most similar (closest) points in the different clusters
 Determined by one pair of points, i.e., by one link in the proximity graph

Hierarchical Clustering: MIN
[Figure: nested clusters and the corresponding dendrogram]

Strength of MIN
[Figure: original points vs. the two clusters found]
 Can handle non-elliptical shapes

Limitations of MIN
[Figure: original points vs. the two clusters found]
 Sensitive to noise and outliers

Cluster Similarity: MAX or Complete Linkage
 Similarity of two clusters is based on the two least similar (most distant) points in the different clusters
 Determined by all pairs of points in the two clusters

Hierarchical Clustering: MAX
[Figure: nested clusters and the corresponding dendrogram]

Strength of MAX
[Figure: original points vs. the two clusters found]
 Less susceptible to noise and outliers

Limitations of MAX
[Figure: original points vs. the two clusters found]
 Tends to break large clusters
 Biased towards globular clusters

Cluster Similarity: Average
 Proximity of two clusters is the average of the pairwise proximities between points in the two clusters

Hierarchical Clustering: Average
[Figure: nested clusters and the corresponding dendrogram]

Hierarchical Clustering: Comparison
[Figure: clusterings produced by MIN, MAX, and average linkage on the same data]

How to evaluate clustering quality? (Sec. 16.3)
 Assess a clustering with respect to ground truth … this requires labeled data
 Producing the gold standard is costly!!
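
The slide does not name a specific measure; purity, a common external criterion from this part of the IR book, is sketched here as one possibility — count the majority gold label in each cluster and divide by the number of docs.

```python
from collections import Counter

def purity(cluster_labels, gold_labels):
    """Fraction of docs that carry the majority gold class of their cluster."""
    by_cluster = {}
    for c, g in zip(cluster_labels, gold_labels):
        by_cluster.setdefault(c, []).append(g)
    majority_total = sum(Counter(golds).most_common(1)[0][1] for golds in by_cluster.values())
    return majority_total / len(gold_labels)

# Toy example: 6 docs in 2 clusters, gold classes a/b
print(purity([0, 0, 0, 1, 1, 1], ["a", "a", "b", "b", "b", "a"]))   # 4/6 ≈ 0.67
```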

A different approach: Co-Clustering
 Characteristics of co-occurrence matrices:
 Data sparseness
 High dimension
 Noise
 Is it possible to combine document and term clustering?
 Yes, co-clustering: simultaneously cluster the rows and columns of the co-occurrence matrix

Information-Theoretic Co-Clustering
 View the co-occurrence matrix as a joint probability distribution between row & column random variables
 We seek a hard clustering of both dimensions such that the loss in “Mutual Information” is minimized, given a fixed number of row & column clusters
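
The formulas referenced on the next slide were lost in extraction; as a reconstruction, assuming the standard information-theoretic co-clustering objective, the relevant quantities are:

```latex
I(X;Y) \;=\; \sum_{x}\sum_{y} p(x,y)\,\log\frac{p(x,y)}{p(x)\,p(y)}
\qquad\qquad
\text{loss} \;=\; I(X;Y) \;-\; I(\hat{X};\hat{Y})
```

where X̂ and Ŷ are the row-cluster and column-cluster random variables.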

Example  Mutual Information btw random variables X and Y: It can be verified that this is the minimum mutual information loss Column grouping identifies the key terms of a cluster !!