Introduction to Information Retrieval. Hinrich Schütze and Christina Lioma. Lecture 16: Flat Clustering.


Overview: ❶ Clustering: Introduction ❷ Clustering in IR ❸ K-means ❹ Evaluation ❺ How many clusters?

Outline: ❶ Clustering: Introduction ❷ Clustering in IR ❸ K-means ❹ Evaluation ❺ How many clusters?

Clustering: Definition
- (Document) clustering is the process of grouping a set of documents into clusters of similar documents.
- Documents within a cluster should be similar.
- Documents from different clusters should be dissimilar.
- Clustering is the most common form of unsupervised learning.
- Unsupervised = there are no labeled or annotated data.

Data set with clear cluster structure. Propose an algorithm for finding the cluster structure in this example.

Classification vs. Clustering
- Classification: supervised learning
- Clustering: unsupervised learning
- Classification: Classes are human-defined and part of the input to the learning algorithm.
- Clustering: Clusters are inferred from the data without human input.
- However, there are many ways of influencing the outcome of clustering: number of clusters, similarity measure, representation of documents, ...

Outline: ❶ Clustering: Introduction ❷ Clustering in IR ❸ K-means ❹ Evaluation ❺ How many clusters?

Introduction to Information Retrieval 8 The cluster hypothesis Cluster hypothesis. Documents in the same cluster behave similarly with respect to relevance to information needs. All applications of clustering in IR are based (directly or indirectly) on the cluster hypothesis. Van Rijsbergen’s original wording: “closely associated documents tend to be relevant to the same requests”. 8

Applications of clustering in IR

Application | What is clustered? | Benefit
Search result clustering | search results | more effective information presentation to user
Scatter-Gather | (subsets of) collection | alternative user interface: "search without typing"
Collection clustering | collection | effective information presentation for exploratory browsing
Cluster-based retrieval | collection | higher efficiency: faster search

Search result clustering for better navigation

Scatter-Gather

Collection clustering

Clustering for improving recall
- To improve search recall: when a query matches a doc d, also return other docs in the cluster containing d (a small sketch of this step follows below).
- Hope: the query "car" will then also return docs containing "automobile", because the clustering algorithm groups together docs containing "car" with those containing "automobile".
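To make the expansion step concrete, here is a tiny sketch, assuming a clustering is already available as two lookup tables; all names here are illustrative, not part of the original lecture.

```python
def expand_with_clusters(matching_docs, cluster_of, members_of):
    """Return the matching docs plus all other members of their clusters.

    cluster_of:  dict mapping doc id -> cluster id (hypothetical precomputed clustering)
    members_of:  dict mapping cluster id -> set of doc ids in that cluster
    """
    expanded = set(matching_docs)
    for d in matching_docs:
        expanded |= members_of[cluster_of[d]]  # add the rest of d's cluster
    return expanded

# Example: a query for "car" matches doc 7; if doc 7's cluster also contains
# docs about "automobile", those are returned as well.
```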

Desiderata for clustering
- General goal: put related docs in the same cluster, put unrelated docs in different clusters.
- The number of clusters should be appropriate for the data set we are clustering.
- Initially, we will assume the number of clusters K is given.
- Later: semiautomatic methods for determining K.
- Secondary goals in clustering: avoid very small and very large clusters; define clusters that are easy to explain to the user; many others ...

Hard vs. soft clustering
- Hard clustering: Each document belongs to exactly one cluster. More common and easier to do.
- Soft clustering: A document can belong to more than one cluster. For example, you may want to put sneakers in two clusters: sports apparel and shoes.
- We will do flat, hard clustering only in this class.

Flat algorithms
- Flat algorithms compute a partition of N documents into a set of K clusters.
- Given: a set of documents and the number K.
- Find: a partition into K clusters that optimizes the chosen partitioning criterion.
- Global optimization: exhaustively enumerate all partitions and pick the optimal one. Not tractable (see the note below).
- Effective heuristic method: the K-means algorithm.
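One way to see why exhaustive enumeration is intractable (a back-of-the-envelope argument, not part of the original slide): the number of ways to partition N documents into K non-empty clusters is the Stirling number of the second kind,

$$ S(N, K) \;=\; \frac{1}{K!} \sum_{j=0}^{K} (-1)^{j} \binom{K}{j} (K-j)^{N} \;\approx\; \frac{K^{N}}{K!}, $$

which already exceeds 10^65 partitions for N = 100 documents and K = 5 clusters.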

Outline: ❶ Clustering: Introduction ❷ Clustering in IR ❸ K-means ❹ Evaluation ❺ How many clusters?

Document representations in clustering
- Vector space model
- As in vector space classification, we measure relatedness between vectors by Euclidean distance ...
- ... which is almost equivalent to cosine similarity.
- Almost: centroids are not length-normalized.

K-means
- Each cluster in K-means is defined by a centroid.
- Objective/partitioning criterion: minimize the average squared difference from the centroid.
- Recall the definition of the centroid: μ(ω) = (1/|ω|) Σ_{x ∈ ω} x, where we use ω to denote a cluster.
- We try to find the minimum average squared difference by iterating two steps:
  - reassignment: assign each vector to its closest centroid
  - recomputation: recompute each centroid as the average of the vectors that were assigned to it in reassignment

K-means algorithm
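The pseudocode box on this slide is a figure that did not survive the text transcript. As a stand-in, here is a minimal NumPy sketch of the procedure just described (random seed selection, then alternating reassignment and recomputation); the function name, the empty-cluster handling, and the stopping test are illustrative choices, not the book's reference algorithm.

```python
import numpy as np

def kmeans(X, K, n_iter=100, seed=0):
    """Minimal K-means sketch: X is an (N, M) array of document vectors."""
    rng = np.random.default_rng(seed)
    # Random seed selection: pick K distinct documents as initial centroids.
    centroids = X[rng.choice(len(X), size=K, replace=False)].copy()
    for _ in range(n_iter):
        # Reassignment: assign each vector to its closest centroid.
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        assign = dists.argmin(axis=1)
        # Recomputation: each centroid becomes the mean of its assigned vectors.
        new_centroids = np.array([
            X[assign == k].mean(axis=0) if np.any(assign == k) else centroids[k]
            for k in range(K)
        ])
        if np.allclose(new_centroids, centroids):
            break  # fixed point reached: centroids (and assignments) no longer change
        centroids = new_centroids
    # Final assignment and residual sum of squares for the last centroids.
    dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    assign = dists.argmin(axis=1)
    rss = ((X - centroids[assign]) ** 2).sum()
    return assign, centroids, rss
```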

Worked Example: Set of points to be clustered

Worked Example: Random selection of initial centroids. Exercise: (i) guess what the optimal clustering into two clusters is in this case; (ii) compute the centroids of the clusters.

Worked Example: The following slides show several K-means iterations on this data set; each iteration consists of three figures: assign points to the closest centroid, the resulting assignment, and recomputation of the cluster centroids.

Worked Example: Centroids and assignments after convergence

K-means is guaranteed to converge: Proof
- RSS = sum of all squared distances between document vector and closest centroid.
- RSS decreases during each reassignment step, because each vector is moved to a closer centroid.
- RSS decreases during each recomputation step (see next slide).
- There is only a finite number of clusterings.
- Thus: we must reach a fixed point.

Recomputation decreases average distance
RSS = Σ_{k=1}^{K} RSS_k is the residual sum of squares (the "goodness" measure), where

RSS_k(v) = Σ_{x ∈ ω_k} ‖v − x‖² = Σ_{x ∈ ω_k} Σ_{m=1}^{M} (v_m − x_m)²

Setting the partial derivative with respect to each component v_m to zero:

∂RSS_k(v)/∂v_m = Σ_{x ∈ ω_k} 2(v_m − x_m) = 0  ⟹  v_m = (1/|ω_k|) Σ_{x ∈ ω_k} x_m

The last line is the componentwise definition of the centroid! We minimize RSS_k when the old centroid is replaced with the new centroid. RSS, the sum of the RSS_k, must then also decrease during recomputation.

K-means is guaranteed to converge
- But we don't know how long convergence will take!
- If we don't care about a few docs switching back and forth, then convergence is usually fast (< 10-20 iterations).
- However, complete convergence can take many more iterations.

Optimality of K-means
- Convergence does not mean that we converge to the optimal clustering!
- This is the great weakness of K-means.
- If we start with a bad set of seeds, the resulting clustering can be horrible.

Exercise: Suboptimal clustering. What is the optimal clustering for K = 2?

Initialization of K-means
- Random seed selection is just one of many ways K-means can be initialized.
- Random seed selection is not very robust: it's easy to get a suboptimal clustering.
- Better ways of computing initial centroids:
  - Select seeds not randomly, but using some heuristic (e.g., filter out outliers).
  - Use hierarchical clustering to find good seeds.
  - Select i (e.g., i = 10) different random sets of seeds, do a K-means clustering for each, and select the clustering with the lowest RSS (see the sketch below).
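A sketch of the last option, reusing the hypothetical kmeans() function sketched for the K-means algorithm slide: run K-means from i different random seed sets and keep the result with the lowest RSS.

```python
def kmeans_restarts(X, K, i=10):
    """Run K-means i times with different random seeds; keep the lowest-RSS clustering."""
    best_assign, best_centroids, best_rss = None, None, float("inf")
    for seed in range(i):
        assign, centroids, rss = kmeans(X, K, seed=seed)
        if rss < best_rss:
            best_assign, best_centroids, best_rss = assign, centroids, rss
    return best_assign, best_centroids, best_rss
```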

Time complexity of K-means
- Computing one distance between two vectors is O(M).
- Reassignment step: O(KNM) (we need to compute KN document-centroid distances).
- Recomputation step: O(NM).
- Assume the number of iterations is bounded by I.
- Overall complexity: O(IKNM), i.e., linear in all important dimensions.

Outline: ❶ Clustering: Introduction ❷ Clustering in IR ❸ K-means ❹ Evaluation ❺ How many clusters?

What is a good clustering?
- Internal criteria. Example of an internal criterion: RSS in K-means.
- But an internal criterion often does not evaluate the actual utility of a clustering in the application.
- Alternative: external criteria. Evaluate with respect to a human-defined classification.

External criteria for clustering quality
- Based on a gold standard data set, e.g., the Reuters collection we also used for the evaluation of classification.
- Goal: the clustering should reproduce the classes in the gold standard.
- (But we only want to reproduce how documents are divided into groups, not the class labels.)
- First measure for how well we were able to reproduce the classes: purity.

External criterion: Purity
- Ω = {ω_1, ω_2, ..., ω_K} is the set of clusters and C = {c_1, c_2, ..., c_J} is the set of classes.
- For each cluster ω_k: find the class c_j with the most members n_kj in ω_k.
- Sum all n_kj and divide by the total number of points:
  purity(Ω, C) = (1/N) Σ_k max_j |ω_k ∩ c_j|

Example for computing purity
To compute purity: max_j |ω_1 ∩ c_j| = 5 (cluster 1, class x); max_j |ω_2 ∩ c_j| = 4 (cluster 2, class o); max_j |ω_3 ∩ c_j| = 3 (cluster 3, class ⋄). Purity is (1/17) × (5 + 4 + 3) ≈ 0.71.
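The same computation as a small sketch, assuming the clustering and the gold standard are given as two parallel label lists (the function and variable names are illustrative):

```python
from collections import Counter

def purity(clusters, classes):
    """Fraction of documents assigned to the majority class of their cluster."""
    total_majority = 0
    for k in set(clusters):
        # gold-standard labels of the documents in cluster k
        labels = [c for cl, c in zip(clusters, classes) if cl == k]
        total_majority += Counter(labels).most_common(1)[0][1]  # majority class count
    return total_majority / len(clusters)

# For the o/x/⋄ example above: majority counts 5, 4 and 3 out of 17 documents,
# so purity = 12/17 ≈ 0.71.
```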

Rand index
- Definition: RI = (TP + TN) / (TP + FP + FN + TN)
- Based on a 2x2 contingency table of all pairs of documents:

                        same class    different classes
    same cluster        TP            FP
    different clusters  FN            TN

- TP + FN + FP + TN is the total number of pairs.
- There are N(N − 1)/2 pairs for N documents. Example: 17 × 16 / 2 = 136 pairs in the o/⋄/x example.
- Each pair is either positive or negative (the clustering puts the two documents in the same or in different clusters) ...
- ... and either "true" (correct) or "false" (incorrect): the clustering decision is correct or incorrect.

Rand Index: Example
As an example, we compute RI for the o/⋄/x example. We first compute TP + FP. The three clusters contain 6, 6, and 5 points, respectively, so the total number of "positives", i.e., pairs of documents that are in the same cluster, is:

TP + FP = C(6,2) + C(6,2) + C(5,2) = 15 + 15 + 10 = 40

Of these, the x pairs in cluster 1, the o pairs in cluster 2, the ⋄ pairs in cluster 3, and the x pair in cluster 3 are true positives:

TP = C(5,2) + C(4,2) + C(3,2) + C(2,2) = 10 + 6 + 3 + 1 = 20

Thus, FP = 40 − 20 = 20. FN and TN are computed similarly, yielding FN = 24 and TN = 72.

Rand measure for the o/⋄/x example: RI = (20 + 72) / (20 + 20 + 24 + 72) = 92/136 ≈ 0.68.
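A straightforward O(N²) sketch of the Rand index over all document pairs, again assuming parallel lists of cluster ids and gold-standard class labels (names are illustrative):

```python
from itertools import combinations

def rand_index(clusters, classes):
    """RI = (TP + TN) / total number of document pairs."""
    tp = fp = fn = tn = 0
    for (c1, g1), (c2, g2) in combinations(zip(clusters, classes), 2):
        same_cluster = (c1 == c2)
        same_class = (g1 == g2)
        if same_cluster and same_class:
            tp += 1  # pair correctly placed in the same cluster
        elif same_cluster:
            fp += 1  # same cluster, but different classes
        elif same_class:
            fn += 1  # same class, but split across clusters
        else:
            tn += 1  # correctly placed in different clusters
    return (tp + tn) / (tp + fp + fn + tn)
```

Applied to the 17 documents of the o/⋄/x example, this reproduces the counts above (TP = 20, FP = 20, FN = 24, TN = 72) and RI ≈ 0.68.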

Outline: ❶ Clustering: Introduction ❷ Clustering in IR ❸ K-means ❹ Evaluation ❺ How many clusters?

How many clusters?
- The number of clusters K is given in many applications. E.g., there may be an external constraint on K. Example: In the case of Scatter-Gather, it was hard to show more than 10-20 clusters on a monitor in the 90s.
- What if there is no external constraint? Is there a "right" number of clusters?
- One way to go: define an optimization criterion. Given the docs, find the K for which the optimum is reached.
- What optimization criterion can we use?
- We can't use RSS or average squared distance from the centroid as the criterion: it would always choose K = N clusters.

Simple objective function for K (1)
- Basic idea:
  - Start with 1 cluster (K = 1).
  - Keep adding clusters (= keep increasing K).
  - Add a penalty for each new cluster.
- Trade off cluster penalties against average squared distance from centroid.
- Choose the value of K with the best tradeoff.

Simple objective function for K (2)
- Given a clustering, define the cost of a document as its (squared) distance to the centroid.
- Define the total distortion RSS(K) as the sum of all individual document costs (corresponds to average distance).
- Then: penalize each cluster with a cost λ. Thus, for a clustering with K clusters, the total cluster penalty is Kλ.
- Define the total cost of a clustering as distortion plus total cluster penalty: RSS(K) + Kλ.
- Select the K that minimizes RSS(K) + Kλ.
- Still need to determine a good value for λ ... (a small selection sketch follows below)
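A minimal sketch of this selection procedure, assuming the hypothetical kmeans_restarts() helper sketched earlier and a user-chosen penalty λ; the names and the range of candidate K are illustrative.

```python
def choose_k(X, lam, k_max=20):
    """Return the K in 1..k_max that minimizes the total cost RSS(K) + K * lam."""
    total_cost = {}
    for k in range(1, k_max + 1):
        _, _, rss = kmeans_restarts(X, k)   # distortion RSS(K) of the best restart
        total_cost[k] = rss + k * lam       # distortion plus cluster penalty
    best_k = min(total_cost, key=total_cost.get)
    return best_k, total_cost
```

The remaining design choice is λ itself: a larger λ penalizes new clusters more heavily and therefore favors smaller K.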