CS 188: Artificial Intelligence Fall 2008

CS 188: Artificial Intelligence, Fall 2008
Lecture 25: Kernels and Clustering
12/2/2008
Dan Klein – UC Berkeley

Case-Based Reasoning
- Similarity for classification: predict an instance's label using similar instances
- Nearest-neighbor classification:
  - 1-NN: copy the label of the most similar data point
  - k-NN: let the k nearest neighbors vote (have to devise a weighting scheme)
- Key issue: how to define similarity
- Trade-off: small k gives relevant neighbors, large k gives smoother functions
- Sound familiar? (A runnable sketch follows this slide.)
- [DEMO] http://www.cs.cmu.edu/~zhuxj/courseproject/knndemo/KNN.html
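To make the voting rule concrete, here is a minimal k-NN sketch in Python. The distance function, the toy data, and the names (knn_predict, etc.) are illustrative choices, not code from the course projects.

```python
from collections import Counter

def knn_predict(query, train_points, train_labels, k=3, dist=None):
    """Label a query point by majority vote among its k nearest training points."""
    if dist is None:
        # Default: squared Euclidean distance between feature vectors.
        dist = lambda a, b: sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    # Sort training examples by distance to the query and keep the k closest.
    neighbors = sorted(zip(train_points, train_labels),
                       key=lambda pl: dist(query, pl[0]))[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]

# Tiny 2D example: two clumps of labeled points.
X = [(0.0, 0.0), (0.1, 0.2), (1.0, 1.0), (0.9, 1.1)]
y = ["red", "red", "blue", "blue"]
print(knn_predict((0.2, 0.1), X, y, k=3))  # -> "red"
```

The vote here is unweighted; weighting each neighbor's vote by its similarity is one way to devise the weighting scheme mentioned above.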

Parametric / Non-parametric
- Parametric models:
  - Fixed set of parameters
  - More data means better settings
- Non-parametric models:
  - Complexity of the classifier increases with data
  - Better in the limit, often worse in the non-limit
- (k)-NN is non-parametric
- [Figure: the true function vs. fits with 2, 10, 100, and 10000 examples]

Nearest-Neighbor Classification
- Nearest neighbor for digits:
  - Take the new image
  - Compare it to all training images
  - Assign a label based on the closest example
- Encoding: an image is a vector of pixel intensities
- What's the similarity function?
  - Dot product of the two image vectors?
  - Usually normalize the vectors so that ||x|| = 1
  - Then min = 0 (when?), max = 1 (when?)

Basic Similarity
- Many similarities are based on feature dot products:
  $\mathrm{sim}(x, x') = f(x) \cdot f(x') = \sum_i f_i(x)\, f_i(x')$
- If the features are just the pixels:
  $\mathrm{sim}(x, x') = x \cdot x' = \sum_i x_i\, x'_i$
- Note: not all similarities are of this form
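A minimal sketch of this normalized dot-product similarity in Python, assuming images are stored as flat lists of pixel intensities; the function name and the toy vectors are illustrative.

```python
import math

def dot_similarity(x, y):
    """Dot product of two vectors after normalizing each to unit length."""
    dot = sum(xi * yi for xi, yi in zip(x, y))
    norm_x = math.sqrt(sum(xi * xi for xi in x))
    norm_y = math.sqrt(sum(yi * yi for yi in y))
    if norm_x == 0 or norm_y == 0:
        return 0.0
    return dot / (norm_x * norm_y)

# Two tiny "images" as intensity vectors in [0, 1].
a = [0.0, 0.9, 0.8, 0.0]
b = [0.0, 1.0, 0.7, 0.1]
print(dot_similarity(a, b))  # close to 1: the ink overlaps almost everywhere
```

With non-negative intensities, the result is 0 exactly when the two images share no inked pixels and 1 exactly when one image is a positive scaling of the other, which answers the (when?) questions above.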

Invariant Metrics
- Better distances use knowledge about vision
- Similarities should be invariant under certain transformations: rotation, scaling, translation, stroke thickness, …
- E.g., a 16 x 16 digit image has 256 pixels, so it is a point in 256-dimensional space
- Two images of the same digit can still have small similarity in R^256 (why?)
- How can we incorporate invariance into the similarities?
- (This and the next few slides adapted from Xiao Hu, UIUC)

Rotation Invariant Metrics
- Each example is now a curve in R^256 (the set of all of its rotations)
- Rotation-invariant similarity: $s'(x, y) = \max s(r(x), r(y))$, maximizing over rotations $r$ of the images
- I.e., take the highest similarity between the two images' rotation curves

Template Deformation
- Deformable templates:
  - An "ideal" version of each category
  - Best fit to the image using minimum variance
  - Cost for high distortion of the template
  - Cost for image points being far from the distorted template
- Used in many commercial digit recognizers
- Examples from [Hastie 94]

Recap: Classification
- Classification systems:
  - Supervised learning
  - Make a rational prediction given evidence
  - We've seen several methods for this
  - Useful when you have labeled data (or can get it)

Clustering
- Clustering systems:
  - Unsupervised learning
  - Detect patterns in unlabeled data
    - E.g., group emails or search results
    - E.g., find categories of customers
    - E.g., detect anomalous program executions
  - Useful when you don't know what you're looking for
  - Requires data, but no labels
  - Often get gibberish

Clustering
- Basic idea: group together similar instances
- Example: 2D point patterns
- What could "similar" mean?
- One option: small (squared) Euclidean distance
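For reference, the squared Euclidean distance between two points $x$ and $y$ in $\mathbb{R}^d$ is

$$ \mathrm{dist}(x, y) = \lVert x - y \rVert^2 = \sum_{k=1}^{d} (x_k - y_k)^2 $$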

K-Means
- An iterative clustering algorithm:
  - Pick K random points as cluster centers (means)
  - Alternate:
    - Assign each data instance to the closest mean
    - Assign each mean to the average of its assigned points
  - Stop when no points' assignments change
- (A runnable sketch follows this slide.)
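A minimal K-means sketch in Python following the loop above; the random initialization and the handling of empty clusters are illustrative choices, not the course's reference implementation.

```python
import random

def kmeans(points, k, max_iters=100):
    """Cluster points (tuples) into k groups by alternating assignments and mean updates."""
    means = random.sample(points, k)           # pick k random points as initial means
    assignments = [None] * len(points)
    for _ in range(max_iters):
        # Phase I: assign each point to its closest mean (squared Euclidean distance).
        new_assignments = [
            min(range(k), key=lambda j: sum((p - m) ** 2 for p, m in zip(pt, means[j])))
            for pt in points
        ]
        if new_assignments == assignments:     # no assignment changed: converged
            break
        assignments = new_assignments
        # Phase II: move each mean to the average of its assigned points.
        for j in range(k):
            members = [pt for pt, a in zip(points, assignments) if a == j]
            if members:                        # keep the old mean if a cluster is empty
                means[j] = tuple(sum(coord) / len(members) for coord in zip(*members))
    return means, assignments

data = [(0, 0), (0, 1), (1, 0), (9, 9), (10, 10), (9, 10)]
print(kmeans(data, k=2))
```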

K-Means Example

K-Means as Optimization
- Consider the total squared distance of the points to their assigned means:
  $\phi(a, c) = \sum_i \lVert x_i - c_{a_i} \rVert^2$  (the $x_i$ are the points, the $c_j$ the means, the $a_i$ the assignments)
- Each iteration reduces $\phi$
- Two stages each iteration:
  - Update assignments: fix the means $c$, change the assignments $a$
  - Update means: fix the assignments $a$, change the means $c$

Phase I: Update Assignments
- For each point, re-assign it to the closest mean: $a_i \leftarrow \arg\min_j \lVert x_i - c_j \rVert^2$
- This can only decrease the total distance $\phi$!

Phase II: Update Means
- Move each mean to the average of its assigned points: $c_j \leftarrow \frac{1}{|\{i : a_i = j\}|} \sum_{i : a_i = j} x_i$
- This also can only decrease the total distance… (Why?)
- Fun fact: the point $y$ with minimum squared Euclidean distance to a set of points $\{x_i\}$ is their mean
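The "fun fact" can be checked with one line of calculus (a standard argument, not from the slides): setting the gradient of the total squared distance to zero gives the mean.

$$ \frac{\partial}{\partial y} \sum_{i=1}^{n} \lVert x_i - y \rVert^2 = \sum_{i=1}^{n} 2\,(y - x_i) = 0 \quad\Longrightarrow\quad y = \frac{1}{n} \sum_{i=1}^{n} x_i $$

This is exactly why Phase II cannot increase $\phi$: for fixed assignments, each cluster's mean minimizes that cluster's contribution to the total.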

Initialization
- K-means is non-deterministic
- It requires initial means, and it does matter which ones you pick!
- What can go wrong?
- Various schemes exist for preventing this kind of thing: variance-based split / merge, initialization heuristics

K-Means Getting Stuck
- A local optimum:
- Why doesn't this work out like the earlier example, with the purple cluster taking over half of the blue one?

K-Means Questions
- Will K-means converge? To a global optimum?
- Will it always find the true patterns in the data? What if the patterns are very, very clear?
- Will it find something interesting?
- Do people ever use it?
- How many clusters to pick?

Clustering for Segmentation
- A quick taste of a simple vision algorithm
- Idea: break images into manageable regions for visual processing (object recognition, activity detection, etc.)
- http://www.cs.washington.edu/research/imagedatabase/demo/kmcluster/

Representing Pixels
- Basic representation of a pixel: a 3-dimensional color vector <r, g, b>
  - Ranges: r, g, b in [0, 1]
  - What will happen if we cluster the pixels in an image using this representation?
- Improved representation for segmentation: a 5-dimensional vector <r, g, b, x, y>
  - Ranges: x in [0, M], y in [0, N]
  - Bigger M, N makes position more important
  - How does this change the similarities?
- Note: real vision systems use more sophisticated encodings which can capture intensity, texture, shape, and so on
- (A sketch of building these feature vectors follows this slide.)
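A sketch of how the 5-dimensional pixel features could be built, assuming the image is a nested list of (r, g, b) values in [0, 1]; the position_weight knob is an illustrative stand-in for the choice of M and N, not something the slides specify.

```python
def pixel_features(image, position_weight=1.0):
    """Turn an image (list of rows of (r, g, b) tuples) into <r, g, b, x, y> vectors.

    Scaling x and y by position_weight makes position matter more (or less)
    in the squared-Euclidean similarity that k-means uses.
    """
    features = []
    for y, row in enumerate(image):
        for x, (r, g, b) in enumerate(row):
            features.append((r, g, b, position_weight * x, position_weight * y))
    return features

# Tiny 2x2 "image": top row reddish, bottom row bluish.
img = [[(0.9, 0.1, 0.1), (0.8, 0.2, 0.1)],
       [(0.1, 0.1, 0.9), (0.2, 0.1, 0.8)]]
print(pixel_features(img))
# These vectors can be fed to the kmeans sketch above; the clusters become segments.
```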

K-Means Segmentation
- Results depend on initialization! Why?
- Note: the best systems use graph-based segmentation algorithms

Other Uses of K-Means
- Speech recognition: can be used to quantize wave slices into a small number of types (SOTA: work with multivariate continuous features)
- Document clustering: detect similar documents on the basis of shared words (SOTA: use probabilistic models which operate on topics rather than words)

Agglomerative Clustering
- Agglomerative clustering:
  - First merge very similar instances
  - Incrementally build larger clusters out of smaller clusters
- Algorithm:
  - Maintain a set of clusters
  - Initially, each instance is in its own cluster
  - Repeat:
    - Pick the two closest clusters
    - Merge them into a new cluster
  - Stop when there's only one cluster left
- Produces not one clustering, but a family of clusterings represented by a dendrogram
- (A runnable sketch follows this slide.)
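A minimal agglomerative clustering sketch in Python using the closest-pair (single-link) definition of "closest clusters"; that choice, and returning the list of merges rather than drawing a dendrogram, are illustrative decisions.

```python
import itertools

def agglomerative(points):
    """Repeatedly merge the two closest clusters (single-link) until one remains.

    Returns the sequence of merges, which is the information a dendrogram displays.
    """
    def sqdist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

    def cluster_dist(c1, c2):
        # Single-link: distance between the closest pair of points across clusters.
        return min(sqdist(p, q) for p in c1 for q in c2)

    clusters = [[p] for p in points]           # initially, each instance in its own cluster
    merges = []
    while len(clusters) > 1:
        # Pick the two closest clusters...
        i, j = min(itertools.combinations(range(len(clusters)), 2),
                   key=lambda ij: cluster_dist(clusters[ij[0]], clusters[ij[1]]))
        # ...and merge them into a new cluster.
        merges.append((clusters[i], clusters[j]))
        merged = clusters[i] + clusters[j]
        clusters = [c for idx, c in enumerate(clusters) if idx not in (i, j)] + [merged]
    return merges

print(agglomerative([(0, 0), (0, 1), (5, 5), (5, 6), (10, 0)]))
```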

Agglomerative Clustering
- How should we define "closest" for clusters with multiple elements?
- Many options:
  - Closest pair (single-link clustering)
  - Farthest pair (complete-link clustering)
  - Average of all pairs
  - Distance between centroids (broken)
  - Ward's method (my pick, like k-means)
- Different choices create different clustering behaviors
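These linkage options amount to swapping the cluster_dist function in the sketch above; here are illustrative pure-Python versions (the function names are mine, and Ward's method is omitted because it needs cluster sizes and centroids tracked together).

```python
def single_link(c1, c2, sqdist):
    """Closest pair across the two clusters."""
    return min(sqdist(p, q) for p in c1 for q in c2)

def complete_link(c1, c2, sqdist):
    """Farthest pair across the two clusters."""
    return max(sqdist(p, q) for p in c1 for q in c2)

def average_link(c1, c2, sqdist):
    """Average over all cross-cluster pairs."""
    return sum(sqdist(p, q) for p in c1 for q in c2) / (len(c1) * len(c2))

def centroid_link(c1, c2, sqdist):
    """Distance between cluster centroids (the 'broken' option above)."""
    mean = lambda c: tuple(sum(coord) / len(c) for coord in zip(*c))
    return sqdist(mean(c1), mean(c2))

pts_a, pts_b = [(0, 0), (0, 1)], [(3, 3), (4, 4)]
sq = lambda a, b: sum((ai - bi) ** 2 for ai, bi in zip(a, b))
print(single_link(pts_a, pts_b, sq), complete_link(pts_a, pts_b, sq))
```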

Collaborative Filtering
- Ever wonder how online merchants decide what products to recommend to you?
- Simplest idea: recommend the most popular items to everyone
  - Not entirely crazy! (Why?)
- Can do better if you know something about the customer (e.g., what they've bought)
- Better idea: recommend items that similar customers bought
- A popular technique: collaborative filtering
  - Define a similarity function over customers (how?)
  - Look at the purchases made by people with high similarity
  - Trade-off: relevance of the comparison set vs. confidence in the predictions
  - How can this go wrong?
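A minimal user-based collaborative filtering sketch in Python; representing each customer as the set of items they bought and using Jaccard overlap as the similarity function is just one illustrative answer to the "how?" question above.

```python
def jaccard(a, b):
    """Similarity between two customers = overlap of their purchase sets."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

def recommend(target, purchases, top_k=2):
    """Suggest items bought by the customers most similar to `target`."""
    others = {name: items for name, items in purchases.items() if name != target}
    # Rank the other customers by similarity to the target customer.
    neighbors = sorted(others,
                       key=lambda name: jaccard(purchases[target], others[name]),
                       reverse=True)[:top_k]
    # Collect items the neighbors bought that the target hasn't bought yet.
    candidates = set().union(*(purchases[n] for n in neighbors)) - purchases[target]
    return neighbors, candidates

purchases = {
    "alice": {"book", "lamp", "tea"},
    "bob":   {"book", "lamp", "desk"},
    "carol": {"guitar", "amp"},
}
print(recommend("alice", purchases, top_k=1))  # bob is most similar, so "desk" is suggested
```

Using a small top_k keeps the comparison set relevant; a larger top_k gives more confident but less personalized predictions, which is exactly the trade-off named on the slide.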