Classification Tamara Berg CSE 595 Words & Pictures

HW2 online after class – due Oct 10, 11:59pm. Use web text descriptions as a proxy for class labels. Train color attribute classifiers on web shopping images. Classify test images as to whether they display the attributes.

Topic Presentations: first group starts on Tuesday. Audience – please read the papers!

Example: Image classification. Input: an image; desired output: a class label (apple, pear, tomato, cow, dog, horse). Slide credit: Svetlana Lazebnik

Slide from Dan Klein

Example: Seismic data. [Scatter plot of body wave magnitude vs. surface wave magnitude, separating nuclear explosions from earthquakes.] Slide credit: Svetlana Lazebnik

Slide from Dan Klein

The basic classification framework: y = f(x), where x is the input, f is the classification function, and y is the output. Learning: given a training set of labeled examples {(x_1, y_1), …, (x_N, y_N)}, estimate the parameters of the prediction function f. Inference: apply f to a never-before-seen test example x and output the predicted value y = f(x). Slide credit: Svetlana Lazebnik
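
As a minimal sketch of this learning/inference split, assuming scikit-learn is available (the toy data and the choice of model are illustrative, not from the slides):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Learning: estimate the parameters of f from a labeled training set {(x_i, y_i)}
X_train = np.array([[5.0, 3.5], [6.3, 2.9], [4.8, 3.1], [6.5, 3.0]])
y_train = np.array(["apple", "pear", "apple", "pear"])

f = LogisticRegression()
f.fit(X_train, y_train)

# Inference: apply f to a never-before-seen test example x
x_test = np.array([[6.1, 2.8]])
print(f.predict(x_test))  # y = f(x)
```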

Some classification methods.
Nearest neighbor (scales to ~10^6 examples): Shakhnarovich, Viola, Darrell 2003; Berg, Berg, Malik 2005; …
Neural networks: LeCun, Bottou, Bengio, Haffner 1998; Rowley, Baluja, Kanade 1998; …
Support Vector Machines and Kernels: Guyon, Vapnik; Heisele, Serre, Poggio 2001; …
Conditional Random Fields: McCallum, Freitag, Pereira 2000; Kumar, Hebert 2003; …
Slide credit: Antonio Torralba

Example: Training and testing. Key challenge: generalization to unseen examples. Training set (labels known); test set (labels unknown). Slide credit: Svetlana Lazebnik

Slide credit: Dan Klein

Classification by Nearest Neighbor. Word-vector document classification – here the vector space is illustrated as having 2 dimensions. How many dimensions would the data actually live in? Slide from Min-Yen Kan

Classification by Nearest Neighbor. Slide from Min-Yen Kan

Classify the test document as the class of the document “nearest” to the query document (use vector similarity to find the most similar doc). Slide from Min-Yen Kan

Classification by kNN Classify the test document as the majority class of the k documents “nearest” to the query document. Slide from Min-Yen Kan

Classification by kNN. What are the features? What’s the training data? Testing data? Parameters? (One possible answer is sketched below.)
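
A from-scratch sketch of kNN on word-count vectors, answering those questions concretely (the toy vocabulary, counts, labels, and choice of k are made up for illustration):

```python
import numpy as np

# Features: word-count vectors; training data: labeled documents; parameter: k
train_docs = np.array([[3, 0, 1],    # counts over a 3-word toy vocabulary
                       [2, 1, 0],
                       [0, 4, 2],
                       [1, 3, 3]], dtype=float)
train_labels = np.array(["sports", "sports", "politics", "politics"])

def knn_classify(query, docs, labels, k=3):
    # Cosine similarity between the query document and every training document
    sims = docs @ query / (np.linalg.norm(docs, axis=1) * np.linalg.norm(query))
    nearest = np.argsort(-sims)[:k]           # indices of the k most similar docs
    values, counts = np.unique(labels[nearest], return_counts=True)
    return values[np.argmax(counts)]          # majority class among the k neighbors

print(knn_classify(np.array([2.0, 1.0, 0.0]), train_docs, train_labels, k=3))
```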

Decision tree classifier. Example problem: decide whether to wait for a table at a restaurant, based on the following attributes (a training sketch follows the list):
1. Alternate: is there an alternative restaurant nearby?
2. Bar: is there a comfortable bar area to wait in?
3. Fri/Sat: is today Friday or Saturday?
4. Hungry: are we hungry?
5. Patrons: number of people in the restaurant (None, Some, Full)
6. Price: price range ($, $$, $$$)
7. Raining: is it raining outside?
8. Reservation: have we made a reservation?
9. Type: kind of restaurant (French, Italian, Thai, Burger)
10. WaitEstimate: estimated waiting time (0-10, 10-30, 30-60, >60)
Slide credit: Svetlana Lazebnik
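
As a sketch of how such a tree could be learned, assuming scikit-learn and a tiny hand-made dataset (the integer encodings and examples below are illustrative, not the textbook data):

```python
from sklearn.tree import DecisionTreeClassifier

# Toy encoding of two attributes: Patrons (0=None, 1=Some, 2=Full)
# and Hungry (0=no, 1=yes); label: wait (1) or leave (0).
X = [[1, 1],   # Some patrons, hungry      -> wait
     [2, 1],   # Full, hungry              -> leave
     [0, 0],   # No patrons, not hungry    -> leave
     [1, 0],   # Some patrons, not hungry  -> wait
     [2, 0]]   # Full, not hungry          -> leave
y = [1, 0, 0, 1, 0]

tree = DecisionTreeClassifier(criterion="entropy")  # split on information gain
tree.fit(X, y)
print(tree.predict([[1, 1]]))  # Some patrons and hungry -> wait
```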

Decision tree classifier. Slide credit: Svetlana Lazebnik

Linear classifier. Find a linear function to separate the classes: f(x) = sgn(w_1 x_1 + w_2 x_2 + … + w_D x_D) = sgn(w · x). Slide credit: Svetlana Lazebnik
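
Evaluating such a classifier is just a dot product followed by a sign (a minimal sketch; the weights and bias term below are made-up values, not learned ones):

```python
import numpy as np

w = np.array([0.8, -0.5])   # weight vector (illustrative values)
b = 0.1                     # bias term

def linear_classify(x):
    # f(x) = sgn(w . x + b): +1 on one side of the hyperplane, -1 on the other
    return 1 if np.dot(w, x) + b > 0 else -1

print(linear_classify(np.array([1.0, 0.5])))   # +1
print(linear_classify(np.array([-1.0, 2.0])))  # -1
```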

Discriminant Function. The discriminant function can be an arbitrary function of x, such as: nearest neighbor, decision tree, or a linear function. Slide credit: Jinwei Gu

Linear Discriminant Function. g(x) is a linear function: g(x) = w^T x + b. The set {x : w^T x + b = 0} is a hyper-plane in the feature space, with w^T x + b > 0 on the side denoted +1 and w^T x + b < 0 on the side denoted -1. Slide credit: Jinwei Gu

Linear Discriminant Function. How would you classify these points (denoted +1 and -1) using a linear discriminant function in order to minimize the error rate? There are an infinite number of answers! Which one is the best? [Figure: several candidate separating lines in the (x_1, x_2) plane.] Slide credit: Jinwei Gu

Large Margin Linear Classifier. The linear discriminant function (classifier) with the maximum margin is the best. The margin is defined as the width by which the boundary could be increased before hitting a data point, a “safe zone” around the boundary. Why is it the best? Strong generalization ability. This maximum-margin classifier is the linear SVM. Slide credit: Jinwei Gu

Large Margin Linear Classifier. [Figure: decision boundary w^T x + b = 0 with margin boundaries w^T x + b = 1 and w^T x + b = -1; the points x+ and x- lying on these boundaries are the support vectors.] Slide credit: Jinwei Gu

Large Margin Linear Classifier. We know that w^T x+ + b = 1 and w^T x- + b = -1, where x+ and x- are support vectors on the two margin boundaries. The margin width is: M = (x+ - x-) · n = (x+ - x-) · w/||w|| = 2/||w||, where n = w/||w|| is the unit normal of the hyperplane. Slide credit: Jinwei Gu

Large Margin Linear Classifier. Formulation: maximize the margin 2/||w|| such that w^T x_i + b >= 1 for y_i = +1 and w^T x_i + b <= -1 for y_i = -1. Equivalently: minimize (1/2)||w||^2 such that y_i (w^T x_i + b) >= 1 for all training examples i. Slide credit: Jinwei Gu

Solving the Optimization Problem. Minimize (1/2)||w||^2 s.t. y_i (w^T x_i + b) >= 1 for all i: quadratic programming with linear constraints. Introducing a Lagrange multiplier α_i >= 0 for each constraint gives the Lagrangian function: L(w, b, α) = (1/2)||w||^2 - Σ_i α_i [y_i (w^T x_i + b) - 1]. Slide credit: Jinwei Gu

Solving the Optimization Problem. Setting the derivatives of L with respect to w and b to zero gives w = Σ_i α_i y_i x_i and Σ_i α_i y_i = 0. Substituting back yields the Lagrangian dual problem: maximize Σ_i α_i - (1/2) Σ_i Σ_j α_i α_j y_i y_j x_i^T x_j, s.t. α_i >= 0 for all i, and Σ_i α_i y_i = 0. Slide credit: Jinwei Gu

Solving the Optimization Problem. The solution has the form: w = Σ_i α_i y_i x_i, and b = y_k - w^T x_k for any x_k with α_k > 0. From the KKT condition, we know: α_i [y_i (w^T x_i + b) - 1] = 0. Thus, only the support vectors, the points lying on the margin boundaries w^T x + b = ±1, have α_i > 0; all other points have α_i = 0. Slide credit: Jinwei Gu

Solving the Optimization Problem. The linear discriminant function is: g(x) = w^T x + b = Σ_{i in SV} α_i y_i x_i^T x + b. Notice that it relies on a dot product between the test point x and the support vectors x_i. Slide credit: Jinwei Gu
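
A small sketch of evaluating g(x) from the support vectors alone (the multipliers, labels, points, and bias below are made up to illustrate the formula, not the output of a real solver):

```python
import numpy as np

# Illustrative support vectors, their labels y_i, and multipliers alpha_i
sv = np.array([[1.0, 1.0], [2.0, 0.5], [0.0, 2.0]])
sv_y = np.array([1.0, -1.0, 1.0])
alpha = np.array([0.7, 1.0, 0.3])
b = -0.2

def g(x):
    # g(x) = sum_i alpha_i y_i (x_i . x) + b: only dot products with SVs are needed
    return np.sum(alpha * sv_y * (sv @ x)) + b

x_test = np.array([1.5, 1.0])
print("g(x) =", g(x_test), "-> class", 1 if g(x_test) > 0 else -1)
```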

Linear separability Slide credit: Svetlana Lazebnik

Non-linear SVMs: Feature Space. General idea: the original input space can be mapped to some higher-dimensional feature space where the training set is separable: Φ: x → φ(x).

Nonlinear SVMs: The Kernel Trick. With this mapping, our discriminant function is now: g(x) = Σ_{i in SV} α_i y_i φ(x_i)^T φ(x) + b. There is no need to know this mapping explicitly, because we only use the dot product of feature vectors in both training and testing. A kernel function is defined as a function that corresponds to a dot product of two feature vectors in some expanded feature space: K(x_i, x_j) = φ(x_i)^T φ(x_j). Slide credit: Jinwei Gu

Nonlinear SVM: Optimization. Formulation (Lagrangian dual problem): maximize Σ_i α_i - (1/2) Σ_i Σ_j α_i α_j y_i y_j K(x_i, x_j), such that 0 <= α_i <= C and Σ_i α_i y_i = 0. The solution of the discriminant function is g(x) = Σ_{i in SV} α_i y_i K(x_i, x) + b. The optimization technique is the same. Slide credit: Jinwei Gu

Nonlinear SVMs: The Kernel Trick. Examples of commonly used kernel functions:
Linear kernel: K(x_i, x_j) = x_i^T x_j
Polynomial kernel: K(x_i, x_j) = (1 + x_i^T x_j)^p
Gaussian (Radial Basis Function, RBF) kernel: K(x_i, x_j) = exp(-||x_i - x_j||^2 / (2σ^2))
Sigmoid: K(x_i, x_j) = tanh(β_0 x_i^T x_j + β_1)
Slide credit: Jinwei Gu
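
These kernels are one-liners in NumPy (a sketch; the parameter values p and σ below are arbitrary defaults):

```python
import numpy as np

def linear_kernel(xi, xj):
    return xi @ xj

def polynomial_kernel(xi, xj, p=2):
    return (1 + xi @ xj) ** p

def gaussian_kernel(xi, xj, sigma=1.0):
    return np.exp(-np.linalg.norm(xi - xj) ** 2 / (2 * sigma ** 2))

xi, xj = np.array([1.0, 2.0]), np.array([2.0, 0.0])
print(linear_kernel(xi, xj), polynomial_kernel(xi, xj), gaussian_kernel(xi, xj))
```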

Support Vector Machine: Algorithm. 1. Choose a kernel function. 2. Choose a value for C. 3. Solve the quadratic programming problem (many software packages available). 4. Construct the discriminant function from the support vectors. (A sketch follows.) Slide credit: Jinwei Gu
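
The four steps map directly onto scikit-learn's SVC (a minimal sketch, assuming scikit-learn is installed; the toy data and parameter values are illustrative):

```python
import numpy as np
from sklearn.svm import SVC

X = np.array([[0.0, 0.0], [1.0, 1.0], [2.5, 2.5], [3.0, 3.5]])
y = np.array([-1, -1, 1, 1])

# Steps 1-2: choose a kernel and a value for C;
# Step 3: fit() solves the quadratic programming problem;
# Step 4: the fitted model keeps the support vectors for the discriminant function.
clf = SVC(kernel="rbf", C=1.0, gamma=0.5)
clf.fit(X, y)

print(clf.support_vectors_)       # the support vectors x_i
print(clf.predict([[2.0, 2.0]]))  # sign of g(x)
```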

Some Issues. Choice of kernel: a Gaussian or polynomial kernel is the default; if they are ineffective, more elaborate kernels are needed; domain experts can give assistance in formulating appropriate similarity measures. Choice of kernel parameters: e.g. σ in the Gaussian kernel; a common heuristic sets σ to the distance between the closest points with different classifications. In the absence of reliable criteria, applications rely on a validation set or cross-validation to set such parameters (see the sketch below). Slide credit: Jinwei Gu
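
Cross-validation over a parameter grid is a standard way to set C and the kernel width (a sketch with scikit-learn; the grid values and the synthetic data are arbitrary):

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)  # toy labels

# In scikit-learn's RBF kernel, gamma plays the role of 1/(2 sigma^2)
grid = GridSearchCV(SVC(kernel="rbf"),
                    {"C": [0.1, 1, 10], "gamma": [0.01, 0.1, 1]},
                    cv=5)
grid.fit(X, y)
print(grid.best_params_)  # parameters chosen by 5-fold cross-validation
```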

Summary: Support Vector Machine 1. Large Margin Classifier – Better generalization ability & less over-fitting 2. The Kernel Trick – Map data points to higher dimensional space in order to make them linearly separable. – Since only dot product is used, we do not need to represent the mapping explicitly. Slide credit: Jinwei Gu

Boosting. A simple algorithm for learning robust classifiers – Freund & Schapire, 1995; Friedman, Hastie, Tibshirani, 1998. Provides an efficient algorithm for sparse visual feature selection – Tieu & Viola, 2000; Viola & Jones, 2003. Easy to implement; doesn't require external optimization tools. Slide credit: Antonio Torralba

Boosting. Defines a classifier using an additive model: H(x) = Σ_t α_t h_t(x), where H is the strong classifier, the h_t are weak classifiers, the α_t are their weights, and x is the feature vector. Slide credit: Antonio Torralba

Boosting. Defines a classifier using an additive model: H(x) = Σ_t α_t h_t(x), where each h_t is chosen from a family of weak classifiers that we need to define. Slide credit: Antonio Torralba

Adaboost Slide credit: Antonio Torralba

Boosting. It is a sequential procedure: each data point x_t has a class label y_t ∈ {+1, -1} and a weight, initialized to w_t = 1. Slide credit: Antonio Torralba

Toy example. Each data point has a class label y_t ∈ {+1, -1} and a weight w_t = 1. Weak learners are drawn from the family of lines; a line h with p(error) = 0.5 is at chance. Slide credit: Antonio Torralba

Toy example. This one seems to be the best. This is a ‘weak classifier’: it performs slightly better than chance. Slide credit: Antonio Torralba

Toy example. We update the weights: w_t ← w_t exp{-y_t H_t}, so misclassified points gain weight, and we set a new problem for which the previous weak classifier performs at chance again. (This select-and-reweight step is repeated over several rounds.) Slide credit: Antonio Torralba

Toy example. The strong (non-linear) classifier is built as the combination of all the weak (linear) classifiers f_1, f_2, f_3, f_4. Slide credit: Antonio Torralba

Adaboost Slide credit: Antonio Torralba
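
A from-scratch sketch of AdaBoost with decision stumps as the weak classifiers (the dataset is a toy one made up for illustration; the update follows the exponential reweighting shown above):

```python
import numpy as np

def stump_predict(X, feature, threshold, polarity):
    # A decision stump: a "line" perpendicular to one coordinate axis
    return np.where(polarity * X[:, feature] < polarity * threshold, 1, -1)

def adaboost(X, y, rounds=10):
    n = len(y)
    w = np.ones(n) / n  # every data point starts with equal weight
    ensemble = []
    for _ in range(rounds):
        # Pick the stump with the lowest weighted error under the current weights
        best = None
        for feature in range(X.shape[1]):
            for threshold in np.unique(X[:, feature]):
                for polarity in (1, -1):
                    pred = stump_predict(X, feature, threshold, polarity)
                    err = np.sum(w[pred != y])
                    if best is None or err < best[0]:
                        best = (err, feature, threshold, polarity, pred)
        err, feature, threshold, polarity, pred = best
        err = max(err, 1e-10)                   # avoid division by zero
        alpha = 0.5 * np.log((1 - err) / err)   # weight of this weak classifier
        # Reweight: misclassified points gain weight, so the next weak
        # learner focuses on them (w_t <- w_t * exp(-alpha * y_t * h_t(x_t)))
        w *= np.exp(-alpha * y * pred)
        w /= w.sum()
        ensemble.append((alpha, feature, threshold, polarity))
    return ensemble

def strong_classify(X, ensemble):
    # H(x) = sign(sum_t alpha_t h_t(x)): the additive model over weak classifiers
    scores = sum(a * stump_predict(X, f, t, p) for a, f, t, p in ensemble)
    return np.sign(scores)

X = np.array([[1.0, 2.0], [2.0, 1.0], [3.0, 3.5], [4.0, 3.0], [1.5, 3.0], [3.5, 1.5]])
y = np.array([1, 1, -1, -1, 1, -1])
ensemble = adaboost(X, y, rounds=5)
print(strong_classify(X, ensemble))  # should match y on this toy set
```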