Machine Learning Overview Tamara Berg Recognizing People, Objects, and Actions

Today Schedule has been adjusted a little bit due to Monday’s cancellation – Today – Overview of machine learning algorithms (other than deep learning) – We will cover a quick intro to deep learning on day 2 of the object recognition topic The Topic Presentation groups have been posted to the class webpage – Group 1, Feb 15/17, should meet with me early next week to go over presentation outline and proposed paper list (Adam, Zherong, Jae-Sung, Cheng-Yang)

For next class Read assigned object recognition papers (posted later today) Before class turn in hard copy ½ page summary for each assigned paper outlining: 1) the goal of the paper, 2) the approach, 3) what was novel, 4) what you thought of the paper. (summary template on the class webpage)

To Do – prepping for projects – Install your favorite machine learning tool (e.g. CNNs, SVMs, etc.) – Download your favorite image dataset (ImageNet subset, LFW face dataset, Zappos shoe dataset…) – Run a simple image classification experiment – split your dataset into training/testing sets and train a classifier to recognize images from each category (may or may not require extracting features) Useful code/data/etc.: https://github.com/jbhuang0604/awesome-computer-vision Deep Learning:

Types of ML algorithms Unsupervised – Algorithms operate on unlabeled examples Supervised – Algorithms operate on labeled examples Semi/Partially-supervised – Algorithms combine both labeled and unlabeled examples Slide 5 of 113

Unsupervised Learning, e.g. clustering Slide 6 of 113

K-means clustering Want to minimize the sum of squared Euclidean distances between points x_i and their nearest cluster centers m_k. Algorithm: Randomly initialize K cluster centers; iterate until convergence: assign each data point to the nearest center, then recompute each cluster center as the mean of all points assigned to it. source: Svetlana Lazebnik Slide 8 of 113
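A minimal NumPy sketch of this procedure (the function name kmeans and the convergence test are illustrative choices, and empty clusters are not handled):

```python
import numpy as np

def kmeans(X, K, n_iters=100, seed=0):
    """Minimize the sum of squared Euclidean distances between points and their nearest centers."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=K, replace=False)]   # randomly initialize K cluster centers
    for _ in range(n_iters):
        # Assign each data point to the nearest center
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Recompute each cluster center as the mean of all points assigned to it
        new_centers = np.array([X[labels == k].mean(axis=0) for k in range(K)])
        if np.allclose(new_centers, centers):
            break   # converged
        centers = new_centers
    return centers, labels
```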

Supervised Learning, e.g. nearest neighbor, decision trees, SVMs, boosting Slide 9 of 113

Slide from Dan Klein Slide 10 of 113

Slide from Dan Klein Slide 11 of 113

Slide from Dan Klein Slide 12 of 113

Slide from Dan Klein Slide 13 of 113

Example: Image classification. Input: images (apple, pear, tomato, cow, dog, horse); desired output: the corresponding class labels. Slide credit: Svetlana Lazebnik Slide 14 of 113

Slide from Dan Klein Slide 15 of 113

Example: Seismic data. Plotting body wave magnitude against surface wave magnitude separates nuclear explosions from earthquakes. Slide credit: Svetlana Lazebnik Slide 16 of 113

Slide from Dan Klein Slide 17 of 113

The basic classification framework: y = f(x), where y is the output, f the classification function, and x the input. Learning: given a training set of labeled examples {(x_1, y_1), …, (x_N, y_N)}, estimate the parameters of the prediction function f. Inference: apply f to a never-before-seen test example x and output the predicted value y = f(x). Slide credit: Svetlana Lazebnik Slide 18 of 113
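To make the learning/inference split concrete, here is a toy instance of y = f(x) (a nearest-class-mean rule; the slides do not prescribe this particular f, it is purely illustrative):

```python
import numpy as np

class NearestMeanClassifier:
    def fit(self, X, y):
        # Learning: estimate the parameters of f from the labeled training set {(x_i, y_i)}
        self.classes_ = np.unique(y)
        self.means_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        return self

    def predict(self, x):
        # Inference: apply f to a never-before-seen example x and output y = f(x)
        dists = np.linalg.norm(self.means_ - x, axis=1)
        return self.classes_[dists.argmin()]
```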

Some ML classification methods (trained on up to 10^6 examples): Nearest neighbor (Shakhnarovich, Viola, Darrell 2003; Berg, Berg, Malik 2005; …), Neural networks (LeCun, Bottou, Bengio, Haffner 1998; Rowley, Baluja, Kanade 1998; …), Support Vector Machines and Kernels (Guyon, Vapnik; Heisele, Serre, Poggio 2001; …), Conditional Random Fields (McCallum, Freitag, Pereira 2000; Kumar, Hebert 2003; …). Slide credit: Antonio Torralba 19

Example: Training and testing. Key challenge: generalization to unseen examples. Training set (labels known); test set (labels unknown). Slide credit: Svetlana Lazebnik Slide 20 of 113

Slide credit: Dan Klein Slide 21 of 113

Slide from Min-Yen Kan Classification by Nearest Neighbor Word vector document classification – here the vector space is illustrated as having 2 dimensions. How many dimensions would the data actually live in? Slide 22 of 113

Slide from Min-Yen Kan Classification by Nearest Neighbor Slide 23 of 113

Classification by Nearest Neighbor Classify the test document as the class of the document “nearest” to the query document (use vector similarity to find most similar doc) Slide from Min-Yen Kan Slide 24 of 113

Classification by kNN Classify the test document as the majority class of the k documents “nearest” to the query document. Slide from Min-Yen Kan Slide 25 of 113
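A small sketch of this rule, assuming the documents are already word vectors and using cosine similarity as the vector similarity (the function name and k value are illustrative):

```python
import numpy as np
from collections import Counter

def knn_classify(query, train_vectors, train_labels, k=3):
    """Return the majority class of the k training documents most similar to the query."""
    # Cosine similarity between the query vector and every training vector
    sims = train_vectors @ query / (
        np.linalg.norm(train_vectors, axis=1) * np.linalg.norm(query) + 1e-12)
    nearest = np.argsort(-sims)[:k]                    # indices of the k most similar documents
    votes = Counter(train_labels[i] for i in nearest)
    return votes.most_common(1)[0][0]                  # majority vote
```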

Slide from Min-Yen Kan What are the features? What’s the training data? Testing data? Parameters? Classification by kNN Slide 26 of 113

Slide from Min-Yen Kan Slide 27 of 113

Slide from Min-Yen Kan Slide 28 of 113

Slide from Min-Yen Kan Slide 29 of 113

Slide from Min-Yen Kan Slide 30 of 113

Slide from Min-Yen Kan Slide 31 of 113

Slide from Min-Yen Kan What are the features? What’s the training data? Testing data? Parameters? Classification by kNN Slide 32 of 113

NN (examples from computer vision) 33

NN for pose estimation Fast Pose Estimation with Parameter Sensitive Hashing Shakhnarovich, Viola, Darrell 34

The algorithm flow: input query → representation → processed query → fast indexing (LSH) against a database of examples → retrieval → output match. 35

J. Hays and A. Efros, IM2GPS: estimating geographic information from a single image, CVPR 2008 NN for vision

Where? What can you say about where these photos were taken? 37

How? Collect a large collection of geo-tagged photos 6.5 million images with both GPS coordinates and geographic keywords, removing images with keywords like birthday, concert, abstract, … Test set – 400 randomly sampled images from this collection. Manually removed abstract photos and photos with recognizable people – 237 test photos. 38

Nearest Neighbor Matching For each input image, compute features (color, texture, shape). Compute the distance in feature space to all 6 million images in the database (each feature contributes equally). Label the image with the GPS coordinates of either the 1 nearest neighbor, or the k=120 nearest neighbors – a probability map over the entire globe. 39

Results 40

Results 41

Results 42

Decision tree classifier Example problem: decide whether to wait for a table at a restaurant, based on the following attributes: 1.Alternate: is there an alternative restaurant nearby? 2.Bar: is there a comfortable bar area to wait in? 3.Fri/Sat: is today Friday or Saturday? 4.Hungry: are we hungry? 5.Patrons: number of people in the restaurant (None, Some, Full) 6.Price: price range ($, $$, $$$) 7.Raining: is it raining outside? 8.Reservation: have we made a reservation? 9.Type: kind of restaurant (French, Italian, Thai, Burger) 10.WaitEstimate: estimated waiting time (0-10, 10-30, 30-60, >60) 43

Decision tree classifier 44

Decision tree classifier 45

46 Shall I play tennis today?

48 How do we choose the best attribute? Leaf nodes Choose next attribute for splitting

49 Criterion for attribute selection Which is the best attribute? – The one which will result in the smallest tree – Heuristic: choose the attribute that produces the "purest" nodes Need a good measure of purity!

50 Information Gain Which test is more informative? Splitting on Humidity (<=75% vs. >75%) or splitting on Wind (<=20 vs. >20)?

51 Information Gain Impurity/Entropy (informal) – Measures the level of impurity in a group of examples

52 Impurity: example groups ranging from a very impure group, to a less impure group, to minimum impurity.

53 Entropy: a common way to measure impurity. Entropy = -Σ_i p_i log2(p_i), where p_i is the probability of class i, computed as the proportion of class i in the set.

54 2-Class Cases: What is the entropy of a group in which all examples belong to the same class? entropy = -1 log2(1) = 0 (minimum impurity). What is the entropy of a group with 50% in either class? entropy = -0.5 log2(0.5) - 0.5 log2(0.5) = 1 (maximum impurity).

55 Information Gain We want to determine which attribute in a given set of training feature vectors is most useful for discriminating between the classes to be learned. Information gain tells us how useful a given attribute of the feature vectors is. We can use it to decide the ordering of attributes in the nodes of a decision tree.

56 Calculating Information Gain. Information Gain = entropy(parent) - [weighted average entropy(children)]. Example: an entire population of 30 instances is split into children of 17 and 13 instances; weighting each child's entropy by its share of the instances and subtracting from the parent entropy gives an information gain of 0.38.
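A short sketch of both quantities (generic helper functions; the exact class counts behind the 0.38 figure are not shown on the slide, so no attempt is made to reproduce them):

```python
import numpy as np
from collections import Counter

def entropy(labels):
    """Entropy = -sum_i p_i log2(p_i), with p_i the proportion of class i in the set."""
    counts = np.array(list(Counter(labels).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def information_gain(parent_labels, children_labels):
    """Information gain = entropy(parent) - weighted average entropy of the children."""
    n = len(parent_labels)
    weighted = sum(len(child) / n * entropy(child) for child in children_labels)
    return entropy(parent_labels) - weighted
```

As a check against the two-class cases above, entropy(['a', 'a', 'a']) is 0 and entropy(['a', 'b']) is 1.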

57 e.g. based on information gain

Linear classifier Find a linear function to separate the classes: f(x) = sgn(w_1 x_1 + w_2 x_2 + … + w_D x_D) = sgn(w · x) Slide credit: Svetlana Lazebnik Slide 58 of 113

Discriminant Function It can be an arbitrary function of x, such as: Nearest Neighbor, Decision Tree, Linear Functions. Slide credit: Jinwei Gu Slide 59 of 113

Linear Discriminant Function g(x) is a linear function: g(x) = w^T x + b. The set w^T x + b = 0 is a hyperplane in the feature space; points with w^T x + b > 0 are classified as +1 and points with w^T x + b < 0 as -1. Slide credit: Jinwei Gu Slide 60 of 113

Linear Discriminant Function How would you classify these points (classes +1 and -1 in the x_1, x_2 plane) using a linear discriminant function in order to minimize the error rate? There are an infinite number of answers! Slide credit: Jinwei Gu Slide 61 of 113

Linear Discriminant Function How would you classify these points (classes +1 and -1 in the x_1, x_2 plane) using a linear discriminant function in order to minimize the error rate? There are an infinite number of answers; which one is the best? Slide credit: Jinwei Gu Slide 64 of 113

Large Margin Linear Classifier ("safe zone") The linear discriminant function (classifier) with the maximum margin is the best. The margin is defined as the width by which the boundary could be increased before hitting a data point. Why is it the best? Strong generalization ability. This is the linear SVM. Slide credit: Jinwei Gu Slide 65 of 113

Large Margin Linear Classifier The margin is the region between the hyperplanes w^T x + b = 1 and w^T x + b = -1, on either side of the decision boundary w^T x + b = 0. The points x+ and x- that lie on these hyperplanes are the support vectors. Slide credit: Jinwei Gu Slide 66 of 113

Support vector machines Find the hyperplane that maximizes the margin between the positive and negative examples. Distance between a point and the hyperplane: |w^T x + b| / ||w||. For support vectors, |w^T x + b| = 1. Therefore, the margin is 2 / ||w||. C. Burges, A Tutorial on Support Vector Machines for Pattern Recognition, Data Mining and Knowledge Discovery, 1998

Finding the maximum margin hyperplane 1. Maximize the margin 2 / ||w|| (equivalently, minimize ||w||^2 / 2). 2. Correctly classify all training data: y_i (w^T x_i + b) >= 1. This is a quadratic optimization problem. C. Burges, A Tutorial on Support Vector Machines for Pattern Recognition, Data Mining and Knowledge Discovery, 1998

Discriminating between classes The linear discriminant function is: g(x) = Σ_i α_i y_i (x_i · x) + b. Notice it relies on a dot product between the test point x and the support vectors x_i. 69

Linear separability 70

Non-linear SVMs: Feature Space General idea: the original input space can be mapped to some higher-dimensional feature space where the training set is separable: Φ: x → φ(x) Slide courtesy of 71

Nonlinear SVMs: The Kernel Trick With this mapping, our discriminant function becomes: g(x) = Σ_i α_i y_i φ(x_i) · φ(x) + b. There is no need to know this mapping explicitly, because we only use the dot product of feature vectors in both training and testing. A kernel function is defined as a function that corresponds to a dot product of two feature vectors in some expanded feature space: K(x_i, x_j) = φ(x_i) · φ(x_j). 72

Nonlinear SVMs: The Kernel Trick Examples of commonly-used kernel functions: Linear kernel: K(x_i, x_j) = x_i^T x_j. Polynomial kernel: K(x_i, x_j) = (1 + x_i^T x_j)^p. Gaussian (Radial-Basis Function, RBF) kernel: K(x_i, x_j) = exp(-||x_i - x_j||^2 / (2σ^2)). Sigmoid kernel: K(x_i, x_j) = tanh(β_0 x_i^T x_j + β_1). 73
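Written out directly, with p, sigma, beta0 and beta1 as user-chosen hyperparameters (a sketch; the exact parameterizations on the slide may differ slightly):

```python
import numpy as np

def linear_kernel(xi, xj):
    return xi @ xj

def polynomial_kernel(xi, xj, p=2):
    return (1 + xi @ xj) ** p

def rbf_kernel(xi, xj, sigma=1.0):
    return np.exp(-np.linalg.norm(xi - xj) ** 2 / (2 * sigma ** 2))

def sigmoid_kernel(xi, xj, beta0=1.0, beta1=0.0):
    return np.tanh(beta0 * (xi @ xj) + beta1)
```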

Support Vector Machine: Algorithm 1. Choose a kernel function 2. Choose a value for C and any other parameters (e.g. σ) 3. Solve the quadratic programming problem (many software packages available) 4. Classify held out validation instances using the learned model 5. Select the best learned model based on validation accuracy 6. Classify test instances using the final selected model 74
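A sketch of these six steps using scikit-learn's SVC as the "software package" that solves the quadratic program (the library choice, placeholder data, and the grid of C/gamma values are all assumptions, not part of the slides):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# Placeholder data; in practice X, y would be your image features and labels
X, y = np.random.rand(200, 10), np.random.randint(0, 2, 200)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

best_model, best_acc = None, 0.0
for C in [0.1, 1, 10]:                       # Step 2: candidate values of C
    for gamma in [0.01, 0.1, 1]:             # gamma = 1 / (2 * sigma^2) for the RBF kernel
        model = SVC(kernel='rbf', C=C, gamma=gamma)   # Steps 1 and 3: choose kernel, solve the QP
        model.fit(X_train, y_train)
        acc = model.score(X_val, y_val)      # Step 4: classify held-out validation instances
        if acc > best_acc:                   # Step 5: keep the best model by validation accuracy
            best_model, best_acc = model, acc
# Step 6: classify test instances with best_model.predict(X_test)
```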

Some Issues Choice of kernel - Linear, Gaussian, or polynomial kernels are the default - if they are ineffective, more elaborate kernels are needed - domain experts can give assistance in formulating appropriate similarity measures Choice of kernel parameters - e.g. σ in the Gaussian kernel - In the absence of reliable criteria, applications rely on the use of a validation set or cross-validation to set such parameters. This slide is courtesy of 75

SVMs in Computer Vision 76

Detection We slide a window over the image, extract features x for each window, and classify each window into pos/neg: y = F(x), with +1 = pos and -1 = neg.
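A sketch of that loop, assuming a feature extractor and a trained classifier are already available (both names are placeholders):

```python
def detect(image, extract_features, classify, window=(64, 64), step=16):
    """Slide a window over the image and return the windows classified as positive."""
    H, W = image.shape[:2]
    win_h, win_w = window
    detections = []
    for y in range(0, H - win_h + 1, step):
        for x in range(0, W - win_w + 1, step):
            patch = image[y:y + win_h, x:x + win_w]
            feats = extract_features(patch)          # the feature vector x for this window
            if classify(feats) == +1:                # y = F(x): +1 = pos, -1 = neg
                detections.append((x, y, win_w, win_h))
    return detections
```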

Sliding Window Detection 78

79 Representation

81 Example Results

82 Example Results

Summary: Support Vector Machine 1. Large Margin Classifier – Better generalization ability & less over-fitting 2. The Kernel Trick – Map data points to higher dimensional space in order to make them linearly separable. – Since only dot product is needed, we do not need to represent the mapping explicitly. 83

Model Ensembles

Random Forests 88 A variant of bagging proposed by Breiman. The classifier consists of a collection of decision-tree-structured classifiers; each tree casts a vote for the class of the input x.
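A brief sketch of such an ensemble with scikit-learn (assuming that library; the data here is a placeholder):

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import make_classification

# Placeholder data; in practice X, y would be your features and labels
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

# 100 trees, each trained on a bootstrap sample with random feature subsets
forest = RandomForestClassifier(n_estimators=100, random_state=0)
forest.fit(X, y)
print(forest.predict(X[:5]))   # each prediction is the vote of the trees for the class of x
```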

A simple algorithm for learning robust classifiers – Freund & Schapire, 1995 – Friedman, Hastie, Tibshirani, 1998 Provides an efficient algorithm for sparse visual feature selection – Tieu & Viola, 2000 – Viola & Jones, 2003 Easy to implement, doesn't require external optimization tools. Used for many real problems in AI. Boosting 89

Boosting Defines a classifier using an additive model: H(x) = α_1 h_1(x) + α_2 h_2(x) + α_3 h_3(x) + …, where H(x) is the strong classifier, each h_t(x) is a weak classifier, α_t is its weight, and x is the input feature vector. 90

Boosting Defines a classifier using an additive model: H(x) = α_1 h_1(x) + α_2 h_2(x) + …, where the weak classifiers h_t(x) are selected from a family of weak classifiers that we need to define. 91

Adaboost Input: training samples Initialize weights on samples For T iterations: Select best weak classifier based on weighted error Update sample weights Output: final strong classifier (combination of selected weak classifier predictions)
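A compact sketch of these steps with decision stumps as the weak classifiers (a minimal AdaBoost for labels in {-1, +1}; not necessarily the exact variant used in the examples that follow):

```python
import numpy as np

def adaboost(X, y, T=10):
    """Return a strong classifier as a list of (alpha, feature, threshold, polarity) stumps."""
    n = len(y)
    w = np.ones(n) / n                                  # initialize weights on samples
    strong = []
    for _ in range(T):
        best = None
        for f in range(X.shape[1]):                     # select best weak classifier (stump)
            for thr in np.unique(X[:, f]):
                for polarity in (+1, -1):
                    pred = np.where(X[:, f] > thr, polarity, -polarity)
                    err = w[pred != y].sum()            # weighted error
                    if best is None or err < best[0]:
                        best = (err, f, thr, polarity, pred)
        err, f, thr, polarity, pred = best
        err = np.clip(err, 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)           # weight of this weak classifier
        w *= np.exp(-alpha * y * pred)                  # update sample weights
        w /= w.sum()
        strong.append((alpha, f, thr, polarity))
    return strong

def adaboost_predict(strong, X):
    """Final strong classifier: sign of the weighted combination of stump predictions."""
    H = sum(a * np.where(X[:, f] > thr, p, -p) for a, f, thr, p in strong)
    return np.sign(H)
```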

Boosting It is a sequential procedure. Each data point x_t has a class label y_t ∈ {+1, -1} and a weight, initially w_t = 1. 93

Toy example Weak learners are from the family of lines; a line h with p(error) = 0.5 is at chance. Each data point has a class label y_t ∈ {+1, -1} and a weight w_t = 1. 94

Toy example This one seems to be the best. This is a 'weak classifier': it performs slightly better than chance. Each data point has a class label y_t ∈ {+1, -1} and a weight w_t = 1. 95

Toy example We update the weights: w_t ← w_t exp{-y_t H_t}. Each data point has a class label y_t ∈ {+1, -1} and a weight w_t. 96

Toy example The strong (non-linear) classifier is built as the combination of all the weak (linear) classifiers f_1, f_2, f_3, f_4. 100

Adaboost Input: training samples Initialize weights on samples For T iterations: Select best weak classifier based on weighted error Update sample weights Output: final strong classifier (combination of selected weak classifier predictions)

Boosting for Face Detection 102

Face detection We slide a window over the image, extract features x for each window, and classify each window into face/non-face: y = F(x), with +1 = face and -1 = not face.

What is a face? Eyes are dark (eyebrows+shadows) Cheeks and forehead are bright. Nose is bright Paul Viola, Michael Jones, Robust Real-time Object Detection, IJCV 04

Basic feature extraction Information type: intensity. Sum over: gray and white rectangles. Output: gray - white. A separate output value for each type, each scale, and each position in the window: FEX(im) = x = [x_1, x_2, …, x_n] (e.g. features x_120, x_357, x_629, x_834 in the figure). Paul Viola, Michael Jones, Robust Real-time Object Detection, IJCV 04
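A sketch of computing one such gray-minus-white rectangle feature with an integral image, in the spirit of Viola-Jones (the rectangle layout and coordinates are illustrative):

```python
import numpy as np

def integral_image(img):
    """Cumulative sums so that any rectangle sum costs only four lookups."""
    return np.cumsum(np.cumsum(np.asarray(img, dtype=float), axis=0), axis=1)

def rect_sum(ii, x, y, w, h):
    """Sum of intensities in the rectangle with top-left corner (x, y), width w, height h."""
    A = ii[y - 1, x - 1] if x > 0 and y > 0 else 0.0
    B = ii[y - 1, x + w - 1] if y > 0 else 0.0
    C = ii[y + h - 1, x - 1] if x > 0 else 0.0
    D = ii[y + h - 1, x + w - 1]
    return D - B - C + A

def two_rect_feature(ii, x, y, w, h):
    """One feature value: sum over one rectangle minus sum over its neighbor (gray - white)."""
    return rect_sum(ii, x, y, w, h) - rect_sum(ii, x + w, y, w, h)
```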

Decision trees Stump: 1 root, 2 leaves. If x_i > a then positive, else negative. A very simple "weak classifier". Paul Viola, Michael Jones, Robust Real-time Object Detection, IJCV 04

Summary: Face detection Use decision stumps as weak classifiers. Use boosting to build a strong classifier. Use a sliding window to detect the face (e.g. a stump tests whether x_234 > 1.3 to decide face (+1) vs. non-face).

Semi-Supervised Learning Slide 108 of 113

Supervised learning has many successes: recognizing speech, steering a car, classifying documents, classifying proteins, recognizing faces and objects in images, ... Slide Credit: Avrim Blum Slide 109 of 113

However, for many problems, labeled data can be rare or expensive. Unlabeled data is much cheaper. Need to pay someone to do it, requires special testing,… Slide Credit: Avrim Blum 110

However, for many problems, labeled data can be rare or expensive. Unlabeled data is much cheaper. Speech Images Medical outcomes Customer modeling Protein sequences Web pages Need to pay someone to do it, requires special testing,… Slide Credit: Avrim Blum 111

However, for many problems, labeled data can be rare or expensive. Unlabeled data is much cheaper. [From Jerry Zhu] Need to pay someone to do it, requires special testing,… Slide Credit: Avrim Blum 112

Need to pay someone to do it, requires special testing,… However, for many problems, labeled data can be rare or expensive. Unlabeled data is much cheaper. Can we make use of cheap unlabeled data? Slide Credit: Avrim Blum 113

Semi-Supervised Learning Can we use unlabeled data to augment a small labeled sample to improve learning? But unlabeled data is missing the most important info!! But maybe still has useful regularities that we can use. But… Slide Credit: Avrim Blum Slide 114 of 113

Method 1: EM 115

How to use unlabeled data One way is to use the EM algorithm – EM: Expectation Maximization The EM algorithm is a popular iterative algorithm for maximum likelihood estimation in problems with missing data. The EM algorithm consists of two steps, – Expectation step, i.e., filling in the missing data – Maximization step – calculate a new maximum a posteriori estimate for the parameters. Slide 116 of 113

Example Algorithm 1.Train a classifier with only the labeled documents. 2.Use it to probabilistically classify the unlabeled documents. 3.Use ALL the documents to train a new classifier. 4.Iterate steps 2 and 3 to convergence. Slide 117 of 113
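A sketch of this loop with a Naive Bayes text classifier standing in for "a classifier" (scikit-learn and hard pseudo-labels are simplifying assumptions; a full EM treatment would weight documents by their class posteriors):

```python
import numpy as np
from sklearn.naive_bayes import MultinomialNB

def self_training(X_labeled, y_labeled, X_unlabeled, n_rounds=10):
    """X_* are dense, non-negative count features (e.g. bags of words)."""
    # Step 1: train with only the labeled documents
    clf = MultinomialNB().fit(X_labeled, y_labeled)
    prev = None
    for _ in range(n_rounds):
        # Step 2: classify the unlabeled documents (hard labels here, posteriors in full EM)
        pseudo = clf.predict(X_unlabeled)
        if prev is not None and np.array_equal(pseudo, prev):
            break                                    # Step 4: iterate to convergence
        prev = pseudo
        # Step 3: use ALL the documents to train a new classifier
        X_all = np.vstack([X_labeled, X_unlabeled])
        y_all = np.concatenate([y_labeled, pseudo])
        clf = MultinomialNB().fit(X_all, y_all)
    return clf
```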

Method 2: Co-Training 118

Co-training [Blum & Mitchell '98] Many problems have two different sources of info ("features/views") you can use to determine the label. E.g., classifying faculty webpages: you can use the words on the page or the words on links pointing to the page (in the figure, a link labeled "My Advisor" points to Prof. Avrim Blum's page). x_1 = link info, x_2 = text info, x = link info & text info. Slide Credit: Avrim Blum Slide 119 of 113

Co-training Idea: Use small labeled sample to learn initial rules. – E.g., “ my advisor ” pointing to a page is a good indicator it is a faculty home page. – E.g., “ I am teaching ” on a page is a good indicator it is a faculty home page. my advisor Slide Credit: Avrim Blum Slide 120 of 113

Co-training Idea: Use a small labeled sample to learn initial rules. – E.g., "my advisor" pointing to a page is a good indicator it is a faculty home page. – E.g., "I am teaching" on a page is a good indicator it is a faculty home page. Then look for unlabeled examples ⟨x_1, x_2⟩ where one view is confident and the other is not, and have it label the example for the other: train 2 classifiers, one on each type of info, using each to help train the other. Slide Credit: Avrim Blum Slide 121 of 113
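A sketch of this procedure with one classifier per view (logistic regression, the confidence threshold, and the variable naming are illustrative assumptions):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def co_training(X1_lab, X2_lab, y_lab, X1_unl, X2_unl, rounds=10, conf=0.95):
    """View 1 = link info (x_1), view 2 = text info (x_2); *_lab labeled, *_unl unlabeled."""
    h1 = LogisticRegression(max_iter=1000).fit(X1_lab, y_lab)
    h2 = LogisticRegression(max_iter=1000).fit(X2_lab, y_lab)
    for _ in range(rounds):
        if len(X1_unl) == 0:
            break
        p1 = h1.predict_proba(X1_unl).max(axis=1)    # confidence of the view-1 classifier
        p2 = h2.predict_proba(X2_unl).max(axis=1)    # confidence of the view-2 classifier
        # Take examples where at least one view is confident; it labels the example for the other
        take = (p1 >= conf) | (p2 >= conf)
        if not take.any():
            break
        labels = np.where(p1 >= p2, h1.predict(X1_unl), h2.predict(X2_unl))
        X1_lab = np.vstack([X1_lab, X1_unl[take]])
        X2_lab = np.vstack([X2_lab, X2_unl[take]])
        y_lab = np.concatenate([y_lab, labels[take]])
        X1_unl, X2_unl = X1_unl[~take], X2_unl[~take]
        h1 = LogisticRegression(max_iter=1000).fit(X1_lab, y_lab)
        h2 = LogisticRegression(max_iter=1000).fit(X2_lab, y_lab)
    return h1, h2
```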

Co-training vs. EM Co-training splits the features; EM does not. Co-training uses the unlabeled data incrementally, whereas EM probabilistically labels all of the data at each round and uses it iteratively. Slide 122 of 113

Generative vs Discriminative Discriminative version – build a classifier to discriminate between monkeys and non-monkeys. P(monkey|image)

Generative version - build a model of the joint distribution. P(image,monkey) Generative vs Discriminative

Can use Bayes' rule to compute p(monkey|image) if we know p(image,monkey): p(monkey|image) = p(image,monkey) / p(image).

Generative vs Discriminative Can use Bayes rule to compute p(monkey|image) if we know p(image,monkey) Discriminative Generative
