Object Recognition by Discriminative Methods. Sinisa Todorovic. 1st Sino-USA Summer School in VLPR, July 2009.



Motivation (from Kristen Grauman, B. Leibe): Discriminate, yes, but what? Problem 1: What are good features?

Motivation (from Kristen Grauman, B. Leibe): Discriminate, yes, but against what? Problem 2: What are good training examples?

Motivation (from Kristen Grauman, B. Leibe): Discriminate, yes, but how? Problem 3: What is a good classifier?

Motivation (from Kristen Grauman, B. Leibe): What is a non-car? Is the Bayes decision optimal? Sliding windows?

How to Classify? Observable feature x, class y, discriminant function f(x). The discriminant function may have a probabilistic interpretation.

How to Classify? Generative: model how each class generates the image feature, i.e., p(feature | class). (From Antonio Torralba, 2007.)

How to Classify? Discriminative: model p(class | feature), or the decision boundary directly. (From Antonio Torralba, 2007.)

How to Classify? Observable feature x, class y. Linear discriminant function: f(x) = wᵀx + b.

How to Classify? Large margin classifiers: the margin is the width by which the decision boundary can be grown before hitting a data point. (Figure: two candidate boundaries with margins 1 and 2 in the Var 1 / Var 2 plane.)
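The margin of a given linear boundary can be computed directly as the smallest signed distance of any training point to the boundary. A minimal sketch, with made-up weights and points:

```python
# Geometric margin of a linear boundary w·x + b = 0 on a labeled set:
# the smallest signed distance y_i (w·x_i + b) / ||w|| over all points.
import math

def geometric_margin(w, b, points):
    """points: list of (x, y) with x a feature vector and y in {-1, +1}."""
    norm = math.sqrt(sum(wi * wi for wi in w))
    return min(y * (sum(wi * xi for wi, xi in zip(w, x)) + b) / norm
               for x, y in points)

# Toy data in the Var 1 / Var 2 plane; the boundary is x1 = 0 (w = (1, 0), b = 0)
data = [((2.0, 0.5), +1), ((1.0, -1.0), +1), ((-1.5, 0.3), -1)]
print(geometric_margin((1.0, 0.0), 0.0, data))  # 1.0, set by the point at x1 = 1
```

A large-margin classifier picks, among all separating boundaries, the one maximizing this quantity.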

Distance Based Classifiers. Given: a labeled training set; query: a new point to classify by its distances to the training points.

Outline Random Forests Breiman, Fua, Criminisi, Cipolla, Shotton, Lempitsky, Zisserman, Bosch,... Support Vector Machines Guyon, Vapnik, Heisele, Serre, Poggio, Berg... Nearest Neighbor Shakhnarovich, Viola, Darrell, Torralba, Efros, Berg, Frome, Malik, Todorovic...

Max-Margin Linear Classifier: margin width; support vectors (the points lying on the margin).

Max-Margin Linear Classifier. Problem: maximize the margin width, subject to: all training points lying on the correct side of the margin.

Max-Margin Linear Classifier. Problem: min (1/2)||w||², subject to: y_i(wᵀx_i + b) ≥ 1 for all i, after rescaling the data so the margin constraints equal 1.

LSVM (Linear SVM) Derivation

Problem: min over (w, b) of (1/2)||w||², s.t. y_i(wᵀx_i + b) ≥ 1, i = 1, …, N.

Dual Problem: solve using the Lagrangian. At the solution, w = Σ_i α_i y_i x_i.

Dual Problem: max over α of Σ_i α_i − (1/2) Σ_{i,j} α_i α_j y_i y_j x_iᵀx_j, s.t. α_i ≥ 0 and Σ_i α_i y_i = 0. Then compute: w = Σ_i α_i y_i x_i (α_i > 0 only for support vectors), and b = y_s − wᵀx_s for support vectors x_s.
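Recovering the primal (w, b) from the dual multipliers can be sketched on a toy 1-D problem small enough to solve by hand (the α values below come from working the dual out on paper, not from the slides):

```python
# Recover (w, b) from dual multipliers: w = sum_i alpha_i y_i x_i,
# and b = y_s - w * x_s for any support vector x_s (alpha_s > 0).
# Toy 1-D problem: x = -1 labeled -1, x = +1 labeled +1.
# Solving the dual by hand gives alpha_1 = alpha_2 = 0.5 (both points
# sit on the margin, so both are support vectors).
xs = [-1.0, 1.0]
ys = [-1.0, 1.0]
alphas = [0.5, 0.5]

w = sum(a * y * x for a, y, x in zip(alphas, ys, xs))
s = next(i for i, a in enumerate(alphas) if a > 0)  # pick any support vector
b = ys[s] - w * xs[s]
print(w, b)  # 1.0 0.0 -> decision function f(x) = x
```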

Linearly Non-Separable Case: min (1/2)||w||² + C Σ_i ξ_i, s.t. y_i(wᵀx_i + b) ≥ 1 − ξ_i and ξ_i ≥ 0; C sets the trade-off between maximum separation and misclassification.
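The slack variables ξ_i are just the hinge losses of the individual points, so the soft-margin objective is easy to evaluate for a candidate (w, b). A sketch with made-up data (note the third point violates its margin):

```python
# Soft-margin primal objective: (1/2)||w||^2 + C * sum_i xi_i, where
# xi_i = max(0, 1 - y_i (w·x_i + b)) is the slack (hinge loss) of point i.
def soft_margin_objective(w, b, C, points):
    reg = 0.5 * sum(wi * wi for wi in w)
    slacks = [max(0.0, 1.0 - y * (sum(wi * xi for wi, xi in zip(w, x)) + b))
              for x, y in points]
    return reg + C * sum(slacks), slacks

# 1-D toy set; the last point is on the wrong side of its margin
points = [((2.0,), +1), ((-2.0,), -1), ((-0.5,), +1)]
obj, slacks = soft_margin_objective((1.0,), 0.0, 1.0, points)
print(obj, slacks)  # 2.0 [0.0, 0.0, 1.5]
```

Raising C penalizes the slack more heavily, pushing the optimizer toward fewer margin violations at the cost of a narrower margin.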

Non-Linear SVMs: achieve non-linear separation by mapping the data to another space. In the SVM formulation the data appear only through dot products, so there is no need to compute the dot product in the new space explicitly: Mercer kernels.
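A standard example of such a kernel (not specific to these slides) is the RBF kernel, which stands in for the dot product of an implicit feature map that is never computed:

```python
# Mercer kernel sketch: K(x, z) = exp(-gamma * ||x - z||^2) acts as a dot
# product in an implicit feature space. A valid kernel matrix on a point set
# is symmetric, and for the RBF kernel has unit diagonal.
import math

def rbf_kernel(x, z, gamma=0.5):
    sq = sum((xi - zi) ** 2 for xi, zi in zip(x, z))
    return math.exp(-gamma * sq)

pts = [(0.0, 0.0), (1.0, 0.0), (0.0, 2.0)]
K = [[rbf_kernel(x, z) for z in pts] for x in pts]
assert all(abs(K[i][j] - K[j][i]) < 1e-12 for i in range(3) for j in range(3))
print(K[0][0], round(K[0][1], 4))  # 1.0 0.6065
```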

Vision Applications: pedestrian detection. Multiscale scanning windows; for each window, compute the wavelet transform; classify the window using an SVM. "A general framework for object detection," C. Papageorgiou, M. Oren and T. Poggio, CVPR 1998.

Vision Applications “A general framework for object detection,” C. Papageorgiou, M. Oren and T. Poggio -- CVPR 98

Shortcomings of SVMs: a kernelized SVM requires evaluating the kernel between the test vector and each of the support vectors. Complexity = kernel complexity × number of support vectors. For a class of kernels this can be done more efficiently [Maji, Berg, Malik CVPR 08]: the intersection kernel.
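The histogram intersection kernel from that speed-up is additive across dimensions, which is what makes the per-support-vector cost avoidable (the actual method precomputes per-dimension 1-D functions; the toy histograms below are made up):

```python
# Histogram intersection kernel: K(h, g) = sum_i min(h_i, g_i).
# Because it decomposes as a sum over dimensions, the SVM decision function
# becomes a sum of 1-D functions of the test histogram's entries, so test-time
# cost need not scale with the number of support vectors [Maji, Berg, Malik].
def intersection_kernel(h, g):
    return sum(min(hi, gi) for hi, gi in zip(h, g))

h = [0.2, 0.5, 0.3]
g = [0.1, 0.6, 0.3]
print(round(intersection_kernel(h, g), 3))  # 0.9
```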

Outline Random Forests Breiman, Fua, Criminisi, Cipolla, Shotton, Lempitsky, Zisserman, Bosch,... Nearest Neighbor Shakhnarovich, Viola, Darrell, Berg, Frome, Malik, Todorovic...

Distance Based Classifiers. Given: a labeled training set; query: a new point to classify by its distances to the training points.

Learning Global Distance Metric. Given a query and datapoint-class pairs, learn a Mahalanobis distance metric that brings points from the same class closer and pushes points from different classes far apart. "Distance metric learning, with application to clustering with side-information," E. Xing, A. Ng, and M. Jordan, NIPS 2003.
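The learned metric is a Mahalanobis distance d_M(x, y) = sqrt((x − y)ᵀ M (x − y)) for a positive semidefinite M; M = I recovers plain Euclidean distance. A sketch with a made-up diagonal M:

```python
# Mahalanobis distance under a (hypothetical) learned metric M.
import math

def mahalanobis(x, y, M):
    d = [xi - yi for xi, yi in zip(x, y)]
    Md = [sum(M[i][j] * d[j] for j in range(len(d))) for i in range(len(d))]
    return math.sqrt(sum(di * mdi for di, mdi in zip(d, Md)))

M = [[2.0, 0.0], [0.0, 0.5]]  # stretches dimension 0, shrinks dimension 1
x, y = (1.0, 0.0), (0.0, 2.0)
print(mahalanobis(x, y, M))  # 2.0
```

Metric learning amounts to choosing the entries of M so that same-class pairs get small d_M and different-class pairs get large d_M.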

Learning Global Distance Metric: push different-class pairs apart, s.t. same-class pairs stay close and the metric matrix remains positive semidefinite. PROBLEM!

Learning Global Distance Metric: the problem with multimodal classes (before learning / after learning).

Per-Exemplar Distance Learning Frome & Malik ICCV07, Todorovic & Ahuja CVPR08

Distance Between Two Images: built from patch-to-image distances d_j, where d_j is the distance between the j-th patch in focal image F and image I.

Learning from Triplets. For each image I in the set we have: a triplet pairing the focal image F with an in-class image and an out-of-class image, requiring the in-class image to be closer to F.

Max-Margin Formulation: learn the distance for each focal image F independently, s.t. out-of-class images are farther from F than in-class images by a margin. PROBLEM! They heuristically select the 15 closest images.

Max-Margin Formulation: d_j = distance between the j-th patch in image F and image I. Problem: the in-class and out-of-class datapoints that were initially closest may no longer be the closest after learning.

EM-based Max-Margin Formulation of Local Distance Learning. Todorovic & Ahuja, CVPR08.

Learning from Triplets. For each image I in the set: Frome & Malik ICCV07 fix the triplets before learning; Todorovic et al. CVPR08 re-estimate the triplets within an EM loop.
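The triplet constraint itself is simple to state in code: for a focal image F, an in-class image p, and an out-of-class image n, require d(F, n) ≥ d(F, p) + margin. A sketch with hypothetical learned distances:

```python
# Count margin violations over a set of triplets for one focal image.
# Each triplet is summarized by its two distances (d_pos, d_neg) from F.
def triplet_violations(triplets, margin=1.0):
    return sum(1 for d_pos, d_neg in triplets if d_neg < d_pos + margin)

trips = [(0.5, 2.0), (1.0, 1.2), (0.8, 3.1)]  # made-up distances
print(triplet_violations(trips))  # 1: the middle triplet violates the margin
```

A max-margin learner adjusts the per-exemplar distance (e.g., the patch weights) to drive this violation count, or its hinge-loss relaxation, down.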

CVPR 2008: Results on Caltech-256

Outline Random Forests Breiman, Fua, Criminisi, Cipolla, Shotton, Lempitsky, Zisserman, Bosch,...

Decision Trees -- Not Stable. Partitions of data via recursive splitting on a single feature. Result: histograms based on data-dependent partitioning; majority voting. Quinlan C4.5; Breiman, Friedman, Olshen, Stone (1984); Devroye, Györfi, Lugosi (1996).

Random Forests (RF). An RF is a set of decision trees such that each tree depends on a random vector, sampled independently and with the same distribution for all trees in the forest.
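A minimal sketch of that definition, using depth-1 trees (stumps), bootstrap sampling, and a random feature per tree; the data and the stump-training rule (midpoint of class means) are illustrative choices, not from the slides:

```python
# Toy random forest: each tree is a decision stump grown on a bootstrap
# sample using one randomly chosen feature; prediction is by majority vote.
import random
from collections import Counter

def train_stump(data, feature):
    # Threshold at the midpoint between the two class means on this feature.
    pos = [x[feature] for x, y in data if y == 1]
    neg = [x[feature] for x, y in data if y == 0]
    thr = (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2.0
    return feature, thr

def train_forest(data, n_trees, rng):
    forest = []
    for _ in range(n_trees):
        boot = [rng.choice(data) for _ in data]   # bootstrap sample
        while len({y for _, y in boot}) < 2:      # resample until both classes appear
            boot = [rng.choice(data) for _ in data]
        feat = rng.randrange(len(data[0][0]))     # random feature for this tree
        forest.append(train_stump(boot, feat))
    return forest

def predict(forest, x):
    votes = Counter(1 if x[f] > thr else 0 for f, thr in forest)
    return votes.most_common(1)[0][0]

rng = random.Random(0)
data = [((0.0, 0.1), 0), ((0.2, 0.0), 0), ((1.0, 0.9), 1), ((0.8, 1.1), 1)]
forest = train_forest(data, 15, rng)
print([predict(forest, x) for x, _ in data])  # [0, 0, 1, 1]
```

The averaging over randomized trees is what counters the instability of individual decision trees noted above.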

Hough Forests Combine: spatial info + class info “Class-Specific Hough Forests for Object Detection” Juergen Gall and Victor Lempitsky CVPR 2009

Hough Forests

In the test image all features cast votes about the location of the bounding box “Class-Specific Hough Forests for Object Detection” Juergen Gall and Victor Lempitsky CVPR 2009
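The voting step can be sketched in isolation: each feature adds its stored offset to its own position and votes for the resulting object center, and the accumulator peak gives the detection (positions and offsets below are made up):

```python
# Hough-style voting: features vote for an object center via learned offsets;
# the most-voted accumulator cell is the detection hypothesis.
from collections import Counter

def hough_vote(features):
    """features: list of ((x, y) position, (dx, dy) offset-to-center) pairs."""
    acc = Counter()
    for (x, y), (dx, dy) in features:
        acc[(x + dx, y + dy)] += 1
    return acc.most_common(1)[0][0]

feats = [((10, 10), (5, 5)), ((20, 12), (-5, 3)),
         ((12, 18), (3, -3)), ((30, 30), (1, 1))]
print(hough_vote(feats))  # (15, 15): three of the four features agree
```

In the actual Hough forest, the offsets and class probabilities stored at the tree leaves supply these votes, weighted by the leaf statistics.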

Hough Forests “Class-Specific Hough Forests for Object Detection” Juergen Gall and Victor Lempitsky CVPR 2009

Thank you!