Computer Vision CS 543 / ECE 549 University of Illinois Derek Hoiem

Presentation transcript:

Image Categorization. Computer Vision CS 543 / ECE 549, University of Illinois. Derek Hoiem, 03/11/10.

Last classes. Object recognition: localizing an object instance in an image. Face recognition: matching one face image to another.

Today's class: categorization. Overview of image categorization; representation (image histograms); classification (important concepts in machine learning; what the classifiers are and when to use them).

Image Categorization: training. (Pipeline diagram: training images and training labels feed into image features, then classifier training, yielding a trained classifier.)

Image Categorization: training and testing. (Pipeline diagram: training as above; at test time, a test image is converted to image features and passed to the trained classifier, which outputs a prediction such as "outdoor".)

Part 1: Image features. (Pipeline diagram: the image features stage of the training pipeline is highlighted.)

General Principles of Representation. Coverage: ensure that all relevant info is captured. Concision: minimize the number of features without sacrificing coverage. Directness: ideal features are independently useful for prediction.

Right features depend on what you want to know. Shape (scene-scale, object-scale, detail-scale): 2D form, shading, shadows, texture, linear perspective. Material properties (albedo, feel, hardness, …): color, texture. Motion: optical flow, tracked points. Distance: stereo, position, occlusion, scene shape; if the object is known, its size and other objects.

Examples of cues Color Texture Location Perspective

Image representations: image intensity; histograms (of color, texture, SIFT keypoints, etc.).

Image Representations: Histograms. Global histogram: represent the distribution of features (color, texture, depth, …). (Example image: Space Shuttle Cargo Bay. Images from Dave Kauchak.)

Image Representations: Histograms. Histogram: probability or count of data in each bin. Joint histogram: requires lots of data; loses resolution to avoid empty bins. Marginal histogram: requires independent features; more data per bin than a joint histogram. (Images from Dave Kauchak.)
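A minimal sketch (assuming NumPy) of the joint-versus-marginal trade-off; the random image below is only a stand-in for a real one.

```python
import numpy as np

image = np.random.randint(0, 256, size=(240, 320, 3))   # stand-in for a real RGB image
pixels = image.reshape(-1, 3)

# Joint histogram: 8 bins per channel -> 8^3 = 512 cells, so it needs a lot of data to fill.
joint, _ = np.histogramdd(pixels, bins=(8, 8, 8), range=[(0, 256)] * 3)

# Marginal histograms: 3 x 8 = 24 cells, so there is far more data per bin,
# but the channels are treated as if they were independent.
marginals = [np.histogram(pixels[:, c], bins=8, range=(0, 256))[0] for c in range(3)]
```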

Image Representations: Histograms. Clustering: use the same cluster centers for all images. (Example images: EASE Truss Assembly, Space Shuttle Cargo Bay. Images from Dave Kauchak.)

Issue: How to Compare Histograms? Bin-by-bin comparison: sensitive to bin size; could use wider bins, but at a loss of resolution. Cross-bin comparison: how much cross-bin influence is necessary/sufficient?

Computing histogram distance. Histogram intersection (assuming normalized histograms). Chi-squared histogram matching distance. (Example: cars found by color histogram matching using chi-squared.)
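A minimal sketch (assuming NumPy) of the two measures named above, using the standard bin-wise formulas for normalized histograms.

```python
import numpy as np

def histogram_intersection(h1, h2):
    """Similarity of two normalized histograms: sum of bin-wise minima (1.0 = identical)."""
    return np.minimum(h1, h2).sum()

def chi_squared_distance(h1, h2, eps=1e-10):
    """Chi-squared matching distance: 0.5 * sum (h1 - h2)^2 / (h1 + h2); eps avoids division by zero."""
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

h1 = np.array([0.2, 0.5, 0.3])
h2 = np.array([0.3, 0.4, 0.3])
print(histogram_intersection(h1, h2), chi_squared_distance(h1, h2))
```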

Histograms: Implementation issues. Quantization: grids are fast but only applicable with few dimensions; clustering is slower but can quantize data in higher dimensions. Matching: histogram intersection or Euclidean may be faster; chi-squared often works better; earth mover's distance is good when nearby bins represent similar values. Bin count: few bins need less data and give a coarser representation; many bins need more data and give a finer representation.

What kind of things do we compute histograms of? Color (e.g., in L*a*b* or HSV color space); texture (filter banks or HOG over regions).

What kind of things do we compute histograms of? Histograms of gradients; visual words (SIFT, Lowe IJCV 2004).

Bag of words model Extract features E.g., compute SIFT keypoints for all training images

Bag of words model Extract features Learn “visual vocabulary” E.g., cluster SIFT descriptors into 500 clusters using K-means

Bag of words model Extract features Learn “visual vocabulary” Quantize features using visual vocabulary Assign each descriptor to nearest cluster center

Bag of words model Extract features Learn “visual vocabulary” Quantize features using visual vocabulary Represent images by normalized counts of “visual words” Compute histogram of word occurrences for each image

Image Categorization: Bag of Words. Training: (1) extract keypoints and descriptors for all training images; (2) cluster the descriptors; (3) quantize descriptors using the cluster centers to get "visual words"; (4) represent each image by normalized counts of visual words; (5) train a classifier on the labeled examples using the histogram values as features. Testing: (1) extract keypoints/descriptors and quantize into visual words; (2) compute the visual word histogram; (3) compute the label or confidence using the classifier. (A sketch of this pipeline follows.)
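A minimal sketch of the training side of this pipeline, assuming OpenCV built with SIFT support (4.4+) and scikit-learn; load_training_images() is a hypothetical helper returning a list of grayscale images as NumPy arrays.

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans

sift = cv2.SIFT_create()

def extract_descriptors(image):
    """Detect SIFT keypoints and return their 128-d descriptors."""
    _, descriptors = sift.detectAndCompute(image, None)
    return descriptors if descriptors is not None else np.zeros((0, 128), np.float32)

def bow_histogram(image, vocabulary):
    """Quantize each descriptor to its nearest cluster center and count word occurrences."""
    desc = extract_descriptors(image)
    if len(desc) == 0:
        return np.zeros(vocabulary.n_clusters)
    words = vocabulary.predict(desc)
    hist = np.bincount(words, minlength=vocabulary.n_clusters).astype(float)
    return hist / hist.sum()                      # normalized visual-word counts

images = load_training_images()                   # hypothetical helper
all_descriptors = np.vstack([extract_descriptors(im) for im in images])
vocabulary = KMeans(n_clusters=500).fit(all_descriptors)   # learn the "visual vocabulary"
features = np.array([bow_histogram(im, vocabulary) for im in images])
```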

But what about layout? All of these images have the same color histogram

Spatial pyramid: compute a histogram in each spatial bin.
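A hedged sketch of a spatial pyramid descriptor, assuming NumPy and that each keypoint has already been assigned a visual word: "points" is an (N, 2) array of (x, y) locations, "words" an (N,) integer array, and n_words the vocabulary size.

```python
import numpy as np

def spatial_pyramid_histogram(points, words, image_shape, n_words, levels=2):
    """Concatenate visual-word histograms computed over an increasingly fine spatial grid."""
    height, width = image_shape
    pyramid = []
    for level in range(levels + 1):
        cells = 2 ** level                                    # 1x1, 2x2, 4x4, ... spatial bins
        col = np.clip((points[:, 0] * cells / width).astype(int), 0, cells - 1)
        row = np.clip((points[:, 1] * cells / height).astype(int), 0, cells - 1)
        for i in range(cells):
            for j in range(cells):
                in_bin = (row == i) & (col == j)
                hist = np.bincount(words[in_bin], minlength=n_words).astype(float)
                pyramid.append(hist / max(hist.sum(), 1.0))   # normalized per-bin histogram
    return np.concatenate(pyramid)
```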

Image Categorization: BoW+SVM. Extract features; learn the "visual vocabulary"; quantize features using the visual vocabulary; represent images by normalized counts of "visual words"; train an SVM on labeled examples using the histogram values as features. For example, label images of indoor scenes as positive and images of outdoor scenes as negative, then train the classifier on these features and labels.
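A hedged sketch of the classification step, assuming scikit-learn and the BoW histograms ("features") from the sketch above; "labels" and "test_features" are hypothetical.

```python
from sklearn.svm import LinearSVC

# labels: 1 for indoor scenes (positive), 0 for outdoor scenes (negative); hypothetical.
classifier = LinearSVC(C=1.0)
classifier.fit(features, labels)                            # histogram values as features

prediction = classifier.predict(test_features)              # test_features: BoW histograms of test images
confidence = classifier.decision_function(test_features)    # signed distance from the decision boundary
```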

Computing Features. Compute features over the image; quantize (optional); choose spatial support; compute statistics of the features within the spatial support. (Example figure: RGB values quantized to 10 levels, with a histogram of bin features, e.g., 71% / 29%.)

Things to remember about representation Think about the right features for the problem Coverage Concision Directness Think about what features represent

Part 2: Classifiers. (Pipeline diagram: the classifier training stage of the training pipeline is highlighted.)

Learning a classifier. Given a set of features with corresponding labels, learn a function to predict the labels from the features. (Illustration: two classes of points in a 2D feature space x1, x2.)

Many classifiers to choose from: SVM, neural networks, Naïve Bayes, Bayesian networks, logistic regression, randomized forests, boosted decision trees, K-nearest neighbor, RBMs, etc. Which is the best one?

No Free Lunch Theorem You can only get generalization through assumptions.

Bias-Variance Trade-off. MSE = bias² + variance.

Bias and Variance. Error = bias² + variance. (Plot: test error vs. model complexity for few and for many training examples; low-complexity models have high bias and low variance, high-complexity models have low bias and high variance, and the error-minimizing complexity grows with the amount of training data.)

Choosing the trade-off. Need a validation set; the validation set is not the same as the test set. (Plot: training and test error vs. model complexity; training error keeps decreasing with complexity while test error is U-shaped, with high bias/low variance on the left and low bias/high variance on the right.)
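A hedged sketch of choosing model complexity on a validation set, assuming scikit-learn and hypothetical "features" and "labels" arrays; logistic regression stands in for any classifier with a regularization knob.

```python
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

# Hold out a validation set (distinct from the final test set) to pick the regularization strength.
X_train, X_val, y_train, y_val = train_test_split(features, labels, test_size=0.2, random_state=0)

best_C, best_acc = None, -1.0
for C in [0.01, 0.1, 1.0, 10.0, 100.0]:            # smaller C = stronger regularization (simpler model)
    model = LogisticRegression(C=C, max_iter=1000).fit(X_train, y_train)
    accuracy = model.score(X_val, y_val)           # validation accuracy, not test accuracy
    if accuracy > best_acc:
        best_C, best_acc = C, accuracy

final_model = LogisticRegression(C=best_C, max_iter=1000).fit(features, labels)
```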

Effect of Training Size. For a fixed classifier, as the number of training examples grows, training error rises and testing error falls, both approaching the generalization error. (Plot: error vs. number of training examples.)

How to measure complexity? VC dimension gives an upper bound on generalization error: with probability 1 − η, test error ≤ training error + sqrt( (h (log(2N/h) + 1) − log(η/4)) / N ), where N is the size of the training set and h is the VC dimension.

How to reduce variance? Choose a simpler classifier Regularize the parameters Get more training data

Risk Minimization. Margins. (Illustration: a decision boundary with margins between two classes in a 2D feature space x1, x2.)

The perfect classification algorithm. Objective function: solves what you want to solve. Parameterization: makes assumptions that fit the problem. Regularization: right level of regularization for the amount of training data. Training algorithm: can find parameters that maximize the objective on the training set. Inference algorithm: can solve for the objective function in evaluation.

Generative vs. Discriminative Classifiers. Generative: training maximizes the joint likelihood of data and labels, assumes (or learns) a probability distribution and dependency structure, and can impose priors; testing asks P(y=1, x) / P(y=0, x) > t?; examples include foreground/background GMMs, the Naïve Bayes classifier, and Bayesian networks. Discriminative: training learns to directly predict the labels from the data, assumes a form of the boundary, and uses margin maximization or parameter regularization; testing asks f(x) > t, e.g., w^T x > t; examples include logistic regression, SVMs, and boosted decision trees.

Classifiers. Generative methods: Naïve Bayes, Bayesian networks, Gaussian mixture for both classes. Discriminative methods: logistic regression, linear SVM, kernelized SVM. Ensemble methods: randomized forests, boosted decision trees. Instance-based: K-nearest neighbor. Unsupervised: K-means.

Generative Classifier: Naïve Bayes. Objective, parameterization, regularization, training, inference. (Graphical model: label y with conditionally independent features x1, x2, x3.)

Using Naïve Bayes Simple thing to try for categorical data Very fast to train/test
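A hedged sketch, assuming scikit-learn; word_counts_train/word_counts_test and the label arrays are hypothetical count-style features (e.g., unnormalized visual-word counts).

```python
from sklearn.naive_bayes import MultinomialNB

nb = MultinomialNB(alpha=1.0)                 # alpha: Laplace smoothing, a simple prior on word counts
nb.fit(word_counts_train, labels_train)       # training is essentially one counting pass: very fast
probabilities = nb.predict_proba(word_counts_test)
predictions = nb.predict(word_counts_test)
```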

Classifiers: Logistic Regression. Objective, parameterization, regularization, training, inference. (Illustration: a linear decision boundary between two classes in a 2D feature space x1, x2.)

Using Logistic Regression. Quick, simple classifier (try it first). Use L2 or L1 regularization; L1 does feature selection and is robust to irrelevant features.
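A hedged sketch, assuming scikit-learn and hypothetical X_train / y_train arrays.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# L2 regularization: a quick, simple baseline to try first.
l2_model = LogisticRegression(penalty="l2", C=1.0, max_iter=1000).fit(X_train, y_train)

# L1 regularization: drives many weights to exactly zero, so it effectively selects features
# and is robust to irrelevant ones.
l1_model = LogisticRegression(penalty="l1", C=1.0, solver="liblinear").fit(X_train, y_train)
print("nonzero L1 weights:", np.count_nonzero(l1_model.coef_))
```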

Classifiers: Linear SVM. Objective, parameterization, regularization, training, inference. (Illustration, built up over three slides: a maximum-margin linear decision boundary between two classes in a 2D feature space x1, x2.)

Classifiers: Kernelized SVM. Objective, parameterization, regularization, training, inference. (Illustration: data that is not linearly separable in the original feature space becomes separable after a nonlinear mapping.)

Using SVMs. Good general-purpose classifier: generalization depends on margin, so it works well with many weak features; no feature selection; usually requires some parameter tuning. Choosing a kernel: linear is fast to train and test (start here); RBF is related to neural networks and nearest neighbor; chi-squared and histogram intersection are good for histograms (but slower, especially chi-squared); you can also learn a kernel function.
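A hedged sketch of a kernelized SVM with a histogram intersection kernel, assuming scikit-learn; train_hists / test_hists are hypothetical normalized histogram features and train_labels the corresponding class labels.

```python
import numpy as np
from sklearn.svm import SVC

def histogram_intersection_kernel(X, Y):
    """Gram matrix of bin-wise-minimum similarities between two sets of histograms."""
    return np.array([[np.minimum(x, y).sum() for y in Y] for x in X])

# scikit-learn accepts a callable kernel that returns the full kernel (Gram) matrix.
svm = SVC(kernel=histogram_intersection_kernel, C=1.0)
svm.fit(train_hists, train_labels)
predictions = svm.predict(test_hists)
```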

Classifiers: Decision Trees. Objective, parameterization, regularization, training, inference. (Illustration: axis-aligned splits partition the 2D feature space x1, x2.)

Ensemble Methods: Boosting. (Figure from Friedman et al. 2000.)

Boosted Decision Trees. (Figure: an ensemble of shallow decision trees over region cues such as "High in image?", "Gray?", "Smooth?", "Green?", "Many long lines?", "Blue?", "Very high vanishing point?", combined to estimate P(label | good segment, data) for ground / vertical / sky labels. [Collins et al. 2002])

Using Boosted Decision Trees. Flexible: can deal with both continuous and categorical variables. Control the bias/variance trade-off via the size of the trees and the number of trees. Boosting trees often works best with a small number of well-designed features. Boosting "stumps" (depth-one trees) can give a fast classifier.
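A hedged sketch of boosted shallow trees, assuming scikit-learn and hypothetical X_train/y_train and X_val/y_val arrays.

```python
from sklearn.ensemble import GradientBoostingClassifier

# max_depth controls tree size and n_estimators the number of trees; together they set the
# bias/variance trade-off. Depth-1 trees are decision stumps, giving a fast classifier.
boosted_stumps = GradientBoostingClassifier(max_depth=1, n_estimators=200, learning_rate=0.1)
boosted_stumps.fit(X_train, y_train)
print("validation accuracy:", boosted_stumps.score(X_val, y_val))
```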

K-nearest neighbor. Objective, parameterization, regularization, training, inference. (Illustration: a query point is labeled by its nearest training examples in a 2D feature space x1, x2.)

1-nearest neighbor, 3-nearest neighbor, 5-nearest neighbor. (Illustrations: the same query point is labeled by its 1, 3, or 5 nearest training examples; the predicted label can change as more neighbors vote.)

Using K-NN. Simple, so another good one to try first. With infinite examples, 1-NN provably has error that is at most twice the Bayes optimal error.
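A hedged sketch, assuming scikit-learn and hypothetical X_train/y_train and X_val/y_val arrays.

```python
from sklearn.neighbors import KNeighborsClassifier

# K controls the smoothness of the decision boundary: K=1 memorizes the training set
# (low bias, high variance); larger K averages over more neighbors.
for k in (1, 3, 5):
    knn = KNeighborsClassifier(n_neighbors=k).fit(X_train, y_train)
    print(k, knn.score(X_val, y_val))
```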

Clustering (unsupervised). (Illustration: unlabeled points in a 2D feature space grouped into clusters.)
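A hedged sketch, assuming scikit-learn and a hypothetical "features" array of feature vectors.

```python
from sklearn.cluster import KMeans

# K-means groups unlabeled feature vectors into K clusters (no labels involved);
# the same algorithm is what builds the visual vocabulary in the bag-of-words pipeline above.
kmeans = KMeans(n_clusters=5, random_state=0).fit(features)
cluster_ids = kmeans.labels_              # cluster assignment for each feature vector
centers = kmeans.cluster_centers_         # the K cluster centers
```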

What to remember about classifiers No free lunch: machine learning algorithms are tools, not dogmas Try simple classifiers first Better to have smart features and simple classifiers than simple features and smart classifiers Use increasingly powerful classifiers with more training data (bias-variance tradeoff)

Next class Object category detection overview

Some Machine Learning References. General: Tom Mitchell, Machine Learning, McGraw Hill, 1997; Christopher Bishop, Neural Networks for Pattern Recognition, Oxford University Press, 1995. Adaboost: Friedman, Hastie, and Tibshirani, "Additive logistic regression: a statistical view of boosting", Annals of Statistics, 2000. SVMs: http://www.support-vector.net/icml-tutorial.pdf