Bag-of-features models

Presentation transcript:

Bag-of-features models. Many slides adapted from Fei-Fei Li, Rob Fergus, and Antonio Torralba.

Overview: Bag-of-features models
- Origins and motivation
- Image representation
- Discriminative methods: nearest-neighbor classification, support vector machines
- Generative methods: Naïve Bayes, probabilistic Latent Semantic Analysis
- Extensions: incorporating spatial information

Origin 1: Texture recognition. Texture is characterized by the repetition of basic elements, or textons. For stochastic textures, it is the identity of the textons, not their spatial arrangement, that matters. (Julesz, 1981; Cula & Dana, 2001; Leung & Malik, 2001; Mori, Belongie & Malik, 2001; Schmid, 2001; Varma & Zisserman, 2002, 2003; Lazebnik, Schmid & Ponce, 2003)

Origin 1: Texture recognition. [Figure: each texture image is represented as a histogram of texton frequencies over a universal texton dictionary.] (Julesz, 1981; Cula & Dana, 2001; Leung & Malik, 2001; Mori, Belongie & Malik, 2001; Schmid, 2001; Varma & Zisserman, 2002, 2003; Lazebnik, Schmid & Ponce, 2003)

Origin 2: Bag-of-words models. Orderless document representation: frequencies of words from a dictionary (Salton & McGill, 1983).

Origin 2: Bag-of-words models. Orderless document representation: frequencies of words from a dictionary (Salton & McGill, 1983). Example: US Presidential Speeches Tag Cloud, http://chir.ag/phernalia/preztags/

Bags of features for image classification:
- Extract features
- Learn a "visual vocabulary"
- Quantize features using the visual vocabulary
- Represent images by frequencies of "visual words"
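As a compact illustration of how these four steps chain together, here is a minimal sketch in Python (assuming NumPy and scikit-learn are available; `train_descriptors` is a list with one array of local descriptors per training image, and all function and variable names are illustrative, not taken from the slides). The per-step sketches below expand on feature extraction, clustering, quantization, and the final histogram.

```python
# Minimal bag-of-features pipeline sketch (illustrative, not the slides' code).
import numpy as np
from sklearn.cluster import KMeans

def build_bow_representation(train_descriptors, vocab_size=200):
    # Step 2: learn the visual vocabulary by clustering all training descriptors.
    stacked = np.vstack(train_descriptors)
    vocab = KMeans(n_clusters=vocab_size, n_init=10).fit(stacked)

    # Steps 3-4: quantize each image's descriptors and count visual word frequencies.
    histograms = []
    for desc in train_descriptors:
        words = vocab.predict(desc)                        # nearest codevector index
        hist = np.bincount(words, minlength=vocab_size)    # word counts
        histograms.append(hist / hist.sum())               # normalize for image size
    return vocab, np.array(histograms)
```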

1. Feature extraction
- Regular grid (Vogel & Schiele, 2003; Fei-Fei & Perona, 2005)
- Interest point detector (Csurka et al., 2004; Sivic et al., 2005)
- Other methods: random sampling (Vidal-Naquet & Ullman, 2002), segmentation-based patches (Barnard et al., 2003)

1. Feature extraction: detect patches [Mikolajczyk & Schmid '02; Matas, Chum, Urban & Pajdla '02; Sivic & Zisserman '03], normalize each patch, and compute a SIFT descriptor [Lowe '99]. Slide credit: Josef Sivic
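A sketch of this step using OpenCV's SIFT implementation, covering both the interest-point and the regular-grid variants mentioned above (assumes the opencv-python package, version 4.4 or later, where SIFT lives in the main module; function names are illustrative):

```python
# Step 1 sketch: detect patches and compute SIFT descriptors with OpenCV.
import cv2

def extract_sift(image_path):
    # Interest-point variant: let the SIFT detector choose the patches.
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.detectAndCompute(gray, None)
    return descriptors                    # (num_keypoints x 128), or None if none found

def extract_dense_sift(image_path, step=8, size=16):
    # Regular-grid variant: place keypoints on a fixed grid, then describe them.
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    sift = cv2.SIFT_create()
    grid = [cv2.KeyPoint(float(x), float(y), size)
            for y in range(0, gray.shape[0], step)
            for x in range(0, gray.shape[1], step)]
    _, descriptors = sift.compute(gray, grid)
    return descriptors
```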

2. Learning the visual vocabulary: cluster the extracted descriptors; each cluster center becomes a visual word. Slide credit: Josef Sivic

K-means clustering. Goal: minimize the sum of squared Euclidean distances between points x_i and their nearest cluster centers m_k, i.e. D = sum over clusters k of sum over points x_i in cluster k of ||x_i - m_k||^2.
Algorithm:
1. Randomly initialize K cluster centers.
2. Iterate until convergence:
   - Assign each data point to the nearest center.
   - Recompute each cluster center as the mean of all points assigned to it.
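The same algorithm written out as a minimal NumPy sketch (random initialization by sampling K data points; empty clusters simply keep their previous center; in practice a library implementation such as scikit-learn's KMeans would be used):

```python
import numpy as np

def kmeans(X, K, n_iters=100, seed=0):
    """Cluster the rows of X (n x d) into K clusters; return centers and labels."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=K, replace=False)]   # random initialization
    for _ in range(n_iters):
        # Assignment step: each point goes to its nearest center (squared Euclidean).
        dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)   # (n x K)
        labels = dists.argmin(axis=1)
        # Update step: each center becomes the mean of the points assigned to it.
        new_centers = np.array([X[labels == k].mean(axis=0) if np.any(labels == k)
                                else centers[k] for k in range(K)])
        if np.allclose(new_centers, centers):                 # converged
            break
        centers = new_centers
    # Final assignment against the returned centers.
    labels = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2).argmin(axis=1)
    return centers, labels
```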

From clustering to vector quantization
- Clustering is a common method for learning a visual vocabulary, or codebook: it is an unsupervised learning process in which each cluster center produced by k-means becomes a codevector.
- The codebook can be learned on a separate training set; provided the training set is sufficiently representative, the codebook will be "universal".
- The codebook is used for quantizing features: a vector quantizer takes a feature vector and maps it to the index of the nearest codevector in the codebook.
- Codebook = visual vocabulary; codevector = visual word.
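Quantization itself is just a nearest-codevector lookup; a short NumPy sketch (the codebook is the matrix of cluster centers from the previous step; names are illustrative):

```python
import numpy as np

def quantize(descriptors, codebook):
    # Map each descriptor (row of `descriptors`) to the index of its nearest
    # codevector (row of `codebook`), i.e. to its visual word.
    dists = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return dists.argmin(axis=1)
```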

Example visual vocabulary Fei-Fei et al. 2005

Image patch examples of visual words Sivic et al. 2005

Visual vocabularies: issues
- How to choose vocabulary size? Too small: visual words are not representative of all patches. Too large: quantization artifacts, overfitting.
- Generative or discriminative learning?
- Computational efficiency: vocabulary trees (Nister & Stewenius, 2006).

3. Image representation: a histogram of visual word frequencies. [Figure: bar chart with codewords on the x-axis and frequency on the y-axis.]
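Given the visual word indices produced by the quantizer above, the representation is a (typically normalized) histogram over the codewords; a minimal sketch:

```python
import numpy as np

def bow_histogram(word_indices, vocab_size):
    # Count how often each visual word occurs in the image, then normalize so
    # that images with different numbers of features remain comparable.
    hist = np.bincount(word_indices, minlength=vocab_size).astype(float)
    return hist / max(hist.sum(), 1.0)
```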

Image classification. Given the bag-of-features representations of images from different classes, how do we learn a model for distinguishing them?
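One concrete instance of the discriminative route from the overview is a support vector machine trained on the bag-of-features histograms; a sketch assuming scikit-learn (the chi-squared or histogram-intersection kernels common in the literature are replaced here by the built-in RBF kernel for simplicity):

```python
from sklearn.svm import SVC

def train_bow_classifier(train_histograms, train_labels):
    # Kernel SVM on normalized bag-of-features histograms.
    clf = SVC(kernel="rbf", C=1.0, gamma="scale")
    clf.fit(train_histograms, train_labels)    # (n_images x vocab_size), class labels
    return clf

# Usage: clf = train_bow_classifier(histograms, labels); clf.predict(test_histograms)
```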