Part 1: Bag-of-words models by Li Fei-Fei (Princeton)

Related work
Early "bag of words" models: mostly texture recognition
– Cula & Dana, 2001; Leung & Malik, 2001; Mori, Belongie & Malik, 2001; Schmid, 2001; Varma & Zisserman, 2002, 2003; Lazebnik, Schmid & Ponce, 2003
Hierarchical Bayesian models for documents (pLSA, LDA, etc.)
– Hofmann, 1999; Blei, Ng & Jordan, 2004; Teh, Jordan, Beal & Blei, 2004
Object categorization
– Csurka, Bray, Dance & Fan, 2004; Sivic, Russell, Efros, Freeman & Zisserman, 2005; Sudderth, Torralba, Freeman & Willsky, 2005
Natural scene categorization
– Vogel & Schiele, 2004; Fei-Fei & Perona, 2005; Bosch, Zisserman & Munoz, 2006

Object Bag of ‘words’

Analogy to documents
Of all the sensory impressions proceeding to the brain, the visual experiences are the dominant ones. Our perception of the world around us is based essentially on the messages that reach the brain from our eyes. For a long time it was thought that the retinal image was transmitted point by point to visual centers in the brain; the cerebral cortex was a movie screen, so to speak, upon which the image in the eye was projected. Through the discoveries of Hubel and Wiesel we now know that behind the origin of the visual perception in the brain there is a considerably more complicated course of events. By following the visual impulses along their path to the various cell layers of the optical cortex, Hubel and Wiesel have been able to demonstrate that the message about the image falling on the retina undergoes a step-wise analysis in a system of nerve cells stored in columns. In this system each cell has its specific function and is responsible for a specific detail in the pattern of the retinal image.
Keywords: sensory, brain, visual, perception, retinal, cerebral cortex, eye, cell, optical nerve, image, Hubel, Wiesel
China is forecasting a trade surplus of $90bn (£51bn) to $100bn this year, a threefold increase on 2004's $32bn. The Commerce Ministry said the surplus would be created by a predicted 30% jump in exports to $750bn, compared with an 18% rise in imports to $660bn. The figures are likely to further annoy the US, which has long argued that China's exports are unfairly helped by a deliberately undervalued yuan. Beijing agrees the surplus is too high, but says the yuan is only one factor. Bank of China governor Zhou Xiaochuan said the country also needed to do more to boost domestic demand so more goods stayed within the country. China increased the value of the yuan against the dollar by 2.1% in July and permitted it to trade within a narrow band, but the US wants the yuan to be allowed to trade freely. However, Beijing has made it clear that it will take its time and tread carefully before allowing the yuan to rise further in value.
Keywords: China, trade, surplus, commerce, exports, imports, US, yuan, bank, domestic, foreign, increase, trade, value

A clarification: definition of "BoW"
– Looser definition: independent features

A clarification: definition of "BoW"
– Looser definition: independent features
– Stricter definition: independent features + histogram representation

Pipeline overview (learning and recognition): feature detection & representation → codewords dictionary → image representation → category models (and/or) classifiers → category decision

Representation: feature detection & representation → codewords dictionary → image representation

1. Feature detection and representation

1. Feature detection and representation
Regular grid
– Vogel & Schiele, 2003
– Fei-Fei & Perona, 2005

1. Feature detection and representation
Regular grid
– Vogel & Schiele, 2003
– Fei-Fei & Perona, 2005
Interest point detector
– Csurka et al. 2004
– Fei-Fei & Perona, 2005
– Sivic et al. 2005

1. Feature detection and representation
Regular grid
– Vogel & Schiele, 2003
– Fei-Fei & Perona, 2005
Interest point detector
– Csurka, Bray, Dance & Fan, 2004
– Fei-Fei & Perona, 2005
– Sivic, Russell, Efros, Freeman & Zisserman, 2005
Other methods
– Random sampling (Vidal-Naquet & Ullman, 2002)
– Segmentation-based patches (Barnard, Duygulu, Forsyth, de Freitas, Blei, Jordan, 2003)

1. Feature detection and representation
Detect patches [Mikolajczyk & Schmid '02; Matas, Chum, Urban & Pajdla '02; Sivic & Zisserman '03]
Normalize patch
Compute SIFT descriptor [Lowe '99]
Slide credit: Josef Sivic
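
The grid-based alternative above can be sketched in a few lines. This is a toy illustration, not code from the cited papers: the patch size and stride are arbitrary choices, and a real pipeline would replace each raw patch with a SIFT descriptor.

```python
# Hypothetical sketch of regular-grid patch sampling (cf. Vogel & Schiele,
# Fei-Fei & Perona). Patch size and stride are illustrative values only.

def grid_patches(image, patch_size=16, stride=8):
    """Sample patches on a regular grid.

    `image` is a 2-D list (rows of pixel values). Returns a list of
    ((row, col), patch) pairs; a real system would compute a SIFT
    descriptor for each patch instead of keeping the raw pixels.
    """
    h, w = len(image), len(image[0])
    patches = []
    for y in range(0, h - patch_size + 1, stride):
        for x in range(0, w - patch_size + 1, stride):
            patch = [row[x:x + patch_size] for row in image[y:y + patch_size]]
            patches.append(((y, x), patch))
    return patches

img = [[0] * 32 for _ in range(32)]        # dummy 32x32 image
patches = grid_patches(img)                # 3x3 grid of 16x16 patches -> 9
```

Interest-point detection simply replaces the fixed grid with detector responses; everything downstream (descriptor, quantization) is unchanged.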

1. Feature detection and representation (figure: examples of extracted patches …)

2. Codewords dictionary formation …

Vector quantization … Slide credit: Josef Sivic
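
Vector quantization of the patch descriptors is typically done with k-means; the cluster centers become the codewords. Below is a minimal pure-Python sketch under simplifying assumptions of our own: toy 2-D "descriptors" and a deterministic initialization (first k points), where real systems cluster thousands of high-dimensional descriptors with randomized restarts.

```python
# Minimal k-means sketch for codebook learning (vector quantization).

def kmeans(points, k, iters=20):
    """Lloyd's algorithm on a list of equal-length tuples."""
    centers = list(points[:k])  # deterministic init for reproducibility
    for _ in range(iters):
        # Assignment step: each point joins its nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centers[i])))
            clusters[j].append(p)
        # Update step: move each center to its cluster mean.
        for j, cl in enumerate(clusters):
            if cl:  # keep the old center if a cluster goes empty
                centers[j] = tuple(sum(x) / len(cl) for x in zip(*cl))
    return centers

descriptors = [(0.0, 0.1), (5.0, 5.1), (0.1, 0.0), (5.1, 4.9)]
codebook = kmeans(descriptors, k=2)  # two codewords near (0,0) and (5,5)
```

In practice k is in the hundreds or thousands; the returned centers are the "codewords dictionary" of the next slides.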

2. Codewords dictionary formation Fei-Fei et al. 2005

Image patch examples of codewords Sivic et al. 2005

3. Image representation (figure: histogram of codeword frequencies — x-axis: codewords, y-axis: frequency)
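
Putting steps 1–3 together, an image becomes a frequency histogram over the codewords dictionary. A sketch, assuming a small hand-made codebook (real dictionaries have hundreds to thousands of entries):

```python
# Sketch of step 3: quantize each descriptor to its nearest codeword and
# count. The 3-word codebook below is made up for illustration.

def quantize(desc, codebook):
    """Index of the nearest codeword (squared Euclidean distance)."""
    return min(range(len(codebook)),
               key=lambda i: sum((a - b) ** 2 for a, b in zip(desc, codebook[i])))

def bow_histogram(descriptors, codebook):
    """Bag-of-words representation: codeword frequency histogram."""
    hist = [0] * len(codebook)
    for d in descriptors:
        hist[quantize(d, codebook)] += 1
    return hist

codebook = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
descs = [(0.1, 0.1), (0.9, 0.1), (0.1, 0.9), (0.0, 1.1)]
hist = bow_histogram(descs, codebook)  # -> [1, 1, 2]
```

This histogram, with all spatial layout discarded, is the input to both the generative and discriminative methods that follow.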

Representation: feature detection & representation → codewords dictionary → image representation

Learning and Recognition: codewords dictionary → category models (and/or) classifiers → category decision

Learning and Recognition — category models (and/or) classifiers
1. Generative method: graphical models
2. Discriminative method: SVM

Two generative models
1. Naïve Bayes classifier
– Csurka, Bray, Dance & Fan, 2004
2. Hierarchical Bayesian text models (pLSA and LDA)
– Background: Hofmann 2001; Blei, Ng & Jordan, 2004
– Object categorization: Sivic et al. 2005, Sudderth et al. 2005
– Natural scene categorization: Fei-Fei et al. 2005

First, some notation
w_n: each patch in an image
– w_n = [0, 0, …, 1, …, 0, 0]^T
w: a collection of all N patches in an image
– w = [w_1, w_2, …, w_N]
d_j: the j-th image in an image collection
c: category of the image
z: theme or topic of the patch

Case #1: the Naïve Bayes model (Csurka et al. 2004)
Graphical model: c → w (plate: N patches)
Object class decision: c* = argmax_c p(c|w) ∝ p(c) p(w|c) = p(c) Π_{n=1..N} p(w_n|c)
p(c) … prior probability of the object classes
p(w|c) … image likelihood given the class

Csurka et al. 2004
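
The Naïve Bayes decision rule c* = argmax_c p(c) Π_n p(w_n|c) can be sketched directly on bag-of-words histograms. The toy counts, class names, and Laplace smoothing below are our own illustrative choices, not values from Csurka et al.

```python
# Naive Bayes on BoW histograms, computed in log space for stability.
import math

def train_nb(hists_by_class, vocab_size):
    """hists_by_class: {class: list of BoW histograms}.
    Returns {class: (log prior, per-codeword log probabilities)}."""
    n_images = sum(len(h) for h in hists_by_class.values())
    model = {}
    for c, hists in hists_by_class.items():
        # Pool codeword counts per class, with +1 (Laplace) smoothing.
        counts = [sum(h[i] for h in hists) + 1 for i in range(vocab_size)]
        total = sum(counts)
        model[c] = (math.log(len(hists) / n_images),
                    [math.log(ct / total) for ct in counts])
    return model

def classify_nb(hist, model):
    """c* = argmax_c [log p(c) + sum_n n_w * log p(w|c)]."""
    def score(c):
        log_prior, log_pw = model[c]
        return log_prior + sum(n * lp for n, lp in zip(hist, log_pw))
    return max(model, key=score)

model = train_nb({"zebra": [[8, 1, 1]], "grass": [[1, 1, 8]]}, vocab_size=3)
label = classify_nb([5, 2, 0], model)  # -> "zebra"
```

Because the model treats patches as independent given the class, training reduces to counting codewords per class, which is why this baseline is so fast.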

Case #2: Hierarchical Bayesian text models
Probabilistic Latent Semantic Analysis (pLSA) — Hofmann, 2001. Graphical model: d → z → w (plates: N patches per image, D images)
Latent Dirichlet Allocation (LDA) — Blei et al., 2001. Graphical model: c, θ → z → w (plates: N patches per image, D images)

Case #2: Hierarchical Bayesian text models — Probabilistic Latent Semantic Analysis (pLSA)
Graphical model: d → z → w. Example theme: "face" (Sivic et al. ICCV 2005)

Case #2: Hierarchical Bayesian text models — Latent Dirichlet Allocation (LDA)
Graphical model: c, θ → z → w. Example theme: "beach" (Fei-Fei et al. ICCV 2005)

Case #2: the pLSA model
Graphical model: d → z → w (plates: N patches per image, D images)

p(w_i|d_j) = Σ_{k=1..K} p(w_i|z_k) p(z_k|d_j)
p(w_i|d_j) … observed codeword distributions
p(w_i|z_k) … codeword distributions per theme (topic)
p(z_k|d_j) … theme distributions per image
Slide credit: Josef Sivic

Case #2: Recognition using pLSA Slide credit: Josef Sivic

Case #2: Learning the pLSA parameters
Maximize the likelihood of the data using EM:
max Π_{i=1..M} Π_{j=1..N} p(w_i|d_j)^{n(w_i,d_j)}
n(w_i,d_j) … observed counts of word i in document j
M … number of codewords
N … number of images
Slide credit: Josef Sivic
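
The EM updates can be written compactly: the E-step computes the topic posterior p(z|d,w), and the M-step re-estimates p(w|z) and p(z|d) from posterior-weighted counts. A toy pure-Python sketch follows; the deterministic, slightly asymmetric initialization and the tiny count matrix are our own choices so the run is reproducible, not part of the original algorithm.

```python
# Compact pLSA EM sketch (Hofmann-style updates).

def normalize(v):
    s = sum(v)
    return [x / s for x in v]

def plsa(counts, n_topics, iters=200):
    """counts[d][w] = n(w, d). Returns p(w|z) and p(z|d)."""
    D, W = len(counts), len(counts[0])
    # Deterministic, slightly asymmetric init so EM can break symmetry.
    p_w_z = [normalize([2.0 if w % n_topics == z else 1.0 for w in range(W)])
             for z in range(n_topics)]
    p_z_d = [normalize([2.0 if z == d % n_topics else 1.0 for z in range(n_topics)])
             for d in range(D)]
    for _ in range(iters):
        new_wz = [[1e-12] * W for _ in range(n_topics)]  # epsilon avoids 0-division
        new_zd = [[1e-12] * n_topics for _ in range(D)]
        for d in range(D):
            for w in range(W):
                if counts[d][w] == 0:
                    continue
                # E-step: posterior over topics, p(z | d, w)
                post = normalize([p_w_z[z][w] * p_z_d[d][z] for z in range(n_topics)])
                # M-step accumulators, weighted by the observed counts
                for z in range(n_topics):
                    new_wz[z][w] += counts[d][w] * post[z]
                    new_zd[d][z] += counts[d][w] * post[z]
        p_w_z = [normalize(row) for row in new_wz]
        p_z_d = [normalize(row) for row in new_zd]
    return p_w_z, p_z_d

# Two toy "images" with disjoint codewords should yield two clean themes.
counts = [[10, 9, 0, 0], [0, 0, 9, 10]]
p_w_z, p_z_d = plsa(counts, n_topics=2)
```

Each iteration increases (or preserves) the data likelihood on the slide above; with disjoint vocabularies, each document's theme distribution collapses onto its own topic.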

Demo: available on the course website

Task: face detection – no labeling

Demo: feature detection
Output of crude feature detector:
– Find edges
– Draw points randomly from edge set
– Draw from uniform distribution to get scale

Demo: learnt parameters
Learning the model: do_plsa(‘config_file_1’)
Evaluate and visualize the model: do_plsa_evaluation(‘config_file_1’)
Learnt parameters: codeword distributions per theme (topic) and theme distributions per image

Demo: recognition examples

Demo: categorization results — performance of each theme

Learning and Recognition — category models (and/or) classifiers
1. Generative method: graphical models
2. Discriminative method: SVM

Discriminative methods based on the "bag of words" representation
(Figure: decision boundary separating zebra from non-zebra images)

Grauman & Darrell, 2005, 2006:
– SVM with pyramid match kernels
Others
– Csurka, Bray, Dance & Fan, 2004
– Serre & Poggio, 2005
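
For the discriminative route, a common choice is an SVM with a histogram-based kernel. The sketch below computes the histogram intersection kernel and its Gram matrix; feeding G to any SVM that accepts precomputed kernels (e.g. scikit-learn's SVC(kernel="precomputed") — an assumption about available tooling, not part of these slides) completes the classifier.

```python
# Histogram intersection kernel over BoW histograms.

def intersection_kernel(h1, h2):
    """K(h1, h2) = sum_i min(h1[i], h2[i])."""
    return sum(min(a, b) for a, b in zip(h1, h2))

def gram_matrix(hists):
    """Symmetric kernel (Gram) matrix over a list of histograms."""
    return [[intersection_kernel(a, b) for b in hists] for a in hists]

hists = [[5, 2, 0],   # zebra-ish image
         [4, 3, 1],   # another zebra-ish image
         [0, 1, 7]]   # grass-ish image
G = gram_matrix(hists)  # G[0][1] = 6, G[0][2] = 1: similar pair scores higher
```

Intersection is a natural similarity for count data: two images score highly only where they share codeword mass, which is what the zebra/non-zebra decision boundary exploits.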

Summary: pyramid match kernel — optimal partial matching between sets of features (Grauman & Darrell, 2005)
Slide credit: Kristen Grauman

Pyramid Match (Grauman & Darrell 2005) Histogram intersection Slide credit: Kristen Grauman

Pyramid Match (Grauman & Darrell 2005)
Histogram intersection: the difference in histogram intersections across levels counts the number of new pairs matched — N_i = I_i − I_{i−1} (matches at this level minus matches at the previous level)
Slide credit: Kristen Grauman

Pyramid match kernel
K(Ψ(X), Ψ(Y)) = Σ_i w_i N_i
Ψ … histogram pyramids
N_i … number of newly matched pairs at level i
w_i … weights inversely proportional to bin size (a measure of the difficulty of a match at level i)
Normalize kernel values to avoid favoring large sets
Slide credit: Kristen Grauman
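
The whole kernel can be sketched for 1-D point sets (real image features are high-dimensional; one dimension keeps the histograms readable). Level i uses bins of width 2^i, I_i is the histogram intersection at that level, and newly matched pairs N_i = I_i − I_{i−1} are weighted by 1/2^i. The set-size normalization mentioned above is omitted for brevity.

```python
# Pyramid match kernel sketch for 1-D point sets (Grauman & Darrell 2005).
from collections import Counter

def intersect(xs, ys, bin_size):
    """Histogram intersection of two point sets at a given bin width."""
    hx = Counter(int(v // bin_size) for v in xs)
    hy = Counter(int(v // bin_size) for v in ys)
    return sum(min(hx[b], hy[b]) for b in hx)

def pyramid_match(xs, ys, levels=4):
    """K = sum_i (I_i - I_{i-1}) / 2**i over increasingly coarse grids."""
    k, prev = 0.0, 0
    for i in range(levels):
        cur = intersect(xs, ys, bin_size=2 ** i)  # bin width doubles per level
        k += (cur - prev) / 2 ** i                # new matches, down-weighted
        prev = cur
    return k

x, y = [0.5, 3.2, 6.8], [0.7, 3.9, 12.0]
k = pyramid_match(x, y)  # 0.5~0.7 and 3.2~3.9 already match at the finest level
```

Matches found at fine levels earn full weight, while pairs that only fall into the same coarse bin contribute exponentially less, approximating the cost of an optimal partial matching in linear time.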

Example pyramid match Level 0 Slide credit: Kristen Grauman

Example pyramid match Level 1 Slide credit: Kristen Grauman

Example pyramid match Level 2 Slide credit: Kristen Grauman

Example pyramid match pyramid match optimal match Slide credit: Kristen Grauman

Summary: pyramid match kernel — optimal partial matching between sets of features. K = Σ_i w_i N_i, with N_i the number of new matches at level i and w_i the difficulty of a match at level i
Slide credit: Kristen Grauman

Object recognition results
ETH-80 database, 8 object classes (Eichhorn and Chapelle 2004)
Features:
– Harris detector
– PCA-SIFT descriptor, d=10
Kernel — recognition rate:
– Match [Wallraven et al.]: 84%
– Bhattacharyya affinity [Kondor & Jebara]: 85%
– Pyramid match: 84%
Slide credit: Kristen Grauman

Object recognition results
Caltech objects database, 101 object classes
Features:
– SIFT detector
– PCA-SIFT descriptor, d=10
30 training images / class
43% recognition rate (1% chance performance)
seconds per match
Slide credit: Kristen Grauman

Pipeline overview (learning and recognition): feature detection & representation → codewords dictionary → image representation → category models (and/or) classifiers → category decision

What about spatial info?

What about spatial info?
Feature level
– Spatial influence through correlogram features: Savarese, Winn and Criminisi, CVPR 2006

What about spatial info? Feature level Generative models –Sudderth, Torralba, Freeman & Willsky, 2005, 2006 –Niebles & Fei-Fei, CVPR 2007

What about spatial info?
Feature level
Generative models
– Sudderth, Torralba, Freeman & Willsky, 2005, 2006
– Niebles & Fei-Fei, CVPR 2007
(Figure: graphical model with parts P1…P4, background Bg, image, and words w)

What about spatial info? Feature level Generative models Discriminative methods –Lazebnik, Schmid & Ponce, 2006

Invariance issues
Scale and rotation
– Implicit
– Detectors and descriptors (Kadir and Brady, 2003)

Invariance issues
Scale and rotation
Occlusion
– Implicit in the models
– Codeword distribution: small variations
– (In theory) Theme (z) distribution: different occlusion patterns

Invariance issues
Scale and rotation
Occlusion
Translation
– Encode (relative) location information
– Sudderth, Torralba, Freeman & Willsky, 2005, 2006
– Niebles & Fei-Fei, 2007

Invariance issues
Scale and rotation
Occlusion
Translation
View point (in theory)
– Codewords: detector and descriptor
– Theme distributions: different view points
(Fergus, Fei-Fei, Perona & Zisserman, 2005)

Model properties
Intuitive
– Analogy to documents (see the document example above)

Model properties
Intuitive
– Analogy to documents
– Analogy to human vision (Olshausen and Field, 2004; Fei-Fei and Perona, 2005)

Model properties
Intuitive
Generative models
– Convenient for weakly- or un-supervised, incremental training
– Prior information
– Flexibility (e.g. HDP)
(Figure: incremental learning loop over model, classification, and dataset — Li, Wang & Fei-Fei, CVPR 2007; Sivic, Russell, Efros, Freeman & Zisserman, 2005)

Model properties
Intuitive
Generative models
Discriminative method
– Computationally efficient (Grauman et al. CVPR 2005)

Model properties
Intuitive
Generative models
Discriminative method
Learning and recognition relatively fast
– Compared to other methods

Weakness of the model
– No rigorous geometric information about the object components; it's intuitive to most of us that objects are made of parts, yet the model carries no such information
– Not extensively tested yet for view point invariance and scale invariance
– Segmentation and localization unclear