Part 1: Bag-of-words models


Part 1: Bag-of-words models by Li Fei-Fei (UIUC)

Related works
Early “bag of words” models, mostly texture recognition: Cula et al. 2001; Leung et al. 2001; Schmid 2001; Varma et al. 2002, 2003; Lazebnik et al. 2003
Hierarchical Bayesian models for documents (pLSA, LDA, etc.): Hofmann 1999; Blei et al. 2004; Teh et al. 2004
Object categorization: Dorko et al. 2004; Csurka et al. 2003; Sivic et al. 2005; Sudderth et al. 2005
Natural scene categorization: Fei-Fei et al. 2005

Object Bag of ‘words’

Analogy to documents

Of all the sensory impressions proceeding to the brain, the visual experiences are the dominant ones. Our perception of the world around us is based essentially on the messages that reach the brain from our eyes. For a long time it was thought that the retinal image was transmitted point by point to visual centers in the brain; the cerebral cortex was a movie screen, so to speak, upon which the image in the eye was projected. Through the discoveries of Hubel and Wiesel we now know that behind the origin of the visual perception in the brain there is a considerably more complicated course of events. By following the visual impulses along their path to the various cell layers of the optical cortex, Hubel and Wiesel have been able to demonstrate that the message about the image falling on the retina undergoes a step-wise analysis in a system of nerve cells stored in columns. In this system each cell has its specific function and is responsible for a specific detail in the pattern of the retinal image.
Highlighted words: sensory, brain, visual, perception, retinal, cerebral cortex, eye, cell, optical nerve, image, Hubel, Wiesel

China is forecasting a trade surplus of $90bn (£51bn) to $100bn this year, a threefold increase on 2004's $32bn. The Commerce Ministry said the surplus would be created by a predicted 30% jump in exports to $750bn, compared with an 18% rise in imports to $660bn. The figures are likely to further annoy the US, which has long argued that China's exports are unfairly helped by a deliberately undervalued yuan. Beijing agrees the surplus is too high, but says the yuan is only one factor. Bank of China governor Zhou Xiaochuan said the country also needed to do more to boost domestic demand so more goods stayed within the country. China increased the value of the yuan against the dollar by 2.1% in July and permitted it to trade within a narrow band, but the US wants the yuan to be allowed to trade freely. However, Beijing has made it clear that it will take its time and tread carefully before allowing the yuan to rise further in value.
Highlighted words: China, trade, surplus, commerce, exports, imports, US, yuan, bank, domestic, foreign, increase, value

Overview
Learning: feature detection & representation → codewords dictionary → image representation → category models (and/or) classifiers
Recognition: image representation → category models (and/or) classifiers → category decision

Representation: 1. feature detection, 2. codewords dictionary, 3. image representation

1. Feature detection and representation
Regular grid: Vogel et al. 2003; Fei-Fei et al. 2005
Interest point detector: Csurka et al. 2004; Sivic et al. 2005
Other methods: random sampling (Ullman et al. 2002), segmentation-based patches (Barnard et al. 2003)

1. Feature detection and representation
Detect patches [Mikolajczyk and Schmid ’02] [Matas et al. ’02] [Sivic et al. ’03] → Normalize patch → Compute SIFT descriptor [Lowe ’99] Slide credit: Josef Sivic
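The detect → normalize → describe step can be sketched in plain NumPy. This is a minimal, hedged stand-in for real SIFT (Lowe ’99), not the actual implementation: patches are taken on a regular grid rather than at interest points, and the descriptor is a crude 4×4-cell gradient orientation histogram; all function names and parameters here are illustrative.

```python
import numpy as np

def extract_patches(img, patch=16, stride=8):
    """Sample patches on a regular grid (one of the strategies above)."""
    ps = []
    for y in range(0, img.shape[0] - patch + 1, stride):
        for x in range(0, img.shape[1] - patch + 1, stride):
            ps.append(img[y:y + patch, x:x + patch])
    return ps

def sift_like_descriptor(p, bins=8, cells=4):
    """Crude SIFT-style descriptor: per-cell gradient orientation histograms."""
    p = (p - p.mean()) / (p.std() + 1e-8)          # normalize the patch
    gy, gx = np.gradient(p.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx) % (2 * np.pi)
    h = p.shape[0] // cells
    desc = []
    for i in range(cells):
        for j in range(cells):
            a = ang[i * h:(i + 1) * h, j * h:(j + 1) * h].ravel()
            m = mag[i * h:(i + 1) * h, j * h:(j + 1) * h].ravel()
            hist, _ = np.histogram(a, bins=bins, range=(0, 2 * np.pi), weights=m)
            desc.append(hist)
    d = np.concatenate(desc).astype(float)
    return d / (np.linalg.norm(d) + 1e-8)          # unit-normalize

img = np.random.rand(64, 64)                        # stand-in image
descs = np.stack([sift_like_descriptor(p) for p in extract_patches(img)])
print(descs.shape)                                  # (num_patches, 4*4*8) = (49, 128)
```

With 16-pixel patches at stride 8 on a 64×64 image this yields 49 descriptors of dimension 128, matching SIFT's dimensionality by construction.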


2. Codewords dictionary formation
Cluster the patch descriptors (e.g., with k-means) and take the cluster centers as codewords; each patch is then assigned to its nearest codeword (vector quantization). Fei-Fei et al. 2005. Slide credit: Josef Sivic
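Dictionary formation by vector quantization can be sketched as follows. This is a minimal k-means in NumPy on toy random descriptors; the names and the choice of k are illustrative, and a real pipeline would cluster descriptors pooled from many training images.

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Plain k-means: cluster descriptors; the centers become the codewords."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        assign = d.argmin(1)                      # nearest center per descriptor
        for j in range(k):
            pts = X[assign == j]
            if len(pts):
                centers[j] = pts.mean(0)          # move center to cluster mean
    return centers

def quantize(X, centers):
    """Vector quantization: map each descriptor to its nearest codeword id."""
    d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return d.argmin(1)

rng = np.random.default_rng(1)
descriptors = rng.normal(size=(500, 128))   # toy patch descriptors
codebook = kmeans(descriptors, k=10)        # 10 codewords
words = quantize(descriptors, codebook)     # codeword id for every patch
print(codebook.shape, words.shape)          # (10, 128) (500,)
```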

Image patch examples of codewords (Sivic et al. 2005)

3. Image representation
Represent each image as a histogram of codeword frequencies.
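Given the codeword id of every patch, the image representation is just a normalized frequency histogram over the dictionary, e.g.:

```python
import numpy as np

def bow_histogram(word_ids, vocab_size):
    """Image representation: normalized frequency of each codeword."""
    h = np.bincount(word_ids, minlength=vocab_size).astype(float)
    return h / h.sum()

words = np.array([0, 2, 2, 1, 4, 2])        # codeword ids for one image's patches
hist = bow_histogram(words, vocab_size=5)
print(hist)                                 # [1/6, 1/6, 3/6, 0, 1/6]
```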

Representation (recap): 1. feature detection, 2. codewords dictionary, 3. image representation

Learning and Recognition
Given the codewords dictionary, learn category models (and/or) classifiers; at recognition time, output a category decision.

2 case studies
Naïve Bayes classifier: Csurka et al. 2004
Hierarchical Bayesian text models (pLSA and LDA). Background: Hofmann 2001; Blei et al. 2004. Object categorization: Sivic et al. 2005; Sudderth et al. 2005. Natural scene categorization: Fei-Fei et al. 2005

First, some notation
wn: each patch in an image, encoded as an indicator (one-hot) vector over the codewords, wn = [0,0,…,1,…,0,0]T
w: the collection of all N patches in an image, w = [w1, w2, …, wN]
dj: the jth image in an image collection
c: category of the image
z: theme or topic of the patch

Case #1: the Naïve Bayes model
c* = argmaxc P(c|w) ∝ P(c) P(w|c) = P(c) ∏n=1..N P(wn|c)
P(c) is the prior probability of the object classes; ∏n P(wn|c) is the image likelihood given the class; c* is the object class decision. Csurka et al. 2004

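The Naïve Bayes decision rule can be sketched on codeword count vectors. This is a generic multinomial Naïve Bayes with Laplace smoothing in the spirit of Csurka et al. 2004, not their exact implementation; the toy data and function names are illustrative.

```python
import numpy as np

def train_nb(counts, labels, n_classes, alpha=1.0):
    """Multinomial Naive Bayes with Laplace smoothing.
    counts: (n_images, vocab_size) matrix of codeword counts."""
    priors = np.array([(labels == c).mean() for c in range(n_classes)])
    lik = np.stack([counts[labels == c].sum(0) + alpha
                    for c in range(n_classes)])
    lik /= lik.sum(1, keepdims=True)              # P(w|c), rows sum to 1
    return np.log(priors), np.log(lik)

def classify_nb(x, log_prior, log_lik):
    """c* = argmax_c log P(c) + sum_n count(w_n) log P(w_n|c)."""
    return int(np.argmax(log_prior + log_lik @ x))

# toy data: class 0 favours early codewords, class 1 late ones
X = np.array([[5, 3, 0, 1], [4, 4, 1, 0], [0, 1, 4, 5], [1, 0, 5, 3]], float)
y = np.array([0, 0, 1, 1])
lp, ll = train_nb(X, y, n_classes=2)
print(classify_nb(np.array([6, 2, 0, 1.0]), lp, ll))  # → 0
```

Working in log space avoids underflow from multiplying many small patch likelihoods, which matters once N is in the hundreds of patches per image.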

Case #2: Hierarchical Bayesian text models
Probabilistic Latent Semantic Analysis (pLSA): each document d mixes topics z, which generate the words w. Hofmann, 2001
Latent Dirichlet Allocation (LDA): a Dirichlet prior (θ) over the per-class topic mixture generates z, which generates w. Blei et al., 2003

Case #2: Hierarchical Bayesian text models
pLSA applied to images: discovered topics correspond to object categories such as “face”. Sivic et al. ICCV 2005

Case #2: Hierarchical Bayesian text models
LDA applied to natural scene categories such as “beach”. Fei-Fei et al. ICCV 2005

Case #2: the pLSA model

Case #2: the pLSA model
The observed codeword distributions decompose into codeword distributions per theme (topic) and theme distributions per image. pLSA can be thought of as a matrix decomposition: the term-document matrix is factored into topic vectors P(w|z) common to all documents and mixture coefficients P(z|d) specific to each document, P(wi|dj) = Σk P(wi|zk) P(zk|dj). These are all probabilities, which sum to one, so each column of the term-document matrix is expressed as a convex combination of the topic vectors. What we would like is for the topics to correspond to objects, so that an image is expressed as a mixture of different objects and backgrounds. Slide credit: Josef Sivic

Case #2: Recognition using pLSA
An image is a mixture of learned topic vectors and mixing weights. Fit the model and use the learned mixing coefficients P(z|d) to classify the image; in particular, assign it to the topic with the highest P(z|d). Slide credit: Josef Sivic

Case #2: Learning the pLSA parameters
We use a maximum likelihood approach to fit the parameters of the model: write down the likelihood of the whole collection and maximize it using EM. With n(wi, dj) the observed count of word i in document j,
L = ∏i=1..M ∏j=1..N P(wi|dj)^n(wi,dj)
where P(w|d) factorizes as on the previous slide and the entries of those matrices are the parameters; M is the number of codewords and N the number of images. Slide credit: Josef Sivic
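The EM fit can be sketched in a few lines of NumPy. This is a standard pLSA EM loop on a toy codeword-by-image count matrix, a hedged illustration rather than the course's actual demo code; the block structure of the toy data is chosen so two topics are recoverable.

```python
import numpy as np

def plsa(n_wd, K, iters=100, seed=0):
    """EM for pLSA on an M x N count matrix n_wd (codewords x images).
    Returns P(w|z) of shape (M, K) and P(z|d) of shape (K, N)."""
    rng = np.random.default_rng(seed)
    M, N = n_wd.shape
    p_w_z = rng.random((M, K)); p_w_z /= p_w_z.sum(0, keepdims=True)
    p_z_d = rng.random((K, N)); p_z_d /= p_z_d.sum(0, keepdims=True)
    for _ in range(iters):
        # E-step: posterior P(z|w,d) ∝ P(w|z) P(z|d)
        post = p_w_z[:, :, None] * p_z_d[None, :, :]        # M x K x N
        post /= post.sum(1, keepdims=True) + 1e-12
        # M-step: reweight the posterior by the observed counts n(w,d)
        nz = n_wd[:, None, :] * post                        # M x K x N
        p_w_z = nz.sum(2); p_w_z /= p_w_z.sum(0, keepdims=True)
        p_z_d = nz.sum(0); p_z_d /= p_z_d.sum(0, keepdims=True)
    return p_w_z, p_z_d

# toy collection: images 0-1 use early codewords, images 2-3 use late ones
n_wd = np.array([[8, 7, 0, 1],
                 [9, 6, 1, 0],
                 [0, 1, 7, 9],
                 [1, 0, 8, 6]], float)
p_w_z, p_z_d = plsa(n_wd, K=2)
# recognition: assign each image to its dominant topic, argmax_z P(z|d)
print(p_z_d.argmax(0))
```

On this data the first two images end up sharing one topic and the last two the other (the topic labels themselves are arbitrary, as in any mixture model).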

Demo (code available on the course website)

Demo task: face detection, with no labeling (unsupervised)

Demo: feature detection
Output of a crude feature detector: find edges, draw points randomly from the edge set, then draw from a uniform distribution to get the scale.
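The crude detector described here can be sketched as follows. This is a guess at the recipe in NumPy (gradient-magnitude thresholding stands in for a real edge detector); function names and the scale range are illustrative, not the demo's actual values.

```python
import numpy as np

def crude_detector(img, n_points=50, seed=0):
    """Find edges, sample point locations from the edge set,
    and draw scales from a uniform distribution."""
    rng = np.random.default_rng(seed)
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ys, xs = np.nonzero(mag > mag.mean())       # crude "edge" set
    idx = rng.choice(len(ys), size=n_points)    # random points from the edges
    scales = rng.uniform(4, 16, size=n_points)  # uniform scale
    return xs[idx], ys[idx], scales

img = np.zeros((64, 64)); img[20:40, 20:40] = 1.0   # toy image with a square
x, y, s = crude_detector(img)
print(x.shape, y.shape, s.shape)                    # (50,) (50,) (50,)
```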

Demo: learnt parameters
Learning the model: do_plsa(‘config_file_1’)
Evaluate and visualize the model: do_plsa_evaluation(‘config_file_1’)
Outputs: codeword distributions per theme (topic) and theme distributions per image

Demo: recognition examples

Demo: categorization results Performance of each theme

Demo: naïve Bayes
Learning the model: do_naive_bayes(‘config_file_2’)
Evaluate and visualize the model: do_naive_bayes_evaluation(‘config_file_2’)

Learning and Recognition (recap): category models (and/or) classifiers over the codewords dictionary → category decision

Invariance issues
Scale and rotation: implicit in the detectors and descriptors (Kadir and Brady 2003)
Occlusion: implicit in the models; the codeword distribution shows only small variations, and (in theory) the theme (z) distribution captures different occlusion patterns
Translation: encode (relative) location information (Sudderth et al. 2005)
View point: (in theory) handled by the codewords via the detector and descriptor, and by theme distributions over different view points (Fergus et al. 2005)

Model properties
Intuitive: the analogy to documents, where an image is summarized by its visual “words” (as in the text example earlier)
(Could use) generative models: convenient for weakly- or un-supervised training, and prior information can be incorporated via a hierarchical Bayesian framework (Sivic et al. 2005; Sudderth et al. 2005)
Learning and recognition are relatively fast compared to other methods

Weaknesses of the model
No rigorous geometric information about the object components: it is intuitive to most of us that objects are made of parts, yet the model carries no such information
Not yet extensively tested for view point invariance or scale invariance
Segmentation and localization remain unclear