“Bag of Words”: recognition using texture. 16-721: Advanced Machine Perception, A. Efros, CMU, Spring 2006. Adapted from Fei-Fei Li, with some slides from L.W. Renninger.


“Bag of Words”: recognition using texture. 16-721: Advanced Machine Perception, A. Efros, CMU, Spring 2006. Adapted from Fei-Fei Li, with some slides from L.W. Renninger. A quiet meditation on the importance of trying simple things first…

Object Bag of ‘words’

Analogy to documents

Document 1: “Of all the sensory impressions proceeding to the brain, the visual experiences are the dominant ones. Our perception of the world around us is based essentially on the messages that reach the brain from our eyes. For a long time it was thought that the retinal image was transmitted point by point to visual centers in the brain; the cerebral cortex was a movie screen, so to speak, upon which the image in the eye was projected. Through the discoveries of Hubel and Wiesel we now know that behind the origin of the visual perception in the brain there is a considerably more complicated course of events. By following the visual impulses along their path to the various cell layers of the optical cortex, Hubel and Wiesel have been able to demonstrate that the message about the image falling on the retina undergoes a step-wise analysis in a system of nerve cells stored in columns. In this system each cell has its specific function and is responsible for a specific detail in the pattern of the retinal image.”
Keywords: sensory, brain, visual, perception, retinal, cerebral cortex, eye, cell, optical nerve, image, Hubel, Wiesel

Document 2: “China is forecasting a trade surplus of $90bn (£51bn) to $100bn this year, a threefold increase on 2004's $32bn. The Commerce Ministry said the surplus would be created by a predicted 30% jump in exports to $750bn, compared with an 18% rise in imports to $660bn. The figures are likely to further annoy the US, which has long argued that China's exports are unfairly helped by a deliberately undervalued yuan. Beijing agrees the surplus is too high, but says the yuan is only one factor. Bank of China governor Zhou Xiaochuan said the country also needed to do more to boost domestic demand so more goods stayed within the country. China increased the value of the yuan against the dollar by 2.1% in July and permitted it to trade within a narrow band, but the US wants the yuan to be allowed to trade freely. However, Beijing has made it clear that it will take its time and tread carefully before allowing the yuan to rise further in value.”
Keywords: China, trade, surplus, commerce, exports, imports, US, yuan, bank, domestic, foreign, increase, trade, value

Pipeline (learning and recognition): feature detection & representation → codewords dictionary → image representation → category models (and/or) classifiers → recognition → category decision

1. Feature detection and representation

Feature detection
– Sliding window: Leung et al., 1999; Viola et al., 1999; Renninger et al., 2002
– Regular grid: Vogel et al.; Fei-Fei et al., 2005
– Interest point detector: Csurka et al.; Fei-Fei et al.; Sivic et al., 2005
– Other methods: random sampling (Ullman et al., 2002); segmentation-based patches (Barnard et al., 2003)

Feature Representation
Visual words, aka textons, aka keypoints: k-means-clustered pieces of the image.
Various representations:
– Filter bank responses
– Image patches
– SIFT descriptors
All encode more or less the same thing…

Interest Point Features: detect patches [Mikolajczyk and Schmid ’02] [Matas et al. ’02] [Sivic et al. ’03], normalize each patch, compute its SIFT descriptor [Lowe ’99]. Slide credit: Josef Sivic

… Interest Point Features

… Patch Features

dictionary formation …

Clustering (usually k-means) Vector quantization … Slide credit: Josef Sivic
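The clustering step above can be sketched in a few lines. This is a toy k-means on made-up 2-D "descriptors" (real pipelines cluster 128-D SIFT vectors or filter responses), not the exact code behind the slides:

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Plain k-means: returns (centers, labels) for data X of shape (n, d)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Assign each descriptor to its nearest center (vector quantization).
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Recompute each center as the mean of its assigned descriptors.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers, labels

# Toy "descriptors": two well-separated clusters in 2-D.
X = np.vstack([np.zeros((10, 2)), np.ones((10, 2)) * 5.0])
centers, labels = kmeans(X, k=2)
```

After convergence the k centers are the codewords; quantizing a new descriptor to its nearest center is exactly the vector-quantization step the slide names.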

Clustered Image Patches Fei-Fei et al. 2005

Filterbank

Textons (Malik et al, IJCV 2001) K-means on vectors of filter responses
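As a sketch of the texton idea, here is a toy two-filter bank whose per-pixel response vectors would then be fed to k-means. This is not Malik et al.'s actual filter set, which uses many oriented filters at multiple scales; the filters and image below are invented for illustration:

```python
import numpy as np

# A tiny filter bank: horizontal and vertical derivative filters.
filters = [np.array([[-1.0, 1.0]]),      # responds to vertical edges
           np.array([[-1.0], [1.0]])]    # responds to horizontal edges

def responses(img):
    """Stack each pixel's filter responses into a feature vector (valid region only)."""
    h, w = img.shape
    out = []
    for f in filters:
        fh, fw = f.shape
        r = np.zeros((h - fh + 1, w - fw + 1))
        for i in range(r.shape[0]):
            for j in range(r.shape[1]):
                r[i, j] = np.sum(img[i:i + fh, j:j + fw] * f)
        out.append(r[:h - 1, :w - 1])  # crop all responses to a common size
    return np.stack(out, axis=-1).reshape(-1, len(filters))

img = np.zeros((4, 4)); img[:, 2:] = 1.0   # a vertical step edge
vecs = responses(img)  # one response vector per pixel; k-means over these gives textons
```

Each row of `vecs` is one pixel's filter-response vector; running k-means over the vectors of many training images yields the texton dictionary.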

Textons (cont.)

Image patch examples of codewords Sivic et al. 2005

Visual synonyms and polysemy. Visual polysemy: a single visual word occurring on different (but locally similar) parts of different object categories. Visual synonyms: two different visual words representing a similar part of an object (e.g., the wheel of a motorbike).

Image representation: a histogram of codeword frequencies (frequency vs. codewords).
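A minimal sketch of turning an image's descriptors into this codeword-frequency histogram. The 3-word codebook and the descriptors are made up for illustration:

```python
import numpy as np

def bow_histogram(descriptors, codebook):
    """Quantize each descriptor to its nearest codeword and count frequencies."""
    d = np.linalg.norm(descriptors[:, None, :] - codebook[None, :, :], axis=2)
    words = d.argmin(axis=1)
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    # Normalize so images with different numbers of descriptors are comparable.
    return hist / hist.sum()

# Hypothetical 3-word codebook and 4 descriptors in 2-D.
codebook = np.array([[0.0, 0.0], [5.0, 5.0], [10.0, 0.0]])
desc = np.array([[0.1, 0.0], [4.9, 5.1], [5.2, 4.8], [9.8, 0.2]])
hist = bow_histogram(desc, codebook)
# hist → [0.25, 0.5, 0.25]: word 1 fired twice, words 0 and 2 once each
```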

Vision Science & Computer Vision Groups, University of California Berkeley. Scene Classification (Renninger & Malik). Scene categories: kitchen, living room, bedroom, bathroom, city, street, farm, beach, mountain, forest.

kNN Texton Matching
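A toy sketch of kNN matching on texton histograms. The chi-squared distance is a standard choice for comparing histograms, though the slides do not specify Renninger & Malik's exact distance, and the training histograms and labels below are invented:

```python
import numpy as np

def chi2_distance(h1, h2, eps=1e-10):
    """Chi-squared distance between two normalized histograms."""
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

def knn_classify(query, train_hists, train_labels, k=1):
    """Label a query histogram by majority vote among its k nearest neighbors."""
    d = np.array([chi2_distance(query, h) for h in train_hists])
    nearest = d.argsort()[:k]
    votes = [train_labels[i] for i in nearest]
    return max(set(votes), key=votes.count)

# Hypothetical per-scene texton histograms over a 3-word dictionary.
train = [np.array([0.8, 0.1, 0.1]), np.array([0.1, 0.8, 0.1])]
labels = ["beach", "kitchen"]
pred = knn_classify(np.array([0.7, 0.2, 0.1]), train, labels)
# pred → "beach": the query histogram is much closer to the beach exemplar
```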

Discrimination of Basic Categories: texture-model performance vs. chance, at presentation times of 37 ms, 50 ms, and 69 ms.

Object Recognition using texture

Learn texture model
– Representation: textons (rotation-variant)
– Clustering: K = 2000, then clever merging, then fitting histogram with Gaussian
– Training: labeled class data
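The "fitting histogram with Gaussian" step might look like the sketch below: fit a diagonal Gaussian per class over texton histograms and classify a query by log-likelihood. The classes and histograms are invented, and the slide's "clever merging" step is omitted:

```python
import numpy as np

def fit_gaussian(hists):
    """Fit an independent (diagonal-covariance) Gaussian to a class's histograms."""
    H = np.array(hists)
    return H.mean(axis=0), H.var(axis=0) + 1e-6  # small floor avoids zero variance

def log_likelihood(h, mu, var):
    """Diagonal-Gaussian log-likelihood of histogram h."""
    return -0.5 * np.sum((h - mu) ** 2 / var + np.log(2 * np.pi * var))

# Hypothetical per-class training histograms over a 3-word texton dictionary.
classes = {
    "zebra":  [np.array([0.7, 0.2, 0.1]), np.array([0.6, 0.3, 0.1])],
    "jungle": [np.array([0.1, 0.2, 0.7]), np.array([0.2, 0.2, 0.6])],
}
models = {c: fit_gaussian(hs) for c, hs in classes.items()}

query = np.array([0.65, 0.25, 0.10])
pred = max(models, key=lambda c: log_likelihood(query, *models[c]))
# pred → "zebra": the query matches the zebra class mean almost exactly
```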

Results movie

Simple is still the best!

Discussion: There seems to be no geometry (true/false?), so why does it work so well? Which sampling scheme do you think is better? Which patch representation is better (invariance vs. discriminability)? What are the big challenges for this type of method?