Learning Visual Similarity Measures for Comparing Never Seen Objects
Eric Nowak, Frédéric Jurie. CVPR 2007.
Presented by Khoa Tran.

Overview
– Motivation
– Method
– Background: extremely randomized trees (used for clustering and for defining a similarity measure)
– Experiments: datasets and results
– Discussion

Motivation Do two images represent the same object instance?

Motivation (cont’d)
Goal: compute the visual similarity of two never-seen objects.
– Train on image pairs labeled “Same” or “Different”.
– Be invariant to occlusions and to changes in pose, lighting, and background.

Method
– Use a large number of images showing the same or different object instances.
– Use local descriptors (SIFT, geometry) to define thousands of patch pairs.
– Use extremely randomized trees to cluster these pairs.
– Use a binary SVM to weight the leaves that indicate positive or negative pairs.
– Apply the resulting weights to any new image pair.

Background
– Local patch pairs
– Forests of extremely randomized binary decision trees

Background: Local Patch Pairs
Select a large number of patch pairs: half from image pairs representing “Same” instances and half from image pairs representing “Different” instances.

Defining Local Patch Pairs
– Choose a random patch in the first image.
– Find the best match in a subset of the second image.
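The matching step can be sketched as a brute-force nearest-neighbor search over descriptors. This is an illustrative sketch, not the authors' code: `find_best_match` is a hypothetical name, and in practice the descriptors would be real SIFT vectors restricted to a search region of the second image.

```python
import numpy as np

def find_best_match(query_desc, candidate_descs):
    """Return the index of the candidate descriptor closest (L2) to the query.

    query_desc: (128,) SIFT-like descriptor of a random patch in image 1.
    candidate_descs: (n, 128) descriptors from a subset of image 2.
    """
    dists = np.linalg.norm(candidate_descs - query_desc, axis=1)
    return int(np.argmin(dists))

# Toy usage: candidate 1 is identical to the query, so it is the best match.
rng = np.random.default_rng(0)
query = rng.random(128)
candidates = np.stack([rng.random(128), query.copy(), rng.random(128)])
best = find_best_match(query, candidates)  # -> 1
```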

Background: Forests of extremely randomized binary decision trees
Cluster the two groups using several independent binary decision trees, each grown dynamically. The split decisions are random and independent of every other node. If a leaf contains elements from both groups, it is subdivided further; child nodes are created until every leaf contains elements from only one group.

Background: Demo
Red represents Group 1; green represents Group 2.

Background: Demo Looking at a single tree. Repeat for every tree.

Background: Demo

Building Trees
– Select a large number of patch pairs, representing both “Same” and “Different” pairs.
– For each node, select an optimal decision to split its pairs.
– Recurse until each leaf contains only positive or only negative elements.

Decision for the Nowak/Jurie article
Each node’s decision has the form
k(S1(i) − d) > 0 and k(S2(i) − d) > 0,
where k = 1 or −1, S1 and S2 are the SIFT descriptors of the two patches in the pair, i is a SIFT dimension, and d is a threshold.
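A minimal sketch of this node test in plain Python (the function name and the toy 2-D descriptors are hypothetical; real S1 and S2 would be 128-dimensional SIFT vectors):

```python
def node_decision(s1, s2, i, d, k):
    """Nowak/Jurie node test: True iff k*(s1[i]-d) > 0 and k*(s2[i]-d) > 0.

    s1, s2: descriptors of the two patches in a pair (toy 2-D lists here).
    i: tested descriptor dimension; d: threshold; k: +1 or -1.
    """
    return k * (s1[i] - d) > 0 and k * (s2[i] - d) > 0

s1, s2 = [0.5, 0.1], [0.4, 0.9]
r1 = node_decision(s1, s2, i=0, d=0.3, k=1)  # both above 0.3 in dim 0 -> True
r2 = node_decision(s1, s2, i=1, d=0.3, k=1)  # s1[1] = 0.1 is below -> False
```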

Building Trees: splitting nodes
Define each node’s decision condition intelligently: choose the split that best separates the “Same” pairs from the “Different” pairs into different children, i.e., the one that maximizes information gain.
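The split selection can be sketched as follows, assuming the extremely-randomized recipe of drawing candidate splits at random and keeping the one with the highest information gain. For brevity this sketch splits single feature vectors with one inequality, rather than patch pairs with the paper's two-inequality test; all names are hypothetical.

```python
import numpy as np

def entropy(labels):
    """Binary entropy of a 0/1 label array, in bits."""
    p = float(np.mean(labels))
    if p in (0.0, 1.0):
        return 0.0
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

def best_random_split(X, y, n_candidates=100, rng=None):
    """Draw random (dimension, threshold) splits and keep the one with the
    highest information gain. X: (n, dims) features, y: (n,) 0/1 labels."""
    if rng is None:
        rng = np.random.default_rng(0)
    h_parent = entropy(y)
    best, best_gain = None, -1.0
    for _ in range(n_candidates):
        i = int(rng.integers(X.shape[1]))
        d = rng.uniform(X[:, i].min(), X[:, i].max())
        right = X[:, i] > d
        if right.all() or not right.any():
            continue  # degenerate split: all elements on one side
        frac = right.mean()
        h_children = frac * entropy(y[right]) + (1 - frac) * entropy(y[~right])
        gain = h_parent - h_children
        if gain > best_gain:
            best, best_gain = (i, d), gain
    return best, best_gain

# Toy data: both dimensions separate the two classes perfectly,
# so a perfect split (gain = 1 bit) is found almost immediately.
X = np.array([[0.1, 0.0], [0.2, 0.1], [0.9, 0.9], [0.8, 1.0]])
y = np.array([0, 0, 1, 1])
(i, d), gain = best_random_split(X, y)  # -> gain == 1.0
```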

Building Trees: stopping condition

Using Trees: Defining the Similarity Measure
– Throw out the patches, keeping only the tree structures.
– Choose new patch pairs from the training set.
– Quantize each image pair to get an image-pair descriptor.

Overview: Quantizing an Image Pair
The descriptor x is a point in binary space. At any node, if both inequalities are satisfied, the pair goes to the right child; otherwise it goes to the left child.

Demo: Quantizing an Image Pair Recall one tree constructed during the clustering phase in the previous demo

Demo: Quantizing an Image Pair Strip off the contents of the original leaves, but retain the tree structure, including node conditions. Consider all patch pairs for a positive image pair.

Demo: Quantizing an Image Pair Introduce a patch pair at the root

Demo: Quantizing an Image Pair Trickle it down to a leaf

Demo: Quantizing an Image Pair Repeat with another patch pair

Demo: Quantizing an Image Pair Repeat for all patch pairs

Demo: Quantizing an Image Pair Repeat for all trees. Build the image pair descriptor based on which leaves have patch pairs.
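The whole quantization pass can be sketched as follows, assuming each tree is stored as nested dicts whose internal nodes carry the two-inequality test (dimension i, threshold d, sign k) and whose leaves carry globally unique integer ids; this structure and all names are hypothetical.

```python
import numpy as np

def route_pair(tree, s1, s2):
    """Drop one patch pair down a tree and return the leaf id it reaches.

    tree: {"leaf": id} or {"i": dim, "d": thr, "k": +-1,
           "left": subtree, "right": subtree}.
    The pair goes right iff k*(s1[i]-d) > 0 and k*(s2[i]-d) > 0.
    """
    while "leaf" not in tree:
        i, d, k = tree["i"], tree["d"], tree["k"]
        go_right = k * (s1[i] - d) > 0 and k * (s2[i] - d) > 0
        tree = tree["right"] if go_right else tree["left"]
    return tree["leaf"]

def quantize_image_pair(trees, patch_pairs, n_leaves):
    """Binary image-pair descriptor: bit l is set iff some patch pair of this
    image pair reaches leaf l in some tree (leaf ids are global here)."""
    x = np.zeros(n_leaves, dtype=np.uint8)
    for tree in trees:
        for s1, s2 in patch_pairs:
            x[route_pair(tree, s1, s2)] = 1
    return x

# Toy 1-D tree with two leaves, split on dimension 0 at threshold 0.5.
tree = {"i": 0, "d": 0.5, "k": 1, "left": {"leaf": 0}, "right": {"leaf": 1}}
pairs = [([0.9], [0.8]),   # both above 0.5 -> right child, leaf 1
         ([0.2], [0.9])]   # not both above  -> left child, leaf 0
x = quantize_image_pair([tree], pairs, n_leaves=2)  # -> array([1, 1])
```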

Demo: Quantizing an Image Pair Demo showing a negative pair

Weighting leaves
– A leaf that is equally probable in positive and negative image pairs should carry little weight, because it is not informative.
– A leaf that occurs only in positive or only in negative image pairs should carry more weight.

Weighting leaves (cont’d)
– Create a descriptor for all positive and negative image pairs in this second set.
– Split the resulting points with a binary SVM, yielding a weight vector ω (the normal of the separating hyperplane).
– For any image-pair descriptor x, S(x) = ωᵀx gives a weighting that reflects the useful leaves.
– For any new, never-before-seen pair of images, we can build a descriptor y; thresholding S(y) determines whether the pair belongs on the positive or negative side of the SVM’s hyperplane.
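The final score is just a dot product with the learned weights. A minimal sketch, with a hand-set weight vector standing in for the one the SVM would actually learn:

```python
import numpy as np

def similarity(omega, x):
    """S(x) = omega^T x: weighted sum over the leaves the image pair occupies."""
    return float(omega @ x)

def same_instance(omega, x, threshold=0.0):
    """Classify a pair as 'Same' iff its score clears the threshold."""
    return similarity(omega, x) > threshold

# Hypothetical 4-leaf forest: leaf 0 fires mostly on positive pairs (+2.0),
# leaf 3 mostly on negative pairs (-2.0), leaves 1-2 are uninformative (~0).
omega = np.array([2.0, 0.1, -0.1, -2.0])
x_pos = np.array([1, 1, 0, 0])  # occupies leaves 0 and 1
x_neg = np.array([0, 1, 1, 1])  # occupies leaves 1, 2, 3
r_pos = same_instance(omega, x_pos)  # -> True  (score 2.1)
r_neg = same_instance(omega, x_neg)  # -> False (score -2.0)
```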

Data Sets
– Toy Cars (new)
– Ferencz & Malik cars
– Jain faces
– Coil-100

Toy Cars

Ferencz & Malik

Jain “Faces in the News”

Coil-100

Classifier vs. Clusters
S_vote
– Uses the leaf labels and the count of patches
– No SVM, just the decision forest
S_lin
– Uses the trees for clustering only
– Second round of patch pairs
– SVM to learn the relevant leaves
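The S_vote variant can be sketched as a signed vote count, assuming each leaf stores the label of the training pairs that reached it; the dict layout and names here are hypothetical:

```python
def s_vote(leaf_labels, leaf_counts):
    """S_vote: net vote over the leaves reached by an image pair's patches.

    leaf_labels: dict leaf_id -> +1 ("Same" leaf) or -1 ("Different" leaf).
    leaf_counts: dict leaf_id -> how many of this image pair's patch pairs
                 reached that leaf.
    """
    return sum(leaf_labels[l] * n for l, n in leaf_counts.items())

# Hypothetical pair: 7 patch pairs land in "Same" leaves and 2 in a
# "Different" leaf, so the net vote is positive.
labels = {0: +1, 1: +1, 2: -1}
counts = {0: 4, 1: 3, 2: 2}
vote = s_vote(labels, counts)  # -> 5
```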

Parameter Evaluation

Comparison with State of the Art
Forest:
– 50 trees
– 200,000 patch pairs for building the trees (½ positive, ½ negative)
– 1,000 random split conditions tried per split
– Used for clustering
– 1,000 new patch pairs for the SVM

Comparison with State of the Art

                   Toy Cars       Ferencz Cars   Faces          Coil 100
Previous           N/A            …              …              … ± 4.0
This               85.9           91.0           84.2           93.0 ± 1.9

Generic or Domain Specific? (Train on Ferencz Cars)

Train on Ferencz   Toy Cars       Ferencz Cars   Faces          Coil 100
Nothing            85.9           91.0           84.2           93.0 ± 1.9
Trees              81.4 (-4.5)    —              60.7 (-23.5)   82.9 (-10.1)
Trees & Weights    57.9 (-28.0)   —              35.0 (-49.2)   57.2 (-35.8)

Generic or Domain Specific? (Train on Coil 100)

Train on Coil 100  Toy Cars       Ferencz Cars   Faces          Coil 100
Nothing            85.9           91.0           84.2           93.0 ± 1.9
Trees              84.3 (-1.6)    88.6 (-2.4)    81.4 (-2.8)    —
Trees & Weights    75.9 (-10.0)   82.0 (-9.0)    71.0 (-13.2)   —

Applications
– Photo collection browsing
– Face recognition
– Assembling photos for 3D reconstruction
– Others?

Discussion
– Is this method applicable to more heterogeneous domains and datasets?
– Could this method be extended to recognize categories rather than instances? Objects in the same category might have few local patches in common, and the method requires finding corresponding local patches.
– Is a general description of object identity possible by using a sufficiently varied training set for tree creation?