1
Learning Visual Similarity Measures for Comparing Never Seen Objects
Eric Nowak, Frédéric Jurie
CVPR 2007
2
Overview
Motivation
Method
Background
– Extremely Randomized Trees: used for clustering and for defining a similarity measure
Experiments
– Datasets
– Results
Discussion
3
Motivation
Do two images represent the same object instance?
4
Motivation (con’t)
Goal: compute the visual similarity of two never-seen objects
Train on pairs labeled "Same" or "Different"
Invariant to occlusions and to changes in pose, lighting, and background
5
Method
Use a large number of images showing the same or different object instances
Use local descriptors (SIFT, geometry) to define thousands of patch pairs
Use extremely randomized trees to cluster these pairs
Use a binary SVM to weight the positive/negative pair clusters (tree leaves)
Apply the resulting weights to any new image pair
6
Background
Local patch pairs
Forests of extremely randomized binary decision trees
7
Background: Local Patch Pairs
Select a large number of patch pairs, half from image pairs representing "Same" instances and half from image pairs representing "Different" instances. (Figure: example patch pairs under the headings "Same" and "Different".)
8
Defining Local Patch Pairs
Choose a random patch in the first image
Find the best match in a subset of the second image
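A minimal sketch of this matching step in Python (not the authors' code), assuming SIFT descriptors have already been computed: one for the random patch from the first image, and one per candidate patch sampled from the search region of the second image.

```python
import numpy as np

def best_match(desc, candidates):
    """Return the index of the candidate descriptor closest to desc
    in Euclidean distance -- the 'best match' in the second image.

    desc:       (128,) SIFT descriptor of the random patch in image 1.
    candidates: (n, 128) SIFT descriptors of patches from a restricted
                search region of image 2.
    """
    dists = np.linalg.norm(candidates - desc, axis=1)
    return int(np.argmin(dists))
```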
9
Background: Forests of extremely randomized binary decision trees
Separate the two groups using several independent binary decision trees, each grown dynamically. Each node's decision is chosen at random, independently of every other node. If a leaf contains elements from both groups, it is subdivided further; child nodes are created until every leaf contains elements from only one group.
10
Background: Demo
Red represents Group 1, green represents Group 2
11
Background: Demo Looking at a single tree. Repeat for every tree.
12
Background: Demo
14
Building Trees
Select a large number of patch pairs, representing both "Same" and "Different" pairs
For each node, select an optimal decision to split its pairs
Recurse until each leaf has only positive or only negative elements
15
Decision for the Nowak/Jurie article
Each node's decision has the form k(S₁(i) − d) > 0 and k(S₂(i) − d) > 0, where k = 1 or −1, i is a SIFT dimension, and d is a threshold. S₁ and S₂ are the SIFT descriptors of the two patches (Patch 1 and Patch 2 in the figure).
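The test is simple enough to state directly in code; the following is a literal transcription of the formula above, with hypothetical argument names. Per the quantization slides later in the talk, a pair for which both inequalities hold goes to the right child.

```python
def node_test(s1, s2, k, i, d):
    """One node's decision on a patch pair.

    s1, s2: SIFT descriptors of the two patches,
    k: +1 or -1, i: a SIFT dimension index, d: a threshold.
    Returns True when both inequalities hold (right child),
    False otherwise (left child).
    """
    return k * (s1[i] - d) > 0 and k * (s2[i] - d) > 0
```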
16
Building Trees: Splitting Nodes
We want to choose each node's decision intelligently, so that it separates the "Same" pairs and the "Different" pairs into different children: among the candidate decisions, pick the one with the highest information gain.
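A minimal sketch of the split selection under two assumptions of mine: descriptor entries are normalized to [0, 1], and candidates are scored by information gain exactly as stated above. The candidate count of 1,000 matches the setting reported later in the talk; the names and shapes are illustrative only.

```python
import numpy as np

def entropy(labels):
    """Shannon entropy of binary labels (1 = 'Same', 0 = 'Different')."""
    p = float(np.mean(labels))
    if p in (0.0, 1.0):
        return 0.0
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

def choose_split(pairs, labels, n_candidates=1000, rng=None):
    """Among n_candidates random (k, i, d) decisions, return the one
    with the highest information gain, or None if no useful candidate
    is found. pairs: (n, 2, 128) SIFT descriptor pairs; labels: (n,).
    """
    if rng is None:
        rng = np.random.default_rng()
    best, best_gain = None, 0.0
    h = entropy(labels)
    for _ in range(n_candidates):
        k = int(rng.choice([-1, 1]))
        i = int(rng.integers(pairs.shape[2]))
        d = float(rng.uniform(0.0, 1.0))   # assumes descriptors in [0, 1]
        right = (k * (pairs[:, 0, i] - d) > 0) & (k * (pairs[:, 1, i] - d) > 0)
        nr, nl = int(right.sum()), int((~right).sum())
        if nr == 0 or nl == 0:
            continue                        # all pairs went to one side
        gain = h - (nr * entropy(labels[right])
                    + nl * entropy(labels[~right])) / len(labels)
        if gain > best_gain:
            best_gain, best = gain, (k, i, d)
    return best
```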
17
Building Trees: Stopping Condition
Stop splitting a node once it contains only "Same" pairs or only "Different" pairs; it then becomes a leaf.
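Putting the split search and the stopping rule together, one tree can be grown recursively. This sketch reuses choose_split from above and represents the tree as nested dicts, a representation of mine rather than the paper's.

```python
import numpy as np

def grow(pairs, labels, rng=None, n_candidates=1000):
    """Grow one extremely randomized tree; recursion stops when a
    node is pure (only 'Same' or only 'Different' pairs)."""
    if rng is None:
        rng = np.random.default_rng()
    if labels.min() == labels.max():            # pure leaf: stop
        return {"leaf": int(labels[0])}
    split = choose_split(pairs, labels, n_candidates, rng)
    if split is None:                           # no useful split found
        return {"leaf": int(round(float(labels.mean())))}
    k, i, d = split
    right = (k * (pairs[:, 0, i] - d) > 0) & (k * (pairs[:, 1, i] - d) > 0)
    return {"k": k, "i": i, "d": d,
            "left":  grow(pairs[~right], labels[~right], rng, n_candidates),
            "right": grow(pairs[right],  labels[right],  rng, n_candidates)}
```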
18
Using Trees: Defining the Similarity Measure
Throw out the patches, keeping only the tree structures. Choose new patch pairs from the training set. Quantize each image pair to get an image-pair descriptor.
19
Overview: Quantizing an Image Pair
The descriptor x is a point in a binary space. At any node, if both inequalities are satisfied, put the pair in the right child; otherwise, put it in the left child. (Example node parameters from the figure: d = 0.19, k = 1 and d = 0.03, k = 1.)
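A minimal sketch of the quantization, continuing the nested-dict trees from the growing sketch. I assume the stripped leaves have been renumbered with a single global index j across all trees, so the image-pair descriptor has one bit per leaf; names and shapes are illustrative.

```python
import numpy as np

def drop_pair(node, s1, s2):
    """Trickle one patch pair down to a leaf and return the leaf's
    global index. Internal nodes keep their (k, i, d) tests; leaves
    were relabeled {'leaf_index': j} after training."""
    while "leaf_index" not in node:
        right = (node["k"] * (s1[node["i"]] - node["d"]) > 0
                 and node["k"] * (s2[node["i"]] - node["d"]) > 0)
        node = node["right"] if right else node["left"]
    return node["leaf_index"]

def quantize(trees, patch_pairs, n_leaves):
    """Binary image-pair descriptor x: bit j is 1 iff some patch pair
    of this image pair reaches leaf j of some tree."""
    x = np.zeros(n_leaves, dtype=np.uint8)
    for tree in trees:
        for s1, s2 in patch_pairs:
            x[drop_pair(tree, s1, s2)] = 1
    return x
```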
20
Demo: Quantizing an Image Pair Recall one tree constructed during the clustering phase in the previous demo
21
Demo: Quantizing an Image Pair Strip off the contents of the original leaves, but retain the tree structure, including node conditions. Consider all patch pairs for a positive image pair.
22
Demo: Quantizing an Image Pair Introduce a patch pair at the root
23
Demo: Quantizing an Image Pair Trickle it down to a leaf
24
Demo: Quantizing an Image Pair Trickle it down to a leaf
25
Demo: Quantizing an Image Pair Trickle it down to a leaf
26
Demo: Quantizing an Image Pair Repeat with another patch pair
27
Demo: Quantizing an Image Pair Repeat with another patch pair
28
Demo: Quantizing an Image Pair Repeat with another patch pair
29
Demo: Quantizing an Image Pair Repeat with another patch pair
30
Demo: Quantizing an Image Pair Repeat with another patch pair
31
Demo: Quantizing an Image Pair Repeat for all patch pairs
32
Demo: Quantizing an Image Pair Repeat for all trees. Build the image pair descriptor based on which leaves have patch pairs.
33
Demo: Quantizing an Image Pair Build the image pair descriptor based on which leaves have patch pairs.
34
Demo: Quantizing an Image Pair Demo showing a negative pair
35
Demo: Quantizing an Image Pair Demo showing a negative pair
36
Demo: Quantizing an Image Pair Demo showing a negative pair
37
Weighting leaves
A leaf that is equally probable in positive and negative image pairs should carry little weight, because it is uninformative. A leaf that occurs only in positive or only in negative image pairs should carry more weight.
38
Weighting leaves
A leaf that is equally probable in positive and negative image pairs should carry little weight, because it is uninformative. A leaf that occurs only in positive or only in negative image pairs should carry more weight.
39
Weighting leaves
A leaf that is equally probable in positive and negative image pairs should carry little weight, because it is uninformative. A leaf that occurs only in positive or only in negative image pairs should carry more weight.
40
Weighting leaves
A leaf that is equally probable in positive and negative image pairs should carry little weight, because it is uninformative. A leaf that occurs only in positive or only in negative image pairs should carry more weight.
41
Weighting leaves
A leaf that is equally probable in positive and negative image pairs should carry little weight, because it is uninformative. A leaf that occurs only in positive or only in negative image pairs should carry more weight.
42
Weighting leaves (con’t)
Create a descriptor for every positive and negative image pair in this second training set. Separate the resulting points with a binary linear SVM, which defines a weight vector ω. For any image-pair descriptor x, S(x) = ωᵀx gives a score that reflects the informative leaves. For any new, never-before-seen pair of images we can compute a descriptor y; thresholding S(y) determines whether the pair falls on the positive or the negative side of the SVM's hyperplane.
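A minimal sketch of this weighting step using scikit-learn's linear SVM; the function and the data layout are my own, not the authors' implementation.

```python
import numpy as np
from sklearn.svm import LinearSVC

def learn_leaf_weights(X, y):
    """Fit a linear SVM on binary image-pair descriptors.

    X: (n_pairs, n_leaves) 0/1 descriptors from the second round of
       patch pairs; y: +1 for 'Same', -1 for 'Different'.
    Returns S with S(x) = w.x + b; uninformative leaves (equally
    probable in both classes) end up with weights near zero.
    """
    svm = LinearSVC().fit(X, y)
    w = svm.coef_.ravel()           # per-leaf weights (omega)
    b = float(svm.intercept_[0])
    return lambda x: float(w @ x) + b

# Usage:
#   S = learn_leaf_weights(X, y)
#   same = S(y_new) > 0    # y_new: descriptor of a never-seen image pair
```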
43
Data Sets
Toy Cars (new)
Ferencz & Malik cars
Jain faces
Coil-100
44
Toy Cars
46
Ferencz & Malik
47
Jain “Faces in the News”
48
Coil-100
50
Classifier vs. Clusters
S_vote (a sketch follows below)
– Uses the leaf labels and the count of patches
– No SVM, just the decision forest
S_lin
– Uses the trees for clustering only
– Second round of patch pairs
– SVM to learn the relevant leaves
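The slide gives only the ingredients of S_vote, so the following is one plausible reading (an assumption, not the paper's exact formula): each leaf stores the majority label of the training patch pairs that reached it, and the similarity is the fraction of routings that land in "Same" leaves.

```python
def s_vote(trees, patch_pairs, leaf_is_same):
    """Leaf-voting similarity: fraction of (tree, patch pair)
    routings ending in a leaf labeled 'Same'. Reuses drop_pair
    from the quantization sketch; leaf_is_same maps a global leaf
    index to 1 ('Same' majority) or 0 ('Different' majority)."""
    votes = total = 0
    for tree in trees:
        for s1, s2 in patch_pairs:
            votes += leaf_is_same[drop_pair(tree, s1, s2)]
            total += 1
    return votes / total
```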
51
Parameter Evaluation
54
Comparison with State of the Art
Forest
– 50 trees
– 200,000 patch pairs for building the trees (½ positive, ½ negative)
– 1,000 random split conditions tried per split
Used for clustering
– 1,000 new patch pairs for the SVM
55
Comparison with State of the Art

              Toy Cars     Ferencz Cars   Faces        Coil 100
Previous      N/A          84.9           70.0         88.6 ± 4.0
This paper    85.9 ± 0.4   91.0 ± 0.6     84.2 ± 3.1   93.0 ± 1.9
56
Generic or Domain Specific? (Train on Ferencz Cars)

                 Toy Cars      Ferencz Cars   Faces          Coil 100
Nothing          85.9 ± 0.4    91.0 ± 0.6     84.2 ± 3.1     93.0 ± 1.9
Trees            81.4 (−4.5)   –              60.7 (−23.5)   82.9 (−10.1)
Trees & Weights  57.9 (−28.0)  –              35.0 (−49.2)   57.2 (−35.8)

("Nothing" = trees and weights trained on the target dataset itself; the rows below transfer the trees, or the trees and the SVM weights, from Ferencz Cars. Parentheses show the drop from that baseline.)
57
Generic or Domain Specific? (Train on Coil 100)

                 Toy Cars      Ferencz Cars   Faces          Coil 100
Nothing          85.9 ± 0.4    91.0 ± 0.6     84.2 ± 3.1     93.0 ± 1.9
Trees            84.3 (−1.6)   88.6 (−2.4)    81.4 (−2.8)    –
Trees & Weights  75.9 (−10.0)  82.0 (−9.0)    71.0 (−13.2)   –
58
Applications
Photo collection browsing
Face recognition
Assembling photos for 3D reconstruction
Others?
59
Discussion
Is this method applicable to more heterogeneous domains and datasets?
Could this method be extended to recognize categories rather than instances?
– Objects in the same category might have few local patches in common
– The method requires finding corresponding local patches
Is a general description of object identity possible by using a sufficiently varied training set for tree creation?