2009. 8. 21 (Fri), Young Ki Baik, Computer Vision Lab.
References
Compact Signatures for High-speed Interest Point Description and Matching. M. Calonder, V. Lepetit, P. Fua, et al. (ICCV 2009, oral)
Keypoint Signatures for Fast Learning and Recognition. M. Calonder, V. Lepetit, P. Fua (ECCV 2008)
Fast Keypoint Recognition Using Random Ferns. M. Ozuysal, M. Calonder, V. Lepetit, P. Fua (PAMI 2009)
Keypoint Recognition Using Randomized Trees. V. Lepetit, P. Fua (PAMI 2006)
Outline
Previous works: randomized trees, fern classifiers, signatures
Proposed algorithm: compact signatures
Experimental results
Conclusion
Problem statement: classification (or recognition)
Assumptions: the number of classes is N, and p denotes the appearance of an image patch. New patches are obtained from the input image; the question is: what is the class of p?
Problem statement: classification (or recognition)
Conventional approach: compute the distance (e.g. the sum of squared differences) between p and every reference patch, and assign p the class of its nearest neighbour. In the slide's example the three SSDs are 1, 10, and 11, so the class at distance 1 wins.
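This brute-force nearest-neighbour matching can be sketched in a few lines; the reference patches, the SSD metric, and the class layout below are illustrative assumptions, not the slides' actual data:

```python
import numpy as np

def nearest_class(p, references):
    """Assign p the class of the reference patch with the smallest
    sum of squared differences (SSD)."""
    ssd = [np.sum((p - r) ** 2) for r in references]
    return int(np.argmin(ssd))

# Three hypothetical reference patches, one per class.
refs = [np.full((8, 8), v, dtype=float) for v in (0.0, 0.5, 1.0)]
p = np.full((8, 8), 0.45)      # new patch, closest to the 0.5 reference
cls = nearest_class(p, refs)
```

The cost of this approach grows with the number of classes, which is what motivates the classifier-based methods that follow.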
Problem statement: classification (or recognition)
Advanced approaches (SIFT, SURF, ...): compute a descriptor for each patch and match descriptors. Descriptors are designed to handle view changes (rotation, scale, affine) and illumination changes. Bottleneck: when descriptors must be built for many input patches, descriptor computation dominates the runtime.
Randomized tree
Randomized tree: a simple classifier f
Randomly select pixel positions a and b in the image patch. The binary test compares the intensities of the two pixels around the keypoint: f = 1 if I(a) < I(b), else 0, where I(*) is the intensity at pixel position *. The test is invariant to any monotonically increasing change of the intensities.
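The binary test can be sketched as follows; `make_binary_test` and the 32-pixel patch size are hypothetical, and the invariance claim holds because a monotonically increasing remapping preserves the order of any two intensities:

```python
import numpy as np

def make_binary_test(patch_size, rng):
    """Pick two random pixel positions a and b inside the patch."""
    a = tuple(rng.integers(0, patch_size, size=2))
    b = tuple(rng.integers(0, patch_size, size=2))
    def f(patch):
        # 1 if I(a) < I(b), else 0; invariant to any monotonically
        # increasing remapping of the intensities.
        return int(patch[a] < patch[b])
    return f

rng = np.random.default_rng(0)
f = make_binary_test(32, rng)
patch = rng.integers(0, 256, size=(32, 32))
bit = f(patch)
```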
Randomized tree
Initialization of an RT with classifiers f: fix the tree depth d and build a binary tree whose internal nodes hold tests f0, f1, ..., each branching on the test outcome (0 or 1). Each leaf stores an N-dimensional vector of class probabilities. The number of leaves is 2^d, where d is the total depth.
Training an RT
Generate patches that cover the expected image variations (scale, rotation, affine transforms, ...).
Run every generated patch through the tree and update the probability vector stored at the leaf the patch reaches.
Classification with a trained RT
Drop the input patch down the tree and read off the class probabilities stored at the leaf it reaches.
Random forest
Multiple RTs (a random forest) are used for robustness.
The final probability is obtained by summing the probabilities returned by the individual trees.
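The training, classification, and forest-averaging steps above can be sketched as a toy implementation; the class name, depth, Laplace prior, and the tiny two-class training set are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

class RandomizedTree:
    """Toy RT: one random pixel-pair test per internal node; each
    leaf holds per-class counts that become P(class | leaf)."""
    def __init__(self, depth, n_classes, patch_size, rng):
        self.depth = depth
        n_nodes = 2 ** depth - 1                 # internal nodes
        self.pairs = rng.integers(0, patch_size, size=(n_nodes, 2, 2))
        self.counts = np.ones((2 ** depth, n_classes))  # Laplace prior

    def leaf(self, patch):
        node = 0
        for _ in range(self.depth):
            (ay, ax), (by, bx) = self.pairs[node]
            node = 2 * node + 1 + int(patch[ay, ax] < patch[by, bx])
        return node - (2 ** self.depth - 1)      # leaf index in [0, 2^d)

    def train(self, patch, label):
        self.counts[self.leaf(patch), label] += 1

    def posterior(self, patch):
        row = self.counts[self.leaf(patch)]
        return row / row.sum()

def forest_posterior(trees, patch):
    # Forest output: average (equivalently, sum then normalise) of
    # the per-tree posteriors.
    return np.mean([t.posterior(patch) for t in trees], axis=0)

trees = [RandomizedTree(3, 2, 16, np.random.default_rng(s)) for s in range(5)]
p0 = np.tile(np.arange(16), (16, 1))   # class 0: left-to-right gradient
p1 = p0[:, ::-1]                       # class 1: mirrored gradient
for t in trees:
    t.train(p0, 0)
    t.train(p1, 1)
post = forest_posterior(trees, p0)
```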
Pros: easily handles multi-class problems; easily covers large perspective and scale variations; classifier training is time-consuming, but recognition is very fast and robust.
Cons: the memory requirement is high.
Fern classifier
Recap: a randomized tree of depth 3, with a separate test f0, ..., f6 at every internal node.
Modified randomized tree: at each depth the same classifier f is used, so a depth-3 tree needs only the three tests f0, f1, f2.
Fern classifier (randomized list)
The modified RT and the randomized list (fern) are equivalent: the d shared tests index a table of 2^d entries, each holding a distribution over the N classes.
The same structure can therefore be drawn either as a tree or as a flat list.
Multiple fern classifiers (training)
Initialize each fern classifier with depth d and N classes.
Run every reference image through all ferns and update their probabilities: the d binary test outcomes of a patch form a bit string that indexes one table entry per fern (in the slide's example, entries 6, 4, and 7).
This yields the set of trained fern classifiers.
Multiple fern classifiers (recognition)
Apply the new patch to every trained fern and sum the resulting probabilities to obtain the final class probability.
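A fern can be sketched much like the tree above; the key difference is that the same d tests apply to every patch, so the d bits directly index the table. This sketch follows the slides in combining ferns by summing (averaging) posteriors; Ozuysal et al.'s ferns actually combine log-probabilities in a semi-naive Bayes fashion. Names and sizes are illustrative:

```python
import numpy as np

class Fern:
    """Toy fern: d shared pixel-pair tests; the d-bit outcome indexes
    one of 2^d rows of per-class counts."""
    def __init__(self, depth, n_classes, patch_size, rng):
        self.pairs = rng.integers(0, patch_size, size=(depth, 2, 2))
        self.counts = np.ones((2 ** depth, n_classes))  # Laplace prior

    def index(self, patch):
        idx = 0
        for (ay, ax), (by, bx) in self.pairs:
            idx = (idx << 1) | int(patch[ay, ax] < patch[by, bx])
        return idx

    def train(self, patch, label):
        self.counts[self.index(patch), label] += 1

    def posterior(self, patch):
        row = self.counts[self.index(patch)]
        return row / row.sum()

def ferns_posterior(ferns, patch):
    # Combine the ferns' responses into the final class probability.
    return np.mean([f.posterior(patch) for f in ferns], axis=0)

ferns = [Fern(4, 2, 16, np.random.default_rng(s)) for s in range(10)]
p0 = np.tile(np.arange(16.0), (16, 1))
p1 = p0[:, ::-1]
for f in ferns:
    f.train(p0, 0)
    f.train(p1, 1)
post = ferns_posterior(ferns, p0)
```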
Pros: the same as for randomized trees; only a small number of tests f are used; faster and easier to implement than an RT.
Cons: still takes a lot of memory.
Signature
Setup: let C be a trained classifier applied to a patch p.
Assumption: if C has been trained well, then C returns (almost) the same result for a deformed version of a patch as for the original.
Definition: for an input patch q that is not a member of any trained class, the signature of q is the classifier response C(q).
If a patch q' shows the same point as q, the signatures of q and q' are almost the same, so C(q) can serve as a descriptor of q.
Sparse signature
Since q is not a member of the trained class set K, the response C(q) is not an exact probability; values below a user-defined threshold are therefore set to zero. The fern classifier thus acts as a descriptor maker (playing the role of SIFT): the sparse signature of q is its descriptor.
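The thresholding that turns a classifier response into a sparse signature is straightforward; the response vector and threshold below are made-up illustrative values, not a real fern output:

```python
import numpy as np

def sparse_signature(response, threshold):
    """Zero out classifier responses below a user-defined threshold."""
    s = response.copy()
    s[s < threshold] = 0.0
    return s

response = np.array([0.02, 0.40, 0.01, 0.35, 0.22])  # hypothetical C(q)
sig = sparse_signature(response, threshold=0.10)
```

Matching then compares such signatures directly, e.g. by nearest neighbour in Euclidean distance.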
Compact Signature
Purpose: keep the good matching rate while reducing the memory size of the classifiers. Signature generation sums the 2^d x N leaf responses of the ferns F1, ..., FJ and thresholds the result.
Approach #1 (dimension reduction)
To reduce the memory size, a random ortho-projection (ROP) matrix is applied to all leaf probability vectors, reducing their dimension from N to M with N >> M; the compact signature is defined on the projected vectors.
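One common way to build such a ROP matrix is to orthonormalize a random Gaussian matrix with a QR decomposition; this construction and the dimensions N = 500, M = 176 (matching the experiments' compact signatures-176) are illustrative assumptions:

```python
import numpy as np

def rop_matrix(m, n, rng):
    """M x N matrix with orthonormal rows, from QR of a random
    Gaussian matrix (requires m <= n)."""
    q, _ = np.linalg.qr(rng.standard_normal((n, m)))  # n x m, orthonormal cols
    return q.T                                        # m x n, orthonormal rows

rng = np.random.default_rng(0)
N, M, d = 500, 176, 10
P = rop_matrix(M, N, rng)
leaves = rng.random((2 ** d, N))   # one fern's 2^d leaf vectors
compressed = leaves @ P.T          # projected to 2^d x M
```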
With N >> M the classifiers' memory size shrinks with no loss in matching rate; compact signature generation sums the projected 2^d x M leaf vectors of F1, ..., FJ.
Approach #2 (quantization)
The computation can be further streamlined by quantizing the compressed leaf vectors, using 1 byte per element instead of the 4 bytes required by floats.
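A possible form of this byte quantization, assuming a fixed value range per table (the range and input values here are illustrative, not the paper's scheme):

```python
import numpy as np

def quantize(v, lo, hi):
    """Map float values in [lo, hi] to uint8 codes in [0, 255]:
    one byte per element instead of four."""
    scaled = np.clip((v - lo) / (hi - lo), 0.0, 1.0)
    return np.round(scaled * 255).astype(np.uint8)

v = np.array([-1.0, 0.0, 0.5, 1.0], dtype=np.float32)
q = quantize(v, lo=-1.0, hi=1.0)
```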
Total complexity: the compact-signature classifier (descriptor maker) uses about 50 times less memory.
Experimental results
Comparison on four image sequences (Wall, Light, JPEG, Fountain) from http://www.robots.ox.ac.uk/~vgg/research/affine, with N = 500 classes.
Descriptors compared: SURF-64 (Speeded-Up Robust Features), sparse signatures, compact signatures-176, and compact signatures-88.
Matching performance: plots comparing the descriptors on matches between reference and test images.
CPU time and memory consumption.
Conclusion
Contribution: a descriptor maker built from the well-known fern classifier. Descriptors can be computed rapidly, and the classifier size is extremely reduced: almost 50 times less memory than sparse signatures, by dimension reduction (ROP) and a simple quantization technique.
Discussion: a well-trained classifier is difficult to obtain in normal situations.
Q&A