Enhancing Exemplar SVMs using Part Level Transfer Regularization
Problem Definition: Image Retrieval
Given a query image, search the image database and retrieve images of the same category in a similar pose (example: a bicycle facing left).
(Figure: query → image database → retrieved images.)
A Candidate Solution: Exemplar SVM (E-SVM) [Malisiewicz'11], [Shrivastava'11]
– Train an SVM with a single positive and many negative samples.
– Linear SVMs over HoG features [Dalal & Triggs'05], [Felzenszwalb'08].
– Retrieval via sliding-window search over the image database, keeping the highest-scoring windows as the retrieved images.
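For concreteness, here is a minimal sketch of how an exemplar SVM along these lines could be trained with an off-the-shelf linear solver. It is not the authors' code: the use of scikit-learn's LinearSVC, the regularization constant C and the heavy weight on the single positive are illustrative assumptions.

```python
# Minimal exemplar-SVM training sketch (illustrative, not the authors' code).
# Assumes HoG descriptors are already extracted: one positive exemplar
# descriptor x_pos and a pool of negative window descriptors X_neg.
import numpy as np
from sklearn.svm import LinearSVC

def train_exemplar_svm(x_pos, X_neg, C=1.0, pos_weight=50.0):
    """x_pos: (d,) HoG descriptor of the exemplar window.
    X_neg: (n, d) HoG descriptors of negative windows.
    The asymmetric class weights (illustrative values) stop the lone
    positive from being swamped by the negatives."""
    X = np.vstack([x_pos[None, :], X_neg])
    y = np.concatenate([[1], -np.ones(len(X_neg), dtype=int)])
    clf = LinearSVC(C=C, class_weight={1: pos_weight, -1: 1.0})
    clf.fit(X, y)
    # The learned linear template: a window x is scored as w·x + b.
    return clf.coef_.ravel(), float(clf.intercept_[0])

# Retrieval then scores every sliding-window HoG descriptor in the database
# with w·x + b and keeps the highest-scoring subwindows.
```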
Framework: Enhanced Exemplar SVM (EE-SVM)
(Pipeline: a single positive sample and many negatives → train an E-SVM over HoG features → part-level transfer from previously trained classifiers → Enhanced E-SVM.)
Benefit: Enhanced Exemplar SVM (EE-SVM)
(Figure: subwindow retrieval from the image database for the same query image, comparing the subwindows retrieved by the Exemplar SVM and by the Enhanced E-SVM.)
Overview
– Transfer Learning in Computer Vision: Classification & Detection
– Enhanced Exemplar SVM
– Feature Augmentation vs Transfer
– Results & Discussion
Transfer Learning [Yang'07], [Li'07]
Learning new classes by building upon previously learned classes.
(Figure: training samples → learning → classifier, with knowledge transferred from previously learned classifiers.)
Transfer Learning in Computer Vision
Learning new classes by building upon previously learned classes.
Image Classification:
– Adaptive SVMs
– Transfer from Multiple Models
– Adaptive Multiple Kernel Learning
Object Detection:
– Rigid Transfer
– Flexible Transfer
[Yang et al. ICDM'07], [Tommasi et al. BMVC'09], [Tommasi et al. CVPR'10], [Luo et al. ICCV'11], [Duan et al. CVPR'10], [Stark et al. ICCV'09], [Aytar and Zisserman ICCV'11], [Gao et al. ECCV'12]
Transfer Learning for Detection
Rigid transfer [Aytar and Zisserman ICCV'11]:
– Transfer between fixed-sized templates.
– Good performance, especially with small numbers of training samples.
– Hard to find visually similar detectors with the same aspect ratio and size.
Flexible transfer:
– Transfer between different-sized templates.
– Transferring shape features [Stark et al. ICCV'09]
– Deformable transfer [Aytar and Zisserman ICCV'11]
– Transfer via structured priors [Gao et al. ECCV'12]
Overview
– Transfer Learning in Computer Vision: Classification & Detection
– Enhanced Exemplar SVM
– Feature Augmentation vs Transfer
– Results & Discussion
Framework: Enhanced Exemplar SVM (EE-SVM)
(Pipeline: query → train E-SVM → part-level transfer from previously trained classifiers → Enhanced E-SVM.)
Framework: Part-Level Transfer Regularization
(Figure: the Exemplar SVM template alongside part-level filters u_i taken from previously trained classifiers.)
Parameters: Part-Level Transfer Regularization
The learned template should stay close to the E-SVM solution while also being close to a construction from the transferred parts u_i.
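One plausible way to write such an objective down (the notation, the squared penalty on the transfer coefficients, and the use of the E-SVM's own hinge loss to express "close to the E-SVM" are our assumptions, not a transcription of the slide):

```latex
% Sketch of a part-level transfer regularized objective (assumed form):
% the first term keeps w close to a construction from the matched parts u_j
% (each placed at its matched location and zero-padded to template size),
% while the hinge loss over the exemplar's positive and the negative pool
% keeps w behaving like the E-SVM.
\min_{\mathbf{w},\,\boldsymbol{\beta},\,b}\;
  \Big\| \mathbf{w} - \textstyle\sum_j \beta_j \mathbf{u}_j \Big\|^2
  \;+\; \lambda \|\boldsymbol{\beta}\|^2
  \;+\; C \sum_i \max\!\big(0,\, 1 - y_i(\mathbf{w}^{\top}\mathbf{x}_i + b)\big)
```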
Framework: Matching Classifier Patches
(Figure: part-level patches u_i from previously learned classifiers are matched to regions of the Exemplar SVM template.)
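As a rough illustration of the matching step, the sketch below slides candidate part filters over the E-SVM weight template and keeps the best-scoring placements. The similarity measure (normalized correlation over HoG cells) and the top-k selection rule are assumptions for illustration, not the paper's exact procedure.

```python
# Illustrative part-matching sketch (assumed criterion, not the paper's rule).
import numpy as np

def match_parts(w_esvm, part_filters, top_k=10):
    """w_esvm: (H, W, D) E-SVM weight template on the HoG cell grid.
    part_filters: list of (h, w, D) part-level filters cut out of
    previously trained classifiers.
    Returns the top_k matches as (score, part_index, row, col)."""
    H, W, _ = w_esvm.shape
    matches = []
    for idx, u in enumerate(part_filters):
        h, w, _ = u.shape
        u_unit = u / (np.linalg.norm(u) + 1e-8)
        for r in range(H - h + 1):
            for c in range(W - w + 1):
                patch = w_esvm[r:r + h, c:c + w, :]
                score = float(np.sum(patch * u_unit) / (np.linalg.norm(patch) + 1e-8))
                matches.append((score, idx, r, c))
    matches.sort(reverse=True)  # highest similarity first
    return matches[:top_k]
```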
Why is it beneficial? Part-Level Transfer Regularization
Part-level transfer is beneficial because:
– parts can be relocated (deformation), and
– the chance of finding a good match for transfer increases when we look at smaller classifier patches.
Advantages of transferring parts from well-trained classifiers:
– Better background suppression and discriminability, thanks to well-trained source classifiers.
– Better handling of local variations, since source classifiers are trained on many positive samples.
No additional runtime cost.
Where is it beneficial? Part-Level Transfer Regularization
– Unusual poses
– Compositions of objects (visual phrases) [Sadeghi CVPR'11]
PASCAL 2007: Results - Left-Facing Horse
(Figure: query image with E-SVM and Enhanced E-SVM retrievals.)
PASCAL 2007: Results - Left-Facing Bicycle
(Figure: query image with E-SVM and Enhanced E-SVM retrievals.)
PASCAL 2007: Visual Phrase - Riding Horse
(Figure: query image with E-SVM and Enhanced E-SVM retrievals.)
ImageNet: Unusual Pose - Bicycle
(Figure: query image with E-SVM and Enhanced E-SVM retrievals.)
ImageNet: Unusual Pose - Lion
(Figure: query image with E-SVM and Enhanced E-SVM retrievals.)
Overview
– Transfer Learning in Computer Vision: Classification & Detection
– Enhanced Exemplar SVM
– Feature Augmentation vs Transfer
– Results & Discussion
Implementation: Transfer vs. Feature Augmentation
Transfer regularization is equivalent to learning a "normal" SVM with augmented features.
(Figure: illustration of a feature vector augmented with additional entries.)
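To spell the equivalence out under the objective assumed earlier (the notation and the √λ scaling are part of that assumption): substituting w̃ = w − Σ_j β_j u_j gives

```latex
\min_{\tilde{\mathbf{w}},\,\boldsymbol{\beta},\,b}\;
  \|\tilde{\mathbf{w}}\|^2 + \lambda\|\boldsymbol{\beta}\|^2
  + C \sum_i \max\!\Big(0,\; 1 - y_i\big(\big(\tilde{\mathbf{w}} + \textstyle\sum_j \beta_j \mathbf{u}_j\big)^{\top}\mathbf{x}_i + b\big)\Big)
\;=\;
\min_{\hat{\mathbf{w}},\,b}\;
  \|\hat{\mathbf{w}}\|^2
  + C \sum_i \max\!\big(0,\; 1 - y_i(\hat{\mathbf{w}}^{\top}\hat{\mathbf{x}}_i + b)\big),
\qquad
\hat{\mathbf{w}} = \begin{bmatrix}\tilde{\mathbf{w}}\\ \sqrt{\lambda}\,\boldsymbol{\beta}\end{bmatrix},
\quad
\hat{\mathbf{x}}_i = \begin{bmatrix}\mathbf{x}_i\\ \mathbf{U}^{\top}\mathbf{x}_i/\sqrt{\lambda}\end{bmatrix},
```

where the columns of U are the parts u_j. The right-hand side is a standard SVM over the augmented features, and the EE-SVM template is recovered afterwards as w = w̃ + Σ_j β_j u_j.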
Implications: Transfer vs. Feature Augmentation
– This equivalence is not specific to Exemplar SVMs.
– Transfer regularization can be implemented as feature augmentation.
– Transfer regularization can be efficiently solved using standard SVM packages.
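Following that equivalence, a minimal sketch of how the augmented-feature training could be set up with a standard SVM package (scikit-learn's LinearSVC here; the function name, λ scaling and class weights are illustrative assumptions, not the authors' implementation):

```python
# Illustrative EE-SVM-via-feature-augmentation sketch (assumed setup).
import numpy as np
from sklearn.svm import LinearSVC

def train_eesvm_augmented(X, y, U, lam=1.0, C=1.0, pos_weight=50.0):
    """X: (n, d) HoG descriptors, y: labels in {+1, -1},
    U: (d, m) matrix whose columns are the matched source parts u_j,
    each placed at its matched location and zero-padded to template size."""
    X_aug = np.hstack([X, (X @ U) / np.sqrt(lam)])       # [x ; U^T x / sqrt(lam)]
    clf = LinearSVC(C=C, class_weight={1: pos_weight, -1: 1.0})
    clf.fit(X_aug, y)
    w_hat = clf.coef_.ravel()
    d = X.shape[1]
    w_tilde, beta = w_hat[:d], w_hat[d:] / np.sqrt(lam)   # undo the scaling
    w = w_tilde + U @ beta                                # final EE-SVM template
    return w, float(clf.intercept_[0])
```

At test time the recovered template w is applied exactly like an E-SVM, so the sliding-window retrieval cost is unchanged.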
Overview
– Transfer Learning in Computer Vision: Classification & Detection
– Enhanced Exemplar SVM
– Feature Augmentation vs Transfer
– Results & Discussion
PASCAL 2007: Quantitative Results
ImageNet: Quantitative Results
Three queries are evaluated for each of the five classes. Precisions at top 5, 10, 50 and 100 are reported.
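For reference, precision at k here is the usual retrieval measure, with a retrieval counted as correct following the retrieval goal stated earlier (same category in a similar pose):

```latex
\mathrm{Prec}@k \;=\; \frac{\#\{\text{correct images among the top } k \text{ retrievals}\}}{k}
```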
Handling Occlusions
(Figure: query image with E-SVM and EE-SVM retrievals.)
Handling Truncation
(Figure: query image with E-SVM and EE-SVM retrievals.)
Conclusions
– Boosted the performance of the E-SVM at no additional runtime cost.
– Presented the equivalence between transfer regularization and feature augmentation.
– Showed the benefit for unusual poses and visual phrases, and for handling truncation and occlusion.