Slide 1: Improving the Fisher Kernel for Large-Scale Image Classification
Florent Perronnin, Jorge Sanchez, and Thomas Mensink, ECCV 2010
VGG reading group, January 2011, presented by V. Lempitsky
Slide 2: From generative modeling to features
Pipeline (slide diagram): a dataset is used to fit a generative model; an input sample is fitted to that model; the parameters of the fit are fed to a discriminative classifier.
Slide 3: Simplest example
Diagram: a dataset of vectors is clustered by k-means into a codebook; an input vector is fitted against the codebook, and the closest codeword is passed to a discriminative classifier (a minimal sketch appears after this list).
Other instances of the same pattern:
– Codebooks
– Sparse or dense component analysis
– Deep belief networks
– Color GMMs
– ...
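A minimal sketch of this hard-assignment pipeline, assuming SciPy's k-means; the descriptor dimensionality, codebook size, and random data are illustrative placeholders:

```python
import numpy as np
from scipy.cluster.vq import kmeans2

# Illustrative sizes: 10000 training descriptors, 128-D (e.g. SIFT), 100 codewords.
rng = np.random.default_rng(0)
train_descriptors = rng.standard_normal((10000, 128))

# "Generative model": a k-means codebook fitted to the dataset of vectors.
codebook, _ = kmeans2(train_descriptors, k=100, minit="++")

def encode(vector, codebook):
    """Fitting step: return the index of the closest codeword."""
    distances = np.linalg.norm(codebook - vector, axis=1)
    return int(np.argmin(distances))

query = rng.standard_normal(128)
print(encode(query, codebook))  # the feature handed to a discriminative classifier
```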
Slide 4: Fisher vector idea
Diagram: input sample → fitting to a generative model → parameters of the fit → discriminative classifier. The fitting step loses information (generative models are always inaccurate!). Can we retain some of the lost information without building a better generative model?
Jaakkola, T., Haussler, D.: Exploiting generative models in discriminative classifiers. NIPS'99.
Main idea: retain information about the fitting error for the best fit. Two samples can share the same best fit but have different fitting errors!
Slide 5: Fisher vector idea (continued)
Diagram: input sample X → fitting to a generative model with parameters λ = (λ1, λ2) → Fisher vector → discriminative classifier.
Jaakkola, T., Haussler, D.: Exploiting generative models in discriminative classifiers. NIPS'99.
Main idea: retain information about the fitting error of the best fit; the Fisher vector captures it as the gradient of the log-likelihood with respect to λ (definition below).
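Formally (following Jaakkola & Haussler), the Fisher score of a sample X under a generative model u_λ is the gradient of its log-likelihood at the best fit:

```latex
% Fisher score: sensitivity of the log-likelihood of X
% to the parameters \lambda of the fitted generative model.
G^{X}_{\lambda} = \nabla_{\lambda} \log u_{\lambda}(X)
```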
Slide 6: Fisher vector for image classification
F. Perronnin and C. Dance // CVPR 2007
Encode each visual feature (e.g. SIFT) extracted from the image against an N-component Gaussian mixture model with diagonal covariance matrices. Assuming independence between the T observed features, log u_λ(X) = Σ_t log u_λ(x_t). The gradient with respect to the mixture weights has N dimensions; the gradients with respect to the means and variances of the 128-D Gaussians have 128N dimensions each.
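A minimal sketch of this encoding, assuming scikit-learn's GaussianMixture as the generative model; the mean/variance gradients use the standard (improved) Fisher vector expressions, and all sizes and data are placeholders:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

D, N = 128, 100                          # descriptor dim (SIFT), GMM components
rng = np.random.default_rng(0)
train = rng.standard_normal((20000, D))  # placeholder training descriptors

gmm = GaussianMixture(n_components=N, covariance_type="diag",
                      max_iter=20, random_state=0).fit(train)

def fisher_vector(X, gmm):
    """Gradients of log u(X) w.r.t. GMM means and variances (2*D*N dims)."""
    T = X.shape[0]
    gamma = gmm.predict_proba(X)                        # (T, N) soft assignments
    w, mu, var = gmm.weights_, gmm.means_, gmm.covariances_
    diff = (X[:, None, :] - mu[None, :, :]) / np.sqrt(var)[None, :, :]
    g_mu = np.einsum("tn,tnd->nd", gamma, diff) / (T * np.sqrt(w)[:, None])
    g_var = np.einsum("tn,tnd->nd", gamma, diff ** 2 - 1.0) / (T * np.sqrt(2 * w)[:, None])
    return np.concatenate([g_mu.ravel(), g_var.ravel()])

fv = fisher_vector(rng.standard_normal((500, D)), gmm)  # one image's descriptors
print(fv.shape)                                         # (2 * 128 * 100,)
```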
Slide 7: Relation to BoW
F. Perronnin and C. Dance // CVPR 2007
The N-dimensional gradient with respect to the mixture weights is essentially a soft-assignment BoW histogram; the 128N-dimensional gradients with respect to the means and variances carry the extra information.
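To make the relation concrete (up to normalization constants): the soft assignment γ_t(i) plays the role of a soft visual-word count, so the weight block of the Fisher vector is a centered BoW histogram:

```latex
% Soft assignment of descriptor x_t to Gaussian i:
\gamma_t(i) = \frac{w_i\, u_i(x_t)}{\sum_{j=1}^{N} w_j\, u_j(x_t)},
\qquad
% Weight gradient: soft BoW count minus the prior weight:
G^{X}_{w_i} \propto \frac{1}{T} \sum_{t=1}^{T} \bigl(\gamma_t(i) - w_i\bigr)
```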
Slide 8: Whitening the data
Fisher matrix (the covariance matrix of Fisher vectors): F_λ = E_X[ G^X_λ (G^X_λ)^T ].
Whitening the data (setting the covariance to identity): use F_λ^(-1/2) G^X_λ instead of the raw gradient.
The Fisher matrix is hard to estimate, so approximations are needed: [Perronnin and Dance // CVPR07] suggest a diagonal approximation to the Fisher matrix, which reduces whitening to a closed-form, per-dimension rescaling of the gradients.
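For reference, a standard identity not spelled out on the slide: with this whitening, the Fisher kernel becomes a plain dot product between normalized Fisher vectors, which is what makes a linear classifier appropriate downstream:

```latex
K(X, Y) = (G^{X}_{\lambda})^{\top} F_{\lambda}^{-1} G^{Y}_{\lambda}
        = \bigl(F_{\lambda}^{-1/2} G^{X}_{\lambda}\bigr)^{\top}
          \bigl(F_{\lambda}^{-1/2} G^{Y}_{\lambda}\bigr)
```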
Slide 9: Classification with Fisher kernels
F. Perronnin and C. Dance // CVPR 2007
– Use whitened Fisher vectors as input to e.g. a linear SVM (a minimal pipeline sketch follows this list).
– Small codebooks (e.g. 100 words) are sufficient.
– Encoding runs faster than BoW with large codebooks (although with approximate NN this comparison is not so straightforward!).
– Slightly better accuracy than “plain, linear BoW”.
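A minimal pipeline sketch, reusing the hypothetical fisher_vector helper and fitted gmm from the earlier sketch; the images and labels are random placeholders:

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(1)
images = [rng.standard_normal((500, 128)) for _ in range(40)]  # descriptors per image
labels = rng.integers(0, 2, size=40)                           # placeholder labels

# Encode every image as a Fisher vector (gmm and fisher_vector from above).
X = np.stack([fisher_vector(descs, gmm) for descs in images])

clf = LinearSVC(C=1.0).fit(X, labels)  # linear SVM on Fisher vectors
print(clf.score(X, labels))            # training accuracy on the toy data
```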
Slide 10: Improvements to Fisher kernels
Perronnin, Sanchez, and Mensink, ECCV 2010. Overall very similar to how people improve regular BoW classification.
Improvement 1: L2 normalization of Fisher vectors.
Justification: model the probability distribution of visual words in an image as a mixture of image-specific “content” q(x) and our background GMM u_λ(x): p(x) = ω q(x) + (1 − ω) u_λ(x). The gradient of the background part is approximately zero at the fitted model, so the Fisher vector is approximately ω times the gradient of the image-specific part (derivation below).
Observation: image-non-specific “content” affects the length of the vector, but not its direction.
Conclusion: normalize to remove the effect of non-specific “content”. L2 normalization also ensures K(x,x) = 1 and improves BoV [Vedaldi et al. ICCV’09].
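A sketch of the argument behind this slide (following the paper; ω is the proportion of image-specific content and T the number of descriptors):

```latex
% Descriptors follow a mixture of image-specific content q and the background GMM:
p(x) = \omega\, q(x) + (1 - \omega)\, u_\lambda(x)

% For large T, the per-descriptor Fisher vector approaches the gradient of the
% expected log-likelihood under p:
\frac{1}{T} G^{X}_{\lambda}
  \approx \omega \nabla_\lambda \mathbb{E}_{x \sim q}[\log u_\lambda(x)]
        + (1 - \omega) \nabla_\lambda \mathbb{E}_{x \sim u_\lambda}[\log u_\lambda(x)]

% The second term vanishes at the (maximum-likelihood) fit, so
\frac{1}{T} G^{X}_{\lambda} \approx \omega\, \nabla_\lambda \mathbb{E}_{x \sim q}[\log u_\lambda(x)]
% i.e. \omega rescales the vector's length but leaves its direction unchanged.
```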
Slide 11: Improvement 2: power normalization
Apply f(z) = sign(z) |z|^α to each dimension of the Fisher vector; α = 0.5, i.e. a signed square root, works well.
c.f. for example [Vedaldi and Zisserman // CVPR10] or [Perronnin et al. // CVPR10] on the use of the square root and Hellinger’s kernel for BoW.
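A minimal sketch of both normalization steps on a raw Fisher vector; alpha = 0.5 reproduces the signed square root, and eps is only there to avoid dividing by zero:

```python
import numpy as np

def normalize_fv(fv, alpha=0.5, eps=1e-12):
    """Power-normalize each dimension, then L2-normalize the whole vector."""
    fv = np.sign(fv) * np.abs(fv) ** alpha  # power normalization f(z) = sign(z)|z|^alpha
    return fv / (np.linalg.norm(fv) + eps)  # L2 normalization, so K(x, x) = 1

print(normalize_fv(np.array([4.0, -9.0, 0.25])))  # signed roots, scaled to unit length
```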
Slide 12: Improvement 3: spatial pyramids
Fully standard spatial pyramids [Lazebnik et al.] with sum-pooling: compute one Fisher vector per pyramid cell and concatenate them (sketch below).
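A minimal sketch, assuming descriptors come with normalized (x, y) locations in [0, 1) and reusing the hypothetical fisher_vector helper; the 1×1 plus 2×2 grid is illustrative, not necessarily the paper's exact pyramid:

```python
import numpy as np

def pyramid_fisher_vector(descs, locs, gmm):
    """Sum-pooled spatial pyramid: FV for the whole image plus a 2x2 grid."""
    vectors = [fisher_vector(descs, gmm)]              # level 0: whole image
    col = np.minimum((locs[:, 0] * 2).astype(int), 1)  # 2x2 cell indices
    row = np.minimum((locs[:, 1] * 2).astype(int), 1)
    for r in range(2):
        for c in range(2):
            mask = (row == r) & (col == c)
            vectors.append(fisher_vector(descs[mask], gmm) if mask.any()
                           else np.zeros_like(vectors[0]))  # empty-cell fallback
    return np.concatenate(vectors)                     # 5x the base FV length

rng = np.random.default_rng(2)
descs = rng.standard_normal((500, 128))
locs = rng.random((500, 2))                            # normalized (x, y) locations
print(pyramid_fisher_vector(descs, locs, gmm).shape)   # gmm from the earlier sketch
```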
Slide 13: Results: PASCAL VOC 2007
Details: descriptors sampled on a regular grid at multiple scales; SIFT and local RGB color layout features, both reduced to 64 dimensions via PCA.
Slide 14: Results: Caltech 256
Slide 15: PASCAL + additional training data
– Flickr groups: up to 25,000 images per class
– ImageNet: up to 25,000 images per class
Slide 16: Conclusion
– Fisher kernels are a good way to exploit your generative model.
– Fisher kernels based on GMMs in SIFT space lead to state-of-the-art results (on par with the most recent BoW with soft assignments).
– The main advantage of FK over BoW is smaller dictionaries... although Fisher vectors are less sparse than BoV.
– Perronnin et al. trained their system within a day on 1 CPU for 20 classes and 350K images.