Using Image Priors in Maximum Margin Classifiers. Tali Brayer, Margarita Osadchy, Daniel Keren.


1 Using Image Priors in Maximum Margin Classifiers. Tali Brayer, Margarita Osadchy, Daniel Keren.

2 Object Detection. Problem: locate instances of an object category in a given image. This is an asymmetric classification problem:
Background: very large; complex (thousands of categories); large prior to appear in an image; easy to collect (though not easy to learn from examples).
Object (category): relatively small; simple (a single category); small prior to appear in an image; hard to collect.

3 Intuition. Denote by H the acceptance region of the classifier. We have a prior on the distribution of all natural images, so we propose to minimize Pr(all images) (approximately Pr(background)) over H, except for the object samples. [Figure: Venn diagram of all images, the background, and the object class.]

4 Other work combines a small labeled training set with a large unlabeled set (semi-supervised learning): EM with generative mixture models, Fisher kernels, self-training, co-training, transductive SVMs, and graph-based methods. These all suit the symmetric case, but we have more information: the marginal distribution of the background.

5 Distribution of natural images: a "Boltzmann-like" prior driven by an image smoothness measure; less smooth images receive lower probability. In the frequency domain, smoothness corresponds to low high-frequency energy.
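The "Boltzmann-like" prior can be written schematically as follows (a reconstruction from the slide's wording, not necessarily the paper's exact formula; the energy E and the frequency weighting are assumptions):

```latex
\Pr(I) \;\propto\; \exp\bigl(-E(I)\bigr), \qquad
E(I) \;=\; \sum_{u,v} \bigl(u^2 + v^2\bigr)\,\lvert \hat{I}(u,v) \rvert^2 ,
```

where \(\hat{I}\) is the Fourier transform of the image \(I\); the weighting \(u^2 + v^2\) penalizes high-frequency energy, so smoother images receive higher probability.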

6 Linear SVM. [Figure: maximal-margin separation of Class 1 and Class 2, shown with enough and with not enough training data.]
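The maximal-margin idea on this slide can be sketched with scikit-learn (an illustrative toy example, not the authors' code; the data and parameters are invented):

```python
import numpy as np
from sklearn.svm import LinearSVC

# Toy 2-D data: two well-separated classes (invented for illustration).
rng = np.random.default_rng(0)
class1 = rng.normal(loc=[2.0, 2.0], scale=0.3, size=(40, 2))
class2 = rng.normal(loc=[-2.0, -2.0], scale=0.3, size=(40, 2))
X = np.vstack([class1, class2])
y = np.array([1] * 40 + [-1] * 40)

# A linear SVM finds the separating hyperplane w.x + b = 0 with maximal margin.
clf = LinearSVC(C=1.0).fit(X, y)
w, b = clf.coef_[0], clf.intercept_[0]

# For the canonical max-margin hyperplane, the margin width is 2 / ||w||.
print("training accuracy:", clf.score(X, y))
print("margin width:", 2.0 / np.linalg.norm(w))
```

With plenty of separable data the learned hyperplane is stable; the slide's point is that with very few samples the same procedure can pick a poorly placed hyperplane.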

7 Linear SVM. [Figure: a linear SVM decision boundary between Class 1 and Class 2.]

8 MM classifier with prior. [Figure: the maximum-margin classifier with the image prior, separating Class 1 and Class 2.]

9 Minimize the probability of natural images over the acceptance region H. After some manipulation, this reduces to a term Q. [Equation not preserved in the transcript.]

10 Relation between the number of random natural images in the positive half-space and the integral. [Figure: for random unit-norm w and random b drawn from [-0.5, 0.5], the percentage of images with wx + b > 0 plotted against the integral.]
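The experiment on this slide can be mimicked with a small Monte Carlo sketch (purely illustrative: smoothed random signals stand in for natural images, and all sizes, widths, and counts are invented):

```python
import numpy as np

rng = np.random.default_rng(1)
D = 64  # dimension of a flattened image patch; invented for illustration

# Stand-ins for natural images: white noise smoothed by a moving average,
# since natural images are dominated by low frequencies.
def smooth_samples(n, d, width=8):
    noise = rng.normal(size=(n, d + width - 1))
    kernel = np.ones(width) / width
    return np.array([np.convolve(row, kernel, mode="valid") for row in noise])

images = smooth_samples(2000, D)

# Random unit-norm w and b drawn from [-0.5, 0.5], as on the slide; for each
# draw, measure the fraction of images on the positive side of wx + b = 0.
def positive_fraction(images, trials=200):
    fracs = []
    for _ in range(trials):
        w = rng.normal(size=images.shape[1])
        w /= np.linalg.norm(w)
        b = rng.uniform(-0.5, 0.5)
        fracs.append(np.mean(images @ w + b > 0))
    return np.array(fracs)

fracs = positive_fraction(images)
print("mean fraction in positive half-space:", fracs.mean())
```

Comparing such empirical fractions against the analytic integral under the prior is the relationship the slide's plot illustrates.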

11 Training Algorithm. Probability constraint (in the limit δ → 0). [Equation not preserved in the transcript.]

12 Convex Constraint. The probability constraint is convex. [Derivation not preserved in the transcript.]

13 Results. Tested categories: cars (side view, UIUC set) and faces (CBCL set). Training: 5/10/20/60/(all available) object images, plus all available background images. Test: face set, 472 faces and 23,573 background images; car set, 299 cars and 10,111 background images. We ran 50 trials for each set with different random choices of training data. A weighted SVM was used to deal with the asymmetry in class sizes.
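The weighted-SVM baseline mentioned above can be set up in scikit-learn via class weights (a minimal sketch of the general technique, not the paper's setup; the data, sizes, and weighting scheme are invented):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)
# Asymmetric toy data: few object examples, many background examples.
objects = rng.normal(loc=1.5, scale=0.5, size=(10, 2))
background = rng.normal(loc=-1.0, scale=1.0, size=(500, 2))
X = np.vstack([objects, background])
y = np.array([1] * 10 + [-1] * 500)

# class_weight="balanced" reweights errors inversely to class frequency,
# so the small object class is not swamped by the large background class.
clf = SVC(kernel="linear", class_weight="balanced").fit(X, y)
print("object recall:", clf.score(objects, [1] * 10))
```

Without the weighting, a classifier on data this skewed can achieve high accuracy while missing most objects; the weighting forces the margin to respect the rare class.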

14 Average recognition rate (%), faces:

                       5      10     60     all
  Weighted Linear SVM  70     72.5   75.2   77
  Weighted Kernel SVM  69.7   72.6   79.6   83
  MM_prior Linear      72.7   75     78     80.3
  MM_prior Kernel      71.7   75.2   79.1   -

15 Average recognition rate (%), cars:

                       5      10     60     all
  Weighted Linear SVM  89.24  91.8   92.8   93.7
  Weighted Kernel SVM  90     92.9   95.4   96
  MM_prior Linear      91.3   93     94.3   95.3
  MM_prior Kernel      89.4   93.2   95.8   -

16 Future Work. Video; exploring additional and more robust features; refining the priors (using background examples); kernelization.

