
1 Adding Unlabeled Samples to Categories by Learned Attributes
Jonghyun Choi, Mohammad Rastegari, Ali Farhadi, Larry S. Davis
PPT modified by Elliot Crowley

2 Constructing a Training Set is Expensive
- Human labeling is expensive.
- It is hard to find training samples for some categories: training set sizes follow a heavy-tailed distribution.
- Existing solutions:
  - Semi-supervised learning: assumes unlabeled data are drawn from the same distribution as the labeled data.
  - Active learning: requires humans in the loop.
- Goal: adding unlabeled data without modeling a distribution and without humans in the loop.
[Figure: number of training samples per category in the SUN 09 dataset; Salakhutdinov, Torralba, Tenenbaum (CVPR'11)]

3 Long Tail Problem
Training sets can have many examples of an object in one pose, but few or none in another.

4 Expanding the Training Set by Attributes
- Given a small number of labeled training samples, expand the visual boundary of a category to build a better visual classifier.
- Given the initial training set and a large unlabeled image pool (e.g., from the web):
  - Find by low-level features: samples similar to specific seed images.
  - Find by attributes: samples sharing traits such as dotted, whitish color, animal-like shape.
- Selected samples are added back to the training set.

5 Why Attributes?
- Low-level features are:
  - Specific to the shape, color, and pose of the given seeds.
  - Only able to expand the visual boundary into already-known regions, adding examples similar to the current set.
- Attributes are:
  - A more general mid-level description than low-level features (e.g., dotted).
  - Able to find visually unseen examples while maintaining the traits of the current set.

6 Why Attributes Again?
Not strictly necessary: the training set should have at least one example of each pose, and the remainder are found by the exemplar method (the strong point of this paper).

7 How to Add Samples by Attributes?
- Start from candidate attributes, pre-learned on auxiliary data.
- Learn a discriminative set of attributes.
- Find samples that score confidently on combinations of those attributes.

8 Formulation
- A joint optimization over:
  - the classifier in visual feature space (w_c^v),
  - the classifier in attribute space (w_c^a),
  - and the selection of samples to add (the binary indicator I).
- Non-convex: a mixed-integer program (NP-complete), mixing continuous classifier weights with discrete selection variables.
- Solution: block coordinate descent, alternating between learning the two classifiers (with the selection fixed) and selecting samples (with the classifiers fixed), under a mutual-exclusion constraint so that each sample joins at most one category.
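The slide describes the solver only at a high level; the following is a minimal sketch of what one block-coordinate-descent round might look like, using off-the-shelf linear SVMs in place of the paper's exact max-margin objectives. All function and parameter names (expand_training_set, lam, n_rounds, etc.) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.svm import LinearSVC

def expand_training_set(X_vis, A, y, U_vis, U_attr, lam=10, n_rounds=5):
    """Alternate between (1) fitting per-category classifiers in the visual
    and attribute spaces on the current labeled pool and (2) selecting the
    top-lam most confident unlabeled samples per category, with mutual
    exclusion (each sample joins at most one category).
    Assumes >= 3 categories so decision_function is (n, n_categories)."""
    categories = np.unique(y)
    sel_idx = np.array([], dtype=int)
    sel_lab = np.array([], dtype=y.dtype)
    for _ in range(n_rounds):
        # Block 1: selection fixed -> two convex max-margin problems.
        Xv = np.vstack([X_vis, U_vis[sel_idx]])
        Xa = np.vstack([A, U_attr[sel_idx]])
        yy = np.concatenate([y, sel_lab])
        clf_v = LinearSVC().fit(Xv, yy)  # classifier in visual feature space
        clf_a = LinearSVC().fit(Xa, yy)  # classifier in attribute space
        # Block 2: classifiers fixed -> pick confident unlabeled samples.
        scores = clf_a.decision_function(U_attr)   # (n_unlabeled, n_categories)
        best_cat = scores.argmax(axis=1)           # mutual exclusion: one label each
        idx, lab = [], []
        for c_i, c in enumerate(categories):
            cand = np.where(best_cat == c_i)[0]
            top = cand[np.argsort(scores[cand, c_i])[::-1][:lam]]  # top-lam selector
            idx.extend(top)
            lab.extend([c] * len(top))
        sel_idx = np.array(idx, dtype=int)
        sel_lab = np.array(lab, dtype=y.dtype)
    return clf_v, clf_a, sel_idx, sel_lab
```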

9 Formulation Detail
- Max-margin classifier on the visual feature space.
- Max-margin classifier on the attribute space, with a top-λ selector as the selection criterion (pick the λ most confidently scored unlabeled samples per category).
- Mutual-exclusion constraint between categories.
- An attribute mapper that projects visual features into the attribute space.

10 How to Build the Attribute Mapper
- A candidate attribute set is needed, but labeling attributes by humans is expensive.
- Instead, generate the attribute space automatically via [1], learned offline from any labeled set available on the web.
- Attributes essentially reduce the feature space to a set of binary decision boundaries.
[1] Mohammad Rastegari, Ali Farhadi, David Forsyth, "Attribute Discovery via Predictable Discriminative Binary Codes", ECCV 2012
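The "binary decision boundaries" view suggests the mapper is just a thresholded linear projection of the low-level features. A minimal sketch under that assumption follows; training the hyperplanes W, b (e.g., via the binary-code learning of [1]) is abstracted away and not reproduced here.

```python
import numpy as np

def attribute_mapper(X, W, b):
    """Map low-level features X (n, d) to k binary attributes by
    thresholding k learned hyperplanes (W: (k, d), b: (k,)).
    Each hyperplane is one attribute's decision boundary, so each
    column of the output is one binary attribute."""
    return (X @ W.T + b > 0).astype(np.int8)  # (n, k) binary attribute codes
```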

11 Overview Diagram
Initial labeled samples → build the attribute space (from auxiliary data) → project → find useful attributes; unlabeled samples → project into the same space → choose confident examples to add.

12 Two Types of Attributes
- Categorical: common traits of a category.
- Exemplar: specific traits of a sample.
[Figure: initial labeled training examples, with samples selected by categorical attributes vs. by exemplar attributes (e.g., dotted, animal-like shape)]

13 Exemplar Attributes
- Exemplar-SVM [1] captures example-specific information, but requires many negative samples to build.
- Our approach: given the training samples, compare the retrieval set of a leave-one-out classifier (trained on all samples except one) with that of the full-set classifier; samples that score high only with the full set are attributed to the held-out example.
- This modifies the top-λ selector term in the formulation.
[1] Tomasz Malisiewicz, Abhinav Gupta, Alexei A. Efros, "Ensemble of Exemplar-SVMs for Object Detection and Beyond", ICCV 2011
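A minimal sketch of the leave-one-out retrieval difference described above: the unlabeled images retrieved only when example i is in the training set are taken to reflect i's exemplar-specific traits. LinearSVC and all names here are illustrative assumptions, not the authors' code.

```python
import numpy as np
from sklearn.svm import LinearSVC

def exemplar_retrieval_difference(X_pos, X_neg, U, i, k=50):
    """Retrieve the top-k unlabeled samples in U with the full-set
    classifier and with the classifier trained without positive example i;
    the set difference attributes those retrievals to example i."""
    X = np.vstack([X_pos, X_neg])
    y = np.concatenate([np.ones(len(X_pos)), np.zeros(len(X_neg))])
    full = LinearSVC().fit(X, y)
    loo_X = np.vstack([np.delete(X_pos, i, axis=0), X_neg])
    loo_y = np.concatenate([np.ones(len(X_pos) - 1), np.zeros(len(X_neg))])
    loo = LinearSVC().fit(loo_X, loo_y)
    top = lambda clf: set(np.argsort(clf.decision_function(U))[::-1][:k])
    return top(full) - top(loo)  # retrievals owed to example i specifically
```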

14 Experimental Results
- Dataset: a subset of ImageNet (ILSVRC 2010).
- Target set: 11 categories (10 initial training samples per category, 500 test samples per category), covering both distinctive and fine-grained categories:
  - 6 vegetables: mashed potato, orange, lemon, green onion, acorn, coffee bean.
  - 5 dog breeds: Golden Retriever, Yorkshire Terrier, Greyhound, Dalmatian, Miniature Poodle.
- Unlabeled set: randomly chosen samples from all 1,000 ILSVRC categories.
- Auxiliary set: categories disjoint from the target set.
- Download available soon at http://umiacs.umd.edu/~jhchoi/addingbyattr

15 Comparison with Other Approaches
- Red: using low-level features only (performance decreases as samples are added).
- Blue: our approach, with categorical attributes only (Cat.) and with exemplar + categorical attributes (E+C) (performance increases).

16 Purity of Added Samples
- Purity = (# added samples of the same category) / (total # added samples).
- The added samples are not very pure, but still improve mAP!
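Written out in display form, the purity measure flattened in the slide text is:

```latex
\mathrm{purity} \;=\; \frac{\#\{\text{added samples of the same category}\}}{\#\{\text{added samples in total}\}}
```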

17 Saturation of Performance
The increase in mAP quickly saturates as the initial training set grows; the method is most useful for small training sets.

18 Exemplar Attributes
Comparison of our exemplar attribute learning against exemplar-SVM trained with a large negative set.

19 Summary
- A new way of expanding a training set by attributes:
  - a joint optimization formulation,
  - solved by block coordinate descent.
- Uses both categorical and exemplar attributes to find examples.
- Future work: constraining the expansion path, since attributes may mislead as more samples are added, due to low purity.

20 Additional Slides

21 Varying the number of added samples and the resulting accuracy.

