
1 UNBIASED LOOK AT DATASET BIAS Antonio Torralba Massachusetts Institute of Technology Alexei A. Efros Carnegie Mellon University CVPR 2011

2 Outline  1. Introduction  2. Measuring Dataset Bias  3. Measuring Dataset’s Value  4. Discussion

3 Name That Dataset!  Let’s play a game!


5 Answer: 1. Caltech-101, 2. UIUC, 3. MSRC, 4. Tiny Images, 5. ImageNet, 6. PASCAL VOC, 7. LabelMe, 8. SUN09, 9. 15 Scenes, 10. Corel, 11. Caltech-256, 12. COIL-100

6 The UIUC test set is not the same as its training set; COIL is a lab-based dataset; Caltech-101 and Caltech-256 are predictably confused with each other
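The game is winnable because each dataset carries its own signature. A toy sketch of the idea, assuming synthetic feature vectors and a nearest-centroid classifier (the paper's actual experiment uses real image features and an SVM, not this setup):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the "Name That Dataset" experiment: each dataset's
# capture bias is modeled as a distinct mean in feature space.  The
# features and the nearest-centroid rule are assumptions for
# illustration only.
names = ["Caltech-101", "PASCAL VOC", "LabelMe"]
means = rng.normal(0, 1.0, size=(3, 32))        # one "bias" per dataset
train = {i: means[i] + rng.normal(0, 0.8, (100, 32)) for i in range(3)}
test = {i: means[i] + rng.normal(0, 0.8, (50, 32)) for i in range(3)}

centroids = np.stack([train[i].mean(axis=0) for i in range(3)])

def name_that_dataset(x):
    """Return the index of the nearest dataset centroid."""
    return int(np.argmin(np.linalg.norm(centroids - x, axis=1)))

correct = sum(name_that_dataset(x) == i for i in range(3) for x in test[i])
accuracy = correct / 150
print(f"accuracy: {accuracy:.2f} (chance = 0.33)")
```

Even this crude classifier separates the three synthetic "datasets" far above chance, which is the point of the game: the datasets are identifiable from their biases alone.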

7 Caltech-101 / Caltech-256  Pictures of objects belonging to 101 categories, with about 40 to 800 images per category  Most categories have about 50 images  Collected in September 2003  Each image is roughly 300 x 200 pixels


9 LabelMe  A project created by the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL)  A dataset of digital images with annotations, used primarily in computer vision research  As of October 31, 2010, LabelMe has 187,240 images, 62,197 annotated images, and 658,992 labeled objects


12 Bias  Urban scenes vs. rural landscapes  Professional photographs vs. amateur snapshots  Entire scenes vs. single objects


15 The Rise of the Modern Dataset  COIL-100: a hundred household objects on a black background  Corel and 15 Scenes: professional collections that added visual complexity  Caltech-101: 101 object categories found with Google and cleaned by hand, drawn from the wilderness of the Internet  MSRC and LabelMe (both researcher-collected sets): complex scenes with many objects

16 The Rise of the Modern Dataset  PASCAL Visual Object Classes (VOC) was a reaction against the lax training and testing standards of previous datasets  The batch of very-large-scale, Internet-mined datasets – Tiny Images, ImageNet, and SUN09 – can be considered a reaction against the inadequacies of training and testing on datasets that are just too small for the complexity of the real world

17 Outline  2. Measuring Dataset Bias  2.1. Cross-dataset generalization  2.2. Negative Set Bias

18 Cross-dataset generalization
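The protocol here is: train a "car" vs. "not car" classifier on each dataset, then test it on every dataset, and compare same-dataset (diagonal) performance against cross-dataset (off-diagonal) performance. A synthetic sketch of that protocol; the per-dataset bias vectors and the mean-difference classifier are assumptions standing in for real images and real detectors:

```python
import numpy as np

rng = np.random.default_rng(1)

# Each synthetic dataset stamps its own bias vector onto its positives,
# standing in for capture and selection bias.
D, N, DIM = 3, 200, 16
car = np.zeros(DIM); car[0] = 3.0            # the shared "car" signal
bias = rng.normal(0, 1.0, (D, DIM))          # per-dataset capture bias
bias[:, 0] = 0.0                             # bias lives off the signal axis

def sample(d, positive, n):
    base = car + bias[d] if positive else np.zeros(DIM)
    return base + rng.normal(0, 1.0, (n, DIM))

def train(d):
    pos, neg = sample(d, True, N), sample(d, False, N)
    w = pos.mean(0) - neg.mean(0)            # mean-difference "classifier"
    b = w @ (pos.mean(0) + neg.mean(0)) / 2
    return w, b

def accuracy(w, b, d):
    pos, neg = sample(d, True, N), sample(d, False, N)
    return ((pos @ w > b).mean() + (neg @ w < b).mean()) / 2

M = np.array([[accuracy(*train(tr), te) for te in range(D)]
              for tr in range(D)])
print(np.round(M, 2))                        # rows: trained on, cols: tested on
```

The diagonal of the matrix stays high while the off-diagonal entries drop, which is the qualitative pattern the paper reports across real datasets.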


20 Negative Set Bias  Evaluate the relative bias in the negative sets of different datasets (e.g., is a “not car” in PASCAL different from a “not car” in MSRC?)  For each dataset, train a classifier on its own set of positive and negative instances; during testing, the positives come from that dataset, but the negatives are pooled from all datasets combined
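A minimal sketch of this evaluation, assuming synthetic Gaussian clusters for each dataset's negatives in place of real images and detectors:

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumption: each dataset's "not car" world is its own Gaussian
# cluster (neg_bias); the paper uses real detectors on real images.
D, N, DIM = 3, 300, 8
car = np.zeros(DIM); car[0] = 4.0               # the "car" signal
neg_bias = rng.normal(0, 1.5, (D, DIM))         # each dataset's own negatives
neg_bias[:, 0] = 0.0                            # negatives never look like cars

def positives(n):
    return car + rng.normal(0, 1.0, (n, DIM))

def negatives(d, n):
    return neg_bias[d] + rng.normal(0, 1.0, (n, DIM))

d = 0                                           # train on dataset 0 only
pos, neg = positives(N), negatives(d, N)
w = pos.mean(0) - neg.mean(0)                   # mean-difference classifier
b = w @ (pos.mean(0) + neg.mean(0)) / 2

def acc(p, ng):
    return ((p @ w > b).mean() + (ng @ w < b).mean()) / 2

acc_own = acc(positives(N), negatives(d, N))    # own negatives only
acc_pooled = acc(positives(N),                  # negatives from ALL datasets
                 np.vstack([negatives(e, N) for e in range(D)]))
print(f"own negatives: {acc_own:.2f}   pooled negatives: {acc_pooled:.2f}")
```

The classifier does well against the negatives it was trained on, but accuracy falls once negatives from the other datasets are pooled in, mirroring the negative-set-bias effect.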


22 Outline  3. Measuring Dataset’s Value

23 Measuring Dataset’s Value  Given a particular detection task and benchmark, there are two basic ways of improving performance  The first is to improve the features, the object representation, and the learning algorithm of the detector  The second is simply to enlarge the amount of data available for training


26 Market Value for a car sample across datasets
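One way to read this is as an exchange rate between datasets: how many training samples from dataset B are needed to match the performance of n samples from dataset A? A hedged sketch, assuming average precision grows roughly log-linearly with training-set size; the curve names and (a, b) coefficients below are made up for illustration, not the paper's fitted values:

```python
import math

# Assumed model: AP(n) = a * log10(n) + b, with (a, b) fitted per
# training source.  These coefficients are hypothetical.
curves = {
    "PASCAL on PASCAL":  (0.12, 0.10),   # train and test on PASCAL
    "LabelMe on PASCAL": (0.12, 0.04),   # same slope, bias-shifted offset
}

def ap(name, n):
    """Average precision under the assumed log-linear model."""
    a, b = curves[name]
    return a * math.log10(n) + b

def exchange_rate(src, dst, n=1000):
    """How many `src` samples match the AP of n `dst` samples?"""
    a_s, b_s = curves[src]
    target = ap(dst, n)
    return 10 ** ((target - b_s) / a_s)

rate = exchange_rate("LabelMe on PASCAL", "PASCAL on PASCAL")
print(f"1 PASCAL car sample is worth about {rate / 1000:.1f} LabelMe car samples")
```

With these made-up coefficients, matching the AP of 1,000 PASCAL samples takes about 3,200 LabelMe samples, i.e. one PASCAL car sample is "worth" roughly three LabelMe car samples; the paper's chart reports exchange rates of this kind across datasets.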

27 Outline  4. Discussion

28 Discussion  Caltech-101 is extremely biased, with virtually no observed generalization, and should have been retired long ago (as argued by [14] back in 2006)  MSRC has also fared very poorly  PASCAL VOC, ImageNet, and SUN09 have fared comparatively well

