1 The SUN Database Slides by Jennifer Baulier

2 What is the SUN database? Scene Understanding Database. A scene = a place humans could act within. Full database as of the paper: 899 categories and 130,519 images.

3 Size representation of the 908 categories in SUN: many images in bedroom, kitchen, and living room, but few in grotto, launch pad, sinkhole, signal box, sunken garden, etc.

4 Motivation: huge databases are available for objects; the largest scene database had 15 categories; many questions remain to answer.

5 Four objectives: determine categories with some objectivity, judge human performance, benchmark features, and detect sub-scenes.

6 Making The Database

7 Choosing scenes for the database: scene terms drawn from the Tiny Images dataset; kept terms with differing visual identities that are navigable and not proper nouns; roughly 2,500 terms narrowed down to 899 categories.

8 Getting images: collected uncontrolled from search engines; full color, at least 200 x 200 pixels; checked for correctness; no duplicates.

9 Examples

10 Human Performance

11 Human recognition task: test how much the categories overlap, provide a comparison point for other tests, and avoid giving workers too much training.

12 Human experiment setup: 397 categories, 20 scenes; labels organized in a 3-level tree whose first level is indoor, outdoor natural, and outdoor man-made; run on Amazon Mechanical Turk; 61 seconds on average with 58.6% accuracy; "good workers" (100+ HITs) reached 68.5%.

13 Some Easy Categories

14 Some Confusing Categories (and what people think they are)

15 Computer Performance Benchmark

16 Image feature comparison, experimental setup: 1-vs-all SVMs; evaluated on both datasets (SUN and the 15-category database); 12 feature types; "all features" = a weighted sum of the individual feature kernels.
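
A minimal sketch of how the 1-vs-all, weighted-kernel-sum setup could be wired together with scikit-learn, assuming each feature type has already produced precomputed train/test kernel matrices; the helper names and the weights are placeholders, not the paper's values.

# Sketch: 1-vs-all SVMs on a weighted sum of per-feature kernels.
from sklearn.svm import SVC
from sklearn.multiclass import OneVsRestClassifier

def combine_kernels(kernels, weights):
    """Weighted sum of same-shape kernel matrices, one per feature type."""
    return sum(w * K for K, w in zip(kernels, weights))

def train_all_features_svm(kernels_train, y_train, weights):
    # kernels_train: list of (n_train, n_train) kernel matrices
    K_train = combine_kernels(kernels_train, weights)
    clf = OneVsRestClassifier(SVC(kernel="precomputed", C=1.0))
    clf.fit(K_train, y_train)
    return clf

def predict_all_features_svm(clf, kernels_test, weights):
    # kernels_test: list of (n_test, n_train) kernels against the training set
    return clf.predict(combine_kernels(kernels_test, weights))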

17 GIST: estimates perceptual dimensions (naturalness, openness, roughness, expansion, ruggedness); output energy of 24 filters tuned to 8 orientations at 4 scales; averaged on a 4x4 grid.
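
A rough sketch of a GIST-style descriptor: filter-bank energy averaged over a 4x4 grid. scikit-image Gabor filters stand in for the original Gabor-like bank, the frequencies are illustrative, and the sketch simply uses all 8 orientations at each of 4 scales rather than reproducing the exact 24-filter bank.

# Sketch of a GIST-style descriptor: filter energy averaged on a 4x4 grid.
import numpy as np
from skimage.filters import gabor

def gist_like(gray, orientations=8, scales=(0.05, 0.1, 0.2, 0.4), grid=4):
    h, w = gray.shape
    gh, gw = h // grid, w // grid
    feats = []
    for freq in scales:                       # 4 scales (illustrative frequencies)
        for k in range(orientations):         # 8 orientations
            theta = k * np.pi / orientations
            real, imag = gabor(gray, frequency=freq, theta=theta)
            energy = real ** 2 + imag ** 2
            for i in range(grid):             # average energy per grid cell
                for j in range(grid):
                    cell = energy[i * gh:(i + 1) * gh, j * gw:(j + 1) * gw]
                    feats.append(cell.mean())
    return np.asarray(feats)                  # len(scales) * orientations * grid**2 values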

18 HOG (2x2): Histogram of Oriented Gradients (31 bins) per cell; cells placed at 8-pixel steps; 2x2 neighboring cells stacked into 124 dimensions; 300 visual words using k-means.
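
A sketch of the 2x2 stacking and bag-of-words step, assuming an (H, W, 31) array of per-cell HOG descriptors (the 31-bin variant) has already been computed by some HOG implementation; the k-means vocabulary is fit offline, and the function names are placeholders.

# Sketch: stack 2x2 neighboring HOG cells, then quantize into 300 visual words.
import numpy as np
from sklearn.cluster import MiniBatchKMeans

def stack_2x2(cells):
    """cells: (H, W, 31) per-cell HOG descriptors -> (N, 124) stacked blocks."""
    H, W, D = cells.shape
    stacked = []
    for i in range(H - 1):
        for j in range(W - 1):
            stacked.append(cells[i:i + 2, j:j + 2, :].reshape(-1))  # 4 * 31 = 124 dims
    return np.asarray(stacked)

def hog_bow_histogram(cells, kmeans):
    """Histogram of visual-word assignments for one image."""
    words = kmeans.predict(stack_2x2(cells))
    hist, _ = np.histogram(words, bins=np.arange(kmeans.n_clusters + 1))
    return hist / max(hist.sum(), 1)

# Vocabulary (offline): fit k = 300 on stacked descriptors pooled from many images, e.g.
# kmeans = MiniBatchKMeans(n_clusters=300).fit(np.vstack([stack_2x2(c) for c in all_cells]))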

19 SIFT (general): Scale Invariant Feature Transform; a detector for distinctive local regions that is semi-invariant to viewpoint, illumination, etc., plus a descriptor for their appearance.

20 Dense SIFT: extract SIFT features at both 4 and 8 pixel radii; stack the descriptors from the hue, saturation, and value color channels; 300 visual words.
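
A sketch of dense SIFT on the HSV channels, using OpenCV's SIFT as a stand-in for the paper's implementation: descriptors are computed on a regular grid at two patch radii and the per-channel descriptors are stacked. The grid step is an assumption.

# Sketch: dense SIFT descriptors on the H, S, V channels at two scales.
import cv2
import numpy as np

def dense_sift_hsv(bgr, step=8, sizes=(4, 8)):
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    h, w = hsv.shape[:2]
    sift = cv2.SIFT_create()
    descriptors = []
    for size in sizes:                                     # two scales: 4 and 8 px radii
        keypoints = [cv2.KeyPoint(float(x), float(y), float(size))
                     for y in range(size, h - size, step)
                     for x in range(size, w - size, step)]
        per_channel = []
        for c in range(3):                                 # H, S, V channels
            _, desc = sift.compute(hsv[:, :, c], keypoints)
            per_channel.append(desc)
        descriptors.append(np.hstack(per_channel))         # 3 * 128 = 384 dims per point
    return np.vstack(descriptors)
# The resulting descriptors would then be quantized into 300 visual words.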

21 Sparse SIFT: Hessian-affine and MSER interest points; each set clustered into 1,000 visual words; yields 2 histograms.

22 LBP: histogram of local binary patterns; designed for texture recognition; rotation invariant.
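
A minimal LBP texture histogram using scikit-image's rotation-invariant uniform patterns; the neighbourhood parameters (8 points, radius 1) are common defaults, not necessarily the paper's settings.

# Sketch: rotation-invariant LBP histogram of a grayscale image.
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(gray, points=8, radius=1):
    lbp = local_binary_pattern(gray, points, radius, method="uniform")
    n_bins = points + 2                       # uniform patterns + one "other" bin
    hist, _ = np.histogram(lbp, bins=n_bins, range=(0, n_bins))
    return hist / max(hist.sum(), 1)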

23 SSIM (self-similarity): descriptors compare small patches to their neighbors; computed on a grid using 5x5 patches; radius = 3 bins, angles = 10 bins, giving a 30-dimensional descriptor per patch; 300 visual words. (Slide image: "How can you tell that these are the same shape?")
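
A sketch of a single self-similarity descriptor, following the general recipe (correlate the central patch with its surroundings, then take the maximum similarity within each radial/angular bin); the patch size, neighbourhood radius, and similarity normalization are illustrative choices.

# Sketch: 3 x 10 = 30-dimensional self-similarity descriptor at one location.
import numpy as np

def ssim_descriptor(gray, cx, cy, patch=5, radius=20, r_bins=3, a_bins=10):
    half = patch // 2
    center = gray[cy - half:cy + half + 1, cx - half:cx + half + 1].astype(float)
    desc = np.zeros((r_bins, a_bins))
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            r = np.hypot(dx, dy)
            if r == 0 or r > radius:
                continue
            y, x = cy + dy, cx + dx
            other = gray[y - half:y + half + 1, x - half:x + half + 1].astype(float)
            if other.shape != center.shape:                 # skip patches cut off by the border
                continue
            ssd = np.sum((center - other) ** 2)
            sim = np.exp(-ssd / (patch * patch * 255.0))    # similarity in (0, 1]
            ri = min(int(r / radius * r_bins), r_bins - 1)
            ai = int(((np.arctan2(dy, dx) + np.pi) / (2 * np.pi)) * a_bins) % a_bins
            desc[ri, ai] = max(desc[ri, ai], sim)           # max similarity per bin
    return desc.reshape(-1)                                 # 30-dimensional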

24 Tiny image: the most basic feature; greatly scale down the image and use its pixels as one long array.
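
A sketch of the tiny-image feature; the target size (16x16 here) and the normalization are illustrative choices.

# Sketch: shrink the image to a tiny fixed size and flatten it into one vector.
import cv2
import numpy as np

def tiny_image(bgr, size=16):
    small = cv2.resize(bgr, (size, size), interpolation=cv2.INTER_AREA)
    vec = small.astype(np.float32).reshape(-1)
    return (vec - vec.mean()) / (vec.std() + 1e-8)   # optional contrast normalization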

25 Line features: straight lines found from Canny edges; 2 unnormalized histograms: lengths and angles.
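
A sketch of the line features using OpenCV's Canny detector and a probabilistic Hough transform as a stand-in for the paper's line extraction; the thresholds and histogram bin ranges are illustrative.

# Sketch: histograms of line-segment lengths and angles from a Canny edge map.
import cv2
import numpy as np

def line_feature(gray, length_bins=8, angle_bins=8):
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=50,
                            minLineLength=20, maxLineGap=5)
    lengths, angles = [], []
    if lines is not None:
        for x1, y1, x2, y2 in lines[:, 0, :]:
            lengths.append(np.hypot(x2 - x1, y2 - y1))
            angles.append(np.arctan2(y2 - y1, x2 - x1) % np.pi)   # undirected angle
    len_hist, _ = np.histogram(lengths, bins=length_bins, range=(0, 200))
    ang_hist, _ = np.histogram(angles, bins=angle_bins, range=(0, np.pi))
    return np.concatenate([len_hist, ang_hist])                   # left unnormalized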

26 Texton histogram: textons are "the basic elements in early (pre-attentive) visual perception" (ucla.edu); responses to a bank of filters with 8 orientations, 2 scales, and 2 elongations; 512 textons defined.
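
A sketch of a texton histogram: per-pixel filter-bank responses quantized against a texton vocabulary. Gabor filters at 8 orientations and 2 scales approximate the filter bank (the 2 elongations are omitted), and the 512-texton vocabulary is assumed to have been learned offline with k-means.

# Sketch: quantize per-pixel filter responses against a 512-texton vocabulary.
import numpy as np
from skimage.filters import gabor
from sklearn.cluster import MiniBatchKMeans

def filter_responses(gray, orientations=8, frequencies=(0.1, 0.25)):
    responses = []
    for freq in frequencies:
        for k in range(orientations):
            real, _ = gabor(gray, frequency=freq, theta=k * np.pi / orientations)
            responses.append(real)
    # (H, W, n_filters) -> (n_pixels, n_filters)
    return np.stack(responses, axis=-1).reshape(-1, orientations * len(frequencies))

def texton_histogram(gray, textons):          # textons: fitted k-means, k = 512
    words = textons.predict(filter_responses(gray))
    hist, _ = np.histogram(words, bins=np.arange(textons.n_clusters + 1))
    return hist / max(hist.sum(), 1)

# Vocabulary (offline): MiniBatchKMeans(n_clusters=512).fit(pooled_responses)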

27 Color histogram: uses the CIE L*a*b* color space (L* = lightness, a* = red to green, b* = yellow to blue); histograms with 4 x 14 x 14 bins.
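
A sketch of the CIE L*a*b* histogram with 4 x 14 x 14 bins; the bin ranges simply cover the typical Lab value ranges produced by scikit-image.

# Sketch: joint L*a*b* color histogram with 4 x 14 x 14 bins.
import numpy as np
from skimage.color import rgb2lab

def lab_histogram(rgb):
    lab = rgb2lab(rgb).reshape(-1, 3)          # rgb: an RGB image
    hist, _ = np.histogramdd(
        lab,
        bins=(4, 14, 14),                      # L*, a*, b*
        range=((0, 100), (-110, 110), (-110, 110)))
    hist = hist.reshape(-1)
    return hist / max(hist.sum(), 1)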

28 Geometric probability map: geometric probability is the chance of a point in a region falling into a sub-region; four classes are considered: ground, vertical, porous, and sky; the probability map for each class is reduced to an 8x8 grid.
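
A sketch of reducing per-pixel geometric-class probabilities to an 8x8 grid, assuming some geometric-context classifier has already produced an (H, W, 4) probability map for ground, vertical, porous, and sky.

# Sketch: average per-pixel class probabilities over an 8x8 grid.
import numpy as np

def geometric_probability_map(prob, grid=8):
    """prob: (H, W, 4) per-pixel probabilities for ground, vertical, porous, sky."""
    H, W, C = prob.shape
    gh, gw = H // grid, W // grid
    out = np.zeros((grid, grid, C))
    for i in range(grid):
        for j in range(grid):
            cell = prob[i * gh:(i + 1) * gh, j * gw:(j + 1) * gw, :]
            out[i, j] = cell.mean(axis=(0, 1))     # mean P(class) within the cell
    return out.reshape(-1)                         # 8 * 8 * 4 dimensions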

29 Geometry-specific histogram: color and texton histograms are built for each of the 4 geometric classes; every sample contributes to the histogram of each class, weighted by its likelihood of belonging to that class.

30 (Image-only slide: feature results on SUN and the 15-category database.)

31 Results discussion: the best computer result is 38% vs. 68.5% for good human workers; outdoor natural = 43.2%, indoor = 37.5%, outdoor man-made = 35.8%; indoor transportation = 51.9%, indoor shopping and dining = 29%.

32 Humans (left %) vs. all-feature SVM (right %)

33 Localizing Multiple Scenes

34 Scene detection: an image may transition between scenes; the goal is to find and localize all scenes; "classification" vs. "detection" is meant in the same sense as in object recognition.

35 Test and approach: 24 categories from SUN 397; 104 photos of urban environments, averaging 4 scenes per image; a window scans the image 3 times at different scales.
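
A sketch of the multi-scale window scan; the window scales, stride, and the classify_window callback (standing in for the feature + SVM pipeline above) are assumptions, not the paper's exact protocol.

# Sketch: scan the image with square windows at three scales and score each one.
def detect_scenes(image, classify_window, scales=(1.0, 0.65, 0.4),
                  step_frac=0.25, threshold=0.0):
    H, W = image.shape[:2]
    detections = []
    for scale in scales:                               # three window sizes
        win = int(min(H, W) * scale)
        step = max(1, int(win * step_frac))
        for y in range(0, H - win + 1, step):
            for x in range(0, W - win + 1, step):
                crop = image[y:y + win, x:x + win]
                label, score = classify_window(crop)   # e.g. 1-vs-all SVM scores
                if score > threshold:
                    detections.append((x, y, win, label, score))
    return detections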

36 Validation: a detected bounding box has to overlap at least 15% of the ground truth; the threshold is loose because scene space doesn't have well-defined edges.
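
A small sketch of the validation rule, treating "overlap" as the fraction of the ground-truth box covered by the detection.

# Sketch: accept a detection if it covers at least 15% of the ground-truth box.
def overlap_fraction(det, gt):
    """Boxes as (x1, y1, x2, y2); returns intersection area / ground-truth area."""
    ix1, iy1 = max(det[0], gt[0]), max(det[1], gt[1])
    ix2, iy2 = min(det[2], gt[2]), min(det[3], gt[3])
    iw, ih = max(0, ix2 - ix1), max(0, iy2 - iy1)
    gt_area = (gt[2] - gt[0]) * (gt[3] - gt[1])
    return (iw * ih) / gt_area if gt_area > 0 else 0.0

def is_correct_detection(det, gt, min_overlap=0.15):
    return overlap_fraction(det, gt) >= min_overlap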

37 Results when training with 200 examples per class

38 Works Cited
Xiao, J., Hays, J., Ehinger, K. A., Oliva, A., & Torralba, A. (2010). SUN database: Large-scale scene recognition from abbey to zoo. 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. Retrieved from http://vision.princeton.edu/projects/2010/SUN/paper.pdf
Xiao, J., Ehinger, K. A., Hays, J., Torralba, A., & Oliva, A. (2014). SUN Database: Exploring a Large Collection of Scene Categories. International Journal of Computer Vision. Retrieved from http://vision.princeton.edu/projects/2010/SUN/paperIJCV.pdf
SUN Database. Retrieved March 13, 2016, from http://groups.csail.mit.edu/vision/SUN/
Oliva, A., & Torralba, A. Modeling the shape of the scene: A holistic representation of the spatial envelope. Retrieved March 14, 2016, from http://people.csail.mit.edu/torralba/code/spatialenvelope/
VLFeat.org. Retrieved March 14, 2016, from http://www.vlfeat.org/
Chatfield, K., Philbin, J., & Zisserman, A. (2009). Efficient retrieval of deformable shape classes using local self-similarities. 2009 IEEE 12th International Conference on Computer Vision Workshops (ICCV Workshops). Retrieved from http://www.robots.ox.ac.uk/~vgg/publications/2009/Chatfield09/chatfield09.pdf
Green, B. Canny Edge Detection Tutorial. Retrieved March 14, 2016, from http://dasl.mem.drexel.edu/alumni/bGreen/www.pages.drexel.edu/_weg22/can_tut.html
Texton. Retrieved March 14, 2016, from http://vcla.stat.ucla.edu/old/Chengen_Research/texton.htm#base_texton
Lab Color Space. Retrieved March 14, 2016, from https://en.wikipedia.org/wiki/Lab_color_space#CIELAB
Large-scale Scene Understanding Challenge. Retrieved March 14, 2016, from http://lsun.cs.princeton.edu/
Places2: A Large-Scale Database for Scene Understanding. Retrieved March 14, 2016, from http://places2.csail.mit.edu/challenge.html
MIT Places Database for Scene Recognition. Retrieved March 14, 2016, from http://places.csail.mit.edu/

39 Images Cited
SUN Logo. Digital image. Princeton Vision Group.
SUN Categories Size Visualization. Digital image. Princeton.edu. http://vision.princeton.edu/projects/2010/SUN/paperIJCV.pdf
SUN Indoor Mall Image. Digital image. SUN Database. http://labelme.csail.mit.edu/Release3.0/tool.html
SUN Lecture Room (for GIST). Digital image. SUN Database. http://labelme.csail.mit.edu/Release3.0/tool.html
SUN Category Image Examples. Digital image. Princeton.edu.
SUN Categories That Are Easy for Humans. Digital image. Google Scholar.
SUN Categories That Are Confusing to Humans. Digital image. Google Scholar.
HOG Base Image. Digital image. VLFeat.org.
HOG Feature Image. Digital image. VLFeat.org.
SIFT Image Base. Digital image. VLFeat.org.
SIFT Image Feature Points. Digital image. VLFeat.org.
SIFT Image Feature Descriptors. Digital image. VLFeat.org.

40 Images Cited 2
Description of Facial Expressions with Local Binary Patterns. Digital image. Scholarpedia.
MSER Features. Digital image. Mathworks. http://www.mathworks.com/help/vision/ref/detectmserfeatures.html
Affine Covariant Region Detectors. Digital image. Robots.ox.ac.uk.
SSIM Example Diagram. Digital image. Robots.ox.ac.uk.
Heart Shape Classification Task. Digital image. Robots.ox.ac.uk.
Canny Edge Base Image. Digital image. Drexel.edu. http://dasl.mem.drexel.edu/alumni/bGreen/www.pages.drexel.edu/_weg22/can_tut.html
Canny Edge Result Image. Digital image. Drexel.edu. http://dasl.mem.drexel.edu/alumni/bGreen/www.pages.drexel.edu/_weg22/can_tut.html
Texton Example. Digital image. Ucla.edu.
CIE L*a*b* Color Space Examples. Digital image. Wikipedia.
Geometric Probability Dartboard Example. Digital image. Ck12.org. www.ck12.org/user:Sample(123)/book/02.-CK-12-Middle-School-Math-Grade-8/sction/11.7/
Feature Results on SUN and the 15 Category Database. Digital image. Princeton.edu. http://vision.princeton.edu/projects/2010/SUN/paper.pdf
Human vs All Feature SVM Class Results. Digital image. Princeton.edu. http://vision.princeton.edu/projects/2010/SUN/paper.pdf
Open Concept House. Digital image. Hzcdn.com.
SUN Tower Image. Digital image. SUN Database.
Image Detection Results Table (Split in 2). Digital image. Princeton.edu.

