LARGE-SCALE IMAGE PARSING. Joseph Tighe and Svetlana Lazebnik, University of North Carolina at Chapel Hill
Small-scale image parsing: tens of classes, hundreds of images. He et al. (2004), Hoiem et al. (2005), Shotton et al. (2006, 2008, 2009), Verbeek and Triggs (2007), Rabinovich et al. (2007), Galleguillos et al. (2008), Gould et al. (2009), etc. Figure from Shotton et al. (2009)
Large-scale image parsing: hundreds of classes, tens of thousands of images. Evolving training set; non-uniform class frequencies.
Challenges. What's considered important for small-scale image parsing? Combination of local cues; multiple segmentations, multiple scales; context. How much of this is feasible for large-scale, dynamic datasets?
Our first attempt: a nonparametric approach. Lazy learning: do (almost) nothing up front. To parse (label) an image, we find a set of similar images, then transfer labels from the similar images by matching pieces of the image (superpixels).
Finding Similar Images
What is depicted in this image? Scene categories: Ocean, Open Field, Highway, Street, Forest, Mountain, Inner City, Tall Building. Which image is most similar? Then assign the label from the most similar image.
Pixels are a bad measure of similarity. [Figure: most similar image according to pixel distance vs. most similar according to "Bag of Words".]
Origin of the Bag of Words model: an orderless document representation based on frequencies of words from a dictionary (Salton & McGill, 1983). [Figure: US Presidential Speeches tag cloud.]
What are words for an image?
[Figure: airplane image annotated with candidate visual words: wing, tail, wheel, building, propeller, jet engine.]
But where do the words come from?
Then where does the dictionary come from?
Example dictionary (source: B. Leibe)
Another dictionary (source: B. Leibe)
Fei-Fei et al. 2005
Outline of the Bag of Words method Divide the image into patches Assign a “word” for each patch Count the number of occurrences of each “word” in the image
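A minimal sketch of this three-step pipeline, assuming patch descriptors have already been extracted as rows of a NumPy array and using k-means as the dictionary learner (the slides do not commit to a specific clustering method):

```python
# Bag-of-words sketch: cluster training descriptors into a dictionary,
# then represent each image as a normalized histogram of word counts.
import numpy as np
from sklearn.cluster import KMeans

def build_dictionary(train_descriptors, n_words=256):
    """Cluster patch descriptors to form the visual 'dictionary'."""
    kmeans = KMeans(n_clusters=n_words, n_init=10, random_state=0)
    kmeans.fit(train_descriptors)
    return kmeans

def bow_histogram(image_descriptors, kmeans):
    """Assign each patch to its nearest word and count occurrences."""
    words = kmeans.predict(image_descriptors)
    hist = np.bincount(words, minlength=kmeans.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)  # normalize so image size doesn't matter

# Toy usage with random 128-D "SIFT-like" descriptors
rng = np.random.default_rng(0)
km = build_dictionary(rng.normal(size=(5000, 128)), n_words=64)
h = bow_histogram(rng.normal(size=(300, 128)), km)
```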
Does this work for our problem? A 65,536-pixel image is reduced to a 256-dimensional descriptor.
Which look the most similar?
[Figure: query image and retrieved images with labels: building, road, car, sky (several images); tree, sky; tree, building; sand, mountain; car, road.]
Step 1: Scene-level matching Gist (Oliva & Torralba, 2001) Spatial Pyramid (Lazebnik et al., 2006) Color Histogram Retrieval set: Source of possible labels Source of region-level matches
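A sketch of how the retrieval set might be assembled, assuming precomputed global descriptors (gist, spatial pyramid, color histogram) stored per image; averaging per-feature ranks is one simple way to combine incomparable distances and is an assumption here, not necessarily the exact scheme used:

```python
# Scene-level matching sketch: rank training images by each global
# feature, average the ranks, and keep the top k as the retrieval set.
import numpy as np

def retrieval_set(query_feats, dataset_feats, k=200):
    """query_feats / dataset_feats: dicts mapping feature name
    ('gist', 'spatial_pyramid', 'color_hist') to a vector / row-matrix."""
    n = next(iter(dataset_feats.values())).shape[0]
    avg_rank = np.zeros(n)
    for name, q in query_feats.items():
        d = np.linalg.norm(dataset_feats[name] - q, axis=1)  # Euclidean distance
        avg_rank += d.argsort().argsort()                    # rank of each image
    return np.argsort(avg_rank)[:k]
```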
Step 2: Region-level matching
Superpixels (Felzenszwalb & Huttenlocher, 2004)
Step 2: Region-level matching. Each superpixel is described by several feature types: pixel area (size), absolute mask (location), texture, and color histogram. [Figures: for each feature type, a query superpixel and its nearest neighbors with classes such as snow, road, tree, building, sky, sidewalk.] A sketch of such features appears below.
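A rough illustration of per-superpixel features, using skimage's Felzenszwalb segmentation; the gradient-magnitude and per-channel color histograms below are crude stand-ins for the actual texture and color descriptors:

```python
# Hypothetical per-superpixel features mirroring the four types on the
# slide: size, location, texture, color.
import numpy as np
from skimage.segmentation import felzenszwalb
from skimage.color import rgb2gray

def superpixel_features(image):
    """image: H x W x 3 float array with values in [0, 1]."""
    seg = felzenszwalb(image, scale=100, sigma=0.8, min_size=50)
    gy, gx = np.gradient(rgb2gray(image))
    h, w = seg.shape
    feats = {}
    for sp in np.unique(seg):
        mask = seg == sp
        area = mask.mean()                           # size, relative to image
        ys, xs = np.nonzero(mask)
        location = (ys.mean() / h, xs.mean() / w)    # coarse absolute position
        texture = np.histogram(np.hypot(gx[mask], gy[mask]),
                               bins=8, range=(0, 1))[0]   # gradient histogram
        color = [np.histogram(image[..., c][mask],
                              bins=8, range=(0, 1))[0] for c in range(3)]
        feats[sp] = (area, location, texture, np.concatenate(color))
    return feats
```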
Region-level likelihoods. Nonparametric estimate of the class-conditional density for each class c and feature type k, where $f_i^k$ denotes the kth feature type of region $r_i$:

$\hat{P}(f_i^k \mid c) = \dfrac{n(k,\, c,\, \text{within some radius of } f_i^k \text{ in the retrieval set})}{n(k,\, c,\, \text{entire dataset})}$

i.e., the number of class-c features of type k near the query, normalized by the total number of class-c features of type k in the dataset. Per-feature likelihoods are combined via naive Bayes:

$P(r_i \mid c) = \prod_k \hat{P}(f_i^k \mid c)$
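In code, the estimate for one feature type might look like the following sketch; scipy's cKDTree is an assumption for the radius search, and the smoothing constant is hypothetical:

```python
# Nonparametric class-conditional estimate for a single feature type,
# plus the naive Bayes combination across feature types (in log space).
import numpy as np
from scipy.spatial import cKDTree

def class_likelihoods(query_feat, retrieval_feats, retrieval_labels,
                      dataset_class_counts, radius):
    """retrieval_feats/labels: features and class labels of superpixels in
    the retrieval set; dataset_class_counts: dict c -> total number of
    class-c features of this type in the whole training set."""
    tree = cKDTree(retrieval_feats)
    neighbors = tree.query_ball_point(query_feat, r=radius)
    probs = {}
    for c, total in dataset_class_counts.items():
        n_c = np.sum(retrieval_labels[neighbors] == c)
        probs[c] = (n_c + 1e-6) / (total + 1e-6)  # smoothed to avoid zeros
    return probs

def naive_bayes_combine(per_feature_probs):
    """P(r|c) = prod_k P(f^k|c), computed as a sum of logs."""
    classes = per_feature_probs[0].keys()
    return {c: sum(np.log(p[c]) for p in per_feature_probs) for c in classes}
```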
Region-level likelihoods. [Figure: per-class likelihood maps for building, car, crosswalk, sky, window, road.]
Step 3: Global image labeling. Compute a global image labeling by minimizing a Markov random field (MRF) energy over the vector of region labels $\mathbf{c}$:

$E(\mathbf{c}) = \sum_{r_i \in \mathcal{R}} -\log L(r_i, c_i) \;+\; \lambda \sum_{(r_i, r_j) \in \mathcal{E}} \phi(c_i, c_j)$

where $L(r_i, c_i)$ is the likelihood score for region $r_i$ with label $c_i$, $\mathcal{R}$ is the set of regions, $\mathcal{E}$ is the set of neighboring region pairs, and $\phi$ is a co-occurrence-based smoothing penalty. Efficient approximate minimization using $\alpha$-expansion (Boykov et al., 2002).
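The minimization above uses alpha-expansion; as a compact stand-in that makes the energy concrete, here is iterated conditional modes (ICM) on the region adjacency graph. ICM finds weaker local minima than alpha-expansion, and the inputs `log_lik`, `edges`, and `cooccur_penalty` are assumed precomputed:

```python
# ICM stand-in for alpha-expansion: greedily relabel each region to
# minimize its data term plus smoothing against current neighbor labels.
import numpy as np

def icm_labeling(log_lik, edges, cooccur_penalty, lam=1.0, n_iters=10):
    """log_lik: (n_regions, n_classes) log-likelihood scores;
    edges: list of (i, j) neighboring-region pairs;
    cooccur_penalty: (n_classes, n_classes) smoothing penalty matrix."""
    labels = log_lik.argmax(axis=1)  # start from the max likelihood labeling
    nbrs = {i: [] for i in range(len(log_lik))}
    for i, j in edges:
        nbrs[i].append(j)
        nbrs[j].append(i)
    for _ in range(n_iters):
        changed = False
        for i in range(len(log_lik)):
            cost = -log_lik[i] + lam * sum(cooccur_penalty[:, labels[j]]
                                           for j in nbrs[i])
            best = int(np.argmin(cost))
            if best != labels[i]:
                labels[i], changed = best, True
        if not changed:  # converged to a local minimum
            break
    return labels
```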
Step 3: Global image labeling. How do we resolve issues like this? [Figure: original image and its maximum likelihood labeling (sky, tree, sand, road, sea), with stray mislabeled regions.]
Step 3: Global image labeling (continued). [Figure: maximum likelihood labeling, edge penalties, final labeling, and final edge penalties for a street scene labeled road, building, car, window, sky.]
Step 3: Global image labeling (continued). [Figure: original image, maximum likelihood labeling (sky, tree, sand, road, sea), edge penalties, and final MRF labeling (sky, sand, sea).]
Joint geometric/semantic labeling. Semantic labels: road, grass, building, car, etc. Geometric labels: sky, vertical, horizontal (Gould et al., ICCV 2009). [Figure: original image, semantic labeling (sky, tree, car, road), geometric labeling (sky, horizontal, vertical).]
Joint geometric/semantic labeling. Objective function for jointly choosing the semantic labels $\mathbf{c}$ and geometric labels $\mathbf{g}$:

$H(\mathbf{c}, \mathbf{g}) = E(\mathbf{c}) + E(\mathbf{g}) + \mu \sum_{r_i} \varphi(c_i, g_i)$

where $E(\mathbf{c})$ and $E(\mathbf{g})$ are the costs of the semantic and geometric labelings and $\varphi$ is a geometric/semantic consistency penalty.
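A toy sketch of the consistency term, with a hypothetical hand-built consistency table (the real penalty may be learned or weighted differently); in practice the joint objective can be minimized by alternating alpha-expansion over $\mathbf{c}$ and $\mathbf{g}$:

```python
# Joint objective sketch: two MRF energies plus a per-region penalty
# charging semantic/geometric disagreements (e.g. "road" on "vertical").
# CONSISTENT is a made-up example table, not learned values.
CONSISTENT = {
    "sky":      {"sky": 0, "horizontal": 1, "vertical": 1},
    "road":     {"sky": 1, "horizontal": 0, "vertical": 1},
    "building": {"sky": 1, "horizontal": 1, "vertical": 0},
}

def joint_energy(sem_labels, geo_labels, sem_energy, geo_energy, mu=1.0):
    """H(c, g) = E(c) + E(g) + mu * sum_i [c_i inconsistent with g_i]."""
    penalty = sum(CONSISTENT[s][g] for s, g in zip(sem_labels, geo_labels))
    return sem_energy(sem_labels) + geo_energy(geo_labels) + mu * penalty
```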
Example of joint labeling
Understanding scenes on many levels To appear at ICCV 2011
Understanding scenes on many levels To appear at ICCV 2011
Datasets

Dataset                            Training images   Test images   Labels
SIFT Flow (Liu et al., 2009)       2,…               …             …
Barcelona (Russell et al., 2007)   14,…              …             …
LabelMe+SUN                        50,…              …             …
Overall performance: per-pixel rate (per-class rate in parentheses).

              SIFT Flow             Barcelona            LabelMe + SUN
              Semantic     Geom.    Semantic    Geom.    Semantic     Geom.
Base          73.2 (29.1)  …        … (8.0)     …        … (10.7)     81.5
MRF           76.3 (28.8)  …        … (7.6)     …        … (9.1)      81.0
MRF + Joint   76.9 (29.4)  …        … (7.6)     …        … (10.5)     82.2

              LabelMe + SUN Indoor   LabelMe + SUN Outdoor
              Semantic     Geom.     Semantic     Geom.
Base          22.4 (9.5)   …         … (11.0)     83.1
MRF           27.5 (6.5)   …         … (8.6)      82.3
MRF + Joint   27.8 (9.0)   …         … (10.8)     84.1

*SIFT Flow (Liu et al., 2009): 74.75
Per-class classification rates
Results on SIFT Flow dataset
Results on LM+SUN dataset. [Figures: image, ground truth, initial semantic labeling, final semantic labeling, final geometric labeling.]
Running times. [Plot: per-image running times on the SIFT Flow and Barcelona datasets.]
Conclusions. Lessons learned: we can go pretty far with very little learning; good local features and global (scene) context matter more than neighborhood context. What's missing: a rich representation for scene understanding; handling the long tail of rare classes; scalable, dynamic learning.