Presentation transcript:

24 hours of Photo Sharing, installation by Erik Kessels

And sometimes Internet photos have useful labels. Im2gps, Hays and Efros, CVPR 2008. But what if we want more?

Image Categorization. Training: Training Images + Training Labels → Image Features → Classifier Training → Trained Classifier. Testing: Test Image → Image Features → Trained Classifier → Prediction (e.g. "Outdoor").
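
A minimal sketch of this training/testing pipeline, assuming features have already been extracted into arrays; the data and the choice of scikit-learn's LinearSVC are illustrative placeholders, not the lecture's actual code.

```python
# Minimal sketch of the train/test pipeline above; all arrays are hypothetical.
import numpy as np
from sklearn.svm import LinearSVC

train_features = np.random.rand(100, 512)          # image features for training images
train_labels = np.random.randint(0, 2, size=100)   # e.g. 1 = "outdoor", 0 = "indoor"
test_features = np.random.rand(10, 512)            # features for test images

# Training: fit a classifier on labeled features.
classifier = LinearSVC()
classifier.fit(train_features, train_labels)

# Testing: run the trained classifier on new image features.
predictions = classifier.predict(test_features)
print(predictions)
```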

Human Computation for Annotation. Unlabeled Images → show images, collect and filter labels → Training Labels; then Training Images + Training Labels → Image Features → Classifier Training → Trained Classifier.

Active Learning. Unlabeled Images → find images near the decision boundary → show images, collect and filter labels → Training Labels; then Training Images + Training Labels → Image Features → Classifier Training → Trained Classifier.
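
One common way to realize "find images near the decision boundary" is uncertainty sampling; the sketch below assumes a linear classifier and a pool of unlabeled feature vectors, all hypothetical.

```python
# Sketch of uncertainty sampling for active learning (one possible reading of
# "find images near the decision boundary"); data and classifier are placeholders.
import numpy as np
from sklearn.svm import LinearSVC

labeled_X = np.random.rand(50, 512)
labeled_y = np.random.randint(0, 2, size=50)
unlabeled_X = np.random.rand(1000, 512)   # pool of unlabeled image features

clf = LinearSVC().fit(labeled_X, labeled_y)

# Images closest to the boundary have the smallest |decision value|.
margins = np.abs(clf.decision_function(unlabeled_X))
query_idx = np.argsort(margins)[:20]      # ask annotators about these 20 images
print(query_idx)
```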

Human-in-the-loop Recognition. Testing: Test Image → Image Features → Trained Classifier → Prediction (e.g. "Outdoor"), with a Human Observer contributing a Human Estimate. The pipeline could look different, but a human is involved at test time.

Attributes. Computer Vision, James Hays. Many slides from Derek Hoiem.

Recap: Human Computation. Mechanical Turk is powerful but noisy: determine which workers are trustworthy, find consensus over multiple annotators, and "gamify" your task to the degree possible. Human-in-the-loop recognition: have a human and a computer cooperate to do recognition.
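
A toy sketch of "find consensus over multiple annotators", assuming binary labels from a handful of hypothetical workers; majority vote is just one simple consensus rule.

```python
# Majority-vote consensus over multiple (possibly noisy) annotators.
# The label matrix is hypothetical: rows = images, columns = workers, entries in {0, 1}.
import numpy as np

votes = np.array([[1, 1, 0],
                  [0, 0, 0],
                  [1, 0, 1]])            # 3 images labeled by 3 Turk workers

consensus = (votes.mean(axis=1) >= 0.5).astype(int)             # per-image majority label
worker_agreement = (votes == consensus[:, None]).mean(axis=0)   # crude per-worker trust score
print(consensus, worker_agreement)
```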

Today: Crowd-enabled recognition. Recognizing Object Attributes. Recognizing Scene Attributes.

Describing Objects by their Attributes Ali Farhadi, Ian Endres, Derek Hoiem, David Forsyth CVPR 2009

What do we want to know about this object?

What do we want to know about this object? Object recognition expert: “Dog”

What do we want to know about this object? Object recognition expert: “Dog” Person in the Scene: “Big pointy teeth”, “Can move fast”, “Looks angry”

Our Goal: Infer Object Properties Can I put stuff in it? Can I poke with it? Is it alive? Is it soft? What shape is it? Does it have a tail? Will it blend?

Why Infer Properties? 1. We want detailed information about objects: "Dog" vs. "Large, angry animal with pointy teeth".

Why Infer Properties? 2. We want to be able to infer something about unfamiliar objects (familiar objects vs. a new object).

Why Infer Properties? 2. We want to be able to infer something about unfamiliar objects. If we can infer category names: the familiar objects are "Cat", "Horse", "Dog"; the new object is "???".

Why Infer Properties? 2. We want to be able to infer something about unfamiliar objects. If we can infer properties: the familiar objects have "Has Stripes, Has Ears, Has Eyes, …", "Has Four Legs, Has Mane, Has Tail, Has Snout, …", and "Brown, Muscular, Has Snout, …"; the new object Has Stripes (like the cat), Has Mane and Tail (like the horse), and Has Snout (like the horse and dog).

Why Infer Properties? 3. We want to make comparisons between objects or categories: What is the difference between horses and zebras? What is unusual about this dog?

Strategy 1: Category Recognition. Object Image → classifier → "Car" → associated properties: Has Wheels, Used for Transport, Made of Metal, Has Windows, … Note that there is much work on recognition (Category Recognition: PASCAL 2008), but little on inferring attributes (Category → Attributes: ??).

Strategy 2: Exemplar Matching. Object Image → similarity function → Similar Image → associated properties: Has Wheels, Used for Transport, Made of Metal, Old, … (Malisiewicz and Efros 2008; Hays and Efros 2008; Efros et al. 2003).

Strategy 3: Infer Properties Directly. Object Image → a classifier for each attribute → No Wheels, Old, Brown, Made of Metal, … See also Lampert et al. 2009 and Gibson's affordances.

The Three Strategies. (1) Category: Object Image → classifier → "Car" → associated properties (Has Wheels, Used for Transport, …). (2) Exemplar: Object Image → similarity function → Similar Image → associated properties. (3) Direct: Object Image → a classifier for each attribute → Has Wheels, Made of Metal, Has Windows, Old, No Wheels, Brown, …

Our attributes. Visible parts: "has wheels", "has snout", "has eyes". Visible materials or material properties: "made of metal", "shiny", "clear", "made of plastic". Shape: "3D boxy", "round".

Attribute Examples. Example 1: Shape: horizontal cylinder; Parts: wing, propeller, window, wheel; Materials: metal, glass. Example 2: Parts: window, wheel, door, headlight, side mirror; Materials: metal, shiny.

Attribute Examples. Example 1: Parts: head, ear, nose, mouth, hair, face, torso, hand, arm; Materials: skin, cloth. Example 2: Parts: head, ear, snout, eye, torso, leg; Material: furry. Example 3: Parts: head, ear, snout, eye; Material: furry.

Datasets. a-PASCAL: 20 categories from the PASCAL 2008 trainval dataset (10K object images): airplane, bicycle, bird, boat, bottle, bus, car, cat, chair, cow, dining table, dog, horse, motorbike, person, potted plant, sheep, sofa, train, tv monitor. Ground truth for 64 attributes; annotation via Amazon's Mechanical Turk. a-Yahoo: 12 new categories from Yahoo image search: bag, building, carriage, centaur, donkey, goat, jet ski, mug, monkey, statue of person, wolf, zebra. Categories chosen to share attributes with those in PASCAL. Attribute labels are somewhat ambiguous: agreement among "experts" 84.3, between experts and Turk labelers 81.4, among Turk labelers 84.1.

Annotation on Amazon Turk

Our approach

Features. Strategy: cover our bases. Spatial pyramid histograms of quantized color and texture for materials; histograms of oriented gradients (HOG) for parts; Canny edges for shape.
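
A rough sketch of these kinds of base features using scikit-image; this is not the paper's exact spatial-pyramid setup, and the example image and parameters are placeholders.

```python
# Rough sketch of the kinds of base features mentioned above (HOG, color, edges).
import numpy as np
from skimage import data, color
from skimage.feature import hog, canny

image = data.astronaut()                      # any RGB image
gray = color.rgb2gray(image)

hog_feat = hog(gray, orientations=9,
               pixels_per_cell=(8, 8),
               cells_per_block=(2, 2))        # gradient histograms, useful for parts

edges = canny(gray)                           # edge map, usable for shape features
edge_density = edges.mean()

# Very coarse color histogram as a stand-in for quantized color features.
color_hist, _ = np.histogramdd(image.reshape(-1, 3), bins=(4, 4, 4))
color_feat = color_hist.ravel() / color_hist.sum()

feature_vector = np.concatenate([hog_feat, [edge_density], color_feat])
print(feature_vector.shape)
```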

Our approach

Learning Attributes. Learn to distinguish between things that have an attribute and things that do not. Train one classifier (a linear SVM) per attribute.
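
A minimal sketch of "one linear SVM per attribute", with hypothetical feature and label arrays standing in for the real a-PASCAL data.

```python
# Each attribute gets its own binary classifier over the shared feature vectors.
import numpy as np
from sklearn.svm import LinearSVC

n_images, n_dims, n_attributes = 200, 512, 64
X = np.random.rand(n_images, n_dims)                        # image features
A = np.random.randint(0, 2, size=(n_images, n_attributes))  # attribute labels (0/1)

attribute_classifiers = []
for a in range(n_attributes):
    clf = LinearSVC(C=1.0)
    clf.fit(X, A[:, a])          # has-attribute vs. does-not-have-attribute
    attribute_classifiers.append(clf)

# Predicting the attribute vector of a new image:
x_new = np.random.rand(1, n_dims)
predicted = [clf.predict(x_new)[0] for clf in attribute_classifiers]
```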

Describing Objects by their Attributes No examples from these object categories were seen during training

Describing Objects by their Attributes No examples from these object categories were seen during training

Attribute Prediction: Quantitative Analysis. Area under the ROC for familiar (a-PASCAL) vs. unfamiliar (a-Yahoo) object classes. Worst-predicted attributes: wing, handlebars, leather, clear, cloth. Best-predicted attributes: eye, side mirror, torso, head, ear.

Average ROC area, trained on a-PASCAL objects. a-PASCAL test objects: parts 0.794, materials 0.739. a-Yahoo test objects: parts 0.726, materials 0.645, shape 0.677.
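
A sketch of how such per-attribute ROC areas could be computed with scikit-learn; the labels and scores below are random placeholders.

```python
# Per-attribute ROC-area evaluation.
import numpy as np
from sklearn.metrics import roc_auc_score

y_true = np.random.randint(0, 2, size=500)     # does the test object have the attribute?
scores = np.random.rand(500)                   # classifier decision values for that attribute

auc = roc_auc_score(y_true, scores)
print(f"ROC area for this attribute: {auc:.3f}")
# Averaging such per-attribute AUCs over part / material / shape attributes
# gives grouped numbers like those reported above.
```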

Our approach

Category Recognition. Semantic attributes are not enough: 74% accuracy even with ground-truth attributes. Introduce discriminative attributes, trained by selecting a subset of classes and features, e.g. dogs vs. sheep using color, or cars and buses vs. motorbikes and bicycles using edges. Train 10,000 and select the 1,000 most reliable according to a validation set.
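
One plausible reading of the discriminative-attribute construction, sketched as random class splits trained on random feature subsets and scored on a validation set; this is a simplified illustration, not the authors' released code.

```python
# Generate candidate discriminative attributes and keep the most reliable ones.
import numpy as np
from sklearn.svm import LinearSVC

def make_candidate(X_tr, y_tr, X_val, y_val, classes, rng):
    pos = rng.choice(classes, size=len(classes) // 2, replace=False)  # random class split
    dims = rng.choice(X_tr.shape[1], size=100, replace=False)         # random feature subset
    t_tr, t_val = np.isin(y_tr, pos), np.isin(y_val, pos)
    clf = LinearSVC().fit(X_tr[:, dims], t_tr)
    reliability = clf.score(X_val[:, dims], t_val)                    # validation accuracy
    return clf, dims, reliability

rng = np.random.default_rng(0)
X_tr, y_tr = np.random.rand(300, 512), np.random.randint(0, 20, 300)   # hypothetical data
X_val, y_val = np.random.rand(100, 512), np.random.randint(0, 20, 100)
classes = np.arange(20)

candidates = [make_candidate(X_tr, y_tr, X_val, y_val, classes, rng) for _ in range(50)]
candidates.sort(key=lambda c: c[2], reverse=True)
selected = candidates[:10]    # keep the most reliable (the paper: train 10,000, keep 1,000)
```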

Attributes are not a big help when sufficient data is available. Use attribute predictions as features and train a linear SVM to categorize objects (PASCAL 2008). Classification accuracy: base features 58.5%, semantic attributes 54.6%, all attributes 59.4%. Class-normalized accuracy: base features 35.5%, semantic attributes 28.4%, all attributes 37.7%.
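
A sketch of "use attribute predictions as features": stack per-attribute decision values into a feature vector and train one more linear SVM for the category. All arrays are hypothetical and mirror the earlier per-attribute sketch.

```python
import numpy as np
from sklearn.svm import LinearSVC

# Hypothetical base features, attribute labels, and category labels.
X_tr = np.random.rand(200, 512)
A_tr = np.random.randint(0, 2, size=(200, 64))      # 64 attribute labels
y_tr = np.random.randint(0, 20, size=200)           # 20 object categories

attribute_clfs = [LinearSVC().fit(X_tr, A_tr[:, a]) for a in range(64)]

def attribute_features(X):
    # One decision value per attribute classifier -> 64-D "semantic" feature vector.
    return np.column_stack([clf.decision_function(X) for clf in attribute_clfs])

category_svm = LinearSVC().fit(attribute_features(X_tr), y_tr)
print(category_svm.predict(attribute_features(np.random.rand(5, 512))))
```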

Identifying Unusual Attributes. Look at predicted attributes that are not expected given the class label.
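
A toy sketch of flagging unusual attributes by comparing predictions against a class-typical attribute vector; the attribute names and expectation table are made up, and the paper's exact reporting rule may differ.

```python
# Flag attributes whose predictions disagree with what is typical for the class.
import numpy as np

attribute_names = np.array(["has wheels", "has wing", "furry", "shiny"])
typical_for_class = {"dog": np.array([0, 0, 1, 0])}     # expected attributes for "dog"

predicted = np.array([0, 0, 0, 1])                      # predictions for one dog image
expected = typical_for_class["dog"]

missing_typical = attribute_names[(expected == 1) & (predicted == 0)]   # e.g. "furry"
atypical_present = attribute_names[(expected == 0) & (predicted == 1)]  # e.g. "shiny"
print("absent typical:", missing_typical, " present atypical:", atypical_present)
```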

Absence of typical attributes: 752 reports, 68% are correct.

Presence of atypical attributes: 951 reports, 47% are correct.

Today: Crowd-enabled recognition. Recognizing Object Attributes. Recognizing Scene Attributes.

Space of Scenes. Scenes are all photos that aren't of objects and aren't abstract. Genevieve Patterson and James Hays, CVPR 2012.

Space of Scenes. We've come up with various categories of scenes in this space. The SUN database has about 900 categories, so this is just a tiny, random sample.

Space of Scenes. And you hope that these categories cover the space of scenes, and that for any new photo you can say "oh yes, this belongs in category Y." But it's actually hard to assign some photos to categories. The transition between savanna and forest is continuous, based on canopy coverage. This is true of many, many groups of categories.

Space of Scenes Even for scene categories that are completely distinct, that do not lie on a continuum, a single image can depict multiple scene categories. They’re spatially distinct, they’re somewhat independent. But it’s hard to put this image into our categorical taxonomy.

Space of Scenes. Our taxonomy isn't necessarily exhaustive, so there are some scenes (this one is a "rehearsal room") about which we can't say anything.

Space of Scenes. And even if we agree that these are all "villages" (and they are in SUN), there is huge, interesting intra-class variation, and it seems unsatisfying to just say "village".

Big Picture. Scenes don't fit neatly into categories; objects often do! Categories aren't expressive enough. We should reason about scene attributes instead of (or in addition to) scene categories.

Attribute-based Visual Understanding. Learning To Detect Unseen Object Classes by Between-Class Attribute Transfer. Lampert, Nickisch, and Harmeling. CVPR 2009. Describing Objects by their Attributes. Farhadi, Endres, Hoiem, Forsyth. CVPR 2009. Attribute and Simile Classifiers for Face Verification. Kumar, Berg, Belhumeur, Nayar. ICCV 2009. Numerous more recent works on activity, texture, 3D models, etc. And I think this type of understanding is even more appropriate for scenes.

So what would interesting scene attributes be? Here are some possible examples of how we could describe a scene by attributes, in this case all global properties. Spatial layout: large, enclosed. Affordances / functions: can fly, park, walk. Materials: shiny, black, hard. Object presence: has people, ships. Simile: looks like Star Trek. Emotion: scary, intimidating. The spatial envelope was considered by Antonio Torralba, but there are a lot of other important attributes that define a scene. What makes a good / interesting attribute, though?

Space of Scenes. Emphasize: even though I've said scenes don't fit neatly into categories, huge categorical databases like the SUN database definitely give us a great diversity of scenes, a great sampling of the space. I think one measure of a good attribute is whether it divides this space.

Space of Scenes. Good attributes are discriminative.

Space of Scenes. Emphasize: this is not a strict hierarchy or tree.

Space of Scenes. An affordance / function attribute.

Space of Scenes. A material attribute. Emphasize: the goal is not to recover categories. Maybe you could, but it's probably best to classify directly into categories if that's your goal. Emphasize: there can, of course, be a lot of variance for certain attributes within a category. Maybe half of canyons are dry and maybe half have water. Emphasize: while for any given query some attributes can be ambiguous (just like scene boundaries were), most of them won't be. It may be unclear whether an enclosed restaurant patio is indoors or outdoors, but we can say it is a place for eating. This fails more gracefully.

Which Scene Attributes are Relevant? Inspired by the “splitting” task of Oliva and Torralba and “ESP game” by von Ahn and Blum.

102 Scene Attributes

Describe setting up the database. Describe the design of the interface and planned improvements.

SUN Attributes: A Large-Scale Database of Scene Attributes. http://www.cs.brown.edu/~gen/sunattributes.html. Global, binary attributes describing: affordances / functions (e.g. farming, eating); materials (e.g. carpet, running water); surface properties (e.g. aged, sterile); spatial envelope (e.g. enclosed, symmetrical). Statistics of the database: 14,340 images from 717 scene categories; 102 attributes; 4 million+ labels; good workers are ~92% accurate; pre-trained classifiers available for download. Key point: we're augmenting the recent, categorical SUN database with attributes. Larger scale than most (all?) attribute databases. Crowdsourced. A variety of types of attributes. Space of Scenes Organized by Attributes.

102-dimensional attribute space reduced to 2D with t-SNE
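
A sketch of this visualization step, assuming a matrix of per-image attribute vectors; scikit-learn's TSNE and the random attribute matrix are used purely for illustration.

```python
# Embed 102-dimensional attribute vectors in 2-D with t-SNE for plotting.
import numpy as np
from sklearn.manifold import TSNE

attribute_vectors = np.random.randint(0, 2, size=(1000, 102)).astype(float)

embedding = TSNE(n_components=2, perplexity=30, init="pca",
                 random_state=0).fit_transform(attribute_vectors)
print(embedding.shape)   # (1000, 2) coordinates for plotting the space of scenes
```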

Enclosed Area

Open Area

Transport

Sailing

Instances of the “15 Scene” Categories

Average Precision of Attribute Classifiers

Attribute Recognition

Most Confident Classifications

Most Confident Classifications

Recap: Attributes and Crowdsourcing. If you can only get one label per instance, maybe a categorical label is the most informative. But now that crowdsourcing exists, we can get enough training data to simultaneously reason about a multitude of object / scene properties (e.g. attributes). In general, there is a broadening of interesting recognition tasks. Zero-shot learning: model a category with an attribute distribution only.
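
A toy sketch of zero-shot recognition from attribute signatures (in the spirit of Lampert et al.): an unseen category is described only by its expected attribute vector, and a test image is assigned to the category whose signature best matches its predicted attributes. The signatures and scores here are hypothetical.

```python
# Assign a test image to the unseen category with the closest attribute signature.
import numpy as np

category_signatures = {
    "zebra": np.array([1, 1, 1, 0]),    # hypothetical binary attribute signature
    "mug":   np.array([0, 0, 0, 1]),
}

predicted_attributes = np.array([0.9, 0.8, 0.7, 0.1])   # per-attribute classifier scores

best = max(category_signatures,
           key=lambda c: -np.linalg.norm(category_signatures[c] - predicted_attributes))
print(best)   # "zebra": closest signature, without any zebra training images
```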