Object Recognition: Object Classes and Individual Recognition.

Presentation transcript:

Object Recognition

Object Classes

Individual Recognition

Class / Non-class

Features and Classifiers: the same features with different classifiers; the same classifier with different features.

Generic Features: Simple (wavelets), Complex (Geons)

Marr-Nishihara

Class-specific Features: Common Building Blocks

Optimal Class Components? Large features are too rare; small features are found everywhere. Find features that carry the highest amount of information.

Mutual Information I(C;F) between the class variable C and the feature variable F: I(C;F) = H(C) – H(C|F)
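A minimal numerical sketch of this criterion (an illustration, not code from the presentation; the example detection rates are assumptions): estimate I(C;F) for a binary class C and a binary fragment detector F from a joint probability table.

# Sketch: I(C;F) = H(C) - H(C|F) for binary class and feature, from a joint table.
import numpy as np

def entropy(p):
    """Entropy in bits of a discrete distribution p (zero entries ignored)."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def mutual_information(joint):
    """I(C;F) from a joint probability table joint[c, f]."""
    joint = np.asarray(joint, dtype=float)
    joint = joint / joint.sum()
    p_c = joint.sum(axis=1)                      # marginal over the class
    p_f = joint.sum(axis=0)                      # marginal over the feature
    h_c_given_f = sum(p_f[f] * entropy(joint[:, f] / p_f[f])
                      for f in range(joint.shape[1]) if p_f[f] > 0)
    return entropy(p_c) - h_c_given_f

# Assumed example: a fragment found in 80% of class images and 10% of non-class
# images, with equal class priors.
joint = np.array([[0.5 * 0.8, 0.5 * 0.2],        # class:     fragment present / absent
                  [0.5 * 0.1, 0.5 * 0.9]])       # non-class: fragment present / absent
print(mutual_information(joint))                 # roughly 0.4 bits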

Selecting Fragments

Horse-class features and car-class features: pictorial features learned from examples.

Fragments with positions: based on all detected fragments within their expected regions.

Star model: detected fragments ‘vote’ for the center location, and we find the location with the maximal vote. In its variations, this is a popular state-of-the-art scheme.
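A minimal sketch of such voting (an assumption about one possible implementation, not the presentation’s code; the fragment offsets and detection format are illustrative):

# Sketch: each detected fragment knows its offset to the object center and casts a
# vote; the center is taken where the vote map peaks.
import numpy as np

def vote_for_center(detections, offsets, image_shape):
    """detections: list of (fragment_id, x, y, score); offsets: fragment_id -> (dx, dy)."""
    votes = np.zeros(image_shape, dtype=float)
    for frag_id, x, y, score in detections:
        dx, dy = offsets[frag_id]
        cx, cy = int(round(x + dx)), int(round(y + dy))      # center predicted by this fragment
        if 0 <= cy < image_shape[0] and 0 <= cx < image_shape[1]:
            votes[cy, cx] += score                           # vote, weighted by detection score
    return np.unravel_index(np.argmax(votes), votes.shape)   # (row, col) with the maximal vote

# Example: two fragments agreeing on the same center at (row=50, col=40).
offsets = {0: (10, 5), 1: (-8, 12)}
detections = [(0, 30, 45, 1.0), (1, 48, 38, 0.8)]
print(vote_for_center(detections, offsets, (100, 100)))      # -> (50, 40)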

Recognition Features in the Brain

fMRI: Functional Magnetic Resonance Imaging

Images of brain activity

V1: early processing; LO: object recognition

Class-fragments and Activation (Malach et al., 2008)

HoG Descriptor: Dalal, N. & Triggs, B., Histograms of Oriented Gradients for Human Detection.

SVM – linear separation in feature space

Optimal Separation: SVM vs. perceptron. Find a separating plane such that the closest points are as far away as possible. (Rosenblatt, Principles of Neurodynamics; Vapnik, The Nature of Statistical Learning Theory, 1995.)

Optimal Separation with SVM: find a separating plane such that the closest points are as far away as possible. Advantages of SVM: optimal separation; extensions to the non-separable case; kernel SVM.
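For illustration only (the presentation does not prescribe an implementation or library), a linear SVM can be trained on feature vectors with scikit-learn; the data below is synthetic:

# Sketch: train a soft-margin linear SVM on feature vectors and classify one example.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X_pos = rng.normal(loc=+1.0, size=(100, 64))     # class feature vectors (synthetic)
X_neg = rng.normal(loc=-1.0, size=(100, 64))     # non-class feature vectors (synthetic)
X = np.vstack([X_pos, X_neg])
y = np.hstack([np.ones(100), -np.ones(100)])

clf = LinearSVC(C=1.0)                           # maximizes the margin (soft-margin)
clf.fit(X, y)
w, b = clf.coef_[0], clf.intercept_[0]           # separating plane: w · x + b = 0
print(np.sign(w @ X[0] + b))                     # classify one example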

The Margin. Separating line: w ∙ x + b = 0. Far line: w ∙ x + b = +1. Their distance: w ∙ ∆x = +1. Separation: |∆x| = 1/|w|. Margin: 2/|w|.
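The intermediate step can be spelled out (standard reasoning, added here for completeness): ∆x is taken along the direction of w, so

\[
w \cdot \Delta x = \lVert w \rVert \, \lVert \Delta x \rVert = 1
\quad\Rightarrow\quad
\lVert \Delta x \rVert = \frac{1}{\lVert w \rVert},
\qquad
\text{margin} = \frac{2}{\lVert w \rVert}.
\]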

DPM: Felzenszwalb, McAllester, Ramanan, CVPR 2008, “A Discriminatively Trained, Multiscale, Deformable Part Model”. There are many implementation details; we will describe the main points.

HoG descriptor

Person model in HoG: using patches with HoG descriptors and classification by SVM.

Object model using HoG: a bicycle and its ‘root filter’. The root filter is a patch of HoG descriptors. The image is partitioned into 8×8 pixel cells, and in each cell we compute a histogram of gradient orientations.
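A minimal sketch of computing such a descriptor (assuming scikit-image’s hog function purely for illustration; the presentation does not specify a library):

# Sketch: HoG over 8x8-pixel cells, as in Dalal & Triggs.
from skimage import data, color
from skimage.feature import hog

image = color.rgb2gray(data.astronaut())
descriptor = hog(image,
                 orientations=9,            # orientation bins per histogram
                 pixels_per_cell=(8, 8),    # 8x8-pixel cells
                 cells_per_block=(2, 2),    # blocks of cells for local normalization
                 feature_vector=True)
print(descriptor.shape)                     # one long feature vector for the window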

Dealing with scale: multi-scale analysis. The filter is searched over a pyramid of HoG descriptors to deal with the unknown scale.
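A minimal sketch of building such a pyramid (illustrative; the scale step and number of levels are assumptions):

# Sketch: a HoG pyramid; detections at coarse levels correspond to large objects.
from skimage import data, color
from skimage.feature import hog
from skimage.transform import rescale

def hog_pyramid(image, levels=5, scale_step=2 ** -0.5):
    pyramid = []
    for level in range(levels):
        scaled = rescale(image, scale_step ** level, anti_aliasing=True)
        hog_map = hog(scaled, pixels_per_cell=(8, 8), cells_per_block=(2, 2),
                      feature_vector=False)       # keep the spatial cell layout
        pyramid.append(hog_map)
    return pyramid

pyramid = hog_pyramid(color.rgb2gray(data.astronaut()))
print(len(pyramid))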

Adding Parts. A part is P_i = (F_i, v_i, s_i, a_i, b_i): F_i is the filter for the i-th part; v_i is the center of a box of possible positions for part i relative to the root position; s_i is the size of this box; a_i and b_i are two-dimensional vectors specifying the coefficients of a quadratic function measuring a score for each possible placement of the i-th part. That is, a_i and b_i are two numbers each, and the penalty for a deviation (∆x, ∆y) from the expected location is a_i1 ∆x + a_i2 ∆y + b_i1 ∆x² + b_i2 ∆y².

Bicycle model: root, parts, spatial map. Person model.

Match Score. The full score of a potential match is:
∑ F_i ∙ H_i + ∑ (a_i1 x_i + a_i2 y_i + b_i1 x_i² + b_i2 y_i²)
F_i ∙ H_i is the appearance part; (x_i, y_i) is the deviation of part p_i from its expected location in the model, which gives the spatial part.
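A direct transcription of this score into code (a sketch with illustrative names; filters, HoG patches, deviations and coefficients are assumed to be given):

# Sketch: sum the appearance terms F_i · H_i and the quadratic spatial terms over parts.
import numpy as np

def match_score(filters, hog_patches, a, b, deviations):
    """filters[i], hog_patches[i]: same-shape arrays; a[i], b[i]: 2-vectors; deviations[i]: (x_i, y_i)."""
    total = 0.0
    for F_i, H_i, a_i, b_i, (x_i, y_i) in zip(filters, hog_patches, a, b, deviations):
        total += float(np.sum(F_i * H_i))                                        # appearance: F_i · H_i
        total += a_i[0]*x_i + a_i[1]*y_i + b_i[0]*x_i**2 + b_i[1]*y_i**2         # spatial term
    return total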

Recognition. Search with gradient descent over the placement; this also includes the levels in the hierarchy. Start with the root filter and find locations with a high score for it. For these high-scoring locations, search for the optimal placement of the parts at a level with twice the resolution of the root filter, again using gradient descent. The final decision: β∙ψ > θ implies class. Essentially, maximize
∑ F_i ∙ H_i + ∑ (a_i1 x_i + a_i2 y_i + b_i1 x_i² + b_i2 y_i²)
over the placements (x_i, y_i).
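A minimal sketch of the placement search for a single part (illustrative only: it scans a small box of placements exhaustively rather than using the descent described above, and all names are assumptions):

# Sketch: best placement of one part within its box, maximizing appearance + spatial score.
import numpy as np

def best_placement(F_i, hog_map, cx, cy, box, a_i, b_i):
    """cx, cy: expected part location (in HoG cells); box: half-size of the search box."""
    fh, fw = F_i.shape[:2]
    best_score, best_xy = -np.inf, None
    for dy in range(-box, box + 1):
        for dx in range(-box, box + 1):
            y, x = cy + dy, cx + dx
            if y < 0 or x < 0:
                continue                                   # placement falls off the map
            patch = hog_map[y:y + fh, x:x + fw]
            if patch.shape[:2] != (fh, fw):
                continue
            score = float(np.sum(F_i * patch))             # appearance term F_i · H_i
            score += a_i[0]*dx + a_i[1]*dy + b_i[0]*dx**2 + b_i[1]*dy**2   # spatial term
            if score > best_score:
                best_score, best_xy = score, (x, y)
    return best_score, best_xy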

Using SVM. The score of a match can be expressed as the dot product of a vector β of coefficients with the image representation ψ: Score = β∙ψ, where z denotes the placement. The vectors ψ are used to train an SVM classifier: β∙ψ > 1 for class examples, β∙ψ < −1 for non-class examples.

Training: positive examples with bounding boxes around the objects, and negative examples. Learn the root filter using SVM. Define a fixed number of parts, at locations of high energy in the root filter’s HoG. Use these to start the iterative learning, as sketched below.
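A toy, runnable sketch of this alternation (an assumption, using a 1-D stand-in where the ‘placement’ is a shift; it is not the paper’s training code):

# Sketch: iterative (latent) learning; the latent variable z is the placement (a shift).
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
W, N = 8, 32                                        # filter length, signal length
pattern = np.linspace(1.0, -1.0, W)                 # the "object" hidden in positives

def psi(x, z):                                      # feature vector at placement z
    return x[z:z + W]

positives = []
for _ in range(50):                                 # object appears at an unknown shift
    x = rng.normal(size=N); z = rng.integers(0, N - W + 1)
    x[z:z + W] += pattern
    positives.append(x)
negatives = [rng.normal(size=N) for _ in range(50)]

beta = rng.normal(size=W)                           # initial "root filter"
for _ in range(5):
    # Step 1: for each positive, take the best placement under the current filter.
    zs = [max(range(N - W + 1), key=lambda z: beta @ psi(x, z)) for x in positives]
    # Step 2: retrain the linear SVM on the chosen feature vectors (negatives use shift 0).
    X = np.array([psi(x, z) for x, z in zip(positives, zs)] + [psi(x, 0) for x in negatives])
    y = np.array([1] * 50 + [-1] * 50)
    beta = LinearSVC(C=1.0).fit(X, y).coef_[0]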

β∙ψ > 1 for class examples, β∙ψ < −1 for non-class examples. However, ψ depends on the placement z, that is, on the values of ∆x_i, ∆y_i. We need to take the best ψ over all placements; in their notation, classification then uses β∙f > 1.
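Spelled out as a formula (a standard way to write the latent scoring, consistent with the slide; the notation here is not copied from the paper):

\[
\text{score}(x) \;=\; \max_{z} \; \beta \cdot \psi(x, z),
\qquad
\text{score}(x) > 1 \;\Rightarrow\; \text{class}.
\]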

‘Pascal Challenge’: airplanes. Obtaining human-level performance?

Deep Learning