1
Object Detection Overview Viola-Jones Dalal-Triggs Deformable models Deep learning
2
Recap: Viola-Jones sliding window detector. Fast detection through two mechanisms: quickly eliminate unlikely windows; use features that are fast to compute. Viola and Jones, Rapid Object Detection using a Boosted Cascade of Simple Features (2001).
3
Cascade for Fast Detection. Each window passes through a chain of stages: Stage 1 (H1(x) > t1?), Stage 2 (H2(x) > t2?), …, Stage N (HN(x) > tN?). A "No" at any stage rejects the window immediately; only a window that passes every stage is accepted. Choose each threshold for a low false negative rate. Fast classifiers go early in the cascade; slow classifiers later, but most examples never get there.
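The reject-early logic above can be sketched in a few lines; the stage functions and thresholds here are toy stand-ins, not trained classifiers:

```python
def cascade_classify(x, stages, thresholds):
    """Evaluate an attentional cascade: reject at the first failing stage.

    stages: list of scoring functions H_i; thresholds: list of t_i.
    Returns True only if every stage accepts the window."""
    for H, t in zip(stages, thresholds):
        if H(x) <= t:        # fails stage i -> reject immediately
            return False
    return True              # passed every stage -> detection

# Toy stages: a cheap early stage, a more "expensive" later stage.
stages = [lambda x: x[0], lambda x: x[0] + x[1]]
thresholds = [0.1, 0.9]
print(cascade_classify((0.5, 0.6), stages, thresholds))   # accepted
print(cascade_classify((0.05, 0.9), stages, thresholds))  # rejected at stage 1
```

Most windows fail the cheap first stage, so the average cost per window stays close to the cost of that first stage.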
4
Features that are fast to compute: "Haar-like features" – differences of sums of intensity – thousands of them, computed at various positions and scales within the detection window. Examples: two-rectangle features, three-rectangle features, etc.
5
Integral Images. ii = cumsum(cumsum(im, 1), 2): ii(x, y) = sum of the values above and to the left of (x, y) (the grey region). The sum within rectangle D is then ii(4) - ii(2) - ii(3) + ii(1), where 1-4 are the rectangle's corner points.
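A minimal NumPy sketch of both ideas on this slide: the double cumulative sum, and the four-corner lookup ii(4) - ii(2) - ii(3) + ii(1), with border handling added:

```python
import numpy as np

def integral_image(im):
    # ii(x, y) = sum of all pixels above and to the left of (x, y), inclusive
    return np.cumsum(np.cumsum(im, axis=0), axis=1)

def rect_sum(ii, r0, c0, r1, c1):
    """Sum over the rectangle [r0..r1] x [c0..c1] using four lookups,
    subtracting the strips above and to the left and re-adding their overlap."""
    total = ii[r1, c1]
    if r0 > 0:
        total -= ii[r0 - 1, c1]
    if c0 > 0:
        total -= ii[r1, c0 - 1]
    if r0 > 0 and c0 > 0:
        total += ii[r0 - 1, c0 - 1]
    return total

im = np.arange(16).reshape(4, 4)
ii = integral_image(im)
print(rect_sum(ii, 1, 1, 2, 2))  # 5 + 6 + 9 + 10 = 30
```

Any rectangle sum, and therefore any Haar-like feature, costs a constant number of lookups regardless of rectangle size.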
6
Feature selection with AdaBoost. Create a large pool of features (~180K). Select features that are discriminative and work well together: – "Weak learner" = feature + threshold + parity – Choose the weak learner that minimizes error on the weighted training set – Reweight the training examples
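A toy sketch of the selection loop, assuming precomputed feature values in a matrix (in real Viola-Jones these would be Haar responses from the integral image); the stump search and reweighting follow the discrete AdaBoost scheme:

```python
import numpy as np

def best_stump(features, labels, w):
    """Exhaustively pick the (feature, threshold, parity) stump with the
    lowest weighted classification error on the current weights w."""
    best = None
    for j in range(features.shape[1]):
        for t in np.unique(features[:, j]):
            for parity in (+1, -1):
                pred = (parity * features[:, j] > parity * t).astype(int)
                err = w[pred != labels].sum()
                if best is None or err < best[0]:
                    best = (err, j, t, parity)
    return best

def adaboost(features, labels, rounds=3):
    """Discrete AdaBoost: select one stump per round, then down-weight the
    examples it classifies correctly before the next round."""
    n = len(labels)
    w = np.full(n, 1.0 / n)
    learners = []
    for _ in range(rounds):
        err, j, t, parity = best_stump(features, labels, w)
        err = max(err, 1e-10)                     # avoid division by zero
        beta = err / (1.0 - err)
        pred = (parity * features[:, j] > parity * t).astype(int)
        w = w * np.where(pred == labels, beta, 1.0)
        w = w / w.sum()
        learners.append((j, t, parity, np.log(1.0 / beta)))
    return learners

X = np.array([[0.0], [1.0], [2.0], [3.0]])  # one toy feature per example
y = np.array([0, 0, 1, 1])
j, t, parity, alpha = adaboost(X, y, rounds=1)[0]
```

Each selected stump becomes one feature of a cascade stage; its vote is weighted by alpha = log(1/beta).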
7
Adaboost
8
Viola-Jones details. 38 stages with 1, 10, 25, 50, … features – 6061 features used in total, out of 180K candidates – 10 features evaluated per window on average. Training examples – 4916 positive examples – 10000 negative examples, collected anew after each stage. Scanning – scale the detector rather than the image – scale step = 1.25 (factor between two consecutive scales) – translation step = 1 × scale (pixels between two consecutive windows). Non-max suppression: average the coordinates of overlapping boxes. Train 3 classifiers and take a vote.
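The scanning parameters above (grow the detector by 1.25 per level, translate by 1 × scale pixels) can be sketched as a window-position generator; the base window and image sizes here are illustrative:

```python
def scan_positions(img_w, img_h, win=24, scale_step=1.25):
    """Generate sliding-window positions (x, y, size): scale the detector
    rather than the image, with translation stride = 1 * scale pixels."""
    positions = []
    scale = 1.0
    while round(win * scale) <= min(img_w, img_h):
        size = int(round(win * scale))
        step = max(1, int(round(scale)))   # 1 * scale pixels between windows
        for y in range(0, img_h - size + 1, step):
            for x in range(0, img_w - size + 1, step):
                positions.append((x, y, size))
        scale *= scale_step
    return positions

positions = scan_positions(30, 30)
```

Scaling the detector (via the integral image) avoids rebuilding an image pyramid, which is part of why the detector runs at frame rate.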
9
Viola-Jones Results. MIT + CMU face dataset. Speed = 15 FPS (in 2001).
10
Object Detection Overview Viola-Jones Dalal-Triggs Deformable models Deep learning
11
Statistical Template. Object model = sum of scores of features at fixed positions; the window is classified as object if the total exceeds a threshold (e.g., a score of 10.5 > 7.5 → object; a score of -0.5 → non-object).
12
Design challenges. How to efficiently search for likely objects – even simple models require searching hundreds of thousands of positions and scales. Feature design and scoring – how should appearance be modeled? What features correspond to the object? How to deal with different viewpoints? Often, different models are trained for a few different viewpoints. Implementation details – window size, aspect ratio, translation/scale step size, non-maxima suppression.
13
Example: Dalal-Triggs pedestrian detector. 1. Extract a fixed-size (64x128 pixel) window at each position and scale. 2. Compute HOG (histogram of oriented gradients) features within each window. 3. Score the window with a linear SVM classifier. 4. Perform non-maxima suppression to remove overlapping detections with lower scores. Navneet Dalal and Bill Triggs, Histograms of Oriented Gradients for Human Detection, CVPR 2005.
14
Slides by Pete Barnum Navneet Dalal and Bill Triggs, Histograms of Oriented Gradients for Human Detection, CVPR05
15
Color spaces tested – RGB – LAB – grayscale: RGB and LAB give slightly better performance than grayscale. Gamma normalization and compression – square root – log: very slightly better performance than no adjustment.
16
Gradient filters tested: uncentered, centered, cubic-corrected, diagonal, Sobel; the simple centered mask outperforms the others. Slides by Pete Barnum; Navneet Dalal and Bill Triggs, Histograms of Oriented Gradients for Human Detection, CVPR 2005.
17
Histogram of gradient orientations – votes weighted by gradient magnitude – bilinear interpolation between cells. Orientation: 9 bins (for unsigned angles, 0–180 degrees). Histograms computed in k x k pixel cells. Slides by Pete Barnum; Navneet Dalal and Bill Triggs, Histograms of Oriented Gradients for Human Detection, CVPR 2005.
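A simplified sketch of the per-cell histograms (magnitude-weighted votes into 9 unsigned-orientation bins); the bilinear interpolation between bins and cells mentioned above is omitted for brevity:

```python
import numpy as np

def cell_histograms(im, cell=8, bins=9):
    """Per-cell histograms of unsigned gradient orientation (0-180 degrees),
    with each pixel voting by its gradient magnitude."""
    gy, gx = np.gradient(im.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0   # unsigned orientation
    h, w = im.shape
    bin_idx = np.minimum((ang / (180.0 / bins)).astype(int), bins - 1)
    H = np.zeros((h // cell, w // cell, bins))
    for i in range(h // cell):
        for j in range(w // cell):
            m = mag[i*cell:(i+1)*cell, j*cell:(j+1)*cell]
            b = bin_idx[i*cell:(i+1)*cell, j*cell:(j+1)*cell]
            for k in range(bins):
                H[i, j, k] = m[b == k].sum()       # magnitude-weighted votes
    return H

im = np.zeros((8, 8))
im[:, 4:] = 1.0          # vertical step edge -> purely horizontal gradient
H = cell_histograms(im, cell=8, bins=9)
```

For the step-edge image, all the gradient energy falls in the 0-degree bin, as expected.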
18
Normalize with respect to surrounding cells Slides by Pete Barnum Navneet Dalal and Bill Triggs, Histograms of Oriented Gradients for Human Detection, CVPR05
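A sketch of the normalization step, assuming 2x2-cell blocks with plain L2 normalization (Dalal-Triggs also evaluate other schemes); each interior cell then contributes a normalized copy to up to four blocks:

```python
import numpy as np

def normalize_blocks(H, eps=1e-5):
    """L2-normalize each 2x2 block of neighboring cell histograms.

    H: (n_i, n_j, bins) array of per-cell histograms.
    Returns (n_i-1, n_j-1, 4*bins) of concatenated, normalized blocks."""
    n_i, n_j, bins = H.shape
    blocks = np.zeros((n_i - 1, n_j - 1, 4 * bins))
    for i in range(n_i - 1):
        for j in range(n_j - 1):
            v = H[i:i+2, j:j+2].ravel()            # 4 neighboring cells
            blocks[i, j] = v / np.sqrt((v ** 2).sum() + eps ** 2)
    return blocks

H = np.ones((3, 3, 9))    # 3x3 grid of cells, 9-bin histograms
blocks = normalize_blocks(H)
```

Normalizing against neighboring cells makes the descriptor robust to local illumination and contrast changes.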
19
Original formulation: # features = 15 x 7 x 9 x 4 = 3780 (# cells x # orientations x # normalizations by neighboring cells). Slides by Pete Barnum; Navneet Dalal and Bill Triggs, Histograms of Oriented Gradients for Human Detection, CVPR 2005.
20
Original formulation: # features = 15 x 7 x 9 x 4 = 3780 (# cells x # orientations x # normalizations by neighboring cells). UoCTTI variant: # features = 15 x 7 x (3 x 9 + 4) = 3255 (# cells x (# orientation channels + 4 gradient magnitudes of neighboring cells)). Slides by Pete Barnum; Navneet Dalal and Bill Triggs, Histograms of Oriented Gradients for Human Detection, CVPR 2005.
21
Learned SVM weights visualized: positive weights (w_pos) and negative weights (w_neg). Slides by Pete Barnum; Navneet Dalal and Bill Triggs, Histograms of Oriented Gradients for Human Detection, CVPR 2005.
22
Example detection: pedestrian. Slides by Pete Barnum; Navneet Dalal and Bill Triggs, Histograms of Oriented Gradients for Human Detection, CVPR 2005.
23
Pedestrian detection with HOG. Train a pedestrian template using a linear support vector machine. At test time, convolve the feature map with the template and find local maxima of the response. For multi-scale detection, repeat over multiple levels of a HOG pyramid. N. Dalal and B. Triggs, Histograms of Oriented Gradients for Human Detection, CVPR 2005. Figure: template, HOG feature map, detector response map.
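The convolve-and-find-maxima step amounts to a dense dot product between the template and every subwindow of the feature map; a brute-force sketch (real implementations use correlation or FFT tricks):

```python
import numpy as np

def detector_response(feature_map, template):
    """Slide a linear template over an (H, W, D) feature map.

    The response at (i, j) is the dot product between the template weights
    and the subwindow features -- i.e. the linear SVM score (bias omitted)."""
    th, tw, _ = template.shape
    H, W, _ = feature_map.shape
    resp = np.zeros((H - th + 1, W - tw + 1))
    for i in range(resp.shape[0]):
        for j in range(resp.shape[1]):
            resp[i, j] = np.sum(feature_map[i:i+th, j:j+tw] * template)
    return resp

fm = np.zeros((5, 5, 2))
fm[2:4, 1:3] = 1.0          # plant a 2x2 "object" at row 2, col 1
tpl = np.ones((2, 2, 2))    # toy stand-in for learned SVM weights
resp = detector_response(fm, tpl)
```

The response map peaks where the subwindow features best match the template; local maxima of this map are the detections.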
24
Something to think about… Sliding window detectors work – very well for faces – fairly well for cars and pedestrians – badly for cats and dogs Why are some classes easier than others?
25
Strengths and Weaknesses of the Statistical Template Approach. Strengths: works very well for non-deformable objects with canonical orientations (faces, cars, pedestrians); fast detection. Weaknesses: works poorly for highly deformable objects or "stuff"; not robust to occlusion; requires lots of training data.
26
Tricks of the trade. Details in feature computation really matter – e.g., normalization in Dalal-Triggs improves the detection rate by 27% at a fixed false positive rate. Template size – the typical choice is the size of the smallest detectable object. "Jittering" to create synthetic positive examples – create slightly rotated, translated, scaled, and mirrored versions as extra positive examples. Bootstrapping to get hard negative examples: 1. Randomly sample negative examples 2. Train the detector 3. Sample negative examples that score > -1 4. Repeat until all high-scoring negative examples fit in memory
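The bootstrapping loop can be sketched with a deliberately toy 1-D "detector" (a boundary midway between class means) standing in for SVM training; only the mining logic mirrors the four steps above:

```python
import random

def train(pos, neg):
    # Toy stand-in for SVM training: decision boundary midway between means.
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2.0

def score(b, x):
    return x - b   # detector score; "hard" negatives score above the margin

def mine_hard_negatives(pos, neg_pool, rounds=3, margin=-1.0):
    random.seed(0)
    negatives = random.sample(neg_pool, 5)          # 1. random negatives
    b = train(pos, negatives)
    for _ in range(rounds):
        b = train(pos, negatives)                   # 2. train detector
        hard = [x for x in neg_pool                 # 3. negatives scoring > -1
                if score(b, x) > margin and x not in negatives]
        if not hard:
            break                                   # 4. repeat until none left
        negatives += hard
    return b, negatives

pos = [10.0, 11.0, 12.0]
neg_pool = [float(x) for x in range(9)]
b, negs = mine_hard_negatives(pos, neg_pool)
```

The point is that most randomly sampled negatives are trivially easy; retraining on the negatives the current detector gets wrong concentrates capacity where it matters.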
27
Things to remember. Sliding window for search. Features based on differences of intensity (gradient, wavelet, etc.) – excellent results require careful feature design. Boosting for feature selection. Integral images and a cascade for speed. Bootstrapping to deal with many, many negative examples.
29
Generic object detection with deformable part-based models. Many slides from Lana Lazebnik, based on P. Felzenszwalb.
30
Challenge: Generic object detection
31
Histograms of oriented gradients (HOG). Partition the image into blocks at multiple scales and compute a histogram of gradient orientations in each block. N. Dalal and B. Triggs, Histograms of Oriented Gradients for Human Detection, CVPR 2005. Figure: 10x10 cells, 20x20 cells. Image credit: N. Snavely.
32
Histograms of oriented gradients (HOG). Partition the image into blocks at multiple scales and compute a histogram of gradient orientations in each block. N. Dalal and B. Triggs, Histograms of Oriented Gradients for Human Detection, CVPR 2005. Image credit: N. Snavely.
33
Pedestrian detection with HOG. Train a pedestrian template using a linear support vector machine. N. Dalal and B. Triggs, Histograms of Oriented Gradients for Human Detection, CVPR 2005. Figure: positive training examples, negative training examples.
34
Are we done? Single rigid template usually not enough to represent a category Many objects (e.g. humans) are articulated, or have parts that can vary in configuration Many object categories look very different from different viewpoints, or from instance to instance Slide by N. Snavely
35
Discriminative part-based models. P. Felzenszwalb, R. Girshick, D. McAllester, D. Ramanan, Object Detection with Discriminatively Trained Part Based Models, PAMI 32(9), 2010. Figure: root filter, part filters, deformation weights.
36
Discriminative part-based models. P. Felzenszwalb, R. Girshick, D. McAllester, D. Ramanan, Object Detection with Discriminatively Trained Part Based Models, PAMI 32(9), 2010. Figure: multiple components.
37
Discriminative part-based models. P. Felzenszwalb, R. Girshick, D. McAllester, D. Ramanan, Object Detection with Discriminatively Trained Part Based Models, PAMI 32(9), 2010.
38
Object hypothesis Multiscale model: the resolution of part filters is twice the resolution of the root
39
Scoring an object hypothesis. The score of a hypothesis is the sum of filter scores minus the sum of deformation costs: score(p0, ..., pn) = sum_i [F_i . phi(H, p_i)] - sum_i [d_i . (dx_i, dy_i, dx_i^2, dy_i^2)], where the F_i are the filters, phi(H, p_i) are the subwindow features, the d_i are the deformation weights, and (dx_i, dy_i) are the part displacements.
40
Scoring an object hypothesis. The score (sum of filter scores minus sum of deformation costs) can also be written as a single dot product, score = beta . Psi(H, z), where beta is the concatenation of the filter and deformation weights and Psi(H, z) is the concatenation of the subwindow features and displacements.
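A direct transcription of the score: filter responses summed, minus quadratic deformation costs d_i . (dx, dy, dx^2, dy^2); the numbers below are made up for illustration:

```python
import numpy as np

def hypothesis_score(filter_scores, deform_weights, displacements):
    """DPM hypothesis score: sum of filter responses minus the sum of
    deformation costs d_i . (dx, dy, dx^2, dy^2) over the parts.

    filter_scores: responses F_i . phi(H, p_i) for root + parts.
    deform_weights: one 4-vector d_i per part.
    displacements: one (dx, dy) per part, relative to its anchor."""
    total = sum(filter_scores)
    for d, (dx, dy) in zip(deform_weights, displacements):
        total -= np.dot(d, [dx, dy, dx * dx, dy * dy])
    return float(total)

# Root response 5.0, two part responses 2.0 and 1.0; both parts penalize
# squared displacement only (d = [0, 0, 1, 1]).
s = hypothesis_score([5.0, 2.0, 1.0],
                     [[0, 0, 1, 1], [0, 0, 1, 1]],
                     [(1, 0), (0, 2)])
```

Here the raw filter evidence (8.0) is reduced by deformation costs 1 + 4, so parts far from their anchors must earn their displacement with strong appearance evidence.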
41
Detection. Define the score of each root filter location as the score given the best part placements: score(p0) = max over p1, ..., pn of score(p0, p1, ..., pn).
42
Detection. Define the score of each root filter location as the score given the best part placements: score(p0) = max over p1, ..., pn of score(p0, p1, ..., pn). Efficient computation: generalized distance transforms. For each "default" part location, find the score of the "best" displacement. Figure: head filter, deformation cost.
43
Detection. Define the score of each root filter location as the score given the best part placements: score(p0) = max over p1, ..., pn of score(p0, p1, ..., pn). Efficient computation: generalized distance transforms. For each "default" part location, find the score of the "best" displacement. Figure: head filter, head filter responses, distance transform.
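The generalized distance transform can be sketched brute-force in O(n^2) per dimension (Felzenszwalb and Huttenlocher's lower-envelope algorithm achieves O(n)); here for a 1-D response with a quadratic deformation cost:

```python
import numpy as np

def distance_transform(response, a=1.0, b=0.0):
    """Generalized distance transform, brute force.

    For each default location p, D(p) = max_q [response(q) - a*(p-q)^2
    - b*(p-q)]: the score of the best displaced part placement."""
    n = len(response)
    q = np.arange(n)
    out = np.empty(n)
    for p in range(n):
        out[p] = np.max(response - a * (p - q) ** 2 - b * (p - q))
    return out

resp = np.array([0.0, 10.0, 0.0, 0.0])   # strong part response at q = 1
out = distance_transform(resp)
```

A single strong response "spreads" to neighboring default locations, attenuated by the quadratic deformation cost, which is exactly how the best part placement is found for every root location at once.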
44
Detection
45
Detection result
46
Training Training data consists of images with labeled bounding boxes Need to learn the filters and deformation parameters
47
Training. Our classifier has the form f_w(x) = max_z [w . Phi(x, z)], where w are the model parameters and z are the latent hypotheses (part placements). Latent SVM training: initialize w and iterate: – Fix w and find the best z for each training example (detection) – Fix z and solve for w (standard SVM training). Issue: too many negative examples – do "data mining" to find "hard" negatives.
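A toy sketch of the latent-SVM coordinate descent: alternate between choosing the best latent placement per positive (w fixed) and hinge-loss subgradient steps for w (z fixed). The data and the inner optimizer here are illustrative, not the authors' solver:

```python
import numpy as np

def best_latent(w, candidates):
    # "Detection" step: choose the latent placement z maximizing w . Phi(x, z).
    return max(candidates, key=lambda f: float(np.dot(w, f)))

def latent_svm(positives, negatives, outer_iters=5, inner_steps=50,
               lr=0.1, C=1.0):
    """positives: list of candidate feature-vector lists (one list per
    example, one vector per latent placement z); negatives: feature vectors."""
    w = np.zeros(len(negatives[0]))
    for _ in range(outer_iters):
        # Step 1: fix w, pick the best z for each positive example.
        pos_feats = [best_latent(w, cands) for cands in positives]
        # Step 2: fix z, standard SVM training (hinge-loss subgradient descent).
        for _ in range(inner_steps):
            grad = w.copy()                       # regularizer gradient
            for f in pos_feats:
                if np.dot(w, f) < 1:              # positive inside the margin
                    grad -= C * np.asarray(f)
            for f in negatives:
                if np.dot(w, f) > -1:             # negative inside the margin
                    grad += C * np.asarray(f)
            w = w - lr * grad
    return w

# One positive with two candidate placements; the first is the "good" one.
positives = [[np.array([2.0, 0.0]), np.array([0.0, 2.0])]]
negatives = [np.array([-1.0, 0.0])]
w = latent_svm(positives, negatives)
```

In the real system, Step 2 is also interleaved with hard-negative mining, since exhaustive negative windows do not fit in memory.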
48
Car model Component 1 Component 2
49
Car detections
50
Person model
51
Person detections
52
Cat model
53
Cat detections
54
Bottle model
55
More detections
56
Quantitative results (PASCAL 2008). 7 systems competed in the 2008 challenge. Out of 20 classes, the proposed approach took first place in 7 classes and second place in 8 classes. Figure: precision-recall curves for bicycles, person, and bird.
57
Detection state of the art. Object detection system overview: R-CNN (1) takes an input image, (2) extracts around 2000 bottom-up region proposals, (3) computes features for each proposal using a large convolutional neural network (CNN), and then (4) classifies each region using class-specific linear SVMs. R-CNN achieves a mean average precision (mAP) of 53.7% on PASCAL VOC 2010. For comparison, Uijlings et al. (2013) report 35.1% mAP using the same region proposals, but with a spatial pyramid and bag-of-visual-words approach. The popular deformable part models perform at 33.4%. R. Girshick, J. Donahue, T. Darrell, and J. Malik, Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation, CVPR 2014.