1
Robust Moving Object Detection & Categorization using Self-Improving Classifiers
Omar Javed, Saad Ali & Mubarak Shah
2
Moving Object Detection & Categorization
Goal: Detect moving objects in images and classify them into categories, e.g., humans or vehicles.
Motivation: Most monitoring and video-understanding systems require knowledge of the location and type of objects in the scene.
3
Object Classification: Major Approaches
Supervised classifiers: AdaBoost (Viola & Jones), naive Bayes (Schneiderman et al.), SVMs (Papageorgiou & Poggio).
Limitations:
- A large number of training examples is required: 1,000,000 negative examples for face detection (Zhang et al.), and more than 10,000 examples used by Viola & Jones.
- Parameters are fixed after training; once deployed, they cannot be tuned for best performance in a particular scenario.
4
Object Classification: Major Approaches
Semi-supervised classifiers: co-training (Levin et al.).
Limitations:
- A large amount of training data must still be collected, though it does not need labels.
- Training is offline, i.e., parameters are fixed in the testing phase.
5
Properties of an “Ideal” Object Detection System
- Learns both background and object models online, with no prior training.
- Adapts quickly to changing background and object properties.
6
Overview of the Proposed Approach
In a single boosted framework:
- Obtain regions of interest (ROIs) from a background-subtraction approach.
- Extract motion and appearance features from each ROI.
- Use the separate views (motion and appearance features) of the data for online co-training: if one set of features confidently predicts the label of an object, use that label to update the base classifiers and the boosting parameters online.
- Use the combined view (both feature sets) for classification decisions.
7
Properties of the Proposed Object Detection Method
- The background model is learned online.
- Object models are learned offline from a small number of training examples.
- The object-classifier parameters are continuously updated online using co-training, to improve detection rates.
8
Proposed Object Detection Method
[System diagram: background models produce ROIs; appearance feature extraction feeds the appearance base classifiers (color and edge), and motion feature extraction feeds the motion base classifiers; a boosted classifier combines them into the classification output. A confident prediction by one feature set triggers the co-training decision, which updates the weak learners and the boosting parameters.]
9
Background Detection
First level: per-pixel mixture-of-Gaussians color models.
Second level: gradient-magnitude and gradient-direction models with a gradient boundary check, providing feedback to the first level.
[Figures: current image from the video, output of the first level, output of the second level.]
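A minimal sketch of the first level, using OpenCV's built-in mixture-of-Gaussians subtractor (MOG2) as a stand-in for the paper's own per-pixel model; the second-level gradient checks are not shown, and the file name and parameter values are illustrative.

```python
import cv2

# Per-pixel mixture-of-Gaussians background model (MOG2 stand-in).
subtractor = cv2.createBackgroundSubtractorMOG2(
    history=500,        # frames of memory (assumed value)
    varThreshold=16,    # per-pixel match threshold (assumed value)
    detectShadows=False)

cap = cv2.VideoCapture("video.avi")  # hypothetical input video
while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg_mask = subtractor.apply(frame)     # 255 = foreground pixel
    fg_mask = cv2.medianBlur(fg_mask, 5)  # suppress speckle noise
    # connected components of fg_mask give the ROIs fed to the classifiers
cap.release()
```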
10
Features for Object Classification
Base classifiers are learned from global PCA coefficients of appearance and motion templates of image regions. The appearance subspaces are learned by performing PCA separately on a small set of labeled d-dimensional gradient-magnitude images of people and of vehicles.
11
Features for Object Classification
The person and vehicle appearance subspaces are represented by d × m1 and d × m2 projection matrices (S1 and S2) respectively, where m1 and m2 are chosen such that the retained eigenvectors account for 99% of the variance in the respective training data.
12
Features for Object Classification
Appearance features for the base learners are obtained by projecting each training example r into the two subspaces, i.e., computing the coefficient vectors S1ᵀr and S2ᵀr.
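A minimal sketch of both steps (choosing m for 99% variance, then projecting), assuming mean-centering before projection; the random arrays stand in for the labeled gradient-magnitude images (50 per class and 30 × 30 regions, as in the experiments slide), and all variable names are illustrative.

```python
import numpy as np

def pca_subspace(X, var_kept=0.99):
    """X: n x d matrix, one flattened gradient-magnitude image per row.
    Returns (mean, basis) with basis d x m keeping var_kept of the variance."""
    mu = X.mean(axis=0)
    _, s, Vt = np.linalg.svd(X - mu, full_matrices=False)
    frac = np.cumsum(s ** 2) / np.sum(s ** 2)
    m = int(np.searchsorted(frac, var_kept)) + 1  # smallest m reaching 99%
    return mu, Vt[:m].T

people   = np.random.rand(50, 900)   # placeholder person examples, d = 30*30
vehicles = np.random.rand(50, 900)   # placeholder vehicle examples
mu1, S1 = pca_subspace(people)       # S1: d x m1
mu2, S2 = pca_subspace(vehicles)     # S2: d x m2

r = np.random.rand(900)              # one gradient-magnitude example
appearance_features = np.concatenate([S1.T @ (r - mu1),   # m1 coefficients
                                      S2.T @ (r - mu2)])  # m2 coefficients
```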
13
Features for Object Classification
[Figure. Row 1: top three eigenvectors of the person appearance subspace. Row 2: top three eigenvectors of the vehicle appearance subspace.]
14
Features for Object Classification
To obtain motion features, person and vehicle motion subspaces (projection matrices S3 and S4, retaining m3 and m4 dimensions respectively) are constructed from optical-flow examples of people and vehicles. Optical flow is computed using the method of Lucas and Kanade. Motion features for the base learners are obtained by projecting each motion example o into the two subspaces, i.e., computing S3ᵀo and S4ᵀo.
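A minimal sketch of the motion-feature path. OpenCV's built-in Lucas-Kanade routine is sparse, so dense Farneback flow is used here as a stand-in; the synthetic frames and the placeholder bases S3, S4 (which would really be learned by PCA exactly as S1, S2 were) are illustrative.

```python
import cv2
import numpy as np

# Two consecutive grayscale ROI crops (synthetic stand-ins here).
prev = (np.random.rand(30, 30) * 255).astype(np.uint8)
curr = np.roll(prev, 1, axis=1)                 # fake one-pixel motion

flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                    0.5, 3, 15, 3, 5, 1.2, 0)
o = flow.reshape(-1)                            # flatten the (u, v) field

# Placeholder motion bases; in the paper these come from PCA on labeled flow.
S3 = np.random.rand(o.size, 10)
S4 = np.random.rand(o.size, 10)
motion_features = np.concatenate([S3.T @ o, S4.T @ o])
```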
15
Base Classifiers
We use the Bayes classifier as the base classifier. Let c1, c2 and c3 represent the person, vehicle and background classes. Each feature-vector component v_q, where q ranges over 1, …, m1 + m2 + m3 + m4, is used to learn a pdf for each class; each pdf is represented by a smoothed 1-D histogram.
16
Base Classifiers
The classification decision of the q-th base classifier is c_i if p(c_i | v_q) > p(c_j | v_q) for all j ≠ i, i.e., the class with the highest posterior probability given v_q is chosen.
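A minimal sketch of one such base classifier for a single feature component v_q, assuming uniform class priors; the bin count, smoothing kernel, and value range are assumed, not from the paper.

```python
import numpy as np

class HistogramBayes:
    """Smoothed 1-D histogram pdf per class for one feature component."""
    def __init__(self, lo, hi, bins=32):
        self.edges = np.linspace(lo, hi, bins + 1)
        self.pdfs = {}                                  # class -> histogram

    def fit(self, values_by_class):
        for c, vals in values_by_class.items():
            h, _ = np.histogram(vals, bins=self.edges)
            h = np.convolve(h, [0.25, 0.5, 0.25], "same")   # smooth bins
            self.pdfs[c] = (h + 1e-6) / (h.sum() + 1e-6 * len(h))

    def predict(self, v):
        b = int(np.clip(np.digitize(v, self.edges) - 1, 0, len(self.edges) - 2))
        probs = {c: p[b] for c, p in self.pdfs.items()}
        return max(probs, key=probs.get), probs         # argmax over classes

clf = HistogramBayes(lo=-5.0, hi=5.0)
clf.fit({"person":     np.random.randn(100) + 1,
         "vehicle":    np.random.randn(100) - 1,
         "background": np.random.randn(100)})
label, probs = clf.predict(0.7)                         # likely "person"
```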
17
AdaBoost
Boosting is a method for combining many base classifiers into a more accurate "strong" classifier. We use AdaBoost.M1 (Freund & Schapire) to learn the strong classifier from the initial training data and the base classifiers.
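A minimal sketch of the AdaBoost.M1 decision rule: each base classifier h_t votes for a class with weight log(1/β_t), and the strong classifier returns the class with the largest weighted vote. Learning the β_t values from the weighted training error is omitted here.

```python
import math
from collections import defaultdict

def adaboost_m1_predict(base_classifiers, betas, x):
    """base_classifiers: callables returning a class label for x;
    betas: per-classifier weights in (0, 1) learned during boosting."""
    votes = defaultdict(float)
    for h, beta in zip(base_classifiers, betas):
        votes[h(x)] += math.log(1.0 / beta)
    return max(votes, key=votes.get)

# Toy call with two constant "classifiers" and assumed betas.
label = adaboost_m1_predict([lambda x: "person", lambda x: "vehicle"],
                            [0.2, 0.4], x=None)        # -> "person"
```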
18
The Online Co-Training Framework
In general, co-training requires at least two classifiers trained on independent features to label data: examples confidently labeled by one classifier are used to train the other. In our case, each base classifier represents either motion or appearance features. To determine confidence thresholds for each base classifier, we use a validation data set.
19
The Online Co-Training Framework
For class c_i and the j-th base classifier, the confidence threshold is set to the highest probability for c_i that the classifier assigns to any validation example whose true label is not c_i. Consequently, every validation example with probability above the threshold is correctly classified.
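A minimal sketch of this threshold computation, reusing the (label, probs) interface of the HistogramBayes sketch above; names are illustrative.

```python
def confidence_threshold(classifier, validation_set, c_i):
    """Highest probability the classifier assigns to class c_i on any
    validation example that does not actually belong to c_i."""
    return max(classifier.predict(x)[1][c_i]
               for x, y in validation_set if y != c_i)
```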
20
The Online Co-Training Framework
During the test phase, if more than 20% of the appearance-based or of the motion-based classifiers predict the label of an example with probability higher than the validation threshold, the example is selected for online update. An online update is only necessary if the boosted classifier's decision has a small or negative margin; margin thresholds are also computed from the validation set.
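A minimal sketch of the selection rule for one view (appearance or motion), assuming per-classifier, per-class thresholds computed as above; the 20% fraction comes from the slide, while how agreeing labels are combined is an assumption here.

```python
def select_for_cotraining(view_classifiers, view_thresholds, x, frac=0.2):
    """view_thresholds[j][c]: validation threshold of classifier j for class c.
    Returns a confidently predicted label if more than `frac` of the view's
    classifiers are confident, else None."""
    confident = []
    for clf, th in zip(view_classifiers, view_thresholds):
        label, probs = clf.predict(x)
        if probs[label] > th[label]:
            confident.append(label)
    if confident and len(confident) > frac * len(view_classifiers):
        return max(set(confident), key=confident.count)  # majority label
    return None
```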
21
Once an example has been labeled by the co-training mechanism, an online boosting algorithm is used to update the base classifiers and the boosting coefficients.
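The exact procedure is given on the next slide as a figure; below is a plausible sketch in the style of Oza and Russell's online boosting, a natural fit, though the slide text does not name the specific algorithm. The learner interface and the epsilon guard are assumptions.

```python
import numpy as np

def online_boost_update(learners, lam_sc, lam_sw, x, y, rng=np.random):
    """learners: objects with .update(x, y) and .predict(x) (assumed interface).
    lam_sc / lam_sw: running example mass each learner got right / wrong."""
    lam = 1.0                                   # importance of this example
    for t, h in enumerate(learners):
        for _ in range(rng.poisson(lam)):       # weighted update via Poisson
            h.update(x, y)
        if h.predict(x) == y:
            lam_sc[t] += lam
            eps = lam_sw[t] / (lam_sc[t] + lam_sw[t])
            lam *= 0.5 / max(1e-9, 1.0 - eps)   # shrink weight: learner was right
        else:
            lam_sw[t] += lam
            eps = lam_sw[t] / (lam_sc[t] + lam_sw[t])
            lam *= 0.5 / max(1e-9, eps)         # grow weight: learner was wrong
    # the boosting coefficient of learner t is log((1 - eps_t) / eps_t),
    # with eps_t = lam_sw[t] / (lam_sc[t] + lam_sw[t])
```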
22
Online Co-training Algorithm
23
Experiments
Initial training: 50 examples of each class, all scaled to 30 × 30 and vectorized.
Validation set: 20 images per class.
Testing on three sequences.
24
Experiments
Results on Sequence 1.
25
Experiments
Results on Sequence 1.
[Plots: performance over time; performance over number of co-trained examples.]
26
Experiments
Results on Sequence 2.
27
Experiments
Results on Sequence 2.
[Plots: performance over time; performance over number of co-trained examples.]
28
Experiments
Results on Sequence 3.
29
Experiments
Results on Sequence 3.
[Plots: performance over time; performance over number of co-trained examples.]