Multi-camera Video Surveillance: Detection, Occlusion Handling, Tracking and Event Recognition
Oytun Akman
Overview
- Surveillance Systems
- Single Camera Configuration: Moving Object Detection, Tracking, Event Recognition
- Multi-camera Configuration: Occlusion Handling
Single Camera Configuration
Moving Object Detection (MOD) Tracking Event Recognition
Single Camera Configuration Moving Object Detection (MOD)
Input Image − Background Image = Foreground Mask
Single Camera Configuration Moving Object Detection (MOD)
- Frame Differencing (M. Piccardi, 1996)
- Eigenbackground Subtraction (N. Oliver, 1999)
- Parzen Window (KDE) Based MOD (A. Elgammal, 1999)
- Mixture of Gaussians Based MOD (W. E. Grimson, 1999)
Single Camera Configuration MOD – Frame Differencing
- Foreground mask detection: threshold the absolute difference |I_t − B_t|
- Background model update: running average B_{t+1} = (1 − α)·B_t + α·I_t
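The two steps above can be sketched in a few lines of numpy (the threshold and the learning rate α are illustrative choices, not values from the presentation):

```python
import numpy as np

def frame_difference(frame, background, threshold=25, alpha=0.05):
    """Threshold |frame - background| to get a foreground mask,
    then update the background with a running average."""
    diff = np.abs(frame.astype(np.float64) - background)
    mask = diff > threshold
    # Update only background pixels so foreground objects
    # are not absorbed into the model too quickly.
    background = np.where(mask, background,
                          (1 - alpha) * background + alpha * frame)
    return mask, background

# Toy example: a flat background and one bright moving pixel.
bg = np.full((4, 4), 100.0)
frame = bg.copy()
frame[1, 1] = 200.0          # moving object pixel
mask, bg = frame_difference(frame, bg)
```

Selective updating (skipping foreground pixels) is one common variant; updating every pixel with a small α is the other.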
Single Camera Configuration MOD – Eigenbackground Subtraction
Principal Component Analysis (PCA):
- reduce the data dimension
- capture the major variance
- the reduced data (mean image plus top eigenvectors) represents the background model
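A minimal numpy sketch of the eigenbackground idea, assuming a stack of training frames; the number of components and the error threshold are illustrative:

```python
import numpy as np

def eigenbackground(frames, k=2):
    """Learn a PCA background model from training frames
    (each flattened to a vector): mean + top-k eigenvectors."""
    X = frames.reshape(len(frames), -1).astype(np.float64)
    mean = X.mean(axis=0)
    # SVD of the centered data gives the principal components.
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]

def foreground_mask(frame, mean, basis, threshold=20.0):
    """Project the frame onto the eigen-space, reconstruct the
    background, and threshold the reconstruction error."""
    x = frame.ravel().astype(np.float64) - mean
    recon = mean + basis.T @ (basis @ x)
    return np.abs(frame.ravel() - recon).reshape(frame.shape) > threshold

rng = np.random.default_rng(0)
train = 100 + rng.normal(0, 1, size=(10, 8, 8))   # static scene + noise
mean, basis = eigenbackground(train, k=2)
test = train[0].copy()
test[2, 2] += 80                                   # moving object pixel
mask = foreground_mask(test, mean, basis)
```

Moving objects do not lie in the low-dimensional background subspace, so they produce a large reconstruction error.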
Single Camera Configuration MOD – Parzen Window Based
The probability of observing a pixel intensity value is estimated nonparametrically from the recent sample intensities of that pixel
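The Parzen-window estimate for a single pixel can be sketched as follows (Gaussian kernel; the bandwidth is an illustrative choice):

```python
import numpy as np

def kde_probability(x, samples, bandwidth=5.0):
    """Parzen-window estimate of P(x): average of Gaussian kernels
    centred on the recent intensity samples of this pixel."""
    z = (x - samples) / bandwidth
    kernels = np.exp(-0.5 * z**2) / (bandwidth * np.sqrt(2 * np.pi))
    return kernels.mean()

history = np.array([100.0, 102.0, 98.0, 101.0, 99.0])  # background samples
p_bg = kde_probability(100.0, history)   # intensity near the samples
p_fg = kde_probability(200.0, history)   # far from all samples -> foreground
```

A pixel whose current value has probability below a threshold under this estimate is declared foreground.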
Single Camera Configuration MOD – Mixture of Gaussians Based
Each pixel is modeled by a mixture of K Gaussian distributions. The probability of observing pixel value x_N at time N is
P(x_N) = Σ_{k=1..K} w_k · η(x_N; μ_k, Σ_k), with Σ_k = σ_k² I (assuming that R, G, B are independent and have equal variances)
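A sketch of evaluating the mixture for one channel; the weights, means and variances below are made-up values for illustration:

```python
import numpy as np

def mog_probability(x, weights, means, variances):
    """P(x_N) = sum_k w_k * N(x_N; mu_k, sigma_k^2) for a single
    channel; with independent R,G,B the joint probability is the
    product of per-channel mixtures (diagonal covariance)."""
    g = np.exp(-0.5 * (x - means)**2 / variances) / np.sqrt(2 * np.pi * variances)
    return np.sum(weights * g)

# Two-component model: a dominant background mode and a rarer one.
w = np.array([0.7, 0.3])
mu = np.array([100.0, 180.0])
var = np.array([25.0, 25.0])
p_bg = mog_probability(100.0, w, mu, var)   # matches the dominant mode
p_new = mog_probability(30.0, w, mu, var)   # matches no mode -> foreground
```

In the full Grimson/Stauffer scheme, a value matching no component replaces the least probable one and is treated as foreground.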
Single Camera Configuration MOD - Simulation Results
Single Camera Configuration Tracking
- Object Association
- Mean-shift Tracker (D. Comaniciu, 2003)
- Cam-shift Tracker (G. R. Bradski, 1998)
- Pyramidal Kanade-Lucas-Tomasi Tracker (KLT) (J. Y. Bouguet, 1999)
(A constant-velocity Kalman filter is associated with each tracker)
Single Camera Configuration Tracking – Object Association
O_i(t) = O_j(t+1) if
- their bounding boxes overlap, and
- D(O_i(t), O_j(t+1)) < threshold_md, where D(·) is a distance metric between the color histograms of the objects (Kullback-Leibler divergence or Bhattacharyya coefficient)
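The Bhattacharyya-based histogram distance used for association can be sketched as follows (the histograms are assumed already normalized; the 3-bin examples are illustrative):

```python
import numpy as np

def bhattacharyya_distance(p, q):
    """Distance between two normalized color histograms:
    0 for identical histograms, larger for dissimilar ones."""
    bc = np.sum(np.sqrt(p * q))          # Bhattacharyya coefficient
    return np.sqrt(max(0.0, 1.0 - bc))

h1 = np.array([0.5, 0.3, 0.2])
h2 = np.array([0.5, 0.3, 0.2])
h3 = np.array([0.1, 0.1, 0.8])
d_same = bhattacharyya_distance(h1, h2)  # identical appearance -> 0
d_diff = bhattacharyya_distance(h1, h3)
```

An object in frame t is associated with the candidate in frame t+1 whose distance is smallest and below the threshold.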
Single Camera Configuration Tracking – Mean-shift Tracker
The similarity function between the target model q and the candidate model p(y) is the Bhattacharyya coefficient
ρ(y) = Σ_{u=1..m} √( p_u(y) · q_u ),
where p and q are m-bin color histograms
Single Camera Configuration Tracking - Mean-shift Tracker - Simulation Result
Single Camera Configuration Tracking – Cam-shift Tracker
- The backprojection image (probability distribution image) is calculated
- The mean-shift algorithm is used to find the mode of the probability distribution image around the previous target location
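A toy numpy sketch of the backprojection step and of the centroid (mode) estimate that mean-shift iterates toward; the 8-bin histogram and the images are illustrative:

```python
import numpy as np

def backproject(image, hist, bins=8):
    """Replace each pixel by the probability of its intensity bin in
    the target histogram, giving a probability-distribution image."""
    idx = np.clip((image * bins / 256).astype(int), 0, bins - 1)
    return hist[idx]

# Target histogram peaked at bright intensities (the tracked object).
hist = np.zeros(8)
hist[7] = 1.0
img = np.full((4, 4), 50)
img[1:3, 1:3] = 240                  # object region
prob = backproject(img, hist)

# Centroid of the probability mass = one mean-shift step.
ys, xs = np.indices(prob.shape)
cy = (ys * prob).sum() / prob.sum()
cx = (xs * prob).sum() / prob.sum()
```

Cam-shift additionally adapts the search-window size from the zeroth moment of `prob` at each iteration.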
Single Camera Configuration Tracking – Cam-shift Tracker - Simulation Result
Single Camera Configuration Tracking – Pyramidal KLT
The optical flow d = [d_x d_y] of a good feature point (corner) is found by minimizing the error function
ε(d) = Σ_{x=p_x−w_x..p_x+w_x} Σ_{y=p_y−w_y..p_y+w_y} ( I(x, y) − J(x + d_x, y + d_y) )²
over a (2w_x+1)×(2w_y+1) window around the point p
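A single Lucas-Kanade step on a toy image pair; solving the 2×2 normal equations recovers the translation exactly here because the test images are linear in each coordinate:

```python
import numpy as np

def lk_step(I, J):
    """One Lucas-Kanade step: solve the 2x2 normal equations
    G d = b built from the spatial gradients of J and the
    temporal difference I - J, so that I(x) ~ J(x + d)."""
    Jy, Jx = np.gradient(J.astype(np.float64))   # gradients along rows, cols
    It = (I - J).astype(np.float64)
    G = np.array([[np.sum(Jx * Jx), np.sum(Jx * Jy)],
                  [np.sum(Jx * Jy), np.sum(Jy * Jy)]])
    b = np.array([np.sum(Jx * It), np.sum(Jy * It)])
    return np.linalg.solve(G, b)                 # d = [dx, dy]

x, y = np.meshgrid(np.arange(5.0), np.arange(5.0))
I = x * y              # template frame (gradient varies -> no aperture problem)
J = (x - 1.0) * y      # same pattern translated by dx = +1
d = lk_step(I, J)
```

The pyramidal variant runs this step coarse-to-fine so large displacements still fall within the linearization range at the coarsest level.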
Single Camera Configuration Tracking - Pyramidal KLT - Simulation Results
Single Camera Configuration Event Recognition - Hidden Markov Models (HMM)
GM-HMMs, trained on proper object trajectories, are used to model the traffic flow (F. Porikli, 2004) (F. Bashir, 2005)
m: starting frame number, in which the object enters the FOV
n: end frame number, in which the object leaves the FOV
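Classifying a trajectory against a trained GM-HMM relies on Viterbi scoring; a minimal log-domain Viterbi sketch, with a made-up two-state left-to-right model and observation likelihoods:

```python
import numpy as np

def viterbi_log(obs_loglik, log_init, log_trans):
    """Viterbi in the log domain: best state path and its
    log-likelihood (the 'Viterbi distance' up to a sign)."""
    T, N = obs_loglik.shape
    delta = log_init + obs_loglik[0]
    psi = np.zeros((T, N), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + log_trans          # scores[i, j]: i -> j
        psi[t] = np.argmax(scores, axis=0)           # best predecessor of j
        delta = scores[psi[t], np.arange(N)] + obs_loglik[t]
    path = [int(np.argmax(delta))]
    for t in range(T - 1, 0, -1):                    # backtrack
        path.append(int(psi[t, path[-1]]))
    return path[::-1], float(delta.max())

# Two-state left-to-right model over a 4-frame trajectory.
log_init = np.log(np.array([1.0, 1e-12]))
log_trans = np.log(np.array([[0.7, 0.3], [1e-12, 1.0]]))
obs = np.log(np.array([[0.9, 0.1], [0.8, 0.2], [0.2, 0.8], [0.1, 0.9]]))
path, loglik = viterbi_log(obs, log_init, log_trans)
```

A trajectory is classified by the model giving the best Viterbi score; an abnormally large distance to all trained models flags an abnormal event.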
Single Camera Configuration Event Recognition – Simulation Result
Multi-camera Configuration
Background Modeling Occlusion Handling Tracking Event Recognition
Multi-camera Configuration Background Modeling
Three background modeling algorithms:
- Foreground Detection by Unanimity
- Foreground Detection by Weighted Voting
- Mixture of Multivariate Gaussians Background Model
Multi-camera Configuration Background Modeling
A common field of view must be defined to specify the region in which the cameras will cooperate
Multi-camera Configuration Background Modeling - Unanimity
If (x is foreground) && (x′ is foreground) → foreground,
where x and x′ are corresponding pixels in the two views
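In code, unanimity is just a logical AND of the per-camera masks after they have been warped into the common field of view (the tiny masks below are illustrative):

```python
import numpy as np

# Foreground masks from two cameras, assumed already warped into
# the common field of view via the ground-plane homography.
mask_cam1 = np.array([[1, 1, 0],
                      [0, 1, 0]], dtype=bool)
mask_cam2 = np.array([[1, 0, 0],
                      [0, 1, 1]], dtype=bool)

# Unanimity: a pixel is foreground only if every camera agrees,
# which suppresses single-view false detections (e.g. shadows).
fused = mask_cam1 & mask_cam2
```

The price of this strictness is fragility: if one camera misses the object, the fused mask misses it too, which motivates weighted voting on the next slide.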
Multi-camera Configuration Background Modeling – Weighted Voting
Writing the weights as w₁ and w₂, these are the coefficients that adjust the contributions of the cameras. Generally, the contribution of the first camera (the reference camera, with better positioning) is larger than that of the second one: w₁ > w₂
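A sketch of the weighted vote; the function name, the normalization, and the 0.5 decision threshold are assumptions for illustration, not values from the presentation:

```python
import numpy as np

def weighted_vote(masks, weights, threshold=0.5):
    """Fuse per-camera foreground masks with camera-confidence
    weights; a pixel is foreground if the weighted vote passes
    the threshold, so one failed camera cannot blank the result."""
    weights = np.asarray(weights, dtype=np.float64)
    weights = weights / weights.sum()            # normalize to sum to 1
    vote = sum(w * m.astype(np.float64) for w, m in zip(weights, masks))
    return vote >= threshold

m1 = np.array([[1, 1, 0]], dtype=bool)   # reference camera (weight 0.7)
m2 = np.array([[0, 1, 1]], dtype=bool)   # second camera   (weight 0.3)
fused = weighted_vote([m1, m2], [0.7, 0.3])
```

With these weights, a detection seen only by the reference camera survives, while one seen only by the second camera is rejected.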
Multi-camera Configuration Background Modeling – Weighted Voting
Multi-camera Configuration Background Modeling – Mixture of Multivariate Gaussians
Each pixel is modeled by a mixture of K multivariate Gaussian distributions,
P(x_N) = Σ_{k=1..K} w_k · η(x_N; μ_k, Σ_k),
where x_N stacks the corresponding pixel values from the cooperating views
Multi-camera Configuration Background Modeling – Mixture of Multivariate Gaussians
[Figure: input image; mixture-of-multivariate-Gaussians result; single-camera MOG result]
Multi-camera Configuration Background Modeling - Conclusions
- Projection errors due to the planar-object assumption → erroneous foreground masks and false segmentation results; cameras must be mounted high relative to object heights
- Background modeling by unanimity: false segmented regions are eliminated, but any camera failure → failure in the final mask; this is solved by weighted voting
- With the multivariate MOG method, vehicles missed by the single-camera MOG method can be segmented
Multi-camera Configuration Occlusion Handling
- The primary issue for surveillance systems: false foreground segmentation results, tracking failures
- Difficult to solve with a single-camera configuration
- Occlusion-free view generation using multiple cameras: utilization of 3D information, presence of different points of view
Multi-camera Configuration Occlusion Handling – Block Diagram
Multi-camera Configuration Occlusion Handling - Background Subtraction
Foreground masks are obtained using background subtraction
Multi-camera Configuration Occlusion Handling – Oversegmentation
The foreground mask is oversegmented using the “Recursive Shortest Spanning Tree” (RSST) and K-means algorithms
[Figure: RSST result | K-means result]
Multi-camera Configuration Occlusion Handling – Top-view Generation
Multi-camera Configuration Occlusion Handling – Top-view Generation
The corresponding match of a segment is found by comparing the color histograms of the target segment and the candidate segments on the epipolar line
[Figure: RSST result | K-means result]
Multi-camera Configuration Occlusion Handling – Clustering
Segments are grouped by a “shortest spanning tree” algorithm using the weight function
[Figure: RSST result | K-means result]
Multi-camera Configuration Occlusion Handling – Clustering
After cutting the edges greater than a certain threshold:
[Figure: RSST result | K-means result]
Multi-camera Configuration Occlusion Handling – Conclusions
- Successful results for partially occluded objects
- Under strong occlusion: epipolar matching fails, objects are oversegmented or undersegmented; the problem is solved if one of the cameras can see the object without occlusion
- RSST and K-means give similar results; K-means has better real-time performance
Multi-camera Configuration Tracking – Kalman Filters
Advantage: continuous and correct tracking as long as one of the cameras is able to view the object. Tracking is performed in both views using Kalman filters.
2D state model: x_k = [x, y, v_x, v_y]ᵀ
State transition model: x_{k+1} = F x_k + w_k (constant velocity)
Observation model: z_k = H x_k + v_k, where z_k is the measured object position
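The constant-velocity filter can be sketched directly from the models above; the process and measurement noise covariances Q and R are illustrative tuning values:

```python
import numpy as np

# Constant-velocity model: state [x, y, vx, vy], observation [x, y].
dt = 1.0
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=np.float64)
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=np.float64)
Q = 0.01 * np.eye(4)     # process noise (tuning assumption)
R = 1.0 * np.eye(2)      # measurement noise (tuning assumption)

def kalman_step(x, P, z):
    """One predict + update cycle for the object centroid."""
    x = F @ x                                  # predict state
    P = F @ P @ F.T + Q                        # predict covariance
    S = H @ P @ H.T + R                        # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)             # Kalman gain
    x = x + K @ (z - H @ x)                    # correct with measurement
    P = (np.eye(4) - K @ H) @ P
    return x, P

x = np.array([0.0, 0.0, 1.0, 0.0])     # moving right at 1 px/frame
P = np.eye(4)
for t in range(1, 6):
    x, P = kalman_step(x, P, np.array([float(t), 0.0]))
```

When the object is occluded in one view, that view's filter can simply keep predicting while the other view continues to supply corrected positions.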
Multi-camera Configuration Tracking – Object Matching
Objects in different views are related to each other via homography
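Point transfer under a homography can be sketched as follows, with a made-up similarity transform standing in for the real inter-view homography:

```python
import numpy as np

def transfer(H, pt):
    """Map a ground-plane point from one view to the other:
    x' ~ H x in homogeneous coordinates."""
    x = np.array([pt[0], pt[1], 1.0])
    xp = H @ x
    return xp[:2] / xp[2]          # dehomogenize

# Illustrative homography (a similarity: scale 2, translate (10, 5)).
H = np.array([[2.0, 0.0, 10.0],
              [0.0, 2.0,  5.0],
              [0.0, 0.0,  1.0]])
p2 = transfer(H, (3.0, 4.0))       # object foot point in camera 1
```

An object in view 1 is matched to the view-2 object whose foot point lies closest to the transferred point.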
Multi-camera Configuration Tracking - Example
Multi-camera Configuration Tracking - Example
Multi-camera Configuration Tracking – Simulation Results
Multi-camera Tracking
Multi-camera Configuration Tracking – Simulation Results
Single-camera Tracking (one video per view)
Multi-camera Configuration Tracking – Simulation Results
Multi-camera Tracking
Multi-camera Configuration Tracking – Simulation Results
Single-camera Tracking (one video per view)
Multi-camera Configuration Event Recognition - Trajectories
Extracted trajectories from both of the views are concatenated to obtain a multi-view trajectory
Multi-camera Configuration Event Recognition – Training
Multi-camera Configuration Event Recognition – Viterbi Distances of Training Samples
[Table: Viterbi distances of training objects 1–27 to GM_HMM_1, GM_HMM_2 and GM_HMM_1+2; most entries were lost in extraction, with surviving values ranging from about 9.88 to 20.33]
Multi-camera Configuration Event Recognition – Simulation Results with Abnormal Data
Average distances to GM_HMM_1, GM_HMM_2 and GM_HMM_1+2 (values lost in extraction)
[Table: Viterbi distances of abnormal-data objects 28–33 to the three models; most entries were lost in extraction]
Multi-camera Configuration Tracking & Event Recognition - Conclusions
- Successful results, given a correct initial segmentation
- Other tracker algorithms can be used
- Event recognition: GM_HMM_1+2 classifies the test data better
Thank you...
Summary - Surveillance
Single camera configuration
- Moving object detection: frame differencing, eigenbackground, Parzen window (KDE), mixture of Gaussians
- Tracking: object association, mean-shift tracker, cam-shift tracker, pyramidal Kanade-Lucas-Tomasi tracker (KLT)
- Event recognition
Multi-camera configuration
- Background modeling: foreground detection by unanimity, foreground detection by weighted voting, mixture of multivariate Gaussian distributions
- Occlusion handling