Multi-camera Video Surveillance: Detection, Occlusion Handling, Tracking and Event Recognition Oytun Akman.

Overview
- Surveillance Systems
- Single Camera Configuration: Moving Object Detection, Tracking, Event Recognition
- Multi-camera Configuration: Occlusion Handling

Single Camera Configuration
- Moving Object Detection (MOD)
- Tracking
- Event Recognition

Single Camera Configuration – Moving Object Detection (MOD)
Input Image - Background Image = Foreground Mask

Single Camera Configuration – Moving Object Detection (MOD)
- Frame Differencing (M. Piccardi, 1996)
- Eigenbackground Subtraction (N. Oliver, 1999)
- Parzen Window (KDE) Based MOD (A. Elgammal, 1999)
- Mixture of Gaussians Based MOD (W. E. Grimson, 1999)

Single Camera Configuration – MOD – Frame Differencing
- Foreground mask detection
- Background model update
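
The two steps above can be sketched as follows. This is a minimal numpy illustration, not the thesis's implementation; the threshold and learning rate are illustrative values, and the update leaves foreground pixels untouched so moving objects do not pollute the model.

```python
import numpy as np

def frame_difference(frame, background, threshold=25.0):
    """Foreground mask: pixels whose absolute difference from the
    background model exceeds the threshold."""
    return np.abs(frame.astype(float) - background) > threshold

def update_background(frame, background, mask, alpha=0.05):
    """Running-average background update; foreground pixels are left
    unchanged so moving objects do not bleed into the model."""
    blended = (1 - alpha) * background + alpha * frame.astype(float)
    return np.where(mask, background, blended)

# Toy example: a static 4x4 scene with one bright moving "object" pixel.
background = np.full((4, 4), 50.0)
frame = background.copy()
frame[2, 2] = 200.0                      # the moving object
mask = frame_difference(frame, background)
background = update_background(frame, background, mask)
```

Only the single object pixel survives the threshold, and the background at that pixel is preserved rather than blended.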

Single Camera Configuration – MOD – Eigenbackground Subtraction
Principal Component Analysis (PCA):
- Reduce the data dimension
- Capture the major variance
- The reduced data represents the background model
(http://web.media.mit.edu/~tristan/phd/dissertation/chapter5.html)
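
A minimal sketch of the eigenbackground idea, assuming a tiny grayscale scene; the function names and the reconstruction-error threshold are illustrative. Training frames are stacked as rows, PCA keeps the dominant background variation, and a new frame's foreground is wherever it reconstructs poorly from that subspace.

```python
import numpy as np

def train_eigenbackground(frames, n_components=1):
    """Stack training frames as rows and keep the top principal
    components (the 'eigenbackgrounds') plus the mean image."""
    X = np.array([f.ravel() for f in frames], dtype=float)
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_components]

def eigen_foreground(frame, mean, basis, threshold=20.0):
    """Project onto the background subspace, reconstruct, and threshold
    the reconstruction error; moving objects reconstruct poorly."""
    x = frame.ravel().astype(float) - mean
    reconstruction = mean + basis.T @ (basis @ x)
    error = np.abs(frame.ravel() - reconstruction)
    return error.reshape(frame.shape) > threshold

# Training: a static scene under small global illumination changes.
base = np.full((3, 3), 10.0)
frames = [base + k for k in (0.0, 1.0, 2.0)]
mean, basis = train_eigenbackground(frames)

test_frame = base.copy()
test_frame[1, 1] = 100.0            # a moving object appears
mask = eigen_foreground(test_frame, mean, basis)
```

The single principal component captures the illumination change, so only the object pixel exceeds the error threshold.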

Single Camera Configuration – MOD – Parzen Window Based
The probability of observing a given pixel intensity value is estimated nonparametrically from a sample of recent intensities at that pixel.
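
The nonparametric estimate above can be sketched for a single pixel as follows; the Gaussian kernel and bandwidth are illustrative choices, not the thesis's parameters.

```python
import numpy as np

def kde_probability(x, samples, bandwidth=5.0):
    """Parzen-window (KDE) estimate of p(x) from recent intensity
    samples at one pixel, using a Gaussian kernel."""
    samples = np.asarray(samples, dtype=float)
    z = (x - samples) / bandwidth
    kernels = np.exp(-0.5 * z**2) / (bandwidth * np.sqrt(2 * np.pi))
    return kernels.mean()

# A pixel whose recent history hovers around intensity 100:
history = [98, 100, 101, 99, 100, 102]
p_background = kde_probability(100, history)   # high density
p_foreground = kde_probability(200, history)   # essentially zero
# A pixel is classified as foreground when its estimated probability
# under the background sample falls below a threshold.
```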

Single Camera Configuration – MOD – Mixture of Gaussians Based
Each pixel is modeled by a mixture of K Gaussian distributions. The probability of observing pixel value x_N at time N is
P(x_N) = Σ_{j=1}^{K} w_j η(x_N; μ_j, σ_j² I)
(assuming that the R, G, B channels are independent with equal variances).
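
The mixture probability for one pixel can be evaluated as below. The two-component model and its parameters are toy values chosen for illustration; the diagonal covariance σ²I reflects the independent-channel assumption on the slide.

```python
import numpy as np

def mog_probability(x, weights, means, sigmas):
    """P(x_N) = sum_j w_j * eta(x_N; mu_j, sigma_j^2 I) for one pixel,
    with an isotropic covariance (independent R, G, B channels)."""
    x = np.asarray(x, dtype=float)
    d = x.size
    total = 0.0
    for w, mu, s in zip(weights, means, sigmas):
        diff = x - mu
        norm = (2 * np.pi * s**2) ** (d / 2)
        total += w * np.exp(-0.5 * np.dot(diff, diff) / s**2) / norm
    return total

# Two-component model: a dominant dark background mode, a rare bright one.
weights = [0.8, 0.2]
means = [np.array([30.0, 30.0, 30.0]), np.array([200.0, 200.0, 200.0])]
sigmas = [10.0, 10.0]
p_dark = mog_probability([32, 31, 29], weights, means, sigmas)
p_gray = mog_probability([120, 120, 120], weights, means, sigmas)
```

A value near a background mode gets high probability; a value far from every mode is flagged as foreground.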

Single Camera Configuration MOD - Simulation Results

Single Camera Configuration – Tracking
- Object Association
- Mean-shift Tracker (D. Comaniciu, 2003)
- Cam-shift Tracker (G. R. Bradski, 1998)
- Pyramidal Kanade-Lucas-Tomasi Tracker (KLT) (J. Y. Bouguet, 1999)
(A constant-velocity Kalman filter is associated with each tracker.)

Single Camera Configuration – Tracking – Object Association
O_i(t) and O_j(t+1) are the same object if
- their bounding boxes overlap, and
- D(O_i(t), O_j(t+1)) < threshold_md, where D(·,·) is a distance metric between the color histograms of the objects:
  - Kullback-Leibler divergence
  - Bhattacharyya coefficient
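
The two histogram distances and the association rule can be sketched as follows; the distance threshold and the choice of 1 − Bhattacharyya coefficient as the distance are illustrative.

```python
import numpy as np

def bhattacharyya(p, q):
    """Bhattacharyya coefficient between two normalized histograms;
    1.0 means identical, 0.0 means no overlap."""
    return float(np.sum(np.sqrt(p * q)))

def kl_divergence(p, q, eps=1e-10):
    """Kullback-Leibler divergence D(p || q); 0 for identical histograms."""
    p = p + eps
    q = q + eps
    return float(np.sum(p * np.log(p / q)))

def associate(obj_hist, candidate_hists, max_distance=0.3):
    """Match an object to the next-frame candidate whose histogram is
    closest (here: 1 - Bhattacharyya coefficient as the distance)."""
    distances = [1.0 - bhattacharyya(obj_hist, c) for c in candidate_hists]
    best = int(np.argmin(distances))
    return best if distances[best] < max_distance else None

red = np.array([0.9, 0.1, 0.0])
blue = np.array([0.0, 0.1, 0.9])
reddish = np.array([0.8, 0.2, 0.0])
```

`associate(red, [blue, reddish])` picks the reddish candidate; with only the blue candidate available, the distance exceeds the threshold and no association is made.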

Single Camera Configuration – Tracking – Mean-shift Tracker
The similarity between the target model q and the candidate model p(y), where p and q are m-bin color histograms, is measured by the Bhattacharyya coefficient
ρ(y) = Σ_{u=1}^{m} √(p_u(y) q_u)
(http://www.lisif.jussieu.fr/~belaroussi/face_track/CamshiftApproach.htm)

Single Camera Configuration Tracking - Mean-shift Tracker - Simulation Result

Single Camera Configuration – Tracking – Cam-shift Tracker
The backprojection image (a probability distribution image) is calculated, and the mean-shift algorithm is used to find the mode of this distribution around the previous target location.

Single Camera Configuration Tracking – Cam-shift Tracker - Simulation Result

Single Camera Configuration – Tracking – Pyramidal KLT
The optical flow d = [d_x, d_y] of a good feature point (corner) is found by minimizing the error function
ε(d) = Σ_{x∈W} (I(x) − J(x + d))²
over a window W between consecutive frames I and J.
(http://www.suri.it.okayama-u.ac.jp/research/2001/s-takahashi/s-takahashi.html)
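
A single Lucas-Kanade iteration (the building block inside each pyramid level) can be sketched as below. The synthetic image pair and window handling are illustrative; a real pyramidal KLT iterates this per level and per feature point.

```python
import numpy as np

def lucas_kanade_step(I, J):
    """One Lucas-Kanade step: solve the 2x2 normal equations
    (sum grad grad^T) d = sum grad * (I - J) for the displacement d,
    using interior pixels where central differences are accurate."""
    Jy, Jx = np.gradient(J)
    sl = (slice(1, -1), slice(1, -1))
    g = np.array([Jx[sl].ravel(), Jy[sl].ravel()])   # 2 x N gradients
    G = g @ g.T                                      # structure tensor
    b = g @ (I - J)[sl].ravel()
    return np.linalg.solve(G, b)                     # d = [dx, dy]

# Synthetic pair: a quadratic intensity surface translated by (0.2, -0.1),
# so J(x + d) equals I(x) exactly for d = (0.2, -0.1).
X, Y = np.meshgrid(np.arange(10.0), np.arange(10.0))
I = X**2 + Y**2
J = (X - 0.2)**2 + (Y + 0.1)**2
d = lucas_kanade_step(I, J)
```

One linearized step already recovers the subpixel displacement to within the linearization error.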

Single Camera Configuration Tracking - Pyramidal KLT - Simulation Results

Single Camera Configuration – Event Recognition – Hidden Markov Models (HMM)
GM-HMMs, trained on proper object trajectories, are used to model the traffic flow (F. Porikli, 2004; F. Bashir, 2005). A trajectory spans frames m through n, where m is the frame in which the object enters the FOV and n is the frame in which it leaves.
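
Trajectories are scored against a trained model with the Viterbi algorithm; the negated best-path log-probability plays the role of the "Viterbi distance" reported later. The sketch below uses a toy discrete-observation HMM (the transition and emission values are invented for illustration, not trained GM-HMM parameters).

```python
import numpy as np

def viterbi_log(obs, log_pi, log_A, log_B):
    """Log-probability of the best state path for a discrete-observation
    HMM; its negation serves as a 'Viterbi distance' measuring how well
    a trajectory fits the model."""
    delta = log_pi + log_B[:, obs[0]]
    for o in obs[1:]:
        delta = np.max(delta[:, None] + log_A, axis=0) + log_B[:, o]
    return np.max(delta)

# Toy 2-state model: state 0 emits symbol 0, state 1 emits symbol 1,
# with sticky transitions (a proxy for smooth traffic-flow motion).
log_pi = np.log([0.5, 0.5])
log_A = np.log([[0.9, 0.1], [0.1, 0.9]])
log_B = np.log([[0.9, 0.1], [0.1, 0.9]])
normal = [0, 0, 0, 1, 1, 1]       # smooth trajectory
abnormal = [0, 1, 0, 1, 0, 1]     # erratic trajectory
d_normal = -viterbi_log(normal, log_pi, log_A, log_B)
d_abnormal = -viterbi_log(abnormal, log_pi, log_A, log_B)
```

The erratic trajectory gets a larger distance, mirroring how abnormal trajectories stand out in the result tables below.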

Single Camera Configuration Event Recognition – Simulation Result

Multi-camera Configuration
- Background Modeling
- Occlusion Handling
- Tracking
- Event Recognition

Multi-camera Configuration – Background Modeling
Three background modeling algorithms:
- Foreground Detection by Unanimity
- Foreground Detection by Weighted Voting
- Mixture of Multivariate Gaussians Background Model

Multi-camera Configuration – Background Modeling
A common field of view must be defined to specify the region in which the cameras cooperate.

Multi-camera Configuration – Background Modeling – Unanimity
A pixel is marked foreground only if x is foreground in the reference view and its corresponding pixel x′ is foreground in the other view.

Multi-camera Configuration – Background Modeling – Weighted Voting
The weighting coefficients adjust the contributions of the cameras. Generally, the coefficient of the first camera (the reference camera, with better positioning) is larger than that of the second.
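
Both fusion rules can be sketched on binary masks as follows; the weights 0.7/0.3 and the 0.5 decision threshold are assumed values for illustration.

```python
import numpy as np

def fuse_unanimity(mask_a, mask_b):
    """A pixel is foreground only if every camera marks it foreground."""
    return mask_a & mask_b

def fuse_weighted(mask_a, mask_b, w_a=0.7, w_b=0.3, threshold=0.5):
    """Weighted vote; w_a > w_b gives the reference camera more say,
    so a failure of camera B alone cannot blank the final mask."""
    score = w_a * mask_a.astype(float) + w_b * mask_b.astype(float)
    return score > threshold

a = np.array([[True, True], [False, False]])   # reference camera
b = np.array([[True, False], [True, False]])   # second camera
```

Under unanimity only the pixel both cameras agree on survives; under weighted voting the reference camera's vote alone is enough, while the second camera's alone is not.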

Multi-camera Configuration Background Modeling – Weighted Voting

Multi-camera Configuration – Background Modeling – Mixture of Multivariate Gaussians
Each pixel is modeled by a mixture of K multivariate Gaussian distributions over its observations in all camera views.

Multi-camera Configuration – Background Modeling – Mixture of Multivariate Gaussians
(Figure: input image; foreground from the mixture of multivariate Gaussians; foreground from single-camera MOG)

Multi-camera Configuration – Background Modeling – Conclusions
- Projection errors due to the planar-object assumption cause erroneous foreground masks and false segmentation results; cameras must be mounted at high altitudes relative to object heights.
- Background modeling by unanimity eliminates falsely segmented regions, but any camera failure causes a failure in the final mask; this is solved by weighted voting.
- The multivariate MOG method can segment vehicles missed by the single-camera MOG method.

Multi-camera Configuration – Occlusion Handling
- Occlusion is a primary issue in surveillance systems: it causes false foreground segmentation results and tracking failures, and it is difficult to solve with a single-camera configuration.
- Occlusion-free views can be generated using multiple cameras, by exploiting 3D information and the presence of different points of view.

Multi-camera Configuration Occlusion Handling – Block Diagram

Multi-camera Configuration Occlusion Handling - Background Subtraction Foreground masks are obtained using background subtraction

Multi-camera Configuration – Occlusion Handling – Oversegmentation
The foreground mask is oversegmented using the "Recursive Shortest Spanning Tree" (RSST) and K-means algorithms. (Figure: RSST and K-means results)
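
The K-means half of the oversegmentation step can be sketched as below (RSST is omitted here). This is a plain numpy k-means over color vectors; the farthest-point initialization is an assumption added to keep the toy example deterministic, not part of the original method.

```python
import numpy as np

def kmeans_segment(pixels, k=2, iters=10):
    """Plain k-means over color vectors: returns a segment label per
    pixel. Farthest-point initialization keeps this toy deterministic."""
    centers = [pixels[0].astype(float)]
    for _ in range(k - 1):
        d = np.min([np.linalg.norm(pixels - c, axis=1) for c in centers],
                   axis=0)
        centers.append(pixels[np.argmax(d)].astype(float))
    centers = np.array(centers)
    for _ in range(iters):
        dists = np.linalg.norm(pixels[:, None, :] - centers[None, :, :],
                               axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean(axis=0)
    return labels

# Foreground pixels of two differently colored vehicles (RGB vectors):
dark = np.full((10, 3), 40.0)
bright = np.full((10, 3), 220.0)
pixels = np.vstack([dark, bright])
labels = kmeans_segment(pixels, k=2)
```

Each color-coherent group of foreground pixels ends up in its own segment, ready for the epipolar matching step.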

Multi-camera Configuration Occlusion Handling – Top-view Generation

Multi-camera Configuration – Occlusion Handling – Top-view Generation
The corresponding match of a segment is found by comparing the color histograms of the target segment and the candidate segments on the epipolar line. (Figure: RSST and K-means results)

Multi-camera Configuration – Occlusion Handling – Clustering
Segments are grouped by a "shortest spanning tree" algorithm using a weight function over segment pairs. (Figure: RSST and K-means results)

Multi-camera Configuration – Occlusion Handling – Clustering
After cutting the edges with weights greater than a certain threshold, each remaining connected component corresponds to one object. (Figure: RSST and K-means results)
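
The grouping-then-cutting idea can be sketched with a Kruskal-style spanning tree over segment centroids. Euclidean distance stands in for the thesis's weight function (an assumption for illustration); skipping edges above the cut threshold directly yields the connected components.

```python
import numpy as np

def mst_cluster(points, cut_threshold):
    """Grow a shortest spanning tree over the segments (Kruskal with
    union-find), skipping edges heavier than the cut threshold; the
    components of the resulting forest are the object clusters."""
    n = len(points)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path compression
            i = parent[i]
        return i

    edges = sorted(
        (np.linalg.norm(points[i] - points[j]), i, j)
        for i in range(n) for j in range(i + 1, n)
    )
    for w, i, j in edges:
        if w <= cut_threshold:
            parent[find(i)] = find(j)
    return [find(i) for i in range(n)]

# Segment centroids of two vehicles on the top-view plane:
points = np.array([[0.0, 0.0], [1.0, 0.5], [0.5, 1.0],
                   [10.0, 10.0], [11.0, 10.5]])
labels = mst_cluster(points, cut_threshold=2.0)
```

The three nearby segments merge into one object and the two distant ones into another; the long cross-object edges never enter the tree.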

Multi-camera Configuration – Occlusion Handling – Conclusions
- Successful results for partially occluded objects.
- Under strong occlusion, epipolar matching fails and objects are oversegmented or undersegmented; the problem is solved if one of the cameras can see the object without occlusion.
- RSST and K-means give similar results; K-means has better real-time performance.

Multi-camera Configuration – Tracking – Kalman Filters
Tracking is performed in both views using Kalman filters; the advantage is continuous and correct tracking as long as one of the cameras is able to view the object.
2D constant-velocity state model: x = [x, y, ẋ, ẏ]ᵀ; state transition model: x_{k+1} = F x_k + w_k; observation model: z_k = H x_k + v_k.
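
A constant-velocity Kalman filter for one view can be sketched as follows; the process and measurement noise levels are illustrative, not the thesis's tuned values.

```python
import numpy as np

def kalman_cv(dt=1.0, q=1e-3, r=1.0):
    """Constant-velocity model matrices for state [x, y, vx, vy]."""
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], dtype=float)
    H = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0]], dtype=float)   # observe position only
    return F, H, q * np.eye(4), r * np.eye(2)

def kf_step(x, P, z, F, H, Q, R):
    """One predict/update cycle; z is the measured object centroid."""
    x = F @ x                          # predict
    P = F @ P @ F.T + Q
    S = H @ P @ H.T + R                # update
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P

F, H, Q, R = kalman_cv()
x = np.zeros(4)
P = np.eye(4) * 10.0
for t in range(1, 11):                 # object moving at (2, 1) per frame
    x, P = kf_step(x, P, np.array([2.0 * t, 1.0 * t]), F, H, Q, R)
```

After a few frames of consistent centroid measurements, the filter's velocity estimate converges to the true motion, which is what allows prediction through short occlusions.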

Multi-camera Configuration – Tracking – Object Matching
Objects in different views are related to each other via a homography.
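
Transferring an object location between views amounts to applying the homography in homogeneous coordinates. The matrix below is an assumed example (a scale plus translation), not a calibrated one.

```python
import numpy as np

def apply_homography(H, point):
    """Map an image point between views: x' ~ H x in homogeneous
    coordinates, then dehomogenize."""
    x = np.array([point[0], point[1], 1.0])
    xp = H @ x
    return xp[:2] / xp[2]

# Assumed example homography: scale by 2 and translate by (5, 3).
H = np.array([[2.0, 0.0, 5.0],
              [0.0, 2.0, 3.0],
              [0.0, 0.0, 1.0]])
p2 = apply_homography(H, (10.0, 20.0))   # -> (25.0, 43.0)
```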

Multi-camera Configuration Tracking - Example

Multi-camera Configuration Tracking - Example

Multi-camera Configuration Tracking – Simulation Results Multi-camera Tracking

Multi-camera Configuration Tracking – Simulation Results Single-camera Tracking Single-camera Tracking

Multi-camera Configuration Tracking – Simulation Results Multi-camera Tracking

Multi-camera Configuration Tracking – Simulation Results Single-camera Tracking Single-camera Tracking

Multi-camera Configuration – Event Recognition – Trajectories
Trajectories extracted from both views are concatenated to obtain a multi-view trajectory.

Multi-camera Configuration Event Recognition – Training

Multi-camera Configuration – Event Recognition – Viterbi Distances of Training Samples

Object ID   GM_HMM_1   GM_HMM_2   GM_HMM_1+2
1           10.0797    9.90183    19.7285
2           10.2908    10.1049    20.1867
3           10.2266    10.1233    20.1006
4           10.577     10.6716    21.2304
5           9.99018    9.84572    19.6763
6           10.0584    9.85901    19.6572
7           10.0608    9.88434    19.7496
8           10.2821    10.2472    20.3949
9           10.0773    9.8764     19.7181
10          10.3629    10.2508    20.3399
11          10.0322    9.86696    19.6382
12          10.0695    9.92222    19.7072
13          10.1321    9.95447    19.7818
14          10.2666    10.139     20.2119
15          10.2661    10.0629    20.0147
16          10.038     9.92932    19.6548
17          10.126     9.98202    19.7991
18          10.2134    10.108     19.9983
19          10.8046    10.5149    21.3008
20          10.4454    10.2919    20.333
21          10.111     9.90018    19.6983
22          10.1791    9.9294     19.9025
23          10.0511    10.0658    20.1564
24          10.1007    10.2248    20.248
25          10.3782    9.8986     19.9865
26          10.0308    9.9264     19.8682
27          10.0816    10.2139    20.1286

(Columns give the Viterbi distance of each training trajectory to GM_HMM_1, GM_HMM_2, and GM_HMM_1+2.)

Multi-camera Configuration – Event Recognition – Simulation Results with Abnormal Data

Average distance to GM_HMM_1:   10.20
Average distance to GM_HMM_2:   10.06
Average distance to GM_HMM_1+2: 20.04

Object ID   GM_HMM_1   GM_HMM_2   GM_HMM_1+2
28          20.4058    19.9481    45.1818
29          21.2409    19.7736    45.034
30          26.9917    24.7016    55.2278
31          10.7213    10.5773    21.2099
32          10.4648    10.5105    22.1852
33          10.1611    9.97222    19.7785

Multi-camera Configuration – Tracking & Event Recognition – Conclusions
- Tracking gives successful results when the initial segmentation is correct; other tracker algorithms can also be used.
- Event recognition: GM_HMM_1+2 classifies the test data better.

Thank you...

Summary – Surveillance
Single camera configuration:
- Moving object detection: frame differencing, eigenbackground, Parzen window (KDE), mixture of Gaussians
- Tracking: object association, mean-shift tracker, cam-shift tracker, pyramidal Kanade-Lucas-Tomasi tracker (KLT)
- Event recognition
Multi-camera configuration:
- Background modeling: foreground detection by unanimity, foreground detection by weighted voting, mixture of multivariate Gaussian distributions
- Occlusion handling