Robust Foreground Detection in Video Using Pixel Layers. Kedar A. Patwardhan, Guillermo Sapiro, and Vassilios Morellas. IEEE Transactions on Pattern Analysis and Machine Intelligence.

Presentation transcript:

Robust Foreground Detection in Video Using Pixel Layers. Kedar A. Patwardhan, Guillermo Sapiro, and Vassilios Morellas. IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 30, No. 4, April 2008.

Outline
- Typical nonparametric background modeling: kernel density estimation
- Introduction
- Automatic image layering
- Foreground detection in video
- Implementation details and experimental results

Nonparametric kernel density estimation: estimate the PDF directly from the data, without any assumptions about the underlying distribution.
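As a minimal sketch (not the authors' code) of what such an estimate looks like for a single pixel, the snippet below builds a Gaussian kernel density estimate of the intensity distribution from that pixel's past samples; the bandwidth `h` and the toy sample array are illustrative assumptions.

```python
import numpy as np

def kde_pdf(x, samples, h=5.0):
    """Gaussian kernel density estimate of p(x) from past intensity samples.

    x       : intensity value (or array of values) to evaluate
    samples : 1-D array of previously observed intensities for this pixel
    h       : kernel bandwidth (illustrative choice)
    """
    x = np.atleast_1d(x).astype(float)[:, None]    # shape (M, 1)
    s = np.asarray(samples, dtype=float)[None, :]  # shape (1, N)
    kernels = np.exp(-0.5 * ((x - s) / h) ** 2) / (np.sqrt(2 * np.pi) * h)
    return kernels.mean(axis=1)                    # average kernel response

# Example: how likely is intensity 120 given this pixel's history?
history = np.array([118, 119, 121, 122, 117, 200])  # toy data
print(kde_pdf(120, history))
```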

Introduction
A framework for robust foreground detection that works under difficult conditions such as a dynamic background and a moderately moving camera. The proposed method has two components:
- Coarse scene representation as the union of pixel layers
- Foreground detection in video by propagating these layers using a maximum-likelihood assignment

Introduction
Detection challenges:
- Dynamic background (water ripples, swaying trees, etc.)
- Camera motion
- Real-time detection requirements

Introduction Modeling a video scene as a group of layers instead of single pixels.

Introduction
Main contributions:
- A principled way of extracting scene layers and automatically computing their number
- Different detection thresholds for different layers, obtained via a principled approach for threshold computation
- Conversion of foreground layers to background and vice versa based on global layer models
- A notion of background memory for each pixel, which helps reduce false detections when the foreground disoccludes the background

Introduction
The main reasons for modeling the scene as a group of layers instead of individual pixels:
1) To exploit spatio-temporal correlation between pixels
2) To use other similar pixels in the scene to model a pixel x, giving a better nonparametric estimate of the process that generated x
3) To handle nominal camera motion without explicit registration, since we are not constrained to look at instances of x at exactly the same spatial location in every frame


Automatic image layering

Extracting a layer: initial guess
Compute a local maximum of the image histogram (say, at gray value g) and a radius r equal to the square root of the trace of the global covariance matrix. All pixels with gray value between g - r and g + r form our initial guess, or "layer candidate" (L_C), of the layer to be extracted.
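A rough sketch of this initial guess, under explicit assumptions (grayscale image in [0, 255], the strongest histogram peak used as the local maximum, and the covariance taken over the single intensity channel, so its trace reduces to the intensity variance):

```python
import numpy as np

def initial_layer_candidate(gray):
    """Initial 'layer candidate' L_C: pixels whose gray value lies within a
    radius r of a local maximum g of the image histogram.

    Sketch only, not the paper's exact procedure:
      - grayscale image in [0, 255]
      - the strongest histogram peak serves as the local maximum
      - r = sqrt(trace of global covariance); for one gray channel this is
        just the standard deviation of the intensities
    """
    hist, edges = np.histogram(gray, bins=256, range=(0, 256))
    g = edges[np.argmax(hist)]               # gray value of the histogram peak
    r = np.sqrt(np.var(gray.astype(float)))  # sqrt(trace(cov)) for 1 channel
    mask = (gray >= g - r) & (gray <= g + r) # layer-candidate mask L_C
    return mask, g, r
```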

Automatic image layering

Extracting a layer: refinement step, using a Sampling-Expectation (SE) approach.

Automatic image layering
Three main steps in this refinement:
Initial step: start with an initial (spatial) probability distribution over the image pixels, where pixels belonging to L_C get high values and pixels not in L_C get low values.
S-step: the image is uniformly sampled to obtain a set of samples. A sample size of about 10 to 20 percent of the pixels in the image has been found to be satisfactory.

Automatic image layering The likelihood of a pixel belonging to one of the two processes is refined using a weighted Kernel Density Estimation (KDE).

Automatic image layering
E-step: the layer and non-layer probability distributions are re-estimated from the weighted samples.
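The snippet below is a hedged sketch of one Sampling-Expectation iteration combining the S-step and E-step above; the feature choice (gray value plus pixel coordinates), the bandwidths, and the 15 percent sample rate are assumptions for illustration, and the brute-force kernel evaluation is only practical for small images.

```python
import numpy as np

def se_iteration(gray, p_layer, sample_frac=0.15, h_int=10.0, h_sp=15.0):
    """One S-step + E-step refining the probability that each pixel belongs
    to the layer (vs. the rest of the image). Illustration for small images;
    a real implementation would chunk or approximate the kernel sums.

    gray    : (H, W) grayscale image
    p_layer : (H, W) current probability of belonging to the layer
    """
    H, W = gray.shape
    ys, xs = np.mgrid[0:H, 0:W]
    feats = np.stack([gray.ravel(), ys.ravel(), xs.ravel()], axis=1).astype(float)
    p = p_layer.ravel()

    # S-step: uniformly sample a small fraction of the pixels.
    n = feats.shape[0]
    idx = np.random.choice(n, size=int(sample_frac * n), replace=False)
    s_feats, s_p = feats[idx], p[idx]

    # Weighted KDE of the two processes: samples are weighted by their current
    # layer / non-layer probabilities.
    bw = np.array([h_int, h_sp, h_sp])
    d = (feats[:, None, :] - s_feats[None, :, :]) / bw   # (n, n_samples, 3)
    k = np.exp(-0.5 * (d ** 2).sum(axis=2))              # Gaussian kernel
    lik_layer = (k * s_p).sum(axis=1) / (s_p.sum() + 1e-9)
    lik_rest = (k * (1 - s_p)).sum(axis=1) / ((1 - s_p).sum() + 1e-9)

    # E-step: updated (spatial) probability of belonging to the layer.
    return (lik_layer / (lik_layer + lik_rest + 1e-9)).reshape(H, W)
```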

Automatic image layering
Extracting a layer: multiple layers and validation. To ascertain that the extracted layer is both meaningful and of significant size, a test based on the Kullback-Leibler (KL) divergence is proposed. As long as the data (image) supports the initial guess and the refined layer is meaningful, this condition will hold.

Automatic image layering
Reference case: before beginning the layer extraction process, assume that the entire image is the candidate layer (i.e., r = infinity); after a few refinement steps, compute the KL divergence.
Actual candidate: find the real candidate layer (finite r), perform the refinement, and once this layer is stable, compute the KL divergence.
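One way this comparison could be implemented, as a sketch rather than the paper's exact test: keep the layer only if its KL divergence from the whole-image distribution clearly exceeds the divergence obtained in the "entire image as candidate" reference run. The histogram-based densities and the acceptance margin are assumptions.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-9):
    """Discrete KL divergence D(p || q) between two histograms (normalized here)."""
    p = p / (p.sum() + eps)
    q = q / (q.sum() + eps)
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def layer_is_valid(hist_layer, hist_image, hist_reference, margin=1.0):
    """Accept the refined layer only if it diverges from the whole-image
    distribution more strongly than the reference (whole-image candidate) run.

    hist_layer, hist_image, hist_reference : intensity histograms (1-D arrays)
    margin : illustrative acceptance factor
    """
    d_layer = kl_divergence(hist_layer, hist_image)
    d_ref = kl_divergence(hist_reference, hist_image)
    return d_layer > margin * d_ref
```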

Automatic image layering
Some pixels may be classified as belonging to multiple layers. These pixels, along with the residual unassigned pixels, are assigned to one of the layers in their spatial vicinity using maximum likelihood. The video scene is thus represented by N layers, where N is automatically computed.

Foreground Detection in video
Assign all incoming pixels to one of the following:
- One of the existing layers
- The outlier/foreground layer (L_0)
- A completely new background layer, when they are identified as part of a temporally persistent (or uninteresting) foreground object

Foreground Detection in video Density estimation

Foreground Detection in video

A pixel can be detected as a foreground pixel (outlier) in two cases:
- When the pixel does not belong to any background layer
- When the pixel is assigned to the layer of outliers (L_0) by maximum-likelihood assignment
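A sketch of this assignment rule: each incoming pixel goes to the layer (including the outlier layer L_0) with the highest likelihood, and is flagged as foreground when no background layer explains it well enough or when L_0 wins. The per-layer likelihood values and thresholds are placeholders, not the paper's exact quantities.

```python
def assign_pixel(likelihoods, thresholds):
    """Assign a pixel to a layer by maximum likelihood.

    likelihoods : dict {layer_id: likelihood of the pixel under that layer},
                  where layer_id 0 is reserved for the outlier layer L_0
    thresholds  : dict {layer_id: minimum likelihood to accept a background layer}
    Returns (layer_id, is_foreground).
    """
    best = max(likelihoods, key=likelihoods.get)
    # Case 1: the winning background layer does not explain the pixel well enough.
    if best != 0 and likelihoods[best] < thresholds[best]:
        return 0, True
    # Case 2: the outlier layer L_0 wins the maximum-likelihood assignment.
    if best == 0:
        return 0, True
    return best, False

# Toy usage with made-up likelihoods for layers L_0..L_2:
print(assign_pixel({0: 0.01, 1: 0.30, 2: 0.05}, {1: 0.10, 2: 0.10}))
```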

Foreground Detection in video
Automatic threshold computation: depending on the homogeneity and integrity of the pixels belonging to a layer, each layer needs a different detection threshold to achieve the same "number of false alarms".
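One plausible way to realize per-layer thresholds with an equal false-alarm rate, offered as a sketch: for each layer, pick the likelihood value below which only a fixed fraction of that layer's own pixels fall. The quantile-based rule is an assumption, not necessarily the paper's exact formula.

```python
import numpy as np

def per_layer_thresholds(layer_likelihoods, false_alarm_rate=0.01):
    """Compute one detection threshold per layer so that each layer yields
    (approximately) the same fraction of false alarms on its own pixels.

    layer_likelihoods : dict {layer_id: 1-D array of likelihoods of pixels
                              known to belong to that layer}
    """
    return {lid: float(np.quantile(lik, false_alarm_rate))
            for lid, lik in layer_likelihoods.items()}

# Toy usage: a tight layer gets a higher threshold than a diffuse one.
rng = np.random.default_rng(0)
ths = per_layer_thresholds({1: rng.normal(0.8, 0.05, 1000),
                            2: rng.normal(0.5, 0.20, 1000)})
print(ths)
```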

Foreground Detection in video

Temporal persistence and region background model
Most state-of-the-art algorithms change the background model to accommodate persistent objects after a particular time has elapsed.

Temporal persistence and region background model
Persistent foreground: after a particular time threshold t_persist, we convert the persistent foreground object into a completely new background layer.
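A minimal bookkeeping sketch of this rule: count how long each pixel has stayed labelled as foreground and, once the count exceeds t_persist, hand it over to a new background layer. The per-pixel counter granularity and the reset behavior are illustrative assumptions.

```python
import numpy as np

class PersistenceTracker:
    """Track per-pixel foreground persistence and flag long-lived foreground
    for promotion into a new background layer after t_persist frames."""

    def __init__(self, shape, t_persist=100):
        self.count = np.zeros(shape, dtype=int)
        self.t_persist = t_persist

    def update(self, fg_mask):
        """fg_mask: boolean (H, W) foreground mask for the current frame.
        Returns a boolean mask of pixels to move into a new background layer."""
        self.count[fg_mask] += 1   # persistent foreground keeps counting up
        self.count[~fg_mask] = 0   # any background assignment resets the counter
        promote = self.count >= self.t_persist
        self.count[promote] = 0    # promoted pixels start over
        return promote
```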

Temporal persistence and region background model Region level background model

Temporal persistence and region background model
Background memory: let pixel y at spatial location (x, y) be assigned to a sequence of layers over time. If y is assigned to a persistent outlier that occludes the background layers, we keep testing the pixel y against the remembered background layer models.
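A sketch of this background-memory idea: remember, for each pixel, the last background layer it belonged to together with a few sample values, and when a persistent outlier disoccludes, test the pixel against that remembered model first so it returns to background without a false detection. The data structure and tolerance are assumptions made for illustration.

```python
import numpy as np

class BackgroundMemory:
    """Per-pixel memory of the last background layer and a few of its samples,
    so disoccluded pixels can be re-checked against the old background model."""

    def __init__(self, shape, n_samples=10):
        self.layer_id = np.full(shape, -1, dtype=int)        # last background layer
        self.samples = np.zeros(shape + (n_samples,), dtype=float)
        self.pos = np.zeros(shape, dtype=int)

    def remember(self, y, x, layer_id, value):
        """Store a background observation for pixel (y, x)."""
        self.layer_id[y, x] = layer_id
        self.samples[y, x, self.pos[y, x] % self.samples.shape[-1]] = value
        self.pos[y, x] += 1

    def matches(self, y, x, value, tol=15.0):
        """While (y, x) is occluded by a persistent outlier, keep testing new
        values against the remembered background; True means 'still background'."""
        if self.layer_id[y, x] < 0:
            return False
        stored = self.samples[y, x, :min(self.pos[y, x], self.samples.shape[-1])]
        return bool(stored.size) and bool(np.any(np.abs(stored - value) <= tol))
```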

Implementation details and experimental results