Wallflower: Principles and Practice of Background Maintenance

Presentation transcript:

Wallflower: Principles and Practice of Background Maintenance. Costache Theodor, “Hermann Oberth” Faculty of Engineering – Advanced Computing Systems Master

Overview: Introduction, The Wallflower Algorithm, Experiments, Analysis and Principles, Conclusions.

Introduction Video surveillance systems seek to automatically identify people, objects, or events of interest in different kinds of environments. A common element of such surveillance systems is a module that performs background subtraction for differentiating background pixels from foreground pixels. The difficult part of background subtraction is not the differencing itself, but the maintenance of a background model.
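The differencing step itself amounts to a single thresholded comparison per pixel; a minimal sketch is shown below (the function name and the threshold value are illustrative assumptions, not from the slides). The hard part, and the subject of the rest of the presentation, is how the `background` argument is built and kept up to date.

```python
import numpy as np

def subtract(frame, background, tau=25):
    """Per-pixel differencing: flag pixels whose value is far from the
    current background model. tau is an illustrative threshold."""
    return np.abs(frame.astype(int) - background.astype(int)) > tau
```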

Introduction An ideal background maintenance system would be able to avoid the following canonical problems: moved objects, time of day, light switch, waving trees, camouflage, bootstrapping, foreground aperture, sleeping person, waking person, and shadows.

The Wallflower Algorithm Wallflower was intended to solve as many of the canonical problems as possible. To handle problems that occur at different spatial scales, it processes images at the pixel, region, and frame levels. The pixel level makes the preliminary classification of foreground versus background and also handles adaptation to changing backgrounds; this alone avoids many of the common problems: moved objects, time of day, waving trees, camouflage, and bootstrapping. The region level considers inter-pixel relationships that can refine the raw pixel-level classification, which avoids the foreground aperture problem. The frame level addresses the light switch problem: it watches for sudden changes in large parts of the image and swaps in alternate background models that explain as much of the new background as possible.

The Pixel Level
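The original Wallflower paper describes the pixel level as a one-step Wiener (linear predictive) filter: each pixel's next value is predicted as a linear combination of its own recent history, and a pixel is declared foreground when its actual value deviates from the prediction by more than a few times the expected prediction error. Below is a minimal sketch of that idea; the history length, the threshold multiple, and the single shared least-squares predictor (the paper fits coefficients per pixel) are illustrative simplifications, not taken from the slides.

```python
import numpy as np

# Illustrative constants (assumptions).
P = 30          # number of past frames used by the linear predictor
K_SIGMA = 4.0   # foreground threshold, in multiples of the RMS prediction error

def pixel_level(history, current):
    """Classify one frame with a one-step linear predictor per pixel.

    history : float array (P, H, W), the last P frames (oldest first).
    current : float array (H, W), the incoming frame.
    Returns a boolean foreground mask of shape (H, W).
    """
    H, W = current.shape
    X = history.reshape(P, -1).T      # (H*W, P) past values for each pixel
    y = current.reshape(-1)           # (H*W,) actual values

    # One least-squares predictor shared by all pixels (simplification;
    # the paper maintains per-pixel Wiener-filter coefficients).
    a, *_ = np.linalg.lstsq(X, y, rcond=None)
    err = y - X @ a
    rms = np.sqrt(np.mean(err ** 2)) + 1e-6

    # Pixels whose value deviates strongly from the prediction are foreground.
    return (np.abs(err) > K_SIGMA * rms).reshape(H, W)
```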

The Region Level The idea is to segment whole objects, rather than isolated pixels, as foreground. When an object is homogeneously colored, the problem is an instance of the classic vision problem in which a moving homogeneous region exhibits no perceivable motion. Moving homogeneous regions are necessarily bracketed by pixels that occur on the frontier edges of object movement and that exhibit properties identical to those inside the region. So, we cast new foreground regions discovered by the pixel level as seed regions, which are grown by backprojecting the pixel values that occur in the seeds.

The Region Level As each new pair of raw and foreground-marked images, $I_t$ and $F_t$, arrives:
1. Compute image differences: $J_t(x) = 1$ if $|I_t(x) - I_{t-1}(x)| > k_{motion}$, and $0$ otherwise.
2. Compute the subset of pixels that occur at the intersection of adjacent pairs of differenced images and the previous foreground image: $K_t(x) = J_t(x) \wedge J_{t-1}(x) \wedge F_{t-1}(x)$.
3. Find the 4-connected regions $R_i$ in $K_t$, discarding regions consisting of fewer than $k_{min}$ pixels.
4. Compute $H_i$, the normalized histogram of each $R_i$ as projected onto the image $I_{t-1}$ ($s$ is a pixel value): $H_i(s) = |\{x : x \in R_i \text{ and } I_{t-1}(x) = s\}| \,/\, |R_i|$.
5. Backproject the histograms into $I_t$: for each $R_i$, compute $F_{t-1} \wedge R_i$, and from each point in the intersection grow $L_i$, the 4-connected regions in the image, where $L_i(x) = 1$ if $H_i(I_t(x)) > \varepsilon$ and $0$ otherwise.
We use $k_{motion} = 16$, $k_{min} = 8$, and $\varepsilon = 0.1$.
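A sketch of these five steps using NumPy and SciPy (the function and argument names are illustrative; the parameter values are the ones given above):

```python
import numpy as np
from scipy import ndimage

# Parameter values from the slide.
K_MOTION, K_MIN, EPS = 16, 8, 0.1

def region_level(I_t, I_tm1, I_tm2, F_tm1):
    """One region-level pass over the frame at time t.

    I_t, I_tm1, I_tm2 : uint8 grayscale frames at times t, t-1, t-2.
    F_tm1             : boolean foreground mask produced for frame t-1.
    Returns a boolean mask of the grown (backprojected) foreground regions.
    """
    # Steps 1-2: adjacent frame differences intersected with the
    # previous foreground mask.
    J_t   = np.abs(I_t.astype(int)   - I_tm1.astype(int)) > K_MOTION
    J_tm1 = np.abs(I_tm1.astype(int) - I_tm2.astype(int)) > K_MOTION
    K_t = J_t & J_tm1 & F_tm1

    # Step 3: 4-connected seed regions (ndimage.label's default structure
    # is 4-connectivity), discarding tiny ones.
    labels, n = ndimage.label(K_t)
    grown = np.zeros_like(F_tm1, dtype=bool)

    for i in range(1, n + 1):
        R_i = labels == i
        if R_i.sum() < K_MIN:
            continue

        # Step 4: normalized histogram of R_i projected onto I_{t-1}.
        H_i = np.bincount(I_tm1[R_i], minlength=256) / R_i.sum()

        # Step 5: backproject into I_t and keep the 4-connected candidate
        # regions that touch a seed pixel in F_{t-1} ∧ R_i.
        candidate = H_i[I_t] > EPS
        cand_labels, _ = ndimage.label(candidate)
        seed_labels = np.unique(cand_labels[F_tm1 & R_i])
        seed_labels = seed_labels[seed_labels > 0]
        grown |= np.isin(cand_labels, seed_labels)

    return grown
```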

The Frame Level
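As described in the algorithm overview, the frame level watches for sudden changes that affect a large part of the image (e.g. a light switch) and, when one occurs, swaps in whichever stored background model explains as much of the new frame as possible. A minimal sketch of that logic follows; the swap fraction and matching tolerance are illustrative assumptions.

```python
import numpy as np

# Illustrative thresholds (assumptions).
SWAP_FRACTION = 0.7   # fraction of foreground pixels that triggers a swap
MATCH_TOL = 16        # per-pixel tolerance when scoring stored models

def frame_level(current, fg_mask, stored_backgrounds):
    """Swap in an alternate background model after a frame-wide change.

    current            : float array (H, W), the incoming frame.
    fg_mask            : boolean (H, W) mask from the lower levels.
    stored_backgrounds : list of float arrays (H, W), alternate models.
    Returns the stored model that best explains the frame, or None to
    indicate that the current model should be kept.
    """
    if fg_mask.mean() < SWAP_FRACTION or not stored_backgrounds:
        return None   # no sudden frame-wide change detected

    # Score each stored model by the fraction of pixels it explains.
    def explained(bg):
        return np.mean(np.abs(current - bg) < MATCH_TOL)

    return max(stored_backgrounds, key=explained)
```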

Experiments

Experiments

Analysis and Principles Background maintenance seems simple at first, but it turns out to be a problem rich in hard cases and subtle tradeoffs. A set of principles to which background maintenance modules should adhere therefore needs to be established: Principle 1, No Semantics; Principle 2, Proper Initial Segmentation; Principle 3, Stationarity Criteria; Principle 4, Adaptation; Principle 5, Multiple Spatial Levels.

Principle 1 – No Semantics The Wallflower algorithm outputs false positives in the waving trees sequence, where part of the sky, considered background by the pixel level, becomes foreground after region-level processing. The region-level algorithm is therefore an unsound heuristic, whose use is not justified in general because it attempts to extract object semantics from low-level vision. Semantic differentiation of objects should not be handled by the background maintenance module. A background maintenance module provides the default model for everything in a scene that is not modeled explicitly by other processing modules; it should not attempt to extract the semantics of foreground objects on its own. While background maintenance might be useful in determining gross traffic statistics of objects such as people and cars, which can alternately be moving or motionless, attempts to use it alone as a preprocessing step for continuous, accurate tracking are bound to fail.

Principle 2 – Proper Initial Segmentation As long as there is to be a background maintenance module, we must differentiate the task of finding foreground objects from the task of understanding them. Background subtraction should segment objects of interest when they first appear (or reappear) in a scene. In environments where adaptation is necessary, keeping foreground objects marked as foreground indefinitely is not a reasonable task for background modelers, since such maintenance requires semantic understanding of the foreground. Object recognition and tracking can be computationally expensive tasks; good background subtraction eliminates the need to perform them on every subregion of every frame.

Principle 3 – Stationarity Criteria The principle of proper initial segmentation depends on the notion of object salience. Objects are salient when they deviate from some invariant property of the background. The key question, then, is how this invariance is modeled and what it means to deviate from it. Backgrounds, for instance, are not necessarily defined by absence of motion. Consider a fluttering leaf on a tree. As the leaf moves on and off a pixel, that pixel’s value will change radically. No unimodal distribution of pixel values can adequately capture such a background, because these models implicitly assume that the background, apart from some minimal amount of noise, is static. Indeed, all unimodal models failed to capture the complexity in the background required to handle the waving trees and camouflage experiments. An appropriate pixel-level stationarity criterion should be defined. Pixels that satisfy this criterion are declared background and ignored. We define stationarity as that quality of the background that a particular model assumes to be approximately constant. Carelessness in defining the stationarity criterion can lead to the waving trees and camouflage problems.
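For concreteness, here is a running single-Gaussian model per pixel, the kind of unimodal stationarity criterion the slide argues against; the learning rate and deviation threshold are illustrative assumptions. A pixel under a fluttering leaf alternates between leaf and sky values, so no single mean and variance fits it, and it keeps being flagged as foreground.

```python
import numpy as np

# Illustrative parameters (assumptions).
ALPHA = 0.02   # learning rate for the running mean/variance
K = 2.5        # deviation threshold, in standard deviations

class UnimodalBackground:
    """Running single-Gaussian background model per pixel."""

    def __init__(self, first_frame):
        self.mean = first_frame.astype(float)
        self.var = np.full_like(self.mean, 15.0 ** 2)  # initial variance guess

    def classify_and_update(self, frame):
        frame = frame.astype(float)
        # Foreground = pixels far from the single Gaussian mode.
        fg = np.abs(frame - self.mean) > K * np.sqrt(self.var)
        # Adapt the model only where the pixel was judged background.
        bg = ~fg
        self.mean[bg] = (1 - ALPHA) * self.mean[bg] + ALPHA * frame[bg]
        self.var[bg] = (1 - ALPHA) * self.var[bg] + \
            ALPHA * (frame[bg] - self.mean[bg]) ** 2
        return fg
```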

Principle 4 – Adaptation Backgrounds often change, even with liberal definitions of stationarity. The background model must adapt to both sudden and gradual changes in the background. The line between gradual and non-gradual changes should be chosen to maximize the distinction between events that cause them.

Principle 5 – Multiple Spatial Levels Sudden light changes were best handled by normalized block correlation, eigenbackground, and Wallflower. On the other hand, neither eigenbackgrounds nor block correlation deals with the moved object problem or the bootstrapping problem, because they lack adaptive pixel-level models. So, our final principle is the following: background models should take into account changes at differing spatial scales. Most background maintenance algorithms maintain either pixel-wise models or whole-frame models, but not both. Pixel-level models are necessary to solve many of the most common background maintenance problems, while the light switch and time-of-day problems suggest that frame-wide models are useful as well. A good background maintenance system is likely to explicitly model changes that happen at different spatial scales. Much of Wallflower’s success is attributable to its separate models for pixels and frames.

Conclusion Background maintenance, though frequently used in video surveillance applications, is often implemented ad hoc, with little thought given to the formulation of realistic yet useful goals. We presented Wallflower, a system that attempts to solve many of the common problems of background maintenance. Comparison of Wallflower with other algorithms makes a case for the five principles we proposed based on an analysis of the experiments.