Learning and Removing Cast Shadows through a Multidistribution Approach Nicolas Martel-Brisson, Andre Zaccarin IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE VOL.29, NO. 7, July 2007

Outline
- Introduction
- Proposed approach and underlying assumptions
- Gaussian Mixture Models
- Learning shadow distributions
- Experiments
- Conclusion

Introduction
- Shadow detection algorithms can be classified as property-based or model-based.
- Property-based: rely on geometry, brightness, and color cues; require no a priori knowledge.
- Model-based: well suited to particular situations.

Introduction: how to describe a shadow
- A shadow reduces luminance values while maintaining chromaticity values.
- In an RGB color space, background values under a cast shadow are proportional to background values under direct lighting (illustrated in the sketch below).
- Cucchiara et al. use the hypothesis that shadows reduce surface brightness and saturation while maintaining hue in the HSV color space.
- Schreer et al. observe that shadows reduce the YUV pixel values linearly.
- etc.
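The RGB proportionality property above can be turned into a simple per-pixel test. The following is a minimal sketch, assuming hypothetical ratio bounds and a chroma-constancy tolerance; the function name and threshold values are illustrative and do not come from the paper.

```python
import numpy as np

def looks_like_shadow_rgb(pixel, background, low=0.4, high=0.9, tol=0.05):
    """Hypothetical chroma-constancy test: a shadowed pixel should be a
    darkened, roughly proportional copy of the background RGB value."""
    ratios = np.asarray(pixel, float) / np.maximum(np.asarray(background, float), 1e-6)
    # Darker than the background, but not too dark, and the three channel
    # ratios stay close to each other (chroma is approximately preserved).
    return (low <= ratios.mean() <= high) and (ratios.max() - ratios.min() <= tol)

# A pixel at ~60% of the background intensity in every channel passes the test.
print(looks_like_shadow_rgb([60, 90, 120], [100, 150, 200]))   # True
```

As the next slides point out, a fixed test like this inevitably trades false shadow labels against missed detections, which is what motivates learning the shadow distributions instead.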

Introduction
However, these algorithms, which rely on chroma constancy under cast shadows, can falsely label pixels as cast shadows.

Introduction
Without a priori knowledge of the scene, the shadow volume defined by property-based shadow detection algorithms cannot be reduced without increasing the number of missed detections. RGB background values under a cast shadow will not necessarily be proportional to RGB values under direct light.

Proposed approach and underlying assumptions
Gaussian Mixture Models (GMM) -> periodic transfer of shadow states -> Gaussian Mixture Shadow Models (GMSM)

GMM vs. GMSM
- What is modeled? GMM: the distribution of a pixel's values over the video sequence, including background, foreground, shadow, etc. GMSM: only shadow values.
- What is it learned from? GMM: all pixel values observed at the same position in the video sequence. GMSM: only pixel values that satisfy a property-based shadow description.
- Input form: GMM: a random variable in a color space. GMSM: the parameters of shadow distributions taken from the GMM.
- Ordering rule: both mixtures keep their states in decreasing order of $\omega_k/\sigma_k$.
- Updating period: GMM: every frame. GMSM: at fixed frame intervals.

Proposed approach and underlying assumptions
- Exploit the repetitiveness of the appearance of cast shadows to learn shadowed surface values.
- Shadow values are not as frequent as background values, but their rate of appearance is higher than that of random foreground values.
- Shadow values are therefore associated with frequently seen distributions that are labeled as foreground by the GMM.

Proposed approach and underlying assumptions
- To prevent these distributions from being quickly discarded, their learning rate is increased, leading to stable shadow distributions.
- A second multidistribution learning process, the Gaussian Mixture Shadow Model (GMSM), stores the parameters of the shadow distributions so they are not discarded like other foreground distributions.
- Complex and changing scene conditions may require learning and storing the parameters of more than one shadow distribution.

Proposed approach and underlying assumptions
Four distinct aspects:
- The properties of shadowed surfaces are learned.
- The approach adapts to the scene's nonuniform and time-varying lighting conditions.
- It can also learn shadow distributions whose RGB values deviate slightly from the hypothesis that they are proportional to the background RGB values.
- Regions where moving cast shadows cannot be detected are excluded from the shadow model.

Gaussian Mixture Models
- A fixed number of states K, typically between 3 and 5, is defined.
- Each pixel value $X_t$ is a sample, in a color space, of a random variable $X$.
- Each state k is described by a Gaussian probability density function $\eta(X;\mu_k,\Sigma_k)$ and an a priori probability $\omega_k$, i.e., by the parameter set $\{\omega_k,\mu_k,\Sigma_k\}$.

Gaussian Mixture Models
- The K states are ordered by decreasing values of $\omega_k/\sigma_k$.
- The first B states, whose combined a priori probability of appearing is greater than a threshold T, are labeled as background states; the other states are labeled as foreground states.

Gaussian Mixture Models
- Pixel value $X_t$ is associated with the state k having the smallest label among the states satisfying $\|X_t-\mu_{k,t}\| < 2.5\,\sigma_{k,t}$.
- If the pixel value cannot be associated with any existing distribution, a new state is created around this value with a low a priori probability, and the least probable state is dropped.

Gaussian Mixture Models
The a priori probabilities and the distribution parameters are updated as
$\omega_{k,t}=(1-\alpha)\,\omega_{k,t-1}+\alpha\,M_{k,t}$,
$\mu_{k,t}=(1-\rho)\,\mu_{k,t-1}+\rho\,X_t$,
$\sigma^2_{k,t}=(1-\rho)\,\sigma^2_{k,t-1}+\rho\,(X_t-\mu_{k,t})^T(X_t-\mu_{k,t})$,
where $\alpha$ is the learning rate, $M_{k,t}$ equals 1 for the matched state and 0 otherwise, and $\rho=\alpha\,\eta(X_t;\mu_k,\sigma_k)$.
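To make the GMM machinery of the preceding slides concrete (matching within 2.5 standard deviations, state replacement, weight update, ordering by $\omega/\sigma$, and background labeling via the threshold T), here is a minimal per-pixel sketch. The class name, the scalar-variance simplification, the use of $\rho=\alpha$, and the default parameter values are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

class PixelGMM:
    """Minimal per-pixel Gaussian mixture in the Stauffer-Grimson style,
    with a single (scalar) variance per state. Illustrative sketch only."""

    def __init__(self, K=4, alpha=0.005, init_var=30.0**2, T=0.7):
        self.K, self.alpha, self.T, self.init_var = K, alpha, T, init_var
        self.w = np.full(K, 1.0 / K)       # a priori probabilities w_k
        self.mu = np.zeros((K, 3))         # state means in the color space
        self.var = np.full(K, init_var)    # state variances

    def update(self, x, match_sigmas=2.5):
        """Update the mixture with pixel value x; return True if x is background."""
        x = np.asarray(x, float)
        d2 = ((x - self.mu) ** 2).sum(axis=1)
        matched = np.where(d2 < (match_sigmas ** 2) * self.var)[0]
        if matched.size:                   # match the best-ranked (smallest label) state
            k = int(matched[0])
            rho = self.alpha               # simplified learning rate (rho = alpha)
            self.mu[k] += rho * (x - self.mu[k])
            self.var[k] += rho * (d2[k] - self.var[k])
        else:                              # no match: replace the least probable state
            k = self.K - 1
            self.mu[k], self.var[k] = x, self.init_var
        M = np.zeros(self.K); M[k] = 1.0   # M_k,t is 1 only for the matched state
        self.w = (1.0 - self.alpha) * self.w + self.alpha * M
        self.w /= self.w.sum()
        # Re-sort states in decreasing order of w / sigma and find the rank of k.
        order = np.argsort(-(self.w / np.sqrt(self.var)))
        self.w, self.mu, self.var = self.w[order], self.mu[order], self.var[order]
        rank = int(np.where(order == k)[0][0])
        # The first B states whose cumulative weight first exceeds T are background.
        B = int(np.searchsorted(np.cumsum(self.w), self.T)) + 1
        return rank < B
```

In a full system, one such mixture is maintained for every pixel position and updated at every frame.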

Learning shadow distributions
Three steps:
- Identification of pixel values that could represent cast shadows.
- Generation of stable shadow distributions within the GMM.
- Learning and storing the parameters of shadow distributions with the GMSM.

Learning shadow distributions: identification of pixels whose values could describe a shadowed surface
- Use a property-based description of shadowed surfaces.
- If a pixel value $X_t$ matches a nonbackground state of the GMM, we verify whether the pixel value matches the description of a shadowed surface for one of the background states $k=1,\dots,B$.

Learning shadow distributions: generation of stable shadow distributions within the GMM.

Learning shadow distributions
When a pixel value is associated with a state, the a priori probability of that state increases as $\omega_{k,t}=(1-\alpha)\,\omega_{k,t-1}+\alpha\,M_{k,t}$, where $\alpha$ is the learning rate and $M_{k,t}$ equals 1 for the state associated with the pixel value and 0 for the other states.

Learning shadow distributions
When the pixel value could describe a shadow over the background surface, we increase the learning rate of the state associated with this pixel value. By imposing a maximum value on the a priori probability of state k=1, we preserve the most frequently appearing foreground states, which are the most likely to be shadow states.
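The boosted-learning-rate idea on this slide could look like the sketch below, which modifies the weight update from the earlier GMM sketch. The boost factor, the cap `w_max`, and the reuse of the earlier hypothetical `looks_like_shadow_rgb` test to decide `is_shadow_candidate` are illustrative assumptions, not the paper's values.

```python
import numpy as np

def update_weights(w, k_matched, alpha, is_shadow_candidate, boost=5.0, w_max=0.6):
    """Hypothetical weight update: shadow-candidate matches learn faster, and
    the top state's a priori probability is capped so that frequently seen
    shadow states are preserved instead of being discarded."""
    M = np.zeros_like(w)
    M[k_matched] = 1.0
    rate = alpha * boost if is_shadow_candidate else alpha   # boosted learning rate
    w = (1.0 - rate) * w + rate * M
    w[0] = min(w[0], w_max)       # cap the a priori probability of state k = 1
    return w / w.sum()
```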

Learning shadow distributions: Gaussian mixture shadow models
- The Gaussian probability density functions of candidate states, with parameters $\{\omega_k,\mu_k,\Sigma_k\}$, are periodically transferred to a second mixture model, the GMSM.
- We first test whether the mean of such a distribution could describe a shadowed surface, using the same property-based description as above.

Learning shadow distributions
- If the test is true, the parameters are compared with the existing GMSM distributions.
- When a match is found, the parameters of the matching GMSM distribution are updated.

Learning shadow distributions
- If there is no match, a new state is added to the GMSM, up to a maximum of $K_s$ states.
- The a priori probabilities are then normalized, and the states are sorted in decreasing order of $\omega_k/\sigma_k$.
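Putting the last three slides together, a hedged sketch of the periodic GMM-to-GMSM transfer might look as follows. The dictionary-based state representation, the matching rule, the learning rate `alpha_s`, and the `looks_like_shadow` callback are illustrative assumptions; only the overall flow (shadow test on the mean, match-or-add, cap at $K_s$ states, normalization, re-sorting) follows the slides.

```python
import numpy as np

def transfer_to_gmsm(candidate_states, gmsm, background_mean, looks_like_shadow,
                     K_s=3, match_sigmas=2.5, alpha_s=0.05):
    """Hypothetical periodic transfer of stable, shadow-like GMM states into
    the shadow mixture (GMSM). Each state is a dict {"w", "mu", "var"}."""
    for state in candidate_states:
        mu = np.asarray(state["mu"], float)
        if not looks_like_shadow(mu, background_mean):     # shadow test on the mean
            continue
        for s in gmsm:                                      # try to match an existing GMSM state
            d2 = float(((mu - s["mu"]) ** 2).sum())
            if d2 < (match_sigmas ** 2) * s["var"]:
                s["mu"] = s["mu"] + alpha_s * (mu - s["mu"])
                s["var"] += alpha_s * (d2 - s["var"])
                s["w"] += alpha_s * (1.0 - s["w"])
                break
        else:                                               # no match: add a new shadow state
            if len(gmsm) >= K_s:
                gmsm.sort(key=lambda s: s["w"])             # drop the least probable state
                gmsm.pop(0)
            gmsm.append({"w": alpha_s, "mu": mu.copy(), "var": float(state["var"])})
    total = sum(s["w"] for s in gmsm) or 1.0
    for s in gmsm:                                          # normalize the a priori probabilities
        s["w"] /= total
    gmsm.sort(key=lambda s: -(s["w"] / np.sqrt(s["var"])))  # decreasing order of w / sigma
    return gmsm
```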

Learning shadow distributions: shadow detection
- If a pixel value $X_t$ is labeled as foreground, it is compared with the shadow states of the GMSM.
- If the matching condition is met for a state k with $k \le B_s$, the pixel is labeled as a moving cast shadow.
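The detection step itself could then be as simple as the following sketch; the value of `B_s` and the matching rule mirror the earlier sketches and are assumptions, not the paper's exact thresholds.

```python
import numpy as np

def is_moving_cast_shadow(x, gmsm, B_s=2, match_sigmas=2.5):
    """Hypothetical shadow test: a pixel already labeled foreground by the GMM
    is relabeled as moving cast shadow if it falls inside one of the first
    B_s (most probable) GMSM shadow states."""
    x = np.asarray(x, float)
    for s in gmsm[:B_s]:
        d2 = float(((x - np.asarray(s["mu"], float)) ** 2).sum())
        if d2 < (match_sigmas ** 2) * s["var"]:
            return True
    return False
```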

Learning shadow distributions: summary of the GMM/GMSM algorithm.


Experiments: YUV-based description of shadowed surfaces
First estimate the attenuation ratio using the luminance component Y, then verify that both the U and V components are reduced by a similar ratio (see the sketch below).
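A minimal sketch of such a YUV test is given below, assuming illustrative ratio bounds and tolerance; none of the threshold values come from the paper.

```python
def yuv_shadow_test(pixel_yuv, bg_yuv, r_min=0.4, r_max=0.95, tol=0.1):
    """Hypothetical YUV shadow description: estimate the attenuation ratio from
    the luminance (Y) channel, then check that the chrominance components
    (U, V) are attenuated by a similar ratio."""
    y, u, v = (float(c) for c in pixel_yuv)
    by, bu, bv = (float(c) for c in bg_yuv)
    r = y / max(by, 1e-6)                      # attenuation estimated from Y
    if not (r_min <= r <= r_max):              # a shadow darkens, but not too much
        return False
    # U and V should shrink by roughly the same ratio as Y.
    return abs(u - r * bu) <= tol * abs(bu) + 1e-6 and \
           abs(v - r * bv) <= tol * abs(bv) + 1e-6
```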

Experiments: YUV-based description of shadowed surfaces
Office scene with complex illumination. (a) Mean value of the first GMM background state. (b) A frame in the sequence. (c) Mean value of the first GMSM state. (d) Mean value of the second GMSM state.

Experiments: YUV-based description of shadowed surfaces
Distributions of the normalized background state (blue) and shadow states (red).

Experiments: YUV-based description of shadowed surfaces
Histogram of the background pixels acquired during the video sequence and the computed background distribution.

Experiments: YUV-based description of shadowed surfaces
Histogram of the cast shadow pixels acquired during the video sequence and the computed shadow distribution.

Experiments: YUV-based description of shadowed surfaces
Foreground and shadow detection. (a) Foreground detection from the GMM. (b) Shadow detection with the first GMSM state. (c) Shadow detection with any of the $B_s$ GMSM states.

(a) Mean value of the first GMM background state. (b) Frame in the sequence. (c) Mean value of the first GMSM state. (d) Foreground detection from the GMM. (e) Shadow detection from the YUV description. (f) Shadow detection from the GMSM. (g) Foreground detection: Img(d) - Img(e). (h) Foreground detection: Img(d) - Img(f).

Experiments: YUV-based description of shadowed surfaces
Shadow volumes from the YUV (gray) and GMSM (blue) models and the background volume (red). The red circle represents the YUV value of the pixel, which belongs to the foreground.

Experiments: YUV-based description of shadowed surfaces
(a) Mean value of the first background state of the GMM. (b) Mean value of the first GMSM state. (c) Frame in the sequence. (d) Mean value of the second GMSM state. (e) Foreground detection from the GMM. (f) Shadow detection from the GMSM.

Experiments: brightness and chromaticity distortion model
Horprasert et al. proposed a pixel-based segmentation model in RGB color space that decomposes each observed value, with respect to the expected background value, into a brightness distortion ($\alpha$) and a chromaticity distortion (CD). The expected background value is computed from N training frames representing the static background.

Experiments: brightness and chromaticity distortion model
Brightness distortion: $\alpha_i = \dfrac{\frac{I_R\,\mu_R}{\sigma_R^2}+\frac{I_G\,\mu_G}{\sigma_G^2}+\frac{I_B\,\mu_B}{\sigma_B^2}}{\left(\frac{\mu_R}{\sigma_R}\right)^2+\left(\frac{\mu_G}{\sigma_G}\right)^2+\left(\frac{\mu_B}{\sigma_B}\right)^2}$
Chromaticity distortion: $CD_i = \sqrt{\left(\frac{I_R-\alpha_i\mu_R}{\sigma_R}\right)^2+\left(\frac{I_G-\alpha_i\mu_G}{\sigma_G}\right)^2+\left(\frac{I_B-\alpha_i\mu_B}{\sigma_B}\right)^2}$
where $(\mu_R,\mu_G,\mu_B)$ and $(\sigma_R,\sigma_G,\sigma_B)$ are the per-channel means and standard deviations of the background.

Experiments: brightness and chromaticity distortion model
During the training phase, the variation $b_i$ of the chromaticity distortion is evaluated as its root mean square over the N training frames, $b_i=\sqrt{\tfrac{1}{N}\sum_{t}(CD_{i,t})^2}$, and the normalized chromaticity distortion is obtained as $\widehat{CD}_i = CD_i/b_i$.

Experiments: brightness and chromaticity distortion model
Pixel labeling:
- Background pixels: small normalized brightness distortion and small normalized chromaticity distortion.
- Cast shadow pixels: small normalized chromaticity distortion and a brightness value lower than the background value (see the sketch below).
- Unclassified pixels are labeled foreground.
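A hedged sketch of this Horprasert-style computation and labeling follows. The threshold values (`tau_cd`, `a_tol`, `alpha_lo`) are illustrative placeholders, and in the original model the distortions are normalized by their training-phase variations before thresholding.

```python
import numpy as np

def label_pixel_bcd(I, mu, sigma, tau_cd=3.0, a_tol=0.1, alpha_lo=0.5):
    """Hypothetical brightness/chromaticity-distortion labeling in the spirit
    of Horprasert et al. mu and sigma are per-channel background mean and
    standard deviation learned from N training frames."""
    I, mu, sigma = (np.asarray(a, float) for a in (I, mu, sigma))
    sigma = np.maximum(sigma, 1e-6)
    # Brightness distortion: the scale alpha that best aligns I with the
    # chromaticity line through the background color mu.
    alpha = np.sum(I * mu / sigma**2) / np.sum((mu / sigma) ** 2)
    # Chromaticity distortion: channel-normalized distance from I to alpha * mu.
    cd = np.sqrt(np.sum(((I - alpha * mu) / sigma) ** 2))
    if cd <= tau_cd:
        if abs(alpha - 1.0) <= a_tol:
            return "background"    # small brightness and chromaticity distortion
        if alpha_lo <= alpha < 1.0:
            return "shadow"        # darker than the background, same chromaticity
    return "foreground"            # everything else
```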

Experiments: brightness and chromaticity distortion model
(a) Mean value of the first GMM background state. (b) Frame in the sequence. (c) Mean value of the first GMSM state. (d) Foreground detection from the GMM. (e) Shadow detection from the brightness and chromaticity distortion model. (f) Shadow detection from the GMSM. (g) Foreground detection: Img(d) - Img(e). (h) Foreground detection: Img(d) - Img(f).

Experiments: brightness and chromaticity distortion model
Shadow volumes from the BCD (gray) and GMSM (blue) models and the background volume (red). The red circle represents the YUV value of the pixel, which belongs to the foreground.

Highway. (a) Mean value of the first background state of the GMM. (b) Mean value of the first GMSM state. (c), (g), and (k) Frames from the sequence. (d), (h), and (l) Foreground detection from the GMM. (e), (i), and (m) Shadow detection from the brightness and chromaticity distortion model. (f), (j), and (n) Shadow detection from the GMSM.

Experiments: brightness and chromaticity distortion model
Highway: background (red), shadow volumes using the BCD model (gray), and the GMSM (blue). The red circle represents the RGB value of the pixel belonging to a foreground car.

Experiments: HSV shadow model
Cucchiara et al. proposed that shadows cast on a surface reduce the brightness (value) component while maintaining chromaticity properties. A pixel is therefore labeled as a cast shadow when its value is attenuated with respect to the background within a given range, while its hue and saturation remain close to the background values.
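A minimal sketch of an HSV shadow test in the spirit of Cucchiara et al. is shown below; the threshold values and the assumption that hue is measured in degrees are illustrative, not the published parameters.

```python
def hsv_shadow_test(pixel_hsv, bg_hsv, alpha=0.4, beta=0.9, tau_s=0.15, tau_h=30.0):
    """Hypothetical HSV shadow test: the value (V) ratio must fall within
    [alpha, beta], while saturation and hue stay close to the background."""
    h, s, v = pixel_hsv
    bh, bs, bv = bg_hsv
    v_ratio = v / max(bv, 1e-6)                # brightness attenuation
    dh = abs(h - bh)
    dh = min(dh, 360.0 - dh)                   # hue is circular (degrees)
    return (alpha <= v_ratio <= beta) and (abs(s - bs) <= tau_s) and (dh <= tau_h)
```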

Conclusion
- The proposed approach uses a GMM to learn, from repetition, the properties of shadowed background surfaces.
- It builds a second mixture model, the GMSM, for moving cast shadows on background surfaces.