CSE 291 Final Project: Adaptive Multi-Spectral Differencing Andrew Cosand UCSD CVRR
Differencing Detect changes in a sequence of images. Pixels of a reference image are subtracted from the corresponding pixels of the current image to determine how different they are. Pixels whose difference exceeds some threshold are assumed to correspond to different objects in the two images.
Differencing Reference Image – Current Image = Difference
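A minimal sketch of this step in Python/NumPy (the threshold value and array handling are illustrative assumptions, not the project's actual code):

    import numpy as np

    def difference_mask(reference, current, threshold=30):
        # Absolute per-pixel difference between current and reference frames.
        diff = np.abs(current.astype(np.int16) - reference.astype(np.int16))
        # Pixels exceeding the threshold are flagged as "changed".
        return diff > threshold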
Problems Differences other than the object of interest may show up. –Pixel noise –Moving background objects (trees, water) –Lighting changes –Camera movement (small) –Shadows & Reflections
Pixel Noise
Solutions Variations can be included in a background model. –Reference frame may use, e.g., Gaussian mixture models to characterize pixels –Reference frame can be updated at different rates: very slow basically detects changes from when the system was started, very fast detects changes from the previous frame.
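A rough sketch of the update-rate idea using a simple running-average reference (the learning rate alpha is an assumed parameter; the Gaussian-mixture variant mentioned above is more elaborate):

    import numpy as np

    def update_background(background, frame, alpha=0.05):
        # alpha near 0: slow adaptation, detects changes since startup.
        # alpha near 1: fast adaptation, detects changes from the previous frame.
        return (1.0 - alpha) * background + alpha * frame.astype(np.float64)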
Camera Movement
Solutions Very small camera movements can be modeled in the background, similar to pixel noise or moving background objects. Other segmentation methods can be used to identify and track objects in the scene. Camera motion can be identified and corrected (optical flow, correspondence).
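One possible way to correct small camera motion before differencing, sketched with OpenCV dense optical flow (the overall pipeline is an assumption for illustration, not the project's implementation):

    import cv2
    import numpy as np

    def align_to_previous(prev_gray, curr_gray):
        # Dense flow from the previous frame to the current one.
        flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        h, w = curr_gray.shape
        grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
        map_x = (grid_x + flow[..., 0]).astype(np.float32)
        map_y = (grid_y + flow[..., 1]).astype(np.float32)
        # Warp the current frame back onto the previous one so differencing
        # sees scene changes rather than camera motion.
        return cv2.remap(curr_gray, map_x, map_y, cv2.INTER_LINEAR)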
Shadows [Example images: the shadow appears in the detected difference along with the object; good object detection but bad shadow false positive]
Solutions Color Space Conversion –Transform data into a more useful form, e.g., normalized chromaticity or Hue Saturation Intensity (HSI) colorspace, which separates color and intensity for robust detection in the presence of shadows.
HSI Hue angle determines color; Saturation determines how ‘colorful’ or ‘washed out’ the color is; Intensity determines brightness.
HSI Colorspace Detection Shadows simply decrease intensity without affecting hue. Hue differencing is therefore quite robust to the presence of shadows. Great, but….
Hue Determination To decide what ‘color’ a pixel is, it must first have a ‘color’. Conversion: –Normalize R, G, B s.t. 0 ≤ r, g, b ≤ 1 –h = acos( [(r-g) + (r-b)] / ( 2[(r-g)^2 + (r-b)(g-b)]^(1/2) ) ) –Very sensitive when r ≈ g ≈ b
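The conversion written out as a sketch in Python (the epsilon guarding the denominator and the clipping are added assumptions to keep the acos argument valid for near-gray pixels):

    import numpy as np

    def hue(r, g, b, eps=1e-8):
        # r, g, b normalized to [0, 1]; result is the hue angle in radians.
        # Unstable when r ~ g ~ b, since the denominator approaches zero.
        num = (r - g) + (r - b)
        den = 2.0 * np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + eps
        theta = np.arccos(np.clip(num / den, -1.0, 1.0))
        # Standard HSI convention: reflect the angle when b > g.
        return theta if b <= g else 2.0 * np.pi - theta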
Hue Differencing Hue ‘Noise’ Causes False Detections
Idea Since hue information is unreliable for grayish pixels, ignore hue difference results at these pixels and use intensity instead. Need some weighting function which determines how to do this.
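A minimal sketch of such a combination (the per-pixel color_weight and the normalization of the two differences are assumptions; finding the right weighting is exactly what the project is after):

    def combined_difference(hue_diff, intensity_diff, color_weight):
        # color_weight ~ 1 where hue is reliable (colorful pixels),
        # color_weight ~ 0 where it is not (grayish pixels).
        return color_weight * hue_diff + (1.0 - color_weight) * intensity_diff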
Previous Solution Francois and Medioni used a saturation threshold to ignore hue information for gray pixels –Works well –Requires a threshold to be set
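For comparison, a sketch of that hard-threshold gating (the threshold value is illustrative only):

    def threshold_weight(saturation, sat_threshold=0.2):
        # Use hue only for sufficiently saturated (non-gray) pixels,
        # fall back to intensity otherwise.
        return 1.0 if saturation > sat_threshold else 0.0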
Goal Want a weighting function which will specify a combination of hue and intensity differencing. –Intensity should receive more weight when hue is unreliable –Hue should receive more weight when it can be reliably determined Hope to find some underlying relationship
Implementation Using Euclidean distance to the gray line as a color measure –Saturation is somewhat tricky (a la Matlab) An ideal system would determine the weighting function based on training data, similar to backpropagation.
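A sketch of the distance-to-gray-line measure (RGB assumed normalized to [0, 1]; this is the standard point-to-line distance, not necessarily the exact Matlab code used):

    import numpy as np

    def gray_line_distance(r, g, b):
        # Project (r, g, b) onto the gray line r = g = b and measure the
        # Euclidean distance; larger values mean a more 'colorful' pixel.
        m = (r + g + b) / 3.0
        return np.sqrt((r - m) ** 2 + (g - m) ** 2 + (b - m) ** 2)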
Backpropagation Outputs are weighted combinations of inputs. Determine errors at the outputs. Determine how much each input was responsible for the error. Adjust each weight accordingly.
Current Algorithm Examines each pixel and changes the weight in proportion to the error –For pixels which should have been detected, the weight is increased proportionally to (1 - detection) –For pixels which should NOT have been detected, the weight is DECREASED proportionally to detection
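A sketch of that per-pixel update rule (the learning rate, the 0-to-1 detection score, and the binary label are assumed details):

    def update_weight(weight, detection, label, learning_rate=0.01):
        # label = 1 where a detection should occur, 0 where it should not;
        # detection is the current 0-to-1 detection response.
        if label == 1:
            # Missed or weak detection: raise the weight by (1 - detection).
            weight += learning_rate * (1.0 - detection)
        else:
            # False detection: lower the weight in proportion to detection.
            weight -= learning_rate * detection
        return weight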
Insights Examination of hue errors shows a definite correlation with coloration
Results Weighting Functions
Lack of Colorful Data
Results Combined Detection
Problems Correlation can vary widely from image to image. Weights are noisy, skewed by a lack of colorful data. Probably needs more data processing. No good model determined yet.
Conclusion System shows definite promise. Model still needs to be determined and adaptively fit.
Shadow Suppression
References –A.R.J. Francois, G.G. Medioni, "Adaptive Color Background Modeling for Real-Time Segmentation of Video Streams" –A. Prati, I. Mikic, M. Trivedi, R. Cucchiara, "Detecting Moving Shadows: Formulation, Algorithms and Evaluation" –T. Horprasert, D. Harwood, L.S. Davis, "A Statistical Approach for Real-Time Robust Background Subtraction and Shadow Detection"