Fast Illumination-invariant Background Subtraction using Two Views: Error Analysis, Sensor Placement and Applications
Ser-Nam Lim, Anurag Mittal, Larry S. Davis and Nikos Paragios

Presentation transcript:

Problem Description
Single-camera background subtraction suffers from:
- Shadows.
- Illumination changes.
- Specularities.
Stereo-based background subtraction can overcome many of these problems, but it is slow and its online matches are inaccurate.

Project Goals
1. Develop a fast two-camera background subtraction algorithm that does not require solving the correspondence problem online.
2. Analyze the advantages of various camera configurations with respect to the robustness of background subtraction:
   - We assume the objects to be detected move on a known ground plane.

Fast Illumination-Invariant Multi-Camera Approach
A clever idea: Yuri A. Ivanov, Aaron F. Bobick and John Liu, "Fast Lighting Independent Background Subtraction", IEEE Workshop on Visual Surveillance, ICCV'98, Bombay, India, January 1998.
Background model:
- Conjugate pixels established offline.
- Color dissimilarity measure between conjugate pixels.
What are the problems? False and missed detections, caused by homogeneous objects.

Detection Errors
Given a conjugate pair (p, p'):
- False detection: p' is occluded by a foreground object while p is visible in the reference view.
- Missed detection: both p and p' are occluded by a foreground object.

Eliminating False Detections
Consider a two-camera placement with:
- The baseline orthogonal to the ground plane.
- The lower camera used as the reference.

Reducing Missed Detections
With this placement the initial detection is free of false detections, and the missed detections form a component adjacent to the ground plane. For a detected pixel I_t along each epipolar line in an initial foreground blob:
1. Compute the conjugate pixel I'_t (constrained stereo).
2. Determine the base point I_b.
3. If |I_t - I_b| > thres, increment I_t and repeat from step 1.
4. Otherwise, mark I_t as the lowermost pixel.
(A minimal code sketch of this loop appears after the transcript.)

Base Point
Proposition 1: In 3D space, the missed proportion of a homogeneous object with negligible front-to-back depth is independent of the object's position. Equivalently, the proportion that is correctly detected remains constant.
Proof: The extent of the missed detection is proportional to the object's height, with a factor that depends only on the camera geometry, B being the length of the baseline; the proportion of missed detections is therefore a constant. ∎ (A worked derivation under explicit assumptions appears after the transcript.)

Under weak perspective:
It can be shown that I_m = α H I'_t, where α is the proportion of correct detection and H is the ground-plane homography from the reference view to the second view. The homogeneous-object and background-pixel-on-the-ground-plane assumptions are not necessary, since I_m can be determined independently from H and I'_t.

Under perspective:
A. Criminisi, I. Reid and A. Zisserman, "Single View Metrology", 7th IEEE International Conference on Computer Vision, Kerkyra, Greece, September 1999.
Based on Criminisi et al., we can show that in the reference view
  α_ref h = -‖I_b × I_t‖ / ((l̂ · I_b) ‖v_ref × I_t‖),
where α_ref is an unknown scale factor, h is the height of I_t above the ground plane, l̂ is the normalized vanishing line of the ground plane, and v_ref is the vertical vanishing point. The same equation applies to the second camera; equating the two expressions determines I_b. The base point in the second camera is then H I_b.

Robustness to Specularities
After a morphological operation there are two possibilities:
1. The specularities fall in the same blob as the object, or
2. The specularities fall in a different blob.
Case 1 - Specularities in the same blob:
- The virtual image lies below the ground plane, so it is eliminated by the base-finding operations.
- A good stereo match is hard to find, because the point of reflection combines Lambertian and specular components.
- Even if a match is found, it typically places I_m above I_t.
Case 2 - Specularities in a different blob:

Robustness to Near-Background Objects
Typical disparity-based background subtraction has trouble with near-background objects. Our algorithm only needs to detect the top portion of the object, followed by the base-finding operations.

Experiments
1. Dealing with illumination changes using our sensor placement.
2. Dealing with specularities (daytime rain scene).
3. Dealing with specularities (night scene).
4. Near-background object detection.
5. Indoor scene (requiring the perspective model).
Comparisons:
- The weak perspective model is much simpler and easier to implement.
- When objects are close to the camera, the weak perspective model can be violated (e.g., indoor scenes).
- The perspective model is much less stable and is sensitive to calibration errors.

Robustness to Illumination Changes
Geometrically, the algorithm is unaffected by:
- Lighting changes.
- Shadows.
An extension to objects not moving on the ground plane is possible.

Additional Advantages
The method is very fast, and the stereo matches of the background model can be established offline, which makes them much more accurate.
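The sketch below summarizes the two computational pieces described in the transcript: the offline-conjugate color-dissimilarity test (after Ivanov et al.) and the per-epipolar-line base-finding loop. It is a minimal illustration, not the authors' implementation; the conjugate map, the base_point_of routine (which would encapsulate the weak-perspective or perspective base-point computation), the thresholds and all function names are assumptions introduced here.

import numpy as np

def initial_detection(ref_img, second_img, conjugate_map, color_thresh=30.0):
    """Illumination-invariant initial detection: a reference pixel is flagged
    as foreground when its color differs from that of its conjugate pixel in
    the second view (conjugate pairs established offline).
    conjugate_map[y, x] -> (y', x') is a placeholder for the offline model."""
    h, w = ref_img.shape[:2]
    mask = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            yc, xc = conjugate_map[y, x]
            diff = np.linalg.norm(ref_img[y, x].astype(float)
                                  - second_img[yc, xc].astype(float))
            mask[y, x] = diff > color_thresh
    return mask

def lowermost_pixel(epipolar_line, mask, conjugate_of, base_point_of, dist_thresh=2.0):
    """Base-finding along one epipolar line of an initial foreground blob.
    epipolar_line: pixels (y, x) of the line, ordered from top to bottom;
    conjugate_of, base_point_of: placeholder callables for constrained stereo
    and for the weak-perspective/perspective base-point computation."""
    for I_t in epipolar_line:
        if not mask[I_t]:
            continue
        I_t_prime = conjugate_of(I_t)           # step 1: conjugate pixel I'_t
        I_b = base_point_of(I_t, I_t_prime)     # step 2: base point I_b
        if np.linalg.norm(np.subtract(I_t, I_b)) > dist_thresh:
            continue                            # step 3: move down the line and repeat
        return I_t                              # step 4: lowermost pixel of the object
    return None

Extending each foreground blob down to the returned lowermost pixel then recovers the missed component adjacent to the ground plane.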
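Proposition 1 can also be checked with a short side-view computation. The symbols below are assumptions introduced here and do not appear in the transcript: the reference (lower) camera is at height H_1, the second camera at height H_2 = H_1 + B on a vertical baseline, and the object is a homogeneous vertical segment of height h standing on the ground plane at horizontal distance d.

% Side view: reference camera at height H_1, second camera at H_2 = H_1 + B,
% homogeneous object of height h on the ground plane at distance d.
\begin{align*}
x_g &= \frac{d\,H_1}{H_1 - y}
  && \text{ground point seen through the object point at height } y \text{ (reference view)}\\
y'  &= H_2\Bigl(1 - \frac{d}{x_g}\Bigr) = \frac{H_2}{H_1}\,y
  && \text{height at which the second camera's ray to } x_g \text{ crosses the object}\\
\text{missed} &\iff y' \le h \iff y \le \frac{H_1}{H_1 + B}\,h
  && \text{both conjugate pixels see the homogeneous object}\\
\Rightarrow\quad
  &\frac{\text{missed extent}}{h} = \frac{H_1}{H_1 + B}, \qquad
   \frac{\text{detected extent}}{h} = \frac{B}{H_1 + B}
\end{align*}

Under these assumptions both proportions are independent of d and h, as Proposition 1 states, and a longer baseline B increases the correctly detected (top) portion.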