Fast Illumination-invariant Background Subtraction using Two Views: Error Analysis, Sensor Placement and Applications. Ser-Nam Lim, Anurag Mittal, Larry S. Davis and Nikos Paragios.

Presentation transcript:

Fast Illumination-invariant Background Subtraction using Two Views: Error Analysis, Sensor Placement and Applications
Ser-Nam Lim, Anurag Mittal, Larry S. Davis and Nikos Paragios

Problem Description
Single-camera background subtraction suffers from:
- Shadows.
- Illumination changes.
- Specularities.
Stereo-based background subtraction can overcome many of these problems, but it is slow and its online matches are inaccurate.

Project Goals
1. Develop a fast two-camera background subtraction algorithm that does not require solving the correspondence problem online.
2. Analyze the advantages of various camera configurations with respect to the robustness of background subtraction.
   - We assume the objects to be detected move on a known ground plane.

Fast Illumination-Invariant Multi-Camera Approach
A clever idea: Yuri A. Ivanov, Aaron F. Bobick and John Liu, "Fast Lighting Independent Background Subtraction", IEEE Workshop on Visual Surveillance, ICCV '98, Bombay, India, January 1998.
Background model:
- Conjugate pixels are established offline.
- A color dissimilarity measure is computed between conjugate pixels.
What are the problems? False and missed detections, caused by homogeneous objects.

Detection Errors
Given a conjugate pair (p, p'):
- False detections: p' is occluded by a foreground object while p is visible in the reference view.
- Missed detections: both p and p' are occluded by a foreground object.

Eliminating False Detections
Consider a two-camera placement with:
- The baseline orthogonal to the ground plane.
- The lower camera used as the reference.

Reducing Missed Detections
The initial detection is then free of false detections, and the missed detections form a component adjacent to the ground plane. For a detected pixel I_t along each epipolar line in an initial foreground blob:
1. Compute the conjugate pixel I'_t (constrained stereo).
2. Determine the base point I_b.
3. If |I_t - I_b| > thres, increment I_t and repeat from step 1.
4. Otherwise, mark I_t as the lowermost pixel.
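The base-finding scan in steps 1-4 can be sketched as below. This is only an illustration of the control flow: `conjugate` and `base_point` are hypothetical stand-ins for the constrained-stereo match and the base-point computation the slides describe, and pixels along the epipolar line are represented by their row coordinate.

```python
def find_lowermost(column, conjugate, base_point, thres):
    """Scan detected pixels I_t along one epipolar line, top to bottom.

    column: row coordinates of detected pixels in the initial foreground
            blob, ordered from top to bottom.
    conjugate: function I_t -> I'_t (constrained stereo match, stand-in).
    base_point: function I'_t -> I_b (base-point computation, stand-in).
    thres: threshold on |I_t - I_b| below which I_t is accepted.
    """
    for i_t in column:
        i_prime_t = conjugate(i_t)     # step 1: conjugate pixel I'_t
        i_b = base_point(i_prime_t)    # step 2: base point I_b
        if abs(i_t - i_b) <= thres:    # step 3 failed to reject ->
            return i_t                 # step 4: mark as lowermost pixel
    return column[-1]                  # no pixel passed; keep the last one
```

With a base point fixed at row 100 and a threshold of 5, the scan stops at the first detected pixel within 5 rows of the base, so `find_lowermost([60, 70, 80, 90, 96], lambda p: p, lambda p: 100, 5)` returns `96`.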
Base Point
Proposition 1: In 3D space, the missed proportion of a homogeneous object with negligible front-to-back depth is independent of object position. Equivalently, the proportion that is correctly detected remains constant.
Proof sketch: the extent of the missed detection depends only on the object height and the length of the baseline, so the proportion of missed detections is the same wherever the object stands on the ground plane. ∎

Under weak perspective: it can be shown that I_m = α · H · I'_t, where α is the proportion of correct detection and H is the ground-plane homography from the reference to the second view. The homogeneous-object and background-pixel-on-ground-plane assumptions are not necessary, since I_m can be determined independently from H and I'_t.

Under perspective: A. Criminisi, I. Reid and A. Zisserman, "Single View Metrology", 7th IEEE International Conference on Computer Vision, Kerkyra, Greece, September 1999. Based on Criminisi et al., we can show that in the reference view

  α_ref · h = -‖I_b × I_t‖ / ((l̄ · I_b) ‖v_ref × I_t‖),

where α_ref is an unknown scale factor, h is the height of I_t, l̄ is the normalized vanishing line of the ground plane, and v_ref is the vertical vanishing point. The same equation applies to the second camera; equating the two determines I_b. The base point in the second camera is then just H · I_b.

Robustness to Specularities
After the morphological operation, there are two possibilities: the specularities fall in the same blob as the object, or in a different blob.
Case 1 - specularities in the same blob: the virtual image lies below the ground plane and is eliminated by the base-finding operation.
Case 2 - specularities in a different blob: it is hard to find a good stereo match, because the surface is Lambertian + specular at the point of reflection; even when a match is found, it typically places I_m above I_t.

Robustness to Near-Background Objects
Typical disparity-based background subtraction has trouble with near-background objects. Our algorithm needs only to detect the top portion of the object, followed by the base-finding operation.

Experiments
1. Dealing with illumination changes using our sensor placement.
2. Dealing with specularities (rainy day scene).
3. Dealing with specularities (night scene).
4. Near-background object detection.
5. Indoor scene (requiring the perspective model).

Comparisons:
- The weak-perspective model is much simpler and easier to implement, but it can be violated when objects are close to the camera (e.g., indoor scenes).
- The perspective model is much less stable and is sensitive to calibration errors.

Robustness to Illumination Changes
Geometrically, the algorithm is unaffected by lighting changes and shadows. An extension to objects not moving on the ground plane is possible.

Additional Advantages
Very fast; the stereo matches of the background model are established offline and are therefore much more accurate.
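Under the weak-perspective model, the relation I_m = α · H · I'_t from the Base Point section reduces online work to one homography transfer and a constant scaling. The sketch below is a literal transcription of that formula under stated assumptions: H is taken as a 3x3 ground-plane homography applied to I'_t in homogeneous pixel coordinates, and α is the constant detected proportion; the slides do not fix the coordinate convention.

```python
def weak_perspective_im(i_prime_t, H, alpha):
    """Compute I_m = alpha * H * I'_t (weak-perspective base-point model).

    i_prime_t: (x, y) pixel coordinates of the conjugate pixel I'_t.
    H: 3x3 ground-plane homography as nested lists (assumed known from
       offline calibration).
    alpha: proportion of the object that is correctly detected.
    Returns I_m as (x, y), scaled after dehomogenization.
    """
    x, y = i_prime_t
    p = (x, y, 1.0)  # homogeneous coordinates
    q = [H[r][0] * p[0] + H[r][1] * p[1] + H[r][2] * p[2] for r in range(3)]
    return (alpha * q[0] / q[2], alpha * q[1] / q[2])
```

For example, with the identity homography and alpha = 0.5, the pixel (10, 20) maps to (5.0, 10.0); replacing H with a pure translation shifts the transferred point before the scaling is applied.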