Stereoscopic Light Stripe Scanning: Interference Rejection, Error Minimization and Calibration. By: Geoffrey Taylor and Lindsay Kleeman. Presented by: Ali Agha.


Stereoscopic Light Stripe Scanning: Interference Rejection, Error Minimization and Calibration. By: Geoffrey Taylor and Lindsay Kleeman. Presented by: Ali Agha, April 13, 2009.

Motivation Measuring arbitrary scenes in ambient indoor light (purpose: visual servoing for a humanoid robot). The work addresses the problem of rejecting interference due to secondary specular reflections, cross-talk, and other mechanisms in an active light stripe scanner.

Motivation

Basic Operation Color cameras capture stereo images of the stripe at 384 × 288 pixels, at PAL frame rate (25 Hz), on a 2.2 GHz dual-Xeon host PC.

System Model Encoder measurement

Problem Statement Given the laser plane position and the measurements x_L, x_R, and x′_R, one of the left/right candidate pairs, (x_L, x_R) or (x_L, x′_R), must be chosen as representing stereo measurements of the primary reflection. The chosen measurements should then be combined to estimate the position of the ideal projection.

Previous work In Trucco et al. (1994) and Nakano et al. (1988), laser stripe measurements are validated by applying a fixed threshold to the difference between corresponding single-camera reconstructions. Such a comparison requires a uniform reconstruction error over all depths, which, as the accompanying figure illustrates, is clearly not the case.

General Solution Let X̂ denote the optimal reconstruction. The Plücker matrix L describing the back-projection line through two homogeneous points A and B is L = AB^T − BA^T, and the intersection X of the light plane Ω with L is X = LΩ. The image-plane error between measured and ideal projections is minimized subject to the constraint that the reconstruction lies on the light plane; an unconstrained version substitutes this constraint into the objective.
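The Plücker-matrix construction and the plane–line intersection used on this slide are standard projective-geometry identities (L = AB^T − BA^T and X = LΩ). A minimal sketch in homogeneous coordinates; the camera centre C, ray point P, and example plane are illustrative values, not from the paper:

```python
import numpy as np

def plucker_matrix(A, B):
    """Plucker matrix of the line through homogeneous points A, B:
    L = A B^T - B A^T (4x4, antisymmetric)."""
    A, B = np.asarray(A, float), np.asarray(B, float)
    return np.outer(A, B) - np.outer(B, A)

def intersect_line_plane(L, plane):
    """Intersection of line L with a plane (homogeneous 4-vector): X = L @ plane."""
    return L @ plane

# Example: back-projection ray through the camera centre at the origin and a
# point on the z-axis, intersected with the plane z = 2, i.e. [0, 0, 1, -2].
C = np.array([0.0, 0.0, 0.0, 1.0])   # camera centre
P = np.array([0.0, 0.0, 1.0, 1.0])   # point on the back-projection ray
L = plucker_matrix(C, P)
X = intersect_line_plane(L, np.array([0.0, 0.0, 1.0, -2.0]))
X = X / X[3]                          # dehomogenize
print(X[:3])                          # -> [0. 0. 2.]
```

As expected, the ray along the z-axis meets the plane z = 2 at (0, 0, 2).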

General Solution Finally, the ideal projection x̂_L corresponding to the optimal reconstruction X̂ is obtained by projecting X̂ onto the left image plane. Substituting this projection into the error function and simplifying yields a closed-form expression for the error.

Special Case: Rectilinear Stereo and Pin-Hole Cameras Under these assumptions, the image-plane error E can be expressed as a function of a single unknown.

Validation Determining which pair of measurements corresponds to the primary reflection: 1) the light plane parameters α, β, and γ are calculated from the encoder value e and the system parameters; 2) … 3) … 4) the optimal reconstruction is finally calculated.

Laser Plane Error The above solution assumes that the parameters of the laser plane are known exactly; in practice, the encoder measurements are noisy. Let x_L^i and x_R^i, i = 1 … n, represent valid corresponding measurements of the laser stripe on the n scanlines in a frame. The Levenberg–Marquardt (LM) algorithm is used for the minimization, and the optimal correspondences and encoder count are calculated recursively.
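To illustrate the LM refinement step in spirit: the sketch below fits the parameters (a, b, c) of a plane z = ax + by + c to noisy 3D stripe points with `scipy.optimize.least_squares` in LM mode. This is a simplified point-to-plane residual for illustration only; the paper's actual cost is the image-plane error, and the synthetic data and parameter names are assumptions:

```python
import numpy as np
from scipy.optimize import least_squares

# Synthetic noisy reconstructions of a laser plane z = a*x + b*y + c.
rng = np.random.default_rng(0)
true = np.array([0.3, -0.2, 1.5])
xy = rng.uniform(-1, 1, size=(200, 2))
z = true[0] * xy[:, 0] + true[1] * xy[:, 1] + true[2] + rng.normal(0, 0.01, 200)

def residuals(p):
    """Signed point-to-plane residual for each stripe point."""
    return p[0] * xy[:, 0] + p[1] * xy[:, 1] + p[2] - z

# Levenberg-Marquardt refinement from a crude initial estimate.
fit = least_squares(residuals, x0=np.zeros(3), method='lm')
print(fit.x)   # close to [0.3, -0.2, 1.5]
```

In the paper this refinement runs over the correspondences and encoder value jointly, rather than over raw plane coefficients.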

Additional Constraints 1) Stripe candidates must be moving features; this has little effect on cross-talk or reflections. 2) Valid measurements only occur within a sub-region of the left and right image planes, depending on the angle of the light plane.

Active Calibration The model of the light stripe scanner contains unknown parameters. The validation problem is approximated by recording only the brightest pair of features per scanline. Let x_L^{ij} and x_R^{ij} represent the brightest corresponding features on the n_j scanlines of the t captured frames, and let e_j represent the measured encoder value for each frame.

Active Calibration The minimization is implemented numerically using LM. Starting from an initial estimate, the system parameters and encoder values are sequentially refined in an iterative process. The calibration technique presented here is practical, fast, and accurate, and does not require accurate knowledge of the camera parameters b and f.

Implementation Output of the scanner is a 384 × 288-element range map. The shaft encoder and stereo images are recorded at regular 40 ms intervals (25 Hz PAL frame rate). A complete scan requires approximately 384 processed frames (about 15 s).
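The timing on this slide is consistent with one processed frame per range-map column; a quick arithmetic check:

```python
frames = 384        # one processed frame per column of the 384 x 288 range map
interval_ms = 40    # frame period at 25 Hz PAL
scan_time = frames * interval_ms / 1000  # seconds
print(scan_time)    # -> 15.36, matching the roughly 15 s quoted
```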

Implementation: Light Stripe Measurement Laser stripe extraction uses: intensity data only (the average of the color channels), the motion of the stripe (obtained by subtracting the intensity values of consecutive frames), and the predicted sub-region of the image. The intensity profile on each scanline is then examined to locate candidate stripe measurements.
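The extraction steps on this slide can be sketched as below: averaging the color channels, differencing consecutive frames, masking to a predicted sub-region, then scanning each row for a candidate. The threshold, the one-candidate-per-scanline rule, and the function name are illustrative choices, not values from the paper:

```python
import numpy as np

def stripe_candidates(prev_rgb, curr_rgb, threshold=30.0, region=None):
    """Locate one candidate stripe column per scanline.

    prev_rgb, curr_rgb: HxWx3 images of consecutive frames.
    region: optional HxW boolean mask for the predicted sub-region.
    Returns a list of (row, col) candidates."""
    prev_i = prev_rgb.mean(axis=2)   # intensity only: average of color channels
    curr_i = curr_rgb.mean(axis=2)
    motion = curr_i - prev_i         # stripe brightens where it has just moved
    if region is not None:
        motion = np.where(region, motion, 0.0)
    cols = motion.argmax(axis=1)     # strongest positive change per scanline
    valid = motion[np.arange(motion.shape[0]), cols] > threshold
    return [(row, int(col)) for row, (col, ok)
            in enumerate(zip(cols, valid)) if ok]

# Synthetic example: the stripe appears at column 5 in the current frame.
prev = np.zeros((4, 8, 3))
curr = prev.copy()
curr[:, 5, :] = 255.0
print(stripe_candidates(prev, curr))   # -> [(0, 5), (1, 5), (2, 5), (3, 5)]
```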

Implementation: Range Data Post-Processing 1) Despite robust scanning, the raw range map may still contain outliers; thresholding requires the minimum distance between each 3D point and its eight neighbours to be less than 10 mm. 2) Hole filling fills gaps with interpolated depth data; the distance between the bracketing points must be less than 30 mm. 3) Finally, a color image is registered with the range map.
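The outlier-rejection step can be sketched as follows, using the slide's 10 mm eight-neighbour threshold (the NaN convention for missing data and the function name are assumptions; the hole-filling step with its 30 mm bracketing test would follow the same pattern):

```python
import numpy as np

def remove_outliers(points, max_gap=10.0):
    """Mark range-map points as invalid (NaN) when their nearest valid
    8-neighbour is farther than max_gap (mm). points: HxWx3, NaN = no data."""
    H, W, _ = points.shape
    out = points.copy()
    for r in range(H):
        for c in range(W):
            p = points[r, c]
            if np.any(np.isnan(p)):
                continue
            dists = []
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    if dr == 0 and dc == 0:
                        continue
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < H and 0 <= cc < W and \
                            not np.any(np.isnan(points[rr, cc])):
                        dists.append(np.linalg.norm(p - points[rr, cc]))
            if not dists or min(dists) >= max_gap:
                out[r, c] = np.nan   # isolated point: treat as an outlier
    return out

# Example: a flat 1 mm grid with one point 100 mm out of plane.
pts = np.zeros((3, 3, 3))
for r in range(3):
    for c in range(3):
        pts[r, c] = [c, r, 0.0]
pts[1, 1, 2] = 100.0            # outlier
res = remove_outliers(pts)      # centre becomes NaN, the rest survive
```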

Implementation: Range Data Post-Processing

Experimental Results A mirror placed behind the objects simulates the effect of cross-talk and reflections.

Experimental Results Output of the single-camera scanner: phantom surfaces appear, caused by erroneous associations between the phantom stripe and the laser plane.

Experimental Results Output of the double-camera scanner, based on Nakano et al. (1988) and Trucco et al. (1994): the single-camera reconstructions X_L and X_R are calculated independently, discarded when |X_L − X_R| exceeds a fixed distance, and otherwise fused as the final reconstruction (1/2)(X_L + X_R).
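The double-camera validation rule described on this slide is simple to state in code: discard when the independent reconstructions disagree by more than a fixed distance, otherwise average them. A sketch, where the threshold value and function name are illustrative:

```python
import numpy as np

def fuse_double_camera(XL, XR, max_dist=5.0):
    """Fixed-threshold validation as on the slide: return the averaged
    reconstruction, or None when |XL - XR| exceeds max_dist (mm)."""
    XL, XR = np.asarray(XL, float), np.asarray(XR, float)
    if np.linalg.norm(XL - XR) > max_dist:
        return None                 # inconsistent pair: likely a phantom
    return 0.5 * (XL + XR)

# Consistent pair is averaged; an inconsistent one is rejected.
print(fuse_double_camera([0.0, 0.0, 100.0], [1.0, 0.0, 100.0]))   # -> [  0.5   0.  100. ]
print(fuse_double_camera([0.0, 0.0, 100.0], [50.0, 0.0, 100.0]))  # -> None
```

As the earlier slide on previous work notes, a fixed threshold like this implicitly assumes a uniform reconstruction error over all depths, which is the weakness the paper's optimal formulation removes.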

Experimental Results Output of the robust stereoscopic scanner.

Discussion The main limitation is that the system is unsuitable for dynamic scenes: the robot must remain stationary during a scan. The experimental prototype uses a red laser diode, so it can only sense surfaces that contain a high component of red. The laser diode could be replaced by a white light source, although laser diodes offer physical compactness, low power consumption, and low heat generation; alternatively, the light plane could be generated using a triplet of red, green, and blue laser diodes, at the cost of expensive green and blue diodes.

Discussion Surfaces with high specular and low Lambertian reflection may appear invisible to the scanner.

Summary and Conclusions The system measures arbitrary scenes in ambient indoor light; robustly identifies the light stripe in the presence of secondary reflections, cross-talk, and other sources of interference; uses an optimization-based formulation; and provides an image-based procedure for calibrating the light plane parameters.

Future Research Development of a multi-stripe scanner. Multi-stripe scanners have the potential to solve a number of issues associated with single-stripe scanners: illuminating a target with two stripes could double the acquisition rate, and projecting the stripes from different positions reveals points that would otherwise be hidden in shadow. Single-camera multi-stripe systems mostly rely on color, sequences of illumination, or epipolar constraints to disambiguate the stripes; however, the method proposed in this paper could allow the stripes to be uniquely identified using the same principles that provide validation for a single stripe.