Design and Calibration of a Multi-View TOF Sensor Fusion System Young Min Kim, Derek Chan, Christian Theobalt, Sebastian Thrun Stanford University.


Young Min Kim TOF-CV Workshop 2 Outline  Motivation  System Architecture  Depth Sensor Characteristics  System Calibration  Results and Conclusion


Motivation  Goal: reconstruct geometry and texture of an entire scene from minimal sensor data  State of the art [Waschbuesch et al. 2006, 2007] –Stereo [Laurentini et al. 94, Kanade et al. 97, Matusik et al. 00, Matsuyama et al. 02, Wuermlin 02, Carranza et al. 03, Cheung et al. 03, Zitnick et al. 04, Bajcsy et al. 04, …]

Motivation  Limitations of stereo –Correspondence problem –Dependency on texture –Densely spaced cameras needed  Our idea: build a system that combines –TOF sensors –Video cameras


Multi-view Sensor Fusion System

Multi-view Sensor Fusion System  Point Grey Flea: 1024 x 768 pixels, 30 Hz – high resolution, but no depth

Multi-view Sensor Fusion System  Swissranger SR3000 flash ladar: 3D geometry at 60 Hz, resolution 176 x 144 – no visual interference, but noisy low-resolution data  Point Grey Flea: 1024 x 768 pixels, 30 Hz – high resolution, but no depth

Multi-view Sensor Fusion System  Swissranger SR3000 flash ladar: 3D geometry at 60 Hz, resolution 176 x 144 – no visual interference, but noisy low-resolution data  Point Grey Flea: 1024 x 768 pixels, 30 Hz – high resolution, but no depth  Main contribution: system architecture and calibration

System Architecture (diagram: Swissrangers at 19, 20, and 21 MHz modulation frequencies; two FireWire buses, with FireWire A as the synchronization bus)

System Architecture (diagram: Swissrangers at 19, 20, and 21 MHz; FireWire A as the synchronization bus) –Synchronization  Hardware sync for the video cameras  Software-initiated capture for the Swissrangers –Modulation frequency  Scales up to 4 Swissrangers
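The modulation-frequency choice above also fixes each camera's maximum unambiguous range. A small sketch using the standard continuous-wave TOF relation c / (2 · f_mod) (general background, not a figure from the slides):

```python
# Running each Swissranger at its own modulation frequency keeps their
# IR signals from interfering with each other. A side effect: the
# unambiguous range of a continuous-wave TOF camera is c / (2 * f_mod),
# so the three units differ slightly in maximum measurable distance.
C = 299_792_458.0  # speed of light (m/s)
for f_mod in (19e6, 20e6, 21e6):
    print(f"{f_mod / 1e6:.0f} MHz -> {C / (2 * f_mod):.2f} m unambiguous range")
```

At 20 MHz this gives roughly 7.5 m, comfortably covering a room-sized capture volume.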

Example Data


Related Work: TOF Cameras  Random noise characteristics of earlier models [Anderson et al. 05, Herbert et al. 92, …]  Systematic depth errors –PMD (photonic mixer device) [Lindner et al. 06, Lindner et al. 07] –Lookup table for Swissranger [Kahlmann et al. 92]  Detailed analysis of the noise characteristics of single TOF sensors [Rapp 07]  Our work –Practical model for systematic bias –Calibration for multiple cameras

Depth Sensor Characteristics  Measurement model, independent for each pixel (u, v): d_m(u, v) = d_g(u, v) + d_r(u, v) + d_s(u, v) –d_m(u, v): measured distance along the ray from the center of projection through (u, v) on the image plane –d_g(u, v): ground-truth distance –d_r(u, v): random noise (measurement uncertainty) –d_s(u, v): systematic bias
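The measurement model can be illustrated numerically; the scene, bias field, and noise level below are made-up values for illustration, not calibration data from the talk:

```python
import numpy as np

# Sketch of the per-pixel model d_m = d_g + d_r + d_s with illustrative values.
rng = np.random.default_rng(0)
H, W = 144, 176                      # SR3000 sensor resolution

d_g = np.full((H, W), 2.0)           # ground-truth depth (m): a flat wall (made up)
d_s = np.full((H, W), 0.03)          # systematic bias (m), illustrative
d_r = rng.normal(0.0, 0.01, (H, W))  # zero-mean random noise (m), illustrative

d_m = d_g + d_r + d_s                # measured depth per pixel

# Averaging over many pixels (or frames) suppresses the random term d_r
# but not the bias d_s -- which is why a separate bias-compensation
# step is needed on top of denoising.
residual = d_m.mean() - d_g.mean()
print(round(residual, 3))
```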

Systematic Bias  Two components of systematic bias –Distance misalignment (rigid, directional) –Influence of orientation, reflectance, and amplitude

Systematic Bias  Influence of orientation, reflectance, and amplitude captured by r, the ratio of normalized amplitude: d_s(u, v) = f(r) · d′_s(u, v), with d_s(u, v) ≈ d′_s(u, v) for r > 0.3
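A sketch of the amplitude-dependent scaling; the shape of f below is a hypothetical stand-in chosen only to satisfy the slide's stated property that d_s ≈ d′_s once r exceeds 0.3:

```python
import numpy as np

def f(r):
    """Hypothetical scaling factor; only f(r) ~ 1 for r > 0.3 is from the slide."""
    r = np.asarray(r, dtype=float)
    return np.where(r > 0.3, 1.0, r / 0.3)   # assumed linear ramp below 0.3

d_s_ref = 0.05   # reference per-pixel bias d_s' in metres (illustrative)
for r in (0.1, 0.3, 0.5, 1.0):
    print(f"r = {r}: d_s = {float(f(r)) * d_s_ref:.4f} m")
```

Above the r = 0.3 threshold the bias is treated as independent of amplitude, so a single per-pixel bias value suffices there.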


Calibration  Video cameras –Standard calibration toolboxes –Intrinsic and extrinsic parameters from a checkerboard  Depth cameras –No off-the-shelf solution –Camera provides XYZ / intensity –Use a procedure based on the optical camera model for practicability

Depth Camera Calibration  Use the calibration procedure for video cameras  Space 1: optical camera model (viewpoint/projection)  Space 2: 3D point cloud from the camera's XYZ output, in sensor coordinates  Space 1 and Space 2 don't match → compensate
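For intuition, Space-1-style points can be obtained by sending each pixel's measured range along the ray of a pinhole model. The intrinsics below are hypothetical placeholders, not the calibrated values:

```python
import numpy as np

# Sketch: turn a per-pixel range map into a 3D point cloud via a
# pinhole model (Space 1). All intrinsics here are assumed values.
H, W = 144, 176
fx = fy = 200.0                # focal length in pixels (assumed)
cx, cy = W / 2.0, H / 2.0      # principal point (assumed)

u, v = np.meshgrid(np.arange(W), np.arange(H))
rays = np.stack([(u - cx) / fx, (v - cy) / fy, np.ones((H, W))], axis=-1)
rays /= np.linalg.norm(rays, axis=-1, keepdims=True)   # unit ray per pixel

depth = np.full((H, W), 1.5)         # measured range along each ray (m)
points = rays * depth[..., None]     # (H, W, 3) point cloud in camera frame

print(points.shape)
```

The mismatch between this cloud and the sensor's own XYZ output (Space 2) is exactly what the compensation procedure deforms away.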

Depth Camera – Compensate Systematic Bias  Compensation: deform Space 2 to match Space 1  Record N checkerboard positions spanning the view frustum → ground-truth point clouds in Space 1 (via the intrinsics) and measured point clouds in Space 2

Depth Camera – Compensate Systematic Bias  Compensation: deform Space 2 to match Space 1  Step 1: Rigid alignment – apply R_rigid, t_rigid to the measured point cloud P, giving P1
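Step 1 can be sketched with the standard Kabsch/Procrustes solution for R_rigid, t_rigid; the point clouds below are synthetic stand-ins for the checkerboard data:

```python
import numpy as np

def rigid_align(P, Q):
    """Least-squares R, t such that Q ~ P @ R.T + t (Kabsch method)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)            # cross-covariance of centered clouds
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T   # proper rotation (no reflection)
    t = cq - R @ cp
    return R, t

rng = np.random.default_rng(1)
Q = rng.normal(size=(50, 3))                     # "ground-truth" cloud (Space 1)
angle = 0.3
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
P = (Q - np.array([0.1, 0.2, 0.3])) @ R_true     # misaligned "measured" cloud

R, t = rigid_align(P, Q)
print(np.allclose(P @ R.T + t, Q, atol=1e-8))
```

With noise-free correspondences the alignment is recovered exactly; on real checkerboard data it is a least-squares fit, and the remaining per-pixel residual is what the next two steps absorb.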

Depth Camera – Compensate Systematic Bias  Step 2: Warp of ray direction – interpolate per-pixel angular corrections Φ(i, j), Ω(i, j), giving P2

Depth Camera – Compensate Systematic Bias  Step 2: Warp of ray direction – interpolate Φ(i, j), Ω(i, j), giving P2  Step 3: Constant per-pixel length bias D(i, j) along the ray, giving P3
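Steps 2 and 3 amount to a per-pixel correction of the ray direction followed by subtracting a per-pixel length bias along the ray. A sketch, where all correction fields are hypothetical placeholder values and the small-angle warp is an assumed parameterization, not necessarily the talk's:

```python
import numpy as np

H, W = 144, 176
rays = np.zeros((H, W, 3))
rays[..., 2] = 1.0                     # nominal unit ray per pixel (simplified)

# Hypothetical placeholder fields; real values come from the calibration.
Phi   = np.full((H, W),  0.002)        # per-pixel direction correction (rad)
Omega = np.full((H, W), -0.001)        # per-pixel direction correction (rad)
D     = np.full((H, W),  0.03)         # per-pixel length bias (m)

def warp(rays, Phi, Omega):
    # Step 2: first-order small-angle update of each ray direction,
    # then renormalize back to unit length.
    r = rays.copy()
    r[..., 0] += Phi * rays[..., 2]
    r[..., 1] += Omega * rays[..., 2]
    return r / np.linalg.norm(r, axis=-1, keepdims=True)

d_m = np.full((H, W), 2.0)             # measured range along each ray (m)
# Step 3: subtract the constant length bias D(i, j) along the warped ray.
points = warp(rays, Phi, Omega) * (d_m - D)[..., None]
print(points.shape)
```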


Result

Result

Result  Combination of two depth maps, projectively textured from three video cameras

Result  Combination of three depth maps –Before bias correction –After bias correction

Result  Mean error reduced from 4.94 cm to 1.36 cm after bias correction

Conclusion  Design of a multi-view TOF fusion recording system  Detailed analysis of depth measurement inaccuracies  Calibration of depth and video data into a common frame  Starting point for improved dynamic shape and texture reconstruction  Acknowledgement: Max Planck Center for Visual Computing and Communication

Thank you