Introduction to Structured Light (SL) Systems and SL Based Phase Unwrapping
R. Garcia & A. Zakhor
EECS Department, UC Berkeley
www-video.eecs.berkeley.edu/research

Outline
- Background
- Structured Light Basics
- Phase Unwrapping
- Our Algorithms
- Results
- Summary

Determining Depth of a Scene
Many applications: biomedical, industrial, entertainment, navigation.
Methods for determining depth:
- Triangulation
- Time of flight
- Depth from (de)focus
- Other…

Triangulation-Based Depth Methods
- Scene observed from multiple views
- Correspondences between views are solved
- Intrinsic and extrinsic parameters must be known for each view
Two categories:
- Passive: stereo – multiple cameras
- Active: structured light – a camera paired with a projector or other light source
[Figure: passive stereo vs. active structured light setups; S. Narasimhan]

Traditional Stereo
- Need 2 or more views of the scene
- Scene texture is used to identify correspondences across views
- Dense stereo remains an active research area in computer vision; algorithms and new results at the Middlebury vision benchmark
- Given rectified images, triangulation is simple. Using similar triangles: Z = f·B / d, where f is the focal length, B the baseline, and d = x_l − x_r the disparity (see the sketch below). [Mirmehdi]
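As a concrete illustration of the similar-triangles relation, here is a minimal NumPy sketch; the focal length, baseline, and disparity values are made up for the example:

```python
import numpy as np

# Illustrative values only (not from the slides): a rectified pair with
# known focal length in pixels and baseline in meters.
focal_px = 700.0
baseline_m = 0.12
disparity = np.array([[35.0, 70.0],
                      [14.0,  7.0]])     # d = x_l - x_r, per pixel

# Similar triangles: Z = f * B / d (valid where disparity > 0)
depth_m = focal_px * baseline_m / disparity
print(depth_m)    # [[ 2.4  1.2], [ 6.  12. ]]
```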

Potential Problem with Stereo
Stereo works well on “textured” images.

Potential Problem with Stereo
Stereo can fail with lack of texture.

Structured Light (SL)
- SL places “texture” onto the scene
- Projector patterns identify each scene region
Classes of patterns:
- Temporal
- Spatial
- Other
Survey of patterns: [J. Salvi et al., 2004]

Temporal Coding
- Multiple frames are projected to identify scene regions
- A camera pixel’s intensity change over time is used for correspondence
- Scene assumed to be static
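As one concrete instance of temporal coding (the slides do not fix a pattern family, so Gray-code stripes are an assumption here), a minimal sketch that builds a stack of stripe images in which every projector column receives a unique on/off sequence over time:

```python
import numpy as np

def gray_code_patterns(width, height, n_bits):
    """Stack of n_bits stripe images; each projector column gets a unique
    temporal on/off sequence (its binary-reflected Gray code)."""
    cols = np.arange(width)
    gray = cols ^ (cols >> 1)                                  # Gray code per column
    bits = (gray[None, :] >> np.arange(n_bits)[:, None]) & 1   # (n_bits, width)
    return np.repeat(bits[:, None, :], height, axis=1).astype(np.uint8) * 255

# 10 patterns uniquely code 2**10 = 1024 projector columns.
patterns = gray_code_patterns(width=1024, height=768, n_bits=10)
# Project the patterns in sequence; the intensity sequence observed at a
# camera pixel decodes which projector column illuminated it.
```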

Spatial Coding
- Encodes unique information into small regions
- Fewer captures
- Less dense reconstruction

Spatial Coding
Recognize this?

Other Coding Methods
Spacetime coding:
- Zhang et al., “Spacetime Stereo: Shape Recovery for Dynamic Scenes”, 2003
- Davis et al., “Spacetime Stereo: A Unifying Framework for Depth from Triangulation”, 2003
Viewpoint coding:
- Young et al., “Viewpoint-Coded Structured Light”, 2007

Structured Light v. Stereo
Advantages of SL:
- Does not require a richly textured scene
- Solving correspondences is not expensive
- Each scene point receives a unique code
Advantages of stereo:
- No interference with the observed scene
- Only needs acquisitions at one time instant; SL often needs multiple images
- Too much texture can be problematic for SL

Outline
- Background
- Structured Light Basics
- Phase Unwrapping
- Our Algorithms
- Results
- Summary

Structured Light Geometry
- Geometry of SL is simple: triangulation is performed by ray–plane intersection
- Intrinsic and extrinsic parameters are needed:
- Calibration matrix for camera and projector
- Transformation between the coordinate frames of camera and projector

Triangulation
[Figure: intrinsic and extrinsic parameters relating the left camera and projector]

Triangulation
[Figure: ray–plane intersection between the left camera ray and a projector plane; see the sketch below]
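A minimal sketch of the ray–plane triangulation, assuming camera-frame coordinates with the camera center at the origin; `K`, the plane parameters, and the pixel are illustrative values, not from the slides:

```python
import numpy as np

def triangulate_ray_plane(pixel, K, plane_n, plane_d):
    """Intersect the camera ray through `pixel` with a projector light plane
    given in the camera frame as n . X = d."""
    ray = np.linalg.inv(K) @ np.array([pixel[0], pixel[1], 1.0])  # ray direction
    t = plane_d / (plane_n @ ray)      # camera center is the origin: X = t * ray
    return t * ray                     # 3D point in the camera frame

K = np.array([[700.0,   0.0, 320.0],
              [  0.0, 700.0, 240.0],
              [  0.0,   0.0,   1.0]])
X = triangulate_ray_plane((400, 260), K,
                          plane_n=np.array([0.8, 0.0, 0.6]), plane_d=1.0)
print(X)   # ~[0.165, 0.041, 1.446]
```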

SL for Dynamic Scene Capture
- SL is capable of capturing dynamic scenes
- Want to limit capture time when capturing dynamic scenes
- “One-shot” approaches require a single capture
Trade-off:
- Fewer frames -> lower capture resolution
- More frames -> more sensitivity to scene motion

Overview of Phase-Shifted Structured Light (SL) Systems
Project patterns to find correspondences between camera and projector.
Phase-shifted sinusoidal patterns:
- Fast capture: 3 shots (decode sketch below)
- Simple to decode
- Insensitive to blur
- Used in optical metrology
[Figure: sinusoidal pattern with M periods]
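For three sinusoids, the wrapped phase has a closed-form decode. A minimal sketch assuming the common three-step convention of shifts -2π/3, 0, +2π/3 (the slides do not state the exact shifts):

```python
import numpy as np

def wrapped_phase_3step(I1, I2, I3):
    """Wrapped phase in [-pi, pi) from captures I_k = A + B*cos(phi + s_k)
    with shifts s_k = -2*pi/3, 0, +2*pi/3; A and B cancel in the decode."""
    return np.arctan2(np.sqrt(3.0) * (I1 - I3), 2.0 * I2 - I1 - I3)

phi = 1.0
I1, I2, I3 = (128 + 100 * np.cos(phi + s) for s in (-2*np.pi/3, 0.0, 2*np.pi/3))
print(wrapped_phase_3step(I1, I2, I3))   # ~1.0
```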

Example: Scene Illuminated with Phase-Shifted Sinusoids
[Figure: captures I1, I2, I3; wrapped phase vs. unwrapped phase]
M periods in the sinusoid -> need to unwrap the phase.

Outline
- Background
- Structured Light Basics
- Phase Unwrapping
- Our Algorithms
- Results
- Summary

What is Phase Unwrapping?
- Phase values are usually expressed in [-π, π)
- Would like to recover the original continuous phase measurement (sketch below)
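In one dimension the idea reduces to removing 2π jumps between consecutive samples; a minimal sketch (equivalent in spirit to NumPy's built-in np.unwrap):

```python
import numpy as np

def unwrap_1d(wrapped):
    """Add multiples of 2*pi so consecutive differences stay in (-pi, pi]."""
    jumps = np.round(np.diff(wrapped) / (2 * np.pi))   # wraps per step
    return wrapped - 2 * np.pi * np.concatenate([[0.0], np.cumsum(jumps)])

true_phase = np.linspace(0, 20, 200)                   # continuous phase
wrapped = np.angle(np.exp(1j * true_phase))            # folded into [-pi, pi)
print(np.allclose(unwrap_1d(wrapped), true_phase))     # True
```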

Overview of Phase Unwrapping
Unwrapping assumptions:
- Single continuous object in the scene
- Slowly varying depth; discontinuities less than |π|
2D phase unwrapping results in relative phase; absolute phase is needed for triangulation.
[Figure: wrapped phase vs. unwrapped phase]

Creating Point Cloud

Stereo-Assisted Phase Unwrapping
Stereo-assisted phase unwrapping [Weise et al. 2007]:
- Results in absolute phase
- Deals with depth discontinuities
System setup: two cameras, single projector.
[Figure: cameras 1 and 2, projector, and candidate points A–D]

Overview of Phase Unwrapping with Two Cameras [Weise et al. 2007]
To resolve the absolute phase for a camera pixel:
1. Determine the wrapped phase for all pixels in the first and second cameras
2. Project a ray from the pixel in camera 1 with wrapped phase φ
3. Project the M planes from the projector corresponding to φ in space
4. Find the M intersections of those planes with the ray from step (2) in 3D space, and project them onto camera 2
5. Compare the M phase values at the M pixel locations in camera 2 to φ and choose the closest (see the sketch below)
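A sketch of steps 2–5 for a single pixel, under assumed calibration objects: `P2` is the second camera's 3x4 projection matrix, `planes` holds the M candidate light planes as (n, d) pairs, and `wrapped2` is camera 2's wrapped-phase image; all names are illustrative:

```python
import numpy as np

def pick_absolute_phase(ray_o, ray_d, planes, P2, wrapped2, phi):
    """For a camera-1 pixel with wrapped phase phi: intersect its ray with the
    M candidate planes, project each 3D point into camera 2, and keep the
    candidate whose camera-2 wrapped phase is closest to phi."""
    best_k, best_err = None, np.inf
    for k, (n, d) in enumerate(planes):
        t = (d - n @ ray_o) / (n @ ray_d)          # ray-plane intersection
        if t <= 0:
            continue                               # candidate behind the camera
        u, v, w = P2 @ np.append(ray_o + t * ray_d, 1.0)
        px, py = int(round(u / w)), int(round(v / w))
        if not (0 <= py < wrapped2.shape[0] and 0 <= px < wrapped2.shape[1]):
            continue                               # projects outside camera 2
        err = abs(np.angle(np.exp(1j * (wrapped2[py, px] - phi))))
        if err < best_err:
            best_k, best_err = k, err
    return best_k                                  # index of the chosen period
```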

Stereo Phase Unwrapping [Weise et al. 2007]
[Figure: right camera, left camera, and projector geometry]

Drawbacks of Stereo Phase Unwrapping
- Must be run twice
- Possible to incorrectly assign absolute phase values to corresponding points between views
[Figure: point P and candidates A–D seen from the left and right cameras]

Comparison of Stereo-Assisted Phase Unwrapping with Merging Stereo and 3D (x,y,t)

Temporal Inconsistencies
- Consecutive phase images are highly correlated
- Correlated information is not used during phase unwrapping
- Results in inconsistent unwrapping

3D Phase Unwrapping
Multi-dimensional phase unwrapping:
- 2D: traditional image processing
- 3D: medical imaging (e.g., MRI)

Overview of 3D (x,y,t) Phase Unwrapping
- Treat consecutive phase images as a volume of phase values
- Edges are defined between neighboring pixels
- A quality value is assigned to each pixel, inversely proportional to the spatio-temporal second derivative (see the sketch below)
- Want to unwrap according to the quality of edges
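A minimal sketch of that quality definition, computing wrapped second differences along t, y, and x of a phase volume (the exact weighting in the authors' implementation is not given on the slide):

```python
import numpy as np

def wrap(a):
    return np.angle(np.exp(1j * a))        # fold values into [-pi, pi)

def quality_volume(phi):
    """phi: (T, H, W) wrapped-phase volume. Quality is taken inversely
    proportional to the summed spatio-temporal second differences."""
    total = np.zeros_like(phi)
    for ax in range(3):
        p = np.moveaxis(phi, ax, 0)
        p = np.concatenate([p[:1], p, p[-1:]])   # edge padding
        d1 = wrap(p[1:] - p[:-1])                # wrapped first differences
        d2 = d1[1:] - d1[:-1]                    # second differences
        total += np.moveaxis(np.abs(d2), 0, ax)
    return 1.0 / (1e-6 + total)
```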

3D Phase Unwrapping (cont.)
- Evaluate edges in the 3D volume from highest quality to lowest
- Evaluating an edge = unwrapping the pixels connected to it
- Unwrapped pixels are connected in chains
- Grow a chain by adding pixels/edges in x, y, and t
- Chains are merged together (see the sketch below)
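A sketch of the chain-growing idea, shown in 2D for brevity (the 3D version links pixels along t the same way); the chain bookkeeping here is an assumed, simplified implementation:

```python
import numpy as np

def chain_unwrap(phase, quality):
    """Unwrap by visiting edges from highest to lowest quality, merging the
    two chains an edge connects and shifting one chain by whole periods."""
    H, W = phase.shape
    k = np.zeros((H, W), dtype=int)                    # periods added per pixel
    root = {p: p for p in np.ndindex(H, W)}            # chain id per pixel
    members = {p: [p] for p in np.ndindex(H, W)}

    edges = []                                         # (quality, pixel, neighbor)
    for y in range(H):
        for x in range(W):
            for q in [(y, x + 1), (y + 1, x)]:
                if q[0] < H and q[1] < W:
                    edges.append((min(quality[y, x], quality[q]), (y, x), q))
    edges.sort(key=lambda e: -e[0])                    # best edges first

    for _, p, q in edges:
        rp, rq = root[p], root[q]
        if rp == rq:
            continue                                   # already in one chain
        # period shift that reconciles the two chains across this edge
        d = (phase[p] + 2 * np.pi * k[p]) - (phase[q] + 2 * np.pi * k[q])
        shift = int(np.round(d / (2 * np.pi)))
        for m in members[rq]:                          # merge rq's chain into rp's
            k[m] += shift
            root[m] = rp
        members[rp].extend(members.pop(rq))
    return phase + 2 * np.pi * k
```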

Outline
- Background
- Structured Light Basics
- Phase Unwrapping
- Our Algorithms
- Results
- Summary

Consistent Stereo-Assisted Phase-Unwrapping Methods for Structured Light Systems

Merging Stereo & 3D (x,y,t) Phase Unwrapping
Goal: solve for the absolute phase offset of a chain by using stereo matching data.
Approach: use quality to find the phase offset probabilistically.
- M periods in the sinusoidal pattern -> M possible phase offsets
- Find the offset probability for each pixel in the chain, then combine them to find the phase offset for the whole chain
How to find the phase offset for a pixel P: use the phase difference between the wrapped phase at P and its M corresponding projected pixels to generate a likelihood for each of the M phase offset values (see the sketch below).
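A minimal sketch of the per-pixel likelihood and the chain-level combination; the Gaussian model and sigma are assumptions, since the slide only says the phase differences generate likelihoods:

```python
import numpy as np

def offset_likelihoods(phi_p, phases_cam2, sigma=0.3):
    """phases_cam2: (M,) wrapped phase sampled where the M candidate
    projections of pixel P land in the second camera."""
    d = np.angle(np.exp(1j * (phases_cam2 - phi_p)))   # wrapped differences
    return np.exp(-d**2 / (2 * sigma**2))              # one value per offset

def chain_offset(likelihoods):
    """likelihoods: (N_pixels, M). Combine the per-pixel evidence in the
    log domain and pick one offset for the whole chain."""
    return int(np.argmax(np.log(likelihoods + 1e-12).sum(axis=0)))
```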

Computing Phase Offset for an Entire Chain
Combine the offset probabilities for each pixel in the chain to determine the phase offset for the whole chain.
[Figure: (a) pixels in the same period and (b) pixels in different periods; per-pixel offset probabilities P(k=0), …, P(k=M−1) are combined across the chain]

Comparison with 3D (x,y,t) Only
[Figure: proposed vs. 3D (x,y,t) results]
- Proposed algorithm avoids unwrapping low-quality pixels by using stereo
- Handles multiple disjoint objects

Comparison with Stereo Only
[Figure: consecutive unwrapped phase frames and their difference, stereo-only vs. proposed]

Comparison of Stereo-Assisted Phase Unwrapping with Merging Stereo and 3D (x,y,t)

Stereo Phase Unwrapping [Weise et al. 2007]
[Figure: right camera, left camera, and projector geometry]

Proposed Method
Perform unwrapping w.r.t. projector pixels rather than camera pixels.
[Figure: right camera, left camera, and projector]

Overview of the Proposed Method
1. For each projector pixel with phase φ, find the corresponding epipolar line in each image
2. For each camera, find all points along the epipolar line with phase φ
3. Find the 3D points resulting from intersecting the rays through these points with the projector plane for φ:
- N1 points in 3D space for the left camera (camera 1) -> {Ai}
- N2 points in 3D space for the right camera (camera 2) -> {Bi}
4. Find the corresponding pair of Ai and Bi that is closest in 3D space
5. Assign the global phase to the corresponding pixels of the “best” pair in the two cameras

Projector Domain Stereo Method
[Figure: right camera, left camera, projector; top-down view of the projector plane corresponding to the projector column with absolute phase θ]

Finding the Corresponding “Pair” of Points in 3D from the Two Cameras
- Compute the pairwise distance for all pairs of points in the two views; the distance between the 3D locations of corresponding points is small (distance sketch below)
- Compute the possible correspondences for each projector pixel
- Find the correct correspondence labeling for each projector pixel using loopy belief propagation (LBP)
Possible labels per candidate:
  A1: {A1,B1}, {A1,B2}
  A2: {A2,B1}, {A2,B2}
  A3: {A3,B1}, {A3,B2}
[Table: pairwise distances between candidates A1–A3 and B1–B2]
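A minimal sketch of the pairwise-distance table for the A/B candidates above; the greedy pick at the end is only for illustration, since the actual labeling is chosen jointly by LBP (next slide):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((3, 3))     # A1..A3: candidate 3D points from the left camera
B = rng.random((2, 3))     # B1..B2: candidate 3D points from the right camera

D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)   # (3, 2) distances
i, j = np.unravel_index(np.argmin(D), D.shape)
print(f"closest pair: A{i+1}, B{j+1} at distance {D[i, j]:.3f}")
```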

Loopy Belief Propagation Cost Function
Minimize a cost function of the standard MRF form E(l) = Σ_{p∈P} D_p(l_p) + Σ_{(p,q)∈N} V(l_p, l_q), where, from the slide’s legend: l is the labeling for the projector image, P is the set of projector pixels, N is the 4-connected pixel neighborhood, the data term D compares the 3D locations of the labeled pair of image pixels in cameras A & B (truncated at a 3D distance cost threshold), and the smoothness term V compares the 2D locations of neighboring pixels (truncated at a 2D distance cost threshold).
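The slide's exact truncated-distance terms were an equation image; as a stand-in, here is a generic min-sum loopy BP on a 4-connected grid with a simple Potts smoothness term, to show the machinery such a cost is minimized with (the Potts term replaces the slide's truncated 2D distance):

```python
import numpy as np

def lbp_minsum(unary, lam, n_iters=20):
    """Min-sum loopy belief propagation on a 4-connected grid.

    unary: (H, W, L) data cost per pixel and candidate label
    lam:   Potts penalty when 4-neighbors take different labels
    """
    offsets = {0: (-1, 0), 1: (1, 0), 2: (0, -1), 3: (0, 1)}  # incoming directions
    opposite = {0: 1, 1: 0, 2: 3, 3: 2}
    incoming = np.zeros((4,) + unary.shape)                   # messages per direction
    for _ in range(n_iters):
        total = unary + incoming.sum(axis=0)
        new = np.zeros_like(incoming)
        for d, (dy, dx) in offsets.items():
            b = total - incoming[d]                           # exclude receiver's own message
            m = np.minimum(b, b.min(axis=-1, keepdims=True) + lam)  # Potts min-convolution
            m -= m.mean(axis=-1, keepdims=True)               # normalize for stability
            m = np.roll(m, (dy, dx), axis=(0, 1))             # deliver to the neighbor
            # zero the wrapped-around border (those pixels have no such neighbor)
            if dy == -1: m[-1] = 0
            elif dy == 1: m[0] = 0
            if dx == -1: m[:, -1] = 0
            elif dx == 1: m[:, 0] = 0
            new[opposite[d]] = m
        incoming = new
    return (unary + incoming.sum(axis=0)).argmin(axis=-1)

labels = lbp_minsum(np.random.rand(20, 20, 4), lam=0.5)       # toy example
```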

Local Phase Unwrapping for Remaining Pixels
- Occluded pixels: the corresponding pair is too far apart in 3D space
- Use quality-based local unwrapping
Unwrapping order for the remaining pixels depends on:
- Density of stereo-unwrapped points
- Local derivatives
Merge the pixel-density and derivative maps to generate a quality map; unwrap from highest to lowest quality (see the sketch below).
[Figure: camera points computed via LBP; density, local derivative, and merged quality maps]
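A minimal sketch of merging the two cues into one quality map, assuming SciPy is available for the local density filter; the window size and the blend are illustrative choices, not the authors' exact recipe:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def merged_quality(stereo_mask, phase):
    """stereo_mask: True where LBP assigned an absolute phase;
    phase: wrapped-phase image. Returns a quality map for ordering the
    local unwrapping of the remaining pixels."""
    density = uniform_filter(stereo_mask.astype(float), size=9)  # local density cue
    gy, gx = np.gradient(phase)
    smooth = 1.0 / (1e-3 + np.abs(gx) + np.abs(gy))              # derivative cue
    return density * (smooth / smooth.max())   # unwrap high quality first
```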

Results
Proposed method results in consistent phase across the two cameras.
[Figure: left and right camera phase maps, proposed method vs. Weise et al.]
In a 700-frame sequence, our method:
- Has the same accuracy in 80% of frames
- Has better than or equal accuracy in 96% of frames

Results: Captured Dynamic Scene

Dynamic Point Cloud

Advantages of Projector Domain Unwrapping
- Only scene points illuminated by the projector can be reconstructed
- Unwrapping only needs to be performed once for any number of cameras -> more consistent and efficient
- Computational complexity scales with projector resolution rather than image resolution

Conclusion
- Provided an introduction to structured light systems
- Presented two phase unwrapping algorithms:
  - A three-dimensional stereo-assisted phase unwrapping method
  - A projector-centric stereo-assisted unwrapping method
- Results in accurate, consistent phase maps across both views
- Results in accurate 3D point clouds