Structured Light Principles (figure from M. Levoy, Stanford Computer Graphics Lab)


Coding structured light

Why code structured light?
Point lighting – O(n²)
"Line" lighting – O(n)
Coded light – O(log n)*
* The log base depends on the code base.
By reducing the number of captured images we reduce the requirements on processing power and storage.
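
As a hypothetical worked example of the pattern counts behind these complexities (the function and the projector width below are illustrative, not from the slides):

```python
def patterns_needed(n_columns: int, base: int = 2) -> int:
    """Smallest k with base**k >= n_columns, i.e. how many coded patterns are
    needed to give each of n_columns projector columns a unique label."""
    k = 0
    while base ** k < n_columns:
        k += 1
    return k

n = 1024                       # hypothetical projector width in columns
print(n)                       # "line" lighting: one image per column -> 1024
print(patterns_needed(n, 2))   # binary/Gray code: ceil(log2 1024) = 10
print(patterns_needed(n, 3))   # base-3 code (e.g. 3 colors): ceil(log3 1024) = 7
```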

Pipeline
Calibrate the camera/projector pair.
Capture images of the object with the projected patterns.
Process the images to correlate camera and projector pixels:
–Pattern detection.
–Decoding the projector position.
Triangulate to recover depth.
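
The final triangulation step can be sketched as a camera-ray / projector-plane intersection. A minimal NumPy sketch, assuming a pinhole camera at the world origin; the function name, intrinsics, and plane values are hypothetical, not the authors' implementation:

```python
import numpy as np

def triangulate_ray_plane(K, pixel, plane):
    """Intersect the camera ray through `pixel` with a projector stripe plane.

    K     : 3x3 camera intrinsics; the camera is assumed to sit at the world origin.
    pixel : (u, v) camera pixel whose projector code has been decoded.
    plane : (n, d) with n a 3-vector and d a scalar, so plane points X satisfy
            n.X + d = 0; this is the plane swept by the decoded projector column.
    Returns the 3D point, or None if the ray is parallel to the plane.
    """
    u, v = pixel
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])   # viewing ray direction
    n, d = plane
    denom = n @ ray
    if abs(denom) < 1e-9:
        return None
    t = -d / denom                                   # solve n.(t*ray) + d = 0
    return t * ray

# Hypothetical numbers, only to exercise the function:
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0,   0.0,   1.0]])
print(triangulate_ray_plane(K, (400, 260), (np.array([0.5, 0.0, -0.2]), 0.1)))
```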

Working volume

Our Goal
How do we correlate camera and projector pixels?
–Different codes give us several ways of solving the correlation problem.
–We will show an overview of many different approaches.

CSL research
Early approaches (the 80's)
Structuring the problem (the 90's)
New taxonomy
Recent trends
Going back to colors

Main ideas
Structured light codes can be classified by the restrictions they impose on the objects to be scanned:
–Temporal coherence.
–Spatial coherence.
–Reflectance restrictions.

Temporal Coherence
Coding across more than one frame.
Does not allow movement.
Results in simple codes that are as unrestrictive as possible with respect to reflectivity.

Gray Code
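
A minimal sketch (my own illustration, not code from the slides) of how Gray-coded stripe patterns can be generated for a projector with a given number of columns:

```python
import numpy as np

def gray_code_patterns(width: int):
    """Binary stripe patterns (one 1-D row each) that assign every projector
    column its reflected-binary (Gray) code; adjacent columns then differ in
    exactly one pattern, which limits decoding errors at stripe boundaries."""
    n_bits = max(1, (width - 1).bit_length())   # ceil(log2(width))
    cols = np.arange(width)
    gray = cols ^ (cols >> 1)                   # binary -> Gray code
    return [((gray >> (n_bits - 1 - k)) & 1).astype(np.uint8)
            for k in range(n_bits)]

for p in gray_code_patterns(8):   # 8 columns -> 3 patterns
    print(p)
```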

…in practice:

Spatial Coherence
Coding in a single frame.
Spatial coherence can be local or global.
The minimum number of pixels needed to identify the projected code determines the level of detail that can be recovered in the scene.

Some examples

Binary spatial coding

Problems in recovering the pattern

Introducing color in coding
Allowing colors in the coding amounts to augmenting the code base, which gives us more code words of the same length.
If the scene changes the color of the projected light, information can be lost.
Reflectivity restrictions (neutral scene colors) therefore have to be imposed to guarantee correct decoding.

Examples

Local spatial coherence
(Images from: Medical Imaging Laboratory, Departments of Biomedical Engineering and Radiology, Johns Hopkins University School of Medicine, Baltimore, MD.)

Images from Tim Monks, University of Southampton.
Six different colors are used to produce sequences of three stripes without repetition, as sketched below.
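
A rough sketch of the underlying idea: build a color stripe sequence in which adjacent stripes differ and every window of three consecutive colors is unique. This is an assumption-laden greedy illustration, not Monks' actual construction:

```python
def color_stripe_sequence(colors, window=3):
    """Greedily extend a stripe sequence so that adjacent stripes differ and
    every run of `window` consecutive colors appears at most once."""
    seq = list(colors[:window])          # seed (assumes >= `window` distinct colors)
    used = {tuple(seq)}
    while True:
        for c in colors:
            if c == seq[-1]:
                continue                 # adjacent stripes must differ
            word = tuple(seq[-(window - 1):]) + (c,)
            if word in used:
                continue                 # each window must be unique
            seq.append(c)
            used.add(word)
            break
        else:
            return seq                   # no valid extension left

colors = ["R", "G", "B", "C", "M", "Y"]  # 6 projector colors, as in the slide
stripes = color_stripe_sequence(colors)
print(len(stripes), stripes[:12])
```

The greedy strategy stops before reaching the theoretical maximum sequence length, but it is enough to show how the uniqueness-of-windows constraint turns color order into a position code.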

Rainbow Pattern
Assumes that the scene does not change the color of the projected light.
Examples:

Boundary Coding
The graph edges correspond to the stripe transition code; the maximal code results from an Eulerian path on the graph (see the sketch below).
Obs.: 2 frames of Gray code give us 4 stripes; in this case we get 10.
(Figure labels: Slide 1 (column 1), Slide 2 (column 2), edge 00 → 01, ghost boundary.)
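
The "Eulerian path" claim can be illustrated with Hierholzer's algorithm on a small transition graph whose vertices are the 2-frame stripe codes. The edge set below is hypothetical and only for illustration; the actual boundary code (Rusinkiewicz's) excludes the transitions that can produce ghost boundaries:

```python
from collections import defaultdict

def eulerian_path(edges):
    """Hierholzer's algorithm: return a vertex sequence that traverses every
    directed edge exactly once (assumes such a path exists and the graph is
    connected on its edges)."""
    graph = defaultdict(list)
    out_deg, in_deg = defaultdict(int), defaultdict(int)
    for u, v in edges:
        graph[u].append(v)
        out_deg[u] += 1
        in_deg[v] += 1
    start = edges[0][0]
    for v in list(graph):                 # prefer a vertex with one extra out-edge
        if out_deg[v] - in_deg[v] == 1:
            start = v
            break
    stack, path = [start], []
    while stack:
        v = stack[-1]
        if graph[v]:
            stack.append(graph[v].pop())
        else:
            path.append(stack.pop())
    return path[::-1]

# Hypothetical transition graph on the four 2-frame codes:
edges = [("00", "01"), ("01", "11"), ("11", "10"), ("10", "00"),
         ("00", "11"), ("11", "00"), ("01", "10"), ("10", "01")]
print(eulerian_path(edges))   # each traversed edge = one stripe boundary in the pattern
```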

Comments
It scans moving objects.
It is designed to acquire geometry in real time.
Some textures can produce false transitions, leading to decoding errors.
It does not acquire texture.
(Show video.)

Noisy transmission channel
There is a natural analogy between coded structured light and a digital communication system: the camera receives the signal that the projector transmits through the object.
Structure of the coding:
–Number of slides
–Number of characters used as the "alphabet"
–Size of the "words"
–Neighborhood considered

Designing codes
Goal: design a light pattern that acquires depth information with the minimum number of frames, without restricting the object to be scanned (impose only minimal constraints on reflectivity, temporal coherence and spatial coherence).
What is minimal?
–Temporal coherence: 2 frames.
–Spatial coherence: 2 pixels.
–Reflectivity restrictions: allow non-neutral objects to be scanned without losing information.

Processing images
To recover the coded information, a pattern detection procedure has to be carried out on the captured images.
The precision of pattern detection is crucial to the accuracy of the resulting range data.
Shadow areas also have to be detected.
Edge detection:
–Stripe transitions produce edges in the camera images.
–Transitions can be detected with sub-pixel precision.
–Projecting positive and negative slides is a robust way to recover edges (see the sketch below).
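
One common way to realize the positive/negative idea is to localize zero crossings of the difference image by linear interpolation. A minimal sketch under that assumption (not necessarily the authors' detector):

```python
import numpy as np

def subpixel_edges(pos_row, neg_row):
    """Locate stripe transitions along one image row with sub-pixel precision.

    pos_row, neg_row : 1-D arrays, the same row captured under the positive and
    the negative (inverted) pattern.  Stripe boundaries appear as zero crossings
    of their difference, localized by linear interpolation between neighbours."""
    d = np.asarray(pos_row, float) - np.asarray(neg_row, float)
    edges = []
    for x in range(len(d) - 1):
        if d[x] == 0.0:
            edges.append(float(x))
        elif d[x] * d[x + 1] < 0:                        # sign change between x and x+1
            edges.append(x + d[x] / (d[x] - d[x + 1]))   # linear interpolation
    return edges

# Tiny synthetic row (hypothetical values): a transition between pixels 3 and 4.
pos = np.array([200, 200, 200, 180, 60, 40, 40], float)
neg = np.array([40, 40, 40, 60, 180, 200, 200], float)
print(subpixel_edges(pos, neg))   # -> [3.5]
```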

Revisiting Colors
By taking advantage of successively projected positive and negative slides, the reflectivity restrictions can be eliminated.
To solve the ghost boundary problem we have to augment the code base, that is, allow colors.

Recovering colored codes
u is the ambient light.
r is the local intensity transfer factor, mainly determined by local surface properties.
p is the projected intensity for each channel.
These quantities are related by the positive-slide and negative-slide equations, as sketched below.
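
A per-channel model consistent with these definitions, written as a reconstruction (my assumption about the exact form, following the usual positive/negative pattern model, not necessarily the slides' equations):

```latex
% u = ambient light, r = local intensity transfer factor,
% p in {0,1} = projected intensity for the channel (assumed reconstruction)
\begin{aligned}
I^{+} &= u + r\,p           && \text{(positive slide)}\\
I^{-} &= u + r\,(1 - p)     && \text{(negative slide)}\\
I^{+} - I^{-} &= r\,(2p - 1) && \text{(sign depends only on } p\text{, not on } u \text{ or } r\text{)}
\end{aligned}
```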

Colored Gray code

(b,s)-BCSL
Augments the base of the Rusinkiewicz code, eliminating ghost boundaries.
We propose a coding scheme that generates boundary stripe codes with b colors in s slides; it is called (b,s)-BCSL.

(3,2)-BCSL

Decoding table (i-th transition)

vertices | d(0) | d(1) | d(2) | d(3)
V(00)    |  0   |  3   |  6   |  9
V(01)    |      |      |      |
V(02)    |      |      |      |
V(10)    |      |      |      |
V(11)    |      |      |      |
V(12)    |      |      |      |
V(20)    |      |      |      |
V(21)    |      |      |      |
V(22)    |      |      |      |

(3,2)-BCSL

…after processing: (show video)

How do we reconstruct the entire object?
Capture images from many different points of view.
The resulting point clouds have to be aligned and merged (see the sketch below).
The merged point cloud can then be processed into a mesh.
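
The alignment step is typically solved with ICP-style registration; its core is the closed-form rigid fit between corresponding points. A minimal sketch, assuming correspondences are already known (the slides do not specify the registration method):

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst points
    with known correspondences (the Kabsch/Procrustes step at the core of ICP)."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)                                # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = cd - R @ cs
    return R, t

# Hypothetical check: rotate/translate a small cloud and recover the motion.
rng = np.random.default_rng(0)
cloud = rng.random((50, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
moved = cloud @ R_true.T + np.array([0.1, -0.2, 0.05])
R, t = rigid_align(cloud, moved)
print(np.allclose(R, R_true), np.round(t, 3))
```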

Projected pattern while changing the object's position.
(From: Medical Imaging Laboratory, Departments of Biomedical Engineering and Radiology, Johns Hopkins University School of Medicine, Baltimore, MD.)