Triangulation Scanner Design Options

Triangulation System Options
Single-stripe systems are the most robust, but slowest. To go faster, project multiple stripes; but then, which stripe is which? In the limit, project a full 2D pattern and determine the projector/camera correspondence.
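The single-stripe case reduces to a ray-plane intersection: the calibrated projector defines a plane in space for the lit stripe, and each lit camera pixel defines a ray. A minimal sketch of that computation (function name and the camera-coordinate convention are illustrative, not from the slides):

```python
import numpy as np

def triangulate_stripe(ray_dir, plane_n, plane_d, cam_origin=np.zeros(3)):
    """Intersect a camera ray with the projector's stripe plane.

    ray_dir:          direction of the camera ray through a lit pixel
    plane_n, plane_d: stripe plane n . X = d, in camera coordinates
    Returns the 3D surface point (assumes the ray is not parallel
    to the plane).
    """
    t = (plane_d - plane_n @ cam_origin) / (plane_n @ ray_dir)
    return cam_origin + t * ray_dir

# Example: stripe plane z = 2, ray straight down the optical axis
p = triangulate_stripe(np.array([0.0, 0.0, 1.0]),
                       np.array([0.0, 0.0, 1.0]), 2.0)
```

With a multi-stripe or 2D pattern, the only change is that the correspondence (which plane goes with which pixel) must first be decoded from the projected code.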

Time-Coded Light Patterns
Assign each stripe a unique illumination code over time [Posdamer 82]. As a simple example, consider a code based on binary numbers: if you look at a single position in space (i.e., a single pixel), you see a certain on/off pattern over four frames. This pattern conveys a code that tells you which stripe you are looking at, which gives you a plane in space with which you can then triangulate to find depth.
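The binary scheme described above can be sketched in a few lines: frame t projects bit t of each stripe index, and a pixel's on/off observations over time spell out its stripe. Function names are illustrative:

```python
def binary_patterns(n_frames, n_stripes):
    """Frame t is 'on' over stripe s iff bit (n_frames-1-t) of s is set,
    i.e., the most significant bit is projected first."""
    return [[(s >> (n_frames - 1 - t)) & 1 for s in range(n_stripes)]
            for t in range(n_frames)]

def decode_binary(bits):
    """Recover a stripe index from one pixel's on/off observations,
    most significant frame first."""
    code = 0
    for b in bits:
        code = (code << 1) | int(b)
    return code

# A pixel over stripe 11 sees, across 4 frames, the bits of 1011:
patterns = binary_patterns(4, 16)
observed = [patterns[t][11] for t in range(4)]   # [1, 0, 1, 1]
```

Four frames thus distinguish 16 stripes; n frames distinguish 2^n.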

Gray-Code Patterns
To minimize the effects of quantization error, use a Gray code: each point in space may lie on a stripe boundary only once across the frame sequence.
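The standard binary-reflected Gray code has exactly the property the slide needs: adjacent stripe indices differ in a single bit, so each spatial location is a black/white boundary in only one frame. A minimal sketch of the conversion both ways:

```python
def to_gray(n):
    """Binary-reflected Gray code of n; adjacent values differ in one bit."""
    return n ^ (n >> 1)

def from_gray(g):
    """Invert to_gray by folding the bits back down."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n
```

Projecting bit t of `to_gray(s)` in frame t, and applying `from_gray` to the decoded value, drops into the binary scheme above unchanged.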

Accounting for Reflectance
Because of surface reflectance and ambient light, distinguishing between black and white is not always easy. Solution: project an all-white (and sometimes an all-black) reference frame. This also permits using multiple shades of gray.
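With the reference frames in hand, classification becomes a per-pixel threshold halfway between each pixel's observed white and black levels, which cancels albedo and ambient terms. A sketch under that assumption (function name is illustrative):

```python
import numpy as np

def classify(frame, white, black):
    """Per-pixel binary classification of a coded frame.

    `white` and `black` are images captured under all-white and all-black
    projection; thresholding midway between them compensates for surface
    reflectance and ambient illumination."""
    thresh = (white.astype(float) + black.astype(float)) / 2.0
    return frame.astype(float) > thresh

# A bright, high-albedo pixel and a dim one, both lit in this frame's code:
out = classify(np.array([[200, 30]]),
               np.array([[220, 100]]),   # all-white reference
               np.array([[40, 20]]))     # all-black reference
```

Note the dim pixel (30) is correctly read only because its own references (100, 20) set a local threshold; a single global threshold would misclassify it.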

Multiple Shades of Gray

Intensity Wedges
In the limiting case of many shades of gray, project continuous intensity wedges.
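For a linear wedge, the normalized intensity at a pixel (relative to its own white and black references) directly encodes position across the projector's field. A sketch, assuming a linear wedge and the same reference frames as above:

```python
import numpy as np

def wedge_position(frame, white, black, eps=1e-6):
    """Normalized intensity in [0, 1]; for a linear intensity wedge this
    approximates the pixel's position across the projected field."""
    num = frame.astype(float) - black.astype(float)
    den = np.maximum(white.astype(float) - black.astype(float), eps)
    return np.clip(num / den, 0.0, 1.0)
```

In practice a single wedge is noise-limited, which is why the discrete time codes above remain attractive.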

Temporal vs. Spatial Continuity
Structured-light systems make certain assumptions about the scene. Temporal continuity assumption: assume the scene is static, and assign stripes a code over time. Spatial continuity assumption: assume the scene is one continuous object, and project a grid, a pattern of dots, etc.

Grid Methods
Assume exactly one continuous surface, and count dots or grid lines. Occlusions cause problems; some methods use dynamic programming [Maas].

Codes Assuming Local Spatial Continuity
Each codeword is spread over a local spatial neighborhood of the pattern.

Codes Assuming Local Spatial Continuity [Zhang] [Ozturk]

Spatio-Temporal Continuity
Another possible assumption: the object may move, but with velocity low enough to permit tracking. This is "spatio-temporal" continuity.

Designing a Code for Moving Scenes

Codes for Moving Scenes
Assign time codes to stripe boundaries, perform frame-to-frame tracking of corresponding boundaries, and propagate the illumination history [Hall-Holt & Rusinkiewicz, ICCV 2001]. Because the objects move from frame to frame, the simple per-pixel algorithm might combine half of one code with half of another and decode the wrong stripe. So tracking is done on the boundaries between stripes: boundaries are tracked from frame to frame, and the illumination observed on both sides of a boundary over time is what conveys the code, e.g., illumination history (WB),(BW),(WB) yields the code.
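The propagation step above can be sketched as accumulating (left, right) color pairs along a tracked boundary; the concatenated history is the boundary's codeword. The string encoding here is a hypothetical illustration, not the paper's exact representation:

```python
def boundary_code(history):
    """Concatenate the (left, right) stripe colors observed at a tracked
    boundary over successive frames into an identifying codeword.

    history: per-frame pairs, e.g. [('W','B'), ('B','W'), ('W','B')].
    """
    return ''.join(left + right for left, right in history)

# The slide's example history (WB),(BW),(WB):
code = boundary_code([('W', 'B'), ('B', 'W'), ('W', 'B')])
```

Each additional frame doubles the number of distinguishable boundaries, so a short history suffices to label every projected stripe edge.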

Designing a Code
We want many "features" to track: lots of black/white edges in each frame. We also try to minimize ghosts: WW or BB "boundaries" that cannot be seen directly. Some stripes are easy to track because their boundaries are visible in both frames, but in some cases we must infer the presence of a boundary we cannot see directly. To make this feasible at all, the code is designed to minimize ghosts and to ensure, for example, that a ghost in one frame becomes visible in the next.

Designing a Code
Design the code to make tracking possible: do not allow two spatially adjacent ghosts, and do not allow two temporally adjacent ghosts.
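The temporal constraint can be checked mechanically: treating each stripe as a 4-bit time code, a boundary is a ghost in frame t exactly when both sides have the same color there. A hypothetical helper illustrating the rule (bit ordering is an arbitrary convention for this sketch):

```python
def ghost_frames(left, right, n_frames=4):
    """Frames in which the boundary between stripes `left` and `right`
    (given as n-bit time codes) is a ghost, i.e., both sides match."""
    return [t for t in range(n_frames)
            if ((left >> t) & 1) == ((right >> t) & 1)]

def temporally_trackable(left, right, n_frames=4):
    """The design rule from the slides: no two temporally adjacent ghosts,
    so the boundary is directly visible at least every other frame."""
    g = ghost_frames(left, right, n_frames)
    return all(b - a > 1 for a, b in zip(g, g[1:]))
```

A code sequence satisfying this rule for every adjacent stripe pair guarantees that every ghost can be bracketed by directly observed boundaries.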

Designing a Code
Consider a graph (for 4 frames) whose nodes are the 16 four-bit stripe codes over time (0000 through 1111) and whose edges are the boundaries between them over time.

Designing a Code
In this graph, each edge (boundary) is colored by when it is visible: at even times or at odd times. We seek a path with alternating edge colors. The graph has 55 edges, and since an alternating traversal may cross each edge twice (once in each parity), a maximal-length traversal has 110 boundaries, i.e., 111 stripes.

Designing a Code
There are many solutions to the graph problem as stated, so we can add more constraints: maximize the effect of errors, allow no static boundaries, and use an even distribution of stripes of width 1 and 2.

Implementation
Pipeline: a DLP projector illuminates the scene at 60 Hz, a synchronized NTSC camera captures video, and the pipeline returns range images at 60 Hz. Stages: project code, capture images, find boundaries, match boundaries, decode, compute range.

Results
Video frames and the detected stripe boundaries: unknown, known, and ghost boundaries.

More Options: Shadow Scanning
A variant of single-stripe triangulation: use a simple lamp and a stick to create a moving shadow edge [Bouguet].

More Options: Active Stereo
Benefit: requires only camera/camera calibration, not camera/projector calibration. Projection possibilities: a random texture, a single stripe, or others (space-time codes?) [Davis].