3D Laser Stripe Scanner or “A Really Poor Man’s DeltaSphere” Chad Hantak December 6, 2004.



Overview
- Introduction
- Acquisition Device
- Calibration
- Software Framework
- Processing
- Limitations
- Future Work

Introduction, or "What's the project about?"
- 3D scanning system, similar to DeltaSphere
  - Range samples from a fixed COP
  - Acquire sparse depth with color
- Acquisition device
  - Cheap
  - Simple to operate

DeltaSphere
- Dense depth
- Time-of-flight acquisition
- Collects range samples over 360 degrees of azimuth and 135 degrees of elevation
- Cons
  - Slow scanning and processing
  - Expensive

Acquisition Device Components
- Laser level
  - Available at hardware stores
  - Used for leveling (pictures, shelves, ...)
  - Emits a plane of laser light (vertical or horizontal)
- Camcorder
  - Canon GL2 (3 CCD)
  - A little overkill for this

Acquisition Device Construction
- Camcorder rotates about a fixed point
- Laser level is fixed next to the camcorder

Depth Range Acquisition, or "The Theory"
- Calibrated rig
  - Know the laser stripe's plane equation in the camera's coordinate system
- During rotation
  - Determine the amount of rotation
  - Find the laser stripe's points in the image
  - Determine where image rays intersect the stripe plane
  - Yields depth samples
- At the same time, acquire color
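The core triangulation step above, intersecting an image ray with the calibrated stripe plane, can be sketched as follows. This is an illustrative sketch rather than the project's actual code; it assumes the plane is stored as a normal n and offset d satisfying n·X + d = 0, with rays starting at the COP (the camera-space origin), and all names are hypothetical.

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

inline double dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// A ray X = t * r from the COP meets the plane n.X + d = 0 at
// t = -d / (n . r). Returns false for rays parallel to the plane
// or intersections behind the camera.
bool intersectStripePlane(const Vec3& r, const Vec3& n, double d, Vec3* point) {
    double denom = dot(n, r);
    if (std::fabs(denom) < 1e-9) return false;   // ray parallel to plane
    double t = -d / denom;
    if (t <= 0.0) return false;                  // intersection behind the COP
    *point = { t * r.x, t * r.y, t * r.z };
    return true;
}
```

Each valid intersection is one depth sample; the depth panorama is filled by repeating this for every detected stripe point per frame.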

Calibration
- Need to know the laser stripe plane in the camera coordinate system
- Record a calibration grid with the laser stripe activated

Bouguet's Matlab Toolkit
- Camera intrinsics
  - Standard Bouguet toolkit
- Extras from the toolkit
  - Calibration plane in the camera's coordinate system for each image

Laser Stripe's Plane
- For each image
  - Find stripe points (user clicks)
  - Determine world rays
  - Intersect with the grid's plane for world points
- Take all the world points and fit them to a plane (least squares)
- Result is the laser stripe plane
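The least-squares fit in the last step can be sketched as below. For simplicity this hypothetical version fits z = a·x + b·y + c via the normal equations (solved with Cramer's rule), which assumes the stripe plane is not parallel to the camera's z axis; the actual implementation may parameterize the plane differently.

```cpp
#include <vector>
#include <cmath>

struct Pt { double x, y, z; };

// Least-squares fit of z = a*x + b*y + c to a set of world points.
// The stripe plane is then a*x + b*y - z + c = 0.
bool fitStripePlane(const std::vector<Pt>& pts, double* a, double* b, double* c) {
    double sxx = 0, sxy = 0, sx = 0, syy = 0, sy = 0;
    double sxz = 0, syz = 0, sz = 0, n = static_cast<double>(pts.size());
    for (const Pt& p : pts) {
        sxx += p.x * p.x; sxy += p.x * p.y; sx += p.x;
        syy += p.y * p.y; sy += p.y;
        sxz += p.x * p.z; syz += p.y * p.z; sz += p.z;
    }
    // Normal equations: [sxx sxy sx; sxy syy sy; sx sy n] [a b c]^T = [sxz syz sz]^T
    double det = sxx * (syy * n - sy * sy) - sxy * (sxy * n - sy * sx)
               + sx * (sxy * sy - syy * sx);
    if (std::fabs(det) < 1e-12) return false;   // degenerate point set
    double da = sxz * (syy * n - sy * sy) - sxy * (syz * n - sy * sz)
              + sx * (syz * sy - syy * sz);
    double db = sxx * (syz * n - sy * sz) - sxz * (sxy * n - sy * sx)
              + sx * (sxy * sz - syz * sx);
    double dc = sxx * (syy * sz - syz * sy) - sxy * (sxy * sz - syz * sx)
              + sxz * (sxy * sy - syy * sx);
    *a = da / det; *b = db / det; *c = dc / det;
    return true;
}
```

With noise-free clicks the fit reproduces the plane exactly; with real clicks it minimizes the squared z residuals over all calibration images.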

Software Framework
- Implemented in C++
- Libraries
  - OpenCV: image processing
  - DirectShow: interface to the camcorder
    http://msdn.microsoft.com/library/default.asp?url=/library/en-us/directshow/htm/introductiontodirectshow.asp

Two Executables
- depthPanorama
  - Hooks into the camcorder
  - Acquires / processes frames
  - Saves into shared memory
- panoramaView
  - Reads from shared memory
  - Displays the result to the user

depthPanorama: 3 Systems
- RotationTracker: estimates the amount of rotation
- ColorProcessor: creates the color panorama
- DepthProcessor: creates the depth panorama

RotationTracker
- Estimates the amount of rotation between two frames
- Uses Lucas-Kanade point tracking
- Once "enough" points have moved, uses RANSAC to estimate the amount of rotation
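The RANSAC step above amounts to a consensus vote over per-track rotation hypotheses. In the hypothetical sketch below, each LK track contributes one scalar hypothesis (e.g. horizontal pixel displacement divided by focal length), and because there is only one hypothesis per track, every candidate is scored exhaustively instead of by random sampling; the consensus-then-refine logic is the same.

```cpp
#include <vector>
#include <cmath>

// Picks the hypothesis with the most inliers (within inlierTol), then
// averages those inliers for the final rotation estimate. Outlier
// tracks (mistracked points, moving objects) are voted out.
// Assumes 'hypotheses' is non-empty.
double consensusRotation(const std::vector<double>& hypotheses, double inlierTol) {
    int bestCount = -1;
    double best = 0.0;
    for (double cand : hypotheses) {
        int count = 0;
        for (double h : hypotheses)
            if (std::fabs(h - cand) < inlierTol) ++count;
        if (count > bestCount) { bestCount = count; best = cand; }
    }
    // Refine: average the inliers of the winning candidate.
    double sum = 0.0;
    int count = 0;
    for (double h : hypotheses)
        if (std::fabs(h - best) < inlierTol) { sum += h; ++count; }
    return sum / count;
}
```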

ColorProcessor
- Updates the color panorama
- Process
  - Only update after "enough" rotation
  - Extract and resize the left half of the frame into the panorama's memory
- The right half of the frame contains the laser stripe

DepthProcessor
- Updates the depth panorama
- Process
  - Find the laser stripe points in the frame (samples every x scan lines)
  - Turn image points into world rays
  - Intersect the rays with the laser stripe plane
  - For valid intersections, place the depth value in the depth panorama
  - For "some" invalid intersections, attempt to interpolate

Finding Laser Stripe Image Points, or "The Hard Part"
- Know where in the frame the points are (right half)
- Extension to the ModelCamera system
  - Process each scan line looking for candidate peaks with symmetry
- Exploit the fact that the camcorder is only rotating
  - Determine the homography from one frame to the next, warp the previous frame, and subtract it from the current frame
  - The homography is based on the amount of rotation between frames
  - Since the laser stripe moves differently, it stands out better in the resultant image
- There is still noise in the image
  - Need some image processing: edge detection, erosion, dilation
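One way the symmetric-peak search might look on a single scan line is sketched below. This is a simplified stand-in, not the ModelCamera code: it assumes the row has already been background-subtracted via the homography warp, and the threshold and symmetry test are illustrative choices.

```cpp
#include <vector>

// Scans one (background-subtracted) intensity row for the strongest
// local maximum whose neighbours fall off roughly symmetrically, as a
// crude candidate-peak test. Returns the column index, or -1 if no
// peak clears the threshold.
int findStripePeak(const std::vector<int>& row, int threshold) {
    int best = -1;
    int bestVal = threshold;
    for (std::size_t x = 1; x + 1 < row.size(); ++x) {
        int v = row[x];
        if (v <= bestVal) continue;
        if (row[x - 1] >= v || row[x + 1] >= v) continue;   // not a local max
        int asym = row[x - 1] - row[x + 1];
        if (asym < 0) asym = -asym;
        if (asym > v / 2) continue;                         // too lopsided
        best = static_cast<int>(x);
        bestVal = v;
    }
    return best;
}
```

Running this every x scan lines yields the stripe's image points, which the DepthProcessor then turns into rays.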

Turning Image Points into Depth Samples
- Know the camcorder intrinsics
  - Allows image points to be turned into rays into the world from the camcorder's COP
- Intersect these rays with the stripe plane
- Splat depth to neighboring depth samples
- Associate a confidence level with each depth sample
  - Perfect: confidence = 1.0
  - Interpolated: confidence = 0.75
  - Splatted: confidence = 0.6
- When updating, if the incoming confidence is higher, use it
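The confidence-gated update rule follows directly from the slide; the struct and function names below are hypothetical, and the confidence constants are the ones listed above.

```cpp
// Per-pixel depth with a confidence tag. Suggested values from the
// slide: 1.0 measured directly, 0.75 interpolated, 0.6 splatted.
struct DepthSample {
    float depth = 0.0f;
    float confidence = 0.0f;   // 0 means "no sample yet"
};

// A new sample only overwrites an existing one if it arrives with
// strictly higher confidence, so a direct measurement can replace a
// splat but never the other way around.
void updateSample(DepthSample* cell, float depth, float confidence) {
    if (confidence > cell->confidence) {
        cell->depth = depth;
        cell->confidence = confidence;
    }
}
```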

No Intersection
- Due to noise, image points of the stripe may not be correct
  - The world ray may not intersect the stripe plane
- If neighboring samples are valid, interpolate the missing depth value
  - Lower confidence
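A minimal 1-D sketch of that neighbor interpolation, assuming depths are stored along a column with 0 marking a missing sample; the real system's interpolation scheme may differ, and the resulting value would be stored at reduced confidence (0.75 per the previous slide).

```cpp
#include <vector>

// Fills a hole at index 'hole' from the nearest valid neighbour on
// each side of the column. Fails if either side has no valid sample.
bool interpolateMissing(const std::vector<float>& col, std::size_t hole, float* out) {
    float lo = 0.0f, hi = 0.0f;
    for (std::size_t i = hole; i-- > 0; )               // search backwards
        if (col[i] > 0.0f) { lo = col[i]; break; }
    for (std::size_t i = hole + 1; i < col.size(); ++i) // search forwards
        if (col[i] > 0.0f) { hi = col[i]; break; }
    if (lo <= 0.0f || hi <= 0.0f) return false;         // need both sides
    *out = 0.5f * (lo + hi);
    return true;
}
```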

panoramaView
- Simply reads the values sent by depthPanorama
- Displays texture-mapped quads of the depth values
- Updates continually from shared memory

Results

Current Problems
- Rotation tracking
  - Off by a few degrees at the end of a 360-degree pan (< 10 degrees)
  - Tweaking LK tracking parameters
  - Smooth out the rotation (use LK & correlation to determine when rotation is complete)
- Stripe point determination
  - Adjusting image processing parameters
- Depth discontinuities
  - Take them into account when rendering

Future Work
- Fix current problems
- Better laser light / point detection
  - Stronger at distance
  - Want longer-range scans
- Combine different scans
  - Introduces a new slew of problems

References
- R. Laganiere, "Programming computer vision applications: A step-by-step guide to the use of the Intel OpenCV library and the Microsoft DirectShow technology".
- V. Popescu, E. Sacks, and G. Bahmutov, "Interactive Modeling from Dense Color and Sparse Depth", 3DPVT.
- S. Sinha and M. Pollefeys, "Towards Calibrating a Pan-Tilt-Zoom Camera Network", 5th Workshop on Omnidirectional Vision, Camera Networks, and Non-classical Cameras (OMNIVIS), May 16, 2004.