A NEW STRUCTURED LIGHT CALIBRATION METHOD OF PARTIALLY UNKNOWN GEOMETRY
José Manuel Iñesta, José Martínez Sotoca, Mateo Buendía
University of Alicante; University Jaume I, Castellón; University of Valencia. Spain

STRUCTURED LIGHT
A range retrieval method alternative to stereo imaging.
- A light source with a known pattern is utilised instead of a camera.
- A set of landmarks is created on the objects by the light pattern.
- The 3D positions of those landmarks are computed.
Pros:
- Makes the stereo correspondence problem easier to solve.
- A light source is expected to be cheaper than a digital camera.
Cons:
- Only valid in controlled environments.
- Sensitive to changes in lighting conditions and to the kind of surface.

EXPERIMENTAL SETTING: CALIBRATION PHASE
[Diagram: in Image 1 the camera records the pattern projected on the back plane; in Image 2, on the front plane. The volume between the two planes is the valid calibrated space.]

EXPERIMENTAL SETTING: OPERATION PHASE
[Diagram: the object is placed inside the calibrated space in front of the back plane; the unknown to recover is the depth z(i,j) of each point illuminated on the object.]

THE INDEXATION PROBLEM
- It is the problem in structured light that is dual to the correspondence problem in stereo vision: labelling the landmarks artificially created by the pattern when it is projected over the scene.
- Once it is solved, the range data can be retrieved.
- Different approaches help to solve it: colour codes, binary patterns, constraints.
- We have introduced a mark in the pattern that sets a reference for landmark indexation.

THE CALIBRATION PROBLEM
Problem to be solved: determining the translation and orientation of the co-ordinate axes of the imaging system with respect to the global co-ordinate system.
The presented approach is based on geometric considerations applied to the information provided by two previous calibration projections over two reference planes.
This approach does not require knowledge of the whole system geometry.

SYSTEM GEOMETRY
[Diagram: side view of the system. The light source and the camera's focal point face the scene; the back and front reference planes are separated by the distance D, at distances D1 and D2 from the camera. A light ray meets the back plane at r1, the front plane at r2 and the object point at r3; their projections on the image plane are rp1, rp2 and rp3. The problem magnitude is the depth z of the object point.]

RANGE RETRIEVAL EQUATIONS
[Four slides: the system geometry diagram is repeated while the range retrieval equations are built up from the similar triangles formed by the image points rp1, rp2, rp3 and the ray intersections r1, r2, r3 with the back plane, the front plane and the object.]

RANGE RETRIEVAL EQUATIONS
Using these similarities, an expression can be derived in which the z value depends only on the image co-ordinates of the light dots for the object net and both calibration nets (rp1, rp2, rp3) and on the distances between the planes (D, D1, D2).
In addition, taking two given nodes (A and B) on the two calibration planes, the constant k can also be expressed in terms of image co-ordinates.
This way, z is computed as a function only of distances between pixels and of the distance D between the two calibration planes.
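The paper's exact expressions were shown as images and are not reproduced in the transcript. As an illustrative sketch only (the function name and the simplified geometry are assumptions, not the authors' formulation): for a pinhole camera with both reference planes parallel to the image plane, the image coordinate of the dot produced by a fixed light ray is an affine function of inverse depth, x = alpha + beta/Z, which yields a closed form for the object depth from the three observed image positions and the two known plane depths.

```python
def depth_from_two_planes(x1, x2, x3, Z1, Z2):
    """Depth of the object dot, given the image coordinate x3 of the dot on
    the object and the image coordinates x1, x2 of the same light ray on two
    reference planes at known depths Z1, Z2 (planes assumed parallel to the
    image plane, so x = alpha + beta / Z along the ray's image trace)."""
    u1, u2 = 1.0 / Z1, 1.0 / Z2
    s = (x3 - x1) / (x2 - x1)   # fraction travelled along the inverse-depth axis
    u3 = u1 + s * (u2 - u1)
    return 1.0 / u3

# Synthetic check with x = 2 + 300 / Z:
# back plane at Z = 1000 gives x = 2.3, front plane at Z = 500 gives x = 2.6
z = depth_from_two_planes(2.3, 2.6, 2.4, 1000.0, 500.0)
print(z)  # ~ 750.0, since x = 2.4 corresponds to Z = 750
```

Note that only image-space quantities and the plane depths enter the computation, mirroring the slide's claim that z follows from pixel distances and D alone.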

 r  3 % S Y T E M R O ERROR SOURCES: UPPER VIEW OF THE EXPERIMENTAL SETTING r r’ r ” r’ BACK PLANE 1 3 3 2 z r q 3 D r FRONT PLANE 2 LIGHT SOURCE D 1 ERROR SOURCES: Discretization error: (256x256)   0.8% Calibration error: (D = 5001mm)   0.2% Setting error: (related to k )   2% Others? rp 3 rp rp 1 2 D 2  r  3 % CAMERA

In addition, errors due to the experimental setting vary with the projection angle θ.
[Plot: relative error er (%), ranging from 1 to 8, versus projection angle θ from 10 to 50 degrees, measured on an object of known dimensions.]

[Images: the same object with the pattern projected at different angle values: θ = 10°, 20°, 30°, 35°, 40°.]

THE PROJECTED PATTERN
- The points where the z(i,j) values are to be computed are the nodes of a square grid.
- To be decided: the spacing of the lines and their thickness.
- These parameters are problem- and surface-dependent: object sizes, expected surface topology, textures, precision needed, etc.

OBJECT AND PATTERN SEGMENTATION
Object segmentation is carried out by a logical difference between the images of the posterior reference net and the object net.
Pattern segmentation:
- Maxima are detected in the profile lines inside the segmented object zone.
- A semiautomatic mechanism has been devised to reconstruct discontinuities, if they appear.
- The reconstructed lines are skeletonized.
- The distorted pattern is re-built.
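The maxima-detection step can be sketched as follows (a minimal illustration in pure Python; the function name and threshold parameter are assumptions, not the authors' implementation): scan a 1D intensity profile taken across the stripes and keep the local maxima above a threshold.

```python
def stripe_peaks(profile, threshold):
    """Return the indices of local maxima above `threshold` in a 1D
    intensity profile, e.g. one image row crossing the projected stripes."""
    peaks = []
    for i in range(1, len(profile) - 1):
        if (profile[i] >= threshold
                and profile[i - 1] < profile[i] >= profile[i + 1]):
            peaks.append(i)
    return peaks

row = [0, 1, 5, 1, 0, 2, 8, 2, 0]   # two bright stripes cross this row
print(stripe_peaks(row, 4))          # [2, 6]
```

In practice one such scan per profile line, restricted to the segmented object zone, yields the candidate stripe points that are later skeletonized.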

NODE MAPPING AND Z EXTRACTION
The node-seeking algorithm performs line tracking in the four cardinal directions, looking for points that satisfy a crossing condition.
The first node is the one at the upper left corner of the pattern mark.
After node tracking, for each node we have:
- the co-ordinates of its projection on the back reference plane (rp1)
- the co-ordinates of its projection on the front reference plane (rp2)
- the co-ordinates of its projection on the object (rp3)
So we have all the information needed to compute z(i,j).
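One simple form of the crossing condition can be sketched like this (an assumption-laden simplification, not the paper's algorithm: here a node is any pixel of the skeletonized grid that has stroke pixels in all four cardinal directions).

```python
def find_grid_nodes(skel):
    """Detect grid nodes in a skeletonized (one-pixel-wide) binary grid
    image: pixels with set neighbours in all four cardinal directions."""
    h, w = len(skel), len(skel[0])
    return [(y, x)
            for y in range(1, h - 1)
            for x in range(1, w - 1)
            if skel[y][x]
            and skel[y - 1][x] and skel[y + 1][x]
            and skel[y][x - 1] and skel[y][x + 1]]

# A single cross: horizontal line in row 2, vertical line in column 2
grid = [[1 if y == 2 or x == 2 else 0 for x in range(5)] for y in range(5)]
print(find_grid_nodes(grid))  # [(2, 2)]
```

A real detector would also follow the lines between crossings, as the slide describes, so that each node receives its (i, j) grid index relative to the pattern mark.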

SURFACE TOPOGRAPHY
The z(i,j) values for the grid nodes are obtained; these values represent the object surface.
If further information is needed, a 2D interpolation is performed (bilinear, cubic splines, Hermite polynomials, etc.).
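For instance, the bilinear variant interpolates between the four surrounding grid nodes (a hypothetical helper, not the project's code):

```python
def bilinear(z, i, j):
    """Interpolate the surface height at fractional grid position (i, j),
    with 0 <= i < rows-1 and 0 <= j < cols-1, from the z values of the four
    surrounding nodes."""
    i0, j0 = int(i), int(j)
    di, dj = i - i0, j - j0
    return ((1 - di) * (1 - dj) * z[i0][j0]
            + (1 - di) * dj * z[i0][j0 + 1]
            + di * (1 - dj) * z[i0 + 1][j0]
            + di * dj * z[i0 + 1][j0 + 1])

z = [[0.0, 1.0], [2.0, 3.0]]
print(bilinear(z, 0.5, 0.5))  # 1.5, the average of the four corners
```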

USING THE DATA...
In our project, we use the 3D data to analyse the shape of the human back.
The first step is to extract the line corresponding to the spine on the skin surface.
The method is based on 3D deformable models (active shape models) trained with hand-marked spine lines.
Once this line is extracted, differential geometry is applied to study its features.
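As one example of such a differential-geometry feature (an illustrative sketch, not the project's code), the curvature of the sampled 3D spine line can be estimated with finite differences from the standard formula kappa = |r' x r''| / |r'|^3.

```python
import math

def curvature(points, h):
    """Discrete curvature |r' x r''| / |r'|^3 at the interior samples of a
    3D curve given as a list of (x, y, z) points at parameter step h."""
    ks = []
    for i in range(1, len(points) - 1):
        p0, p1, p2 = points[i - 1], points[i], points[i + 1]
        d1 = [(p2[k] - p0[k]) / (2 * h) for k in range(3)]             # r'
        d2 = [(p2[k] - 2 * p1[k] + p0[k]) / h ** 2 for k in range(3)]  # r''
        cross = [d1[1] * d2[2] - d1[2] * d2[1],
                 d1[2] * d2[0] - d1[0] * d2[2],
                 d1[0] * d2[1] - d1[1] * d2[0]]
        num = math.sqrt(sum(c * c for c in cross))
        den = math.sqrt(sum(d * d for d in d1)) ** 3
        ks.append(num / den)
    return ks

# Sanity check on a circle of radius 2: curvature should be ~0.5 everywhere
h = 0.01
pts = [(2 * math.cos(t * h), 2 * math.sin(t * h), 0.0) for t in range(200)]
ks = curvature(pts, h)
```

The same machinery applies to a noisy digitized spine line, typically after smoothing.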

CONCLUSIONS
- A new method for the calibration of a structured light system has been presented.
- Its main advantages are an easy calibration procedure and the ability to retrieve the range of large object surfaces.
- The main limitation of the method in its current state is the need for the mark in the net for indexation. This is not important for the back application, and it could be solved by grid coding.
- For the human back application the error of the method is around 3 mm, which is below the limit recommended by the experts.

FUTURE LINES
- To evaluate the difficulty of grid coding and its advantages for other real-world applications in which the objects are a priori totally unknown.
- To improve the accuracy of the method (lens distortion, subpixel information, etc.).
- To change the indexation approach using a coded pattern (to allow multiple objects and position variability).