3D Sensing 3D Shape from X Perspective Geometry Camera Model Camera Calibration General Stereo Triangulation 3D Reconstruction.

3D Shape from X
- shading, silhouette, texture: mainly research
- stereo, light striping, motion: used in practice

Structured Light. 3D data can also be derived using a single camera and a light source that can produce stripe(s) on the 3D object.

Structured Light 3D Computation. With the camera at the origin (0,0,0), focal length f, baseline b from the camera to the light source along the x axis, and the light plane at angle θ to the x axis, the image point (x′, y′, f) of a lit surface point back-projects to the 3D point (x, y, z):

    [x y z] = ( b / (f cot θ − x′) ) [x′ y′ f]
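As a sketch, the stripe triangulation can be computed directly. The function name and argument order are my own, and I assume the stripe-plane angle (whose symbol was lost in the slide text) is θ:

```python
import math

def stripe_point(xp, yp, f, b, theta):
    """Triangulate a 3D point from a light-stripe image point.

    (xp, yp): image coordinates of the lit point, f: focal length,
    b: camera-to-projector baseline, theta: angle of the light plane.
    Implements [x y z] = b / (f*cot(theta) - xp) * [xp yp f].
    """
    scale = b / (f / math.tan(theta) - xp)
    return (scale * xp, scale * yp, scale * f)

# With theta = 45 degrees, cot(theta) = 1:
stripe_point(0.5, 0.0, 1.0, 1.0, math.pi / 4)  # -> (1.0, 0.0, 2.0)
```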

Depth from Multiple Light Stripes What are these objects?

Our (former) System: 4-camera light-striping stereo (projector, rotation table, cameras, 3D object).

Review: The Camera Model. How do we get an image point IP from a world point P?

    [ s·IPr ]   [ c11 c12 c13 c14 ] [ Px ]
    [ s·IPc ] = [ c21 c22 c23 c24 ] [ Py ]
    [   s   ]   [ c31 c32 c33  1  ] [ Pz ]
                                    [ 1  ]

The image point (in homogeneous form) is the camera matrix C times the world point. What's in C?
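A minimal sketch of applying a 3×4 camera matrix C to a world point (the function name and a toy C are my own):

```python
import numpy as np

def project(C, P):
    """Map a world point P = (Px, Py, Pz) through the 3x4 camera
    matrix C, then divide by the homogeneous scale s to get (IPr, IPc)."""
    s_ipr, s_ipc, s = np.asarray(C, float) @ np.append(np.asarray(P, float), 1.0)
    return s_ipr / s, s_ipc / s

# Toy camera matrix with c34 normalized to 1, as on the slide:
C = [[1, 0, 0, 0],
     [0, 1, 0, 0],
     [0, 0, 0.5, 1]]
project(C, (2, 4, 2))  # -> (1.0, 2.0)
```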

Review: The camera model handles the rigid body transformation from world coordinates to camera coordinates plus the perspective transformation to image coordinates.

1. CP = T R WP
2. FP = π(f) CP

    [ s·FPx ]   [ 1  0   0   0 ] [ CPx ]
    [ s·FPy ] = [ 0  1   0   0 ] [ CPy ]
    [ s·FPz ]   [ 0  0   1   0 ] [ CPz ]
    [   s   ]   [ 0  0  1/f  0 ] [  1  ]

The perspective transformation maps a 3D point in camera coordinates to an image point. Why is there not a scale factor here?
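A sketch of the perspective step applied to a camera-frame point (function name mine):

```python
import numpy as np

def perspective(f, CP):
    """Apply the perspective matrix pi(f) to a camera-frame point
    CP = (CPx, CPy, CPz). The homogeneous scale works out to s = CPz/f,
    so the image point is (CPx*f/CPz, CPy*f/CPz, f)."""
    pi = np.array([[1, 0, 0,       0],
                   [0, 1, 0,       0],
                   [0, 0, 1,       0],
                   [0, 0, 1.0 / f, 0]])
    h = pi @ np.append(np.asarray(CP, float), 1.0)
    return h[:3] / h[3]

perspective(2.0, (4, 6, 8))  # -> array([1. , 1.5, 2. ])
```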

Camera Calibration. In order to work in 3D, we need to know the parameters of the particular camera setup. Solving for the camera parameters is called calibration. Intrinsic parameters belong to the camera device itself; extrinsic parameters describe where the camera sits in the world (the pose of the camera frame C, axes xc yc zc, relative to the world frame W, axes xw yw zw).

Intrinsic Parameters
- principal point (u0, v0)
- scale factors (dx, dy)
- aspect ratio distortion factor
- focal length f
- lens distortion factor κ (models radial lens distortion)

Extrinsic Parameters
- translation parameters t = [tx ty tz]
- rotation matrix

      R = [ r11 r12 r13 0 ]
          [ r21 r22 r23 0 ]
          [ r31 r32 r33 0 ]
          [  0   0   0  1 ]

Are there really nine parameters?

Calibration Object. The idea is to snap images at different depths and get a lot of 2D-3D point correspondences.

The Tsai Procedure. The Tsai procedure was developed by Roger Tsai at IBM Research and is among the most widely used calibration methods. Several images are taken of the calibration object, yielding point correspondences at different distances. Tsai's algorithm requires n > 5 correspondences {((xi, yi, zi), (ui, vi)) | i = 1, …, n} between (real) image points and 3D points.

In this* version of Tsai's algorithm, the real-valued (u, v) are computed from their pixel positions (r, c):

    u = α dx (c − u0)        v = −dy (r − v0)

where
- (u0, v0) is the center of the image,
- dx and dy are the center-to-center (real) distances between pixels, which come from the camera's specs, and
- α is a scale factor learned from previous trials.

* This version is for single-plane calibration.
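A sketch of the pixel-to-image conversion. The scale-factor symbol was lost in the slide text, so I write it as alpha here; the function name is also mine:

```python
def pixel_to_image(r, c, u0, v0, dx, dy, alpha):
    """Convert a pixel position (row r, column c) to real-valued image
    coordinates: u = alpha*dx*(c - u0), v = -dy*(r - v0).
    alpha stands in for the scale factor learned from previous trials."""
    return alpha * dx * (c - u0), -dy * (r - v0)

# A pixel 10 columns right of the image center, on the center row:
pixel_to_image(10, 30, 20, 10, 0.01, 0.01, 1.0)  # -> (0.1, -0.0)
```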

Tsai's Procedure
1. Given the n point correspondences ((xi, yi, zi), (ui, vi)), compute the matrix A with rows ai:

    ai = (vi·xi, vi·yi, −ui·xi, −ui·yi, vi)

These are known quantities which will be used to solve for intermediate values, which will in turn be used to solve for the parameters sought.

Intermediate Unknowns
2. The vector of unknowns is μ = (μ1, μ2, μ3, μ4, μ5):

    μ1 = r11/ty    μ2 = r12/ty    μ3 = r21/ty    μ4 = r22/ty    μ5 = tx/ty

where the r's and t's are unknown rotation and translation parameters.
3. Let the vector b = (u1, u2, …, un) contain the u image coordinates.
4. Solve the system of linear equations A·μ = b for the unknown parameter vector μ.
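Steps 1-4 amount to one least-squares solve. A sketch (function name mine; note the fourth entry of each row is −ui·yi, matching μ4 = r22/ty):

```python
import numpy as np

def solve_mu(points3d, points2d):
    """Build A with rows (v*x, v*y, -u*x, -u*y, v) and b = (u_1..u_n),
    then solve A mu = b for mu = (r11/ty, r12/ty, r21/ty, r22/ty, tx/ty)."""
    A, b = [], []
    for (x, y, z), (u, v) in zip(points3d, points2d):
        A.append([v * x, v * y, -u * x, -u * y, v])
        b.append(u)
    mu, *_ = np.linalg.lstsq(np.array(A, float), np.array(b, float), rcond=None)
    return mu
```

For synthetic noise-free correspondences the solve is exact, so the recovered μ matches the ratios it was generated from.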

Use  to solve for ty, tx, and 4 rotation parameters 5. Let U =     4 2. Use U to calculate t y 2.

6. Try the positive square root ty = (ty²)^(1/2) and use it to compute translation and rotation parameters:

    r11 = μ1·ty    r12 = μ2·ty    r21 = μ3·ty    r22 = μ4·ty    tx = μ5·ty

Now we know 2 translation parameters and 4 rotation parameters… except that the sign of ty is not yet determined.
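Steps 5-6 as a sketch. The ty² expression is Tsai's general-case formula; the degenerate case where μ1μ4 = μ2μ3 needs separate handling and is omitted here:

```python
import math

def recover_ty_tx_r(mu):
    """From mu = (r11/ty, r12/ty, r21/ty, r22/ty, tx/ty), compute
    U = mu1^2 + ... + mu4^2, solve for ty^2, take the positive root,
    and scale the mu's back up to r11, r12, r21, r22 and tx."""
    m1, m2, m3, m4, m5 = mu
    U = m1*m1 + m2*m2 + m3*m3 + m4*m4
    C = m1*m4 - m2*m3
    if abs(C) < 1e-12:
        raise ValueError("degenerate case (mu1*mu4 == mu2*mu3) not handled")
    ty2 = (U - math.sqrt(U*U - 4*C*C)) / (2*C*C)
    ty = math.sqrt(ty2)            # positive root; true sign fixed in steps 7-8
    return ty, m5 * ty, (m1*ty, m2*ty, m3*ty, m4*ty)
```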

Determine the true sign of ty and compute the remaining rotation parameters.
7. Select an object point P whose image coordinates (u, v) are far from the image center.
8. Use P's 3D coordinates and the translation and rotation parameters found so far to estimate the image point corresponding to P. If its coordinates have the same signs as (u, v), then keep ty; else negate it.
9. Use the first 4 rotation parameters to calculate the remaining 5.

Calculating the remaining 5 rotation parameters: because the rows of the rotation matrix are orthonormal,

    r13 = ±(1 − r11² − r12²)^(1/2)
    r23 = ±(1 − r21² − r22²)^(1/2)

with the signs chosen so that the first two rows are orthogonal, and the third row is the cross product of the first two:

    (r31, r32, r33) = (r11, r12, r13) × (r21, r22, r23)
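Step 9 can be sketched via orthonormality of R. Sign handling is simplified to a single parameter here; Tsai's full procedure does more bookkeeping:

```python
import math

def complete_rotation(r11, r12, r21, r22, s23=1.0):
    """Fill in the last column of R from unit-length rows, and the third
    row as the cross product of the first two. s23 (+1 or -1) picks the
    sign of r23 so that the first two rows come out orthogonal."""
    r13 = math.sqrt(max(0.0, 1.0 - r11*r11 - r12*r12))
    r23 = s23 * math.sqrt(max(0.0, 1.0 - r21*r21 - r22*r22))
    # third row = first row x second row
    r31 = r12*r23 - r13*r22
    r32 = r13*r21 - r11*r23
    r33 = r11*r22 - r12*r21
    return [[r11, r12, r13], [r21, r22, r23], [r31, r32, r33]]
```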

Solve another linear system.
10. We have tx and ty and the 9 rotation parameters. The next step is to find tz and f. Form a matrix A′ whose rows are:

    ai′ = (r21·xi + r22·yi + ty, vi)

and a vector b′ whose rows are:

    bi′ = (r31·xi + r32·yi) · vi

11. Solve A′·w = b′ for w = (f, tz).
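Steps 10-11 as a sketch, following the slide's row definitions. The sign conventions depend on the image-coordinate convention (note the flipped v axis earlier), so treat them as an assumption:

```python
import numpy as np

def solve_f_tz(points3d, points2d, R, ty):
    """Build A' with rows (r21*x + r22*y + ty, v) and b' with entries
    (r31*x + r32*y) * v, then least-squares solve A' w = b' for w = (f, tz)."""
    A, b = [], []
    for (x, y, z), (u, v) in zip(points3d, points2d):
        A.append([R[1][0] * x + R[1][1] * y + ty, v])
        b.append((R[2][0] * x + R[2][1] * y) * v)
    (f, tz), *_ = np.linalg.lstsq(np.array(A, float), np.array(b, float), rcond=None)
    return f, tz
```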

Almost there.
12. If f is negative, change signs (see text).
13. Compute the lens distortion factor κ and improve the estimates for f and tz by solving a nonlinear system of equations (a nonlinear regression).
14. All parameters have been computed. Use them in 3D data acquisition systems.

We use them for general stereo: a 3D point P images at P1 = (r1, c1) in the first camera and P2 = (r2, c2) in the second (image axes x1 y1 and x2 y2, baseline B between the calibrated cameras C). P is recovered by intersecting the two back-projected rays, which, due to small errors e1 and e2, may not meet exactly.
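One common way to triangulate in general stereo is midpoint triangulation: a sketch, not necessarily the exact method behind these slides. Since the two back-projected rays usually miss each other slightly, take the midpoint of the shortest segment between them:

```python
import numpy as np

def triangulate_midpoint(c1, d1, c2, d2):
    """Given camera centers c1, c2 and ray directions d1, d2 through the
    matched image points, return the midpoint of the shortest segment
    between the two rays (the directions need not be unit vectors)."""
    c1, d1, c2, d2 = (np.asarray(a, float) for a in (c1, d1, c2, d2))
    # Find scalars s, t minimizing |(c1 + s*d1) - (c2 + t*d2)|.
    A = np.stack([d1, -d2], axis=1)          # 3x2 system A @ (s, t) = c2 - c1
    (s, t), *_ = np.linalg.lstsq(A, c2 - c1, rcond=None)
    p1 = c1 + s * d1
    p2 = c2 + t * d2
    return (p1 + p2) / 2
```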

Now we can build 3D models from real objects.

3D space is made up of many cubes; a cube at 3D position (x, y, z) projects to a point (u, v, d) on the image plane at depth d and can then be classified, e.g. as OUTSIDE the object.