Plane-based external camera calibration with accuracy measured by relative deflection angle. Chunhui Cui, King Ngi Ngan. Image Communication, Volume 25, Issue 3, March 2010.


Plane-based external camera calibration with accuracy measured by relative deflection angle. Chunhui Cui, King Ngi Ngan. Image Communication, Volume 25, Issue 3, March 2010, Elsevier Science Inc., New York, NY, USA. South Taiwan University ESLab. Professor: Tsai, Lian-Jou. Student: Tsai, Yu-Ming. PPT production rate: 100%. Date: 2011/09/28.

Outline
- Introduction
- Basic equations
- Proposed pair-wise pose estimation
  - Homography estimation
  - (R, t) estimation
- Accuracy measure based on relative deflection angle
- Experimental results
- Conclusion

Introduction (1/2)
Camera calibration has always been an essential component of photogrammetric measurement. Traditional photogrammetric calibration employs reference grids and determines the intrinsic matrix K from images of a known object point array. External camera calibration estimates the poses of different cameras and registers them together in a world coordinate system. Separating the internal and external calibration benefits both the accuracy and the efficiency of parameter estimation.

Introduction (2/2)
In this paper, we present an efficient plane-based external calibration method to estimate the relative pose between two neighboring cameras. Based on homography, three different pair-wise estimation algorithms are proposed to recover the rotation and translation between the cameras. The relative deflection angle (RDA) measure is convenient to take, since it does not require any prior knowledge of the true 3D coordinates of the control points.

Camera model
Under the pinhole camera model, a 3D point M and its image projection m are related by the perspective projection.
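The projection equation itself did not survive the transcript; the standard pinhole relation the slide refers to (in Zhang's notation, which the paper's intrinsic calibration follows) is:

```latex
s\,\tilde{m} = K\,[R \mid t]\,\tilde{M},
\qquad
K = \begin{bmatrix} \alpha & \gamma & u_0 \\ 0 & \beta & v_0 \\ 0 & 0 & 1 \end{bmatrix},
```

where $s$ is an arbitrary scale factor, $K$ the intrinsic matrix, $(R, t)$ the extrinsic parameters, and tildes denote homogeneous coordinates. The slide's exact notation may differ slightly.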

Homography (1/3)
Suppose two cameras, C1 and C2, capture a planar pattern containing a 3D point M.

Homography (2/3)
Choose C1 as the world coordinate system. m1 is the projection of M onto camera 1; m2 is the projection of M onto camera 2.

Homography (3/3)
The two projections are related by the plane-induced homography H, where λ is an unknown arbitrary scalar.
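The homography equation is missing from the transcript; the standard plane-induced form it describes, for a plane with unit normal $n$ at distance $d$ from C1, is:

```latex
\lambda\,\tilde{m}_2 = H\,\tilde{m}_1,
\qquad
H = K_2 \left( R + \frac{t\,n^{\top}}{d} \right) K_1^{-1},
```

where $(R, t)$ is the relative pose from C1 to C2 and $K_1$, $K_2$ are the two intrinsic matrices. This is a reconstruction from the surrounding slides, not a verbatim copy of the slide's equation.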

Pair-wise pose estimation (1/6)
We apply Zhang's method to perform the intrinsic calibration for each individual camera. Only the external calibration is necessary each time the cameras are moved and refocused to capture new 3D video. We use only the planar pattern to estimate the relative relationship between two neighboring cameras. It is argued that the chaining procedure is prone to errors; accurate and stable pair-wise calibration therefore benefits the accuracy of transform chaining.

Pair-wise pose estimation (2/6)
Using this information, (R, t) between the two cameras can be determined through an arbitrarily chosen model plane. However, using these estimates directly may yield very unstable calibration results, since the accuracy of corner detection depends strongly on the plane location and orientation.

Pair-wise pose estimation (3/6): Homography estimation
Each image pair, one view from camera 1 and the other from camera 2, leads to a homography H. Suppose there are P image pairs in total; we can then estimate P homographies Hi (i = 1, 2, …, P) induced by the different planes.
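The estimation procedure is not shown in the transcript; a minimal numpy sketch of the standard direct linear transform (DLT), which a homography-estimation step of this kind typically uses (the function name is ours, not the paper's):

```python
import numpy as np

def estimate_homography(pts1, pts2):
    """DLT estimate of H such that pts2 ~ H @ pts1 in homogeneous coords.

    pts1, pts2: (N, 2) arrays of matched image points, N >= 4.
    Illustrative sketch, not the authors' implementation.
    """
    A = []
    for (x, y), (u, v) in zip(pts1, pts2):
        # Two linear equations per correspondence in the 9 entries of H.
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The solution is the right singular vector of A with the smallest
    # singular value.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # fix the arbitrary projective scale

# Sanity check against a known homography (synthetic data).
H_true = np.array([[1.0,   0.2,   5.0],
                   [0.1,   0.9,  -3.0],
                   [0.001, 0.002, 1.0]])
pts1 = np.array([[0, 0], [1, 0], [0, 1], [1, 1], [2, 3]], dtype=float)
ph = np.column_stack([pts1, np.ones(len(pts1))]) @ H_true.T
pts2 = ph[:, :2] / ph[:, 2:]
H_est = estimate_homography(pts1, pts2)
```

In practice the corner coordinates would also be normalized before building A and the result refined nonlinearly, which is omitted here for brevity.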

Pair-wise pose estimation (4/6): Calculation of n and λ
P different stereo views lead to P different normals ni. The plane position and orientation (R1, t1) are expressed relative to C1, with the plane assumed to lie on Z = 0 of the world coordinate system.

Pair-wise pose estimation (5/6)

Pair-wise pose estimation (6/6)
The collineation matrix G has its median singular value equal to one. Since the matrices K1, K2 and H are known, we can compute λ and then recover the matrix G.
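This normalization can be sketched numerically. The following illustrative numpy code (not the paper's implementation; the intrinsics, pose and plane normal are synthetic) exploits the stated property that the calibrated homography G = R + t nᵀ has middle singular value one:

```python
import numpy as np

rng = np.random.default_rng(0)

# Random proper rotation via QR decomposition.
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
R = Q if np.linalg.det(Q) > 0 else -Q
t = 0.1 * rng.standard_normal(3)        # small translation
n = np.array([0.0, 0.0, 1.0])           # unit plane normal in C1's frame
G_true = R + np.outer(t, n)             # calibrated (collineation) matrix

K1 = np.array([[800.0, 0, 320], [0, 800, 240], [0, 0, 1]])
K2 = np.array([[900.0, 0, 310], [0, 900, 250], [0, 0, 1]])
lam_true = 2.5                          # unknown projective scale of H
H = lam_true * K2 @ G_true @ np.linalg.inv(K1)

# Given only K1, K2, H: the middle singular value of K2^-1 H K1 is
# lam * 1, so dividing by it recovers G.
A = np.linalg.inv(K2) @ H @ K1
lam = np.linalg.svd(A, compute_uv=False)[1]   # median singular value
G = A / lam
```

The middle-singular-value property holds exactly for G = R + t nᵀ with unit n, which is why the recovery is exact on noise-free data.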

(R, t) estimation
A linear method to solve for the rotation R and translation t: each model plane introduces a normal ni, a homography Hi, and thus a matrix Gi; stacking the P resulting equations gives a linear system.
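The stacked equations themselves are not legible in the transcript; from the preceding slides they take the form (our reconstruction, absorbing the plane distance into ni):

```latex
G_i = R + t\,n_i^{\top}, \qquad i = 1, 2, \dots, P,
```

which is linear in the 12 unknown entries of $(R, t)$ and can be solved in the least-squares sense once the $G_i$ and $n_i$ are known.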

Implicit orthogonal constraint (1/3)
The three row vectors of R form an orthonormal basis of R^3.

Implicit orthogonal constraint (2/3)
One of the necessary conditions to guarantee the orthogonality of R.

Implicit orthogonal constraint (3/3)

Two-step method
We obtain the uniform ratio of the three elements of the t vector, so t is reduced to a single scale s, which keeps the problem linear. The (R, t) estimated by this method not only conform to the homography geometry, but also satisfy the orthogonal constraint R^T R = I much better.
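Written out, a plausible reading of this reduction (the symbol $u$ for the fixed direction is ours, not the slide's) is:

```latex
t = s\,u
\quad\Longrightarrow\quad
G_i = R + s\,u\,n_i^{\top}, \qquad i = 1, \dots, P,
```

where $u$ is the known unit direction of $t$ obtained in the first step, leaving only the 9 entries of $R$ and the scalar $s$ as linear unknowns in the second step.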

Three-step method (1/2)
In three dimensions, a rotation can be defined by a single rotation angle θ and the direction of a unit vector v, where v is the rotation axis, i.e. the eigenvector of R satisfying Rv = v (equations (a) and (c)).

Three-step method (2/2)
Estimate cos θ, sin θ and the vector t together by stacking P equations of form (b). Based on the above description, the three-step method is outlined as follows:
(1) Use the two-step method to compute an initial estimate of t.
(2) Estimate the vector v based on (a), with t fixed.
(3) Estimate cos θ and sin θ, and refine t, based on (b), with v fixed.
With cos θ, sin θ and v computed, the rotation matrix R is recovered by (c).
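The recovery in (c) is the Rodrigues formula, R = cos θ · I + sin θ · [v]× + (1 − cos θ) · v vᵀ. A small numpy sketch (the function name is ours):

```python
import numpy as np

def rotation_from_axis_angle(cos_t, sin_t, v):
    """Rodrigues formula: rotation by angle theta about unit axis v,
    given cos(theta) and sin(theta) as estimated in step (3)."""
    v = np.asarray(v, dtype=float)
    v = v / np.linalg.norm(v)
    # Cross-product (skew-symmetric) matrix [v]_x.
    vx = np.array([[0.0, -v[2], v[1]],
                   [v[2], 0.0, -v[0]],
                   [-v[1], v[0], 0.0]])
    return cos_t * np.eye(3) + sin_t * vx + (1.0 - cos_t) * np.outer(v, v)

theta = 0.3
v = np.array([1.0, 2.0, 2.0]) / 3.0     # unit axis
R = rotation_from_axis_angle(np.cos(theta), np.sin(theta), v)
```

The result is orthogonal by construction, fixes the axis v, and has trace 1 + 2 cos θ, the standard identities for a 3D rotation.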

Nonlinear method
We impose the constraint R^T R = I explicitly in the maximum likelihood estimation, which turns the problem into a constrained nonlinear minimization.
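The cost function is not shown in the transcript; consistent with the linear formulation on the earlier slides, it would take a form such as (our reconstruction; the paper's actual objective may instead be an image-space reprojection error):

```latex
\min_{R,\,t}\;\sum_{i=1}^{P} \left\| G_i - \left( R + t\,n_i^{\top} \right) \right\|_F^2
\quad \text{subject to} \quad R^{\top} R = I.
```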

Relative deflection angle (1/4)
Higher measurement accuracy can be achieved by:
(1) using a lens with a larger focal length, so that the array of pixels focuses on a smaller area of the scene;
(2) reducing the distance between the scene and the cameras to take a closer look;
(3) directly increasing the image resolution.
However, the focal length and resolution of the digital camera do not by themselves benefit the measurement. We therefore propose to measure the calibration accuracy based on the relative deflection angle (RDA).

Relative deflection angle (2/4)
The deflection angle θerr therefore integrates the errors from both internal and external calibration. The mean value E[θerr] over all control points can reasonably be used to reveal the inaccuracy of stereo camera calibration.

Relative deflection angle (3/4)
Consider a pixel rectangle of size du × dv. Uniform digitization noise over the rectangle has a known standard deviation. This digitization error corresponds to the uncertainty of the projection of a 3D point onto the image plane, which can be measured as the angle between the deflected projection rays.
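The standard deviation of a uniform distribution over an interval of width $d$ is $d/\sqrt{12}$, so for noise uniform over the $d_u \times d_v$ pixel rectangle the per-axis deviations are:

```latex
\sigma_u = \frac{d_u}{\sqrt{12}}, \qquad \sigma_v = \frac{d_v}{\sqrt{12}}.
```

How the slide combines these two into a single scalar is not recoverable from the transcript.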

Relative deflection angle (4/4)
θsys is associated with the image resolution and the camera focal length.

Experimental results: simulation on (R, t) estimation
Two cameras capture a planar pattern with 9 × 12 corners of 50 mm squares. A total of 30 different plane poses are used for calibration. The second camera is translated and rotated relative to the first. Zero-mean uncorrelated Gaussian noise is added to the image projections of the control points (the square corners on the plane), with standard deviation from 0.1 pixels up to 0.9 pixels. The relative estimation errors are used to measure the calibration accuracy; the results of 50 simulations are summarized in the following slides.


Test on RDA metric (1/4)
For all three experiments, the nonlinear method initialized by the three-step method is applied to estimate the external parameters. For comparison, we apply a second error metric to measure the calibration accuracy: the mean square distance (MSD), defined in the following.

Test on RDA metric (2/4)
In the first experiment, the baseline of the two cameras is kept fixed at 1 m. We change the distance of the pattern from the cameras as well as the pattern orientations.

Test on RDA metric (3/4)
In the second experiment, the camera setup is varied in terms of baseline length. The distance between the pattern and the baseline midpoint is slightly adjusted so that the distances from the pattern to the two cameras are equal and kept unchanged.

Test on RDA metric (4/4)
In the third experiment, we change the camera focal length while keeping the baseline length and the viewing distance fixed.

Multi-camera calibration (1/2)
The intrinsic parameters of the five cameras are estimated individually beforehand using Zhang's method. A checkerboard pattern with 9 × 12 corners of 50 mm squares provides the control points. Two different system setups are tested, as sketched in Fig. 7.

Multi-camera calibration (2/2)
We apply the proposed RDA metric to measure the calibration errors and to investigate the error introduced by the chaining of pair-wise (R, t) estimates.

Conclusion
We present a convenient and efficient method to calibrate a typical multi-camera system. Three different algorithms are proposed based on homography. As the orthogonal constraint is fully imposed in the nonlinear method, it achieves the best calibration results in terms of robustness to data noise. The three-step and nonlinear methods are more flexible and universal than traditional multi-camera calibration methods, where all cameras are required to capture the same control points. The RDA can be employed to provide a fair and meaningful evaluation of calibration results for different cameras under different working conditions.

Thanks for listening! Any questions?