Camera Calibration.


Camera Calibration

Camera calibration

Resectioning

Basic equations

Basic equations. For each correspondence x_i ↔ X_i, the projection x_i = P X_i gives x_i × (P X_i) = 0, i.e. 2 independent equations per point, stacked as A p = 0 with p the vector of entries of P. Minimal solution (n = 6 points): P has 11 dof, 2 independent eq./point, so 5½ correspondences are needed (say 6). Over-determined solution (n ≥ 6 points): minimize ||Ap|| subject to the constraint ||p|| = 1.
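As an illustration (not from the original slides), a minimal NumPy sketch of the DLT step just described; the function name dlt_camera and the array layouts are assumptions:

```python
import numpy as np

def dlt_camera(X, x):
    """Direct Linear Transform: estimate P (3x4) from n >= 6 correspondences.
    X: (n, 4) homogeneous 3D points, x: (n, 3) homogeneous image points."""
    n = X.shape[0]
    A = np.zeros((2 * n, 12))
    for i in range(n):
        Xi = X[i]
        u, v, w = x[i]
        # two independent rows of the cross-product constraint x_i x (P X_i) = 0
        A[2 * i]     = np.hstack([np.zeros(4), -w * Xi,  v * Xi])
        A[2 * i + 1] = np.hstack([ w * Xi, np.zeros(4), -u * Xi])
    # minimize ||Ap|| subject to ||p|| = 1: right singular vector of the smallest singular value
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1].reshape(3, 4)
```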

Degenerate configurations. More complicated than the 2D case (see Ch. 21): (i) camera and points lie on a twisted cubic; (ii) points lie on a plane plus a single line passing through the projection centre. No unique solution; the estimated camera moves arbitrarily along the twisted cubic or the straight line.

Data normalization. Less obvious than in the 2D case. (i) Isotropic scaling, as before: translate the centroid to the origin and scale so that the RMS distance from the origin is √3, so that the "average" point has coordinates of magnitude (1,1,1,1)^T. This works when the variation in depth of the points is relatively small, i.e. a compact distribution of points. (ii) Anisotropic scaling: when some points are close to the camera and some are near infinity, the simple normalization does not work.
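A sketch of case (i), isotropic normalization, usable for both 2D and 3D points (function name and return convention are assumptions):

```python
import numpy as np

def normalize_points(pts):
    """Translate the centroid to the origin and scale so the RMS distance from
    the origin is sqrt(d), d = 2 for image points and 3 for 3D points.
    pts: (n, d) inhomogeneous coordinates. Returns (pts_norm, T), where T is the
    (d+1)x(d+1) similarity mapping homogeneous input points to normalized ones."""
    d = pts.shape[1]
    centroid = pts.mean(axis=0)
    rms = np.sqrt(np.mean(np.sum((pts - centroid) ** 2, axis=1)))
    s = np.sqrt(d) / rms
    T = np.eye(d + 1)
    T[:d, :d] *= s
    T[:d, d] = -s * centroid
    return (pts - centroid) * s, T
```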

Line correspondences. Extend the DLT to lines: back-projecting the image line l_i gives the plane P^T l_i. Each 3D-to-2D line correspondence gives 2 independent equations, l_i^T P X_1 = 0 and l_i^T P X_2 = 0, where X_1 and X_2 are two points on the 3D line.

Geometric error: minimize over P the sum Σ_i d(x_i, P X_i)^2. This assumes no error in the 3D points X_i, e.g. an accurately machined calibration object; x_i is the measured image point and carries the error.

Gold Standard algorithm. Objective: given n ≥ 6 3D-to-2D point correspondences {X_i ↔ x_i}, determine the maximum likelihood estimate of P. Algorithm: (i) Linear solution. (a) Normalization: X̃_i = U X_i, x̃_i = T x_i. (b) DLT: form the 2n × 12 matrix A by stacking the equations from each correspondence; solve A p = 0 subject to ||p|| = 1 by taking the unit singular vector of A corresponding to the smallest singular value. (ii) Minimization of geometric error: using the linear estimate as a starting point, minimize Σ_i d(x̃_i, P̃ X̃_i)^2 with Levenberg-Marquardt. (iii) Denormalization: P = T^-1 P̃ U.
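A possible end-to-end sketch of this algorithm, reusing normalize_points and dlt_camera from the earlier sketches and SciPy's Levenberg-Marquardt solver (the function name gold_standard is an assumption):

```python
import numpy as np
from scipy.optimize import least_squares

def gold_standard(X, x):
    """X: (n, 3) 3D calibration points, x: (n, 2) measured image points."""
    Xn, U = normalize_points(X)          # 3D normalization  X~ = U X
    xn, T = normalize_points(x)          # 2D normalization  x~ = T x
    Xh = np.hstack([Xn, np.ones((len(Xn), 1))])
    xh = np.hstack([xn, np.ones((len(xn), 1))])
    P0 = dlt_camera(Xh, xh)              # linear (DLT) estimate as starting point

    def residuals(p):
        P = p.reshape(3, 4)
        proj = Xh @ P.T
        proj = proj[:, :2] / proj[:, 2:3]
        return (proj - xn).ravel()       # geometric (reprojection) error

    res = least_squares(residuals, P0.ravel(), method="lm")  # Levenberg-Marquardt
    P_norm = res.x.reshape(3, 4)
    return np.linalg.inv(T) @ P_norm @ U  # denormalize: P = T^-1 P~ U
```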

Calibration example. (i) Canny edge detection. (ii) Straight-line fitting to the detected edges. (iii) Intersecting the lines to obtain the image corners; typically the precision of x_i is better than 1/10 pixel. (HZ rule of thumb: use 5k constraints for k unknowns, so for 11 unknowns 28 points give 56 constraints.) Results are compared for the DLT and the Gold Standard algorithm.
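One possible way to realize this corner-localization pipeline with OpenCV; the file name, Canny thresholds and Hough parameters below are illustrative assumptions, not values from the slides:

```python
import cv2
import numpy as np

img = cv2.imread("calibration_target.png", cv2.IMREAD_GRAYSCALE)  # placeholder file name
edges = cv2.Canny(img, 50, 150)
lines = cv2.HoughLines(edges, rho=1, theta=np.pi / 180, threshold=200)

def intersection(l1, l2):
    """Intersect two lines given in (rho, theta) form: x cos(t) + y sin(t) = rho."""
    (r1, t1), (r2, t2) = l1[0], l2[0]
    A = np.array([[np.cos(t1), np.sin(t1)], [np.cos(t2), np.sin(t2)]])
    b = np.array([r1, r2])
    return np.linalg.solve(A, b)   # (x, y) image corner

# intersecting (roughly perpendicular) line pairs yields the grid corners x_i
```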

Errors in the image and in the world. Errors in the world only: X̂_i is the closest point in space to X_i that maps exactly to x_i; minimize Σ_i ||X_i − X̂_i||^2. Errors in both the image and the world: the set of parameters must be augmented with the estimated 3D points, minimizing the Mahalanobis distance w.r.t. the known error covariance matrices of each measurement x_i and X_i.
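A sketch of that combined cost, written out (following the standard formulation; the covariance symbols below are assumptions about what the slide's missing equation showed):

```latex
% Mahalanobis distances w.r.t. the known covariances \Sigma_{x_i}, \Sigma_{X_i};
% the estimated 3D points \hat{X}_i are additional unknowns.
\min_{P,\;\hat{X}_i} \;\sum_i
  \bigl\| x_i - P\hat{X}_i \bigr\|^2_{\Sigma_{x_i}}
+ \bigl\| X_i - \hat{X}_i \bigr\|^2_{\Sigma_{X_i}}
```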

Geometric interpretation of algebraic error: for each point the algebraic error is the geometric image error scaled by the projective depth ŵ_i of P X_i. Note the invariance to 2D and 3D similarity transformations, given proper normalization.

Estimation of an affine camera: note that in this case the algebraic error equals the geometric error.

Gold Standard algorithm (affine case). Objective: given n ≥ 4 3D-to-2D point correspondences {X_i ↔ x_i}, determine the maximum likelihood estimate of the affine camera P (remember its last row is P^{3T} = (0,0,0,1)). Algorithm: (i) Normalization: X̃_i = U X_i, x̃_i = T x_i. (ii) For each correspondence, x̃_i = P̃ X̃_i gives two linear equations in the entries of the first two rows of P̃; the solution is the linear least-squares (pseudo-inverse) solution. (iii) Denormalization: P = T^-1 P̃ U.
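A compact sketch of the affine case, again reusing normalize_points from the earlier sketch (the function name is an assumption):

```python
import numpy as np

def affine_gold_standard(X, x):
    """For an affine camera the last row of P is (0,0,0,1), so the first two
    rows follow from linear least squares; algebraic error = geometric error.
    X: (n, 3) 3D points, x: (n, 2) image points."""
    Xn, U = normalize_points(X)
    xn, T = normalize_points(x)
    Xh = np.hstack([Xn, np.ones((len(Xn), 1))])
    # each correspondence gives x~ = P1 . X~  and  y~ = P2 . X~  (two linear equations)
    P1, *_ = np.linalg.lstsq(Xh, xn[:, 0], rcond=None)
    P2, *_ = np.linalg.lstsq(Xh, xn[:, 1], rcond=None)
    Pn = np.vstack([P1, P2, [0, 0, 0, 1]])
    return np.linalg.inv(T) @ Pn @ U     # denormalize
```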

Restricted camera estimation. Find the best fit that satisfies constraints such as: the skew s is zero; the pixels are square; the principal point is known; the complete calibration matrix K is known. Minimize the geometric error and impose the constraints through the parametrization: assuming square pixels and zero skew leaves 9 parameters; apply Levenberg-Marquardt. Minimizing the image error only, the map is f: R^9 → R^{2n}; minimizing both 3D and 2D error, f: R^{3n+9} → R^{5n}, since the 3D points must be included among the measurements and the minimization also estimates the true positions of the 3D points.
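A sketch of the 9-parameter (square pixels, zero skew) image-error minimization using SciPy; the parameter ordering q = (f, px, py, rotation vector, translation) and the function name restricted_fit are assumptions:

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def restricted_fit(X, x, q0):
    """X: (n, 3) 3D points, x: (n, 2) measured image points,
    q0: initial 9-vector, e.g. clamped values from a general DLT estimate."""
    def residuals(q):
        f, px, py = q[:3]
        K = np.array([[f, 0, px], [0, f, py], [0, 0, 1]])   # zero skew, square pixels
        R = Rotation.from_rotvec(q[3:6]).as_matrix()
        t = q[6:9]
        P = K @ np.hstack([R, t[:, None]])
        Xh = np.hstack([X, np.ones((len(X), 1))])
        proj = Xh @ P.T
        return (proj[:, :2] / proj[:, 2:3] - x).ravel()     # geometric image error
    return least_squares(residuals, q0, method="lm")         # Levenberg-Marquardt
```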

Minimizing algebraic error. Assume a map from the parameters q to the camera matrix, P = K[R | −RC], i.e. p = g(q); the parameters are the principal point, the size of the square pixels, and 6 for rotation and translation. Minimize ||A g(q)||. A is 2n × 12; reduce A to a square 12 × 12 matrix Â, independent of n, such that ||Âp|| = ||Ap|| for any vector p. The mapping is then from R^9 to R^12.
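One way to build such a reduced measurement matrix (any Â with Â^T Â = A^T A works; here via the SVD):

```python
import numpy as np

def reduced_measurement_matrix(A):
    """Reduce the 2n x 12 DLT matrix A to a 12 x 12 matrix A_hat with
    ||A_hat p|| = ||A p|| for every p, so the algebraic cost no longer
    depends on the number of correspondences n."""
    _, D, Vt = np.linalg.svd(A, full_matrices=False)
    return np.diag(D) @ Vt    # 12 x 12

# minimizing ||A_hat g(q)|| over the 9 parameters q is then very cheap
```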

Restricted camera estimation: initialization. (i) Use the general DLT to get an initial camera matrix. (ii) Clamp the values to the desired ones, e.g. s = 0, α_x = α_y. (iii) Use the remaining values as the initial condition for the iteration. Hopefully the DLT answer is close to the "known" parameters and clamping does not move things too much; this is not always the case, and in practice clamping can lead to large errors and lack of convergence. Alternative initialization: use the general DLT for all parameters and impose soft constraints by adding weighted penalty terms (e.g. for s and α_x − α_y) to the cost; start with low weights and gradually increase them as the iterations go on.

Exterior orientation. Calibrated camera with position and orientation unknown: pose estimation. 6 dof, so 3 points is the minimal case; it can be shown that the resulting nonlinear equations have four solutions in general.
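As an aside, OpenCV's solvePnP is one standard way to compute exterior orientation for a calibrated camera; the sketch below runs on synthetic data (all numeric values are illustrative, not from the slides):

```python
import cv2
import numpy as np

K = np.array([[1000.0, 0, 320], [0, 1000.0, 240], [0, 0, 1]])   # assumed calibration
object_pts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1],
                       [1, 1, 0], [1, 0, 1]], dtype=np.float64)
R_true, _ = cv2.Rodrigues(np.array([[0.1], [-0.2], [0.05]]))
t_true = np.array([[0.2], [0.1], [5.0]])
proj = (K @ (R_true @ object_pts.T + t_true)).T
image_pts = proj[:, :2] / proj[:, 2:3]

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)
R_est, _ = cv2.Rodrigues(rvec)   # recovers the pose [R | t] used above
```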

Restricted camera parameters: the algebraic cost maps R^9 → R^12 and is very fast; the geometric cost maps R^9 → R^{2n}, with 2n = 396 in this example, and is much slower. The same comparison holds with no restriction on the camera matrix. Note: the solutions are different!

Covariance estimation. The expected ML residual error is ε_res = σ (1 − d/(2n))^{1/2}, with d the number of camera parameters. Example: n = 197 points, ε_res = 0.365, giving σ ≈ 0.37.

Covariance of the estimated camera. Compute the Jacobian J at the ML solution; then Σ = σ² (JᵀJ)⁻¹, and the variance of each parameter can be read off the diagonal. Confidence regions follow from the inverse cumulative chi-square distribution (the chi-square distribution is the distribution of a sum of squares).
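A sketch of this computation starting from a SciPy least_squares result such as the one returned by the restricted_fit sketch above (sigma is the image-noise standard deviation from the previous slide):

```python
import numpy as np
from scipy.stats import chi2

def camera_covariance(res, sigma):
    """Covariance of the estimated parameters: Sigma = sigma^2 (J^T J)^-1,
    with J the Jacobian at the ML solution; variances are on the diagonal."""
    J = res.jac
    return sigma ** 2 * np.linalg.inv(J.T @ J)

# e.g. a 95% confidence region over k parameters uses the inverse cumulative
# chi-square value chi2.ppf(0.95, df=k)
```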

Radial distortion: compare images taken with a short focal length and a long focal length; the distortion is worse for the short focal length.

Correction of distortion. Model: x̂ = x_c + L(r)(x − x_c), ŷ = y_c + L(r)(y − y_c), where (x, y) are the measured coordinates, (x_c, y_c) is the centre of distortion, r is the radius from the centre, and L(r) = 1 + κ_1 r + κ_2 r² + ... is the distortion function. Choices: the distortion function and the centre (x_c, y_c). Computing the parameters of the distortion function: (i) minimize the geometric error with additional unknowns, i.e. estimate κ, the centre and P simultaneously; (ii) straighten lines: determine L(r) so that images of straight scene lines become straight, then solve for P.
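A small sketch of applying this correction, with four coefficients as in the example on the next slide (function name is an assumption; note that r must be measured in the same units used when the coefficients were estimated, which may be normalized rather than raw pixel radii):

```python
import numpy as np

def undistort_points(pts, center, k):
    """Apply x_hat = x_c + L(r)(x - x_c) with
    L(r) = 1 + k1*r + k2*r^2 + k3*r^3 + k4*r^4.
    pts: (n, 2) measured coordinates, center: (xc, yc), k: (k1, k2, k3, k4)."""
    center = np.asarray(center, dtype=float)
    d = pts - center
    r = np.linalg.norm(d, axis=1, keepdims=True)
    L = 1 + k[0] * r + k[1] * r ** 2 + k[2] * r ** 3 + k[3] * r ** 4
    return center + L * d
```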

Example: k1 = 0.103689, k2 = 0.00487908, k3 = 0.0016894, k4 = 0.00841614; xc = 321.87, yc = 241.18; original image 640 × 480; points at the periphery of the image are moved by about 30 pixels.