Auto-calibration
We have so far calibrated using a calibration object.
– Another calibration object is the Tsai grid of Figure 7.1 on HZ182, which can be used to solve for the camera matrix; the calibration can even be automated with Canny edge detection (perhaps only if the grid is the only object in the image).
We now want to calibrate without any calibration object, using multiple views instead.
Recall that our main goal is to promote affine reconstructions to metric reconstructions.
– What is the homography that corrects the camera matrix?
Auto-calibration = "computation of metric properties of the cameras and/or the scene from a set of uncalibrated images."
(HZ458, Chapter 19)

Calibration seeks a rectifying homography
Consider multiple views with fixed K.
– Suppose we have acquired an image sequence with a camera whose internal parameters are fixed (fixed K; basically fixed focal length).
We are looking for the m calibrated cameras P^M_i (M for "metric") that took the images.
– In our case the motion of these cameras is arbitrary, but the calibration matrix is constant.
– P^M_i = K [R_i | t_i] (where we have replaced -R_i C_i by the translation vector t_i).
We presently know the m uncalibrated cameras P_i, where P^M_i = P_i H.
– Consider the projective reconstruction (P_i, X_j) computed from these views:
– projective camera matrices P_i and projectively reconstructed 3D points X_j,
– built from fundamental matrices (which are in turn built from point correspondences).
– These camera matrices will not satisfy the constraint of constant K.
Goal: compute the rectifying homography H that yields a metric reconstruction:
– (P_i, X_j) → (P_i H, H^{-1} X_j), a metric reconstruction,
– the cameras P_i H have a consistent calibration matrix K,
– note that the same H is used to correct all cameras.
Direct vs. stratified methods:
– stratified methods get metric from an affine reconstruction very quickly (linear time).
Aside: sometimes a camera calibration may be found more quickly than a metric scene reconstruction (e.g., if the camera only rotates).
(HZ459)
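
To make the rectification step concrete, here is a minimal numpy sketch (the helper name is my own, not from HZ or the course) that applies a given rectifying homography to a projective reconstruction:

import numpy as np

def rectify_reconstruction(cameras, points, H):
    """Upgrade a projective reconstruction (P_i, X_j) to (P_i H, H^{-1} X_j).

    cameras: list of 3x4 camera matrices P_i
    points:  4xN array of homogeneous 3D points X_j
    H:       4x4 rectifying homography
    """
    H_inv = np.linalg.inv(H)
    cameras_rect = [P @ H for P in cameras]
    points_rect = H_inv @ points
    # reprojections are unchanged: (P_i H)(H^{-1} X_j) = P_i X_j
    return cameras_rect, points_rect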

Simplifying the homography
To simplify H, we may apply one global similarity (rotation, translation, scale) transform, since we are only solving up to a similarity anyway.
– I.e., factor out the homography's irrelevant similarity component.
Assume w.l.o.g. P^M_1 = K [I | 0] and P_1 = [I | 0].
– Apply a similarity transform to move the world frame to the first camera's frame.
– This is the global translation and rotation.
Let the homography be H = [A t; v^T k] (block form: 3x3 A, 3x1 t, 1x3 v^T, scalar k).
The world-frame change allows us to simplify H to [K 0; v^T k]:
– applying P^M_1 = P_1 H gives A = K and t = 0,
– since [K | 0] = [I | 0] H, and premultiplication with [I | 0] ignores the 4th row of H, so [K | 0] = [A | t].
Now we use our scale degree of freedom to set k = 1:
– H is nonsingular (nonzero determinant), so k ≠ 0; rescale to make k = 1.
H = [K 0; v^T 1]: 8 dof (5 for K + 3 for v).
v may be replaced by -K^T p, where the plane at infinity in the projective reconstruction is π_∞ = (p^T, 1)^T.
Now you see the relationship between H and K (knowledge of K is enough).
(HZ)
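
A hedged sketch of assembling H from these pieces, assuming K and the coordinates p of the plane at infinity π_∞ = (p^T, 1)^T in the projective frame are already known (function name is my own):

import numpy as np

def rectifying_homography(K, p):
    """H = [[K, 0], [-p^T K, 1]], i.e. H = [K 0; v^T 1] with v = -K^T p."""
    H = np.zeros((4, 4))
    H[:3, :3] = K              # calibration matrix: 5 dof
    H[3, :3] = -p @ K          # v^T = -p^T K: 3 dof
    H[3, 3] = 1.0              # scale fixed so that k = 1
    return H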

Absolute dual quadric Q*_∞ (abridged)
Recall π_∞ and Ω_∞, the objects at infinity.
We want to motivate the importance of Q*_∞ for calibration to metric structure.
The absolute dual quadric Q*_∞ is:
– the dual of the absolute conic,
– the set of planes tangent to the absolute conic Ω_∞ (a degenerate "tangent space" of a surface),
– represented by the 4x4 matrix diag(1,1,1,0) in a metric frame, just as Ω_∞ has its canonical form.
– Q*_∞ = dual(Ω_∞) is fixed under a homography iff the map is a similarity; i.e., Q*_∞ characterizes similarities.
– It measures angles: cos(angle between planes π_1 and π_2) = π_1^T Q*_∞ π_2 / sqrt((π_1^T Q*_∞ π_1)(π_2^T Q*_∞ π_2)).
We use the dual quadric Q*_∞ instead of the primal conic Ω_∞ in calibration,
– just as we used the dual conic C*_∞ instead of the (primal) circular points in 2D rectification.
Interesting aside: the null vector of Q*_∞ is π_∞.
(HZ83-85)
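
A small numeric sketch of the angle formula, using the canonical Q*_∞ = diag(1,1,1,0); the example planes are arbitrary choices of mine:

import numpy as np

Q_star_inf = np.diag([1.0, 1.0, 1.0, 0.0])   # absolute dual quadric, metric frame

def cos_plane_angle(pi1, pi2, Q=Q_star_inf):
    """cos(theta) = pi1^T Q pi2 / sqrt((pi1^T Q pi1)(pi2^T Q pi2))."""
    return (pi1 @ Q @ pi2) / np.sqrt((pi1 @ Q @ pi1) * (pi2 @ Q @ pi2))

pi1 = np.array([1.0, 0.0, 0.0,  5.0])        # plane x + 5 = 0
pi2 = np.array([0.0, 1.0, 0.0, -2.0])        # plane y - 2 = 0
print(cos_plane_angle(pi1, pi2))             # 0.0: the planes are orthogonal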

Calibration with quadrics, conics, and the calibration matrix
Recall the relationship between ω and K (see the slide on ω):
– ω = (K K^T)^{-1},
– i.e., a relationship between a conic and K.
We want to establish a relationship between quadrics and conics, which will be combined with the relationship between conics and K:
– Q ↔ C and C ↔ K yield Q ↔ K,
– so constraints on K can be transferred to Q, in particular to the absolute dual quadric.
– This will allow us to find the absolute dual quadric and thereby calibrate.
(HZ462)
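
For reference, the conic–calibration bond in code, with an illustrative K of my own choosing:

import numpy as np

K = np.array([[1000.0,    0.0, 320.0],    # illustrative calibration matrix
              [   0.0, 1000.0, 240.0],
              [   0.0,    0.0,   1.0]])

omega_star = K @ K.T                  # dual image of the absolute conic (DIAC)
omega = np.linalg.inv(omega_star)     # image of the absolute conic: (K K^T)^{-1}
# K can be recovered from omega_star by a Cholesky-style factorization (HZ).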

Outlines
Consider the tangent space of a surface, in particular all tangent planes that contain the camera center.
The silhouette of a smooth surface S = the locus of points on S whose tangent planes contain the camera center.
– It defines a smooth curve,
– called a contour generator in HZ.
The outline of the surface = the image of the silhouette.
– Outline = apparent contour = profile.
The silhouette depends on the camera center; the outline depends on the camera center and the image plane.
In graphics, the silhouette is important because it defines the boundary between the visible and invisible parts of the surface.
– We shall revisit this when we discuss tangential varieties, which grow out of dual surfaces.
(HZ200)

Spherical outlines
The silhouette of a sphere is a circle.
– The plane of this circle is orthogonal to the line Cc (C = camera center, c = sphere center), so the cone of rays from C is a right cone.
The outline of a sphere is a conic.
– The intersection of the image plane with the cone of rays is a conic section.
(HZ201)

Quadric outlines
Now projectively transform 3-space:
– the sphere becomes a quadric Q,
– the outline becomes a conic,
– the silhouette is a conic too (the intersection of a plane with Q).
Therefore, the outline of a quadric is a conic.
Thm: Under camera matrix P, the outline of quadric Q is the conic C with dual C* = P Q* P^T.
– Proof: tangent lines L of C back-project to tangent planes π = P^T L of Q, so π^T Q* π = 0 ⟹ (L^T P) Q* (P^T L) = 0 ⟹ L^T (P Q* P^T) L = 0 ⟹ P Q* P^T is the dual conic of C.
Note that this relates the camera, the quadric, and the conic.
(HZ201)
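
A numeric illustration of the theorem (the sphere and camera values are just an example of mine): project the unit sphere and read off the outline conic via C* = P Q* P^T.

import numpy as np

# unit sphere x^2 + y^2 + z^2 = 1 as a quadric: Q = diag(1, 1, 1, -1)
Q = np.diag([1.0, 1.0, 1.0, -1.0])
Q_star = np.linalg.inv(Q)                            # dual quadric (up to scale)

K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
Rt = np.hstack([np.eye(3), [[0.0], [0.0], [5.0]]])   # camera center at (0, 0, -5)
P = K @ Rt

C_star = P @ Q_star @ P.T                            # dual of the outline conic
C = np.linalg.inv(C_star)                            # outline conic itself (up to scale)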

Exploiting mutual bonds to calibrate
C* = P Q* P^T relates camera, quadric, and conic.
Apply it to the absolute conic and the absolute dual quadric:
– fact: Q*_∞ projects to the dual image of the absolute conic, ω* = P Q*_∞ P^T.
New bond: ω ↔ Q*_∞. Old bond: ω ↔ K. This creates another new bond: Q*_∞ ↔ K.
We also have the bond K ↔ H.
We use this bond to solve for Q*_∞ from constraints on K.
Then the calibrating homography may be determined from Q*_∞ (find H with H diag(1,1,1,0) H^T = Q*_∞).
Metric reconstruction: apply H^{-1} to the points and H to the cameras.
Summary: K constraints → Q*_∞ → homography.
(HZ462-3, including Algorithm 19.1)
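
A sketch of the last step, extracting a calibrating homography from an estimated Q*_∞ by eigendecomposition (assuming Q*_∞ has been scaled so its three nonzero eigenvalues are positive; H is only determined up to a similarity, as discussed earlier):

import numpy as np

def homography_from_Qinf(Q_star_inf):
    """Find H with H diag(1,1,1,0) H^T = Q*_inf (rank-3, positive semidefinite)."""
    w, U = np.linalg.eigh(Q_star_inf)
    order = np.argsort(-w)              # largest eigenvalues first, (near-)zero last
    w, U = w[order], U[:, order]
    D = np.diag([np.sqrt(max(w[0], 0.0)),
                 np.sqrt(max(w[1], 0.0)),
                 np.sqrt(max(w[2], 0.0)),
                 1.0])
    return U @ D                        # then apply (P_i H, H^{-1} X_j) as before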

Solving for Q*_∞
Build the matrix ω* = K K^T (e.g., when skew = 0).
– Table 19.1.
The zero entries of this matrix generate constraints on Q*_∞ via ω*_i = P_i Q*_∞ P_i^T: each zero entry of ω*_i gives one equation (P_i Q*_∞ P_i^T)_{jk} = 0.
– E.g., a known principal point in an image (which is then moved to the origin) yields the two constraints ω*_{13} = ω*_{23} = 0.
– Known principal point: 2 constraints; also zero skew: 3; also known aspect ratio: 4 (all linear).
– Zero skew only: 1 quadratic constraint.
We need to find 10 constraints to solve for Q*_∞,
or 8 constraints if we use det Q*_∞ = 0 as the 9th, yielding a quartic equation with 4 potential solutions.
– Use the same approach as in the 7-point algorithm for F.
What will yield 8 or 10 constraints?
– A camera where only the focal length is unknown (principal point known, pixel aspect ratio known, zero skew) yields 4 constraints per image, so 2 images are enough; this is the common case.
– Known principal point only: 2 constraints per image, so 4-5 images are needed.
– See Table 19.3 on HZ469 for other scenarios and the required number of views.
This completes calibration.
(HZ)
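
A minimal linear solver following the recipe above (assuming zero skew, unit aspect ratio, and a principal point at the origin for every camera; with three or more such views the SVD least-squares solve below suffices, while with only two views the det Q*_∞ = 0 condition would have to be added as described):

import numpy as np

_idx = [(i, j) for i in range(4) for j in range(i, 4)]   # 10 entries of symmetric Q

def q_to_Q(q):
    """Rebuild the symmetric 4x4 matrix from its 10-vector of entries."""
    Q = np.zeros((4, 4))
    for val, (i, j) in zip(q, _idx):
        Q[i, j] = Q[j, i] = val
    return Q

def constraint_row(P, terms):
    """One linear constraint sum_c c * (P Q P^T)[j, k] = 0, as a row in q."""
    row = np.zeros(10)
    for m in range(10):
        e = np.zeros(10); e[m] = 1.0
        W = P @ q_to_Q(e) @ P.T
        row[m] = sum(c * W[j, k] for j, k, c in terms)
    return row

def solve_Q_star_inf(cameras):
    rows = []
    for P in cameras:
        rows.append(constraint_row(P, [(0, 2, 1.0)]))                # omega*_13 = 0
        rows.append(constraint_row(P, [(1, 2, 1.0)]))                # omega*_23 = 0
        rows.append(constraint_row(P, [(0, 1, 1.0)]))                # omega*_12 = 0 (zero skew)
        rows.append(constraint_row(P, [(0, 0, 1.0), (1, 1, -1.0)]))  # omega*_11 = omega*_22
    _, _, Vt = np.linalg.svd(np.vstack(rows))
    return q_to_Q(Vt[-1])              # null vector gives Q*_inf up to scale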