1
Calibration
2
Camera Calibration
Geometric
- Intrinsics: Focal length, principal point, distortion
- Extrinsics: Position, orientation
Radiometric
- Mapping between pixel value and scene radiance
- Can be nonlinear at a pixel (gamma, etc.)
- Can vary between pixels (vignetting, cos^4, etc.)
- Dynamic range (calibrate shutter speed, etc.)
3
Geometric Calibration Issues
Camera Model
- Orthogonal axes? Square pixels? Distortion?
Calibration Target
- Known 3D points, noncoplanar
- Known 3D points, coplanar
- Unknown 3D points (structure from motion)
- Other features (e.g., known straight lines)
4
Geometric Calibration Issues
Optimization method
- Depends on camera model, available data
- Linear vs. nonlinear model
- Closed form vs. iterative
- Intrinsics only vs. extrinsics only vs. both
- Need initial guess?
5
Caveat – 2D Coordinate Systems
- y axis up vs. y axis down
- Origin at center vs. corner
- Will often write (u, v) for image coordinates
6
Camera Calibration – Example 1
Given: 3D ↔ 2D correspondences
- General perspective camera model (no distortion)
- Don’t care about “z” after transformation
- Homogeneous scale ambiguity
- 11 free parameters
7
Camera Calibration – Example 1
Write equations:
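The equations on this slide are images in the original; a standard way to write them, consistent with the homogeneous 3×4 model and the 11-parameter count on the previous slide (the names $p_{ij}$ and the scale $w$ are my notation), is:

```latex
% General perspective projection: a 3x4 matrix applied to homogeneous
% world coordinates, defined only up to scale (12 entries - 1 = 11 DOF).
\begin{pmatrix} w\,u \\ w\,v \\ w \end{pmatrix}
=
\begin{pmatrix}
p_{11} & p_{12} & p_{13} & p_{14} \\
p_{21} & p_{22} & p_{23} & p_{24} \\
p_{31} & p_{32} & p_{33} & p_{34}
\end{pmatrix}
\begin{pmatrix} X \\ Y \\ Z \\ 1 \end{pmatrix},
\qquad
u = \frac{p_{11}X + p_{12}Y + p_{13}Z + p_{14}}{p_{31}X + p_{32}Y + p_{33}Z + p_{34}},
\quad
v = \frac{p_{21}X + p_{22}Y + p_{23}Z + p_{24}}{p_{31}X + p_{32}Y + p_{33}Z + p_{34}}.
```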
8
Camera Calibration – Example 1
- Linear equation
- Overconstrained (more equations than unknowns)
- Underconstrained (rank-deficient matrix – any multiple of a solution, including 0, is also a solution)
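Written out with the notation above, each 3D ↔ 2D correspondence contributes two linear, homogeneous equations in the twelve entries $p_{ij}$; stacking them gives the system described on this slide (a reconstruction, not the slide's own rendering):

```latex
% Cross-multiplying the projection equations for one correspondence:
\begin{aligned}
p_{11}X + p_{12}Y + p_{13}Z + p_{14} - u\,(p_{31}X + p_{32}Y + p_{33}Z + p_{34}) &= 0, \\
p_{21}X + p_{22}Y + p_{23}Z + p_{24} - v\,(p_{31}X + p_{32}Y + p_{33}Z + p_{34}) &= 0.
\end{aligned}
% Stacking 2n such rows into A (2n x 12) and the p_ij into x (12 x 1) gives
% A x = 0: overconstrained in the number of equations, yet rank deficient
% because any scalar multiple of a solution (including 0) is also a solution.
```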
9
Camera Calibration – Example 1
- Standard linear least squares methods for $Ax = 0$ will give the solution $x = 0$
- Instead, look for a solution with $\|x\| = 1$
- That is, minimize $\|Ax\|^2$ subject to $\|x\|^2 = 1$
10
Camera Calibration – Example 1
Minimize $\|Ax\|^2$ subject to $\|x\|^2 = 1$
- $\|Ax\|^2 = (Ax)^\top(Ax) = (x^\top A^\top)(Ax) = x^\top(A^\top A)\,x$
- Expand $x$ in terms of eigenvectors of $A^\top A$: $x = \mu_1 e_1 + \mu_2 e_2 + \dots$
- $x^\top(A^\top A)\,x = \lambda_1 \mu_1^2 + \lambda_2 \mu_2^2 + \dots$
- $\|x\|^2 = \mu_1^2 + \mu_2^2 + \dots$
11
Camera Calibration – Example 1
- To minimize $\lambda_1 \mu_1^2 + \lambda_2 \mu_2^2 + \dots$ subject to $\mu_1^2 + \mu_2^2 + \dots = 1$, set $\mu_{\min} = 1$ (the coefficient of the eigenvector with the smallest eigenvalue) and all other $\mu_i = 0$
- Thus, the least squares solution is the eigenvector corresponding to the minimum (non-zero) eigenvalue of $A^\top A$
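A minimal NumPy sketch of this homogeneous least-squares step (variable names are mine; an SVD of A gives the same answer and is the usual choice in practice):

```python
import numpy as np

def solve_homogeneous(A):
    """Minimize ||A x||^2 subject to ||x|| = 1: return the eigenvector of
    A^T A with the smallest eigenvalue (equivalently, the right singular
    vector of A with the smallest singular value)."""
    eigvals, eigvecs = np.linalg.eigh(A.T @ A)   # eigenvalues in ascending order
    return eigvecs[:, 0]                         # column for the smallest eigenvalue

# Example: A stacks two rows per 3D <-> 2D correspondence (2n x 12); the
# recovered x holds the 12 entries of the projection matrix, up to scale.
A = np.random.randn(20, 12)          # placeholder data
x = solve_homogeneous(A)
P = x.reshape(3, 4)                  # projection matrix, up to scale
print(np.linalg.norm(x), np.linalg.norm(A @ x))
```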
12
Camera Calibration – Example 2
- Incorporating additional constraints into camera model
  - No shear (u, v axes orthogonal)
  - Square pixels
  - etc.
- Doing minimization in image space
- All of these impose nonlinear constraints on camera parameters
13
Camera Calibration – Example 2
- Option 1: nonlinear least squares (see the sketch below)
  - Usually “gradient descent” techniques, e.g. Levenberg-Marquardt
- Option 2: solve for general perspective model, find closest solution that satisfies constraints
  - Use closed-form solution as initial guess for iterative minimization
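A minimal sketch of Option 1 using SciPy's Levenberg-Marquardt solver on a constrained model (square pixels, no shear); the parameterization, synthetic data, and names are my own illustration, not the slides' code:

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def reprojection_residuals(params, pts3d, pts2d):
    """Image-space residuals for a constrained camera (square pixels, no shear).
    params = [f, cx, cy, rx, ry, rz, tx, ty, tz]."""
    f, cx, cy = params[:3]
    R = Rotation.from_rotvec(params[3:6]).as_matrix()
    t = params[6:9]
    p_cam = pts3d @ R.T + t                  # world -> camera coordinates
    u = f * p_cam[:, 0] / p_cam[:, 2] + cx   # pinhole projection
    v = f * p_cam[:, 1] / p_cam[:, 2] + cy
    return np.concatenate([u - pts2d[:, 0], v - pts2d[:, 1]])

# Synthetic data: target points in front of the camera, known parameters.
rng = np.random.default_rng(0)
pts3d = rng.uniform(-1.0, 1.0, (20, 3)) + np.array([0.0, 0.0, 5.0])
true = np.array([800.0, 320.0, 240.0, 0.1, -0.2, 0.05, 0.1, 0.0, 0.5])
pts2d = reprojection_residuals(true, pts3d, np.zeros((20, 2))).reshape(2, -1).T

# Start from a perturbed guess (in practice: the closed-form linear solution)
# and refine by minimizing reprojection error with Levenberg-Marquardt.
x0 = true + rng.normal(0.0, 0.05, 9)
result = least_squares(reprojection_residuals, x0, args=(pts3d, pts2d), method="lm")
print(result.x[:3])    # recovered f, cx, cy
```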
14
Radial Distortion
Radial distortion cannot be represented by a matrix
$(c_u, c_v)$ is the image center, $u^*_{\mathrm{img}} = u_{\mathrm{img}} - c_u$, $v^*_{\mathrm{img}} = v_{\mathrm{img}} - c_v$, and $k$ is the first-order radial distortion coefficient
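The distortion equation itself is an image on the original slide; a common first-order form consistent with these definitions is below. The hatted ideal (undistorted) coordinates $\hat{u}, \hat{v}$ are my notation, and the exact direction of the mapping is an assumption (some formulations scale by the distorted radius instead):

```latex
% Assumed first-order radial model: ideal centered coordinates are scaled
% radially to give the observed (distorted) centered coordinates.
u^*_{\mathrm{img}} = \hat{u}\,\bigl(1 + k\,\hat{r}^{\,2}\bigr), \qquad
v^*_{\mathrm{img}} = \hat{v}\,\bigl(1 + k\,\hat{r}^{\,2}\bigr), \qquad
\hat{r}^{\,2} = \hat{u}^{2} + \hat{v}^{2}.
```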
15
Camera Calibration – Example 3
Incorporating radial distortion
- Option 1 (a point-undistortion sketch follows):
  - Find distortion first (e.g., straight lines in calibration target)
  - Warp image to eliminate distortion
  - Run (simpler) perspective calibration
- Option 2: nonlinear least squares
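A small sketch of the undistortion step in Option 1, assuming the first-order model above (inverting it requires a fixed-point or similar iteration; the function and parameter names are mine):

```python
import numpy as np

def undistort_points(u_img, v_img, cu, cv, k, iters=10):
    """Invert the assumed model  distorted = ideal * (1 + k * r_ideal^2)
    by fixed-point iteration on the ideal (undistorted) coordinates."""
    us, vs = u_img - cu, v_img - cv        # centered, distorted coordinates
    u_hat, v_hat = us.copy(), vs.copy()    # initial guess: ideal ~ distorted
    for _ in range(iters):
        r2 = u_hat**2 + v_hat**2
        u_hat = us / (1.0 + k * r2)
        v_hat = vs / (1.0 + k * r2)
    return u_hat + cu, v_hat + cv          # undistorted pixel coordinates

# Undistort detected feature locations, then run the linear calibration.
u, v = undistort_points(np.array([400.0]), np.array([300.0]), 320.0, 240.0, k=1e-7)
print(u, v)
```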
16
Calibration Targets
Full 3D (nonplanar)
- Can calibrate with one image
- Difficult to construct
2D (planar)
- Can be made more accurate
- Need multiple views
- Better constrained than full SfM problem
17
Calibration Targets
Identification of features
- Manual
- Regular array, manually seeded
- Regular array, automatically seeded
- Color coding, patterns, etc.
Subpixel estimation of locations
- Circle centers
- Checkerboard corners (see the sketch below)
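A minimal OpenCV sketch of detecting checkerboard corners and refining them to subpixel accuracy, then feeding them to a planar calibration; the file names, board size, and square size are placeholders, not from the slides:

```python
import cv2
import numpy as np

pattern_size = (9, 6)      # inner corners of the checkerboard (placeholder)
square_size = 0.025        # square edge length in meters (placeholder)
objp = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2) * square_size

obj_points, img_points, image_size = [], [], None
for fname in ["view0.png", "view1.png", "view2.png"]:     # placeholder image files
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        continue
    found, corners = cv2.findChessboardCorners(gray, pattern_size)
    if not found:
        continue
    # Refine the detected corner locations to subpixel accuracy.
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3)
    corners = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)
    obj_points.append(objp)
    img_points.append(corners)
    image_size = gray.shape[::-1]

# Multiple views of the planar target constrain intrinsics and distortion.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, image_size, None, None)
print("reprojection RMS:", rms)
```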
18
Calibration Target w. Circles
19
3D Target w. Circles
20
Planar Checkerboard Target
[Bouguet]
21
Coded Circles [Marschner et al.]
Here’s a photo of the actual setup. You can see the camera on the left; we move this during the measurement session as we capture images. The light source is on the right; it’s a studio-grade electronic flash in a Fome-Cor box we built to reduce stray light. In the middle is the test object: in this case my daughter, who just turned 10 this week. You can also see a lot of little white dots, which I’ll explain next.
22
Concentric Coded Circles
[Gortler et al.]
23
Color Coded Circles [Culbertson]
24
Calibrating Projector
- Calibrate camera
- Project pattern onto a known object (usually a plane)
  - Can use time-coded structured light
- Form (u_proj, v_proj, x, y, z) tuples
- Use regular camera calibration code (see the sketch below)
- Typically lots of keystoning relative to cameras
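A minimal sketch of the last step, treating the projector as an inverse camera; the synthetic correspondences, resolution, and initial guess are my placeholders, and OpenCV stands in for the "regular camera calibration code":

```python
import cv2
import numpy as np

# Placeholder data standing in for decoded structured-light correspondences:
# (u_proj, v_proj) is the projector pixel that illuminated the known 3D point
# (x, y, z). Here the tuples are synthesized from an assumed projector whose
# principal point is well off-center, as keystoning typically implies.
rng = np.random.default_rng(1)
pts3d = (rng.uniform(-0.3, 0.3, (200, 3)) + [0.0, 0.0, 2.0]).astype(np.float32)
K_true = np.array([[1400.0, 0.0, 512.0],
                   [0.0, 1400.0, 500.0],
                   [0.0, 0.0, 1.0]])
uvw = pts3d @ K_true.T
proj_pixels = (uvw[:, :2] / uvw[:, 2:]).astype(np.float32)

# Reuse ordinary camera calibration, with the projector as an "inverse
# camera". Non-planar object points require an initial intrinsic guess.
proj_size = (1024, 768)
K0 = np.array([[1000.0, 0.0, proj_size[0] / 2],
               [0.0, 1000.0, proj_size[1] / 2],
               [0.0, 0.0, 1.0]])
rms, K_proj, dist_proj, rvecs, tvecs = cv2.calibrateCamera(
    [pts3d], [proj_pixels], proj_size, K0, None,
    flags=cv2.CALIB_USE_INTRINSIC_GUESS)
print(K_proj)   # should come out close to K_true for this noise-free example
```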
25
Multi-Camera Geometry
- Epipolar geometry – relationship between observed positions of points in multiple cameras
- Assume:
  - 2 cameras
  - Known intrinsics and extrinsics
26
Epipolar Geometry
(Figure: a 3D point P observed at p1 and p2 by cameras with centers C1 and C2.)
27
Epipolar Geometry
(Figure: the same configuration, now showing the line l2 in the second image.)
28
Epipolar Geometry
(Figure: l2 labeled as the epipolar line, and the epipoles of the two cameras.)
29
Epipolar Geometry Goal: derive equation for l2
Observation: P, C1, C2 determine a plane
30
Epipolar Geometry Work in coordinate frame of C1
Normal of plane is $T \times R\,p_2$, where T is relative translation, R is relative rotation
31
Epipolar Geometry
p1 is perpendicular to this normal: $p_1 \cdot (T \times R\,p_2) = 0$
32
Epipolar Geometry
Write cross product as matrix multiplication
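The matrix itself is an image on the original slide; the standard skew-symmetric form, written with the $T^*$ notation used on the next slide, is:

```latex
% Cross product with T = (T_x, T_y, T_z)^T as multiplication by a
% skew-symmetric matrix T^*:
T \times a = T^{*} a, \qquad
T^{*} =
\begin{pmatrix}
0 & -T_z & T_y \\
T_z & 0 & -T_x \\
-T_y & T_x & 0
\end{pmatrix}
```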
33
Epipolar Geometry
$p_1^\top\, T^{*} R\, p_2 = 0$, i.e. $p_1^\top E\, p_2 = 0$ with $E = T^{*} R$
E is the essential matrix
34
Essential Matrix E depends only on camera geometry
Given E, can derive equation for line l2
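Concretely, with this slide's convention $p_1^\top E\, p_2 = 0$ and homogeneous line coordinates (a point p lies on line l iff $l^\top p = 0$):

```latex
% Epipolar line in the second image associated with p_1:
l_2 = E^{\top} p_1, \qquad \text{so that}\quad l_2^{\top} p_2 = p_1^{\top} E\, p_2 = 0 .
```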
35
Fundamental Matrix
- Can define fundamental matrix F analogously, operating on pixel coordinates instead of camera coordinates: $u_1^\top F\, u_2 = 0$
- Advantage: can sometimes estimate F without knowing camera calibration
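For reference, assuming pixel and camera coordinates are related by the intrinsic matrices, $u_1 = K_1 p_1$ and $u_2 = K_2 p_2$ (an addition, not from the slides), F and E are related by:

```latex
% Substituting p_i = K_i^{-1} u_i into p_1^T E p_2 = 0:
u_1^{\top} \bigl(K_1^{-\top} E\, K_2^{-1}\bigr)\, u_2 = 0,
\qquad\text{so}\qquad
F = K_1^{-\top} E\, K_2^{-1}.
```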