1
Camera calibration
Digital Visual Effects, Yung-Yu Chuang
with slides by Richard Szeliski, Steve Seitz, Fred Pighin and Marc Pollefeys
2
Outline
– Camera projection models
– Camera calibration
– Nonlinear least square methods
– A camera calibration tool
– Applications
3
Camera projection models
4
Pinhole camera
5
Pinhole camera model [figure: optical center at the origin, principal point, 3D point P = (X, Y, Z) projecting to image point p = (x, y)]
6
Pinhole camera model [figure: projection geometry and the principal point]
7
Pinhole camera model [figure: projection geometry and the principal point]
8
Principal point offset [equation: the intrinsic matrix absorbs the principal point; it is related only to the camera projection]
9
Intrinsic matrix
– non-square pixels (digital video)
– skew
– radial distortion
Is this form of K good enough?
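The matrix itself did not survive the transcript; a standard form consistent with the bullets above (notation assumed here, not taken from the slide) is:

```latex
K = \begin{bmatrix} \alpha_x & s & p_x \\ 0 & \alpha_y & p_y \\ 0 & 0 & 1 \end{bmatrix}
```

where \alpha_x and \alpha_y are focal lengths in pixel units (non-square pixels give \alpha_x \neq \alpha_y), s is the skew, and (p_x, p_y) is the principal point. Radial distortion cannot be expressed by any such linear K, which is what motivates the question on the slide.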
10
Distortion: radial distortion of the image
– Caused by imperfect lenses
– Deviations are most noticeable for rays that pass through the edge of the lens
[figure: no distortion, pincushion, barrel]
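The distortion equations are not reproduced above; a commonly used low-order radial model (the coefficient names \kappa_1, \kappa_2 are assumptions, not taken from the slide) is:

```latex
x_d = x_n (1 + \kappa_1 r^2 + \kappa_2 r^4), \qquad
y_d = y_n (1 + \kappa_1 r^2 + \kappa_2 r^4), \qquad
r^2 = x_n^2 + y_n^2
```

where (x_n, y_n) are ideal (undistorted) normalized image coordinates and (x_d, y_d) are the distorted coordinates the lens actually produces; the two signs of the leading coefficient give barrel and pincushion distortion.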
11
Camera rotation and translation: the extrinsic matrix [R | t]
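Combining the intrinsic and extrinsic parts gives the full projection used in the calibration slides that follow; the equation is missing from the transcript, so the standard form is restated here:

```latex
\lambda \begin{bmatrix} u \\ v \\ 1 \end{bmatrix}
= K \, [\, R \;|\; t \,] \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix}
= M \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix}
```

with R a 3×3 rotation, t a translation, λ an arbitrary scale, and M the 3×4 projection matrix that the calibration slides estimate.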
12
Two kinds of parameters
– internal or intrinsic parameters, such as focal length, optical center, aspect ratio: what kind of camera?
– external or extrinsic (pose) parameters, including rotation and translation: where is the camera?
13
Other projection models
14
Orthographic projection: a special case of perspective projection
– Distance from the COP to the PP is infinite
– Also called "parallel projection": (x, y, z) → (x, y)
[figure: world and image coordinates]
15
Other types of projections
– Scaled orthographic, also called "weak perspective"
– Affine projection, also called "paraperspective"
16
Illusion
18
Fun with perspective
19
Perspective cues
21
Fun with perspective: the Ames room [links: Ames video, BBC story]
22
Forced perspective in LOTR
23
Camera calibration
24
Estimate both intrinsic and extrinsic parameters. Two main categories:
1. Photogrammetric calibration: uses reference objects with known geometry
2. Self-calibration: only assumes a static scene, e.g. structure from motion
25
Camera calibration approaches
1. Linear regression (least squares)
2. Nonlinear optimization
26
Chromaglyphs (HP research)
27
Camera calibration
28
Linear regression
29
Directly estimate the 11 unknowns in the M matrix using known 3D points (X_i, Y_i, Z_i) and measured feature positions (u_i, v_i)
30
Linear regression
32
Solve for the projection matrix M using least-squares techniques
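A minimal numpy sketch of this step, assuming the correspondences are given as arrays. The slide's formulation fixes one entry of M to reduce to 11 unknowns; the variant below instead fixes the overall scale with ‖m‖ = 1 and solves the homogeneous system by SVD.

```python
import numpy as np

def estimate_projection_matrix(X, uv):
    """Estimate the 3x4 projection matrix M from n >= 6 point correspondences.

    X  : (n, 3) array of known 3D points (X_i, Y_i, Z_i)
    uv : (n, 2) array of measured feature positions (u_i, v_i)
    """
    A = []
    for (Xw, Yw, Zw), (u, v) in zip(X, uv):
        P = [Xw, Yw, Zw, 1.0]
        # Each correspondence contributes two linear equations in the 12 entries of M.
        A.append(P + [0.0, 0.0, 0.0, 0.0] + [-u * p for p in P])
        A.append([0.0, 0.0, 0.0, 0.0] + P + [-v * p for p in P])
    A = np.asarray(A)
    # The right singular vector for the smallest singular value minimizes ||A m||
    # subject to ||m|| = 1, which fixes the arbitrary overall scale of M.
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1].reshape(3, 4)
```

For numerical stability the 2D and 3D coordinates are usually normalized (centered and scaled) before building A, and the result de-normalized afterwards.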
33
Normal equation: given an overdetermined system A x = b, the normal equation yields the x that minimizes the sum of squared differences between the left- and right-hand sides
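For an overdetermined system A x = b, the equation referred to here is the standard one:

```latex
\min_x \| A x - b \|^2 \;\;\Longrightarrow\;\; A^T A \, x = A^T b
```

so x = (A^T A)^{-1} A^T b whenever A^T A is invertible.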
34
Linear regression
Advantages:
– All specifics of the camera are summarized in one matrix
– Can predict where any world point will map to in the image
Disadvantages:
– Doesn't tell us about particular parameters
– Mixes up internal and external parameters; pose-specific: move the camera and everything breaks
– More unknowns than true degrees of freedom
35
Nonlinear optimization: a probabilistic view of least squares. Feature measurement equations and the probability P of M given {(u_i, v_i)} [equations]
36
Optimal estimation: likelihood L of M given {(u_i, v_i)} [equation]. It is a least-squares problem (but not necessarily linear least squares). How do we minimize L?
37
Optimal estimation: non-linear regression (least squares), because the relations between û_i and u_i are non-linear functions of M. We can use the Levenberg-Marquardt method to minimize it. [equation, with annotations marking known constants and unknown parameters]
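As a concrete illustration, here is a sketch of that minimization with SciPy's Levenberg-Marquardt solver; the parameterization (two focal lengths, principal point, rotation vector, translation), the function names, and the omission of lens distortion are all assumptions of this sketch, not part of the slides.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def reprojection_residuals(params, X, uv):
    """Stacked residuals (u_hat - u, v_hat - v) for a distortion-free pinhole model.

    params = [fx, fy, cx, cy, rx, ry, rz, tx, ty, tz]   # assumed parameterization
    """
    fx, fy, cx, cy = params[:4]
    R = Rotation.from_rotvec(params[4:7]).as_matrix()   # rotation vector -> matrix
    t = params[7:10]
    Xc = X @ R.T + t                                    # world -> camera coordinates
    u_hat = fx * Xc[:, 0] / Xc[:, 2] + cx               # perspective projection
    v_hat = fy * Xc[:, 1] / Xc[:, 2] + cy
    return np.concatenate([u_hat - uv[:, 0], v_hat - uv[:, 1]])

# x0: rough initial guess, e.g. decomposed from the linear estimate of M
# result = least_squares(reprojection_residuals, x0, args=(X, uv), method="lm")
```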
38
Nonlinear least square methods
39
Least square fitting [notation: number of data points, number of parameters]
40
Linear least square fitting [figure: data points y plotted against t]
41
Linear least square fitting [figure: y versus t, with the model and its parameters labeled]
42
Linear least square fitting [figure: model and parameters]
43
Linear least square fitting [figure: model, parameters, residual, and prediction]
44
Linear least square fitting: when the model is linear in the parameters, the fitting problem is linear, too [figure: model, parameters, residual, prediction]
45
Nonlinear least square fitting [equations: model, parameters, residuals]
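The formulas behind these fitting slides are not legible in the transcript; in standard notation, with data points (t_i, y_i), i = 1..m, and a model f(t; x) with parameter vector x of length n, the residuals and the least-squares objective are:

```latex
r_i(x) = y_i - f(t_i; x), \qquad
F(x) = \tfrac{1}{2} \sum_{i=1}^{m} r_i(x)^2 = \tfrac{1}{2} \| r(x) \|^2
```

The problem is a linear least-squares problem exactly when f is linear in the parameters x, and a nonlinear one otherwise.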
46
Function minimization: least squares is related to function minimization. Minimization is very hard to solve in general; here, we only consider the simpler problem of finding a local minimum.
47
Function minimization
48
Quadratic functions Approximate the function with a quadratic function within a small neighborhood
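The approximation meant here is the second-order Taylor expansion about the current point x (restated, since the formula itself is not in the transcript):

```latex
F(x + h) \approx F(x) + h^T F'(x) + \tfrac{1}{2} h^T F''(x) h
```

The matrix A classified on the next slide plays the role of the Hessian F''(x) in this local quadratic model.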
49
Quadratic functions [figures: the shape of the quadratic depending on A]
– A is positive definite: all eigenvalues are positive; for all x ≠ 0, x^T A x > 0
– A is negative definite
– A is indefinite
– A is singular
50
Function minimization: why? By definition, if x* is a local minimizer, then F(x*) ≤ F(x) for all x with ‖x − x*‖ small enough.
51
Function minimization
53
Descent methods
54
Descent direction
55
Steepest descent method: choose the direction that maximizes the decrease of F(x) per unit step along h, which gives h_sd = −F'(x); h_sd is a descent direction because h_sd^T F'(x) = −‖F'(x)‖^2 < 0
56
Line search
58
Steepest descent method [figure: isocontours and the gradient direction]
59
Steepest descent method: it has good performance in the initial stage of the iterative process, but converges very slowly, with only a linear rate.
60
Newton's method: expanding F'(x + h) ≈ F'(x) + F''(x) h and setting it to zero gives the Newton step, F''(x) h_n = −F'(x), followed by the update x := x + h_n
61
Another view: a minimizer satisfies F'(x*) = 0, so the same step follows from applying Newton's root-finding iteration to F'
62
Newton’s method It requires solving a linear system and H is not always positive definite. It has good performance in the final stage of the iterative process, where x is close to x*.
63
Gauss-Newton method: use the approximate Hessian; no second derivatives are needed, and the approximation H is positive semi-definite
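Spelled out with the residual vector r(x) and its Jacobian J (standard notation, not reproduced on the slide): for F(x) = ½‖r(x)‖²,

```latex
F'(x) = J^T r, \qquad
F''(x) = J^T J + \sum_i r_i(x)\, r_i''(x) \;\approx\; J^T J, \qquad
(J^T J)\, h_{gn} = -J^T r
```

Dropping the second-derivative term is what removes the need for second derivatives, and J^T J is positive semi-definite by construction.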
64
Hybrid method: this needs to calculate second-order derivatives, which might not be available.
65
Levenberg-Marquardt method: LM can be thought of as a combination of steepest descent and the Newton method. When the current solution is far from the correct one, the algorithm behaves like a steepest descent method: slow, but guaranteed to converge. When the current solution is close to the correct solution, it behaves like Newton's method.
66
Nonlinear least square
67
Levenberg-Marquardt method
68
μ = 0 → Newton's method; μ → ∞ → steepest descent method
Strategy for choosing μ (a sketch of this strategy follows below):
– Start with some small μ
– If F is not reduced, keep trying larger μ until it is
– If F is reduced, accept the step and reduce μ for the next iteration
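A compact numpy sketch of this damping strategy for a generic residual function; the scale factors 0.5 and 10 for updating μ and the stopping tolerance are arbitrary choices for illustration, not values from the slides.

```python
import numpy as np

def levenberg_marquardt(residuals, jacobian, x, mu=1e-3, iters=100, tol=1e-8):
    """Minimize F(x) = 0.5 * ||r(x)||^2 with Levenberg-Marquardt damping."""
    for _ in range(iters):
        r = residuals(x)
        J = jacobian(x)
        g = J.T @ r                                   # gradient of F
        if np.linalg.norm(g, np.inf) < tol:           # near a stationary point: stop
            break
        # Damped normal equations: (J^T J + mu * I) h = -g
        h = np.linalg.solve(J.T @ J + mu * np.eye(x.size), -g)
        if 0.5 * np.dot(residuals(x + h), residuals(x + h)) < 0.5 * np.dot(r, r):
            x, mu = x + h, mu * 0.5                   # F reduced: accept, reduce damping
        else:
            mu *= 10.0                                # F not reduced: retry with more damping
    return x
```

For the Rosenbrock recap below, one convenient residual form is r(x) = (1 − x_1, 10 (x_2 − x_1^2)), since ‖r(x)‖^2 is exactly the Rosenbrock function (so F is half of it, with the same minimizer).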
69
Recap (the Rosenbrock function) Global minimum at (1,1)
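The function itself does not appear in the transcript; the standard Rosenbrock function is

```latex
f(x_1, x_2) = (1 - x_1)^2 + 100\,(x_2 - x_1^2)^2
```

with global minimum f(1, 1) = 0 at the end of a long, curved, nearly flat valley, which is what makes it a useful stress test for the methods compared on the following slides.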
70
Steepest descent
73
In the plane of the steepest descent direction
74
Regularized least-squares: steepest descent (1000 iterations) [figure: iteration path]
75
Gauss-Newton method: with the approximate Hessian, there is no need for second derivatives, and H is positive semi-definite
77
Regularized least-squares: Newton's method (48 evaluations) [figure: iteration path]
78
Levenberg-Marquardt: blends steepest descent and Gauss-Newton. At each step, solve the damped system (J^T J + μI) h = −J^T r for the descent direction h. If μ is large, the step behaves like steepest descent; if μ is small, like Gauss-Newton.
79
Regularized least-squares: Levenberg-Marquardt (90 evaluations) [figure: iteration path]
80
A popular calibration tool
81
Multi-plane calibration (images courtesy Jean-Yves Bouguet, Intel Corp.)
Advantages:
– Only requires a plane
– Don't have to know positions/orientations
– Good code available online (see the sketch below):
  – Intel's OpenCV library: http://www.intel.com/research/mrl/research/opencv/
  – Matlab version by Jean-Yves Bouguet: http://www.vision.caltech.edu/bouguetj/calib_doc/index.html
  – Zhengyou Zhang's web site: http://research.microsoft.com/~zhang/Calib/
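As a concrete companion to the tools listed above, here is a minimal sketch of the same checkerboard procedure with the current OpenCV Python bindings (the linked C library predates them); the board dimensions, square size, and image file pattern are placeholders.

```python
import glob
import cv2
import numpy as np

pattern = (9, 6)      # inner corners per row and column (placeholder board)
square = 25.0         # checker square size in millimetres (placeholder)

# 3D corner coordinates on the plane Z = 0; identical for every view of the board
obj = np.zeros((pattern[0] * pattern[1], 3), np.float32)
obj[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

obj_points, img_points = [], []
for path in glob.glob("calib_*.jpg"):                    # placeholder file names
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(obj)
        img_points.append(corners)

rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("RMS reprojection error:", rms)
print("Intrinsic matrix K:\n", K)
```

calibrateCamera returns the RMS reprojection error, the intrinsic matrix, the distortion coefficients, and one rotation/translation pair per view, mirroring the intrinsic/extrinsic split from the earlier slides; it roughly parallels the corner-extraction and error-minimization steps shown on the following slides.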
82
Step 1: data acquisition
83
Step 2: specify corner order
84
Step 3: corner extraction
86
Step 4: minimize projection error
87
Step 4: camera calibration
89
Step 5: refinement
90
Optimized parameters
91
Applications
92
How is calibration used? It is good for recovering intrinsic parameters and is thus useful for many vision applications. Since it requires a calibration pattern, it is often necessary to remove or replace the pattern in the footage, or to make use of it in some way…
93
Example of calibration
95
Videos from GaTech: DasTatoo (making of), P!NG (making of), Work (making of), LifeInPaints (making of)
96
PhotoBook (making of)