Global Alignment and Structure from Motion
Computer Vision CSE 576, Spring 2005
Richard Szeliski

Today's lecture
Rotational alignment ("3D stitching") [Project 3]: pairwise alignment (Procrustes), global alignment (linearized least squares)
Calibration: camera matrix (Direct Linear Transform), non-linear least squares, separating intrinsics and extrinsics, focal length and optical center

Today's lecture
Structure from Motion: triangulation and pose, two-frame methods, factorization, bundle adjustment, robust statistics

Global rotational alignment
Fully Automated Panoramic Stitching [Project 3]

AutoStitch [Brown & Lowe '03]
Stitch a panoramic image from an arbitrary collection of photographs (known focal length):
1. Extract and (pairwise) match features
2. Estimate pairwise rotations using RANSAC
3. Add to the stitch and re-run global alignment
4. Warp images to a sphere and blend

3D Rotation Model
Projection equations:
1. Project from the image to a 3D ray: (x_0, y_0, z_0) = (u_0 - u_c, v_0 - v_c, f)
2. Rotate the ray by the camera motion: (x_1, y_1, z_1) = R_01 (x_0, y_0, z_0)
3. Project back into the new (source) image: (u_1, v_1) = (f x_1/z_1 + u_c, f y_1/z_1 + v_c)
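A minimal numpy sketch of these three steps (not the course starter code); the function and argument names are my own, and a single shared focal length f and center (u_c, v_c) are assumed.

```python
import numpy as np

def rotate_pixel(u0, v0, R01, f, uc, vc):
    """Map pixel (u0, v0) in image 0 to its predicted location in image 1."""
    ray0 = np.array([u0 - uc, v0 - vc, f], dtype=float)  # 1. pixel -> 3D ray
    x1, y1, z1 = R01 @ ray0                              # 2. rotate the ray
    return f * x1 / z1 + uc, f * y1 / z1 + vc            # 3. ray -> pixel in image 1
```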

Pairwise alignment
Absolute orientation [Arun et al., PAMI 1987] [Horn et al., JOSA A 1988]; Procrustes algorithm [Golub & Van Loan]
Given two sets of matching points, compute R with p_i' = R p_i, using 3D rays p_i = N(x_i, y_i, z_i) = N(u_i - u_c, v_i - v_c, f)
A = Σ_i p_i p_i'^T = Σ_i p_i p_i^T R^T = U S V^T = (U S U^T) R^T, so V^T = U^T R^T and R = V U^T
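A minimal sketch of this Procrustes solve, assuming the matched rays are stacked as rows of Nx3 arrays; the helper name is mine, and a reflection guard is added that the slide omits.

```python
import numpy as np

def procrustes_rotation(p, p_prime):
    """Best-fit R with p_prime ~= R p, for Nx3 arrays of unit rays stored as rows."""
    A = p.T @ p_prime                      # A = sum_i p_i p_i'^T
    U, _, Vt = np.linalg.svd(A)
    R = Vt.T @ U.T                         # R = V U^T, as derived above
    if np.linalg.det(R) < 0:               # guard against picking up a reflection
        Vt = Vt.copy()
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R
```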

Pairwise alignment: RANSAC loop
1. Select two feature pairs (at random): p_i = N(u_i - u_c, v_i - v_c, f), p_i' = N(u_i' - u_c, v_i' - v_c, f), i = 0, 1
2. Compute the outer product matrix A = Σ_i p_i p_i'^T
3. Compute R using the SVD: A = U S V^T, R = V U^T
4. Compute the inliers, where f |p_i' - R p_i| < ε
5. Keep the largest set of inliers
6. Re-compute the least-squares SVD estimate on all of the inliers, i = 0..n
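A hedged sketch of this RANSAC loop; the iteration count, the 3-pixel threshold, and the array layout are my assumptions, not the assignment's values.

```python
import numpy as np

def fit_rotation(p, q):
    U, _, Vt = np.linalg.svd(p.T @ q)      # A = sum_i p_i q_i^T
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:
        Vt = Vt.copy(); Vt[-1] *= -1; R = Vt.T @ U.T
    return R

def ransac_rotation(uv, uv_prime, f, uc, vc, n_iters=500, eps=3.0):
    """uv, uv_prime: Nx2 matched pixels. Returns (R, boolean inlier mask)."""
    def rays(xy):
        r = np.column_stack([xy[:, 0] - uc, xy[:, 1] - vc, np.full(len(xy), f)])
        return r / np.linalg.norm(r, axis=1, keepdims=True)      # the N(.) normalization
    p, q = rays(uv), rays(uv_prime)
    best = np.zeros(len(p), dtype=bool)
    rng = np.random.default_rng(0)
    for _ in range(n_iters):
        idx = rng.choice(len(p), size=2, replace=False)          # 1. two feature pairs
        R = fit_rotation(p[idx], q[idx])                         # 2-3. outer product + SVD
        inliers = f * np.linalg.norm(q - p @ R.T, axis=1) < eps  # 4. f |p' - R p| < eps
        if inliers.sum() > best.sum():                           # 5. keep the largest set
            best = inliers
    return fit_rotation(p[best], q[best]), best                  # 6. refit on all inliers
```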

Automatic stitching
1. Match all pairs and keep the good ones (# inliers > threshold)
2. Sort pairs by strength (# inliers)
3. Add the next strongest match (and other relevant matches) to the current stitch
4. Perform global alignment

Incremental selection & addition
(figure: images [1]..[5] connected by their pairwise matches (i, j) as they are added to the stitch)

Global alignment
Task: compute a globally consistent set of rotations {R_i} such that R_j p_ij ≈ R_k p_ik, i.e., min Σ |R_j p_ij - R_k p_ik|^2
1. Initialize the "first" frame: R_i = I
2. Multiply the "next" frame by the pairwise rotation R_ij
3. Globally update all of the current {R_i}
Q: How do we parameterize and update the {R_i}?

Parameterizing rotations
How do we parameterize R and ΔR?
Euler angles: bad idea
Quaternions: 4-vectors on the unit sphere
Use an incremental rotation, (I + [ω]×) R, and update with the Rodrigues formula
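For reference, a small Rodrigues-formula sketch for turning an update vector ω into a rotation matrix (comparable in spirit to cv2.Rodrigues); it falls back to the first-order I + [ω]× for tiny angles.

```python
import numpy as np

def skew(w):
    return np.array([[0, -w[2], w[1]],
                     [w[2], 0, -w[0]],
                     [-w[1], w[0], 0]], dtype=float)

def rodrigues(w):
    """Rotation matrix exp([w]x) for a rotation vector w (axis times angle)."""
    theta = np.linalg.norm(w)
    if theta < 1e-12:
        return np.eye(3) + skew(w)      # first-order I + [w]x for tiny updates
    K = skew(w / theta)
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)
```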

Global alignment
Least-squares solution of min Σ |R_j p_ij - R_k p_ik|^2, i.e., R_j p_ij - R_k p_ik = 0
1. Use the linearized update (I + [ω_j]×) R_j p_ij - (I + [ω_k]×) R_k p_ik = 0, i.e., [q_ij]× ω_j - [q_ik]× ω_k = q_ij - q_ik, with q_ij = R_j p_ij
2. Estimate the least-squares solution over the {ω_i}
3. Iterate a few times (updating the {R_i})
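A sketch of one linearized update, assuming a match list of (j, k, p_ij, p_ik) ray pairs (my own format, not the lecture's); the gauge is fixed here by holding camera 0, which is one of several possible choices.

```python
import numpy as np

def skew(w):
    return np.array([[0, -w[2], w[1]], [w[2], 0, -w[0]], [-w[1], w[0], 0]], dtype=float)

def rodrigues(w):
    theta = np.linalg.norm(w)
    if theta < 1e-12:
        return np.eye(3) + skew(w)
    K = skew(w / theta)
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def global_align_step(R, matches):
    """R: list of 3x3 rotations. matches: (j, k, p_ij, p_ik) tuples of matched unit rays.
    Returns updated rotations after one linearized least-squares step; camera 0 is fixed."""
    n = len(R)
    A_rows, b_rows = [], []
    for j, k, p_ij, p_ik in matches:
        q_ij, q_ik = R[j] @ p_ij, R[k] @ p_ik
        row = np.zeros((3, 3 * n))
        row[:, 3 * j:3 * j + 3] = skew(q_ij)     # [q_ij]x w_j
        row[:, 3 * k:3 * k + 3] = -skew(q_ik)    # -[q_ik]x w_k
        A_rows.append(row)
        b_rows.append(q_ij - q_ik)               # right-hand side q_ij - q_ik
    A = np.vstack(A_rows)[:, 3:]                 # gauge fix: w_0 = 0
    b = np.concatenate(b_rows)
    w = np.zeros(3 * n)
    w[3:] = np.linalg.lstsq(A, b, rcond=None)[0]
    return [rodrigues(w[3 * i:3 * i + 3]) @ R[i] for i in range(n)]
```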

Iterative focal length adjustment (optional) [Szeliski & Shum '97; MSR-TR-03]
Simplest approach: arg min_f Σ |R_j p_ij - R_k p_ik|^2
More complex approach: full bundle adjustment (op. cit. and later in this talk)

Camera Calibration

Camera calibration
Determine camera parameters from known 3D points or calibration object(s):
1. Internal or intrinsic parameters, such as focal length, optical center, aspect ratio: what kind of camera?
2. External or extrinsic (pose) parameters: where is the camera?
How can we do this?

Camera calibration: approaches
Possible approaches:
1. Linear regression (least squares)
2. Non-linear optimization
3. Vanishing points
4. Multiple planar patterns
5. Panoramas (rotational motion)

Image formation equations
Perspective projection of a camera-frame point (X_c, Y_c, Z_c): u = f X_c/Z_c + u_c, v = f Y_c/Z_c + v_c
(figure: projection geometry with focal length f and optical center u_c)

Calibration matrix
Is this form of K good enough?
Non-square pixels (digital video)
Skew
Radial distortion

Camera matrix
Fold the intrinsic calibration matrix K and the extrinsic pose parameters (R, t) together into a camera matrix: M = K [R | t]
(put a 1 in the lower right corner for 11 degrees of freedom)

Camera matrix calibration
Directly estimate the 11 unknowns in the M matrix using known 3D points (X_i, Y_i, Z_i) and measured feature positions (u_i, v_i)

Camera matrix calibration
Linear regression: bring the denominator over and solve the set of (over-determined) linear equations. How? Least squares (pseudo-inverse).
Is this good enough?
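A DLT sketch of this linear estimate: bringing the denominator over gives two homogeneous equations per point, solved with an SVD. At least six 3D-2D correspondences are assumed, and the function name is mine.

```python
import numpy as np

def calibrate_dlt(X, uv):
    """X: Nx3 known 3D points, uv: Nx2 measured pixels (N >= 6). Returns M (3x4, up to scale)."""
    rows = []
    for (Xw, Yw, Zw), (u, v) in zip(X, uv):
        P = [Xw, Yw, Zw, 1.0]
        rows.append(P + [0.0] * 4 + [-u * c for c in P])   # m0.P - u (m2.P) = 0
        rows.append([0.0] * 4 + P + [-v * c for c in P])   # m1.P - v (m2.P) = 0
    _, _, Vt = np.linalg.svd(np.asarray(rows))
    return Vt[-1].reshape(3, 4)          # smallest right singular vector (11 d.o.f. + scale)
```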

Optimal estimation
Feature measurement equations
Likelihood of M given {(u_i, v_i)}

Optimal estimation
Log likelihood of M given {(u_i, v_i)}
How do we minimize C? Non-linear regression (least squares), because û_i and v̂_i are non-linear functions of M

Levenberg-Marquardt
Iterative non-linear least squares [Press '92]
Linearize the measurement equations and substitute into the log-likelihood equation: a quadratic cost function in Δm

Levenberg-Marquardt
Iterative non-linear least squares [Press '92]
Solve for the minimum of the quadratic cost: a linear system in Δm built from the (approximate) Hessian and the error (gradient) term

Levenberg-Marquardt
What if it doesn't converge?
Multiply the diagonal by (1 + λ) and increase λ until it does
Halve the step size Δm (my favorite)
Use line search
Other ideas?
Uncertainty analysis: covariance Σ = A^-1
Is maximum likelihood the best idea?
How do we start in the vicinity of the global minimum?
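A generic Levenberg-Marquardt sketch following this recipe (damp the normal equations, grow λ on failure, shrink it on success); residual_fn and jac_fn stand in for the reprojection residuals and their Jacobian and are my placeholders, not the lecture's code.

```python
import numpy as np

def levenberg_marquardt(residual_fn, jac_fn, m0, n_iters=50, lam=1e-3):
    """Minimize sum(residual_fn(m)**2) starting from m0, with jac_fn giving the Jacobian."""
    m = np.asarray(m0, dtype=float)
    cost = np.sum(residual_fn(m) ** 2)
    for _ in range(n_iters):
        r, J = residual_fn(m), jac_fn(m)
        A, g = J.T @ J, J.T @ r                  # approximate Hessian and gradient
        dm = np.linalg.solve(A + lam * np.diag(np.diag(A)), -g)
        new_cost = np.sum(residual_fn(m + dm) ** 2)
        if new_cost < cost:                      # accept the step, relax the damping
            m, cost, lam = m + dm, new_cost, lam * 0.1
        else:                                    # reject: increase the diagonal damping
            lam *= 10.0
    return m
```

The covariance estimate Σ = A^-1 mentioned above can be read off from the final J^T J.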

Camera matrix calibration
Advantages: very simple to formulate and solve; can recover K [R | t] from M using QR decomposition [Golub & Van Loan 96]
Disadvantages: doesn't compute internal parameters; more unknowns than true degrees of freedom; need a separate camera matrix for each new view

Separate intrinsics / extrinsics
New feature measurement equations; use non-linear minimization
Standard technique in photogrammetry, computer vision, and computer graphics:
[Tsai 87]: also estimates the radial distortion term κ_1 (CMU)
[Bogart 91]: View Correlation

Intrinsic/extrinsic calibration
Advantages: can solve for more than one camera pose at a time; potentially fewer degrees of freedom
Disadvantages: more complex update rules; need a good initialization (recover K [R | t] from M)

Vanishing Points
Determine the focal length f and optical center (u_c, v_c) from the image of a cube's (or building's) vanishing points [Caprile '90][Antone & Teller '00]
(figure: three vanishing points u_0, u_1, u_2)

Vanishing Points
The X, Y, and Z directions, X_i = (1,0,0,0) … (0,0,1,0), correspond to vanishing points that are scaled versions of the rotation matrix
(figure: vanishing points u_0, u_1, u_2 and the projection geometry)

Vanishing Points
Orthogonality conditions on the rotation matrix R: r_i · r_j = δ_ij
Determine (u_c, v_c) from the orthocenter of the vanishing point triangle
Then determine f^2 from two equations (only need 2 vanishing points if (u_c, v_c) is known)
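A sketch of this recipe, assuming three finite vanishing points of mutually orthogonal directions in pixel coordinates: the principal point is the orthocenter of their triangle, and f^2 follows from the orthogonality condition (v_i - c)·(v_j - c) + f^2 = 0.

```python
import numpy as np

def calibrate_from_vps(v0, v1, v2):
    """v0, v1, v2: pixel coordinates (2-vectors) of three orthogonal vanishing points.
    Returns (principal point, focal length)."""
    v0, v1, v2 = (np.asarray(v, dtype=float) for v in (v0, v1, v2))
    # Orthocenter: intersect the altitude through v0 (perpendicular to v1-v2)
    # with the altitude through v1 (perpendicular to v2-v0).
    A = np.array([v2 - v1, v0 - v2])
    b = np.array([np.dot(v2 - v1, v0), np.dot(v0 - v2, v1)])
    c = np.linalg.solve(A, b)                    # (u_c, v_c)
    f2 = -np.dot(v0 - c, v1 - c)                 # (v_i - c).(v_j - c) + f^2 = 0
    return c, np.sqrt(max(f2, 0.0))
```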

Vanishing point calibration
Advantages: only need to see vanishing points (e.g., architecture, a table, …)
Disadvantages: not that accurate; need rectahedral object(s) in the scene

Single View Metrology [A. Criminisi, I. Reid and A. Zisserman, ICCV 99]
Make scene measurements from a single image
Application: 3D from a single image
Assumptions:
1. 3 orthogonal sets of parallel lines
2. 4 known points on the ground plane
3. 1 known height in the scene
Can still get an affine reconstruction without 2 and 3

Criminisi et al., ICCV 99: complete approach
Load in an image
Click on parallel lines defining the X, Y, and Z directions
Compute vanishing points
Specify points on the reference plane and a reference height
Compute the 3D positions of several points
Create a 3D model from these points
Extract texture maps
Output a VRML model

3D Modeling from a Photograph (example figures)

3D Modeling from a Photograph (example figures, continued)

Multi-plane calibration
Use several images of a planar target held at unknown orientations [Zhang 99]
Compute the plane homographies
Solve for K^-T K^-1 from the H_k's:
1 plane if only f is unknown
2 planes if (f, u_c, v_c) are unknown
3+ planes for the full K
Code available from Zhang and OpenCV
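Since the slide notes the code is available in OpenCV, here is a hedged end-to-end sketch of Zhang-style calibration via cv2.calibrateCamera; the checkerboard size and the image folder are assumptions for illustration only.

```python
import glob
import cv2
import numpy as np

pattern = (9, 6)                                 # inner-corner count of an assumed checkerboard
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)   # planar target, Z = 0

obj_pts, img_pts, size = [], [], None
for fname in glob.glob("calib_images/*.jpg"):    # hypothetical image folder
    gray = cv2.cvtColor(cv2.imread(fname), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_pts.append(objp)
        img_pts.append(corners)
        size = gray.shape[::-1]

# Each view contributes a plane homography; calibrateCamera solves for K (and distortion).
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_pts, img_pts, size, None, None)
print("K =\n", K)
```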

Rotational motion
Use pure rotation (large scene) to estimate f:
1. Estimate f from pairwise homographies
2. Re-estimate f from the 360° "gap"
3. Optimize over all {K, R_j} parameters [Stein 95; Hartley '97; Shum & Szeliski '00; Kang & Weiss '99]
Most accurate way to get f, short of surveying distant points
(figure: f = 510 vs. f = 468)

Pose estimation and triangulation

Pose estimation
Once the internal camera parameters are known, we can compute the camera pose [Tsai 87] [Bogart 91]
Application: superimpose 3D graphics onto video
How do we initialize (R, t)?

Pose estimation
Previous initialization techniques: vanishing points [Caprile 90], planar pattern [Zhang 99]
Other possibilities: Through-the-Lens Camera Control [Gleicher 92] (differential update); 3+ point "linear methods" [DeMenthon 95][Quan 99][Ameller 00]

Pose estimation
Solve the orthographic problem, then iterate [DeMenthon 95]
Use inter-point distance constraints [Quan 99][Ameller 00]: solve a set of polynomial equations in x_i^2
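As a practical stand-in for these initializers (not DeMenthon's or Quan's specific algorithms), OpenCV's solvePnP can compute (R, t) once K is known; a minimal sketch:

```python
import cv2
import numpy as np

def estimate_pose(X, uv, K):
    """X: Nx3 known 3D points, uv: Nx2 pixels, K: 3x3 intrinsics. Returns (R, t)."""
    ok, rvec, tvec = cv2.solvePnP(X.astype(np.float64), uv.astype(np.float64),
                                  K.astype(np.float64), distCoeffs=None)
    R, _ = cv2.Rodrigues(rvec)      # rotation vector -> 3x3 rotation matrix
    return R, tvec
```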

Triangulation
Problem: given some points in correspondence across two or more images (taken from calibrated cameras), {(u_j, v_j)}, compute the 3D location X

Triangulation
Method I: intersect viewing rays in 3D, minimizing Σ_j |X - (C_j + s_j V_j)|^2, where:
X is the unknown 3D point
C_j is the optical center of camera j
V_j is the viewing ray for pixel (u_j, v_j)
s_j is the unknown distance along V_j
Advantage: geometrically intuitive

Triangulation
Method II: solve linear equations in X; advantage: very simple
Method III: non-linear minimization; advantage: most accurate (image plane error)
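A sketch of Method II (linear triangulation): each view contributes two rows, and the smallest right singular vector gives X; a Method III refinement would then minimize the reprojection error starting from this X.

```python
import numpy as np

def triangulate(Ps, uvs):
    """Ps: list of 3x4 camera matrices; uvs: matching list of (u, v) pixels. Returns the 3D point."""
    rows = []
    for P, (u, v) in zip(Ps, uvs):
        rows.append(u * P[2] - P[0])   # u (p2 . X) = p0 . X
        rows.append(v * P[2] - P[1])   # v (p2 . X) = p1 . X
    _, _, Vt = np.linalg.svd(np.asarray(rows))
    X = Vt[-1]
    return X[:3] / X[3]                # dehomogenize
```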

Structure from Motion

Structure from motion
Given many points in correspondence across several images, {(u_ij, v_ij)}, simultaneously compute the 3D locations x_i and the camera (or motion) parameters (K, R_j, t_j)
Two main variants: calibrated and uncalibrated (sometimes associated with Euclidean and projective reconstructions)

Structure from motion
How many points do we need to match?
2 frames: (R, t) has 5 dof, plus 3n point locations, ≤ 4n point measurements ⇒ n ≥ 5
k frames: 6(k-1) - 1 + 3n ≤ 2kn
Always want to use many more

Two-frame methods
Two main variants:
1. Calibrated: "essential matrix" E; use ray directions (x_i, x_i')
2. Uncalibrated: "fundamental matrix" F [Hartley & Zisserman 2000]

Essential matrix
Co-planarity constraint:
x' ≈ R x + t
[t]× x' ≈ [t]× R x
x'^T [t]× x' ≈ x'^T [t]× R x
x'^T E x = 0 with E = [t]× R
Solve for E using least squares (SVD)
t is the least singular vector of E
R is obtained from the other two singular vectors
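An eight-point-style sketch for E on calibrated ray directions, followed by the standard SVD decomposition into t and two rotation candidates; the cheirality test that selects among the four (R, t) combinations is deliberately omitted.

```python
import numpy as np

def estimate_essential(x, xp):
    """x, xp: Nx3 matched (unit) ray directions with xp_i^T E x_i = 0. Returns E, U, Vt."""
    A = np.stack([np.outer(xp[i], x[i]).ravel() for i in range(len(x))])
    _, _, Vt = np.linalg.svd(A)
    E = Vt[-1].reshape(3, 3)
    U, _, Vt = np.linalg.svd(E)
    return U @ np.diag([1.0, 1.0, 0.0]) @ Vt, U, Vt   # project onto the essential manifold

def decompose_essential(U, Vt):
    """Standard decomposition of E = [t]x R: t up to sign, plus two rotation candidates."""
    W = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
    t = U[:, 2]                       # least (left) singular vector of E
    R1, R2 = U @ W @ Vt, U @ W.T @ Vt
    if np.linalg.det(R1) < 0: R1 = -R1
    if np.linalg.det(R2) < 0: R2 = -R2
    return R1, R2, t
```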

Fundamental matrix
Camera calibrations are unknown:
x'^T F x = 0 with F = [e]× H = K'^-T [t]× R K^-1
Solve for F using least squares (SVD); re-scale the (x_i, x_i') so that |x_i| ≈ 1/2 [Hartley]
e (the epipole) is still the least singular vector of F
H is obtained from the other two singular vectors
"Plane + parallax" (projective) reconstruction
Use self-calibration to determine K [Pollefeys]
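A normalized eight-point sketch for F; note that it uses Hartley's usual normalization (zero mean, sqrt(2) average radius) rather than the |x_i| ≈ 1/2 re-scaling quoted above, and it reads off the epipole in the first image as the right null vector of F.

```python
import numpy as np

def normalize_pts(x):
    """Similarity transform T so that T x has zero mean and sqrt(2) average radius."""
    c = x.mean(axis=0)
    s = np.sqrt(2) / np.mean(np.linalg.norm(x - c, axis=1))
    T = np.array([[s, 0, -s * c[0]], [0, s, -s * c[1]], [0, 0, 1.0]])
    return np.column_stack([x, np.ones(len(x))]) @ T.T, T

def fundamental_8pt(x, xp):
    """x, xp: Nx2 matched pixels (N >= 8). Returns F (rank 2) and the epipole e in image 1."""
    x1, T1 = normalize_pts(np.asarray(x, float))
    x2, T2 = normalize_pts(np.asarray(xp, float))
    A = np.stack([np.outer(x2[i], x1[i]).ravel() for i in range(len(x1))])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    U, S, Vt = np.linalg.svd(F)
    F = U @ np.diag([S[0], S[1], 0.0]) @ Vt        # enforce rank 2
    F = T2.T @ F @ T1                              # undo the normalization
    e = np.linalg.svd(F)[2][-1]                    # right null vector: epipole in image 1
    return F, e / e[2]
```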

Three-frame methods
Trifocal tensor [Hartley & Zisserman 2000]

Multi-frame Structure from Motion

Factorization [Tomasi & Kanade, IJCV 92]

Structure [from] Motion
Given a set of feature tracks, estimate the 3D structure and the 3D (camera) motion
Assumption: orthographic projection
Tracks: (u_fp, v_fp), f: frame, p: point
Subtract out the mean 2D position…
u_fp = i_f^T s_p (i_f: rotation, s_p: position)
v_fp = j_f^T s_p

Measurement equations
u_fp = i_f^T s_p (i_f: rotation, s_p: position)
v_fp = j_f^T s_p
Stack them up: W = R S, with R = (i_1, …, i_F, j_1, …, j_F)^T and S = (s_1, …, s_P)

Factorization
W (2F × P) = R (2F × 3) S (3 × P)
SVD: W = U Λ V^T, where Λ must be rank 3
W' = (U Λ^1/2)(Λ^1/2 V^T) = U' V'
Make R orthonormal: R = U' Q, S = Q^-1 V', where Q is found from the metric constraints on the rows of R: |i_f| = |j_f| = 1, i_f · j_f = 0, …
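A sketch of the rank-3 factorization step on a registered measurement matrix W; the metric upgrade (solving for Q from the orthonormality constraints) is only indicated in a comment, not implemented.

```python
import numpy as np

def factorize(W):
    """W: 2F x P measurement matrix, already registered (per-row mean subtracted).
    Returns the affine motion (2F x 3) and structure (3 x P) factors."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    U3, s3, Vt3 = U[:, :3], s[:3], Vt[:3]          # keep the rank-3 part
    R_hat = U3 * np.sqrt(s3)                       # U' = U Lambda^(1/2)
    S_hat = np.sqrt(s3)[:, None] * Vt3             # V' = Lambda^(1/2) V^T
    # Metric upgrade (not shown): find Q so that the rows i_f, j_f of R_hat @ Q have
    # unit norm and are mutually orthogonal; then R = R_hat @ Q and S = inv(Q) @ S_hat.
    return R_hat, S_hat
```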

Results
Look at the paper figures…

Extensions
Paraperspective [Poelman & Kanade, PAMI 97]
Sequential factorization [Morita & Kanade, PAMI 97]
Factorization under perspective [Christy & Horaud, PAMI 96] [Sturm & Triggs, ECCV 96]
Factorization with uncertainty [Anandan & Irani, IJCV 2002]

Bundle Adjustment
What makes this non-linear minimization hard?
Many more parameters: potentially slow
Poorer conditioning (high correlation)
Potentially lots of outliers
Gauge (coordinate) freedom

Lots of parameters: sparsity
Only a few entries in the Jacobian are non-zero

Sparse Cholesky (skyline)
First used in finite element analysis
Applied to SfM by [Szeliski & Kang 1994]
(figure: structure | motion block layout and fill-in)
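A hedged sketch of exploiting this sparsity without writing a sparse Cholesky solver yourself: build the Jacobian's block sparsity pattern (each observation touches one camera and one point) and pass it to scipy.optimize.least_squares; the parameter layout and the residual function name are my assumptions.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.sparse import lil_matrix

def ba_sparsity(n_cams, n_pts, cam_idx, pt_idx):
    """Jacobian sparsity pattern: 2 residual rows per observation; 6 parameters per
    camera followed by 3 per point."""
    A = lil_matrix((2 * len(cam_idx), 6 * n_cams + 3 * n_pts), dtype=int)
    for k, (c, p) in enumerate(zip(cam_idx, pt_idx)):
        A[2 * k:2 * k + 2, 6 * c:6 * c + 6] = 1                            # camera block
        A[2 * k:2 * k + 2, 6 * n_cams + 3 * p:6 * n_cams + 3 * p + 3] = 1  # point block
    return A

# Hypothetical usage with a user-supplied reprojection residual function:
# result = least_squares(reproj_residuals, x0, jac_sparsity=ba_sparsity(nc, npts, ci, pi),
#                        method="trf", args=(observations,))
```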

Conditioning and gauge freedom
Poor conditioning: use a 2nd-order method; use Cholesky decomposition
Gauge freedom: fix certain parameters (orientation), or zero out the last few rows in the Cholesky decomposition

Robust error models
Outlier rejection: use a robust penalty applied to each set of joint measurements
For extremely bad data, use random sampling [RANSAC, Fischler & Bolles, CACM '81]
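A small sketch of one common robust penalty (Huber) and its reweighting, as might be applied to each joint measurement; the inlier scale sigma is an assumed parameter, and other robust kernels would follow the same pattern.

```python
import numpy as np

def huber_rho(r, sigma=1.0):
    """Robust Huber penalty: quadratic for |r| <= sigma, linear beyond."""
    a = np.abs(r)
    return np.where(a <= sigma, 0.5 * a ** 2, sigma * (a - 0.5 * sigma))

def huber_weights(r, sigma=1.0):
    """Iteratively-reweighted-least-squares weights that down-weight large residuals."""
    a = np.maximum(np.abs(r), 1e-12)
    return np.where(a <= sigma, 1.0, sigma / a)
```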

Correspondences
Feature matching can be refined after a structure and motion estimate has been produced:
Decide which matches obey the epipolar geometry
Decide which matches are geometrically consistent
(Optional) iterate between correspondences and SfM estimates using MCMC [Dellaert et al., Machine Learning 2003]

Structure from motion: limitations
Very difficult to reliably estimate metric structure and motion unless there is a large (x or y) rotation, or a large field of view and depth variation
Camera calibration is important for Euclidean reconstructions
Need a good feature tracker

Bibliography
M.-A. Ameller, B. Triggs, and L. Quan. Camera pose revisited: new linear algorithms.
M. Antone and S. Teller. Recovering relative camera rotations in urban scenes. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'2000), volume 2, Hilton Head Island, June 2000.
S. Becker and V. M. Bove. Semiautomatic 3-D model extraction from uncalibrated 2-D camera views. In SPIE Vol. 2410, Visual Data Exploration and Analysis II, San Jose, CA, February 1995. Society of Photo-Optical Instrumentation Engineers.
R. G. Bogart. View correlation. In J. Arvo, editor, Graphics Gems II. Academic Press, Boston, 1991.

Bibliography
D. C. Brown. Close-range camera calibration. Photogrammetric Engineering, 37(8), 1971.
B. Caprile and V. Torre. Using vanishing points for camera calibration. International Journal of Computer Vision, 4(2), March 1990.
R. T. Collins and R. S. Weiss. Vanishing point calculation as a statistical inference on the unit sphere. In Third International Conference on Computer Vision (ICCV'90), Osaka, Japan, December 1990. IEEE Computer Society Press.
A. Criminisi, I. Reid, and A. Zisserman. Single view metrology. In Seventh International Conference on Computer Vision (ICCV'99), Kerkyra, Greece, September 1999.

Bibliography
L. de Agapito, R. I. Hartley, and E. Hayman. Linear calibration of a rotating and zooming camera. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'99), volume 1, Fort Collins, June 1999.
D. I. DeMenthon and L. S. Davis. Model-based object pose in 25 lines of code. International Journal of Computer Vision, 15, June 1995.
M. Gleicher and A. Witkin. Through-the-lens camera control. Computer Graphics (SIGGRAPH'92), 26(2), July 1992.
R. I. Hartley. An algorithm for self calibration from several views. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'94), Seattle, Washington, June 1994. IEEE Computer Society.

Bibliography
R. I. Hartley. Self-calibration of stationary cameras. International Journal of Computer Vision, 22(1):5-23, 1997.
R. I. Hartley, E. Hayman, L. de Agapito, and I. Reid. Camera calibration and the search for infinity. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'2000), volume 1, Hilton Head Island, June 2000.
R. I. Hartley and A. Zisserman. Multiple View Geometry. Cambridge University Press, 2000.
B. K. P. Horn. Closed-form solution of absolute orientation using unit quaternions. Journal of the Optical Society of America A, 4(4), 1987.

Bibliography
S. B. Kang and R. Weiss. Characterization of errors in compositing panoramic images. Computer Vision and Image Understanding, 73(2), February 1999.
M. Pollefeys, R. Koch, and L. Van Gool. Self-calibration and metric reconstruction in spite of varying and unknown internal camera parameters. International Journal of Computer Vision, 32(1):7-25, 1999.
L. Quan and Z. Lan. Linear N-point camera pose determination. IEEE Transactions on Pattern Analysis and Machine Intelligence, 21(8), August 1999.
G. Stein. Accurate internal camera calibration using rotation, with analysis of sources of error. In Fifth International Conference on Computer Vision (ICCV'95), Cambridge, Massachusetts, June 1995.

Bibliography
C. V. Stewart. Robust parameter estimation in computer vision. SIAM Reviews, 41(3):513-537, 1999.
R. Szeliski and S. B. Kang. Recovering 3D shape and motion from image streams using nonlinear least squares. Journal of Visual Communication and Image Representation, 5(1):10-28, March 1994.
R. Y. Tsai. A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses. IEEE Journal of Robotics and Automation, RA-3(4), August 1987.
Z. Zhang. Flexible camera calibration by viewing a plane from unknown orientations. In Seventh International Conference on Computer Vision (ICCV'99), Kerkyra, Greece, September 1999.