Automatic Camera Calibration


Automatic Camera Calibration Lu Zhang Sep 22, 2005

Outline Projective geometry and camera models Principles of projective geometry Camera models Camera calibration methods Basic principles of self-calibration Stratified self-calibration

Projective Geometry Basic principles in projective spaces: Points: A point of an n-dimensional projective space is represented by an (n+1)-vector of coordinates x = (x_1, ..., x_{n+1}); these are called the homogeneous or projective coordinates of the point, and x is called a coordinate vector. Two (n+1)-vectors x and y represent the same point if y = λx for some λ ≠ 0.

Projective Geometry The Projective Line The space P^1 is known as the projective line. The standard projective basis of the projective line is (1, 0) and (0, 1). A point on the line is (x_1, x_2), where x_1 and x_2 are not both 0. The point at infinity: if we let x = x_1/x_2, then as x_2 → 0, x → infinity; this point, with coordinates (1, 0), is called the 'point at infinity'.

Projective Geometry Projective space Points: a point is a homogeneous 4-vector (X, Y, Z, W). Planes: a plane is likewise represented by a homogeneous 4-vector. Lines: a line is defined as the set of points that are linearly dependent on two given points. Plane at infinity: the points with W = 0 are said to be at infinity, or ideal points. The set of all ideal points may be written (X, Y, Z, 0), and it lies on a single plane, the plane at infinity.
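The two facts above (homogeneous vectors are equivalent up to a nonzero scale, and ideal points lie on the plane at infinity) can be checked numerically. This is a minimal sketch assuming numpy; the function name `same_projective_point` and all coordinate values are mine, not from the slides.

```python
import numpy as np

def same_projective_point(x, y, tol=1e-9):
    """Two homogeneous (n+1)-vectors represent the same projective point
    exactly when one is a nonzero multiple of the other, i.e. when the
    2 x (n+1) matrix [x; y] has rank 1."""
    stacked = np.vstack([np.asarray(x, float), np.asarray(y, float)])
    return np.linalg.matrix_rank(stacked, tol=tol) == 1

# (1, 2, 3, 1) and (2, 4, 6, 2) are the same point of projective 3-space.
print(same_projective_point([1, 2, 3, 1], [2, 4, 6, 2]))      # True

# An ideal point (last coordinate 0) lies on the plane at infinity
# pi_inf = (0, 0, 0, 1), since their dot product vanishes.
ideal = np.array([1.0, -2.0, 5.0, 0.0])
pi_inf = np.array([0.0, 0.0, 0.0, 1.0])
print(ideal @ pi_inf)                                         # 0.0
```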

Camera models A point M on an object with coordinates (X, Y, Z) will be imaged at some point m = (x, y) in the image plane. If we consider the effect of the focal length f, the relationship between image coordinates and 3-D scene coordinates can be written as x = fX/Z, y = fY/Z, or in homogeneous coordinates (U, V, S) = P (X, Y, Z, 1), with u = U/S and v = V/S if S ≠ 0; compactly, m = PM.
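The projection equations above can be sketched as follows; a minimal numpy illustration with invented values, where P is the ideal perspective projection with focal length f and no intrinsic or extrinsic effects.

```python
import numpy as np

# Ideal perspective projection with focal length f: a scene point
# M = (X, Y, Z) maps to m = (f*X/Z, f*Y/Z). In homogeneous form,
# (U, V, S) = P @ (X, Y, Z, 1) and u = U/S, v = V/S whenever S != 0.
f = 2.0
P = np.array([[f, 0.0, 0.0, 0.0],
              [0.0, f, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0]])

M = np.array([1.0, 2.0, 4.0, 1.0])    # homogeneous scene point (invented)
U, V, S = P @ M
u, v = U / S, V / S
print(u, v)                           # 0.5 1.0, i.e. (f*X/Z, f*Y/Z)
```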

Camera models The general intrinsic matrix is A = [[f·k_u, 0, u_0], [0, f·k_v, v_0], [0, 0, 1]]. The intrinsic parameters describe properties of the camera itself: the focal length f, the pixel width k_u and pixel height k_v, and the x and y coordinates u_0, v_0 of the optical centre.
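As an illustration, the intrinsic matrix can be assembled and applied to a normalized image point like this. All parameter values below are invented, and a zero skew entry is assumed.

```python
import numpy as np

def intrinsic_matrix(f, ku, kv, u0, v0):
    """General intrinsic matrix: f is the focal length, ku/kv scale the
    image plane to pixel width/height, and (u0, v0) is the optical
    centre in pixels. A zero skew entry is assumed."""
    return np.array([[f * ku, 0.0,    u0],
                     [0.0,    f * kv, v0],
                     [0.0,    0.0,    1.0]])

# All parameter values here are invented for illustration.
A = intrinsic_matrix(f=2.0, ku=500.0, kv=500.0, u0=320.0, v0=240.0)

# Map a normalized image-plane point (x, y, 1) to pixel coordinates.
m = np.array([0.1, -0.05, 1.0])
u, v, w = A @ m
print(round(u / w, 6), round(v / w, 6))    # 420.0 190.0
```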

Camera models Camera Motion (Extrinsic parameters) If we go from the old coordinate system centred at C to the new coordinate system centred at O by a rotation R followed by a translation T, then in projective coordinates the change of frame is given by the 4*4 matrix K = [[R, T], [0 0 0, 1]].
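The 4*4 change-of-frame matrix can be exercised on a toy example; R and T below are invented (a 90-degree rotation about z, then a unit shift along x).

```python
import numpy as np

# Change of frame as the slide's 4*4 matrix: rotation R followed by
# translation T, acting on homogeneous coordinates.
theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
T = np.array([1.0, 0.0, 0.0])

K = np.eye(4)          # the slide's 4*4 matrix, K = [[R, T], [0, 1]]
K[:3, :3] = R
K[:3, 3] = T

M_old = np.array([0.0, 1.0, 0.0, 1.0])   # point in the old frame
M_new = K @ M_old                        # rotated, then translated
print(np.round(M_new, 6))                # ≈ [0, 0, 0, 1]
```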

Camera models Intrinsic calibration The figure (not reproduced in this transcript) shows a transformation from the retinal plane to itself, given by a 3*3 matrix H; combining H with the projection equations yields the intrinsic parameters.

Self-calibration Self-calibration refers to the process of calculating all the intrinsic parameters of the camera using only the information available in the images taken by that camera. No calibration frame or known object is needed: the only requirement is that there is a static object in the scene, and the camera moves around taking images.

Self-calibration Why is self-calibration possible? Because of projective invariants. Epipolar geometry: the epipole is the point of intersection of the line joining the optical centres with the image plane. Thus the epipole is the image, in one camera, of the optical centre of the other camera.

Self-calibration A point x in one image generates a line (the epipolar line) in the other image, on which its corresponding point x' must lie. With two views, the two camera coordinate systems are related by a rotation R and a translation T.

Self-calibration Therefore M' = RM + T. Since T, M' and RM all lie in the epipolar plane, M'^T (T × RM) = 0; rewriting the cross product with T as multiplication by the skew-symmetric matrix [T]_x, this becomes m'^T E m = 0, where E = [T]_x R is the essential matrix.
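The essential-matrix constraint can be checked numerically on a synthetic pair of views; the rotation, translation, and scene point below are invented.

```python
import numpy as np

def skew(t):
    """Cross-product matrix [T]_x, so that skew(t) @ v == np.cross(t, v)."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

# Synthetic two-view geometry (all values invented): the second frame is
# related to the first by M' = R @ M + T, and E = [T]_x @ R.
theta = 0.1
R = np.array([[np.cos(theta), 0.0, np.sin(theta)],
              [0.0, 1.0, 0.0],
              [-np.sin(theta), 0.0, np.cos(theta)]])
T = np.array([1.0, 0.2, 0.0])
E = skew(T) @ R

M = np.array([0.5, -0.3, 4.0])   # a scene point in the first camera frame
m1 = M / M[2]                    # its normalized image in view 1
M2 = R @ M + T
m2 = M2 / M2[2]                  # its normalized image in view 2

residual = m2 @ E @ m1           # the epipolar constraint m'^T E m = 0
print(abs(residual) < 1e-12)     # True
```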

Self-calibration If we have two views of a point M in three-dimensional space, with M imaged at m in view 1 and m' in view 2, then m = PM and m' = P'M. Writing the first camera as P = [P~ | p], its optical centre C (in homogeneous coordinates) satisfies PC = 0. The epipole e' is the projection of C on view 2, therefore e' = P'C.
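Finding the optical centre from PC = 0 and projecting it to obtain the epipole can be sketched as follows; the intrinsic values and camera placement are invented for illustration.

```python
import numpy as np

# The optical centre C of the first camera satisfies P1 @ C = 0; the
# epipole in view 2 is its image e' = P2 @ C.
A = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
P1 = A @ np.hstack([np.eye(3), np.zeros((3, 1))])   # first camera
t = np.array([[0.5], [0.0], [1.0]])
P2 = A @ np.hstack([np.eye(3), t])                  # translated second camera

# The null vector of P1 is the right singular vector of its zero singular value.
C = np.linalg.svd(P1)[2][-1]
print(np.allclose(P1 @ C, 0.0))      # True: C is the optical centre

e2 = P2 @ C                          # epipole: image of C in view 2
print(np.round(e2 / e2[2], 6))       # pixel epipole (720, 240)
```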

Self-calibration From the epipolar geometry we can also obtain the constraint in image coordinates, and rewrite it as m'^T F m = 0, which defines the fundamental matrix F.

Self-calibration A is the 3*3 left block of the intrinsic projection matrix P. We get the relationship between F and E: F = A^-T E A^-1 (equivalently, E = A^T F A).
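The relation F = A^-T E A^-1 can be verified on synthetic data; this numpy sketch keeps the slides' assumption of the same intrinsics in both views, and all numbers are invented.

```python
import numpy as np

def skew(t):
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

# With A the 3*3 intrinsic block, F = inv(A).T @ E @ inv(A), so pixel
# correspondences satisfy p'^T F p = 0.
A = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)
T = np.array([1.0, 0.5, 0.2])
E = skew(T) @ R
A_inv = np.linalg.inv(A)
F = A_inv.T @ E @ A_inv

M = np.array([0.3, -0.2, 5.0])    # scene point in the first camera frame
p1 = A @ (M / M[2])               # pixels in view 1
M2 = R @ M + T
p2 = A @ (M2 / M2[2])             # pixels in view 2
print(abs(p2 @ F @ p1) < 1e-9)    # True: epipolar constraint in pixels
```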

Self-calibration From three views we can compute three fundamental matrices F. By using the relationship between F, E and A, together with the already-known coordinates of the epipoles, we can finally solve a matrix equation for the parameters of matrix A, which are also the parameters of the intrinsic matrix P that encodes the camera characteristics.

Stratified self-calibration What is stratified self-calibration? Self-calibration is the process of determining internal camera parameters directly from multiple uncalibrated images. Stratified self-calibration first performs a projective reconstruction, then obtains an affine reconstruction as an initial value, and finally applies a metric reconstruction. Stratification of geometry: Projective → Affine → Metric.

Stratified self-calibration Obtaining the projective camera matrices Determine the rectifying homography H: applying H to the projective camera matrix P_i of each view i upgrades it to a metric camera. For the actual cameras the internal parameter matrix K is the same for each view, but in a projective reconstruction the calibration matrix will in general differ from view to view. The purpose of self-calibration is therefore to find the rectifying homography H. Obtaining the metric reconstruction.
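The rectifying homography exists because a projective reconstruction is only determined up to a 4*4 homography: replacing P_i by P_i H and X_j by H^-1 X_j leaves every image point unchanged. A minimal numpy check of that identity, on random (invented) data:

```python
import numpy as np

rng = np.random.default_rng(0)

# A projective reconstruction {P_i, X_j} is only determined up to a 4*4
# homography H. Self-calibration looks for the particular rectifying H
# that makes the cameras metric.
P = rng.standard_normal((3, 4))    # an arbitrary projective camera
X = rng.standard_normal(4)         # an arbitrary homogeneous scene point
H = rng.standard_normal((4, 4))    # an arbitrary (invertible) homography

x_before = P @ X
x_after = (P @ H) @ (np.linalg.inv(H) @ X)
print(np.allclose(x_before, x_after))   # True
```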

Stratified self-calibration Stratified self-calibration algorithm Step 1, affine calibration: formulate the modulus constraint for all pairs of views (at least 3 views are needed; for n > 3, solve the resulting set of equations), then compute the affine projection matrices. Step 2, metric calibration: compute the position of the plane at infinity, find the intrinsic parameters K, and compute the metric projection matrices.

Pipeline Projective reconstruction (relating images, initial reconstruction, adding views); Self-calibration (finding the plane at infinity, computing K, metric reconstruction); Dense depth estimation (rectification, dense stereo matching); Modeling.

Projective reconstruction Relating images Detecting feature points: Harris corner detector. Matching feature points: RANSAC (RANdom SAmple Consensus). Determining the fundamental matrix: least-squares estimation. The whole procedure: repeat { take a minimal sample (8 matches); compute F; estimate the inliers } until P(inliers, trials) > 95%; then refine F using all inliers.
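The "compute F" step inside the RANSAC loop is typically the linear eight-point algorithm. Below is a minimal numpy sketch on clean synthetic correspondences; the function name is mine, the geometry is invented, and a production version would add Hartley normalization and sit inside the RANSAC loop described above.

```python
import numpy as np

def eight_point(m1, m2):
    """Linear eight-point estimate of F from N >= 8 correspondences given
    as N x 3 homogeneous image points, followed by rank-2 enforcement."""
    A = np.stack([np.kron(q, p) for p, q in zip(m1, m2)])  # rows encode q^T F p = 0
    F = np.linalg.svd(A)[2][-1].reshape(3, 3)              # least-squares null vector
    U, s, Vt = np.linalg.svd(F)
    s[2] = 0.0                                             # F must have rank 2
    return U @ np.diag(s) @ Vt

# Clean synthetic correspondences (all geometry invented).
rng = np.random.default_rng(1)
theta = 0.1
R = np.array([[np.cos(theta), 0.0, np.sin(theta)],
              [0.0, 1.0, 0.0],
              [-np.sin(theta), 0.0, np.cos(theta)]])
t = np.array([1.0, 0.3, 0.1])

M = np.column_stack([rng.uniform(-1, 1, (20, 2)), rng.uniform(4, 8, 20)])
m1 = M / M[:, 2:3]                  # normalized image points, view 1
M2 = M @ R.T + t
m2 = M2 / M2[:, 2:3]                # normalized image points, view 2

F = eight_point(m1, m2)
res = np.abs(np.einsum('ni,ij,nj->n', m2, F, m1))
print(res.max() < 1e-8)             # True: every correspondence satisfies F
```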

Projective reconstruction Results from projective reconstruction Feature-point detection after applying the Harris corner detector (the original images are captured from a video sequence).

Projective reconstruction Improvement by using RANSAC Computation of Fundamental Matrix by applying least square estimation

Projective reconstruction Results from projective reconstruction (cont'd) Feature-point detection after applying the Harris corner detector (Figures 9 and 10).

Projective reconstruction Improvement by using RANSAC Computation of Fundamental Matrix by applying least square estimation

Self-calibration Finding the initial projective matrix Compute epipoles and epipolar lines. Epipolar lines: l' = Fx and l = F^T x'. Epipoles: e' lies on all epipolar lines l', thus e'^T F x = 0 for all x, hence e'^T F = 0; similarly, F e = 0.
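Extracting the epipoles as the null vectors of F can be sketched with an SVD. The F below is built from an invented calibrated geometry (so it coincides with the essential matrix); only the null-vector computation is the point.

```python
import numpy as np

def skew(t):
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

# Since e' lies on every epipolar line F @ x, it is the left null vector
# of F (e'^T F = 0); likewise the other epipole satisfies F @ e = 0.
R = np.eye(3)
T = np.array([2.0, 1.0, 1.0])
F = skew(T) @ R                      # rank-2 by construction

e = np.linalg.svd(F)[2][-1]          # right null vector: F @ e = 0
e_prime = np.linalg.svd(F.T)[2][-1]  # left null vector:  e'^T @ F = 0
print(np.allclose(F @ e, 0.0), np.allclose(e_prime @ F, 0.0))   # True True
```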

Self-calibration Result from Initial Projective Matrix Epipolar lines Epipoles

Self-calibration Result from Initial Projective Matrix Epipolar lines Epipoles

Self-calibration Projective Matrix

Thanks guys, any questions?