Calibrating a single camera

Calibrating a single camera Odilon Redon, Cyclops, 1914

Our goal: Recovery of 3D structure. Recovery of structure from a single image is inherently ambiguous: many different 3D points X project to the same image point x.

Single-view ambiguity

Single-view ambiguity Rashad Alakbarov shadow sculptures

Single-view ambiguity Ames room

Our goal: Recovery of 3D structure We will need multi-view geometry

Review: Pinhole camera model (world coordinate system). Normalized (camera) coordinate system: the camera center is at the origin, the principal axis is the z-axis, and the x and y axes of the image plane are parallel to the x and y axes of the world. Goal of camera calibration: go from the world coordinate system to the image coordinate system.

Review: Pinhole camera model
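As a concrete illustration of the model reviewed above, here is a minimal numpy sketch of the ideal pinhole projection (no principal-point offset yet); the function name and values are illustrative.

```python
import numpy as np

# Ideal pinhole projection with focal length f and no principal-point offset:
# a point (X, Y, Z) in camera coordinates maps to (f*X/Z, f*Y/Z) on the image plane.
def project_pinhole(X_cam, f=1.0):
    X, Y, Z = X_cam
    return np.array([f * X / Z, f * Y / Z])

# Example: a point one unit off-axis and five units in front of the camera.
print(project_pinhole(np.array([1.0, 0.5, 5.0]), f=2.0))  # -> [0.4 0.2]
```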

Principal point (p): the point where the principal axis intersects the image plane. Normalized coordinate system: the origin of the image is at the principal point. Image coordinate system: the origin is in the corner.

Principal point offset: we want the principal point to map to (px, py) instead of (0, 0).

Principal point offset: folding the offset into the projection gives the calibration matrix K = [f 0 px; 0 f py; 0 0 1], so that the projection becomes x = K [I | 0] X.

Pixel coordinates. Pixel size: mx pixels per meter in the horizontal direction, my pixels per meter in the vertical direction. To convert from metric image coordinates to pixel coordinates, multiply the x and y coordinates by the corresponding pixel magnification factors mx, my (pixels/m). In pixel coordinates, the principal point becomes (βx, βy) = (mx px, my py).
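A sketch of assembling the calibration matrix K from these quantities, under the common convention that αx = f·mx, αy = f·my are the focal lengths in pixels and (βx, βy) = (mx·px, my·py) is the principal point in pixels; the function and the numeric values below are illustrative.

```python
import numpy as np

# Assemble the calibration matrix K from focal length f (meters), pixel
# densities mx, my (pixels/meter), and principal point (px, py) in meters.
def calibration_matrix(f, mx, my, px, py, skew=0.0):
    alpha_x, alpha_y = f * mx, f * my     # focal lengths in pixels
    beta_x, beta_y = mx * px, my * py     # principal point in pixels
    return np.array([[alpha_x, skew,    beta_x],
                     [0.0,     alpha_y, beta_y],
                     [0.0,     0.0,     1.0]])

# Illustrative values: 50 mm lens, 20000 pixels/m, principal point near the center.
K = calibration_matrix(f=0.05, mx=20000, my=20000, px=0.012, py=0.009)
```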

Camera rotation and translation. In general, the camera coordinate frame will be related to the world coordinate frame by a rotation and a translation. Conversion from world to camera coordinates (in non-homogeneous coordinates): X_cam = R (X_world − C), where X_world are the coordinates of a point in the world frame, C are the coordinates of the camera center in the world frame, and X_cam are the coordinates of the point in the camera frame.
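A short sketch of this conversion and of packing it into a 3×4 extrinsic matrix [R | t] with t = −R·C (names are illustrative):

```python
import numpy as np

# World-to-camera conversion in non-homogeneous coordinates: X_cam = R (X - C),
# where C is the camera center expressed in the world frame.
def world_to_camera(X_world, R, C):
    return R @ (X_world - C)

# Equivalent homogeneous form: the 3x4 extrinsic matrix [R | t] with t = -R C.
def extrinsic_matrix(R, C):
    t = -R @ C
    return np.hstack([R, t.reshape(3, 1)])
```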

Camera rotation and translation

Camera parameters. Intrinsic parameters: principal point coordinates, focal length, pixel magnification factors, skew (non-rectangular pixels), radial distortion.

Camera parameters. Intrinsic parameters: principal point coordinates, focal length, pixel magnification factors, skew (non-rectangular pixels), radial distortion. Extrinsic parameters: rotation and translation relative to the world coordinate system. What is the projection of the camera center (given in world-frame coordinates)? P [C; 1] = K R (C − C) = 0: the camera center is the null space of the projection matrix!
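A quick numerical check of this fact, with made-up K, R, and C: multiplying P by the homogeneous camera center gives the zero vector.

```python
import numpy as np

# Verify that the camera center is the null space of P = K [R | -R C]:
# P @ [C; 1] = K R (C - C) = 0.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0,   0.0,   1.0]])
R = np.eye(3)                       # illustrative rotation (identity)
C = np.array([1.0, 2.0, 3.0])       # illustrative camera center in the world frame
P = K @ np.hstack([R, (-R @ C).reshape(3, 1)])
print(P @ np.append(C, 1.0))        # -> [0. 0. 0.] up to round-off
```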

Camera calibration

Camera calibration: given n points with known 3D coordinates Xi and known image projections xi, estimate the camera parameters.

Camera calibration: Linear method. Writing xi × (P Xi) = 0 eliminates the unknown scale and gives two linearly independent equations per correspondence.

Camera calibration: Linear method. P has 11 degrees of freedom. Each 2D/3D correspondence gives two linearly independent equations, so 6 correspondences are needed for a minimal solution. Homogeneous least squares: find p minimizing ||Ap||² subject to ||p|| = 1. The solution is the eigenvector of AᵀA with the smallest eigenvalue.
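A compact sketch of this homogeneous least-squares (DLT) estimate; in practice the last right singular vector of A is used, which equals the eigenvector of AᵀA with the smallest eigenvalue. Point normalization and other practical refinements are omitted.

```python
import numpy as np

# Linear (DLT) calibration: each 2D/3D correspondence contributes two rows to A;
# p minimizing ||A p|| subject to ||p|| = 1 is the last right singular vector of A.
def calibrate_dlt(X_world, x_img):
    """X_world: (n, 3) array of 3D points; x_img: (n, 2) array of image points; n >= 6."""
    rows = []
    for Xw, (u, v) in zip(X_world, x_img):
        Xh = np.append(Xw, 1.0)                                 # homogeneous 3D point
        rows.append(np.concatenate([np.zeros(4), -Xh, v * Xh]))
        rows.append(np.concatenate([Xh, np.zeros(4), -u * Xh]))
    A = np.array(rows)
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1].reshape(3, 4)                                 # P, up to scale
```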

Camera calibration: Linear method. Note: for coplanar points that satisfy ΠᵀX = 0, we will get degenerate solutions (Π, 0, 0), (0, Π, 0), or (0, 0, Π).

Camera calibration: Linear method. The linear method only estimates the entries of the projection matrix P. What we ultimately want is a decomposition of this matrix into the intrinsic and extrinsic parameters, P = K [R | t]. State-of-the-art methods use nonlinear optimization to solve for the parameter values directly.
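One common recipe for that decomposition is an RQ factorization of the left 3×3 block of P; a hedged sketch using scipy is below (sign conventions vary between implementations).

```python
import numpy as np
from scipy.linalg import rq

# Decompose P = [M | m] into K, R, t: factor M = K R with an RQ decomposition,
# fix signs so K has a positive diagonal, then recover t = K^{-1} m.
def decompose_projection(P):
    M, m = P[:, :3], P[:, 3]
    K, R = rq(M)
    D = np.diag(np.sign(np.diag(K)))    # sign fix: keep diag(K) positive
    K, R = K @ D, D @ R
    t = np.linalg.solve(K, m)
    return K / K[2, 2], R, t            # normalize so K[2, 2] = 1
```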

Camera calibration: Linear method. Advantages: easy to formulate and solve. Disadvantages: doesn't directly tell you the camera parameters, doesn't model radial distortion, and can't impose constraints such as a known focal length or orthogonality. Non-linear methods are preferred: define the error as the sum of squared distances between the measured 2D points and the estimated projections of the 3D points, and minimize it using Newton's method or another non-linear optimizer.
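A minimal sketch of this nonlinear alternative using scipy's least-squares solver; for simplicity it refines the 12 entries of P directly, whereas a real calibrator would parameterize K, R, t (and radial distortion) and fix the overall scale.

```python
import numpy as np
from scipy.optimize import least_squares

# Nonlinear refinement: minimize the sum of squared distances between measured
# 2D points and the projections of the known 3D points.
def refine_projection(P0, X_world, x_img):
    Xh = np.hstack([X_world, np.ones((len(X_world), 1))])   # homogeneous points

    def residuals(p):
        proj = Xh @ p.reshape(3, 4).T
        return (proj[:, :2] / proj[:, 2:3] - x_img).ravel()

    result = least_squares(residuals, P0.ravel())
    return result.x.reshape(3, 4)
```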

A taste of multi-view geometry: Triangulation Given projections of a 3D point in two or more images (with known camera matrices), find the coordinates of the point

Triangulation. Given projections x1, x2 of a 3D point X in two or more images with camera centers O1, O2 (and known camera matrices), find the coordinates of the point.

Triangulation. We want to intersect the two visual rays corresponding to x1 and x2, but because of noise and numerical errors they don't meet exactly.

Triangulation: Geometric approach. Find the shortest segment connecting the two viewing rays and let X be the midpoint of that segment.
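A sketch of this midpoint construction, given the two camera centers and the back-projected ray directions (all names are illustrative):

```python
import numpy as np

# Geometric triangulation: find the shortest segment between the rays
# O1 + s*d1 and O2 + t*d2 and return its midpoint.
def triangulate_midpoint(O1, d1, O2, d2):
    A = np.column_stack([d1, -d2])                 # solve O1 + s*d1 ≈ O2 + t*d2
    (s, t), *_ = np.linalg.lstsq(A, O2 - O1, rcond=None)
    return 0.5 * ((O1 + s * d1) + (O2 + t * d2))
```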

Triangulation: Nonlinear approach. Find X that minimizes the reprojection error d(x1, P1X)² + d(x2, P2X)².
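A sketch of this nonlinear refinement with scipy, starting from any rough initial guess X0 (for example the midpoint or linear solution):

```python
import numpy as np
from scipy.optimize import least_squares

# Nonlinear triangulation: choose X minimizing the summed squared reprojection
# error ||proj(P1, X) - x1||^2 + ||proj(P2, X) - x2||^2.
def triangulate_nonlinear(P1, P2, x1, x2, X0):
    def residuals(X):
        res = []
        for P, x in ((P1, x1), (P2, x2)):
            proj = P @ np.append(X, 1.0)
            res.append(proj[:2] / proj[2] - x)
        return np.concatenate(res)
    return least_squares(residuals, X0).x
```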

Triangulation: Linear approach. Cross product as matrix multiplication: a × b = [a]× b, where [a]× is the skew-symmetric matrix [0 −a3 a2; a3 0 −a1; −a2 a1 0].
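A quick check of this identity in numpy (the values are arbitrary):

```python
import numpy as np

# Cross product as matrix multiplication: a x b = skew(a) @ b.
def skew(a):
    return np.array([[0.0,  -a[2],  a[1]],
                     [a[2],   0.0, -a[0]],
                     [-a[1], a[0],   0.0]])

a, b = np.array([1.0, 2.0, 3.0]), np.array([4.0, 5.0, 6.0])
assert np.allclose(skew(a) @ b, np.cross(a, b))
```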

Triangulation: Linear approach. Writing x1 × (P1X) = 0 and x2 × (P2X) = 0 gives two independent equations per view, each in terms of the unknown entries of X.
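A sketch of the resulting linear triangulation: the four equations are stacked and the homogeneous solution X is taken from the SVD, as in the calibration case.

```python
import numpy as np

# Linear triangulation: from x_i x (P_i X) = 0, each view contributes the two
# independent rows u_i*P_i[2] - P_i[0] and v_i*P_i[2] - P_i[1].
def triangulate_linear(P1, P2, x1, x2):
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]        # back to inhomogeneous 3D coordinates
```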