Single-view geometry Odilon Redon, Cyclops, 1914

Our goal: Recovery of 3D structure
Recovery of structure from one image is inherently ambiguous: an image point x can correspond to any 3D point X along its viewing ray.


Our goal: Recovery of 3D structure
We will need multi-view geometry.

Recall: Pinhole camera model
Principal axis: the line from the camera center perpendicular to the image plane.
Normalized (camera) coordinate system: the camera center is at the origin and the principal axis is the z-axis.

Recall: Pinhole camera model
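In this normalized coordinate system, the pinhole camera maps a scene point (X, Y, Z) to the image point (fX/Z, fY/Z). In homogeneous coordinates this is a linear map:

$$ \begin{pmatrix} X \\ Y \\ Z \\ 1 \end{pmatrix} \mapsto \begin{pmatrix} fX \\ fY \\ Z \end{pmatrix} = \begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{pmatrix} X \\ Y \\ Z \\ 1 \end{pmatrix} $$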

Principal point
Principal point (p): the point where the principal axis intersects the image plane (origin of the normalized coordinate system).
Normalized coordinate system: origin is at the principal point.
Image coordinate system: origin is in the corner of the image.
How do we go from the normalized coordinate system to the image coordinate system?

Principal point offset
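Shifting the origin from the principal point to the image corner adds the principal point offset (p_x, p_y) to the projection:

$$ (X, Y, Z) \mapsto \left( \frac{fX}{Z} + p_x,\; \frac{fY}{Z} + p_y \right) $$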

Principal point offset calibration matrix
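Collecting the focal length and the principal point offset into the calibration matrix K lets the projection in the camera frame be written compactly:

$$ K = \begin{bmatrix} f & 0 & p_x \\ 0 & f & p_y \\ 0 & 0 & 1 \end{bmatrix}, \qquad x = K\,[\,I \mid \mathbf{0}\,]\, X_{cam} $$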

Pixel coordinates
Pixel size: m_x pixels per meter in the horizontal direction, m_y pixels per meter in the vertical direction.
To convert from metric image coordinates to pixel coordinates, we multiply the x, y coordinates by the corresponding pixel magnification factors m_x, m_y (pixels/m).
β_x, β_y are the principal point coordinates in pixels.
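In pixel units (the standard form, using the notation above with α_x, α_y the focal length in pixels), the calibration matrix becomes:

$$ K = \begin{bmatrix} \alpha_x & 0 & \beta_x \\ 0 & \alpha_y & \beta_y \\ 0 & 0 & 1 \end{bmatrix}, \qquad \alpha_x = f m_x, \quad \alpha_y = f m_y, \quad \beta_x = m_x p_x, \quad \beta_y = m_y p_y $$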

Camera rotation and translation
In general, the camera coordinate frame will be related to the world coordinate frame by a rotation and a translation:

$$ \tilde{X}_{cam} = R\,(\tilde{X} - \tilde{C}) $$

where X̃ are the (non-homogeneous) coordinates of a point in the world frame, C̃ are the coordinates of the camera center in the world frame, and X̃_cam are the coordinates of the point in the camera frame.

Camera rotation and translation
In non-homogeneous coordinates:
Note: C is the null space of the camera projection matrix (PC = 0).
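In the standard formulation, combining the intrinsics with the rotation and translation gives the full projection matrix, whose null space is the camera center:

$$ P = K\,[\,R \mid \mathbf{t}\,], \qquad \mathbf{t} = -R\tilde{C}, \qquad P C = \mathbf{0} \ \text{ with } \ C = \begin{pmatrix} \tilde{C} \\ 1 \end{pmatrix} $$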

Camera parameters
Intrinsic parameters: principal point coordinates, focal length, pixel magnification factors, skew (non-rectangular pixels), radial distortion.
Extrinsic parameters: rotation and translation relative to the world coordinate system.

Camera calibration
Given n points with known 3D coordinates X_i and known image projections x_i, estimate the camera parameters.

Camera calibration Two linearly independent equations
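In the standard DLT formulation (following Hartley & Zisserman), writing x_i = (x_i, y_i, w_i) in homogeneous coordinates and letting P^1, P^2, P^3 denote the rows of P, each correspondence contributes two linearly independent equations:

$$ \begin{bmatrix} \mathbf{0}^T & -w_i X_i^T & y_i X_i^T \\ w_i X_i^T & \mathbf{0}^T & -x_i X_i^T \end{bmatrix} \begin{pmatrix} P^{1T} \\ P^{2T} \\ P^{3T} \end{pmatrix} = \mathbf{0} $$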

Camera calibration
P has 11 degrees of freedom (12 parameters, but the overall scale is arbitrary).
One 2D/3D correspondence gives us two linearly independent equations.
Solve by homogeneous least squares; 6 correspondences are needed for a minimal solution.
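A minimal sketch of this homogeneous least-squares (DLT) estimate, assuming Xs is an n×3 array of world points and xs the matching n×2 array of image points (hypothetical variable names; point normalization is omitted):

```python
import numpy as np

def calibrate_dlt(Xs, xs):
    """Estimate a 3x4 projection matrix P from n >= 6 correspondences.

    Xs: (n, 3) world points, xs: (n, 2) image points (pixels), w_i = 1.
    """
    A = []
    for (X, Y, Z), (x, y) in zip(Xs, xs):
        Xh = [X, Y, Z, 1.0]
        A.append([0, 0, 0, 0] + [-v for v in Xh] + [y * v for v in Xh])
        A.append(Xh + [0, 0, 0, 0] + [-x * v for v in Xh])
    A = np.asarray(A)
    # Homogeneous least squares: the right singular vector of A with the
    # smallest singular value minimizes ||Ap|| subject to ||p|| = 1.
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1].reshape(3, 4)
```

In practice, normalizing the 2D and 3D points before forming A greatly improves the conditioning of the system.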

Camera calibration
Note: for coplanar points that satisfy Π^T X = 0, we will get degenerate solutions (Π, 0, 0), (0, Π, 0), or (0, 0, Π).

Camera calibration
Once we've recovered the numerical form of the camera matrix, we still have to figure out the intrinsic and extrinsic parameters.
This is a matrix decomposition problem, not an estimation problem (see F&P sec. 3.2, 3.3).
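One common way to carry out this decomposition (a sketch, not the slides' own code) is an RQ factorization of the left 3×3 block of P, here using scipy.linalg.rq:

```python
import numpy as np
from scipy.linalg import rq

def decompose_projection(P):
    """Split a 3x4 projection matrix P ~ K [R | t] into K, R, t."""
    M = P[:, :3]
    K, R = rq(M)                      # M = K R, K upper triangular, R orthogonal
    # RQ is only unique up to signs: make the diagonal of K positive.
    S = np.diag(np.sign(np.diag(K)))
    K, R = K @ S, S @ R
    t = np.linalg.solve(K, P[:, 3])   # since P[:, 3] = K t
    if np.linalg.det(R) < 0:          # P is defined only up to scale, so flipping
        R, t = -R, -t                 # the sign keeps R a proper rotation
    return K / K[2, 2], R, t
```

The camera center can then be recovered as C̃ = -R^T t.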

Two-view geometry
Scene geometry (structure): Given projections of the same 3D point in two or more images, how do we compute the 3D coordinates of that point?
Correspondence (stereo matching): Given a point in just one image, how does it constrain the position of the corresponding point in a second image?
Camera geometry (motion): Given a set of corresponding points in two images, what are the cameras for the two views?

Triangulation
Given projections of a 3D point in two or more images (with known camera matrices), find the coordinates of the point.

Triangulation
We want to intersect the two visual rays corresponding to x_1 and x_2, but because of noise and numerical errors, they don't meet exactly.

Triangulation: Geometric approach
Find the shortest segment connecting the two viewing rays and let X be the midpoint of that segment.
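A minimal sketch of the midpoint computation, assuming each ray is given by a camera center o_i and a direction d_i (hypothetical variable names; the rays are assumed not to be parallel):

```python
import numpy as np

def midpoint_triangulation(o1, d1, o2, d2):
    """Midpoint of the shortest segment between rays o1 + s*d1 and o2 + u*d2."""
    # Minimizing ||(o1 + s*d1) - (o2 + u*d2)||^2 over s, u gives a 2x2 linear system.
    b = o2 - o1
    A = np.array([[d1 @ d1, -d1 @ d2],
                  [d1 @ d2, -d2 @ d2]])
    s, u = np.linalg.solve(A, np.array([d1 @ b, d2 @ b]))
    p1 = o1 + s * d1                  # closest point on the first ray
    p2 = o2 + u * d2                  # closest point on the second ray
    return 0.5 * (p1 + p2)
```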

Triangulation: Linear approach
Cross product as matrix multiplication:
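The cross product with a fixed vector a can be written as multiplication by a skew-symmetric matrix:

$$ \mathbf{a} \times \mathbf{b} = [\mathbf{a}]_\times \mathbf{b}, \qquad [\mathbf{a}]_\times = \begin{bmatrix} 0 & -a_3 & a_2 \\ a_3 & 0 & -a_1 \\ -a_2 & a_1 & 0 \end{bmatrix} $$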

Triangulation: Linear approach
Writing x_1 × (P_1 X) = 0 and x_2 × (P_2 X) = 0 gives, from each view, two independent equations in the three unknown entries of X.
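A minimal sketch of this linear triangulation, assuming P1, P2 are the 3×4 camera matrices and x1, x2 the corresponding homogeneous image points (assumed names), here solved homogeneously via SVD:

```python
import numpy as np

def triangulate_linear(P1, x1, P2, x2):
    """Stack two equations from x_i x (P_i X) = 0 for each view and solve by SVD."""
    def two_rows(P, x):
        # Keep the first two (independent) rows of the cross-product constraint.
        return np.array([x[0] * P[2] - x[2] * P[0],
                         x[1] * P[2] - x[2] * P[1]])
    A = np.vstack([two_rows(P1, x1), two_rows(P2, x2)])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]        # back to non-homogeneous coordinates
```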

Triangulation: Nonlinear approach
Find X that minimizes the reprojection error d(x_1, P_1 X)^2 + d(x_2, P_2 X)^2, where P_i X are the reprojected points x'_i.
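A sketch of this refinement using scipy.optimize.least_squares, assuming the triangulate_linear sketch above provides the initial guess and x1, x2 are non-homogeneous pixel coordinates (assumed names):

```python
import numpy as np
from scipy.optimize import least_squares

def reprojection_residuals(X, P1, x1, P2, x2):
    """Pixel residuals of the 3D point X in both views."""
    res = []
    for P, x in ((P1, x1), (P2, x2)):
        proj = P @ np.append(X, 1.0)
        res.extend(proj[:2] / proj[2] - x)   # reprojected minus observed
    return np.array(res)

def triangulate_nonlinear(P1, x1, P2, x2):
    X0 = triangulate_linear(P1, np.append(x1, 1.0), P2, np.append(x2, 1.0))
    result = least_squares(reprojection_residuals, X0, args=(P1, x1, P2, x2))
    return result.x
```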