Single-view geometry Odilon Redon, Cyclops, 1914

Geometric vision. Goal: recovery of 3D structure. What cues in the image allow us to do this?

Visual cues: shading. Merle Norman Cosmetics, Los Angeles. Slide credit: S. Seitz

Visual cues: focus. From The Art of Photography, Canon. Slide credit: S. Seitz

Visual cues: perspective. Slide credit: S. Seitz

Visual cues: motion. Slide credit: S. Seitz

Our goal: recovery of 3D structure. We will focus on perspective and motion. We need multi-view geometry because recovery of structure from one image is inherently ambiguous: a single image point x could be the projection of any 3D point X along its viewing ray.

Recall: Pinhole camera model

Pinhole camera model
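In its standard form, the pinhole model with focal length f projects a scene point (X, Y, Z) onto the image plane as

$$(X, Y, Z)^\top \mapsto \left(\frac{fX}{Z},\; \frac{fY}{Z}\right)^\top,$$

or, using homogeneous coordinates,

$$\begin{pmatrix} fX \\ fY \\ Z \end{pmatrix} = \begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{pmatrix} X \\ Y \\ Z \\ 1 \end{pmatrix}.$$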

Camera coordinate system. Principal axis: the line from the camera center perpendicular to the image plane. Normalized (camera) coordinate system: the camera center is at the origin and the principal axis is the z-axis. Principal point (p): the point where the principal axis intersects the image plane (the origin of the normalized coordinate system).

Principal point offset. Camera coordinate system: the origin is at the principal point. Image coordinate system: the origin is in the corner of the image, so image coordinates are offset from camera coordinates by the principal point p = (p_x, p_y).

Principal point offset. The projection mapping with the principal point offset included:
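With a principal point p = (p_x, p_y), the standard form of the offset projection is

$$(X, Y, Z)^\top \mapsto \left(f\frac{X}{Z} + p_x,\;\; f\frac{Y}{Z} + p_y\right)^\top.$$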

Principal point offset. The same mapping written with a calibration matrix:
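In its usual form (square pixels, no skew), the calibration matrix and the resulting projection are

$$K = \begin{bmatrix} f & 0 & p_x \\ 0 & f & p_y \\ 0 & 0 & 1 \end{bmatrix}, \qquad x = K\,[\,I \mid 0\,]\,X_{\mathrm{cam}}.$$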

Pixel coordinates. m_x pixels per meter in the horizontal direction, m_y pixels per meter in the vertical direction; pixel size is 1/m_x by 1/m_y meters.
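One common way to fold the pixel densities into the calibration matrix is to measure the focal length and principal point in pixels, e.g.

$$K = \begin{bmatrix} \alpha_x & 0 & \beta_x \\ 0 & \alpha_y & \beta_y \\ 0 & 0 & 1 \end{bmatrix}, \qquad \alpha_x = m_x f,\;\; \alpha_y = m_y f,\;\; \beta_x = m_x p_x,\;\; \beta_y = m_y p_y.$$

(The symbols α and β here are just one notational convention for the pixel-unit focal lengths and principal point.)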

Camera rotation and translation. In general, the camera coordinate frame is related to the world coordinate frame by a rotation and a translation: if X_world denotes the (non-homogeneous) coordinates of a point in the world frame and C the coordinates of the camera center in the world frame, then the coordinates of the point in the camera frame are X_cam = R(X_world - C).

Camera rotation and translation. In non-homogeneous coordinates, the rotation and translation fold into the projection matrix, as written out below. Note: C is the null space of the camera projection matrix (PC = 0).
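In the standard form, the rotation and translation enter the projection matrix as

$$X_{\mathrm{cam}} = R\,(\tilde{X} - \tilde{C}), \qquad P = K\,R\,[\,I \mid -\tilde{C}\,] = K\,[\,R \mid t\,], \quad t = -R\tilde{C},$$

so that $PC = 0$ holds for the homogeneous camera center $C = (\tilde{C}^\top, 1)^\top$.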

Camera parameters. Intrinsic parameters: principal point coordinates, focal length, pixel magnification factors, skew (non-rectangular pixels), radial distortion.

Camera parameters. Intrinsic parameters: principal point coordinates, focal length, pixel magnification factors, skew (non-rectangular pixels), radial distortion. Extrinsic parameters: rotation and translation relative to the world coordinate system.

Camera calibration. Given n points with known 3D coordinates X_i and known image projections x_i, estimate the camera parameters.

Camera calibration. Each 2D/3D correspondence x_i ↔ X_i gives two linearly independent equations in the entries of P.
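One standard way of writing these two equations (a textbook derivation rather than a transcription of the slide): denoting the rows of P by $P^1, P^2, P^3$, the constraint $x_i \times (P X_i) = 0$ for $x_i = (x_i, y_i, 1)^\top$ yields

$$\begin{bmatrix} 0^\top & -X_i^\top & y_i X_i^\top \\ X_i^\top & 0^\top & -x_i X_i^\top \end{bmatrix} \begin{pmatrix} P^{1\top} \\ P^{2\top} \\ P^{3\top} \end{pmatrix} = 0,$$

where the third equation generated by the cross product is a linear combination of these two and is dropped.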

Camera calibration. P has 11 degrees of freedom (12 parameters, but the scale is arbitrary). One 2D/3D correspondence gives us two linearly independent equations, so 6 correspondences are needed for a minimal solution; with more correspondences, solve by homogeneous least squares.
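A minimal NumPy sketch of this homogeneous least-squares (DLT) estimate; the function name and layout are illustrative, and in practice the points should be normalized first for numerical conditioning:

```python
import numpy as np

def calibrate_dlt(X, x):
    """Estimate the 3x4 camera matrix P from n >= 6 correspondences.

    X: (n, 3) array of 3D points, x: (n, 2) array of image points.
    Builds the 2n x 12 system A p = 0 and solves it in the least-squares sense.
    """
    n = X.shape[0]
    Xh = np.hstack([X, np.ones((n, 1))])          # homogeneous 3D points, (n, 4)
    A = np.zeros((2 * n, 12))
    for i in range(n):
        u, v = x[i]
        # Row pattern [0, -X, v*X] and [X, 0, -u*X] from x_i x (P X_i) = 0.
        A[2 * i, 4:8] = -Xh[i]
        A[2 * i, 8:12] = v * Xh[i]
        A[2 * i + 1, 0:4] = Xh[i]
        A[2 * i + 1, 8:12] = -u * Xh[i]
    # Solution = right singular vector of A with the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1].reshape(3, 4)
```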

Camera calibration. Note: for coplanar points that satisfy Π^T X = 0, we will get degenerate solutions (Π, 0, 0), (0, Π, 0), or (0, 0, Π).

Camera calibration. Once we've recovered the numerical form of the camera matrix, we still have to figure out the intrinsic and extrinsic parameters. This is a matrix decomposition problem, not an estimation problem (see F&P sec. 3.2, 3.3).
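A sketch of that decomposition using an RQ factorization of the left 3x3 block of P (here via scipy.linalg.rq; the sign-fixing step is one common convention, not the only one):

```python
import numpy as np
from scipy.linalg import rq

def decompose_camera_matrix(P):
    """Split P = K [R | t] into intrinsics K, rotation R, and camera center C."""
    M = P[:, :3]
    K, R = rq(M)                       # M = K R, K upper triangular, R orthogonal
    # Force a positive diagonal of K (resolves the sign ambiguity of RQ).
    D = np.diag(np.sign(np.diag(K)))
    K, R = K @ D, D @ R                # K R is unchanged since D D = I
    # If det(R) < 0, the overall sign of P can be flipped first (scale ambiguity).
    K = K / K[2, 2]                    # normalize so K[2, 2] = 1
    # Camera center from P C = 0:  M C + p4 = 0  =>  C = -M^{-1} p4.
    C = -np.linalg.solve(M, P[:, 3])
    return K, R, C
```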

Two-view geometry. Scene geometry (structure): given projections of the same 3D point in two or more images, how do we compute the 3D coordinates of that point? Correspondence (stereo matching): given a point in just one image, how does it constrain the position of the corresponding point in a second image? Camera geometry (motion): given a set of corresponding points in two images, what are the cameras for the two views?

Triangulation. Given projections x_1 and x_2 of a 3D point in two or more images (with known camera matrices), find the coordinates of the point X. (Figure: camera centers O_1, O_2 and the unknown point X.)

Triangulation. We want to intersect the two visual rays R_1 and R_2 corresponding to x_1 and x_2, but because of noise and numerical errors they don't meet exactly.

Triangulation: geometric approach. Find the shortest segment connecting the two viewing rays and let X be the midpoint of that segment.
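A small NumPy sketch of the midpoint construction, assuming each ray is given by its camera center O and a direction vector d (these names are illustrative, not from the slides):

```python
import numpy as np

def triangulate_midpoint(O1, d1, O2, d2):
    """Midpoint of the shortest segment between rays O1 + s*d1 and O2 + t*d2."""
    # Minimize ||(O1 + s d1) - (O2 + t d2)||^2; setting the gradient in (s, t)
    # to zero gives a 2x2 linear system.
    A = np.array([[d1 @ d1, -d1 @ d2],
                  [d1 @ d2, -d2 @ d2]])
    b = np.array([(O2 - O1) @ d1, (O2 - O1) @ d2])
    s, t = np.linalg.solve(A, b)
    P1 = O1 + s * d1                   # closest point on ray 1
    P2 = O2 + t * d2                   # closest point on ray 2
    return 0.5 * (P1 + P2)
```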

Triangulation: linear approach. Cross product as matrix multiplication:
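The identity being invoked is that a cross product can be written as multiplication by a skew-symmetric matrix,

$$[a]_\times = \begin{bmatrix} 0 & -a_3 & a_2 \\ a_3 & 0 & -a_1 \\ -a_2 & a_1 & 0 \end{bmatrix}, \qquad a \times b = [a]_\times\, b,$$

so requiring x and PX to be parallel becomes the linear constraint $[x]_\times P X = 0$.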

Triangulation: linear approach. Each view contributes two independent equations in the three unknown entries of X.
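A NumPy sketch of the resulting homogeneous system, stacking the two rows from each view and solving by SVD (a standard DLT-style triangulation; the function name is illustrative):

```python
import numpy as np

def triangulate_linear(P1, P2, x1, x2):
    """Linear triangulation of one point from two views.

    P1, P2: 3x4 camera matrices; x1, x2: (u, v) image points.
    Each view contributes rows u*P[2] - P[0] and v*P[2] - P[1] of A X = 0.
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                         # homogeneous solution
    return X[:3] / X[3]                # back to non-homogeneous coordinates
```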

Triangulation: nonlinear approach. Find the X that minimizes the reprojection error d(x_1, P_1 X)^2 + d(x_2, P_2 X)^2, where x'_1 = P_1 X and x'_2 = P_2 X are the reprojections of the estimated point into the two images.
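A sketch of that refinement with scipy.optimize.least_squares, starting from a linear estimate; the helper names are illustrative and the reprojection error is measured in image coordinates:

```python
import numpy as np
from scipy.optimize import least_squares

def reprojection_residuals(X, P1, P2, x1, x2):
    """Stacked 2D reprojection errors of the 3D point X in both views."""
    Xh = np.append(X, 1.0)
    proj1 = P1 @ Xh
    proj2 = P2 @ Xh
    r1 = proj1[:2] / proj1[2] - x1     # error in view 1
    r2 = proj2[:2] / proj2[2] - x2     # error in view 2
    return np.concatenate([r1, r2])

def triangulate_nonlinear(P1, P2, x1, x2, X0):
    """Refine an initial 3D estimate X0 (e.g. from linear triangulation)."""
    result = least_squares(reprojection_residuals, X0, args=(P1, P2, x1, x2))
    return result.x
```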