Image Correspondence and Depth Recovery Gene Wang 4/26/2011.


 Stereopsis and epipolar geometry can be used to determine 3D point information from 2D images.  Does adding a third camera give the system any more information for solving the correspondence problem and recovering depth?

 Given three cameras with centers C1, C2, C3.  The scene point X together with C1, C2, C3 forms a tetrahedron.  The planes C1C2X and C1C3X each intersect image 1 in a line.  The two lines intersect at the projection of X on image 1 (likewise for images 2 and 3).
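A minimal sketch (Python/NumPy, not from the original slides) of the projection model this construction relies on: each camera i is a 3x4 matrix P_i = K[R_i | t_i] and the image of X in view i is x_i ~ P_i X. The calibration matrix, camera poses, and scene point below are made-up example values.

```python
import numpy as np

K = np.array([[800, 0, 320],
              [0, 800, 240],
              [0, 0, 1]], dtype=float)    # shared (made-up) calibration matrix

def camera(R, t):
    """Build the 3x4 projection matrix P = K [R | t]."""
    return K @ np.hstack([R, t.reshape(3, 1)])

def project(P, X):
    """Project a world point X to inhomogeneous pixel coordinates."""
    x = P @ np.append(X, 1.0)            # homogeneous image point x ~ P X
    return x[:2] / x[2]

# Cameras C1, C2, C3: same orientation, spaced 10 cm apart along the x axis.
P1 = camera(np.eye(3), np.array([0.0, 0.0, 0.0]))
P2 = camera(np.eye(3), np.array([-0.1, 0.0, 0.0]))
P3 = camera(np.eye(3), np.array([-0.2, 0.0, 0.0]))

X = np.array([0.3, 0.1, 2.0])            # a scene point 2 m in front of C1
for i, P in enumerate((P1, P2, P3), start=1):
    print(f"projection of X in image {i}:", project(P, X))
```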

 Given the projected point x1 on image 1, find the corresponding points in the other images.

 Given:  Camera calibration matrices K1, K2  Skew-symmetric translation matrix S(t)  Rotation matrix R  Point x1 in image 1  Then: x1^T F x2 = 0  Where F = (K1^-1)^T S(t) R^-1 K2^-1  Can be used to find a linear (1D) set of candidates for the corresponding point x2 in image 2.
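As a hedged sketch of the formula above (Python/NumPy, with made-up calibration and pose values), the snippet below builds F from K1, K2, S(t), and R, and computes the epipolar line l2 = F^T x1 on which the candidates x2 must lie. It assumes the convention implied by the slide: R rotates camera-1 coordinates into camera-2 coordinates and t is the baseline expressed in camera 1's frame.

```python
import numpy as np

def skew(t):
    """S(t): the skew-symmetric (cross-product) matrix of a 3-vector t."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def fundamental_matrix(K1, K2, R, t):
    """F = (K1^-1)^T S(t) R^-1 K2^-1, so that x1^T F x2 = 0 for correspondences."""
    return np.linalg.inv(K1).T @ skew(t) @ np.linalg.inv(R) @ np.linalg.inv(K2)

# Made-up calibration and relative pose.
K1 = K2 = np.array([[800, 0, 320],
                    [0, 800, 240],
                    [0, 0, 1]], dtype=float)
R = np.eye(3)                       # no rotation between the two cameras
t = np.array([0.1, 0.0, 0.0])       # 10 cm baseline along the x axis

F = fundamental_matrix(K1, K2, R, t)

# For a point x1 in image 1, x1^T F x2 = 0 means the candidates x2 lie on the
# line l2 = F^T x1 in image 2 (a 1D, i.e. linear, search range).
x1 = np.array([400.0, 250.0, 1.0])  # homogeneous pixel coordinates
l2 = F.T @ x1
print("epipolar line in image 2 (a*x + b*y + c = 0):", l2)
```

Repeating the same call with camera 3's calibration and relative pose yields the epipolar line in image 3, which is the second linear search mentioned on the next slide.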

 By repeating the fundamental matrix calculation with camera 3, the correspondence problem can be solved with two linear searches.  However, this simply applies the two-camera solution twice.

 With the same method, the projected point x1 on image 1 determines one epipolar line in each of the other images.

Figure: the camera base plane C1C2C3 (analogous to the baseline in stereopsis).

Figure: the C1C2X half plane.

Figure: the C1C3X half plane.

Figure: the ray C1X.

 Using the epipolar lines in images 2 and 3, it is only possible to “undo” the projections and recover the ray C1X.  Any new information must come from the relationship between cameras C2 and C3.
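A minimal sketch of this “undoing” step, assuming camera 1's pose is given by X_cam1 = R1 X_world + t1 (so its centre is C1 = -R1^T t1); all values below are made-up examples.

```python
import numpy as np

K = np.array([[800, 0, 320],
              [0, 800, 240],
              [0, 0, 1]], dtype=float)
R1 = np.eye(3)          # camera 1's rotation (world -> camera), identity here
t1 = np.zeros(3)        # camera 1's translation; centre C1 = -R1^T t1 = origin

x1 = np.array([400.0, 250.0, 1.0])       # homogeneous pixel coordinates in image 1

C1 = -R1.T @ t1                          # camera centre in world coordinates
d = R1.T @ np.linalg.inv(K) @ x1         # ray direction in world coordinates
d = d / np.linalg.norm(d)

# Every point C1 + lam * d (lam > 0) projects to x1 in image 1; the epipolar
# lines in images 2 and 3 cannot by themselves say which lam (depth) is correct.
print("C1 =", C1, "direction =", d)
```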

 Images 2 and 3 each contain one epipolar line.  Each line corresponds to a half plane in world space.  Projecting that half plane back onto the other camera yields a 2D region.

 So the constraint between cameras 2 and 3 only gives a 2D search region.  It does nothing to narrow the search beyond the original epipolar line derived from image 1.

 Depth recovery is predicated on having completed image correspondence.  Given two corresponding points and the camera calibration matrices, the world coordinates of the point X can be calculated using camera angles and triangulation.
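As a concrete sketch of this step, the snippet below uses the standard linear (DLT) triangulation rather than the explicit camera-angle formulation mentioned on the slide; the camera matrices and scene point are made-up examples.

```python
import numpy as np

def project(P, X):
    """Project a 3D point X to inhomogeneous pixel coordinates."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of a pair of corresponding pixels x1, x2.

    Each view contributes two rows to A; the 3D point is the right singular
    vector of A with the smallest singular value.
    """
    A = np.array([x1[0] * P1[2] - P1[0],
                  x1[1] * P1[2] - P1[1],
                  x2[0] * P2[2] - P2[0],
                  x2[1] * P2[2] - P2[1]])
    X = np.linalg.svd(A)[2][-1]
    return X[:3] / X[3]

# Example: two cameras 10 cm apart observing a point 2 m in front of them.
K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], dtype=float)
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])

X_true = np.array([0.3, 0.1, 2.0])
x1, x2 = project(P1, X_true), project(P2, X_true)
print(triangulate(P1, P2, x1, x2))   # recovers approximately [0.3, 0.1, 2.0]
```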

 In real-world situations, estimation and uncertainty are involved.  A trinocular system can reduce error and increase robustness.

 Once a point is located in 3D space, its projection onto any calibrated camera can be calculated.  Error can be reduced by triangulating the point from each pair of cameras, projecting it into the remaining camera, and keeping the match that agrees best.
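A minimal sketch of that consistency check, assuming calibrated projection matrices and pixel coordinates are supplied by the caller: triangulate a candidate match from cameras 1 and 2, reproject the recovered 3D point into camera 3, and keep the candidate whose reprojection agrees best with what camera 3 observes. The helper functions are the hypothetical ones from the earlier sketches.

```python
import numpy as np

def project(P, X):
    """Project a 3D point X to inhomogeneous pixel coordinates."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of corresponding pixels x1, x2."""
    A = np.array([x1[0] * P1[2] - P1[0],
                  x1[1] * P1[2] - P1[1],
                  x2[0] * P2[2] - P2[0],
                  x2[1] * P2[2] - P2[1]])
    X = np.linalg.svd(A)[2][-1]
    return X[:3] / X[3]

def best_match(P1, P2, P3, x1, candidates2, x3_observed):
    """Keep the candidate x2 whose triangulated point reprojects closest to x3_observed.

    For each candidate match in image 2, triangulate a 3D point from cameras 1
    and 2, project it into camera 3, and measure the distance to the point
    actually observed there; the smallest reprojection error wins.
    """
    def reprojection_error(x2):
        X = triangulate(P1, P2, x1, x2)
        return np.linalg.norm(project(P3, X) - x3_observed)
    return min(candidates2, key=reprojection_error)
```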

 Testing and evaluating the trade-off between the extra correspondence calculations and the error reduction they provide.