CS 376b Introduction to Computer Vision 04 / 21 / 2008 Instructor: Michael Eckmann

Michael Eckmann - Skidmore College - CS 376b - Spring 2008

Today’s Topics
Comments/Questions
Perspective projection
– I'll first draw the figure with both the real and “front” image planes
– then note that our text shows only the “front” image plane in later figures
Stereo vision
– sparse depth map from stereo
Perceiving 3D from 2D
– human depth cues
– shape from shading
Basically this lecture goes in the following order: sections 12.5, 12.6, then 12.3.

Perspective projection (figure from Shapiro and Stockman)

Perspective projection

The image coordinates are related to the world coordinates by the following equations. Similar triangles yield:
z_c / f = x_c / x_i, which means x_i = (f / z_c) * x_c
z_c / f = y_c / y_i, which means y_i = (f / z_c) * y_c
So, if the camera is viewing a plane in the world that is parallel to the image plane, then the view on the image plane is a scaled version of that world plane. Every 3D point along a ray through the center of projection corresponds to the same 2D point on the image plane.
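To make the projection equations concrete, here is a minimal Python sketch; the function and variable names follow the slide's notation and are illustrative, not from any library.

def project(point_c, f):
    """Pinhole projection of a 3D point given in camera coordinates."""
    xc, yc, zc = point_c
    xi = (f / zc) * xc   # x_i = (f / z_c) * x_c
    yi = (f / zc) * yc   # y_i = (f / z_c) * y_c
    return xi, yi

# Every point along a ray through the center of projection maps to the
# same image point: scaling (xc, yc, zc) by any t > 0 cancels out.
print(project((2.0, 1.0, 10.0), f=0.05))   # (0.01, 0.005)
print(project((4.0, 2.0, 20.0), f=0.05))   # (0.01, 0.005) again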

Stereo (figure 12.24 from Shapiro and Stockman)

Stereo (figure from Shapiro and Stockman)

Let me show on the board the steps that get skipped in going from (2) to (3) for x and z. The equation for y comes out simply, just as we derived it for the perspective projection.

Stereo

So, as disparity increases, the distance to the world point decreases, and vice versa. Think about close objects vs. far objects and their expected disparities. When the baseline increases, there is less chance of finding correspondences (less overlap between what the two cameras view); but if we decrease the baseline, then small errors in the corresponding image points result in larger errors in determining where P is in the world.
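To make this trade-off concrete, here is the standard disparity-to-depth relation for aligned cameras and its sensitivity, in common notation (f focal length, B baseline, d disparity; these symbols are generic, not necessarily the book's):

Z = \frac{fB}{d},
\qquad
\left|\frac{\partial Z}{\partial d}\right| = \frac{fB}{d^{2}} = \frac{Z^{2}}{fB}

So a fixed correspondence error of δd pixels produces a depth error of roughly Z^2 * δd / (f B): a smaller baseline B, or a more distant point (smaller d), magnifies the effect of small matching errors, exactly as stated above.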

Stereo

Therefore, computing the depth of a particular world point is easy as long as we know
– the baseline (distance between the cameras)
– the focal length of the camera
– the disparity of the corresponding image points
The hardest part in all this is?

Stereo

Therefore, computing the depth of a particular world point is easy as long as we know
– the baseline (distance between the cameras)
– the focal length of the camera
– the disparity of the corresponding image points
The hardest part in all this is
– getting correct correspondences
– think about what happens if the correspondences of points are off, even by a little (assuming baseline << distance to world points)
A small numeric sketch follows below.
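A minimal sketch of depth from disparity for aligned (rectified) cameras, using Z = f * B / d. The numbers below are made-up illustrative values, not from the text.

def depth_from_disparity(f_pixels, baseline_m, disparity_pixels):
    """Depth (meters) of a world point given focal length in pixels,
    baseline in meters, and disparity in pixels."""
    if disparity_pixels <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return f_pixels * baseline_m / disparity_pixels

# A one-pixel matching error matters far more for distant points:
print(depth_from_disparity(700.0, 0.12, 20.0))  # ~4.2 m
print(depth_from_disparity(700.0, 0.12, 21.0))  # ~4.0 m (1 px error -> ~5% change)
print(depth_from_disparity(700.0, 0.12, 2.0))   # ~42 m
print(depth_from_disparity(700.0, 0.12, 3.0))   # ~28 m (1 px error -> ~33% change)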

Stereo

Cross correlation between pixels in the left image and the right image is often used
– can compute the depth at all(?) pixels
Other possibilities include
– finding good features (those that are localizable) in the left image
– then finding correspondences in the right image, either by cross correlation or by some other matching scheme
We end up with a sparse depth map --- we only have depth calculations at places where we found a good feature AND that feature found a correspondence
– not every feature in the left image will find its match in the right image (why do you think?)
Given a sparse depth map, we'll need to do some error-prone interpolation between the computed depth points to fill in the depth map. A correspondence sketch follows below.
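A minimal sketch of window-based matching with normalized cross correlation (NCC) along a single row, i.e. assuming rectified images. Function and variable names are illustrative, not from a particular library.

import numpy as np

def ncc(a, b):
    """Normalized cross correlation of two equal-size windows."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def match_point(left, right, row, col, half=5, max_disp=64):
    """Find the disparity of left[row, col] by scanning the same row
    of the right image and keeping the best-NCC window."""
    win_l = left[row - half:row + half + 1, col - half:col + half + 1]
    best_d, best_score = None, -1.0
    for d in range(max_disp + 1):
        c = col - d                  # the match lies to the left in the right image
        if c - half < 0:
            break
        win_r = right[row - half:row + half + 1, c - half:c + half + 1]
        score = ncc(win_l, win_r)
        if score > best_score:
            best_d, best_score = d, score
    return best_d, best_score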

Stereo

Can take advantage of the epipolar constraint
– says that a point in the left image can only appear on an epipolar line in the right image (reduces the search space from a potentially large region to a line)
– define epipolar lines, epipolar plane, epipoles (see next slides)
Can take advantage of the ordering constraint
– says that two points which lie on a continuous surface in the 3D world will appear in the same order in the left image as in the right image
– the problem is, you might incorrectly assume that two points belong to the same surface when in actuality they do not, and then the constraint doesn't hold
– e.g. consider a small object occluding a larger, more distant object (let me draw on the board)
A small epipolar-line sketch follows below.
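A minimal sketch of the epipolar constraint in code, assuming the fundamental matrix F between the two cameras is already known (the F below is a placeholder for the rectified case; in practice F comes from calibration or estimation).

import numpy as np

def epipolar_line(F, x_left):
    """For a left-image point (u, v), return the line l' = F x on which
    its match must lie in the right image, as (a, b, c) with
    a*u' + b*v' + c = 0."""
    x = np.array([x_left[0], x_left[1], 1.0])
    return F @ x

# For rectified (aligned) cameras F takes this simple form, and every
# epipolar line is horizontal (v' = v), so the search reduces to one row:
F_rectified = np.array([[0.0, 0.0, 0.0],
                        [0.0, 0.0, -1.0],
                        [0.0, 1.0, 0.0]])
print(epipolar_line(F_rectified, (120.0, 45.0)))  # (0, -1, 45): the line v' = 45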

Epipolar geometry for aligned cameras (figure from Shapiro and Stockman)

Epipolar geometry for unaligned cameras (figure from Shapiro and Stockman)

Focus, blur and depth of field

Let's look at the relevant figures in the text. A point will be blurred into a larger spot when the image plane is off, in either direction, from the location at which the point would be in focus. The depth of field is related to how large a range of depths will be in focus (or within some acceptable level of blur).
– e.g. our text computes the depth of field assuming a maximum blur of an area the size of one pixel
A thin-lens sketch of the blur circle follows below.
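This is not the text's exact derivation, just a minimal thin-lens sketch of the blur spot; f is focal length, aperture is the lens diameter, and all distances are in consistent units (here mm).

def image_distance(f, s):
    """Thin lens: 1/f = 1/s + 1/s'  =>  s' = s*f / (s - f)."""
    return s * f / (s - f)

def blur_circle(f, aperture, focus_dist, obj_dist):
    """Diameter of the blur spot for a point at obj_dist when the
    sensor is positioned to focus points at focus_dist."""
    v_sensor = image_distance(f, focus_dist)   # where the sensor sits
    v_point = image_distance(f, obj_dist)      # where the point focuses
    return aperture * abs(v_sensor - v_point) / v_point

# Points at the focused distance are sharp; blur grows as we move away:
print(blur_circle(f=50.0, aperture=25.0, focus_dist=2000.0, obj_dist=2000.0))  # 0.0
print(blur_circle(f=50.0, aperture=25.0, focus_dist=2000.0, obj_dist=1000.0))  # ~0.64 mm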

3D cues from one 2D image

We just considered stereo, which used two images to get 3D information. We can also gather some 3D information from a single image with various depth cues (see section 12.3)
– occlusion
– changing texture density on a plane
– changing size of an object which extends through various depths
These cues can be used to get relative (not absolute) depths between various objects/surfaces.

3D cues from one 2D image

Interposition --- if an object A occludes another object B, then object A is closer
Perspective scaling --- the distance to an object is inversely proportional to its size in the image plane
Foreshortening --- viewing an object at an acute angle causes the object to appear compressed (in a perspective manner)
Motion parallax --- when an observer moves, the images of closer stationary objects move faster than those of farther stationary objects

3D cues from one 2D image (figure from Shapiro and Stockman)

3D cues from one 2D image

Shape from shading example: figure in Shapiro and Stockman (original credit: courtesy of D. Trytten).
Compute surface normals based on intensities and known properties of the light (energy, position, direction) and of the surface (how light is reflected and absorbed). A small sketch of the underlying model follows below.
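The slide doesn't spell out the method, so as an illustration of where normals come from, here is the Lambertian image model solved in the simpler photometric-stereo setting (three images under known lights); that is a related but different setup from single-image shape from shading, and all names and numbers below are illustrative.

import numpy as np

def normal_from_three_lights(intensities, light_dirs):
    """Lambertian model: I = albedo * dot(n, s). With three measured
    intensities at one pixel and a 3x3 matrix of unit light directions
    (one per row), solve S g = I; then |g| is the albedo and g/|g| is
    the unit surface normal."""
    g = np.linalg.solve(np.asarray(light_dirs), np.asarray(intensities))
    albedo = np.linalg.norm(g)
    return g / albedo, albedo

# Simulated example: a surface tilted toward +x, albedo 0.9.
S = np.array([[0.0, 0.0, 1.0],
              [1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])
n_true = np.array([0.6, 0.0, 0.8])
I = 0.9 * S @ n_true                     # the three simulated intensities
print(normal_from_three_lights(I, S))    # recovers n_true and albedo 0.9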

3D cues from one 2D image

So, if one is able to control the lighting in an environment, one can do various things to help bring out 3D structure, such as projecting light stripes into the environment, as in the figure in Shapiro and Stockman (original credit: courtesy of Gongzhu Hu).