Stereo: Epipolar geometry
Wednesday March 23
Kristen Grauman, UT-Austin
CS 376 Lecture 16

Announcements Reminder: Pset 3 due next Wed, March 30

Last time Image formation is affected by geometry, photometry, and optics. Projection equations express how world points are mapped to the 2D image. Parameters (focal length, aperture, lens diameter, …) affect the image obtained.

Review How do the perspective projection equations explain this effect? http://www.mzephotos.com/gallery/mammals/rabbit-nose.html flickr.com/photos/lungstruck/434631076/

Miniature faking In a close-up photo, the depth of field is limited. http://en.wikipedia.org/wiki/File:Jodhpur_tilt_shift.jpg

Miniature faking

Miniature faking http://en.wikipedia.org/wiki/File:Oregon_State_Beavers_Tilt-Shift_Miniature_Greg_Keene.jpg

Multiple views Multi-view geometry, matching, invariant features, stereo vision [Image credits: Lowe; Hartley and Zisserman]

Why multiple views? Structure and depth are inherently ambiguous from single views. Images from Lana Lazebnik

Why multiple views? Structure and depth are inherently ambiguous from single views: two different scene points P1 and P2 along the same viewing ray through the optical center project to the same image point (P1' = P2').

What cues help us to perceive 3d shape and depth?

Shading [Figure from Prados & Faugeras 2006]

Focus/defocus Images from the same point of view with different camera parameters yield 3D shape / depth estimates. [Figs from H. Jin and P. Favaro, 2002]

Texture [From A.M. Loh, The recovery of 3-D structure using visual texture patterns, PhD thesis]

Perspective effects Image credit: S. Seitz

Motion Figures from L. Zhang http://www.brainconnection.com/teasers/?main=illusion/motion-shape

Estimating scene shape “Shape from X”: shading, texture, focus, motion… Stereo: shape from “motion” between two views; infer the 3D shape of a scene from two (or more) images taken from different viewpoints. Main idea: triangulate each scene point from its image points and the two optical centers.

Outline Human stereopsis Stereograms Epipolar geometry and the epipolar constraint Case example with parallel optical axes General case with calibrated cameras

Human eye Rough analogy with the human visual system:
Pupil/Iris – controls the amount of light passing through the lens
Retina – contains the sensor cells, where the image is formed
Fovea – highest concentration of cones
[Fig from Shapiro and Stockman]

Human stereopsis: disparity Human eyes fixate on a point in space and rotate so that the corresponding images form in the centers of the foveae.

Human stereopsis: disparity Disparity occurs when the eyes fixate on one object; other objects appear at different visual angles.

Human stereopsis: disparity Disparity: d = r - l = D - F. [Forsyth & Ponce]

Random dot stereograms Julesz 1960: Do we identify local brightness patterns before fusion (a monocular process) or after (binocular)? To test this: a pair of synthetic images obtained by randomly spraying black dots on white objects.

Random dot stereograms [Forsyth & Ponce]

Random dot stereograms

Random dot stereograms When viewed monocularly, they appear random; when viewed stereoscopically, we see 3D structure. Conclusion: human binocular fusion is not directly associated with the physical retinas; it must involve the central nervous system. An imaginary “cyclopean retina” combines the left and right image stimuli as a single unit.

Stereo photography and stereo viewers Take two pictures of the same subject from two slightly different viewpoints and display them so that each eye sees only one of the images. Invented by Sir Charles Wheatstone, 1838. Image from fisher-price.com

http://www.johnsonshawmuseum.org

http://www.johnsonshawmuseum.org

Public Library, Stereoscopic Looking Room, Chicago, by Phillips, 1923

http://www.well.com/~jimg/stereo/stereo_list.html

Autostereograms Exploit disparity as a depth cue using a single image. (Single image random dot stereogram, single image stereogram) Images from magiceye.com

Autostereograms Images from magiceye.com

Estimating depth with stereo Stereo: shape from “motion” between two views. We’ll need to consider: information on camera pose (“calibration”) and image point correspondences.

Stereo vision Two cameras taking simultaneous views, or a single moving camera and a static scene.

Camera parameters
Extrinsic parameters: Camera frame 1 → Camera frame 2 (a rotation matrix and a translation vector)
Intrinsic parameters: image coordinates relative to the camera → pixel coordinates (focal length, pixel sizes (mm), image center point, radial distortion parameters)
We’ll assume for now that these parameters are given and fixed.
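For reference, these two parameter sets compose into the standard pinhole projection (not written out on the slide; the symbols below follow the usual textbook notation, not the slide's): a world point X maps to pixel coordinates x via

x \simeq K\,[R \mid T]\,X, \qquad K = \begin{bmatrix} f/s_x & 0 & o_x \\ 0 & f/s_y & o_y \\ 0 & 0 & 1 \end{bmatrix}

where (R, T) are the extrinsics, f is the focal length, s_x and s_y are the pixel sizes, (o_x, o_y) is the image center, and \simeq denotes equality up to scale in homogeneous coordinates; radial distortion is modeled separately.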

Outline Human stereopsis Stereograms Epipolar geometry and the epipolar constraint Case example with parallel optical axes General case with calibrated cameras

Geometry for a simple stereo system First, assume parallel optical axes and known camera parameters (i.e., calibrated cameras):

[Figure: a world point P at depth Z; left and right optical centers Ol and Or separated by the baseline T; image points pl and pr; focal length f.]

Geometry for a simple stereo system Assume parallel optical axes and known camera parameters (i.e., calibrated cameras). What is the expression for Z? Use the similar triangles (pl, P, pr) and (Ol, P, Or); the horizontal offset between the image points is the disparity.
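Filling in the result the slide asks for (with the usual sign convention that xl and xr are measured from each image's center, so the disparity is d = xl - xr), the similar triangles give

\frac{T - (x_l - x_r)}{Z - f} = \frac{T}{Z} \quad\Longrightarrow\quad Z = \frac{f\,T}{x_l - x_r} = \frac{f\,T}{d}

Depth is inversely proportional to disparity: nearby points have large disparity, distant points have small disparity.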

Depth from disparity Image I(x,y), disparity map D(x,y), image I´(x´,y´), with (x´,y´) = (x+D(x,y), y). So if we could find the corresponding points in two images, we could estimate relative depth…
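A minimal sketch of this idea in Python/NumPy (the function and argument names are made up for illustration; the formula is the Z = fT/d relation derived above):

import numpy as np

def depth_from_disparity(disparity, focal_px, baseline):
    # disparity: 2D array D(x, y), horizontal offset of corresponding points (pixels)
    # focal_px:  focal length expressed in pixels
    # baseline:  distance between the two optical centers (e.g., meters)
    d = np.asarray(disparity, dtype=np.float64)
    depth = np.full(d.shape, np.inf)               # zero disparity means a point at infinity
    valid = d > 1e-6
    depth[valid] = focal_px * baseline / d[valid]  # Z = f * T / d
    return depth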

Outline Human stereopsis Stereograms Epipolar geometry and the epipolar constraint Case example with parallel optical axes General case with calibrated cameras

General case, with calibrated cameras The two cameras need not have parallel optical axes.

Stereo correspondence constraints Given a point p in the left image, where can the corresponding point p’ be?

Stereo correspondence constraints

Epipolar constraint Geometry of two views constrains where the corresponding pixel for some image point in the first view must occur in the second view. It must be on the line carved out by a plane connecting the world point and optical centers.

Epipolar geometry [Figure labels: epipolar line, epipolar plane, epipole, baseline] http://www.ai.sri.com/~luong/research/Meta3DViewer/EpipolarGeo.html

Epipolar geometry: terms
Baseline: line joining the camera centers
Epipole: point of intersection of the baseline with the image plane
Epipolar plane: plane containing the baseline and a world point
Epipolar line: intersection of an epipolar plane with the image plane
All epipolar lines intersect at the epipole; an epipolar plane intersects the left and right image planes in epipolar lines.
Why is the epipolar constraint useful?

Epipolar constraint This is useful because it reduces the correspondence problem to a 1D search along an epipolar line. Image from Andrew Zisserman
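As a small illustration of the 1D search (a sketch, not from the slides; F here is the fundamental matrix, the pixel-coordinate counterpart of the essential matrix introduced later, and is assumed to be known):

import numpy as np

def epipolar_line(F, p):
    # p = (x, y) in image 1, in pixels.
    # Returns line coefficients (a, b, c) such that a*x' + b*y' + c = 0
    # for the corresponding point (x', y') in image 2.
    x = np.array([p[0], p[1], 1.0])
    return F @ x

def row_for_column(line, x_prime):
    # With the epipolar line known, the 2D correspondence search collapses to
    # scanning columns x': each candidate column determines a single row
    # (assumes the line is not vertical).
    a, b, c = line
    return -(a * x_prime + c) / b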

Example

What do the epipolar lines look like? [Two configurations sketched, each with optical centers Ol and Or.]

Example: converging cameras Figure from Hartley & Zisserman

Example: parallel cameras Where are the epipoles? Figure from Hartley & Zisserman

So far, we have the explanation in terms of geometry. Now, how do we express the epipolar constraint algebraically?

Stereo geometry, with calibrated cameras Main idea

Stereo geometry, with calibrated cameras If the stereo rig is calibrated, we know how to rotate and translate camera reference frame 1 to get to camera reference frame 2. Rotation: 3 x 3 matrix R; translation: 3-vector T.

Stereo geometry, with calibrated cameras If the stereo rig is calibrated, we know how to rotate and translate camera reference frame 1 to get to camera reference frame 2.

An aside: cross product The vector cross product takes two vectors and returns a third vector that is perpendicular to both inputs. So here, c = a × b is perpendicular to both a and b, which means the dot products a · c and b · c are zero.

From geometry to algebra [Figure: the normal to the epipolar plane]

Another aside: matrix form of the cross product The cross product can be expressed as a matrix multiplication.
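The matrix itself did not survive the transcript; it is the standard skew-symmetric form: for a = (a_1, a_2, a_3),

a \times b = [a]_\times\, b, \qquad [a]_\times = \begin{bmatrix} 0 & -a_3 & a_2 \\ a_3 & 0 & -a_1 \\ -a_2 & a_1 & 0 \end{bmatrix}

[a]_\times is skew-symmetric and has rank 2.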

From geometry to algebra [Figure: the normal to the epipolar plane]

Essential matrix E is called the essential matrix; it relates corresponding image points between the two cameras, given the rotation and translation. If we observe a point in one image, its position in the other image is constrained to lie on the line defined by the constraint above. Note: these points are in the camera coordinate systems.
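Written out in one common convention (e.g., Hartley & Zisserman; the slide's own derivation may order the factors differently):

E = [T]_\times R, \qquad x'^\top E\, x = 0

for corresponding points x and x' expressed in the two (calibrated) camera coordinate systems. The product E x gives exactly the epipolar line on which x' must lie.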

Essential matrix example: parallel cameras

E = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & d \\ 0 & -d & 0 \end{bmatrix}

For parallel cameras, the image of any point must lie on the same horizontal line in each image plane.
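A quick check that this E forces corresponding points onto the same scanline: with p = (x, y, f)^\top and p' = (x', y', f)^\top in camera coordinates,

p'^\top E\, p = p'^\top (0,\; d f,\; -d y)^\top = d f\, y' - d f\, y = d f\,(y' - y) = 0 \;\Longrightarrow\; y' = y.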

What about when the cameras’ optical axes are not parallel? [Recall the depth-from-disparity setup: image I(x,y), disparity map D(x,y), image I´(x´,y´) with (x´,y´) = (x+D(x,y), y).]

Stereo image rectification In practice, it is convenient if the image scanlines (rows) are the epipolar lines. Reproject the image planes onto a common plane parallel to the line between the optical centers; pixel motion is horizontal after this transformation. This reprojection requires two homographies (3x3 transforms), one for each input image. Slide credit: Li Zhang
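A hedged sketch of how this is typically done in practice with OpenCV (this is the library's standard rectification path, not necessarily the exact method on the slide; the calibration inputs K1, K2, dist1, dist2, R, T and the image file names are assumptions for illustration):

import cv2

# K1, K2: 3x3 intrinsic matrices; dist1, dist2: distortion coefficients
# R, T: rotation and translation taking camera 1's frame to camera 2's frame
imgL = cv2.imread('left.png')    # placeholder file names
imgR = cv2.imread('right.png')
h, w = imgL.shape[:2]

R1, R2, P1, P2, Q, roi1, roi2 = cv2.stereoRectify(K1, dist1, K2, dist2, (w, h), R, T)

# One remapping (undistortion plus the rectifying homography) per input image
map1x, map1y = cv2.initUndistortRectifyMap(K1, dist1, R1, P1, (w, h), cv2.CV_32FC1)
map2x, map2y = cv2.initUndistortRectifyMap(K2, dist2, R2, P2, (w, h), cv2.CV_32FC1)
rectL = cv2.remap(imgL, map1x, map1y, cv2.INTER_LINEAR)
rectR = cv2.remap(imgR, map2x, map2y, cv2.INTER_LINEAR)
# After remapping, corresponding points lie on the same image row in rectL and rectR.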

Stereo image rectification: example Source: Alyosha Efros

An audio camera & epipolar geometry Adam O’Donovan, Ramani Duraiswami, and Jan Neumann, “Microphone Arrays as Generalized Cameras for Integrated Audio Visual Processing,” IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Minneapolis, 2007. Spherical microphone array.

An audio camera & epipolar geometry

Summary
Depth from stereo: the main idea is to triangulate from corresponding image points.
Epipolar geometry is defined by the two cameras; we’ve assumed known extrinsic parameters relating their poses.
The epipolar constraint limits where points from one view will be imaged in the other, which makes the search for correspondences quicker.
Terms: epipole, epipolar plane / lines, disparity, rectification, intrinsic/extrinsic parameters, essential matrix, baseline

Coming up Computing correspondences Non-geometric stereo constraints Weak calibration