© 2006 by Davi Geiger, Computer Vision, April 2006

L1.1 Binocular Stereo
[Figure: left image and right image of a stereo pair]

L1.2 Stereo Correspondence: Ambiguities
Each potential match is represented by a square. The black squares represent the most likely scene to "explain" the images, but other combinations (e.g., the red ones) could have given rise to the same images. What makes the set of black squares preferred/unique is that its matches have similar disparity values, the ordering constraint is satisfied, and there is a unique match for each point. Any other set that could have given rise to the two images would have disparity values that vary more, and would violate either the ordering constraint or the uniqueness constraint. The disparity values are inversely proportional to the depth values.
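Since the slide only states the inverse relation between disparity and depth, here is a minimal sketch of it for a rectified pair, assuming the standard symbols f (focal length) and B (baseline), which do not appear on the slide:

```python
def depth_from_disparity(d, f=1.0, B=1.0):
    """Depth is inversely proportional to disparity: Z = f * B / d.
    f (focal length) and B (baseline) are assumed constants here,
    not quantities taken from the slides."""
    if d == 0:
        return float("inf")   # zero disparity corresponds to a point at infinity
    return f * B / d

# Doubling the disparity halves the estimated depth.
print(depth_from_disparity(1.0), depth_from_disparity(2.0))   # 1.0 0.5
```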

L1.3 Stereo Correspondence: Matching Space
[Figure: matching-space diagram for scene points A-F, with regions marked "right boundary, no match", "left depth discontinuity", and "surface orientation discontinuity"]
In the matching space, a point (or node) represents a match of a pixel in the left image with a pixel in the right image.
Note 1: Depth discontinuities and very tilted surfaces can yield the same images (with half-occluded pixels).
Note 2: Due to pixel discretization, points A and C in the right frame are neighbors.

L1.4 Cyclopean Eye
The cyclopean eye "sees" the world in 3D, where x represents the coordinate axis of this eye and w is the disparity axis.
For manipulating integer coordinate values, one can also use the representation x = l + r, w = r - l, restricted to integer values. Thus, for l, r = 0, ..., N-1 we have x = 0, ..., 2N-2 and w = -N+1, ..., 0, ..., N-1.
Note: Not every pair (x, w) has a correspondence to (l, r) when only integer coordinates are considered. Indeed, the integer coordinate system (x, w) exhibits subpixel accuracy: for x + w even we have integer values for the pixels l and r, and for x + w odd we have subpixel locations.
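A minimal sketch of the coordinate bookkeeping described above, using the integer representation x = l + r, w = r - l; the function names are mine, not from the course:

```python
def lr_to_xw(l, r):
    """Left/right pixel pair -> integer cyclopean coordinates.
    For l, r = 0..N-1, x ranges over 0..2N-2 and w over -(N-1)..N-1."""
    return l + r, r - l

def xw_to_lr(x, w):
    """Inverse map.  If x + w is even, (l, r) are integer pixel positions;
    if x + w is odd, they fall on half-pixel (subpixel) locations."""
    return (x - w) / 2.0, (x + w) / 2.0

print(lr_to_xw(3, 5))        # (8, 2): the example used on later slides
print(xw_to_lr(8, 2))        # (3.0, 5.0): x + w even -> integer pixels
print(xw_to_lr(7, 2))        # (2.5, 4.5): x + w odd  -> subpixel locations
```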

L1.5 Surface Constraints
Smoothness: In nature most surfaces are smooth in depth compared to their distance to the observer, but depth discontinuities also occur. Usually smoothness implies an ordering constraint: points lying to the right of a given point in one image match points lying to the right of that point's match in the other image.
Uniqueness: There should be only one disparity value associated with each cyclopean coordinate x. Note that multiple matches for left-eye points or right-eye points are still allowed.
[Figure: left epipolar line and disparity axis w]
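The two constraints can be stated operationally; below is an illustrative helper (not course code) that tests a candidate match set on one epipolar line against the uniqueness and ordering constraints as phrased on this slide:

```python
def satisfies_constraints(matches):
    """matches: iterable of (l, r) pixel pairs along one epipolar line.

    Uniqueness (as on the slide): at most one disparity w = r - l per
    cyclopean coordinate x = l + r.  Ordering: if one point lies to the
    left of another in the left image, its match lies to the left of
    (or at) the other point's match in the right image.
    """
    disparity_at_x = {}
    for l, r in matches:
        x, w = l + r, r - l
        if disparity_at_x.setdefault(x, w) != w:
            return False                     # two disparities for the same x
    ordered = sorted(matches)                # sort by l, then r
    for (l1, r1), (l2, r2) in zip(ordered, ordered[1:]):
        if l1 < l2 and r1 > r2:
            return False                     # ordering constraint violated
    return True

print(satisfies_constraints([(0, 1), (1, 2), (2, 2)]))   # True: one right pixel matched twice is allowed
print(satisfies_constraints([(0, 3), (1, 1)]))           # False: ordering violated
```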

L1.6 Bayesian Formulation
The probability that a surface {w(x,e)} accounts for the left and right images can be described by Bayes' formula as

  P({w} | I_L, I_R) = P(I_L, I_R | {w}) P({w}) / P(I_L, I_R)

Let us develop formulas for both probability terms in the numerator (the likelihood and the prior). The denominator is a normalization that makes the probability sum to 1.
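A tiny numerical sketch of the normalization role of the denominator: for a single site, multiply hypothetical likelihood and prior values for each candidate disparity and divide by their sum so the posterior sums to 1 (all numbers below are made up):

```python
# Hypothetical (likelihood * prior) scores for one site, per disparity w.
scores = {-1: 0.2 * 0.3, 0: 0.9 * 0.5, 1: 0.4 * 0.2}

Z = sum(scores.values())                          # the normalizing denominator
posterior = {w: s / Z for w, s in scores.items()}
print(posterior)        # {-1: 0.102, 0: 0.763, 1: 0.136} (rounded); sums to 1
```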

L1.7 The Matching Cost C(e,x,w)
C(e,x,w) ∈ [0,1], for x+w even, represents how good a match is between a point (e,l) in the left image and a point (e,r) in the right image (where x = l+r is the cyclopean eye coordinate and w = r-l is the disparity). The epipolar lines are indexed by e (for the homework, they are just the horizontal lines).
C(e,x,w) ∈ [0,1], for x+w odd, represents how good a match is between an edge (e, l -> l+1) in the left image and an edge (e, r -> r+1) in the right image. A parameter in the formula reduces the effect of the gradient values.
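The actual formulas for C(e, x, w) were shown as images and are not reproduced in the transcript. Below is a hedged sketch of one common choice for the pixel term (x + w even): an intensity difference mapped into [0, 1] with an exponential, where sigma is an assumed scale parameter (analogous in spirit to, but not the same as, the parameter mentioned on the slide).

```python
import numpy as np

def matching_cost(left_row, right_row, x, w, sigma=10.0):
    """Sketch of a pixel-match score C(e, x, w) in [0, 1] for x + w even.

    left_row, right_row: 1-D intensity arrays for one epipolar line e.
    Since x = l + r and w = r - l, we have l = (x - w) // 2, r = (x + w) // 2.
    The exponential form and sigma are assumptions, not the course's formula.
    """
    assert (x + w) % 2 == 0, "pixel-to-pixel matches live on x + w even"
    l, r = (x - w) // 2, (x + w) // 2
    if not (0 <= l < len(left_row) and 0 <= r < len(right_row)):
        return 0.0                      # out of range: no support for this match
    return float(np.exp(-abs(float(left_row[l]) - float(right_row[r])) / sigma))

row_l = np.array([10, 12, 200, 202, 50], dtype=float)
row_r = np.array([11, 13, 198, 205, 52], dtype=float)
print(matching_cost(row_l, row_r, x=4, w=0))   # compares left pixel 2 with right pixel 2
```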

L1.8 Epipolar Interaction
[Figure: matching space along the right epipolar line, showing the example l = 3, r = 5, i.e. x = 8 and w = 2]
Epipolar interaction: the stronger the intensity edges, the lower the cost (and the higher the probability) of having disparity changes across epipolar lines.
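A hedged sketch of the epipolar interaction idea: a penalty on disparity changes between neighbouring epipolar lines that becomes cheaper where the intensity edge is strong. The functional form and the parameters lam and sigma are assumptions, not the slide's formula.

```python
import numpy as np

def inter_line_cost(w_a, w_b, edge_strength, lam=1.0, sigma=20.0):
    """Cost of assigning disparities w_a and w_b to the same cyclopean x on two
    neighbouring epipolar lines.  A strong intensity edge (large gradient)
    lowers the price of a disparity change, as described on the slide; the
    exact exponential weighting here is an assumption."""
    return lam * abs(w_a - w_b) * np.exp(-edge_strength / sigma)

print(inter_line_cost(2, 4, edge_strength=0.0))     # flat region: full penalty 2.0
print(inter_line_cost(2, 4, edge_strength=100.0))   # strong edge: penalty ~ 0.013
```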

L1.9
[Figure: the same matching-space diagram (l = 3, r = 5, x = 8, w = 2), continued]

L1.10 Limit Disparity
The matrix is updated only within a range of 2D+1 disparity values, i.e., |w| <= D. The rationale is:
(i) fewer computations;
(ii) larger-disparity matches imply larger errors in the 3D estimation.
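A small sketch of the disparity limit in code: the cost volume is only built (and later updated) for the 2D+1 disparities with |w| <= D. Array layout and names are mine; the cost_fn argument could be the matching_cost sketch above.

```python
import numpy as np

def build_limited_cost_volume(left, right, D, cost_fn):
    """Cost volume C[e, x, k] where k = 0..2D indexes disparities w = k - D.
    left, right: 2-D intensity images whose rows are the epipolar lines.
    Only matches with |w| <= D are ever computed."""
    rows, N = left.shape
    C = np.zeros((rows, 2 * N - 1, 2 * D + 1))
    for e in range(rows):
        for x in range(2 * N - 1):
            for k in range(2 * D + 1):
                w = k - D
                if (x + w) % 2 == 0:            # pixel-to-pixel matches only
                    C[e, x, k] = cost_fn(left[e], right[e], x, w)
    return C

# Example use (with the matching_cost sketch from the earlier slide):
# C = build_limited_cost_volume(left_img, right_img, D=4, cost_fn=matching_cost)
```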

L1.11 Stereo Correspondence: Belief Propagation (BP)
We finally have the posterior distribution over disparity values (the surface {w(x,e)}), P({w} | I_L, I_R). We want to obtain/compute the marginal at each site,

  P(w(x,e) | I_L, I_R) = sum over {w(x',e') : (x',e') != (x,e)} of P({w} | I_L, I_R).

Computed directly, these are computations exponential in the size of the grid N.

L1.12 Kai Ju's Approximation to BP
[Figure: the lattice decomposed into a "horizontal" belief tree and a "vertical" belief tree]
We use Kai Ju's Ph.D. thesis work to approximate the (x,e) graph/lattice by horizontal and vertical graphs, which are singly connected. Thus, exact computation of the marginals in these graphs can be obtained in linear time. We then combine the probabilities obtained from the horizontal and vertical graphs at each lattice site by "picking" the "best" one, namely the one with lower entropy H = -sum_w P(w) log P(w).
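A hedged sketch of the approximation described on this slide: exact sum-product (forward-backward) marginals on a chain, run along every row ("horizontal" belief tree) and every column ("vertical" belief tree) of the (x, e) lattice, and then, per site, the marginal with lower entropy is kept. The pairwise potential, the parameter choices, and all names are assumptions; only the structure (two singly connected graphs, entropy-based selection) follows the slide.

```python
import numpy as np

def chain_marginals(phi, psi):
    """Exact sum-product marginals on a chain with positive unary potentials
    phi[i, :] and a shared pairwise potential matrix psi[w, w']."""
    n, K = phi.shape
    fwd = np.zeros((n, K)); bwd = np.zeros((n, K))
    fwd[0], bwd[-1] = phi[0], phi[-1]
    for i in range(1, n):                         # forward pass
        fwd[i] = phi[i] * (fwd[i - 1] @ psi)
        fwd[i] /= fwd[i].sum()                    # rescale for numerical stability
    for i in range(n - 2, -1, -1):                # backward pass
        bwd[i] = phi[i] * (psi @ bwd[i + 1])
        bwd[i] /= bwd[i].sum()
    marg = fwd * bwd / phi                        # phi[i] would otherwise be counted twice
    return marg / marg.sum(axis=1, keepdims=True)

def entropy(p, eps=1e-12):
    """Entropy H = -sum_w P(w) log P(w) of each marginal (last axis)."""
    return -np.sum(p * np.log(p + eps), axis=-1)

def approximate_marginals(C, psi):
    """C[e, x, K]: positive per-site potentials over K disparity labels.
    Returns per-site marginals: at each site, keep the horizontal-chain or
    vertical-chain result, whichever has lower entropy."""
    rows, cols, K = C.shape
    H = np.zeros_like(C)
    V = np.zeros_like(C)
    for e in range(rows):                         # "horizontal" belief trees
        H[e] = chain_marginals(C[e], psi)
    for x in range(cols):                         # "vertical" belief trees
        V[:, x] = chain_marginals(C[:, x], psi)
    pick_h = entropy(H) <= entropy(V)             # lower entropy wins per site
    return np.where(pick_h[..., None], H, V)

# Toy run: a smoothness-style pairwise potential and random unary potentials.
K = 7
labels = np.arange(K)
psi = np.exp(-np.abs(np.subtract.outer(labels, labels)))     # assumed form
C = np.random.rand(5, 6, K) + 1e-3
P = approximate_marginals(C, psi)
print(P.shape, np.allclose(P.sum(axis=-1), 1.0))             # (5, 6, 7) True
```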

L1.13 Result

L1.14 Some Issues in Stereo
[Figure: regions A and B as seen in the left and right images]
Junctions and their properties: false matches that reveal information from vertical disparities (see Malik, ECCV 1994).