Contents
– Description of the big picture
– Theoretical background on this work
– The Algorithm
– Examples

Depth reconstruction process
– Image acquisition
– Camera modeling
– Feature acquisition
– Image matching
– Depth determination
– Interpolation

Image acquisition (in aerial photo)
– Images are vertical (the camera is directed downwards)
– Images are taken from a high point
– Images are taken at very small time intervals
– Rotation is minimal
– The height is almost the same

Aerial photo

Camera modeling
To provide a function that maps pairs of corresponding points from two stereo images onto scene points, one needs the camera model:
– Focal length of the camera (focal length = zoom)
– Camera location
– Tilt (no tilt in our case)

Feature acquisition
Pixels grouped into a region carry more information for matching.
There exist two techniques of image matching:
– Area matching: compare pixels in both images
– Feature based: corners, junctions, straight or curved segments, or other features are extracted from the images and then matched
The algorithm combines both techniques.

Problems in feature based matching – occlusion

Feature acquisition problems – different angle (not our case)

Image matching
Both images are projections of the same 3D object; thus, together with the camera model, one can reconstruct the original 3D object.
Common matching constraints:
– Epipolarity constraint: a point on a given epipolar line can match only a point on the corresponding epipolar line in the other image.
– Uniqueness constraint: no two objects in the scene project to the same object in an image.
– Ordering constraint: projections of two objects along an epipolar line can't swap order.

Image matching continued
Common matching constraints (continued):
– Smoothness constraint: the difference of disparities of two nearby objects doesn't exceed a threshold value.
– Orientation constraint: the difference of orientations of two matching segments is bounded.
– Contrast constraint: matched features have the same contrast sign.
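For rectified images, where epipolar lines coincide with horizontal scanlines, these constraints can be sketched as simple predicates. The point and match representations below are illustrative assumptions, not the algorithm's actual data structures:

```python
def epipolar_ok(p_left, p_right):
    # Epipolarity (rectified case): matches must lie on the same scanline.
    return p_left[1] == p_right[1]

def ordering_ok(matches):
    # Ordering: matched points along an epipolar line must not swap order.
    pairs = [(l[0], r[0]) for l, r in matches]  # (x_left, x_right)
    return all((a < b) == (c < d)
               for (a, c), (b, d) in zip(pairs, pairs[1:]))

def smoothness_ok(matches, threshold):
    # Smoothness: disparities of nearby matches differ by at most a threshold.
    disparities = [l[0] - r[0] for l, r in matches]
    return all(abs(a - b) <= threshold
               for a, b in zip(disparities, disparities[1:]))
```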

Matching techniques
– Best resemblance approach
– Coarse to fine
– Evaluation-propagation approach
There are more techniques to do it.

Best resemblance approach
– Uses local pixel information to decide a potential match.
– Each match decision is made independently of the other pixels.
– The main match criterion is a good resemblance between regions of pixels in the two stereo images.

Coarse to fine approach
– Hierarchical approach: matching information is passed from coarse to finer levels of computation.
– Features are matched at different levels of resolution.
– The coarse disparities are used to constrain the finer matches.
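A minimal 1-D illustration of this control flow, assuming sum of absolute differences (SAD) as the resemblance measure; the real algorithm matches 2-D features, so this is only a sketch of how a coarse disparity constrains the finer search:

```python
def downsample(signal):
    # Halve the resolution by averaging adjacent samples.
    return [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal) - 1, 2)]

def sad(left, right, d):
    # Average absolute difference between left and right shifted by d.
    n = min(len(left), len(right) - d)
    return sum(abs(left[i] - right[i + d]) for i in range(max(n, 0))) / max(n, 1)

def coarse_to_fine(left, right, levels, radius=1):
    if levels == 0:
        # Exhaustive search at the coarsest level.
        return min(range(len(right) // 2), key=lambda d: sad(left, right, d))
    # The coarse disparity constrains the search at the finer level.
    coarse = coarse_to_fine(downsample(left), downsample(right), levels - 1, radius)
    center = 2 * coarse  # disparities double when the resolution doubles
    return min(range(max(0, center - radius), center + radius + 1),
               key=lambda d: sad(left, right, d))
```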

Evaluation-propagation approach
– Uses neighborhood relations between matching features.
– Potential matches form a correspondence graph whose nodes are pairs of matching segments.
– Edges of the graph join features that don't violate the smoothness and uniqueness constraints.

Our algorithm's technique
– The algorithm uses a combination of best resemblance and evaluation-propagation.
– It builds a neighborhood graph.
– The initial disparity of some features is propagated to other features.
– Local matches are evaluated with the best resemblance approach.

Theoretical background
– The algorithm is another implementation of the partial Hausdorff distance.
– Pixel-based method and segment-based method.
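A direct (brute-force) sketch of the directed partial Hausdorff distance: instead of taking the maximum of the nearest-neighbor distances from one point set to the other, take the K-th smallest for K = fraction·|A|, which makes the measure robust to occlusion and outliers:

```python
import math

def partial_hausdorff(A, B, fraction=0.8):
    # Directed partial Hausdorff distance from point set A to point set B:
    # the K-th smallest nearest-neighbor distance, K = ceil(fraction * |A|).
    dists = sorted(min(math.dist(a, b) for b in B) for a in A)
    k = max(0, math.ceil(fraction * len(dists)) - 1)
    return dists[k]
```

With fraction = 1.0 this reduces to the ordinary directed Hausdorff distance; smaller fractions ignore the worst-matching points.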

The Algorithm
– Preprocessing
– Boot process
– Evaluate-propagate stage
– Postprocessing

Pixel-based method (Amit's thesis)
– Divide both images into a grid of search windows.
– Find the disparity of corresponding windows, treating them as sets of black pixels.
– If the percentage of matching pixels is small, repeat the previous step using the previously found disparity as the approximate initial disparity.
– The major disadvantage in our case is that an object's original height cannot be obtained, because the calculated height is the average per search window!
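The per-window step can be sketched as follows; treating a window as a set of black-pixel coordinates and searching only horizontal shifts are simplifying assumptions for illustration:

```python
def window_disparity(left_pts, right_pts, max_d):
    # Treat each search window as a set of black-pixel coordinates and pick
    # the horizontal shift that lands the most left pixels on right pixels.
    right_set = set(right_pts)
    def score(d):
        return sum((x + d, y) in right_set for (x, y) in left_pts)
    best = max(range(-max_d, max_d + 1), key=score)
    return best, score(best) / len(left_pts)  # disparity, matched fraction
```

When the matched fraction is low, the search would be repeated with `best` as the new initial disparity, as the slide describes.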

Segment-based method
There exists a method that treats both images as sets of segments. Its main disadvantage is runtime complexity.
The algorithm will mix both techniques!

Preprocessing - 1
– Use edge detection on both images.
– Create non-crossing segments.
– Split wiggly segments.
– Build the weak visibility graph.

Split wiggly segment
– Find the farthest point P from the chord whose distance d >= some threshold value.
– Split the segment at point P.
– Repeat recursively.
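This is essentially recursive polyline splitting in the spirit of Ramer-Douglas-Peucker; a sketch, with a segment represented as a point list and distinct endpoints assumed:

```python
import math

def point_line_dist(p, a, b):
    # Perpendicular distance from p to the line through a and b
    # (a and b assumed distinct).
    (px, py), (ax, ay), (bx, by) = p, a, b
    num = abs((bx - ax) * (ay - py) - (ax - px) * (by - ay))
    return num / math.hypot(bx - ax, by - ay)

def split_segment(points, threshold):
    # Find the point farthest from the chord; if it deviates by at least
    # the threshold, split there and recurse on both halves.
    a, b = points[0], points[-1]
    i, d = max(((i, point_line_dist(p, a, b))
                for i, p in enumerate(points[1:-1], 1)),
               key=lambda t: t[1], default=(0, 0.0))
    if d < threshold:
        return [points]
    return split_segment(points[:i + 1], threshold) + split_segment(points[i:], threshold)
```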

Weak visibility

Weak visibility graph
– Vertices: segments in the left image.
– Edges connect vertices if the corresponding segments are weakly visible.
– An edge's weight is the minimal vertical or horizontal distance between the corresponding segments.
– The graph is built with a sweepline technique.

Sweepline structure (figure): an array indexed along the Y axis; each entry holds seg_field (the last segment seen in that row) and X_field (the X coordinate where it was seen).

Sweepline technique
Create an event stack. Iteratively process the event points e from left to right, where Y is the Y coordinate of e (a black pixel of segment I):
a. If seg_field(Y) <> Nil, then I and J (the segment pointed to by seg_field) are neighbors, and their distance is X - X_field(Y); if J is already in the graph, keep the minimum distance.
b. seg_field(Y) = I, X_field(Y) = X
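A sketch of that pass in Python; the event-tuple and edge-map representations are illustrative, but the seg_field / X_field arrays from the slide appear explicitly:

```python
def neighbor_graph(events, height):
    # events: (x, y, segment_id) black-pixel events, processed left to right.
    # seg_field[y] remembers which segment last touched row y; x_field[y], where.
    seg_field = [None] * height
    x_field = [0] * height
    edges = {}  # (seg_a, seg_b) -> minimal horizontal distance
    for x, y, seg in sorted(events):
        prev = seg_field[y]
        if prev is not None and prev != seg:
            key = tuple(sorted((prev, seg)))
            dist = x - x_field[y]
            edges[key] = min(edges.get(key, dist), dist)  # keep the minimum
        seg_field[y] = seg
        x_field[y] = x
    return edges
```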

Preprocessing - 2
– After the graph is built, its MST is constructed with Prim's algorithm. Neighboring vertices are visible and close.
– A long and straight segment is chosen as the root.
– In the right image a distance transform is computed, and each pixel holds the coordinates of the closest black pixel and the distance to it.
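Prim's construction on the weighted neighbor graph can be sketched as follows (the edge-dictionary input format is an assumption):

```python
import heapq

def prim_mst(edges, root):
    # edges: {(u, v): weight}; returns the MST as parent links from the root.
    adj = {}
    for (u, v), w in edges.items():
        adj.setdefault(u, []).append((w, v))
        adj.setdefault(v, []).append((w, u))
    parent, seen = {root: None}, {root}
    heap = [(w, root, v) for w, v in adj.get(root, [])]
    heapq.heapify(heap)
    while heap:
        w, u, v = heapq.heappop(heap)  # cheapest edge leaving the tree
        if v in seen:
            continue
        seen.add(v)
        parent[v] = u
        for w2, nxt in adj[v]:
            if nxt not in seen:
                heapq.heappush(heap, (w2, v, nxt))
    return parent
```

The parent links are exactly what the evaluate-propagate stage later walks with BFS.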

Why initial disparity?
Wrong values may be obtained if there is no way to limit the search window for a specific segment.

Initial disparity
– Find the longest and straightest segment in the right image.
– Find it in the left image.
– Let its disparity be the initial disparity of its corresponding vertex.

Evaluate-propagate stage
A BFS-like algorithm is implemented in this stage. Calculation for a vertex in the BFS:
– Find its disparity using the disparity of its parent vertex.
– Propagate its disparity as the initial disparity to its children.
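The stage can be sketched as a BFS over the MST parent links; `refine` below stands in for the local best-resemblance search and is a placeholder, not the algorithm's actual routine:

```python
from collections import deque

def propagate(parent_links, root, root_disparity, refine):
    # BFS over the MST: each vertex starts from its parent's disparity
    # and refines it locally.
    children = {}
    for v, p in parent_links.items():
        if p is not None:
            children.setdefault(p, []).append(v)
    disparity = {root: root_disparity}
    queue = deque([root])
    while queue:
        u = queue.popleft()
        for v in children.get(u, []):
            disparity[v] = refine(v, disparity[u])  # local search seeded by parent
            queue.append(v)
    return disparity
```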

Segment's disparity
– Segment disparity: the disparity of the whole segment.
– Pixel disparity: for a pixel of some segment S, the distance to the closest black pixel in the right image under translation t, where t is its segment's disparity.
– Each pixel in the left image holds the closest black pixel from the right image.

Segment's disparity (2)
– Each segment is searched for in the window defined by its initial disparity and double the maximum allowed disparity.
– For each tested disparity, two values are minimized:
 – D1: the Hausdorff distance of the segment
 – D2: the Hausdorff distance of the segment together with its MST neighbors
– 80% of the pixels are used during the match process.
– The best disparity is chosen among the competing disparities.

Post processing
Pixels may hold more than one close black pixel, or none.
– If the closest pixel is too far, the pixel remains unmatched.
– If it has no candidate, it remains unmatched.
– If it has more than one candidate:
 1) Remove already matched candidates.
 2) Among the rest, choose the pixel from the segment that has the most matching pixels with the segment of the pixel.

Runtime
The runtime of the algorithm is the sum, over all segments, of the cost of testing each candidate disparity against the segment and its neighbors, which may be represented by O(An · L · W), where
– An is the average number of neighbors
– L is the total number of black pixels
– W is the size of the search window

Depth determination
When the images are rectified (only horizontal displacement between the images), depth follows directly from disparity: Z = f·B/d, where f is the focal length, B the baseline between the cameras, and d the disparity.
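For rectified cameras the relation Z = f·B/d follows from similar triangles; a minimal sketch:

```python
def depth_from_disparity(f, baseline, d):
    # Rectified stereo: depth Z = focal length * baseline / disparity.
    if d == 0:
        return float('inf')  # zero disparity: point at infinity
    return f * baseline / d
```

Note that depth is inversely proportional to disparity: nearby objects have large disparities, distant ones small.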

Interpolation
The problem: the feature-based approach doesn't provide depth for the whole scene. There are two methods:
– Suppose there is a continuous function of depth that can be fitted to the sparse depth array obtained in the previous steps.
– Try to fit known geometric models to the sparse depth array.
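The first method can be sketched with inverse-distance weighting, one simple way to approximate a continuous depth surface from sparse samples; the choice of IDW here is an illustrative assumption, not the presentation's method:

```python
import math

def idw_depth(query, samples, power=2):
    # Inverse-distance-weighted interpolation of a sparse depth array:
    # nearby known depths dominate the estimate at the query point.
    num = den = 0.0
    for (x, y), z in samples:
        d = math.dist(query, (x, y))
        if d == 0:
            return z  # query coincides with a known sample
        w = 1.0 / d ** power
        num += w * z
        den += w
    return num / den
```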