
3D Reconstruction Jeff Boody

Goals ● Reconstruct 3D models from a sequence of at least two images ● No prior knowledge of the camera or scene ● Use the resulting 3D depth map as an input into the Biomimetic vision system

Overview (1) Feature Extraction/Matching (2) Relating Images (3) Projective Reconstruction (4) Self-Calibration (5) Dense Matching (6) 3D Model Building

Feature Extraction ● Harris corner detector – Create gradient images Ix, Iy by convolution with the mask [-2, -1, 0, 1, 2] – Compute the products Ixx = Ix·Ix, Ixy = Ix·Iy, Iyy = Iy·Iy (smoothing optional) – The corner response: R(x,y) = det(x,y) - k·trace(x,y)² – det(x,y) = Ixx(x,y)·Iyy(x,y) - Ixy(x,y)² – trace(x,y) = Ixx(x,y) + Iyy(x,y) – Select local maxima over a threshold (adaptive in vxl) – Compute the sub-pixel location of each corner
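The steps above can be sketched as follows (a minimal sketch using NumPy/SciPy; the window sizes and the fixed-fraction threshold are illustrative choices, and sub-pixel refinement is omitted):

```python
import numpy as np
from scipy.ndimage import convolve, uniform_filter, maximum_filter

def harris_response(img, k=0.04):
    """Corner response R = det - k*trace^2 of the smoothed gradient products."""
    d = np.array([[-2.0, -1.0, 0.0, 1.0, 2.0]])   # the slide's derivative mask
    Ix = convolve(img, d)
    Iy = convolve(img, d.T)
    # Gradient products, smoothed over a small window (the "optional" smoothing;
    # without it det(x,y) is identically zero, since the product matrix is rank 1)
    Ixx = uniform_filter(Ix * Ix, size=3)
    Iyy = uniform_filter(Iy * Iy, size=3)
    Ixy = uniform_filter(Ix * Iy, size=3)
    det = Ixx * Iyy - Ixy * Ixy
    trace = Ixx + Iyy
    return det - k * trace * trace

def harris_corners(img, k=0.04, thresh=None):
    """Local maxima of R above a threshold, returned as (x, y) tuples."""
    R = harris_response(img, k)
    if thresh is None:
        thresh = 0.01 * R.max()                    # simple fixed fraction, not vxl's adaptive rule
    local_max = R == maximum_filter(R, size=5)
    ys, xs = np.nonzero(local_max & (R > thresh))
    return list(zip(xs, ys))
```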

Feature Matching ● Correlation – Compare features in I to features in I' that lie within a search window (approximately 1/8 of the image size) – C = ∑∑ (I(x-i,y-j) - Imean)(I'(x-i,y-j) - I'mean) – The correlation score is summed over the region i = -N..N, j = -N..N, where N = 3 ● Zhang's robust matching – Correlation, strength/unambiguity of matches, iterative relaxation – Eliminates the ambiguity caused by multiple candidate matches
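A common normalized variant of the slide's score is zero-mean normalized cross-correlation (ZNCC), which divides by the window variances and so is invariant to brightness and contrast changes; a sketch with the slide's window size N = 3:

```python
import numpy as np

def zncc(I, Ip, p, q, N=3):
    """Zero-mean normalized cross-correlation between the (2N+1)x(2N+1)
    windows around point p = (x, y) in I and q = (x, y) in Ip."""
    x, y = p
    xp, yp = q
    a = I[y - N:y + N + 1, x - N:x + N + 1].astype(float)
    b = Ip[yp - N:yp + N + 1, xp - N:xp + N + 1].astype(float)
    a = a - a.mean()                       # subtract the window means
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0
```

A perfect match scores 1, uncorrelated windows score near 0.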

Relating Images(1) ● Fundamental matrix – u'ᵀFu = 0 – uu'F11 + uv'F21 + uF31 + vu'F12 + vv'F22 + vF32 + u'F13 + v'F23 + F33 = 0 – Af = 0, where each match contributes the row (uu', uv', u, vu', vv', v, u', v', 1) and f = (F11, F21, F31, F12, F22, F32, F13, F23, F33)ᵀ – ||f|| = 1 (F is only defined up to an unknown scale) – det(F) = 0 (singularity constraint, rank 2) – The epipoles are the right and left null vectors of F
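With f stacked in the slide's order (F11, F21, F31, F12, F22, F32, F13, F23, F33), each match contributes one row of A, and the row's dot product with f reproduces u'ᵀFu:

```python
import numpy as np

def epipolar_row(u, up):
    """One row of A for the constraint u'^T F u = 0, matching the unknown
    ordering f = (F11, F21, F31, F12, F22, F32, F13, F23, F33)."""
    x, y = u
    xp, yp = up
    return np.array([x * xp, x * yp, x, y * xp, y * yp, y, xp, yp, 1.0])
```

Note that this ordering corresponds to flattening F column-major (`F.ravel(order="F")`), not the more common row-major stacking.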

Relating Images(2) ● 8-Point Algorithm – Normalize points: ū = T1u, ū' = T2u' – Points are translated to their centroid – Points are scaled so their average distance from the origin is √2 – Solve Af = 0 using the SVD (A = UDVᵀ; f is the 9th column of V, which enforces ||f|| = 1) – Take the SVD of F (F = UDVᵀ) and set the smallest singular value to 0, giving the closest singular matrix under the Frobenius norm – De-normalize: F = T2ᵀ F̄ T1
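A sketch of the algorithm (exact, noise-free matches assumed here; with real data it would be called on subsets inside the robust loop of the later slides):

```python
import numpy as np

def normalize(pts):
    """Translate Nx2 points to their centroid and scale so the average
    distance from the origin is sqrt(2); returns (Nx3 points, 3x3 T)."""
    c = pts.mean(axis=0)
    d = np.sqrt(((pts - c) ** 2).sum(axis=1)).mean()
    s = np.sqrt(2) / d
    T = np.array([[s, 0, -s * c[0]], [0, s, -s * c[1]], [0, 0, 1.0]])
    return (T @ np.c_[pts, np.ones(len(pts))].T).T, T

def eight_point(u, up):
    """Normalized 8-point algorithm; u, up are Nx2 arrays of matches."""
    un, T1 = normalize(u)
    upn, T2 = normalize(up)
    # Build A from the epipolar constraint u'^T F u = 0
    x, y = un[:, 0], un[:, 1]
    xp, yp = upn[:, 0], upn[:, 1]
    A = np.stack([xp * x, xp * y, xp, yp * x, yp * y, yp, x, y, np.ones(len(x))], axis=1)
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    # Enforce rank 2: zero the smallest singular value
    U, D, Vt = np.linalg.svd(F)
    F = U @ np.diag([D[0], D[1], 0.0]) @ Vt
    # De-normalize
    F = T2.T @ F @ T1
    return F / np.linalg.norm(F)
```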

Relating Images(3) ● 7-Point Algorithm – Same as the 8-point algorithm, except det(F) = 0 is enforced differently, and only 7 point correspondences are required – F = aF1 + (1-a)F2, where F1 and F2 are the 9th and 8th columns of V respectively, reshaped to 3×3 – Enforce the singularity constraint by solving det(aF1 + (1-a)F2) = 0 for a – This leads to a cubic equation in a with one or three real solutions
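One way to get the cubic is to evaluate det(aF1 + (1-a)F2) at four values of a and interpolate its coefficients (a sketch; expanding the determinant in closed form is equally common):

```python
import numpy as np

def seven_point_alphas(F1, F2):
    """Real roots a of det(a*F1 + (1-a)*F2) = 0. The determinant is a
    cubic in a, so four samples pin down its coefficients exactly."""
    def g(a):
        return np.linalg.det(a * F1 + (1 - a) * F2)
    ts = [0.0, 1.0, 2.0, 3.0]
    coeffs = np.polyfit(ts, [g(t) for t in ts], 3)   # exact interpolation
    return [r.real for r in np.roots(coeffs) if abs(r.imag) < 1e-8]
```

Each real root yields one candidate fundamental matrix aF1 + (1-a)F2, so the 7-point algorithm can return one or three solutions.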

Relating Images(4) ● In practice we have many more matches than 8, a fraction є of which are outliers ● RANSAC or Least Median of Squares (to name two) – N is the number of matches – m is the number of samples drawn – p is the sample size (i.e. 7 or 8) – є is the fraction of outliers

Relating Images(5) ● RANSAC ● while(1 - (1 - (1 - є)^p)^m < 0.95) – Select a random sample (with bucketing) – Solve for F – Determine inliers (over all matches) ● Matches within a given threshold of their epipolar lines (typically 1 or 2 pixels) are counted as inliers ● Keep the F that has the largest number of inliers
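The loop generalizes to any model. A sketch with the adaptive stopping rule above, demonstrated on 2D line fitting so the example stays self-contained (estimating F just swaps in the 7- or 8-point solver and the epipolar distance):

```python
import numpy as np

def ransac(data, fit, residual, p, thresh, conf=0.95, max_iter=1000, rng=None):
    """Adaptive RANSAC: draw samples of size p until
    1 - (1 - (1 - eps)^p)^m >= conf, where eps is the current
    outlier-fraction estimate and m the number of samples drawn."""
    rng = rng or np.random.default_rng(0)
    n = len(data)
    best_model, best_mask = None, np.zeros(n, bool)
    eps, m = 1.0, 0
    while 1 - (1 - (1 - eps) ** p) ** m < conf and m < max_iter:
        sample = data[rng.choice(n, size=p, replace=False)]
        model = fit(sample)
        mask = residual(model, data) < thresh
        if mask.sum() > best_mask.sum():        # keep the model with most inliers
            best_model, best_mask = model, mask
            eps = 1.0 - mask.sum() / n          # refine the outlier fraction
        m += 1
    return best_model, best_mask

# Demo model: a line y = a*x + b fit through a 2-point sample
def fit_line(s):
    (x1, y1), (x2, y2) = s
    a = (y2 - y1) / (x2 - x1)
    return a, y1 - a * x1

def line_residual(model, data):
    a, b = model
    return np.abs(a * data[:, 0] + b - data[:, 1])
```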

Relating Images(6) ● Least Median of Squares ● The same loop as RANSAC can be used ● Inliers are chosen as follows – Calculate the residual r_i for every match, and compute the median M_J of the squared residuals – σ = 1.4826 (1 + 5/(N - p)) √M_J – inlier if r_i² ≤ (2.5σ)², outlier otherwise – Select the F for which the inliers minimize ∑r_i²
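A sketch of the inlier test (the 1.4826 factor makes √M_J a consistent estimate of the standard deviation under Gaussian noise, and the (1 + 5/(N-p)) term compensates for small samples):

```python
import numpy as np

def lmeds_inliers(residuals_sq, p):
    """Least-median-of-squares inlier mask: robust sigma from the median
    of the squared residuals, inliers within 2.5 sigma."""
    n = len(residuals_sq)
    med = np.median(residuals_sq)
    sigma = 1.4826 * (1 + 5.0 / (n - p)) * np.sqrt(med)
    return residuals_sq <= (2.5 * sigma) ** 2
```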

Relating Images(7) ● Distance metric: Euclidean distance from the point u' to its epipolar line Fu – Fu ~ l', Fᵀu' ~ l – d(u', Fu) = |u'ᵀFu| / √((Fu)₁² + (Fu)₂²) ● Residual – r_i² = d²(u'_i, Fu_i) + d²(u_i, Fᵀu'_i)
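In code (points given in inhomogeneous pixel coordinates, so the homogeneous third coordinate is 1):

```python
import numpy as np

def line_dist(l, x):
    """Distance from homogeneous point x to the line l = (a, b, c)."""
    return abs(l @ x) / np.hypot(l[0], l[1])

def epipolar_residual_sq(F, u, up):
    """Symmetric squared epipolar distance for a match (u, u'):
    r^2 = d^2(u', F u) + d^2(u, F^T u')."""
    uh = np.array([u[0], u[1], 1.0])
    uph = np.array([up[0], up[1], 1.0])
    return line_dist(F @ uh, uph) ** 2 + line_dist(F.T @ uph, uh) ** 2
```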

Relating Images(8) ● Non-linear minimization (Levenberg-Marquardt) – Often the fundamental matrix found by the robust method is still not good enough – Noise in the input data, undetected outliers, errors introduced when enforcing the singularity constraint,... – Parameterize F minimally (7 degrees of freedom) – Solve min ∑ d²(u'_i, Fu_i) + d²(u_i, Fᵀu'_i) over the inlier matches – The solution requires the derivative of the distance metric and possibly up to 36 different parameterizations of F (e.g. for epipoles at infinity)

Projective Reconstruction(1) ● Projective Transform – x = PX – P1ᴹ = K[I₃ | 0₃] – P2ᴹ = K[Rᵀ | -Rᵀt] – K = [f s cx; 0 a·f cy; 0 0 1], with focal length f, skew s, aspect ratio a, and principal point (cx, cy)

Projective Reconstruction(2) ● Initial projection matrices (first method) – Normalize images: m̄ = K*⁻¹m – P1 = [I₃ | 0₃] – P2 = [[e₂]ₓF + e₂πᵀ | σe₂] – Choose π such that [e₂]ₓF + e₂πᵀ = R* = I – σ can be arbitrarily set to 1; cx = Image.width/2, cy = Image.height/2, s = 0, aspect ratio = 1 – For the focal length, several guesses must be tried, keeping the one for which the most points are reconstructed

Projective Reconstruction(3) ● Initial structure (first method) – u = PX, where u = w(u, v, 1)ᵀ and X = (x, y, z, w)ᵀ – wu = p₁ᵀX, wv = p₂ᵀX, w = p₃ᵀX ● Each point in each view gives two equations – [u·p₃ᵀ - p₁ᵀ]X = 0, [v·p₃ᵀ - p₂ᵀ]X = 0 ● Resulting in a linear system AX = 0 – For M points and N views the stacked system is block diagonal, so each point can be solved independently with A of size 2N × 4 and X of size 4 × 1
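A sketch of the per-point linear solve: stack the two equations from each view and take the null vector of A via the SVD, exactly as for Af = 0 earlier:

```python
import numpy as np

def triangulate(Ps, uvs):
    """Linear triangulation of one point from N views.
    Ps:  list of 3x4 projection matrices
    uvs: list of (u, v) image coordinates, one per view."""
    rows = []
    for P, (u, v) in zip(Ps, uvs):
        rows.append(u * P[2] - P[0])    # [u*p3^T - p1^T] X = 0
        rows.append(v * P[2] - P[1])    # [v*p3^T - p2^T] X = 0
    A = np.array(rows)                  # 2N x 4
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                          # null vector = homogeneous 3D point
    return X[:3] / X[3]
```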

Projective Reconstruction(4) ● Perspective Factorization (second method)

Projective Reconstruction(5) ● Perspective Factorization (second method)

Projective Reconstruction(6) ● Other algorithms in projective reconstruction – Triangulation: a more robust method for reconstructing the initial structure – Iterated Extended Kalman Filters: a method to update the structure when more than two views are available – Bundle Adjustment: another non-linear minimization of the structure; the solver must exploit the sparse structure of the problem matrix, since for a typical image sequence the minimization otherwise runs over more than 6000 variables

Self-calibration(1) ● Self-calibration is the process that upgrades a projective reconstruction to a metric reconstruction

Self-calibration(2) ● The image of the absolute conic

Self-calibration(3) ● Constraints on the DIAC (dual image of the absolute conic) ● n × (#known) + (n - 1) × (#fixed) ≥ 8

Self-calibration(4) ● Linear algorithm

Self-calibration(5) ● Linear algorithm

Self-calibration(6) ● Example critical motion sequences – Pure translation: scaling of the optical axis (1 DOF) – Pure rotation: arbitrary position of the plane at infinity π∞ (3 DOF) – Orbital motion: ? – Planar motion: ?

Results(1) ● Feature extraction (first and third images)

Results(2) ● Feature matching

Results(3) ● Inliers after RANSAC

Results(4) ● Inliers after tracking

Results(5) ● Epipolar lines

Results(6) ● Metric reconstruction using 3 virtual cameras

Future Work ● Finish the projective reconstruction and self-calibration algorithms ● Non-linear minimization for F, projective reconstruction, and self-calibration ● Dense matching (image rectification followed by correlation along epipolar lines) ● Model building (Delaunay triangulation) ● Critical motion sequences