Structure from Motion Computer Vision CS 543 / ECE 549 University of Illinois Derek Hoiem 03/06/12 Many slides adapted from Lana Lazebnik, Silvio Saverese, Steve Seitz, Martial Hebert

This class: structure from motion
– Recap of epipolar geometry: depth from two views
– Projective structure from motion
– Affine structure from motion

Recap: Epipoles
– Point x in the left image corresponds to epipolar line l’ in the right image
– The epipolar line passes through the epipole (the intersection of the cameras’ baseline with the image plane)

Recap: Fundamental Matrix
The fundamental matrix maps a point in one image to a line in the other. If x and x’ correspond to the same 3D point X: x’^T F x = 0

Recap: Automatic Estimation of F
8-Point Algorithm for Recovering F. Assume we have matched points x ↔ x’, with outliers. Correspondence relation: x’^T F x = 0.
1. Normalize image coordinates
2. RANSAC with 8 points:
– Randomly sample 8 correspondences
– Compute F via least squares
– Enforce the rank-2 constraint on F by SVD
– Repeat, and choose the F with the most inliers
3. De-normalize F

Recap
We can get projection matrices P and P’ from F up to a projective ambiguity (see HZ). Code:
function P = vgg_P_from_F(F)
[U,S,V] = svd(F);
e = U(:,3);
P = [-vgg_contreps(e)*F e];
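The same construction can be sketched in NumPy (the course uses MATLAB; this follows the HZ formula P’ = [[e’]_x F | e’], which agrees with the vgg code up to a projectively irrelevant sign):

```python
import numpy as np

def skew(v):
    """Cross-product (skew-symmetric) matrix [v]_x."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def cameras_from_F(F):
    """Canonical camera pair from a fundamental matrix.

    Returns P = [I | 0] and P' = [[e']_x F | e'], where the epipole e'
    is the left null vector of F. As the slide notes, this pair is
    correct only up to a projective ambiguity."""
    U, _, _ = np.linalg.svd(F)
    e2 = U[:, 2]                                  # left null vector: F^T e' = 0
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = np.hstack([skew(e2) @ F, e2.reshape(3, 1)])
    return P1, P2
```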

Recap Fundamental matrix song

Triangulation: Linear Solution
Generally, the rays C → x and C’ → x’ will not exactly intersect. We can solve via SVD, finding a least-squares solution to a system of equations. Further reading: HZ.

Triangulation: Linear Solution
Given P, P’, x, x’:
1. Precondition points and projection matrices
2. Create matrix A
3. [U, S, V] = svd(A)
4. X = V(:, end)
Pros and cons: works for any number of corresponding images, but is not projectively invariant.
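The four steps above can be sketched in NumPy for the two-view case (preconditioning is omitted for brevity, so this is the bare DLT step, not a production implementation):

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point from two views.

    P1, P2: 3x4 projection matrices; x1, x2: inhomogeneous 2D points (u, v).
    Returns the inhomogeneous 3D point."""
    # Each view contributes two rows of the form u*p3 - p1 and v*p3 - p2,
    # derived from x cross (P X) = 0.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                      # least-squares null vector of A
    return X[:3] / X[3]
```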

Triangulation: Non-linear Solution
Minimize the reprojection error while satisfying the epipolar constraint x’^T F x = 0. Figure source: Robertson and Cipolla (Chapter 13 of Practical Image Processing and Computer Vision)

Triangulation: Non-linear Solution
Minimize the reprojection error while satisfying the epipolar constraint x’^T F x = 0. The solution reduces to finding the root of a 6th-degree polynomial in t that minimizes the reprojection cost. Further reading: HZ p. 318

Projective structure from motion
Given: m images of n fixed 3D points, x_ij = P_i X_j, i = 1, …, m, j = 1, …, n.
Problem: estimate the m projection matrices P_i and the n 3D points X_j from the mn corresponding 2D points x_ij.
With no calibration info, cameras and points can only be recovered up to a 4x4 projective transformation Q: X → QX, P → PQ^-1.
We can solve for structure and motion when 2mn >= 11m + 3n – 15. For two cameras, at least 7 points are needed.
Slides from Lana Lazebnik
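The counting argument can be checked directly. This tiny helper (illustrative only, not from the slides) finds the smallest n satisfying 2mn >= 11m + 3n – 15:

```python
def min_points_projective(m):
    """Smallest n with 2*m*n >= 11*m + 3*n - 15.

    Each camera has 11 dof, each point 3, and the projective
    ambiguity removes 15."""
    n = 1
    while 2 * m * n < 11 * m + 3 * n - 15:
        n += 1
    return n
```

For m = 2 this returns 7, matching the slide's claim that two cameras need at least 7 points.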

Sequential structure from motion
– Initialize motion from two images using the fundamental matrix
– Initialize structure by triangulation
– For each additional view:
– Determine the projection matrix of the new camera using all the known 3D points that are visible in its image (calibration)
– Refine and extend the structure: compute new 3D points, re-optimize existing points that are also seen by this camera (triangulation)
– Refine structure and motion: bundle adjustment

Bundle adjustment
– Non-linear method for refining structure and motion
– Minimizes the reprojection error: E(P, X) = Σ_i Σ_j d(x_ij, P_i X_j)²
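As a sketch of the objective (not the optimizer — bundle adjustment typically minimizes this with Levenberg–Marquardt), the total squared reprojection error can be written in NumPy as:

```python
import numpy as np

def reprojection_error(Ps, Xs, xs):
    """Total squared reprojection error sum_ij ||x_ij - proj(P_i X_j)||^2.

    Ps: (m, 3, 4) projection matrices; Xs: (n, 4) homogeneous 3D points;
    xs: (m, n, 2) observed 2D points. Bundle adjustment minimizes this
    jointly over all P_i and X_j."""
    err = 0.0
    for P, x_row in zip(Ps, xs):
        proj = (P @ Xs.T).T                 # (n, 3) homogeneous projections
        proj = proj[:, :2] / proj[:, 2:3]   # perspective divide
        err += np.sum((x_row - proj) ** 2)
    return err
```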

Auto-calibration: determining intrinsic camera parameters directly from uncalibrated images
– For example, we can use the constraint that a moving camera has a fixed intrinsic matrix: compute an initial projective reconstruction and find the 3D projective transformation matrix Q such that all camera matrices take the form P_i = K [R_i | t_i]
– Can also use constraints on the form of the calibration matrix, such as zero skew

Summary so far
From two images, we can:
– Recover the fundamental matrix F
– Recover canonical cameras P and P’ from F
– Estimate the 3D position (if K is known) that corresponds to each pixel
For a moving camera, we can:
– Initialize by computing F, P, X for two images
– Sequentially add new images, computing a new P, refining X, and adding points
– Auto-calibrate, assuming a fixed calibration matrix, to upgrade to a similarity transform

Photosynth: Noah Snavely, Steven M. Seitz, Richard Szeliski, “Photo tourism: Exploring photo collections in 3D,” SIGGRAPH 2006

3D from multiple images Building Rome in a Day: Agarwal et al. 2009

Structure from motion under orthographic projection
3D Reconstruction of a Rotating Ping-Pong Ball: C. Tomasi and T. Kanade. Shape and motion from image streams under orthography: A factorization method. IJCV, 9(2), November 1992.
Orthographic projection is a reasonable choice when:
– The change in depth of points in the scene is much smaller than the distance to the camera
– The cameras do not move towards or away from the scene

Orthographic projection for rotated/translated camera (figure: 3D point X projects to image point x along image axes a_1, a_2)

Affine structure from motion
Affine projection is a linear mapping plus a translation in inhomogeneous coordinates: x = AX + b, where the rows of A are the image axes a_1, a_2 and b is the projection of the world origin.
1. We are given corresponding 2D points (x) in several frames
2. We want to estimate the 3D points (X) and the affine parameters of each camera (A)

Step 1: simplify by getting rid of t: shift to the centroid of the points for each camera. Each centered (normalized) 2D observation is then a linear (affine) mapping of the corresponding centered 3D point: x̂_ij = A_i X̂_j

Suppose we know the 3D points and affine camera parameters; then we can compute the observed 2D positions of each point: [2D image points (2m × n)] = [camera parameters (2m × 3)] × [3D points (3 × n)]

Affine structure from motion
– Centering: subtract the centroid of the image points for each camera
– For simplicity, assume that the origin of the world coordinate system is at the centroid of the 3D points
– After centering, each normalized point x̂_ij is related to the 3D point X_j by x̂_ij = A_i X_j

Affine structure from motion
Given: m images of n fixed 3D points: x_ij = A_i X_j + b_i, i = 1, …, m, j = 1, …, n.
Problem: use the mn correspondences x_ij to estimate the m projection matrices A_i and translation vectors b_i, and the n points X_j.
The reconstruction is defined up to an arbitrary affine transformation Q (12 degrees of freedom).
We have 2mn knowns and 8m + 3n unknowns (minus 12 dof for the affine ambiguity), so we must have 2mn >= 8m + 3n – 12. For two views, we need four point correspondences.

What if we instead observe corresponding 2D image points? Can we recover the camera parameters and 3D points? What rank is the 2m × n matrix of 2D points?

Affine structure from motion
Let’s create a 2m × n data (measurement) matrix D from the centered tracks. D factors as cameras (2m × 3) times points (3 × n), so the measurement matrix D = MS must have rank 3!
C. Tomasi and T. Kanade. Shape and motion from image streams under orthography: A factorization method. IJCV, 9(2), November 1992.
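The rank-3 claim is easy to check numerically. Here M and S are random stand-ins for the stacked (centered) camera and point matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 5, 20                            # cameras, points
M = rng.standard_normal((2 * m, 3))     # stacked affine camera rows (2m x 3)
S = rng.standard_normal((3, n))         # centered 3D points (3 x n)
D = M @ S                               # centered measurement matrix (2m x n)

# D is 10 x 20, but every column lies in the 3D column space of M
print(np.linalg.matrix_rank(D))         # 3
```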

Factorizing the measurement matrix (source: M. Hebert)
– Goal: factor D into motion A (2m × 3) and shape X (3 × n) with D = AX
– Singular value decomposition of D: D = U W V^T
– Since D has rank 3, only the first three singular values are nonzero; keep U_3, W_3, V_3
– Obtaining a factorization from SVD: A = U_3 W_3^(1/2) and X = W_3^(1/2) V_3^T

Affine ambiguity
The decomposition is not unique: we get the same D by using any invertible 3×3 matrix C and applying the transformations A → AC, X → C^-1 X. That is because we have only an affine transformation and have not enforced any Euclidean constraints (such as forcing the image axes to be perpendicular). Source: M. Hebert

Eliminating the affine ambiguity
Orthographic projection: the image axes are perpendicular and of unit length: a_1 · a_2 = 0 and |a_1|² = |a_2|² = 1. Source: M. Hebert

Solve for orthographic constraints
– The constraints give three equations for each image i: ã_i1 L ã_i1^T = 1, ã_i2 L ã_i2^T = 1, ã_i1 L ã_i2^T = 0, where ã_i1 and ã_i2 are the two rows of A for image i
– Solve this linear system for the symmetric matrix L = CC^T
– Recover C from L by Cholesky decomposition: L = CC^T
– Update A and X: Ã = AC, X̃ = C^-1 X

Algorithm summary (source: M. Hebert)
Given: m images and n tracked features x_ij
1. For each image i, center the feature coordinates
2. Construct a 2m × n measurement matrix D:
– Column j contains the projection of point j in all views
– Row i contains one coordinate of the projections of all the n points in image i
3. Factorize D:
– Compute the SVD: D = U W V^T
– Create U_3 by taking the first 3 columns of U
– Create V_3 by taking the first 3 columns of V
– Create W_3 by taking the upper-left 3 × 3 block of W
4. Create the motion (affine) and shape (3D) matrices: A = U_3 W_3^(1/2) and X = W_3^(1/2) V_3^T
5. Eliminate the affine ambiguity
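The summary above can be sketched end-to-end on synthetic data (in NumPy rather than the course's MATLAB; the metric upgrade of step 5 is omitted, so A and X are recovered only up to the affine ambiguity):

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 6, 30
A_true = rng.standard_normal((2 * m, 3))   # stacked affine cameras
X_true = rng.standard_normal((3, n))       # 3D points
b = rng.standard_normal((2 * m, 1))        # per-camera translations
D = A_true @ X_true + b                    # observed 2D tracks (2m x n)

# Step 1: center each row. Since D = A X + b 1^T, subtracting the row
# means removes b and leaves D_c = A (X - x_bar 1^T), which has rank 3.
D_c = D - D.mean(axis=1, keepdims=True)

# Steps 3-4: rank-3 factorization via SVD
U, W, Vt = np.linalg.svd(D_c, full_matrices=False)
A = U[:, :3] * np.sqrt(W[:3])              # motion, 2m x 3  (U_3 W_3^1/2)
X = np.sqrt(W[:3])[:, None] * Vt[:3]       # shape,  3 x n   (W_3^1/2 V_3^T)

# A and X reproduce the centered data; residual should be near zero
print(np.abs(D_c - A @ X).max())
```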

Dealing with missing data
So far, we have assumed that all points are visible in all views. In reality, the measurement matrix is typically sparse: each camera observes only a subset of the points. One solution:
– Solve using a dense submatrix of visible points
– Iteratively add new cameras

Reconstruction results (your HW 3.4): C. Tomasi and T. Kanade. Shape and motion from image streams under orthography: A factorization method. IJCV, 9(2), November 1992.

Further reading
– Short explanation of affine SfM: class notes from Lischinski and Gruber
– Clear explanation of epipolar geometry and projective SfM: ok/2008-SFM-chapters.pdf

Review of Affine SfM from Interest Points
1. Detect interest points (e.g., Harris):
– Compute image derivatives I_x, I_y
– Square of derivatives: I_x², I_y², I_x I_y
– Gaussian filter: g(I_x²), g(I_y²), g(I_x I_y)
– Cornerness function (har): large when both eigenvalues of the second moment matrix are strong
– Non-maxima suppression
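The derivative/smoothing/cornerness steps above can be sketched in NumPy (response map only, without non-maxima suppression; the sigma and alpha values are conventional choices, not taken from the slide):

```python
import numpy as np

def gauss1d(sigma):
    """Normalized 1D Gaussian kernel."""
    r = int(3 * sigma)
    x = np.arange(-r, r + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def smooth(img, sigma):
    """Separable Gaussian blur using two 1D convolutions."""
    k = gauss1d(sigma)
    out = np.apply_along_axis(lambda v: np.convolve(v, k, mode='same'), 0, img)
    return np.apply_along_axis(lambda v: np.convolve(v, k, mode='same'), 1, out)

def harris(img, sigma=1.5, alpha=0.05):
    """Harris cornerness map: det(M) - alpha * trace(M)^2 per pixel."""
    Iy, Ix = np.gradient(img.astype(float))    # 1. image derivatives
    Sxx = smooth(Ix * Ix, sigma)               # 2-3. smoothed 2nd moments
    Syy = smooth(Iy * Iy, sigma)
    Sxy = smooth(Ix * Iy, sigma)
    det = Sxx * Syy - Sxy ** 2                 # 4. cornerness function
    tr = Sxx + Syy
    return det - alpha * tr ** 2
```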

Review of Affine SfM from Interest Points
2. Correspondence via Lucas-Kanade tracking:
a) Initialize (x’, y’) = (x, y) at the original position
b) Compute (u, v) from the 2nd moment matrix of the feature patch in the first image and the temporal difference I_t = I(x’, y’, t+1) − I(x, y, t)
c) Shift the window by (u, v): x’ = x’ + u; y’ = y’ + v
d) Recalculate I_t
e) Repeat steps b–d until the change is small
– Use interpolation for subpixel values

Review of Affine SfM from Interest Points
3. Get the affine camera matrix and 3D points using Tomasi-Kanade factorization, then solve for the orthographic constraints

HW 3: Problem 4 summary
Tips:
– Helpful MATLAB functions: interp2, meshgrid, ordfilt2 (for getting local maxima), svd, chol
– When selecting interest points, choose an appropriate threshold on the Harris criterion or the smaller eigenvalue, or choose the top N points
– Vectorize to make tracking fast (interp2 will be the bottleneck)
– If you get stuck on one part, you can use the included intermediate results
– Get tracking working on one point for a few frames before trying to get it working for all points
Extra optional problems:
– Affine verification
– Missing track completion
– Optical flow
– Coarse-to-fine tracking

Tips for HW 3
Problem 1: vanishing points
– Use lots of lines to estimate vanishing points
– For estimating a VP from many lines, see the single-view geometry chapter, or use a robust estimator of a central intersection point
– For obtaining the intrinsic camera matrix, a numerical solver (e.g., fsolve in MATLAB) may be helpful
Problem 3: epipolar geometry
– Use reprojection distance for the inlier check (make sure to compute the line-to-point distance correctly)
Problem 4: structure from motion
– Use MATLAB’s chol and svd
– If you weren’t able to get tracking to work in HW 2, you can use the provided points

Distance of a point to an epipolar line
Given a point x in the first image, the corresponding epipolar line in the second image is l = Fx = [a b c]^T. The distance from x’ = [u v 1]^T to this line is |au + bv + c| / sqrt(a² + b²).
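This is the line-to-point distance the HW 3 tips refer to; a minimal NumPy version:

```python
import numpy as np

def point_line_distance(F, x, x2):
    """Distance from x' to the epipolar line l = F x.

    F: 3x3 fundamental matrix; x, x2: inhomogeneous 2D points (u, v)
    in the first and second image."""
    a, b, c = F @ np.array([x[0], x[1], 1.0])   # l = [a, b, c]
    return abs(a * x2[0] + b * x2[1] + c) / np.hypot(a, b)
```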

Next class Clustering and using clustered interest points for matching images in a large database