Last lecture: Passive Stereo, Spacetime Stereo, Multiple View Stereo

Today: Structure from Motion. Given pixel correspondences, how do we compute 3D structure and camera motion? Slides stolen from Prof. Yung-Yu Chuang.

Epipolar geometry & fundamental matrix

The epipolar geometry: what if only C, C’ and x are known?

The epipolar geometry: C, C’, x, x’ and X are coplanar. (epipolar geometry demo)

The epipolar geometry: all points on the epipolar plane π project onto the epipolar lines l and l’.

The epipolar geometry: the family of epipolar planes π gives families of lines l and l’ that all intersect at the epipoles e and e’.

The epipolar geometry:
–epipolar plane = plane containing the baseline
–epipolar line = intersection of an epipolar plane with the image plane
–epipole = intersection of the baseline with the image plane = projection of the other camera's projection center
(epipolar geometry demo)

The fundamental matrix F. [Figure: two cameras with centers C and C’, relative rotation R, translation T = C’-C, and image points p, p’ of a 3D point X.] The equation of the epipolar plane through X is the coplanarity condition (p - T)^T (T x p) = 0.

The fundamental matrix F. Writing p’ = R(p - T) and using [T]x for the cross-product matrix of T, the coplanarity condition becomes p’^T E p = 0 with E = R [T]x, the essential matrix.

The fundamental matrix F. Let M and M’ be the intrinsic matrices, so that x = M p and x’ = M’ p’. Substituting into p’^T E p = 0 gives x’^T M’^-T E M^-1 x = 0, so the fundamental matrix is F = M’^-T E M^-1.

The fundamental matrix F. The fundamental matrix is the algebraic representation of epipolar geometry: for any pair of corresponding points x ↔ x’ in the two images it satisfies x’^T F x = 0.

The fundamental matrix F. F is the unique 3x3, rank-2 matrix that satisfies x’^T F x = 0 for all x ↔ x’.
1. Transpose: if F is the fundamental matrix for (P, P’), then F^T is the fundamental matrix for (P’, P).
2. Epipolar lines: l’ = F x and l = F^T x’.
3. Epipoles: e’ lies on all epipolar lines, so e’^T F x = 0 for all x, hence e’^T F = 0; similarly F e = 0.
4. F has 7 degrees of freedom: 3x3 = 9, minus 1 for the overall scale (homogeneous), minus 1 for the rank-2 constraint.
5. F maps a point x to a line l’ = F x; this mapping is not invertible.
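
A minimal MATLAB sketch (not from the slides) of how properties 2 and 3 are used in practice. F is an estimated fundamental matrix; x, xp are assumed corresponding points in homogeneous coordinates (illustrative variable names).

% Epipolar lines and epipoles from a given 3x3 fundamental matrix F.
% x and xp are corresponding points as 3x1 homogeneous column vectors.
lp = F  * x;               % epipolar line of x in the second image  (l' = F x)
l  = F' * xp;              % epipolar line of x' in the first image  (l = F^T x')

[U, D, V] = svd(F);        % epipoles = null vectors of F (F e = 0, e'^T F = 0)
e  = V(:,3) / V(3,3);      % epipole in the first image
ep = U(:,3) / U(3,3);      % epipole in the second image
                           % (assumes the epipoles are not at infinity)

res = xp' * F * x;         % epipolar constraint; close to zero for a correct match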

F can be used to
–simplify matching: the search for a corresponding point reduces to a 1D search along the epipolar line
–detect wrong matches: correspondences that violate x’^T F x ≈ 0 can be rejected

Estimation of F: the 8-point algorithm. The fundamental matrix F is defined by x’^T F x = 0 for any pair of matches x ↔ x’ in the two images. Let x = (u, v, 1)^T and x’ = (u’, v’, 1)^T; each match then gives one linear equation in the entries of F:
u’u f11 + u’v f12 + u’ f13 + v’u f21 + v’v f22 + v’ f23 + u f31 + v f32 + f33 = 0

8-point algorithm. Stacking the nine entries of F into a vector f, the matches give a linear system A f = 0. In reality, instead of solving A f = 0 exactly, we seek the f that minimizes ||A f|| subject to ||f|| = 1; the solution is the least eigenvector of A^T A, i.e. the right singular vector of A associated with its smallest singular value.

8-point algorithm. To enforce that F has rank 2, F is replaced by the F’ that minimizes ||F - F’|| subject to det F’ = 0. This is achieved by SVD: let F = U diag(σ1, σ2, σ3) V^T with σ1 ≥ σ2 ≥ σ3; then F’ = U diag(σ1, σ2, 0) V^T is the solution.

8-point algorithm
% Build the constraint matrix
A = [x2(1,:)'.*x1(1,:)'   x2(1,:)'.*x1(2,:)'   x2(1,:)' ...
     x2(2,:)'.*x1(1,:)'   x2(2,:)'.*x1(2,:)'   x2(2,:)' ...
     x1(1,:)'             x1(2,:)'             ones(npts,1) ];

[U,D,V] = svd(A);

% Extract fundamental matrix from the column of V
% corresponding to the smallest singular value.
F = reshape(V(:,9),3,3)';

% Enforce rank-2 constraint
[U,D,V] = svd(F);
F = U*diag([D(1,1) D(2,2) 0])*V';

8-point algorithm. Pros: it is linear, easy to implement, and fast. Cons: it is susceptible to noise.

Problem with the 8-point algorithm: the columns of the data matrix differ by orders of magnitude (quadratic terms like u’u are ~10000, linear terms like u are ~100, and the last column is 1), so plain least squares yields poor results.

Normalized 8-point algorithm: transform the image coordinates to roughly [-1,1] x [-1,1] (e.g., a 700 x 500 image with corners (0,0), (700,0), (0,500), (700,500) is mapped to corners (-1,-1), (1,-1), (-1,1), (1,1)); normalized least squares then yields good results.

Normalized 8-point algorithm
1. Transform the input by x̂ = T x and x̂’ = T’ x’, where T and T’ are the normalizing transformations.
2. Call the 8-point algorithm on the normalized matches (x̂, x̂’) to obtain F̂.
3. Denormalize: F = T’^T F̂ T.

Normalized 8-point algorithm
[x1, T1] = normalise2dpts(x1);
[x2, T2] = normalise2dpts(x2);

A = [x2(1,:)'.*x1(1,:)'   x2(1,:)'.*x1(2,:)'   x2(1,:)' ...
     x2(2,:)'.*x1(1,:)'   x2(2,:)'.*x1(2,:)'   x2(2,:)' ...
     x1(1,:)'             x1(2,:)'             ones(npts,1) ];

[U,D,V] = svd(A);
F = reshape(V(:,9),3,3)';

[U,D,V] = svd(F);
F = U*diag([D(1,1) D(2,2) 0])*V';

% Denormalise
F = T2'*F*T1;

Normalization
function [newpts, T] = normalise2dpts(pts)
    c = mean(pts(1:2,:)')';            % Centroid
    newp(1,:) = pts(1,:)-c(1);         % Shift origin to centroid.
    newp(2,:) = pts(2,:)-c(2);
    meandist = mean(sqrt(newp(1,:).^2 + newp(2,:).^2));
    scale = sqrt(2)/meandist;          % Mean distance from origin becomes sqrt(2).
    T = [scale   0    -scale*c(1)
         0     scale  -scale*c(2)
         0       0        1       ];
    newpts = T*pts;

RANSAC
repeat
    select a minimal sample (8 matches)
    compute candidate solution(s) for F
    determine the inliers
until the probability that a correct solution has been found (a function of #inliers and #samples) exceeds 95%, or a maximum number of iterations is reached
compute F from all inliers
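
A sketch of this loop in MATLAB (not from the slides). It assumes homogeneous matches x1, x2 (3xN) and a helper eight_point_F(x1, x2) implementing the normalized 8-point algorithm above; the helper name, threshold and iteration cap are illustrative assumptions.

% RANSAC estimation of F from putative matches x1 <-> x2.
npts   = size(x1, 2);
best   = [];                 % best inlier set so far
p      = 0.95;               % required confidence (the 95% above)
N      = Inf;  trials = 0;  max_trials = 2000;  thresh = 1e-3;

while trials < N && trials < max_trials
    idx    = randperm(npts);
    sample = idx(1:8);                           % minimal sample (8 matches)
    F = eight_point_F(x1(:,sample), x2(:,sample));

    % Inliers: matches with small Sampson distance to the candidate F.
    Fx1  = F  * x1;
    Ftx2 = F' * x2;
    d = sum(x2 .* Fx1, 1).^2 ./ ...
        (Fx1(1,:).^2 + Fx1(2,:).^2 + Ftx2(1,:).^2 + Ftx2(2,:).^2);
    inliers = find(d < thresh);

    if numel(inliers) > numel(best)
        best = inliers;
        w    = numel(inliers) / npts;            % inlier ratio
        N    = log(1 - p) / log(1 - w^8 + eps);  % #samples needed for confidence p
    end
    trials = trials + 1;
end

F = eight_point_F(x1(:,best), x2(:,best));       % recompute F from all inliers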

Results (ground truth)

Results (8-point algorithm)

Results (normalized 8-point algorithm)

From F to R, T. If we know the camera intrinsic parameters, F can be upgraded to the essential matrix E, which can then be decomposed into R and T (Hartley and Zisserman, Multiple View Geometry, 2nd edition, p. 259).
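
A sketch of the standard SVD-based decomposition from Hartley & Zisserman, assuming K1 and K2 are the known intrinsic matrices (illustrative names, not code from the lecture).

% Recover candidate (R, t) from F given intrinsics K1, K2.
E = K2' * F * K1;                   % essential matrix from F and the intrinsics

[U, S, V] = svd(E);
if det(U) < 0, U = -U; end          % keep proper rotations
if det(V) < 0, V = -V; end

W  = [0 -1 0; 1 0 0; 0 0 1];
R1 = U * W  * V';                   % two candidate rotations
R2 = U * W' * V';
t  = U(:,3);                        % translation, up to sign and scale

% Four candidate camera pairs: (R1, t), (R1, -t), (R2, t), (R2, -t).
% The physically valid one is the pair for which triangulated points have
% positive depth in both cameras (cheirality test).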

Application: View morphing

Main trick: prewarp with a homography to rectify the images, so that the two views are parallel, because linear interpolation works when the views are parallel.

Problem with morphing Without rectification

[Figure: view morphing pipeline – input images, prewarp (homographies), morph, output.]

Video demo

Triangulation. Problem: given some points in correspondence across two or more images (taken from calibrated cameras), {(u_j, v_j)}, compute the 3D location X.

Triangulation. Method I: intersect viewing rays in 3D, minimizing Σ_j ||X - (C_j + s_j V_j)||^2, where X is the unknown 3D point, C_j is the optical center of camera j, V_j is the viewing ray for pixel (u_j, v_j), and s_j is the unknown distance along V_j. Advantage: geometrically intuitive.

Triangulation. Method II: solve linear equations in X; advantage: very simple. Method III: non-linear minimization; advantage: most accurate (minimizes image-plane error).
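
A minimal sketch of Method II (linear / DLT triangulation) for one point seen in m views. P{j} (3x4 projection matrices) and the measured pixels (u(j), v(j)) are assumed inputs with illustrative names.

% Linear triangulation: each view contributes two equations in the
% homogeneous 3D point X via its projection matrix.
A = zeros(2*m, 4);
for j = 1:m
    A(2*j-1, :) = u(j) * P{j}(3,:) - P{j}(1,:);
    A(2*j,   :) = v(j) * P{j}(3,:) - P{j}(2,:);
end
[~, ~, V] = svd(A);
X = V(:,4);               % homogeneous solution: smallest right singular vector
X = X(1:3) / X(4);        % Euclidean 3D point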

Structure from motion

Structure from motion: the automatic recovery of camera motion and scene structure from two or more images. It is a self-calibration technique, also called automatic camera tracking or matchmoving. The camera viewpoints are unknown.

Applications. For computer vision: multiple-view shape reconstruction, novel view synthesis, and autonomous vehicle navigation. For film production: seamless insertion of CGI into live-action backgrounds.

Structure from motion: the SfM pipeline is 2D feature tracking → 3D estimation → optimization (bundle adjustment) → geometry fitting.

Structure from motion. Step 1: track features.
Detect good features (Shi & Tomasi corners, SIFT).
Find correspondences between frames:
–Lucas & Kanade-style motion estimation
–window-based correlation
–SIFT matching

Structure from Motion. Step 2: estimate motion and structure.
–Simplified projection model, e.g., [Tomasi 92]
–2 or 3 views at a time [Hartley 00]

Structure from Motion. Step 3: refine the estimates.
–“Bundle adjustment” in photogrammetry
–Other iterative methods

Structure from Motion. Step 4: recover surfaces (image-based triangulation, silhouettes, stereo, …). [Figure: a good mesh.]

Example: Photo Tourism

Factorization methods

Problem statement

Other projection models

SfM under orthographic projection. Under orthographic projection, a 2D image point q (2x1) is related to a 3D scene point p (3x1) by q = Π p + c, where Π is the 2x3 orthographic projection matrix and c is the image offset.
Trick: choose the scene origin to be the centroid of the 3D points, and the image origin in each view to be the centroid of the 2D points. This allows us to drop the camera translation: q = Π p.
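
A minimal sketch of the centering trick in MATLAB, assuming u_meas and v_meas are m x n matrices of tracked u- and v-coordinates (hypothetical names).

% Subtracting the centroid of the n points in every image cancels the
% unknown per-image offset, leaving W = [u; v] with pure projection terms.
n = size(u_meas, 2);
W = [u_meas - mean(u_meas, 2) * ones(1, n); ...
     v_meas - mean(v_meas, 2) * ones(1, n)];    % 2m x n centered measurement matrix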

factorization (Tomasi & Kanade) projection of n features in one image: projection of n features in m images W measurement M motion S shape Key Observation: rank(W) <= 3

Factorization. Technique:
–W is at most rank 3 (assuming no noise).
–We can use the singular value decomposition to factor W as W = M’ S’.
–The factorization is not unique: S’ differs from the true shape S by an invertible 3x3 linear transformation A, with M = M’ A and S = A^-1 S’.
–Solve for A by enforcing metric constraints on M.
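
A sketch of the rank-3 factorization in MATLAB (illustrative, following the technique above). W is the centered 2m x n measurement matrix from the previous sketch; Mhat and Shat are my names for M’ and S’.

% Rank-3 factorization of the measurement matrix via SVD.
[U, D, V] = svd(W);

U3 = U(:, 1:3);  D3 = D(1:3, 1:3);  V3 = V(:, 1:3);   % keep the top-3 components

Mhat = U3 * sqrt(D3);      % 2m x 3 affine motion  (M' above)
Shat = sqrt(D3) * V3';     % 3 x n  affine shape   (S' above)

% The factorization is only defined up to an invertible 3x3 matrix A, since
% W = (Mhat*A) * (A\Shat); A is fixed by the metric constraints on the next slide.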

Metric constraints. For an orthographic camera, the rows of each projection matrix Π are orthonormal: writing a_i^T and b_i^T for the two rows of Π_i, a_i^T a_i = b_i^T b_i = 1 and a_i^T b_i = 0. Enforcing the “metric” constraints means computing A such that the rows of M = M’ A have these properties. Trick (not in the original Tomasi/Kanade paper, but in follow-up work): the constraints are linear in G = A A^T. Solve for G first by writing these equations for every Π_i in M’, then recover A from G = A A^T (e.g., by SVD or eigen-decomposition).
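
A sketch of this metric upgrade in MATLAB (illustrative, not the lecture's code). It assumes Mhat (2m x 3) is stacked so that row i and row m+i are the two image-axis vectors a_i, b_i of camera i; other stackings only change the indexing.

% Solve the linear system for the symmetric G = A*A' from
%   a_i'*G*a_i = 1,   b_i'*G*b_i = 1,   a_i'*G*b_i = 0.
m = size(Mhat, 1) / 2;

% One linear-system row for a constraint u'*G*v = rhs, with unknowns
% g = [G11 G12 G13 G22 G23 G33]'.
row = @(u, v) [u(1)*v(1), u(1)*v(2)+u(2)*v(1), u(1)*v(3)+u(3)*v(1), ...
               u(2)*v(2), u(2)*v(3)+u(3)*v(2), u(3)*v(3)];

C = zeros(3*m, 6);  d = zeros(3*m, 1);
for i = 1:m
    a = Mhat(i, :)';
    b = Mhat(m+i, :)';
    C(3*i-2, :) = row(a, a);  d(3*i-2) = 1;   % rows of each camera have unit norm
    C(3*i-1, :) = row(b, b);  d(3*i-1) = 1;
    C(3*i,   :) = row(a, b);  d(3*i)   = 0;   % ... and are orthogonal
end

g = C \ d;                                    % least-squares solution for G
G = [g(1) g(2) g(3); g(2) g(4) g(5); g(3) g(5) g(6)];

[Vg, Dg] = eig((G + G') / 2);                 % factor G = A*A', clipping any
A = Vg * sqrt(max(Dg, 0));                    % negative eigenvalues due to noise

M = Mhat * A;                                 % metric motion
S = A \ Shat;                                 % metric shape (A is determined only
                                              % up to a global rotation)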

Results

Extensions to factorization methods:
–Paraperspective [Poelman & Kanade, PAMI 97]
–Sequential factorization [Morita & Kanade, PAMI 97]
–Factorization under perspective [Christy & Horaud, PAMI 96], [Sturm & Triggs, ECCV 96]
–Factorization with uncertainty [Anandan & Irani, IJCV 2002]

Bundle adjustment

Structure from motion: how many points do we need to match? For 2 frames, (R, t) has 5 d.o.f. (up to scale) plus 3n unknown point locations, while we get 4n point measurements; 5 + 3n ≤ 4n requires n ≥ 5. For k frames: 6(k-1) - 1 + 3n ≤ 2kn. In practice we always want to use many more points than this minimum.

Bundle Adjustment. What makes this non-linear minimization hard?
–many more parameters: potentially slow
–poorer conditioning (high correlation between parameters)
–potentially lots of outliers

Lots of parameters: sparsity. Only a few entries in the Jacobian are non-zero, because each image measurement depends only on one camera and one 3D point.

Robust error models. Outlier rejection:
–use a robust penalty applied to each set of joint measurements
–for extremely bad data, use random sampling [RANSAC, Fischler & Bolles, CACM ’81]
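
One common robust penalty is the Huber function; a minimal MATLAB sketch follows, where r (residual vector) and sigma (scale parameter) are assumed inputs, not names from the lecture.

% Huber penalty and the corresponding IRLS weights, usable inside an
% iteratively reweighted least-squares bundle adjustment loop.
huber   = @(r, sigma) (abs(r) <= sigma) .* (0.5 * r.^2) + ...
                      (abs(r) >  sigma) .* (sigma .* (abs(r) - 0.5 * sigma));
huber_w = @(r, sigma) min(1, sigma ./ max(abs(r), eps));   % weight = psi(r)/r

cost = sum(huber(r, sigma));        % robust total cost instead of 0.5*sum(r.^2)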

Correspondences. Feature matching can be refined after a structure and motion estimate has been produced:
–decide which matches obey the epipolar geometry
–decide which matches are geometrically consistent
–(optional) iterate between correspondences and SfM estimates using MCMC [Dellaert et al., Machine Learning 2003]

Structure from motion: limitations.
–It is very difficult to reliably estimate metric structure and motion unless there is a large (x or y) rotation, or a large field of view and depth variation.
–Camera calibration is important for Euclidean reconstructions.
–A good feature tracker is needed.
–Lens distortion must be handled.

Issues in SfM: track lifetime, nonlinear lens distortion, prior knowledge and scene constraints, multiple motions.

Track lifetime: every 50th frame of an 800-frame sequence.

Track lifetime lifetime of 3192 tracks from the previous sequence

Track lifetime track length histogram

Nonlinear lens distortion

effect of lens distortion

Prior knowledge and scene constraints: add a constraint that several lines are parallel.

Prior knowledge and scene constraints: add a constraint that it is a turntable sequence.

Applications of Structure from Motion

Jurassic Park

PhotoSynth