Lecture 11: Structure from motion CS6670: Computer Vision Noah Snavely.

Presentation transcript:

Lecture 11: Structure from motion CS6670: Computer Vision Noah Snavely

Announcements Project 2 out, due next Wednesday, October 14 – Artifact due Friday, October 16 Questions?

Readings Szeliski, Chapter 7.2 My thesis, Chapter 3

Energy minimization via graph cuts (figure: graph with nodes labeled by disparities d_1, d_2, d_3, and edge weights)

Other uses of graph cuts

Agarwala et al., Interactive Digital Photomontage, SIGGRAPH 2004 Other uses of graph cuts

Panoramic stitching Problem: copy every pixel in the output image from one of the input images (without blending) Make transitions between two images seamless

Panoramic stitching Can be posed as a labeling problem: label each pixel in the output image with one of the input images Input images: Output panorama:

Panoramic stitching Number of labels: k = number of input images Objective function (in terms of labeling L) Input images: Output panorama:

Panoramic stitching Infinite cost of labeling a pixel (x, y) with image I if I doesn't cover (x, y); else, cost = 0
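A minimal sketch of the data term just described: labeling an output pixel with an input image that does not cover it costs infinity, otherwise zero. The `masks` representation (a per-image boolean coverage grid) is an assumption; the slides leave it unspecified.

```python
import math

# Hypothetical data term for the stitching labeling problem: assigning a
# pixel to an image that does not cover it costs infinity, else zero.
def data_cost(x, y, label, masks):
    """Cost of assigning output pixel (x, y) to input image `label`."""
    return 0.0 if masks[label][y][x] else math.inf

# Two tiny 1x3 images: image 0 covers columns 0-1, image 1 covers columns 1-2.
masks = [[[True, True, False]], [[False, True, True]]]
print(data_cost(0, 0, 0, masks))   # → 0.0 (image 0 covers column 0)
print(data_cost(2, 0, 0, masks))   # → inf (image 0 does not cover column 2)
```

A graph-cut solver would combine this data term with a smoothness term that penalizes visible seams between differently labeled neighbors.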

Photographing long scenes with multi-viewpoint panoramas

Today: Drift (figure: a strip of images ending with a copy of the first image, at offsets (x_1, y_1) and (x_n, y_n)) – add another copy of the first image at the end – this gives a constraint: y_n = y_1 – there are several ways to solve this problem: add a displacement of (y_1 – y_n)/(n – 1) to each image after the first; compute a global warp: y' = y + ax; or run a big optimization problem incorporating this constraint – the best solution, but more complicated – known as "bundle adjustment"
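The simplest fix above, spreading the vertical drift (y_1 – y_n)/(n – 1) evenly across the images after the first, can be sketched in a few lines. Variable names are illustrative, not from the slides.

```python
# Distribute accumulated vertical drift evenly across a panorama whose last
# image is a copy of the first, so the closed loop ends where it started.
def remove_drift(ys):
    """ys: vertical offsets y_1 .. y_n, where image n is a copy of image 1."""
    n = len(ys)
    step = (ys[0] - ys[-1]) / (n - 1)          # per-image correction
    return [y + i * step for i, y in enumerate(ys)]

ys = [0.0, 3.0, 6.0, 9.0]   # 9 pixels of drift accumulated over the loop
print(remove_drift(ys))     # → [0.0, 0.0, 0.0, 0.0]; last offset equals first
```

This enforces only the single loop constraint y_n = y_1; the global optimization discussed next uses all pairwise matches instead.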

Global optimization Minimize a global energy function: – What are the variables? The translation t_j = (x_j, y_j) for each image I_j – What is the objective function? We have a set of matched features p_i,j = (u_i,j, v_i,j) – we'll call these tracks. For each point match (p_i,j, p_i,j+1): p_i,j+1 – p_i,j = t_j+1 – t_j (figure: images I_1 … I_4 with matched features p_1,1 … p_4,4 forming a track)

Global optimization Constraints from the tracks: p_1,2 – p_1,1 = t_2 – t_1, p_1,3 – p_1,2 = t_3 – t_2, p_2,3 – p_2,2 = t_3 – t_2, …, v_4,1 – v_4,4 = y_1 – y_4. Minimize Σ_i Σ_j w_i,j ||(t_j+1 – t_j) – (p_i,j+1 – p_i,j)||², where w_i,j = 1 if track i is visible in images j and j+1, and 0 otherwise.

Global optimization Stacking all of these constraints gives a linear system A x = b, where A is 2m × 2n, x is 2n × 1, and b is 2m × 1 (m matches, n images).

Global optimization Defines a least squares problem: minimize ||Ax – b||². Solution: x = (A^T A)^(-1) A^T b (the normal equations). Problem: there is no unique solution for x! (det(A^T A) = 0) We can add a global offset to a solution and get the same error.
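A small numeric sketch of this least squares setup, restricted to 1D offsets for brevity. Each match contributes one row of A encoding t_k – t_j; the gauge is fixed by pinning image 1 at 0 (dropping its column). The numbers are made up for illustration.

```python
import numpy as np

n = 3                                             # number of images
# Each tuple is (j, k, observed offset t_k - t_j) from one track match.
rows = [(0, 1, 2.0), (1, 2, 1.0), (0, 2, 3.0)]
A = np.zeros((len(rows), n))
b = np.zeros(len(rows))
for r, (j, k, d) in enumerate(rows):
    A[r, k], A[r, j], b[r] = 1.0, -1.0, d         # row encodes t_k - t_j = d

A, t0 = A[:, 1:], 0.0                             # gauge fix: pin t_1 = 0
t_rest, *_ = np.linalg.lstsq(A, b, rcond=None)    # least squares solve
t = np.concatenate([[t0], t_rest])
print(t)                                          # → approximately [0. 2. 3.]
```

Without the gauge fix, A^T A is singular: adding any constant to all t_j leaves every residual unchanged.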

Ambiguity in global location Each of these solutions has the same error – called the gauge ambiguity. Solution: fix the position of one image (e.g., make the origin of the 1st image (0, 0)). (figure: the same reconstruction translated to start at (0,0), (-100,-100), or (200,-200))

Solving for camera parameters Projection equation. Recap: a camera is described by several parameters – translation t of the optical center from the origin of world coordinates – rotation R of the image plane – focal length f, principal point (x'_c, y'_c), pixel size (s_x, s_y) Blue parameters are called "extrinsics," red are "intrinsics"; the intrinsics are collected in the matrix K.

Solving for camera rotation Instead of spherically warping the images and solving for translation, we can directly solve for the rotation R j of each camera Can handle tilt / twist

Solving for rotations (figure: two images I_1, I_2 with rotations R_1, R_2 and focal length f; a feature appears at p_11 = (u_11, v_11) in I_1 and at p_12 = (u_12, v_12) in I_2; lifting p_11 to the 3D ray (u_11, v_11, f) lets us compare the rotated rays R_1 p_11 and R_2 p_12)

Solving for rotations minimize

3D rotations How many degrees of freedom are there? How do we represent a rotation? – Rotation matrix (too many degrees of freedom) – Euler angles (e.g. yaw, pitch, and roll) – bad idea – Quaternions (4-vector on unit sphere) Usually involves non-linear optimization
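The quaternion representation mentioned above can be sketched concretely: a unit 4-vector q = (w, x, y, z) maps to a 3×3 rotation matrix via the standard conversion formula. The example rotation below is illustrative.

```python
import numpy as np

def quat_to_R(q):
    """Convert a quaternion (w, x, y, z) to a 3x3 rotation matrix."""
    w, x, y, z = q / np.linalg.norm(q)   # project onto the unit sphere first
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

# q = (cos(theta/2), axis * sin(theta/2)): a 45-degree rotation about z.
R = quat_to_R(np.array([np.cos(np.pi/8), 0.0, 0.0, np.sin(np.pi/8)]))
```

Normalizing before conversion is why optimizers like quaternions: any 4-vector an update produces still yields a valid rotation, unlike a raw 3×3 matrix whose nine entries must stay orthonormal.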

Revisiting stereo How do we find the epipolar lines? Need to calibrate the cameras (estimate relative position, orientation) p p' epipolar line Image I Image J

Solving for rotations and translations Structure from motion (SfM) Unlike with panoramas, we often need to solve for structure (3D point positions) as well as motion (camera parameters)

p 1,1 p 1,2 p 1,3 Image 1 Image 2 Image 3 x1x1 x4x4 x3x3 x2x2 x5x5 x6x6 x7x7 R1,t1R1,t1 R2,t2R2,t2 R3,t3R3,t3

Structure from motion Input: images with points in correspondence p_i,j = (u_i,j, v_i,j) Output: – structure: 3D location x_i for each point p_i – motion: camera parameters R_j, t_j Objective function: minimize reprojection error (figure: reconstruction shown from the side and from the top)

SfM objective function Given a point x and a rotation and translation R, t, let P(x, R, t) be the predicted image location. Minimize the sum of squared reprojection errors: g(X, R, T) = Σ_i Σ_j w_i,j ||P(x_i, R_j, t_j) – p_i,j||², where P(x_i, R_j, t_j) is the predicted image location and p_i,j is the observed image location.
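The reprojection error above can be sketched directly, assuming a calibrated pinhole camera with identity intrinsics (an assumption; the slides keep K implicit). `project` plays the role of P, and `g` sums the squared errors over all observations.

```python
import numpy as np

def project(X, R, t):
    """Predicted image location P(X, R, t) for a calibrated camera."""
    Xc = R @ X + t              # transform world point into camera frame
    return Xc[:2] / Xc[2]       # perspective division onto the image plane

def g(points3d, cams, obs):
    """Sum of squared reprojection errors.
    obs maps (point index i, camera index j) -> observed 2D point p_ij."""
    return sum(np.sum((project(points3d[i], *cams[j]) - p) ** 2)
               for (i, j), p in obs.items())

# Toy instance: one point on the optical axis of one identity camera.
X = [np.array([0.0, 0.0, 5.0])]
cams = [(np.eye(3), np.zeros(3))]
obs = {(0, 0): np.array([0.1, 0.0])}   # predicted (0, 0), observed (0.1, 0)
print(g(X, cams, obs))                 # ≈ 0.01
```

Bundle adjustment minimizes exactly this kind of function over all the x_i, R_j, and t_j simultaneously.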

Solving structure from motion Minimizing g is difficult – g is non-linear due to rotations, perspective division – lots of parameters: 3 for each 3D point, 6 for each camera – difficult to initialize – gauge ambiguity: error is invariant to a similarity transform (translation, rotation, uniform scale) Many techniques use non-linear least-squares (NLLS) optimization (bundle adjustment) – Levenberg-Marquardt is one common algorithm for NLLS – Lourakis, The Design and Implementation of a Generic Sparse Bundle Adjustment Software Package Based on the Levenberg-Marquardt Algorithm
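To make the Levenberg-Marquardt idea concrete, here is a toy LM loop on a one-parameter nonlinear fit. This illustrates only the damped-normal-equations machinery; real bundle adjusters additionally exploit the sparsity of the Jacobian, and everything below is a made-up example, not the lecture's implementation.

```python
import numpy as np

def lm(residual, jac, theta, iters=50, lam=1e-3):
    """Minimal Levenberg-Marquardt: solve (J^T J + lam*I) delta = -J^T r."""
    for _ in range(iters):
        r, J = residual(theta), jac(theta)
        H = J.T @ J + lam * np.eye(len(theta))    # damped normal equations
        delta = np.linalg.solve(H, -J.T @ r)
        r_new = residual(theta + delta)
        if r_new @ r_new < r @ r:
            theta, lam = theta + delta, lam * 0.5  # accept; trust model more
        else:
            lam *= 10.0                            # reject; damp harder
    return theta

# Toy NLLS problem: fit theta in exp(theta * x) to data generated with theta=2.
x = np.linspace(0.0, 1.0, 20)
y = np.exp(2.0 * x)
residual = lambda th: np.exp(th[0] * x) - y
jac = lambda th: (x * np.exp(th[0] * x))[:, None]
theta = lm(residual, jac, np.array([0.5]))
print(theta)    # converges near [2.]
```

The accept/reject rule interpolates between gradient descent (large lambda) and Gauss-Newton (small lambda), which is what makes LM robust to poor initialization on problems like this.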

Extensions to SfM Can also solve for intrinsic parameters (focal length, radial distortion, etc.) Can use a more robust function than squared error, to avoid fitting to outliers For more information, see: Triggs et al., "Bundle Adjustment – A Modern Synthesis," Vision Algorithms 2000.

Photo Tourism Structure from motion on Internet photo collections

Photo Tourism

Scene reconstruction Pipeline: feature detection → pairwise feature matching → correspondence estimation → incremental structure from motion

Feature detection Detect features using SIFT [Lowe, IJCV 2004]

Feature matching Match features between each pair of images

Feature matching Refine matching using RANSAC [Fischler & Bolles 1981] to estimate fundamental matrices between pairs
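The RANSAC loop used here can be sketched on a deliberately simpler model than the fundamental matrix: estimating a 2D translation between matched point sets. The structure (sample a minimal set, fit, count inliers, keep the best hypothesis) is the same; the data below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

def ransac_translation(p, q, iters=100, thresh=1.0):
    """Robustly estimate t such that q ~ p + t, despite outlier matches."""
    best_t, best_inliers = None, 0
    for _ in range(iters):
        i = rng.integers(len(p))           # minimal sample: one match
        t = q[i] - p[i]                    # hypothesis from that match
        inliers = int(np.sum(np.linalg.norm(p + t - q, axis=1) < thresh))
        if inliers > best_inliers:
            best_t, best_inliers = t, inliers
    return best_t, best_inliers

p = rng.uniform(0.0, 100.0, (50, 2))
q = p + np.array([5.0, -3.0])              # true translation
q[::10] += rng.uniform(20.0, 40.0, (5, 2)) # corrupt 5 matches with outliers
t, n = ransac_translation(p, q)
print(t, n)    # recovers approximately [5, -3] with 45 inliers
```

For fundamental matrices the minimal sample is 7 or 8 correspondences and the inlier test uses point-to-epipolar-line distance, but the loop is otherwise identical.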

Image connectivity graph (graph layout produced using the Graphviz toolkit)

Incremental structure from motion