Direction (unit) vectors from cameras (blue) to points (black) are given: find the positions of the cameras and points.

Presentation transcript:

Direction (unit) vectors from cameras (blue) to points (black) are given: find the positions of the cameras and points.
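In symbols (a sketch of the standard formulation; the notation is added here, not taken from the slide): each measurement is a unit vector u_{ij} pointing from camera centre C_i towards point X_j, so the unknown positions must satisfy

    X_j = C_i + \lambda_{ij} u_{ij}, \qquad \lambda_{ij} > 0,

for unknown positive depths \lambda_{ij}; the solution is determined only up to a global translation and scale.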

Branch and Bound in Rotation Space (ICCV 2007)

Essential Matrix Estimation. The essential matrix encodes the relative displacement (rotation and translation) between two cameras; estimating it needs at least 5 point correspondences. (Figure: 3D point X imaged as x1 and x2 in two cameras related by (R, t).)
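For reference, the standard definitions behind this slide (added here, not shown in the transcript): with relative pose (R, t) and corresponding unit image directions x_1, x_2,

    E = [t]_\times R, \qquad x_2^\top E\, x_1 = 0,

where [t]_\times is the skew-symmetric cross-product matrix. E has five degrees of freedom (three for rotation, two for the translation direction), which is why at least five point correspondences are needed.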

2-view SfM with known rotations

Rotation Space. Given the best current error, we can eliminate all rotations within the ball of radius 0.3 about the trial rotation.
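The bounding fact behind this kind of pruning (a sketch; the exact bound used in the talk is not reproduced here) is that two rotations whose relative angle is at most \delta move any unit vector by at most \delta:

    \angle(R_1 R_2^{-1}) \le \delta \;\Rightarrow\; \angle(R_1 v, R_2 v) \le \delta \quad \text{for every unit vector } v.

So if the residual at the centre of a ball of radius \delta already exceeds the best error found so far by more than \delta, no rotation inside that ball can beat the incumbent and the whole ball can be discarded.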

Isometry of Rotations and Quaternions. The angle between two quaternions is half the angle between the corresponding rotations. All rotations within a delta-neighbourhood of a reference rotation form a circle on the quaternion sphere.
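Concretely (standard quaternion facts, added for reference): if unit quaternions q_1, q_2 represent rotations R_1, R_2, then

    \angle(R_1^{-1} R_2) = 2 \arccos |\langle q_1, q_2 \rangle|,

so the angle between the quaternions, viewed as points on the unit sphere in R^4, is half the angle of the relative rotation, and a delta-neighbourhood of rotations corresponds to a spherical cap of angular radius delta/2.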

Angle-axis representation of rotations: rotations are represented by a ball of radius pi in 3-dimensional space. Azimuthal equidistant projection: flatten out the meridians (longitude lines).
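In the angle-axis picture (the standard Rodrigues formula, added for reference), a rotation by angle \theta \le \pi about a unit axis v is encoded by the vector \theta v inside the ball of radius \pi, and the corresponding rotation matrix is

    R = I + \sin\theta\, [v]_\times + (1 - \cos\theta)\, [v]_\times^2.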

Subdividing and testing rotation space
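A minimal sketch of such a subdivide-and-test loop in Python (illustrative only: angular_error is a placeholder for whatever residual the method evaluates at a candidate rotation, the search parameters are arbitrary, and the real implementation's bound and search order may differ):

    import numpy as np

    def branch_and_bound(angular_error, radius=np.pi, min_side=1e-3):
        """Search the angle-axis ball for a rotation minimizing angular_error(r).

        angular_error(r) -> worst angular residual of the candidate rotation
        encoded by the angle-axis vector r (a caller-supplied placeholder).
        """
        best_err, best_r = np.inf, None
        # One cube enclosing the whole ball of radius pi in angle-axis space.
        stack = [(np.zeros(3), 2.0 * radius)]        # (cube centre, side length)
        while stack:
            centre, side = stack.pop()
            # Max distance from the centre within the cube; this upper-bounds
            # how much the rotation (and hence the angular residual) can change.
            half_diag = np.sqrt(3.0) * side / 2.0
            err_centre = angular_error(centre)
            if err_centre < best_err:                # update the incumbent
                best_err, best_r = err_centre, centre
            # Lower bound on the error anywhere in this cube: err_centre - half_diag.
            if err_centre - half_diag >= best_err or side < min_side:
                continue                             # prune, or stop refining
            # Otherwise split into 8 child cubes and keep searching.
            for dx in (-1, 1):
                for dy in (-1, 1):
                    for dz in (-1, 1):
                        offset = np.array([dx, dy, dz]) * side / 4.0
                        stack.append((centre + offset, side / 2.0))
        return best_r, best_err

The statistics on the next slide (cubes remaining and volume per iteration) are the kind of output such a loop would report.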

Performance. (Plots: number of cubes left at each iteration, log-10 scale; remaining volume at each iteration, log-10 scale in cubic radians.)

Point correspondence in two views: coplanarity constraint with uncertainty. Linear programming, not SOCP. (Figure: 3D point X viewed from camera centres C and C' along directions v and v', with translation t between them.)

Multi-Camera Systems (Non-overlapping) – L inf Method. The translation direction lies in a polyhedron (green in the figure) derived from the point correspondences.

Multi-Camera Systems (Non-overlapping) – L inf Method

Each point correspondence gives two LP constraints on the direction t (epipolar direction).
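The underlying constraint is the standard two-view coplanarity relation (written here for reference): with known rotation R and corresponding unit directions v, v',

    v'^\top (t \times R v) = 0,

which is linear in the translation t. Allowing a bounded angular error relaxes this single equation to a pair of linear inequalities, a wedge containing the epipolar direction, which is why each correspondence contributes two LP constraints.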

Essential matrix calculated from 3 points (above) or 4 points (below). (Figure: possible rotations.)

Timing examples (360 degree camera):
29 correspondences: 2.9 seconds
794 correspondences: 75 seconds
correspondences: 3 min 30 seconds
(Table: timing in milliseconds for E-matrix computation.)

Further Application – 1D camera (e.g. a robot moving in a plane). Joint work with Kalle Astrom, Fredrik Kahl, Carl Olsson and Olof Enquist. The complete structure and motion problem for "planar motion", with an optimal solution in the L-infinity norm, using the same idea of searching in rotation space.
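For planar motion the rotation has a single degree of freedom (a remark added for context, not taken from the slide):

    R(\theta) = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}, \qquad \theta \in (-\pi, \pi],

so rotation space is just a circle, and the same branch-and-bound idea amounts to subdividing and testing intervals of angle rather than cubes.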

Hockey Rink Data. (Figures: original and dual problems; reconstructed points and path.)

The method also works for rigidly placed multi-camera systems, which can be considered as a single "generalized" camera: one rotation and one translation to be estimated.
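Treating the rig as a generalized camera, each ray can be written as a Plucker pair (q, q') with q' = p \times q for a point p on the ray, expressed in the common rig frame. The standard generalized epipolar constraint, in the form usually attributed to Pless and stated here for reference rather than taken from the slide, is

    q_2^\top [t]_\times R\, q_1 + q_2^\top R\, q_1' + q_2'^\top R\, q_1 = 0,

one equation per ray correspondence in the single rig rotation R and translation t.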

Robust 6DOF motion estimation from non-overlapping images, multi-camera systems. (Figures: 4 images from the right, 4 images from the left. Images courtesy of UNC-Chapel Hill.)

Generalized Cameras (Non-overlapping). Ladybug2 camera (the locally-central case): 5 cameras (horizontal), 1 camera (top).

Generalized Cameras (Non-overlapping). Experiment setup.

Generalized Cameras (Non-overlapping). An infinity-symbol-like path which the Ladybug2 camera follows (108 frames in total).

Robust 6DOF motion estimation from non-overlapping images, multi-camera systems: critical configuration.

Generalized Cameras (Non-overlapping) – Linear Method. Estimated path (linear method) vs. ground truth.

Generalized Cameras (Non-overlapping) – Linear Method

Demo video: 16 seconds.

Multi-Camera Systems (Non-overlapping) – SOCP Method

Multi-Camera Systems (Non-overlapping) – L inf Method

E+SOCP: motion of multi-camera rigs using the SOCP method. BB+LP: motion of multi-camera rigs using the L inf method.

Multi-Camera Systems (Non-overlapping) – L inf Method. E+SOCP: motion of multi-camera rigs using the SOCP method. BB+LP: motion of multi-camera rigs using the L inf method.

Multi-Camera Systems (Non-overlapping) – L inf Method. Estimated path (L inf method) vs. ground truth.

Multi-Camera Systems (Non-overlapping) – L inf Method

Demo video: 16 seconds.

Obtaining an initial region

277,000 3D points triangulated. All but 281 were proved by a simple test to be minima; all except 153 were proved to be global minima by a more complex test.

Hardy: Pure mathematics is on the whole distinctly more useful than applied. For what is useful above all is technique, and mathematical technique is taught mainly through pure mathematics.