WP3 - 3D reprojection
Goal: reproject 2D ball positions from both cameras into 3D space
Inputs:
– 2D ball positions estimated by WP2
– 2D table positions selected by the user (GUI)
– Camera matrix and additional parameters

WP3 - 3D reprojection
Objectives:
– Use the known table dimensions and their projection to estimate the position (and rotation) of the cameras
– Use the camera positions, camera matrix and 2D ball points to reproject and estimate the real ball positions

WP3 - 3D reprojection
Two-phase process:
– 1 x scene analysis (= 2 x pose estimation, one per camera)
– N x reprojection (one per frame)

Step 1: scene analysis
POSIT (Pose from Orthography and Scaling with ITeration)
– Originally proposed in 1992
– Computes the pose (position and rotation) of a known object

WP3 – 3D reprojection, step 1: scene analysis
POSIT (continued)
– Requires:
  - At least four non-coplanar points of the object
  - The image projections of these object points
  - The focal length in pixels, f_x and f_y → assumes square pixels: f_x = F · s_x, f_y = F · s_y, with s_x and s_y the number of pixels/mm on the imager
– Estimates:
  - Translation vector T from the center of projection to the origin of the object model
  - Rotation matrix R relative to the object model origin
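As an illustration of this interface, the sketch below solves the same known-object-pose problem with OpenCV's cv2.solvePnP (POSIT itself only exists in OpenCV's legacy C API, so this is a stand-in, not the method used in the project). All point coordinates and intrinsics are made-up placeholders; the table-top corners merely follow the official 2740 x 1525 x 760 mm table size.

```python
# Sketch: estimate camera pose (R, T) from known object points and their
# image projections, in the spirit of POSIT. All numbers are illustrative.
import cv2
import numpy as np

# 3D model points in table coordinates (mm): four table-top corners plus
# the bottom of one leg, so the coordinates vary on every axis.
object_points = np.array([
    [0.0,       0.0,   0.0],
    [1525.0,    0.0,   0.0],
    [1525.0,    0.0,   2740.0],
    [0.0,       0.0,   2740.0],
    [0.0,    -760.0,   0.0],      # bottom of a table leg (placeholder)
], dtype=np.float64)

# Corresponding 2D detections in the image (pixels) -- placeholder values.
image_points = np.array([
    [310.0, 420.0], [620.0, 415.0], [700.0, 260.0],
    [260.0, 270.0], [305.0, 560.0],
], dtype=np.float64)

# Intrinsics: f_x = F * s_x, f_y = F * s_y, principal point (c_x, c_y).
fx, fy, cx, cy = 1200.0, 1200.0, 640.0, 360.0
K = np.array([[fx, 0, cx], [0, fy, cy], [0, 0, 1]], dtype=np.float64)

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, None)
R, _ = cv2.Rodrigues(rvec)  # 3x3 rotation matrix relative to the model origin
print("R =\n", R, "\nT =\n", tvec)
```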

WP3 – 3D reprojection, step 1: scene analysis
POSIT (continued)
– Restriction: weak-perspective approximation
  - Assumes that the points on the object are all at effectively the same depth → internal depth differences within the object are negligible
– Still converges properly, probably due to:
  - the regular shape of the table
  - the imager and the table being approximately aligned
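For reference, a standard statement of the weak-perspective model (notation mine, not from the slides): every object point's own depth Z is replaced by one common reference depth Z_0, e.g. the average depth of the model points.

```latex
% Full perspective:  x = f X / Z,      y = f Y / Z
% Weak perspective:  every Z is replaced by a common reference depth Z_0
x \approx f\,\frac{X}{Z_0}, \qquad y \approx f\,\frac{Y}{Z_0},
\qquad \text{valid when } |Z - Z_0| \ll Z_0 .
```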

WP3 – 3D reprojection, step 1: scene analysis
POSIT (continued)
– The obvious choice of points would be the corners of the table

WP3 – 3D reprojection, step 1: scene analysis
POSIT (continued)
However:
– The algorithm does not benefit from additional coplanar points
– Experimental results are only decent if the coordinates vary enough on each axis; the proposed points vary too little in the y-direction → use the height of the table

WP3 – 3D reprojection, step 1: scene analysis
POSIT (continued)
Different set of points:
Advantage:
+ Converges properly
Disadvantages:
- The position of the table leg is not officially specified
- The bottom of the table is not visible in our footage (camera 2)

WP3 – 3D reprojection, step 2: reprojection
Input, for both cameras:
– Rotation matrix R of the object model
– Translation vector T of the object model
– Focal length F
– Pixels/mm on the imager (s_x, s_y)
– c_x, c_y (the principal ray does not pass exactly through the center of the imager)
Assumes a simple pinhole camera model.
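These inputs are exactly the ingredients of the pinhole projection equation; a minimal sketch of that model (the function name and the R, T convention are assumptions, chosen to match the pose output above):

```python
import numpy as np

def project(P, K, R, T):
    """Pinhole model: map a 3D point P (table coordinates) to pixel (u, v).

    K packs f_x = F*s_x, f_y = F*s_y and the principal point (c_x, c_y);
    R, T are the pose from step 1, with p_cam = R @ P + T.
    """
    p_cam = R @ P + T            # table -> camera coordinates
    p_img = K @ p_cam            # apply intrinsics
    return p_img[:2] / p_img[2]  # perspective divide -> (u, v)
```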

WP3 – 3D reprojection, step 2: reprojection
The 3D point is located on the ray r from the center of projection through the point on the projection plane where the ball was detected. For each camera, this ray can be calculated using:
– F, s_x, s_y, and the coordinates of the detected ball
Using R and T, these rays can be converted to the coordinate system of the table. The 3D point can then be approximated by the crossing of the two rays, as sketched below.
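A minimal sketch of this back-projection, under the same conventions as above (the helper name is mine): the pixel is lifted to a viewing ray with K⁻¹, and the ray is then expressed in table coordinates via R and T.

```python
import numpy as np

def pixel_to_table_ray(u, v, K, R, T):
    """Back-project pixel (u, v) to a ray in table coordinates.

    With p_cam = R @ p_table + T, the camera center is -R.T @ T, and the
    pixel's viewing direction in camera coordinates is K^-1 @ [u, v, 1].
    Returns (origin, unit direction) of the ray.
    """
    d_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])
    origin = -R.T @ T                 # center of projection, table coords
    direction = R.T @ d_cam           # rotate the ray into table coords
    return origin, direction / np.linalg.norm(direction)
```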

WP3 – 3D reprojection, step 2: reprojection
However, this step is executed for each frame → it has to be computationally efficient → use intersections of planes instead of intersecting the rays directly.
1. For each camera, a vertical plane is constructed, defined by:
   – Normal n, being the cross product of:
     - the ray r (through the center of projection and the projected point)
     - the unit vector (0, 1, 0)
   – Point p, being the projected point
2. For each camera, a horizontal plane is constructed, defined by:
   – Normal n, being the cross product of:
     - the ray r
     - the unit vector (1, 0, 0)
   – Point p, being the projected point
A code sketch of this construction follows.
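A sketch of the construction above, with planes stored as (normal, point) pairs (this representation is an assumption, not specified in the slides):

```python
import numpy as np

def camera_planes(r, p):
    """Build the vertical and horizontal planes containing the viewing ray.

    r: ray direction; p: the projected point, which lies on the ray.
    Each plane is returned as a (normal, point) pair; the normals are the
    cross products named in the slide, so both planes contain the ray r.
    """
    n_vertical = np.cross(r, np.array([0.0, 1.0, 0.0]))
    n_horizontal = np.cross(r, np.array([1.0, 0.0, 0.0]))
    return (n_vertical, p), (n_horizontal, p)
```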

WP3 – 3D reprojection, step 2: reprojection
1. Vertical plane construction
2. Horizontal plane construction
3. All planes are converted to the table coordinate system
   – using R and T
   – including a 180-degree turn
4. The intersection of the two vertical planes is calculated → results in line l
5. The intersections between line l and the horizontal planes are calculated → results in points p1 and p2
6. The 3D point is approximated by the average of p1 and p2 (sketched below)
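Steps 4-6 reduce to two elementary geometric routines; a hedged sketch, again with planes as (normal, point) pairs and with helper names of my own choosing:

```python
import numpy as np

def plane_plane_line(n1, p1, n2, p2):
    """Intersect two planes -> a line (point, direction)."""
    d = np.cross(n1, n2)                # direction of the intersection line
    # Pick the unique point on both planes that also satisfies d . x = 0.
    A = np.vstack([n1, n2, d])
    b = np.array([n1 @ p1, n2 @ p2, 0.0])
    return np.linalg.solve(A, b), d

def line_plane_point(l0, d, n, p):
    """Intersect the line l0 + t*d with the plane (n, p)."""
    t = (n @ (p - l0)) / (n @ d)
    return l0 + t * d

# Steps 4-6, given the converted planes of both cameras:
# l0, d   = plane_plane_line(n_v1, p_v1, n_v2, p_v2)   # step 4: line l
# q1      = line_plane_point(l0, d, n_h1, p_h1)        # step 5: p1
# q2      = line_plane_point(l0, d, n_h2, p_h2)        # step 5: p2
# ball_3d = 0.5 * (q1 + q2)                            # step 6: average
```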

WP3 - 3D reprojection
Issues:
– We should have the exact location of a model point (and its projections) that varies in the y-direction
  Solution: a table leg can be used (not officially specified, and not visible in our footage)
– Not enough good frames for calibration → the estimated focal lengths were wrong
  Solution: the focal lengths were determined experimentally