Geodetic Metrology and Engineering Geodesy, Institute of Geodesy and Photogrammetry (www.geometh.ethz.ch)


Current Investigations at the ETH Zurich in Optical Indoor Positioning
Sebastian Tilch, Rainer Mautz

Central Idea
The central idea of the new indoor positioning system is based on the fundamentals of stereo photogrammetry. Instead of a second camera, however, the system uses a device (called the laser hedgehog) that projects well-distributed laser spots onto the ceiling, walls and furniture of any indoor environment, where they serve as flexible reference points (Fig. 1). The projecting light source consists of several focused laser beams that originate from a static, well-defined central point; the 3D directions of the beams are precisely known from a prior calibration. Since pinhole camera projection and the projection of laser spots from a central origin rely on the same principle, namely central projection, the beams of the laser-emitting device simulate the light path of a camera. Hence the second camera can be simulated by the projecting device (Fig. 2). During the measurement phase the digital camera observes the reference field. Once the individual laser spots have been identified in the camera image, the relative orientation can be derived by solving the coplanarity constraint of epipolar geometry, x'^T E x = 0 with essential matrix E (Fig. 3).

Relative Orientation
A 5-point algorithm was chosen to solve the coplanarity constraint. Unfortunately, the 5-point algorithm yields up to 10 solutions for every camera position. To identify the correct essential matrix, the algorithm has been embedded in a RANSAC scheme. The correct essential matrix is then decomposed into a translation vector b and a rotation matrix R, and finally refined by a least-squares estimation of the relative-orientation parameters (a code sketch follows after the figure list below).

Introduction of the System Scale
The system scale cannot be determined by relative orientation; it is introduced by measuring the distance between the laser hedgehog and the camera. Note that in our case the distance measurements are carried out only for the first four camera positions. The spatial coordinates of the laser spots are then determined by intersection. Once the 3D positions of the laser spots are known, the relative-orientation parameters for subsequent camera positions can be determined by spatial resection (see the second and third sketches below).

Evaluation of the System
First tests have shown that the relative orientation of the camera could be correctly determined in all cases. However, the identification of the laser spots and the introduction of the system scale need to be improved in further investigations.

Figures (captions recovered from the poster layout):
Fig. 1. Central idea of CLIPS.
Fig. 2. Concept of the virtual image.
Fig. 3. Epipolar geometry of the camera and laser-hedgehog arrangement.
Fig. 4. Determination of the relative orientation of the camera (processing chain: camera image and virtual image -> relative orientation using the 5-point solver and RANSAC -> approximate parameters of the relative orientation -> refinement of the RO parameters using least-squares adjustment -> adjusted parameters of the relative orientation).
Fig. 5. Introduction of the system scale.
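
The following is a minimal sketch of the relative-orientation step, assuming the laser-spot correspondences between the hedgehog's virtual image and the camera image are already established. OpenCV's five-point solver with RANSAC stands in for the estimation pipeline described above; the calibration matrix K and the array names x_virt, x_cam are illustrative assumptions, not the authors' implementation.

```python
# Sketch of the relative-orientation step (assumed names throughout):
# x_virt, x_cam are Nx2 float arrays of corresponding laser-spot
# positions in the hedgehog's virtual image and in the camera image.
import numpy as np
import cv2

K = np.array([[1200.0,    0.0, 640.0],   # assumed camera calibration matrix
              [   0.0, 1200.0, 480.0],
              [   0.0,    0.0,   1.0]])

def relative_orientation(x_virt, x_cam, K):
    # Five-point solver embedded in RANSAC: the consensus set singles
    # out the correct essential matrix among the candidate solutions.
    E, mask = cv2.findEssentialMat(x_virt, x_cam, K,
                                   method=cv2.RANSAC,
                                   prob=0.999, threshold=1.0)
    # Decompose E into a rotation R and a translation direction b;
    # the cheirality check keeps the physically valid decomposition.
    _, R, b, _ = cv2.recoverPose(E, x_virt, x_cam, K, mask=mask)
    return E, R, b, mask
```

For inlier correspondences the coplanarity constraint holds in normalized coordinates: with xh = inv(K) * (x, y, 1)^T, the product xh_cam^T E xh_virt is close to zero. The subsequent least-squares refinement of the relative-orientation parameters used by the authors is not shown here.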
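Next, a sketch of how the measured hedgehog-to-camera distance could fix the metric scale and how the laser spots are intersected. The distance d and the treatment of the hedgehog as a virtual camera at the origin, with its calibrated beam directions acting as normalized image coordinates, are assumptions for illustration.

```python
# Sketch: metric scale from a measured distance d, then intersection
# (triangulation) of the laser spots; builds on R, b from the sketch
# above. x_virt_norm holds normalized coordinates of the beam directions.
import numpy as np
import cv2

def intersect_spots(x_virt_norm, x_cam, K, R, b, d):
    t = b * (d / np.linalg.norm(b))                 # baseline scaled to metres
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])   # hedgehog at the origin
    P2 = K @ np.hstack([R, t])                      # camera projection matrix
    # cv2.triangulatePoints expects 2xN arrays of image points.
    X_h = cv2.triangulatePoints(P1, P2,
                                x_virt_norm.T.astype(float),
                                x_cam.T.astype(float))
    return (X_h[:3] / X_h[3]).T                     # Nx3 laser-spot coordinates
```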
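Finally, once the 3D spot coordinates are known, the spatial resection for subsequent camera positions could look like the sketch below. A RANSAC-wrapped PnP solver is one possible robust resection; the poster does not state which resection method the authors use.

```python
# Sketch of the spatial resection for a subsequent camera position:
# spots_3d (Nx3) are the intersected laser spots, x_cam (Nx2) their
# images in the new camera view.
import numpy as np
import cv2

def resection(spots_3d, x_cam, K):
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        spots_3d.astype(float), x_cam.astype(float), K, None)
    if not ok:
        raise RuntimeError("spatial resection failed")
    R, _ = cv2.Rodrigues(rvec)    # rotation matrix of the camera pose
    return R, tvec, inliers
```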