
1 Smart Camera Network Localization Using a 3D Target
John Kassebaum, Nirupama Bulusu, Wu-Chi Feng
Portland State University

2 Problem Statement and Design Goals
Problem: Given a view-connected network, automatically determine camera positions and orientations in a single global 3D coordinate frame.
Design goals/contributions:
- Give node positions in a meaningful real-world coordinate frame
- Runnable on resource-constrained sensor platforms by reducing (compared to previous methods): computation, message passing, and deployment constraints
- Distributed

3 Related Work: Epipolar Geometry-based Solutions*
- Epipolar geometry is the geometry of stereoscopic imaging
- Estimate relative orientation and position between view-sharing pairs of cameras by estimating the fundamental matrix
- Network must be view-connected
* Devarajan 2006, Lymberopoulos 2005, Mantzel 2004, Kurillo 2008, Medeiros 2008
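
For reference, a minimal sketch of this related-work approach (not the method proposed in these slides): estimate the fundamental matrix from eight or more matched pixel points, then recover the relative pose between two view-sharing cameras. It assumes OpenCV and NumPy; pts1, pts2, and the intrinsic matrix K are hypothetical inputs.

```python
# Sketch of the epipolar-geometry approach from the related work (not this
# paper's method): estimate the fundamental matrix from >= 8 matched pixel
# points, then recover the relative rotation and a unit-length translation.
import cv2
import numpy as np

def pairwise_relative_pose(pts1, pts2, K):
    """pts1, pts2: Nx2 float arrays of corresponding pixels (N >= 8); K: 3x3 intrinsics."""
    F, inlier_mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_8POINT)
    E = K.T @ F @ K                        # essential matrix from the fundamental matrix
    # Decomposing E gives rotation R and a translation direction t; the baseline
    # length is only known up to the unknown scale factor discussed on slide 5.
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)
    return R, t                            # t has unit norm
```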

4 Difficulties with Epipolar Geometry Estimation
The Point Correspondence Problem: How to detect and correlate at least 8 world feature points imaged between all pairs of view-sharing cameras?
Most solutions suggest using SIFT:
- Image processing at each node to opportunistically detect and categorize likely feature points
- Broadcasts of descriptor sets to neighbors
- Processing at each node to find likely point correlations with neighbors and thereby determine which neighbors a node can localize with by estimating epipolar geometry
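
For illustration, a minimal sketch of the SIFT detection-and-matching step that these epipolar solutions depend on (and that the target-based approach avoids). It assumes OpenCV with SIFT available; img_a and img_b are hypothetical grayscale frames from two view-sharing cameras.

```python
# Sketch of the SIFT-based point-correspondence step used by epipolar methods
# (the step this paper avoids). img_a, img_b are placeholder grayscale images.
import cv2

def match_feature_points(img_a, img_b, ratio=0.75):
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(img_a, None)   # opportunistic keypoints + descriptors
    kp_b, des_b = sift.detectAndCompute(img_b, None)   # (descriptor sets would be broadcast)
    matches = cv2.BFMatcher().knnMatch(des_a, des_b, k=2)
    # Lowe's ratio test rejects ambiguous correspondences.
    good = [m for m, n in matches if m.distance < ratio * n.distance]
    pts_a = [kp_a[m.queryIdx].pt for m in good]
    pts_b = [kp_b[m.trainIdx].pt for m in good]
    return pts_a, pts_b    # need at least 8 survivors to estimate the fundamental matrix
```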

5 Difficulties with Epipolar Geometry Estimation
The Unknown Scale Factors Problem: Epipolar geometry estimation gives the baseline distance only up to an unknown scale factor.
- The scale factor is between the cameras' coordinate frame and the real-world coordinate frame
- Unknown scale factors differ for all pairwise localizations
- When realigning pairwise localizations to a global coordinate frame, the varying scale factors must be resolved to a consistent scale factor

6 The Unknown Scale Factors Problem
- Most proposed solutions require camera 'triples': 3 cameras must view the same world feature points
- The camera with 2 localizations determines how to scale one localization's coordinate frame to the other (see the sketch below)
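
A minimal sketch of the triple-based scale resolution idea, for comparison only: because inter-point distances are invariant to rotation and translation, the ratio of distances between the same reconstructed feature points in the two pairwise frames recovers the relative scale. X_a and X_b are hypothetical Nx3 reconstructions of the same world points; this illustrates the general idea, not code from the paper.

```python
# Illustration of triple-based scale resolution (not needed by this paper's
# method). X_a, X_b: hypothetical Nx3 arrays holding the SAME world feature
# points reconstructed in two different pairwise coordinate frames.
import numpy as np

def relative_scale(X_a, X_b):
    """Return s such that distances in frame A are about s times those in frame B."""
    # Inter-point distances are unchanged by rotation and translation, so their
    # ratio isolates the scale difference; the median adds robustness to noise.
    d_a = np.linalg.norm(X_a[:, None, :] - X_a[None, :, :], axis=-1)
    d_b = np.linalg.norm(X_b[:, None, :] - X_b[None, :, :], axis=-1)
    valid = d_b > 1e-9                    # skip zero-length (self) pairs
    return float(np.median(d_a[valid] / d_b[valid]))
```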

7 Our Solution - Projection Based Localization
- The target's feature points are sufficient for localization
- The target's known geometry allows localization by estimating projection matrices instead of epipolar geometries
Localizing to the target's coordinate frame means:
- Pairwise localization occurs when 2 cameras localize to the same target position
- All separate pairwise localizations have the same scale
- Easy global realignment: cameras localized to 2 target positions determine and pass the simple translation and rotation that aligns one to the other (see the sketch below)
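
A minimal realignment sketch, under the assumption that each camera stores its pose relative to a target position as (R, t) with X_cam = R X_target + t. A camera that has localized to target positions j and k can derive the rigid transform taking frame-j coordinates into frame-k coordinates; because both localizations are in the target's real-world units, no scale factor is involved. Names and conventions are illustrative, not taken from the paper.

```python
# Minimal realignment sketch, assuming a camera's pose w.r.t. a target position
# is stored as (R, t) with X_cam = R @ X_target + t. The same camera seen from
# two target frames j and k gives the rigid transform between those frames.
import numpy as np

def frame_j_to_frame_k(R_k, t_k, R_j, t_j):
    """Return (R_jk, t_jk) such that X_k = R_jk @ X_j + t_jk."""
    # For any physical point: R_k @ X_k + t_k = R_j @ X_j + t_j (both equal the
    # camera coordinates), so X_k = R_k.T @ (R_j @ X_j + t_j - t_k).
    R_jk = R_k.T @ R_j
    t_jk = R_k.T @ (t_j - t_k)
    return R_jk, t_jk                     # no scale term: both frames use target units
```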

8 Localizing With The Projection Matrix
- Cameras estimate projection matrices using the known 3D coordinates of the target's feature points and the 2D coordinates of their detected pixel points
- The recovered orientation and position are relative to the target's current location
[Figure: recovered camera position and orientation relative to the target]
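
A hedged sketch of this step: a direct linear transform (DLT) estimate of the 3x4 projection matrix from the target's known 3D feature points and their detected pixels, followed by OpenCV's decomposition into the camera's orientation and position in the target's frame. The slide does not specify the estimation algorithm, so the DLT here is an assumption; variable names are illustrative.

```python
# Hedged sketch: DLT estimate of the 3x4 projection matrix P from the target's
# known 3D feature points and their detected pixels, then decomposition into the
# camera's orientation and position in the target's coordinate frame.
import cv2
import numpy as np

def localize_to_target(X_world, x_pixels):
    """X_world: Nx3 target points (known geometry); x_pixels: Nx2 detected pixels."""
    rows = []
    for (X, Y, Z), (u, v) in zip(X_world, x_pixels):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
    _, _, Vt = np.linalg.svd(np.asarray(rows, dtype=float))
    P = Vt[-1].reshape(3, 4)              # least-squares solution of the DLT system
    K, R, C_h = cv2.decomposeProjectionMatrix(P)[:3]
    C = (C_h[:3] / C_h[3]).ravel()        # camera position in the target's frame
    return R, C                           # orientation (3x3) and position (3-vector)
```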

9 3D Targets
- Targets must be suitable for both the characteristics of the deployment and the environment
- Target size will depend on the distances between cameras
- Targets must be efficiently detectable
- Targets must have at least 28 feature points for projection matrix estimation
- Due to noise in feature point detection, more feature points increase localization accuracy

10 Evaluation - Single Camera Localization
Testbed - single camera configuration:
- Gutted Logitech Quickcam, ~5.5mm lens, ~41° field of view
- Crossbow Stargate, XScale-PXA255 rev 6 (v5l), 64MB RAM, Linux
- 640x480 images taken by the Logitech Quickcam on the Stargates
- Feature point frame area coverage -- near: 13.5%, far: 2.5%
Results:
- Inaccuracies in single camera localizations relative to the target will be propagated during global realignment
- Position error varies by less than 0.5% at each target distance using different numbers of points
- At 13.5% frame area, a 1.2% position error = 0.27"; at 2.55% frame area, a 0.4% position error = 0.21"
- x, y, and z-axis orientation error in degrees using 288 points: orientation error varies by less than 0.4 of a degree per axis

11 Evaluation - Network Localization
Setup:
- Localization of a small network of 5 cameras
- Target position at hop 0 (localized alone) is the origin of the global coordinate frame
[Charts: position error % and orientation error % using 288 points, plotted at hops 0 through 4]
Position error:
- Hop 0 has 0.8% error
- Hop 1 shows an increase, but errors at hops 2 and 3 are consistent; increase at hop 4
- Again, as with single camera localization, position error % declines as target distance increases
Orientation error:
- x and y-axis orientation angle errors show the same small fluctuations at different target positions seen in the single camera localization tests
- z-axis orientation angle error is off by 4° at hop 4; hops 1 and 3 have a negative error, 2 and 4 have a positive error
- Most likely, manual measurement or construction flaws will manifest as z-axis orientation angle errors

12 Future Work
- Determine cause of z-axis orientation angle error: inaccurate measurement or construction? Noise in pixel coordinates when detecting target feature points? Target design?
- Virtualize the testbed configuration to remove the possibility of human measurement errors
- Refine pairwise localization error using bundle adjustment (see the sketch below)
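
As a rough illustration of the proposed refinement: with the target's 3D geometry held fixed, bundle adjustment reduces to nonlinear minimization of pixel reprojection error over the camera pose. The sketch below assumes SciPy and known intrinsics K; all inputs (initial pose R0/t0, target points X_world, detected pixels x_pixels) are illustrative, not from the paper.

```python
# Rough sketch of the proposed refinement: with the target geometry held fixed,
# bundle adjustment reduces to minimizing pixel reprojection error over the pose.
# K (intrinsics), R0/t0 (initial pose), X_world, x_pixels are illustrative inputs.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def refine_pose(K, R0, t0, X_world, x_pixels):
    def residuals(params):
        R = Rotation.from_rotvec(params[:3]).as_matrix()
        t = params[3:]
        proj = (K @ (R @ X_world.T + t[:, None])).T   # project target points
        uv = proj[:, :2] / proj[:, 2:3]               # perspective divide
        return (uv - x_pixels).ravel()                # per-point pixel error
    x0 = np.hstack([Rotation.from_matrix(R0).as_rotvec(), t0])
    sol = least_squares(residuals, x0)                # nonlinear least-squares refinement
    return Rotation.from_rotvec(sol.x[:3]).as_matrix(), sol.x[3:]
```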

13 Conclusions
Developed and implemented a smart camera network localization solution using a 3D target, which has these advantages:
- Non-opportunistic feature point detection, so less image processing
- Localization by projection matrix estimation, so no messages for point correlation
- Message passing only for: determining simultaneous views of the target by 2 cameras, and the 3D rotation matrices and 3D translation vectors for global realignment
- Requires only that 2 cameras share an overlapping view
- No unknown scale factors problem
- Runnable on resource-constrained sensor platforms
- Distributed

