1 KinectFusion: Real-Time Dense Surface Mapping and Tracking. IEEE International Symposium on Mixed and Augmented Reality (ISMAR) 2011, Science and Technology Proceedings (best paper award)

2 Target – Normal maps – Greyscale – Noisy data

3 Outline: Introduction – Motivation – Background – System diagram – Experiment results – Conclusion

4 Introduction Passive-camera approaches: simultaneous localization and mapping (SLAM) and structure from motion (SFM) – MonoSLAM [8] (ICCV 2003) – Parallel Tracking and Mapping [17] (ISMAR 2007) – Dense depth model from disparity [26] (2010) – Camera pose from depth models [20] (ICCV 2011)

5 Motivation Active camera: the Kinect sensor – Pose estimation from depth information – Real-time mapping on the GPU

6 Background – Camera sensor Kinect sensor – infra-red light. Input information – RGB image – Raw depth data – Calibrated depth image

7 Background – Pose estimation Depth maps from two views – Iterative closest point (ICP) [7] – Point-plane metric [5]

8 Background – Pose estimation Projective data association algorithm [4]

9 Background – Scene Representation Volume of space Signed distance function [7]

10 System Diagram

11

12 Pose estimation with the depth camera – Pre-defined parameter: camera intrinsic matrix K – At time k, the raw depth map R_k gives a calibrated depth measurement R_k(u) at each image pixel u

13 Surface Measurement Noise in the raw depth map is reduced with a bilateral filter (images: with vs. without the bilateral filter)
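
The filter has the standard bilateral form used in the paper, combining a spatial Gaussian and a depth-range Gaussian; σ_s and σ_r below denote their standard deviations and W_u the normalizing constant:

```latex
D_k(\mathbf{u}) = \frac{1}{W_{\mathbf{u}}} \sum_{\mathbf{q} \in \mathcal{U}}
  \mathcal{N}_{\sigma_s}\!\big(\lVert \mathbf{u} - \mathbf{q} \rVert_2\big)\,
  \mathcal{N}_{\sigma_r}\!\big(\lvert R_k(\mathbf{u}) - R_k(\mathbf{q}) \rvert\big)\, R_k(\mathbf{q}),
\qquad
\mathcal{N}_{\sigma}(t) = e^{-t^2/\sigma^2}.
```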

14 Surface Measurement Vertex map and normal map computed per pixel from the filtered depth (see the sketch below)
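
A minimal NumPy sketch of this step, assuming a filtered depth map and the intrinsic matrix K: each pixel is back-projected to a vertex, and normals are taken from cross products of neighbouring vertices. The function name and array layout are illustrative, not from the paper.

```python
import numpy as np

def depth_to_vertex_normal(depth, K):
    """Back-project a filtered depth map into a vertex map and derive normals.

    depth : (H, W) array of metric depth values D_k(u)
    K     : (3, 3) camera intrinsic matrix
    """
    H, W = depth.shape
    K_inv = np.linalg.inv(K)

    # Pixel grid in homogeneous coordinates: u_dot = (u, v, 1)^T
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).astype(np.float64)

    # Vertex map: V_k(u) = D_k(u) * K^{-1} * u_dot
    vertices = depth[..., None] * (pix @ K_inv.T)

    # Normal map from neighbouring vertices (forward differences):
    # N_k(u) proportional to (V(u+1,v) - V(u,v)) x (V(u,v+1) - V(u,v))
    dx = np.zeros_like(vertices)
    dy = np.zeros_like(vertices)
    dx[:, :-1] = vertices[:, 1:] - vertices[:, :-1]
    dy[:-1, :] = vertices[1:, :] - vertices[:-1, :]
    normals = np.cross(dx, dy)
    norm = np.linalg.norm(normals, axis=-1, keepdims=True)
    normals = np.divide(normals, norm, out=np.zeros_like(normals), where=norm > 0)
    return vertices, normals
```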

15 Define camera pose The camera frame at time k is transformed into the global frame by a rigid-body pose
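
Concretely, the pose is a rotation and translation T_{g,k} in SE(3), mapping camera-frame vertices and normals into the global frame (the dot denotes homogeneous coordinates):

```latex
T_{g,k} = \begin{bmatrix} R_{g,k} & \mathbf{t}_{g,k} \\ \mathbf{0}^{\top} & 1 \end{bmatrix} \in \mathbb{SE}(3),
\qquad
\mathbf{v}^{g} = T_{g,k}\,\dot{\mathbf{v}}^{k},
\qquad
\mathbf{n}^{g} = R_{g,k}\,\mathbf{n}^{k}.
```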

16 System Diagram

17 Surface Reconstruction: the operating environment is represented as an L × L × L voxel volume
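
A small sketch of the voxel-to-world mapping, assuming a cubic volume of side_len metres centred at the origin; the helper name and the centring convention are assumptions, not taken from the paper:

```python
import numpy as np

def voxel_to_point(idx, L, side_len):
    """Map an integer voxel index (i, j, k) in an L x L x L grid to the metric
    centre of that voxel, for a cube of side_len metres centred at the origin."""
    voxel_size = side_len / L
    return (np.asarray(idx, dtype=np.float64) + 0.5) * voxel_size - side_len / 2.0
```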

18 Surface Reconstruction Signed distance function

19 Truncated Signed Distance Function (plot of F_k(p) along a ray from the sensor: zero at the surface, truncated to the range [-v, +v])
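
A sketch of a projective TSDF of the kind the paper fuses (notation loosely follows the paper; μ is the truncation distance, written as ±v on the slide; x is the pixel that p projects onto, and λ converts ray distance to depth along that pixel's ray):

```latex
F_{R_k}(\mathbf{p}) = \Psi\!\left( R_k(\mathbf{x}) - \lambda^{-1}\lVert \mathbf{t}_{g,k} - \mathbf{p} \rVert_2 \right),
\qquad
\lambda = \lVert K^{-1}\dot{\mathbf{x}} \rVert_2,
\qquad
\Psi(\eta) =
\begin{cases}
  \operatorname{sgn}(\eta)\,\min\!\left(1, \tfrac{|\eta|}{\mu}\right) & \text{if } \eta \ge -\mu, \\
  \text{null} & \text{otherwise,}
\end{cases}
```

so free space in front of the surface is truncated to +1, and voxels more than μ behind the measured surface are left unmodified.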

20 Weighted running average of the TSDF values – provides robustness to dynamic object motion
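
The fusion is the paper's weighted running average, applied per voxel, with the accumulated weight capped at W_η:

```latex
F_k(\mathbf{p}) = \frac{W_{k-1}(\mathbf{p})\,F_{k-1}(\mathbf{p}) + W_{R_k}(\mathbf{p})\,F_{R_k}(\mathbf{p})}
                       {W_{k-1}(\mathbf{p}) + W_{R_k}(\mathbf{p})},
\qquad
W_k(\mathbf{p}) = \min\!\big(W_{k-1}(\mathbf{p}) + W_{R_k}(\mathbf{p}),\; W_{\eta}\big).
```

Capping the weight keeps the average adaptive, so a surface that moves is eventually overwritten rather than preserved forever.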

21 System Diagram

22 Surface Prediction from Ray Casting For each pixel, the corresponding ray is marched from the +v region to the zero crossing of the TSDF, and the predicted surface point is stored

23 Surface Prediction from Ray Casting Speed-up – Ray skipping – Step size bounded by the truncation distance (a sketch follows)
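
A minimal per-pixel sketch of the march with skipping, assuming a `tsdf(point)` lookup that interpolates the fused volume and returns None outside it; all names and the 0.8 heuristics are illustrative, not values from the paper:

```python
import numpy as np

def cast_ray(origin, direction, tsdf, mu, t_max, fine_step):
    """March a ray from the camera centre and return the first + to - zero
    crossing of the TSDF, or None if no surface is hit before t_max."""
    direction = direction / np.linalg.norm(direction)
    t = 0.0
    prev_t, prev_f = None, None
    while t < t_max:
        f = tsdf(origin + t * direction)
        if f is not None and prev_f is not None and prev_f > 0.0 and f < 0.0:
            # Zero crossing between prev_t and t: locate it by linear interpolation.
            t_hit = prev_t + (t - prev_t) * prev_f / (prev_f - f)
            return origin + t_hit * direction
        prev_t, prev_f = t, f
        # Ray skipping: while the TSDF value is near +1 we are at least about a
        # truncation distance mu away from any surface, so take a large step;
        # otherwise step finely so the zero crossing is not missed.
        if f is not None and f > 0.8:
            t += 0.8 * mu
        else:
            t += fine_step
    return None
```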

24 System Diagram

25 Sensor Pose Estimation Correspondences between the previous and current frames – Small inter-frame motion is assumed – Fast projective data association algorithm, initialized with the previous frame's pose (see the sketch below)
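
A minimal sketch of projective data association for a single pixel, reusing `depth_to_vertex_normal` from above; `project`, the threshold values, and the variable names are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def project(K, p):
    """Perspective projection of a camera-frame point to pixel coordinates."""
    uvw = K @ p
    return uvw[:2] / uvw[2]

def find_correspondence(u, V_k, N_k, V_prev, N_prev, T_est, T_prev, K,
                        dist_thresh=0.05, angle_thresh=0.7):
    """Projective data association for one pixel u = (row, col) of frame k.

    V_k, N_k      : current-frame vertex/normal maps (camera coordinates)
    V_prev, N_prev: predicted vertex/normal maps of the previous frame (global)
    T_est         : current 4x4 pose estimate, initialized with T_prev
    Returns (measured global vertex, model vertex, model normal) or None.
    """
    v = V_k[u]                                   # measured vertex, camera frame
    v_g = (T_est @ np.append(v, 1.0))[:3]        # transform into the global frame
    v_in_prev = (np.linalg.inv(T_prev) @ np.append(v_g, 1.0))[:3]
    px = np.round(project(K, v_in_prev)).astype(int)   # pixel in previous frame
    H, W = V_prev.shape[:2]
    if not (0 <= px[0] < W and 0 <= px[1] < H):
        return None
    v_model, n_model = V_prev[px[1], px[0]], N_prev[px[1], px[0]]
    # Reject outliers by Euclidean distance and normal compatibility.
    n_g = T_est[:3, :3] @ N_k[u]
    if np.linalg.norm(v_g - v_model) > dist_thresh or np.dot(n_g, n_model) < angle_thresh:
        return None
    return v_g, v_model, n_model
```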

26 Vertex correspondences feed a point-plane energy summed over all valid pixels (written out below)
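
The pose is found by minimizing the point-plane energy over the valid correspondences Ω_k, where the hatted maps are the raycast model prediction from the previous frame; each residual is the point-to-point error projected onto the model normal:

```latex
E(T_{g,k}) = \sum_{\mathbf{u} \in \Omega_k}
  \left\lVert \left( T_{g,k}\,\dot{V}_k(\mathbf{u}) - \hat{V}^{\,g}_{k-1}(\hat{\mathbf{u}}) \right)^{\!\top}
  \hat{N}^{\,g}_{k-1}(\hat{\mathbf{u}}) \right\rVert_2^{2}.
```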

27 Iterative solution: for each ICP iteration z > 0, the energy is re-linearized around the previous iterate, giving a modified (linearized) equation in a 6-DoF parameter vector (sketched below)
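
A sketch of the standard small-angle linearization used for this kind of step (notation here is mine, not copied from the paper): the incremental rotation is approximated to first order, so the update becomes linear in a 6-vector x:

```latex
\tilde{R}\,\mathbf{v} + \tilde{\mathbf{t}}
 \;\approx\; \mathbf{v} + \boldsymbol{\omega} \times \mathbf{v} + \mathbf{t}
 \;=\; \mathbf{v} + \begin{bmatrix} -[\mathbf{v}]_{\times} & I_{3\times 3} \end{bmatrix}\mathbf{x},
\qquad
\mathbf{x} = (\boldsymbol{\omega}^{\top}, \mathbf{t}^{\top})^{\top} \in \mathbb{R}^{6}.
```

Substituting this into the point-plane energy yields a 6 × 6 normal-equation system that is solved once per iteration z.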

28

29 Experiment Results Reconstruction resolution: 256³ voxels – To test camera-pose estimation, the Kinect camera rotates on a turntable and captures 560 frames over 19 seconds

30 Experiment Results Using every 8th frame

31 Experiment Results: Processing time Breakdown over pre-processing of raw data, data association, pose optimisation, raycasting of the surface prediction, and surface measurement integration – Demo

32 Conclusion Robust tracking of the camera pose by aligning all depth points – Parallel algorithms for both tracking and mapping

33 References
[8] A. J. Davison. Real-time simultaneous localization and mapping with a single camera. In Proceedings of the International Conference on Computer Vision (ICCV), 2003.
[17] G. Klein and D. W. Murray. Parallel tracking and mapping for small AR workspaces. In Proceedings of the International Symposium on Mixed and Augmented Reality (ISMAR), 2007.
[26] J. Stuehmer, S. Gumhold, and D. Cremers. Real-time dense geometry from a handheld camera. In Proceedings of the DAGM Symposium on Pattern Recognition, 2010.

34 References (continued)
[20] R. A. Newcombe, S. J. Lovegrove, and A. J. Davison. DTAM: Dense tracking and mapping in real-time. In Proceedings of the International Conference on Computer Vision (ICCV), 2011.
[7] B. Curless and M. Levoy. A volumetric method for building complex models from range images. In ACM Transactions on Graphics (SIGGRAPH), 1996.
[5] Y. Chen and G. Medioni. Object modeling by registration of multiple range images. Image and Vision Computing (IVC), 10(3):145–155, 1992.
[4] G. Blais and M. D. Levine. Registering multiview range data to create 3D computer objects. IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), 17(8):820–824, 1995.

