Removing Moving Objects from Point Cloud Scenes Krystof Litomisky and Bir Bhanu International Workshop on Depth Image Analysis November 11, 2012.

Motivation: SLAM. Example maps from Henry 2012, Andreasson 2010, Wurm 2010, Du 2011, and Henry 2010 (figures). Where is everyone?

Moving objects can cause issues with registration, localization, mapping, and navigation. GOAL: a SLAM algorithm that ignores moving objects but creates accurate, detailed, and consistent maps.

One Solution: remove moving objects before registration!

Overview: identifying and removing arbitrary moving objects from two point cloud views of a scene.

Plane Removal. Why? Planes do not move, and removing them helps segmentation. How? RANSAC: iteratively remove the largest plane until the one just removed is approximately horizontal.
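The slides do not include code; the following is a minimal sketch of this step using the Point Cloud Library (PCL). The 2 cm RANSAC inlier threshold, the choice of the camera y axis as the vertical direction, and the 0.9 horizontality cutoff are illustrative assumptions, not values from the paper.

```cpp
#include <cmath>

#include <pcl/ModelCoefficients.h>
#include <pcl/PointIndices.h>
#include <pcl/filters/extract_indices.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/sample_consensus/method_types.h>
#include <pcl/sample_consensus/model_types.h>
#include <pcl/segmentation/sac_segmentation.h>

using Cloud = pcl::PointCloud<pcl::PointXYZ>;

// Iteratively fit the largest remaining plane with RANSAC and remove its
// inliers, stopping once the plane just removed is approximately horizontal.
void removePlanes(Cloud::Ptr& cloud) {
  pcl::SACSegmentation<pcl::PointXYZ> seg;
  seg.setOptimizeCoefficients(true);
  seg.setModelType(pcl::SACMODEL_PLANE);
  seg.setMethodType(pcl::SAC_RANSAC);
  seg.setDistanceThreshold(0.02);  // 2 cm inlier threshold (assumed)

  while (true) {
    pcl::ModelCoefficients coeffs;
    pcl::PointIndices::Ptr inliers(new pcl::PointIndices);
    seg.setInputCloud(cloud);
    seg.segment(*inliers, coeffs);
    if (inliers->indices.empty()) break;  // no plane found

    // Drop the plane's inliers from the working cloud.
    pcl::ExtractIndices<pcl::PointXYZ> extract;
    extract.setInputCloud(cloud);
    extract.setIndices(inliers);
    extract.setNegative(true);
    Cloud::Ptr remaining(new Cloud);
    extract.filter(*remaining);
    cloud = remaining;

    // "Approximately horizontal": the plane normal (a, b, c) is close to the
    // vertical axis. The camera y axis is assumed to be roughly vertical.
    const float a = coeffs.values[0], b = coeffs.values[1], c = coeffs.values[2];
    const float norm = std::sqrt(a * a + b * b + c * c);
    if (std::fabs(b) / norm > 0.9f) break;  // horizontality cutoff (assumed)
  }
}
```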

Euclidean Cluster Segmentation: two points are put in the same cluster if they are within 15 cm of each other.
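A sketch of the clustering step with PCL's EuclideanClusterExtraction, using the 15 cm tolerance from the slide; the minimum and maximum cluster sizes are assumptions added for illustration.

```cpp
#include <vector>

#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/search/kdtree.h>
#include <pcl/segmentation/extract_clusters.h>

using Cloud = pcl::PointCloud<pcl::PointXYZ>;

// Group the remaining (plane-free) points into clusters: two points end up
// in the same cluster if they are within 15 cm of each other.
std::vector<pcl::PointIndices> clusterObjects(const Cloud::ConstPtr& cloud) {
  pcl::search::KdTree<pcl::PointXYZ>::Ptr tree(new pcl::search::KdTree<pcl::PointXYZ>);
  tree->setInputCloud(cloud);

  pcl::EuclideanClusterExtraction<pcl::PointXYZ> ec;
  ec.setClusterTolerance(0.15);   // 15 cm, from the slide
  ec.setMinClusterSize(100);      // assumed: ignore tiny clusters
  ec.setMaxClusterSize(250000);   // assumed
  ec.setSearchMethod(tree);
  ec.setInputCloud(cloud);

  std::vector<pcl::PointIndices> clusters;
  ec.extract(clusters);
  return clusters;
}
```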

Viewpoint Feature Histograms: each cluster is described by a VFH descriptor (histogram figure).
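For context (the slide shows only the histogram figure), here is a sketch of computing a Viewpoint Feature Histogram for one cluster with PCL; the 3 cm normal-estimation radius is an assumption.

```cpp
#include <pcl/features/normal_3d.h>
#include <pcl/features/vfh.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/search/kdtree.h>

using Cloud = pcl::PointCloud<pcl::PointXYZ>;

// Compute the 308-bin Viewpoint Feature Histogram (VFH) for one object cluster.
pcl::VFHSignature308 computeVFH(const Cloud::ConstPtr& cluster) {
  pcl::search::KdTree<pcl::PointXYZ>::Ptr tree(new pcl::search::KdTree<pcl::PointXYZ>);

  // Surface normals are required input for the VFH estimator.
  pcl::NormalEstimation<pcl::PointXYZ, pcl::Normal> ne;
  ne.setInputCloud(cluster);
  ne.setSearchMethod(tree);
  ne.setRadiusSearch(0.03);  // 3 cm neighborhood (assumed)
  pcl::PointCloud<pcl::Normal>::Ptr normals(new pcl::PointCloud<pcl::Normal>);
  ne.compute(*normals);

  pcl::VFHEstimation<pcl::PointXYZ, pcl::Normal, pcl::VFHSignature308> vfh;
  vfh.setInputCloud(cluster);
  vfh.setInputNormals(normals);
  vfh.setSearchMethod(tree);
  pcl::PointCloud<pcl::VFHSignature308> descriptor;
  vfh.compute(descriptor);

  return descriptor.points[0];  // one 308-bin histogram per cluster
}
```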

Finding Correspondences: allow warping of up to 5 bins (1.6% of the histogram).

Dynamic Time Warping vs. plain Euclidean distance (histogram comparison figure). Matching: iteratively take the closest pair of objects (in feature space) until there are no objects left in at least one cloud.
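A sketch of the comparison and matching described on the last two slides: DTW restricted to a band of 5 bins (about 1.6% of the 308 bins), followed by greedily pairing the closest objects across the two clouds. This is an illustrative reimplementation, not the authors' code.

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <limits>
#include <utility>
#include <vector>

#include <pcl/point_types.h>

constexpr std::size_t kBins = 308;   // VFH histogram length
constexpr std::size_t kWindow = 5;   // allowed warping: 5 bins (~1.6%)

// Dynamic Time Warping distance between two VFH histograms, restricted to a
// Sakoe-Chiba band of +/- kWindow bins around the diagonal.
double dtwDistance(const pcl::VFHSignature308& x, const pcl::VFHSignature308& y) {
  const double INF = std::numeric_limits<double>::infinity();
  std::vector<std::vector<double>> D(kBins + 1, std::vector<double>(kBins + 1, INF));
  D[0][0] = 0.0;
  for (std::size_t i = 1; i <= kBins; ++i) {
    const std::size_t jLo = (i > kWindow) ? i - kWindow : 1;
    const std::size_t jHi = std::min(kBins, i + kWindow);
    for (std::size_t j = jLo; j <= jHi; ++j) {
      const double cost = std::abs(x.histogram[i - 1] - y.histogram[j - 1]);
      D[i][j] = cost + std::min({D[i - 1][j], D[i][j - 1], D[i - 1][j - 1]});
    }
  }
  return D[kBins][kBins];
}

// Greedy matching: repeatedly take the closest remaining pair of objects
// (in VFH space) until one of the clouds has no unmatched objects left.
std::vector<std::pair<std::size_t, std::size_t>> matchObjects(
    const std::vector<pcl::VFHSignature308>& cloudA,
    const std::vector<pcl::VFHSignature308>& cloudB) {
  std::vector<std::pair<std::size_t, std::size_t>> matches;
  std::vector<bool> usedA(cloudA.size(), false), usedB(cloudB.size(), false);
  const std::size_t nPairs = std::min(cloudA.size(), cloudB.size());
  for (std::size_t k = 0; k < nPairs; ++k) {
    double best = std::numeric_limits<double>::infinity();
    std::size_t bi = 0, bj = 0;
    for (std::size_t i = 0; i < cloudA.size(); ++i) {
      if (usedA[i]) continue;
      for (std::size_t j = 0; j < cloudB.size(); ++j) {
        if (usedB[j]) continue;
        const double d = dtwDistance(cloudA[i], cloudB[j]);
        if (d < best) { best = d; bi = i; bj = j; }
      }
    }
    usedA[bi] = usedB[bj] = true;
    matches.emplace_back(bi, bj);
  }
  return matches;
}
```

For a handful of clusters per cloud the repeated DTW evaluations are cheap; a larger-scale version would cache the pairwise distance matrix.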

Correspondences: some objects will have no correspondence, due to object motion, camera motion, or occlusion (example figures).

Recreating the Clouds: each cloud is reconstructed from the planes that were removed and the objects that were not removed. (Figure panels: original; recreated; recreated, viewpoint changed.)
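A minimal sketch of the reconstruction step, assuming the removed planes and the retained (static) object clusters are kept as separate PCL clouds; concatenation uses PCL's point-cloud operator+=.

```cpp
#include <vector>

#include <pcl/point_cloud.h>
#include <pcl/point_types.h>

using Cloud = pcl::PointCloud<pcl::PointXYZ>;

// Rebuild a cleaned cloud from the planes removed earlier plus the object
// clusters that were not flagged as moving.
Cloud::Ptr recreateCloud(const std::vector<Cloud::Ptr>& planes,
                         const std::vector<Cloud::Ptr>& staticObjects) {
  Cloud::Ptr recreated(new Cloud);
  for (const auto& plane : planes) *recreated += *plane;
  for (const auto& object : staticObjects) *recreated += *object;
  return recreated;
}
```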

Experiments

Results: five example scenes, each shown as input and output point clouds (figures).

Object ROC Plot: operating point at TPR = 1.00, FPR = 0.47.

Fraction of Static Points Retained: mean = 0.85.

Conclusions & Future Directions: we remove moving objects from point cloud scenes, handling arbitrary objects and allowing camera motion. Open considerations: should we just look for people? Runtime speed.

Questions? Thank you.

References
H. Du et al., "Interactive 3D modeling of indoor environments with a consumer depth camera," in Proceedings of the 13th International Conference on Ubiquitous Computing (UbiComp '11), 2011, p. 75.
H. Andreasson and A. J. Lilienthal, "6D scan registration using depth-interpolated local image features," Robotics and Autonomous Systems, vol. 58, no. 2, Feb. 2010.
P. Henry, M. Krainin, E. Herbst, X. Ren, and D. Fox, "RGB-D mapping: Using Kinect-style depth cameras for dense 3D modeling of indoor environments," The International Journal of Robotics Research, Feb. 2012.
K. M. Wurm, A. Hornung, M. Bennewitz, C. Stachniss, and W. Burgard, "OctoMap: A probabilistic, flexible, and compact 3D map representation for robotic systems," in Proc. of the ICRA 2010 Workshop on Best Practice in 3D Perception and Modeling for Mobile Manipulation, 2010.
P. Henry, M. Krainin, E. Herbst, X. Ren, and D. Fox, "RGB-D mapping: Using depth cameras for dense 3D modeling of indoor environments," in the 12th International Symposium on Experimental Robotics (ISER), 2010.