3D Mapping Robots Intelligent Robotics School of Computer Science Jeremy Wyatt James Walker

What Are 3D Mapping Robots and Their Uses?
Robots which produce a three-dimensional model of their environment from the data they collect.
They can be used by people who need to know more about the interior of a building: architects, fire fighters and human rescue workers.

Types of Sensing Techniques
Stereo vision
Laser range finders
A combination of the two

Stereo Vision
Uses stereo disparities to compute depth.
Inaccurate in detecting the position of walls and objects, especially in cluttered environments.
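The depth computation itself is the standard triangulation relation for a rectified stereo pair. A minimal Python sketch, assuming a pinhole model with known focal length and baseline (the numbers and names below are illustrative, not from the slides):

```python
def depth_from_disparity(disparity_px, focal_length_px, baseline_m):
    """Depth of a point from its stereo disparity (rectified pinhole pair).

    depth = f * B / d, so small disparities (distant points) give large,
    noisy depth estimates -- one reason stereo struggles on bare walls
    and in cluttered scenes with poor matches.
    """
    if disparity_px <= 0:
        return float("inf")  # no usable match
    return focal_length_px * baseline_m / disparity_px

# Example: 600 px focal length, 10 cm baseline, 12 px disparity -> 5 m depth.
print(depth_from_disparity(12, 600, 0.10))
```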

Laser Range Finders
Very accurate in measuring distances to walls and objects in the environment.
Has a range of 8 m with a resolution of 1 mm and a statistical error of +/-10 mm.
Cannot detect any texture in the environment, so can only produce single-coloured models.

A Combination of the Two
Laser range finders for measuring the distance to walls and objects.
An omni-cam for producing texture maps for a realistic visualisation of the environment.

The GATech Robot
Equipped with a laser range finder mounted vertically so that it scans a plane perpendicular to the robot's direction of motion.

How the Robot Builds the 3D Models
Collects raw range data from the environment using the laser range finder.
Converts the raw data into Cartesian co-ordinates.
Converts the Cartesian co-ordinates into a mesh for the 3D model.

How the Robot Collects the Raw Data
The laser sweeps through 180° in 0.5° steps, from one side of the robot over the top to the other, recording the distance at each step.
Approximately 38 scans are completed every second.
The robot moves forward at 0.25 m/s.
Therefore approximately one scan every 5 cm.

Transforming the Raw Data Into Co-ordinates
The raw data is in the form of cylindrical co-ordinates.
Each reading is transformed into Cartesian co-ordinates using the pose of the robot, the angle of the scan and the height of the centre of the laser scanner.
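A minimal Python sketch of this transformation. The slides only list the inputs (robot pose, scan angle, scanner height); the mounting geometry and sign conventions below are assumptions made for illustration:

```python
import math

def scan_point_to_world(r, alpha_deg, robot_x, robot_y, robot_heading,
                        scanner_height):
    """Convert one laser reading (range r at scan angle alpha) to world x, y, z.

    Assumed geometry (not spelled out on the slides): the scanner is mounted
    vertically, so the scan plane is perpendicular to the robot's direction
    of travel; alpha = 0 deg points to the robot's right, 90 deg straight up,
    180 deg to the left.
    """
    alpha = math.radians(alpha_deg)
    lateral = -r * math.cos(alpha)              # offset to the robot's left
    z = scanner_height + r * math.sin(alpha)    # height above the floor

    # Unit vector pointing to the robot's left for the current heading.
    left_x, left_y = -math.sin(robot_heading), math.cos(robot_heading)
    x = robot_x + lateral * left_x
    y = robot_y + lateral * left_y
    return x, y, z

# A 2 m reading straight up, robot at the origin facing +x, scanner 0.5 m high:
print(scan_point_to_world(2.0, 90.0, 0.0, 0.0, 0.0, 0.5))  # approx (0, 0, 2.5)
```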

Collecting the Co-ordinates to Form Triangles
Choose two scan points p1 and p2 from the same scan, taken at angles α and α + 0.5°.
Choose the two corresponding points q1 and q2 from the next scan.
Form two triangles p1p2q1 and q1p2q2.
For each triangle, calculate its normal vector.
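A sketch of how those two triangles per pair of neighbouring readings could be assembled, with normals from the cross product. The slide describes the scheme, not this code, so function and variable names are illustrative:

```python
import numpy as np

def triangles_between_scans(scan_a, scan_b):
    """Build the triangle strip described on the slide from two consecutive
    scans, each an (N, 3) array of Cartesian points at matching angles.

    For neighbouring points p1, p2 in scan_a and the corresponding q1, q2 in
    scan_b, emit triangles (p1, p2, q1) and (q1, p2, q2) with unit normals.
    """
    tris, normals = [], []
    for i in range(len(scan_a) - 1):
        p1, p2 = scan_a[i], scan_a[i + 1]
        q1, q2 = scan_b[i], scan_b[i + 1]
        for a, b, c in ((p1, p2, q1), (q1, p2, q2)):
            n = np.cross(b - a, c - a)          # normal via the cross product
            norm = np.linalg.norm(n)
            if norm > 1e-9:                     # skip degenerate triangles
                tris.append((a, b, c))
                normals.append(n / norm)
    return tris, normals
```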

GATech Model

Disadvantages of This Approach
The corridor appears slightly curved due to the way the robot moves.
Obstacles below a height of 0.52 m cannot be detected by the robot.
No filtering techniques were used, so the model is very noisy, but it retains a high level of complexity as a result.

Further Examples: Thrun et al.
Uses two laser range finders and an omni-cam.
Uses a technique called expectation maximisation.
Processes the data to reduce the noise.

Expectation Maximisation
Estimates the number of surfaces and their locations.
Adds and removes surfaces until it converges on the best-fit model for the data.
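A toy illustration of the idea in Python, assuming planar surfaces and hard (nearest-plane) assignments rather than the full probabilistic formulation used by Thrun et al.; surface proposal and outlier handling are omitted:

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through points: returns (centroid, unit normal)."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    return centroid, vt[-1]                     # direction of least variance

def em_surface_fit(points, planes, iterations=20, min_support=30):
    """Heavily simplified sketch of EM-style surface estimation.

    E-step: assign every point to its nearest plane (hard assignments, unlike
    the soft responsibilities of true EM). M-step: refit each plane to its
    assigned points and drop planes with too little support. Treat this
    purely as an illustration of the alternation, not the real algorithm.
    """
    for _ in range(iterations):
        if not planes:
            break
        # E-step: point-to-plane distances, shape (num_points, num_planes).
        dists = np.stack([np.abs((points - c) @ n) for c, n in planes], axis=1)
        labels = dists.argmin(axis=1)
        # M-step: refit, keeping only well-supported planes.
        planes = [fit_plane(points[labels == k])
                  for k in range(len(planes))
                  if np.count_nonzero(labels == k) >= min_support]
    return planes
```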

Thrun et al

Summary
Brief overview of what 3D mapping is and some uses for it.
Different types of sensors used.
How to collect data and convert it into a 3D model.
Some more advanced methods for 3D mapping and for processing the data.

References
www-2.cs.cmu.edu/~thrun/3d/

Processing the Data
Various techniques and algorithms have been used to reduce the noise in the data.
Smoothing is always used as a final post-processing step, since nearby measurements are likely to belong to the same surface.
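A minimal sketch of such a smoothing step: a simple moving average over neighbouring range readings within one scan. The window size is arbitrary, and a real pipeline would avoid averaging across depth discontinuities such as door frames:

```python
import numpy as np

def smooth_ranges(ranges, window=5):
    """Moving-average smoothing of the range readings in a single scan.

    Rests on the assumption from the slide: neighbouring measurements are
    likely to come from the same surface, so averaging suppresses sensor
    noise at the cost of blurring sharp depth changes.
    """
    ranges = np.asarray(ranges, dtype=float)
    kernel = np.ones(window) / window
    return np.convolve(ranges, kernel, mode="same")

# Example: noisy readings of a flat wall at roughly 3 m.
noisy = 3.0 + 0.01 * np.random.randn(361)
print(smooth_ranges(noisy)[:5])
```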

Types of 3D Mapping Robots
Stationary
Move with manual guidance
Fully automated