STEREO VISION BASED MOBILE ROBOT NAVIGATION IN ROUGH TERRAIN


Supervisor: Professor Ray A. Jarvis
Author: Sardjono Trihatmo

Statement of the problem
A mobile robot needs to reach a given goal position autonomously in rough terrain. The environment is unknown and possibly unstructured. The robot must find a feasible path to the destination and avoid collisions with obstacles using a stereo vision system and natural landmarks.

Environmental Representation
The cell-based model is used to represent the environment. The environment is divided into small cells, each of which is marked either occupied or free. The accuracy of this model can be increased by reducing the cell size.

Figure 3. Obstacle representation using the cell-based model.

Localisation and Mapping
Localisation is a sufficiently accurate estimate of the current position of the mobile robot. Mapping produces a sufficiently accurate map of the navigation area. The research focuses on the problem of simultaneous localisation and mapping: the robot is re-localised at each step along its path using current and previous information about the environment. This method is especially useful when no map is given in advance, because the robot generates the map from its sensor data and estimates its position within that generated map.

[Block diagram: stereo vision data and the local map L_i are combined with odometry in the relocalisation-and-mapping step, updating the previous global map G_(i-1) to the new global map G_i.]

Figure 1. The mobile robot ROBUTER and the arm RTX used in the research. The cameras are installed on the wrist of the arm.

Robot Perception
The robot uses a stereo vision system as an external sensor to perceive the possibly unpredictable environment. The sensor information is used to model the environment and to perform localisation. The stereo vision system delivers discrete disparity values that are inversely proportional to distance. Appropriate calibration is needed to obtain reliable data.

Figure 4. Simultaneous localisation and mapping.
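The inverse relationship between disparity and distance can be sketched as follows. This is a minimal illustration for a rectified stereo pair using the standard triangulation relation Z = f * B / d; the focal length and baseline values are illustrative assumptions, not the actual calibration of the robot's cameras.

```python
def depth_from_disparity(disparity_px, focal_length_px, baseline_m):
    """Triangulated depth for a rectified stereo pair: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_length_px * baseline_m / disparity_px

# Illustrative values: 700 px focal length, 0.12 m camera baseline.
# A larger disparity means the point is closer to the cameras.
near = depth_from_disparity(40, 700.0, 0.12)   # 2.1 m
far = depth_from_disparity(10, 700.0, 0.12)    # 8.4 m
```

Because disparity is discrete, depth resolution degrades with distance: one disparity step near the cameras changes the depth far less than one step at long range, which is one reason careful calibration matters.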
Path Planning
The research implements the Distance Transform [1] path planning method, which was devised specifically for the cell-based environmental model. The main idea is to number the cells: starting from the goal position (numbered 0), each free cell is assigned one plus the minimum number among its already-numbered neighbours. The robot can then move from any start point by steepest-descent traversal of the numbers, which yields the shortest possible path.

Figure 5. Distance Transform path planning (G: goal position, S: start position).

Figure 2. The real world (a) is converted to the disparity image (b); brighter regions of the disparity image are closer to the cameras. The depth profile (c), measured from the camera baseline (not to scale), corresponds to the maximum disparity in each column of the disparity image.

Reference:
[1] R. A. Jarvis. Collision free trajectory planning using distance transform. In Proc. National Conference and Exhibition on Robotics, Melbourne, August 1984.

Electrical and Computer Systems Engineering Postgraduate Student Research Forum 2001
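The Distance Transform numbering and the steepest-descent traversal described above can be sketched as a breadth-first flood fill from the goal over the cell grid. The grid, 4-connectivity, and the small example below are illustrative assumptions, not the paper's actual implementation.

```python
from collections import deque

def distance_transform(grid, goal):
    """Number each free cell with its shortest step count to the goal.
    grid: 2D list, 0 = free, 1 = occupied; goal: (row, col)."""
    rows, cols = len(grid), len(grid[0])
    dist = [[None] * cols for _ in range(rows)]
    dist[goal[0]][goal[1]] = 0
    queue = deque([goal])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):  # 4-connected
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and dist[nr][nc] is None):
                dist[nr][nc] = dist[r][c] + 1  # neighbour's minimum, plus one
                queue.append((nr, nc))
    return dist

def steepest_descent(dist, start):
    """Follow decreasing cell numbers from the start down to the goal (0)."""
    path = [start]
    r, c = start
    while dist[r][c] != 0:
        # Step to the reachable neighbour with the smallest number.
        r, c = min(((r + dr, c + dc)
                    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1))
                    if 0 <= r + dr < len(dist) and 0 <= c + dc < len(dist[0])
                    and dist[r + dr][c + dc] is not None),
                   key=lambda p: dist[p[0]][p[1]])
        path.append((r, c))
    return path

grid = [[0, 0, 0],
        [0, 1, 0],   # 1 marks an occupied cell
        [0, 0, 0]]
dist = distance_transform(grid, goal=(0, 2))
path = steepest_descent(dist, start=(2, 0))  # routes around the obstacle
```

Because the flood fill assigns every reachable free cell a neighbour whose number is exactly one smaller, the descent can never get stuck in a local minimum, which is what makes the method attractive for cell-based maps.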