Autonomous Navigation and Mapping Using Monocular Low-Resolution Grayscale Vision
VIDYA MURALI AND STAN BIRCHFIELD, CLEMSON UNIVERSITY

ABSTRACT

An algorithm is proposed to address the challenges of autonomous corridor navigation and mapping by a mobile robot equipped with a single forward-facing camera. Using a combination of corridor ceiling lights, visual homing, and entropy, the robot performs straight-line navigation down the center of an unknown corridor. Turning at the end of a corridor is accomplished using Jeffrey divergence and time-to-collision, while deflection from dead ends and blank walls uses a scalar entropy measure of the entire image. When combined, these metrics allow the robot to navigate in both textured and untextured environments. The robot can autonomously explore an unknown indoor environment, recovering from difficult situations like corners, blank walls, and an initial heading toward a wall. While exploring, the algorithm constructs a Voronoi-based topo-geometric map with nodes representing distinctive places like doors, water fountains, and other corridors. Because the algorithm is based entirely upon low-resolution (32 × 24) grayscale images, processing occurs at over 1000 frames per second.

ALGORITHM OVERVIEW

Navigate the corridor using ceiling lights, entropy, and homing against a stored home image; detect the end of the corridor using Jeffrey divergence (J) and time-to-crash (TTC).

- The mobile robot's navigational behaviour is modelled by a set of paradigms that work in conjunction to correct its path in an indoor environment based on different metrics.
- Special emphasis is placed on using low-resolution images for computational efficiency, and on metrics that capture information content and variety that cannot be represented using traditional sparse features and methods.

AUTONOMOUS DRIVING DOWN THE CORRIDOR

The robot drives by centering on the ceiling lights and by homing toward a stored home image. Entropy serves two roles: to avoid blank walls, and to detect open corridors at a T-junction.

END OF THE CORRIDOR

Time to Crash (TTC): the time taken for the viewed surface to reach the camera's center of projection (COP). G and E_t are the spatial and temporal image brightness derivatives, respectively.

Relative entropy: the symmetric Jeffrey divergence J(p, q) is calculated between two image gray-level histograms p and q to arrive at the relative entropy. This metric measures how different the current scene is from a given image. (A code sketch of these metrics appears after the CONCLUSION below.)

AUTONOMOUS MAPPING

The joint probability distribution of distinct landmark measures gives a topological set of landmarks, based on the regional maxima of P_xy(X, Y), which are superimposed on the navigation path to give a Voronoi-based map of the environment, where links represent the collision-free path and nodes represent the left/right landmarks. X and Y are the image entropy and the Jeffrey divergence between consecutive images along the route, respectively; P_xy therefore represents distinctiveness. (See the landmark-selection sketch below.)

EXPERIMENTAL RESULTS

Results are shown for autonomous navigation in three complete corridors of Riggs Hall, Clemson University. The robot displays tropism, turning at corridor ends and continuing by searching for lights, guided by entropy.

CONCLUSION

- The resulting algorithm enables end-to-end navigation in unstructured indoor environments, with self-directed decision making at corridor ends, without the use of any prior information or a map.
- The system forms the basis of an autonomous mapping system using low-resolution metrics.
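The three metrics defined under END OF THE CORRIDOR can be illustrated in a few lines of NumPy. This is a minimal sketch, not the authors' implementation: it assumes the standard formulations — histogram entropy, Jeffrey divergence as the symmetrized KL divergence, and the gradient-based TTC estimate with radial gradient G = x·E_x + y·E_y — since the poster gives only the symbols G and E_t.

```python
import numpy as np

def image_entropy(img, bins=256):
    """Scalar entropy of a grayscale image's intensity histogram."""
    p, _ = np.histogram(img, bins=bins, range=(0, 256), density=True)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def jeffrey_divergence(img1, img2, bins=256, eps=1e-12):
    """Symmetric Jeffrey divergence J(p, q) between gray-level histograms."""
    p, _ = np.histogram(img1, bins=bins, range=(0, 256), density=True)
    q, _ = np.histogram(img2, bins=bins, range=(0, 256), density=True)
    p, q = p + eps, q + eps   # avoid log(0) on empty bins
    return np.sum(p * np.log(p / q) + q * np.log(q / p))

def time_to_crash(prev, curr):
    """Gradient-based TTC: least-squares fit of G/tau + E_t = 0 over all
    pixels, where G = x*E_x + y*E_y is the radial brightness gradient."""
    prev = np.asarray(prev, dtype=np.float64)
    curr = np.asarray(curr, dtype=np.float64)
    Ey, Ex = np.gradient(curr)            # spatial brightness derivatives
    Et = curr - prev                      # temporal brightness derivative
    h, w = curr.shape
    y, x = np.mgrid[0:h, 0:w]
    x, y = x - w / 2.0, y - h / 2.0       # coordinates about the image center
    G = x * Ex + y * Ey                   # the poster's G
    return -np.sum(G * G) / (np.sum(G * Et) + 1e-12)
```

On 32 × 24 frames each of these reduces to a few thousand arithmetic operations per frame, which is consistent with the very high frame rates reported in the abstract.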
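For the landmark-selection step described under AUTONOMOUS MAPPING, a plausible reading is: estimate P_xy(X, Y) from the (entropy, divergence) samples collected along the route and keep the frames whose measurements fall on a regional maximum. The smoothed-histogram estimator and the 3 × 3 maximum filter below are assumptions; the poster does not specify how P_xy or its regional maxima are computed.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def landmark_frames(X, Y, bins=16, sigma=1.0):
    """Indices of route frames whose (entropy, divergence) pair falls in
    a regional maximum of the estimated joint distribution P_xy(X, Y)."""
    X, Y = np.asarray(X), np.asarray(Y)
    P, xe, ye = np.histogram2d(X, Y, bins=bins, density=True)
    P = gaussian_filter(P, sigma)                       # smooth the estimate
    peaks = (P == maximum_filter(P, size=3)) & (P > 0)  # regional maxima
    # Map each frame's (X, Y) measurement back to its histogram bin
    xi = np.clip(np.digitize(X, xe) - 1, 0, bins - 1)
    yi = np.clip(np.digitize(Y, ye) - 1, 0, bins - 1)
    return np.nonzero(peaks[xi, yi])[0]
```

The frames returned here would become the nodes of the Voronoi-based map, with the traversed collision-free path supplying the links between them.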
[Flowchart: AUTONOMOUS DRIVING — "lights visible && entropy > H_low?" → Y: centering on ceiling lights, N: homing. DETECTING THE END OF THE CORRIDOR — "(Jeffrey divergence > J_th && time-to-collision < T_min) || entropy < H_low?". ACTION AT CORRIDOR END — branches on "lights visible?" and "entropy > H_high?"; the branch outcomes are not legible in the transcript.]
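Read as code, the flowchart amounts to the decision rule below. This is a hedged sketch: the threshold values (H_LOW, H_HIGH, J_TH, T_MIN), the lights_visible detector, and the exact maneuver performed at a corridor end are placeholders, not values from the poster.

```python
import numpy as np

def lights_visible(img, thresh=250):
    """Hypothetical detector: near-saturated pixels in the upper third
    of the low-resolution frame, where ceiling lights appear."""
    img = np.asarray(img)
    return bool(np.any(img[: img.shape[0] // 3] >= thresh))

def corridor_action(H, J, ttc, lights,
                    H_LOW=4.0, H_HIGH=5.0, J_TH=0.5, T_MIN=30.0):
    """One control decision per frame. H: image entropy; J: Jeffrey
    divergence between consecutive frames; ttc: time-to-collision;
    lights: output of a ceiling-light detector. All four thresholds
    are hypothetical placeholders."""
    # Detecting the end of the corridor
    if (J > J_TH and ttc < T_MIN) or H < H_LOW:
        # Action at corridor end: turn until lights or an open
        # (high-entropy) corridor come into view
        if lights or H > H_HIGH:
            return "enter_new_corridor"
        return "keep_turning"
    # Autonomous driving down the corridor
    if lights and H > H_LOW:
        return "center_on_ceiling_lights"
    return "home_toward_stored_image"
```

Each returned action string would map onto a steering command in the robot's drive loop.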