Dead-reckoning navigation for autonomous robots using a list of snapshots
Rijo Santhosh - Dept. of Engineering & Physics, Tarleton State University
Mentor: Dr. Mircea Agapie

ABSTRACT

The topic of machine learning is at the forefront of Artificial Intelligence and Robotics research today. There is biological evidence that, in order to perform navigation, organisms rely on two major mechanisms: (1) memorization of simplified representations of their environment, called snapshots, and (2) organization of the snapshots into an internal map. We propose a biologically inspired two-stage algorithm for autonomous navigation of a path. In the first stage, the robot is "taught" the path under human supervision: it captures a series of snapshots and directions along the way, memorizing them in a list. In the second stage, the robot operates autonomously, using the information in the list to navigate the same path. To avoid the pitfalls of classical dead reckoning, we implement simple self-correcting behaviors: the robot compares the memorized snapshots with the real-time image of the environment and moves so as to minimize the perceived error. The first phase of our algorithm was successfully tested using a simulated robot in a simulated environment.

INTRODUCTION

Robotic navigation is of major interest today in academia, government and private industry alike. To be useful, robots must be capable of autonomous navigation, and this requires learning. Artificial Intelligence recognizes two main classes of learning algorithms: supervised and unsupervised. We propose a two-stage algorithm that combines the two: first the robot is guided by teleoperation along the given path, and then it is able to autonomously navigate that path as many times as needed. Our work is also biologically inspired [3].
It is known that, in order to perform navigation, organisms rely on (at least) two major mechanisms: memorization of simplified representations of their environment, called snapshots [2], and organization of the snapshots into an internal map [1].

Dead reckoning (DR) is a well-known navigation algorithm: at each step, the robot estimates its current position from a previously determined position and its known speed, direction and elapsed time. If no feedback is taken from the environment, DR has a severe limitation: errors accumulate, sooner or later causing the robot to stray from the desired path. Our algorithm extends DR, using learning (supervised and unsupervised), snapshots and maps to correct the errors.

[Figure 1. Schematic of the AmigoBot]
[Figure 2. Placement of sonars]
[Figure 3. Sample path of the robot]
[Figure 4. List of directions, stored as a file]

REFERENCES

1. M. Mataric, Navigating with a rat brain: A neurobiologically-inspired model for robot spatial representation, in J.-A. Meyer and S. Wilson, eds., From Animals to Animats, Proc. 1st Internat. Conf. on Simulation of Adaptive Behavior, MIT Press.
2. T.S. Collett, Insect navigation en route to the goal: Multiple strategies for the use of landmarks, The Journal of Experimental Biology, 199.
3. M.O. Franz and H.A. Mallot, Biomimetic robot navigation, Robotics and Autonomous Systems, 30.

CONCLUSIONS and FUTURE WORK

This project is in progress. Only the first phase, supervised learning, has been implemented, with the robot storing the list of directions in PC memory. We have worked exclusively in a simulator, with no tests so far on the real AmigoBot.
The following steps will be taken to complete the project: storing sonar readings alongside angles and distances in the list; having the simulated robot navigate the path autonomously, first by dead reckoning alone, then using error-correcting behaviors; and finally implementing the entire algorithm on the real robot.

Future work will include:
1. Teaching the robot several paths (a repertoire), with the flexibility of choosing any of them for the autonomous navigation stage.
2. Developing an algorithm for returning home by "retracing the steps".
3. Avoiding obstacles that were not encountered in the supervised stage (dynamic environment).

C++ IMPLEMENTATION

The following function is responsible for turning the robot clockwise through a given angle (if the angle is negative, it turns counter-clockwise). It demonstrates how real-time feedback from the robot can be used to make sure that the desired motion has indeed completed:

void turnRight(int angle)
{
    robot.setDeltaHeading(-angle);  // ARIA API: positive delta turns left,
                                    // so negate for a clockwise turn
    for (ArUtil::sleep(3000);       // allow 3 sec. to complete the turn
         robot.getRotVel() != 0;    // is the robot still rotating?
         ArUtil::sleep(500))        // give it 0.5 sec. more
        ;
}

ROBOT AND SENSORS

We use the commercially available AmigoBot™ and the accompanying software tools from ActivMedia [5]. Eight sonars constitute the input sensors. The robot detects objects and estimates their distance by measuring the round-trip time of a ping signal, much like a bat. In our application only the front sonars (2 and 3 in Fig. 2) are used when moving the robot the desired distance. The robot communicates with a PC across a wireless network, with information packets sent back and forth every 100 ms. The navigation algorithm runs on the PC, while lower-level tasks (e.g. running the sonars) are left to the robot's processor.
SIMULATED ENVIRONMENT

A simulated room is shown in Fig. 3, with the robot starting from the upper-left corner (point A) in teleoperated mode and being led on a path with ten turns separated by variable distances. Fig. 4 shows a partial list of directions (angles and distances) that was generated automatically during this test and stored in a text file on the PC. The list is truncated, with the last 90-degree turn corresponding to the lower-right corner of the room (C). For ease of processing, angle and distance information is stored in separate nodes of the list. The human operator can issue consecutive turns, as well as consecutive forward motions. The robot is able to detect imminent collisions through the sonars and will stop and inform the operator when this happens, e.g. at point B in Fig. 3. This is implemented through the following C++ function, which makes use of the ARIA API:

void moveDesiredDistance(int distance)
{
    printf("%d", robot.getSonarRange(2));
    if ((robot.getSonarRange(2) < 500) &&
        (robot.getSonarRange(3) < 500)) {  // obstacle closer than 500 mm
        printf("\n Cannot proceed, obstacle in front");
        printf("\n sonar 2 = %d \t", robot.getSonarRange(2));
        printf("\n sonar 3 = %d \t", robot.getSonarRange(3));
        printf("theta %.2f\n", robot.getTh());
        printf("\n Please select other options");
    } else {
        robot.move(distance);  // move if no collision
        ArUtil::sleep(5000);   // allow 5 sec. for the move to complete
    }
}