Robot Intelligence Technology Lab.
Evolution of simple navigation (Chapter 4 of Evolutionary Robotics)
Jan. 12, 2007, YongDuk Kim

Contents
- Introduction
- Straight motion with obstacle avoidance
- Visually guided navigation
- Re-adaptation
  - Cross-platform adaptation
  - From simulation to reality
- Conclusions

1. Introduction
In this chapter, the evolution of simple behaviors is described.
- Navigation ability: the development of a suitable mapping from sensory information to motor actions.
- The closed feedback loop between sensory information and motor actions makes it rather difficult to design a stable control system for realistic situations.
- One solution: list all possible sensory situations and associate them with a set of predefined motor actions. This solution is not always viable, because environments are unknown and unpredictable.
- Artificial evolution: a smart controller obtained by exploiting the interactions between the robot and the environment.

2. Straight motion with obstacle avoidance
Braitenberg's vehicle [Braitenberg 1984]
- The robot morphology is symmetrical and it has two wheels.
- It is a conceptual robot whose wheels are directly linked to the sensors through weighted connections.
Hand-designed solution
- Even this simple design requires careful analysis of sensor and motor profiles, and important decisions concerning the direction of motion, its straightness, and its velocity.
- Different robots and different environments require different sets of carefully chosen values.
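The idea can be sketched as a direct weighted mapping from proximity sensors to wheel speeds; the sensor layout and weight values below are illustrative assumptions, not figures from the chapter:

```python
def braitenberg_step(sensors, base=0.5):
    """One control step of a Braitenberg-style obstacle avoider.

    sensors: 8 normalized proximity readings in [0, 1]
    (assumed layout: indices 0-3 face the left side, 4-7 the right side).
    Returns (left_speed, right_speed), each clamped to [-1, 1].
    """
    left_activity = sum(sensors[0:4])
    right_activity = sum(sensors[4:8])
    # Ipsilateral excitation: an obstacle on the left speeds up the left
    # wheel and slows the right one, turning the robot away from it.
    left_speed = base + 0.2 * left_activity - 0.2 * right_activity
    right_speed = base + 0.2 * right_activity - 0.2 * left_activity
    clamp = lambda v: max(-1.0, min(1.0, v))
    return clamp(left_speed), clamp(right_speed)
```

With no obstacle the robot drives straight at the base speed; higher activation on one side steers it toward the free side. Tuning `base` and the 0.2 gains per robot and environment is exactly the hand-design burden the slide describes.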

Evolutionary approach [Floreano and Mondada 1994]
- It could find a solution for straight navigation and obstacle avoidance without assuming prior knowledge about the sensors, motors, and environment.
- The goal was to evolve a control system capable of maximizing forward motion while accurately avoiding all obstacles on its way.
The fitness function:
  Φ = V (1 − √Δv) (1 − i)
where V is the sum of the rotation speeds of the two wheels, Δv is the absolute value of the algebraic difference between the signed speed values of the wheels, and i is the normalized activation value of the infrared sensor with the highest activity.
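A minimal sketch of this per-timestep fitness computation, assuming wheel speeds normalized to [-1, 1]; the averaging used for V and Δv is an assumption chosen so that Φ peaks at 1.0 (matching the slide's note that 1.0 corresponds to straight motion at maximum speed):

```python
import math

def fitness_step(v_left, v_right, ir_sensors):
    """Phi = V * (1 - sqrt(dv)) * (1 - i) for one control step.

    v_left, v_right: signed, normalized wheel speeds in [-1, 1].
    ir_sensors: normalized infrared activations in [0, 1].
    """
    V = abs(v_left + v_right) / 2.0   # motion component, in [0, 1]
    dv = abs(v_left - v_right) / 2.0  # straightness penalty, in [0, 1]
    i = max(ir_sensors)               # proximity of the nearest obstacle
    return V * (1.0 - math.sqrt(dv)) * (1.0 - i)
```

Each factor zeroes out exactly one failure mode: standing still, spinning in place, or hugging an obstacle all yield Φ = 0.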

These three components encourage, respectively, motion, straight displacement, and obstacle avoidance, but do not say in which direction the robot should move.
The control system
- A neural network with one layer of synaptic weights from the eight infrared sensors to the two motor units.
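A minimal sketch of such a single-layer controller; the tanh squashing function is an illustrative assumption, and the genetic encoding of the weights is not shown:

```python
import math

def perceptron_controller(weights, biases, ir_sensors):
    """Single-layer network: 8 IR inputs fully connected to 2 motor units.

    weights: 2x8 list of synaptic weights (one row per motor unit).
    biases: two bias values.
    ir_sensors: 8 normalized infrared activations in [0, 1].
    Returns two wheel speeds squashed into [-1, 1].
    """
    speeds = []
    for w_row, b in zip(weights, biases):
        activation = sum(w * s for w, s in zip(w_row, ir_sensors)) + b
        speeds.append(math.tanh(activation))  # illustrative squashing
    return speeds
```

Under this sketch, the 16 weights and 2 biases would form the genotype that evolution tunes against the fitness function above, with no hand-chosen sensor-to-motor wiring.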

- Each generation took approximately 40 minutes.
- Although the fitness indicators kept growing for 100 generations, around the 50th generation the best individuals already exhibited smooth navigation around the maze.
- A fitness value of 1.0 could have been achieved only by a robot moving straight at maximum speed in an open space.
- In the experiments, 0.3 was the maximum value attained by the evolutionary controller, even when evolution was continued for a further 100 generations.

Values of the fitness components [figure]

3. Visually guided navigation
It is very likely that by the end of this decade almost all autonomous robots will employ vision as a primary sensory system.
Mainstream approach to vision processing [Marr 1982]
- Based on preprocessing, segmentation, and pattern recognition.
- Not viable for systems that must respond very quickly in partially unpredictable environments.
A drastically new approach
- Takes into account the ecological aspects of vision-based behavior and its integration with the motor system.
- There have been only a few efforts in this direction [Horswill 1993; Marjanovic et al. 1996].

Evolutionary robotics provides an ideal framework:
- It allows the development of visual processing along with motor processing in a closed feedback loop.
- It relies less on externally imposed assumptions.
- It allows simultaneous exploration of both controllers and sensor morphologies.
Ecological vision is going to be a very fertile area for evolutionary robotics over the next years.

Gantry robot [Harvey 1992a, 1992b, 1993]

- The visual input is considerably reduced by sampling only a small part of the image according to genetically specified instructions.
- The neural networks have a fixed number of inputs and outputs.
- Artificial evolution was carried out in three stages of increasing behavioral complexity.
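The image-sampling idea can be sketched as follows; the (row, col, size) patch encoding is a hypothetical stand-in for the genetically specified sampling instructions used on the gantry robot:

```python
def sample_visual_input(image, patches):
    """Reduce a grayscale image to a few genetically specified samples.

    image: 2D list of pixel intensities in [0, 1].
    patches: list of (row, col, size) triples taken from the genotype
    (hypothetical encoding); each patch is averaged into one input value.
    """
    inputs = []
    for r, c, size in patches:
        total, count = 0.0, 0
        for i in range(r, min(r + size, len(image))):
            for j in range(c, min(c + size, len(image[0]))):
                total += image[i][j]
                count += 1
        inputs.append(total / count if count else 0.0)
    return inputs
```

Because the patch positions are part of the genotype, evolution explores which parts of the image to look at together with how to map those samples to motor outputs, which keeps the network's input count fixed and small.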

4. Re-adaptation
- The price to pay for the automatic process of adaptation described above is the amount of time required by evolution carried out entirely on physical robots.
- The question then is to what extent, and at what speed, an evolutionary system can generalize and/or re-adapt to modified environmental conditions without being retrained from scratch.
Cross-platform adaptation [Floreano and Mondada 1998]
- In some cases, it might be desirable to continue evolution on the new robot incrementally.
- From the point of view of the neurocontroller, changing the sensorimotor characteristics of the robot is just another way of modifying the environment.

Incremental evolution still requires quite a lot of research in order to accommodate more complex sensorimotor systems, the acquisition of new skills, and the modification of old ones.

From simulation to reality
- Simulations can provide a valuable aid to evolutionary robotics as long as they are coupled with tests on physical robots.
- Transferring an evolved controller from the simulated to the real robot is very likely to generate discrepancies in the behavior and performance of the robot, caused by different properties of the sensorimotor interactions between the robot and its environment.
Experiments [Miglino et al., 1995]
- No-noise condition: no noise added.
- Noise condition: uniform white noise added to the simulated sensors.
- Conservative-noise condition: the sensory values were read as if the robot had been displaced by a small random quantity along the x and y coordinates.
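The three conditions can be sketched as follows; `sample_at` is a hypothetical simulator hook, and the noise magnitude is an assumption rather than the value used by Miglino et al.:

```python
import random

def noisy_reading(sample_at, x, y, condition, noise=0.05, rng=random):
    """Simulated sensor reading under the three noise conditions.

    sample_at(x, y): returns the noise-free normalized sensor vector the
    simulator would produce at pose (x, y) -- a hypothetical hook.
    noise: illustrative magnitude, not the value from the original study.
    """
    if condition == "no-noise":
        return sample_at(x, y)
    if condition == "noise":
        # Uniform white noise added independently to each sensor value.
        return [min(1.0, max(0.0, v + rng.uniform(-noise, noise)))
                for v in sample_at(x, y)]
    if condition == "conservative":
        # Read the sensors as if the robot were displaced by a small random
        # amount along x and y, so the noisy readings remain physically
        # consistent with some nearby pose.
        return sample_at(x + rng.uniform(-noise, noise),
                         y + rng.uniform(-noise, noise))
    raise ValueError(condition)
```

The conservative condition differs from plain white noise in that the perturbed readings still correspond to a real robot pose, which is one way to keep controllers from exploiting simulator-only regularities.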

Environments [figure]

5. Conclusions
- Some examples of artificial evolution applied to simple navigation tasks were presented.
- The results indicate that artificial evolution can be fruitfully applied even to tasks where a preprogrammed strategy already exists, or to tasks that are apparently simple.
- Even slight modifications to the environment of an evolved individual are likely to cause a drop in performance, but performance can be rapidly recovered.