Environmental Robustness in Multi-agent Teams
Terence Soule and Robert B. Heckendorn
Presented at GECCO 09 (the Genetic and Evolutionary Computation Conference)


Introduction
The topic I am interested in for the project is the use of multi-agent teams in rescue missions during a disaster. This paper discusses several techniques for training autonomous agents to explore unknown environments, with emphasis on finding the training environment that best prepares the agents for the real world. Throughout the paper the following terms are used:
– Training – the environment where the agents learn.
– Testing – the "real world" where the agents put into practice what they learned in training.

Introduction
Some sample applications where these autonomous agents could be used include:
– Clearing landmines
– Search and rescue (e.g., Crandall Canyon)
– Environmental cleanup
– Mining and resource discovery
– Aircraft debris recovery
– Individual guides for evacuation and assessment

Approaches
Several algorithms that could be used for this kind of team building are mentioned; the authors chose a genetic algorithm. A main concern with this approach is the time required to train the agents, which forces the agents to learn in a simulated environment rather than the real world. This is why it is critical to find the simulated training environment that best prepares the agents for the real world.
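
To make the training-in-simulation idea concrete, here is a minimal, generic evolutionary training loop. The function names, population handling, and generation count are my own illustrative assumptions and are not taken from the paper.

    def evolve(initial_population, make_environment, evaluate, breed, generations=100):
        # Evolve a population of teams entirely in simulation.
        population = initial_population
        for _ in range(generations):
            env = make_environment()                    # fresh simulated training case
            scores = [evaluate(team, env) for team in population]
            population = breed(population, scores)      # selection + variation
        return population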

The Environment
The training world is a 45x45 grid in which 10 to 20 percent of the cells are labeled 'interesting'. Two types of agents are defined:
– Scouts are fast agents that locate interesting cells and mark them with a beacon. They can move two cells per time step.
– Investigators are slower than scouts but have good distance vision. They investigate interesting cells and mark them as investigated; if a cell contains a beacon, the beacon is deactivated. They can move one cell per time step.
Example roles for scouts and investigators include:
– Scouts locate and mark landmines; investigators deactivate them.
– Scouts locate interesting geological formations on a distant planet; investigators take soil samples.
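
As a rough sketch of this setup, the two agent types might be represented as below. The class names, movement clamping, and cell-state strings are my own assumptions for illustration, not the authors' simulator.

    class Agent:
        def __init__(self, x, y, speed):
            self.x, self.y, self.speed = x, y, speed

        def move(self, dx, dy):
            # Move at most 'speed' cells along each axis per time step.
            self.x += max(-self.speed, min(self.speed, dx))
            self.y += max(-self.speed, min(self.speed, dy))

    class Scout(Agent):
        def __init__(self, x, y):
            super().__init__(x, y, speed=2)        # fast: two cells per step

        def act(self, world):
            if world.get((self.x, self.y)) == 'interesting':
                world[(self.x, self.y)] = 'beacon'  # mark the cell with a beacon

    class Investigator(Agent):
        def __init__(self, x, y):
            super().__init__(x, y, speed=1)        # slow: one cell per step

        def act(self, world):
            if world.get((self.x, self.y)) in ('interesting', 'beacon'):
                world[(self.x, self.y)] = 'investigated'  # any beacon is deactivated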

The Environment
Using the 45x45 grid, the authors present three sample environments for their study:
– Random, where approximately 20% of the cells are interesting (see Figure 1).
– Clumped, where exactly 20% of the cells are interesting (see Figure 2).
– Linear, where exactly 10% of the cells are interesting (see Figure 3).
For each evaluation a new random case is generated, so the agents cannot memorize the layout of an environment.
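
A hedged sketch of how the Random case might be generated per evaluation (the Clumped and Linear layouts would need different generators); the helper name and the dictionary representation of the world are assumptions of mine.

    import random

    def random_environment(size=45, p_interesting=0.20):
        # Label roughly p_interesting of the cells 'interesting';
        # all other cells are left unmarked.
        world = {}
        for x in range(size):
            for y in range(size):
                if random.random() < p_interesting:
                    world[(x, y)] = 'interesting'
        return world

    # A fresh layout is drawn for every fitness evaluation, so agents
    # cannot memorize cell positions.
    world = random_environment()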

Fitness Function
The genetic algorithm requires a fitness function to measure the quality of agents. The functions are:
– 3B - 0.1b for scouts
– 3I - 0.1b for investigators
where:
– B = the number of beacons placed,
– I = the number of interesting cells investigated,
– b = the number of time steps spent outside the problem area.
In the Linear environment the fitness values are doubled to offset the effect of having half as many interesting cells. The authors note that if the boundary penalty is too high, some evolved agents simply sit still to avoid the penalty. Team fitness is the sum of the fitness values of the team members.
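
The fitness computation can be written directly from the formulas above. The function and parameter names below are my own; only the constants 3 and 0.1, the Linear doubling, and the team sum come from the paper.

    def scout_fitness(beacons_placed, steps_outside, linear=False):
        # 3B - 0.1b, doubled in the Linear environment.
        f = 3 * beacons_placed - 0.1 * steps_outside
        return 2 * f if linear else f

    def investigator_fitness(cells_investigated, steps_outside, linear=False):
        # 3I - 0.1b, doubled in the Linear environment.
        f = 3 * cells_investigated - 0.1 * steps_outside
        return 2 * f if linear else f

    def team_fitness(scout_stats, investigator_stats, linear=False):
        # Team fitness is the sum of the members' individual fitness values.
        return (sum(scout_fitness(b, s, linear) for b, s in scout_stats)
                + sum(investigator_fitness(i, s, linear) for i, s in investigator_stats))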

Training Algorithms
Experiments are run using three different training algorithms:
– Teams: each team is made up of 3 scouts and 3 investigators and is evolved as a unit. Teams tend to cooperate well, but individual members can become lazy.
– Islands: individual members are evolved separately and then combined into teams. This requires more evaluations than the team approach because of the focus on individuals, and it tends to produce highly fit individuals that may not cooperate well because their areas of expertise overlap.
– Orthogonal Evolution of Teams (OET): a hybrid of the other two approaches that alternates between treating the population as islands and as teams. In this paper, islands are used during the selection step and teams during the replacement step.
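
The sketch below shows one possible reading of an OET generation: parents are selected island-style (position by position, via three-member tournaments) and offspring replace existing members team-style (whole teams compete for survival). The operators, elitism, and bookkeeping are assumptions on my part, not the authors' implementation.

    import random

    def oet_generation(teams, member_fitness, team_fitness, crossover, mutate):
        # teams: list of teams; each team is a list of member genomes, where
        # position k always holds the same role (e.g. scout 1..3, investigator 1..3).
        team_size = len(teams[0])
        offspring = []
        for _ in range(len(teams)):
            child = []
            for k in range(team_size):
                # Island-style selection: parents for role k are drawn only from
                # position k across all teams, via three-member tournaments.
                island = [team[k] for team in teams]
                p1 = max(random.sample(island, 3), key=member_fitness)
                p2 = max(random.sample(island, 3), key=member_fitness)
                child.append(mutate(crossover(p1, p2)))
            offspring.append(child)
        # Team-style replacement: whole teams compete, and the best teams survive.
        survivors = sorted(teams + offspring, key=team_fitness, reverse=True)
        return survivors[:len(teams)]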

Algorithm Definitions
Some definitions that were not clear to me include:
– OET: in computer science, orthogonality means that modifying one component of a system does not cause side effects in another. A car example is given, in which accelerating does not interfere with other components such as steering.
– The definition of a three-member tournament.
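
For reference, a standard three-member (size-3) tournament simply draws three individuals at random and keeps the fittest as a parent; a minimal sketch, not necessarily the paper's exact variant:

    import random

    def tournament_select(population, fitness, k=3):
        # Draw k individuals uniformly at random and return the fittest one.
        return max(random.sample(population, k), key=fitness)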

Results
For each of the three training algorithms, teams were trained in each environment and then tested in the other two environments as well. The results are shown in Tables 2-5 (the ordering of the tables can be confusing). Guidelines for reading them:
– Values in parentheses are standard deviations.
– Bold values mark cases where the training and test environments are the same.
See Tables 2 through 5 for the full results.

Conclusions
The authors come to three main conclusions:
– Evolutionary techniques evolve teams that are robust across the given environments.
– The best results came from training in the Linear environment.
– The best teams were produced by the OET algorithm.