
NEW TIES WP2 Agent and learning mechanisms

Decision making and learning
Agents have a controller (decision tree, DQT)
- Input: situation (as perceived = seen/heard/interpreted)
- Output: action
Decision making = using the DQT
Learning = modifying the DQT
Decisions also depend on inheritable "attitude genes" (learned through evolution)
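To make the controller concrete, here is a minimal sketch of a DQT with the three node types shown on the example slide (bias, test, action). All class names, action names and the example tree are illustrative, not taken from the NEW TIES code base.

```python
import random

class ActionNode:
    """Leaf: returns a concrete action such as MOVE or EAT."""
    def __init__(self, action):
        self.action = action
    def decide(self, situation):
        return self.action

class TestNode:
    """Tests a perceived concept (e.g. see_food) and follows the YES/NO branch."""
    def __init__(self, concept, yes_child, no_child):
        self.concept, self.yes_child, self.no_child = concept, yes_child, no_child
    def decide(self, situation):
        child = self.yes_child if situation.get(self.concept, False) else self.no_child
        return child.decide(situation)

class BiasNode:
    """Chooses one child stochastically according to per-edge probabilities."""
    def __init__(self, children, probabilities):
        self.children, self.probabilities = children, probabilities
    def decide(self, situation):
        child = random.choices(self.children, weights=self.probabilities, k=1)[0]
        return child.decide(situation)

# Decision making = one walk from the root to an action leaf,
# driven by the currently perceived situation.
dqt = TestNode("see_food",
               yes_child=ActionNode("PICKUP"),
               no_child=BiasNode([ActionNode("MOVE"), ActionNode("TURN LEFT")], [0.7, 0.3]))
print(dqt.decide({"see_food": False}))   # e.g. MOVE or TURN LEFT
```

In this picture, learning means changing the tree structure or the edge probabilities, which is exactly the split made on the following slides.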

Example of a DQT
[Figure: example DQT with bias (B), test (T) and action (A) nodes. Tests such as VISUAL: FRONT FOOD REACHABLE and BAG: FOOD branch on YES/NO to actions such as TURN LEFT, TURN RIGHT, MOVE, PICKUP and EAT; edges carry genetic biases (e.g. 0.2, 0.5, 1.0).]

Interaction of evolution & individual learning
Bias node with n children, each with bias b_x
Bias ≠ probability
- Bias b_x is learned and changes over time (name: learned bias)
- Genetic bias g_x is inherited, part of the genome, constant
Actual probability of choosing child x: p(b_x, g_x) = b_x + (1 - b_x) · g_x
Learned and inherited behaviour are linked through this formula
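A small sketch of how this combination can be applied at a bias node. The slide only gives the per-child formula p(b_x, g_x) = b_x + (1 - b_x) · g_x, so the renormalisation over all children below is an assumption made to obtain a proper probability distribution.

```python
def choice_probabilities(learned_biases, genetic_biases):
    """Combine learned bias b and genetic bias g per child as b + (1 - b) * g,
    then normalise over the children (the normalisation is an assumption;
    the slide only gives the combination formula)."""
    raw = [b + (1.0 - b) * g for b, g in zip(learned_biases, genetic_biases)]
    total = sum(raw)
    return [p / total for p in raw]

# Example: a bias node with two children.
print(choice_probabilities([0.2, 0.5], [0.5, 0.1]))
```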

DQT nodes & parameters (cont'd)
Test node language: native concepts + emerging concepts
- Native: see_agent, see_mother, see_food, have_food, see_mate, …
- New concepts can emerge by categorisation (discrimination game)
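One way to picture the test-node language is that native and emergent concepts share the same interface, a predicate over the perceived situation, so either kind can appear in a test node. The feature names and the thresholded categoriser below are purely illustrative; the discrimination game that would learn such a category is not shown.

```python
# Minimal sketch (illustrative names): a concept is a predicate over the situation.
native_concepts = {
    "see_food":  lambda s: s.get("food_in_view", 0) > 0,
    "have_food": lambda s: s.get("bag_food", 0) > 0,
}

def make_emergent_concept(feature, threshold):
    """A category formed by a discrimination game: splits a perceptual feature
    at a learned threshold (the threshold learning itself is not shown here)."""
    return lambda s: s.get(feature, 0.0) > threshold

concepts = dict(native_concepts)
concepts["category_17"] = make_emergent_concept("plant_colour", 0.6)  # hypothetical
print(concepts["category_17"]({"plant_colour": 0.8}))  # True
```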

Learning: the heart of the emergence engine
Evolutionary learning:
- not within an agent (not during its lifetime), but over generations
- by variation + selection
Individual learning:
- within one agent, during its lifetime
- by reinforcement learning
Social learning:
- during lifetime, in interacting agents
- by sending/receiving + adopting knowledge pieces

Types of learning: properties
Evolutionary learning:
- Agent does not create new knowledge during its lifetime
- Basic DQT + genetic biases are inheritable
- "Knowledge creator" = crossover and mutation
Individual learning:
- Agent does create new knowledge during its lifetime
- DQT + learned biases are modified
- "Knowledge creator" = reinforcement learning (driven by rewards)
- Individually learnt knowledge dies with its host agent
Social learning:
- Agent imports knowledge already created elsewhere (new? not new?)
- Adoption of imported knowledge ≈ crossover
- Importing knowledge pieces
  - can save effort for the recipient
  - can create novel combinations
- Exporting knowledge helps its preservation after the death of its host
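For individual learning the slides only state "reinforcement learning driven by rewards"; the update rule below is therefore just an illustrative sketch of how a reward could shift the learned bias on the edge that was chosen, and both the rule and its parameters are assumptions.

```python
def reinforce(learned_biases, chosen_index, reward, learning_rate=0.1):
    """Illustrative reward-driven update of the learned bias on the chosen edge.
    Positive reward pulls the chosen bias towards 1, negative reward towards 0."""
    b = learned_biases[chosen_index]
    target = 1.0 if reward > 0 else 0.0
    learned_biases[chosen_index] = b + learning_rate * (target - b)
    return learned_biases

print(reinforce([0.2, 0.5], chosen_index=0, reward=+1))  # chosen bias moves up
```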

Present status of the types of learning
Evolutionary learning:
- Demonstrated in 2 NT scenarios
- Autonomous selection/reproduction causes problems with population stability (implosion/explosion)
Individual learning:
- Code exists, but has never been demonstrated in NT scenarios
Social learning:
- Under construction/design, based on the "telepathy" approach
- Communication protocols + adoption mechanisms are needed

Evolution: variation operators
Operators for the DQT:
- Crossover = subtree swap
- Mutation =
  - substitute a subtree with a random subtree
  - change concepts in test nodes
  - change the bias on an edge
Operators for attitude genes:
- Crossover = full arithmetic crossover
- Mutation =
  - add Gaussian noise
  - replace with a random value
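A sketch of the attitude-gene operators named above. The averaging weight in the arithmetic crossover and the mutation parameters are assumptions (the slide only names the operators), and the DQT subtree-swap crossover is analogous but omitted here.

```python
import random

def arithmetic_crossover(genes_a, genes_b):
    """Full arithmetic crossover on attitude genes: child = average of the parents
    (equal weighting is an assumption)."""
    return [(a + b) / 2.0 for a, b in zip(genes_a, genes_b)]

def mutate_genes(genes, sigma=0.05, p_reset=0.01):
    """Mutation of attitude genes: add Gaussian noise, or replace with a random value."""
    out = []
    for g in genes:
        if random.random() < p_reset:
            out.append(random.random())                                  # random reset
        else:
            out.append(min(1.0, max(0.0, g + random.gauss(0.0, sigma)))) # Gaussian noise
    return out

print(mutate_genes(arithmetic_crossover([0.2, 0.8], [0.6, 0.4])))
```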

Evolution: selection operators
Mate selection:
- Mate action chosen by the DQT
- Propose – accept proposal
- Adulthood required
Survivor selection:
- Dead if too old (≥ 80 years)
- Dead if energy reaches zero
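Survivor selection as stated on the slide reduces to a simple predicate; the age limit and the zero-energy condition come directly from the slide, the function itself is only a sketch.

```python
MAX_AGE = 80  # years, from the slide

def survives(agent_age, agent_energy):
    """An agent dies when it is too old (>= 80 years) or when its energy reaches zero."""
    return agent_age < MAX_AGE and agent_energy > 0

print(survives(agent_age=81, agent_energy=50))  # False: too old
print(survives(agent_age=30, agent_energy=0))   # False: no energy left
```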

Experiment: Simple world
Setup: Environment
- World size: 200 x 200 grid cells
- Agents and food only (no tokens, roads, etc.); both are variable in number
- Initial distribution of agents (500): in the upper-left corner
- Initial distribution of food (10000): 5000 in the upper-left and 5000 in the lower-right corner
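An illustrative encoding of this setup. The counts and corners come from the slide; the parameter names and the size of the "corner" regions are assumptions.

```python
import random

# Illustrative encoding of the Simple world setup (parameter names are assumptions).
SIMPLE_WORLD = {
    "grid_size": (200, 200),
    "n_agents": 500,    # all start in the upper-left corner
    "n_food": 10000,    # 5000 upper-left corner, 5000 lower-right corner
}

def corner_positions(n, corner, size=20, grid=(200, 200)):
    """Random cells inside a size x size corner region of the grid
    (the size of the corner region is not given on the slide and is assumed here)."""
    x0 = 0 if corner == "upper-left" else grid[0] - size
    y0 = 0 if corner == "upper-left" else grid[1] - size
    return [(x0 + random.randrange(size), y0 + random.randrange(size)) for _ in range(n)]

agents = corner_positions(SIMPLE_WORLD["n_agents"], "upper-left")
food = corner_positions(5000, "upper-left") + corner_positions(5000, "lower-right")
```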

Experiment: Simple world
Setup: Agents
Native knowledge (concepts and DQT subtrees):
- Navigating (random walk)
- Eating (identify, pick up and eat plants)
- Mating (identify mates, propose/agree)
Random DQT branches:
- Differ per agent
- Based on the "pool" of native concepts

Experiment: Simple world
The simulation was run for 3 months of real time to test stability

Experiment: Poisonous Food
Setup: Environment
- Two types of food: poisonous (decreases energy) and edible (increases energy)
- World size: 200 x 200 grid cells
- Agents and food only (no tokens, roads, etc.); both are variable in number
- Initial distribution of agents (500): uniform random over the grid
- Initial distribution of food (10000): 5000 of each type, uniform random over the same grid space as the agents
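A tiny sketch of the two food types; the slide gives only the signs of the energy effects, so the magnitudes below are assumptions.

```python
# Energy effect per food type (signs from the slide, magnitudes assumed).
FOOD_ENERGY = {
    "edible":    +10,   # increases the agent's energy
    "poisonous": -10,   # decreases the agent's energy
}

def eat(agent_energy, food_type):
    """Apply the energy effect of eating one plant of the given type."""
    return agent_energy + FOOD_ENERGY[food_type]

print(eat(100, "poisonous"))  # 90
```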

Experiment: Poisonous Food
Setup: Agents
Native knowledge:
- Identical to the Simple world experiment
Additional native knowledge:
- Agents can distinguish poisonous from edible plants
- The relation of this distinction to eating/picking up is not built in
No random DQT branches

Experiment: Poisonous Food
Measures:
- Population size
- Welfare (energy)
- Number of poisonous and edible plants
- Complexity of the controller (number of nodes)
- Age

Experiment: Poisonous Food Demo

Experiment: Poisonous Food Results