11.3: Movement of Ants (plus more Matlab, plus Artificial Life)

Movement of Ants
Ant movement complicates the spread-of-fire model in a number of ways:
1. Each ant has an orientation (N, S, E, W).
2. Each cell contains an ant, a quantity of pheromone (a chemical deposited by ants and attractive to them), both, or neither.
3. An ant cannot move into a cell occupied by another ant.

Ordered Pair Representation
We can represent ant presence/absence/orientation using one number: 0 = no ant; 1 = E, 2 = N, 3 = W, 4 = S.
Another number can represent the concentration of pheromone, from zero to some maximum (e.g., 5).
The book suggests using an ordered pair (like a Cartesian coordinate) to combine these; e.g., (1, 3) = an east-facing ant in a cell with a 3/5 concentration of pheromone.

Single-Number Representation
Matlab prefers to have a single number in each cell, so we can use a two-digit number to represent an ordered pair: (3, 5) becomes 35; (0, 2) becomes 2; etc.
grid = 10*ant + pheromone;
ant = fix(grid/10);        % fix keeps the integer part
pheromone = mod(grid, 10);
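As a quick round-trip check of this encoding (a minimal sketch; the variable name is illustrative, not from the course code):

code = 10*3 + 5;      % west-facing ant (3) with pheromone level 5 -> 35
fix(code/10)          % recovers 3, the ant code
mod(code, 10)         % recovers 5, the pheromone level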

Matrices and Indexing
We've seen how to perform operations on an entire matrix at once: grid(rand(n)<probTree) = 1, etc. What if we want to operate on individual rows, columns, and elements?
grid(i, j) accesses row i, column j of grid.
grid(i, :) accesses all columns of row i.
grid(:, j) accesses all rows of column j.

Matrices and Indexing: Examples
grid(2, 3) = 1;   % a tree grows at 2,3
grid(1, :) = 2;   % whole top row on fire

Ranges and Indexing
But we're supposed to start with a gradient strip of pheromone – a range like 0, 1, 2, 3, 4, 5. In Matlab we can simply say 1:5.
grid(3, 4:8) = 1:5;
For a vertical "strip", we transpose the range:
grid(2:6, 4) = (1:5)';
Let's put an arbitrarily long horizontal strip of gradient in an arbitrary row....

Initializing the Gradient
phergrid = zeros(n);          % NxN pheromone grid
row = fix(rand*n)+1;          % indices start at 1
len = fix(rand*n)+1;          % length of gradient strip
col = fix(rand*(n-len))+1;    % starting column
phergrid(row, col:col+len-1) = 1:len;   % help me Obi-wan!

Initializing the Ants
For trees we simply did:
probTree = 0.2;
grid = zeros(n);
grid(rand(n) < probTree) = 1;
But here we want several possible values for each ant, so we can set up a grid full of ant values, then zero it out where appropriate....

Initializing the Ants
probAnt = 0.2;
antgrid = fix(4*rand(n))+1;       % 4 directions
antgrid(rand(n) > probAnt) = 0;
Putting it all together:
grid = 10*antgrid + phergrid;

Updating the Grid
An ant turns in the direction of the neighboring cell with the greatest amount of pheromone (in a tie, pick one at random), then walks in that direction.
If there's no ant in a cell, the pheromone decreases by 1 at each time step, with a minimum value of 0.
If an ant leaves a cell, the amount of pheromone increases by 1 (the ant "deposits" pheromone).
So long as there is an ant in a cell, the amount of pheromone in the cell stays constant.
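A minimal Matlab sketch of just the pheromone part of this update, assuming the antgrid and phergrid arrays from the earlier slides (the turning-and-walking step needs collision handling and is left out):

empty = (antgrid == 0);                          % cells with no ant
phergrid(empty) = max(phergrid(empty) - 1, 0);   % evaporate by 1, floor at 0
% a cell an ant just left would instead get phergrid + 1 (capped at the maximum);
% occupied cells keep their current pheromone level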

Avoiding Collisions
For an ant facing in a given direction and about to walk in that direction, there are three potential ants in other cells that it could collide with.
For example, if I ("me") am an ant facing North, the cell N directly ahead of me could also be entered by ants in the cells NW, NE, and NN (two cells north of me).

Avoiding Collisions
The same analysis applies to the other three directions (S, E, W), so each cell has a potential collision with 12 others.
As a first approximation, we can ignore collisions: e.g., the cell is occupied by the last ant to move there, and the others go away (maybe replaced by new ones being born).
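A minimal sketch of the simpler "check before you move" approach in Matlab (r and c are hypothetical indices of a north-facing ant; boundary checks and simultaneous moves are ignored):

% move the ant at (r, c) one step north only if the target cell is free
if r > 1 && antgrid(r-1, c) == 0
    antgrid(r-1, c) = antgrid(r, c);   % the ant occupies the cell to the north
    antgrid(r, c) = 0;                 % its old cell is now empty
end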

The Big Picture: Self-Organization & Spontaneous Orders
By themselves, the rules for the movement of ants aren't terribly interesting. What interests scientists is the spontaneous orders and self-organizing behaviors that emerge from such simple systems of rules.
This is a profound idea that shows up in biology, economics, and the social sciences.

Spontaneous Orders in Markets
Every individual...generally, indeed, neither intends to promote the public interest, nor knows how much he is promoting it. By preferring the support of domestic to that of foreign industry he intends only his own security; and by directing that industry in such a manner as its produce may be of the greatest value, he intends only his own gain, and he is in this, as in many other cases, led by an invisible hand to promote an end which was no part of his intention.
− Adam Smith (1723-1790), The Wealth of Nations

Termite Nest-Building

Bird Flocks

Boids: Artificial Birds
Flocks of birds appear to be moving as a coherent whole, following a leader & avoiding obstacles.
Can this global behavior instead be emergent from the local behavior of individual birds?
Boids (Reynolds 1986): each "boid" follows three simple rules.

Boids: Artificial Birds
Separation: steer to avoid crowding local flockmates
Alignment: steer towards the average heading of local flockmates
Cohesion: steer to move toward the average position of local flockmates
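A minimal Matlab sketch of the three steering terms for one boid, assuming pos and vel hold the positions and velocities (each m-by-2) of its local flockmates, and p and v (each 1-by-2) are its own position and velocity; the weights are arbitrary illustrative values, not Reynolds's exact formulation:

diffs = bsxfun(@minus, p, pos);    % vector from each flockmate to this boid
sep = sum(diffs, 1);               % separation: push away from nearby boids
ali = mean(vel, 1) - v;            % alignment: match the average heading
coh = mean(pos, 1) - p;            % cohesion: move toward the average position
v = v + 0.03*sep + 0.05*ali + 0.01*coh;   % weighted sum of the three rules
p = p + v;                         % step the boid forward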

Evolution of Cooperation
Consider the Prisoner's Dilemma game, where you and I are arrested for committing a crime.
If I defect (rat you out) and you cooperate with me (keep quiet), you get 10 years and I go free.
If I cooperate and you defect, I get 10 years and you go free.
If we both cooperate, we each get six months.
If we both defect, we both get five years.
What is the best strategy?

Evolution of Cooperation
The best strategy for me is to defect (same for you), assuming you are equally likely to cooperate or defect:
If I defect, the expected value (average value) of my punishment is 2.5 years (0 if you cooperate, 5 if you defect).
If I cooperate, the expected value of my punishment is 5.25 years (6 months if you cooperate, 10 years if you defect).
But what if we repeat this game over and over, allowing each of us to remember what others did in previous iterations (repetitions)?
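A quick check of those expected values in Matlab (assuming, as above, a 50/50 chance that the other player cooperates or defects):

mean([0 5])       % if I defect:    (0 + 5)/2    = 2.5 years
mean([0.5 10])    % if I cooperate: (0.5 + 10)/2 = 5.25 years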

Iterated Prisoner's Dilemma
Axelrod (1981/1984): held a simulated tournament among various PD strategies submitted by contestants.
Strategies could be arbitrarily simple (always defect, always cooperate) or complicated (keep track of the other player's last five moves, then try to predict what he'll do next time, etc.).
Amazingly, the winning strategy was simple tit for tat (quid pro quo): always cooperate with someone the first time; subsequently, do what he did on your previous encounter with him.
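A minimal sketch of tit for tat as a Matlab function (saved in its own file titForTat.m; the 1 = cooperate, 0 = defect encoding and the function name are illustrative assumptions):

function move = titForTat(opponentHistory)
% TITFORTAT cooperate on the first round, then copy the opponent's last move
% opponentHistory: vector of the opponent's previous moves (1 = cooperate, 0 = defect)
if isempty(opponentHistory)
    move = 1;                      % be nice: cooperate the first time
else
    move = opponentHistory(end);   % then mirror the opponent's last move
end
end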

Iterated Prisoner's Dilemma: (Artificial) Life Lessons?
In general, most successful strategies followed four rules:
Be nice: don't be the first to defect
Be provocable (don't be a sucker)
Don't be envious: don't strive for a payoff greater than the other player's
Don't be too clever (KISS principle)

The Bad News: People Are Naturally Envious
Ultimatum game: a psychology experiment with human subjects (Güth et al. 1982).
Subject A is given $10 and told to share some of it (in whole dollars) with subject B, where B knows how much A was given.
Optimal for A is to give B $1 and keep $9.
Typically, A will offer $3, and B will refuse to accept anything less (!)

The Good News: TFT is an Evolutionarily Stable Strategy
Q: What happens when we introduce a "cheater" (always defects) into a population of TFT players?
A: The cheater initially gains some points by exploiting the TFT players' niceness, but is soon overwhelmed by subsequent TFT retribution.
So TFT is an evolutionarily stable strategy (Maynard Smith 1982).

Evolution of Communication
What is communication?
"Communication is the phenomenon of one organism producing a signal that when responded to by another organism confers some advantage (or the statistical probability of it) to the signaler or his group." ─ G. Burghardt (1970)
How does a community come to share a common system of communication (language)?

Evolution of Communication
MacLennan (1990): simulated communication by a simple matching game: each "simorg" (simulated organism) has a "private" situation that it wants to describe to others.

MacLennan (1990): Simulating Communication by a Simple Matching Game
To communicate a situation, the individual looks up its current situation in a table and emits a symbol into the shared environment.
Each individual then uses its own table to convert the shared symbol back into a guess about the private situation of the emitter.
Whenever an individual matches the emitter's situation, it and the emitter get a fitness point.
Individuals with the highest fitness get to survive.
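A minimal Matlab sketch of one round of this matching game (the table sizes, variable names, and random initialization are illustrative assumptions, not MacLennan's actual parameters):

nSituations = 8; nSymbols = 8; nSimorgs = 10;
emitTable  = randi(nSymbols, nSimorgs, nSituations);   % situation -> symbol, per simorg
guessTable = randi(nSituations, nSimorgs, nSymbols);   % symbol -> situation, per simorg
fitness    = zeros(1, nSimorgs);

emitter   = randi(nSimorgs);
situation = randi(nSituations);               % the emitter's private situation
symbol    = emitTable(emitter, situation);    % emitted into the shared environment
for k = 1:nSimorgs
    if k ~= emitter && guessTable(k, symbol) == situation
        fitness([k emitter]) = fitness([k emitter]) + 1;   % both get a fitness point
    end
end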

Evolution of Communication: Results, Fitness
[Plot: fitness vs. iteration]

Evolution of Communication: Results, Denotation Matrix
First iteration: random association of symbols with situations

Evolution of Communication: Results, Denotation Matrix
Last iteration: systematic association of symbols with situations

[Figure annotations on the denotation matrix: synonyms; homonyms]

Quantifying Denotation (Dis)order
Claude Shannon (1916-2001)
Shannon Information Entropy quantifies (in number of bits) the amount of disorder in a distribution: H = −Σ_k p_k log2(p_k), where p_k is the probability of event (situation) k.
Examples:
p = [0.25, 0.25, 0.25, 0.25], H = 2.0
p = [0.95, 0.025, 0.0125, 0.0125], H = 0.36
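These example values can be checked directly in Matlab (a quick sketch, not from the course code):

H = @(p) -sum(p .* log2(p));       % Shannon entropy in bits
H([0.25 0.25 0.25 0.25])           % returns 2.0
H([0.95 0.025 0.0125 0.0125])      % returns roughly 0.36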

Choosing the Next Generation
Fitness-Proportionate Selection: use our biased roulette wheel to favor individuals with higher fitness, without ruling out selection of low-fitness individuals.
[Diagram: a roulette wheel whose slices are the individuals' normalized fitnesses, e.g. i1 = 60%, with the remaining individuals at 22%, 10%, and 8%]
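A minimal Matlab sketch of roulette-wheel selection using the normalized fitnesses from the diagram (the exact mapping of percentages to individuals is an assumption):

fitness = [0.60 0.22 0.10 0.08];        % normalized fitnesses of four individuals
wheel   = cumsum(fitness);              % cumulative slices: 0.60 0.82 0.92 1.00
nPicks  = 10;
picks   = zeros(1, nPicks);
for k = 1:nPicks
    picks(k) = find(rand <= wheel, 1);  % spin: first slice that contains rand
end
picks                                   % mostly 1s, with occasional 2s, 3s, 4s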

Iterating Over Generations
A highly expressive language like Matlab may allow us to avoid explicit iteration (looping), via operators like sum, >, etc.
But when the current generation (population) depends on the previous one, we say that the model is inherently iterative. For such models we use an explicit loop, typically a for loop....

Original Population Model in Matlab
% growth rate
k = 0.1;
% initial population
P0 = 100;
P = zeros(1, 20);
P(1) = P0;
% iterate
for t = 2:20
    P(t) = P(t-1) + k*P(t-1);
end
% analytical solution
Pa = P0 * exp(k*[1:20]);
% overlay plots
plot(P)
hold on
plot(Pa, 'r')   % red

Modeling & Simulation: Conclusions
Cellular automata and related simulations can be a powerful and exciting way of exploring phenomena for which an actual experiment is too difficult or costly.
But one must be careful not to "build in" the very behavior that one claims is emergent. One must also be careful not to over-interpret the results.

Potential Projects
Implement collision avoidance in the ants simulation.
Implement Conway's Game of Life in the init and update functions.
Implement MacLennan's evolution of communication algorithm.