
Prospects (and perils) of modeling

“Idea” models:
–Show that a proposed mechanism for a phenomenon is plausible
–Explore general mechanisms underlying behavior
–Explore the effects of topology, parameters, etc. on behavior

Examples:
–“What are the logical requirements for self-reproduction?” (Von Neumann, 1966)
–“Under what circumstances will population growth be predictable?” (May, 1976)
–“Does distributing a population of Prisoner’s Dilemma agents in space cause them to cooperate without requiring iterated games?” (Nowak and May, 1992)

Advantages of idea models
Notion of an “intuition pump” (Dennett, 1984):
–Allows exploration and understanding of the core ideas of highly complicated systems
–Allows the possibility of mathematical analysis
–Provides inspiration for new kinds of technology and computing methods

Technique → Originating idea model
–Programmable computers → Turing’s “Turing machine” model of human mathematical thought
–Cellular automata → Von Neumann’s self-reproducing automaton (1966)
–Neural networks → McCulloch & Pitts’s “logical calculus” of neural activity (1943)
–Genetic algorithms → Holland’s model of adaptation (1975)
–Artificial immune systems → minimal models of the immune system (Farmer, Packard, and Perelson, 1986; Forrest et al., 1993)
–Swarm intelligence → Deneubourg, Bonabeau, and Theraulaz’s minimal models of insect behavior (1996)

Perils of complex systems modeling Every modeling project has “perils”. Let’s look at two examples of possible perils in minimal models. (These examples were chosen for pedagogical reasons, not to criticize anyone in particular; examples of these perils appear in many, if not most, computer models.) Then let’s summarize the lessons learned.

Hofstadter’s “Luring Lottery” From Hofstadter’s Metamagical Themas column in Scientific American: "This … brings me to the climax of this column: the announcement of a Luring Lottery open to all readers and nonreaders of Scientific American. The prize of this lottery is $1,000,000/N, where N is the number of entries submitted. Just think: if you are the only entrant (and if you submit only one entry), a cool million is yours! Perhaps, though, you doubt this will come about. It does seem a trifle iffy. If you'd like to increase your chances of winning, you are encouraged to send in multiple entries without limit. Just send in one postcard per entry. If you send in 100 entries, you'll have 100 times the chance of some poor slob who sends in just one.

Come to think of it, why should you have to send in multiple entries separately? Just send one postcard with your name and address and a positive integer (telling how many entries you're making) to: Luring Lottery c/o Scientific American 415 Madison Avenue New York, N.Y. You will be given the same chance of winning as if you had sent in that number of postcards with ‘1’ written on them. Illegible, incoherent, ill-specified, or incomprehensible entries will be disqualified. Only entries received by midnight June 30, 1983 will be considered.” What do you guess happened?

Example 1: Prisoner’s Dilemma
–One of the most famous and influential idea models in the social sciences
–Meant to model social dilemmas, arms races, etc.
–Invented by Flood and Dresher in the 1950s
–Studied most extensively by Axelrod (e.g., The Evolution of Cooperation, 1984)
Payoffs (Player 1’s score, Player 2’s score):
                         Player 2
                    cooperate   defect
Player 1 cooperate    3, 3       0, 5
         defect       5, 0       1, 1

Dilemma: It’s better for any particular individual to defect, but in the long run everyone is better off if everyone cooperates. So what are the conditions under which cooperation can be sustained? This has been investigated by many people using agent-based models:
–Iterated games
–Norms and metanorms
–Spatial distribution of players
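The payoff matrix above can be captured in a few lines of Python. This is only an illustrative sketch (the dictionary and function names are my own, not from any of the models discussed); it shows why defection dominates each single round even though mutual cooperation pays more than mutual defection:

```python
# Payoffs from the matrix: (row player's score, column player's score).
PAYOFFS = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def play(move1, move2):
    """Return (Player 1's score, Player 2's score) for one round."""
    return PAYOFFS[(move1, move2)]

# Defection dominates: whatever the opponent does, "D" scores higher...
assert play("D", "C")[0] > play("C", "C")[0]   # 5 > 3
assert play("D", "D")[0] > play("C", "D")[0]   # 1 > 0
# ...yet mutual cooperation beats mutual defection:
assert play("C", "C")[0] > play("D", "D")[0]   # 3 > 1
```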

Iterated games Example game See Netlogo model (PD two-person iterated) Axelrod’s tournament Axelrod’s genetic algorithm How to represent strategies?
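One way to make “How to represent strategies?” concrete is a lookup-table encoding. The sketch below is deliberately simplified to memory-1 (a table over just the previous round’s pair of moves; Axelrod’s GA actually used longer memories), and the helper names are hypothetical:

```python
# A strategy = an opening move plus a table mapping the previous round's
# (my_move, their_move) pair to the next move ('C' or 'D').

def make_strategy(first_move, table):
    def strategy(history):            # history: list of (my, theirs) pairs
        if not history:
            return first_move
        return table[history[-1]]
    return strategy

# TIT FOR TAT in this encoding: cooperate first, then copy the
# opponent's last move.
tit_for_tat = make_strategy("C", {
    ("C", "C"): "C", ("C", "D"): "D",
    ("D", "C"): "C", ("D", "D"): "D",
})
```

Encoding a strategy as a fixed-length table is what lets a genetic algorithm operate on it: the table entries become the genome.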

Norms
“Norm”: behavior prescribed by a social setting; individuals are punished for not acting according to the norm. Examples:
–Norm against cutting people off in traffic
–Norm for covering your nose and mouth when you sneeze
–Norm for governments not to raise import taxes without sufficient cause

Axelrod’s (1986) Norms model
Norms game: an N-player game.
1. Individual i decides whether to cooperate (C) or defect (D). If D:
–Individual i gets the “temptation” payoff T = 3
–Inflicts on the N − 1 others a “hurt” payoff H = −1
If C, no payoff to self or others.

2. If D, there is a (randomly chosen) probability S of being seen by each of the other agents. Each other agent decides whether or not to punish the defector. Punishers incur an “enforcement” cost E = −2 every time they punish a defector with punishment P = −9.

Strategy of an agent is given by:
–Boldness B (propensity to defect)
–Vengefulness V (propensity to punish defectors)
B, V ∈ [0, 1]. S is the probability that a defector will be observed defecting. An agent defects if B > S (the probability of being observed), and punishes observed defectors with probability V.

A population of 20 agents plays the Norms game repeatedly. One round: every agent has exactly one opportunity to defect, and also the opportunity to observe (and possibly punish) any defection by other players that has taken place in the round. One generation: four rounds. At the beginning of each generation, agents’ payoffs are set to zero; at the end of each generation, a new generation is created via selection based on payoff, plus mutation (i.e., a genetic algorithm).
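The round-and-generation dynamics just described can be sketched as below. This is an illustrative re-implementation under stated assumptions (S is drawn once per defection opportunity and doubles as the observation probability, as on the earlier slide; the selection-and-mutation step is omitted), not Axelrod's original code:

```python
import random

# Payoff constants from the slides.
T, H, E, P = 3, -1, -2, -9   # temptation, hurt, enforcement cost, punishment
N_AGENTS, ROUNDS_PER_GEN = 20, 4

class Agent:
    def __init__(self, boldness, vengefulness):
        self.B, self.V = boldness, vengefulness   # both in [0, 1]
        self.payoff = 0.0

def play_round(agents):
    for i, agent in enumerate(agents):
        S = random.random()          # chance of being seen by each other agent
        if agent.B > S:              # defect when boldness exceeds the risk
            agent.payoff += T
            for j, other in enumerate(agents):
                if j == i:
                    continue
                other.payoff += H    # the defection hurts everyone else
                # 'other' sees the defection with probability S and,
                # seeing it, punishes with probability V.
                if random.random() < S and random.random() < other.V:
                    agent.payoff += P   # defector is punished
                    other.payoff += E   # punishing is costly

def play_generation(agents):
    for a in agents:
        a.payoff = 0.0               # payoffs reset at the start of a generation
    for _ in range(ROUNDS_PER_GEN):
        play_round(agents)
    # The GA step (selection on payoff, plus mutation) would go here.

random.seed(0)
population = [Agent(random.random(), random.random()) for _ in range(N_AGENTS)]
play_generation(population)
```

A useful sanity check on such a sketch: a population with B = 1, V = 0 defects every round and never punishes, so every agent's generation payoff is exactly 4 × (T + 19·H).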

Axelrod’s results (Norms) Axelrod performed five independent runs of 100 generations each. Intuitive expectation for results: Vengefulness will evolve to counter boldness.

Actual results (Norms model) (From Axelrod, 1986)

Actual results (Norms model) (From Axelrod, 1986) Each plotted point = results of one run. Take all 100 generations for all 5 runs, yielding 500 populations; group populations with similar average B and V, and measure average B and V one generation later.

Actual results (Norms model) “This raises the question of just what it takes to get a norm established.” (From Axelrod, 1986)

Metanorms model
On each round, if there is defection and non-punishment when it is seen, each agent gets the opportunity to meta-punish non-punishers (excluding itself and the defector).
–Meta-punishment payoff to a non-punisher: −9
–Meta-enforcement payoff to the enforcer: −2 every time it meta-punishes

Results (Metanorms model) (From Axelrod, 1986)

Results (Metanorms model) “[T]he metanorms game can prevent defections if the initial conditions are favorable enough” (i.e., a sufficient level of vengefulness). (From Axelrod, 1986)

Conclusion: “Metanorms can promote and sustain cooperation in a population.”

Re-implementation by Galan & Izquierdo (2005)
Galan & Izquierdo showed:
1. The long-term behavior of the model is significantly different from its short-term behavior.
2. Behavior depends on the specific values of parameters (e.g., payoff values, mutation rate).
3. Behavior depends on details of the genetic algorithm (e.g., the selection method).

Re-implementation results (From Galan & Izquierdo, 2005)
Norms model: norm collapse occurs in the long term on all runs.
(Norm collapse: average B ≥ 6/7, average V ≤ 1/7. Norm establishment: average B ≤ 2/7, average V ≥ 5/7.)
(Vertical axis: proportion of runs out of 1,000.)

(From Galan & Izquierdo, 2005) Metanorms model: Norm is established early, but collapses in long term on most runs.

(From Galan & Izquierdo, 2005)

Conclusions from Galan & Izquierdo
–Not intended as a specific critique of Axelrod’s work.
–The analysis in the G&I paper required a huge amount of computational power, which was not available in 1986.
–Instead, the re-implementation exercise shows that we need to:
–Replicate models (Axelrod: “Replication is one of the hallmarks of cumulative science.”)
–Explore parameter space

Conclusions from Galan & Izquierdo –Run stochastic simulations for long times, many replications –Complement simulation with analytical work whenever possible –Be aware of the scope of our computer models and of the conclusions obtained with them.

Example 2: Spatial games and chaos M. Nowak and R. May (1992), “Evolutionary games and spatial chaos”, Nature. Experimented with “spatial” Prisoner’s Dilemma to investigate how cooperation can be sustained.

Spatial Prisoner’s Dilemma model
–Players are arrayed on a two-dimensional lattice, one player per site.
–Each player either always cooperates or always defects; players have no memory.
–Each player plays with its eight nearest neighbors; its score is the sum of the payoffs.
–Each site is then occupied by the highest-scoring player in its neighborhood.
–Updates are done synchronously.
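One synchronous update step of such a model might be sketched as follows. This is illustrative only: it assumes the payoff matrix from the earlier slide and a toroidal (wraparound) lattice, whereas Nowak and May actually used a one-parameter payoff matrix:

```python
import random

PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def neighbors(i, j, n):
    """The 8 nearest neighbors of site (i, j) on an n x n torus."""
    return [((i + di) % n, (j + dj) % n)
            for di in (-1, 0, 1) for dj in (-1, 0, 1)
            if (di, dj) != (0, 0)]

def step(grid):
    n = len(grid)
    # 1. Every player plays its 8 neighbors; score = sum of payoffs.
    score = [[sum(PAYOFF[(grid[i][j], grid[x][y])]
                  for x, y in neighbors(i, j, n))
              for j in range(n)] for i in range(n)]
    # 2. Each site adopts the strategy of the highest-scoring player in
    #    its neighborhood (including itself); all sites update at once.
    new = [[None] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            best = max(neighbors(i, j, n) + [(i, j)],
                       key=lambda s: score[s[0]][s[1]])
            new[i][j] = grid[best[0]][best[1]]
    return new

random.seed(1)
grid = [[random.choice("CD") for _ in range(20)] for _ in range(20)]
grid = step(grid)
```

With these payoffs, a uniform grid is a fixed point, while a lone defector in a sea of cooperators outscores every neighbor and spreads to its whole neighborhood in one step.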

NetLogo PD Basic Evolutionary model

Nowak and May’s results
Sample of patterns that arise from different initial conditions.
–Blue: cooperator at a site that was a cooperator in the previous generation
–Red: defector following a defector
–Yellow: defector following a cooperator
–Green: cooperator following a defector

In general, different initial configurations and different payoff ratios can generate oscillating or “chaotically changing” patterns in which cooperators and defectors coexist indefinitely. Contrasts with non-spatial multi-player Prisoner’s Dilemma in which defectors dominate without special additions (e.g., metanorms).

Interpretation
The motivation for this work is “primarily biological”: “We believe that deterministically generated spatial structure within populations may often be crucial for the evolution of cooperation, whether it be among molecules, cells, or organisms.”
“That territoriality favours cooperation... is likely to remain valid for real-life communities.” (Karl Sigmund, Nature, 1992, in commentary on the Nowak and May paper)

Re-implementation by Glance and Huberman (1993)
“There are important differences between the way a system composed of many interacting elements is simulated by a digital machine and the manner in which it behaves when studied in real experiments.”
They asked: what role does the assumption of synchronous updating play in the behavior observed by Nowak and May?
“In natural social systems... a global clock that causes all the elements of the system to update their state at the same time seldom exists.”
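The distinction at issue can be made concrete with a generic sketch (the function names are mine): a synchronous step computes every new state from the old grid at once, while an asynchronous step updates one randomly chosen site at a time, so later updates see the effects of earlier ones:

```python
import random

def synchronous_step(grid, new_state):
    """new_state(grid, i, j) -> updated value for site (i, j).
    All new values are computed from the old grid simultaneously."""
    return [[new_state(grid, i, j) for j in range(len(grid[0]))]
            for i in range(len(grid))]

def asynchronous_step(grid, new_state, n_updates):
    """Update one randomly chosen site at a time, in place:
    the order of updates now matters."""
    for _ in range(n_updates):
        i = random.randrange(len(grid))
        j = random.randrange(len(grid[0]))
        grid[i][j] = new_state(grid, i, j)
    return grid
```

Glance and Huberman's point is that these two schedules can drive the same local rule to qualitatively different global behavior.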

Glance and Huberman’s results
Synchronous vs. asynchronous updating, starting from the same initial configuration: a single defector in the center.
–Blue: cooperator at a site that was a cooperator in the previous generation
–Red: defector following a defector
–Yellow: defector following a cooperator
–Green: cooperator following a defector

Glance and Huberman’s conclusion "Until it can be demonstrated that global clocks synchronize mutations and chemical reactions among distal elements of biological structures, the patterns and regularities observed in nature will require continuous descriptions and asynchronous simulations." Note: Show model with “nonlocal” interactions

Nowak, Bonhoeffer, and May response (PNAS, 1994)
–Huberman & Glance performed their simulation for only one value of the “I defect, you cooperate” payoff p.
–Sequential, rather than synchronous, updating of sites still leads to persistence of both cooperation and defection for a range of p values.
–What about behavior under other forms of asynchronous updating (e.g., random)?

Summary Some prospects for idea models: –Often no other way to make progress! –“All models are wrong. Some are useful.” –All the ones I talked about here were useful and led to new insights, new ways of thinking about complex systems, better models, and better understanding of how to build useful models.

Some perils for minimal models: –Hidden unrealistic assumptions –Sensitivity to parameters –Unreplicability due to missing information in publications –Misinterpretation and over-reliance on results –Lack of theoretical insight

Summary
How to overcome the perils:
1. Replicate, replicate, replicate! Independent replication of computer models is essential for progress in complex systems. (The community of scientists and funding agencies needs to recognize this as a worthwhile, publishable, fundable activity.)
2. Write your own code.

3. Question the assumptions of models.
4. Explore parameter space.
5. Complement simulation with analytical work.
6. Understand and emphasize the limitations of your model.
7. Give enough detail in your paper that someone else can replicate your experiments!

The ‘edge of chaos’ in cellular automata (C. Langton, 1992)
Langton devised an “order parameter” λ for cellular automata. Building on Wolfram’s classes of behavior for CAs, Langton found evidence that cellular automata can be classified as “ordered” or “chaotic” roughly according to λ.
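For a CA rule table, λ is the fraction of entries that map to a non-quiescent state. A minimal sketch (taking state 0 as the quiescent state; the function name is mine):

```python
def langton_lambda(rule_table, quiescent=0):
    """rule_table maps each neighborhood configuration to a next state."""
    non_quiescent = sum(1 for out in rule_table.values() if out != quiescent)
    return non_quiescent / len(rule_table)

# Example: elementary CA rule 110 as a table over 3-cell neighborhoods
# (binary 01101110 read across the neighborhoods 111..000).
rule110 = {
    (1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
    (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0,
}
print(langton_lambda(rule110))   # 5 of 8 entries are non-quiescent: 0.625
```

λ = 0 gives a rule that immediately freezes into the quiescent state; λ near 1 tends toward disorder; Langton's claim concerns what happens in between.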

Langton gave some evidence that the “complexity” of patterns formed by cellular automata is maximized at the transition between order and chaos. He argued that the potential for computation must be maximized at this “phase transition”, since computation needs long transients, information-carrying signals, etc.

The ‘edge of chaos’ in cellular automata (Packard, 1988)
Packard used a genetic algorithm to evolve cellular automata to perform a computation (the “density classification task”).
Hypothesis: the resulting CAs will end up at a λ value near the “edge of chaos”.
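The density classification task can be stated compactly: the CA should settle to all 1s when the initial density of 1s exceeds 1/2, and to all 0s otherwise, with a rule's fitness measured as the fraction of random initial configurations it classifies correctly. A sketch of that fitness measure (function names are mine; an odd lattice size is used so the density is never exactly 1/2):

```python
import random

def correct_answer(config):
    """The target: 1 if the density of 1s exceeds 1/2, else 0."""
    return 1 if sum(config) / len(config) > 0.5 else 0

def classifies_correctly(run_ca, config):
    """run_ca: function taking an initial config, returning the final one."""
    target = correct_answer(config)
    return all(cell == target for cell in run_ca(config))

def fitness(run_ca, n_cells=149, n_trials=100):
    """Fraction of random initial configurations classified correctly."""
    hits = 0
    for _ in range(n_trials):
        config = [random.randint(0, 1) for _ in range(n_cells)]
        hits += classifies_correctly(run_ca, config)
    return hits / n_trials
```

The GA's job is then to find a rule table whose `run_ca` scores well on this measure; the debate on these slides is about where in λ-space such rules end up.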

γ: measure of chaos at each λ value. Packard’s GA: fraction of rules at each λ value.

γ: measure of chaos at each λ value. Packard’s GA: fraction of rules at each λ value. Mitchell, Hraber, & Crutchfield (1994): fraction of rules at each λ value.

What caused the discrepancies?
–A theoretical result (Das, Mitchell, & Crutchfield, 1994) shows that λ has to be 0.5 to do well on this task. (Note that λ = 0.5 does not mean chaotic here.)
–Via personal communication, we found out that the original GA included injections of many random CAs with various densities into the population at each generation, in order to prevent convergence.

It seems that this, and other non-standard details of the GA, were the cause of the behavior Packard observed, rather than any intrinsic advantage of particular λ values. In fact, it is not clear that particular λ values were correlated with high-fitness CAs.