Programming for Social Scientists
Lecture 4
UCLA Political Science 209-1, Winter 1999
Lars-Erik Cederman & Benedikt Stefansson
Exercise 1b

// Player.m (excerpt)
int matrix[2][2] = ... ;   // row player's payoff matrix (initializer not shown on the slide)

- setRow: (int) r Col: (int) c
{
  if (rowPlayer) {
    row = r;
    col = c;
  }
  else {
    row = c;
    col = r;
  }
  return self;
}

- (BOOL) move
{
  return matrix[!row][col] > matrix[row][col];
}
Exercise 1c

// Player.m (excerpt)
int matrix[2][2][2] = {{{3,0},{5,1}},
                        ... };        // second game's payoffs not shown on the slide

- init: (int) n rowPlayer: (BOOL) rp playerType: (int) pt
{
  name = n;
  rowPlayer = rp;
  playerType = pt;
  return self;
}
...
- (BOOL) move
{
  return matrix[playerType][!row][col] > matrix[playerType][row][col];
}
Exercise 1c (cont'd)

// main.m (excerpt)
player1 = [Player create: globalZone];
player2 = [Player create: globalZone];

for (pt = 0; pt < 2; pt++) {
  [player1 init: 1 rowPlayer: YES playerType: pt];
  [player2 init: 2 rowPlayer: NO  playerType: pt];
  for (r = 0; r < 2; r++) {
    printf(" \n");
    printf("|");
    for (c = 0; c < 2; c++) {
      [player1 setRow: r Col: c];
      [player2 setRow: r Col: c];
      if ([player1 move] || [player2 move])
        printf("   |");
      else
        printf(" * |");     // '*' marks a cell from which neither player wants to deviate
    }
    printf("\n");
  }
  printf(" \n");
}
Exercise 2: Player.m

- init: (int) n
{
  name = n;
  alive = YES;
  return self;
}

- setOther: o
{
  other = o;
  return self;
}

- (BOOL) isAlive
{
  return alive;
}

- play: r
{
  int shot;

  [r load];
  shot = [r trigger];
  if (shot)
    alive = NO;
  else
    [other play: r];
  return self;
}
Exercise 2: Revolver.m

- empty
{
  bullets = 0;
  return self;
}

- load
{
  bullets++;
  return self;
}

- (BOOL) trigger
{
  return (double)rand() / (double)RAND_MAX < bullets / 6.0;
}
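Because the revolver gains a bullet before every trigger pull, the hazard rises each turn: 1/6 on the first pull, 2/6 on the second, and so on. A quick Monte Carlo check in plain C (the duel helper below is hypothetical, not part of the exercise code) suggests the player who pulls first loses slightly more often, about 52 percent of duels (1/6 + 60/216 + 600/7776 is roughly 0.52).

#include <stdio.h>
#include <stdlib.h>

/* One duel under the Exercise 2 rules: each turn the revolver gains a bullet,
   then the current player pulls the trigger with hit probability bullets/6.
   Returns 0 if the first shooter is shot, 1 if the second shooter is shot. */
int duel(void)
{
  int bullets = 0, current = 0;
  for (;;) {
    bullets++;                                              /* [r load]    */
    if ((double)rand() / (double)RAND_MAX < bullets / 6.0)  /* [r trigger] */
      return current;                                       /* shot fired: current player loses */
    current = !current;                                     /* [other play: r] */
  }
}

int main(void)
{
  int losses[2] = {0, 0}, n;
  srand(42);
  for (n = 0; n < 100000; n++)
    losses[duel()]++;
  printf("First shooter shot: %d   Second shooter shot: %d\n", losses[0], losses[1]);
  return 0;
}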
Prisoner's Dilemma Game

                        Player 2
                     C          D
  Player 1    C     3, 3       0, 5
              D     5, 0       1, 1

(Payoffs listed as: Player 1, Player 2.)
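The game fits in a small payoff array, as the exercises above already do. A minimal plain-C sketch (array layout as in Exercise 1c's first game, with index 0 = C and 1 = D) checks that D pays more than C against either move of the opponent, which is why (D, D) is the unique equilibrium:

#include <stdio.h>

/* Row player's payoffs: payoff[own move][opponent's move], 0 = C, 1 = D */
int payoff[2][2] = { {3, 0},     /* C vs C, C vs D */
                     {5, 1} };   /* D vs C, D vs D */

int main(void)
{
  int col;
  for (col = 0; col < 2; col++)
    printf("Against %c: D pays %d, C pays %d -> %s is the better reply\n",
           col == 0 ? 'C' : 'D',
           payoff[1][col], payoff[0][col],
           payoff[1][col] > payoff[0][col] ? "D" : "C");
  return 0;
}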
Iterated Prisoner's Dilemma

Repetitions of the single-shot PD.
The "Folk Theorem" shows that mutual cooperation is sustainable in the repeated game.
In The Evolution of Cooperation (1984), Robert Axelrod created a computer tournament of the IPD:
– cooperation sometimes emerges
– Tit For Tat is a particularly effective strategy
One-Step Memory Strategies

Strategy = (i, p, q)
– i = prob. of cooperating at t = 0
– p = prob. of cooperating at t if the opponent cooperated at t-1
– q = prob. of cooperating at t if the opponent defected at t-1

[Diagram: the move (C or D) at t depends on the memory of the opponent's move (C or D) at t-1.]
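For the deterministic strategies used here (i, p and q are 0 or 1), choosing the next move is just a lookup, which is essentially what the step method in Player.m does below. A minimal plain-C sketch of the idea (the struct and function names are illustrative, not from the model):

#include <stdio.h>

typedef struct { double i, p, q; } Strategy;   /* the triple (i, p, q) defined above */

/* 1 = cooperate, 0 = defect; 'memory' is the opponent's move at t-1 */
int nextMove(const Strategy *s, int t, int memory)
{
  double prob = (t == 0) ? s->i : (memory ? s->p : s->q);
  return prob >= 0.5;   /* deterministic strategies: prob is either 0 or 1 */
}

int main(void)
{
  Strategy tft = {1, 1, 0};                                      /* Tit For Tat */
  printf("TFT after a defection:   %d\n", nextMove(&tft, 1, 0)); /* prints 0 (defect)    */
  printf("TFT after cooperation:   %d\n", nextMove(&tft, 1, 1)); /* prints 1 (cooperate) */
  return 0;
}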
The Four Strategies (cf. Cohen et al., p. 8)
A four-iteration PD

[Diagram: timeline t = 0, 1, 2, 3, 4 for the Row and Column players. At t = 0 each player chooses a move in {C, D} according to i; at each later step the choice follows (p, q), given the opponent's previous move. Each iteration yields payoffs U for both players, which cumulate to a total S.]
all-D meets TFT

Row plays all-D (i = 0, p = q = 0); Column plays TFT (i = 1, p = 1, q = 0).

  t               0   1   2   3    Cumulated payoff
  Row (all-D)     D   D   D   D    5 + 1 + 1 + 1 = 8
  Column (TFT)    C   D   D   D    0 + 1 + 1 + 1 = 3
Moves and Total Payoffs for all 4 x 4 Strategy Combinations

Source: Cohen et al., Table 3, p. 10
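The flavor of that table can be reproduced in a few lines of plain C: let every (i, p, q) strategy meet every other for four iterations and add up the payoffs. This is a sketch, not the tournament code itself; it assumes the four strategies are ALLC = (1,1,1), ALLD = (0,0,0), TFT = (1,1,0) and anti-Tit-For-Tat = (0,0,1), and it uses the 1 = cooperate, 0 = defect coding of the simpleIPD model below. For instance, it prints "ALLD vs TFT : 8, 3", matching the worked example above.

#include <stdio.h>

/* Payoff to a player choosing 'a' against an opponent choosing 'b';
   1 = cooperate, 0 = defect. */
static int payoff(int a, int b)
{
  static const int m[2][2] = { {1, 5},     /* D vs D, D vs C */
                               {0, 3} };   /* C vs D, C vs C */
  return m[a][b];
}

typedef struct { const char *name; int i, p, q; } Strategy;

int main(void)
{
  /* Assumed strategy set, written as (i, p, q) */
  Strategy s[4] = { {"ALLC", 1, 1, 1}, {"ALLD", 0, 0, 0},
                    {"TFT",  1, 1, 0}, {"ATFT", 0, 0, 1} };
  int r, c, t;

  for (r = 0; r < 4; r++)
    for (c = 0; c < 4; c++) {
      int moveR = s[r].i, moveC = s[c].i;          /* first moves come from i   */
      int totR = 0, totC = 0;
      for (t = 0; t < 4; t++) {                    /* four iterations per match */
        int nextR = moveC ? s[r].p : s[r].q;       /* react to opponent's move  */
        int nextC = moveR ? s[c].p : s[c].q;
        totR += payoff(moveR, moveC);
        totC += payoff(moveC, moveR);
        moveR = nextR;
        moveC = nextC;
      }
      printf("%-4s vs %-4s : %2d, %2d\n", s[r].name, s[c].name, totR, totC);
    }
  return 0;
}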
simpleIPD: File structure

– main.m
– ModelSwarm.h, ModelSwarm.m
– Player.h, Player.m
simpleIPD: main.m

int main(int argc, const char ** argv)
{
  id modelSwarm;

  initSwarm(argc, argv);
  modelSwarm = [ModelSwarm create: globalZone];
  [modelSwarm buildObjects];
  [modelSwarm buildActions];
  [modelSwarm activateIn: nil];
  [[modelSwarm getActivity] run];
  return 0;
}
The ModelSwarm

An instance of the Swarm class can manage a model world.
It facilitates the creation of agents and of the interaction model.
A model can have many Swarms, often nested.

[Diagram: main contains the ModelSwarm, which contains Player1 and Player2.]
simpleIPD: ModelSwarm.h

@interface ModelSwarm: Swarm
{
  id player1, player2;
  int numIter;
  id stopSchedule, modelSchedule, playerActions;
}
+ createBegin: (id) aZone;
- createEnd;
- updateMemories;
- distrPayoffs;
- buildObjects;
- buildActions;
- activateIn: (id) swarmContext;
@end
Creating a Swarm

I.   createBegin, createEnd
     – Initialize memory and parameters
II.  buildObjects
     – Build all the agents and objects in the model
III. buildActions
     – Define order and timing of events
IV.  activate
     – Merge into the top-level swarm or start the Swarm running
Step I: Initializing the ModelSwarm

// In ModelSwarm.m
+ createBegin: (id) aZone
{
  ModelSwarm *obj;

  obj = [super createBegin: aZone];
  return obj;
}

- createEnd
{
  return [super createEnd];
}
Details on the createBegin method

The "+" indicates that this is a class method, as opposed to "-", which indicates an instance method.
ModelSwarm *obj
– tells the compiler that obj is statically typed to the ModelSwarm class
[super ...]
– executes the createBegin: method in the superclass (Swarm) and returns an instance of ModelSwarm
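To see the "+" versus "-" distinction in isolation, here is a small, self-contained Objective-C example that has nothing to do with Swarm (the Counter class and its methods are invented for illustration): the class method is sent to the class itself and hands back a new instance, while the instance method is sent to a statically typed instance.

#import <Foundation/Foundation.h>
#include <stdio.h>

@interface Counter : NSObject
{
  int count;
}
+ (Counter *) counterStartingAt: (int) start;   /* class method: sent to the class    */
- (int) next;                                   /* instance method: sent to an object */
@end

@implementation Counter
+ (Counter *) counterStartingAt: (int) start
{
  Counter *obj = [[self alloc] init];   /* the class builds and returns an instance */
  obj->count = start;
  return obj;
}
- (int) next
{
  return count++;
}
@end

int main(void)
{
  Counter *c = [Counter counterStartingAt: 3];   /* "+" message goes to the class    */
  int a = [c next];                              /* "-" messages go to the instance  */
  int b = [c next];
  printf("%d %d\n", a, b);                       /* prints: 3 4 */
  return 0;
}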
Memory zones

The defobj superclass provides facilities to create and drop an object.
In either case the object is created "in a memory zone".
Effectively this means that the underlying mechanism provides enough memory for the instance, its variables and its methods.
The zone also keeps track of all objects created in it and allows you to reclaim memory simply by dropping the zone: it signals all objects in it to destroy themselves.
In main.m: modelSwarm = [ModelSwarm create: globalZone];
Where did that zone come from?

– In main.m, initSwarm(argc, argv) executes various functions in defobj and simtools which, among other things, create a global memory zone.
– The create: method is implemented in defobj, the superclass of the Swarm class, and it in turn calls the +createBegin: method in ModelSwarm.m.
Step II: Building the agents

- buildObjects
{
  player1 = [Player createBegin: self];
  [player1 initPlayer];
  player1 = [player1 createEnd];

  player2 = [Player createBegin: self];
  [player2 initPlayer];
  player2 = [player2 createEnd];

  [player1 setOtherPlayer: player2];
  [player2 setOtherPlayer: player1];

  return self;
}
Details on the buildObjects phase

The purpose of this method is to create each instance of the objects needed at the start of the simulation, and then to pass parameters to those objects.
It is good OOP protocol to provide a setX: method for each parameter X we want to set, as in:
  [player1 setOtherPlayer: player2]
Why createBegin vs. create?

Using createBegin: with createEnd is appropriate when we want a reminder that the object needs to initialize, calculate or set something (usually this code is put in the createEnd method).
Always pair createBegin with createEnd to avoid messy problems.
But create: is perfectly fine if we just want to create an object without further ado.
simpleIPD: ModelSwarm.m (cont'd)

- updateMemories
{
  [player1 remember];
  [player2 remember];
  return self;
}

- distrPayoffs
{
  int action1, action2;

  action1 = [player1 getNewAction];
  action2 = [player2 getNewAction];
  [player1 setPayoff: [player1 getPayoff] + matrix[action1][action2]];
  [player2 setPayoff: [player2 getPayoff] + matrix[action2][action1]];
  return self;
}
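The payoff matrix used by distrPayoffs is not shown on these slides. Given the action coding in Player.m (newAction = 1 means cooperate) and the game above, a declaration consistent with the code would be the following sketch (an assumption, not the original source):

/* ModelSwarm.m (assumed): matrix[ownAction][otherAction], 1 = cooperate, 0 = defect */
int matrix[2][2] = { {1, 5},     /* D vs D, D vs C */
                     {0, 3} };   /* C vs D, C vs C */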
simpleIPD: Player.h

@interface Player: SwarmObject
{
  int time, numIter;
  int i, p, q;
  int cumulPayoff;
  int memory;
  int newAction;
  id other;
}
- initPlayer;
- createEnd;
- setOtherPlayer: player;
- setPayoff: (int) p;
- (int) getPayoff;
- (int) getNewAction;
- remember;
- step;
@end
simpleIPD: Player.m

- initPlayer
{
  time = 0;
  cumulPayoff = 0;
  i = 1;  p = 1;  q = 0;   // TFT
  newAction = i;
  return self;
}

- createEnd
{
  [super createEnd];
  return self;
}

- setOtherPlayer: player
{
  other = player;
  return self;
}

- setPayoff: (int) payoff
{
  cumulPayoff = payoff;
  return self;
}

- (int) getPayoff
{
  return cumulPayoff;
}

- (int) getNewAction
{
  return newAction;
}

- remember
{
  memory = [other getNewAction];
  return self;
}

- step
{
  if (time == 0)
    newAction = i;
  else {
    if (memory == 1)
      newAction = p;
    else
      newAction = q;
  }
  time++;
  return self;
}
Step III: Building schedules

- buildActions
{
  stopSchedule = [Schedule create: self];
  [stopSchedule at: 12 createActionTo: self message: M(stopRunning)];

  modelSchedule = [Schedule createBegin: self];
  [modelSchedule setRepeatInterval: 3];
  modelSchedule = [modelSchedule createEnd];

  playerActions = [ActionGroup createBegin: self];
  playerActions = [playerActions createEnd];
  [playerActions createActionTo: player1 message: M(step)];
  [playerActions createActionTo: player2 message: M(step)];

  [modelSchedule at: 0 createActionTo: self message: M(updateMemories)];
  [modelSchedule at: 1 createAction: playerActions];
  [modelSchedule at: 2 createActionTo: self message: M(distrPayoffs)];

  return self;
}
Schedules

Schedules define events in terms of:
– time of first invocation
– target object
– method to call

  [schedule at: t createActionTo: agent message: M(method)]

[Timeline: ticks t, t+1, t+2 with, e.g., [m update] and [m distribute] placed at particular ticks.]
ActionGroups

ActionGroups group events at the same timestep. They define an event in terms of:
– target object
– method to call

  [actionGroup createActionTo: agent message: M(method)]

[Timeline: t=1 [m update]; t=2 [p1 step], [p2 step]; t=3 [m distribute].]
Implementation

schedule = [Schedule createBegin: [self getZone]];
[schedule setRepeatInterval: 3];
schedule = [schedule createEnd];

[schedule at: 1 createActionTo: m message: M(update)];
[schedule at: 3 createActionTo: m message: M(distribute)];

actionGroup = [ActionGroup createBegin: [self getZone]];
actionGroup = [actionGroup createEnd];
[actionGroup createActionTo: p1 message: M(step)];
[actionGroup createActionTo: p2 message: M(step)];

[schedule at: 2 createAction: actionGroup];

[Timeline: with a repeat interval of 3, the three actions recur at t, t+1, t+2, then t+3, t+4, ...]
Step IV: Activating the Swarm

- activateIn: (id) swarmContext
{
  [super activateIn: swarmContext];
  [modelSchedule activateIn: self];
  [stopSchedule activateIn: self];
  return [self getActivity];
}

- stopRunning
{
  printf("Payoffs: %d,%d\n", [player1 getPayoff], [player2 getPayoff]);
  [[self getActivity] terminate];
  return self;
}
Activation of schedule(s)

In main.m:         [modelSwarm activateIn: nil];
In ModelSwarm.m:   - activateIn: (id) swarmContext
                     [modelSchedule activateIn: self]

There is only one Swarm, so we activate it in nil.
This one line can set in motion a complex scheme of merging and activation.
Previous example as a for loop

for (t = 1; t < 4; t++) {
  [self updateMemories];
  [player1 step];
  [player2 step];
  [self distrPayoffs];
}
[self stopRunning];
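To see what the schedule machinery accomplishes, the whole model can also be collapsed into one self-contained plain-C program. This is a sketch, not the Swarm code: the struct and helper names are invented, the payoff matrix is the one assumed earlier, both players use the TFT parameters from initPlayer, and the number of periods is taken to be four as in the four-iteration PD above.

#include <stdio.h>

/* Assumed payoff matrix: matrix[ownAction][otherAction], 1 = cooperate, 0 = defect */
static const int matrix[2][2] = { {1, 5}, {0, 3} };

typedef struct {
  int time;                 /* local clock, as in Player.m            */
  int i, p, q;              /* one-step memory strategy               */
  int cumulPayoff;
  int memory, newAction;    /* opponent's last move / own chosen move */
} Player;

static void initPlayer(Player *pl)        /* mirrors -initPlayer (TFT) */
{
  pl->time = 0;
  pl->cumulPayoff = 0;
  pl->i = 1; pl->p = 1; pl->q = 0;
  pl->newAction = pl->i;
}

static void step(Player *pl)              /* mirrors -step */
{
  if (pl->time == 0)
    pl->newAction = pl->i;
  else
    pl->newAction = pl->memory ? pl->p : pl->q;
  pl->time++;
}

int main(void)
{
  Player p1, p2;
  int t, numIter = 4;                     /* four periods (assumed) */

  initPlayer(&p1);
  initPlayer(&p2);
  for (t = 0; t < numIter; t++) {
    p1.memory = p2.newAction;             /* updateMemories */
    p2.memory = p1.newAction;
    step(&p1);                            /* playerActions  */
    step(&p2);
    p1.cumulPayoff += matrix[p1.newAction][p2.newAction];   /* distrPayoffs */
    p2.cumulPayoff += matrix[p2.newAction][p1.newAction];
  }
  printf("Payoffs: %d,%d\n", p1.cumulPayoff, p2.cumulPayoff);  /* two TFTs cooperate throughout: 12,12 */
  return 0;
}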