CSRG Presented by Souvik Das 11/02/05


On the Emergence of Social Conventions: modeling, analysis and simulations
Yoav Shoham & Moshe Tennenholtz
Journal of Artificial Intelligence 94(1-2), pp. 139-166, July 1997.

Authors
Yoav Shoham: Professor of Computer Science, Stanford University. AI, MAS, game theory, e-commerce.
http://ai.stanford.edu/~shoham/ , email: shoham@stanford.edu
Moshe Tennenholtz: Professor of Industrial Engineering and Management, Technion – Israel Institute of Technology. AI, MAS, protocol evolution.
http://iew3.technion.ac.il/Home/Users/Moshet.phtml , email: moshet@ie.technion.ac.il

Definition: Social Convention
Limiting agents' choices induces subgames; such restrictions are called social constraints (social laws).
When the restrictions leave exactly one strategy available to every agent, the social law is a social convention.

Three basic concepts
Maximin: guarantees the highest minimal payoff; rationality of the other players or common knowledge need not be assumed.
Nash Equilibrium: no player can improve his/her payoff by unilaterally deviating from the equilibrium; common knowledge and rationality are assumed.
Pareto Optimality: a joint action is Pareto optimal if no agent's payoff can be increased without decreasing another's.

Coordination and Cooperation games

Coordination game (maximin gives -1, while Nash and Pareto give 1 as payoff):

         a1       a2
a1      1,1    -1,-1
a2    -1,-1     1,1

Cooperation game (maximin and Nash give -2, but this outcome is Pareto dominated by 1,1):

          C        D
C       1,1     -3,3
D      3,-3    -2,-2
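The three solution concepts can be checked mechanically on these two games. The sketch below is illustrative, not from the paper: the game encodings and helper names are mine, actions are numbered 0 and 1, and payoffs are given as (row player, column player).

```python
from itertools import product

# The two 2x2 games from the slide, encoded as dicts from a joint
# action (row, col) to the payoff pair (row player, column player).
COORDINATION = {(0, 0): (1, 1), (0, 1): (-1, -1),
                (1, 0): (-1, -1), (1, 1): (1, 1)}
COOPERATION  = {(0, 0): (1, 1), (0, 1): (-3, 3),
                (1, 0): (3, -3), (1, 1): (-2, -2)}

def maximin_value(game):
    # Best payoff the row player can guarantee regardless of the column player.
    return max(min(game[(r, c)][0] for c in (0, 1)) for r in (0, 1))

def pure_nash(game):
    # A joint action is a pure Nash equilibrium iff neither player
    # can improve by unilaterally deviating.
    eq = []
    for r, c in product((0, 1), repeat=2):
        row_ok = all(game[(r, c)][0] >= game[(r2, c)][0] for r2 in (0, 1))
        col_ok = all(game[(r, c)][1] >= game[(r, c2)][1] for c2 in (0, 1))
        if row_ok and col_ok:
            eq.append((r, c))
    return eq

print(maximin_value(COORDINATION))  # -1
print(maximin_value(COOPERATION))   # -2
print(pure_nash(COORDINATION))      # [(0, 0), (1, 1)]
print(pure_nash(COOPERATION))       # [(1, 1)]
```

The output matches the slide: maximin guarantees only -1 and -2, while the coordination game's two Nash equilibria pay 1, and the cooperation game's unique Nash equilibrium pays -2.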

Motivation Under what conditions do conventions eventually emerge? How efficiently are they achieved? What are the different parameters affecting speed of convergence?

Game Model
Symmetric, population size N >= 4
Each game is 2-player, 2-choice
Typical coordination and cooperation games
Payoff matrix M of each game g:

         a1      a2
a1     x,x     u,v
a2     v,u     y,y

Game model cont.
A social law sl induces a subgame g_sl, where g is the unrestricted game.
Rationality test of sl:
Let V be the game variable used for determining rationality, and let V(g) denote the value of that variable in game g.
A social law is rational with respect to g if V(g) < V(g_sl).
Note: rationality here does not imply optimality.

Example
In the coordination game there are two possible rational social conventions with respect to maximin: restrict all agents to either one of the two strategies.
In the cooperation game there is only one possible rational social convention with respect to maximin: Cooperate.
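The rationality test from the previous slide can be verified numerically for the cooperation game. A minimal sketch under my own encoding (Cooperate = action 0, payoffs as listed on the games slide; the helper name is mine): restricting both players to Cooperate raises the maximin value from -2 to 1, so the law passes the test V(g) < V(g_sl).

```python
# Cooperation game: joint action (row, col) -> (row payoff, col payoff).
COOPERATION = {(0, 0): (1, 1), (0, 1): (-3, 3),
               (1, 0): (3, -3), (1, 1): (-2, -2)}

def maximin(game, actions):
    # Row player's maximin value when both players are restricted
    # to the given subset of actions.
    return max(min(game[(r, c)][0] for c in actions) for r in actions)

v_g   = maximin(COOPERATION, (0, 1))  # unrestricted game g
v_gsl = maximin(COOPERATION, (0,))    # subgame under "always Cooperate"
print(v_g, v_gsl, v_g < v_gsl)        # -2 1 True
```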

The Game Dynamics
N-k-g stochastic social game: an unbounded sequence of ordered tuples of k agents selected at random from the given N agents.
The randomly selected k agents meet repeatedly and play game g.
In each iteration, action selection by the agents is synchronous.

Action Selection
An agent switches to a new action iff the total payoff obtained from that action in the last m (m >= N >= 4) iterations is greater than that of the present action over the same period.
This action update rule is called HCR, the Highest Cumulative Reward rule.
More complicated weighted HCR rules can be built on the simple HCR rule.
m puts a finite bound on history.
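A minimal simulation of HCR dynamics for the 2x2 coordination game might look like the following. This is a sketch under my own assumptions (pairwise meetings with updates applied as each pair plays, and illustrative parameter values), not the paper's exact protocol:

```python
import random

# 2x2 coordination game: payoff 1 for matching actions, -1 otherwise.
PAYOFF = {(0, 0): 1, (1, 1): 1, (0, 1): -1, (1, 0): -1}

def simulate(N=10, m=10, iterations=5000, seed=0):
    rng = random.Random(seed)
    actions = [rng.randrange(2) for _ in range(N)]
    # history[i] holds agent i's last m (action, payoff) pairs.
    history = [[] for _ in range(N)]

    for _ in range(iterations):
        i, j = rng.sample(range(N), 2)  # a random pair meets
        payoffs = {i: PAYOFF[(actions[i], actions[j])],
                   j: PAYOFF[(actions[j], actions[i])]}
        for agent, pay in payoffs.items():
            history[agent].append((actions[agent], pay))
            history[agent] = history[agent][-m:]  # finite memory bound m
            # HCR: switch iff the other action's cumulative payoff in
            # memory strictly exceeds the current action's.
            totals = {0: 0, 1: 0}
            for a, p in history[agent]:
                totals[a] += p
            other = 1 - actions[agent]
            if totals[other] > totals[actions[agent]]:
                actions[agent] = other
    return actions

final = simulate()
print(final)
```

With enough iterations the population typically locks into one of the two conventions (all 0s or all 1s), in line with Theorem 1's convergence guarantee.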

Theorem 1
Given an N-2-g stochastic social agreement game:
For every ε > 0 there exists a bounded number Λ such that, if the system runs for Λ iterations, the probability that a social convention is reached is at least 1-ε.
Once the convention is reached, it is never left.
Reaching the convention guarantees each agent a payoff no less than the maximal value initially guaranteed.
If a social convention exists for g that is rational w.r.t. the maximin value, then the convention reached will be rational w.r.t. maximin.
Corollary: the HCR rule guarantees eventual convergence for coordination and cooperation social games, that is, a rational convention.

Theorem 2
Efficiency is measured in terms of the number of iterations T(N) required to reach the desired behavior.
T(N) = Ω(N log N) for any update rule R that guarantees convergence.

Proof: Theorem 1, Case I: coordination games (y > 0, u < 0, v < 0)
A rational social convention restricts all agents to the same strategy.
A pair of agents (i, j) with the same strategy meet repeatedly until all other agents forget their past.
Then i meets some x (x ≠ j) and then meets j again; this step continues in a loop until i has met all agents.
If Λ = k·g(N)·f(N), then the probability that the convention is not reached is e^(-k·f(N)), and g(N) is bounded by an exponent of the form N^s, where s is a polynomial in m and N.

Proof: Theorem 1, Case II: cooperation games (y < 0, u < 0, v > 0)
The proof has the same structure as Case I; the major change is in the creation of a pair of cooperating agents.
This is achieved by having a pair of agents meet until a pair of non-cooperative agents forget their past.
These history-less non-cooperative agents then meet until all other non-cooperative agents forget their history.
They then meet the remaining agents sequentially, and the convention is reached in the same way as in the coordination game.

Proof: Theorem 2
The total number of ordered ways of choosing two players from N is P(N,2) = N(N-1).
The number of ways in which a particular player can be chosen is N.
The probability of a particular player not being chosen as player 1 or player 2 of the 2-person game in one iteration is (1 - 1/(N-1))^2.
The probability of that player not being chosen for a stretch of T(N) = (N-1)·f(N) games is (1 - 1/(N-1))^(2(N-1)f(N)), which converges to e^(-2f(N)).

Proof: Theorem 2 cont.
Consider the random variable Y_N(i), the number of agents that did not participate in any of the first i iterations.
E[Y_N(T(N))] → 0 implies the convention is established.
If e^(-2f(N)) > 1/N, then E[Y_N(T(N))] > 1, implying no convergence.
Therefore, for convergence, e^(-2f(N)) < 1/N.
Taking natural logs, f(N) > (1/2) ln N.
Thus, T(N) = Ω(N log N).
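The limit used in this step can be checked numerically; the function name and the choice f = 3 are illustrative:

```python
import math

# Numeric check of the limit in the Theorem 2 proof:
# (1 - 1/(N-1))^(2(N-1)f) -> e^(-2f) as N grows.
def not_chosen_prob(N, f):
    return (1 - 1 / (N - 1)) ** (2 * (N - 1) * f)

f = 3.0
limit = math.exp(-2 * f)
for N in (10, 100, 10000):
    print(N, not_chosen_prob(N, f), limit)
```

Already at moderate N the probability is close to e^(-2f), which is why the proof can treat the two expressions interchangeably.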

Evolution of coordination: Experimental Results
Coordination games reach conventions rapidly under the HCR rule, while cooperation games do not.
Parameters considered:
Update frequency: how frequently an agent uses its HCR action-update rule.
Memory restarts: previous history is forgotten, but the current action is retained.
Memory window: the previous m iterations in which the agent participated, versus the previous m iterations regardless of whether the agent participated.

Update frequency
The efficiency of convention evolution decreases as the delay between updates increases.

Memory Restarts
As the distance between memory restarts decreases, the efficiency of convention evolution decreases.

Memory Window
Increasing memory size indefinitely is not helpful: old information is not as relevant as new information.

Co-varying memory size and update frequency
When the update frequency drops below 100, it becomes better to use statistics from only the last window than from the entire history.
When agents have update delays, they rely on old information.
Systems with large update delays should therefore have frequent memory restarts.

Convention Evolution Dynamics
As the number of players remaining to conform to the convention decreases, the rate of convergence slows down.

Extended Coordination Game
A symmetric 2-person, s-choice game in which both agents receive payoff x > 0 iff they perform the same action, and -x otherwise.
A new update rule is used in this case: the External Majority (EM) rule.
EM rule: strategy i is adopted if it has been observed in other agents more often than any other strategy.
EM reduces to the HCR rule for s = 2.
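A sketch of the EM update as a pure function; the tie-breaking behavior (keep the current action unless another strategy is strictly more frequently observed) is my assumption, not specified on the slide:

```python
from collections import Counter

# External Majority (EM) rule for the s-choice extended coordination
# game. `observed` is the list of actions this agent has seen other
# agents play (within its memory window).
def em_update(current_action, observed):
    if not observed:
        return current_action
    counts = Counter(observed)
    best, best_count = max(counts.items(), key=lambda kv: kv[1])
    # Keep the current action unless some strategy was observed
    # strictly more often.
    if counts[current_action] >= best_count:
        return current_action
    return best

print(em_update(0, [1, 1, 0]))  # 1: strategy 1 seen more often
print(em_update(0, [1, 0, 0]))  # 0: current action still leads
```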

Experimental results
Adding more potential conventions decreases the efficiency of convention formation, but in a less-than-logarithmic fashion.

General Comments
These conventions are not necessarily Nash Equilibria.
Constraints are viewed as regulations laid down by a central authority such as a government.
If a central authority is present and able to enforce certain rules, then it might as well enforce the efficient convention directly.
In the proofs of the theorems, some statements are made without validation.

Comments on Selection Rule
The HCR rule replaces the Best Response (BR) rule used in evolutionarily stable strategies and stochastically stable strategies.
Two important criteria for a selection function are obliviousness and locality:
Obliviousness: the selection function is independent of the identity of the players.
Locality: the selection function is purely a function of the player's personal history.
Obliviousness is similar to Young's approach, but Young* uses BR, which is global.
The rationale for using a local update is that individual decision making usually happens in the absence of global information.
Is HCR really local?
*The Evolution of Conventions, H. P. Young, Econometrica, Vol. 61, No. 1 (Jan 1993), 57-84.

Comments on the Experiment
It is not clear:
How many agents play games in each iteration, and how they are chosen.
How one ensures that a particular pair of agents plays while the rest forget their play history, in instances where the memory window is based on the last m iterations in which the agents participated.

Comparison with Young's Work
Model differences:
BR vs. HCR.
Anonymity of history.
Incompleteness of information, measured by the k/m ratio.
A convention is defined as a state h consisting of m repetitions of a pure strategy, which is an absorbing state.
There is no central authority to dictate restrictions.
Mistakes (deviations from the assumed rational behavior) are allowed.
Adaptive play's incomplete sampling helps it break out of suboptimal cycles.
As long as m/k and k are large, for 2x2 games the stochastically stable equilibrium is independent of m and k.

Questions?