CPS 296.1: Repeated games
Vincent Conitzer (conitzer@cs.duke.edu)

Repeated games

- In a (typical) repeated game, players play a normal-form game (a.k.a. the stage game), then they see what happened (and get the utilities), then they play again, etc.
- Can be repeated finitely or infinitely many times
- Really, this is an extensive-form game
- We would like to find subgame-perfect equilibria
- One subgame-perfect equilibrium: keep repeating some Nash equilibrium of the stage game
- But are there other equilibria?

Finitely repeated Prisoner's Dilemma

Two players play the Prisoner's Dilemma k times:

               cooperate   defect
  cooperate      2, 2       0, 3
  defect         3, 0       1, 1

- In the last round, it is dominant to defect
- Hence, in the second-to-last round, there is no way to influence what will happen
- So, it is optimal to defect in this round as well
- Etc.
- So the only equilibrium is to always defect
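
A minimal sketch of the backward-induction argument: because the equilibrium continuation payoff after round t is the same no matter what is played in round t, each round reduces to the one-shot stage game, where defect strictly dominates.

```python
# Backward induction in the k-times repeated Prisoner's Dilemma.
STAGE = {  # (row action, column action) -> (row payoff, column payoff)
    ("C", "C"): (2, 2), ("C", "D"): (0, 3),
    ("D", "C"): (3, 0), ("D", "D"): (1, 1),
}

def best_response(opponent_action, continuation):
    """Row player's best action, given the opponent's action and a
    continuation payoff that does not depend on the current action."""
    return max("CD", key=lambda a: STAGE[(a, opponent_action)][0] + continuation)

def solve(k):
    continuation = 0.0  # equilibrium value of the remaining rounds
    plan = []
    for _ in range(k):  # solve from the last round backward
        # Defect dominates against either opponent action, for any continuation:
        assert all(best_response(o, continuation) == "D" for o in "CD")
        plan.append("D")
        continuation += STAGE[("D", "D")][0]  # both defect in equilibrium
    return plan[::-1]

print(solve(5))  # ['D', 'D', 'D', 'D', 'D']
```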

Modified Prisoner's Dilemma

Suppose the following game is played twice:

               cooperate   defect1   defect2
  cooperate      5, 5       0, 6      0, 6
  defect1        6, 0       4, 4      1, 1
  defect2        6, 0       1, 1      2, 2

- Consider the following strategy:
  - In the first round, cooperate;
  - In the second round, if someone defected in the first round, play defect2; otherwise, play defect1
- If both players play this, is that a subgame perfect equilibrium?
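
A sketch answering the slide's question in the affirmative, using the matrix above (the cooperate-vs-defect2 cells are inferred from the game's symmetry): both second-round prescriptions are stage-game equilibria, and the one-round gain from deviating is outweighed by the punishment loss.

```python
A = ["cooperate", "defect1", "defect2"]
U = {  # (row, col) -> (row payoff, col payoff)
    ("cooperate", "cooperate"): (5, 5), ("cooperate", "defect1"): (0, 6), ("cooperate", "defect2"): (0, 6),
    ("defect1", "cooperate"): (6, 0), ("defect1", "defect1"): (4, 4), ("defect1", "defect2"): (1, 1),
    ("defect2", "cooperate"): (6, 0), ("defect2", "defect1"): (1, 1), ("defect2", "defect2"): (2, 2),
}

def is_stage_nash(r, c):
    return (U[(r, c)][0] == max(U[(a, c)][0] for a in A) and
            U[(r, c)][1] == max(U[(r, a)][1] for a in A))

# Both second-round prescriptions must be stage-game equilibria...
assert is_stage_nash("defect1", "defect1") and is_stage_nash("defect2", "defect2")
# ...and a first-round deviation must not pay: one-shot gain vs. punishment loss.
gain = max(U[(a, "cooperate")][0] for a in A) - U[("cooperate", "cooperate")][0]  # 6 - 5 = 1
loss = U[("defect1", "defect1")][0] - U[("defect2", "defect2")][0]                # 4 - 2 = 2
assert gain <= loss  # so yes: a subgame perfect equilibrium
```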

Another modified Prisoner's Dilemma

Suppose the following game is played twice:

               cooperate   defect   crazy
  cooperate      5, 5       0, 6     1, 0
  defect         6, 0       4, 4     0, 1
  crazy          0, 1       1, 0     0, 0

- What are the subgame perfect equilibria?
- Consider the following strategy:
  - In the first round, cooperate;
  - In the second round, if someone played defect or crazy in the first round, play crazy; otherwise, play defect
- Is this a Nash equilibrium (not subgame perfect)?
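
A sketch answering the last question, using the matrix above (the crazy-row cells are inferred from the game's symmetry): the profile is a Nash equilibrium of the two-round game, but the threatened (crazy, crazy) play is not a stage-game equilibrium, so it is not subgame perfect.

```python
A = ["cooperate", "defect", "crazy"]
U = {
    ("cooperate", "cooperate"): (5, 5), ("cooperate", "defect"): (0, 6), ("cooperate", "crazy"): (1, 0),
    ("defect", "cooperate"): (6, 0), ("defect", "defect"): (4, 4), ("defect", "crazy"): (0, 1),
    ("crazy", "cooperate"): (0, 1), ("crazy", "defect"): (1, 0), ("crazy", "crazy"): (0, 0),
}

on_path = U[("cooperate", "cooperate")][0] + U[("defect", "defect")][0]  # 5 + 4 = 9
# Most tempting deviation: grab 6 in round 1, then best-respond to crazy.
deviate = max(U[(a, "cooperate")][0] for a in A) + max(U[(a, "crazy")][0] for a in A)  # 6 + 1 = 7
assert deviate <= on_path  # so following the strategy is a Nash equilibrium
# But the threat is not credible: cooperating against crazy pays 1 > 0.
assert max(U[(a, "crazy")][0] for a in A) > U[("crazy", "crazy")][0]
```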

Infinitely repeated games

- First problem: are we just going to add up the utilities over infinitely many rounds?
  - Everyone gets infinity!
- (Limit of) average payoff: lim_{n→∞} Σ_{t=1}^{n} u(t) / n
  - The limit may not exist…
- Discounted payoff: Σ_t δ^t u(t) for some δ < 1
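
A minimal numerical sketch of the two criteria, truncating the infinite sums at a large horizon:

```python
def average_payoff(stream, n=100000):
    return sum(stream(t) for t in range(1, n + 1)) / n   # approximates the limit

def discounted_payoff(stream, delta, n=100000):
    return sum(delta**t * stream(t) for t in range(1, n + 1))

alternate = lambda t: 2 if t % 2 else 0   # payoffs alternate between 2 and 0
print(average_payoff(alternate))          # 1.0
print(discounted_payoff(alternate, 0.9))  # finite, about 9.47
```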

Infinitely repeated Prisoner's Dilemma

               cooperate   defect
  cooperate      2, 2       0, 3
  defect         3, 0       1, 1

- Tit-for-tat strategy:
  - Cooperate in the first round,
  - In every later round, do the same thing as the other player did in the previous round
- Is both players playing this a Nash/subgame-perfect equilibrium? Does it depend on δ?
- Trigger strategy:
  - Cooperate as long as everyone cooperates,
  - Once a player defects, defect forever
- Is both players playing this a subgame-perfect equilibrium?
- What about one player playing tit-for-tat and the other playing trigger?
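
A sketch of the discounted-payoff comparison behind the trigger-strategy question: in this game, cooperating against a grim trigger beats a one-shot deviation exactly when δ ≥ 1/2.

```python
def cooperate_forever(delta):
    return 2 / (1 - delta)              # 2 + 2*delta + 2*delta^2 + ...

def defect_once_then_punished(delta):
    return 3 + delta * 1 / (1 - delta)  # grab 3 now, then (1, 1) forever

for delta in (0.3, 0.5, 0.7):
    sustained = cooperate_forever(delta) >= defect_once_then_punished(delta)
    print(delta, "cooperation sustained:", sustained)
# 2/(1-d) >= 3 + d/(1-d)  <=>  2 >= 3(1-d) + d  <=>  d >= 1/2
```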

Folk theorem(s)

- Can we somehow characterize the equilibria of infinitely repeated games?
  - Subgame perfect or not?
  - Averaged utilities or discounted?
- Easiest case: averaged utilities, no subgame perfection
- We will characterize what (averaged) utilities (u1, u2, …, un) the agents can get in equilibrium
- The utilities must be feasible: there must be outcomes of the game such that the agents, on average, get these utilities
- They must also be enforceable: deviation should lead to punishment that outweighs the benefits of deviation
- Folk theorem: a utility vector can be realized by some Nash equilibrium if and only if it is both feasible and enforceable

Feasibility

    2, 2   0, 3
    3, 0   1, 1

- The utility vector (2, 2) is feasible because it is one of the outcomes of the game
- The utility vector (1, 2.5) is also feasible, because the agents could alternate between (2, 2) and (0, 3)
- What about (.5, 2.75)? What about (3, 0.1)?
- In general, convex combinations of the outcomes of the game are feasible
  - p1a1 + p2a2 + … + pnan is a convex combination of the ai if the pi sum to 1 and are nonnegative
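
A minimal sketch answering the slide's questions: (.5, 2.75) is feasible (weight 1/4 on (2, 2) and 3/4 on (0, 3)); (3, 0.1) is not, because putting all the weight needed for u1 = 3 on (3, 0) forces u2 = 0.

```python
outcomes = [(2, 2), (0, 3), (3, 0), (1, 1)]

def combine(weights):  # weights: nonnegative, summing to 1
    assert abs(sum(weights) - 1) < 1e-9 and all(w >= 0 for w in weights)
    return tuple(sum(w * o[i] for w, o in zip(weights, outcomes)) for i in (0, 1))

print(combine([0.5, 0.5, 0, 0]))    # (1.0, 2.5): alternate (2,2) and (0,3)
print(combine([0.25, 0.75, 0, 0]))  # (0.5, 2.75): feasible
```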

Enforceability

    2, 2   0, 3
    3, 0   1, 1

- A utility for an agent is not enforceable if the agent can guarantee herself a higher utility
- E.g., a utility of .5 for player 1 is not enforceable, because she can guarantee herself a utility of 1 by defecting
- A utility of 1.2 for player 1 is enforceable, because player 2 can hold player 1 to a utility of at most 1 by defecting
- What is the relationship to minimax strategies & values?
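
A sketch of the connection to minimax: the minimum enforceable utility for player 1 is her minmax value, the lowest payoff player 2 can hold her to. It can be computed by a linear program, here with scipy for the Prisoner's Dilemma above:

```python
import numpy as np
from scipy.optimize import linprog

U1 = np.array([[2.0, 0.0],   # player 1's payoffs: rows = her actions,
               [3.0, 1.0]])  # columns = player 2's actions (C, D)

m, n = U1.shape
# Variables: player 2's mixed strategy q (n entries) and the value v.
# Minimize v subject to: for every pure reply a1 of player 1, U1[a1] . q <= v.
c = np.concatenate([np.zeros(n), [1.0]])
A_ub = np.hstack([U1, -np.ones((m, 1))])
b_ub = np.zeros(m)
A_eq = np.concatenate([np.ones(n), [0.0]]).reshape(1, -1)
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
              bounds=[(0, 1)] * n + [(None, None)])
q, v = res.x[:n], res.x[n]
print(q, v)  # q = (0, 1): defect; v = 1, so utilities below 1 are unenforceable
```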

Computing a Nash equilibrium in a 2-player repeated game using the folk theorem

- Average payoff, no subgame perfection
- Can be done in polynomial time:
  - Compute the minimum enforceable utility for each agent, i.e., compute maxmin values & strategies
  - Find a feasible point where both players receive at least this utility
    - E.g., both players playing their maxmin strategies
  - Players play the feasible point (by rotating through the outcomes), unless the other deviates, in which case they punish the other player by playing their minmax strategy forever (see the sketch below)
    - The minmax strategy is easy to compute
- A more complicated (and earlier) algorithm by Littman & Stone [04] computes a "nicer" and subgame-perfect equilibrium
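
A sketch of the strategy each player follows in this construction; the class name and interface are invented for illustration. Each player rotates through a schedule of actions realizing the feasible point, and switches to the punishment (minmax) strategy forever upon any observed deviation.

```python
class FolkStrategy:
    def __init__(self, schedule, punish_action):
        self.schedule = schedule            # own actions realizing the feasible point
        self.punish_action = punish_action  # own minmax strategy against the opponent
        self.t = 0
        self.punishing = False

    def next_action(self):
        if self.punishing:
            return self.punish_action
        action = self.schedule[self.t % len(self.schedule)]
        self.t += 1
        return action

    def observe(self, opponent_action, opponent_expected_action):
        # Any observed deviation triggers minmax punishment forever.
        if opponent_action != opponent_expected_action:
            self.punishing = True

# E.g., to realize averaged utilities (1, 2.5) in the Prisoner's Dilemma,
# alternate between (C, C) and (C, D), punishing deviations with D:
p1 = FolkStrategy(schedule=["C", "C"], punish_action="D")
p2 = FolkStrategy(schedule=["C", "D"], punish_action="D")
```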

Example Markov Decision Process (MDP)

- Machine can be in one of three states: good, deteriorating, broken
- Can take two actions: maintain, ignore
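
The slide's figure (with the actual rewards and transition probabilities) is not reproduced here, so the numbers below are invented for illustration; the sketch shows how value iteration would solve such an MDP.

```python
STATES = ["good", "deteriorating", "broken"]
ACTIONS = ["maintain", "ignore"]
GAMMA = 0.9

# T[s][a] = list of (probability, next state); R[s][a] = reward. ASSUMED numbers.
T = {
    "good":          {"maintain": [(1.0, "good")],
                      "ignore":   [(0.7, "good"), (0.3, "deteriorating")]},
    "deteriorating": {"maintain": [(0.8, "good"), (0.2, "deteriorating")],
                      "ignore":   [(0.6, "deteriorating"), (0.4, "broken")]},
    "broken":        {"maintain": [(1.0, "deteriorating")],
                      "ignore":   [(1.0, "broken")]},
}
R = {
    "good":          {"maintain": 1.0, "ignore": 2.0},
    "deteriorating": {"maintain": 0.0, "ignore": 1.0},
    "broken":        {"maintain": -1.0, "ignore": 0.0},
}

V = {s: 0.0 for s in STATES}
for _ in range(1000):  # V(s) <- max_a [ R(s,a) + gamma * E[V(s')] ]
    V = {s: max(R[s][a] + GAMMA * sum(p * V[s2] for p, s2 in T[s][a])
                for a in ACTIONS) for s in STATES}
print(V)
```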

Stochastic games

- A stochastic game has multiple states that it can be in
- Each state corresponds to a normal-form game
- After a round, the game randomly transitions to another state
- Transition probabilities depend on the state and the actions taken
- Typically, utilities are discounted over time

[Figure: three states with stage games (1,1 1,0 / 0,1 0,0), (2,2 0,3 / 3,0 1,1), and (1,0 / 0,1), linked by transition arrows with probabilities .2, .5, .4, .3, .6]

- 1-state stochastic game = (infinitely) repeated game
- 1-agent stochastic game = Markov Decision Process (MDP)

Stationary strategies

- A stationary strategy specifies a mixed strategy for each state
  - The strategy does not depend on the history
  - E.g., in a repeated game, a stationary strategy = always playing the same mixed strategy
- An equilibrium in stationary strategies always exists [Fink 64]
- Each player will have a value for being in each state

Shapley’s [1953] algorithm for 2-player zero-sum stochastic games (~value iteration) Each state s is arbitrarily given a value V(s) Player 1’s utility for being in state s Now, for each state, compute a normal-form game that takes these (discounted) values into account -3 + δ(.7*2 + .3*5) = -3 + 2.9δ .7 * V(s2) = 2 * -3, 3 * -3 + 2.9δ, 3 - 2.9δ * .3 V(s1) = -4 V(s3) = 5 s1’s modified game Show value iteration with soil quality first Solve for the value of the modified game (using LP) Make this the new value of s1 Do this for all states, repeat until convergence Similarly, analogs of policy iteration [Pollatschek & Avi-Itzhak] and Q-Learning [Littman 94, Hu & Wellman 98] exist