Infinitely Repeated Games

In an infinitely repeated game, the application of subgame perfection is different:
- after any possible history, the continuation strategies must form a NE
- but after any history, each subgame looks like the original game

We cannot use generalized backward induction, because there is no last period:
- the trick is to recognize that each subgame is identical to the whole game
- this simplifies things, since we only need to consider the initial game

A technical issue: we must discount future payoffs using a discount factor δ (0 < δ < 1):
- without discounting, the payoff sums are not finite
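
As a reminder of why discounting is needed, a constant per-period payoff π evaluated with discount factor δ has the finite present value

    \sum_{t=0}^{\infty} \delta^{t}\,\pi \;=\; \frac{\pi}{1-\delta}, \qquad 0 < \delta < 1,

whereas the undiscounted sum diverges.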

Repeating the one-shot NE is always an SPNE
- if (2) plays D always, then a best response for (1) is to play D always
- everyone playing D always is therefore an SPNE

In the finitely repeated prisoner's dilemma, the unique SPNE is {(D, D), (D, D), ..., (D, D)}

In the infinitely repeated version, there are multiple SPNE as long as players are sufficiently patient
- one SPNE is always {(D, D), (D, D), ..., (D, D)}
- the existence of other SPNE implies that cooperation is possible
- cooperation can yield a higher long-run payoff

Stage game (Player 1 chooses the row, Player 2 the column):

              Player 2
               C      D
Player 1  C   3, 3   1, 4
          D   4, 1   2, 2
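
A minimal sketch, assuming the payoff matrix above, that verifies (D, D) is the unique pure-strategy NE of the stage game (the names and encoding are illustrative):

```python
# Stage-game payoffs from the matrix above: payoffs[(a1, a2)] = (u1, u2)
payoffs = {
    ("C", "C"): (3, 3), ("C", "D"): (1, 4),
    ("D", "C"): (4, 1), ("D", "D"): (2, 2),
}
actions = ["C", "D"]

def is_nash(a1, a2):
    """True if neither player gains by a unilateral deviation."""
    u1, u2 = payoffs[(a1, a2)]
    best1 = max(payoffs[(d, a2)][0] for d in actions)
    best2 = max(payoffs[(a1, d)][1] for d in actions)
    return u1 == best1 and u2 == best2

print([(a1, a2) for a1 in actions for a2 in actions if is_nash(a1, a2)])
# -> [('D', 'D')]
```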

Infinite prisoner's dilemma

One-shot: the unique NE is {D, D}
Finitely repeated: the unique SPNE is {(D, D), ..., (D, D)}
Infinitely repeated: Claim: if players are sufficiently patient (if the discount factor is sufficiently high), then cooperation can be sustained in an SPNE of the infinitely repeated game.

Nash reversion: consider the following "trigger strategy" (payoffs as in the matrix above):
- play C if you have always seen C; otherwise play D
- if both players follow this strategy, we always observe (C, C) (a sketch of the strategy follows below)
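
A minimal sketch of the Nash-reversion trigger strategy described above; the function name and history encoding are illustrative:

```python
def grim_trigger(opponent_history):
    """Nash reversion: play C until the opponent defects once, then play D forever."""
    return "C" if all(a == "C" for a in opponent_history) else "D"

# If both players use the trigger strategy, (C, C) is observed in every period:
h1, h2 = [], []            # each player's record of the opponent's past actions
for t in range(5):
    a1, a2 = grim_trigger(h1), grim_trigger(h2)
    print(t, a1, a2)       # prints "C C" every period
    h1.append(a2)
    h2.append(a1)
```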

The "trigger strategy"

Under what conditions is there an SPNE in trigger strategies?
- is the trigger strategy a best response for (1) if (2) uses the trigger strategy?
- for the trigger strategy to be a BR for (1), cooperating forever must be worth at least as much as deviating once and being punished forever (see the worked condition below, using the payoffs in the matrix above)

Patience matters in infinitely repeated games
- if players do not value the future at all, the analysis of repeated games is just the analysis of repeated one-shot games
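
A worked version of the condition, using the payoffs above (cooperation pays 3 per period; a one-period deviation pays 4 and then 2 per period forever under Nash reversion):

    \frac{3}{1-\delta} \;\ge\; 4 + \frac{2\delta}{1-\delta}
    \quad\Longleftrightarrow\quad 3 \ge 4(1-\delta) + 2\delta
    \quad\Longleftrightarrow\quad \delta \ge \tfrac{1}{2}

So with these payoffs, the trigger-strategy profile is an SPNE whenever δ ≥ 1/2.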

The Folk Theorem

Using the "trigger strategy" we can get many different paths:
- a path with cooperation in even periods and non-cooperation in odd periods, supported by a trigger that switches to D forever if anything different from this path is ever observed (a sketch of such a strategy follows below)
- a path of alternating play
- or any other path you can think of

Even if the stage game has a unique equilibrium, there may be SPNE of the infinitely repeated game in which no stage's outcome is a NE of the stage game.

The Folk Theorem: take a game and play it infinitely often
- if players are patient enough, you can get a wide variety of subgame paths
- some paths might require a very high discount factor (a lot of patience)
- you need further refinements in order to predict behavior
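
A minimal sketch of this kind of path-enforcing trigger strategy (the target path, names, and history encoding are illustrative):

```python
def path_trigger(target_path, history):
    """Follow a prescribed path of action profiles; play D forever after any deviation.

    target_path: function t -> intended action profile (a1, a2) for period t
    history: list of realized profiles so far, e.g. [("C", "C"), ("D", "D")]
    Returns player 1's action for the current period.
    """
    for t, profile in enumerate(history):
        if profile != target_path(t):
            return "D"                       # Nash reversion after any deviation
    return target_path(len(history))[0]      # otherwise stay on the path

# Target path: cooperation in even periods, non-cooperation in odd periods
even_odd = lambda t: ("C", "C") if t % 2 == 0 else ("D", "D")
on_path_history = lambda t: [even_odd(s) for s in range(t)]
print([path_trigger(even_odd, on_path_history(t)) for t in range(6)])
# -> ['C', 'D', 'C', 'D', 'C', 'D']
```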

Infinitely repeated Bertrand

Two firms play Bertrand each period, for an infinite number of periods, and the firms discount future payoffs.

Recall that in the one-shot game with two firms, P1 = P2 = c (price equal to marginal cost) is the unique NE.
- playing a one-shot NE in every period is a NE of the repeated game
- if firm (2) plays P2 = c every period, then firm (1) gets 0 profits no matter what, so P1 = c is a BR for (1)
- P1 = P2 = c in every period is therefore also an SPNE

In the infinitely repeated Bertrand game, many other price paths are possible:
- there are other SPNEs (a lot of them)
- we will focus on the one in which both players choose the monopoly price PM (a sketch of the sustainability condition follows below)
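
A sketch of the standard condition for sustaining the monopoly-price path with Nash reversion, assuming (as usual) that a deviator undercuts marginally, captures essentially the whole monopoly profit π_M for one period, and earns zero forever afterwards:

    \frac{\pi_M/2}{1-\delta} \;\ge\; \pi_M
    \quad\Longleftrightarrow\quad \delta \ge \tfrac{1}{2}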

Dynamic Bertrand with tacit collusion.

Collusion as N rises.
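
A sketch of the standard result behind this title, assuming N symmetric firms split the monopoly profit and use Nash reversion as above:

    \frac{\pi_M/N}{1-\delta} \;\ge\; \pi_M
    \quad\Longleftrightarrow\quad \delta \ge 1 - \frac{1}{N}

so the patience required to sustain collusion rises toward 1 as N grows.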

Too many possibilities exist

This is the folk theorem: anything can happen.
- because there are so many equilibria, using SPNE alone we cannot predict what will happen in infinitely repeated games
- any price between marginal cost and the monopoly price can be sustained
- any payoffs in the triangle (feasible average per-period payoffs that give each player at least the one-shot NE payoff) can be the average per-period payoffs in an SPNE
- if players are sufficiently patient, they can use a Nash reversion strategy; anything beats getting zero forever
- however, with Nash reversion, players cannot do worse than the NE payoffs

A reasonable focal point, given the many possibilities:
- firms choose symmetric strategies on the frontier, so each gets an equal share of the monopoly profit

Cournot with Nash reversion

In Cournot, there are also SPNE with trigger strategies:
- firms can tacitly collude (a worked numerical example follows below)
- what keeps the collusion going is the prospect of future gains

[Figure: the average per-period payoffs that can occur in an SPNE of the infinitely repeated game, using Nash reversion, if players are sufficiently patient.]
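
A minimal worked example, assuming a linear Cournot duopoly (inverse demand P = a - Q, constant marginal cost c); this is a standard textbook calculation, not taken from the slide itself:

```python
from fractions import Fraction as F

# Linear Cournot duopoly: inverse demand P = a - Q, constant marginal cost c.
# Profits scale with (a - c)**2, so normalize a - c = 1.
pi_collude = F(1, 8)    # each firm's share of the cartel profit (each produces 1/4)
pi_cournot = F(1, 9)    # one-shot Cournot-Nash profit (each produces 1/3)
pi_deviate = F(9, 64)   # best one-period deviation against a rival producing 1/4

# Grim trigger (Nash reversion) sustains the cartel iff
#   pi_collude/(1 - d) >= pi_deviate + d * pi_cournot/(1 - d),
# which rearranges to d >= (pi_deviate - pi_collude) / (pi_deviate - pi_cournot).
d_star = (pi_deviate - pi_collude) / (pi_deviate - pi_cournot)
print(d_star, float(d_star))    # -> 9/17, about 0.53
```

So in this specification, tacit collusion at the cartel output is sustainable with Nash reversion whenever δ ≥ 9/17.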

Minimax payoffs.
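
As a quick illustration (an assumption about what the slide covers): player i's minimax payoff is the lowest payoff the opponent can hold player i to when i best-responds. For the prisoner's dilemma matrix above it is 2:

```python
# Player 1's minimax payoff in the stage game above: the opponent chooses the
# column that minimizes player 1's best-response payoff.
payoffs = {("C", "C"): (3, 3), ("C", "D"): (1, 4),
           ("D", "C"): (4, 1), ("D", "D"): (2, 2)}
actions = ["C", "D"]

minimax_1 = min(max(payoffs[(a1, a2)][0] for a1 in actions) for a2 in actions)
print(minimax_1)    # -> 2: the opponent plays D, and player 1's best reply is D
```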

Cournot with non-Nash reversion

What does a non-Nash reversion strategy look like?
- the Folk Theorem states that the only lower bound on payoffs in an infinitely repeated game, when players are sufficiently patient, is given by the minimax payoffs (rather than the Nash equilibrium payoffs)
- one player can force the other down to the minimax payoff
- in Cournot, the minimax payoff is zero (the rival can flood the market until price falls to marginal cost, so the best the punished firm can do is earn nothing)

Infinitely repeated games pose problems for analysis:
- because of the infinite number of equilibria, it is difficult to predict the path of play
- it is difficult to perform comparative statics
- we can explain anything, so the analysis does not add much value

Using infinitely repeated games

One response is to ignore repeated-game considerations:
(1) focus on simple dynamic games
(2) assume players repeat a one-shot NE when it has intuitive properties

Another response is to introduce a state space:
- assume players' strategies are a function of the current state, not of the history
- work with value functions and Markov perfect equilibria
- this removes history dependence

A third response is to use the insights to explain the conditions under which cooperation can occur:
- make standard assumptions, such as that players are rational
- find restrictions on parameters that make cooperation possible

Tit for Tat

This repeated-PD strategy has only one period of memory; players cannot carry a grudge.

Strategy (TFT):
- t = 1: cooperate
- t > 1: cooperate if the opponent played C in period t - 1; defect if the opponent played D in period t - 1

This is not equilibrium analysis.

There are some good properties of TFT:
1) nice: it starts out cooperating and never initiates defection
2) simple: easy to follow, easy for the opponent to understand
3) forgiving: after a D, it is willing to cooperate again if the opponent does
4) provocable: it never lets cheating go unpunished

Axelrod's experiments: he collected strategies for computerized Prisoner's Dilemma games
- in a round-robin tournament, each strategy played every other strategy
- TFT was the winner
- nevertheless, a simple always-defect strategy will always beat TFT head-to-head, but it provides a low payoff (a sketch of such a match follows below)
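
A minimal sketch of TFT and a head-to-head match against always-defect, using the stage-game payoffs from the matrix earlier (names and the match length are illustrative):

```python
payoffs = {("C", "C"): (3, 3), ("C", "D"): (1, 4),
           ("D", "C"): (4, 1), ("D", "D"): (2, 2)}

def tit_for_tat(opponent_history):
    """Cooperate in period 1, then copy the opponent's previous move."""
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

def play_match(strat1, strat2, periods=10):
    """Return each strategy's total (undiscounted) payoff over the match."""
    h1, h2, score1, score2 = [], [], 0, 0
    for _ in range(periods):
        a1, a2 = strat1(h1), strat2(h2)
        u1, u2 = payoffs[(a1, a2)]
        score1, score2 = score1 + u1, score2 + u2
        h1.append(a2)   # each player records the opponent's move
        h2.append(a1)
    return score1, score2

print(play_match(tit_for_tat, tit_for_tat))      # -> (30, 30): mutual cooperation
print(play_match(tit_for_tat, always_defect))    # -> (19, 22): ALL-D "wins" but both do poorly
```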