UNIT III: MONOPOLY & OLIGOPOLY Monopoly Oligopoly Strategic Competition 7/20

Market Structure

                  Perfect Comp     Oligopoly     Monopoly
No. of Firms      infinite         (>)2          1
Output            MR = MC = P      ???           MR = MC < P
Profit            No               ?             Yes
Efficiency        Yes              ?             ???

Oligopoly
We have no general theory of oligopoly. Rather, there are a variety of models, differing in their assumptions about strategic behavior and information conditions. All the models feature a tension between:
– Collusion: maximize joint profits
– Competition: capture a larger share of the pie

Game Theory
Game Trees and Matrices
Games of Chance v. Strategy
The Prisoner’s Dilemma
Dominance Reasoning
Best Response and Nash Equilibrium
Mixed Strategies

Games of Chance
Game tree: Player 1 chooses Buy or Don’t Buy; if he buys, Chance determines whether his number is drawn, with payoff (1000) if it is and (-1) if it is not; Don’t Buy pays (0).
You are offered a fair gamble to purchase a lottery ticket that pays $1000 if your number is drawn. The ticket costs $1. What would you do?

Games of Chance
Game tree: Player 1 chooses Buy or Don’t Buy; if he buys, Chance determines whether his number is drawn, with payoff (1000) if it is and (-1) if it is not; Don’t Buy pays (0).
You are offered a fair gamble to purchase a lottery ticket that pays $1000 if your number is drawn. The ticket costs $1. The chance of your number being chosen is independent of your decision to buy the ticket.
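
The slides treat the win probability as given by Chance and never state it numerically. As a small illustration (my own, not from the slides), here is the expected payoff of "Buy" with the win probability left as a free parameter:

```python
# Expected payoff of "Buy", using the payoffs in the game tree above:
# +1000 if your number is drawn, -1 otherwise. The win probability p is
# not given in the slides, so it is left as a parameter here.
def expected_payoff_buy(p: float) -> float:
    return p * 1000 + (1 - p) * (-1)

print(expected_payoff_buy(1 / 1001))    # ~0 -> the p that makes the gamble exactly fair
print(expected_payoff_buy(1 / 10_000))  # about -0.9 -> a much worse bet at longer odds
```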

Games of Strategy
Game tree: Player 1 chooses Buy or Don’t Buy; Player 2 then chooses the winning number. Payoffs are (1000, -1000) if Player 1 buys and his number is chosen, (-1, 1) if he buys and it is not, and (0, 0) if he doesn’t buy.
Player 2 chooses the winning number. What are Player 2’s payoffs?

Games of Strategy
Game tree: Firm 1 chooses Advertise (A) or Don’t Advertise (D); Firm 2 then chooses A or D. Profits (Firm 1, Firm 2) are (10,5) after (A,A), (15,0) after (A,D), (6,8) after (D,A), and (20,2) after (D,D).
Duopolists deciding whether to advertise. Firm 1 moves first. Firm 2 observes Firm 1’s choice and then makes its own choice. How should the game be played? Profits are in ( ).

Games of Strategy
Game tree: Firm 1 chooses Advertise (A) or Don’t Advertise (D); Firm 2 then chooses A or D. Profits (Firm 1, Firm 2) are (10,5) after (A,A), (15,0) after (A,D), (6,8) after (D,A), and (20,2) after (D,D).
Duopolists deciding whether to advertise. Firm 1 moves first. Firm 2 observes Firm 1’s choice and then makes its own choice. How should the game be played? Backward induction.
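
Backward induction can be done mechanically. The sketch below (the dictionary layout and function name are mine, not from the slides) solves the tree: Firm 2 best-responds at each of its nodes, and Firm 1 chooses its first move anticipating those replies.

```python
# Backward induction on the sequential advertising game above.
# Payoffs are (Firm 1, Firm 2), keyed by (Firm 1's move, Firm 2's move).
payoffs = {
    ("A", "A"): (10, 5), ("A", "D"): (15, 0),
    ("D", "A"): (6, 8),  ("D", "D"): (20, 2),
}

def solve_by_backward_induction(payoffs):
    replies = {}
    for a1 in ("A", "D"):
        # Firm 2 observes a1 and picks its best reply.
        a2 = max(("A", "D"), key=lambda a: payoffs[(a1, a)][1])
        replies[a1] = (a2, payoffs[(a1, a2)])
    # Firm 1 anticipates Firm 2's replies and picks its best first move.
    a1_star = max(replies, key=lambda a: replies[a][1][0])
    return a1_star, replies[a1_star]

print(solve_by_backward_induction(payoffs))
# ('A', ('A', (10, 5))): Firm 1 advertises, Firm 2 advertises, profits (10, 5).
```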

Games of Strategy
Game tree: Firm 1 chooses Advertise (A) or Don’t Advertise (D); Firm 2 then chooses A or D, with its two decision nodes joined in an information set. Profits (Firm 1, Firm 2) are (10,5) after (A,A), (15,0) after (A,D), (6,8) after (D,A), and (20,2) after (D,D).
Duopolists deciding whether to advertise. The 2 firms move simultaneously. (Firm 2 does not see Firm 1’s choice.) Imperfect information.

Matrix Games
The advertising game in matrix form (Firm 1 chooses the row, Firm 2 the column):

         A         D
A      10, 5     15, 0
D       6, 8     20, 2

Games of Strategy Games of strategy require at least two players. Players choose strategies and get payoffs. Chance is not a player! In games of chance, uncertainty is probabilistic, random, subject to statistical regularities. In games of strategy, uncertainty is not random; rather it results from the choice of another strategic actor. Thus, game theory is to games of strategy as probability theory is to games of chance.

A Brief History of Game Theory
Minimax Theorem, 1928
Theory of Games & Economic Behavior, 1944
Nash Equilibrium, 1950
Prisoner’s Dilemma, 1950
The Evolution of Cooperation, 1984
Nobel Prize: Harsanyi, Selten & Nash, 1994

The Prisoner’s Dilemma (payoffs in years in jail)

                              Player 2
                        Confess        Don’t Confess
Player 1   Confess      -10, -10       0, __
           Don’t        __, 0          -1, -1

The pair of dominant strategies (Confess, Confess) is a Nash Eq. GAME 1.

The Prisoner’s Dilemma
Each player has a dominant strategy. Yet the outcome (-10, -10) is Pareto inefficient. Is this a result of imperfect information? What would happen if the players could communicate? What would happen if the game were repeated? A finite number of times? An infinite or unknown number of times? What would happen if, rather than 2, there were many players?

Dominance
Definition. Dominant Strategy: a strategy that is best no matter what the opponent(s) choose(s).

Game (a):
       T1     T2     T3
S1    0,2    4,3    3,3
S2    4,0    5,4    5,6
S3    3,5    3,5    2,3

Game (b):
       T1     T2     T3
S1    0,2    4,3    3,3
S2    4,0    5,4    5,3
S3    3,5    3,5    2,3

Sure Thing Principle: If you have a dominant strategy, use it!

Dominance
Definition. Dominant Strategy: a strategy that is best no matter what the opponent(s) choose(s).

Game (a):
       T1     T2     T3
S1    0,2    4,3    3,3
S2    4,0    5,4    5,6
S3    3,5    3,5    2,3

Game (b):
       T1     T2     T3
S1    0,2    4,3    3,3
S2    4,0    5,4    5,3
S3    3,5    3,5    2,3

Sure Thing Principle: If you have a dominant strategy, use it!
(S2, T3) in game (a); (S2, T2) in game (b).
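
A small sketch of the dominance logic for game (a) above (the nested-list encoding and helper name are mine): find a strictly dominant row strategy, then let the column player best-respond to it.

```python
# Dominance check for game (a) above. Each cell is (row payoff, column payoff).
game_a = [
    [(0, 2), (4, 3), (3, 3)],   # S1
    [(4, 0), (5, 4), (5, 6)],   # S2
    [(3, 5), (3, 5), (2, 3)],   # S3
]

def dominant_row(game):
    """Index of a strictly dominant row strategy, or None if there is none."""
    n_rows, n_cols = len(game), len(game[0])
    for i in range(n_rows):
        if all(game[i][c][0] > game[j][c][0]
               for j in range(n_rows) if j != i
               for c in range(n_cols)):
            return i
    return None

i = dominant_row(game_a)                           # 1, i.e. S2 (the Sure Thing)
j = max(range(3), key=lambda c: game_a[i][c][1])   # column player's best reply: 2, i.e. T3
print(f"(S{i + 1}, T{j + 1})")                     # (S2, T3); game (b) gives (S2, T2) the same way
```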

Nash Equilibrium
Definitions.
Best Response Strategy: a strategy, s*, is a best response strategy iff the payoff to (s*,t) is at least as great as the payoff to (s,t) for all s.
Nash Equilibrium: a set of best response strategies (one for each player), (s*, t*), such that s* is a best response to t* and t* is a b.r. to s*.

       T1      T2     T3
S1    __,4    4,0    5,3
S2    4,0     0,4    5,3
S3    3,5     3,5    6,6

(S3, T3)

Nash Equilibrium

       T1      T2     T3
S1    __,4    2,3    1,5
S2    3,2     1,1    0,0
S3    5,1     0,0    3,3

Nash equilibrium need not be efficient.

Nash Equilibrium

       T1     T2     T3
S1    1,1    0,0    0,0
S2    0,0    1,1    0,0
S3    0,0    0,0    1,1

Nash equilibrium need not be unique. A COORDINATION PROBLEM
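
Pure-strategy Nash equilibria can be found by brute force: a cell is an equilibrium exactly when each payoff is a best response to the other player's choice. A sketch (my own encoding), applied to the coordination game above:

```python
# Brute-force search for pure-strategy Nash equilibria (mutual best responses)
# in the coordination game above. Cells are (row payoff, column payoff).
game = [
    [(1, 1), (0, 0), (0, 0)],   # S1
    [(0, 0), (1, 1), (0, 0)],   # S2
    [(0, 0), (0, 0), (1, 1)],   # S3
]

def pure_nash(game):
    rows, cols = len(game), len(game[0])
    equilibria = []
    for i in range(rows):
        for j in range(cols):
            row_ok = all(game[i][j][0] >= game[k][j][0] for k in range(rows))
            col_ok = all(game[i][j][1] >= game[i][k][1] for k in range(cols))
            if row_ok and col_ok:
                equilibria.append((f"S{i + 1}", f"T{j + 1}"))
    return equilibria

print(pure_nash(game))   # [('S1', 'T1'), ('S2', 'T2'), ('S3', 'T3')] -> three equilibria
```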

Nash Equilibrium

       T1     T2     T3
S1    1,1    0,0    0,0
S2    0,0    1,1    0,0
S3    0,0    0,0    3,3

Multiple and Inefficient Nash Equilibria.

Nash Equilibrium

        T1        T2     T3
S1     1,1       0,0    0,-100
S2     0,0       1,1    0,0
S3     -100,0    0,0    3,3

Multiple and Inefficient Nash Equilibria. Is it always advisable to play a NE strategy? What do we need to know about the other player?

Button-Button
Game tree: Player 1 hides the button in his Left (L) or Right (R) hand; Player 2 then picks L or R. Payoffs (Player 1, Player 2) are (-2,2) after (L,L), (4,-4) after (L,R), (2,-2) after (R,L), and (-1,1) after (R,R).
Player 1 hides a button in his Left or Right hand. Player 2 observes Player 1’s choice and then picks either Left or Right. How should the game be played? GAME 2.

Button-Button
Game tree: Player 1 hides the button in his Left (L) or Right (R) hand; Player 2 then picks L or R. Payoffs (Player 1, Player 2) are (-2,2) after (L,L), (4,-4) after (L,R), (2,-2) after (R,L), and (-1,1) after (R,R).
Player 1 should hide the button in his Right hand. Player 2 should pick Right. GAME 2.

Button-Button
Game tree: Player 1 hides the button in his Left (L) or Right (R) hand; Player 2 then picks L or R. Payoffs (Player 1, Player 2) are (-2,2) after (L,L), (4,-4) after (L,R), (2,-2) after (R,L), and (-1,1) after (R,R).
What happens if Player 2 cannot observe Player 1’s choice? GAME 2.

Button-Button
The same game in matrix form (Player 1 chooses the row, Player 2 the column):

         L         R
L      -2, 2     4, -4
R       2, -2    -1, 1

GAME 2.

Mixed Strategies

              L (q)      R (1-q)
L (p)        -2, 2       4, -4
R (1-p)       2, -2     -1, 1

Definition. Mixed Strategy: A mixed strategy is a probability distribution over all strategies available to a player. Let (p, 1-p) = prob. Player 1 chooses L, R; (q, 1-q) = prob. Player 2 chooses L, R. GAME 2.

Mixed Strategies

              L (q)      R (1-q)
L (p)        -2, 2       4, -4
R (1-p)       2, -2     -1, 1

Then the expected payoff to Player 1:
EP1(L) = -2(q) + 4(1-q) = 4 - 6q
EP1(R) = 2(q) - 1(1-q) = 3q - 1
Then if q < 5/9, Player 1’s best response is to always play L (p = 1). GAME 2.

Button-Button: Player 1’s best response function p*(q): play L (p = 1) for q < 5/9 and R (p = 0) for q > 5/9. GAME 2.

Mixed Strategies

              L (q)      R (1-q)
L (p)        -2, 2       4, -4
R (1-p)       2, -2     -1, 1

Then the expected payoff to Player 1:
EP1(L) = -2(q) + 4(1-q)
EP1(R) = 2(q) - 1(1-q)
(Equalizers) q* = 5/9

and for Player 2:
EP2(L) = 2(p) - 2(1-p)
EP2(R) = -4(p) + 1(1-p)
p* = 1/3

NE = {(1/3), (5/9)}. GAME 2.
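
A quick numeric check of the two equalizing mixes (a sketch; the indifference conditions come from the expected payoffs above, the Python layout is mine):

```python
# Check the equalizing mixes of Button-Button (GAME 2) from the indifference
# conditions above, using exact fractions.
from fractions import Fraction

# Player 1 is indifferent between L and R when q = 5/9:
q_star = Fraction(5, 9)
assert -2*q_star + 4*(1 - q_star) == 2*q_star - 1*(1 - q_star)   # both sides equal 2/3

# Player 2 is indifferent between L and R when p = 1/3:
p_star = Fraction(1, 3)
assert 2*p_star - 2*(1 - p_star) == -4*p_star + 1*(1 - p_star)   # both sides equal -2/3

print(p_star, q_star)   # 1/3 5/9 -> the mixed-strategy NE of GAME 2
```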

Button-Button: the best response functions p*(q) and q*(p) cross at the mixed-strategy NE = {(1/3), (5/9)}. GAME 2.

2x2 Games

        T1         T2
S1    x1, x2     w1, w2
S2    z1, z2     y1, y2

1. Prisoner’s Dilemma
2. Button-Button
3. Stag Hunt
4. Chicken
5. Battle of the Sexes

Stag Hunt (also Assurance Game)

       T1     T2
S1    5,5    0,3
S2    3,0    1,1

NE = {(S1,T1), (S2,T2)}. GAME 3.

Chicken (also Hawk/Dove)

       T1     T2
S1    3,3    1,5
S2    5,1    0,0

NE = {(S1,T2), (S2,T1)}. GAME 4.

Battle of the Sexes

       T1     T2
S1    5,3    0,0
S2    0,0    3,5

NE = {(S1,T1), (S2,T2)}. GAME 5.

Battle of the Sexes: best response diagram. NE = {(1, 1); (0, 0); (5/8, 3/8)}. GAME 5.
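
A quick verification of the three equilibria listed above, reading (p, q) as the probabilities that Player 1 puts on S1 and Player 2 puts on T1 (the function names are mine, not from the slides):

```python
# Verify the Battle of the Sexes equilibria (1,1), (0,0), and (5/8, 3/8),
# where p = Pr[Player 1 plays S1] and q = Pr[Player 2 plays T1].
from fractions import Fraction

def ep1(p, q):   # Player 1's expected payoff in GAME 5
    return 5*p*q + 3*(1 - p)*(1 - q)

def ep2(p, q):   # Player 2's expected payoff in GAME 5
    return 3*p*q + 5*(1 - p)*(1 - q)

for p, q in [(1, 1), (0, 0), (Fraction(5, 8), Fraction(3, 8))]:
    # No profitable deviation to a pure strategy (sufficient, since payoffs are linear in p and q).
    p1_ok = ep1(p, q) >= max(ep1(1, q), ep1(0, q))
    p2_ok = ep2(p, q) >= max(ep2(p, 1), ep2(p, 0))
    print((p, q), p1_ok and p2_ok)   # True for all three
```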

Existence of Nash Equilibrium
Prisoner’s Dilemma (GAME 1), Battle of the Sexes (GAME 5; also 3, 4), Button-Button (GAME 2).
There can be (i) a single pure-strategy NE; (ii) a single mixed-strategy NE; or (iii) two pure-strategy NEs plus a single mixed-strategy NE (for x=z; y=w).

Strategic Competition
Prisoner’s Dilemma
Repeated Games
Discounting
The Folk Theorem
Cartel Enforcement

Repeated Games
Some Questions:
What happens when a game is repeated?
Can threats and promises about the future influence behavior in the present? (Cheap talk)
Finitely repeated games: Backward induction
Indefinitely repeated games: Trigger strategies

Repeated Games
Examples of Repeated Prisoner’s Dilemma:
Cartel enforcement
Transboundary pollution
Common property resources
Arms races
The Tragedy of the Commons
Free-rider problems

Repeated Games
Can threats and promises about future actions influence behavior in the present? Consider the following game, played 2x:

       C      D
C     3,3    0,5
D     5,0    1,1

See Gibbons:

Repeated Games
Draw the extensive-form game: after each first-round outcome, (3,3), (0,5), (5,0), or (1,1), the stage game is played again; the 16 terminal nodes carry the two-round cumulative payoffs, ranging from (2,2) up to (6,6), (10,0), and (0,10).

Repeated Games
Now, consider three repeated-game strategies:
D (ALWAYS DEFECT): Defect on every move.
C (ALWAYS COOPERATE): Cooperate on every move.
T (TRIGGER): Cooperate on the first move, then cooperate as long as the other cooperates. If the other defects, then defect forever.

Repeated Games
If the game is played twice, the V(alue) to a player using ALWAYS DEFECT (D) against an opponent using ALWAYS DEFECT (D) is V(D/D) = 1 + 1 = 2, and so on:
V(D/D) = 1 + 1 = 2
V(C/C) = 3 + 3 = 6
V(T/T) = 3 + 3 = 6
V(D/C) = 5 + 5 = 10
V(D/T) = 5 + 1 = 6
V(C/D) = 0 + 0 = 0
V(C/T) = 3 + 3 = 6
V(T/D) = 0 + 1 = 1
V(T/C) = 3 + 3 = 6

Repeated Games
And 3x:
V(D/D) = 1 + 1 + 1 = 3
V(C/C) = 3 + 3 + 3 = 9
V(T/T) = 3 + 3 + 3 = 9
V(D/C) = 5 + 5 + 5 = 15
V(D/T) = 5 + 1 + 1 = 7
V(C/D) = 0 + 0 + 0 = 0
V(C/T) = 3 + 3 + 3 = 9
V(T/D) = 0 + 1 + 1 = 2
V(T/C) = 3 + 3 + 3 = 9

Repeated Games
Time average payoffs, n = 3:
V(D/D) = (1 + 1 + 1)/3 = 1
V(C/C) = (3 + 3 + 3)/3 = 3
V(T/T) = (3 + 3 + 3)/3 = 3
V(D/C) = (5 + 5 + 5)/3 = 5
V(D/T) = (5 + 1 + 1)/3 = 7/3
V(C/D) = (0 + 0 + 0)/3 = 0
V(C/T) = (3 + 3 + 3)/3 = 3
V(T/D) = (0 + 1 + 1)/3 = 2/3
V(T/C) = (3 + 3 + 3)/3 = 3

Repeated Games
Time average payoffs, n rounds (ε → 0 as n grows):
V(D/D) = (1 + 1 + ... + 1)/n = 1
V(C/C) = (3 + 3 + ... + 3)/n = 3
V(T/T) = (3 + 3 + ... + 3)/n = 3
V(D/C) = (5 + 5 + ... + 5)/n = 5
V(D/T) = (5 + 1 + ... + 1)/n = 1 + ε
V(C/D) = (0 + 0 + ... + 0)/n = 0
V(C/T) = (3 + 3 + ... + 3)/n = 3
V(T/D) = (0 + 1 + ... + 1)/n = 1 - ε
V(T/C) = (3 + 3 + ... + 3)/n = 3
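
These values can be reproduced by simply playing the strategies against each other. A simulation sketch (my own encoding of D, C, and T):

```python
# Play the repeated Prisoner's Dilemma (stage payoffs C/C = 3,3; D/C = 5,0;
# C/D = 0,5; D/D = 1,1) for n rounds and report V(row strategy / column strategy).
STAGE = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
         ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def move(strategy, other_history):
    if strategy == "D":
        return "D"
    if strategy == "C":
        return "C"
    # TRIGGER: cooperate until the other side has ever defected, then defect forever.
    return "D" if "D" in other_history else "C"

def value(s1, s2, n):
    h1, h2, total = [], [], 0
    for _ in range(n):
        a1, a2 = move(s1, h2), move(s2, h1)
        total += STAGE[(a1, a2)][0]
        h1.append(a1)
        h2.append(a2)
    return total

n = 3
for s1 in ("D", "C", "T"):
    for s2 in ("D", "C", "T"):
        v = value(s1, s2, n)
        print(f"V({s1}/{s2}) = {v}, time average = {v / n:.2f}")
# e.g. V(D/T) = 5 + 1 + 1 = 7 and V(T/D) = 0 + 1 + 1 = 2 for n = 3, as above.
```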

Repeated Games
Now draw the matrix form of this game, played 1x:

       C      D      T
T     3,3    0,5    3,3
C     3,3    0,5    3,3
D     5,0    1,1    5,0

Repeated Games
Time Average Payoffs. If the game is repeated, ALWAYS DEFECT is no longer dominant.

       C      D            T
T     3,3    1-ε, 1+ε     3,3
C     3,3    0,5          3,3
D     5,0    1,1          1+ε, 1-ε

Repeated Games
Time Average Payoffs.

       C      D            T
T     3,3    1-ε, 1+ε     3,3
C     3,3    0,5          3,3
D     5,0    1,1          1+ε, 1-ε

… and TRIGGER achieves “a NE with itself.”

Repeated Games
Time Average Payoffs, with T(emptation) > R(eward) > P(unishment) > S(ucker):

       C      D            T
T     R,R    P-ε, P+ε     R,R
C     R,R    S,T          R,R
D     T,S    P,P          P+ε, P-ε

Discounting
The discount parameter, δ, is the weight of the next payoff relative to the current payoff. In an indefinitely repeated game, δ can also be interpreted as the likelihood of the game continuing for another round (so that the expected number of moves per game is 1/(1-δ)). The V(alue) to someone using ALWAYS DEFECT (D) when playing with someone using TRIGGER (T) is the sum of T for the first move, δP for the second, δ²P for the third, and so on (Axelrod: 13-4):
V(D/T) = T + δP + δ²P + …
“The Shadow of the Future”

Discounting
Writing this as V(D/T) = T + δP + δ²P + ..., we have the following:
V(D/D) = P + δP + δ²P + ... = P/(1-δ)
V(C/C) = R + δR + δ²R + ... = R/(1-δ)
V(T/T) = R + δR + δ²R + ... = R/(1-δ)
V(D/C) = T + δT + δ²T + ... = T/(1-δ)
V(D/T) = T + δP + δ²P + ... = T + δP/(1-δ)
V(C/D) = S + δS + δ²S + ... = S/(1-δ)
V(C/T) = R + δR + δ²R + ... = R/(1-δ)
V(T/D) = S + δP + δ²P + ... = S + δP/(1-δ)
V(T/C) = R + δR + δ²R + ... = R/(1-δ)
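
A sketch that cross-checks two of these closed forms against long truncated geometric sums, using the stage payoffs from earlier (T=5, R=3, P=1, S=0) and an illustrative δ = 0.9 (the helper names are mine):

```python
# Discounted values of the repeated PD: closed forms above vs. truncated sums.
T, R, P, S = 5, 3, 1, 0
delta = 0.9                     # illustrative discount parameter

def truncated_sum(x, delta, n=10_000):
    """Sum of x * delta**t for t = 0..n-1, which approaches x / (1 - delta)."""
    return sum(x * delta**t for t in range(n))

# V(T/T): R forever.
assert abs(R / (1 - delta) - truncated_sum(R, delta)) < 1e-6

# V(D/T): T in the first round, then P forever once TRIGGER starts punishing.
v_DT_closed = T + delta * P / (1 - delta)
v_DT_series = T + sum(P * delta**t for t in range(1, 10_000))
assert abs(v_DT_closed - v_DT_series) < 1e-6

print(R / (1 - delta), v_DT_closed)   # about 30.0 vs 14.0 -> at delta = 0.9, TRIGGER beats defecting
```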

Discounting
Discounted Payoffs, with T > R > P > S and 0 < δ < 1. T weakly dominates C.

       C                         D                                  T
T     R/(1-δ), R/(1-δ)          S + δP/(1-δ), T + δP/(1-δ)         R/(1-δ), R/(1-δ)
C     R/(1-δ), R/(1-δ)          S/(1-δ), T/(1-δ)                   R/(1-δ), R/(1-δ)
D     T/(1-δ), S/(1-δ)          P/(1-δ), P/(1-δ)                   T + δP/(1-δ), S + δP/(1-δ)

Discounting
Now consider what happens to these values as δ varies (from 0 to 1):
V(D/D) = P + δP + δ²P + ... = P/(1-δ)
V(C/C) = R + δR + δ²R + ... = R/(1-δ)
V(T/T) = R + δR + δ²R + ... = R/(1-δ)
V(D/C) = T + δT + δ²T + ... = T/(1-δ)
V(D/T) = T + δP + δ²P + ... = T + δP/(1-δ)
V(C/D) = S + δS + δ²S + ... = S/(1-δ)
V(C/T) = R + δR + δ²R + ... = R/(1-δ)
V(T/D) = S + δP + δ²P + ... = S + δP/(1-δ)
V(T/C) = R + δR + δ²R + ... = R/(1-δ)

Discounting
Now consider what happens to these values as δ varies (from 0 to 1):
V(D/D) = P + δP + δ²P + ... = P + δP/(1-δ)
V(C/C) = R + δR + δ²R + ... = R/(1-δ)
V(T/T) = R + δR + δ²R + ... = R/(1-δ)
V(D/C) = T + δT + δ²T + ... = T/(1-δ)
V(D/T) = T + δP + δ²P + ... = T + δP/(1-δ)
V(C/D) = S + δS + δ²S + ... = S/(1-δ)
V(C/T) = R + δR + δ²R + ... = R/(1-δ)
V(T/D) = S + δP + δ²P + ... = S + δP/(1-δ)
V(T/C) = R + δR + δ²R + ... = R/(1-δ)
V(D/D) > V(T/D): D is a best response to D.

Discounting
Now consider what happens to these values as δ varies (from 0 to 1):
V(D/D) = P + δP + δ²P + ... = P + δP/(1-δ)
V(C/C) = R + δR + δ²R + ... = R/(1-δ)
V(T/T) = R + δR + δ²R + ... = R/(1-δ)
V(D/C) = T + δT + δ²T + ... = T/(1-δ)
V(D/T) = T + δP + δ²P + ... = T + δP/(1-δ)
V(C/D) = S + δS + δ²S + ... = S/(1-δ)
V(C/T) = R + δR + δ²R + ... = R/(1-δ)
V(T/D) = S + δP + δ²P + ... = S + δP/(1-δ)
V(T/C) = R + δR + δ²R + ... = R/(1-δ)

Discounting
Now consider what happens to these values as δ varies (from 0 to 1). For all values of δ:
V(D/T) > V(D/D) > V(T/D)
V(T/T) > V(D/D) > V(T/D)
Is there a value of δ s.t. V(D/T) = V(T/T)? Call this δ*:
V(D/T) = V(T/T)
T + δP/(1-δ) = R/(1-δ)
T - δT + δP = R
T - R = δ(T - P)
δ* = (T-R)/(T-P)
If δ < δ*, the following ordering holds:
V(D/T) > V(T/T) > V(D/D) > V(T/D)
D is dominant: GAME SOLVED
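
With the stage payoffs used earlier (T=5, R=3, P=1), δ* = (5-3)/(5-1) = 1/2. A quick numeric check of the cutoff (a sketch, not from the slides):

```python
# Critical discount parameter delta* = (T - R)/(T - P); for T=5, R=3, P=1 it is 1/2.
# Below delta*, defecting against TRIGGER pays; above it, it does not.
T, R, P = 5, 3, 1
delta_star = (T - R) / (T - P)             # 0.5

def v_DT(d): return T + d * P / (1 - d)    # ALWAYS DEFECT against TRIGGER
def v_TT(d): return R / (1 - d)            # TRIGGER against TRIGGER

print(delta_star)                          # 0.5
print(v_DT(0.4) > v_TT(0.4))               # True  -> defection better when delta < delta*
print(v_DT(0.6) < v_TT(0.6))               # True  -> cooperation better when delta > delta*
print(v_DT(0.5) == v_TT(0.5))              # True  -> exactly indifferent at delta*
```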

Discounting
Now consider what happens to these values as δ varies (from 0 to 1). For all values of δ:
V(D/T) > V(D/D) > V(T/D)
V(T/T) > V(D/D) > V(T/D)
Is there a value of δ s.t. V(D/T) = V(T/T)? Call this δ*. δ* = (T-R)/(T-P)
If δ > δ*, the following ordering holds:
V(T/T) > V(D/T) > V(D/D) > V(T/D)
D is a best response to D; T is a best response to T; multiple NE.

Discounting
Graphically: the V(alue) to a player using ALWAYS DEFECT (D) against TRIGGER (T), V(D/T) = T + δP/(1-δ), and V(T/T) = R/(1-δ), as functions of the discount parameter δ; the two curves cross at δ*.

The Folk Theorem
The payoff set of the repeated PD is the convex closure of the points [(T,S); (R,R); (S,T); (P,P)].

The Folk Theorem
The shaded area is the set of payoffs that Pareto-dominate the one-shot NE (P,P).

The Folk Theorem
Theorem: Any payoff that Pareto-dominates the one-shot NE can be supported in a SPNE of the repeated game, if the discount parameter is sufficiently high.

The Folk Theorem
In other words, in the repeated game, if the future matters “enough”, i.e., δ > δ*, there are zillions of equilibria!

The Folk Theorem
The theorem tells us that, in general, repeated games give rise to a very large set of Nash equilibria. In the repeated PD, these are Pareto-rankable, i.e., some are efficient and some are not. In this context, evolution can be seen as a process that selects for repeated-game strategies with efficient payoffs. “Survival of the Fittest”

Next Time 7/22 Decision under Uncertainty Pindyck, Chs 5, 13. Besanko, Chs 14-16