Quantal Response Equilibrium
APEC 8205: Applied Game Theory
Fall 2007
THE GAME
Players: $N = \{1, 2, \ldots, n\}$
Strategies: $S_i = \{s_i^1, s_i^2, \ldots, s_i^{J(i)}\}$
Strategy Profile: $s = \{s_1, s_2, \ldots, s_n\}$ for $s_i \in S_i$, $i \in N$
Strategy Space: $S = \times_{i \in N} S_i$
Individual Payoffs: $u_i(s)$
Everyone's Payoffs: $u(s) = \{u_1(s), u_2(s), \ldots, u_n(s)\}$
SOME NOTATION
$\Delta_i$: the $J(i)$-dimensional simplex
$\Delta = \times_{i \in N} \Delta_i$
$p_i = (p_i^1, p_i^2, \ldots, p_i^{J(i)}) \in \Delta_i$
$p = \{p_1, p_2, \ldots, p_n\}$
$p_i(s_i)$: probability player $i$ chooses strategy $s_i$
$p(s) = \prod_{i=1}^{n} p_i(s_i)$: probability of strategy profile $s \in S$ given $p$
$Eu_i(p) = \sum_{s \in S} p(s)\, u_i(s)$: player $i$'s expected payoff
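The product and sum formulas above are easy to verify numerically. The sketch below is my illustration, not from the slides: it stores a two-player game as payoff matrices `U1` and `U2` with placeholder numbers and computes $p(s)$ and $Eu_i(p)$.

```python
import numpy as np

# Hypothetical 2x2 game (payoff numbers are placeholders, not from the slides).
# U1[j, k] is player 1's payoff and U2[j, k] is player 2's payoff when
# player 1 plays strategy j and player 2 plays strategy k.
U1 = np.array([[3.0, 0.0],
               [1.0, 2.0]])
U2 = np.array([[2.0, 1.0],
               [0.0, 3.0]])

# Mixed strategies: points in each player's simplex.
p1 = np.array([0.6, 0.4])
p2 = np.array([0.3, 0.7])

# p(s) = p1(s1) * p2(s2): probability of each pure-strategy profile.
profile_probs = np.outer(p1, p2)

# Eu_i(p) = sum over s of p(s) * u_i(s).
Eu1 = float(np.sum(profile_probs * U1))
Eu2 = float(np.sum(profile_probs * U2))
print(Eu1, Eu2)
```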
DEFINITION
$p' = \{p_i', p_{-i}'\}$ is a Nash equilibrium if for all $i \in N$ and all $p_i \in \Delta_i$, $Eu_i(p_i', p_{-i}') \ge Eu_i(p_i, p_{-i}')$, where $p_{-i}'$ is $p'$ exclusive of $p_i'$.
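Because $Eu_i$ is linear in player $i$'s own mixture, the Nash condition only needs to be checked against pure-strategy deviations. A minimal sketch, using standard matching-pennies payoffs as an assumed example (not necessarily the game on the slides):

```python
import numpy as np

# Assumed example: standard matching pennies (payoffs are my assumption).
U1 = np.array([[ 1.0, -1.0],
               [-1.0,  1.0]])
U2 = -U1  # zero-sum

def is_nash(p1, p2, U1, U2, tol=1e-9):
    """Check the Nash condition for a two-player game in mixed strategies.

    Expected payoff is linear in a player's own mixture, so no mixed deviation
    can be profitable unless some pure deviation is.
    """
    eu1 = p1 @ U1 @ p2          # player 1's expected payoff at (p1, p2)
    eu2 = p1 @ U2 @ p2          # player 2's expected payoff at (p1, p2)
    dev1 = U1 @ p2              # player 1's payoff to each pure strategy vs p2
    dev2 = U2.T @ p1            # player 2's payoff to each pure strategy vs p1
    return bool(np.all(dev1 <= eu1 + tol) and np.all(dev2 <= eu2 + tol))

print(is_nash(np.array([0.5, 0.5]), np.array([0.5, 0.5]), U1, U2))  # True
print(is_nash(np.array([1.0, 0.0]), np.array([0.5, 0.5]), U1, U2))  # False: player 2 would deviate
```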
MORE NOTATION
$s_i^j = \{p_i : p_i^j = 1\}$: player $i$'s pure strategy $j$
$\varepsilon_i^j$: random error for player $i$ and strategy $j$
$Eu_i^j(p) = Eu_i(s_i^j, p_{-i}) + \varepsilon_i^j$: $i$'s expected payoff for strategy $j$ plus an error
$\varepsilon_i = (\varepsilon_i^1, \varepsilon_i^2, \ldots, \varepsilon_i^{J(i)})$: collection of errors for player $i$
$f_i(\varepsilon_i)$: joint density of errors, assuming $E(\varepsilon_i) = 0$
$f_i^j(\varepsilon_i^j)$: marginal density of error $\varepsilon_i^j$
$\varepsilon = (\varepsilon_1, \varepsilon_2, \ldots, \varepsilon_n)$: errors for all players
$f = (f_1, f_2, \ldots, f_n)$: joint densities of errors for all players
ASSUMPTION
Player $i$ chooses strategy $j$ when $Eu_i^j(p) \ge Eu_i^k(p)$ for all $k = 1, 2, \ldots, J(i)$.
IMPORTANT NOTES
Player $i$ knows $\varepsilon_i$, but not $\varepsilon_k$ for $k \ne i$.
Player $i$ only knows the distribution $f_k(\varepsilon_k)$, which means $k$'s strategy choice is random from the perspective of $i$.
$k$'s strategy choice is not uniformly random, because it also depends on $k$'s payoffs and on $k$'s lack of knowledge of $i$'s specific strategy choice.
BACK TO NOTATION
$R_i^j(p) = \{\varepsilon_i \mid Eu_i^j(p) \ge Eu_i^k(p) \ \forall k = 1, 2, \ldots, J(i)\}$ – region of errors that make strategy $j$ optimal for player $i$
$\sigma_i^j(p) = \int_{R_i^j(p)} f_i(\varepsilon_i)\, d\varepsilon_i$ – probability strategy $j$ is optimal for player $i$
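For an arbitrary error density, the integral defining $\sigma_i^j(p)$ can be approximated by simulation: draw error vectors, add them to the expected payoffs, and count how often each strategy is best. A sketch assuming iid normal errors (my choice of density, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def quantal_response_mc(expected_payoffs, error_scale=1.0, draws=100_000):
    """Monte Carlo estimate of sigma_i^j(p) for one player.

    expected_payoffs[j] = Eu_i(s_i^j, p_-i).  Errors are drawn iid normal here
    purely for illustration; any zero-mean density f_i could be substituted.
    The returned vector is the fraction of draws in which each strategy j
    maximizes Eu_i(s_i^j, p_-i) + eps_i^j, i.e. the mass of the region R_i^j(p).
    """
    u = np.asarray(expected_payoffs, dtype=float)
    eps = rng.normal(scale=error_scale, size=(draws, u.size))
    best = np.argmax(u + eps, axis=1)   # index of the optimal strategy in each draw
    return np.bincount(best, minlength=u.size) / draws

print(quantal_response_mc([1.0, 0.5, 0.0]))   # placeholder expected payoffs
```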
ANOTHER DEFINITION
$\pi$ is a Quantal Response Equilibrium if $\pi_i^j = \sigma_i^j(\pi)$ for all $j = 1, 2, \ldots, J(i)$ and all $i \in N$, where $\pi_i^j$ is the probability player $i$ chooses strategy $j$, $\pi = \{\pi_1, \pi_2, \ldots, \pi_n\}$, and $\pi_i = \{\pi_i^1, \pi_i^2, \ldots, \pi_i^{J(i)}\}$ for all $i \in N$.
COMMENT
Assuming the $\varepsilon_i^j$'s are identically and independently distributed (iid) extreme value (Weibull), the QRE implies the logit form
$\pi_i^j = \dfrac{\exp(\lambda\, Eu_i(s_i^j, \pi_{-i}))}{\sum_{k=1}^{J(i)} \exp(\lambda\, Eu_i(s_i^k, \pi_{-i}))}$,
where $\lambda$ is a parameter that is inversely related to the dispersion or variance of the error. For $\lambda = 0$, probabilities are uniform. As $\lambda$ approaches $\infty$, the dispersion of the error approaches 0 and the QRE approaches a Nash equilibrium.
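A quick numerical check of the two limiting cases: at $\lambda = 0$ the logit formula assigns equal probability to every strategy, and as $\lambda$ grows the weight concentrates on the strategy with the highest expected payoff. The expected payoffs below are placeholders.

```python
import numpy as np

def logit_response(expected_payoffs, lam):
    """pi_i^j = exp(lam * Eu_i^j) / sum_k exp(lam * Eu_i^k)."""
    u = np.asarray(expected_payoffs, dtype=float)
    z = np.exp(lam * (u - u.max()))   # subtract the max for numerical stability
    return z / z.sum()

u = [1.0, 0.5, 0.0]                   # placeholder expected payoffs
for lam in (0.0, 1.0, 10.0, 100.0):
    print(lam, logit_response(u, lam))
# lam = 0   -> uniform probabilities (1/3 each)
# large lam -> nearly all weight on the best strategy (approaches best response)
```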
GENERAL EXAMPLE
If $\pi_i^1 = \pi_i$ and $\pi_i^2 = 1 - \pi_i$ for $i = 1, 2$, the QRE solves
$\pi_1 = \dfrac{\exp(\lambda\, Eu_1(s_1^1, \pi_2))}{\exp(\lambda\, Eu_1(s_1^1, \pi_2)) + \exp(\lambda\, Eu_1(s_1^2, \pi_2))}$ and
$\pi_2 = \dfrac{\exp(\lambda\, Eu_2(s_2^1, \pi_1))}{\exp(\lambda\, Eu_2(s_2^1, \pi_1)) + \exp(\lambda\, Eu_2(s_2^2, \pi_1))}$.
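One simple way to solve this pair of equations is damped fixed-point iteration: start both players at 50/50 and repeatedly apply each player's logit response to the other's current mixture. The payoff matrices below are hypothetical (the slides' matrices are not reproduced in this transcript), and the iteration is only a heuristic that converges to one fixed point rather than tracing every QRE branch.

```python
import numpy as np

def logit_qre_2x2(U1, U2, lam, iters=5000, damp=0.5):
    """Damped fixed-point iteration for a logit QRE of a 2-player, 2-strategy game.

    U1[j, k] and U2[j, k] are the players' payoffs when player 1 plays j and
    player 2 plays k.  Returns (pi1, pi2), each a probability vector.
    """
    pi1 = np.array([0.5, 0.5])
    pi2 = np.array([0.5, 0.5])
    for _ in range(iters):
        eu1 = U1 @ pi2                     # player 1's expected payoff to each pure strategy
        eu2 = U2.T @ pi1                   # player 2's expected payoff to each pure strategy
        new1 = np.exp(lam * (eu1 - eu1.max())); new1 /= new1.sum()
        new2 = np.exp(lam * (eu2 - eu2.max())); new2 /= new2.sum()
        pi1 = damp * new1 + (1 - damp) * pi1
        pi2 = damp * new2 + (1 - damp) * pi2
    return pi1, pi2

# Hypothetical asymmetric matching-pennies payoffs (an assumption, not the slides' game).
U1 = np.array([[ 4.0, -1.0],
               [-1.0,  1.0]])
U2 = -U1
for lam in (0.0, 1.0, 5.0):
    print(lam, logit_qre_2x2(U1, U2, lam))
```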
MORE SPECIFIC EXAMPLE
ANOTHER SPECIFIC EXAMPLE
NE = {(0.50, 0.50), (0.50, 0.50)}
Observed:
– Goeree & Holt = {(0.48, 0.52), (0.48, 0.52)}
– Our Class = {(0.75, 0.25), (0.44, 0.56)}
ANOTHER SPECIFIC EXAMPLE
NE = {(0.50, 0.50), (0.125, 0.875)}
Observed:
– Goeree & Holt = {(0.96, 0.04), (0.16, 0.84)}
– Our Class = {(0.57, 0.43), (0.20, 0.80)}
ANOTHER SPECIFIC EXAMPLE
NE = {(0.50, 0.50), (0.909, 0.091)}
Observed:
– Goeree & Holt = {(0.08, 0.92), (0.80, 0.20)}
– Our Class = {(0.29, 0.71), (0.70, 0.30)}
PLAYER 1’s QRE
PLAYER 2’s QRE
EMPIRICAL ANALYSIS
$N$ pairs of subjects.
$y_i \in \{(\text{Top, Left}), (\text{Top, Right}), (\text{Bottom, Left}), (\text{Bottom, Right})\}$
Let $y = \{y_1, \ldots, y_N\}$, so the probability of $y$ is $L = \prod_{i=1}^{N} \Pr(y_i)$, where each $\Pr(y_i)$ is the product of the two players' QRE choice probabilities (e.g., $\Pr(\text{Top, Left}) = \pi_1(\lambda)\,\pi_2(\lambda)$).
Solve for the QRE probabilities as functions of $\lambda$: $\pi_1(\lambda)$ and $\pi_2(\lambda)$.
Maximize $L$ subject to $\lambda \ge 0$.
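For a 2x2 game the log-likelihood is a one-dimensional function of $\lambda$: solve the QRE at each candidate $\lambda$, form the four cell probabilities, and weight their logs by the observed counts. A grid-search sketch; both the payoff matrices and the counts below are placeholders, not data from the slides.

```python
import numpy as np

# Hypothetical payoffs and counts (placeholders, not data from the slides).
U1 = np.array([[ 4.0, -1.0],
               [-1.0,  1.0]])
U2 = -U1
counts = np.array([30, 20, 25, 25])      # observed (T,L), (T,R), (B,L), (B,R) pairs

def qre_cell_probs(lam, iters=2000, damp=0.5):
    """Logit-QRE probabilities of (T,L), (T,R), (B,L), (B,R), assuming the
    two subjects in a pair choose independently."""
    pi1 = np.array([0.5, 0.5]); pi2 = np.array([0.5, 0.5])
    for _ in range(iters):
        n1 = np.exp(lam * (U1 @ pi2)); n1 /= n1.sum()
        n2 = np.exp(lam * (U2.T @ pi1)); n2 /= n2.sum()
        pi1 = damp * n1 + (1 - damp) * pi1
        pi2 = damp * n2 + (1 - damp) * pi2
    return np.outer(pi1, pi2).ravel()    # index 0 = Top/Left for each player

def log_likelihood(lam):
    """log L(lambda) = sum_i log Pr(y_i) = counts . log(cell probabilities)."""
    return float(counts @ np.log(qre_cell_probs(lam)))

grid = np.linspace(0.0, 10.0, 201)       # crude grid search over lambda >= 0
lam_hat = grid[np.argmax([log_likelihood(l) for l in grid])]
print(lam_hat, log_likelihood(lam_hat))
```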
WHAT HAVE OTHERS DONE EMPIRICALLY
McKelvey and Palfrey had subjects play a variety of games.
Hypotheses:
– Random play: rejected
– Nash play: rejected
– Learning (e.g., error dispersion decreases with experience): results mixed
GOOD EXERCISE
Data from a classroom experiment last year:
– Treatment 1: {(T, L), (T, R), (B, L), (B, R)} = {3, 2, 1, 1}
– Treatment 2: {(T, L), (T, R), (B, L), (B, R)} = {0, 6, 1, 0}
– Treatment 3: {(T, L), (T, R), (B, L), (B, R)} = {2, 0, 3, 2}
Questions:
– Is play strictly random? (See the sketch after this list.)
– Does $\lambda$ differ across treatments?
– What are the estimated probabilities of Top & Bottom and Left & Right for each treatment?
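The first question can be answered without knowing the payoff matrices: strictly random play puts probability 1/4 on each cell, so a likelihood-ratio (G) test of the observed counts against a uniform multinomial applies. With samples this small the chi-square approximation is rough, so treat the p-values as indicative only. A sketch using the treatment counts above:

```python
import numpy as np
from scipy.stats import chi2

def g_test_uniform(counts):
    """Likelihood-ratio (G) test of observed cell counts against strictly random play.

    H0: each of the four outcomes has probability 1/4.  The G statistic is
    asymptotically chi-square with 3 degrees of freedom; with samples this
    small the p-value is only indicative (an exact multinomial test is better).
    """
    obs = np.asarray(counts, dtype=float)
    exp = obs.sum() / obs.size
    nonzero = obs > 0                         # cells with zero counts contribute 0
    g = 2.0 * np.sum(obs[nonzero] * np.log(obs[nonzero] / exp))
    return g, float(chi2.sf(g, df=obs.size - 1))

treatments = {1: [3, 2, 1, 1], 2: [0, 6, 1, 0], 3: [2, 0, 3, 2]}
for t, c in treatments.items():
    print("Treatment", t, g_test_uniform(c))
```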