Common Knowledge: The Math
We need a way to talk about “private information”
We will use an "information structure": a state space Ω, a partition πi for each player i, and a common prior μ
Ω is the (finite) set of "states" of the world
ω ∈ Ω is a possible state of the world
E ⊆ Ω is an event
Examples:
Ω = {(hot, rainy), (hot, sunny), (cold, rainy), (cold, sunny)}
ω = (hot, rainy)
E = {(hot, rainy), (hot, sunny)}
πi "partitions" the set of states for player i into those he can and those he cannot distinguish; πi(ω) denotes the cell of player i's partition containing ω.
E.g., suppose player 1 is in a basement with a thermostat but no window:
π1 = { {(hot, rainy), (hot, sunny)}, {(cold, rainy), (cold, sunny)} }
We write:
π1((hot, sunny)) = π1((hot, rainy))
π1((cold, sunny)) = π1((cold, rainy))
Suppose player 2 is in a high-rise with a window but no thermostat:
π2 = { {(hot, rainy), (cold, rainy)}, {(hot, sunny), (cold, sunny)} }
π2((hot, sunny)) = π2((cold, sunny))
π2((hot, rainy)) = π2((cold, rainy))
We let μ represent the "common prior" probability distribution over Ω
I.e., μ: Ω → [0, 1] s.t. Σω∈Ω μ(ω) = 1
We interpret μ(ω) as the probability that state ω occurs
E.g.,
μ((hot, sunny)) = .45
μ((hot, rainy)) = .05
μ((cold, sunny)) = .05
μ((cold, rainy)) = .45
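As a concreteness check, here is a minimal sketch of this information structure in Python; the state encoding and all names (states, mu, cell1, cell2) are my own illustration, not from the slides:

```python
# States of the world: (temperature, weather) pairs.
states = [("hot", "rainy"), ("hot", "sunny"),
          ("cold", "rainy"), ("cold", "sunny")]

# Common prior mu over the states.
mu = {("hot", "sunny"): .45, ("hot", "rainy"): .05,
      ("cold", "sunny"): .05, ("cold", "rainy"): .45}

# Partitions: player 1 observes only temperature, player 2 only weather.
# cell_i(w) returns pi_i(w), the partition cell containing state w.
def cell1(w):
    return frozenset(s for s in states if s[0] == w[0])

def cell2(w):
    return frozenset(s for s in states if s[1] == w[1])

assert abs(sum(mu.values()) - 1) < 1e-9                     # mu is a distribution
assert cell1(("hot", "sunny")) == cell1(("hot", "rainy"))   # the slide's identity
```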
We can likewise write μ(E) or μ(E|F), using Bayes' Rule.
E.g., μ((hot, sunny) | hot) = μ((hot, sunny)) / μ(hot) = .45 / (.45 + .05) = .9
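A small sketch of these operations under the same hypothetical encoding (prob and cond are my names for μ(E) and μ(E|F)):

```python
states = [("hot", "rainy"), ("hot", "sunny"), ("cold", "rainy"), ("cold", "sunny")]
mu = {("hot", "sunny"): .45, ("hot", "rainy"): .05,
      ("cold", "sunny"): .05, ("cold", "rainy"): .45}

def prob(E):
    """mu(E): total prior probability of event E."""
    return sum(mu[w] for w in E)

def cond(E, F):
    """mu(E | F) by the conditional-probability formula; assumes mu(F) > 0."""
    return prob(E & F) / prob(F)

hot = {w for w in states if w[0] == "hot"}
hot_sunny = {("hot", "sunny")}
print(cond(hot_sunny, hot))   # 0.9, matching the slide
```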
Now, we want to investigate how this private information can influence play in a game. We assume that in every state of the world the players play the same coordination game. (But they may play different actions in different states!)
     A      B
A   a, a   b, c
B   c, b   d, d

a > c, d > b
(Interpret: since a > c and d > b, both (A, A) and (B, B) are Nash equilibria, so the players want to coordinate on a common action.)
What are the strategies in this new game? The payoffs?
si: πi → {A, B}, i.e., a strategy assigns one action to each cell of player i's own partition.
E.g.:
s1({(hot, rainy), (hot, sunny)}) = A
s1({(cold, rainy), (cold, sunny)}) = B
s2({(hot, sunny), (cold, sunny)}) = A
s2({(hot, rainy), (cold, rainy)}) = B
Not:
s1({(cold, rainy)}) = B
s1({(hot, rainy), (hot, sunny), (cold, sunny)}) = A
(these sets are not cells of π1, so player 1 cannot condition on them)
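One way to encode such strategies, continuing the hypothetical Python sketch, is as dicts keyed by the cells of the player's own partition:

```python
# A strategy maps each cell of the player's own partition to an action.
cell_hot   = frozenset({("hot", "rainy"), ("hot", "sunny")})
cell_cold  = frozenset({("cold", "rainy"), ("cold", "sunny")})
cell_sunny = frozenset({("hot", "sunny"), ("cold", "sunny")})
cell_rainy = frozenset({("hot", "rainy"), ("cold", "rainy")})

s1 = {cell_hot: "A", cell_cold: "B"}     # player 1 conditions on temperature
s2 = {cell_sunny: "A", cell_rainy: "B"}  # player 2 conditions on weather
# A dict keyed by {("cold", "rainy")} alone would be ill-formed:
# that set is not a cell of pi_1.
```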
Ui: S1 × S2 → ℝ s.t.
Ui(s1, s2) = Σω μ(ω) Ui(s1(π1(ω)), s2(π2(ω)))
How did we get this? Expected utility = weighted average of the payoff in each state (given common priors, and the prescribed action in each state)
E.g.:
s1({(hot, rainy), (hot, sunny)}) = A
s1({(cold, rainy), (cold, sunny)}) = B
s2({(hot, sunny), (cold, sunny)}) = A
s2({(hot, rainy), (cold, rainy)}) = B

     A      B
A   1, 1   0, 0
B   0, 0   5, 5
U1(s1, s2) = μ((hot, sunny)) U1(s1(π1((hot, sunny))), s2(π2((hot, sunny)))) + …
= μ((hot, sunny)) U1(s1({(hot, rainy), (hot, sunny)}), s2({(hot, sunny), (cold, sunny)})) + …
= μ((hot, sunny)) U1(A, A) + …
= .45×1 + .05×0 + .05×0 + .45×5
= 2.7
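The expected-utility formula is mechanical to implement. Here is a self-contained sketch (function and variable names are mine) that reproduces the 2.7:

```python
states = [("hot", "rainy"), ("hot", "sunny"), ("cold", "rainy"), ("cold", "sunny")]
mu = {("hot", "sunny"): .45, ("hot", "rainy"): .05,
      ("cold", "sunny"): .05, ("cold", "rainy"): .45}
def cell1(w): return frozenset(s for s in states if s[0] == w[0])  # pi_1: temperature
def cell2(w): return frozenset(s for s in states if s[1] == w[1])  # pi_2: weather

# Stage-game payoffs: (row action, column action) -> (payoff 1, payoff 2).
payoff = {("A", "A"): (1, 1), ("A", "B"): (0, 0),
          ("B", "A"): (0, 0), ("B", "B"): (5, 5)}

s1 = {cell1(("hot", "sunny")): "A", cell1(("cold", "sunny")): "B"}
s2 = {cell2(("hot", "sunny")): "A", cell2(("hot", "rainy")): "B"}

def U(i, s1, s2):
    """U_i(s1, s2) = sum over states of mu(w) * u_i(s1(pi_1(w)), s2(pi_2(w)))."""
    return sum(mu[w] * payoff[(s1[cell1(w)], s2[cell2(w)])][i] for w in states)

print(round(U(0, s1, s2), 2))   # 2.7
```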
What is the condition for NE? Same as before…
(s1, s2) is NE iff
U1(s1, s2) ≥ U1(s1', s2) for all s1'
U2(s1, s2) ≥ U2(s1, s2') for all s2'
E.g.:
s1({(hot, rainy), (hot, sunny)}) = A
s1({(cold, rainy), (cold, sunny)}) = B
s2({(hot, sunny), (cold, sunny)}) = A
s2({(hot, rainy), (cold, rainy)}) = B

     A      B
A   1, 1   0, 0
B   0, 0   5, 5

Is (s1, s2) NE?
U1(s1, s2) = 2.7
Let's consider all possible deviations for player 1.
Let s1'({(hot, rainy), (hot, sunny)}) = s1'({(cold, rainy), (cold, sunny)}) = A
U1(s1', s2) = .45×1 + .05×0 + .05×1 + .45×0 = .5
U1(s1', s2) < U1(s1, s2)
Let s1'({(hot, rainy), (hot, sunny)}) = s1'({(cold, rainy), (cold, sunny)}) = B
U1(s1', s2) = .45×0 + .05×5 + .05×0 + .45×5 = 2.5
U1(s1', s2) < U1(s1, s2)
Let s1'({(hot, rainy), (hot, sunny)}) = B, s1'({(cold, rainy), (cold, sunny)}) = A
U1(s1', s2) = .45×0 + .05×5 + .05×1 + .45×0 = .3
U1(s1', s2) < U1(s1, s2)
(Similarly for player 2.)
(s1, s2) is NE
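This exhaustive check is easy to automate: player 1 has two partition cells, hence only 2² = 4 pure strategies. A sketch, repeating the hypothetical definitions from the block above:

```python
from itertools import product

states = [("hot", "rainy"), ("hot", "sunny"), ("cold", "rainy"), ("cold", "sunny")]
mu = {("hot", "sunny"): .45, ("hot", "rainy"): .05,
      ("cold", "sunny"): .05, ("cold", "rainy"): .45}
def cell1(w): return frozenset(s for s in states if s[0] == w[0])
def cell2(w): return frozenset(s for s in states if s[1] == w[1])
payoff = {("A", "A"): (1, 1), ("A", "B"): (0, 0),
          ("B", "A"): (0, 0), ("B", "B"): (5, 5)}
def U(i, s1, s2):
    return sum(mu[w] * payoff[(s1[cell1(w)], s2[cell2(w)])][i] for w in states)

s2 = {cell2(("hot", "sunny")): "A", cell2(("hot", "rainy")): "B"}
hot, cold = cell1(("hot", "sunny")), cell1(("cold", "sunny"))

for a_hot, a_cold in product("AB", repeat=2):   # all 4 strategies for player 1
    s1_alt = {hot: a_hot, cold: a_cold}
    print(a_hot, a_cold, round(U(0, s1_alt, s2), 2))
# -> A A 0.5 | A B 2.7 (the candidate) | B A 0.3 | B B 2.5
# No deviation beats 2.7; a symmetric loop over player 2 confirms the NE.
```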
Now assume
μ((hot, sunny)) = .35
μ((hot, rainy)) = .15
μ((cold, sunny)) = .15
μ((cold, rainy)) = .35
Is (s1, s2) still NE?
U1(s1, s2) = .35×1 + .15×0 + .15×0 + .35×5 = 2.1
Consider: s1'({(hot, rainy), (hot, sunny)}) = s1'({(cold, rainy), (cold, sunny)}) = B
U1(s1', s2) = .35×0 + .15×5 + .15×0 + .35×5 = 2.5
U1(s1', s2) > U1(s1, s2)
(s1, s2) isn't NE
(In fact, one can argue similarly that no (s1, s2) that conditions actions on information is NE!)
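Rerunning the comparison with the weaker correlation shows the failure directly; a self-contained sketch (again with my own hypothetical names):

```python
mu = {("hot", "sunny"): .35, ("hot", "rainy"): .15,
      ("cold", "sunny"): .15, ("cold", "rainy"): .35}
payoff1 = {("A", "A"): 1, ("A", "B"): 0, ("B", "A"): 0, ("B", "B"): 5}

def act2(w):                     # player 2 still plays A iff it is sunny
    return "A" if w[1] == "sunny" else "B"

strategies = {
    "candidate (A if hot, B if cold)": lambda w: "A" if w[0] == "hot" else "B",
    "always B":                        lambda w: "B",
}
for name, s1 in strategies.items():
    EU = sum(p * payoff1[(s1(w), act2(w))] for w, p in mu.items())
    print(name, round(EU, 2))
# -> candidate 2.1, always B 2.5: the deviation is profitable, so no NE.
```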
So sometimes it is possible to condition one's action on one's information, and sometimes it isn't. Can we characterize, for any coordination game and information structure, when this is possible? It turns out the answer has to do with "higher-order beliefs." To see this, we will need to define the concepts of p-belief and common p-belief.
We say i p-believes E at ω if μ(E | πi(ω)) ≥ p
E.g., consider our original information structure and let E = {(hot, sunny), (cold, sunny)}.
Player 1 .7-believes E at (hot, sunny):
μ({(hot, sunny), (cold, sunny)} | π1((hot, sunny)))
= μ({(hot, sunny), (cold, sunny)} | {(hot, sunny), (hot, rainy)})
= (.45 + 0) / .5 = .9 ≥ .7
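p-belief is a one-line computation on top of the earlier sketch (p_believes is my own hypothetical helper):

```python
states = [("hot", "rainy"), ("hot", "sunny"), ("cold", "rainy"), ("cold", "sunny")]
mu = {("hot", "sunny"): .45, ("hot", "rainy"): .05,
      ("cold", "sunny"): .05, ("cold", "rainy"): .45}
def cell1(w): return frozenset(s for s in states if s[0] == w[0])

def p_believes(cell, E, w, p):
    """True iff the player with partition `cell` p-believes event E at state w."""
    c = cell(w)
    return sum(mu[x] for x in c & E) >= p * sum(mu[x] for x in c)

E = frozenset({("hot", "sunny"), ("cold", "sunny")})   # "it is sunny"
print(p_believes(cell1, E, ("hot", "sunny"), 0.7))     # True: .45/.50 = .9 >= .7
```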
We say E is "common p-belief" at ω if, at ω:
Both p-believe E
Both p-believe that both p-believe E
Both p-believe that both p-believe that both p-believe E
…
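On a finite Ω this infinite hierarchy can be computed as a greatest fixed point: keep the states where everyone p-believes E, then repeatedly discard states where someone fails to p-believe the surviving set. The fixed-point formulation is equivalent to the iterated definition (a standard result due to Monderer and Samet); here is a sketch under the same hypothetical encoding:

```python
states = [("hot", "rainy"), ("hot", "sunny"), ("cold", "rainy"), ("cold", "sunny")]
mu = {("hot", "sunny"): .45, ("hot", "rainy"): .05,
      ("cold", "sunny"): .05, ("cold", "rainy"): .45}
def cell1(w): return frozenset(s for s in states if s[0] == w[0])
def cell2(w): return frozenset(s for s in states if s[1] == w[1])

def B(cell, E, p):
    """States at which the player with partition `cell` p-believes E."""
    return {w for w in states
            if sum(mu[x] for x in cell(w) & E) >= p * sum(mu[x] for x in cell(w))}

def common_p_belief(E, p):
    """States at which E is common p-belief (greatest-fixed-point computation)."""
    C = B(cell1, E, p) & B(cell2, E, p)          # both p-believe E
    while True:
        C_next = C & B(cell1, C, p) & B(cell2, C, p)
        if C_next == C:
            return C
        C = C_next

E = frozenset({("hot", "sunny"), ("cold", "sunny")})   # "it is sunny"
print(common_p_belief(E, 0.7))   # {("hot", "sunny")} under the original prior
```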
Suppose (s1, s2) is a Nash equilibrium such that for i = 1, 2:
si(ω) = A for all ω ∈ E
si(ω) = B for all ω ∈ F
Then Ω\F is common p-belief at E, and Ω\E is common (1−p)-belief at F
Intuition…
If 1 is playing A when she observes the event E, then she had better be quite sure it isn't F (because 2 plays B on F)
How sure? At least p!
Is this enough? What if 1 p-believes that it isn't F, but doesn't think 2 p-believes it isn't F? Well, then 1 thinks 2 will play B!
How confident does 1 therefore have to be that 2 p-believes it isn't F? At least p!
…
If Ω\F is common p-belief at E, and Ω\E is common (1−p)-belief at F,
then there exists a Nash equilibrium (s1, s2) s.t.
si(ω) = A for all ω ∈ E
si(ω) = B for all ω ∈ F
Note:
Higher-order beliefs matter iff my optimal choice depends on your choice! (coordination game, hawk-dove game, but not signaling game!)
This holds even if the game itself is state dependent!