A Non-Probabilistic Generalization of the Agreement Theorem.


1 A Non-Probabilistic Generalization of the Agreement Theorem

2 Knowledge
Ω – a state space.
Π – a partition of Ω.
Π(ω) – the element of Π that contains state ω.
At ω the agent knows Π(ω) ...and also every event E containing Π(ω).
K(E) – the event that the agent knows E.

3 Knowledge
Jaakko Hintikka, Knowledge and Belief – An Introduction to the Logic of the Two Notions.
K(E) = {ω | Π(ω) ⊆ E} – the event that the agent knows E.
The knowledge operator K : 2^Ω → 2^Ω satisfies:
1. K(Ω) = Ω
2. K(E) ∩ K(F) = K(E ∩ F)
3. K(E) ⊆ E
4. ¬K(E) = K(¬K(E))
Conversely: if K satisfies 1–4, then there exists a partition Π from which K is derived as above.

4 Knowledge and probability
P – a prior probability on Ω. Fix an event E.
The posterior probability of E:
d : Ω → ℝ, d(ω) = P(E | Π(ω))
[d = p] – the event {ω | d(ω) = p}, e.g. [d = 2/3].
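A quick sketch of d and [d = p]. The state space, uniform prior, and event E below are illustrative assumptions of mine, chosen so the posteriors come out to round values like the 2/3 and 1/2 on the slide:

```python
from fractions import Fraction

OMEGA = [1, 2, 3, 4, 5, 6]
PARTITION = [{1, 2}, {3, 4, 5}, {6}]
P = {w: Fraction(1, 6) for w in OMEGA}  # uniform prior (an assumption)
E = {2, 3, 4}                           # a fixed event (an assumption)

def cell(w):
    """Pi(w): the partition element containing w."""
    return next(c for c in PARTITION if w in c)

def d(w):
    """d(w) = P(E | Pi(w)): the posterior probability of E at w."""
    c = cell(w)
    return sum(P[x] for x in c & E) / sum(P[x] for x in c)

def event_d_equals(p):
    """[d = p] = {w | d(w) = p}."""
    return {w for w in OMEGA if d(w) == p}

print(d(1))                            # 1/2
print(event_d_equals(Fraction(2, 3)))  # {3, 4, 5}
```

Note that d is constant on each cell of Π, so [d = p] is always a union of cells: the agent knows his own posterior.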

5 Common knowledge
Π1 – agent 1's partition; Π2 – agent 2's partition.
Πc – a partition coarser than both Π1 and Π2. The finest among all such partitions is the common knowledge partition.
K(E) := K1(E) ∩ K2(E)
Kc(E) = ∩n≥1 Kⁿ(E)

6 The probabilistic agreement theorem
P – a common prior probability. Fix an event E.
d1(ω) = P(E | Π1(ω)), d2(ω) = P(E | Π2(ω)).
For p1 ≠ p2:  Kc([d1 = p1] ∩ [d2 = p2]) = ∅.
It is impossible to agree to disagree: if the two posteriors are common knowledge, they coincide.
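A finite sanity check of the theorem; the four-state model, uniform common prior, and event below are all illustrative assumptions. On each cell of the common knowledge partition, if both posteriors are constant there (i.e., they are common knowledge), they must agree:

```python
from fractions import Fraction

OMEGA = [1, 2, 3, 4]
P1 = [{1, 2}, {3, 4}]
P2 = [{1, 2}, {3}, {4}]
PRIOR = {w: Fraction(1, 4) for w in OMEGA}  # common prior (assumed uniform)
E = {1, 3}

def cell(part, w):
    return next(c for c in part if w in c)

def posterior(part, w):
    c = cell(part, w)
    return sum(PRIOR[x] for x in c & E) / sum(PRIOR[x] for x in c)

def ck_cell(w):
    """Common knowledge cell of w: all states reachable through
    chains of P1- and P2-cells."""
    reach, frontier = {w}, {w}
    while frontier:
        new = set()
        for part in (P1, P2):
            for x in frontier:
                new |= cell(part, x)
        frontier = new - reach
        reach |= new
    return reach

# Whenever both posteriors are constant on a common knowledge cell,
# they are equal: no agreeing to disagree.
for w in OMEGA:
    cc = ck_cell(w)
    vals1 = {posterior(P1, x) for x in cc}
    vals2 = {posterior(P2, x) for x in cc}
    if len(vals1) == 1 and len(vals2) == 1:
        assert vals1 == vals2
```

On the cell {1, 2} both posteriors equal 1/2, so the check bites there; on {3, 4} agent 2's posterior varies, so nothing is common knowledge and the theorem is silent.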

7 A non-probabilistic agreement theorem?
Δ – a set of decisions.
d1 : Ω → Δ, d2 : Ω → Δ.
Sought: conditions on d1 and d2, satisfied by the posterior probability functions, under which for δ1 ≠ δ2:
Kc([d1 = δ1] ∩ [d2 = δ2]) = ∅.

8 Virtual decision functions
A decision function di : Ω → Δ is derived from the virtual decision function D if di(ω) = D(Πi(ω)).
Agents are like-minded if all individual decision functions are derived from the same virtual decision function.
Interpretation: D(E) is the decision that would be made if E were the information given to the agent.
Cave, J. (1983), Learning to agree, Economics Letters, 12.
Bacharach, M. (1985), Some extensions of a claim of Aumann in an axiomatic model of knowledge, J. Econom. Theory, 37(1).

9 The Sure Thing Principle (STP)
A businessman contemplates buying a certain piece of property. He considers the outcome of the next presidential election relevant. So, to clarify the matter to himself, he asks whether he would buy if he knew that the Democratic candidate were going to win, and decides that he would. Similarly, he considers whether he would buy if he knew that the Republican candidate were going to win, and again finds that he would. Seeing that he would buy in either event, he decides that he should buy, even though he does not know which event obtains, or will obtain, as we would ordinarily say. It is all too seldom that a decision can be arrived at on the basis of this principle, but, except possibly for the assumption of simple ordering, I know of no other extralogical principle governing decisions that finds such ready acceptance. The sure-thing principle cannot appropriately be accepted as a postulate in the sense that P1 is, because it would introduce new undefined technical terms referring to knowledge and possibility that would render it mathematically useless without still more postulates governing these terms. It will be preferable to regard the principle as a loose one that suggests certain formal postulates well articulated with P1. Savage, L. J. (1954), The foundations of statistics.

10 Virtual decision functions
A decision function di : Ω → Δ is derived from the virtual decision function D if di(ω) = D(Πi(ω)). Agents are like-minded if all individual decision functions are derived from the same virtual decision function.
The virtual decision function D satisfies the STP when for any two disjoint events E, F: if D(E) = D(F) = δ, then D(E ∪ F) = δ.
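One concrete family of virtual decision functions satisfying the STP: threshold rules on a posterior. If D(E) = D(F) = "arrest" on disjoint E and F, then P(G | E ∪ F) is a weighted average of P(G | E) and P(G | F), so it clears the threshold too. A sketch (the state space, uniform prior, threshold, and the event GUILTY are assumptions of mine):

```python
from fractions import Fraction
from itertools import chain, combinations

OMEGA = [1, 2, 3, 4, 5, 6]
PRIOR = {w: Fraction(1, 6) for w in OMEGA}
GUILTY = {1, 2, 3}  # hypothetical event the decision is about

def D(E):
    """Virtual decision: what would be decided if E were the information.
    A threshold rule on the posterior of GUILTY; such rules satisfy the STP."""
    p = sum(PRIOR[w] for w in E & GUILTY) / sum(PRIOR[w] for w in E)
    return "arrest" if p >= Fraction(1, 2) else "release"

def derived(partition):
    """The decision function d_i(w) = D(Pi_i(w)) derived from D."""
    return lambda w: D(next(c for c in partition if w in c))

# Brute-force STP check: for all disjoint nonempty E, F with D(E) = D(F),
# the union gets the same decision.
subsets = [set(s) for s in chain.from_iterable(
    combinations(OMEGA, r) for r in range(1, len(OMEGA) + 1))]
for E in subsets:
    for F in subsets:
        if E & F:
            continue
        if D(E) == D(F):
            assert D(E | F) == D(E)

d_alice = derived([{1, 2, 3}, {4, 5, 6}])
print(d_alice(1))  # arrest
```

The two detectives of slide 12 are like-minded in exactly this sense: each applies the same rule D to whatever information cell she ends up with.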

11 An agreement theorem
If the agents are like-minded with virtual decision function D, and D satisfies the STP, then it is impossible to agree to disagree. That is, if the decisions of the agents are common knowledge, then they coincide.

12 A detective story
A murder has been committed. To increase the chances of conviction, the chief of police puts two detectives on the case, with strict instructions to work independently and to exchange no information. The two, Alice and Bob, went to the same police school; so given the same clues, they would reach the same conclusions. But as they work independently, they will, presumably, not get the same clues. At the end of thirty days, each is to decide whom to arrest (possibly nobody). (Like-mindedness.)

13 A detective story
On the night before the thirtieth day, they happen to meet … and get to talking about the case. True to their instructions, they exchange no substantive information, no clues; but … feel that there is no harm in telling each other whom they plan to arrest. Thus, … it is common knowledge between them whom each will arrest. Conclusion: They arrest the same people; and this, in spite of knowing nothing about each other's clues. Curtain

14 A detective story
Aumann, R. J. (1999), Notes on interactive epistemology, IJGT.
Aumann, R. J. (1988), Notes on interactive epistemology, unpublished.

15 Is the STP captured?
How do virtual decision functions fit in a partitional knowledge setup?
Syntactically, they involve knowledge that cannot be expressed in terms of the actual knowledge operators Ki.
Semantically, at a given state ω the agent's knowledge is given by Πi(ω) and not by any other event. (Moses & Nachum (1990))
Is the agent more knowledgeable at ω than at ω'? E.g., at ω' the agent knows p (Ki p), while at ω he does not know p but knows that he does not (¬Ki p, Ki ¬Ki p). The comparison is across states, which the partitional setup does not express directly.
A remedy follows.

16 Comparison of knowledge
The event that i is more knowledgeable than j:
[i ≥ j] := ∩E (¬Kj(E) ∪ Ki(E))
ω ∈ [i ≥ j] iff ω ∈ ¬Kj(E) ∪ Ki(E) for each E, i.e., whatever j knows at ω, i knows at ω.
Two kinds of comparison: intrapersonal, interstate (i knows at ω more than i knows at ω'); and interpersonal, intrastate (i knows at ω more than j knows at ω).
It is impossible to express the interstate STP in the language, because the language is "local": it can express only what is true in one world, and it cannot express the differences between states of the world.
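For partitional models, [i ≥ j] has a simple local characterization: ω ∈ [i ≥ j] exactly when Πi(ω) ⊆ Πj(ω). A small sketch verifying the intersection definition against this, on a toy pair of partitions (my own example) where i's partition refines j's everywhere:

```python
from itertools import chain, combinations

OMEGA = frozenset({1, 2, 3, 4})
PI = [frozenset({1}), frozenset({2}), frozenset({3, 4})]  # agent i (finer)
PJ = [frozenset({1, 2}), frozenset({3, 4})]               # agent j (coarser)

def cell(part, w):
    return next(c for c in part if w in c)

def K(part, E):
    """K(E) = {w | cell(w) ⊆ E}."""
    return frozenset(w for w in OMEGA if cell(part, w) <= E)

events = [frozenset(s) for s in chain.from_iterable(
    combinations(sorted(OMEGA), r) for r in range(len(OMEGA) + 1))]

# [i >= j] by the definition: intersection over all events E of ¬Kj(E) ∪ Ki(E).
by_def = OMEGA
for E in events:
    by_def &= (OMEGA - K(PJ, E)) | K(PI, E)

# Local characterization: i's cell at w is contained in j's cell at w.
by_cells = frozenset(w for w in OMEGA if cell(PI, w) <= cell(PJ, w))

assert by_def == by_cells == OMEGA  # here i is more knowledgeable at every state
```

The equivalence holds because taking E = Πj(ω) in the definition forces Πi(ω) ⊆ Πj(ω), and conversely that inclusion makes every implication "j knows E ⇒ i knows E" true at ω.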

17 Interpersonal Sure Thing Principle (ISTP)
The decision functions (d1, …, dn) satisfy the ISTP if for each i and j:
Kj([i ≥ j] ∩ [di = δ]) ⊆ [dj = δ]
[i = j] := [i ≥ j] ∩ [j ≥ i]
If the decision functions satisfy the ISTP, then the agents are like-minded: for each i and j, [i = j] ⊆ [di = dj].

18 Expandability
The decision functions (d1, …, dn) on the model (Ω, K1, …, Kn) are ISTP-expandable if for each expansion (Ω, K1, …, Kn, Kn+1) in which agent n+1 is an epistemic dummy, there exists a decision function dn+1 for agent n+1 such that (d1, …, dn, dn+1) satisfies the ISTP.
An agent is an epistemic dummy if it is common knowledge that each other agent is more knowledgeable.
Officer E. P. Dummy

19 A non-probabilistic generalization of the agreement theorem
If the decision functions (d1,…, dn) on the model (Ω, K1 , … , Kn ) are ISTP-expandable then the agents cannot agree to disagree.

20 Why ISTP?
A ken is the list of all the sentences an agent knows. The decision δ depends only on the ken.
Alice's ken is K; Binmore's ken is K'. Suppose Alice knows that Binmore is more knowledgeable and that Binmore's decision is δ. Then every ken K' that contains K and is consistent with what Alice knows is a candidate for Binmore's ken, and Binmore's decision is δ for each of these kens K'.

