1
Probabilistic Dependence Logic Pietro Galliani
2
Not all undetermined formulas are undetermined in the same way:
∀x ∃y (=(y) ∧ x = y)    versus    ∀x ∃y (=(y) ∧ x ≠ y).
[Game-tree figure: Abelard chooses x, then Eloise chooses y without seeing x; the leaves are marked "Eloise loses" / "Eloise wins".]
3
It will be simpler to use DF-Logic: (∃x\{y₁, ..., y_k})ψ means "one can choose a value for x which satisfies ψ and which depends only on the values of y₁, ..., y_k".
=(t₁, ..., t_n) is equivalent to ∃y₁ ... ∃y_{n−1} (∃y_n\{y₁, ..., y_{n−1}}) (y₁ = t₁ ∧ ... ∧ y_n = t_n);
(∃x\{y₁, ..., y_k})ψ is equivalent to ∃x (=(y₁, ..., y_k, x) ∧ ψ).
The game semantics for backslashed connectives is the obvious one.
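To make the unfolding concrete, here is a minimal Python sketch of the translation from a dependence atom to slashed quantifiers. The AST constructor names (Var, Eq, And, Dep, SlashExists) and the helper dep_to_slash are mine, not notation from the talk; an unrestricted ∃ is modelled as a slashed ∃ whose support is everything introduced so far.

```python
from dataclasses import dataclass
from functools import reduce
from typing import Tuple

@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class Eq:            # t1 = t2
    left: Var
    right: Var

@dataclass(frozen=True)
class And:           # psi ∧ theta
    left: object
    right: object

@dataclass(frozen=True)
class Dep:           # dependence atom =(t1, ..., tn)
    terms: Tuple[Var, ...]

@dataclass(frozen=True)
class SlashExists:   # (∃x \ {y1, ..., yk}) psi
    var: Var
    support: Tuple[Var, ...]
    body: object

def dep_to_slash(atom: Dep) -> object:
    """Unfold =(t1, ..., tn) as on this slide:
    ∃y1 ... ∃y_{n-1} (∃yn \ {y1, ..., y_{n-1}}) (y1 = t1 ∧ ... ∧ yn = tn)."""
    n = len(atom.terms)
    ys = tuple(Var("y%d" % i) for i in range(1, n + 1))
    body = reduce(And, [Eq(y, t) for y, t in zip(ys, atom.terms)])
    # innermost quantifier: yn may depend only on y1, ..., y_{n-1}
    phi: object = SlashExists(ys[-1], ys[:-1], body)
    # the outer ∃y_i are unrestricted: their support is everything so far
    for i in range(n - 2, -1, -1):
        phi = SlashExists(ys[i], ys[:i] + atom.terms, phi)
    return phi

print(dep_to_slash(Dep((Var("x"), Var("z")))))
```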
4
A (uniform) behavioral strategy β for player α is a family of functions from partial plays (p₁, ..., p_i), where α must move in p_i, to probability distributions over the set of all possible successors of p_i, such that the same distribution is used for indistinguishable positions.
[Game-tree figure for ∀x (∃y\{})(x = y) over domain {a, b}: Abelard plays x:a or x:b with probabilities p and 1−p; Eloise cannot distinguish the two resulting positions, so uniformity forces her to play y:a with the same probability q in both.]
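A quick numeric illustration of the uniformity requirement in this game tree (my own sketch, not code from the talk): Abelard plays x = a with probability p; Eloise must use one distribution (q, 1−q) over y in both of her positions.

```python
def eloise_win_prob(p: float, q: float) -> float:
    """P(x = y) when Abelard plays x=a w.p. p and Eloise plays y=a w.p. q."""
    return p * q + (1 - p) * (1 - q)

# Against the uniform q = 1/2, Eloise wins with probability 1/2 whatever p is:
for p in (0.0, 0.3, 1.0):
    assert abs(eloise_win_prob(p, 0.5) - 0.5) < 1e-12

# For any fixed q != 1/2, Abelard has a pure reply pushing her below 1/2:
q = 0.7
print(min(eloise_win_prob(p, q) for p in (0.0, 1.0)))  # 0.3
```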
5
Since the games H(φ) can be of imperfect recall, Kuhn's theorem does not hold, and not all mixed (uniform) strategies correspond to (uniform) behavioral strategies! However, every u.b.s. β induces a probability distribution β* over uniform pure strategies σ (that is, a uniform mixed strategy).
6
Given a complete play (p₁, ..., p_n) = (σ; τ), we define its payoff as
P(H_s(φ); σ; τ) = 1 if (σ; τ) is winning for Eloise, and 0 otherwise.
Given two behavioral strategies β and γ,
P(H_s(φ); β; γ) = Σ_σ Σ_τ β*(σ) γ*(τ) P(H_s(φ); σ; τ).
Then the value of the game is V_s(φ) = V(H_s(φ)) = sup_γ inf_β P(H_s(φ); β; γ).
Equivalently, V_s(φ) = sup_γ inf_σ P(H_s(φ); σ; γ).
7
Example: dom(M) = {a₁, ..., a_n}, φ = ∀x (∃y\{})(x = y).
Let γ be "choose y = a_i with uniform probability 1/n":
γ((φ, ∅)(∃y(=(y) ∧ x = y), s))(=(y) ∧ x = y, s[a_i/y]) = 1/n.
Then, for all σ, P(H_s(φ); σ; γ) ≥ 1/n; therefore, V_s(φ) ≥ 1/n.
On the other hand, every γ induces a probability distribution over {a₁, ..., a_n}, and for at least one a_i, Prob(y = a_i) ≤ 1/n. If σ chooses this a_i for x, then P(H_s(φ); σ; γ) ≤ 1/n; in conclusion, V_s(φ) = 1/n.
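A sanity check of V_s(φ) = 1/n for n = 3 (an illustrative sketch, not the talk's own code). Abelard's pure strategies pick x; Eloise's blind behavioral strategy is a distribution q over the n candidate values for y.

```python
n = 3

def payoff(x: int, q: list) -> float:
    """P(H(phi); sigma_x; gamma_q): Eloise wins exactly when y = x."""
    return q[x]

# The uniform gamma guarantees exactly 1/n against every pure sigma:
uniform = [1.0 / n] * n
assert all(abs(payoff(x, uniform) - 1 / n) < 1e-12 for x in range(n))

# No gamma guarantees more: some value has probability <= 1/n, and Abelard
# picks that value for x.
for q in ([0.5, 0.3, 0.2], [0.9, 0.05, 0.05]):
    assert min(payoff(x, q) for x in range(n)) <= 1 / n

print("sup_gamma inf_sigma =", 1 / n)  # 1/3
```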
8
What is the range of the value function?
There is a formula φ such that for all r ∈ [0, 1] there is a model M_r s.t. V_∅(φ) = r in M_r.
Proof: let φ = ∀x ∀y (∃z\{}) (E(x, y) → I(x, y, z));
dom(M_r) = the unit circumference;
E^{M_r} = {(a, b) : the arc from a to b is exactly 2rπ long};
I^{M_r} = {(a, b, c) : c is in the arc from a to b} (arcs are taken counterclockwise).
[Figure: unit circle with an arc of length 2rπ from x to y; Eloise places z blindly: Eloise wins if z falls inside the arc, otherwise Abelard wins.]
9
What about finite models? If |dom(M)| < ∞, then V(φ) ∈ ℚ ∩ [0, 1] for all φ.
Proof: V(φ) is the solution of a linear optimization problem.
Variables: the weights λ₁, ..., λ_t of Eloise's pure strategies τ₁, ..., τ_t, and v (the value of the formula).
Constraints: a matrix A (with integer entries) such that A(λ₁, ..., λ_t)ᵀ = 0 iff λ₁, ..., λ_t correspond to a u.b.s.
10
Maximize v under the conditions:
Σᵢ λᵢ = 1;
Σᵢ λᵢ P(H(φ); σ_j; τᵢ) ≥ v for all j;
A(λ₁, ..., λ_t)ᵀ = 0;
λᵢ ≥ 0 for all i.
A linear optimization problem with rational coefficients always has a rational solution, as required.
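As an illustration (my own sketch using scipy, which the talk does not mention), here is this linear program for the V = 1/n matching game on a 3-element domain. The imperfect-recall constraint A(λ₁, ..., λ_t)ᵀ = 0 is trivial in this game, because Eloise moves only once, so it is omitted.

```python
import numpy as np
from scipy.optimize import linprog

n = 3
P = np.eye(n)  # P[j, i]: payoff when Abelard plays x = a_j, Eloise y = a_i

# Decision vector (lambda_1, ..., lambda_n, v); maximize v <=> minimize -v.
c = np.zeros(n + 1)
c[-1] = -1.0
# v - sum_i lambda_i * P[j, i] <= 0 for every Abelard pure strategy j:
A_ub = np.hstack([-P, np.ones((n, 1))])
b_ub = np.zeros(n)
# sum_i lambda_i = 1:
A_eq = np.hstack([np.ones((1, n)), np.zeros((1, 1))])
b_eq = np.array([1.0])
bounds = [(0, None)] * n + [(None, None)]  # lambdas >= 0, v free

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(res.x[:n], res.x[-1])  # lambdas ~ (1/3, 1/3, 1/3), v = 1/3: rational
```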
11
If |dom(M)| > 1, then for every r = p/q ∈ ℚ ∩ [0, 1] there is a formula φ such that V(φ) = r.
Proof: [construction omitted on the slide as extracted]
12
For finite models M, the Minimax Theorem holds: if |dom(M)| < ∞, then for every formula φ and assignment s there exist two u.b.s. β_e and γ_e s.t.
sup_γ P(H_s(φ); β_e; γ) ≤ P(H_s(φ); β_e; γ_e) ≤ inf_β P(H_s(φ); β; γ_e).
As a consequence, ∀β ∃τ P(H_s(φ); β; τ) ≥ r iff ∃γ ∀σ P(H_s(φ); σ; γ) ≥ r.
Proof: create a new player for every possible subformula of φ and divide them into two "parties", corresponding to Abelard and Eloise. Then proceed as in the usual proof.
13
For infinite models, the theorem does not hold.
Example: in (ℕ, <), consider the formula φ = ∀x (∃y\{}) (x < y).
[Figure: the infinite game tree, with Abelard's strategies p₁, p₂, p₃, ... choosing x = 0, 1, 2, ... and Eloise's blind strategies q₁, q₂, q₃, ... choosing y.]
No equilibrium pair exists!
14
From now on, we will only consider finite models.
Aim: adapt Hodges' semantics to this "probabilistic dependence logic".
A probabilistic team μ with domain dom(μ) is a probability distribution over all assignments s with domain dom(μ).
Given a probabilistic team μ and a formula φ in NNF, the game H_μ(φ) is as follows: an initial chance move selects the starting position (φ, sᵢ) with probability μ(sᵢ), and play then continues as in H_{sᵢ}(φ).
15
Linear combination: given two teams ξ₁ and ξ₂ and a p ∈ [0, 1],
(p ξ₁ + (1−p) ξ₂)(s) = p ξ₁(s) + (1−p) ξ₂(s).
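In code, with probabilistic teams represented as dictionaries from assignments to weights (a minimal sketch; the encoding of assignments as tuples of (variable, value) pairs is mine):

```python
def combine(p, xi1, xi2):
    """(p*xi1 + (1-p)*xi2)(s) = p*xi1(s) + (1-p)*xi2(s)."""
    support = set(xi1) | set(xi2)
    return {s: p * xi1.get(s, 0.0) + (1 - p) * xi2.get(s, 0.0)
            for s in support}

xi1 = {(("x", 0),): 1.0}
xi2 = {(("x", 0),): 0.5, (("x", 1),): 0.5}
print(combine(0.4, xi1, xi2))  # weights 0.7 on x=0 and 0.3 on x=1
```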
16
Supplement: given a team μ and a function F: s ↦ f, where each f is a probability distribution on M,
μ[F/x](s[m/x]) = μ(s) · (F(s)(m)).
[Worked table showing μ, F, and μ[F/z] omitted.]
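A matching sketch of the supplement operation, under the assumption that x does not yet occur in dom(μ), so each extended assignment s[m/x] has a unique origin (s, m):

```python
def supplement(mu, F, x):
    """mu[F/x]: extend each assignment s with a value m for x drawn from F(s)."""
    out = {}
    for s, weight in mu.items():
        for m, prob in F(s).items():
            extended = s + ((x, m),)
            out[extended] = out.get(extended, 0.0) + weight * prob
    return out

mu = {(("x", 0),): 0.5, (("x", 1),): 0.5}
F = lambda s: {0: 0.5, 1: 0.5}  # F(s) ignores s: a "blind" choice for z
print(supplement(mu, F, "z"))   # four extended assignments, weight 0.25 each
```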
17
A probabilistic team μ is an r-trump of φ iff ∃γ ∀σ P(H_μ(φ); σ; γ) ≥ r. Then, we define T as
T = {(φ, μ, r) : μ is an r-trump of φ}.
The task: characterize the set T compositionally.
1) If φ is a literal, (φ, μ, r) ∈ T iff Σ_{s ⊨ φ} μ(s) ≥ r;
2) (ψ ∨ θ, μ, r) ∈ T iff there exist p, ξ₁, ξ₂, r₁, r₂ such that μ = p ξ₁ + (1−p) ξ₂; (ψ, ξ₁, r₁), (θ, ξ₂, r₂) ∈ T; and p r₁ + (1−p) r₂ ≥ r.
18
3) (∃x ψ, μ, r) ∈ T iff there is an F such that (ψ, μ[F/x], r) ∈ T;
4) (∃x\V ψ, μ, r) ∈ T iff there is an F such that (ψ, μ[F/x], r) ∈ T and F(s) = F(s′) whenever s and s′ coincide over V;
5) (¬ψ, μ, r) ∈ T iff, for all r′ > 1−r, (ψ, μ, r′) ∉ T.
Since V_μ(φ) = sup{r : (φ, μ, r) ∈ T}, this gives us a compositional way of finding V_μ(φ)!
19
1) If φ is a literal, V_μ(φ) = Σ_{s ⊨ φ} μ(s);
2) V_μ(ψ ∨ θ) = sup{p V_{ξ₁}(ψ) + (1−p) V_{ξ₂}(θ) : p ξ₁ + (1−p) ξ₂ = μ};
3) V_μ(∃x ψ) = sup_F V_{μ[F/x]}(ψ);
4) V_μ(∃x\V ψ) = sup{V_{μ[F/x]}(ψ) : F(s) = F(s′) whenever s and s′ coincide over V};
5) V_μ(¬ψ) = 1 − V_μ(ψ).
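A brute-force sketch of clauses 1, 3 and 4 over a two-element model. It assumes that for the suprema in clauses 3 and 4 it is enough to range over deterministic F (V_{μ[F/x]}(ψ) is affine in each distribution F(s), so the sup is attained at a deterministic choice); the encoding and names are mine.

```python
from itertools import product

M = [0, 1]  # the domain

def value_literal(mu, sat):
    """Clause 1: V_mu(phi) = sum of mu(s) over assignments s with s |= phi."""
    return sum(w for s, w in mu.items() if sat(dict(s)))

def value_exists(mu, x, sat, V=None):
    """Clauses 3-4: sup over deterministic F; V=None means unrestricted."""
    best = 0.0
    keys = sorted(mu)
    for choice in product(M, repeat=len(keys)):
        F = dict(zip(keys, choice))
        if V is not None:  # uniformity: F(s) may depend only on s's V-part
            seen = {}
            if any(seen.setdefault(tuple(kv for kv in s if kv[0] in V),
                                   F[s]) != F[s] for s in keys):
                continue
        supp = {s + ((x, F[s]),): w for s, w in mu.items()}
        best = max(best, value_literal(supp, sat))
    return best

mu = {(("x", 0),): 0.5, (("x", 1),): 0.5}
print(value_exists(mu, "y", lambda s: s["x"] == s["y"]))           # 1.0
print(value_exists(mu, "y", lambda s: s["x"] == s["y"], V=set()))  # 0.5
```

The two printed values match the earlier game analysis: ∃y(x = y) has value 1, while (∃y\{})(x = y) has value 1/2 on this two-element team.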
20
If φ is first order, then V_μ(φ) = Σ_{s ⊨ φ} μ(s). Proof: by structural induction.
What about dependence atomic formulas?
Theorem: V_μ(=(t₁, ..., t_n)) = sup{Σ_{s ∈ B} μ(s) : B ⊨ =(t₁, ..., t_n)}.
Example: V_μ(=(x, y)) = 0.6 + 0.1 = 0.7 for the team μ shown on the slide [table omitted].
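The theorem reduces the value of a dependence atom to a grouping computation: the best B ⊨ =(x, y) keeps, for every value of x, the heaviest block of assignments that agree on y. A sketch, with a made-up team chosen to reproduce the 0.6 + 0.1 = 0.7 of the lost example table:

```python
from collections import defaultdict

mu = {(("x", 0), ("y", 0)): 0.6,
      (("x", 0), ("y", 1)): 0.2,
      (("x", 1), ("y", 0)): 0.1,
      (("x", 1), ("y", 1)): 0.1}

def value_dep(mu, xs, y):
    """V_mu(=(xs, y)): group by the xs-values, keep the heaviest y per group."""
    groups = defaultdict(lambda: defaultdict(float))
    for s, w in mu.items():
        d = dict(s)
        groups[tuple(d[v] for v in xs)][d[y]] += w
    return sum(max(ys.values()) for ys in groups.values())

print(value_dep(mu, ("x",), "y"))  # 0.6 + 0.1 = 0.7
```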
21
Proof: use the Minimax Theorem. An optimal τ sets y₁ = t₁, ..., y_{n−1} = t_{n−1}; then
V_μ(=(t₁, ..., t_n)) = sup_f Σ{μ(s) : t_n = f(t₁, ..., t_{n−1})},
where f ranges over the functions from dom(M)^{n−1} to dom(M).
The theorem follows by letting B_f = {s : t_n = f(t₁, ..., t_{n−1})}.
22
This is very similar to a measure of approximate functional dependency used in database theory (Kivinen and Mannila, 1995):
G₃(X → Y, r) = |r| − max{|r′| : r′ ⊆ r, r′ ⊨ X → Y};
g₃(X → Y, r) = G₃(X → Y, r) / |r|.
If μ_r is the probabilistic team representing r, then g₃(X → {A_q}, r) = 1 − V_{μ_r}(=(X, A_q)).
Example [relation table omitted]: G₃({x} → {y}) = 3 + 0 = 3; g₃({x} → {y}) = 3/(6 + 3 + 0 + 1) = 0.3.
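A sketch relating the two measures on a small made-up relation (the slide's own table was lost; this one is chosen so that g₃({x} → {y}) = 0.3 on 10 tuples):

```python
from collections import defaultdict

r = [("a", 1)] * 3 + [("a", 2)] * 3 + [("b", 1)] * 4  # 10 tuples (x, y)

def G3(r):
    """Tuples that must be removed so that x functionally determines y."""
    groups = defaultdict(lambda: defaultdict(int))
    for x, y in r:
        groups[x][y] += 1
    return sum(sum(ys.values()) - max(ys.values()) for ys in groups.values())

g3 = G3(r) / len(r)
print(G3(r), g3)  # 3 0.3
# The team mu_r puts weight 1/10 on each tuple; V_{mu_r}(=(x, y)) = 7/10,
# so g3 = 1 - V, as claimed:
assert abs(g3 - (1 - 0.7)) < 1e-12
```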
23
Problem: the conjunction of dependence formulas does not have the intended interpretation.
Example: let φ := =(x) ∧ =(y) and let μ be the team shown on the slide [table omitted]. Then V_μ(φ) ≥ 1/2, although every subteam constant on both x and y has weight 1/3.
Proof: for every subteam ξ of μ, V_ξ(=(x)), V_ξ(=(y)) ≥ 1/2. Thus, for any split μ = p ξ₁ + (1−p) ξ₂,
V_μ(=(x) ∧ =(y)) = p V_{ξ₁}(=(x)) + (1−p) V_{ξ₂}(=(y)) ≥ p/2 + (1−p)/2 = 1/2.
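A grid check of the bound just proved, for my reconstruction of the lost team: μ uniform over s₁ = {x:0, y:0}, s₂ = {x:0, y:1}, s₃ = {x:1, y:0}. A split μ = p ξ₁ + (1−p) ξ₂ is parametrized by aᵢ = p ξ₁(sᵢ) ∈ [0, 1/3]; on a subteam, the value of =(x) is the weight of its heaviest constant-x block, and similarly for =(y).

```python
from itertools import product

w = 1 / 3
steps = [i * w / 20 for i in range(21)]
best = min(
    max(a1 + a2, a3)                    # p * V_xi1(=(x)): x-blocks {s1,s2}, {s3}
    + max((w - a1) + (w - a3), w - a2)  # (1-p) * V_xi2(=(y)): y-blocks {s1,s3}, {s2}
    for a1, a2, a3 in product(steps, repeat=3)
)
print(best)  # 0.5 on this grid: the 1/2 bound is attained, never undercut
assert best >= 0.5 - 1e-9
```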
24
Because of this, V_μ(∃x\V ψ) is not always equal to V_μ(∃x (=(V, x) ∧ ψ)).
Example: let φ = (∃z\{}) (=(y) ∧ x = z), φ′ = ∃z (=(z) ∧ =(y) ∧ x = z), and let μ be as before.
Then V_μ(φ) ≤ 1/3, but V_μ(φ′) ≥ 1/2.
25
The problem is in the conjunction: "Abelard checks either ψ or θ" vs. "Abelard checks ψ; if it turns out to be true, he checks θ too".
It is possible to introduce a sequential conjunction, similar to those of (Groenendijk and Stokhof, 1991) and (Abramsky, 2006).
26
Modify the definition of the game: positions have the form (ψ, s | ψ₁, ...); after player II (Eloise) verifies ψ, she must also verify ψ₁, ...
(ψ ∧ θ, s | ψ₁, ...) ↦ (ψ, s | θ, ψ₁, ...).
The other game rules are adapted in the obvious way (plus a technical change in the definition of uniform strategy).
Then V_μ(∃x\V ψ) = V_μ(∃x (=(V, x) ∧ ψ)) and the conjunction of dependence atomic formulas works as advertised. The compositional semantics can be adapted.
27
Conclusions:
1) Not all undetermined formulas are equally undetermined;
2) Hodges' compositional semantics can be adapted;
3) There are relations between logics of imperfect information and database theory;
4) Dynamic connectives for logics of imperfect information are interesting.
Thank you!