Recurrence operations


1 Recurrence operations
Episode 9: Recurrence operations
What the recurrences are all about
An informal look at the main types of recurrences
Parallel recurrence versus branching recurrence
Formal definition of parallel (co)recurrence
Evolution sequences for parallel recurrences
Rimplicative reductions
Primplicatively reducing multiplication to addition
Primplicatively reducing the RELATIVES problem to the PARENTS problem
Primplicatively reducing the Kolmogorov complexity problem to the halting problem
Brimplication: the ultimate concept of algorithmic reduction
Definition of branching (co)recurrence
Evolution sequences for branching recurrences
A look at some valid and invalid principles with recurrences
Alternative definition of branching (co)recurrence

2 What the recurrences are all about
9.1 What is common to the members of the family of game operations called recurrence operations is that, when applied to a game A, they turn it into a game playing which means repeatedly playing A. In terms of resources, recurrence operations generate multiple "copies" of A, thus making A a reusable/recyclable resource. In classical logic, recurrence-style operations would be meaningless (redundant), because classical logic, as we know, is resource-blind and thus sees no difference between one and multiple copies of A. In the resource-conscious computability logic, however, recurrence operations are not only meaningful, but also necessary to achieve a satisfactory level of expressiveness and realize its potential and ambitions. Hardly any computer program is used only once; rather, it is run over and over again. Loops within such programs also assume multiple repetitions of the same subroutine. In general, the tasks performed in real life by computers, robots or humans are typically recurring ones or involve recurring subtasks. There is more than one naturally emerging recurrence operation. The differences between the various recurrence operations lie in how "repetition" or "reusage" is exactly understood.

3 The sequential recurrence of Chess
9.2 Imagine a computer that has a program successfully playing Chess. The resource that such a computer provides is obviously something stronger than just Chess, for it permits its user to play Chess as many times as the user wishes, while Chess, as such, only assumes one play. The simplest operating system would allow the user to start a session of Chess, then --- after finishing or abandoning and destroying it --- start a new play again, and so on. The game that such a system plays --- i.e. the resource that it supports/provides --- is the sequential recurrence of Chess, which assumes an unbounded number of plays of Chess in a sequential fashion. The operation turning Chess into this game is called sequential recurrence (read "srecurrence").


11 The parallel recurrence of Chess
9.3 A more advanced operating system, however, would not require the user to destroy old sessions before starting new ones; rather, it would allow the user to run as many parallel sessions as needed. This is what is captured by the parallel recurrence of Chess --- written here as ∧|Chess --- meaning nothing but the infinite parallel conjunction Chess ∧ Chess ∧ Chess ∧ ... Hence the operation ∧| is called parallel recurrence.


20 The branching recurrence of Chess
9.4 A really good operating system, however, would not only allow the user to start new sessions of Chess without destroying old ones; it would also make it possible to branch/replicate any particular stage of any particular session, i.e. create any number of "copies" of any already reached position of the multiple parallel plays of Chess, thus giving the user the possibility to try different continuations from the same position. What corresponds to this intuition is the branching recurrence of Chess --- written here as ∘|Chess --- where the operation ∘| is called branching recurrence.


35 The parallel versus branching recurrences
9.5 In these notes we will take a close look only at the parallel and branching sorts of recurrences. At the intuitive level, the difference between ∘| and ∧| is that in ∘|A, unlike ∧|A, Environment does not have to restart A from the very beginning every time it wants to reuse it (as a resource); rather, Environment is (essentially) allowed to backtrack to any of the previous --- not necessarily starting --- positions and try a new continuation from there, thus depriving the adversary of the possibility to reconsider the moves it has already made in that position. This is in fact the type of reusage every purely software resource allows or would allow in the presence of an advanced operating system and unlimited memory: one can start running process A; then fork it at any stage, thus creating two threads that have a common past but possibly diverging futures (with the possibility to treat one of the threads as a "backup copy" and preserve it for backtracking purposes); then further fork any of the branches at any time; and so on.
The less flexible type of reusage of A assumed by ∧|A, on the other hand, is closer to what infinitely many autonomous physical resources would naturally offer, such as an unlimited number of independently acting robots each performing task A, or an unlimited number of computers with limited memories, each one only capable of and responsible for running a single thread of process A. Here the effect of replicating/forking an advanced stage of A cannot be achieved unless, by good luck, there are two identical copies of the stage, meaning that the corresponding two robots or computers have so far acted in precisely the same ways.

36 Parallel recurrence and corecurrence defined
9.6 Definition 9.6.a Let A=(Vr,A) be a game. Then ∧|A (read "precurrence A") is the game G=(Vr,G) such that:
Γ ∈ Lr_e^G iff every move of Γ starts with "c." for some c ∈ Constants and, for each such c, Γ^{c.} ∈ Lr_e^A.
Wn_e^G⟨Γ⟩ = ⊤ iff, for all c ∈ Constants, Wn_e^A⟨Γ^{c.}⟩ = ⊤.
Definition 9.6.b Let A=(Vr,A) be a game. Then ∨|A (read "coprecurrence A") is the game G=(Vr,G) such that:
Γ ∈ Lr_e^G iff every move of Γ starts with "c." for some c ∈ Constants and, for each such c, Γ^{c.} ∈ Lr_e^A.
Wn_e^G⟨Γ⟩ = ⊥ iff, for all c ∈ Constants, Wn_e^A⟨Γ^{c.}⟩ = ⊥.
We see that, indeed, ∧|A = A ∧ A ∧ A ∧ ... and ∨|A = A ∨ A ∨ A ∨ ...
And, as always, we have: ¬∧|A = ∨|¬A and ¬∨|A = ∧|¬A.

37 Evolution sequences for parallel recurrences
9.7 In evolution sequences, a position of ∧|A [resp. ∨|A] can be represented as an infinite parallel conjunction [resp. disjunction], with the infinite contiguous block of "not-yet-activated" conjuncts [resp. disjuncts] starting from item #n combined together and written as ∧|_n A [resp. ∨|_n A].
Move  | Game (position)
      | ∧|⊓x⊔y(y=x²)   (can also be written as ∧|_0 ⊓x⊔y(y=x²))
1.7   | ⊓x⊔y(y=x²) ∧ ⊔y(y=7²) ∧ ∧|_2 ⊓x⊔y(y=x²)
1.49  | ⊓x⊔y(y=x²) ∧ 49=7² ∧ ∧|_2 ⊓x⊔y(y=x²)
0.3   | ⊔y(y=3²) ∧ 49=7² ∧ ∧|_2 ⊓x⊔y(y=x²)
0.9   | 9=3² ∧ 49=7² ∧ ∧|_2 ⊓x⊔y(y=x²)
2.5   | 9=3² ∧ 49=7² ∧ ⊔y(y=5²) ∧ ∧|_3 ⊓x⊔y(y=x²)
2.25  | 9=3² ∧ 49=7² ∧ 25=5² ∧ ∧|_3 ⊓x⊔y(y=x²)
Who is the winner in this run? Machine.
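To connect this table with Definition 9.6, here is a small Python sketch (not part of the original slides) showing how a run of ∧|A decomposes into the runs of its individual copies: every move must have the form "c.alpha", and copy #c receives the alphas carrying that prefix.

from collections import defaultdict

def split_into_copies(run):
    # Map each constant c to the run of A played in copy #c of the precurrence.
    copies = defaultdict(list)
    for move in run:
        c, alpha = move.split(".", 1)   # a move without a "c." prefix would be illegal
        copies[c].append(alpha)
    return dict(copies)

# The run from the table above:
run = ["1.7", "1.49", "0.3", "0.9", "2.5", "2.25"]
print(split_into_copies(run))
# -> {'1': ['7', '49'], '0': ['3', '9'], '2': ['5', '25']}

Machine wins the overall run iff it wins every one of these copy-runs; here it also wins every untouched copy, since Environment never made its ⊓-choice there.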

38 Rimplications (weak reductions)
9.8 The two sorts of recurrences naturally induce two "rimplication" operations: the primplication of A and B (read "A primplication B"), defined as ∧|A → B, and the brimplication of A and B (read "A brimplication B"), defined as ∘|A → B. Rimplications are weak sorts of reductions. The difference between them and the ordinary reduction → is that, in a rimplicative reduction of problem B to problem A, the resource A can be (re)used many times. Remember Turing reduction from Episode 2. It, too, allows the reducing machine to use the oracle ("resource") any number of times rather than just once. As it turns out, both of our rimplicative reducibility relations are conservative generalizations of Turing reducibility. That is in the sense that, when restricted to the traditional sorts of problems (such as deciding a predicate or computing a function), the Turing reducibility of B to A coincides with the computability of the primplication of A and B, as well as with that of their brimplication. The differences between the two rimplications become relevant only when these operations are applied to non-traditional (properly interactive) problems.

39 Primplicatively reducing multiplication to addition
9.9 Primplicatively reducing multiplication to addition means winning the game ∧|⊓x⊓y⊔z(z=x+y) → ⊓x⊓y⊔z(z=x·y). Here is a sample run:
Move   | Game (position)
       | ∧|⊓x⊓y⊔z(z=x+y) → ⊓x⊓y⊔z(z=x·y)
1.3    | ∧|⊓x⊓y⊔z(z=x+y) → ⊓y⊔z(z=3·y)
1.7    | ∧|⊓x⊓y⊔z(z=x+y) → ⊔z(z=3·7)
0.0.7  | (⊓y⊔z(z=7+y) ∧ ∧|_1 ⊓x⊓y⊔z(z=x+y)) → ⊔z(z=3·7)
0.0.7  | (⊔z(z=7+7) ∧ ∧|_1 ⊓x⊓y⊔z(z=x+y)) → ⊔z(z=3·7)
0.0.14 | (14=7+7 ∧ ∧|_1 ⊓x⊓y⊔z(z=x+y)) → ⊔z(z=3·7)
0.1.14 | (14=7+7 ∧ ⊓y⊔z(z=14+y) ∧ ∧|_2 ⊓x⊓y⊔z(z=x+y)) → ⊔z(z=3·7)
0.1.7  | (14=7+7 ∧ ⊔z(z=14+7) ∧ ∧|_2 ⊓x⊓y⊔z(z=x+y)) → ⊔z(z=3·7)
0.1.21 | (14=7+7 ∧ 21=14+7 ∧ ∧|_2 ⊓x⊓y⊔z(z=x+y)) → ⊔z(z=3·7)
1.21   | (14=7+7 ∧ 21=14+7 ∧ ∧|_2 ⊓x⊓y⊔z(z=x+y)) → 21=3·7
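The strategy behind this run is simply repeated addition: to compute x·y, the machine queries the reusable addition resource x-1 times. Below is a minimal Python sketch of that idea (not part of the original slides); the antecedent copies of ∧|⊓x⊓y⊔z(z=x+y) are modeled by a hypothetical callable add that may be invoked any number of times.

def multiply_via_addition(x: int, y: int, add) -> int:
    # Compute x*y using only repeated calls to the addition resource.
    if x == 0:
        return 0
    acc = y
    for _ in range(x - 1):       # x-1 additions: y+y, (y+y)+y, ...
        acc = add(acc, y)        # each call corresponds to activating a fresh conjunct
    return acc

# Mirroring the run above (x=3, y=7): 7+7=14, then 14+7=21, answer 21.
print(multiply_via_addition(3, 7, lambda a, b: a + b))   # -> 21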

40 Primplicatively reducing the RELATIVES problem to the PARENTS problem
9.10 Let Relatives(x,y) = "x and y have a common ancestor within five generations". A marriage registration bureau may permit a marriage between x and y only when they are not relatives. The bureau does not have a program (database) telling who is whose relative. It does, however, have a program for telling anyone's mother and father, and that program (as usual) is reusable. Can the bureau operate successfully?
Bureau's goal (problem): ⊓x⊓y(Relatives(x,y) ⊔ ¬Relatives(x,y))
Bureau's resource: ⊓x⊔y⊔z(y=Mother(x) ∧ z=Father(x))
The overall problem that the bureau in fact has to solve:
∧|⊓x⊔y⊔z(y=Mother(x) ∧ z=Father(x)) → ⊓x⊓y(Relatives(x,y) ⊔ ¬Relatives(x,y))
Here is a strategy: Wait for Environment's moves 1.m and 1.n (triggered by an application for marriage between m and n). Repeatedly using the antecedent, find the names of all of the ancestors of m within five generations (62 names altogether), and do the same for n. Compare the two sets of ancestors. If they are disjoint, make the move 1.1; otherwise make the move 1.0.
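A minimal Python sketch of this strategy (not part of the original slides): the reusable parents resource is modeled by a hypothetical callable parents(person) returning the pair (mother, father), and the final True/False answer corresponds to the moves 1.1 and 1.0 respectively.

def ancestors_within(person, parents, generations=5):
    # Collect all ancestors of `person` up to the given number of generations.
    frontier, found = [person], set()
    for _ in range(generations):
        frontier = [p for q in frontier for p in parents(q)]   # one generation up
        found.update(frontier)                                 # 2+4+8+16+32 = 62 names in total
    return found

def may_marry(x, y, parents):
    # True iff x and y have no common ancestor within five generations.
    return ancestors_within(x, parents).isdisjoint(ancestors_within(y, parents))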

41 ⊓x⊓y(Halts(x,y) ⊔ Halts(x,y)) ⊓x⊔y(y=k(x))
Rimplicatively reducing the Kolmogorov complexity problem to the halting problem 9.11 Let k(x) mean “The Kolmogorov complexity of x” (cf. Slide 2.22). In Episode 2 we showed that both the acceptance problem and the Kolmogorov complexity problem are Turing reducible to the Halting problem. There was, however, a difference: For solving the acceptance problem, a Turing machine needed to query an oracle for the halting problem only once; on the other hand, solving the Kolmogorov complexity problem essentially requires multiple queries of the oracle. Hence, it is no surprise that, while the acceptance problem is pimplicatively () reducible to the halting problem (Slide 7.7), the Kolmogorov complexity problem is only rimplicatively reducible to the halting problem. Specifically, either one of the following problems has an algorithmic solution, but the problem(s) become unsolvable with  instead of or Halting problem Kolmogorov complexity problem ⊓x⊓y(Halts(x,y) ⊔ Halts(x,y)) ⊓x⊔y(y=k(x)) ⊓x⊓y(Halts(x,y) ⊔ Halts(x,y)) ⊓x⊔y(y=k(x))

42 ⊓x⊓y(Halts(x,y) ⊔ Halts(x,y)) ⊓x⊔y(y=k(x))
A strategy for primplicatively reducing the Kolmogorov complexity problem to the halting problem 9.12 ⊓x⊓y(Halts(x,y) ⊔ Halts(x,y)) ⊓x⊔y(y=k(x)) Seeing the antecedent as an infinite -conjunction, here is a machine’s winning strategy for the above game. It is based on the solution given on Slide 2.23. Step 1. Create a variable i and initialize it to 0. Step 2. Wait till Environment selects some value m for x in the consequent (i.e. makes the move 1.m), signifying asking you about the Kolmogorov complexity of m. Step 3. Make the two moves 0.i.i and 0.i.0. The meaning of these two moves is asking Environment --- in the ith -conjunct of the antecedent --- whether machine #i halts on input 0. Environment will have to answer this question and answer correctly, or else it loses. Step 4. If the answer is “No” (move 0.i.0), increment i to i+1 and repeat Step 3. Step 5. Otherwise, if the answer is “Yes” (move 0.i.1), simulate machine #i on input 0 until it halts; if the simulation shows that the machine returns m, make the move 1.|i|, thus saying that |i| is the Kolmogorov complexity of m (|i| means the size of i); otherwise, increment i to i+1 and repeat Step 3.
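Here is a minimal Python sketch of the above loop (not part of the original slides). Two hypothetical stand-ins are assumed: halts(i, n) for Environment's (necessarily correct) answers in the antecedent, and run_machine(i, n) for faithfully simulating machine #i on input n until it halts; size(i) plays the role of |i|.

def size(i: int) -> int:
    return len(bin(i)) - 2            # number of bits of i, one possible reading of |i|

def kolmogorov_complexity(m, halts, run_machine) -> int:
    i = 0                             # Step 1
    while True:                       # Steps 3-5, with i = 0, 1, 2, ...
        if halts(i, 0):               # "Yes" answer (move 0.i.1)
            if run_machine(i, 0) == m:
                return size(i)        # answer |i| (move 1.|i|)
        i += 1                        # "No" answer, or wrong output: try the next i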

43 The ultimate concept of algorithmic reduction
9.13 While primplication is natural and technically easy to define, understand or visualize, brimplication is still a more interesting operation of reduction. What makes it special is the following belief. The latter, in turn, is based on the belief that branching recurrence (and by no means parallel recurrence) is the operation allowing to reuse its argument in the strongest algorithmic sense possible.
Thesis. Brimplicative reducibility, i.e. algorithmic solvability (computability) of ∘|A → B, is an adequate mathematical counterpart of our intuition of reducibility in the weakest --- and thus broadest --- algorithmic sense possible. Specifically: (a) Whenever a problem B is brimplicatively reducible to a problem A, B is also algorithmically reducible to A according to anyone's reasonable intuition. (b) Whenever a problem B is algorithmically reducible to a problem A according to anyone's reasonable intuition, B is also brimplicatively reducible to A.
This is pretty much in the same sense as, by the Church-Turing thesis, a function f is computable by a Turing machine iff f has an algorithmic solution according to everyone's reasonable intuition.
So, it is time to explain ∘| in more technical detail. We will implicitly assume that the game to which ∘| is applied is a constant one. When A=(Vr,A) is a non-constant game, ∘|A should be understood as the game (Vr,G) such that, for any Vr-valuation e, G(e) = ∘|(A(e)).

44 How, exactly, ∘|-games are played
9.14 When playing ∘|A, at any time we have a binary tree of positions of A. The nodes (or, more generally, the branches, which may be finite or infinite) of such a tree are seen as bit strings. Every time a player makes a (legal) move α of A, it should indicate for which of the many positions of A the move is meant. This is done by prefixing α with "w.", where w is the bit string representing the position (its "address"). At the beginning, the tree only has one node (the root), whose address is the empty string; a first move E2E4 made there is thus written ".E2E4".


46 How, exactly, ∘|-games are played
9.14 Creating new branches in a ∘|-game is exclusively Environment's privilege. A position (represented by a bit string) w is branched by the replicative move "w:". Note: here w cannot be an internal node of the tree; it has to be a leaf.


48 How, exactly, ∘|-games are played
9.14 Suppose the root has been replicated by the move ":", creating the leaves 0 and 1, and now the move E7E5 is made by Environment in leaf 0. So, the move is written 0.E7E5. And so on.


59 How, exactly, ∘|-games are played
9.14 While w in a replicative move (a move of the form w:) should always be a leaf of the current tree, w does not necessarily have to be a leaf in a non-replicative move w.α; it may as well be an internal node of the tree. In such a case, the effect of the move w.α is making the move α in all leaves that descend from w. For example, if in the present position the move 01.A7A5 is made, it will be considered legal, and will have the same effect as the two consecutive moves 010.A7A5 and 011.A7A5.


61 Legal runs of ∘|-games
9.15 The run generated in the preceding example is ⟨.E2E4, :, 0.E7E5, 1.G8F6, 0.D1H5, 0:, 1.F1C4, 01.G7G6, 01.G1F3, 00.H8C5, 01:, 1:, 11.D7D5, 01.A7A5⟩.

64 Legal runs of ∘|-games
9.15 To summarize, each legal move in a given position (seen as a tree) of a ∘|-game is one of the following:
(a) A "replicative move" w:, where w is a leaf. Such a move can only be made by Environment. The effect of this move is splitting leaf w into two leaves w0 and w1, thus creating two branches out of one. The positions in these two new leaves will be the same as the position in the old leaf.
(b) A "nonreplicative (ordinary) move" w.α, where w is a leaf and α is a legal move of A in the position at that leaf. Such a move can be made by either player. The effect of this move is making move α in the position of A that is found at that leaf.
(c) A "nonreplicative (ordinary) move" w.α, where w is an internal node and α is a legal move of A in all positions that are found at the leaves descending from w. Such a move can be made by either player. The effect of this move is making the move α in all of the above-mentioned positions.
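A small illustrative Python sketch of this bookkeeping (not part of the original slides): positions of a ∘|-game form a binary "bitstring tree", with the positions of the underlying game A modeled simply as lists of A-moves, and the three kinds of moves acting on the tree as described above.

class BitstringTree:
    def __init__(self):
        self.leaves = {"": []}                   # leaf address -> position (run) of A

    def replicate(self, w):
        # Replicative move "w:" (Environment only): split leaf w into w0 and w1.
        pos = self.leaves.pop(w)                 # w must currently be a leaf
        self.leaves[w + "0"] = list(pos)
        self.leaves[w + "1"] = list(pos)

    def move(self, w, alpha):
        # Nonreplicative move "w.alpha": make alpha in every leaf descending from w
        # (this covers both the case where w is itself a leaf and where it is internal).
        for leaf in self.leaves:
            if leaf.startswith(w):
                self.leaves[leaf].append(alpha)

# Replaying the example run of 9.15:
t = BitstringTree()
t.move("", "E2E4"); t.replicate("")
t.move("0", "E7E5"); t.move("1", "G8F6"); t.move("0", "D1H5"); t.replicate("0")
t.move("1", "F1C4"); t.move("01", "G7G6"); t.move("01", "G1F3"); t.move("00", "H8C5")
t.replicate("01"); t.replicate("1"); t.move("11", "D7D5"); t.move("01", "A7A5")
print(sorted(t.leaves))    # -> ['00', '010', '011', '10', '11']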

65 Who wins a ∘|-game
9.16a This particular run will be considered won by Machine iff Machine is the winner in each of the five positions seen at the leaves of the tree. Similarly for any other (legal) finite run of any other ∘|-game. But we also need to understand how this extends to infinite runs, so we need a more general characterization. Let us call binary trees of the kind just seen bitstring trees. Each branch of such a tree can be understood as the corresponding bit string. Branches may be finite or infinite. Of course, infinite branches can only be generated by infinite runs in which replicative moves have been made infinitely many times. The branches that are not initial segments of some other ("longer") branches are said to be complete. Our tree thus has five complete branches: 00, 010, 011, 10 and 11.

66 Who wins a ∘|-game
9.16b When w is a complete branch of the bitstring tree generated by a legal run Γ of a game ∘|A, by Γ^{≼w} we denote the result of deleting from Γ all moves except those that are prefixed with "u." for some initial segment u of w (possibly u=w), and then deleting, in the remaining moves, those prefixes "u.". For example, if Γ is the run of 9.15, then Γ^{≼011} = ⟨E2E4, E7E5, D1H5, G7G6, G1F3, A7A5⟩. Intuitively, this is the run of the ordinary game of Chess played within branch 011; indeed, it is exactly the run of Chess that takes us to the position found at the corresponding leaf. Generally, every complete branch w induces a legal run of A, namely the run Γ^{≼w}. Then, Machine is considered to have won a legal run Γ of ∘|A iff, for each complete branch w of the bitstring tree generated in the play, Machine has won the run Γ^{≼w} of A.
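The projection Γ^{≼w} is easy to compute; here is a minimal Python sketch (not part of the original slides), with moves written as strings in the "w.alpha" / "w:" format used above.

def project(run, w):
    # Return the run of A played within branch w (a bit string).
    result = []
    for move in run:
        if move.endswith(":"):             # replicative move: carries no move of A
            continue
        u, alpha = move.split(".", 1)      # a nonreplicative move "u.alpha"
        if w.startswith(u):                # u is an initial segment of w (possibly u=w)
            result.append(alpha)
    return result

run = [".E2E4", ":", "0.E7E5", "1.G8F6", "0.D1H5", "0:", "1.F1C4", "01.G7G6",
       "01.G1F3", "00.H8C5", "01:", "1:", "11.D7D5", "01.A7A5"]
print(project(run, "011"))
# -> ['E2E4', 'E7E5', 'D1H5', 'G7G6', 'G1F3', 'A7A5']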

67 Branching corecurrence
9.17 The branching corecurrence of A (read "cobrecurrence A") is defined exactly like ∘|A, only with the roles of the two players interchanged. Specifically, the differences are that: in the branching corecurrence of A, it is Machine (rather than Environment) who can make replicative moves; and Machine wins a run Γ iff it wins Γ^{≼w} for at least one (rather than every) complete branch w of the bitstring tree generated in the play. This completes our semiformal definition of branching recurrence and corecurrence. If you need a more formal definition, see Section 4.6 of "In the beginning was game semantics". Also, the last slide of this Episode presents an alternative definition, which is technically simpler but less intuitive. As expected, negation interchanges the two operations: the negation of the branching recurrence of A is the branching corecurrence of ¬A, and vice versa.

68 The circle notation for trees of games
9.18 [Figure: a binary tree of Chess positions with five leaves, four of which hold distinct positions.] At the leaves of this tree we see four different positions. Each position Φ of Chess can be thought of as a game in its own right, specifically the game ⟨Φ⟩Chess, playing which (as we know from Episode 3) means playing Chess starting from position Φ. Let us call the four games A, B, C and D for the compactness of representation. A more compact way to represent the above tree is to use the "circle notation" and just write ((A∘(B∘B))∘(C∘D)), with the circles decorated according to whether this is a play of the branching recurrence or of the branching corecurrence of Chess. It should be clear how to represent any other finite tree in the above style. The circle notation will be very handy in visualizing evolution sequences of games with branching recurrences or corecurrences. The following slide shows an example.

69 Evolution sequences for branching recurrences
9.19
Move  | Game (position)
      | ∘|⊓x⊔y(y=x²)
:     | (⊓x⊔y(y=x²) ∘ ⊓x⊔y(y=x²))
0.7   | (⊔y(y=7²) ∘ ⊓x⊔y(y=x²))
0.49  | (49=7² ∘ ⊓x⊔y(y=x²))
1:    | (49=7² ∘ (⊓x⊔y(y=x²) ∘ ⊓x⊔y(y=x²)))
10.3  | (49=7² ∘ (⊔y(y=3²) ∘ ⊓x⊔y(y=x²)))
11.5  | (49=7² ∘ (⊔y(y=3²) ∘ ⊔y(y=5²)))
10.9  | (49=7² ∘ (9=3² ∘ ⊔y(y=5²)))
11.25 | (49=7² ∘ (9=3² ∘ 25=5²))
We got 3 complete branches (leaves), with ⊤ at each one. So Machine wins.
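For illustration, the same run can be replayed with the BitstringTree sketch given after 9.15 (again an illustration only, reusing that hypothetical class rather than anything from the slides):

# Assumes the BitstringTree class from the sketch after 9.15.
t = BitstringTree()
t.replicate("")                          # move ":"  -> leaves 0 and 1
t.move("0", "7"); t.move("0", "49")      # moves 0.7, 0.49
t.replicate("1")                         # move "1:" -> leaves 10 and 11
t.move("10", "3"); t.move("11", "5")     # moves 10.3, 11.5
t.move("10", "9"); t.move("11", "25")    # moves 10.9, 11.25
print(sorted(t.leaves))                  # -> ['0', '10', '11'], the three complete branches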

70 ⊓x(p(x)⊔p(x)) ( p(7)⊔p(7) ) ( p(7)⊔p(7) ∘ p(7)⊔p(7) )
Another example 9.20 Let p(x) be an arbitrary predicate. p(x) can be undecidable, or --- even worse --- we may not know what particular predicate p(x) is. Yet, the following game has an algorithmic winning strategy. We demonstrate one particular run with that strategy. Move Game ⊓x(p(x)⊔p(x)) .7 [wait for an Environment’s move] ( p(7)⊔p(7) ) : ( p(7)⊔p(7) ∘ p(7)⊔p(7) ) 0.0 ( p(7) ∘ p(7)⊔p(7) ) 1.1 ( p(7) ∘ p(7) ) [victory!] On the other hand, for some p(x), the same game with instead of , i.e. the game ⊓x(p(x)⊔p(x))  ⊓x(p(x)⊔p(x))  ⊓x(p(x)⊔p(x))  ... has no algorithmic solution. Specifically, Machine could have a hard time if (and only if) Environment makes different moves in different -disjuncts.

71 A look at some valid and invalid principles with recurrences
9.21 The logical behaviors of the two groups (parallel and branching) of recurrences are quite different. For example, ∘|P → ∧|P is valid while ∧|P → ∘|P is not, meaning that ∧|P is easier for the machine to win than ∘|P. Similarly (cf. 9.20), the branching corecurrence of ⊓x(P(x)⊔¬P(x)) is computable for every P, while its parallel counterpart ∨|⊓x(P(x)⊔¬P(x)) is not. Further examples of principles that hold for one sort of recurrence but fail for the other involve distributing a recurrence over ⊔ and modus-ponens-style principles for the corresponding rimplications.

72 An alternative definition of branching (co)recurrence
9.22 Below is an alternative definition of branching recurrence. It does not directly correspond to our earlier intuitive or semiformal characterizations of ∘|, but it is equivalent to the definition directly based on our intuitions --- equivalent in the sense of mutual reducibility of the two versions of ∘|A. The meaning of the notation Γ^{≼w} used below is as explained on Slide 9.16b.
Definition 9.22.a Let A=(Vr,A) be a game. Then ∘|A (read "brecurrence A") is the game G=(Vr,G) such that:
Γ ∈ Lr_e^G iff every move of Γ starts with "u." for some finite bit string u and, for every infinite bit string w, Γ^{≼w} ∈ Lr_e^A.
Wn_e^G⟨Γ⟩ = ⊤ iff, for every infinite bit string w, Wn_e^A⟨Γ^{≼w}⟩ = ⊤.
Definition 9.22.b Let A=(Vr,A) be a game. Then the branching corecurrence of A (read "cobrecurrence A") is the game G=(Vr,G) such that:
Γ ∈ Lr_e^G iff every move of Γ starts with "u." for some finite bit string u and, for every infinite bit string w, Γ^{≼w} ∈ Lr_e^A.
Wn_e^G⟨Γ⟩ = ⊥ iff, for every infinite bit string w, Wn_e^A⟨Γ^{≼w}⟩ = ⊥.

