1 Slides: Asaf Shapira & Oded Schwartz; Sonny Ben-Shimon & Yaniv Nahum. Notes: Leia Passoni, Reuben Sumner, Yoad Lustig & Tal Hassner. (from Oded Goldreich’s course lecture notes)

2 Introduction This lecture covers: Space Complexity; Non-Deterministic Space.

3 Complexity Functions Def: A function f is called constructible if it satisfies the following conditions: Positive: f: ℕ⁺ → ℕ⁺. Monotone: f(n+1) ≥ f(n) for all n. Constructive: ∃ a Turing machine M_f that, on input x, outputs a string of size f(|x|), in time O(|x| + f(|x|)) and in space O(f(|x|)). 4.1

4 Constructible functions Many “popular” complexity functions, e.g. n, log(n), n², n!, satisfy the above criteria. Odd things may occur, with regard to relations between complexity classes, if we don't choose these functions properly. Note: we therefore use time-constructible functions for time bounds and space-constructible functions for space bounds. 4.2
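As a small illustration (not from the original notes), here is a Python sketch of the “constructive” condition for the sample function f(n) = n²; a real constructibility argument is about Turing machines, and this analogue only makes the required input/output behaviour concrete:

def M_f(x: str) -> str:
    # On input x, output a string of length f(|x|) = |x|^2,
    # in time O(|x| + f(|x|)) and using O(f(|x|)) output cells.
    n = len(x)
    return "1" * (n * n)

print(len(M_f("abcd")))   # 16 = 4^2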

5 Space Complexity - The model: a 3-tape Turing machine: 1. Input tape – read only (charging only for the work tape is what enables sub-linear space). 2. Output tape – write only, usually unidirectional (omitted for decision problems). 3. Work tape – read & write (the length of the portion used corresponds to the space usage). 4.3

6 What kind of TM should we use? Any multi-tape TM can be simulated by an ordinary TM with a polynomial loss of efficiency. Hence, from here on, a TM will refer to the 3-tape TM described above.

7 DSPACE - Definition For every TM M and input x: W_M(x) = the index of the rightmost cell on the work tape scanned by M on x. S_M(n) = max over |x|=n of W_M(x) (the maximal amount of space used by M on inputs of length n). χ_L(x) = 1 if x ∈ L, 0 otherwise. DSPACE(S(n)) = { L | ∃ DTM M, ∀x M(x) = χ_L(x) and ∀n S_M(n) ≤ S(n) }

8 Sub-Logarithmic Space DSPACE(O(1)) is equivalent to the set of regular languages. Do we gain any strength by having sub-logarithmic space? Or formally: is DSPACE(o(log(n))) ⊋ DSPACE(O(1)), or is it DSPACE(o(log(n))) = DSPACE(O(1))? 4.4

9 Thm: DSPACE(o(log(n))) is a proper superset of DSPACE(O(1)). Proof: We will construct a language L s.t. L ∈ DSPACE(loglog(n)) but L ∉ DSPACE(O(1)), which will prove the theorem (since loglog(n) = o(log(n))). DSPACE(o(log(n))) ⊋ DSPACE(O(1))

10 Proof (contd.) - Definition of L L = { x_k | k ∈ ℕ, x_k = B_k,0$B_k,1$B_k,2$…$B_k,(2^k−1)$ } where B_k,i = the binary representation of i, of length k. For example: x_2 = 00$01$10$11$

11 Proof (contd.) Claim 1: L ∉ DSPACE(O(1)) (i.e., L is not a regular language). Proof: by using the Pumping Lemma. Claim 2: L ∈ DSPACE(loglog(n)). Proof: we will show an algorithm for deciding L that uses loglog(n) space.

12 The wrong way of proving Claim 2: 1) Check that the first block is all 0’s and that the last is all 1’s. 2) For any two consecutive blocks, check that the second is the binary increment of the first. Clearly (1) can be done in constant space, and (2) in log(k) space, which is loglog(n) space, since n = |x_k| = (k+1)·2^k. This works if x ∈ L, but it might use more than loglog(n) space if x ∉ L (e.g. on 0^m$1^m$ with m = n/2 − 1, the increment check needs a position counter of log(m) = Θ(log(n)) bits).

13 The correct solution
m = 1
while (true) {
  check that the last m bits of the first block are all 0’s
  check that the last m bits of the B_k,i blocks form an increasing sequence mod 2^m, and that each block has at least m bits
  check that the last m bits of the last block are all 1’s
  if you found an error, return false
  if m is the exact size of the B_k,i blocks, return true
  m = m + 1
}

14 The correct solution – An Example input: 000$001$010$011$100$101$110$111$ m=1: the last bit of each block is increasing mod 2^1 = 2. m=2: the last 2 bits of each block are increasing mod 2^2 = 4. m=3: the last 3 bits of each block are increasing mod 2^3 = 8; the entire sequence is increasing → return true.
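As an illustration only (not from the original notes), here is a Python sketch of the loop above; Python cannot enforce the loglog(n) space bound, but the code keeps only m and, at each comparison, the last m bits of two adjacent blocks, which is the point of the argument:

def in_L(x: str) -> bool:
    # Decide L = { B_k,0$B_k,1$...$B_k,(2^k - 1)$ } by the iterated "last m bits" check.
    if not x.endswith("$"):
        return False
    blocks = x[:-1].split("$")
    if any(b == "" or set(b) - {"0", "1"} for b in blocks):
        return False
    m = 1
    while True:
        if any(len(b) < m for b in blocks):
            return False                      # some block is shorter than m bits
        if set(blocks[0][-m:]) != {"0"}:
            return False                      # last m bits of the first block: all 0's
        if set(blocks[-1][-m:]) != {"1"}:
            return False                      # last m bits of the last block: all 1's
        for prev, cur in zip(blocks, blocks[1:]):
            if (int(prev[-m:], 2) + 1) % (2 ** m) != int(cur[-m:], 2):
                return False                  # last m bits not increasing mod 2^m
        if all(len(b) == m for b in blocks):
            return True                       # m is the exact block size: accept
        m += 1

# e.g. in_L("00$01$10$11$") == True;  in_L("0$1$") == True;  in_L("00$11$") == False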

15 Sub-Logarithm Conclusion: L ∈ DSPACE(O(loglog(n))) \ DSPACE(O(1)) ⟹ DSPACE(o(log(n))) ⊋ DSPACE(O(1)). In fact, the above construction does not work for o(loglog(n)); that is, DSPACE(o(loglog(n))) = DSPACE(O(1)).

16 Configuration - Definition Def: A configuration of a TM M is a complete description of its state at a computation stage, comprising: 1. the state of M (range: |Q_M|) 2. the contents of the work tape (range: 2^s(n)) 3. the head position on the input tape (range: n) 4. the head position on the work tape (range: s(n)).

17 #Configurations – An upper bound Let C be the number of possible configurations of a TM M. Then C ≤ |Q_M| · 2^s(n) · n · s(n) (number of states × contents of the work tape × head position on the input tape × head position on the work tape).

18 Relation between time and space Thm: ∀ s(n) ≥ log(n), DSPACE(s(n)) ⊆ DTIME(2^O(s(n))). Proof: For every L ∈ DSPACE(s(n)) there is a TM M that uses no more than O(s(n)) space on input x. ⟹ the number of configurations of M is ≤ 2^O(s(n)). ⟹ if M does not stop after 2^O(s(n)) steps, it must pass through the same configuration twice, which implies an infinite loop. 4.5

19 How to Make TMs halt? Thm: For s(n) ≥ log(n), for any TM M deciding a language in DSPACE(s(n)), there is a TM M’ ∈ DSPACE(O(s(n))) s.t. L(M’) = L(M) and M’ always halts. Proof: By simulation. Given x, M’ computes the maximal number of configurations C; that takes O(s(|x|)) space. Now M’ simulates M, counting steps. If M arrives at an answer within C steps, M’ returns it. Otherwise (M is in an infinite loop) M’ returns ‘no’. This stage also takes O(s(|x|)) space, so the total is O(s(|x|)). 4.6
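As an illustration only (not from the notes), a Python sketch of this cutoff simulation; step, initial, is_accepting and is_rejecting are hypothetical stand-ins for a concrete machine’s transition behaviour, and the bound C is the configuration count from slide 17 (assuming a binary work alphabet):

def run_with_cutoff(step, initial, is_accepting, is_rejecting,
                    num_states, s, n):
    # Make a space-bounded simulation always halt by cutting it off after
    # C = |Q| * 2^s * n * s steps: longer runs must repeat a configuration.
    C = num_states * (2 ** s) * n * s
    config = initial
    for _ in range(C):
        if is_accepting(config):
            return True
        if is_rejecting(config):
            return False
        config = step(config)
    return False   # exceeded C steps: the machine is looping, so reject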

20 Space Hierarchy Thm: for any s1(n), s2(n), if s1(n) ≥ log(n), s2(n) is space-constructible and s1(n) = o(s2(n)), then DSPACE(s1(n)) ≠ DSPACE(s2(n)). Proof: By diagonalization. We construct a language L such that L ∈ DSPACE(s2(n)), but L can’t be recognized by any TM using s1(n) space. Let c0 be a constant, 0 < c0 < 1. L = { x | x is of the form ⟨M⟩01*, |⟨M⟩| ≤ c0·s2(|x|), and M rejects x using ≤ c0·s2(|x|) space } 4.7

21 Space Hierarchy Claim: L ∈ DSPACE(s2(n)). Proof: By a straightforward algorithm. 1. Check that x is of the right form ⟨M⟩01* (O(1) space). 2. Compute S := c0·s2(|x|) (s2(|x|) space, using space-constructibility). 3. Check that |⟨M⟩| ≤ c0·s2(|x|) (log(S) space). 4. Simulate M on x; if the bound S is exceeded, reject; if M rejects x, accept, else reject. Altogether O(s2(|x|)) space, as claimed.

22 Space Hierarchy Claim: L ∉ DSPACE(s1(n)). Proof: We show that for every TM M of space complexity s1(n), L(M) ≠ L. s1(n) = o(s2(n)) ⟹ ∃ n0 such that s1(n0) ≤ c0·s2(n0). Assume, by way of contradiction, a TM M0 of space complexity s1(n), with |⟨M0⟩| ≤ c0·s2(n0), accepting L. Observe M0’s result on the input string x = ⟨M0⟩01…1 of length n0.

23 L ∉ DSPACE(s1(n)) – Proof contd. 1. If M0 accepts x, then by definition of L (which requires the encoded machine to reject x) x ∉ L. 2. If M0 rejects x, then since |⟨M0⟩| ≤ c0·s2(n0) and M0 on x uses at most s1(n0) ≤ c0·s2(n0) space, it must be that x ∈ L. In either case this contradicts the assumption that M0 accepts L.

24 Non-Deterministic Space Def: A non-deterministic Turing machine (NDTM) is a TM with a non-deterministic transition function, having a work tape, a read-only input tape, and a unidirectional write-only output tape. The machine is said to accept input x if there exists a computation ending in an accepting state. 5.1

25 Def: On-line / Off-line TM An offline (online) non-deterministic TM has a work tape, a read-only input tape, a unidirectional write-only output tape, and a two-way (one-way) read-only guess tape. The machine is said to accept input x if there exists a content y of the guess tape that causes the machine’s computation on x to end in an accepting state.

26 Nspace_on, Nspace_off Def: Nspace_on(S) = { L | there exists an online-NDTM M_L, which uses ≤ S(|x|) space, that accepts x iff x ∈ L }. Def: Nspace_off(S) = { L | there exists an offline-NDTM M_L, which uses ≤ S(|x|) space, that accepts x iff x ∈ L }.

27 NDTM = Online-NDTM Claim: the NDTM model is equivalent to the online-NDTM model. Proof: We show that a language L is decidable by an NDTM in time O(T) and space O(S) iff L is decidable by an online-NDTM with the same time and space bounds. In one direction, the online-NDTM uses its guess string y to determine which transition to take at every step. In the other direction, the NDTM guesses the content of the next cell of the guess string when it is read, and remembers the last guessed letter (using its internal state) while the guess-tape head doesn’t move. 5.2

28 Nspace_on vs. Nspace_off Thm: Nspace_on(S) ⊆ Nspace_off(log(S)). We’ll simulate an online-NDTM M_on that uses space S using an offline-NDTM M_off that uses space log(S). M_off guesses a sequence of configurations of M_on and then validates that it is an accepting run. 5.3

29 Nspace_on vs. Nspace_off The guess string consists of blocks, each representing a configuration of M_on [of length O(S)], with ≤ 2^O(S) blocks (any valid sequence of configurations with more blocks must contain the same configuration twice, and can therefore be replaced by a shorter guess). The guess string does not count towards the space of M_off.

30 Nspace_on vs. Nspace_off M_off will validate that: The first block is a legal starting configuration. The last block is a legal accepting configuration. Every block can result from a legal move applied to the previous block (checked two consecutive blocks at a time).

31 Nspace_on vs. Nspace_off The (supposed) configuration strings: e.g. two consecutive blocks …$aaaabc(h)aa$aaaabxa(h)a$…, where (h) marks the head position. 1. Check that (almost) all symbols of the two strings are identical and that their lengths are identical — O(log(|C_on|)) space. 2. Check that the symbol marked with the head position in the 1st configuration transforms into a legal triple in the 2nd — O(1) space.

32 Nspace_on vs. Nspace_off The work tape holds a counter for the location being checked within the configuration — O(log(S)) — and O(1) additional space for the validation. A counter for the number of configurations checked — which would cost O(S) — is not necessary!

33 Nspace_on vs. Nspace_off Note that this simulation can’t be done by the online machine, as it has to read forwards & backwards on the guess tape (the block size being a function of n).

34 Nspace_on vs. Nspace_off Thm: Nspace_off(S) ⊆ Nspace_on(2^O(S)). Proof: The proof of this theorem also uses a simulation of one machine by the other.

35 Savitch’s Theorem Thm: NL = Nspace(log(n)) ⊆ DSPACE(log²(n)). We later generalize the theorem to read: S(n) ≥ log(n) ⟹ Nspace(S) ⊆ DSPACE(S²). Def: a configuration graph is a graph that, given a TM M which works in space S on an input x, has one vertex for every possible configuration of M’s computation on x, and an edge (u,v) if M can move from configuration u to configuration v. 5.4

36 Savitch’s Thm - Reducing Acceptance to Reachability If there is more than one accepting configuration, another vertex t is added, with edges (u,t) for every accepting configuration’s vertex u. The starting configuration’s vertex is named s. The question of whether M accepts x reduces to an s-t reachability problem on the configuration graph. We next show reachability is in DSPACE(log²(n)).

37 Savitch’s Theorem The Trick: If there is a path from vertex u to v of length d > 0, then there must be a vertex z s.t. there is a path from u to z of length ≤ ⌈d/2⌉ and a path from z to v of length ≤ ⌈d/2⌉. Note: As we are trying to save space, we can afford to try ALL possible z’s — time complexity does not matter.

38 The Algorithm
boolean PATH(a,b,d) {
  if there is an edge from a to b then return TRUE
  else {
    if d=1 return FALSE
    for every vertex v (except a,b) {
      if PATH(a,v,⌈d/2⌉) and PATH(v,b,⌈d/2⌉) then return TRUE
    }
    return FALSE
  }
}
(Both recursive calls reuse the same space.)
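A runnable Python version of this procedure, not taken from the notes: the graph is assumed to be given as an adjacency dictionary (a hypothetical representation), the ⌈d/2⌉ split follows the pseudocode above, and the small graph at the end is the 1→2→3→4 chain used in the example two slides below.

from math import ceil

def path(graph, a, b, d):
    # Savitch's recursive reachability check: is there a path a -> b of
    # length at most d?  Each level of the recursion keeps only (a, b, d, v),
    # which is the source of the O(log^2) space bound.
    if a == b or b in graph.get(a, set()):
        return True
    if d <= 1:
        return False
    half = ceil(d / 2)
    for v in graph:
        if v in (a, b):
            continue
        if path(graph, a, v, half) and path(graph, v, b, half):
            return True
    return False

g = {1: {2}, 2: {3}, 3: {4}, 4: set()}   # the chain 1 -> 2 -> 3 -> 4
print(path(g, 1, 4, 3))                  # True (the exact recursion order may differ from the slide's trace)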

39 Why log²(n)? 1. The binary representation of every number used by the algorithm is of size at most O(log(n)). 2. As the parameter d is halved at each recursive call, the recursion tree is of depth O(log(n)). 3. Therefore at each step of the computation we store at most O(log(n)) numbers, each of size O(log(n)), resulting in O(log²(n)) total space.

40 Example of Savitch’s algorithm PATH(a,b,c): is there a path from a to b that takes no more than c steps? The recursion trace below (depth ≈ log₂(3)) is for a 4-vertex graph with edges 1→2, 2→3, 3→4; each line shows the current call stack:
(1,4,3)
(1,4,3)(1,2,2) → TRUE
(1,4,3)(2,4,1) → FALSE
(1,4,3)(1,3,2)
(1,4,3)(1,3,2)(1,2,1) → TRUE
(1,4,3)(1,3,2)(2,3,1) → TRUE
(1,4,3)(1,3,2) → TRUE
(1,4,3)(3,4,1) → TRUE
(1,4,3) → TRUE

41 Applying s-t reachability to Savitch’s theorem Given an NDTM M_n working in space log(n), we construct a DTM M working in log²(n) space in the following way: given x, M solves s-t reachability on the configuration graph of (M_n, x). Note: the graph is generated “on demand”, reusing space; therefore M never keeps the entire representation of the graph.

42 Applying s-t reachability to Savitch’s thm Since M_n works in log(n) space it has 2^O(log(n)) configurations, so its configuration graph is of size 2^O(log(n)), and reachability on it is solved in log²(2^O(log(n))) = O(log²(n)) space.

43 Savitch’s theorem - conclusion NL ⊆ DSPACE(log²(n)). This is not just a special case of Savitch’s theorem but is in fact equivalent to it, as we’ll see next.

44 Generalization of the proof Note that in the last argument we could have substituted the log(n) function by any function, and thus derived the general Savitch theorem: S(n) ≥ log(n) ⟹ Nspace(S) ⊆ DSPACE(S²). We will next prove a lemma that will help us generalize any theorem proved for small functions to larger ones. Specifically, we will generalize the NL ⊆ DSPACE(log²(n)) theorem.

45 Translation Lemma (Padding argument) For space-constructible functions s1(n), s2(n) ≥ log(n) and f(n) ≥ n: Nspace(s1(n)) ⊆ DSPACE(s2(n)) ⟹ Nspace(s1(f(n))) ⊆ DSPACE(s2(f(n))). 5.5

46 Padding argument Let L ∈ Nspace(s1(f(n))). Then there is a 3-tape NDTM M_L which accepts L in Nspace(s1(f(n))). [Figure: input tape holding x, e.g. “babba”, of length |x|; work tape of length O(s1(f(|x|))).]

47 Padding argument Define L’ = { x0^(f(|x|)−|x|) | x ∈ L }. We’ll show an NDTM M_L’ which decides L’ in the same space as M_L. [Figure: input tape holding x followed by padding 0’s, total length n’ = f(|x|); work tape of length O(s1(n’)) = O(s1(f(|x|))).]

48 Padding argument – M_L’ 1. Count the 0’s backwards, mark the end of x, and check that the number of 0’s equals f(|x|) − |x| — Nspace(log(n’)). 2. Run M_L on x — Nspace(s1(f(|x|))) = Nspace(s1(n’)). [Figure: input tape holding x#0…0 of length n’; work tape of length O(s1(n’)).]

49 Padding argument [Figure: input tape of length n’; work tape of length O(s1(n’)).] Total: Nspace(O(s1(n’))).

50 Padding argument – M’_L’ So M_L’ ∈ Nspace(s1(n)). Using Nspace(s1(n)) ⊆ DSPACE(s2(n)), there is a deterministic TM M’_L’ which accepts L’ in DSPACE(s2(n)). Given M’_L’, we will construct a DTM M*_L that accepts L in O(s2(f(n))) space.

51 Padding argument – M*_L 1. Run M’_L’ on input x. 2. Whenever M’_L’’s input head leaves the x part of the input, use a counter to simulate the head position. 3. Check that M’_L’ doesn’t use more than s2(f(|x|)) space. 4. This can be checked because s2 and f are constructible. In other words, M’_L’ can be simulated by another DTM which receives the original input, “imagines” the padding 0’s, and counts the position of the imaginary head when “reading” to the right of the input.
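As an illustration only (not from the notes), a Python sketch of this padding reduction; decide_L_prime and f are hypothetical stand-ins for M’_L’ and the padding-length function, and the sketch materializes the padding instead of simulating the head position with a counter:

def decide_L(x: str, decide_L_prime, f) -> bool:
    # Form the padded word x 0^(f(|x|) - |x|) and hand it to the decider
    # for L'.  A real M*_L would not write the padding out; it would
    # "imagine" the 0's and track the head position with a counter.
    padded = x + "0" * (f(len(x)) - len(x))
    return decide_L_prime(padded)

# Toy usage: L = {strings of a's}, f(n) = n^2, and a padded decider for L'.
is_padded_all_a = lambda w: set(w.rstrip("0")) <= {"a"}
print(decide_L("aaa", is_padded_all_a, lambda n: n * n))   # True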

52 Padding argument – particular case (taking n’ = 2^n): L ∈ Nspace(n) ⟹ L’ ∈ Nspace(log(n’)) = NL ⟹ L’ ∈ DSPACE(log²(n’)) (by NL ⊆ DSPACE(log²(n))) ⟹ L ∈ DSPACE(log²(2^n)) ⟹ L ∈ DSPACE(n²).

53 Padding argument – particular case Therefore NL ⊆ DSPACE(log²(n)) ⟹ Nspace(n) ⊆ DSPACE(n²).