1 Slides: Asaf Shapira & Oded Schwartz; Sonny Ben-Shimon & Yaniv Nahum. Notes: Leia Passoni, Reuben Sumner, Yoad Lustig & Tal Hassner. (from Oded Goldreich’s course lecture notes)

2 Introduction In this lecture we’ll cover: Space Complexity Non-Deterministic space

3 Complexity Functions Definition: A function f is called constructible if it satisfies the following conditions: – Positive: f: ℕ⁺ → ℕ⁺ – Monotone: f(n+1) ≥ f(n) for all n – Constructible: there exists a Turing Machine M_f that, on input x, outputs a string of size f(|x|), in time O(|x|+f(|x|)) and in space O(f(|x|)). 4.1

4 Constructible functions Many “popular” complexity functions, such as n, log(n), n², and n!, satisfy the above criteria. Odd relations between complexity classes may occur if we don't choose these functions properly. Note: We will therefore use time-constructible functions for time bounds and space-constructible functions for space bounds. 4.2

5 Space Complexity - The model: a 3-tape Turing machine: 1. Input tape – read only. 2. Output tape – write only, usually unidirectional; omitted for decision problems. 3. Work tape – read & write; the portion of it used corresponds to the space consumed, which enables sub-linear space bounds. 4.3

6 What kind of TM should we use? Any multi-tape TM can be simulated by an ordinary TM with only a polynomial loss of efficiency. Due to this fact, from here on, "TM" will refer to the 3-tape TM described above.

7 Dspace - Definition For every TM M and input x: W_M(x) = the index of the rightmost cell on the work tape scanned by M on x. S_M(n) = max_{|x|=n} W_M(x), the maximal amount of space used by M on inputs of length n. χ_L(x) = 1 if x ∈ L, 0 otherwise. Dspace(S(n)) = { L | ∃ DTM M such that ∀x M(x) = χ_L(x) and ∀n S_M(n) ≤ S(n) }

8 Below Logarithmic Space It is known that Dspace(O(1)) is exactly the set of regular languages. Do we gain any strength by having sub-logarithmic space? Or formally: is Dspace(o(log(n))) ⊋ Dspace(O(1)), or is Dspace(o(log(n))) = Dspace(O(1))? 4.4

9 Dspace(o(log(n))) ⊋ Dspace(O(1)) Theorem: Dspace(o(log(n))) is a proper superset of Dspace(O(1)). Proof: We will construct a language L such that L ∈ Dspace(loglog(n)) but L ∉ Dspace(O(1)), which proves the theorem (since loglog(n) ∈ o(log(n))).

10 Proof (contd.) - Definition of L L = { x_k | k ∈ ℕ, x_k = B_k,0 $ B_k,1 $ B_k,2 $ … $ B_k,2^k−1 $ } where B_k,i = the binary representation of i, of length k. For example: x_2 = 00$01$10$11$

11 Proof (contd.) Claim 1: L ∉ Dspace(O(1)) (L is not regular). Proof: by the “Pumping Lemma”. Claim 2: L ∈ Dspace(loglog(n)). Proof: We will show an algorithm for deciding L that uses loglog(n) space.

12 The wrong way of proving Claim 2 (works only if x ∈ L): 1) Check that the first block is all 0’s and that the last is all 1’s. 2) For any two consecutive blocks, check that the second is the binary increment of the first. Clearly (1) can be done in constant space, and (2) in log(k) space, which is loglog(n) space, as n = |x_k| = (k+1)·2^k. However, this solution can use more than loglog(n) space when x ∉ L (e.g. 0^m$1^m$ with m = n/2 − 1).

13 The correct solution m = 1 while (true) { – check that the last m bits of the first block are all 0’s. – check that the last m bits of the B_k,i blocks form an increasing sequence mod 2^m, and that each block has at least m bits. – check that the last m bits of the last block are all 1’s. – if an error was found, return false. – if m is the exact size of the blocks, return true. – m = m + 1 }

14 The correct solution – An Example input: 000$001$010$011$100$101$110$111$ m=1: last 1 bit of each block increasing mod 2^1 = 2 m=2: last 2 bits increasing mod 2^2 = 4 m=3: last 3 bits increasing mod 2^3 = 8; the entire sequence is increasing ⟹ return true
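The two-phase loop above can be transcribed directly. The following Python sketch (the function name and input encoding are ours, not the slides') mirrors the control flow: for m = 1, 2, … it verifies only the last m bits of every block, so a real TM would keep just an m-bit counter on its work tape; Python itself, of course, does not enforce the O(loglog n) space bound.

```python
def member_of_L(s):
    # Decide membership in L = { B_k,0 $ B_k,1 $ ... $ B_k,2^k-1 $ }
    # by checking suffixes of growing length m, as on slides 13-14.
    if len(s) < 2 or not s.endswith("$"):
        return False
    blocks = s[:-1].split("$")
    if any(not b or set(b) - {"0", "1"} for b in blocks):
        return False
    m = 1
    while True:
        if any(len(b) < m for b in blocks):
            return False                      # some block is too short
        suf = [int(b[-m:], 2) for b in blocks]
        if suf[0] != 0 or suf[-1] != 2 ** m - 1:
            return False                      # first ends in 0s, last in 1s
        if any(cur != (prev + 1) % 2 ** m
               for prev, cur in zip(suf, suf[1:])):
            return False                      # blocks must increment mod 2^m
        if m == len(blocks[0]):               # all bits have been verified
            return all(len(b) == m for b in blocks)
        m += 1
```

For example, member_of_L("00$01$10$11$") accepts the string x_2 from slide 10.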

15 Below Logarithm Conclusion: L ∈ Dspace(O(loglog(n))) \ Dspace(O(1)) ⟹ Dspace(o(log(n))) ⊋ Dspace(O(1)). One can show that the above claim does not work for o(loglog(n)), that is: Dspace(o(loglog(n))) = Dspace(O(1)).

16 Configuration - Definition A configuration of M ∈ Dspace(s(n)) is a complete description of its computational state on a fixed input x (|x|=n) at a specific time. It consists of: 1. the state of M (|Q_M| possibilities) 2. the contents of the work tape (2^s(n) possibilities, for a binary work alphabet) 3. the head position on the input tape (n possibilities) 4. the head position on the work tape (s(n) possibilities).

17 #Configurations – An upper bound Let C be the number of possible configurations of a TM M. Then C ≤ |Q_M| · 2^s(n) · n · s(n), the four factors bounding, in order: the number of states, the contents of the work tape, the head position on the input tape, and the head position on the work tape.
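In code, the bound is a one-line product; the helper name and the binary work-tape alphabet are our assumptions for illustration:

```python
def config_bound(num_states, s, n):
    # Slide 17's bound: C <= |Q| * 2^s(n) * n * s(n),
    # assuming a binary work-tape alphabet.
    return num_states * 2 ** s * n * s
```

For instance, a machine with 4 states and work space s(n)=3 on inputs of length n=8 has at most 4·2³·8·3 = 768 configurations.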

18 Relation between time and space Theorem: ∀s(n) such that log(n) ≤ s(n): Dspace(s(n)) ⊆ Dtime(2^O(s(n))). Proof: For every L ∈ Dspace(s(n)) there is a TM M that uses no more than O(s(n)) space on input x. ⟹ the number of configurations of M is ≤ 2^O(s(n)) (by log(n) ≤ s(n) and the previous slide). Passing from one configuration to the next takes O(1) time. ⟹ M must stop within 2^O(s(n)) steps: if it doesn’t, it passes through the same configuration twice, which implies an infinite loop. 4.5

19 How to make TMs halt? Theorem: ∀ TM M ∈ Dspace(s(n)) (with log(n) ≤ s(n)), ∃ TM M’ ∈ Dspace(O(s(n))) such that L(M’) = L(M) and M’ always halts. Proof: By simulation. Given x, M’ computes the maximal number of configurations C; that takes O(s(|x|)) space. Now M’ simulates M while counting steps. If M answers within C steps, M’ returns that answer; otherwise M entered an infinite loop, and M’ returns ‘no’. The counter also takes O(s(|x|)) space, so the total is O(s(|x|)). 4.6
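The wrapper can be sketched abstractly: if a deterministic machine runs for more steps than it has configurations, some configuration repeated and it will never halt. The function below is a hypothetical illustration (the names next_conf, accepting, and config_bound are ours); it treats the machine as a black-box successor function on configurations:

```python
def bounded_accept(next_conf, start, accepting, config_bound):
    # Simulate a deterministic machine for at most config_bound steps.
    # Exceeding the bound means a configuration repeated, i.e. an
    # infinite loop, so answer 'no', exactly as M' does on slide 19.
    conf = start
    for _ in range(config_bound):
        if conf in accepting:
            return True
        conf = next_conf(conf)
    return False
```

As a toy check, a "machine" cycling through configurations 0,1,2,3 reaches an accepting configuration 2 but never reaches 5, and the wrapper still halts in the latter case.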

20 Space Hierarchy Theorem: ∀ s1(n), s2(n) such that s1(n) ≥ log(n), s2(n) is space-constructible, and s1(n) = o(s2(n)): Dspace(s1(n)) ⊊ Dspace(s2(n)). Proof: By diagonalization. We will construct a language L such that L ∈ Dspace(s2(n)), but L can’t be recognized by a TM using s1(n) space. Let c0 be a constant, 0 < c0 < 1. We define: L = { x | x = ⟨M⟩01*, |⟨M⟩| ≤ c0·s2(|x|), and M rejects x using no more than c0·s2(|x|) space } 4.7

21 Space Hierarchy Claim: L ∈ Dspace(s2(n)). Proof: By a straightforward algorithm: 1. Check that x is of the right form (O(1) space). 2. Compute S := c0·s2(|x|) (s2(|x|) space, since s2 is space-constructible). 3. Check that |⟨M⟩| ≤ S (log(S) space). 4. Simulate M on x; if the bound S is exceeded, reject; if M rejects x, accept, else reject. We get a total of O(s2(|x|)) space, as needed.

22 Space Hierarchy Claim: L ∉ Dspace(s1(n)). Proof: We will show that for every TM M of space complexity s1(n), L(M) ≠ L. Since s1(n) = o(s2(n)), ∃n0 such that s1(n0) ≤ c0·s2(n0). Assume by contradiction that there is a TM M0 of space complexity s1(n), with |⟨M0⟩| ≤ c0·s2(n0), that accepts L. Let’s observe how M0 operates on the input x := ⟨M0⟩01^(n0−|⟨M0⟩|−1).

23 L ∉ Dspace(s1(n)) – Proof contd. 1. If M0 accepts x, then by the definition of L, x ∉ L. 2. If M0 rejects x, then since |⟨M0⟩| ≤ c0·s2(n0), and M0 uses at most s1(n0) ≤ c0·s2(n0) space on x, therefore x ∈ L. In either case we get a contradiction to the assumption that M0 accepts L.

24 Other space theorems Borodin’s Gap Theorem: ∀g(n) recursive with g(n) ≥ n, ∃f such that Dspace(f(n)) = Dspace(g(f(n))). Blum’s Speed-up Theorem: ∀g(n) recursive with g(n) ≥ n, ∃L ∈ R such that for every TM M with L(M) = L and M ∈ Space(s(n)), there is an M’ with L(M’) = L and M’ ∈ Space(g⁻¹(s(n))). 4.8

25 Non-Deterministic Space Definition: A non-deterministic Turing machine (NDTM) is a TM with a non-deterministic transition function, having a work tape, a read-only input tape, and a unidirectional write-only output tape. The machine is said to accept input x if there exists a computation ending in an accepting state. 5.1

26 Definitions of On-line / Off-line TM An offline (online) non-deterministic TM has a work tape, a read-only input tape, a unidirectional write-only output tape, and a two-way (one-way) read-only guess tape. The contents of the guess tape are selected non-deterministically (and are the only non-deterministic part of this machine). The machine is said to accept input x if there exists a content y of the guess tape such that the machine ends in an accepting state.

27 Nspace_on, Nspace_off Definition: Nspace_on(S) = { L | there exists an online-NDTM M_L accepting x iff x ∈ L, using at most S(|x|) space }. Definition: Nspace_off(S) = { L | there exists an offline-NDTM M_L accepting x iff x ∈ L, using at most S(|x|) space }.

28 NDTM = Online-NDTM Claim: the NDTM model is equivalent to the online-NDTM model. Proof: We will show that a language L is decidable by an NDTM in time O(T) and space O(S) iff L is decidable by an online-NDTM within the same time and space bounds. (⇐) Use the guess string y to determine which transition to take at every step. (⇒) At every step, guess the content of the next cell of the guess string; remember the last guessed letter (using the internal state) when the simulated guess-string head doesn’t move. 5.2

29 Nspace_on vs. Nspace_off Theorem: Nspace_on(S) ⊆ Nspace_off(log(S)). Proof: We’ll simulate an online-NDTM M_on that uses space S by an offline-NDTM M_off that uses space log(S). M_off guesses the sequence of configurations of M_on and then validates it. 5.3

30 Nspace_on vs. Nspace_off The guess string will have blocks, each representing a configuration (of length O(S)). There will be no more than 2^O(S) blocks: any valid series of configurations having more blocks has a repeating configuration, and can therefore be replaced by a shorter guess. (The guess string doesn’t count toward the space of M_off.)

31 Nspace_on vs. Nspace_off M_off will validate that: – The first block is a legal starting configuration. – The last block is a legal accepting configuration. – Every block legally follows its predecessor (checked one by one).

32 Nspace_on vs. Nspace_off The (supposed) consecutive configuration strings on the guess tape, with h marking the head position: … $aaaabc[h]aa$aaaabxa[h]a$ … 1. Check that (almost) all characters in the two strings are identical and that the string lengths are identical (O(log(|Conf|)) space). 2. Check that the character marked with the head position in the 1st configuration transforms into a legal triple in the 2nd (O(1) space).
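As a concrete (hypothetical) encoding of check 2, take a configuration to be a triple (state, tape, head) and delta a dictionary of transitions; two consecutive guessed configurations are consistent iff the tapes agree everywhere except at the old head position, where the written symbol and head move must match a transition:

```python
def legal_move(c1, c2, delta):
    # delta maps (state, read_symbol) -> (new_state, written, direction)
    (q1, t1, h1), (q2, t2, h2) = c1, c2
    if (q1, t1[h1]) not in delta:
        return False                        # no applicable transition
    q, w, d = delta[(q1, t1[h1])]
    expected = t1[:h1] + w + t1[h1 + 1:]    # only the head cell may change
    return q2 == q and t2 == expected and h2 == h1 + (1 if d == "R" else -1)
```

The comparison is purely local, which is why M_off only needs a position counter plus O(1) extra space to perform it.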

33 Nspace_on vs. Nspace_off The work tape will hold a counter for the location currently checked within the configuration (log(O(S)) space) and O(1) more space for the validation. A counter for the number of configurations checked, which would take O(S) space, is not needed.

34 Nspace_on vs. Nspace_off Note that this simulation can’t be done by the online machine, as it has to read forwards and backwards on the guess tape (the block size being a function of n).

35 Nspace_on vs. Nspace_off Theorem: Nspace_off(S) ⊆ Nspace_on(2^O(S)). Proof: The proof of this theorem also uses a simulation of one machine by the other.

36 Savitch’s Theorem Theorem: NL = Nspace(log(n)) ⊆ Dspace(log²(n)). We will later generalize this theorem and show that: S(n) ≥ log(n) ⟹ Nspace(S) ⊆ Dspace(S²). Definition: the Configuration Graph of a TM M working in space S on an input x has a node for every possible configuration of M’s computation on x, and an edge (u,v) iff M can move from configuration u to configuration v. 5.4

37 Savitch’s Theorem - Reducing Acceptance to Reachability If there is more than one accepting configuration, another vertex t is added, with edges (u,t) for all accepting configurations’ vertices u. The starting configuration’s vertex is named s. The question of whether M accepts x thus reduces to an s-t reachability problem on the configuration graph. We will next show an algorithm that solves reachability in Dspace(log²(n)).

38 Savitch’s Theorem The Trick: If there is a path from vertex u to v of length d > 0, then there must be a vertex z such that there is a path from u to z of length at most ⌈d/2⌉ and a path from z to v of length at most ⌈d/2⌉. Note: As we try to save space, we can afford trying ALL possible z’s; time complexity does not matter.

39 The Algorithm
boolean PATH(a,b,d) {
  if there is an edge from a to b then return TRUE
  if d=1 then return FALSE
  for every vertex v (other than a,b) {
    if PATH(a,v,⌈d/2⌉) and PATH(v,b,⌈d/2⌉) then return TRUE
  }
  return FALSE
}
(Both recursive calls reuse the same space.)
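The pseudocode translates almost verbatim to Python. This sketch adds a base case for a = b (the slides implicitly assume a ≠ b) and represents the graph as an adjacency dictionary; the recursion depth is O(log d) and each frame holds O(log n) bits, which is where the log² bound on the next slide comes from:

```python
from math import ceil

def path(adj, a, b, d):
    # Is there a path from a to b of length at most d?
    if a == b or b in adj[a]:       # trivial path or a direct edge
        return True
    if d <= 1:
        return False
    half = ceil(d / 2)              # midpoint trick: try every vertex z
    return any(path(adj, a, v, half) and path(adj, v, b, half)
               for v in adj if v not in (a, b))
```

On the graph of slide 41 (edges 1→2, 2→3, 3→4), path(adj, 1, 4, 3) returns TRUE, matching the trace there.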

40 Why log²(n)? 1. The binary representation of every number used by the algorithm has size at most O(log(n)). 2. As the parameter d is halved at each recursive call, the recursion tree has depth O(log(n)). 3. Therefore, at each step of the computation we store O(log(n)) numbers of size O(log(n)), resulting in O(log²(n)) total space.

41 Example of Savitch’s algorithm (a,b,c) = “Is there a path from a to b that takes no more than c steps?” The recursion trace on a graph with edges 1→2, 2→3, 3→4 (recursion depth log₂(3), rounded up):
(1,4,3)
(1,4,3)(1,2,2)
(1,4,3)(1,2,2)TRUE
(1,4,3)(2,4,1)
(1,4,3)(2,4,1)FALSE
(1,4,3)(1,3,2)
(1,4,3)(1,3,2)(1,2,1)
(1,4,3)(1,3,2)(1,2,1)TRUE
(1,4,3)(1,3,2)(2,3,1)
(1,4,3)(1,3,2)(2,3,1)TRUE
(1,4,3)(1,3,2)TRUE
(1,4,3)(3,4,1)
(1,4,3)(3,4,1)TRUE
(1,4,3)TRUE
(Each line re-runs the PATH code of slide 39; the original animation repeated the code box once per recursion level.)

42 Applying s-t reachability to Savitch’s theorem Given an NDTM M_n working in space log(n), we construct a DTM M working in space log²(n) in the following way: given x, M solves s-t reachability on the configuration graph of (M_n, x). Note that the graph is generated “on demand”, by reusing space, so M never keeps the entire representation of the graph.

43 Applying s-t reachability to Savitch’s thm Since M_n works in log(n) space it has 2^O(log(n)) configurations, thus its configuration graph has 2^O(log(n)) vertices, and reachability on it is solved in log²(2^O(log(n))) = O(log²(n)) space.

44 Savitch’s theorem - conclusion NL ⊆ Dspace(log²(n)). This is not just a special case of Savitch’s theorem: it is in fact equivalent to it, as we’ll see next.

45 Generalization of the proof Note that in the last argument we could have substituted any suitable function for log(n), and thus derive the general Savitch theorem: S(n) ≥ log(n) ⟹ Nspace(S) ⊆ Dspace(S²). We will next prove a lemma that helps generalize any theorem proved for small functions to larger ones. Specifically, we will generalize the NL ⊆ Dspace(log²(n)) theorem.

46 Translation Lemma (Padding argument) For space-constructible functions s1(n), s2(n) ≥ log(n) and f(n) ≥ n: Nspace(s1(n)) ⊆ Dspace(s2(n)) ⟹ Nspace(s1(f(n))) ⊆ Dspace(s2(f(n))) 5.5

47 Padding argument Let L ∈ Nspace(s1(f(n))). There is a 3-tape NDTM M_L which accepts L in Nspace(s1(f(n))). [Tape diagram: input tape of length |x|; work tape of length O(s1(f(|x|))).]

48 Padding argument Define L’ = { x0^(f(|x|)−|x|) | x ∈ L }. We’ll show an NDTM M_L’ which decides L’ in the same space as M_L. [Tape diagram: input tape of length n’ = f(|x|); work tape of length O(s1(n’)) = O(s1(f(|x|))).]

49 Padding argument – M_L’ 1. Count the 0’s backwards, mark the end of x, and check that the number of 0’s equals f(|x|)−|x| (Nspace(log(n’)) space). 2. Run M_L on x (Nspace(s1(f(|x|))) = Nspace(s1(n’)) space). [Tape diagram: input of length n’; work tape O(s1(n’)).]

50 Padding argument [Tape diagram: input of length n’; work tape O(s1(n’)).] Total: Nspace(O(s1(n’)))

51 Padding argument – M’_L’ M_L’ ∈ Nspace(s1(n)). Using Nspace(s1(n)) ⊆ Dspace(s2(n)), there is a deterministic TM M’_L’ which accepts L’ in Dspace(s2(n)). Given M’_L’, we will construct a DTM M*_L that accepts L in O(s2(f(n))) space.

52 Padding argument – M*_L 1. Run M’_L’ on input x. 2. Whenever M’_L’’s head leaves the x part of the input, use a counter to simulate the head position. 3. Check that M’_L’ doesn’t use more than s2(f(|x|)) space; this can be checked because s2 and f are constructible. In other words, M’_L’ is simulated by a DTM which receives the original input, “imagining” the padding 0’s and counting the position of the imaginary head when “reading” to the right of the input.

53 Padding argument – particular case (with n’ = 2^n): L ∈ Nspace(n) ⟹ L’ ∈ Nspace(log(n’)) = NL ⟹ L’ ∈ Dspace(log²(n’)) (by NL ⊆ Dspace(log²(n))) ⟹ L ∈ Dspace(log²(2^n)) ⟹ L ∈ Dspace(n²)

54 Padding argument – particular case Therefore NL ⊆ Dspace(log²(n)) ⟹ Nspace(n) ⊆ Dspace(n²).