CS21 Decidability and Tractability

Presentation transcript:

CS21 Decidability and Tractability, Lecture 16, February 13, 2019

Outline
- Gödel Incompleteness Theorem (cont'd)
- …on to Complexity
- Extended Church-Turing Thesis
- The complexity class P
- Examples of problems in P

Incompleteness Theorem
Lemma: Th(N) is not RE.
Proof: reduce from co-HALT (show co-HALT ≤_m Th(N)).
- recall co-HALT is not RE
- what should f(<M, w>) produce?
- construct γ such that M loops on w ⇔ γ is true

Incompleteness Theorem
- we will define VALCOMP_{M,w}(v) ≡ ... (details to come) so that it is true iff v is a (halting) computation history of M on input w
- then define f(<M, w>) to be: γ ≡ ¬∃v VALCOMP_{M,w}(v)
- YES maps to YES? <M, w> ∈ co-HALT ⇒ γ is true ⇒ γ ∈ Th(N)
- NO maps to NO? <M, w> ∉ co-HALT ⇒ γ is false ⇒ γ ∉ Th(N)

Expressing computation in the language of number theory
Recall the basic building blocks:
- x < y ≡ ∃z (x + z = y ∧ ¬(z = 0))
- INTDIV(x, y, q, r) ≡ x = qy + r ∧ r < y
- DIV(y, x) ≡ ∃q INTDIV(x, y, q, 0)
- PRIME(x) ≡ x ≥ (1+1) ∧ ∀y (DIV(y, x) ⇒ (y = 1 ∨ y = x))
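To make these formulas concrete, here is a small Python sketch (an added illustration, not part of the lecture): it evaluates INTDIV, DIV and PRIME by brute force, with an explicit search bound standing in for the unbounded quantifiers.

    # Brute-force evaluation of the number-theory building blocks,
    # restricted to a finite search bound so the checks terminate.

    def intdiv(x, y, q, r):
        # INTDIV(x, y, q, r): x = q*y + r with r < y
        return x == q * y + r and r < y

    def div(y, x, bound=100):
        # DIV(y, x): "y divides x", i.e. some q witnesses INTDIV(x, y, q, 0)
        return any(intdiv(x, y, q, 0) for q in range(bound))

    def prime(x, bound=100):
        # PRIME(x): x >= 2 and every divisor of x is 1 or x
        return x >= 2 and all(y in (1, x) for y in range(1, bound) if div(y, x, bound))

    if __name__ == "__main__":
        print([n for n in range(2, 30) if prime(n)])   # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]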

Expressing computation in the language of number theory
- we'll write configurations over an alphabet of size p, where p is a prime that depends on M
- d is a power of p: POWER_p(d) ≡ ∀z ((DIV(z, d) ∧ PRIME(z)) ⇒ z = p)
- d = p^k and the length of v as a p-ary string is k: LENGTH(v, d) ≡ POWER_p(d) ∧ v < d
[speaker note: explain what we're shooting for: the digits of v are the symbols of the string]

Expressing computation in the language of number theory
- the p-ary digit of v at position y is b (assuming y is a power of p):
  DIGIT(v, y, b) ≡ ∃u ∃a (v = a + b·y + u·p·y ∧ a < y ∧ b < p)
- the three p-ary digits of v at position y are b, c, and d (assuming y is a power of p):
  3DIGIT(v, y, b, c, d) ≡ ∃u ∃a (v = a + b·y + c·p·y + d·p^2·y + u·p^3·y ∧ a < y ∧ b < p ∧ c < p ∧ d < p)
[speaker note: clarify what "position y" means]
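As a sanity check (an added sketch, not from the slides), the arithmetic inside DIGIT and 3DIGIT is just reading base-p digits; the choice of p and the example values below are arbitrary.

    # DIGIT(v, y, b): writing v in base p, the digit at the position whose
    # place value is y (y a power of p) equals b.  Decomposition:
    #   v = a + b*y + u*p*y   with  a < y  and  b < p
    def digit(v, y, b, p):
        a = v % y                 # the low-order part, below position y
        u = v // (p * y)          # everything above position y
        return v == a + b * y + u * p * y and a < y and b < p

    # 3DIGIT(v, y, b, c, d): the three consecutive digits starting at place
    # value y are b, c, d; checking them one at a time is equivalent to the
    # single formula on the slide.
    def three_digit(v, y, b, c, d, p):
        return (digit(v, y, b, p) and digit(v, p * y, c, p)
                and digit(v, p * p * y, d, p))

    if __name__ == "__main__":
        p = 7
        v = 3 + 5 * p + 2 * p**2 + 6 * p**3   # base-7 digits: 3, 5, 2, 6
        print(digit(v, p, 5, p))              # True
        print(three_digit(v, p, 5, 2, 6, p))  # True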

Expressing computation in the language of number theory
- the three p-ary digits of v at position y "match" the three p-ary digits of v at position z according to M's transition function (assuming y and z are powers of p):
  MATCH(v, y, z) ≡ ⋁_{(a,b,c,d,e,f) ∈ C} (3DIGIT(v, y, a, b, c) ∧ 3DIGIT(v, z, d, e, f))
  where C = {(a,b,c,d,e,f) : abc in configuration C_i can legally change to def in configuration C_{i+1}}
[speaker note: ask students to justify the non-binary OR]

Expressing computation in the language of number theory
- all pairs of 3-digit sequences in v up to d that are exactly c apart "match" according to M's transition function (assuming c and d are powers of p):
  MOVE(v, c, d) ≡ ∀y ((POWER_p(y) ∧ y·p^2·c < d) ⇒ MATCH(v, y, y·c))
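A brief added sketch of how MATCH and MOVE read when v is held as a list of base-p digits (least significant first) and the legal-window set C is supplied as data; the list representation and parameter names are illustrative choices, not the lecture's.

    # v is a list of base-p digits, least significant first, so the
    # position with place value p**i corresponds to index i.

    def match(v, i, j, C):
        # MATCH at positions p**i and p**j: the two windows of three digits
        # starting there form a legal 6-tuple (a,b,c,d,e,f) in C.
        return tuple(v[i:i + 3]) + tuple(v[j:j + 3]) in C

    def move(v, k, C):
        # MOVE with c = p**k and d = p**len(v): every window must match the
        # window k digits later; the side condition y*p^2*c < d becomes
        # i + k + 3 <= len(v) in this indexing.
        return all(match(v, i, i + k, C)
                   for i in range(len(v)) if i + k + 3 <= len(v))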

Expressing computation in the language of number theory
- the string v starts with the start configuration of M on input w = w1...wn, padded with blanks out to length c (assuming c is a power of p):
  START(v, c) ≡ ⋀_{i = 0,1,2,...,n} DIGIT(v, p^i, k_i) ∧ p^n < c ∧ ∀y ((POWER_p(y) ∧ p^n < y < c) ⇒ DIGIT(v, y, k))
  where k_0 k_1 k_2 k_3 ... k_n is the p-ary encoding of the start configuration, and k is the p-ary encoding of a blank symbol

Expressing computation in the language of number theory
- the string v has a halt state in it somewhere before position d (assuming d is a power of p):
  HALT(v, d) ≡ ∃y (POWER_p(y) ∧ y < d ∧ ⋁_{a ∈ H} DIGIT(v, y, a))
  where H is the set of p-ary digits "containing" the states q_accept or q_reject

Expressing computation in the language of number theory
- the string v is a valid (halting) computation history of machine M on string w:
  VALCOMP_{M,w}(v) ≡ ∃c ∃d (POWER_p(c) ∧ c < d ∧ LENGTH(v, d) ∧ START(v, c) ∧ MOVE(v, c, d) ∧ HALT(v, d))
- M does not halt on input w: ¬∃v VALCOMP_{M,w}(v)
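Putting the pieces together, a hedged Python sketch (an added illustration; the parameter names are hypothetical) of what VALCOMP_{M,w} asserts, checking a concrete digit string directly rather than through the arithmetic encodings above.

    # v            : base-p digits of the candidate history, least significant first
    # start_digits : digits of M's start configuration on input w
    # blank        : the digit encoding a blank symbol
    # H            : digits "containing" a halting state
    # C            : legal 6-tuples of matching 3-digit windows
    def valcomp(v, start_digits, blank, H, C):
        n = len(v)
        for c_len in range(len(start_digits), n):   # c = p**c_len, d = p**n, c < d
            # START: v begins with the start configuration, padded with blanks
            if v[:len(start_digits)] != start_digits:
                break
            if any(v[i] != blank for i in range(len(start_digits), c_len)):
                continue
            # MOVE: each window matches the window one configuration-length later
            if not all(tuple(v[i:i + 3]) + tuple(v[i + c_len:i + c_len + 3]) in C
                       for i in range(n) if i + c_len + 3 <= n):
                continue
            # HALT: some digit of v is a halting-state digit
            if any(dig in H for dig in v):
                return True
        return False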

Incompleteness Theorem
Lemma: Th(N) is not RE.
Proof: reduce from co-HALT (show co-HALT ≤_m Th(N)).
- recall co-HALT is not RE
- we constructed γ such that M loops on w ⇔ γ is true

Summary
- full-fledged model of computation: the TM
- many equivalent models
- Church-Turing Thesis
- encoding of inputs
- Universal TM

Summary
- classes of problems: decidable ("solvable by algorithms"), recursively enumerable (RE), co-RE
- counting arguments: not all problems are decidable; not all problems are RE

Summary
- diagonalization: HALT is undecidable
- reductions: other problems are undecidable (many examples); Rice's Theorem; natural problems that are not RE
- Recursion Theorem: a non-obvious capability of TMs, printing out their own description
- Incompleteness Theorem

Complexity
- So far we have classified problems by whether they have an algorithm at all.
- In the real world, we have limited resources with which to run an algorithm: one resource is time; another is storage space.
- We need to further classify decidable problems according to the resources they require.

Complexity
Complexity Theory = the study of what is computationally feasible (or tractable) with limited resources:
- running time and storage space (the main focus)
- number of random bits, degree of parallelism, rounds of interaction, others... (not in this course)

Worst-case analysis
Always measure a resource (e.g. running time) in the following way:
- as a function of the input length
- the value of the function is the maximum quantity of the resource used over all inputs of the given length
- this is called "worst-case analysis"
- "input length" is the length of the input string, which might encode another object with a separate notion of size

Time complexity
Definition: the running time ("time complexity") of a TM M is a function f: N → N, where f(n) is the maximum number of steps M uses on any input of length n.
We say "M runs in time f(n)" or "M is an f(n) time TM".

Time complexity
Example: a TM M deciding L = {0^k 1^k : k ≥ 0}.
On input x:
- scan the tape left-to-right; reject if a 0 appears to the right of a 1   (# steps?)
- repeat while both 0's and 1's remain on the tape: scan, crossing off one 0 and one 1   (# steps?)
- if only 0's or only 1's remain, reject; if neither 0's nor 1's remain, accept   (# steps?)
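A sketch, added for illustration, that mimics this single-tape algorithm in Python and counts head movements as a rough stand-in for TM steps (the tape handling is a simplification, not the formal machine).

    # Simulates the crossing-off algorithm on an input string over {0,1},
    # counting (roughly) how many cells the head visits.
    def decide_0k1k(x):
        tape, steps = list(x), 0
        # pass 1: reject if a 0 appears to the right of a 1
        seen_one = False
        for ch in tape:
            steps += 1
            if ch == '1':
                seen_one = True
            elif seen_one:
                return False, steps
        # repeatedly cross off one 0 and one 1; each pass rescans the tape
        while '0' in tape and '1' in tape:
            tape[tape.index('0')] = 'x'
            tape[tape.index('1')] = 'x'
            steps += len(tape)
        # final scan: accept iff nothing is left uncrossed
        steps += len(tape)
        return not ('0' in tape or '1' in tape), steps

    if __name__ == "__main__":
        for k in (10, 100, 1000):
            _, steps = decide_0k1k('0' * k + '1' * k)
            print(k, steps)   # grows roughly quadratically in the input length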

Time complexity
- We do not care about fine distinctions, e.g. how many additional steps M takes to check that it is at the left end of the tape.
- We care about the behavior on large inputs: a general-purpose algorithm should be "scalable", and overhead for e.g. initialization shouldn't matter in the big picture.

Time complexity
- Measure time complexity using asymptotic notation ("big-oh notation"):
- disregard lower-order terms in the running time
- disregard the coefficient on the highest-order term
- example: f(n) = 6n^3 + 2n^2 + 100n + 102781; "f(n) is order n^3"; write f(n) = O(n^3)

Asymptotic notation
Definition: given functions f, g: N → R+, we say f(n) = O(g(n)) if there exist positive integers c, n_0 such that for all n ≥ n_0, f(n) ≤ c·g(n).
- meaning: f(n) is (asymptotically) less than or equal to g(n)
- if g > 0, we can assume n_0 = 0 by setting c' = max_{0 ≤ n ≤ n_0} {c, f(n)/g(n)}
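A small added illustration of the definition: checking a candidate witness (c, n_0) numerically, and computing the larger constant c' that lets n_0 be pushed down (the particular f, g and bounds below are arbitrary choices).

    def f(n): return 6 * n**3 + 2 * n**2 + 100 * n + 102781
    def g(n): return n**3

    def is_big_o_witness(f, g, c, n0, check_up_to=10**5):
        # Spot-check f(n) <= c*g(n) for n0 <= n <= check_up_to
        # (a finite check, not a proof of the asymptotic claim).
        return all(f(n) <= c * g(n) for n in range(n0, check_up_to + 1))

    def widen_constant(f, g, c, n0):
        # c' = max over {c} and {f(n)/g(n) : 0 <= n <= n0, g(n) > 0}, so that
        # f(n) <= c'*g(n) holds at every n where g is positive.
        ratios = [f(n) / g(n) for n in range(n0 + 1) if g(n) > 0]
        return max([c] + ratios)

    if __name__ == "__main__":
        c, n0 = 7, 100
        print(is_big_o_witness(f, g, c, n0))   # True: f(n) = O(n^3)
        print(widen_constant(f, g, c, n0))     # one constant that works for all n >= 1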

Asymptotic notation facts
Each bound is asymptotically less than the next:
- "logarithmic": O(log n)
  log_b n = (log_2 n)/(log_2 b), so log_b n = O(log_2 n) for any constant b; therefore we suppress the base when writing it
- "polynomial": O(n^c) = n^{O(1)}
  also: c^{O(log n)} = O(n^{c'}) = n^{O(1)}
- "exponential": O(2^{n^δ}) for δ > 0
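Two of these identities illustrated numerically (an added sketch; the constants n, b and c below are arbitrary).

    import math

    n, b, c = 10**6, 7, 3
    # change of base: log_b n is a constant multiple of log_2 n
    print(math.log(n, b), math.log(n, 2) / math.log(b, 2))   # equal
    # c^(log_2 n) = n^(log_2 c), so c^O(log n) is polynomial in n
    print(c ** math.log2(n), n ** math.log2(c))              # equal, up to float error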

Time complexity
On input x:
- scan the tape left-to-right; reject if a 0 appears to the right of a 1   (O(n) steps)
- repeat while both 0's and 1's remain on the tape: scan, crossing off one 0 and one 1   (≤ n repeats, O(n) steps each)
- if only 0's or only 1's remain, reject; if neither 0's nor 1's remain, accept   (O(n) steps)
total = O(n) + n·O(n) + O(n) = O(n^2)

Time complexity
- Recall: a language is a set of strings; a complexity class is a set of languages
- complexity classes we've seen: Regular Languages, Context-Free Languages, Decidable Languages, RE Languages, co-RE Languages
Definition: TIME(t(n)) = {L : there exists a TM M that decides L in time O(t(n))}

Time complexity
- We saw that L = {0^k 1^k : k ≥ 0} is in TIME(n^2).
- Book: it is also in TIME(n log n), via a more clever algorithm.
- Can prove: there does not exist a (single-tape) TM that decides L in time (asymptotically) less than n log n.
- How about on a multitape TM?

Time complexity
A 2-tape TM M deciding L = {0^k 1^k : k ≥ 0}.
On input x:
- scan the tape left-to-right; reject if a 0 appears to the right of a 1   (O(n))
- scan the 0's on tape 1, copying them to tape 2   (O(n))
- scan the 1's on tape 1, crossing off 0's on tape 2; if all 0's are crossed off before the 1's are done, reject; if 0's remain after the 1's are done, reject; otherwise accept   (O(n))
total: 3·O(n) = O(n)
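For contrast with the single-tape sketch earlier, an added Python sketch of the 2-tape idea, again counting head movements loosely; it makes linearly many moves rather than quadratically many.

    # Mimics the 2-tape algorithm: copy the 0's to a second tape, then cross
    # one off for each 1.  Step counting here is deliberately coarse.
    def decide_0k1k_2tape(x):
        steps = 0
        # pass 1: reject if a 0 appears to the right of a 1
        seen_one = False
        for ch in x:
            steps += 1
            if ch == '1':
                seen_one = True
            elif seen_one:
                return False, steps
        zeros = x.count('0')        # "copy the 0's to tape 2": one more pass
        ones = x.count('1')         # "cross off 0's against 1's": one more pass
        steps += 2 * len(x)
        return zeros == ones, steps # equal counts <=> x is in {0^k 1^k}

    if __name__ == "__main__":
        for k in (10, 100, 1000):
            _, steps = decide_0k1k_2tape('0' * k + '1' * k)
            print(k, steps)         # grows linearly in the input length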

Multitape TMs
- It is convenient to "program" multitape TMs rather than single-tape ones
- they are equivalent when talking about decidability
- they are not equivalent when talking about time complexity
Theorem: Let t(n) satisfy t(n) ≥ n. Every multitape TM running in time t(n) has an equivalent single-tape TM running in time O(t(n)^2).

Multitape TMs
Simulation of a k-tape TM by a single-tape TM: store the contents of all k tapes on one tape, separated by # markers, and add a new "dotted" symbol for each old symbol to mark the locations of the virtual heads.
[figure: tapes containing "a b a b" (the input tape), "a a", and "b b c d" are combined into the single tape "# a b a b # a a # b b c d #", with one dotted symbol per segment]

Multitape TMs
The single-tape machine repeats the following, O(t(n)) times in total:
- scan the tape, remembering the symbols under each virtual head in the state   (O(k·t(n)) = O(t(n)) steps)
- make changes to reflect 1 step of M; if a head hits a #, shift the tape to the right to make room   (O(k·t(n)) = O(t(n)) steps)
- when M halts, erase all but the 1st string   (O(t(n)) steps)
[figure: the same single-tape encoding "# a b a b # a a # b b c d #" as on the previous slide]
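A small added sketch of just the tape encoding used in this simulation; rendering the "dot" marker as an uppercase letter is an arbitrary choice made here for readability.

    # Encode k tapes (each a string) plus their head positions as the single
    # tape "# tape1 # tape2 # ... #", marking the symbol under each virtual
    # head by uppercasing it.
    def encode(tapes, heads):
        parts = []
        for tape, h in zip(tapes, heads):
            parts.append(''.join(ch.upper() if i == h else ch
                                 for i, ch in enumerate(tape)))
        return '#' + '#'.join(parts) + '#'

    if __name__ == "__main__":
        print(encode(["abab", "aa", "bbcd"], [1, 1, 0]))   # "#aBab#aA#Bbcd#"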

Multitape TMs
- Moral: feel free to use k-tape TMs, but be aware of the quadratic slowdown in the conversion to a single-tape TM
- note: if t(n) = O(n^c) then t(n)^2 = O(n^{2c}) = O(n^{c'})
- note: if t(n) = O(2^{n^δ}) for δ > 0, then t(n)^2 = O(2^{2n^δ}) = O(2^{n^{δ'}}) for some δ' > 0
- the high-level operations you are used to using can be simulated by a TM with only polynomial slowdown, e.g., copying, moving, incrementing/decrementing, and the arithmetic operations +, -, *, /