Computational Complexity Theory


Computational Complexity Theory Lecture 6: Ladner’s theorem (contd.); Relativization Indian Institute of Science

Recap: Class co-NP and EXP Definition. A language L ⊆ {0,1}* is in co-NP if there's a poly-time TM M and a polynomial function p such that x ∈ L ⟺ ∀u ∈ {0,1}^p(|x|), M(x, u) = 1. Definition. EXP = ∪_{c ≥ 1} DTIME(2^(n^c)). (Slide figure: a Venn diagram with P inside NP ∩ co-NP, all contained in EXP.)

Recap: Diagonalization Diagonalization refers to a class of techniques used in complexity theory to separate complexity classes. These techniques are characterized by two main features: 1. There's a universal TM U that, when given strings α and x, simulates M_α on x with only a small overhead. 2. Every string represents some TM, and every TM can be represented by infinitely many strings.

Recap: Time Hierarchy Theorem Let f(n) and g(n) be time-constructible functions s.t. f(n)·log f(n) = o(g(n)). Theorem. DTIME(f(n)) ⊊ DTIME(g(n)). Theorem. P ⊊ EXP.

Recap: Ladner's theorem Definition. A language L in NP is NP-intermediate if L is neither in P nor NP-complete. Theorem. (Ladner) If P ≠ NP then there is an NP-intermediate language. Proof. Let H: ℕ → ℕ be a function. Let SAT_H = {ψ 0 1^(m^H(m)) : ψ ∈ SAT and |ψ| = m}. H will be defined in such a way that SAT_H is NP-intermediate (assuming P ≠ NP).

Recap: Properties of H Theorem. There's a function H: ℕ → ℕ such that: 1. H(m) is computable from m in O(m^3) time; 2. if SAT_H ∈ P then H(m) ≤ C (a constant); 3. if SAT_H ∉ P then H(m) → ∞ as m → ∞.

Recap: Proof of Ladner's theorem (assuming P ≠ NP) Suppose SAT_H ∈ P. Then H(m) ≤ C. This implies a poly-time algorithm for SAT as follows: On input ϕ, find m = |ϕ|. Compute H(m), and construct the string ϕ 0 1^(m^H(m)), of length at most m + 1 + m^C. Check if ϕ 0 1^(m^H(m)) belongs to SAT_H. This contradicts P ≠ NP; hence SAT_H ∉ P.
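A minimal Python sketch of this decision procedure, assuming two hypothetical helpers: H is the padding function of the construction, and decide_sat_h is the assumed polynomial-time decider for SAT_H (formulas are treated as strings over {0,1}):

def sat_via_sat_h_in_p(phi, H, decide_sat_h):
    # Sketch only: H and decide_sat_h are assumed helpers, not given here.
    m = len(phi)
    pad = m ** H(m)                  # padding length m^H(m); at most m^C when H(m) <= C
    padded = phi + '0' + '1' * pad   # the string phi 0 1^(m^H(m)), length <= m + 1 + m^C
    return decide_sat_h(padded)      # phi is in SAT  iff  the padded string is in SAT_H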

Recap: Proof of Ladner's theorem (assuming P ≠ NP) Suppose SAT_H is NP-complete. Then H(m) → ∞ with m. This also implies a poly-time algorithm for SAT: there's a poly-time reduction f witnessing SAT ≤p SAT_H, mapping ϕ to f(ϕ) = ψ 0 1^k. On input ϕ, compute f(ϕ) = ψ 0 1^k. Let m = |ψ|. Compute H(m) and check if k = m^H(m). W.l.o.g. |f(ϕ)| ≤ n^c, and since H(m) → ∞, for all but finitely many ϕ we have |f(ϕ)| ≥ m^H(m) ≥ m^(2c); hence √n ≥ m. Thus, checking whether an n-size formula ϕ is satisfiable reduces to checking whether a √n-size formula ψ is satisfiable. Repeating this shrinks the formula to constant size, giving a poly-time algorithm for SAT and contradicting P ≠ NP; hence SAT_H is not NP-complete.
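A rough Python sketch of this recursive argument, again with hypothetical helpers: f is the assumed poly-time Karp reduction from SAT to SAT_H, H is the padding function, and brute_force_sat is an exhaustive solver used once the formula has constant size:

def sat_via_sat_h_np_complete(phi, f, H, brute_force_sat):
    # Sketch only: f, H and brute_force_sat are assumed helpers.
    if len(phi) <= 4:                   # constant-size base case
        return brute_force_sat(phi)
    image = f(phi)                      # of the form  psi 0 1^k
    trimmed = image.rstrip('1')         # peel off the trailing block of 1s
    k = len(image) - len(trimmed)
    if not trimmed.endswith('0'):
        return False                    # not of the expected padded form, so not in SAT_H
    psi = trimmed[:-1]
    m = len(psi)
    if k != m ** H(m):
        return False                    # wrong padding, so f(phi) is not in SAT_H
    # |psi| is (for large inputs) about sqrt(|phi|), so the recursion bottoms out quickly.
    return sat_via_sat_h_np_complete(psi, f, H, brute_force_sat)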

Construction of H Observation. The value of H(m) determines membership in SAT_H only of strings whose length is ≥ m. Therefore, it is OK to define H(m) based on membership in SAT_H of strings whose length is much smaller than m (say, at most log m).

Construction of H Construction. H(m) is the smallest k < log log m s.t. M_k decides membership in SAT_H of every string x of length up to log m within k·|x|^k time. If no such k exists then H(m) = log log m.
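A rough Python sketch of this definition, purely for illustration; simulate and sat_h_reference are hypothetical helpers (the first runs the k-th machine M_k for a bounded number of steps, the second supplies the correct SAT_H answer for the short strings involved, which is what makes H(m) computable from m alone):

from itertools import product
from math import floor, log2

def compute_H(m, simulate, sat_h_reference):
    # simulate(k, x, steps) -> 'accept' | 'reject' | 'timeout'   (assumed helper)
    # sat_h_reference(x)    -> bool, the correct answer to "x in SAT_H?"
    #                          for the short strings tested here  (assumed helper)
    log_m = floor(log2(max(2, m)))
    loglog_m = floor(log2(max(2, log_m)))
    for k in range(1, loglog_m):                    # candidate values k < log log m
        agrees_everywhere = True
        for n in range(log_m + 1):                  # every string of length <= log m
            for bits in product('01', repeat=n):
                x = ''.join(bits)
                verdict = simulate(k, x, k * max(1, n) ** k)   # budget k·|x|^k steps
                if verdict == 'timeout' or (verdict == 'accept') != sat_h_reference(x):
                    agrees_everywhere = False
                    break
            if not agrees_everywhere:
                break
        if agrees_everywhere:
            return k                                # the smallest such k
    return loglog_m                                 # no such k exists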

Construction of H Homework. Prove that H(m) is computable from m in O(m^3) time.

Construction of H Claim. If SAT_H ∈ P then H(m) ≤ C (a constant). Proof. There is a poly-time TM M that decides membership of every x in SAT_H within c·|x|^c time. As M can be represented by infinitely many strings, there's an α ≥ c s.t. M = M_α decides membership of every x in SAT_H within α·|x|^α time. So, for every m satisfying α < log log m, H(m) ≤ α.

Construction of H Claim. If H(m) ≤ C (a constant) then SAT_H ∈ P. Proof. There's a k ≤ C s.t. H(m) = k for infinitely many m. Pick any x ∈ {0,1}*. Choose a large enough m s.t. |x| ≤ log m and H(m) = k. By the definition of H, x is correctly decided by M_k in k·|x|^k time. So, M_k is a poly-time machine deciding SAT_H.

Natural NP-intermediate problem? Integer factoring. FACT = {(N, U) : there's a prime p ≤ U dividing N}. Claim. FACT ∈ NP ∩ co-NP. So, if FACT were NP-complete then NP = co-NP.

Proof. FACT ∈ NP: give a prime p dividing N with p ≤ U as the certificate. The verifier checks that p is prime (using the AKS primality test), that p ≤ U, and that p divides N.

FACT ∈ co-NP: give the complete prime factorization of N as the certificate. The verifier checks that each listed factor is prime, that the factorization multiplies out to N, and that none of the prime factors is ≤ U.
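Both certificates can be checked by simple verifiers; a small Python sketch, with a naive trial-division primality test standing in for the AKS test mentioned above:

def is_prime(p):
    # Naive deterministic primality test (stand-in for AKS, for illustration).
    if p < 2:
        return False
    d = 2
    while d * d <= p:
        if p % d == 0:
            return False
        d += 1
    return True

def verify_fact_yes(N, U, p):
    # NP verifier: the certificate p witnesses (N, U) ∈ FACT.
    return is_prime(p) and p <= U and N % p == 0

def verify_fact_no(N, U, factorization):
    # co-NP verifier: the certificate is the full prime factorization of N,
    # given as (prime, exponent) pairs; it witnesses (N, U) ∉ FACT when it
    # multiplies out to N, every listed factor is prime, and none is <= U.
    total = 1
    for q, e in factorization:
        if not is_prime(q) or q <= U:
            return False
        total *= q ** e
    return total == N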

Factoring algorithm. Dixon's randomized algorithm factors an n-bit number in exp(O(√(n log n))) time.
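For reference, this is the usual sub-exponential bound restated in terms of N itself (with n the bit length of N); a standard rewriting, not taken from the slide:

\[
\exp\Bigl(O\bigl(\sqrt{\log N \,\log\log N}\bigr)\Bigr) \;=\; \exp\Bigl(O\bigl(\sqrt{n \log n}\bigr)\Bigr),
\qquad n = \lceil \log_2 N \rceil .
\]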

Power & limits of diagonalization Like in the proof of P ≠ EXP, can we use diagonalization to show P ≠ NP ? The answer is No, if one insists on using only the two features of diagonalization.

Oracle Turing Machines Definition: Let L ⊆ {0,1}* be a language. An oracle TM M^L is a TM with a special query tape and three special states q_query, q_yes and q_no such that whenever the machine enters the q_query state, it immediately transits to q_yes or q_no depending on whether the string on the query tape belongs to L. (M^L has oracle access to L.)

Think of a physical realization of M^L as a device with access to a subroutine that decides L. We don't count the time taken by the subroutine. (Slide figure: M^L sending a query to a decider for L.)
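A toy Python model of this bookkeeping convention (purely illustrative; the oracle is an arbitrary callable standing in for the decider of L):

class OracleTM:
    # Toy model of an oracle machine M^L: the machine's own moves are counted,
    # while each oracle query is answered in a single step by a decider for L
    # whose running time is not charged to the machine.
    def __init__(self, oracle):
        self.oracle = oracle     # callable deciding the oracle language L
        self.steps = 0           # steps charged to the machine itself

    def step(self):
        self.steps += 1          # an ordinary computation step

    def query(self, w):
        self.steps += 1          # entering q_query costs one step...
        return self.oracle(w)    # ...and the q_yes / q_no answer is free

Charging each query a single step (beyond the time spent writing it on the query tape) is what makes "polynomial time with oracle access to L" a well-defined notion.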

The transition table of M^L doesn't have any rule of the kind (q_query, b) → (q, c, L/R).

We can define a nondeterministic oracle TM similarly.

Important note: Oracle TMs (deterministic/nondeterministic) have the same two features used in diagonalization: For any fixed L ⊆ {0,1}*, 1. There's an efficient universal TM with oracle access to L, 2. Every M^L has infinitely many representations.

Relativization

Complexity classes using oracles Definition: Let L ⊆ {0,1}* be a language. The complexity classes P^L, NP^L and EXP^L are defined just as P, NP and EXP respectively, but with TMs replaced by oracle TMs with oracle access to L.

Example: SAT ∈ P^SAT.
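A toy illustration of this (the SAT-deciding oracle is, of course, simply assumed to be available):

def sat_with_sat_oracle(phi, sat_oracle):
    # A poly-time oracle machine for SAT with oracle access to SAT itself:
    # write phi on the query tape, enter q_query, and copy the q_yes / q_no answer.
    # sat_oracle stands in for the oracle; its running time is not charged.
    return sat_oracle(phi)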

Relativizing results Observation: Let L ⊆ {0,1}* be an arbitrarily fixed language. Owing to the 'Important note', the proof of P ≠ EXP can be easily adapted to prove P^L ≠ EXP^L by working with TMs that have oracle access to L. We say that the P ≠ EXP result relativizes.

More generally, owing to the 'Important note', any proof/result that uses only the two features of diagonalization relativizes.

Relativizing results Is it true that either P^L = NP^L for every L ⊆ {0,1}*, or P^L ≠ NP^L for every L ⊆ {0,1}*? Theorem (Baker-Gill-Solovay): The answer is No. Consequently, any proof of P = NP or of P ≠ NP cannot relativize.


Baker-Gill-Solovay theorem Theorem: There exist languages A and B such that P^A = NP^A but P^B ≠ NP^B. Proof: Using diagonalization!

Proof: Let A = {(M, x, 1^m) : M accepts x in at most 2^m steps}. A is an EXP-complete language under poly-time Karp reductions. Then P^A = EXP. Also, NP^A = EXP. Hence P^A = NP^A.
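The two equalities follow from a chain of inclusions; a brief sketch of the standard argument (not spelled out on the slide):

\[
\mathrm{EXP} \;\subseteq\; \mathrm{P}^{A} \;\subseteq\; \mathrm{NP}^{A} \;\subseteq\; \mathrm{EXP}.
\]

The first inclusion holds because any language in EXP Karp-reduces to the EXP-complete A in polynomial time, after which a single oracle query settles membership. The second is immediate. The last holds because a poly-time NP^A computation can be simulated in deterministic exponential time: enumerate all 2^poly(n) nondeterministic branches and answer each polynomial-length oracle query by running an exponential-time decider for A directly.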

Why isn't EXP^A = EXP?

For the second part: for any language B, let L_B = {1^n : there's a string of length n in B}. Observe that L_B ∈ NP^B for any B. (Guess a string, check that it has length n, and ask oracle B to verify membership.)

We'll construct B (using diagonalization) in such a way that L_B ∉ P^B, implying P^B ≠ NP^B.