CS151 Complexity Theory Lecture 11 May 7, 2019.

Min-entropy
General model of a physical source with k < n bits of hidden randomness: a string sampled uniformly from a set of 2^k strings inside {0,1}^n.
Definition: a random variable X on {0,1}^n has min-entropy min_x –log(Pr[X = x]).
Min-entropy k implies no string has weight more than 2^{-k}.
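
To make the definition concrete, here is a minimal sketch (mine, not from the lecture) that computes the min-entropy of a finite distribution given as a dictionary of probabilities; the function name is illustrative.

```python
import math

def min_entropy(dist):
    """Min-entropy of a distribution given as {outcome: probability}.

    H_min(X) = min_x -log2(Pr[X = x]) = -log2(max_x Pr[X = x]).
    """
    p_max = max(dist.values())
    return -math.log2(p_max)

# A flat source on 2^k strings out of {0,1}^n has min-entropy exactly k:
k = 3
flat = {format(i, "05b"): 1 / 2**k for i in range(2**k)}  # 8 strings in {0,1}^5
assert abs(min_entropy(flat) - k) < 1e-9
```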

Extractor
Extractor: a universal procedure for "purifying" an imperfect source:
E is efficiently computable
a truly random seed acts as a "catalyst"
[diagram: E takes an n-bit source string (sampled from a set of 2^k strings in {0,1}^n) and a t-bit seed, and outputs m near-uniform bits]

Extractor
"(k, ε)-extractor" ⇒ for all X with min-entropy k, the output fools all circuits C:
|Pr_z[C(z) = 1] - Pr_{y, x←X}[C(E(x, y)) = 1]| ≤ ε
i.e., the distributions E(X, U_t) and U_m are "ε-close" (L1 distance ≤ 2ε).
Notice the similarity to PRGs: the output of a PRG fools all efficient tests; the output of an extractor fools all tests.
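
As a sanity check on the "ε-close" terminology, here is a small sketch (my own addition) computing the L1 distance between two distributions and the corresponding statistical distance, i.e. the best possible distinguishing advantage, which is half the L1 distance.

```python
def l1_distance(p, q):
    """L1 distance between two distributions given as dicts {outcome: prob}."""
    support = set(p) | set(q)
    return sum(abs(p.get(x, 0.0) - q.get(x, 0.0)) for x in support)

def statistical_distance(p, q):
    """Best advantage of any test (even an inefficient one) = half the L1 distance."""
    return l1_distance(p, q) / 2

# If E(X, U_t) and U_m have statistical distance <= epsilon, then no test C
# distinguishes them with advantage more than epsilon -- "L1 dist <= 2*epsilon".
```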

Using extractors
Main motivating application: use the output in place of randomness in any application; this alters the probability of any outcome by at most ε.
But how do we get the truly random seed? Enumerate all seeds and take the majority answer.

Extractors
Goals (good / best):
short seed:   O(log n)      /  log n + O(1)
long output:  m = k^{Ω(1)}  /  m = k + t – O(1)
many k's:     k = n^{Ω(1)}  /  any k = k(n)
[diagram: E maps an n-bit source string (from a set of 2^k strings in {0,1}^n) and a t-bit seed to m near-uniform bits]

Extractors
A random function for E achieves the best parameters! But we need explicit constructions: many are known, often complex and technical; optimal extractors are still open.
Trevisan Extractor: the insight is to use the NW generator with the source string in place of the hard function. This works (!!), and the proof is slightly different from NW, and easier.

Trevisan Extractor
Ingredients (δ > 0 and m are parameters):
an error-correcting code C: {0,1}^n → {0,1}^{n'} with distance (½ – ¼m^{-4})n' and blocklength n' = poly(n)
a (log n', a = δ log n / 3) design S_1, S_2, …, S_m ⊆ {1…t = O(log n')}
E(x, y) = C(x)[y|_{S_1}] ∘ C(x)[y|_{S_2}] ∘ … ∘ C(x)[y|_{S_m}]
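
A minimal sketch of how the pieces fit together, assuming we are handed an encoder enc for the code C and the design sets S_1, …, S_m (both are placeholders here, not actual constructions):

```python
def trevisan_extract(x_bits, seed_bits, enc, designs):
    """E(x, y) = C(x)[y|S_1] . C(x)[y|S_2] . ... . C(x)[y|S_m]

    x_bits:    list of 0/1 values, the n-bit source string x
    seed_bits: list of 0/1 values, the t-bit seed y
    enc:       function mapping x_bits to the n'-bit codeword C(x)
    designs:   list of m sets S_i of seed positions (each of size log n')
    """
    codeword = enc(x_bits)                               # C(x), length n'
    output = []
    for S in designs:
        restricted = [seed_bits[j] for j in sorted(S)]   # y restricted to S_i
        index = int("".join(map(str, restricted)), 2)    # interpret y|S_i as an index
        output.append(codeword[index % len(codeword)])   # i-th output bit of E(x, y)
    return output
```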

Trevisan Extractor
E(x, y) = C(x)[y|_{S_1}] ∘ C(x)[y|_{S_2}] ∘ … ∘ C(x)[y|_{S_m}]
Theorem (T): E is an extractor for min-entropy k = n^δ, with
output length m = k^{1/3}
seed length t = O(log n)
error ε ≤ 1/m
[figure: the seed y selects m positions of the codeword C(x)]

Trevisan Extractor
Proof: given X ⊆ {0,1}^n of size 2^k, assume E fails to ε-pass a statistical test C:
|Pr_z[C(z) = 1] – Pr_{x←X, y}[C(E(x, y)) = 1]| > ε
distinguisher C ⇒ predictor P:
Pr_{x←X, y}[P(E(x, y)_{1…i-1}) = E(x, y)_i] > ½ + ε/m

Trevisan Extractor
Proof (continued): for at least an ε/2 fraction of x ∈ X we have:
Pr_y[P(E(x, y)_{1…i-1}) = E(x, y)_i] > ½ + ε/(2m)
Fix bits α, β outside of S_i to preserve the advantage:
Pr_{y'}[P(E(x; αy'β)_{1…i-1}) = C(x)[y']] > ½ + ε/(2m)
As y' varies, for j ≠ i the j-th bit of E(x; αy'β) varies over only 2^a values, so (m–1) tables of 2^a values supply E(x; αy'β)_{1…i-1}.

Trevisan Extractor
[figure: the predictor P, given y' ∈ {0,1}^{log n'} and the (m–1) hardwired tables, outputs C(x)[y'] with probability ½ + ε/(2m)]

Trevisan Extractor
Proof (continued): the (m–1) tables of size 2^a constitute a description of a string that has ½ + ε/(2m) agreement with C(x).
How many strings x have such a description? exp((m–1)2^a) = exp(n^{2δ/3}) = exp(k^{2/3}) strings.
Johnson Bound: each such string accounts for at most O(m^4) x's.
Total: O(m^4)·exp(k^{2/3}) << 2^k·(ε/2), a contradiction.

Extractors
Trevisan: k = n^δ, t = O(log n), m = k^{1/3}, ε = 1/m
(k, ε)-extractor: E is efficiently computable; for all X with min-entropy k, E fools all circuits C:
|Pr_z[C(z) = 1] - Pr_{y, x←X}[C(E(x, y)) = 1]| ≤ ε
[diagram: E maps an n-bit source string (from a set of 2^k strings in {0,1}^n) and a t-bit seed to m near-uniform bits]

Strong error reduction
L ∈ BPP if there is a p.p.t. TM M:
x ∈ L ⇒ Pr_y[M(x,y) accepts] ≥ 2/3
x ∉ L ⇒ Pr_y[M(x,y) rejects] ≥ 2/3
Want:
x ∈ L ⇒ Pr_y[M(x,y) accepts] ≥ 1 – 2^{-k}
x ∉ L ⇒ Pr_y[M(x,y) rejects] ≥ 1 – 2^{-k}
We saw: repeat O(k) times, using n = O(k)·|y| random bits; 2^{n-k} bad strings.
Want: spend n = poly(|y|) random bits and achieve << 2^{n/3} bad strings.

Strong error reduction
Better: let E be an extractor for min-entropy k = |y|^3 = n^δ, with ε < 1/6.
Pick a random w ∈ {0,1}^n, run M(x, E(w, z)) for all z ∈ {0,1}^t, and take the majority answer.
Call w "bad" if maj_z M(x, E(w, z)) is incorrect; then
|Pr_z[M(x, E(w,z)) = b] - Pr_y[M(x,y) = b]| ≥ 1/6
Extractor property: at most 2^k bad w.
So: n random bits, and only 2^{n^δ} bad strings.
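
A sketch of this amplification procedure, assuming a BPP algorithm M(x, y) and an extractor E are handed in as functions (names are placeholders):

```python
from collections import Counter
from itertools import product
import random

def amplified(M, E, x, n, t):
    """Error-reduced run of a BPP machine M using an extractor E.

    M: function M(x, y) -> 0/1, where y is a list of m truly random bits
    E: extractor E(w, z) -> list of m bits, with |w| = n and |z| = t
    Pick one random w, enumerate all 2^t seeds z, take the majority answer;
    only the (at most 2^k) "bad" w make the majority answer incorrect.
    """
    w = [random.randint(0, 1) for _ in range(n)]     # the only truly random bits
    votes = Counter(M(x, E(w, list(z))) for z in product([0, 1], repeat=t))
    return votes.most_common(1)[0][0]                # majority over all seeds z
```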

RL
Recall: a probabilistic Turing Machine is a deterministic TM with an extra tape for "coin flips".
RL (Randomized Logspace): L ∈ RL if there is a probabilistic logspace TM M:
x ∈ L ⇒ Pr_y[M(x,y) accepts] ≥ ½
x ∉ L ⇒ Pr_y[M(x,y) rejects] = 1
Important detail #1: only one-way access to the coin-flip tape is allowed.
Important detail #2: we explicitly require M to run in polynomial time.

RL
L ⊆ RL ⊆ NL ⊆ SPACE(log^2 n)
Theorem (SZ): RL ⊆ SPACE(log^{3/2} n)
Belief: L = RL (major open problem)

RL
L ⊆ RL ⊆ NL
Natural problem, Undirected STCONN: given an undirected graph G = (V, E) and nodes s, t, is there a path from s to t?
Theorem: USTCONN ∈ RL. (Recall: STCONN is NL-complete.)

Undirected STCONN
Proof sketch (in Papadimitriou):
add a self-loop to each vertex (for technical reasons)
start at s, take a random walk of 2|V||E| steps, and accept if t is ever seen
Lemma: the expected return time for any node i is 2|E|/d_i.
Suppose s = v_1, v_2, …, v_n = t is a path; the expected time from v_i to v_{i+1} is (d_i/2)(2|E|/d_i) = |E|, so the expected time to reach v_n is at most |V||E|.
By Markov's inequality, Pr[fail to reach t in 2|V||E| steps] ≤ ½.
Reingold 2005: USTCONN ∈ L.
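
A sketch of this random-walk algorithm (my own rendering; the graph is given as an adjacency list, and the walk is tracked explicitly rather than in logspace):

```python
import random

def ustconn_random_walk(adj, s, t):
    """Random-walk test for undirected s-t connectivity.

    adj: dict mapping each vertex to a list of its neighbors (assumed symmetric;
         add a self-loop to every vertex beforehand).
    Walk 2*|V|*|E| steps from s and accept if t is ever visited.
    Accepts connected pairs with probability >= 1/2; never accepts disconnected ones.
    """
    num_vertices = len(adj)
    num_edges = sum(len(nbrs) for nbrs in adj.values()) // 2
    steps = 2 * num_vertices * num_edges
    v = s
    for _ in range(steps):
        if v == t:
            return True
        v = random.choice(adj[v])      # move to a uniformly random neighbor
    return v == t
```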

A motivating question
Central problem in logic synthesis: given a Boolean circuit C and an integer k, is there a circuit C' of size at most k that computes the same function C does?
Complexity of this problem? NP-hard? In NP? In coNP? In PSPACE? Complete for any of these classes?
[figure: a Boolean circuit built from ∧, ∨, ¬ gates over inputs x_1, x_2, x_3, …, x_n]

Oracle Turing Machines
Oracle Turing Machine (OTM): a multitape TM M with a special "query" tape and special states q_?, q_yes, q_no.
On input x, with oracle language A, M^A runs as usual, except when M^A enters state q_?:
y = contents of the query tape
y ∈ A ⇒ transition to q_yes
y ∉ A ⇒ transition to q_no

Oracle Turing Machines
A nondeterministic OTM is defined in the same way (with a transition relation rather than a function).
The oracle is like a subroutine, or a function in your favorite programming language, but each call counts as a single step.
Example: given φ_1, φ_2, …, φ_n, is an even number of them satisfiable? A poly-time OTM solves this with a SAT oracle.
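
To illustrate "oracle as subroutine", here is a sketch of that example; sat_oracle is a hypothetical black box that answers each SAT query in a single step.

```python
def even_number_satisfiable(formulas, sat_oracle):
    """Given formulas phi_1, ..., phi_n, decide whether an even number of them
    are satisfiable, treating a SAT oracle as a black-box subroutine.

    sat_oracle: hypothetical function, formula -> True/False (one step per call).
    The surrounding loop runs in polynomial time and makes n oracle calls.
    """
    count = sum(1 for phi in formulas if sat_oracle(phi))
    return count % 2 == 0
```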

Oracle Turing Machines
Shorthand #1: applying oracles to entire complexity classes. For a complexity class C and a language A:
C^A = {L decided by an OTM M with oracle A, where M is "in" C}
Example: P^SAT

Oracle Turing Machines
Shorthand #2: using complexity classes as oracles. For an OTM M and a complexity class C:
M^C decides language L if for some language A ∈ C, M^A decides L.
Both together: C^D = languages decided by an OTM "in" C with an oracle language from D.
Exercise: show P^SAT = P^NP.

The Polynomial-Time Hierarchy
We can define lots of complexity classes using oracles; the classes on the next slide stand out:
they have natural complete problems
they have a natural interpretation in terms of alternating quantifiers
they help us state certain consequences and containments (more later)

The Polynomial-Time Hierarchy
Δ_1 = P      Σ_1 = NP        Π_1 = coNP
Δ_2 = P^NP   Σ_2 = NP^NP     Π_2 = coNP^NP
Δ_{i+1} = P^{Σ_i}   Σ_{i+1} = NP^{Σ_i}   Π_{i+1} = coNP^{Σ_i}
Polynomial Hierarchy: PH = ∪_i Σ_i

The Polynomial-Time Hierarchy
Δ_{i+1} = P^{Σ_i}   Σ_{i+1} = NP^{Σ_i}   Π_{i+1} = coNP^{Σ_i}
Example, MIN CIRCUIT: given a Boolean circuit C and an integer k, is there a circuit C' of size at most k that computes the same function C does?
MIN CIRCUIT ∈ Σ_2: guess a circuit C' of size at most k (∃), then verify that C' and C agree on every input (∀).
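
The Σ_2 structure can be spelled out in a brute-force sketch (mine, exponential time); circuits_of_size_at_most is a hypothetical enumerator standing in for the existential quantifier, and circuits are modeled as Boolean functions.

```python
from itertools import product

def min_circuit(C, k, num_inputs, circuits_of_size_at_most):
    """Sigma_2 structure of MIN CIRCUIT written as explicit quantifiers:
    exists a circuit C' of size <= k such that for all inputs x, C'(x) == C(x).

    circuits_of_size_at_most(k, num_inputs): hypothetical enumerator of candidate
    circuits, each modeled as a function on num_inputs bits.
    """
    for C_prime in circuits_of_size_at_most(k, num_inputs):        # exists C'
        if all(C_prime(x) == C(x)
               for x in product([0, 1], repeat=num_inputs)):       # forall x
            return True
    return False
```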

The Polynomial-Time Hierarchy
Δ_{i+1} = P^{Σ_i}   Σ_{i+1} = NP^{Σ_i}   Π_{i+1} = coNP^{Σ_i}
Example, EXACT TSP: given a weighted graph G and an integer k, is the k-th bit of the length of the shortest TSP tour in G a 1?
EXACT TSP ∈ Δ_2: binary-search for the optimal tour length with an NP oracle, then read off the k-th bit.
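
A sketch of that Δ_2 algorithm; tour_of_length_at_most is a hypothetical NP-oracle call ("is there a tour of length ≤ B?") answered in one step, and the binary search makes only polynomially many such calls.

```python
def exact_tsp_bit(G, k, max_length, tour_of_length_at_most):
    """Delta_2 algorithm for EXACT TSP: binary-search the optimal tour length
    using an NP oracle, then output its k-th bit.

    tour_of_length_at_most: hypothetical NP oracle, (G, B) -> True/False.
    max_length: any upper bound on the optimal tour length.
    """
    lo, hi = 0, max_length
    while lo < hi:                          # polynomially many oracle calls
        mid = (lo + hi) // 2
        if tour_of_length_at_most(G, mid):
            hi = mid                        # a tour of length <= mid exists
        else:
            lo = mid + 1                    # optimum is larger than mid
    optimum = lo
    return (optimum >> k) & 1 == 1          # is the k-th bit (from the low end) a 1?
```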

The PH
[diagram: P ⊆ NP, coNP ⊆ Δ_2 ⊆ Σ_2, Π_2 ⊆ Δ_3 ⊆ Σ_3, Π_3 ⊆ … ⊆ PH ⊆ PSPACE ⊆ EXP]
1st level: SAT, UNSAT, factoring, etc.
2nd level: MIN CIRCUIT, BPP…
3rd level: V-C dimension…
PSPACE: generalized geography, 2-person games…

Useful characterization
Recall: L ∈ NP iff expressible as L = { x | ∃ y, |y| ≤ |x|^k, (x, y) ∈ R } where R ∈ P.
Corollary: L ∈ coNP iff expressible as L = { x | ∀ y, |y| ≤ |x|^k, (x, y) ∈ R } where R ∈ P.

Useful characterization
Theorem: L ∈ Σ_i iff expressible as L = { x | ∃ y, |y| ≤ |x|^k, (x, y) ∈ R } where R ∈ Π_{i-1}.
Corollary: L ∈ Π_i iff expressible as L = { x | ∀ y, |y| ≤ |x|^k, (x, y) ∈ R } where R ∈ Σ_{i-1}.

Useful characterization
Theorem: L ∈ Σ_i iff expressible as L = { x | ∃ y, |y| ≤ |x|^k, (x, y) ∈ R }, where R ∈ Π_{i-1}.
Proof of Theorem: induction on i; the base case (i = 1) is on the previous slide.
(⇐) We know Σ_i = NP^{Σ_{i-1}} = NP^{Π_{i-1}}: guess y, ask the oracle whether (x, y) ∈ R.

Useful characterization
Theorem: L ∈ Σ_i iff expressible as L = { x | ∃ y, |y| ≤ |x|^k, (x, y) ∈ R }, where R ∈ Π_{i-1}.
(⇒) Given L ∈ Σ_i = NP^{Σ_{i-1}} decided by an ONTM M running in time n^k, try:
R = { (x, y) : y describes a valid path of M's computation leading to q_accept }
But how do we recognize a valid computation path when it depends on the results of the oracle queries?

Useful characterization
Theorem: L ∈ Σ_i iff expressible as L = { x | ∃ y, |y| ≤ |x|^k, (x, y) ∈ R }, where R ∈ Π_{i-1}.
Try: R = { (x, y) : y describes a valid path of M's computation leading to q_accept }.
A valid path is a step-by-step description including a correct yes/no answer for each A-oracle query z_j (A ∈ Σ_{i-1}).
Verify the "no" queries in Π_{i-1}: e.g. z_1 ∉ A ∧ z_3 ∉ A ∧ … ∧ z_8 ∉ A.
For each "yes" query z_j: ∃ w_j, |w_j| ≤ |z_j|^k with (z_j, w_j) ∈ R' for some R' ∈ Π_{i-2}, by induction; so for each "yes" query z_j, put w_j in the description of the path y.

Useful characterization
Theorem: L ∈ Σ_i iff expressible as L = { x | ∃ y, |y| ≤ |x|^k, (x, y) ∈ R }, where R ∈ Π_{i-1}.
A single language R in Π_{i-1}: (x, y) ∈ R ⇔ all "no" z_j are not in A, all "yes" z_j have (z_j, w_j) ∈ R', and y is a path leading to q_accept.
Note: the AND of polynomially many Π_{i-1} predicates is in Π_{i-1}.

Alternating quantifiers
Nicer, more usable version:
L ∈ Σ_i iff expressible as L = { x | ∃y_1 ∀y_2 ∃y_3 … Qy_i (x, y_1, y_2, …, y_i) ∈ R }, where Q = ∀/∃ if i is even/odd, and R ∈ P.
L ∈ Π_i iff expressible as L = { x | ∀y_1 ∃y_2 ∀y_3 … Qy_i (x, y_1, y_2, …, y_i) ∈ R }, where Q = ∃/∀ if i is even/odd, and R ∈ P.
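
The Σ_i condition can be spelled out as a brute-force (exponential-time) check; this sketch is my own, with R handed in as a predicate and each y_j ranging over strings of a fixed length.

```python
from itertools import product

def in_sigma_i(x, i, length, R):
    """Brute-force check of the Sigma_i condition
        exists y1 forall y2 exists y3 ... Q y_i  such that (x, y1, ..., yi) in R,
    with each y_j ranging over {0,1}^length and R a predicate R(x, ys) -> bool.
    Exponential time; only meant to mirror the definition.
    """
    def check(level, ys):
        if level == i:
            return R(x, tuple(ys))
        blocks = list(product([0, 1], repeat=length))
        if level % 2 == 0:      # level 0, 2, ... chooses y1, y3, ...: existential
            return any(check(level + 1, ys + [y]) for y in blocks)
        else:                   # level 1, 3, ... chooses y2, y4, ...: universal
            return all(check(level + 1, ys + [y]) for y in blocks)
    return check(0, [])
```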

Alternating quantifiers
Proof: (⇒) induction on i; base case: true for Σ_1 = NP and Π_1 = coNP.
Consider L ∈ Σ_i: L = { x | ∃y_1 (x, y_1) ∈ R' } for some R' ∈ Π_{i-1}; applying the induction hypothesis to R',
L = { x | ∃y_1 ∀y_2 ∃y_3 … Qy_i ((x, y_1), y_2, …, y_i) ∈ R }
  = { x | ∃y_1 ∀y_2 ∃y_3 … Qy_i (x, y_1, y_2, …, y_i) ∈ R }
The same argument works for L ∈ Π_i.
(⇐) exercise.

Alternating quantifiers
Pleasing viewpoint:
[diagram: P; NP = "∃", coNP = "∀"; Δ_2; Σ_2 = "∃∀", Π_2 = "∀∃"; Δ_3; Σ_3 = "∃∀∃", Π_3 = "∀∃∀"; …; Σ_i = "∃∀∃…", Π_i = "∀∃∀…"; PH = constant # of alternations; PSPACE = "∃∀∃∀∃∀∃…" with poly(n) alternations]

Complete problems
Three variants of SAT:
QSAT_i (i odd) = {3-CNFs φ(x_1, x_2, …, x_i) for which ∃x_1 ∀x_2 ∃x_3 … ∃x_i φ(x_1, x_2, …, x_i) = 1}
QSAT_i (i even) = {3-DNFs φ(x_1, x_2, …, x_i) for which ∃x_1 ∀x_2 ∃x_3 … ∀x_i φ(x_1, x_2, …, x_i) = 1}
QSAT = {3-CNFs φ for which ∃x_1 ∀x_2 ∃x_3 … Qx_n φ(x_1, x_2, …, x_n) = 1}
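
For concreteness, here is a brute-force evaluator of the QSAT condition (alternating ∃/∀ over the variables of a CNF), a sketch of my own with the formula given as a list of clauses of signed literals; it runs in exponential time and is for illustration only.

```python
def qsat(num_vars, clauses):
    """Decide  exists x1 forall x2 exists x3 ... Q xn  phi(x1, ..., xn) = 1
    for a CNF phi given as a list of clauses; each clause is a list of nonzero
    ints where +j means x_j and -j means NOT x_j (DIMACS-style).
    Brute force, exponential in num_vars.
    """
    def satisfied(assignment):
        return all(any((lit > 0) == assignment[abs(lit)] for lit in clause)
                   for clause in clauses)

    def solve(j, assignment):
        if j > num_vars:
            return satisfied(assignment)
        branches = [solve(j + 1, {**assignment, j: v}) for v in (True, False)]
        return any(branches) if j % 2 == 1 else all(branches)   # odd j: exists, even j: forall

    return solve(1, {})

# Example: exists x1 forall x2, (x1 or x2) and (x1 or not x2) is true (take x1 = True)
assert qsat(2, [[1, 2], [1, -2]]) is True
```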