Umans Complexity Theory Lectures

Lecture 8: Introduction to Randomized Complexity
- Randomized complexity classes
- Error reduction
- BPP in P/poly
- Undirected graph reachability in RL (Reingold: even in L)

Randomized complexity classes

Model: probabilistic Turing Machine
- a deterministic TM with an additional read-only tape containing "coin flips"
- "p.p.t." = probabilistic polynomial time

BPP (Bounded-error Probabilistic Poly-time): L ∈ BPP if there is a p.p.t. TM M such that
- x ∈ L ⇒ Pr_y[M(x,y) accepts] ≥ 2/3
- x ∉ L ⇒ Pr_y[M(x,y) rejects] ≥ 2/3

Randomized complexity classes

RP (Random Polynomial-time): L ∈ RP if there is a p.p.t. TM M such that
- x ∈ L ⇒ Pr_y[M(x,y) accepts] ≥ ½
- x ∉ L ⇒ Pr_y[M(x,y) rejects] = 1

coRP (complement of Random Polynomial-time): L ∈ coRP if there is a p.p.t. TM M such that
- x ∈ L ⇒ Pr_y[M(x,y) accepts] = 1
- x ∉ L ⇒ Pr_y[M(x,y) rejects] ≥ ½

Randomized complexity classes

One more important class: ZPP (Zero-error Probabilistic Poly-time)
- ZPP = RP ∩ coRP
- equivalently: there is a p.p.t. TM M with Pr_y[M(x,y) outputs "fail"] ≤ ½, and whenever M does not fail it outputs the correct answer

Randomized complexity classes

These classes may capture "efficiently computable" better than P.
- the "½" in the ZPP, RP, coRP definitions is unimportant: it can be replaced by 1/poly(n)
- the "2/3" in the BPP definition is unimportant: it can be replaced by ½ + 1/poly(n)

Why? Error reduction:
- we will see simple error reduction by repetition
- more sophisticated error reduction later

Relationship to other classes

All these classes contain P:
- the machine can simply ignore the tape with coin flips

All are contained in PSPACE:
- exhaustively try all coin-flip strings y
- count accepts/rejects and compute the acceptance probability

RP ⊆ NP (and coRP ⊆ coNP):
- an RP machine for x ∈ L has a multitude of accepting computation paths
- NP requires only one
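
To make the PSPACE containment concrete, here is a minimal sketch of the brute-force simulation, assuming a hypothetical deterministic procedure run_M(x, coins) that simulates the p.p.t. machine on input x with the given coin string, and a bound r on the number of coin flips used (polynomial in |x|):

```python
from itertools import product

def acceptance_probability(run_M, x, r):
    """Exactly compute Pr_y[M(x, y) accepts] by trying all 2^r coin strings.

    run_M(x, coins) -> bool is a hypothetical deterministic simulation of the
    probabilistic machine M; r is the (polynomial) number of coin flips used.
    Time is exponential, but space is just one simulation plus a counter,
    which is why BPP, RP, coRP and ZPP are all contained in PSPACE.
    """
    accepting = 0
    for coins in product((0, 1), repeat=r):
        if run_M(x, coins):
            accepting += 1
    return accepting / 2 ** r

# A BPP language would then be decided by: accept x iff
# acceptance_probability(run_M, x, r) >= 2/3.
```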

Relationship to other classes (inclusion diagram): P ⊆ RP ⊆ NP, P ⊆ coRP ⊆ coNP, RP ∪ coRP ⊆ BPP, and NP, coNP, BPP are all contained in PSPACE.

Error reduction for RP

Given L and a p.p.t. TM M with
- x ∈ L ⇒ Pr_y[M(x,y) accepts] ≥ ε
- x ∉ L ⇒ Pr_y[M(x,y) rejects] = 1

build a new p.p.t. TM M':
- simulate M k/ε times, each time with independent coin flips
- accept if any simulation accepts
- otherwise reject

Error reduction for RP

If x ∈ L:
- the probability that a given simulation is "bad" (rejects) is ≤ (1 – ε)
- since (1 – ε)^(1/ε) ≤ e^(–1), the probability that all k/ε simulations are "bad" is ≤ (1 – ε)^(k/ε) ≤ e^(–k)
- so Pr_y'[M'(x, y') accepts] ≥ 1 – e^(–k)

If x ∉ L:
- Pr_y'[M'(x, y') rejects] = 1
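
As a concrete illustration of the repetition argument, here is a minimal sketch, assuming a hypothetical one-sided-error procedure run_M(x) that accepts with probability at least eps when x ∈ L and never accepts when x ∉ L:

```python
import math

def amplified_rp(run_M, x, eps, k):
    """One-sided error reduction for RP by repetition.

    run_M(x) -> bool: hypothetical randomized procedure with
      x in L      => accepts with probability >= eps
      x not in L  => never accepts
    After ceil(k/eps) independent runs, the probability that every run
    misses an x in L is <= (1 - eps)^(k/eps) <= e^(-k); for x not in L
    the amplified test still never accepts.
    """
    for _ in range(math.ceil(k / eps)):
        if run_M(x):       # a single accepting run already certifies x in L
            return True
    return False
```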

Error reduction for BPP

Given L and a p.p.t. TM M with
- x ∈ L ⇒ Pr_y[M(x,y) accepts] ≥ ½ + ε
- x ∉ L ⇒ Pr_y[M(x,y) rejects] ≥ ½ + ε

build a new p.p.t. TM M':
- simulate M k/ε² times, each time with independent coin flips
- accept if the majority of simulations accept
- otherwise reject
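
For two-sided error, repetition is combined with a majority vote rather than an "accept if any run accepts" rule. A minimal sketch, again assuming a hypothetical procedure run_M(x) that returns the correct answer with probability at least ½ + eps:

```python
import math

def amplified_bpp(run_M, x, eps, k):
    """Two-sided error reduction for BPP by majority vote.

    run_M(x) -> bool: hypothetical randomized procedure that answers
    correctly with probability >= 1/2 + eps. Run it m = ceil(k/eps^2)
    times independently and output the majority answer; the Chernoff
    bound on the next slide shows the error drops to 2^(-Omega(k)).
    """
    m = math.ceil(k / eps ** 2)
    accepts = sum(1 for _ in range(m) if run_M(x))
    return accepts > m / 2   # majority of the m independent runs
```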

Error reduction for BPP

Let X_i be the indicator random variable for a "correct" outcome in the i-th simulation (out of m = k/ε²):
- Pr[X_i = 1] ≥ ½ + ε,  Pr[X_i = 0] ≤ ½ – ε,  so E[X_i] ≥ ½ + ε

Let X = Σ_i X_i and μ = E[X] ≥ (½ + ε)m.

Chernoff tail bound: Pr[X ≤ m/2] ≤ 2^(–cε²μ) for some constant c > 0.
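
Plugging the parameters into the tail bound (a short check; the constant in the exponent is absorbed into c, so this matches the next slide's statement up to the choice of constant):

```latex
\Pr[X \le m/2] \;\le\; 2^{-c\varepsilon^{2}\mu}
  \;\le\; 2^{-c\varepsilon^{2}(\frac{1}{2}+\varepsilon)m}
  \;=\; 2^{-c(\frac{1}{2}+\varepsilon)k}
  \;\le\; 2^{-(c/2)k},
\quad\text{using } \mu \ge (\tfrac{1}{2}+\varepsilon)m \text{ and } m = k/\varepsilon^{2}.
```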

Error reduction for BPP

Recall:
- x ∈ L ⇒ Pr_y[M(x,y) accepts] ≥ ½ + ε
- x ∉ L ⇒ Pr_y[M(x,y) rejects] ≥ ½ + ε

Applying the Chernoff bound to the majority vote:
- if x ∈ L: Pr_y'[M'(x, y') accepts] ≥ 1 – (½)^(ck)
- if x ∉ L: Pr_y'[M'(x, y') rejects] ≥ 1 – (½)^(ck)

Randomized complexity classes

We have shown:
- polynomial identity testing is in coRP
- a poly-time algorithm for detecting unique solutions to SAT implies NP = RP

BPP

How powerful is BPP? We have seen an example of a problem in BPP that we only know how to solve deterministically in EXP.

Is randomness a panacea for intractability?

BPP

It is not known whether BPP = EXP (or even whether BPP = NEXP!), but there are strong hints that it does not.

Is there a deterministic simulation of BPP that does better than brute-force search?
- yes, if we allow non-uniformity

Theorem (Adleman): BPP ⊆ P/poly

BPP and Boolean circuits

Proof: take a language L ∈ BPP. Error reduction gives a TM M such that
- if x ∈ L has length n: Pr_y[M(x, y) accepts] ≥ 1 – (½)^(n²)
- if x ∉ L has length n: Pr_y[M(x, y) rejects] ≥ 1 – (½)^(n²)

BPP and Boolean circuits

Say "y is bad for x" if M(x, y) gives the incorrect answer.
- for a fixed x: Pr_y[y is bad for x] ≤ (½)^(n²)
- union bound over the 2^n inputs x of length n: Pr_y[y is bad for some x] ≤ 2^n (½)^(n²) < 1

Conclude: there exists a single y on which M(x, y) is correct for every x of length n. Build a circuit for M and hardwire this y.
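
Writing out the counting step (a short check; the inequality 2^n · 2^(–n²) < 1 holds for every n ≥ 2, and inputs of length 1 can be handled trivially):

```latex
\Pr_y[\,\exists\, x \in \{0,1\}^n : y \text{ is bad for } x\,]
  \;\le\; \sum_{x \in \{0,1\}^n} \Pr_y[\,y \text{ is bad for } x\,]
  \;\le\; 2^{n} \cdot 2^{-n^{2}} \;<\; 1 .
```

So for each input length n there is a single "good" coin string y_n; hardwiring y_n into the circuit that simulates M on length-n inputs gives the polynomial-size circuit family witnessing BPP ⊆ P/poly.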

BPP and Boolean circuits

Does BPP = EXP?

Adleman's Theorem shows: BPP = EXP would imply EXP ⊆ P/poly.

If you believe that randomness is all-powerful, you must also believe that non-uniformity gives an exponential advantage.

RL

Recall: a probabilistic Turing Machine is a deterministic TM with an extra tape for "coin flips".

RL (Random Logspace): L ∈ RL if there is a probabilistic logspace TM M such that
- x ∈ L ⇒ Pr_y[M(x,y) accepts] ≥ ½
- x ∉ L ⇒ Pr_y[M(x,y) rejects] = 1

Important detail #1: only one-way access to the coin-flip tape is allowed.
Important detail #2: M is explicitly required to run in polynomial time.

RL

L ⊆ RL ⊆ NL ⊆ SPACE(log² n)

Theorem (Saks–Zhou): RL ⊆ SPACE(log^(3/2) n)

Belief: L = RL (open problem)

RL

Natural problem, Undirected STCONN: given an undirected graph G = (V, E) and nodes s, t, is there a path from s to t?

Theorem: USTCONN ∈ RL.

(Recall: STCONN is NL-complete.)

Undirected STCONN

Goal: test in RL whether there is a path from s to t in an undirected graph G = (V, E).

Proof sketch (in Papadimitriou):
- add a self-loop to each vertex (for technical reasons)
- start at s, take a random walk of 2|V||E| steps, and accept if t is ever visited

Lemma: the expected return time for any node i of degree d_i is 2|E|/d_i.
- suppose s = v_1, v_2, …, v_n = t is a path
- the expected time to get from v_i to v_{i+1} is (d_i/2)(2|E|/d_i) = |E|
- so the expected time to reach v_n is ≤ |V||E|
- by Markov's inequality, Pr[fail to reach t in 2|V||E| steps] ≤ ½

(Note: Reingold 2005: USTCONN ∈ L.)
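
To make the walk concrete, here is a minimal sketch of the test under stated assumptions: the graph is given as an adjacency-list dict in which every undirected edge appears in both endpoints' lists and every vertex has a self-loop. It illustrates the algorithmic idea only, not the O(log n)-space implementation.

```python
import random

def ustconn_random_walk(adj, s, t):
    """Randomized test for undirected s-t connectivity (the idea behind USTCONN in RL).

    adj: dict mapping each vertex to the list of its neighbors; every vertex
         is assumed to have a self-loop, so every neighbor list is non-empty.
    Take a random walk of about 2|V||E| steps from s and accept iff t is seen.
    If s and t are connected, this accepts with probability >= 1/2;
    if they are not connected, it can never accept (one-sided error).
    """
    num_vertices = len(adj)
    total_degree = sum(len(nbrs) for nbrs in adj.values())  # ~ 2|E|: each edge counted from both ends
    steps = num_vertices * total_degree                     # ~ 2|V||E| walk steps
    v = s
    for _ in range(steps):
        if v == t:
            return True
        v = random.choice(adj[v])  # move to a uniformly random neighbor
    return v == t
```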