Randomized Computation
Roni Parshani 025529199, Orly Margalit 037616638, Eran Mantzur 028015329, Avi Mintz 017629262

RP – Random Polynomial Time
Denotation: L is a language; M is a probabilistic polynomial-time Turing machine.
Definition: L ∈ RP if ∃ M such that:
x ∈ L ⇒ Prob[ M(x) = 1 ] ≥ ½
x ∉ L ⇒ Prob[ M(x) = 0 ] = 1
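As a concrete illustration of this kind of one-sided error, here is a minimal sketch (not from the slides; function names are mine) of an RP-style test for whether two low-degree polynomials differ, based on the Schwartz–Zippel lemma:

```python
import random

def differ(f, g, degree_bound, trials=1):
    """One-sided test of whether polynomials f and g differ.

    If f and g are identical it always answers False (never errs);
    if they differ, each trial detects this with probability > 1/2,
    since a nonzero polynomial of degree <= d has at most d roots
    among the 2*d + 1 sample points.
    """
    for _ in range(trials):
        r = random.randint(0, 2 * degree_bound)
        if f(r) != g(r):
            return True   # a witness point: definitely different
    return False          # no witness found: probably identical

f = lambda x: (x + 1) ** 2
g = lambda x: x * x + 2 * x + 1   # the same polynomial, expanded
print(differ(f, g, degree_bound=2))  # → False (they never differ)
```

Note the asymmetry matching the RP definition: a "different" answer is always correct, while an "identical" answer may err, with probability below ½ per trial.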

RP – Random Polynomial Time
The disadvantage of RP (respectively coRP) is that when the input does not belong to the language (respectively, does belong to the language), the machine must answer correctly every time.
Definition (characteristic function): x ∈ L ⇒ χ_L(x) = 1; x ∉ L ⇒ χ_L(x) = 0.

RP  NP Proof: –Given: L  RP –Aim : L  NP L  RP   x  L  M such that more than 50% of y give M(x,y) = 1   y : M(x,y) = 1   x  L  y M(x,y) = 0  L  NP

coRP – Complementary Random Polynomial Time
Definition: L ∈ coRP if ∃ M such that:
x ∈ L ⇒ Prob[ M(x) = 1 ] = 1
x ∉ L ⇒ Prob[ M(x) = 0 ] ≥ ½
An alternative way to define coRP is coRP = { L̄ : L ∈ RP }.
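A classic language in coRP is PRIMES (primality later turned out to be in P, but the Miller–Rabin test remains the practical algorithm): a prime input is always accepted, and a composite is rejected in each round with probability at least ¾. A minimal sketch, not part of the original slides:

```python
import random

def is_probable_prime(n, rounds=20):
    """Miller-Rabin: primes are always accepted (one-sided error in
    the coRP sense); composites slip through a single round with
    probability at most 1/4."""
    if n < 2:
        return False
    if n in (2, 3):
        return True
    if n % 2 == 0:
        return False
    # write n - 1 = 2**s * d with d odd
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)           # modular exponentiation
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False           # a is a witness: definitely composite
    return True                    # probably prime

print(is_probable_prime(97))  # → True (primes are never rejected)
print(is_probable_prime(91))  # 91 = 7 * 13: rejected except with prob <= 4**-20
```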

coRP  co-NP Proof: –Give: L  coRP –Aim : L  co-NP L  coRP   RP   NP  L  co-NP

RP1
Denotation: p(.) is a positive polynomial.
Definition: L ∈ RP1 if ∃ M, p(.) such that:
x ∈ L ⇒ Prob[ M(x,r) = 1 ] ≥ 1/p(|x|)
x ∉ L ⇒ Prob[ M(x,r) = 0 ] = 1

RP2
Denotation: p(.) is a positive polynomial.
Definition: L ∈ RP2 if ∃ M, p(.) such that:
x ∈ L ⇒ Prob[ M(x,r) = 1 ] ≥ 1 − 2^(−p(|x|))
x ∉ L ⇒ Prob[ M(x,r) = 0 ] = 1

RP1 = RP2 = RP
Aim: RP1 = RP2.
RP2 ⊆ RP1: for all large enough |x| we have 1 − 2^(−p(|x|)) ≥ 1/p(|x|), so a machine meeting the RP2 bound also meets the RP1 bound.

RP1 = RP2 = RP
RP1 ⊆ RP2:
L ∈ RP1 ⇒ ∃ M, p(.) such that ∀ x ∈ L : Prob[ M(x,r) = 1 ] ≥ 1/p(|x|).
We run M(x,r) t(|x|) times, with a fresh random r each time:
If in any of the runs M(x,r) = 1 ⇒ output 1
If in all of the runs M(x,r) = 0 ⇒ output 0

RP1  RP2 Select t(|x|) ≥ Therefore if x  L  output is 0 If x  L the probability of outputting 0 is only if M(x,r) = 0 all t(|x|) times  ( Prob[M(x,r) = 0] ) t(|x|) ≤ (1- ) t(|x|)  [1- ] ≤ 2 -p(|x|)

RP1  RP2  So the probability of outputting 1 is larger than p(|x|)  L  RP2 Conclusion:  RP1  RP  RP2  RP1 Therefore RP1 = RP = RP2

BPP – Bounded-error Probabilistic Polynomial Time
Definition: L ∈ BPP if ∃ M such that:
x ∈ L ⇒ Prob[ M(x) = 1 ] ≥ ⅔
x ∉ L ⇒ Prob[ M(x) = 1 ] ≤ ⅓
In other words: ∀ x : Prob[ M(x) = χ_L(x) ] ≥ ⅔

coBPP = BPP
coBPP = { L̄ : L ∈ BPP }
= { L̄ : ∃ M, ∀ x : Prob[ M(x) = χ_L(x) ] ≥ ⅔ }
= { L̄ : ∃ M̄, ∀ x : Prob[ M̄(x) = χ_L̄(x) ] ≥ ⅔ }
= BPP
where M̄(.) = 1 − M(.) (M̄ exists iff M exists).

BPP1
Previously we defined stricter and weaker definitions for RP; in a similar way we now do so for BPP.
Denotation: p(.) – positive polynomial; f – polynomial-time computable function with 0 ≤ f(|x|) ≤ 1.
Definition: L ∈ BPP1 if ∃ M, p(.), f such that:
x ∈ L ⇒ Prob[ M(x) = 1 ] ≥ f(|x|) + 1/p(|x|)
x ∉ L ⇒ Prob[ M(x) = 1 ] < f(|x|) − 1/p(|x|)

BPP = BPP1 Proof: Aim: BPP  BPP1 f(|x|) = ½ and p(|x|) = 6 This gives the original definition of BPP.

BPP = BPP1
Proof: Aim: BPP1 ⊆ BPP.
L ∈ BPP1 ⇒ ∃ M such that:
∀ x ∈ L : Prob[ M(x) = 1 ] ≥ f(|x|) + 1/p(|x|)
∀ x ∉ L : Prob[ M(x) = 1 ] < f(|x|) − 1/p(|x|)

BPP1  BPP we want to know with Prob > ⅔ if 0  p  f(|x|) – 1/p(|x|) or iff(|x|) + 1/p(|x|)  p  1 Define: M’ runs M(x) n times, and each M(x) returns If > f(|x|) M’ returns YES, else NO

BPP1  BPP Calculation of n We run n independent Bernoulli variables with p  ½ and Prob < 2   =

BPP1  BPP Choose : and Result: M’ decides L(M) with Prob > ⅔

BPP2
Denotation: p(.) – positive polynomial.
Definition: L ∈ BPP2 if ∃ M, p(.) such that:
∀ x : Prob[ M(x) = χ_L(x) ] ≥ 1 − 2^(−p(|x|))

BPP  BPP2 Proof: Aim: BPP  BPP2 p(|x|) = This gives the original definition of BPP.

BPP  BPP2 Proof: Aim: BPP  BPP2 L  BPP   M :  x Prob[ M(x) =  L (x) ]  ⅔ Define: M’ runs M(x) n times, and each M(x) returns If > ½ M’ returns YES, else NO We know : Exp[M(x)] > ⅔  x  L Exp[M(x)] <  x  L

BPP  BPP2 Chernoff’s Equation : Let {X 1, X 2, …, X n } be a set of independent Bernoulli variables with the same expectations p  ½,and  : 0<   p(p-1) Then Prob

BPP  BPP2 From Chernoff’s equation :  Prob[|M’(x) – Exp[M(x)]|  ]  But if |M’(x) – Exp[M(x)]|   then M’ returns a correct answer

BPP  BPP2  Prob[M’(x)=  L (x) ]   polynomial P(x) we choose n such that  Prob[M’(x) =  L (x) ]   L  BPP2

RP  BPP Proof: L  RP if  M such that x  L  Prob[ M(x) = 1 ]  ½ x  L  Prob[ M(x) = 0 ] = 1 We previously proved BPP = BPP1 If we place in BPP1 formula with f(.)  and p(.)  4 this gives the original definition of RP.

P  BPP Proof: L  P   M such that M(x) =  L (x)  x : Prob[ M(x) =  L (x) ] =1  ⅔  L  BPP

PSPACE
Definition: L ∈ PSPACE if ∃ M such that M(x) = χ_L(x) and ∃ p(.) such that M uses at most p(|x|) space. (No time restriction.)

PP – Probabilistic Polynomial Time
Definition: L ∈ PP if ∃ M such that:
x ∈ L ⇒ Prob[ M(x) = 1 ] > ½
x ∉ L ⇒ Prob[ M(x) = 1 ] < ½
In other words: ∀ x : Prob[ M(x) = χ_L(x) ] > ½

PP  PSPACE Definition: (reminder) L  PP if  M such that  x : Prob[ M(x) =  L (x) ]  ½ Proof: L  PP   M, p(.) such that  x: Prob[ M(x,r) =  L (x) ] > ½ and M is polynomial time. If we run M on  r, M is correct more than 50% of the time.

PP  PSPACE Aim: L  PSPACE Run M on every single r. Count the number of received “1” and “0”. The correct answer is the greater result.

PP  PSPACE By the definition of PP, every L  PP this algorithm will always be correct. M(x,r) is polynomial in space  New algorithm is polynomial in space  L  PSPACE

Claim: PP = PP1
Definition: L ∈ PP1 if ∃ M such that:
x ∈ L ⇒ Prob[ M(x) = 1 ] > ½
x ∉ L ⇒ Prob[ M(x) = 1 ] ≤ ½
PP ⊆ PP1: a machine that satisfies the PP condition also satisfies the PP1 condition, since PP is stricter than PP1: for x ∉ L, PP demands acceptance probability strictly below ½, while PP1 also allows it to equal ½. So clearly PP ⊆ PP1.

PP1 ⊆ PP
Let L be a language in PP1.
Motivation: the trick is to build a machine that shifts the answer of M towards NO with a very small probability, smaller than the smallest probability gap that M can have. If M is biased towards YES, the shift is too small to change that direction; but if there is no bias (or a bias towards NO), the shift produces a strict bias towards NO.

Proof: Let M' be defined as follows. M' chooses one of two moves:
With probability ε = 2^(−p(|x|)−1), return NO.
With probability 1 − ε, invoke M(x).
Since M runs in time p(|x|), its acceptance probability is a multiple of 2^(−p(|x|)).
If x ∉ L: Prob[ M'(x) = 1 ] = (1 − ε)·Prob[ M(x) = 1 ] ≤ (1 − ε)/2 < ½.
If x ∈ L: Prob[ M(x) = 1 ] ≥ ½ + 2^(−p(|x|)), so Prob[ M'(x) = 1 ] ≥ (1 − ε)(½ + 2^(−p(|x|))) > ½.
Hence M' satisfies the PP condition, and L ∈ PP.
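The effect of the shift can be checked arithmetically; the choice ε = 2^(−p−1), just below the 2^(−p) probability granularity of a machine that tosses p coins, is illustrated with a hypothetical coin budget p = 10:

```python
def shifted_accept_prob(q, eps):
    """Acceptance probability of M': with probability eps answer NO,
    otherwise invoke a machine that accepts with probability q."""
    return (1 - eps) * q

p = 10                   # hypothetical coin budget of the PP1 machine
eps = 2 ** -(p + 1)

# A tie (q = 1/2, a NO instance under PP1) is pushed strictly below 1/2:
assert shifted_accept_prob(0.5, eps) < 0.5
# A strict YES bias (q >= 1/2 + 2**-p) stays strictly above 1/2:
assert shifted_accept_prob(0.5 + 2 ** -p, eps) > 0.5
```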

NP ⊆ PP
Suppose that L ∈ NP is decided by a non-deterministic machine M with running time bounded by the polynomial p(|x|). The following machine M' then decides L in the PP1 sense:

M' uses its random coin tosses as a witness for M, except for one toss that it does not pass to M. This toss is used to choose its move: one of the two possible moves leads to the ordinary computation of M on the same input, with the random string playing the role of the witness.

The other move leads to a computation that always accepts.
Consider a string x. If M has no accepting computation on x, then the probability that M' answers 1 is exactly ½. On the other hand, if M has at least one accepting computation, the probability that M' answers 1 is greater than ½.

Meaning L ∈ PP1, and by the previous claim (PP = PP1) we get that L ∈ PP. So we get that NP ⊆ PP.
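The acceptance probability of this M' can be computed exactly for a toy verifier (both the helper and the verifier below are my own illustrations, not from the slides):

```python
from itertools import product

def mprime_accept_prob(verify, x, wit_len):
    """Acceptance probability of the NP -> PP machine M': one extra
    coin toss chooses between an always-accept branch and running the
    NP verifier with the remaining coins as the witness."""
    accepting = sum(1 for w in product((0, 1), repeat=wit_len)
                    if verify(x, w))
    return 0.5 + 0.5 * accepting / 2 ** wit_len

# Hypothetical toy verifier: x = 1 has exactly one 3-bit witness,
# x = 0 has none.
verify = lambda x, w: x == 1 and w == (1, 1, 1)
print(mprime_accept_prob(verify, 1, 3))  # → 0.5625, strictly above 1/2
print(mprime_accept_prob(verify, 0, 3))  # → 0.5, exactly 1/2 (reject in PP1)
```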

ZPP – Zero-Error Probabilistic Polynomial Time
We define a probabilistic Turing machine which is allowed to reply "I don't know", symbolized by ┴.
Definition: L ∈ ZPP if ∃ M such that:
∀ x : Prob[ M(x) = ┴ ] ≤ ½
∀ x : Prob[ M(x) = χ_L(x) or M(x) = ┴ ] = 1

Claim: ZPP = RP ∩ coRP
Take L ∈ ZPP and let M be a "ZPP machine" for L. We first build a machine M' that decides L according to the definition of RP: M'(x) returns 0 if M(x) = ┴, and otherwise returns M(x).

If x ∉ L then, by returning 0 when M(x) = ┴, we always answer correctly, because in this case M(x) ∈ {0, ┴}.

If x ∈ L, the probability of getting the right answer from M' is at least ½, since M returns a definite answer with probability at least ½ and M's definite answers are always correct. Hence L ∈ RP.

In the same way it can be seen that by defining M'(x) = 1 when M(x) = ┴ (and M'(x) = M(x) otherwise) we get that L ∈ coRP. So ZPP ⊆ RP ∩ coRP.
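The two wrappers are one-liners; a minimal sketch, with "⊥" standing in for the ┴ answer and the function names my own:

```python
def zpp_to_rp(zpp_answer):
    """RP machine from a ZPP machine: map "don't know" to 0.
    On x not in L this is always correct (the ZPP machine never
    answers 1 there); on x in L it answers 1 whenever the ZPP
    machine is definite, i.e. with probability at least 1/2."""
    return 0 if zpp_answer == "⊥" else zpp_answer

def zpp_to_corp(zpp_answer):
    """coRP machine: map "don't know" to 1 instead."""
    return 1 if zpp_answer == "⊥" else zpp_answer

print(zpp_to_rp("⊥"), zpp_to_rp(1), zpp_to_corp("⊥"), zpp_to_corp(0))
# → 0 1 1 0
```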

For the other direction, take L ∈ RP ∩ coRP, with an RP machine M_RP and a coRP machine M_coRP for L. Define M': if M_RP(x) = 1, answer YES; if M_coRP(x) = 0, answer NO; otherwise answer ┴. A YES or NO answer is never wrong (M_RP accepts only members of L, and M_coRP rejects only non-members). If x ∈ L then we will get a YES answer from M_RP, and hence from M', with probability at least ½. If x ∉ L then we will get a NO answer from M_coRP, and hence from M', with probability at least ½. So L ∈ ZPP, and ZPP = RP ∩ coRP.

RSPACE – Randomized Space Complexity
Definition: RSPACE(s) is the class of languages accepted in the RP sense by a machine that uses at most s(|x|) space and exp(s(|x|)) time.
badRSPACE(s) = RSPACE(s) without the time restriction.

Claim: badRSPACE = NSPACE
badRSPACE ⊆ NSPACE:
Let L ∈ badRSPACE. If x ∈ L, there is at least one accepting sequence of coin tosses (a witness), and the non-deterministic machine can guess it.

If x ∉ L, there are no witnesses at all (the machine never accepts), so the non-deterministic machine will not find an accepting computation either.

NSPACE ⊆ badRSPACE:
Let L ∈ NSPACE, and let M be the non-deterministic Turing machine which decides L in space S(|x|). If x ∈ L, there exists an r of length exp(S(|x|)) such that M(x,r) = 1, where r is the sequence of non-deterministic guesses used by M. Therefore the probability of selecting an r with M(x,r) = 1 is at least 2^(−exp(S(|x|))).

So if we repeatedly invoke M(x,·) on random r's, we can expect to see an accepting computation after about 2^(exp(S(|x|))) tries. So what we want our machine M' to do is run M on x and a newly selected random r (of length exp(S(|x|))) about 2^(exp(S(|x|))) times, and accept iff M accepts in one of these tries.

Problem: in order to count to 2^(exp(S(|x|))) we need a counter that uses exp(S(|x|)) space, and we only have S(|x|).

Solution: we will use a randomized counter that uses only S(|x|) space. In each try we flip k = exp(S(|x|)) coins; if all come up heads we stop, else we go on. The expected number of tries until we stop is 2^k = 2^(exp(S(|x|))), as required. But the real counter only needs to count to k, and therefore only needs space log k = O(S(|x|)).
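The stopping rule can be simulated to see the 2^k expectation emerge; a small sketch with k = 3 (so the expected number of rounds is 2³ = 8):

```python
import random

def randomized_counter(k):
    """Space-efficient stopping rule: each round, flip k fair coins
    and stop when all come up heads. The number of rounds is
    geometric with mean 2**k, yet the machine only ever counts the
    k flips of the current round, i.e. O(log k) counter bits."""
    rounds = 0
    while True:
        rounds += 1
        if all(random.random() < 0.5 for _ in range(k)):
            return rounds

# The empirical mean over many simulations should be close to 2**3 = 8.
trials = 20000
mean = sum(randomized_counter(3) for _ in range(trials)) / trials
print(round(mean, 1))
```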