2 Probabilistic Computations
Extend the notion of "efficient computation" beyond deterministic polynomial-time Turing machines. We still consider only machines whose running time is bounded by a fixed polynomial in the input length. 7.1
3 Random vs. Non-deterministic computations
In an NDTM, even a single witness is enough for the input to be accepted. In the randomized model we require that the probability of detecting such a witness be larger than a certain threshold. There are two approaches to randomness, online and offline, which we will show to be equivalent.
4 Online approach
NDTM: δ(q, σ) = { (q′, σ′, D), … } — a set of possible moves.
Randomized TM: δ(q, σ) = a single move (q′, σ′, D), chosen by a coin toss.
An NDTM accepts if a "good" guess exists: a single accepting computation is enough. A random TM accepts if sufficiently many accepting computations exist. 7.2
5 Probabilistic Turing Machines
Probabilistic TMs have an "extra" tape: the random tape.
"standard" TMs: the output M(x) is determined by the content of the input tape.
probabilistic TMs: the output M(x,r) induces a probability Pr_r[M(x,r)], determined by the content of the input tape and of the random tape.
6 Does It Really Capture The Notion of Randomized Algorithms? It doesn’t matter if you toss all your coins in advance or throughout the computation…
10 Online approach
A random online TM creates a probability space over its possible computations. The computations can be described as a tree. We usually consider only binary trees, because any other branching can be simulated as closely as needed.
11 Offline approach
A language L ∈ NP iff there exist a polynomial p(·) and a DTM M s.t. for every x: x ∈ L ⟺ ∃y, |y| ≤ p(|x|), such that M(x,y) accepts.
The DTM has access to a "witness" string of polynomial length. Similarly, a random TM has access to a "guess" input string. BUT... there must be sufficiently many such witnesses. 7.3
12 The Random Complexity classes
A language that can be recognized by a polynomial-time probabilistic TM with good probability is considered relatively easy. We first consider a one-sided-error complexity class.
13 The classes RP and coRP
A language L ∈ RP iff a probabilistic polynomial-time TM M exists such that
x ∈ L ⟹ Prob[M(x)=1] ≥ ½
x ∉ L ⟹ Prob[M(x)=1] = 0
A language L ∈ coRP iff a probabilistic polynomial-time TM M exists such that
x ∈ L ⟹ Prob[M(x)=1] = 1
x ∉ L ⟹ Prob[M(x)=0] ≥ ½
These classes complement each other: coRP = { L : L̄ ∈ RP } 7.4
14 Comparing RP with NP
Let L be a language in NP or RP. Let R_L be the relation defining the witness/guess for L for a certain TM. Then:
NP: x ∈ L ⟹ ∃y s.t. (x,y) ∈ R_L;  x ∉ L ⟹ ∀y (x,y) ∉ R_L
RP: x ∈ L ⟹ Prob_r[(x,r) ∈ R_L] ≥ ½;  x ∉ L ⟹ ∀r (x,r) ∉ R_L
Obviously, RP ⊆ NP. 7.5
15 Amplification
The constant ½ in the definition of RP is arbitrary. If we have a probabilistic TM that accepts x ∈ L with probability p < ½, we can run this TM several times to "amplify" the probability. If x ∉ L, all runs will return 0. If x ∈ L and we run it n times, then the probability that at least one run accepts is
Prob[Mⁿ(x)=1] = 1 − Prob[Mⁿ(x)≠1] = 1 − Prob[M(x)≠1]ⁿ = 1 − (1 − Prob[M(x)=1])ⁿ = 1 − (1−p)ⁿ 7.6
16 Alternative Definitions for RP
RP1 will mistake quite often, while RP2 will almost never mistake:
Define: L ∈ RP1 iff there exist a probabilistic poly-time TM M and a polynomial p(·) s.t.
x ∈ L ⟹ Prob[M(x)=1] ≥ 1/p(|x|)
x ∉ L ⟹ Prob[M(x)=1] = 0
Define: L ∈ RP2 iff there exist a probabilistic poly-time TM M and a polynomial p(·) s.t.
x ∈ L ⟹ Prob[M(x)=1] ≥ 1 − 2^(−p(|x|))
x ∉ L ⟹ Prob[M(x)=1] = 0
17 Claim: RP1 = RP2
Trivially, RP2 ⊆ RP1: if |x| is big enough, then 1 − 2^(−p(|x|)) ≥ 1/p(|x|).
Suppose L ∈ RP1: there exists M₁ s.t. x ∈ L ⟹ Prob[M₁(x,r)=1] ≥ 1/p(|x|).
M₂ runs M₁ t(|x|) times, so that Prob[M₂(x,r)=0] ≤ (1 − 1/p(|x|))^t(|x|).
Solving (1 − 1/p(|x|))^t(|x|) ≤ 2^(−p(|x|)) gives t(|x|) ≈ p(|x|)²/log₂e.
Hence M₂ runs in polynomial time, and thus L ∈ RP2. Therefore RP1 ⊆ RP2. 7.7
18 The class BPP
Define: L ∈ BPP iff there exists a polynomial-time probabilistic TM M such that
∀x: Prob[M(x) = χ_L(x)] ≥ 2/3
where χ_L(x) = 1 if x ∈ L, and χ_L(x) = 0 if x ∉ L.
The BPP machine's success probability is bounded away from its failure probability. 7.8
19 BPP (Bounded-error Probabilistic Polynomial-time)
Definition: BPP is the class of all languages L which have a probabilistic polynomial-time TM M s.t.
∀x: Pr_r[M(x,r) = χ_L(x)] ≥ 2/3, where χ_L(x) = 1 ⟺ x ∈ L.
Such TMs are called 'Atlantic City' machines.
20 Amplification
Claim: If L ∈ BPP, then there exist a probabilistic polynomial-time TM M′ and a polynomial p(n) s.t.
∀x ∈ {0,1}^n: Pr_{r ∈ {0,1}^p(n)}[M′(x,r) ≠ χ_L(x)] < 1/(3p(n))
We can get better amplifications, but this will suffice here...
23 Relations to P and NP
P ⊆ BPP: a deterministic TM simply ignores the random input.
BPP ⊆? NP: open.
24 Does BPP ⊆ NP?
One might be tempted to say: "Use the random string as a witness."
Why is that wrong?
25 Constant invariance
An alternative definition:
x ∈ L ⟹ Prob[M(x)=1] ≥ p + ε
x ∉ L ⟹ Prob[M(x)=1] ≤ p − ε
We can build a machine M₂ that runs M n times and returns the majority of M's results. After a constant number of executions (depending on p and ε but not on x) we get the desired probability. 7.9
26 The weakest possible BPP definition
Define: L ∈ BPPW iff there exist a polynomial-time probabilistic TM M, a polynomial-time computable function f: ℕ → [0,1], and a polynomial p(·), such that
x ∈ L ⟹ Prob[M(x)=1] ≥ f(|x|) + 1/p(|x|)
x ∉ L ⟹ Prob[M(x)=1] ≤ f(|x|) − 1/p(|x|)
If we set f(|x|) = ½ and p(|x|) = 6 we get the original definition. Hence, BPP ⊆ BPPW. 7.10
27 Claim: BPPW = BPP
Let L ∈ BPPW and let M be its computing TM. Define M′(x): invoke tᵢ = M(x) for i = 1…n, compute p̄ = Σtᵢ/n; if p̄ > f(|x|) return YES, else return NO.
Notice p̄ is the mean of a sample of size n of the random variable M(x). By setting ε = 1/p(|x|) and n = −ln(1/6)/(2ε²) we get the desired probability gap.
28 The strongest possible BPP definition
Define: L ∈ BPPS iff there exist a polynomial-time probabilistic TM M and a polynomial p(·), such that
∀x: Prob[M(x) = χ_L(x)] ≥ 1 − 2^(−p(|x|))
If we set p(|x|) = 2 we get the original definition, because 1 − 2^(−2) = ¾ ≥ ⅔. Hence, BPPS ⊆ BPP. 7.11
29 BPP and other complexity classes
Clearly, RP ⊆ BPP.
BPP ⊆ NP? Unknown.
coBPP = BPP, by the symmetry of the definition. 7.12
30 The class PP
The class BPP required a small but not negligible gap between success and failure probabilities. The class PP allows even smaller gaps — as small as a single coin toss can produce. Running a PP machine polynomially many times would not necessarily help.
Define: L ∈ PP iff there exists a polynomial-time probabilistic TM M such that
∀x: Prob[M(x) = χ_L(x)] > ½ 7.13
31 Claim: PP ⊆ PSPACE
Let L ∈ PP, let M be the TM that recognizes L, and let p(·) be its time bound. Define M′(x): run M(x) on all 2^p(|x|) possible coin-toss sequences and decide by majority. M′ uses polynomial space for each of the (exponentially many) invocations, and only needs to count the number of accepting ones. 7.14
32 Other variants of PP
Define: L ∈ PP1 iff a TM M exists s.t.
x ∈ L ⟹ Prob[M(x)=1] > ½
x ∉ L ⟹ Prob[M(x)=0] ≥ ½
Obviously, PP ⊆ PP1.
33 Claim: PP = PP1
Let L ∈ PP1, let M be the TM that recognizes L, and let p(·) be its time bound. Define
M′(x, (a₁,…,a_p(|x|), b₁,…,b_p(|x|))): if a₁ = a₂ = … = a_p(|x|) = 1 then return NO, else return M(x, (b₁,…,b_p(|x|))).
M′ behaves exactly like M, except for an additional probability of 2^(−p(|x|)) of returning NO. 7.15
34 PP = PP1, continued
If x ∈ L: Prob[M(x)=1] > ½, and since this probability is a multiple of 2^(−p(|x|)), Prob[M(x)=1] ≥ ½ + 2^(−p(|x|)), so
Prob[M′(x)=1] ≥ (½ + 2^(−p(|x|)))·(1 − 2^(−p(|x|))) > ½.
If x ∉ L: Prob[M(x)=0] ≥ ½, so
Prob[M′(x)=0] ≥ ½·(1 − 2^(−p(|x|))) + 2^(−p(|x|)) > ½.
In either case Prob[M′(x) = χ_L(x)] > ½, hence L ∈ PP.
35 Claim: NP ⊆ PP
Let L ∈ NP, let M be the NDTM that recognizes L, and let p(·) be its time bound. Define
M′(x, (b₁, b₂, …, b_p(|x|))): if b₁ = 1 then return M(x, (b₂, …, b_p(|x|))), else return YES.
If x ∉ L, then Prob[M′(x)=1] = ½ (the b₁ = 0 branch only).
If x ∈ L, then Prob[M′(x)=1] > ½ (a witness exists).
Hence L ∈ PP1 = PP, so NP ⊆ PP. Also coNP ⊆ PP, by symmetry. 7.16
36 The class ZPP
Define: L ∈ ZPP iff there exists a polynomial-time probabilistic TM M such that for every x:
M(x) ∈ {0, 1, ⊥}, Prob[M(x) = ⊥] ≤ ½, and Prob[M(x) = χ_L(x) or M(x) = ⊥] = 1, hence Prob[M(x) = χ_L(x)] ≥ ½.
The symbol ⊥ means "I don't know". The value ½ is arbitrary and can be replaced by 2^(−p(|x|)) or 1 − 1/p(|x|). 7.17
37 Claim: ZPP = RP ∩ coRP
Let L ∈ ZPP and let M be the probabilistic TM that recognizes it. Define: M′(x) — let b = M(x); if b = ⊥ then return 0, else return b.
If x ∉ L, M′(x) will never return 1. If x ∈ L, Prob[M′(x)=1] ≥ ½, as required. Hence ZPP ⊆ RP. In the same way, ZPP ⊆ coRP.
38 Let L ∈ RP ∩ coRP, and let M_RP and M_coRP be probabilistic TMs recognizing L according to the RP and coRP definitions. Define:
M′(x) — if M_RP(x) = YES return 1; if M_coRP(x) = NO return 0; else return ⊥.
M_RP(x) never returns YES when x ∉ L, and M_coRP(x) never returns NO when x ∈ L. Therefore M′(x) never returns the opposite of χ_L(x). The probability that M_RP and M_coRP are both inconclusive is ≤ ½, so Prob[M′(x) = ⊥] ≤ ½. Hence RP ∩ coRP ⊆ ZPP, and together ZPP = RP ∩ coRP.
39 The random classes hierarchy
P ⊆ ZPP ⊆ RP ⊆ BPP
It is believed that BPP = P.