1 Slides by Gil Nadel & Lena Yerushalmi. Adapted from Oded Goldreich’s course lecture notes by Amiel Ferman, Noam Sadot, Erez Waisbard and Gera Weiss.






Slide 2: Introduction
In this lecture we'll cover:
- Non-deterministic logarithmic space
- Probabilistic computations

Slide 3: The composition lemma
Lemma (decision version): Assume a machine M that solves a problem Pi using space s(.) and having oracle access to decision problems Pi_1, ..., Pi_t. Further, suppose that for every i, the task Pi_i can be solved within space s_i(.). Then Pi can be solved within space
s'(n) = s(n) + max_i { s_i(e^{s(n)}) }. [6.1]

Slide 4: The composition lemma (cont.)
Lemma (search version): Suppose machine M solves problem Pi using space s(.) and having oracle access to search tasks Pi_1, ..., Pi_t. All queries of M and their answers have length bounded by e^{s(n)}. For every i, Pi_i can be solved within space s_i(.). Then Pi can be solved within space
s'(n) = s(n) + max_i { s_i(e^{s(n)}) }.

Slide 5: Search version - proof guidelines
Search-task oracles may return strings rather than single bits. Whenever M needs some bit of the answer to Pi_i(q) (where q is the query), it runs the machine M_i for Pi_i again on q and stores only the required bit; i.e., the oracle answer is produced one bit at a time, and by maintaining a counter the machine can tell when it has reached the desired bit of the answer.
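The recomputation trick above can be sketched in code. This is a hedged illustration, not the slides' construction: `oracle_bits` is a hypothetical stand-in for M_i streaming its answer bit by bit (here a toy task, the binary expansion of q squared), and `answer_bit` re-runs it from scratch, keeping only a counter and a single bit.

```python
# Sketch of the recomputation trick: instead of storing the oracle's
# whole answer, re-run the oracle machine and keep only the needed bit.

def oracle_bits(q):
    """Yield the answer to a toy search task, one bit at a time."""
    for ch in bin(q * q)[2:]:
        yield int(ch)

def answer_bit(q, k):
    """Fetch bit k of the oracle answer, storing only a counter and one bit."""
    counter = 0                  # position within the answer: O(log) bits
    for bit in oracle_bits(q):   # re-run the oracle from scratch
        if counter == k:
            return bit           # keep just the single required bit
        counter += 1
    raise IndexError("answer shorter than k bits")

# Reassembling all bits this way reproduces the full answer:
q = 13
full = [answer_bit(q, k) for k in range(len(bin(q * q)) - 2)]
assert full == [int(c) for c in bin(q * q)[2:]]
```

The space cost is the counter plus one bit per query, at the price of re-running the oracle once per requested bit, which is exactly the time/space trade the lemma exploits.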

Slide 6: The composition lemma (cont.)
Lemma (time version): Assume machine M solves problem Pi in time t(.) and having oracle access to decision problems Pi_1, ..., Pi_k. Further, suppose that for every i, Pi_i can be solved within time t_i(.). Then Pi can be solved within time
t'(n) = t(n) * max_i { t_i(t(n)) }.

Slide 7: Definition of NL
NL = NSPACE(O(log n)).
Def: L ∈ NL if there is a non-deterministic TM M that accepts L, and a function f(n) = O(log n), s.t. for every input x and every computation of M on x, at most f(|x|) distinct work-tape cells are used. [6.2]

Slide 8: Log-space reduction
Def: A log-space reduction of L1 to L2 (L1 ≤_ls L2) is a function f ∈ DSPACE(O(log n)) s.t.:
x ∈ L1 ⟺ f(x) ∈ L2.
Note: see the proof of the composition lemma.

Slide 9: NL-completeness
Def: A language L is NL-complete if:
1. L ∈ NL
2. ∀L' ∈ NL: L' ≤_ls L

Slide 10: Discussion of reducibility
Claim: if L ≤_ls L' and L' ∈ SPACE(O(log n)), then L ∈ SPACE(O(log n)).
Proof:
(1) L' ∈ SPACE(O(log n)) ⟹ there is a machine M' that decides L' using log space.
(2) L ≤_ls L' ⟹ there is f ∈ DSPACE(O(log n)) with x ∈ L ⟺ f(x) ∈ L'.
(3) So there is a machine M that, on every x, accesses an oracle for f(x) to decide L in log space; and since
log|f(x)| ≤ log(e^{log|x|}) = O(log|x|),
L is decidable in log space. [6.3]

Slide 11: Cook reduction vs. Karp reduction
(1) if L ≤_Karp L' and L' ∈ NP, then L ∈ NP
(2) if L ≤_ls L' and L' ∈ NL, then L ∈ NL
May the Karp reduction be replaced by a Cook reduction in (1)? Probably not:
coSAT ≤_Cook SAT, but if NP ≠ coNP, then coSAT ∉ NP.
(3) if L ≤_Cook L' and L' ∈ P, then L ∈ P [6.4]

Slide 12: Graph connectivity (CONN)
Def: CONN = { (G,u,v) : G is a directed graph, u,v ∈ V(G), and there is a directed path from u to v in G }. [6.5]

Slide 13: CONN is NL-complete
Thm: CONN is NL-complete.
Proof: CONN ∈ NL (by the following algorithm).
Input: G = (V,E), v,u ∈ V.
Task: decide whether there exists a directed path from v to u in G.
1. x ← v
2. counter ← |V|
3. repeat
4.   decrement counter by 1
5.   guess a node y ∈ V s.t. (x,y) ∈ E
6.   if y ≠ u then x ← y
7. until y = u or counter = 0
8. if y = u then accept, else reject
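The non-deterministic guesses above can be viewed as a certificate: a sequence of at most |V| vertices. A hedged sketch of the corresponding verifier (names are ours, not the slides'), which stores only the current vertex and a counter, exactly the O(log n) bookkeeping the algorithm needs:

```python
def conn_verify(edges, n, v, u, guesses):
    """Check a guessed vertex sequence as in the slide's algorithm.

    edges: set of directed pairs (x, y); n: |V|; guesses: the
    non-deterministic choices, one vertex per iteration.
    Only `x` and `counter` are stored, O(log n) bits each.
    """
    x, counter = v, n
    for y in guesses:
        counter -= 1
        if (x, y) not in edges:      # invalid guess: this branch rejects
            return False
        if y == u:
            return True              # reached the target: accept
        x = y
        if counter == 0:
            break
    return x == u

# On the path 0 -> 1 -> 2 -> 3, guessing the path itself accepts:
E = {(0, 1), (1, 2), (2, 3)}
assert conn_verify(E, 4, 0, 3, [1, 2, 3]) is True
# A wrong guess sequence simply rejects (some other branch may accept):
assert conn_verify(E, 4, 0, 3, [2]) is False
```

The machine accepts iff some guess sequence makes the verifier accept, which is the NL acceptance condition.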

Slide 14: CONN is NL-complete (cont.)
∀L ∈ NL, L is log-space reducible to CONN:
Input: an input string x.
Task: output a graph G = (V,E) and two nodes v,u ∈ V s.t. there is a path from v to u in G iff x is accepted by M.
1. compute n, the number of different configurations of M on input x
2. for i = 1 to n
3.   for j = 1 to n
4.     if there is a transition of M from configuration i to configuration j, output 1; otherwise output 0
5. output 1 and n (the initial and the accepting configuration)
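A hedged sketch of the reduction's output stage; `can_step` is a hypothetical stand-in for the log-space test "configuration i can move to configuration j in one step of M on x":

```python
def reduce_to_conn(n, can_step):
    """Emit the adjacency matrix of M's configuration graph, bit by bit.

    n: number of configurations; can_step(i, j): hypothetical predicate
    for a one-step transition.  Only the two loop indices are stored,
    so the reduction itself runs in logarithmic space.
    """
    bits = []
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            bits.append(1 if can_step(i, j) else 0)
    # configuration 1 is the start, configuration n the accepting one
    return bits, 1, n

# Toy machine whose configurations form the chain 1 -> 2 -> 3:
adj, v, u = reduce_to_conn(3, lambda i, j: j == i + 1)
assert adj == [0, 1, 0, 0, 0, 1, 0, 0, 0]
assert (v, u) == (1, 3)
```

The point of the sketch is that the reduction never stores the graph; each output bit is recomputed from the indices, which is what makes it a log-space reduction.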

Slide 15: Complementing classes
Def: Let L ⊆ {0,1}* be a language. The complement of L, denoted L̄, is the language {0,1}* \ L.
Def: Let C be a complexity class. The complement of C, denoted coC, is
coC = { L̄ : L ∈ C }. [6.6]

Slide 16: Is C = coC?
For every C that is a deterministic time or space complexity class, C = coC. In particular, P = coP.
Is NP = coNP? The conjecture is: NP ≠ coNP.

Slide 17: Is NL = coNL?
Claim: if there exists an NL-complete L s.t. L̄ ∈ NL, then NL = coNL.
Proof:
(1) Let L' ∈ NL. Since L is NL-complete, L' ≤_ls L via some f ∈ DSPACE(O(log n)): x ∈ L' ⟺ f(x) ∈ L.
(2) On the other hand, x ∉ L' ⟺ f(x) ∉ L; so L̄ ∈ NL ⟹ L̄' ∈ NL.
(3) This holds for every L' ∈ NL, thus NL = coNL. [6.7]

Slide 18: The co-CONN problem
Def: CONN̄ = { (G,u,v) : G is a directed graph, u,v ∈ V(G), and there is no directed path from u to v in G }.
Question: is CONN̄ ∈ NL?
Answer: yes!
Result: NL = coNL (Immerman's theorem, 1988).

Slide 19: Non-deterministic algorithms and count reachability
Def: A non-deterministic TM M is said to compute a function f if on any input x:
(1) either M halts with the right answer f(x), or M halts with output "failure";
(2) at least one of M's computations halts with the right answer.
Def (CR): Given G = (V,E) and a vertex v ∈ V, CR(G,v) is the number of vertices reachable from v in G.
Thm (CR): CR is computable in non-deterministic logarithmic space.

Slide 20: Algorithm for CR
CR(j) computes the number of vertices reachable from v within j steps (assume (v,v) ∈ E):
1. n ← 0
2. for u ∈ V
3.   count ← 0
4.   for v ∈ V
5.     non-deterministic switch:
5.1     verify reach(v, j-1), else fail (recursive call, in NL); count++; if (v,u) ∈ E then n++, break
5.2     continue
6.   if count ≠ CR(j-1) then fail
7. return n

Slide 21: Algorithm for CR (cont.)
The recursive call is replaced by a global variable CR-1 holding the previous count:
CR(j):
1. n ← 0
2. for u ∈ V
3.   count ← 0
4.   for v ∈ V
5.     non-deterministic switch:
5.1     verify reach(v, j-1), else fail; count++; if (v,u) ∈ E then n++, break
5.2     continue
6.   if count ≠ CR-1 then fail
7. return n
Main loop:
CR-1 ← 1
for j = 1..|V|: CR-1 ← CR(j)
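A hedged deterministic mirror of the inductive-counting idea (the non-determinism is replaced by actually knowing the reachable sets; names are ours). It makes the invariant concrete: the j-th count is the number of vertices reachable from v within j steps, and each round is computed only from the previous round.

```python
def count_reach(edges, n, v):
    """Compute CR(j) for j = 0..n-1 by inductive counting.

    edges: set of directed pairs; vertices are 0..n-1; the walk of
    length 0 reaches exactly {v} (the implicit self-loop at v).
    """
    reachable = {v}              # vertices reachable within 0 steps
    counts = [1]
    for j in range(1, n):
        nxt = set(reachable)
        for u in range(n):
            # u is reachable within j steps iff some w reachable
            # within j-1 steps has an edge (w, u)
            if any((w, u) in edges for w in reachable):
                nxt.add(u)
        reachable = nxt
        counts.append(len(reachable))
    return counts

E = {(0, 1), (1, 2), (2, 0), (3, 3)}
# From 0: {0}, then {0,1}, then {0,1,2}, then stable
assert count_reach(E, 4, 0) == [1, 2, 3, 3]
```

The NL algorithm cannot store `reachable`; instead it stores only the previous count CR-1 and non-deterministically re-certifies membership of each vertex, which is what makes the log-space version work.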

Slide 22: Proof of CR (cont.)
Lemma: machine CR uses O(log n) space.
Proof: line 1 uses O(log n) space; each other step uses at most ten variables, each of O(log n) space.
Lemma: if machine CR outputs an integer, then it is correct.
Corollary: machine CR is good. [6.11]

Slide 23: Immerman's theorem
Thm: NL = coNL.
Proof:
(1) CONN is NL-complete.
(2) CONN̄ ∈ NL.
Hence, by the claim above, NL = coNL.
Extension: ∀s(n) ≥ log n, NSPACE(s(n)) = coNSPACE(s(n)). [6.10]


Slide 25: Probabilistic computations
- Extend the notion of "efficient computation" beyond polynomial-time TMs.
- We will still consider only machines that are allowed to run for at most a fixed polynomial (in the input length) number of steps. [7.1]

Slide 26: Random vs. non-deterministic computations
- In an NDTM, even a single wizard's witness is enough to cause the input to be accepted.
- In the randomized model, we require the probability that such a witness is detected to be larger than a certain threshold.
- There are two approaches to randomness, online and offline, which we will show to be equivalent.

Slide 27: Online approach
The transition function of an NDTM maps each configuration to a set of possible moves; a randomized TM has a transition function of the same form, but the next move is chosen at random.
An NDTM accepts if a "good" guess exists: a single accepting computation is enough. A random TM accepts if sufficiently many accepting computations exist. [7.2]

Slide 28: Online approach (cont.)
- A random online TM defines a probability space on the possible computations.
- The computations can be described as a tree.
- We usually consider only binary trees, because any other tree can be simulated as closely as necessary.

Slide 29: Offline approach
- A language L ∈ NP iff there exist a DTM M and a polynomial p(.) s.t. for every x ∈ L there is a y, |y| ≤ p(|x|), with M(x,y) = 1.
- The DTM has access to a "witness" string of polynomial length.
- Similarly, a random TM has access to a "guess" input string. BUT...
- There must be sufficiently many such witnesses. [7.3]

Slide 30: The random complexity classes
- A language that can be recognized by a polynomial-time probabilistic TM with good probability is considered relatively easy.
- We first consider a two-sided-error complexity class.

Slide 31: The scheme
Def: L ∈ PP_ε iff there exists a polynomial-time probabilistic TM M, such that ∀x: Prob[M(x) = χ_L(x)] ≥ 1-ε, where χ_L(x) = 1 if x ∈ L, and χ_L(x) = 0 if x ∉ L. [7.13]

Slide 32: Claim: ∀ε, PP_ε ⊆ PSPACE
Let L ∈ PP_ε, M be the TM that recognizes L, and p(.) the time bound.
Define M'(x): run M(x) on all 2^{p(|x|)} possible coin-toss sequences, and decide according to the fraction that makes M(x) = 1.
M' uses polynomial space for each of the (exponentially many) invocations, and only needs to count the number of successes. [7.14]
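A hedged sketch of M' for a toy "machine" (the function `toy` and all names are ours, for illustration only): enumerate every coin string, run the machine deterministically on each, keep only a success counter, and take the majority.

```python
from itertools import product

def pspace_decide(machine, x, p):
    """Decide by majority over all 2^p coin-toss sequences.

    machine(x, coins) is the probabilistic TM viewed as a deterministic
    function of its coins; only a success counter is kept across runs.
    """
    successes = 0
    for coins in product((0, 1), repeat=p):   # 2^p invocations
        successes += machine(x, coins)
    return successes * 2 > 2 ** p             # strict majority

# Toy machine: for x = 1 it accepts unless all coins are 0 (prob 7/8);
# for x = 0 it accepts only when all coins are 1 (prob 1/8).
def toy(x, coins):
    if x == 1:
        return 1 if any(coins) else 0
    return 1 if all(coins) else 0

assert pspace_decide(toy, 1, 3) is True
assert pspace_decide(toy, 0, 3) is False
```

The simulation takes exponential time but reuses the same polynomial workspace per run, which is the point of the claim.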

Slide 33: PP_{1/2} is not a realistic class
Def: L ∈ PP' iff a TM M exists s.t.
x ∈ L ⟹ Prob[M(x)=1] > 1/2
x ∉ L ⟹ Prob[M(x)=1] ≤ 1/2
Obviously, PP_{1/2} ⊆ PP'. [7.13]

Slide 34: Claim: PP_{1/2} = PP'
Let L ∈ PP', M the TM that recognizes L, and p(.) the time bound.
M'(x, (a_1,...,a_{p(|x|)}, b_1,...,b_{p(|x|)})): if a_1 = ... = a_{p(|x|)} = 1 then return NO, else return M(x, (b_1,...,b_{p(|x|)})).
M' is exactly the same as M, except for an additional probability of 2^{-p(|x|)} of returning NO. [7.15]

Slide 35: PP_{1/2} = PP' (continued)
If x ∈ L: Prob[M(x)=1] > 1/2 ⟹ Prob[M(x)=1] ≥ 1/2 + 2^{-p(|x|)} (M's acceptance probability is a multiple of 2^{-p(|x|)}) ⟹ Prob[M'(x)=1] ≥ (1/2 + 2^{-p(|x|)})(1 - 2^{-p(|x|)}) > 1/2.
If x ∉ L: Prob[M(x)=0] ≥ 1/2 ⟹ Prob[M'(x)=0] ≥ 1/2 (1 - 2^{-p(|x|)}) + 2^{-p(|x|)} > 1/2.
In either case Prob[M'(x) = χ_L(x)] > 1/2.

Slide 36: Claim: NP ⊆ PP_{1/2}
Let L ∈ NP, M the NDTM that recognizes L, and p(.) the time bound.
M'(x, (b_1,...,b_{p(|x|)})): if b_1 = 1 then return M(x, (b_2,...,b_{p(|x|)})), else return YES.
If x ∉ L, then Prob[M'(x)=1] = 1/2 (b_1 = 0 only).
If x ∈ L, then Prob[M'(x)=1] > 1/2 (a witness exists).
Also coNP ⊆ PP_{1/2}, by symmetry. [7.16]
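The wrapper construction can be sketched with a toy NP verifier (compositeness via a guessed divisor; the verifier and all names are our illustrative assumptions, not the slides'). We compute the acceptance probability exactly over all coin strings:

```python
from itertools import product
from fractions import Fraction

def accept_prob(machine, x, p):
    """Exact acceptance probability over all 2^p coin strings."""
    runs = [machine(x, c) for c in product((0, 1), repeat=p)]
    return Fraction(sum(runs), len(runs))

# Toy NP verifier: the coin string encodes a guessed factor of x;
# accept iff it is a nontrivial divisor (some branch accepts iff x is composite).
def verifier(x, b):
    d = int("".join(map(str, b)), 2)
    return 1 if 1 < d < x and x % d == 0 else 0

# The wrapper from the slide: first coin 0 -> YES, else run the verifier.
def m_prime(x, coins):
    b1, rest = coins[0], coins[1:]
    return verifier(x, rest) if b1 == 1 else 1

assert accept_prob(m_prime, 9, 5) > Fraction(1, 2)    # 9 is composite
assert accept_prob(m_prime, 7, 5) == Fraction(1, 2)   # 7 is prime
```

The free YES on b_1 = 0 lifts the acceptance probability to exactly 1/2 for non-members, and any single witness pushes it strictly above 1/2 for members.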

Slide 37: The classes RP and coRP
- A language L ∈ RP iff a probabilistic TM M exists, such that
x ∈ L ⟹ Prob[M(x)=1] ≥ 1/2
x ∉ L ⟹ Prob[M(x)=1] = 0
- A language L ∈ coRP iff a probabilistic TM M exists, such that
x ∈ L ⟹ Prob[M(x)=1] = 1
x ∉ L ⟹ Prob[M(x)=0] ≥ 1/2
- coRP = { L : Σ* \ L ∈ RP } [7.4]

Slide 38: Comparing RP with NP
- Let L be a language in NP or RP.
- Let R_L be the relation defining the witness/guess for L for a certain TM. Then:
NP: x ∈ L ⟹ ∃y (x,y) ∈ R_L;  x ∉ L ⟹ ∀y (x,y) ∉ R_L
RP: x ∈ L ⟹ Prob_r[(x,r) ∈ R_L] ≥ 1/2;  x ∉ L ⟹ ∀r (x,r) ∉ R_L
- Obviously, RP ⊆ NP. [7.5]

Slide 39: Amplification
- The constant 1/2 in the definition of RP is arbitrary.
- Given a probabilistic TM M accepting any x ∈ L with probability p < 1/2, run M n times to "amplify" the probability gap.
- If x ∉ L, all runs will return 0.
- If x ∈ L and we run M n times, the probability that at least one run accepts is
Prob[M^n(x)=1] = 1 - Prob[M^n(x) ≠ 1] = 1 - Prob[M(x) ≠ 1]^n = 1 - (1 - Prob[M(x)=1])^n = 1 - (1-p)^n. [7.6]
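The arithmetic of the one-sided amplification is worth seeing with numbers (a sketch; the test values are arbitrary):

```python
def amplified(p, n):
    """Probability that at least one of n independent runs accepts,
    each run accepting with probability p."""
    return 1 - (1 - p) ** n

# Even a weak acceptance probability is quickly amplified:
assert abs(amplified(0.1, 1) - 0.1) < 1e-9
assert amplified(0.1, 50) > 0.99
# With p = 0, no number of repetitions creates a false accept:
assert amplified(0.0, 1000) == 0.0
```

The last assertion is the crux of one-sided error: repetition only helps on the YES side and can never hurt the NO side.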

Slide 40: Alternative definitions of RP
- RP_1 would err often, while RP_2 would almost never err:
- Def: L ∈ RP_1 iff ∃ probabilistic poly-time TM M and a polynomial p(.), s.t.
x ∈ L ⟹ Prob[M(x)=1] ≥ 1/p(|x|)
x ∉ L ⟹ Prob[M(x)=1] = 0
- Def: L ∈ RP_2 iff ∃ probabilistic poly-time TM M and a polynomial p(.), s.t.
x ∈ L ⟹ Prob[M(x)=1] ≥ 1 - 2^{-p(|x|)}
x ∉ L ⟹ Prob[M(x)=1] = 0

Slide 41: Claim: RP_1 = RP_2
- Trivially, RP_2 ⊆ RP_1: if |x| is long enough, then 1 - 2^{-p(|x|)} ≥ 1/p(|x|).
- Suppose L ∈ RP_1: there exists M_1 s.t. ∀x ∈ L: Prob[M_1(x,r)=1] ≥ 1/p(|x|).
- M_2 runs M_1 t(|x|) times, so that Prob[M_2(x,r)=0] ≤ (1 - 1/p(|x|))^{t(|x|)}.
- Solving (1 - 1/p(|x|))^{t(|x|)} = 2^{-p(|x|)} gives t(|x|) ≈ p(|x|)^2 / log_2 e.
- Hence M_2 runs in polynomial time and thus L ∈ RP_2, so RP_1 ⊆ RP_2. [7.7]

Slide 42: The class BPP = PP_{1/3}
- Def: L ∈ BPP iff there exists a poly-time probabilistic TM M, s.t. ∀x: Prob[M(x) = χ_L(x)] ≥ 2/3.
- The BPP machine's success probability is bounded away from its failure probability. [7.8]

Slide 43: Constant invariance
- An alternative definition:
x ∈ L ⟹ Prob[M(x)=1] ≥ p + ε
x ∉ L ⟹ Prob[M(x)=1] ≤ p - ε
- We can build a machine M_2 that runs M n times and accepts or rejects depending on whether the fraction of "yes" answers is more or less than p. After a constant number of executions (depending on p and ε but not on x) we get the desired probability. [7.9]

Slide 44: The weakest possible BPP definition
- Def: L ∈ BPPW iff there exist a polynomial-time probabilistic TM M, a polynomial-time computable function f: N → [0,1], and a polynomial p(.), s.t. ∀x:
x ∈ L ⟹ Prob[M(x)=1] ≥ f(|x|) + 1/p(|x|)
x ∉ L ⟹ Prob[M(x)=1] ≤ f(|x|) - 1/p(|x|)
- If we set f(|x|) = 1/2 and p(|x|) = 6 we get the original definition.
- Hence, BPP ⊆ BPPW. [7.10]

Slide 45: Chernoff's inequality
If X_1,...,X_n are n independent Bernoulli random variables with the same expectation p ≤ 1/2, then for every ε, 0 < ε ≤ p(1-p):
Prob[ |Σ_i X_i / n - p| > ε ] < 2 e^{-ε²n / (2p(1-p))} ≤ 2 e^{-2nε²}
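The bound can be checked exactly for small n by summing binomial probabilities (a sketch; the values of n, p, ε below are arbitrary test choices of ours):

```python
from math import comb, exp

def deviation_prob(n, p, eps):
    """Exact Prob[|sum(X_i)/n - p| > eps] for i.i.d. Bernoulli(p)."""
    total = 0.0
    for k in range(n + 1):
        if abs(k / n - p) > eps:
            total += comb(n, k) * p**k * (1 - p) ** (n - k)
    return total

n, p, eps = 60, 0.5, 0.1
exact = deviation_prob(n, p, eps)
bound = 2 * exp(-(eps**2) * n / (2 * p * (1 - p)))
weaker = 2 * exp(-2 * n * eps**2)
assert exact < bound <= weaker
```

Note that for p = 1/2 the two exponents coincide (since 2p(1-p) = 1/2); for smaller p the first bound is strictly tighter.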

Slide 46: Claim: BPPW = BPP
- Let L ∈ BPPW and let M be a TM for L.
- Def M'(x): invoke t_i = M(x) for i = 1...n, compute p̄ = Σ t_i / n; if p̄ > f(|x|) return YES, else return NO.
- Notice p̄ is the mean of a sample of size n of the random variable M(x).
- Setting ε = 1/p(|x|) and n = -ln(1/6) / (2ε²), we get the desired probability gap.

Slide 47: The strongest possible BPP definition
- Def: L ∈ BPPS iff there exist a polynomial-time probabilistic TM M and a polynomial p(.), s.t. ∀x: Prob[M(x) = χ_L(x)] ≥ 1 - 2^{-p(|x|)}.
- If we set p(|x|) = 2 we get the original definition, because 1 - 2^{-2} = 3/4 ≥ 2/3.
- Hence, BPPS ⊆ BPP. [7.11]

Slide 48: Claim: BPPS = BPP
- Let L ∈ BPP and let M be a TM for L.
- Def M'(x): invoke t_i = M(x) for i = 1...n, compute p̄ = Σ t_i / n; if p̄ > 1/2 return YES, else return NO.
- By Chernoff's inequality: Prob[|p̄ - Exp[M(x)]| > 1/6] ≤ 2 e^{-n/18}.
- But Exp[M(x)] ≤ 1/2 - 1/6 or Exp[M(x)] ≥ 1/2 + 1/6, so Prob[M'(x) = χ_L(x)] ≥ 1 - 2 e^{-n/18}.
- If we choose n = 18 ln(2) p(|x|) we get the desired probability.
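A hedged sketch of M''s error bound: the exact error probability of the majority vote, computed by binomial sums, indeed drops below 2^{-p} for the slide's choice of n (the toy values below are ours):

```python
from math import comb, ceil, log

def majority_error(n, q):
    """Exact Prob[majority of n runs is wrong] when each run is
    correct independently with probability q > 1/2."""
    # wrong iff at most floor(n/2) of the runs are correct
    return sum(comb(n, k) * q**k * (1 - q) ** (n - k)
               for k in range(n // 2 + 1))

q = 2 / 3                          # BPP success probability
target_p = 5                       # want error below 2^-5
n = ceil(18 * log(2) * target_p)   # n as chosen on the slide
if n % 2 == 0:
    n += 1                         # odd n avoids ties
assert majority_error(n, q) < 2 ** (-target_p)
```

The exact error is well below the Chernoff estimate; the bound is loose but suffices to turn a 2/3 guarantee into an exponentially small one with polynomially many repetitions.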

Slide 49: BPP and other complexity classes
- Clearly, RP ⊆ BPP.
- BPP ⊆ NP? Unknown.
- coBPP = BPP, by definition. [7.12]

Slide 50: The class ZPP
- Def: L ∈ ZPP iff there exists a polynomial-time probabilistic TM M, such that ∀x: M(x) ∈ {0, 1, ⊥}, Prob[M(x) = ⊥] ≤ 1/2, and Prob[M(x) = χ_L(x) or M(x) = ⊥] = 1.
- Hence Prob[M(x) = χ_L(x)] ≥ 1/2.
- The symbol ⊥ means "I don't know".
- The value 1/2 is arbitrary and can be replaced by 2^{-p(|x|)} or 1 - 1/p(|x|). [7.17]

Slide 51: Claim: ZPP = RP ∩ coRP
Let L ∈ ZPP, M a probabilistic TM for L.
Def M'(x): let b = M(x); if b = ⊥ return 0, else return b.
- If x ∉ L, M'(x) will never return 1.
- If x ∈ L, Prob[M'(x)=1] ≥ 1/2, as required.
- Hence ZPP ⊆ RP. Similarly, ZPP ⊆ coRP.

Slide 52: Claim: ZPP = RP ∩ coRP (cont.)
Let L ∈ RP ∩ coRP, and let M_RP and M_coRP be the machines for L according to RP and coRP.
Def M'(x):
  if M_RP(x) = YES return 1
  if M_coRP(x) = NO return 0, else return ⊥
- M_RP(x) never returns YES if x ∉ L, and M_coRP(x) never returns NO if x ∈ L. Therefore M'(x) never returns a value conflicting with χ_L(x).
- The probability that both M_RP and M_coRP give their uninformative answer is at most 1/2 ⟹ Prob[M'(x) = ⊥] ≤ 1/2.
- Hence RP ∩ coRP ⊆ ZPP, and so ZPP = RP ∩ coRP.
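A hedged sketch of the combiner with toy one-sided machines (the language "x is even" and all names are illustrative assumptions of ours): `m_rp` never accepts outside L, `m_corp` never rejects inside L, so definite answers of the combined machine are always correct.

```python
import random

def m_rp(x, rng):
    """Toy RP machine for L = even numbers: no false YES."""
    if x % 2 == 0 and rng.random() < 0.5:
        return "YES"
    return "NO"                      # may be a false NO for x in L

def m_corp(x, rng):
    """Toy coRP machine for the same L: no false NO."""
    if x % 2 != 0 and rng.random() < 0.5:
        return "NO"
    return "YES"                     # may be a false YES for x not in L

def m_zpp(x, rng):
    """ZPP combiner: definite answers are always correct."""
    if m_rp(x, rng) == "YES":
        return 1
    if m_corp(x, rng) == "NO":
        return 0
    return None                      # "I don't know"

rng = random.Random(0)
# 4 is in L: the combiner may say "don't know", but never 0.
assert all(m_zpp(4, rng) in (1, None) for _ in range(200))
# 7 is not in L: never 1.
assert all(m_zpp(7, rng) in (0, None) for _ in range(200))
```

The assertions are structural: they hold for every random seed, which is exactly the zero-error property of ZPP.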

Slide 53: The random classes hierarchy
P ⊆ ZPP ⊆ RP ⊆ BPP.
What about BPP = P?

Slide 54: Randomized space complexity
Def: L ∈ badRSPACE(S) iff there exists a randomized TM M s.t. ∀x ∈ {0,1}*:
x ∈ L ⟹ Prob[M(x)=1] ≥ 1/2
x ∉ L ⟹ Prob[M(x)=1] = 0
and M uses at most S(|x|) space.
Notice: M has NO time restriction. [7.18]

Slide 55: Claim: badRSPACE(S) = NSPACE(S)
- Let L ∈ badRSPACE(S). If x ∈ L then there are witnesses (accepting coin sequences); if x ∉ L then there are none. Hence L ∈ NSPACE(S).
- Conversely, let L ∈ NSPACE(S), and let M be the NDTM for L with space S(|x|).
- If there is an accepting computation, there is one of at most e^{S(|x|)} steps, so ∃r, |r| < e^{S(|x|)}, s.t. M(x,r) = 1.
- For a uniform r: Prob[M(x,r)=1] ≥ 2^{-e^{S(|x|)}}.
- If we run M 2^{e^{S(|x|)}} times, one of these runs should find the right r.

Slide 56: Claim: badRSPACE(S) = NSPACE(S) (cont.)
- But to do that we would have to count to t = 2^{e^{S(|x|)}}, which takes e^{S(|x|)} space!
- Instead, use a randomized counter of k = log_2 t coins, which stops the algorithm when all k coins come up 1.
- The expected time until this happens is 2^k = t.
- There is no need to store the k tosses; we only have to count to k, which takes log(k) = log(log(t)) = S(|x|) space.
- Hence L ∈ badRSPACE(S).
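A hedged simulation of the randomized counter (seed and sizes are arbitrary choices of ours): toss k coins per round and stop on all-ones; the stopping time is geometric with success probability 2^{-k}, hence mean 2^k.

```python
import random

def random_counter_rounds(k, rng):
    """Rounds until k fair coins all come up 1 (expected 2^k rounds).
    Only a loop index over the k coins is needed: O(log k) space per round."""
    rounds = 0
    while True:
        rounds += 1
        if all(rng.random() < 0.5 for _ in range(k)):
            return rounds

rng = random.Random(1)
k = 5
trials = [random_counter_rounds(k, rng) for _ in range(3000)]
mean = sum(trials) / len(trials)
# expected value is 2^5 = 32; the sample mean should be close
assert 25 < mean < 40
```

The design point is that the counter's state is O(log k) bits even though its expected count is 2^k, which is what lets the simulation stay within space S(|x|).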

Slide 57: Randomized space complexity - the correct version
Def: L ∈ RSPACE(S) iff there exists a randomized TM M s.t. ∀x ∈ {0,1}*:
x ∈ L ⟹ Prob[M(x)=1] ≥ 1/2
x ∉ L ⟹ Prob[M(x)=1] = 0
and M uses at most S(|x|) space and runs for at most e^{S(|x|)} time.
Notice: M has both a space and a time restriction.
Define: RL = RSPACE(log).
Claim: RL ⊆ NL.

Slide 58: Undirected graph connectivity
- Input: an undirected graph G = (V,E) and two vertices s and t.
- Task: decide whether there is a path between s and t in G.
- Claim: let n = |V|. With probability at least 1/2, a random walk of length 8n³ from s visits all the vertices in the connected component of s. [7.19]

Slide 59: Proof sketch
- Let G' = (V',E') be the connected component of s in G.
- Let T_{u,v} be a random variable: the number of steps until a walk from u reaches v.
- Easily, for neighboring u,v: E[T_{u,v}] ≤ 2|E'|.
- Let Cover(G') be the expected number of steps until the last vertex of V' is encountered, and let C be a directed cycle that covers G'.

Slide 60: Proof sketch - continued
- Cover(G') ≤ Σ_{(u,v) ∈ C} E[T_{u,v}] ≤ |C| * 2|E'| < 4|E'||V'|.
- So, by Markov's inequality, with probability at least 1/2 a random walk of length 8|E'||V'| from s visits all the vertices in the connected component of s.
- The machine M simply walks randomly in G starting at s, until t is found or 8|E||V| steps have elapsed.
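A hedged sketch of machine M on a tiny undirected graph (graphs, seed, and trial counts are illustrative choices of ours); note the one-sided error: a disconnected t is never falsely reported.

```python
import random

def random_walk_connect(adj, s, t, rng):
    """Walk at random from s for at most 8*|E|*|V| steps; report
    whether t was encountered (one-sided error, as in RL)."""
    n = len(adj)
    m = sum(len(nbrs) for nbrs in adj) // 2      # undirected edge count
    cur = s
    for _ in range(8 * m * n):
        if cur == t:
            return True
        cur = rng.choice(adj[cur])               # uniform random neighbor
    return cur == t

# Path graph 0 - 1 - 2 - 3; s = 0, t = 3 (connected).
adj = [[1], [0, 2], [1, 3], [2]]
rng = random.Random(2)
hits = sum(random_walk_connect(adj, 0, 3, rng) for _ in range(50))
assert hits >= 25            # claim: success probability at least 1/2
# A vertex in another component is never falsely reported:
adj2 = [[1], [0], [3], [2]]  # components {0,1} and {2,3}
assert not any(random_walk_connect(adj2, 0, 3, rng) for _ in range(50))
```

Only the current vertex and a step counter are stored, mirroring the log-space budget of RL.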

Slide 61: Directed ST-CONN
- The above claim does not hold for directed ST-CONN. (figure: a directed graph with source S and target T)
- A walk of length k has probability (1/2)^k of reaching the k-th vertex from s.
- A walk of exponential length is needed to reach t with high probability. [7.20]




