
1 Randomization, Derandomization, and Parallelization --- take the MIS problem as a demonstration
Speaker: Hong Chih-Duo
Advisor: Chao Kuen-Mao
National Taiwan University, Department of Information Engineering

2 The MIS problem
► Finding a maximum independent set (MIS) is NP-hard.
► The decision version of MIS --- deciding the independence number α(G) of G --- is still NP-complete.
► However, there are many good algorithms to approximate it in various settings.
► A naive greedy algorithm guarantees a (Δ/(Δ+1))|OPT| solution. (cf. your textbook! A sketch of one common variant follows below.)
► Could tossing coins help in this respect?
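
A minimal sketch of one common greedy variant (the minimum-degree rule; the function name and adjacency-list format are illustrative, and the slides' exact greedy rule may differ):

```python
def greedy_independent_set(adj):
    """Greedy heuristic: repeatedly take a minimum-degree vertex,
    then delete it and its neighbors from the graph.
    adj maps each vertex to the set of its neighbors."""
    remaining = {v: set(nbrs) for v, nbrs in adj.items()}
    independent = set()
    while remaining:
        v = min(remaining, key=lambda u: len(remaining[u]))  # fewest surviving neighbors
        independent.add(v)
        for u in remaining[v] | {v}:            # remove v together with N(v)
            for w in remaining.pop(u, set()):
                if w in remaining:
                    remaining[w].discard(u)
    return independent

# Example: the path 0-1-2-3 yields the independent set {0, 2}.
print(greedy_independent_set({0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}))
```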

3 A simple randomized algorithm
► For each vertex v, add v to I′ independently with probability p. (p is to be determined later.)
► For each edge uv with both u, v ∊ I′, remove one of the two endpoints from I′ uniformly at random.
► The resulting set I″ is an independent set of G.
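
A minimal sketch of this procedure, assuming the graph is given as an edge list over vertices 0..n−1 (function and variable names are illustrative):

```python
import random

def randomized_independent_set(n, edges, p):
    """One round of random selection plus conflict removal.
    n: number of vertices; edges: list of (u, v) pairs; p: selection probability."""
    # Step 1: put each vertex in I' independently with probability p.
    I = {v for v in range(n) if random.random() < p}
    # Step 2: for every edge still inside I', evict one endpoint at random.
    for u, v in edges:
        if u in I and v in I:
            I.discard(random.choice((u, v)))
    return I  # I'' is independent: every conflicting edge lost an endpoint
```

With p = n/2m (chosen on the slides that follow), the expected size of the result is at least n²/4m.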

4 (Illustration of Steps 1 and 2 of the algorithm.)

5 (Illustration of Step 3, contrasting a "good luck" outcome with a "bad luck" outcome of the random removals.)

6 Average performance analysis
Let Xi be the indicator that vertex i enters I′, so Pr[Xi = 1] = p, and define
Z = Σi Xi − Σ(i,j)∊E XiXj.
Step 2 removes at most one vertex per edge inside I′, so |I″| ≥ Z.

7 By linearity of expectation, E[Z] = np − mp². Choosing p = n/2m to maximize this (feasible when m ≥ n/2) gives E[Z] = n²/4m. In particular, every graph with n vertices and m edges has an independent set of size at least n²/4m.
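
A quick Monte Carlo sanity check of this calculation (an illustrative simulation, not from the slides; the cycle example and function name are hypothetical):

```python
import random

def expected_Z_estimate(n, edges, p, trials=10000):
    """Monte Carlo estimate of E[Z] = E[|I'|] - E[#edges inside I']."""
    total = 0
    for _ in range(trials):
        picked = [random.random() < p for _ in range(n)]
        bad = sum(1 for u, v in edges if picked[u] and picked[v])
        total += sum(picked) - bad
    return total / trials

# Example: a cycle on 12 vertices (n = 12, m = 12), so n^2/4m = 3.
edges = [(i, (i + 1) % 12) for i in range(12)]
print(expected_Z_estimate(12, edges, p=12 / (2 * 12)))  # ~= 3.0
```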

8 Some refreshers from probability theory

9 ► Theorem (total expectation). For a random variable Z and an indicator X,
E[Z] = Pr[X = 0]·E[Z | X = 0] + Pr[X = 1]·E[Z | X = 1];
in particular, max over b ∊ {0, 1} of E[Z | X = b] is at least E[Z].

10 Effect of a single toss on E[Z]
By the theorem above, E[Z] = Pr[X1 = 0]·E[Z | X1 = 0] + Pr[X1 = 1]·E[Z | X1 = 1], so for at least one choice x1 ∊ {0, 1} we have E[Z | X1 = x1] ≥ E[Z]. Both conditional expectations have closed forms, so the better branch can be selected deterministically.

11 Iterating over X2, ..., Xn in the same way yields an assignment (x1, ..., xn) with E[Z | X1 = x1, ..., Xn = xn] ≥ E[Z] ≥ n²/4m.

12 A derandomized result
► We derived a deterministic procedure to find an assignment {x1, x2, ..., xn} that guarantees an independent set of size f(x1, ..., xn) ≥ n²/4m.
► Note that we may have to prune I′ = {i : xi = 1} in order to get an independent set. (Why? The assignment only guarantees that Z = |I′| − #{edges inside I′} is large; conflicting edges may remain.)
► The argument is in fact an instance of a general scheme called the conditional probability method, which is very powerful for derandomizing probabilistic proofs. (A sketch follows below.)
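
A minimal sketch of this procedure (names are illustrative; it uses the closed form of the conditional expectation, where an undecided Xi contributes p and a fixed one contributes its value):

```python
def derandomized_assignment(n, edges, p):
    """Method of conditional expectations for
    Z = sum_i X_i - sum_{(i,j) in E} X_i * X_j, with Pr[X_i = 1] = p.
    Fixes x_1, ..., x_n one at a time, never letting E[Z | fixed part] drop."""
    x = [None] * n  # None means "still random"

    def cond_expectation():
        # E[X_i] is x[i] if fixed, else p; by independence E[X_i X_j] factors.
        val = lambda i: p if x[i] is None else x[i]
        return (sum(val(i) for i in range(n))
                - sum(val(u) * val(v) for u, v in edges))

    for i in range(n):
        best = None
        for b in (0, 1):          # try both branches, keep the better one
            x[i] = b
            e = cond_expectation()
            if best is None or e > best[0]:
                best = (e, b)
        x[i] = best[1]
    return x  # E[Z | x] >= E[Z] = n*p - m*p^2  (naive O(n(n+m)) total)
```

Pruning I′ = {i : x[i] = 1} by deleting one endpoint of each remaining conflicting edge then yields an independent set of size at least Z(x) ≥ n²/4m.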

13 Notes on the conditional probability method
► In general, conditional expectations are hard to compute; in many instances there is no efficient way to evaluate the required conditional expectation.
► Moreover, the conditional probability method is inherently sequential: the variables xi are determined in a fixed order. As a result, the time complexity is Ω(n) even with an unbounded number of processors.
► Example: the Hadamard determinant problem (slides 22–23).
► PRAM: see the notes on parallel computation models (slides 25–26).

14 The length of the computation path is Ω(n). (Figure: a decision tree of height n that fixes X1 = 0, X2 = 1, X3 = 0, ..., Xn−1 = 1, Xn = 0 level by level, ending at the good points.)

15 "Compress" the computation path
► It appears that the bottleneck lies in the number of random variables {Xi}. Do we really need n variables?
► Recall the definition Z = Σi Xi − Σ(i,j)∊E XiXj, wherein X1, ..., Xn are i.i.d.
► Note that the calculation of E[Z] only uses E[Xi] and E[XiXj] = E[Xi]E[Xj], so if X1, ..., Xn are merely pairwise independent, we still have E[Z] = n²/4m.
► We may generate pairwise-independent X1, ..., Xn from far fewer i.i.d. random variables!

16 "Compress" the computation path

17 "Compress" the computation path
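
One standard construction that achieves this, sketched under the assumption that pairwise independence comes from a random linear hash over a prime field Z_q with q ≥ n (the slides' exact construction may differ):

```python
import random

def pairwise_independent_bits(n, p, q):
    """Generate X_1, ..., X_n, pairwise independent with Pr[X_i = 1] ~= p,
    from only two random seeds a, b in Z_q (i.e. O(lg n) random bits).
    Requires q prime and q >= n, so distinct i, j stay distinct mod q."""
    a, b = random.randrange(q), random.randrange(q)
    t = round(p * q)  # threshold t/q approximates p
    # h(i) = a + b*i mod q: for i != j, (h(i), h(j)) is uniform on Z_q^2,
    # because (a, b) -> (a + b*i, a + b*j) is a bijection.
    return [(a + b * i) % q < t for i in range(1, n + 1)]
```

Only the seed pair (a, b) is random, and the threshold t/q only approximates p; this is exactly the kind of approximation the conversion theorem on slide 20 tolerates.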

18

19 The length of the computation path is now O(lg n). (Figure: a decision tree of height k = ⌈lg n⌉ that fixes W1 = ω1, W2 = ω2, W3 = ω3, ..., Wk = ωk level by level, ending at the good points.)

20 A parallelized result
► We derived a deterministic parallel algorithm to find an independent set of size h(ω1, ..., ωk) ≥ n²/4m, for a graph with n vertices and m edges.
► This algorithm can be implemented on an EREW PRAM in O(lg² n) time with O(m²) processors. (A sketch of the underlying seed search follows below.)
► There is a high-level theorem reflecting this fact: if an RNC algorithm works properly when the probabilities are suitably approximated, then it can be converted into an equivalent NC algorithm.
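
A sequential sketch of the seed search that the parallel algorithm distributes (illustrative names; vertices 1..n, with q and threshold t as in the pairwise-independent construction above). Each of the q² = O(n²) seed pairs is evaluated independently, which is what makes a PRAM implementation with polynomially many processors and an O(lg n)-depth max-reduction possible:

```python
def best_seed_independent_set(n, edges, q, t):
    """Exhaust the small sample space of seed pairs (a, b); since the
    average of |I| - #bad over seeds is ~n^2/4m, the best seed is at
    least that good. Sequentially this is just a double loop."""
    def score_and_set(a, b):
        I = {i for i in range(1, n + 1) if (a + b * i) % q < t}
        bad = [(u, v) for u, v in edges if u in I and v in I]
        return len(I) - len(bad), I, bad

    score, I, bad = max((score_and_set(a, b)
                         for a in range(q) for b in range(q)),
                        key=lambda r: r[0])
    for u, v in bad:     # prune one endpoint per conflicting edge
        I.discard(u)
    return I             # independent, of size >= score ~ n^2/4m
```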

21 References
M. Karpinski and W. Rytter, Fast Parallel Algorithms for Graph Matching Problems, pp. 104–115.
N. Alon and J. Spencer, The Probabilistic Method, 2nd edition, pp. 249–257.
R. Motwani and P. Raghavan, Randomized Algorithms, pp. 335–346.

22 Example: looking for the biggest determinant
► Let An be an n × n matrix with entries in {+1, −1}.
► How big can |det(An)| be?
► This is a famous (and unsolved) problem of Hadamard.
► Fact: |det(An)| ≤ n^(n/2), a corollary of Hadamard's determinant theorem.

23 Example: looking for the biggest determinant
► Let's toss coins!
► (M. Kac) A random n × n matrix An with i.i.d. uniform {+1, −1} entries has E[|det(An)|²] = n!.
► So there exists an n × n matrix An that satisfies |det(An)| ≥ (n!)^(1/2).
► However, no one knows how to construct one efficiently.
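
A quick numerical illustration of Kac's identity (a hypothetical check using NumPy, not from the slides):

```python
import numpy as np
from math import factorial

rng = np.random.default_rng(0)
n, trials = 5, 100000
# Sample random +/-1 matrices and average det^2.
dets = [np.linalg.det(rng.choice([-1.0, 1.0], size=(n, n)))
        for _ in range(trials)]
print(np.mean(np.square(dets)))   # ~= 120.0
print(factorial(n))               # 5! = 120
```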

24 ► Corollary

25 Some notes on parallel computation models
► PRAM: a very general model, concerned mostly with exposing the parallel nature of problems.
► A PRAM consists of a number of RAMs that work synchronously and communicate through a common random-access memory.
► Typically, technical details such as synchronization and communication problems are ignored.
► The most "realistic" PRAM variant is the EREW PRAM, allowing only exclusive reads and exclusive writes on the shared memory.

26 Some notes on parallel complexity classes
► The main aim of parallel computing is to decrease computation time. The main class of interest is NC = { problems solvable in polylogarithmic time with polynomially many processors }.
► Perhaps the most important question in the theory of parallel computation is: does P = NC?
► It is strongly believed that the answer is negative. However, this question may be of similar difficulty to the P = NP problem.

