
1 Randomization, Derandomization, and Parallelization --- take the MIS problem as a demonstration Speaker: Hong Chih-Duo Advisor: Chao Kuen-Mao National Taiwan University Department of Information Engineering

2 The MIS problem ► Finding a maximum independent set is NP-hard. ► The decision version of MIS, deciding whether the independence number α(G) is at least k, is still NP-complete. ► However, there are many good algorithms to approximate it, in various settings. ► A naive greedy algorithm guarantees a (Δ/(Δ+1))|OPT| solution. (cf. your textbook! A sketch follows below.) ► Could tossing coins help in this respect?
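A sketch of the kind of naive greedy heuristic the bullet above refers to (an assumed illustration, not necessarily the textbook's exact algorithm; the guarantee stated in the comment is the standard n/(Δ+1) bound for this particular rule):

```python
def greedy_independent_set(adj):
    """Greedy heuristic: repeatedly pick a remaining vertex and discard
    its neighbors.  Returns a maximal independent set; in a graph of
    maximum degree D its size is at least n/(D+1), since each chosen
    vertex eliminates at most D+1 vertices.

    adj: dict mapping each vertex to the set of its neighbors.
    """
    remaining = set(adj)
    independent = set()
    while remaining:
        v = min(remaining)        # any selection rule works; min() keeps it deterministic
        independent.add(v)
        remaining.discard(v)
        remaining -= adj[v]       # neighbors of v can no longer be chosen
    return independent

# Example: a 4-cycle 0-1-2-3-0.
adj = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(greedy_independent_set(adj))   # e.g. {0, 2}
```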

3 A simple randomized algorithm ► For each vertex v, add v to I’ with probability p. (p is to be determined later.) ► For each edge uv with both u, v ∊ I’, remove one of its endpoints from I’ uniformly at random. ► The resulting set I” is an independent set of G.
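A minimal Python sketch of this two-step procedure (the graph encoding is illustrative; the choice p = n/(2m) anticipates the analysis on the next slides and assumes m ≥ n/2 so that p ≤ 1):

```python
import random

def randomized_independent_set(vertices, edges, p):
    """Two-step randomized procedure from the slide.

    Step 1: put each vertex into I' independently with probability p.
    Step 2: for every edge with both endpoints still in I', evict one
            endpoint uniformly at random.  The survivors form I''.
    """
    I = {v for v in vertices if random.random() < p}     # step 1
    for u, v in edges:                                   # step 2
        if u in I and v in I:
            I.discard(random.choice((u, v)))
    return I                                             # I'' on the slide

# Example: a triangle plus a pendant vertex; p = n/(2m) as in the analysis.
vertices = [0, 1, 2, 3]
edges = [(0, 1), (1, 2), (0, 2), (2, 3)]
p = len(vertices) / (2 * len(edges))
print(randomized_independent_set(vertices, edges, p))
```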

4 (Figure: Step 1 and Step 2 of the algorithm illustrated on an example graph.)

5 (Figure: Step 3 illustrated; a “good luck” outcome vs. a “bad luck” outcome.)

6 Average performance analysis

7
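The formulas on slides 6 and 7 appear only as images in the original deck. A standard expectation argument consistent with the n²/4m bound quoted on the later slides goes as follows (a reconstruction under the usual choice of Z, not the slides' own derivation):

```latex
% X_v = 1 iff v is put into I' (independently, with probability p).
% Z = |I'| - #(edges inside I') is a lower bound on |I''|, because step 2
% removes at most one vertex per conflicting edge.
\[
  Z \;=\; \sum_{v \in V} X_v \;-\; \sum_{(u,v) \in E} X_u X_v ,
  \qquad
  \mathbb{E}[Z] \;=\; np - mp^2 .
\]
\[
  \text{Taking } p = \tfrac{n}{2m} \text{ (which maximizes } np - mp^2\text{):}
  \qquad
  \mathbb{E}[Z] \;=\; \frac{n^2}{2m} - \frac{n^2}{4m} \;=\; \frac{n^2}{4m}.
\]
```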

8 Some refreshers from probability theory

9 ► Theorem

10 Effect of a single toss on E[Z]

11
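Slides 10 and 11 likewise show the computation only as images. The effect of fixing a single coin, reconstructed under the same Z as above, is the heart of the derandomization:

```latex
\[
  \mathbb{E}[Z] \;=\; p\,\mathbb{E}\big[Z \mid X_i = 1\big]
              \;+\; (1-p)\,\mathbb{E}\big[Z \mid X_i = 0\big],
\]
% so for at least one value b in {0,1} we have E[Z | X_i = b] >= E[Z].
% Both conditional expectations are easy to evaluate exactly, because Z is a
% sum of single-variable and pairwise terms.  Fixing X_1, ..., X_n one at a
% time, always keeping the better branch, ends at a deterministic point
% (x_1, ..., x_n) with Z(x_1, ..., x_n) >= E[Z] = n^2 / 4m.
```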

12 A derandomized result ► We derived a deterministic procedure to find an assignment {x₁, x₂, ..., xₙ} that guarantees an independent set of size f(x₁, ..., xₙ) ≥ n²/4m. ► Note that we may have to prune I’ = {i : xᵢ = 1} in order to get an independent set. (Why?) ► The argument is an instance of a general scheme called the conditional probability method, which is very powerful for derandomizing probabilistic proofs.
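A compact Python sketch of how the conditional-expectation computation drives the deterministic choices (names and graph encoding are illustrative, not from the deck; the pruning step mirrors step 2 of the randomized algorithm):

```python
def conditional_expectation(fixed, p, n, edges):
    """E[Z | the coins listed in `fixed` are already decided], where
    Z = sum_v X_v - sum_{(u,v) in E} X_u X_v and every undecided coin
    is still an independent Bernoulli(p).
    fixed: dict mapping a vertex to its decided value (0 or 1)."""
    prob = lambda v: fixed[v] if v in fixed else p
    return sum(prob(v) for v in range(n)) - sum(prob(u) * prob(v) for u, v in edges)


def derandomized_independent_set(n, edges):
    """Method of conditional expectations for the MIS lower bound.
    Assumes m >= n/2 so that p = n/(2m) is a valid probability."""
    p = n / (2 * len(edges))
    fixed = {}
    for v in range(n):
        # Keep whichever branch does not decrease the conditional expectation;
        # one of the two branches is always >= the current value, hence >= n^2/(4m).
        take = conditional_expectation({**fixed, v: 1}, p, n, edges)
        skip = conditional_expectation({**fixed, v: 0}, p, n, edges)
        fixed[v] = 1 if take >= skip else 0
    I = {v for v in range(n) if fixed[v] == 1}
    for u, v in edges:                 # prune, as in step 2 of the random version
        if u in I and v in I:
            I.discard(v)
    return I


edges = [(0, 1), (1, 2), (0, 2), (2, 3)]
print(derandomized_independent_set(4, edges))   # size >= n^2/(4m) = 1 here
```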

13 Notes on the conditional probability method ► In general, it is hard to compute conditional expectations. ► There are many instances where there is no efficient way to compute the required conditional expectation. ► Moreover, the conditional probability method is inherently sequential: the variables xᵢ are determined in a fixed order. As a result, the time complexity is Ω(n) even with an unbounded number of processors. ► Example: the determinant problem on slide 22. ► PRAM: see slide 25.

14 The length of the computation path is Ω(n). (Figure: the binary decision tree over X₁, X₂, ..., Xₙ; a root-to-leaf path fixes X₁ = 0, X₂ = 1, X₃ = 0, ..., Xₙ₋₁ = 1, Xₙ = 0; the “good points” sit at the leaves, and the height of the tree is n.)

15 “Compress” the computation path ► It appears that the bottleneck lies in the number of random variables {Xᵢ}. Do we really need n of them? ► Recall the definition of Z: Z = Σ_v X_v − Σ_{(u,v)∈E} X_u X_v, wherein X₁, ..., Xₙ are i.i.d. ► Note that E[Z] involves only single variables and pairs, so if X₁, ..., Xₙ are merely pairwise independent we still get E[Z] = n²/4m. ► We may generate pairwise independent X₁, ..., Xₙ from far fewer i.i.d. random variables!
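The construction on slides 16 through 18 is shown there only as figures. One standard way to realize the last bullet, offered here as an assumed stand-in: pick a prime q ≥ n and two uniform seeds a, b ∈ Z_q; the values (a·i + b) mod q are pairwise independent and uniform over Z_q, so thresholding them yields pairwise independent coins with success probability close to p.

```python
import random

def pairwise_independent_coins(n, p, q, a=None, b=None):
    """Generate X_1, ..., X_n that are pairwise independent with
    Pr[X_i = 1] ~ p, from only two random seeds a, b in Z_q.

    q must be a prime >= n.  For i != j, the pair ((a*i+b) % q, (a*j+b) % q)
    is uniform on Z_q x Z_q when (a, b) is uniform, which gives pairwise
    independence; the threshold turns a uniform value into a biased coin.
    """
    if a is None:
        a = random.randrange(q)
    if b is None:
        b = random.randrange(q)
    threshold = round(p * q)        # Pr[X_i = 1] = threshold/q, close to p
    return [1 if (a * i + b) % q < threshold else 0 for i in range(1, n + 1)]

# Only 2*ceil(lg q) = O(lg n) random bits are consumed, instead of n coin tosses.
print(pairwise_independent_coins(n=8, p=0.5, q=11))
```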

16 “Compress” the computation path

17 “Compress” the computation path

18

19 The length of the computation path is now O(lg n). (Figure: the decision tree now branches on W₁ = ω₁, W₂ = ω₂, ..., Wₖ = ωₖ; the “good points” sit at the leaves, and the height is k = ⌈lg n⌉.)

20 A parallelized result ► We derived a deterministic parallel algorithm that finds an independent set of size h(ω₁, ..., ωₖ) ≥ n²/4m for a graph with n vertices and m edges. ► This algorithm can be implemented on an EREW PRAM in O(lg² n) time with O(m²) processors. ► There is a high-level theorem behind this fact: if an RNC algorithm works properly when the probabilities are suitably approximated, then it can be converted into an equivalent NC algorithm.
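Why the parallel version is deterministic: the seed (a, b) of the pairwise independent generator ranges over only q² = O(n²) values, and since the average of Z over a uniform seed is about n²/4m, some seed must do at least that well. A sequential sketch of that exhaustive seed search, consistent with the generator sketched after slide 15 (on a PRAM every seed would be evaluated by its own group of processors and the best result selected by a parallel maximum):

```python
def best_seed_independent_set(n, edges, q):
    """Try every seed (a, b) of the pairwise independent generator and keep
    the best resulting independent set.  The average over seeds of
    |I'| - #(conflicting edges) is roughly n^2/(4m) (exactly so when
    threshold/q equals p), so the best seed is at least that good.
    Sequential here; on a PRAM all q*q seeds are handled in parallel."""
    p = n / (2 * len(edges))
    threshold = round(p * q)
    best = set()
    for a in range(q):
        for b in range(q):
            I = {v for v in range(1, n + 1) if (a * v + b) % q < threshold}
            for u, v in edges:                 # prune conflicting edges
                if u in I and v in I:
                    I.discard(v)
            if len(I) > len(best):
                best = I
    return best

# Example: vertices 1..4, a triangle plus a pendant edge, q = 5 (prime >= n).
print(best_seed_independent_set(4, [(1, 2), (2, 3), (1, 3), (3, 4)], q=5))
```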

21 References
Fast Parallel Algorithms for Graph Matching Problems, M. Karpinski and W. Rytter, pp. 104–115.
The Probabilistic Method, 2nd edition, N. Alon and J. H. Spencer, pp. 249–257.
Randomized Algorithms, R. Motwani and P. Raghavan, pp. 335–346.

22 Example: looking for the biggest determinant ► Aₙ ∊ Mₙ({+1, −1}) ► How big can |det(Aₙ)| be? ► This is a famous (and unsolved) problem of Hadamard. ► Fact: |det(Aₙ)| ≤ n^(n/2), a corollary of Hadamard’s determinant theorem.

23 Example: looking for the biggest determinant ► Let’s toss coins! ► (M. Kac) A random n × n matrix Aₙ with independent uniform ±1 entries has E[|det(Aₙ)|²] = n!. ► So there exists an n × n matrix Aₙ with |det(Aₙ)| ≥ (n!)^(1/2). ► However, no one knows how to construct one efficiently.
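A short calculation behind the n! claim (a standard second-moment computation, not reproduced on the slide itself):

```latex
\[
  \mathbb{E}\big[\det(A_n)^2\big]
  \;=\; \sum_{\sigma,\tau \in S_n} \operatorname{sgn}(\sigma)\operatorname{sgn}(\tau)\,
        \mathbb{E}\Big[\prod_{i=1}^{n} a_{i,\sigma(i)}\, a_{i,\tau(i)}\Big]
  \;=\; \sum_{\sigma \in S_n} 1 \;=\; n! ,
\]
% since the entries are independent with mean 0 and a_{ij}^2 = 1: every term
% with sigma != tau contains some entry to the first power and vanishes,
% while each term with sigma = tau contributes exactly 1.
```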

24 ► Corollary

25 Some notes on parallel computation models ► PRAM: a very general model, intended mostly to expose the parallel nature of problems. ► A PRAM consists of a number of RAMs that work synchronously and communicate through a common random-access memory. ► Typically, technical details such as synchronization and communication issues are ignored. ► The most “realistic” PRAM variant is EREW, allowing only exclusive reads and exclusive writes on the shared memory.

26 Some notes on parallel complexity classes ► The main aim of parallel computing is to decrease computation time. The main class of interest is NC = { problems solvable in polylogarithmic time using polynomially many processors }. ► Perhaps the most important question in the theory of parallel computation is: does P = NC? ► It is strongly believed that the answer is negative; however, this question may be as difficult as the P = NP problem.