1/36 Randomized algorithms Instructor: YE, Deshi

2/36 Probability
We define probability in terms of a sample space S, which is a set whose elements are called elementary events. Each elementary event can be viewed as a possible outcome of an experiment. An event is a subset of the sample space S.
Example: flipping two distinguishable coins. Sample space: S = {HH, HT, TH, TT}. Event: the event of obtaining one head and one tail is {HT, TH}. Null event: ∅.
Two events A and B are mutually exclusive if A ∩ B = ∅.
A probability distribution Pr{} on a sample space S is a mapping from events of S to real numbers such that
Pr{A} ≥ 0 for any event A.
Pr{S} = 1.
Pr{A ∪ B} = Pr{A} + Pr{B} for any two mutually exclusive events A and B.
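As a quick sanity check of these definitions, here is a minimal Python sketch (the helper name pr is invented for illustration) that enumerates the two-coin sample space under the uniform distribution and verifies the axioms on the event "one head and one tail":

    from itertools import product

    # Sample space for two distinguishable fair coins: S = {HH, HT, TH, TT}
    S = [''.join(flip) for flip in product('HT', repeat=2)]

    def pr(event):
        # Uniform distribution: Pr{A} = |A| / |S|
        return len(event) / len(S)

    one_head_one_tail = {s for s in S if s.count('H') == 1}   # {HT, TH}
    two_heads = {s for s in S if s == 'HH'}

    print(pr(one_head_one_tail))   # 0.5
    print(pr(set(S)))              # Pr{S} = 1
    # Mutually exclusive events: Pr{A ∪ B} = Pr{A} + Pr{B}
    assert pr(one_head_one_tail | two_heads) == pr(one_head_one_tail) + pr(two_heads)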

3/36 Axioms of probability
Using Ā to denote the event S - A (the complement of A), we have Pr{Ā} = 1 - Pr{A}.
For any two events A and B, Pr{A ∪ B} = Pr{A} + Pr{B} - Pr{A ∩ B} ≤ Pr{A} + Pr{B}.
Discrete probability distributions: A probability distribution is discrete if it is defined over a finite or countably infinite sample space. Let S be the sample space. Then for any event A, Pr{A} = Σ_{s∈A} Pr{s}.
Uniform probability distribution on S: Pr{s} = 1/|S| for every elementary event s.
Continuous uniform probability distribution over [a, b]: for any closed interval [c, d], where a ≤ c ≤ d ≤ b, Pr{[c, d]} = (d - c)/(b - a).

4/36 Probability
Conditional probability of an event A given that another event B occurs is defined to be Pr{A | B} = Pr{A ∩ B} / Pr{B} (whenever Pr{B} ≠ 0).
Two events are independent if Pr{A ∩ B} = Pr{A} Pr{B}.
Bayes's theorem: Pr{A | B} = Pr{A} Pr{B | A} / Pr{B}, where by the law of total probability
Pr{B} = Pr{B ∩ A} + Pr{B ∩ Ā} = Pr{A} Pr{B | A} + Pr{Ā} Pr{B | Ā}.
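To make the formulas concrete, a minimal sketch (events chosen here for illustration) that checks Bayes's theorem and the law of total probability against direct computation on the two-coin sample space:

    # Two fair coins, uniform over S = {HH, HT, TH, TT}
    S = ['HH', 'HT', 'TH', 'TT']
    pr = lambda A: len(A) / len(S)

    A = {s for s in S if s[0] == 'H'}          # first coin is heads
    B = {s for s in S if s.count('H') == 1}    # exactly one head

    cond = lambda X, Y: pr(X & Y) / pr(Y)      # Pr{X | Y}

    # Bayes: Pr{A | B} = Pr{A} Pr{B | A} / Pr{B}
    assert abs(cond(A, B) - pr(A) * cond(B, A) / pr(B)) < 1e-12
    # Law of total probability: Pr{B} = Pr{A}Pr{B|A} + Pr{Ā}Pr{B|Ā}
    A_bar = set(S) - A
    assert abs(pr(B) - (pr(A) * cond(B, A) + pr(A_bar) * cond(B, A_bar))) < 1e-12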

5/36 Discrete random variables
For a random variable X and a real number x, we define the event X = x to be {s ∈ S : X(s) = x}; thus Pr{X = x} = Σ_{s∈S : X(s)=x} Pr{s}.
Probability density function of random variable X: f(x) = Pr{X = x}, with Pr{X = x} ≥ 0 and Σ_x Pr{X = x} = 1.
If X and Y are random variables, their joint probability density function is f(x, y) = Pr{X = x and Y = y}. For a fixed value y, Pr{Y = y} = Σ_x Pr{X = x and Y = y}.

6/36 Expected value of a random variable
Expected value (or, synonymously, expectation or mean) of a discrete random variable X is E[X] = Σ_x x · Pr{X = x}.
Example: Consider a game in which you flip two fair coins. You earn $3 for each head but lose $2 for each tail. The expected value of the random variable X representing your earnings is
E[X] = 6 · Pr{2 H's} + 1 · Pr{1 H, 1 T} - 4 · Pr{2 T's} = 6(1/4) + 1(1/2) - 4(1/4) = 1.
Linearity of expectation: E[X1 + X2 + ... + Xn] = E[X1] + E[X2] + ... + E[Xn], and this holds even when the random variables X1, X2, ..., Xn are not independent.
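As a sanity check, a minimal Monte Carlo sketch of the two-coin game (trial count chosen for illustration); the empirical mean should be close to the computed E[X] = 1:

    import random

    def play():
        # Flip two fair coins; earn $3 per head, lose $2 per tail.
        heads = sum(random.random() < 0.5 for _ in range(2))
        return 3 * heads - 2 * (2 - heads)

    trials = 100_000
    print(sum(play() for _ in range(trials)) / trials)   # ≈ 1.0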

7/36 First success
Waiting for a first success. A coin comes up heads with probability p and tails with probability 1-p. How many independent flips X until the first heads?
Pr[X = j] = (1-p)^(j-1) p (j-1 tails followed by 1 head), so
E[X] = Σ_{j≥1} j · Pr[X = j] = Σ_{j≥1} j (1-p)^(j-1) p = (p/(1-p)) Σ_{j≥1} j (1-p)^j = (p/(1-p)) · (1-p)/p² = 1/p.
Useful property. If X is a 0/1 random variable, E[X] = Pr[X = 1].
Pf. E[X] = 0 · Pr[X = 0] + 1 · Pr[X = 1] = Pr[X = 1].
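A minimal sketch (p chosen arbitrarily) that verifies the waiting-time bound E[X] = 1/p empirically; the same bound is reused later for Johnson's algorithm, the Select analysis, and the contention-resolution protocol:

    import random

    def flips_until_heads(p):
        # Count independent flips until the first heads.
        count = 1
        while random.random() >= p:   # tails, keep flipping
            count += 1
        return count

    p = 0.25
    trials = 100_000
    avg = sum(flips_until_heads(p) for _ in range(trials)) / trials
    print(avg, 1 / p)   # empirical mean ≈ 4.0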

8/36 Variance and standard deviation
The variance of a random variable X with mean E[X] is Var[X] = E[(X - E[X])²] = E[X²] - E²[X].
If n random variables X1, X2, ..., Xn are pairwise independent, then Var[X1 + X2 + ... + Xn] = Var[X1] + Var[X2] + ... + Var[Xn].
The standard deviation of a random variable X is the positive square root of the variance of X.

9/36 Randomization
Randomization. Allow fair coin flip in unit time.
Why randomize? Can lead to the simplest, fastest, or only known algorithm for a particular problem.
Ex. Symmetry-breaking protocols, graph algorithms, quicksort, hashing, load balancing, Monte Carlo integration, cryptography.

10/36 Maximum 3-Satisfiability
MAX-3SAT. Given a 3-SAT formula, find a truth assignment that satisfies as many clauses as possible.
Remark. NP-hard problem.
Simple idea. Flip a coin, and set each variable true with probability ½, independently for each variable.

11/36 Maximum 3-Satisfiability: Analysis
Claim. Given a 3-SAT formula with k clauses, the expected number of clauses satisfied by a random assignment is 7k/8.
Pf. Consider the random variable Zj = 1 if clause Cj is satisfied, and Zj = 0 otherwise. Let Z = Z1 + Z2 + ... + Zk be the number of clauses satisfied by the assignment. By linearity of expectation,
E[Z] = E[Z1] + E[Z2] + ... + E[Zk] = k · (7/8) = 7k/8,
using E[Zj] = 7/8, computed on the next slide.

12/36 E[Zj]
E[Zj] is equal to the probability that Cj is satisfied. Cj is not satisfied only if each of its three variables is assigned the value that fails to make its literal true; since the variables are set independently, the probability of this is (1/2)³ = 1/8. Thus Cj is satisfied with probability 1 - 1/8 = 7/8, and so E[Zj] = 7/8.

13/36 Maximum 3-SAT: Analysis
Q. Can we turn this idea into a 7/8-approximation algorithm? In general, a random variable can almost always be below its mean, so we need a lower bound on the probability of a good assignment.
Lemma. The probability that a random assignment satisfies ≥ 7k/8 clauses is at least 1/(8k).
Pf. Let pj be the probability that exactly j clauses are satisfied; let p be the probability that ≥ 7k/8 clauses are satisfied. Let k′ denote the largest natural number that is strictly smaller than 7k/8. Then
7k/8 = E[Z] = Σ_j j pj = Σ_{j < 7k/8} j pj + Σ_{j ≥ 7k/8} j pj ≤ k′ Σ_{j < 7k/8} pj + k Σ_{j ≥ 7k/8} pj ≤ k′ + k p.

14/36 Analysis con.
Recall k′ is the largest natural number strictly smaller than 7k/8. Since 7k and 8k′ are natural numbers with 8k′ < 7k, we have 7k - 8k′ ≥ 1, so 7k/8 - k′ ≥ 1/8, i.e., k′ ≤ 7k/8 - 1/8.
Substituting into 7k/8 ≤ k′ + k p from the previous slide gives 7k/8 ≤ 7k/8 - 1/8 + k p; rearranging terms yields p ≥ 1/(8k). ▪

15/36 Maximum 3-SAT: Analysis
Johnson's algorithm. Repeatedly generate random truth assignments until one of them satisfies ≥ 7k/8 clauses.
Theorem. Johnson's algorithm is a 7/8-approximation algorithm.
Pf. By the previous lemma, each iteration succeeds with probability at least 1/(8k). By the waiting-time bound, the expected number of trials to find such an assignment is at most 8k. ▪
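A minimal runnable sketch of Johnson's algorithm; the clause representation is an assumption invented for illustration (a clause is a list of signed literals, e.g. +2 for x2 and -2 for its negation):

    import random

    def satisfied(clauses, assignment):
        # Count clauses with at least one true literal.
        return sum(
            any(assignment[abs(l)] == (l > 0) for l in clause)
            for clause in clauses
        )

    def johnson_max3sat(clauses, n_vars):
        # Resample random assignments until >= 7k/8 clauses are satisfied.
        # Expected number of iterations is at most 8k (waiting-time bound).
        k = len(clauses)
        while True:
            # Each variable true with probability 1/2, independently.
            assignment = {v: random.random() < 0.5 for v in range(1, n_vars + 1)}
            if satisfied(clauses, assignment) >= 7 * k / 8:
                return assignment

    # Example: (x1 ∨ x2 ∨ ¬x3) ∧ (¬x1 ∨ x2 ∨ x3)
    clauses = [[1, 2, -3], [-1, 2, 3]]
    print(johnson_max3sat(clauses, 3))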

16/36 Maximum Satisfiability
Extensions. Allow one, two, or more literals per clause. Find a max weighted set of satisfied clauses.
Theorem. [Asano-Williamson 2000] There exists a 0.784-approximation algorithm for MAX-SAT.
Theorem. [Karloff-Zwick 1997, Zwick+computer 2002] There exists a 7/8-approximation algorithm for the version of MAX-3SAT where each clause has at most 3 literals.
Theorem. [Håstad 1997] Unless P = NP, there is no ρ-approximation algorithm for MAX-3SAT (and hence MAX-SAT) for any ρ > 7/8.
So it is very unlikely that the simple randomized algorithm for MAX-3SAT can be improved upon.

17/36 Randomized Divide-and-Conquer

18/36 Finding the Median
We are given a set of n numbers S = {a1, a2, ..., an}. The median is the number that would be in the middle position if we were to sort them.
The median of S is equal to the kth smallest element in S, where k = (n+1)/2 if n is odd, and k = n/2 if n is even.
Remark. O(n log n) time if we simply sort the numbers first.
Question: Can we improve on this?

19/36 Selection problem
Selection problem. Given a set of n numbers S and a number k between 1 and n, return the kth smallest element in S.

Select(S, k):
  choose a splitter ai ∈ S uniformly at random
  foreach (a ∈ S) {
    if (a < ai) put a in S-
    else if (a > ai) put a in S+
  }
  if |S-| = k-1 then
    ai was the desired answer
  else if |S-| ≥ k then
    the kth smallest element lies in S-; recursively call Select(S-, k)
  else suppose |S-| = l < k-1 then
    the kth smallest element lies in S+; recursively call Select(S+, k-1-l)
  endif
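A minimal runnable Python version of the pseudocode above, written (as in the pseudocode) under the assumption that the elements of S are distinct:

    import random

    def select(S, k):
        # Return the kth smallest element of S (1 <= k <= len(S)).
        # Assumes the elements of S are distinct.
        splitter = random.choice(S)
        smaller = [a for a in S if a < splitter]
        larger = [a for a in S if a > splitter]
        if len(smaller) == k - 1:      # splitter is the answer
            return splitter
        elif len(smaller) >= k:        # kth smallest lies in S-
            return select(smaller, k)
        else:                          # kth smallest lies in S+
            return select(larger, k - 1 - len(smaller))

    S = [7, 1, 9, 4, 12, 3, 8]
    print(select(S, (len(S) + 1) // 2))   # median: 7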

20/36 Analysis
Remark. Regardless of how the splitter is chosen, the algorithm above returns the kth smallest element of S.
Choosing a good splitter. A good splitter should produce sets S- and S+ that are approximately equal in size. For example, if we could always choose the median as the splitter, then each iteration would shrink the size of the problem by half. Letting cn bound the work of one call apart from the recursion (choosing the splitter and splitting the set), the running time would satisfy
T(n) ≤ T(n/2) + cn,
hence T(n) = O(n).

21/36 Analysis con.
Funny!! The median is just what we want to find. However, if for some fixed constant b > 0 the size of the set in the recursive call shrinks by a factor of at least (1-b) each time, then the running time T(n) is bounded by the recurrence
T(n) ≤ T((1-b)n) + cn.
Unrolling gives the geometric series T(n) ≤ cn Σ_{j≥0} (1-b)^j ≤ cn/b, so we would still get T(n) = O(n).
A bad choice. If we always chose the minimum element as the splitter, then
T(n) ≤ T(n-1) + cn,
which implies T(n) = O(n²).

22/36 Random Splitters
Here, however, we choose the splitter randomly. How should we analyze the expected running time?
Key idea. We expect the size of the set under consideration to go down by a fixed constant fraction every few iterations, so we get a convergent series and hence a linear bound on the expected running time.

23/36 Analyzing the randomized algorithm
We say that the algorithm is in phase j when the size of the set under consideration is at most n(3/4)^j but greater than n(3/4)^(j+1). So the algorithm passes through phase j-1, running until the set is small enough to enter phase j.
How many calls (iterations) are there in each phase?
Call an element of the set central if at least a quarter of the elements are smaller than it and at least a quarter of the elements are larger than it.

24/36 Analyzing
Observe. If a central element is chosen as a splitter, then at least a quarter of the set will be thrown away, so the set shrinks to at most ¾ of its size.
Moreover, at least half of the elements are central, so the probability that our random choice of splitter produces a central element is ½.
Q: How long until a central element is found in each phase?
A: By the waiting-time bound, the expected number of iterations before a central element is found is 2.
Remark. In phase j the set has size at most n(¾)^j, so one iteration of the algorithm takes at most cn(¾)^j time.

25/36 Analyzing
Let X be a random variable equal to the number of steps taken by the algorithm. We can write it as the sum X = X0 + X1 + X2 + ..., where Xj is the number of steps spent by the algorithm in phase j.
In phase j, the set has size at most n(¾)^j and the expected number of iterations is 2; thus E[Xj] ≤ 2cn(¾)^j.
So, by linearity of expectation, E[X] = Σ_j E[Xj] ≤ Σ_j 2cn(¾)^j = 2cn Σ_j (¾)^j ≤ 8cn.
Theorem. The expected running time of Select(n, k) is O(n).

26 Contention Resolution in a Distributed System
Contention resolution. Given n processes P1, …, Pn, each competing for access to a shared database. If two or more processes access the database simultaneously, all processes are locked out. Devise a protocol to ensure all processes get through on a regular basis.
Restriction. Processes can't communicate.
Challenge. Need a symmetry-breaking paradigm.

27 Contention Resolution: Randomized Protocol
Protocol. Each process requests access to the database at time t with probability p = 1/n.
Claim. Let S[i, t] = the event that process i succeeds in accessing the database at time t. Then 1/(e·n) ≤ Pr[S(i, t)] ≤ 1/(2n).
Pf. Pr[S(i, t)] = p(1-p)^(n-1): process i requests access, and by independence none of the remaining n-1 processes requests access. The value p = 1/n maximizes p(1-p)^(n-1), and with this choice Pr[S(i, t)] = (1/n)(1 - 1/n)^(n-1). ▪
Useful facts from calculus. As n increases from 2, the function (1 - 1/n)^n converges monotonically from 1/4 up to 1/e, and the function (1 - 1/n)^(n-1) converges monotonically from 1/2 down to 1/e. Hence Pr[S(i, t)] lies between 1/(en) and 1/(2n).
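A minimal simulation sketch (round count chosen for illustration) that estimates Pr[S(i, t)] for the protocol and compares it with the bounds 1/(en) and 1/(2n):

    import math
    import random

    def success_rate(n, rounds=200_000):
        # Each of n processes requests with probability 1/n per round.
        # Count rounds in which process 0 is the unique requester.
        hits = 0
        for _ in range(rounds):
            requests = [random.random() < 1 / n for _ in range(n)]
            if requests[0] and sum(requests) == 1:
                hits += 1
        return hits / rounds

    n = 10
    print(success_rate(n))                 # empirical Pr[S(i, t)]
    print(1 / (math.e * n), 1 / (2 * n))   # lower and upper bounds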

28 Contention Resolution: Randomized Protocol
Claim. The probability that process i fails to access the database in ⌈en⌉ rounds is at most 1/e. After ⌈en⌉·⌈c ln n⌉ rounds, the probability is at most n^(-c).
Pf. Let F[i, t] = the event that process i fails to access the database in rounds 1 through t. By independence and the previous claim, Pr[F(i, t)] ≤ (1 - 1/(en))^t.
Choose t = ⌈en⌉: Pr[F(i, t)] ≤ (1 - 1/(en))^⌈en⌉ ≤ (1 - 1/(en))^(en) ≤ 1/e.
Choose t = ⌈en⌉·⌈c ln n⌉: Pr[F(i, t)] ≤ (1/e)^⌈c ln n⌉ ≤ n^(-c). ▪

29 Contention Resolution: Randomized Protocol
Claim. The probability that all processes succeed within 2e·n ln n rounds is at least 1 - 1/n.
Union bound. Given events E1, …, En, Pr[E1 ∪ ... ∪ En] ≤ Σ_i Pr[Ei].
Pf. Let F[t] = the event that at least one of the n processes fails to access the database in any of the rounds 1 through t. By the union bound and the claim on the previous slide, Pr[F[t]] ≤ Σ_i Pr[F(i, t)]. Choosing t = 2⌈en⌉⌈ln n⌉ gives Pr[F(i, t)] ≤ (1/e)^(2 ln n) = n^(-2), so Pr[F[t]] ≤ n · n^(-2) = 1/n. ▪

Global Minimum Cut

31 Global Minimum Cut
Global min cut. Given a connected, undirected graph G = (V, E), find a cut (A, B) of minimum cardinality.
Applications. Partitioning items in a database, identifying clusters of related documents, network reliability, network design, circuit design, TSP solvers.
Network flow solution.
Replace every edge (u, v) with two antiparallel edges (u, v) and (v, u).
Pick some vertex s and compute the min s-v cut separating s from each other vertex v ∈ V.
False intuition. Global min-cut is harder than min s-t cut.

32 Contraction Algorithm
Contraction algorithm. [Karger 1995]
Pick an edge e = (u, v) uniformly at random.
Contract edge e:
– replace u and v by a single new super-node w
– preserve edges, updating the endpoints of u and v to w
– keep parallel edges, but delete self-loops
Repeat until the graph has just two nodes v1 and v2.
Return the cut (all nodes that were contracted to form v1).
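A minimal runnable sketch of the contraction algorithm; the multigraph representation (an edge list with a union-find structure for super-nodes) is a choice made here for illustration:

    import random

    def contract(edges, n_nodes):
        # One run of Karger's contraction algorithm.
        # edges: list of (u, v) pairs; parallel edges allowed.
        # Returns the size of the cut found.
        parent = list(range(n_nodes))

        def find(x):                  # union-find to track super-nodes
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x

        remaining = n_nodes
        while remaining > 2:
            # Rejection sampling: a uniform edge that is not a self-loop
            # is uniform among the surviving edges.
            u, v = random.choice(edges)
            ru, rv = find(u), find(v)
            if ru != rv:              # contract: merge the two super-nodes
                parent[ru] = rv
                remaining -= 1
        # Cut size = number of edges crossing the two super-nodes.
        return sum(1 for u, v in edges if find(u) != find(v))

    edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3), (3, 4), (3, 5), (4, 5)]
    print(contract(edges, 6))   # one trial; may not find the min cut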

33 Contraction Algorithm
Claim. The contraction algorithm returns a min cut with probability ≥ 2/n².
Pf. Consider a global min-cut (A*, B*) of G. Let F* be the set of edges with one endpoint in A* and the other in B*, and let k = |F*| = the size of the min cut.
In the first step, the algorithm contracts an edge in F* with probability k/|E|.
Every node has degree ≥ k, since otherwise (A*, B*) would not be a min-cut; hence |E| ≥ ½kn.
Thus, the algorithm contracts an edge in F* with probability ≤ 2/n.

34 Contraction Algorithm
Claim. The contraction algorithm returns a min cut with probability ≥ 2/n².
Pf. (continued) Let G′ be the graph after j iterations; it has n′ = n-j super-nodes.
Suppose no edge in F* has been contracted yet. Then the min cut in G′ is still k, so |E′| ≥ ½kn′, and the algorithm contracts an edge in F* with probability ≤ 2/n′.
Let Ej = the event that no edge in F* is contracted in iteration j. Then
Pr[E1 ∩ E2 ∩ ... ∩ E_{n-2}] ≥ (1 - 2/n)(1 - 2/(n-1)) ... (1 - 2/3)
= ((n-2)/n)((n-3)/(n-1)) ... (2/4)(1/3)
= 2/(n(n-1)) ≥ 2/n². ▪

35 Contraction Algorithm
Amplification. To amplify the probability of success, run the contraction algorithm many times.
Claim. If we repeat the contraction algorithm n² ln n times with independent random choices, the probability of failing to find the global min-cut is at most 1/n².
Pf. By independence, the probability of failure is at most
(1 - 2/n²)^(n² ln n) = [(1 - 2/n²)^(n²/2)]^(2 ln n) ≤ (1/e)^(2 ln n) = 1/n², using the fact (1 - 1/x)^x ≤ 1/e. ▪
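A short usage sketch of the amplification step, reusing the contract() function from the sketch after slide 32 (trial count follows the claim above):

    import math

    def karger_min_cut(edges, n_nodes):
        # Repeat the contraction algorithm ~ n^2 ln n times; keep the best cut.
        trials = int(n_nodes ** 2 * math.log(n_nodes)) + 1
        return min(contract(edges, n_nodes) for _ in range(trials))

    edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3), (3, 4), (3, 5), (4, 5)]
    print(karger_min_cut(edges, 6))   # min cut, with probability >= 1 - 1/n²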

36 Global Min Cut: Context
Remark. The overall running time is slow, since we perform Θ(n² log n) iterations and each takes Ω(m) time.
Improvement. [Karger-Stein 1996] O(n² log³ n).
Early iterations are less risky than later ones: the probability of contracting an edge in the min cut hits 50% when about n/√2 nodes remain.
Run the contraction algorithm until n/√2 nodes remain.
Run the contraction algorithm twice on the resulting graph, and return the best of the two cuts.
Extensions. Naturally generalizes to handle positive edge weights.
Best known. [Karger 2000] O(m log³ n), faster than the best known max-flow algorithm or deterministic global min-cut algorithm.