1 CHAPTER SIX: THE PROBABILISTIC METHOD (M1 Zhang Cong, 2011/Nov/28)

2 6.1. THE BASIC COUNTING ARGUMENT
S: a probability space. $K_n$: the complete graph on n vertices. $K_k$: a complete subgraph of $K_n$ with k vertices.
Theorem 6.1: If $\binom{n}{k} 2^{-\binom{k}{2}+1} < 1$, then it is possible to color the edges of $K_n$ with two colors so that it has no monochromatic $K_k$ subgraph.
Proof: Define a sample space consisting of all possible colorings of the edges of $K_n$ using two colors. There are $2^{\binom{n}{2}}$ possible colorings, and the

3 probability of choosing each coloring in our probability space is $2^{-\binom{n}{2}}$. For $i = 1, \dots, \binom{n}{k}$, let $A_i$ be the event that clique $i$ is monochromatic. Once the first edge in clique $i$ is colored, the remaining $\binom{k}{2} - 1$ edges must all be given the same color, so $\Pr(A_i) = 2^{-\binom{k}{2}+1}$. By using a union bound we get $\Pr\bigl(\bigcup_i A_i\bigr) \le \sum_i \Pr(A_i) = \binom{n}{k} 2^{-\binom{k}{2}+1} < 1$. Hence $\Pr\bigl(\bigcap_i \bar{A}_i\bigr) = 1 - \Pr\bigl(\bigcup_i A_i\bigr) > 0$. ■

4 Consider whether the edges of $K_{1000}$ can be 2-colored in such a way that there is no monochromatic $K_{20}$. To simplify the calculations, suppose $n \le 2^{k/2}$ and $k \ge 3$. Then $\binom{n}{k} 2^{-\binom{k}{2}+1} \le \frac{n^k}{k!} \, 2^{-k(k-1)/2+1} \le \frac{2^{k^2/2}}{k!} \cdot \frac{2^{k/2+1}}{2^{k^2/2}} = \frac{2^{k/2+1}}{k!} < 1$. Since $n = 1000 \le 2^{10} = 2^{k/2}$ for $k = 20$, by Theorem 6.1 there exists a 2-coloring of the edges of $K_{1000}$ with no monochromatic $K_{20}$.
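As an aside (not part of the original slides), the condition of Theorem 6.1 can be checked in exact integer arithmetic, since $\binom{n}{k} 2^{-\binom{k}{2}+1} < 1$ is equivalent to $2\binom{n}{k} < 2^{\binom{k}{2}}$. A minimal Python sketch:

```python
from math import comb

def no_mono_clique_guaranteed(n, k):
    # Theorem 6.1 condition: C(n,k) * 2^(1 - C(k,2)) < 1,
    # rewritten as 2 * C(n,k) < 2^C(k,2) to stay in exact integers.
    return 2 * comb(n, k) < 2 ** comb(k, 2)

print(no_mono_clique_guaranteed(1000, 20))  # True: K_1000 can avoid K_20
print(no_mono_clique_guaranteed(1000, 5))   # False: the bound gives no guarantee
```

Note that a `False` answer only means the counting bound is inconclusive, not that every 2-coloring contains a monochromatic clique.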

5 6.2. THE EXPECTATION ARGUMENT
Lemma 6.2: Suppose we have a probability space S and a random variable X defined on S such that E[X] = μ. Then Pr(X ≥ μ) > 0 and Pr(X ≤ μ) > 0.
Proof: We have $\mu = E[X] = \sum_x x \Pr(X = x)$. If $\Pr(X \ge \mu) = 0$, then $\mu = \sum_x x \Pr(X = x) = \sum_{x < \mu} x \Pr(X = x) < \mu \sum_{x < \mu} \Pr(X = x) = \mu$, giving a contradiction. Similarly, if $\Pr(X \le \mu) = 0$ then

6 $\mu = \sum_x x \Pr(X = x) = \sum_{x > \mu} x \Pr(X = x) > \mu \sum_{x > \mu} \Pr(X = x) = \mu$, again yielding a contradiction. ■
6.2.1. Application: Finding a Large Cut
We consider the problem of finding a large cut in an undirected graph, where all edges have the same weight 1. The problem of finding a maximum cut is NP-hard.
Cut: a partition of the vertices into two disjoint sets.
Value of a cut: the total weight of the edges crossing from one side of the partition to the other.

7 Theorem 6.3: Given an undirected graph G with n vertices and m edges, there is a partition of V into two disjoint sets A and B such that at least m/2 edges connect a vertex in A to a vertex in B. That is, there is a cut with value at least m/2.
Proof: Place each vertex independently and uniformly at random in A or B, and let $e_1, \dots, e_m$ be an arbitrary enumeration of the edges of G. For $i = 1, \dots, m$, define $X_i = 1$ if edge $e_i$ connects A to B, and $X_i = 0$ otherwise. The probability that edge $e_i$ connects a vertex in A to a vertex in B is 1/2, and thus $E[X_i] = 1/2$.

8 Let C(A, B) be a random variable denoting the value of the cut corresponding to the sets A and B. Then $E[C(A,B)] = E\bigl[\sum_{i=1}^m X_i\bigr] = \sum_{i=1}^m E[X_i] = \frac{m}{2}$. Since the expectation of the random variable C(A, B) is m/2, there exists a partition A and B with at least m/2 edges connecting the set A to the set B. ■
The expectation argument does not give a lower bound on the probability that a random partition has a cut of value at least m/2. To derive such a bound, let $p = \Pr\bigl(C(A,B) \ge \frac{m}{2}\bigr)$,
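As an illustrative sketch (not from the original slides), the random-partition argument of Theorem 6.3 can be simulated directly; averaging the cut value over many random partitions should approach m/2:

```python
import random

def random_cut_value(n, edges):
    # Place each vertex independently and uniformly in side 0 or side 1.
    side = [random.randrange(2) for _ in range(n)]
    # The cut value is the number of edges whose endpoints differ.
    return sum(1 for u, v in edges if side[u] != side[v])

# Sanity check on a 4-cycle (m = 4): the average over many random
# partitions should be close to m/2 = 2.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
trials = 10_000
avg = sum(random_cut_value(4, edges) for _ in range(trials)) / trials
print(avg)  # close to 2.0
```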

9 and since C(A, B) ≤ m,
$\frac{m}{2} = E[C(A,B)] = \sum_{i \le m/2 - 1} i \Pr(C(A,B) = i) + \sum_{i \ge m/2} i \Pr(C(A,B) = i) \le (1 - p)\bigl(\frac{m}{2} - 1\bigr) + pm$,
which implies that $p \ge \frac{1}{m/2 + 1}$.
6.2.2. Application: Maximum Satisfiability
In a logical formula, a literal is either a Boolean variable or the negation of a Boolean variable. We use $\bar{x}$ to denote the negation of the variable x.

10 A satisfiability (SAT) problem, or a SAT formula, is a logical expression that is the conjunction (AND) of a set of clauses, where each clause is the disjunction (OR) of literals. In general, determining if a SAT formula has a satisfying assignment is NP-hard.
Theorem 6.4: Given a set of m clauses, let $k_i$ be the number of literals in the i-th clause for $i = 1, \dots, m$. Let $k = \min_i k_i$. Then there is a truth assignment that satisfies at least $\sum_{i=1}^m (1 - 2^{-k_i}) \ge m(1 - 2^{-k})$ clauses.

11 An example SAT formula: $(x_1 \lor x_2 \lor x_3) \land (x_1 \lor x_3) \land (x_1 \lor x_2 \lor x_4) \land (x_4 \lor x_3) \land (x_4 \lor x_1)$. In this case, m = 5 and $k_1 = 3$, $k_2 = 2$, $k_3 = 3$, $k_4 = 2$, $k_5 = 2$. The assignment $x_1 = 1$, $x_2 = 0$, $x_3 = 0$, $x_4 = 1$ satisfies the formula.

12 Proof: Assign values independently and uniformly at random to the variables. The i-th clause, with $k_i$ literals, fails only when all $k_i$ of its literals are false, so it is satisfied with probability at least $1 - 2^{-k_i}$. The expected number of satisfied clauses is therefore at least $\sum_{i=1}^m (1 - 2^{-k_i}) \ge m(1 - 2^{-k})$, and by Lemma 6.2 there must be an assignment that satisfies at least that many clauses. ■
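The existence argument above is nonconstructive, but repeated random sampling finds an assignment meeting the expectation with high probability. A minimal Python sketch (the clause encoding and the example formula are my own illustrations, not from the slides):

```python
import random

def satisfied_count(clauses, assignment):
    # A clause is a list of (variable index, is_positive) literals;
    # a clause is satisfied if any literal evaluates to True.
    return sum(any(assignment[v] == pos for v, pos in clause)
               for clause in clauses)

def best_random_assignment(clauses, num_vars, trials=200):
    # By Lemma 6.2 some assignment meets the expectation
    # sum_i (1 - 2^(-k_i)); sampling and keeping the best finds
    # one with high probability.
    best = -1
    for _ in range(trials):
        a = [random.random() < 0.5 for _ in range(num_vars)]
        best = max(best, satisfied_count(clauses, a))
    return best

# Hypothetical formula: (x0 v x1) AND (not x0) AND (not x1 v x2).
clauses = [[(0, True), (1, True)], [(0, False)], [(1, False), (2, True)]]
# Expected satisfied clauses: (1 - 1/4) + (1 - 1/2) + (1 - 1/4) = 2,
# so some assignment satisfies at least 2 of the 3 clauses.
print(best_random_assignment(clauses, 3))
```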

13 6.3. DERANDOMIZATION USING CONDITIONAL EXPECTATIONS
Recall that we find a partition of the n vertices into sets A and B by placing each vertex independently and uniformly at random in one of the two sets. This gives a cut with expected value E[C(A,B)] ≥ m/2. Now consider placing the vertices deterministically, one at a time, in an arbitrary order $v_1, v_2, \dots, v_n$. Let $x_i$ be the set where $v_i$ is placed (so $x_i$ is either A or B). Suppose that we have placed the first k vertices, and consider the expected value of the cut if the remaining vertices are then placed independently and uniformly at random. We write this quantity as $E[C(A,B) \mid x_1, x_2, \dots, x_k]$.

14 We show inductively how to place the next vertex so that $E[C(A,B) \mid x_1, x_2, \dots, x_k] \le E[C(A,B) \mid x_1, x_2, \dots, x_{k+1}]$. It follows that $E[C(A,B)] \le E[C(A,B) \mid x_1, x_2, \dots, x_n]$. The right-hand side is the value of the cut determined by our placement algorithm, since once $x_1, x_2, \dots, x_n$ are all determined we have a cut of the graph. Hence our algorithm returns a cut whose value is at least E[C(A,B)] ≥ m/2. The base case of the induction is $E[C(A,B) \mid x_1] = E[C(A,B)]$, which holds by symmetry: the expected cut value does not depend on which set the first vertex occupies. We now prove the inductive step.

15 Consider placing $v_{k+1}$ randomly, and let $Y_{k+1}$ be a random variable representing the set where it is placed. Then
$E[C(A,B) \mid x_1, \dots, x_k] = \frac{1}{2} E[C(A,B) \mid x_1, \dots, x_k, Y_{k+1} = A] + \frac{1}{2} E[C(A,B) \mid x_1, \dots, x_k, Y_{k+1} = B]$.
Since $\max\bigl(E[C(A,B) \mid x_1, \dots, x_k, Y_{k+1} = A],\; E[C(A,B) \mid x_1, \dots, x_k, Y_{k+1} = B]\bigr) \ge E[C(A,B) \mid x_1, \dots, x_k]$, we need only compare the two conditional expectations and put $v_{k+1}$ in the set with the larger one. Once we do this, we have a placement satisfying $E[C(A,B) \mid x_1, \dots, x_k] \le E[C(A,B) \mid x_1, \dots, x_{k+1}]$.
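The comparison of the two conditional expectations simplifies: edges between already-placed vertices are decided, and every edge with an unplaced endpoint crosses the cut with probability 1/2 either way, so placing $v_{k+1}$ opposite the majority of its already-placed neighbors maximizes the conditional expectation. A sketch of the resulting deterministic algorithm (not code from the slides):

```python
def greedy_cut(n, edges):
    # Derandomized max-cut via conditional expectations: place each
    # vertex on the side opposite the majority of its placed neighbors.
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    side = {}
    for v in range(n):  # place vertices one at a time, in index order
        in_a = sum(1 for u in adj[v] if side.get(u) == 'A')
        in_b = sum(1 for u in adj[v] if side.get(u) == 'B')
        side[v] = 'B' if in_a >= in_b else 'A'
    return sum(1 for u, v in edges if side[u] != side[v])

# Guaranteed to return a cut of value at least m/2; on a 4-cycle it
# finds the maximum cut of 4, and on a triangle a cut of 2 >= 3/2.
print(greedy_cut(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))  # 4
```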

16 6.4. SAMPLE AND MODIFY
The sample-and-modify technique:
1. Construct a random structure that does not necessarily have the required property.
2. Modify the random structure so that it does have the required property.
6.4.1. Application: Independent Sets
An independent set in a graph G is a set of vertices with no edges between them. Finding the largest independent set in a graph is an NP-hard problem.

17 Theorem 6.5: Let G = (V, E) be a graph on n vertices with m ≥ n/2 edges. Then G has an independent set with at least $n^2/4m$ vertices.
Proof: Let $d = 2m/n \ge 1$ be the average degree of the vertices in G. Consider the following randomized algorithm.
1. Delete each vertex of G (together with its incident edges) independently with probability $1 - 1/d$.
2. For each remaining edge, remove it and one of its adjacent vertices.
Let X be the number of vertices that survive the first step of the algorithm. We have

18 $E[X] = \frac{n}{d}$. Let Y be the number of edges that survive the first step. An edge survives only if both its endpoints survive, so $E[Y] = m \cdot \frac{1}{d^2} = \frac{nd}{2} \cdot \frac{1}{d^2} = \frac{n}{2d}$. And $E[X - Y] = \frac{n}{d} - \frac{n}{2d} = \frac{n}{2d}$. The expected size of the independent set generated by the algorithm is therefore at least n/2d, so the graph has an independent set with at least $n/2d = n^2/4m$ vertices. ■
6.4.2. Application: Graphs with Large Girth
The girth of a graph is the length of its smallest cycle.
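The two-step algorithm in the proof of Theorem 6.5 translates directly into code. A minimal Python sketch (my own illustration, assuming average degree $d \ge 1$); the returned set is independent by construction, since step 2 deletes an endpoint of every surviving edge:

```python
import random

def random_independent_set(n, edges):
    d = 2 * len(edges) / n  # average degree; assumed >= 1
    # Step 1: keep each vertex independently with probability 1/d
    # (i.e. delete it with probability 1 - 1/d).
    kept = {v for v in range(n) if random.random() < 1 / d}
    # Step 2: for each edge with both endpoints surviving, remove the
    # edge together with one of its endpoints.
    for u, v in edges:
        if u in kept and v in kept:
            kept.discard(v)
    return kept

# Example on the complete graph K_6 (n = 6, m = 15, d = 5): the
# expected size of the result is n/2d = 0.6, and the theorem promises
# an independent set of size n^2/4m = 0.6 somewhere in the graph.
edges = [(u, v) for u in range(6) for v in range(u + 1, 6)]
s = random_independent_set(6, edges)
```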

19 Theorem 6.6: For any integer k ≥ 3, for sufficiently large n there is a graph with n nodes, at least $\frac{1}{4} n^{1+1/k}$ edges, and girth at least k.
Proof: Sample a random graph $G \in G_{n,p}$ with $p = n^{1/k - 1}$. Let X be the number of edges in the graph. Then $E[X] = p \binom{n}{2} = \frac{1}{2}\bigl(1 - \frac{1}{n}\bigr) n^{1+1/k}$. Let Y be the number of cycles in the graph of length at most k − 1. Any specific possible cycle of length i, where 3 ≤ i ≤ k − 1, occurs with probability $p^i$. Also, there are $\binom{n}{i} \frac{(i-1)!}{2} \le n^i$ possible cycles of length i. Hence $E[Y] = \sum_{i=3}^{k-1} \binom{n}{i} \frac{(i-1)!}{2} p^i \le \sum_{i=3}^{k-1} n^i p^i = \sum_{i=3}^{k-1} n^{i/k} < k n^{(k-1)/k}$.

20 Now modify G by removing one edge from each cycle of length at most k − 1; the resulting graph has girth at least k. When n is sufficiently large, its expected number of edges is $E[X - Y] \ge \frac{1}{2}\bigl(1 - \frac{1}{n}\bigr) n^{1+1/k} - k n^{(k-1)/k} \ge \frac{1}{4} n^{1+1/k}$. Hence there exists a graph with at least $\frac{1}{4} n^{1+1/k}$ edges and girth at least k. ■

21 6.5. THE SECOND MOMENT METHOD
Theorem 6.7: If X is a nonnegative integer-valued random variable, then $\Pr(X = 0) \le \frac{Var[X]}{(E[X])^2}$.
Proof: $\Pr(X = 0) \le \Pr(|X - E[X]| \ge E[X]) \le \frac{Var[X]}{(E[X])^2}$, where the last step is Chebyshev's inequality. ■
6.5.1. Application: Threshold Behavior in Random Graphs
Theorem 6.8: In $G_{n,p}$, suppose that p = f(n), where $f(n) = o(n^{-2/3})$. Then, for any ε > 0 and for sufficiently large n,

22 the probability that a random graph chosen from $G_{n,p}$ has a clique of four or more vertices is less than ε. Similarly, if $f(n) = \omega(n^{-2/3})$ then, for sufficiently large n, the probability that a random graph chosen from $G_{n,p}$ does not have a clique with four or more vertices is less than ε.
Proof: Case 1: p = f(n) and $f(n) = o(n^{-2/3})$. Let $C_1, \dots, C_{\binom{n}{4}}$ be an enumeration of all the subsets of four vertices in G. Let $X_i = 1$ if $C_i$ is a 4-clique and $X_i = 0$ otherwise, and let $X = \sum_{i=1}^{\binom{n}{4}} X_i$,

23 so that $E[X] = \binom{n}{4} p^6$, since a fixed set of four vertices forms a clique exactly when all six edges among them are present. In this case $E[X] = o(1)$, because $\binom{n}{4} = \Theta(n^4)$ while $p^6 = o(n^{-4})$; this means that E[X] < ε for sufficiently large n. Since X is a nonnegative integer-valued random variable, it follows from Markov's inequality that $\Pr(X \ge 1) \le E[X] < \varepsilon$. Hence, the probability that a random graph chosen from $G_{n,p}$ has a clique of four or more vertices is less than ε.
Case 2: p = f(n) and $f(n) = \omega(n^{-2/3})$. In this case, E[X] → ∞ as n grows large. We can use Theorem 6.7 to prove that $\Pr(X = 0) = o(1)$.
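To make the threshold concrete, one can evaluate $E[X] = \binom{n}{4} p^6$ on either side of $p = n^{-2/3}$; a short Python check (the specific exponents −0.7 and −0.6 are my own illustrative choices):

```python
from math import comb

def expected_four_cliques(n, p):
    # E[X] = C(n,4) * p^6: number of 4-subsets times the probability
    # that all 6 edges among a fixed 4-subset are present.
    return comb(n, 4) * p ** 6

# Below the threshold (p = n^-0.7 = o(n^-2/3)) the expectation
# vanishes; above it (p = n^-0.6 = omega(n^-2/3)) it grows.
n = 10 ** 6
below = expected_four_cliques(n, n ** -0.7)
above = expected_four_cliques(n, n ** -0.6)
print(below, above)  # below is tiny, above exceeds 1
```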

24 Lemma 6.9: Let $Y_i$, $i = 1, \dots, m$, be 0–1 random variables, and let $Y = \sum_{i=1}^m Y_i$. Then $Var[Y] \le E[Y] + \sum_{i \ne j} Cov(Y_i, Y_j)$.
Proof: For any sequence of random variables $Y_1, \dots, Y_m$, $Var\bigl[\sum_{i=1}^m Y_i\bigr] = \sum_{i=1}^m Var[Y_i] + \sum_{i \ne j} Cov(Y_i, Y_j)$. This is the generalization of Theorem 3.2 to m variables. When $Y_i$ is a 0–1 random variable, $E[Y_i^2] = E[Y_i]$, so $Var[Y_i] = E[Y_i^2] - (E[Y_i])^2 \le E[Y_i]$, giving the lemma. □
