
Presentation on theme: "1. content basic principles of the probabilistic method maximum satisfiability (MAX-SAT) problem. expanding graphs problem of oblivious routing Lovasz."— Presentation transcript:


2 Contents
basic principles of the probabilistic method
the maximum satisfiability (MAX-SAT) problem
expanding graphs
the problem of oblivious routing
the Lovász Local Lemma
the method of conditional probabilities

3 Two principles
1) Any random variable assumes at least one value that is no smaller than its expectation, and at least one value that is no greater than its expectation. Many intuitive versions of this principle occur in real life: for instance, if we are told that the average annual income of theoretical computer scientists is $20,000, we know that there is at least one theoretical computer scientist whose income is $20,000 or greater.
2) If an object chosen at random from a universe satisfies a property with positive probability, then there must be an object in the universe that satisfies that property. For instance, if a ball chosen at random from a bin is red with probability 1/3, then the bin contains at least one red ball.

4 Example 1
Theorem 1.2: The expected size of the autopartition produced by RandAuto is O(n log n).
Example 5.1: For any set of n disjoint line segments in the plane, there is always an autopartition of size O(n log n). This follows directly from the first principle: if we were to run the RandAuto algorithm, the random variable defined as the size of the resulting autopartition must assume some value no greater than its expectation; thus an autopartition of size O(n log n) exists on every instance.

5 Example 2
Theorem 2.1: Given any instance of T_{2k}, the expected number of steps taken by the randomized game tree evaluation algorithm is at most 3^k. Since n = 4^k, the expected running time of the randomized algorithm is n^{log_4 3}, which we bound by n^{0.793}.
Example 5.2: Any algorithm for game tree evaluation that produces the correct answer on every instance develops a certificate of correctness: for each instance, it can exhibit a set of leaves whose values together guarantee that the value it declares is the correct answer. By Theorem 2.1, the expected number of leaves read on any instance of T_{2k} is at most n^{0.793}, where n = 2^{2k}. It follows that on any instance of T_{2k} there is a set of n^{0.793} leaves whose values certify the value of the root for that instance. Note that we assert the existence of such a certificate with certainty, even though the technique used to establish it was probabilistic.
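The exponent comes from a one-line change of base:

```latex
3^k \;=\; \left(4^k\right)^{\log_4 3} \;=\; n^{\log_4 3},
\qquad
\log_4 3 \;=\; \frac{\ln 3}{\ln 4} \;\approx\; 0.7925 \;<\; 0.793 .
```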

6 Example 3
Example 5.3: We saw that for every n × n 0-1 matrix A and a randomly chosen vector b ∈ {-1, +1}^n, we have ||Ab||_∞ ≤ 4√(n ln n) with probability at least 1 − 2/n. From this we may conclude that for every such matrix A there always exists a vector b ∈ {-1, +1}^n such that ||Ab||_∞ ≤ 4√(n ln n).
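A quick numerical illustration of this set-balancing bound (a sketch; the dimension and seed are arbitrary choices, not from the text):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
A = rng.integers(0, 2, size=(n, n))      # an arbitrary n x n 0-1 matrix
b = rng.choice([-1, 1], size=n)          # uniformly random sign vector
bound = 4 * np.sqrt(n * np.log(n))
# Over the random choice of b, the bound holds with probability >= 1 - 2/n.
print(f"||Ab||_inf = {np.abs(A @ b).max()}, bound = {bound:.1f}")
```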

7 Example 4
Finding a large cut in a graph: Given an undirected graph G(V,E) with n vertices and m edges, we wish to partition the vertices of G into two sets A and B so as to maximize the number of edges (u,v) such that u ∈ A and v ∈ B. The problem of finding an optimal max-cut is NP-hard.
Theorem 5.1: For any undirected graph G(V,E) with n vertices and m edges, there is a partition of the vertex set V into two sets A and B such that |{(u,v) ∈ E | u ∈ A and v ∈ B}| ≥ m/2.
Proof: Consider the following experiment. Each vertex of G is independently and equiprobably assigned to either A or B. For an edge (u,v), the probability that its endpoints are in different sets is 1/2. By linearity of expectation, the expected number of edges with endpoints in different sets is thus m/2. It follows that there must be a partition satisfying the theorem.
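The proof's experiment is itself a one-line randomized algorithm; here is a minimal sketch, with our own graph encoding (vertices 0..n−1, edges as pairs):

```python
import random

def random_cut(n, edges):
    """Assign each vertex to A or B independently and equiprobably;
    in expectation at least half the edges cross the cut."""
    side = [random.random() < 0.5 for _ in range(n)]   # True -> A, False -> B
    cut_size = sum(1 for u, v in edges if side[u] != side[v])
    return side, cut_size

# Example: a triangle has m = 3, so some partition cuts >= 2 >= m/2 edges.
print(random_cut(3, [(0, 1), (1, 2), (0, 2)]))
```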

8
Question: Having shown the existence of a combinatorial object using the probabilistic method, can we find the object efficiently?
Answer — one of several situations may hold:
1) we have a deterministic polynomial-time algorithm that finds the combinatorial object whose existence is guaranteed by the probabilistic method;
2) we have a randomized polynomial-time algorithm that finds it with high probability;
3) we have a deterministic or randomized algorithm, but it is non-uniform;
4) there are instances where we know of no efficient algorithm for finding the object in question.

9 5.2. Maximum Satisfiability
Maximum satisfiability: rather than deciding whether there is an assignment that satisfies all the clauses, we instead seek an assignment that maximizes the number of satisfied clauses. This problem, called the MAX-SAT problem, is known to be NP-hard.
Note: We may assume without loss of generality that no clause contains both a literal and its complement, since such clauses are satisfied by any truth assignment. Also, m/2 is the best possible universal guarantee, since an instance may consist of the two clauses x and ¬x, in which case no assignment satisfies more than half the clauses.

10 Max SAT
Theorem 5.2: For any set of m clauses, there is a truth assignment for the variables that satisfies at least m/2 clauses.
Proof: Set each variable to true or false independently and equiprobably. For 1 ≤ i ≤ m, let Z_i = 1 if the i-th clause is satisfied and 0 otherwise. A clause containing k literals is not satisfied by this random assignment with probability 2^{-k}, so it is satisfied with probability 1 − 2^{-k} ≥ 1/2, implying that E[Z_i] ≥ 1/2 for all i. The expected number of clauses satisfied by this random assignment is ∑_{i=1}^{m} E[Z_i] ≥ m/2. Thus there exists at least one assignment of values to the variables that satisfies at least m/2 clauses.
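A direct implementation of the random assignment in the proof (a sketch; the clause encoding — +i for x_i, −i for its complement — is our own convention):

```python
import random

def random_assignment_maxsat(n_vars, clauses):
    """Set each variable true/false independently and equiprobably;
    the expected number of satisfied clauses is at least m/2."""
    assignment = [random.random() < 0.5 for _ in range(n_vars)]
    satisfied = sum(
        any((lit > 0) == assignment[abs(lit) - 1] for lit in clause)
        for clause in clauses)
    return assignment, satisfied

# Example: (x1 or x2) and (not x1) and (not x2 or x3)
print(random_assignment_maxsat(3, [[1, 2], [-1], [-2, 3]]))
```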

11 Max SAT
Given an instance I, let m*(I) be the maximum number of clauses that can be satisfied, and m_A(I) the number of clauses satisfied by an algorithm A. The performance ratio of A is m_A(I)/m*(I); if A achieves a performance ratio of α, we call it an α-approximation algorithm. For a randomized algorithm A, we replace m_A(I) by E[m_A(I)].
Theorem 5.2 yields an algorithm whose performance guarantee is 1 − 2^{-k} when every clause contains at least k literals. It follows that we have a randomized 3/4-approximation algorithm for instances of MAX-SAT in which every clause has at least two literals. How can we obtain a randomized 3/4-approximation when clauses with a single literal are allowed?

12 Second Algorithm
We formulate the problem as an integer linear program, solve its linear programming relaxation, and then use the randomized rounding technique. For each clause C_j in the instance, we associate an indicator variable z_j ∈ {0,1} indicating whether or not that clause is satisfied. For each variable x_i, we use an indicator variable y_i for the value assumed by that variable: y_i = 1 if x_i is set true, and y_i = 0 otherwise. Let C_j^+ be the set of indices of variables that appear in uncomplemented form in clause C_j, and C_j^- the set of indices of variables that appear in complemented form in C_j.
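With these indicator variables, the integer program takes the following standard form (the relaxation then lets y_i and z_j range over [0,1]):

```latex
\text{maximize } \sum_{j} z_j
\quad\text{subject to}\quad
\sum_{i \in C_j^+} y_i \;+\; \sum_{i \in C_j^-} (1 - y_i) \;\ge\; z_j
\quad\text{for every clause } C_j,
\qquad y_i,\, z_j \in \{0, 1\}.
```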

13 Formulating the MAX-SAT problem
In the relaxation we allow y_i and z_j to assume real values in the interval [0,1]. Let ŷ_i be the value obtained for variable y_i by solving this linear program, and let ž_j be the value obtained for z_j. Randomized rounding then sets each x_i true with probability ŷ_i, independently. We will show two things:
First: the expected number of clauses satisfied by randomized rounding is at least (1 − 1/e) ∑_j ž_j.
Second: the expected number of clauses satisfied by the better of the two algorithms (the random assignment of Theorem 5.2 and randomized rounding) is at least (3/4) ∑_j ž_j. Since the relaxation's optimum is at least the integer optimum, ∑_j ž_j ≥ m*(I), which yields the 3/4-approximation.
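A compact sketch of the LP-plus-rounding pipeline, assuming scipy is available; the clause encoding and function name are our own, and this is illustrative rather than the source's code:

```python
import numpy as np
from scipy.optimize import linprog

def lp_rounding_maxsat(n_vars, clauses, rng=None):
    """MAX-SAT via LP relaxation + randomized rounding (a sketch).

    A clause is a list of nonzero ints: +i means x_i appears
    uncomplemented, -i means complemented (variables 1..n_vars).
    """
    rng = np.random.default_rng() if rng is None else rng
    m = len(clauses)
    # LP variables: y_1..y_n then z_1..z_m; maximize sum(z) = minimize -sum(z).
    c = np.concatenate([np.zeros(n_vars), -np.ones(m)])
    A_ub, b_ub = [], []
    for j, clause in enumerate(clauses):
        # sum_{Cj+} y_i + sum_{Cj-} (1 - y_i) >= z_j, rewritten for linprog as
        # -sum_{Cj+} y_i + sum_{Cj-} y_i + z_j <= |Cj-|.
        row = np.zeros(n_vars + m)
        for lit in clause:
            row[abs(lit) - 1] = -1.0 if lit > 0 else 1.0
        row[n_vars + j] = 1.0
        A_ub.append(row)
        b_ub.append(float(sum(1 for lit in clause if lit < 0)))
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(0.0, 1.0)] * (n_vars + m), method="highs")
    y_hat = res.x[:n_vars]
    # Randomized rounding: set x_i true with probability y_hat[i].
    assignment = rng.random(n_vars) < y_hat
    satisfied = sum(any((lit > 0) == assignment[abs(lit) - 1] for lit in clause)
                    for clause in clauses)
    return assignment, satisfied
```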

14 Max SAT
Lemma 5.3: Let C_j be a clause with k literals. The probability that it is satisfied by randomized rounding is at least β_k ž_j, where β_k = 1 − (1 − 1/k)^k.
Proof: Since we are focusing on a single clause C_j, we may assume without loss of generality that all the variables contained in it appear in uncomplemented form, so the LP constraint reads ∑_{i ∈ C_j} ŷ_i ≥ ž_j. Clause C_j remains unsatisfied by randomized rounding only if every y_i is rounded to 0, which occurs with probability ∏_{i ∈ C_j} (1 − ŷ_i). By the arithmetic-geometric mean inequality this product is largest, subject to the constraint, when ŷ_i = ž_j/k for all i, so the probability that C_j is satisfied is at least 1 − (1 − ž_j/k)^k.
It remains to show that 1 − (1 − z/k)^k ≥ β_k z for all positive integers k and 0 ≤ z ≤ 1. The function f(z) = 1 − (1 − z/k)^k is concave, so to show that it is never below a linear function g(z) over the interval [0,1], it suffices to verify the inequality at the endpoints z = 0 and z = 1. Applying this to g(z) = β_k z (both sides are 0 at z = 0 and β_k at z = 1), the lemma follows.
Note: β_k = 1 − (1 − 1/k)^k ≥ 1 − 1/e for every k, so the expected number of clauses satisfied by randomized rounding is at least (1 − 1/e) ∑_j ž_j.

15 Comparing two algorithms
Let n_1 denote the expected number of clauses satisfied when each variable is independently and equiprobably set to true, and n_2 the expected number of clauses satisfied when we use linear programming followed by randomized rounding. The better of the two solutions satisfies max(n_1, n_2) ≥ (n_1 + n_2)/2 clauses in expectation, and this average is at least (3/4) ∑_j ž_j, as sketched below.
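Slide 16 of the original deck carried this argument only as an image; the following is a reconstruction of the standard derivation, grouping clauses by their number of literals (S^k denotes the set of clauses with exactly k literals, and ž_j ≤ 1):

```latex
n_1 \;=\; \sum_{k} \sum_{C_j \in S^k} \left(1 - 2^{-k}\right)
    \;\ge\; \sum_{k} \sum_{C_j \in S^k} \left(1 - 2^{-k}\right)\check{z}_j,
\qquad
n_2 \;\ge\; \sum_{k} \sum_{C_j \in S^k} \beta_k\, \check{z}_j .
% Since (1 - 2^{-k}) + \beta_k >= 3/2 for every k >= 1
% (k = 1: 1/2 + 1;  k = 2: 3/4 + 3/4;  k -> infinity: 1 + 1 - 1/e),
\frac{n_1 + n_2}{2}
    \;\ge\; \sum_{k} \sum_{C_j \in S^k}
      \frac{\left(1 - 2^{-k}\right) + \beta_k}{2}\, \check{z}_j
    \;\ge\; \frac{3}{4} \sum_{j} \check{z}_j .
```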


17 5.3. Expanding Graphs
An expanding graph is a graph in which the number of neighbors of any set of vertices S is larger than some positive constant multiple of |S|.
Definition 5.1: An (n,d,α,c) OR-concentrator is a bipartite multigraph G(L,R,E), with the independent sets of vertices L and R each of cardinality n, such that:
1. Every vertex in L has degree at most d.
2. Any subset S of vertices of L with |S| ≤ αn has at least c|S| neighbors in R.
It is desirable to have d as small as possible and c as large as possible.

18 Expanding Graphs
Theorem 5.6: There is an integer n_0 such that for all n > n_0, there is an (n, 18, 1/3, 2) OR-concentrator.
Proof: Consider a random bipartite graph on the vertices of L and R in which each vertex of L chooses its neighbors by sampling (with replacement) d vertices independently and uniformly from R; we discard all but one copy of each multiple edge. Let ε_s denote the event that some subset of s vertices of L has fewer than cs neighbors in R. We will first bound Pr[ε_s], and then sum Pr[ε_s] over the values of s no larger than αn to obtain an upper bound on the probability that the random graph fails to be an OR-concentrator with the parameters we seek.
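The random construction in the proof is easy to write down; a minimal sketch (the function name is ours):

```python
import numpy as np

def random_or_concentrator_candidate(n, d=18, rng=None):
    """Each left vertex samples d right neighbors uniformly with
    replacement; multiple edges are collapsed into one.  Theorem 5.6
    shows that for d = 18 such a graph is an (n, 18, 1/3, 2)
    OR-concentrator with positive probability (for large n)."""
    rng = np.random.default_rng() if rng is None else rng
    return [set(rng.integers(0, n, size=d).tolist()) for _ in range(n)]

neighbors = random_or_concentrator_candidate(1000)   # neighbors[u] ⊆ R
```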

19 Expanding Graphs
Fix any subset S ⊆ L of size s and any subset T ⊆ R of size cs. There are C(n,s) ways of choosing S and C(n,cs) ways of choosing T. The probability that T contains all of the at most ds neighbors of the vertices in S is (cs/n)^{ds}. Thus the probability that all the ds edges emanating from some s vertices of L fall within some cs vertices of R is bounded, using C(n,k) ≤ (ne/k)^k, by
Pr[ε_s] ≤ C(n,s) C(n,cs) (cs/n)^{ds} ≤ (ne/s)^s (ne/cs)^{cs} (cs/n)^{ds} = [ (s/n)^{d−c−1} c^{d−c} e^{c+1} ]^s.
For α = 1/3, c = 2, d = 18, and using s ≤ αn, we have Pr[ε_s] ≤ [ (1/3)^{15} 2^{16} e^3 ]^s. Let r = 2^{16} e^3 / 3^{15}, and note that r < 1/2. We obtain
∑_{s ≥ 1} Pr[ε_s] ≤ ∑_{s ≥ 1} r^s = r/(1 − r) < 1,
so with positive probability the random graph is an (n, 18, 1/3, 2) OR-concentrator, and such a graph exists.

20
The probabilistic method proves the existence of the following graph: a bipartite graph with parts L, with n vertices, and R, with 2^{log² n} vertices, in which every vertex of L has degree at most d = (4 log² n · 2^{log² n})/n, and which has the two properties:
1. Every subset of n/2 vertices of L has at least 2^{log² n} − n neighbors in R.
2. No vertex of R is adjacent to more than 12 log² n vertices of L.
Consider an RP algorithm A for deciding membership of a string x in a language L, with random seeds 0 ≤ r ≤ n−1:
- if x is in the language, A(x,r) must answer 1 for at least half of the values of r;
- if x is not in the language, A(x,r) must answer 0 for every value of r.
The randomized algorithm of Chapter 3 uses t iterations with 2 log n random bits in each and achieves error probability 1/t. In this chapter we want to reach an answer with a single iteration, using log² n random bits, with error probability 1/n.

21
By property 1 of this graph (every subset of n/2 vertices of L has at least 2^{log² n} − n neighbors in R), at most n vertices of R can fail to be adjacent to any of the n/2 seed values in L that are witnesses, so the error probability is less than order n/n^{log n} (note 2^{log² n} = n^{log n}).
Algorithm: assign to each vertex of L a number in the range [0, n−1] and to each vertex of R a number in the range [0, 2^{log² n} − 1]. Using log² n random bits, choose a vertex v of R, and feed the values of the L-vertices adjacent to v as the seeds r to A(x,r).
If x is not in the language, this scheme answers 0 for every r, so there is no error. If x is in the language, A(x,r) answers 1 for at least half of the values of r. With what probability does A(x,r) answer 0 for all of the seeds adjacent to v? Only when v is one of the at most n vertices of R with no neighbor among the witnesses, i.e., with probability at most n/2^{log² n}.
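A schematic of the scheme in code; `A`, the neighborhood accessor, and `R_size` are hypothetical stand-ins for the objects described above:

```python
import random

def amplified_rp(x, A, left_neighbors_of, R_size):
    """One-shot amplification: spend log|R| = log^2 n random bits choosing
    a vertex v of R, then run the RP algorithm A(x, r) on every seed r
    adjacent to v (at most 12 log^2 n runs, by property 2).  One-sided
    error: accept iff some run accepts."""
    v = random.randrange(R_size)
    return any(A(x, r) for r in left_neighbors_of(v))
```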

22 Routing
An N-scheme for the routing problem is a set of permutations such that, for every instance of sources and destinations, if a permutation is chosen from the set at random, all packets reach their destinations in fewer than 15n steps.
For the problem of routing N packets to their destinations on an n-dimensional hypercube, we saw that for each source-destination pair a random intermediate destination can be chosen, and with probability 1 − 1/N all packets reach their destinations in fewer than 15n steps. The number of random bits needed to specify the intermediate destinations of all pairs is Nn (where N = 2^n). This is too many bits; instead of choosing the intermediate destinations one by one, a single permutation could be chosen at random, but the number of permutations is huge (N^N).
The probabilistic method shows that if N³ permutations are chosen at random from the N^N possibilities, then for every initial instance of sources and destinations, choosing one of these N³ permutations at random delivers every packet to its destination in fewer than 15n steps with probability 1 − 2/N. The number of random bits needed is then only 3n.
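The bit counts quoted above are just logarithms of the corresponding sample spaces:

```latex
\underbrace{N \cdot n}_{N \text{ packets, one } n\text{-bit intermediate destination each}} = 2^n n \text{ bits},
\qquad
\log_2\!\left(N^3\right) \;=\; 3\log_2 N \;=\; 3n \text{ bits to pick one of the } N^3 \text{ permutations}.
```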

23 The Lovász Local Lemma
This lemma, which is widely used in probabilistic arguments, is as follows: suppose we have a probability space and a collection of "bad" events in it. Suppose further that each event occurs with probability at most p, and that each event depends on at most d of the other events. Then the probability that none of the bad events occurs is positive.
Corollary 5.12: Let E_1, ..., E_n be events in a probability space with Pr[E_i] ≤ p for every i. If every event depends on at most d other events and ep(d+1) ≤ 1, then Pr[∩_{i=1}^{n} Ē_i] > 0.

24 A Las Vegas algorithm for k-SAT
For the k-SAT problem, if every variable appears in at most 2^{k/50} clauses, a Las Vegas randomized algorithm can be designed. Here the probability that a given clause is not satisfied by a random assignment is p = 2^{-k}, and each clause depends on at most d = k·2^{k/50} other clauses, so the conditions of the lemma hold (see the check below); the probability that all clauses are satisfied is positive, and by the probabilistic method a satisfying assignment exists.
The algorithm:
Phase 1: Initially the value of every variable is undetermined. At each step a random undetermined variable is chosen and set to 0 or 1. If a clause has had k/2 of its variables assigned and is still not satisfied, that clause is marked as dangerous and its remaining variables are set aside as deferred.
Phase 2: Consider only the deferred variables and the marked clauses. With high probability these split into disjoint components of size O(log n). For each component, the assignments of its variables are checked one by one until all marked clauses are satisfied.
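The condition of Corollary 5.12 can be checked directly for these parameters (using k·2^{k/50} ≥ 1):

```latex
e\,p\,(d+1) \;=\; e\,2^{-k}\left(k\,2^{k/50} + 1\right)
\;\le\; 2\,e\,k\,2^{-\frac{49k}{50}} \;\le\; 1
\quad\text{for all sufficiently large } k
\;\;(\text{already about } 0.06 \text{ at } k = 10).
```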

25 Are the marked clauses satisfiable? How large is each component, and is the running time of phase 2 polynomial?
Answer 1: The conditions of the lemma still hold for the residual problem, and the probability that each component is satisfied is positive; hence, by the probabilistic method, every component is satisfiable.
Answer 2: With probability 1 − o(1), every component has size O(log n); the number of undetermined variables in a component is then also O(log n), so at most 2^{O(log n)} — i.e., polynomially many — assignments must be checked.
Note: If the size of some component is too large, phase 1 is repeated.

26 The method of conditional probabilities
For some problems the method of conditional probabilities can be used. For example, for the problem of finding a vector b with ||Ab||_∞ < 4√(n ln n), consider the tree of computations: the path from the root to each leaf determines a vector b, and at each leaf the probability of giving a wrong answer with the corresponding b is either zero or one. The probability of reaching a bad b from an internal node a with left child c and right child d satisfies
P(a) = (P(c) + P(d))/2.
Let E_i be the event that the inner product of the i-th row of the matrix A with b exceeds 4√(n ln n). We then define a pessimistic estimator P̂(a): a computable upper bound on P(a), obtained by summing over i an upper bound on Pr[E_i | the partial assignment at a]. Using P̂(a) in place of P(a) at every step yields a deterministic algorithm that runs in polynomial time.
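A sketch of this derandomization for the set-balancing instance; the concrete pessimistic estimator below (an exponential-moment bound whose average over a node's two children equals its value at the node) is a standard choice and our own elaboration, not transcribed from the slide:

```python
import numpy as np

def derandomized_balance(A, lam=None, t=None):
    """Set balancing by the method of conditional probabilities (a sketch).

    Fixes b_j in {-1,+1} one coordinate at a time, always choosing the
    sign that minimizes
        sum_i exp(-t*lam) * (exp(t*s_i) + exp(-t*s_i)) * cosh(t)**m_i,
    where s_i is the already-fixed part of row i's inner product with b
    and m_i counts that row's still-unset entries.  This quantity
    upper-bounds the probability that some |row . b| exceeds lam, and the
    average of its values at a node's two children equals its value at
    the node, so the greedy choice never increases it."""
    A = np.asarray(A, dtype=float)
    n_rows, n = A.shape
    lam = 4 * np.sqrt(n * np.log(n)) if lam is None else lam
    t = np.sqrt(2 * np.log(2 * n_rows) / n) if t is None else t

    def estimator(s, m):
        return np.sum(np.exp(-t * lam) * (np.exp(t * s) + np.exp(-t * s))
                      * np.cosh(t) ** m)

    s = np.zeros(n_rows)     # partial inner products of rows of A with b
    m = A.sum(axis=1)        # number of unset entries per row
    b = np.zeros(n, dtype=int)
    for j in range(n):
        col = A[:, j]
        plus = estimator(s + col, m - col)
        minus = estimator(s - col, m - col)
        b[j] = 1 if plus <= minus else -1
        s += b[j] * col
        m -= col
    return b   # for large n the initial estimator is < 1, so ||Ab||_inf <= lam
```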

