1
A Subexponential Randomized Algorithm for Linear Programming
Paper by: Bernd Gärtner and Emo Welzl
Presentation by: Oz Lavee
2
Linear programming
The linear programming problem is a well known problem in computational geometry.
The last decade has brought progress in the efficiency of linear programming algorithms.
Most of these algorithms were exponential in the dimension of the problem.
3
Linear programming
The latest progress is a randomized algorithm that solves a linear programming problem with n inequalities and d variables (in $\mathbb{R}^d$) in expected time $O(d^2 n + e^{O(\sqrt{d \ln d})})$.
The algorithm we will see is a combination of the subexponential bounds of Matoušek and Kalai with Clarkson's algorithms.
4
Definition: Linear programming problem
Find a nonnegative vector $x \in \mathbb{R}^d$ that minimizes $c^T x$ subject to n linear inequalities $Ax \ge b$, where $x \ge 0$.
c: a d-vector representing the objective direction
x: a d-vector of variables
A: an $n \times d$ matrix encoding n inequalities over d variables
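As an illustrative sketch of this formulation, the snippet below solves one tiny instance with SciPy's general-purpose solver; the data values are made up, and since `linprog` expects "<=" constraints, $Ax \ge b$ is passed as $-Ax \le -b$.

```python
# Minimal sketch: min c^T x subject to Ax >= b and x >= 0, using SciPy.
# The matrix A, vector b, and direction c are hypothetical example data.
import numpy as np
from scipy.optimize import linprog

c = np.array([1.0, 2.0])                 # objective direction (d = 2)
A = np.array([[1.0, 1.0],
              [2.0, 1.0]])               # n = 2 inequalities over d = 2 variables
b = np.array([1.0, 1.5])

# linprog minimizes c^T x with A_ub x <= b_ub and, by default, x >= 0,
# so the ">=" constraints are negated.
res = linprog(c, A_ub=-A, b_ub=-b)
print(res.x, res.fun)                    # optimum at (1, 0) with value 1
```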
5
Example over $\mathbb{R}^2$
[Figure: two constraint halfplanes h1, h2, the objective direction c, and the optimal point x.]
6
Definitions
Let H be the set of n halfspaces defined by $Ax \ge b$.
Let $H^+$ be the set of d halfspaces defined by $x \ge 0$.
For $G \subseteq H \cup H^+$ we define $v_G$ as the lexicographically minimal point x minimizing $c^T x$ over $\bigcap_{h \in G} h$.
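A small sketch of what computing $v_G$ can look like, assuming halfspaces are stored as pairs (a, b) meaning $a \cdot x \ge b$; it returns the plain optimum via SciPy and ignores the lexicographic tie-breaking used in the paper, and the names and data layout are illustrative only.

```python
# Sketch of v_G: minimize c^T x over the intersection of the halfspaces in G
# plus the d nonnegativity halfspaces H+ (handled here via linprog's default
# bounds x >= 0). Lexicographic tie-breaking is omitted for simplicity.
import numpy as np
from scipy.optimize import linprog

def v(G, c):
    d = len(c)
    A = np.array([a for a, _ in G]) if G else None
    b = np.array([bb for _, bb in G]) if G else None
    res = linprog(c, A_ub=None if A is None else -A,
                  b_ub=None if b is None else -b,
                  bounds=[(0, None)] * d)
    return res.x if res.success else None   # None if infeasible or unbounded

# Example: x1 >= 1 and x2 >= 2, minimizing x1 + x2 gives v_G = (1, 2).
G = [(np.array([1.0, 0.0]), 1.0), (np.array([0.0, 1.0]), 2.0)]
print(v(G, np.array([1.0, 1.0])))
```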
7
Definitions: basis
A set of halfspaces $B \subseteq H \cup H^+$ is called a basis if $v_B$ is finite and $v_{B'} < v_B$ for every proper subset $B' \subsetneq B$.
A basis of G is a minimal subset $B \subseteq G$ such that $v_B = v_G$.
8
Definitions: violations
A constraint $h \in H \cup H^+$ is violated by G if and only if $v_G < v_{G \cup \{h\}}$; equivalently, h is violated by G if and only if the point $v_G$ does not satisfy h.
[Figure: G = {h1, h3}, h = h2, and the point $v_G$, which lies outside h2.]
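The violation test used throughout the algorithms therefore reduces to checking whether the current optimum lies inside a halfspace. A tiny sketch, again assuming halfspaces stored as (a, b) for $a \cdot x \ge b$ and using a small numerical tolerance:

```python
import numpy as np

EPS = 1e-9   # tolerance for the floating-point comparison

def violates(h, x):
    """True if the point x lies strictly outside the halfspace h = (a, b), i.e. a.x < b."""
    a, b = h
    return float(np.dot(a, x)) < b - EPS

# Example matching the figure: the origin violates x1 + x2 >= 1.
print(violates((np.array([1.0, 1.0]), 1.0), np.array([0.0, 0.0])))   # True
```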
9
Definitions: extreme
A constraint $h \in G$ is extreme in G if $v_{G - \{h\}} < v_G$.
[Figure: G = {h1, h2, h3}; h2 is extreme; the point $v_G$.]
10
Lemma 1
1. For $F \subseteq G \subseteq H \cup H^+$: $v_F \le v_G$.
2. If $v_F$ and $v_G$ are finite and $v_F = v_G$, then a constraint h is violated by F if and only if h is violated by G.
3. If $v_G$ is finite, then any basis of G has exactly d constraints, and G has at most d extreme constraints.
11
The algorithm
Our algorithm is a combination of three algorithms that use each other:
1. Clarkson's first algorithm (CL1), for n >> d
2. Clarkson's second algorithm (CL2), for roughly $d\sqrt{n}$ constraints
3. The subexponential algorithm (subex), for at most $6d^2$ constraints
CL1 (n >> d) calls CL2 on samples of about $d\sqrt{n}$ constraints, and CL2 calls subex on samples of $6d^2$ constraints.
12
Clarkson's First Algorithm (CL1)
Given a set H of n constraints where n >> d:
We choose a random sample $R \subseteq H$ of size $r \approx d\sqrt{n}$, compute $v_R$ and the set V of constraints from H that are violated by $v_R$.
If V is not too large, we add it to a set G of forced constraints, initialized to $G = H^+$; we then choose another sample R, compute $v_{R \cup G}$, and so on.
13
Clarkson's First Algorithm (CL1)
CL1(H):
    if n < 9d^2 then return CL2(H)
    else
        r := d * sqrt(n), G := H^+
        repeat
            choose a random sample R of r constraints from H
            v := CL2(R ∪ G)
            V := {h ∈ H | v violates h}
            if |V| <= 2 * sqrt(n) then G := G ∪ V
        until V = Ø
        return v
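A runnable sketch of CL1's sampling loop under simplifying assumptions of mine: the recursive call to CL2 is replaced by a direct SciPy solve on the sampled constraints, v is the plain (non-lexicographic) optimum, and the instance is assumed feasible and bounded.

```python
# Sketch of Clarkson's first algorithm: repeatedly solve a random sample of
# about d*sqrt(n) constraints, collect the violated constraints V, and force
# V into all later samples whenever V is small enough.
import math, random
import numpy as np
from scipy.optimize import linprog

EPS = 1e-9

def solve_subset(idx, A, b, c):
    """Stand-in for CL2: optimum of c^T x over the constraints in idx, with x >= 0."""
    idx = np.asarray(sorted(idx), dtype=int)
    res = linprog(c, A_ub=-A[idx] if idx.size else None,
                  b_ub=-b[idx] if idx.size else None,
                  bounds=[(0, None)] * len(c))
    return res.x

def cl1(A, b, c):
    n, d = A.shape
    if n < 9 * d * d:
        return solve_subset(range(n), A, b, c)
    r = int(d * math.sqrt(n))
    G = set()                                   # constraints forced into every sample
    while True:
        R = set(random.sample(range(n), r))
        v = solve_subset(R | G, A, b, c)
        V = [h for h in range(n) if A[h] @ v < b[h] - EPS]   # violated constraints
        if not V:
            return v
        if len(V) <= 2 * math.sqrt(n):
            G |= set(V)
```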
14
Two important facts
Fact 1: The expected size of V is no more than $\sqrt{n}$.
By Markov's inequality, the probability that $|V| > 2\sqrt{n}$ is at most 1/2.
Hence the expected number of attempts needed to obtain a small enough V is at most 2.
15
Two important facts
Fact 2: If any constraint is violated by $v = v_{G \cup R}$, then for any basis B of H there must be a constraint of B that is violated by v (and is therefore added to G).
Consequently, the number of augmentations of G is at most d.
16
CL1 run time
CL1 computes $v_{H \cup H^+}$ with an expected $O(d^2 n)$ arithmetic operations and at most 2d calls to CL2, each with $O(d\sqrt{n})$ constraints.
17
Clarkson's Second Algorithm (CL2)
This algorithm is very similar to CL1, with the main difference that instead of forcing V into the next iteration, we increase the probability that the elements of V are chosen for R in the next iteration.
18
Clarkson's Second Algorithm (CL2)
We use a multiplicity factor $\mu_h$ for each constraint h, initialized to 1. For every constraint in V we double its factor.
For any basis B, the factors of the elements of B grow so quickly that after a logarithmic number of iterations they are chosen with high probability.
19
Clarkson's Second Algorithm (CL2)
CL2(H):
    if n <= 6d^2 then return subex(H)
    else
        r := 6d^2
        repeat
            choose a random sample R of r constraints from H, with probabilities proportional to µ
            v := subex(R)
            V := {h ∈ H | v violates h}
            if µ(V) <= µ(H) / (3d) then for all h ∈ V do µ_h := 2µ_h
        until V = Ø
        return v
Here $\mu(G) = \sum_{h \in G} \mu_h$.
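A runnable sketch of CL2's reweighting loop, again with the recursive subex call replaced by a direct SciPy solve on the sample; the weighted sample is drawn without replacement, which only approximates sampling from the multiset of multiplicities, and the helper names are illustrative.

```python
# Sketch of Clarkson's second algorithm: sample 6d^2 constraints with
# probability proportional to their multiplicity mu, and double the
# multiplicities of the violated constraints whenever they carry at most a
# 1/(3d) fraction of the total weight.
import numpy as np
from scipy.optimize import linprog

EPS = 1e-9

def solve_subset(idx, A, b, c):
    """Stand-in for subex: optimum of c^T x over the constraints in idx, with x >= 0."""
    res = linprog(c, A_ub=-A[idx], b_ub=-b[idx], bounds=[(0, None)] * len(c))
    return res.x

def cl2(A, b, c, seed=0):
    rng = np.random.default_rng(seed)
    n, d = A.shape
    if n <= 6 * d * d:
        return solve_subset(np.arange(n), A, b, c)
    r = 6 * d * d
    mu = np.ones(n)                              # multiplicity of each constraint
    while True:
        R = rng.choice(n, size=r, replace=False, p=mu / mu.sum())
        v = solve_subset(R, A, b, c)
        V = np.where(A @ v < b - EPS)[0]         # constraints violated by v
        if V.size == 0:
            return v
        if mu[V].sum() <= mu.sum() / (3 * d):    # "successful" iteration
            mu[V] *= 2
```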
20
Lemma 4
Let k be a positive integer. Then after kd successful iterations we have, for any basis B of H:
$2^k \le \mu(B) \le \mu(H) \le n e^{k/3}$
21
Run time of CL2
Since $2^k \le n e^{k/3}$ and since $2 > e^{1/3}$, for a big enough k we would get $2^k > n e^{k/3}$, a contradiction.
Take k = 3ln(n): then $2^k = n^{3\ln 2} > n^2 = n e^{3\ln(n)/3}$.
Hence there are at most 3d ln(n) successful iterations, and therefore at most 6d ln(n) expected iterations overall.
22
Run time of CL2
Each iteration takes O(dn) arithmetic operations and one call to subex.
In total: $O(d^2 n \log n)$ operations and 6d ln(n) expected calls to subex.
23
The Subexponential Algorithm (SUBEX)
The idea:
- H is a set of n constraints.
- Remove a random constraint h.
- Compute $v_{H - \{h\}}$ recursively.
- If h is not violated, we are done.
- Otherwise, try again by removing another (hopefully different) constraint.
The probability that h is violated is at most d/n.
24
The Subex Algorithm
In order to gain efficiency, we pass to the subex procedure, in addition to the set G of constraints, a candidate basis B.
We assume that the following primitive operations are available:
- basis(B, h): computes a basis of $B \cup \{h\}$
- violation test: decides whether a constraint h violates $v_B$
- computing $v_B$ for a basis B
25
The Subex Algorithm
subex(G, B):
    if G = B then return (v_B, B)
    else
        choose a random h ∈ G - B
        (v, B') := subex(G - {h}, B)
        if h violates v
            then return subex(G, basis(B', h))
            else return (v, B')
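A runnable sketch of the recursion above, with the two primitives filled in naively: $v_B$ by a direct SciPy solve and basis(B', h) by brute force over the at most d+1 constraints of $B' \cup \{h\}$. It mirrors the control flow only, not the cost model, and assumes $c \ge 0$ so that every subproblem is bounded (because of $x \ge 0$).

```python
# Sketch of subex: remove a random constraint, solve recursively, and if the
# removed constraint is violated, restart from a basis of B' together with h.
import random
from itertools import combinations
import numpy as np
from scipy.optimize import linprog

EPS = 1e-9

def value(S, A, b, c):
    """v_S: optimum of c^T x over the constraints indexed by S, with x >= 0."""
    idx = np.asarray(sorted(S), dtype=int)
    res = linprog(c, A_ub=-A[idx] if idx.size else None,
                  b_ub=-b[idx] if idx.size else None,
                  bounds=[(0, None)] * len(c))
    return res.x

def basis_of(S, A, b, c):
    """Smallest subset of S with the same objective value (brute force, |S| <= d+1)."""
    target = float(np.dot(c, value(S, A, b, c)))
    for k in range(len(S) + 1):
        for cand in combinations(sorted(S), k):
            v = value(set(cand), A, b, c)
            if v is not None and abs(float(np.dot(c, v)) - target) < EPS:
                return set(cand)
    return set(S)

def subex(G, B, A, b, c):
    if G == B:
        return value(B, A, b, c), B
    h = random.choice(sorted(G - B))
    v, B1 = subex(G - {h}, B, A, b, c)
    if A[h] @ v < b[h] - EPS:                   # h violates v
        return subex(G, basis_of(B1 | {h}, A, b, c), A, b, c)
    return v, B1
```

On a small feasible instance, `subex(set(range(len(A))), set(), A, b, c)` returns a point together with a basis; this is adequate here because subex is only called with at most $6d^2$ constraints.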
26
The Subex Algorithm
The number of steps is finite since:
- The first recursive call decreases the number of constraints.
- The second recursive call increases the value of the intermediate solution.
It can be shown inductively that each step preserves the correctness of the intermediate solution.
27
The Subex Algorithm: run time
The subex algorithm computes $v_{H \cup H^+}$ with an expected number of primitive operations bounded by $e^{O(\sqrt{d \ln n})}$.
We call the subex algorithm with at most $6d^2$ constraints, so the run time of each call to subex is $e^{O(\sqrt{d \ln d})}$.
28
Total Run Time
The run time of CL2: $O(d^2 n \log n + 6d\ln(n) \cdot T_{subex}(6d^2)) = O(d^2 n \log n) + O(d \log n)\, e^{O(\sqrt{d \ln d})}$
The run time of CL1: $O(d^2 n) + 2d \cdot T_{CL2}(O(d\sqrt{n})) = O(d^2 n + e^{O(\sqrt{d \ln d})})$
29
The Abstract Framework
We can extend this algorithm to a larger range of problems.
Let H be a finite set.
Let $(W \cup \{-\infty\}, \le)$ be a linearly ordered set of values.
Let $w : 2^H \to W \cup \{-\infty\}$ be a value function.
30
The Abstract Framework
Our goal: find a minimal subset $B \subseteq H$ with w(B) = w(H).
We can use the algorithm that we saw to solve this problem if two axioms are satisfied.
31
The axioms
1. For any F, G such that $F \subseteq G \subseteq H$ we have $w(F) \le w(G)$ (monotonicity).
2. For any $F \subseteq G \subseteq H$ with w(F) = w(G) and any $h \in H$: if $w(G) < w(G \cup \{h\})$ then $w(F) < w(F \cup \{h\})$ (locality).
32
LP-type problems
If these axioms hold for a problem, we call it an LP-type problem.
From Lemma 1 we can see that the linear programming problem is LP-type.
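One way to make the abstract framework concrete in code is a small interface that exposes exactly the pieces the algorithm needs: the value function w on subsets and the derived violation and basis tests. This is a sketch of my own, assuming w returns comparable numeric values with float('-inf') standing in for $-\infty$.

```python
# A minimal LP-type problem interface, following the two axioms above:
# w must be monotone, and locality must hold for (H, w).
from abc import ABC, abstractmethod
from typing import FrozenSet, Hashable

class LPTypeProblem(ABC):
    @abstractmethod
    def ground_set(self) -> FrozenSet[Hashable]:
        """The finite set H of constraints."""

    @abstractmethod
    def w(self, G: FrozenSet[Hashable]) -> float:
        """Value of a subset G of H (float('-inf') plays the role of -infinity)."""

    def violates(self, h: Hashable, G: FrozenSet[Hashable]) -> bool:
        """h is violated by G iff adding h strictly increases the value."""
        return self.w(G | {h}) > self.w(G)

    def is_basis(self, B: FrozenSet[Hashable]) -> bool:
        """B is a basis iff removing any element strictly decreases its value."""
        return all(self.w(B - {h}) < self.w(B) for h in B)
```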
33
LP-type problems
For the efficiency of the algorithm we need one more parameter of (H, w): the maximum size of any basis of H, which is referred to as the combinatorial dimension of (H, w) and denoted $\delta$.
34
LP-type problems
Any LP-type problem can be solved using the above algorithm, but not necessarily in subexponential time.
In order to obtain subexponential time, the problem should have the property of basis regularity.
Basis regularity: every basis has exactly $\delta$ constraints.
35
LP-type problems: examples
Smallest enclosing ball: given a set of n points in $\mathbb{R}^d$, find the smallest enclosing ball (combinatorial dimension d+1).
Polytope distance: given two polytopes P, Q, compute $p \in P$, $q \in Q$ minimizing dist(p, q) (combinatorial dimension d+2).
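To make the first example tangible, here is a brute-force sketch of the smallest enclosing circle in the plane (d = 2): the optimum is always determined by at most 3 of the points, which matches the combinatorial dimension d + 1 mentioned above. The enumeration is for illustration only; the LP-type machinery is what makes the problem fast to solve.

```python
# Brute-force smallest enclosing circle: every minimum enclosing circle is
# determined by 2 points (as a diameter) or by 3 points (their circumcircle),
# so a basis has at most 3 = d + 1 points.
from itertools import combinations
from math import hypot

EPS = 1e-9

def circle_2(p, q):
    cx, cy = (p[0] + q[0]) / 2, (p[1] + q[1]) / 2
    return cx, cy, hypot(p[0] - cx, p[1] - cy)

def circle_3(p, q, r):
    ax, ay = p; bx, by = q; cx, cy = r
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if abs(d) < EPS:                       # collinear points: no circumcircle
        return None
    ux = ((ax*ax + ay*ay) * (by - cy) + (bx*bx + by*by) * (cy - ay) + (cx*cx + cy*cy) * (ay - by)) / d
    uy = ((ax*ax + ay*ay) * (cx - bx) + (bx*bx + by*by) * (ax - cx) + (cx*cx + cy*cy) * (bx - ax)) / d
    return ux, uy, hypot(ax - ux, ay - uy)

def smallest_enclosing_circle(points):
    def contains_all(c):
        return c is not None and all(hypot(x - c[0], y - c[1]) <= c[2] + EPS for x, y in points)
    candidates = [circle_2(p, q) for p, q in combinations(points, 2)]
    candidates += [circle_3(p, q, r) for p, q, r in combinations(points, 3)]
    return min((c for c in candidates if contains_all(c)), key=lambda c: c[2])

print(smallest_enclosing_circle([(0, 0), (2, 0), (1, 1), (1, 0.2)]))  # center (1, 0), radius 1
```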
36
Summary
We have seen a randomized algorithm for linear programming that is subexponential in the dimension d.
We have seen the family of LP-type problems.
37
References
Bernd Gärtner and Emo Welzl, Linear Programming - Randomization and Abstract Frameworks.