Incremental Linear Programming Linear programming is the problem of finding a point that satisfies a set of linear constraints and maximizes a given linear function of the variables.


Incremental Linear Programming
Linear programming involves finding a point that satisfies the constraints and maximizes the given linear function of the variables.
–d = number of variables, or dimensions.
–Objective function = the linear function to be maximized.
–Linear program = the set of constraints together with the objective function.
–Feasible region = the intersection of the half-planes, i.e. the set of points that satisfy all the constraints.
The feasible region can be bounded, unbounded, or empty; if it is empty, the problem is infeasible.

Maximize C1X1 + C2X2 + … + CdXd
Subject to
A1,1X1 + … + A1,dXd ≤ b1
A2,1X1 + … + A2,dXd ≤ b2
…
An,1X1 + … + An,dXd ≤ bn
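As a concrete illustration of the formulation above, the following sketch (plain Python; the three-constraint instance is ours, chosen only for illustration) checks whether a point lies in the feasible region defined by constraints of the form A·p ≤ b:

```python
# Feasibility test for a 2D linear program: a point p is feasible
# exactly when it satisfies every constraint a.p <= b.
def is_feasible(p, constraints):
    return all(a[0] * p[0] + a[1] * p[1] <= b for a, b in constraints)

# Illustrative instance: x <= 4, y <= 4, x + y <= 6.
H = [((1, 0), 4), ((0, 1), 4), ((1, 1), 6)]
print(is_feasible((2, 3), H))  # 2 <= 4, 3 <= 4, 5 <= 6 -> True
print(is_feasible((4, 4), H))  # 4 + 4 = 8 > 6 -> False
```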

Linear Programming Operations Research has developed many algorithms for solving linear programs that perform well in practice. Our LP has n linear constraints in 2 variables. Most OR applications are high-dimensional (many constraints and variables), and those algorithms do not work well in low dimensions (few variables). Computational geometry algorithms can do better in low dimensions.

Linear Program (H, c): H is a set of n two-dimensional constraints (half-planes); the vector c gives the objective function fc(p) = c·p. GOAL: find a point p in the intersection of the half-planes of H such that fc(p) is maximized. Let C denote the feasible region of (H, c).

Linear Program Four possible cases:
–Infeasible: C is empty, no solution.
–Unbounded: return a ray in C along which fc increases without bound.
–Non-unique solution: an edge e of C attains the optimum.
–Unique solution: a single vertex p of C attains the optimum.
Convention: to make the non-unique case return a single answer, choose the lexicographically smallest optimal point.

Incremental 2-dimensional linear programming Add the constraints one by one, maintaining the optimal vertex of the intermediate feasible region. Slight problem: this requires that an optimal vertex exists! –Not true for an unbounded linear program. –We will use a subroutine to handle this case.

UnboundedLP(H, c): if (H, c) is unbounded, return a ray in C; else return two half-planes h1 and h2 so that the LP ({h1, h2}, c) is bounded (h1 and h2 are certificates of boundedness).

Linear Programming Let (H, c) be a bounded linear program.
–h1 and h2 are the certificates returned by UnboundedLP(H, c).
–Number the remaining half-planes h3, h4, …, hn.
Let Ci = the feasible region with respect to half-planes h1 through hi: Ci = h1 ∩ h2 ∩ … ∩ hi.
Note: C2 ⊇ C3 ⊇ … ⊇ Cn = C.
Fact: if Ci = Ø, then Cj = Ø for all j ≥ i (and the LP is infeasible).

How does the optimal vertex change as we add hi? vi is the optimal vertex of Ci; li is the line bounding hi.

Lemma 4.5: Let Ci and vi be defined as before.
(i) If vi-1 ∈ hi, then vi = vi-1.
(ii) If vi-1 ∉ hi, then either Ci = Ø or vi ∈ li.
Proof of (i): Let vi-1 ∈ hi.
(1) Ci = Ci-1 ∩ hi implies Ci ⊆ Ci-1.
(2) vi-1 ∈ Ci-1 and vi-1 ∈ hi imply vi-1 ∈ Ci.
The optimal point in Ci cannot be better than the optimal point in Ci-1 (since Ci is smaller), so vi-1 is optimal in Ci.

(ii) Let vi-1 ∉ hi. Suppose, for contradiction, that Ci ≠ Ø and vi ∉ li.
(1) Consider the segment vi-1vi:
–by definition vi-1 ∈ Ci-1;
–since Ci ⊆ Ci-1, vi ∈ Ci-1;
–since Ci-1 is convex, the segment vi-1vi ⊆ Ci-1.
(2) Since vi-1 is optimal for Ci-1 and fc is linear, fc(p) increases monotonically along vi-1vi as p moves from vi to vi-1.

(Proof continued)
(3) Consider the intersection point q of the segment vi-1vi with li.
–q exists since vi-1 ∉ hi and vi ∈ Ci ⊆ hi.
Since vi-1vi ⊆ Ci-1, q must be in Ci. But the objective function increases along vi-1vi from vi toward vi-1, so fc(q) > fc(vi), which contradicts the optimality of vi in Ci.

To update the optimal point:
(1) If vi-1 ∈ hi, then we are done (vi = vi-1).
(2) If vi-1 ∉ hi, we need to find vi on li, but this is just a one-dimensional LP.
One-Dimensional LP: find the point p on li that maximizes fc(p), subject to the constraints p ∈ hj for 1 ≤ j < i. Without loss of generality, assume li is the x-axis, and let xj = li ∩ hj. We will now see how to solve this one-dimensional LP.

To solve the One-Dimensional LP:
xleft = max over 1 ≤ j < i of {xj | li ∩ hj is bounded to the left}
xright = min over 1 ≤ j < i of {xj | li ∩ hj is bounded to the right}
The interval [xleft, xright] is the feasible region.
–The 1D LP is infeasible if xleft > xright.
–Otherwise, the optimal point is xleft or xright (whichever maximizes fc).
Running time of the one-dimensional LP with n constraints: O(n).
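A minimal sketch of this one-dimensional step, under the assumption that li is the x-axis, each half-plane restricts x via a·x ≤ b with a ≠ 0, and the objective increases with x (the function name and example constraints are ours, not from the slides):

```python
def one_d_lp(constraints, maximize=True):
    # constraints: list of (a, b), each meaning a*x <= b with a != 0.
    # a > 0 bounds x to the right (x <= b/a); a < 0 bounds x to the
    # left (x >= b/a).  Keep the tightest bound on each side.
    x_left, x_right = float("-inf"), float("inf")
    for a, b in constraints:
        if a > 0:
            x_right = min(x_right, b / a)
        else:
            x_left = max(x_left, b / a)
    if x_left > x_right:
        return None  # infeasible: the bounds cross
    # The optimum lies at an endpoint of [x_left, x_right].
    return x_right if maximize else x_left

print(one_d_lp([(1, 5), (-1, -2), (2, 8)]))  # region [2, 4] -> 4.0
print(one_d_lp([(1, 1), (-1, -2)]))          # x <= 1 and x >= 2 -> None
```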

Algorithm 2DLP(H, c)
Input: a linear program (H, c)
Output: "infeasible", "unbounded" (with a ray in C), or the solution point p maximizing fc(p)
1. Run UnboundedLP(H, c); report if (H, c) is infeasible or unbounded (and return the ray in C).
2. Let h1 and h2 be the certificates returned by UnboundedLP(H, c).
   Let v2 = l1 ∩ l2, and let h3, h4, …, hn be the remaining half-planes in H.
   for i = 3 to n
       if vi-1 ∈ hi then
           vi := vi-1
       else
           vi := the 1D-LP solution on li with constraints {h1, h2, …, hi-1} and objective c
           if vi does not exist, report "infeasible" and stop
   end for
   return vn
end algorithm
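The loop above can be sketched in Python as follows. This is our own illustrative implementation, not the book's code: it assumes the two certificate half-planes are given (so UnboundedLP is omitted) and make the LP bounded, and it uses small tolerances for floating-point robustness.

```python
import random

def solve_on_line(line, halfplanes, c):
    # 1D LP: maximize c.p over points on the bounding line a.p = b of
    # `line`, subject to the earlier half-planes.  Assumes the result
    # is bounded in the optimizing direction.
    a, b = line
    d = (-a[1], a[0])                                       # line direction
    p0 = (b / a[0], 0.0) if a[0] != 0 else (0.0, b / a[1])  # point on line
    lo, hi = float("-inf"), float("inf")
    for ai, bi in halfplanes:
        # ai.(p0 + t*d) <= bi  becomes  coef*t <= rhs.
        coef = ai[0] * d[0] + ai[1] * d[1]
        rhs = bi - (ai[0] * p0[0] + ai[1] * p0[1])
        if abs(coef) < 1e-12:
            if rhs < -1e-9:
                return None        # the whole line misses this half-plane
            continue
        t = rhs / coef
        if coef > 0:
            hi = min(hi, t)
        else:
            lo = max(lo, t)
    if lo > hi:
        return None                # infeasible
    slope = c[0] * d[0] + c[1] * d[1]   # objective change along the line
    t = hi if slope > 0 else lo
    return (p0[0] + t * d[0], p0[1] + t * d[1])

def incremental_2d_lp(h1, h2, rest, c):
    # h1, h2: certificate half-planes ((ax, ay), b) meaning a.p <= b,
    # whose bounding lines intersect in the starting vertex v2.
    (a1, b1), (a2, b2) = h1, h2
    det = a1[0] * a2[1] - a1[1] * a2[0]
    v = ((b1 * a2[1] - b2 * a1[1]) / det,
         (a1[0] * b2 - a2[0] * b1) / det)
    H = [h1, h2]
    order = list(rest)
    random.shuffle(order)          # randomized insertion order
    for a, b in order:
        if a[0] * v[0] + a[1] * v[1] > b + 1e-9:
            v = solve_on_line((a, b), H, c)   # old optimum violated
            if v is None:
                return None        # LP is infeasible
        H.append((a, b))
    return v
```

For example, maximizing c = (2, 1) with certificates x ≤ 4 and y ≤ 4 plus the constraints x + y ≤ 6 and −y ≤ 0 yields the optimum (4, 2), whatever insertion order the shuffle picks.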

Running time:
–UnboundedLP takes O(n) (we will see this later).
–Each iteration i takes O(i) time, so the total is Σ O(i) = O(n²).
Therefore O(n²) in total.
Correctness: follows from Lemma 4.5 (each iteration maintains the correct optimal vertex).
But this algorithm is worse than the one for constructing the entire convex feasible region.

Incremental LP Nice and simple. But… it takes O(n²) time in the worst case, which is worse than the previous algorithm that computed the entire feasible region!

Is our analysis too crude? I.e., is the algorithm actually better than we thought? The algorithm has n−2 stages (each adds one half-plane). We said stage i takes O(i) time, the time for a 1D-LP with i half-planes. Note however that stage i takes:
–O(i) time if the optimal vertex changes: do a 1D-LP (the previous optimum is not in hi).
–O(1) time if the optimal vertex does not change (the previous optimum is in hi, so it is still optimal).

Question: how many times can the optimal vertex change? Idea: if we can show it changes only, say, k times, then we can bound the running time by O(kn). Unfortunately: there are cases in which the optimal vertex changes every time…

Question: how many times can the optimal vertex change? If we consider the half-planes in this (bad) order, then the optimal vertex changes every time, and we have to do a 1D-LP each time: running time O(n²)!! Notice, however, that if we had been lucky and added the half-planes in the reverse order, the optimum would never change! Hmm… can we determine the right order in which to add the half-planes?

Randomization Unfortunately, we cannot really determine the exact best order without a lot of work… Answer: randomization. Choose a random permutation of the half-planes and add them in that order.
–We could have bad luck and pick a bad order that gives O(n²) running time.
–But most orders are not bad (as we'll see), so usually we do pretty well.

Changes to the Algorithm Before we start adding half-planes, randomly permute them. This takes O(n) time.
RandomPermutation(A)
input: A[1…n]
output: A[1…n], permuted randomly
for i = n downto 2
    random_index = Random(i)   (a uniform random integer in 1…i)
    swap(A[i], A[random_index])
endfor
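The same routine in Python (a standard Fisher-Yates shuffle; `random.randint` plays the role of Random(i)):

```python
import random

def random_permutation(a):
    # In-place Fisher-Yates shuffle, mirroring the pseudocode above:
    # at step i, swap A[i] with a uniformly random slot in A[0..i].
    for i in range(len(a) - 1, 0, -1):
        j = random.randint(0, i)   # uniform integer in [0, i]
        a[i], a[j] = a[j], a[i]
    return a

print(random_permutation(list(range(5))))  # some permutation of 0..4
```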

Randomized incremental algorithm The algorithm is now a randomized algorithm: random choices are made in the permutation subroutine. What is the running time of the randomized incremental algorithm?
–It depends on the permutation, and there are (n−2)! of them…
–We'll study the expected running time.
–Each permutation of the input is equally likely and does not depend on the input half-planes.
–No assumptions are made on the input, so the expectation is with respect to the random order in which the half-planes are treated, and it holds for any set of half-planes.

Expected running time Theorem 4.8: A 2D-LP with n constraints can be solved in O(n) expected time using a randomized incremental algorithm. Proof:
–The running times of RandomPermutation() and UnboundedLP() are O(n); we'll see the latter later.
–We need to bound the time for adding the n−2 half-planes.

Expected running time
–Adding a half-plane takes: constant time if the optimum doesn't change; O(i) time if it does change (the i-th half-plane triggers a 1D-LP).
–We will bound the total time spent in all 1D-LPs.
–Let Xi be the random variable with Xi = 1 if the optimum changes when adding hi (i.e. vi-1 ∉ hi), and Xi = 0 otherwise.

Expected running time
–If Xi = 1, then the 1D-LP takes O(i) time; otherwise, adding hi takes O(1) time. The total time for adding all half-planes (including the 1D-LPs) is Σ from i = 3 to n of O(i) · Xi, plus O(1) per step.
–We bound the expectation of this sum using linearity of expectation: the expected value of a sum of random variables is the sum of the expected values, so E[Σ O(i) · Xi] = Σ O(i) · E[Xi].

Expected running time
–What is E[Xi]? It is the probability that vi-1 ∉ hi.
–"Backward analysis": run the algorithm backwards.
–Suppose the algorithm is done: vn is the optimal vertex, a vertex of Cn.
–Was it already a vertex of Cn-1? The answer is no only if hn is one of the (at most two) half-planes defining vn. How likely is this? At most 2/(n−2), since hn is a uniformly random one of the n−2 permuted half-planes.

Expected running time
–In general, to bound E[Xi] we:
–Fix the subset of the first i half-planes (this determines Ci).
–A new optimum is computed when adding hi only if hi is one of the at most two half-planes defining the new optimum, so E[Xi] ≤ 2/(i−2).
–So the total bound is Σ from i = 3 to n of O(i) · 2/(i−2) = O(n).
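A quick numeric sanity check of this bound (our own illustration, with the hidden constant in O(i) taken as 1): summing i · 2/(i−2) gives a total that grows linearly in n.

```python
# Sum_{i=3..n} i * 2/(i-2): the expected 1D-LP work, up to constants.
# Since 2i/(i-2) = 2 + 4/(i-2), the sum is 2(n-2) + 4*H_{n-2} = O(n).
def expected_work(n):
    return sum(i * 2.0 / (i - 2) for i in range(3, n + 1))

for n in (100, 1000, 10000):
    print(n, round(expected_work(n) / n, 3))  # ratio approaches 2
```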

Expected running time The randomized incremental algorithm takes O(n) expected time. Important: the expectation is only with respect to the random permutation, and the bound applies to any input set.