Algorithms Lecture 10 Lecturer: Moni Naor

Linear Programming in Small Dimension

Canonical form of linear programming:

Maximize: $c_1 x_1 + c_2 x_2 + \dots + c_d x_d$

Subject to:
$a_{1,1} x_1 + a_{1,2} x_2 + \dots + a_{1,d} x_d \le b_1$
$a_{2,1} x_1 + a_{2,2} x_2 + \dots + a_{2,d} x_d \le b_2$
...
$a_{n,1} x_1 + a_{n,2} x_2 + \dots + a_{n,d} x_d \le b_n$

Here n is the number of constraints and d is the number of variables (the dimension).
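A concrete instance (the numbers are made up for illustration, not from the lecture): maximize $x_1 + 2x_2$ subject to $x_1 + x_2 \le 4$, $x_1 \le 2$, $-x_1 \le 0$, $-x_2 \le 0$. Here $n = 4$ and $d = 2$; the optimum is the vertex $(0, 4)$ with value 8, determined by the $d = 2$ constraints $x_1 + x_2 \le 4$ and $-x_1 \le 0$ holding with equality.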

Linear Programming in Two Dimensions

[Figure: the feasible region (an intersection of half-planes) and the optimal vertex where the objective is maximized.]

What is special in low dimension
- Only d constraints determine the solution:
  - The optimal value on those d constraints determines the global one.
  - The problem is reduced to finding those constraints that matter.
  - We know that equality holds in those constraints.
- Generic algorithms:
  - Fourier-Motzkin elimination: $(n/2)^{2^d}$
  - Worst case of simplex: the number of basic feasible solutions (vertices), $\binom{n}{d}$
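Since the optimum is determined by d of the constraints, a brute-force solver can simply try every d-subset of constraint boundaries as a candidate vertex. A minimal sketch for d = 2 (the function name and use of Cramer's rule are my own choices; it assumes a bounded feasible region with at least one vertex):

```python
from itertools import combinations

EPS = 1e-9

def brute_force_lp_2d(constraints, c):
    """Try every pair of constraint boundaries as a candidate vertex and
    keep the best feasible one: binom(n, 2) candidates, each checked
    against all n constraints, so O(n^3) overall."""
    best, best_val = None, float("-inf")
    for (a1, b1), (a2, b2) in combinations(constraints, 2):
        det = a1[0] * a2[1] - a1[1] * a2[0]
        if abs(det) < EPS:
            continue                          # parallel boundaries: no vertex
        x = (b1 * a2[1] - b2 * a1[1]) / det   # Cramer's rule for the 2x2
        y = (a1[0] * b2 - a2[0] * b1) / det   # system a_i . p = b_i
        if all(a[0] * x + a[1] * y <= b + EPS for a, b in constraints):
            val = c[0] * x + c[1] * y
            if val > best_val:
                best_val, best = val, (x, y)
    return best
```

On the small instance above, `brute_force_lp_2d([((1, 1), 4), ((1, 0), 2), ((-1, 0), 0), ((0, -1), 0)], (1, 2))` returns the vertex (0, 4).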

Key Observation
If we know that an inequality constraint is defining, we can reduce the number of variables, either by projection or by substitution.

[Figure: the feasible region and optimal vertex, with the defining constraint drawn as the line $4x_1 - 6x_2 = 4$.]
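For illustration, take the defining constraint from the figure, $4x_1 - 6x_2 = 4$. Substitution solves it for one variable, $x_1 = 1 + \tfrac{3}{2} x_2$, and replacing $x_1$ everywhere turns each remaining constraint $a_{i,1} x_1 + a_{i,2} x_2 \le b_i$ into the one-variable constraint $\left(a_{i,2} + \tfrac{3}{2} a_{i,1}\right) x_2 \le b_i - a_{i,1}$: an LP with $d-1$ variables and $n-1$ constraints.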

Incremental Algorithm
Input: a set H of n constraints on d variables.
Output: B(H), the set of defining constraints.
0. If |H| = d, output B(H) = H.
1. Pick a random constraint $h \in H$ and recursively find $B(H \setminus \{h\})$.
2. If $B(H \setminus \{h\})$ does not violate h, output $B(H \setminus \{h\})$; otherwise project all the constraints onto h and recursively solve the resulting (n-1, d-1) LP.
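A minimal Python sketch of this randomized incremental scheme for d = 2 (all names are my own; it assumes every constraint normal is nonzero and that the optimum, if one exists, lies in a large bounding box; the violated-constraint case is the projection step above, which in the plane is a 1-dimensional LP):

```python
import random

EPS = 1e-9

def solve_1d(cons, obj, t_bound=1e18):
    """Maximize obj * t subject to a * t <= b for each (a, b) in cons."""
    lo, hi = -t_bound, t_bound
    for a, b in cons:
        if a > EPS:
            hi = min(hi, b / a)
        elif a < -EPS:
            lo = max(lo, b / a)
        elif b < -EPS:
            return None                       # 0 * t <= b with b < 0: infeasible
    if lo > hi:
        return None
    return hi if obj >= 0 else lo

def seidel_2d(constraints, c, bound=1e7):
    """Maximize c . x subject to a . x <= b for (a, b) in constraints.
    Assumes the optimum, if one exists, lies in the box |x|, |y| <= bound."""
    box = [((1.0, 0.0), bound), ((-1.0, 0.0), bound),
           ((0.0, 1.0), bound), ((0.0, -1.0), bound)]
    # Optimum of the box alone: the corner in the objective's direction.
    x = (bound if c[0] >= 0 else -bound,
         bound if c[1] >= 0 else -bound)
    H = list(constraints)
    random.shuffle(H)                         # step 1: random insertion order
    seen = list(box)
    for a, b in H:
        if a[0] * x[0] + a[1] * x[1] <= b + EPS:
            seen.append((a, b))               # current optimum still feasible
            continue
        # The optimum moves onto the line a . x = b: project the constraints
        # seen so far onto it and solve a 1-variable LP (the (n-1, d-1) call).
        nn = a[0] ** 2 + a[1] ** 2
        p = (a[0] * b / nn, a[1] * b / nn)    # a point on the line
        d = (-a[1], a[0])                     # direction along the line
        one_d = [(aa[0] * d[0] + aa[1] * d[1],
                  bb - aa[0] * p[0] - aa[1] * p[1]) for aa, bb in seen]
        t = solve_1d(one_d, c[0] * d[0] + c[1] * d[1])
        if t is None:
            return None                       # infeasible
        x = (p[0] + t * d[0], p[1] + t * d[1])
        seen.append((a, b))
    return x
```

For example, `seidel_2d([((1, 0), 2), ((0, 1), 3), ((1, 1), 4)], (1, 1))` maximizes $x + y$ under $x \le 2$, $y \le 3$, $x + y \le 4$ and returns a vertex of value 4, such as (1, 3).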

Correctness, Termination and Analysis
- Correctness: by induction on the number of constraints.
- Termination: if a non-defining constraint is chosen, there is no need to rerun; only picking a defining constraint triggers the extra recursive call.
- Analysis: the probability that h is one of the defining constraints is d/n.

Analysis
The probability that h is one of the defining constraints is d/n, which gives the recurrence

$T(d,n) = \frac{d}{n}\, T(d-1,\,n-1) + T(d,\,n-1)$

and, by induction with the hypothesis $T(d,n) \le n \cdot d!$,

$T(d,n) \le \frac{d}{n}\,(d-1)!\,(n-1) + d!\,(n-1) = (n-1)\,d!\left(\frac{1}{n} + 1\right) \le n \cdot d!$

so the expected running time is $O(d! \cdot n)$.

How to improve
The algorithm is wasteful: whenever the current solution violates the newly considered constraint, a new solution is computed from scratch.

Random Sampling Idea
Build the basis by adding the constraints in a manner that depends on the history.
Input: a set H of n constraints on d variables.
Output: the set of defining constraints.
0. If $|H| \le c \cdot d^2$, return simplex on H.
$S \leftarrow \emptyset$
Repeat:
  Pick a random $R \subset H$ of size r.
  Solve recursively on $S \cup R$; the solution is u.
  V = the set of constraints in H violated by u.
  If $|V| \le t$, then $S \leftarrow S \cup V$.
Until $V = \emptyset$.
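A minimal sketch of this sampling loop (the helper names are hypothetical: base_solve stands for any solver for small subproblems, e.g. the simplex call of step 0 or the seidel_2d sketch above, which the slides invoke recursively; infeasible subproblems are not handled here):

```python
import math
import random

def sampling_lp(H, d, base_solve, violates, c_small=9):
    """Random-sampling LP scheme: repeatedly solve a small random
    subproblem and fold the violated constraints into S.
    base_solve(subset) returns the optimum of a small LP;
    violates(u, h) tests whether point u breaks constraint h."""
    n = len(H)
    if n <= c_small * d * d:                  # step 0: small instance, solve directly
        return base_solve(H)
    r = min(n, int(d * math.sqrt(n)))         # sample size r = d * n^(1/2)
    t = 2.0 * n * d / r                       # augmentation threshold t = 2nd/r
    S = []
    while True:
        R = random.sample(H, r)               # pick random R subset of H, |R| = r
        u = base_solve(S + R)                 # solve on S union R
        V = [h for h in H if violates(u, h)]  # constraints of H violated by u
        if not V:
            return u                          # V is empty: u is optimal for H
        if len(V) <= t:                       # successful augmentation: V contains
            S = S + V                         # a new basis constraint
```

With the 2D sketch above one could take `base_solve = lambda cons: seidel_2d(cons, c)` and `violates = lambda u, h: h[0][0]*u[0] + h[0][1]*u[1] > h[1] + 1e-9`.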

Correctness, Termination and Analysis
Claim: each time we augment S ($S \leftarrow S \cup V$), we add to S a new constraint from the "real" basis B of H:
- If u did not violate any constraint in B, it would be optimal.
- So V must contain an element of B which was not in S before.
Since |B| = d, we can augment S at most d times. Therefore the number of constraints in each recursive call is at most $|R| + |S| \le r + dt$.
The important factor for the analysis: what is the probability of a successful augmentation?

Sampling Lemma
For any H and $S \subset H$, the expected (over R) number of constraints of H that violate u, the optimum on $S \cup R$, is at most $nd/(r+1) \le nd/r$.

Proof: Let $X(R,h) = 1$ iff h violates $u(S \cup R)$. We need to bound

$E_R\Big[\sum_h X(R,h)\Big] = \frac{1}{\binom{n}{r}} \sum_{|R|=r} \sum_h X(R,h).$

Instead of summing over pairs (R, h), consider all subsets $Q = R \cup \{h\}$ of size r+1:

$= \frac{1}{\binom{n}{r}} \sum_{|Q|=r+1} \sum_{h \in Q} X(Q \setminus \{h\},\, h).$

For a fixed Q, the constraint h can violate the optimum of $Q \setminus \{h\}$ only if h is one of the at most d defining constraints of Q, so the inner sum is at most d. Hence the expectation is at most

$\frac{\binom{n}{r+1}}{\binom{n}{r}} \cdot d = \frac{n-r}{r+1} \cdot d \le \frac{n \cdot d}{r+1}.$

Analysis
Setting $t = 2nd/r$ implies, by Markov's inequality applied to the sampling lemma's bound $E[|V|] \le nd/r$, that each round succeeds ($|V| \le t$) with probability at least 1/2, so the expected number of recursive calls per successful augmentation is constant.
The number of constraints in each recursive call is bounded by $r + O(d^2 n / r)$; setting $r = d \sqrt{n}$ makes this $O(r)$.
Total expected running time: $T(n) \le 2d\, T(d \sqrt{n}) + O(d^2 n)$.
Result: $O\big((\log n)^{\log d} \cdot (\text{simplex time})\big) + O(d^2 n)$.
This can be improved to $O(d^d + d^2 n)$, and further to $O(d^{\sqrt{d}} + d^2 n)$ using [Kalai, Matousek-Sharir-Welzl].

References
- Motwani and Raghavan, Randomized Algorithms, Chapter 9.10.
- Michael Goldwasser, A Survey of Linear Programming in Randomized Subexponential Time.
- Piotr Indyk's course at MIT, Geometric Computation.