Linear Programming Maximize c^T x subject to Ax ≤ b, x ≥ 0. Worst-case polynomial time algorithms for linear programming: 1. The ellipsoid algorithm (Khachiyan, 1979). 2. Interior point algorithms (Karmarkar, 1984).
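As a concrete illustration (not from the original slides), here is a minimal sketch of solving a small LP of exactly this form with SciPy; the data are made up, and since `scipy.optimize.linprog` minimizes, the objective is negated to maximize.

```python
import numpy as np
from scipy.optimize import linprog

# Maximize c^T x subject to A x <= b, x >= 0 (made-up example data).
c = np.array([3.0, 2.0])
A = np.array([[1.0, 1.0],
              [2.0, 1.0]])
b = np.array([4.0, 6.0])

# linprog minimizes, so pass -c; the default bounds already enforce x >= 0.
res = linprog(-c, A_ub=A, b_ub=b, method="highs")
print("optimal x:", res.x, "optimal value:", -res.fun)
```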

Worst-case efficient algorithms The simplex algorithm runs in worst-case exponential time (as a function of the size of the input). We want an algorithm running in worst-case polynomial time (as a function of the size of the input). What exactly do we mean by this?

Models of computation Model 1: Our computer holds exact real values. The input is given by some real matrices. The size of the input is the number of entries in the matrices. We perform a real arithmetic operation in each step of computation. The output is the exact real result.

Models of computation Model 2: Our computer holds bits (bytes, words). The input is given by rational matrices. The size of the input is the total number of bits in the matrices (each number being described in binary notation). We perform some logical bit operation in each step of computation (or word operations, but we “charge” for each bit operation). The output is the exact rational result.
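To make the Model 2 notion of input size concrete, here is a small illustrative Python helper (an assumption of this write-up, not from the slides) that measures the encoding length of a rational matrix as the total number of bits of numerators and denominators; the exact constants are irrelevant for polynomiality.

```python
from fractions import Fraction

def bit_size(q: Fraction) -> int:
    """Bits needed to write q = p/r in binary, plus a sign bit."""
    return 1 + abs(q.numerator).bit_length() + q.denominator.bit_length()

def encoding_size(matrix) -> int:
    """Total number of bits used to encode a matrix of Fractions."""
    return sum(bit_size(entry) for row in matrix for entry in row)

A = [[Fraction(1, 3), Fraction(-5, 2)],
     [Fraction(7, 1), Fraction(0, 1)]]
print(encoding_size(A))
```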

Model 1 vs. Model 2 Model 1 is closer to the way we usually think about algorithms operating with real numbers. It is a very useful abstraction. It is usually easier to design an algorithm for Model 1. Model 2 is closer to reality. Model 1 cannot be implemented in a 100% faithful way. Model 2 can (and is). We know a polynomial time algorithm for linear programming in one of the models but not the other. Which one? We know a polynomial time algorithm in Model 2 but not in Model 1!

Do we have time to report the output? For a rational instance with an optimal solution, there is a rational optimal solution. Suppose the input coefficients to an LP instance are rational numbers, given as fractions in binary notation. Is the number of bits in the description of an optimal basic feasible solution polynomially bounded in the number of bits in the input?

Do we have time to report the output? Fundamental theorem of Linear Programming: If there is an optimal solution, there is a basic optimal solution. Claim: The size of the encoding of a basic feasible solution is polynomial in the size of the input.

Proof
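The formulas of this slide did not survive extraction; the following is a standard argument (a sketch, not necessarily the one given in the original lecture). A basic feasible solution is determined by $B x_B = b$, $x_N = 0$, where $B$ is a nonsingular $m \times m$ submatrix of the constraint matrix. After clearing denominators we may assume all entries of $B$ and $b$ are integers of absolute value at most $2^L$, where $L$ is polynomial in the input size. By Cramer's rule,
$$ (x_B)_i = \frac{\det B_i}{\det B}, $$
where $B_i$ is $B$ with its $i$-th column replaced by $b$. By Hadamard's inequality, $|\det B| \le \prod_{j=1}^{m} \lVert B_{\cdot j} \rVert_2 \le (\sqrt{m}\, 2^L)^m$, and the same bound holds for $|\det B_i|$. Hence the numerator and denominator of every component have $O(m \log m + mL)$ bits, which is polynomial in the size of the input.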

Do we have time to report the output?

We have time to report the output!

Worst-case polynomial time algorithms for Linear Programming The Ellipsoid algorithm (Khachiyan, 1979). Theoretical breakthrough. Algorithm far from being efficient in practice. Spurred new research into LP algorithms. Interior Point algorithms (Karmarkar, 1984). Algorithm efficient in practice. Tons of follow-up research and many interior point algorithms developed. The best of them beat the simplex algorithm.

Ellipsoid algorithm Idea of algorithm: Enclose (possibly a subset of) all possible feasible points within a large ball (a special case of an ellipsoid). Test if the center of the ellipsoid is feasible. If not, a hyperplane through the center splits the ellipsoid into two halves, with all possible feasible points in one of them. Find the smallest ellipsoid enclosing this half-ellipsoid, and repeat.

Finding a half-ellipsoid

Finding the new ellipsoid
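The formulas on this slide were lost in extraction; the standard central-cut update (which may differ in presentation from the original slide) is as follows. Write the current ellipsoid as $E = \{x : (x-z)^T P^{-1}(x-z) \le 1\}$ with center $z$ and positive definite $P$, and let the cut through the center be the half-space $\{x : a^T x \le a^T z\}$. The smallest ellipsoid containing $E \cap \{x : a^T x \le a^T z\}$ is given by
$$ \tilde a = \frac{P a}{\sqrt{a^T P a}}, \qquad z' = z - \frac{1}{n+1}\,\tilde a, \qquad P' = \frac{n^2}{n^2-1}\Big(P - \frac{2}{n+1}\,\tilde a \tilde a^T\Big), $$
and one shows $\operatorname{vol}(E') \le e^{-1/(2(n+1))} \operatorname{vol}(E)$, so the volume shrinks by a constant factor every $O(n)$ iterations.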

Why this works

Technical issue

Ellipsoids

The algorithm
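Putting the pieces together, here is a minimal illustrative Python sketch of the central-cut ellipsoid method for testing feasibility of $Ax \le b$; it is not numerically robust, and the starting radius `R` and iteration budget `max_iter` are assumptions supplied by the caller (in the analysis they come from volume bounds).

```python
import numpy as np

def ellipsoid_feasibility(A, b, R, max_iter):
    """Search for x with A x <= b inside the ball of radius R (a sketch).

    Returns a feasible point, or None if none was found within max_iter
    central-cut steps.  A real implementation must control round-off and
    handle feasible sets of tiny volume; this sketch does not.
    """
    n = A.shape[1]
    z = np.zeros(n)               # center of the current ellipsoid
    P = (R ** 2) * np.eye(n)      # ellipsoid {x : (x - z)^T P^{-1} (x - z) <= 1}
    for _ in range(max_iter):
        violated = np.nonzero(A @ z > b)[0]
        if violated.size == 0:
            return z              # the center satisfies all constraints
        a = A[violated[0]]        # a violated row gives the cut a^T x <= a^T z
        Pa = P @ a
        atilde = Pa / np.sqrt(a @ Pa)
        z = z - atilde / (n + 1)
        P = (n * n / (n * n - 1.0)) * (P - (2.0 / (n + 1)) * np.outer(atilde, atilde))
    return None

# Tiny usage example with made-up constraints: 2 <= x <= 3, 0 <= y <= 1.
A = np.array([[-1.0, 0.0], [1.0, 0.0], [0.0, -1.0], [0.0, 1.0]])
b = np.array([-2.0, 3.0, 0.0, 1.0])
print(ellipsoid_feasibility(A, b, R=10.0, max_iter=200))
```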

Implementation issues The algorithm involves taking a square root! Solution: Compute with limited precision. In the analysis, make each new ellipsoid slightly bigger to account for the introduced errors. Double the number of iterations to account for the larger volumes.

Unique feature of the Ellipsoid Algorithm We do not need an explicit description of the system of inequalities: We just need an efficient separation oracle. We can potentially solve systems with exponentially many inequalities! This is utilized in many (theoretical) applications of the Ellipsoid method.
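To make the separation-oracle point concrete, here is a toy made-up example: the polytope $\{x \in \mathbb{R}^n : \sum_{i \in S} x_i \le 1 \text{ for all } S \subseteq \{1,\dots,n\}\}$ has $2^n$ inequalities, yet separation is trivial because the most violated subset is exactly the set of positive coordinates.

```python
import numpy as np

def separation_oracle(x):
    """Separation oracle for {x : sum_{i in S} x_i <= 1 for every subset S}.

    Returns None if x satisfies all 2^n inequalities; otherwise returns a
    pair (a, rhs) describing a violated inequality a^T x <= rhs.
    """
    S = x > 0                      # most violated subset: the positive coordinates
    if x[S].sum() <= 1:
        return None
    return S.astype(float), 1.0

# The ellipsoid method only ever calls the oracle; it never enumerates
# the exponentially many constraints explicitly.
print(separation_oracle(np.array([0.3, -2.0, 0.4])))   # feasible -> None
print(separation_oracle(np.array([0.8, 0.7, -1.0])))   # violated -> (array, 1.0)
```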

Interior point algorithms Idea of algorithm: A polyhedron has a complex structure on its boundary, with exponentially many corners, edges, etc. Navigating along the boundary takes time. Well inside the polyhedron there is no such structure, and one can move directly towards the optimum point without obstacles.

The linear program and its dual.
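The slide's formulas were lost in extraction; a standard primal-dual pair in the form used by interior point methods (assumed here, not recovered from the slide) is
$$ \text{(P)}\;\; \min\, c^T x \;\text{ s.t. } Ax = b,\; x \ge 0, \qquad \text{(D)}\;\; \max\, b^T y \;\text{ s.t. } A^T y + s = c,\; s \ge 0, $$
with slack vector $s$; weak duality gives $c^T x - b^T y = x^T s \ge 0$ for any feasible pair.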

Conversion to barrier problem
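In the same notation (a standard formulation, assumed rather than recovered from the slide), the primal is replaced by the logarithmic-barrier problem
$$ \min_{x > 0}\; c^T x - \mu \sum_{i=1}^{n} \ln x_i \quad \text{s.t. } Ax = b, $$
for a parameter $\mu > 0$: the barrier blows up as any $x_i \to 0$, so the minimizer stays strictly inside the feasible region, and as $\mu \to 0$ it approaches an optimal solution of the LP.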

The central path
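Again in standard terms (assumed), the central path is the curve traced out by the barrier minimizers,
$$ x(\mu) = \operatorname*{arg\,min}_{Ax = b,\; x > 0} \Big( c^T x - \mu \sum_i \ln x_i \Big), \qquad \mu > 0, $$
and path-following methods track $x(\mu)$ approximately while driving $\mu$ to $0$.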

Lagrange multipliers
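A standard derivation (assumed, since the slide's math was lost): introducing multipliers $y$ for $Ax = b$, the Lagrangian of the barrier problem is
$$ L(x, y) = c^T x - \mu \sum_i \ln x_i - y^T (Ax - b), $$
and $\nabla_x L = 0$ gives $c - \mu X^{-1}\mathbf{1} - A^T y = 0$ with $X = \operatorname{diag}(x)$. Writing $s = \mu X^{-1}\mathbf{1}$ (so $s > 0$ and $x_i s_i = \mu$ for all $i$) yields the conditions
$$ Ax = b, \qquad A^T y + s = c, \qquad x_i s_i = \mu \;(i = 1,\dots,n), \qquad x, s > 0, $$
which are the KKT conditions of the LP with complementarity $x_i s_i = 0$ relaxed to $x_i s_i = \mu$.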

Result

Existence of solution

Path following algorithm
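Here is a minimal illustrative Python sketch of one primal-dual path-following step (not necessarily the exact algorithm from the slides); it assumes a strictly feasible triple $(x, y, s)$ is already available and solves the Newton system for the perturbed KKT conditions as a dense block matrix.

```python
import numpy as np

def path_following_step(A, x, y, s, sigma=0.5):
    """One damped Newton step towards the central path (teaching sketch).

    Assumes x > 0, s > 0 and that (x, y, s) is strictly feasible, so the
    residuals A x - b and A^T y + s - c vanish; sigma in (0, 1) is the
    centering parameter deciding how much the duality measure is reduced.
    """
    m, n = A.shape
    mu = x @ s / n                          # current duality measure
    # Newton system:  A dx = 0,  A^T dy + ds = 0,  S dx + X ds = sigma*mu*e - XSe
    K = np.block([
        [A,                np.zeros((m, m)), np.zeros((m, n))],
        [np.zeros((n, n)), A.T,              np.eye(n)       ],
        [np.diag(s),       np.zeros((n, m)), np.diag(x)      ],
    ])
    rhs = np.concatenate([np.zeros(m), np.zeros(n), sigma * mu - x * s])
    d = np.linalg.solve(K, rhs)
    dx, dy, ds = d[:n], d[n:n + m], d[n + m:]
    # Step length keeping x and s strictly positive (fraction-to-boundary rule).
    alpha = 1.0
    for v, dv in ((x, dx), (s, ds)):
        neg = dv < 0
        if neg.any():
            alpha = min(alpha, 0.99 * np.min(-v[neg] / dv[neg]))
    return x + alpha * dx, y + alpha * dy, s + alpha * ds
```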

Key points 1. Balance making progress against making sure the boundary can be avoided at suboptimal solutions. 2. Set up a system of equations and approximate it by dropping the non-linear terms. (This is equivalent to Newton's method.)
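Spelling out point 2 for a strictly feasible iterate (a standard derivation, assumed here): aiming at the target $x_i s_i = \sigma\mu$ with $\sigma \in (0,1)$ and current measure $\mu = x^T s / n$, substitute $x + \Delta x$, $y + \Delta y$, $s + \Delta s$ into $Ax = b$, $A^T y + s = c$, $XS\mathbf{1} = \sigma\mu\mathbf{1}$ and drop the quadratic term $\Delta X\,\Delta S\,\mathbf{1}$. This gives the linear system
$$ A\,\Delta x = 0, \qquad A^T \Delta y + \Delta s = 0, \qquad S\,\Delta x + X\,\Delta s = \sigma\mu\mathbf{1} - XS\mathbf{1}, $$
where $X = \operatorname{diag}(x)$ and $S = \operatorname{diag}(s)$; this is exactly one Newton step for the perturbed KKT system, and it is the system solved in the sketch above.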

Convergence

Convergence theorem
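A typical statement (assumed here, since the slide's content was lost): for short-step path following with $\sigma = 1 - \delta/\sqrt{n}$ for a suitable constant $\delta$, each iteration keeps the iterate in a neighborhood of the central path and reduces the duality measure $\mu$ by the factor $1 - \delta/\sqrt{n}$, so $O(\sqrt{n}\,\log(\mu_0/\varepsilon))$ iterations suffice to bring the duality gap $x^T s = n\mu$ below $n\varepsilon$.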