OR-1 Chapter 4. How fast is the simplex method


Efficiency of an algorithm : measured by running time (the number of unit operations) as a function of the problem size.

Size of a problem : the amount of information (storage space, encoding length) needed to represent the problem in a computer, using the two-letter alphabet {0, 1}.
Ex) positive integer x : ⌈log₂(x+1)⌉ bits; rational number p/q : size(p) + size(q); m × n matrix A : the sum of the sizes of its mn entries.
( ⌈a⌉ means the smallest integer not less than a, e.g. ⌈1.5⌉ = 2, ⌈−1.5⌉ = −1 )

Size of an LP is a function of m, n, and log u, where u is the largest number (in absolute value) appearing in the LP, assuming the data are given in integers.
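As a small sketch (the helper names are mine, not from the slides), these encoding lengths can be computed directly; for an integer x ≠ 0 the binary length ⌊log₂|x|⌋ + 1 is exactly Python's `bit_length()`:

```python
def size_int(x):
    """Bits needed to write |x| in binary (at least 1; sign bit ignored)."""
    return max(1, abs(x).bit_length())   # = floor(log2 |x|) + 1 for x != 0

def size_rational(p, q):
    """Encoding length of p/q: numerator size plus denominator size."""
    return size_int(p) + size_int(q)

def size_matrix(A):
    """Encoding length of a matrix: sum of the sizes of its m*n entries."""
    return sum(size_int(a) for row in A for a in row)
```

For example, `size_int(16)` is 5 (binary 10000), so the LP size grows with log u rather than with u itself.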

Running time : worst-case viewpoint.
An algorithm is considered efficient if its worst-case running time is bounded from above by a polynomial function of the problem size (for large instances only; we ignore some irregularities for small instances). Such an algorithm is called a polynomial-time algorithm.
[Figure: running time t plotted against the size of the problem encoding]

Why a polynomial function? Suppose the running time is 1 μsec when n = 1. Then:

        n = 10       n = 20      n = 50        n = 60
  n²    100 μsec     400 μsec    2500 μsec     3600 μsec
  2ⁿ    1000 μsec    1 sec       35.7 years    366 centuries

Suppose one iteration of the simplex method takes time polynomial in the input size (which is true). Then, if the number of iterations were a polynomial function of the input length for every LP, the simplex method would be a polynomial-time algorithm.
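The table can be reproduced in a few lines, assuming one unit operation per microsecond (the column values n = 10, 20, 50, 60 are inferred from the entries 100, 400, 2500, 3600 μsec):

```python
SECONDS_PER_YEAR = 3600 * 24 * 365

def run_time_sec(ops):
    """Wall-clock seconds for `ops` unit operations at 1 microsecond each."""
    return ops * 1e-6

for n in (10, 20, 50, 60):
    poly = run_time_sec(n ** 2)    # stays in the microsecond range
    expo = run_time_sec(2 ** n)    # explodes once n passes roughly 40
    print(f"n={n}: n^2 -> {poly:.6f} s, 2^n -> {expo / SECONDS_PER_YEAR:.1f} years")
```

Running this confirms the contrast: 2⁵⁰ operations take about 35.7 years and 2⁶⁰ about 366 centuries, while n² never leaves the sub-millisecond range.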

Empirically, the number of iterations is O(m) and O(log n).
( A function f(s) is called O(g(s)) if there exist a positive constant c and a positive integer s′ such that f(s) ≤ c·g(s) whenever s ≥ s′. )

Bad counterexample : Klee and Minty (1972)

  max  Σⱼ₌₁ⁿ 10ⁿ⁻ʲ xⱼ
  s.t. 2 Σⱼ₌₁ⁱ⁻¹ 10ⁱ⁻ʲ xⱼ + xᵢ ≤ 100ⁱ⁻¹,   i = 1, …, n
       xⱼ ≥ 0,   j = 1, …, n

If we use the largest-coefficient rule, the number of iterations is 2ⁿ − 1, hence the running time is not polynomial.
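One standard form of the Klee–Minty family (Chvátal's version, consistent with the n = 3 bounds on the next slide) can be generated programmatically; a minimal sketch:

```python
def klee_minty(n):
    """Build the Klee-Minty LP: max sum_j 10^(n-j) x_j subject to
    2 * sum_{j<i} 10^(i-j) x_j + x_i <= 100^(i-1), x >= 0 (1-based indices).
    Returns (c, A, b) for the form max c'x s.t. Ax <= b, x >= 0."""
    c = [10 ** (n - j) for j in range(1, n + 1)]
    A = [[2 * 10 ** (i - j) if j < i else (1 if j == i else 0)
          for j in range(1, n + 1)] for i in range(1, n + 1)]
    b = [100 ** (i - 1) for i in range(1, n + 1)]
    return c, A, b
```

For n = 3 this produces objective (100, 10, 1) and right-hand sides (1, 100, 10000), matching the example below.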

ex) n = 3 :

  max  100x1 + 10x2 + x3
  s.t.   x1                  ≤ 1
        20x1 +  x2           ≤ 100
       200x1 + 20x2 + x3     ≤ 10000
        x1, x2, x3 ≥ 0

Note that m = n, and the feasible region is given approximately as 0 ≤ x1 ≤ 1, 0 ≤ x2 ≤ 100, 0 ≤ x3 ≤ 10000. Hence it is an elongated, skewed hypercube with 2ⁿ extreme points. (Note that we need 3 equations to define an extreme point in this example. Only one of the lower- and upper-bound constraints for each variable can be chosen to define an extreme point, hence a total of 2ⁿ extreme points exist.) The simplex method with the largest-coefficient rule searches all the extreme points before it finds the optimal solution. But the largest-increase rule finds the optimal solution in one iteration.
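The 2ⁿ − 1 pivot count can be checked directly. Below is a minimal dictionary-form simplex with the largest-coefficient (Dantzig) entering rule, using exact rational arithmetic (a sketch for this experiment, not a production solver), run on the n = 3 instance above:

```python
from fractions import Fraction

def simplex_pivots(c, A, b):
    """Dictionary simplex for max c'x s.t. Ax <= b, x >= 0 (with b >= 0, so
    the origin is a feasible starting point), using the largest-coefficient
    entering rule. Returns (number of pivots, optimal objective value)."""
    m, n = len(A), len(c)
    # Row i encodes: (basic var of row i) = T[i][0] - sum_j T[i][j+1] * (nonbasic j)
    T = [[Fraction(b[i])] + [Fraction(v) for v in A[i]] for i in range(m)]
    z = [Fraction(0)] + [Fraction(v) for v in c]   # z = z[0] + sum_j z[j+1]*(nonbasic j)
    pivots = 0
    while True:
        e = max(range(n), key=lambda j: z[j + 1])  # largest-coefficient rule
        if z[e + 1] <= 0:
            return pivots, z[0]                    # no improving variable: optimal
        rows = [i for i in range(m) if T[i][e + 1] > 0]
        r = min(rows, key=lambda i: T[i][0] / T[i][e + 1])   # minimum ratio test
        piv = T[r][e + 1]
        T[r][e + 1] = Fraction(1)                  # leaving variable takes slot e
        R = [v / piv for v in T[r]]
        for i in range(m):
            if i != r:
                t, T[i][e + 1] = T[i][e + 1], Fraction(0)
                T[i] = [T[i][k] - t * R[k] for k in range(n + 1)]
        t, z[e + 1] = z[e + 1], Fraction(0)
        z = [z[0] + t * R[0]] + [z[k] - t * R[k] for k in range(1, n + 1)]
        T[r] = R
        pivots += 1

# n = 3 Klee-Minty instance from the slide
c = [100, 10, 1]
A = [[1, 0, 0], [20, 1, 0], [200, 20, 1]]
b = [1, 100, 10000]
pivots, value = simplex_pivots(c, A, b)
print(pivots, value)   # 7 pivots = 2^3 - 1, optimal value 10000
```

Tracing the run shows the method visiting all 8 extreme points of the distorted cube, exactly as the slide describes; the largest-increase rule would instead bring x3 into the basis first and stop after one pivot.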

Geometric view :
[Figure: the Klee–Minty cube for n = 3; the largest-increase rule reaches the optimal extreme point in one pivot]

Alternative rules :
- Largest-increase rule : counterexample by Jeroslow (1973)
- Bland's rule : counterexample by Avis and Chvátal (1978)
Hence, to date, the simplex method is not known to be a polynomial-time algorithm under any pivoting rule, although it performs very well in practice.

First polynomial-time algorithm for LP : the ellipsoid method (1979) by L. G. Khachian. Practically it is much inferior to simplex, but it has important theoretical implications for determining the computational complexity of some optimization problems.

Another polynomial-time algorithm : the interior point method of N. Karmarkar (1984). Better than simplex in many cases in practice; many variants exist. Based on ideas from nonlinear programming.

The interior point method will be briefly mentioned when we study the complementary slackness theorem.

Comparing the simplex and interior point methods : interior point methods are generally fast, especially for large problems, but simplex is competitive on some problems, and recent developments in the dual simplex algorithm make solving large LPs manageable. In addition, simplex is effective when we re-solve an LP after making some changes in the data (reoptimization). Such a capability is quite important when solving integer programming problems, but little progress has been made for interior point algorithms in this respect. Recently, interior point methods have also been applied to some nonlinear programming problems (convex programs) with successful results.