Chapter 4. How fast is the simplex method?

Efficiency of an algorithm: measured by its running time (number of unit operations) as a function of the problem size.

Size of a problem: the amount of information (size of storage, length of encoding) needed to represent the problem in a computer, using the two alphabets 0 and 1.

Ex) positive integer x : about ⌈log₂(x + 1)⌉ bits
    rational number p/q : size of p plus size of q
    m x n matrix A : roughly mn times the size of its largest entry
(⌈a⌉ means the smallest integer not less than a, e.g. ⌈1.5⌉ = 2, ⌈-1.5⌉ = -1.)

The size of an LP is a function of m, n, and log u, where u is the largest number (in absolute value) appearing in the LP, assuming the data are given as integers.
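As an illustration, the following Python sketch computes these size measures. The exact constants vary from one textbook to another, so the formulas below (and the function names, which are mine) are one reasonable convention rather than the precise definition used in these notes.

```python
from fractions import Fraction
from math import ceil, log2

def size_int(x: int) -> int:
    """Bits to encode a positive integer x: roughly ceil(log2(x + 1))."""
    return ceil(log2(x + 1))

def size_rational(r: Fraction) -> int:
    """A rational p/q is encoded by encoding p and q separately."""
    return size_int(abs(r.numerator)) + size_int(r.denominator)

def size_matrix(A) -> int:
    """An m x n integer matrix: sum of the sizes of its entries
    (at most m*n times the size of the largest entry)."""
    return sum(size_int(abs(a)) for row in A for a in row)

print(size_int(1000))                       # 10 bits
print(size_rational(Fraction(3, 7)))        # 2 + 3 = 5 bits
print(size_matrix([[1, 100], [10, 1000]]))  # 1 + 7 + 4 + 10 = 22 bits
```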
Running time: worst-case viewpoint.

An algorithm is considered efficient if its worst-case running time is bounded from above by a polynomial function of the problem size (for large instances only; we ignore some irregularities on small instances). Such an algorithm is called a polynomial time algorithm.

[Figure: running time t plotted against the size of the problem encoding]
Why a polynomial function? Suppose each unit operation takes 1 second, and compare an algorithm that performs n^2 operations with one that performs 2^n operations on an instance of size n:

        n          10           20            50                    60
      n^2       100 sec      400 sec      2,500 sec       3,600 sec (1 hour)
      2^n     ~1,000 sec   ~10^6 sec    ~35.7 million      ~366 million
                           (~12 days)     years              centuries

A polynomial running time stays manageable as instances grow; an exponential one quickly becomes hopeless.

One iteration of the simplex method takes time polynomial in the input size. Hence, if the number of iterations were also bounded by a polynomial in the input length for every LP, the simplex method would be a polynomial time algorithm.
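The entries above are easy to reproduce. A short Python sketch (mine, not from the notes) that prints the same comparison:

```python
# Compare n^2 vs 2^n unit operations, at 1 second per operation.
SECONDS_PER_DAY = 24 * 3600
SECONDS_PER_YEAR = 365 * SECONDS_PER_DAY

def pretty(seconds):
    """Render a duration in the largest convenient unit."""
    if seconds < SECONDS_PER_DAY:
        return f"{seconds:,.0f} sec"
    if seconds < SECONDS_PER_YEAR:
        return f"{seconds / SECONDS_PER_DAY:,.1f} days"
    return f"{seconds / SECONDS_PER_YEAR:,.1f} years"

for n in (10, 20, 50, 60):
    print(f"n = {n:2d}:  n^2 -> {pretty(n ** 2):>12}   2^n -> {pretty(2 ** n):>20}")
```

For n = 50 and n = 60 the 2^n row comes out to roughly 35.7 million years and 36.6 billion years (366 million centuries), matching the table.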
Empirically, the number of iterations is O(m) and O(log n).
(A function f(s) is called O(g(s)) if there exist a positive constant c and a positive integer s' such that f(s) <= c g(s) whenever s >= s'.)

Bad counterexample: Klee and Minty (1972)

  maximize    Σ_{j=1..n} 10^{n-j} x_j
  subject to  2 Σ_{j=1..i-1} 10^{i-j} x_j + x_i ≤ 100^{i-1}   (i = 1, ..., n)
              x_j ≥ 0                                         (j = 1, ..., n)

If we use the largest coefficient rule, the number of iterations is 2^n - 1, hence the running time is not polynomial.
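A small Python sketch (the helper name is mine, not from the notes) that builds the data (c, A, b) of this LP for a given n, following the formulation above:

```python
import numpy as np

def klee_minty(n: int):
    """Return (c, A, b) for  max c'x  s.t.  Ax <= b, x >= 0
    in the Klee-Minty form written above."""
    c = np.array([10.0 ** (n - j) for j in range(1, n + 1)])
    A = np.zeros((n, n))
    b = np.array([100.0 ** (i - 1) for i in range(1, n + 1)])
    for i in range(1, n + 1):
        for j in range(1, i):
            A[i - 1, j - 1] = 2 * 10.0 ** (i - j)
        A[i - 1, i - 1] = 1.0
    return c, A, b

c, A, b = klee_minty(3)
print(c)   # objective coefficients: 100, 10, 1
print(A)   # constraint rows: [1 0 0], [20 1 0], [200 20 1]
print(b)   # right-hand sides: 1, 100, 10000
```

For n = 3 this reproduces the instance discussed next.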
Ex) n = 3: Note that m = n, and the feasible region is given approximately by 0 ≤ x_1 ≤ 1, 0 ≤ x_2 ≤ 100, 0 ≤ x_3 ≤ 10,000. Hence it is an elongated, skewed hypercube with 2^n extreme points. (Note that we need 3 equations, i.e. tight constraints, to define an extreme point in this example; only one of the lower- and upper-bound constraints for each variable can be chosen to define an extreme point, hence a total of 2^n = 8 extreme points exist.) The simplex method with the largest coefficient rule searches all the extreme points until it finds the optimal solution, but the largest increase rule finds the optimal solution in one iteration (see the sketch below).
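To see this behaviour concretely, here is a minimal tableau-simplex sketch using the largest-coefficient (Dantzig) entering rule. The code and its names are mine, not from the notes; it assumes a problem of the form max c'x, Ax ≤ b, x ≥ 0 with b ≥ 0, so that the all-slack basis is feasible.

```python
import numpy as np

def simplex_largest_coefficient(c, A, b, tol=1e-9):
    """Tableau simplex for  max c'x  s.t.  Ax <= b, x >= 0  (with b >= 0).
    Entering column: largest reduced cost (Dantzig's largest coefficient rule).
    Returns (optimal objective value, number of pivots)."""
    m, n = A.shape
    T = np.hstack([A, np.eye(m), b.reshape(-1, 1)]).astype(float)  # constraint rows
    z = np.concatenate([c, np.zeros(m + 1)]).astype(float)         # reduced costs; z[-1] = -objective
    pivots = 0
    while True:
        j = int(np.argmax(z[:-1]))              # largest-coefficient rule
        if z[j] <= tol:
            return float(-z[-1]), pivots        # no improving column: optimal
        col = T[:, j]
        ratios = np.full(m, np.inf)
        ratios[col > tol] = T[col > tol, -1] / col[col > tol]
        i = int(np.argmin(ratios))              # ratio test picks the leaving row
        T[i] /= T[i, j]                         # pivot on (i, j)
        for k in range(m):
            if k != i:
                T[k] -= T[k, j] * T[i]
        z -= z[j] * T[i]
        pivots += 1

# Klee-Minty data for n = 3 (the instance above).
c = np.array([100.0, 10.0, 1.0])
A = np.array([[  1.0,  0.0, 0.0],
              [ 20.0,  1.0, 0.0],
              [200.0, 20.0, 1.0]])
b = np.array([1.0, 100.0, 10000.0])

value, pivots = simplex_largest_coefficient(c, A, b)
print(value, pivots)   # 10000.0 7  ->  2^3 - 1 = 7 pivots
```

Replacing the entering rule with the largest-increase rule (pick the column whose pivot raises the objective the most) would, as noted above, bring x_3 into the basis first and reach the optimum x_3 = 10,000 in a single pivot.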
[Figure: geometric view of the Klee-Minty cube under the largest increase rule]
Alternative pivoting rules:
Largest increase rule: exponential counterexample by Jeroslow (1973).
Bland's rule: exponential counterexample by Avis and Chvatal (1978).

Hence, so far, no variant of the simplex method is known to be a polynomial time algorithm, although it performs well in practice.

First polynomial time algorithm for LP: the ellipsoid method (1979) by L. G. Khachian. In practice it is much inferior to the simplex method, but it has important theoretical implications for determining the computational complexity of various optimization problems.

Another polynomial time algorithm: the interior point method of N. Karmarkar (1984). It is better than the simplex method in many cases in practice, exists in many versions, and is based on ideas from nonlinear programming.
The interior point method will be mentioned briefly when we study the complementary slackness theorem.

Comparing the simplex and interior point methods: the interior point method is generally fast, especially for large problems, but the simplex method is competitive on some problems, and recent developments in the dual simplex algorithm make solving large LPs manageable. In addition, the simplex method is effective when we solve an LP again after some changes in the data (reoptimization), a capability that is quite important when solving integer programming problems; little progress has been made for interior point algorithms in this respect. Recently, interior point methods have also been applied to some nonlinear programming problems (convex programs) with successful results.