LINEAR PROGRAMMING

Modeling a problem may look boring, and a distraction from studying the abstract form! However, modeling is very important: for your motivation, and for your training in how to use the theory on a real application! Read the textbook's modeling of an LP problem (Ch 19, introductory material).

4/12/2015 linear programming

Linear Programming: Standard Form

Output: find values for n variables x1, x2, …, xn.
Input: maximize Σ_{j=1..n} cj xj (the objective function)
subject to: Σ_{j=1..n} aij xj ≤ bi, for i = 1, …, m (m linear constraints), and
xj ≥ 0 for all j = 1, …, n (all variables are non-negative).
The constant coefficients are cj, aij, and bi. Problem size: (n, m).

Linear Programming: Standard Form

Example 1: Maximize (2x1 - 3x2 + 3x3)
Three constraints:
x1 + x2 - x3 ≤ 7
-x1 - x2 + x3 ≤ -7
x1 - 2x2 + 2x3 ≤ 4
x1, x2, x3 ≥ 0

Non-standard to Standard Form

Note: in the standard form the inequalities must be non-strict ('≤' rather than '<'), and all constraints are linear (each variable appears with power 0 or 1).
1) The objective function minimizes as opposed to maximizes.
Example 2.1: minimize (-2x1 + 3x2)
Action: change the signs of the coefficients.
Example 2.1: maximize (2x1 - 3x2)
2) Some variables need not be non-negative.
Example 2.2: constraints: x1 + x2 = 7, x1 - 2x2 ≤ 4, x1 ≥ 0, no constraint on x2.
Action: replace each occurrence of an unconstrained variable xj with a new expression (xj' - xj''), and add two new constraints xj', xj'' ≥ 0.
Example 2.2: constraints become: x1 + x2' - x2'' = 7, x1 - 2x2' + 2x2'' ≤ 4, x1, x2', x2'' ≥ 0.
The number of variables in the problem may at most double, from n to 2n, a polynomial-size increase.
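
These two transformations can be sketched in a few lines of Python (the coefficient-list representation and the function names are ours, for illustration only):

```python
# Transformation 1: turn "minimize c.x" into "maximize (-c).x".
def min_to_max(c):
    return [-cj for cj in c]

# Transformation 2: replace a free variable x_j by x_j' - x_j''
# (both non-negative): the objective and every constraint row gain
# a negated copy of column j, inserted right after it.
def split_free_variable(c, rows, j):
    c2 = c[:j + 1] + [-c[j]] + c[j + 1:]
    rows2 = [r[:j + 1] + [-r[j]] + r[j + 1:] for r in rows]
    return c2, rows2

# Example 2.1: minimize (-2x1 + 3x2)  ->  maximize (2x1 - 3x2)
print(min_to_max([-2, 3]))            # [2, -3]
# Example 2.2: x2 is free; split it into x2' - x2''
print(split_free_variable([2, -3], [[1, 1], [1, -2]], 1))
```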

Non-standard to Standard Form

3) There may be equality constraints.
Example 2.3: constraint: x1 + x2 - x3 = 7
Action: replace each equality constraint with two new constraints, one with ≤ and one with ≥, with the same left- and right-hand sides.
Example 2.3: two new constraints: x1 + x2 - x3 ≤ 7, and x1 + x2 - x3 ≥ 7.
The total number of constraints may at most double, from m to 2m, a polynomial-size increase.

Non-standard to Standard Form

4) There may be linear constraints involving '≥' rather than '≤' as required by the standard form.
Example 2.4: constraint: x1 + x2 - x3 ≥ 7
Action: change the signs of the coefficients (as in case 1).
Example 2.4: new constraint replacing the old one: -x1 - x2 + x3 ≤ -7.
(Note that Example 2 is now the same as Example 1.)
Note that equivalent forms of an LP have the same solution as the original one.
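
Transformations 3 and 4 are equally mechanical; a small sketch, again with an illustrative coefficient-list representation of one constraint row:

```python
# Transformation 3: an equality row becomes a pair of <= rows.
def equality_to_inequalities(row, b):
    # row.x = b  becomes  row.x <= b  and  (-row).x <= -b
    return [(row, b), ([-a for a in row], -b)]

# Transformation 4: a >= row becomes <= by negating both sides.
def ge_to_le(row, b):
    return [-a for a in row], -b

# Example 2.3: x1 + x2 - x3 = 7
print(equality_to_inequalities([1, 1, -1], 7))
# Example 2.4: x1 + x2 - x3 >= 7  ->  -x1 - x2 + x3 <= -7
print(ge_to_le([1, 1, -1], 7))    # ([-1, -1, 1], -7)
```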

Simplex algorithm, Khachien’s algorithm, Karmarkar’s algorithm solves LP. First one is worst-case exp-time algorithm, other two are poly-time algorithms. Simplex: A non-null region in the Cartesian space over the variables, such that any point within the region satisfies the constraints. Algorithms for LP 4/12/20157linear programming x2 x1 => Five linear constraints Simplex

Algorithms for LP

Simplex: a non-null region in the Cartesian space over the variables, such that any point within the region satisfies the constraints.
The simplex algorithm iteratively moves from one corner point of the simplex to another, trying to increase the value of the objective function.
Solution: where the objective function z has its largest value.

[Figure: the objective function drawn as a line over the simplex of five linear constraints; the algorithm pushes the line z further away from the origin]

Algorithms for LP

Solution: where the objective function z has its largest value.
The optimum value of the objective function is attained on the boundary of the simplex (at a corner point, or at infinitely many points along a hyperplane).

[Figure: the algorithm moves from corner to corner of the simplex until the optimum value of z is reached]

Algorithms for LP

When the simplex is unbounded, the objective function may grow without limit: there is no finite solution, or the solution is at an infinite point.

[Figure: four linear constraints forming an unbounded simplex]

Algorithms for LP

When the constraints are unsatisfiable, the simplex does not exist (it is a null region): there is no solution, because the constraints are conflicting.

[Figure: four conflicting linear constraints; no simplex exists]

Slack Form of a Standard Form: for Simplex Algorithm

The simplex algorithm works with slack forms (defined below) of the input LP; the slack form changes with each iteration. A slack form is equivalent to the input, i.e., it has the same solution.
Slack form of a standard-form LP:
(1) Create a variable z for the objective function Σ_{j=1..n} cj xj:
z = v + Σ_{j=1..n} cj xj, where v is a constant (in the initial slack form, v = 0).
(2) For each linear constraint Σ_{j=1..n} aij xj ≤ bi (1 ≤ i ≤ m), create an extra (slack) variable x_{n+i} and rewrite the constraint as an equation:
x_{n+i} = bi - Σ_{j=1..n} aij xj, for 1 ≤ i ≤ m.
(3) Now the only inequality constraints are over the variables, including the new ones (but not z): xj ≥ 0, for 1 ≤ j ≤ n+m.
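
The construction above can be coded directly as a sketch (the dict-based representation, mapping each basic variable's index to its equation, is our own choice for illustration):

```python
# Build the initial slack form of a standard-form LP:
#   z = v + sum_j c_j x_j,  and  x_{n+i} = b_i - sum_j a_ij x_j.
def initial_slack_form(c, A, b):
    n, m = len(c), len(b)
    N = set(range(1, n + 1))             # non-basic variables x1..xn
    B = set(range(n + 1, n + m + 1))     # basic (slack) variables
    v = 0                                # constant term of z, 0 initially
    z = {j: cj for j, cj in enumerate(c, 1)}
    # store each equation as (constant, {j: coefficient of x_j}),
    # with the a_ij already negated
    eqs = {n + 1 + i: (b[i], {j: -a for j, a in enumerate(A[i], 1)})
           for i in range(m)}
    return N, B, v, z, eqs

# The example used on the following slides:
N, B, v, z, eqs = initial_slack_form([3, 1, 2],
                                     [[1, 1, 3], [2, 2, 5], [4, 1, 2]],
                                     [30, 24, 36])
print(eqs[4])    # (30, {1: -1, 2: -1, 3: -3})  i.e. x4 = 30 - x1 - x2 - 3x3
```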

Example: Slack Form of a Standard Form

Iterations within Simplex Algorithm

An LP in slack form asks for a coordinate in the first quadrant where the value of z is maximum. Note that slack forms are non-standard.
The simplex algorithm transforms one slack form into another (i) until z cannot be increased any more, or (ii) it terminates when a solution cannot be found.
The variables on the left-hand side of the linear equations in (2) are called basic variables (B); those on the right-hand side are called non-basic variables (N).
Example:
The objective function becomes: z = 0 + 3x1 + x2 + 2x3
The linear constraints become equations:
x4 = 30 - x1 - x2 - 3x3
x5 = 24 - 2x1 - 2x2 - 5x3
x6 = 36 - 4x1 - x2 - 2x3
Variable constraints: x1, x2, x3, x4, x5, x6 ≥ 0
Two sets of variables: non-basic N = {x1, x2, x3}, basic B = {x4, x5, x6}

Iterations within Simplex Algorithm

Only non-basic variables appear in z. In the initial slack form, the original variables are the non-basic variables. So the solution we seek is the coordinate, over those variables, where z is maximum.
Example:
z = 0 + 3x1 + x2 + 2x3
Linear equations:
x4 = 30 - x1 - x2 - 3x3
x5 = 24 - 2x1 - 2x2 - 5x3
x6 = 36 - 4x1 - x2 - 2x3
Constraints: x1, x2, x3, x4, x5, x6 ≥ 0
Sets: N = {x1, x2, x3}, B = {x4, x5, x6}
The simplex algorithm shuffles variables between the sets N and B, exchanging one variable in N with one in B in each iteration, with the objective of increasing the value of z. This operation is the heart of the algorithm, and is called the pivot operation (described below).

Pivot Algorithm: Core of Simplex

z = 0 + 3x1 + x2 + 2x3
Linear equations:
x4 = 30 - x1 - x2 - 3x3
x5 = 24 - 2x1 - 2x2 - 5x3
x6 = 36 - 4x1 - x2 - 2x3
Constraints: x1, x2, x3, x4, x5, x6 ≥ 0
Sets: N = {x1, x2, x3}, B = {x4, x5, x6}
A basic solution is the coordinate obtained by setting all non-basic variables to 0; the basic variables are then determined by the equations. So, for this example, the basic solution is (x1 = 0, x2 = 0, x3 = 0, x4 = 30, x5 = 24, x6 = 36). This basic solution is feasible, since all variables are non-negative. At this point z = 0. The pivot operation's objective is to increase z.
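
A minimal sketch of computing, and checking the feasibility of, the basic solution, reusing an illustrative dict representation of the slack-form equations:

```python
# The basic solution: set every non-basic variable to 0; each basic
# variable then equals the constant term of its equation.
def basic_solution(N, eqs):
    x = {j: 0 for j in N}
    x.update({i: const for i, (const, _) in eqs.items()})
    return x

N = {1, 2, 3}
eqs = {4: (30, {1: -1, 2: -1, 3: -3}),
       5: (24, {1: -2, 2: -2, 3: -5}),
       6: (36, {1: -4, 2: -1, 3: -2})}
x = basic_solution(N, eqs)
print([x[j] for j in range(1, 7)])       # [0, 0, 0, 30, 24, 36]
feasible = all(val >= 0 for val in x.values())
print(feasible)                          # True
```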

Example of Pivoting

z = 0 + 3x1 + x2 + 2x3
Linear equations:
x4 = 30 - x1 - x2 - 3x3
x5 = 24 - 2x1 - 2x2 - 5x3
x6 = 36 - 4x1 - x2 - 2x3
Constraints: x1, x2, x3, x4, x5, x6 ≥ 0
Sets: N = {x1, x2, x3}, B = {x4, x5, x6}
A non-basic variable whose coefficient in the expression for z is positive can be increased to increase the value of z. In the example above, every non-basic variable is a candidate. We arbitrarily choose x1. This choice is called the entering variable xe of the pivot; our choice is xe = x1.
If the other non-basic variables hold their value at 0, as in the basic solution:
x1 can be increased up to 30 without violating the constraint on x4 (30/1),
x1 can be increased up to 12 without violating the constraint on x5 (24/2),
x1 can be increased up to 9 without violating the constraint on x6 (36/4).
So x6 is the most constraining basic variable. x6 is chosen as the leaving variable xl of the pivot: xl = x6.
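
The "how far can x1 grow" computation is the classical ratio test; a sketch over the same illustrative representation (the limits 30, 12, 9 match the slide):

```python
# Ratio test: with the other non-basic variables held at 0, how far can
# the entering variable x_e increase before some basic variable hits 0?
def ratio_limits(e, eqs):
    limits = {}
    for i, (const, coeffs) in eqs.items():
        a = coeffs.get(e, 0)
        if a < 0:               # x_i = const + a*x_e decreases as x_e grows
            limits[i] = const / -a
    return limits

eqs = {4: (30, {1: -1, 2: -1, 3: -3}),
       5: (24, {1: -2, 2: -2, 3: -5}),
       6: (36, {1: -4, 2: -1, 3: -2})}
lim = ratio_limits(1, eqs)      # entering variable x_e = x1
print(lim)                      # {4: 30.0, 5: 12.0, 6: 9.0}
leaving = min(lim, key=lim.get)
print(leaving)                  # 6  (x6 is the most constraining)
```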

Pivoting steps

The pivot exchanges xe (x1) and xl (x6) between the sets N and B. Write an equation for x1 in terms of x2, x3, and x6:
x6 = 36 - 4x1 - x2 - 2x3  =>  x1 = 9 - x2/4 - x3/2 - x6/4
Replace x1 with this new expression wherever else x1 appears. The new slack form for the example after this pivot operation is:
z = 27 + (1/4)x2 + (1/2)x3 - (3/4)x6   [now v = 27]
x1 = 9 - (1/4)x2 - (1/2)x3 - (1/4)x6
x4 = 21 - (3/4)x2 - (5/2)x3 + (1/4)x6
x5 = 6 - (3/2)x2 - 4x3 + (1/2)x6
x1, x2, x3, x4, x5, x6 ≥ 0
N = {x2, x3, x6}, B = {x1, x4, x5}
The basic solution after this pivot is (x1, x2, x3, x4, x5, x6) = (9, 0, 0, 21, 6, 0). z at this point is 27, greater than the 0 of the previous slack form. For the next pivot, the candidates for the entering variable are x2 and x3, which have positive coefficients.
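
The whole pivot step can be sketched as follows, using exact rational arithmetic via Fraction; the representation is illustrative, but the resulting slack form matches the one above:

```python
from fractions import Fraction as F

# One pivot: exchange entering variable e with leaving variable l, then
# substitute the new expression for x_e into z and every other equation.
# Equations are stored as eqs[i] = (constant, {j: coefficient of x_j}).
def pivot(N, B, v, z, eqs, e, l):
    const, coeffs = eqs.pop(l)
    d = -coeffs[e]                       # coefficient of x_e in row l, negated
    n0 = F(const) / d                    # x_e = n0 + sum_j nc[j] * x_j
    nc = {j: F(cj) / d for j, cj in coeffs.items() if j != e}
    nc[l] = F(-1) / d
    def subst(c0, cs):
        ce = cs.pop(e, 0)
        for j, cj in nc.items():
            cs[j] = cs.get(j, 0) + ce * cj
        return c0 + ce * n0, cs
    v, z = subst(v, dict(z))
    eqs = {i: subst(c0, dict(cs)) for i, (c0, cs) in eqs.items()}
    eqs[e] = (n0, nc)
    return (N - {e}) | {l}, (B - {l}) | {e}, v, z, eqs

# The slides' first pivot: x_e = x1, x_l = x6.
N, B, v, z, eqs = pivot(
    {1, 2, 3}, {4, 5, 6}, 0, {1: 3, 2: 1, 3: 2},
    {4: (30, {1: -1, 2: -1, 3: -3}),
     5: (24, {1: -2, 2: -2, 3: -5}),
     6: (36, {1: -4, 2: -1, 3: -2})},
    e=1, l=6)
print(v)           # 27
print(eqs[1])      # the equation x1 = 9 - x2/4 - x3/2 - x6/4
```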

Stopping of Pivoting Iterations in Simplex

z = 27 + (1/4)x2 + (1/2)x3 - (3/4)x6   [now v = 27]
For the next pivot, the candidates for the entering variable are x2 and x3, which have positive coefficients. If xe = x3 is chosen, then the leaving variable is xl = x5. The pivot is applied as before, exchanging x3 and x5 between N and B.
The simplex algorithm stops pivoting when none of the coefficients of the non-basic variables in z is positive. This indicates that the final solution has been found. The optimum value of the objective function of the original standard-form LP is the final value of v in z of this terminating slack form. The basic solution at that point provides the coordinate over the original non-basic variables (x1, x2, and x3, in our example) where this optimum value v is attained.
For the example above, the final basic solution is (8, 4, 0, 18, 0, 0), where v = 28. So, the optimum value of the objective function is 28, at (x1 = 8, x2 = 4, x3 = 0).
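
Putting the pieces together, a minimal complete simplex loop, sketched under the assumption that the initial basic solution is feasible (all bi ≥ 0); the entering variable is chosen by Bland's smallest-index rule, one standard anti-cycling tie-break:

```python
from fractions import Fraction as F

# Minimal simplex for: maximize c.x subject to A x <= b, x >= 0,
# assuming the initial basic solution is feasible.
def simplex(c, A, b):
    n, m = len(c), len(b)
    v = F(0)
    z = {j: F(cj) for j, cj in enumerate(c, 1)}
    eqs = {n + 1 + i: (F(b[i]), {j: F(-a) for j, a in enumerate(A[i], 1)})
           for i in range(m)}
    while True:
        pos = sorted(j for j, cj in z.items() if cj > 0)
        if not pos:
            break                              # optimal slack form reached
        e = pos[0]                             # entering variable (Bland's rule)
        best, l = None, None                   # ratio test -> leaving variable
        for i in sorted(eqs):
            const, coeffs = eqs[i]
            a = coeffs.get(e, F(0))
            if a < 0 and (best is None or const / -a < best):
                best, l = const / -a, i
        if l is None:
            raise ValueError("LP is unbounded")
        const, coeffs = eqs.pop(l)             # pivot: solve row l for x_e
        d = -coeffs[e]
        n0 = const / d
        nc = {j: cj / d for j, cj in coeffs.items() if j != e}
        nc[l] = F(-1) / d
        def subst(c0, cs):                     # substitute x_e everywhere
            ce = cs.pop(e, F(0))
            for j, cj in nc.items():
                cs[j] = cs.get(j, F(0)) + ce * cj
            return c0 + ce * n0, cs
        v, z = subst(v, z)
        eqs = {i: subst(c0, dict(cs)) for i, (c0, cs) in eqs.items()}
        eqs[e] = (n0, nc)
    sol = {j: F(0) for j in range(1, n + m + 1)}
    sol.update({i: c0 for i, (c0, _) in eqs.items()})
    return v, [sol[j] for j in range(1, n + 1)]

# The running example: optimum 28 at (x1, x2, x3) = (8, 4, 0).
v, x = simplex([3, 1, 2], [[1, 1, 3], [2, 2, 5], [4, 1, 2]], [30, 24, 36])
print(v, *x)        # 28 8 4 0
```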


Initialization of Simplex

(1) Init-simplex creates and returns the first slack form. If the corresponding initial basic solution is not feasible, because one of the basic variables has a negative value, init-simplex performs a more involved operation: it checks whether the constraints are satisfiable at all. If they are, init-simplex returns a modified, equivalent slack form for which the basic solution is feasible. Otherwise, init-simplex terminates simplex, because no solution exists for the input.



Simplex Algorithm Steps

(2) Simplex iteratively chooses xe and xl and keeps calling the pivot algorithm.
(3) If at some stage no basic variable constrains the entering variable (every basic variable can increase without bound as xe increases), then no xl exists at that stage. This indicates the input is unbounded: the objective function can grow to infinity, and the simplex region is not bounded.
(4) Simplex terminates when none of the coefficients of the non-basic variables in z for that iteration is positive, and returns the optimum value (v) and the solution coordinates.

Facts: Simplex Algorithm

The simplex algorithm has exponential time-complexity in the worst case, but runs very efficiently on most inputs. Note: each iteration of simplex is polynomial-time, but the number of iterations may be exponential.
The simplex algorithm may also go into an infinite loop of pivots (v staying the same from slack form to slack form, a phenomenon called cycling), but a tie-breaking policy in the choice of xe (e.g. Bland's rule) prevents that from happening.
The proof of correctness of the simplex algorithm uses the dual LP form.

Dual LP

Primal vs. Dual LP: Proof Sketch of Optimality

The dual LP is not an equivalent problem to the primal; the dual LP is a minimization problem.
Lemma (weak duality): Σ_{j=1..n} cj xj ≤ Σ_{i=1..m} bi yi, for all feasible solutions of both the primal and the dual LP.
Corollary: when Σ_{j=1..n} cj xj = Σ_{i=1..m} bi yi (say, = v), then v is the optimal value for each of the primal and dual LPs.
When the simplex algorithm terminates with none of the coefficients of the non-basic variables in z positive, this corollary is satisfied, thus proving that the algorithm returns the optimal value for the objective function of the primal input LP.
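
Weak and strong duality can be checked numerically on the running example. The dual solution y below is a worked value we derived via complementary slackness; it is not given on the slides:

```python
from fractions import Fraction as F

# Primal: max 3x1 + x2 + 2x3  s.t. A x <= b, x >= 0.
# Dual:   min 30y1 + 24y2 + 36y3  s.t. A^T y >= c, y >= 0.
A = [[1, 1, 3], [2, 2, 5], [4, 1, 2]]
b = [30, 24, 36]
c = [3, 1, 2]

x = [8, 4, 0]                   # primal optimum from the simplex slides
y = [F(0), F(1, 6), F(2, 3)]    # dual point derived via complementary slackness

# both points are feasible
assert all(sum(A[i][j] * x[j] for j in range(3)) <= b[i] for i in range(3))
assert all(sum(A[i][j] * y[i] for i in range(3)) >= c[j] for j in range(3))

primal = sum(cj * xj for cj, xj in zip(c, x))
dual = sum(bi * yi for bi, yi in zip(b, y))
print(primal, dual)             # 28 28  (equal, hence both are optimal)
```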

Relation between Primal and Dual Problems

Polynomial-time algorithms for LP exist:
- Khachiyan's ellipsoid method: O(n^6)
- Karmarkar's interior-point algorithm: O(n^3.5)
Interior-point algorithms can work simultaneously on the primal and dual problems.