MIT and James Orlin © 2003. Chapter 3: The simplex algorithm. Putting Linear Programs into standard form; Introduction to the Simplex Algorithm. File: Simplex2_AMII_05a_gr.

Presentation transcript:

MIT and James Orlin © 2003. Chapter 3: The simplex algorithm. Putting Linear Programs into standard form; Introduction to the Simplex Algorithm.
File Simplex2_AMII_05a_gr, Rev. 3.1 by M. Miccio, December 17, 2014, from a presentation at the Fuqua School of Business.

The Simplex Algorithm: Background
George Dantzig (born 1914, Portland) invented the "Simplex Method of Optimisation" in 1947; this grew out of his work with the USAF.

The Simplex Algorithm: Background
It originates from military planning tasks:
– plans or schedules for training
– logistical supply
– deployment of men
The algorithm usually has a "polynomial" computational cost.

Overview of Lecture:
– Getting an LP into standard form
– Getting an LP into canonical form
– Optimality conditions
– Improving a solution: a simplex pivot

Overview of Lecture:
– Getting an LP into standard form

Linear Programs in Standard Form
We say that a linear program is in standard form if the following are all true:
1. Non-negativity constraints for all variables.
2. All remaining constraints are expressed as equality constraints.
3. The right-hand-side vector, b, is non-negative.
EXAMPLE 1:
maximize 3x1 + 2x2 - x3 + x4
subject to  x1 + 2x2 + x3 - x4 ≤ 5    (not an equality)
           -2x1 - 4x2 + x3 + x4 ≤ -1   (not an equality)
           x1 ≥ 0, x2 ≥ 0, x4 ≥ 0     (x3 may be negative)
EXAMPLE 1 is an LP problem in its original form, but not in standard form.

Converting "≤" constraints
Before (original form): x1 + 2x2 + x3 - x4 ≤ 5
After (augmented form): x1 + 2x2 + x3 - x4 + s1 = 5, with s1 ≥ 0
To convert a "≤" constraint to an equality, add a slack variable.
s1 is called a slack variable, which measures the amount of "unused resource"; note that s1 = 5 - x1 - 2x2 - x3 + x4.

Converting Inequalities into Equalities plus Non-negatives
Consider the 2nd inequality: -2x1 - 4x2 + x3 + x4 ≤ -1   (original form)
Step 1. Eliminate the negative RHS by multiplying both sides by -1 and reversing the inequality:
2x1 + 4x2 - x3 - x4 ≥ 1   (non-negative RHS)

Converting "≥" constraints
2x1 + 4x2 - x3 - x4 ≥ 1
Convert to an equality: 2x1 + 4x2 - x3 - x4 - s2 = 1   (augmented form)
The added variable s2 ≥ 0 is called a "surplus variable"; it measures the amount of resource in excess of the minimum.
To convert a "≥" constraint to an equality, subtract a surplus variable.
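
As an aside, here is a minimal Python sketch of how these conversions might be automated; the constraint representation and the function name are illustrative assumptions, not part of the lecture:

```python
# Illustrative sketch: convert the inequality rows of EXAMPLE 1 into equalities
# with slack/surplus variables. Constraint format assumed: (coefficients, sense, rhs).

def to_equalities(constraints):
    """Return equality rows (coeffs, rhs) with slack/surplus columns appended."""
    n_extra = len(constraints)          # one new variable per inequality
    rows = []
    for k, (coeffs, sense, rhs) in enumerate(constraints):
        coeffs = list(coeffs)
        if rhs < 0:                     # step 1: eliminate a negative RHS
            coeffs = [-a for a in coeffs]
            rhs = -rhs
            sense = {"<=": ">=", ">=": "<="}[sense]
        extra = [0.0] * n_extra
        extra[k] = 1.0 if sense == "<=" else -1.0   # add slack / subtract surplus
        rows.append((coeffs + extra, rhs))
    return rows

# EXAMPLE 1: x1 + 2x2 + x3 - x4 <= 5  and  -2x1 - 4x2 + x3 + x4 <= -1
example1 = [([1, 2, 1, -1], "<=", 5), ([-2, -4, 1, 1], "<=", -1)]
for coeffs, rhs in to_equalities(example1):
    print(coeffs, "=", rhs)
# [1, 2, 1, -1, 1.0, 0.0] = 5    (slack s1 added)
# [2, 4, -1, -1, 0.0, -1.0] = 1  (surplus s2 subtracted)
```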

More Transformations
How can one convert a maximization problem to a minimization problem?
Example: Maximize 3W + 2P subject to "constraints"
has the same optimal solution(s) as
Minimize -3W - 2P subject to "constraints".

The Last Transformations (for now): transforming variables that may take on negative values.
EXAMPLE 2:
maximize 3x1 + 4x2 + 5x3
subject to 2x1 - 5x2 + 2x3 = 7
other constraints: x1 ≤ 0, x2 is unconstrained in sign, x3 ≥ 0

Transforming variables that may take on negative values (EXAMPLE 2).
Transforming x1: replace x1 by y1 = -x1, so that y1 ≥ 0:
max -3y1 + 4x2 + 5x3
subject to -2y1 - 5x2 + 2x3 = 7
y1 ≥ 0, x2 is unconstrained in sign, x3 ≥ 0
One can recover x1 from y1: e.g., y1 = 1, x2 = -1, x3 = 2 is feasible, and then x1 = -1, x2 = -1, x3 = 2 is a solution of the original problem.

Transforming variables that may take on negative values (EXAMPLE 2, continued).
Transforming x2: replace x2 by x2 = y3 - y2, with y2 ≥ 0 and y3 ≥ 0:
max -3y1 + 4(y3 - y2) + 5x3
subject to -2y1 - 5y3 + 5y2 + 2x3 = 7
all variables ≥ 0
One can recover x2 from y2 and y3: e.g., y1 = 1, y2 = 1, y3 = 0, x3 = 2 is feasible, and then y1 = 1, x2 = -1, x3 = 2 is a solution.
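
A small sketch of undoing these substitutions in code (the chosen feasible point is the one quoted above; the variable names are illustrative):

```python
# Illustrative sketch: undo the substitutions y1 = -x1 and x2 = y3 - y2
# to recover a solution of the original EXAMPLE 2 from the transformed LP.
y1, y2, y3, x3 = 1.0, 1.0, 0.0, 2.0          # a feasible point of the transformed LP
x1 = -y1                                      # undo y1 = -x1
x2 = y3 - y2                                  # undo x2 = y3 - y2
assert abs(2*x1 - 5*x2 + 2*x3 - 7) < 1e-9     # original constraint 2x1 - 5x2 + 2x3 = 7
print(x1, x2, x3)                             # -1.0 -1.0 2.0
```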

Standard form in vector-matrix representation
maximize (or minimize) z = cᵀx
subject to Ax = b, x ≥ 0
where:
– x is the n-size column vector of variables (decision + slack + surplus)
– m is the number of constraints
– cᵀ is the row vector of cost coefficients (only the k < n original decision variables have nonzero coefficients; slack and surplus variables cost zero)
– A is an m×n matrix of constraint coefficients; A has to be a full-rank matrix, i.e., rank(A) = min(m, n)
– b is the m-size column vector of resources
Ax = b represents a system of m linear equations in n unknowns.
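
To make this concrete, here is a hedged sketch (the column ordering is an assumption, not from the slides) of EXAMPLE 1 assembled as standard-form matrices, with the free variable x3 split as x3 = x3p - x3m:

```python
import numpy as np

# Column order assumed: [x1, x2, x3p, x3m, x4, s1, s2]
c = np.array([3, 2, -1, 1, 1, 0, 0], dtype=float)       # maximize c @ x
A = np.array([[1, 2,  1, -1, -1, 1,  0],                # x1 + 2x2 + x3 - x4 + s1 = 5
              [2, 4, -1,  1, -1, 0, -1]], dtype=float)  # 2x1 + 4x2 - x3 - x4 - s2 = 1
b = np.array([5, 1], dtype=float)

m, n = A.shape
assert np.linalg.matrix_rank(A) == min(m, n)          # full-rank requirement on A
x = np.array([5, 0, 0, 0, 0, 0, 9], dtype=float)      # one feasible point: x1 = 5, s2 = 9
assert np.allclose(A @ x, b) and np.all(x >= 0)       # Ax = b, x >= 0
```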

Standard form with Tableau representation
A linear program in standard form can be represented in a tabular arrangement known as a TABLEAU.
EXAMPLE 3 (2D case):
maximize z = -3x1 + 2x2 + 0·x3 + 0·x4
subject to  -3x1 + 3x2 + x3      = 6
            -4x1 + 2x2      + x4 = 2
            x1, x2, x3, x4 ≥ 0
Tableau for EXAMPLE 3 (b is the right-hand-side column):
          x1   x2   x3   x4 |  b
  row 1:  -3    3    1    0 |  6
  row 2:  -4    2    0    1 |  2
  -z:     -3    2    0    0 |  0
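
A minimal sketch of this tableau as a NumPy array; the layout (cost row last, b column last) is an assumption reused in the pivoting sketches later on:

```python
import numpy as np

# EXAMPLE 3 tableau: rows [A | b] followed by the cost row [c | -z]
A = np.array([[-3.0, 3.0, 1.0, 0.0],
              [-4.0, 2.0, 0.0, 1.0]])
b = np.array([6.0, 2.0])
c = np.array([-3.0, 2.0, 0.0, 0.0])

tableau = np.vstack([np.column_stack([A, b]),
                     np.append(c, 0.0)])     # last entry of the cost row holds -z
print(tableau)
```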

Another Example (for home)
Exercise 4 (also 2D): transform the following into standard form for maximization:
Minimize x1 + 3x2
Subject to 2x1 + 5x2 ≤ -12
           x1 + x2 ≥ 1
           x1 ≥ 0

Overview of Lecture:
– Getting an LP into standard form
– Getting an LP into canonical form

Solutions for Ax = b
How many solutions does Ax = b have?
Rouché-Capelli theorem: a full-rank system of m linear equations in n variables (with m < n) has ∞^(n-m) solutions.
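
A quick numerical illustration of this count for EXAMPLE 3 (m = 2 equations, n = 4 unknowns, hence ∞² solutions): choosing x1 and x2 freely determines x3 and x4.

```python
import numpy as np

A = np.array([[-3.0, 3.0, 1.0, 0.0],
              [-4.0, 2.0, 0.0, 1.0]])
b = np.array([6.0, 2.0])

x1, x2 = 0.5, 2.0                             # pick any values for the 2 free variables
x3 = b[0] - A[0, 0]*x1 - A[0, 1]*x2           # = 6 + 3*x1 - 3*x2
x4 = b[1] - A[1, 0]*x1 - A[1, 1]*x2           # = 2 + 4*x1 - 2*x2
assert np.allclose(A @ np.array([x1, x2, x3, x4]), b)
```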

Special solutions for Ax = b
– basic solution: a solution of the system of m equations in n variables obtained by solving for m variables (the basic variables) in terms of the remaining (n - m) variables and setting those (n - m) variables (the non-basic variables) equal to zero.
– feasible basic solution: a basic solution for which the m basic variables, which may differ from zero and form the basis, are also non-negative.
– degenerate feasible basic solution: a basic feasible solution in which at least one of the basic variables is zero.
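
These definitions can be checked by brute force on EXAMPLE 3: a sketch (an illustration, not the simplex method itself) that enumerates every choice of m = 2 basic columns, solves for the basic variables, and classifies the result.

```python
import numpy as np
from itertools import combinations

A = np.array([[-3.0, 3.0, 1.0, 0.0],
              [-4.0, 2.0, 0.0, 1.0]])
b = np.array([6.0, 2.0])
m, n = A.shape

for basis in combinations(range(n), m):
    B = A[:, basis]
    if abs(np.linalg.det(B)) < 1e-12:
        continue                                  # these columns do not form a basis
    x = np.zeros(n)
    x[list(basis)] = np.linalg.solve(B, b)        # non-basic variables stay at zero
    feasible = bool(np.all(x >= -1e-9))
    degenerate = feasible and bool(np.any(np.isclose(x[list(basis)], 0.0)))
    print(basis, x, "feasible" if feasible else "infeasible",
          "degenerate" if degenerate else "")
```

For EXAMPLE 3 this lists three feasible basic solutions: the starting one and the two reached by the pivots later in the lecture.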

Map of solutions for Ax = b
[Diagram: nested sets. Among the generic solutions of Ax = b lie the feasible solutions and the basic solutions; their intersection is the set of feasible basic solutions, which in turn contains the degenerate feasible basic solutions.]

LP Canonical Form = LP Standard Form + Jordan Canonical Form
If the columns of A can be rearranged so that A contains the identity matrix of order m (the number of rows in A), then the tableau is said to be in canonical form.
In the EXAMPLE 3 tableau above, the columns of x3 and x4 form the 2×2 identity matrix, so that tableau is already in canonical form.
The simplex method starts with an LP in canonical form (or it creates canonical form in a preprocessing step). Note that z is not a decision variable.

Overview of Lecture:
– Getting an LP into standard form
– Getting an LP into canonical form
– The Simplex algorithm with ≤ constraints

LP Canonical Form with the first feasible basic solution (EXAMPLE 3)
The 1st feasible basic solution is x1 = 0, x2 = 0, x3 = 6, x4 = 2 (set the non-basic variables to 0, and then solve).
The basic variables are x3 and x4; the non-basic variables are x1 and x2.

For each constraint there is a basic variable (EXAMPLE 3)
Constraint 1: the basic variable is x3.
Constraint 2: the basic variable is x4.
The basis consists of the variables x3 and x4.

The underlying Theorems
The corners or vertices of the feasible region are referred to as the extreme points.
Theorem of the Corner Principle: in any LP problem with a nonempty bounded feasible region, the optimal value of the objective function, if it exists, is achieved at an extreme point.
– Restatement of the previous theorem: if an LP problem has an optimal solution, then it has a basic optimal solution.
Theorem of Solutions: the number of extreme points equals Nb, the number of feasible basic solutions.
When looking for the optimal solution, you do not have to evaluate all feasible solution points; you only have to consider the extreme points of the feasible region.

Extreme Points in 2D
[Figure: a feasible region in the (x1, x2) plane, with its extreme points at the corners. © 2006 Thomson South-Western. All Rights Reserved.]

The Simplex Algorithm
The Simplex algorithm strategy in a qualitative way:
1. start from a first basic feasible solution
2. look for an "adjacent" basic feasible solved form whose basic feasible solution improves the value of the objective function; "adjacent" means that (m-1) variables remain in the set of basic variables, whereas 1 basic variable is replaced by a non-basic variable
3. if there is no such adjacent basic feasible solved form, then the optimum has been found (ideally!)

The Simplex algorithm and Extreme points
A system of linear inequalities defines a polytope as a feasible region in hyperspace. The simplex algorithm begins at a starting vertex (the 1st feasible basic solution) and moves along the edges of the polytope until it reaches the vertex of the optimal solution. The algorithm always terminates, barring cycling on degenerate vertices, because the number of vertices of the polytope is finite.

The Simplex algorithm result
Any LP problem falls into one of three categories:
1. it has an optimal solution (unique or "alternate" optimal solutions)
2. it has an objective function that can be increased without bound
3. it is infeasible (the feasible region is empty)
However, an unbounded feasible region does not imply an unbounded objective function; there may still be an optimal solution. This is common in minimization problems and is possible in maximization problems.

Overview of Lecture:
– Getting an LP into standard form
– Getting an LP into canonical form
– Optimality conditions

Optimality Conditions Preview (EXAMPLE 3)
The objective is z = -3x1 + 2x2, max!; the entry in the b column of the z row is the current value of -z, namely 0.
Obvious fact: if one can improve the current basic feasible solution x, then x is not optimal.
Idea: assign a small value Δ to just one of the non-basic variables, and then adjust the basic variables.

The current basic feasible solution (bfs) is not optimal! (EXAMPLE 3)
z = -3x1 + 2x2, max!
Increase x2 from 0 to Δ > 0 and let x1 stay at 0. What happens to x3, x4 and z?
x3 = 6 - 3Δ, x4 = 2 - 2Δ, z = 2Δ.
If there is a positive coefficient in the z row, the basis is not optimal.

Optimality Conditions
NB: this is EXAMPLE 3 with different cost coefficients: z = -2x1 - 4x2 + 8, max!, so the z row of the tableau has no positive coefficient and -z = -8.
If there is no positive coefficient in the z row, the basic feasible solution is optimal!
Here z ≤ 8 for all other feasible solutions, but z = 8 in the current basic feasible solution, so this basic feasible solution is optimal!

Let x2 = Δ. How large can Δ be? What is the solution after changing x2? (EXAMPLE 3)
x1 = 0, x2 = Δ, x3 = 6 - 3Δ, x4 = 2 - 2Δ, z = 2Δ.
What is the value of Δ that maximizes z but leaves a feasible solution? Δ = 1, since any larger value would drive x4 below zero.
The resulting solution, x1 = 0, x2 = 1, x3 = 3, x4 = 0, z = 2, is a new basic feasible solution with a different basis.

Optimality criterion
It relies on a check of the (reduced) cost coefficients cj' in the current z row:
– search for the maximum: the current bfs is optimal if cj' ≤ 0 for every variable xj
– search for the minimum: the current bfs is optimal if cj' ≥ 0 for every variable xj
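
A one-function sketch of this criterion (the function name and numerical tolerance are assumptions):

```python
# Optimality test on the reduced-cost row of a tableau.
def is_optimal(reduced_costs, sense="max"):
    """max: optimal when no reduced cost is positive; min: when none is negative."""
    if sense == "max":
        return all(cj <= 1e-9 for cj in reduced_costs)
    return all(cj >= -1e-9 for cj in reduced_costs)

print(is_optimal([-3, 2, 0, 0]))    # False: x2 can still improve z (EXAMPLE 3)
print(is_optimal([-2, -4, 0, 0]))   # True: the modified EXAMPLE 3 above is optimal
```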

Overview of Lecture:
– Getting an LP into standard form
– Getting an LP into canonical form
– Optimality conditions
– Improving a solution: a pivot

Improving a solution: a pivot
– pivoting: the algebraic manipulation of the tableau
– result of pivoting: one variable moves out of the set of basic variables (the exit variable) and another one moves in (the entry variable)
– adjacent solution: the new basic feasible solution obtained just after one single pivoting operation on the tableau

Text description of Pivoting
The geometrical operation of moving from a basic feasible solution to an adjacent basic feasible solution is implemented as a pivot operation.
First, a nonzero pivot element is selected in a non-basic column. The row containing this element is multiplied by its reciprocal to change this element to 1, and then multiples of the row are added to the other rows to change the other entries in the column to 0. The result is that, if the pivot element is in row r, then the column becomes the r-th column of the identity matrix. The variable for this column is now a basic variable, replacing the variable which corresponded to the r-th column of the identity matrix before the operation.
The variable corresponding to the pivot column enters the set of basic variables and is called the entering (entry) variable. The variable being replaced leaves the set of basic variables and is called the leaving (exit or departing) variable. The tableau is still in canonical form, but with the set of basic variables changed by one element.

Optimal choice of Pivot Column
It is based on the (reduced) cost coefficients cj':
– max problem: choose the column with the largest positive cj', i.e., c = argmax_j cj'
– min problem: choose the column with the most negative cj', i.e., c = argmin_j cj'
The column with j = c becomes the pivot column. The corresponding variable xc is the entry variable.
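
A sketch of this column choice (Dantzig's largest-coefficient rule; the exact tie-breaking is an assumption):

```python
def pivot_column(reduced_costs, sense="max"):
    """Index of the entering variable, or None if the tableau is already optimal."""
    if sense == "max":
        j = max(range(len(reduced_costs)), key=lambda k: reduced_costs[k])
        return j if reduced_costs[j] > 1e-9 else None
    j = min(range(len(reduced_costs)), key=lambda k: reduced_costs[k])
    return j if reduced_costs[j] < -1e-9 else None

print(pivot_column([-3, 2, 0, 0]))   # 1 -> x2 enters the basis (EXAMPLE 3)
```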

Choice of Pivot Row: the minimum ratio rule
Among the rows with a positive coefficient a_ic in the pivot column, choose the row with the smallest ratio of right-hand side to pivot-column coefficient:
r = argmin_i { b_i / a_ic : a_ic > 0 }
The row with i = r becomes the pivot row. The basic variable x* whose coefficient is unity in the pivot row is selected as the leaving (exit) variable.
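
A sketch of the minimum ratio rule (returning None when no row qualifies, i.e., the LP is unbounded in that direction):

```python
def pivot_row(column, rhs):
    """Row index with the smallest b_i / a_ic among rows with a_ic > 0, else None."""
    ratios = [(bi / aic, i) for i, (aic, bi) in enumerate(zip(column, rhs)) if aic > 1e-9]
    return min(ratios)[1] if ratios else None

# EXAMPLE 3, entering column x2: coefficients (3, 2), right-hand sides (6, 2)
print(pivot_row([3.0, 2.0], [6.0, 2.0]))   # 1 -> constraint 2, so x4 leaves the basis
```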

Pivoting formulas
r is the pivot row, c is the pivot column, a_rc is the pivot element.
Pivot row: for all j = 1..n, a_rj ← a_rj / a_rc, and b_r ← b_r / a_rc.
All other rows (i = 1..m, i ≠ r): for all j = 1..n, a_ij ← a_ij - a_ic · a_rj (using the updated pivot row), and b_i ← b_i - a_ic · b_r.
Cost row: c_j ← c_j - c_c · a_rj, and the recorded value of -z becomes (-z) - c_c · b_r.
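
The formulas above amount to a single Gauss-Jordan elimination step; here is a sketch on the tableau layout assumed earlier (rows [A | b], cost row [c | -z] last):

```python
import numpy as np

def pivot(tableau, r, c):
    """One pivot on element (r, c); returns a new tableau in canonical form."""
    T = tableau.astype(float).copy()
    T[r, :] /= T[r, c]                       # normalize the pivot row
    for i in range(T.shape[0]):
        if i != r:
            T[i, :] -= T[i, c] * T[r, :]     # zero out the rest of the pivot column
    return T

T0 = np.array([[-3.0, 3.0, 1.0, 0.0, 6.0],
               [-4.0, 2.0, 0.0, 1.0, 2.0],
               [-3.0, 2.0, 0.0, 0.0, 0.0]])  # EXAMPLE 3, cost row last
print(pivot(T0, r=1, c=1))                   # pivot on the coefficient 2: x2 enters
```

Applied to EXAMPLE 3, this reproduces the next slide: the cost row becomes 1, 0, 0, -1 | -2, i.e. z = x1 - x4 + 2.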

Pivoting to obtain a better solution (EXAMPLE 3)
z = -3x1 + 2x2, max!
If we pivot on the coefficient 2 (column x2, constraint 2), we obtain the new basic feasible solution
x1 = 0, x2 = 1, x3 = 3, x4 = 0, z = 2.
New solution: the basic variables are x2 and x3; the non-basic variables are x1 and x4.

OK. Let's iterate again. (EXAMPLE 3)
After the pivot the z row reads z = x1 - x4 + 2, max!
The cost coefficient of x1 is positive. Set x1 = Δ and keep x4 = 0:
x1 = Δ, x2 = 1 + 2Δ, x3 = 3 - 3Δ, x4 = 0, z = 2 + Δ.
How large can Δ be?

Perform another pivot (EXAMPLE 3)
What is the largest value of Δ? Δ = 1, since x3 = 3 - 3Δ must stay non-negative. This gives x1 = 1, x2 = 3, x3 = 0, x4 = 0, z = 3.
Pivot on the coefficient 3 (column x1, constraint 1): variable x1 enters the basis and x3 becomes non-basic, so x1 becomes the basic variable for constraint 1. The new tableau is
          x1   x2   x3    x4  |  b
  row 1:   1    0   1/3  -1/2 |  1
  row 2:   0    1   2/3  -1/2 |  3
  -z:      0    0  -1/3  -1/2 | -3

Check for optimality (EXAMPLE 3)
The z row now reads z = -x3/3 - x4/2 + 3, max!
There is no positive coefficient in the z row, so the current basic feasible solution is optimal:
x1 = 1, x2 = 3, x3 = 0, x4 = 0, z = 3.

Summary of Simplex Algorithm (≤ constraints)
Start in canonical form with the 1st basic feasible solution.
1. Check the optimality conditions: for a max problem, is there a positive coefficient in the cost row?
2. If not optimal, determine a non-basic variable that should be made positive: for a max problem, choose a variable with a positive coefficient in the cost row.
3. Increase that non-basic variable and perform a pivot, obtaining a new bfs (or detecting unboundedness).
4. Continue until optimal (or unbounded).
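
Finally, a compact sketch that ties these steps together (assumptions: the tableau layout used in the earlier sketches, Dantzig's entering rule, and the minimum ratio rule); on EXAMPLE 3 it reproduces the two pivots carried out above:

```python
import numpy as np

def simplex_max(T, basis):
    """Tableau T = rows [A | b] plus a final cost row [c | -z]; maximization."""
    T = T.astype(float).copy()
    while True:
        cost = T[-1, :-1]
        c = int(np.argmax(cost))
        if cost[c] <= 1e-9:                            # no positive reduced cost: optimal
            return T, basis
        col, rhs = T[:-1, c], T[:-1, -1]
        rows = [i for i in range(len(rhs)) if col[i] > 1e-9]
        if not rows:
            raise ValueError("objective unbounded")
        r = min(rows, key=lambda i: rhs[i] / col[i])   # minimum ratio rule
        T[r, :] /= T[r, c]                             # pivot on (r, c)
        for i in range(T.shape[0]):
            if i != r:
                T[i, :] -= T[i, c] * T[r, :]
        basis[r] = c                                   # entering variable replaces leaving one

T0 = np.array([[-3.0, 3.0, 1.0, 0.0, 6.0],
               [-4.0, 2.0, 0.0, 1.0, 2.0],
               [-3.0, 2.0, 0.0, 0.0, 0.0]])            # EXAMPLE 3
T, basis = simplex_max(T0, basis=[2, 3])               # x3 and x4 start in the basis
print("optimal z =", -T[-1, -1], "basic columns:", basis)   # z = 3.0, basis [0, 1]
```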