Linear Programming: The basis of the whole field of operations research

Overview
Classes of Optimization Problems
Linear Programming
The Simplex Algorithm

Classes of Optimization Problems
Optimization problems can be classified along several dimensions: unconstrained vs. constrained; linear vs. quadratic vs. general non-linear objectives and constraints; real-valued vs. integer variables; and further subclasses.

Solution Spaces
A solution space or feasible region is the set of all points in the domain that satisfy the problem constraints. The most important distinction is between convex and non-convex solution spaces: convexity means that any interpolation between feasible points yields only feasible points, i.e. if x and y are feasible, then so is t*x + (1-t)*y for all t in [0,1].

Local and Global Optima
[Figure: plot of an objective function for a non-convex problem with local maxima at a and b; only the maximum at b is a global maximum.]
Convex problems are generally easier to solve because of the following theorem: any local extremum of a maximization problem with a concave objective function over a convex feasible region is a global extremum.
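
The standard one-paragraph argument behind this theorem (our addition, not on the slide): suppose x* is a local maximum of a concave f on a convex feasible region F, and suppose some y in F had f(y) > f(x*). Then for every small t > 0 the point (1-t)x* + t y lies in F by convexity, and by concavity

    f((1-t)x* + t y) >= (1-t) f(x*) + t f(y) > f(x*)

so arbitrarily close to x* there would be strictly better feasible points, contradicting local maximality.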

Convex Optimization Problems
Convex problems are far easier to solve (computationally far less expensive) than non-convex ones. Linear programs are convex: the objective is linear (hence both convex and concave) and the feasible region is a convex polyhedron. We will therefore first look at linear programming over the real numbers.

Adequate Minimum-Cost Diet
One of the first automated linear optimization problems: find the cheapest diet meeting given nutrition requirements, from a table of nutrition values in units per dollar of food cost (1945).
G. Stigler. "The Cost of Subsistence". Journal of Farm Economics, 1945: "... there does not appear to be any direct method of finding the minimum ..."
G. Dantzig. "Linear Programming and Extensions". Princeton Univ. Press, 1963.
Solved manually with desk calculators (1947): 120 person-days. Solved with Simplex on an IBM 701: 4 minutes.

Linear Programming

Gauss-Jordan Elimination
Let us first review how a system of linear equations is solved. This is performed by Gauss-Jordan elimination. Example:

    (1) 3x + 5y - z = 15
    (2) 7x - 2y + z = 1
    (3)      y + z = 0

By (3), z = -y. Substituting into (1) and (2):

    (1.1) 3x + 5y + y = 15
    (1.2) 7x - 2y - y = 1

By (1.1), y = 5/2 - (1/2)x. Substituting into (1.2):

    (1.2.1) (17/2)x = 17/2,  so x = 1, y = 2, z = -2

More elegantly, the same eliminations are carried out on the matrix of coefficients.
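
As a cross-check, here is a minimal Gauss-Jordan routine in Python (our sketch, not from the slides; names are ours, and it is not a production solver):

    # Minimal Gauss-Jordan elimination with partial pivoting.
    # Solves A x = b for a square, non-singular A.
    def gauss_jordan(A, b):
        n = len(A)
        # work on the augmented matrix [A | b]
        M = [row[:] + [bi] for row, bi in zip(A, b)]
        for col in range(n):
            # partial pivoting: bring the largest entry into pivot position
            piv = max(range(col, n), key=lambda r: abs(M[r][col]))
            M[col], M[piv] = M[piv], M[col]
            p = M[col][col]
            M[col] = [v / p for v in M[col]]   # scale pivot row to 1
            for r in range(n):
                if r != col and M[r][col] != 0:
                    f = M[r][col]              # eliminate column from row r
                    M[r] = [vr - f * vc for vr, vc in zip(M[r], M[col])]
        return [M[r][n] for r in range(n)]

    # The example system above: prints [1.0, 2.0, -2.0]
    print(gauss_jordan([[3, 5, -1], [7, -2, 1], [0, 1, 1]], [15, 1, 0]))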

Application Example
A company wants to optimize its production plan for three products B, C, D with the following resource requirements and limits:

    maximize   profit f(x*) = 3.0 x[b] + 1.0 x[c] + 3.0 x[d]
    subject to c(x*): 2.0 x[b] + 2.0 x[c] + 1.0 x[d] <= 30.0
               and    1.0 x[b] + 2.0 x[c] + 3.0 x[d] <= 25.0
               and    2.0 x[b] + 1.0 x[c] + 1.0 x[d] <= 20.0
               and    x[b] >= 0, x[c] >= 0, x[d] >= 0
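
For reference, the same LP can be handed to an off-the-shelf solver. A sketch using SciPy (our addition; linprog minimizes, so we negate the profit coefficients):

    # Solving the production-plan LP with scipy.optimize.linprog.
    from scipy.optimize import linprog

    c = [-3.0, -1.0, -3.0]              # -(profit per unit of B, C, D)
    A_ub = [[2.0, 2.0, 1.0],
            [1.0, 2.0, 3.0],
            [2.0, 1.0, 1.0]]
    b_ub = [30.0, 25.0, 20.0]           # resource limits

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3)
    print(res.x, -res.fun)              # optimum: x = (7, 0, 6), profit 39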

Slack Variables
We transform the problem to standard form by introducing slack variables. The "slack" measures the unused part of a resource (i.e. how "tight" a constraint is):

    s[f] == 30 - 2 x[b] - 2 x[c] -   x[d]
    s[l] == 25 -   x[b] - 2 x[c] - 3 x[d]
    s[m] == 20 - 2 x[b] -   x[c] -   x[d]

Note: all slack variables must always be non-negative, s[_] >= 0. In standard form the x[i] must also be non-negative. For example, {x[b] -> 1, x[c] -> 2, x[d] -> 3} yields

    s[f] == 21, s[l] == 11, s[m] == 13

Basic Feasible Solution
A basic feasible solution is a point in the feasible region (i.e. a valuation of the problem variables) at which all explicit problem constraints are fulfilled and all slack variables are non-negative:

    z    ==      3 x[b] +   x[c] + 3 x[d]
    s[f] == 30 - 2 x[b] - 2 x[c] -   x[d]
    s[l] == 25 -   x[b] - 2 x[c] - 3 x[d]
    s[m] == 20 - 2 x[b] -   x[c] -   x[d]

    trivial solution: {x[b] -> 0, x[c] -> 0, x[d] -> 0}   z = 0
    better:           {x[b] -> 1, x[c] -> 2, x[d] -> 3}   z = 14
    even better:      {x[b] -> 2, x[c] -> 4, x[d] -> 5}   z = 25
    infeasible:       {x[b] -> 3, x[c] -> 4, x[d] -> 5}   s[l] < 0
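
A tiny sketch (ours, not from the slides) that checks these candidate plans by computing the objective and the slacks:

    # A plan is feasible iff x >= 0 and all slacks b - A x are >= 0.
    import numpy as np

    c = np.array([3.0, 1.0, 3.0])                 # profit coefficients
    A = np.array([[2, 2, 1], [1, 2, 3], [2, 1, 1]], dtype=float)
    b = np.array([30.0, 25.0, 20.0])

    for plan in ([0, 0, 0], [1, 2, 3], [2, 4, 5], [3, 4, 5]):
        x = np.array(plan, dtype=float)
        slacks = b - A @ x                        # s[f], s[l], s[m]
        ok = (slacks >= 0).all() and (x >= 0).all()
        print(plan, "z =", c @ x, "slacks =", slacks,
              "feasible" if ok else "infeasible")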

Simplex Algorithm: Idea
The solution of a linear programming problem can be found by iterative improvement:
(1) take any feasible solution
(2) check whether any resources are left
(3) check whether exchanging some "activity" (product) improves the solution
(4) if so, exchange and repeat from step (2); otherwise the optimum is reached

Simplex: Geometric Interpretation
[Figure: the feasible region, a polytope whose facets are the tight constraints s[i] = 0.]
Observe that an optimum of the objective function is always attained at a corner point of the simplex: the objective function is linear (imagine the objective plane z = f(x,y) = ax + by). At all other points some resource has reserves that can be utilized by producing more, thus increasing profit.

How to start?
In the example (and in many practical cases) the trivial solution (x* = 0*) is feasible, but this is not always true; we will return to the problem of finding a good initial solution later. For now it seems reasonable to start with a solution in which only a single activity is chosen. This should be the most profitable activity (here product B), and it should be made as large as possible, i.e. up to the point where the most limiting constraint becomes tight (here s[m]):

    productionPlan = {x[b] -> 10, x[c] -> 0, x[d] -> 0};
    z == 30, s[f] == 10, s[l] == 15, s[m] == 0

Economic Position
The row of the tableau that specifies the limiting constraint is called the pivot row:

    s[m] == 20 - 2 x[b] - x[c] - x[d]

Solving the pivot row for the current activity yields

    x[b] == 10 - (1/2) x[c] - (1/2) x[d] - (1/2) s[m]

This describes the current activity in terms of other possible activities and available resources. Substituting it into the objective function yields

    z == 30 - (1/2) x[c] + (3/2) x[d] - (3/2) s[m]

This is the "economic position" at the current tentative production plan.

Shadow Prices
In the economic position z == 30 - (1/2) x[c] + (3/2) x[d] - (3/2) s[m]:
The intercept (30) gives the profit of the current production plan.
The shadow price (here 3/2, the coefficient of s[m]) gives the amount by which the profit could be increased if this resource constraint could be relaxed by one unit (i.e. the break-even price for buying more of this resource).
The opportunity cost (here 3/2 for x[d]) gives the potential increase of profit per unit of the respective activity, after accounting for the necessary reduction of the current activity (e.g. one unit less of B enables us to produce two more units of D, at the same price ($3) as B, unless another constraint becomes tight).
This interpretation is only valid while the current limiting constraint remains binding (i.e. in particular for the current tentative production plan).
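
At the optimum, shadow prices can also be read off an LP solver's dual values. A sketch using SciPy's HiGHS backend (our addition; the ineqlin.marginals attribute is from recent SciPy versions, >= 1.7, so verify against your installation; since our maximization is passed as a minimization, the shadow prices are the negated marginals):

    # Shadow prices of the production LP via dual values from HiGHS.
    from scipy.optimize import linprog

    res = linprog(c=[-3.0, -1.0, -3.0],            # minimize -profit
                  A_ub=[[2, 2, 1], [1, 2, 3], [2, 1, 1]],
                  b_ub=[30, 25, 20],
                  method="highs")
    # Marginals are w.r.t. the minimization; negate them to get the
    # shadow prices of the maximization. Expected here: (0, 3/5, 6/5)
    # for resources f, l, m at the optimum (profit 39).
    print("profit:", -res.fun)
    print("shadow prices:", [-m for m in res.ineqlin.marginals])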

Solution Revision
Obviously, the opportunity costs should lead our way in revising the production plan if we want to maximize profit. Linearity of the problem implies that we should bring in as much of the activity with the highest opportunity cost (here D) as the resource constraints permit. To find the most limiting constraint we substitute the pivot row (solved for the current activity x[b]) into the constraints:

    s[f] == 10 -       x[c]              +       s[m]
    s[l] == 15 - (3/2) x[c] - (5/2) x[d] + (1/2) s[m]
    x[b] == 10 - (1/2) x[c] - (1/2) x[d] - (1/2) s[m]

Stopping Condition
By comparing the coefficients we see that s[l] will be the limiting constraint for x[d]: the x[b] row allows up to 10/(1/2) = 20 units of D, but s[l] only 15/(5/2) = 6 units. So we can bring in 6 units of D (giving up 3 units of B) before s[l] becomes tight. The new production plan is thus {x[b] -> 7, x[c] -> 0, x[d] -> 6} with

    z == 39, s[f] == 10, s[l] == 0, s[m] == 0

Is this optimal? What is a reasonable stopping condition? We have to repeat the above process until no improvement is possible: when all opportunity costs are less than or equal to zero (<= 0), the current solution must be optimal and we can stop.
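
The "most limiting constraint" check above is the classical minimum-ratio test. A small sketch (ours) for the entering variable x[d] in the current dictionary:

    # Minimum-ratio test: each basic row limits the entering variable to
    # intercept / coefficient, provided the coefficient is positive.
    rows = {                 # basic var -> (intercept, coefficient of x[d])
        "s[f]": (10.0, 0.0),     # x[d] does not appear: no limit
        "s[l]": (15.0, 2.5),
        "x[b]": (10.0, 0.5),
    }
    ratios = {v: b / a for v, (b, a) in rows.items() if a > 0}
    leaving = min(ratios, key=ratios.get)
    print(ratios, "->", leaving, "leaves; x[d] enters at", ratios[leaving])
    # {'s[l]': 6.0, 'x[b]': 20.0} -> s[l] leaves; x[d] enters at 6.0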

Pivoting
1. identify the best substitute x[j] from the highest opportunity cost
2. identify the limiting resource s[i]
3. select the pivot row according to (2)
4. solve the pivot row for variable x[j]
5. substitute (4) into all other equations
6a. terminate if all opportunity costs are negative or zero
6b. otherwise go to step 1

Starting from the initial dictionary

    z    ==      3 x[b] +   x[c] + 3 x[d]
    s[f] == 30 - 2 x[b] - 2 x[c] -   x[d]
    s[l] == 25 -   x[b] - 2 x[c] - 3 x[d]
    s[m] == 20 - 2 x[b] -   x[c] -   x[d]

pivoting on x[b] / s[m] yields:

    z    == 30 - (1/2) x[c] + (3/2) x[d] - (3/2) s[m]
    s[f] == 10 -       x[c]              +       s[m]
    s[l] == 15 - (3/2) x[c] - (5/2) x[d] + (1/2) s[m]
    x[b] == 10 - (1/2) x[c] - (1/2) x[d] - (1/2) s[m]

and pivoting on x[d] / s[l] yields the optimum solution:

    z    == 39 - (7/5) x[c] - (3/5) s[l] - (6/5) s[m]
    s[f] == 10 -       x[c]              +       s[m]
    x[d] ==  6 - (3/5) x[c] - (2/5) s[l] + (1/5) s[m]
    x[b] ==  7 - (1/5) x[c] + (1/5) s[l] - (3/5) s[m]
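
As a concrete illustration, here is a compact sketch of the whole pivot loop in Python (our code, not from the slides): the dictionary is kept as a matrix of coefficients, the entering variable is chosen by the largest opportunity cost, and the leaving variable by the minimum-ratio test.

    # Simplex pivot loop on a tableau for: max c.x s.t. A x <= b, x >= 0
    # (all b[i] >= 0, so x = 0 is a feasible starting point).
    def simplex(c, A, b):
        m, n = len(A), len(c)
        # Row i: [rhs, coeffs of x_1..x_n, coeffs of slacks s_1..s_m].
        T = [[b[i]] + A[i][:] + [1.0 if j == i else 0.0 for j in range(m)]
             for i in range(m)]
        z = [0.0] + [-ci for ci in c] + [0.0] * m   # objective row
        basis = [n + i for i in range(m)]           # slacks start basic
        while True:
            # Entering variable: most negative z-coefficient, i.e. highest
            # opportunity cost (Dantzig's rule; Bland's rule would take the
            # lowest-numbered improving variable to exclude cycling).
            j = min(range(1, n + m + 1), key=lambda k: z[k])
            if z[j] >= 0:
                break                               # optimal
            # Leaving row: minimum-ratio test.
            rows = [i for i in range(m) if T[i][j] > 1e-12]
            if not rows:
                raise ValueError("problem is unbounded")
            r = min(rows, key=lambda i: T[i][0] / T[i][j])
            # Pivot: normalize row r, eliminate column j everywhere else.
            p = T[r][j]
            T[r] = [v / p for v in T[r]]
            for row in [z] + [T[i] for i in range(m) if i != r]:
                f = row[j]
                if f:
                    for k in range(n + m + 1):
                        row[k] -= f * T[r][k]
            basis[r] = j - 1
        x = [0.0] * (n + m)
        for i in range(m):
            x[basis[i]] = T[i][0]
        return z[0], x[:n]

    # The production example: prints (39.0, [7.0, 0.0, 6.0])
    print(simplex([3.0, 1.0, 3.0],
                  [[2.0, 2.0, 1.0], [1.0, 2.0, 3.0], [2.0, 1.0, 1.0]],
                  [30.0, 25.0, 20.0]))

Running it reproduces exactly the pivot trace above: x[b]/s[m] first, then x[d]/s[l].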

Terminology
In linear programming, the problem representation by a set of equations, as we have used it, is called a dictionary. The more compact representation by a matrix of coefficients, used in actual implementations, is called the tableau. The left-hand-side variables (i.e. the variables that each occur in only one equation) are called basic; all other variables are called non-basic. The final dictionary/tableau yields the optimum solution by setting all non-basic variables to zero.

Potential Problems
Optimum non-existent:
- Problem unbounded (e.g. max (x+y) s.t. x - y <= 0): abort if there is no limiting resource for some pivot.
- Problem infeasible: relax the constraints and check feasibility (see below).
Optimum not found / non-termination:
- Cycling: in practice almost irrelevant; can be avoided by a clever choice of pivot elements. See V. Chvatal. Linear Programming. W.H. Freeman, 1983.

Degenerate Solutions & Cycling
A basic feasible solution is called degenerate if at least one of the basic variables takes the value 0.
=> the next pivot does not necessarily change the solution (possibly only variables with value 0 are swapped)
=> cycling can occur (worst case)
Bland's anti-cycling rule: number the variables; among the candidates with positive opportunity cost, let the lowest-numbered variable enter the basis; if there is a tie in choosing the exit variable, use the lowest-numbered variable.
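
A sketch of the two selection rules (our code; phrased for a dictionary where a negative z-coefficient means a positive opportunity cost):

    # Bland's rule: smallest-index eligible entering variable, and the
    # lowest-numbered basic variable among the minimum-ratio ties.
    def blands_entering(zcoef):
        """zcoef[j] < 0 means variable j would improve the objective."""
        for j, v in enumerate(zcoef):
            if v < 0:
                return j             # lowest-numbered improving variable
        return None                  # optimal: nothing can enter

    def blands_leaving(basis, rhs, col, eps=1e-12):
        """Minimum-ratio test; ties broken by lowest variable number."""
        cands = [(rhs[i] / col[i], basis[i], i)
                 for i in range(len(basis)) if col[i] > eps]
        if not cands:
            return None              # unbounded
        ratio = min(r for r, _, _ in cands)
        best = min((v, i) for r, v, i in cands if abs(r - ratio) <= eps)
        return best[1]               # row of the lowest-numbered tied variable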

Relaxing a Linear System
A linear problem is infeasible if the constraints are too restrictive. Consider:

    z    ==      x[1] -   x[2] +   x[3]
    s[1] ==  4 - 2 x[1] +   x[2] - 2 x[3]
    s[2] == -5 - 2 x[1] + 3 x[2] -   x[3]    (most stringent constraint)
    s[3] == -1 +   x[1] -   x[2] + 2 x[3]

The trivial solution is infeasible. Is the problem feasible at all? To answer this question we add a so-called artificial variable a[0] to the system, such that the relaxed system is always feasible. The original system is feasible iff there is a solution of the relaxed system with a[0] = 0, i.e. iff maximizing z == -a[0] reaches 0:

    z    == -a[0]
    s[1] ==  4 + a[0] - 2 x[1] +   x[2] - 2 x[3]
    s[2] == -5 + a[0] - 2 x[1] + 3 x[2] -   x[3]
    s[3] == -1 + a[0] +   x[1] -   x[2] + 2 x[3]

Checking Feasibility
By pivoting in the artificial variable a[0] (against the row with the most negative intercept) we get a basic feasible solution of the relaxed system:

    z    == -5 - s[2] - 2 x[1] + 3 x[2] -   x[3]
    s[1] ==  9 + s[2]          - 2 x[2] -   x[3]
    a[0] ==  5 + s[2] + 2 x[1] - 3 x[2] +   x[3]
    s[3] ==  4 + s[2] + 3 x[1] - 4 x[2] + 3 x[3]

Applying the Simplex method to this tableau yields the final tableau

    z    ==  0 - a[0]
    s[1] ==  3 -         x[1]               -       s[3] + 2 a[0]
    x[3] ==  8/5 - (1/5) x[1] + (1/5) s[2]  + (3/5) s[3] - (4/5) a[0]
    x[2] == 11/5 + (3/5) x[1] + (2/5) s[2]  + (1/5) s[3] - (3/5) a[0]

Therefore the original problem is feasible, e.g. with x* = {0, 11/5, 8/5}.

Artificial Variables
More generally, for any linear problem we can introduce artificial variables a[i] that make the relaxed problem trivially feasible: each constraint row s[i] == b[i] - ... is relaxed to s[i] == b[i] + a[i] - ..., and the objective becomes maximizing -(a[1] + ... + a[m]). Note, however, that because of the slack variables s[i] we would not need all of the a[i] here: an a[i] is only required for rows with negative intercept b[i].

Two-Phase Simplex
We can now answer the question of how to obtain an initial basic feasible solution for a linear programming problem. We simply perform two phases of Simplex optimization:
(1) Relax the original problem with artificial variables a[i]; obtain the trivial initial solution of the relaxed problem by substituting for the a[i] in the objective function -sum(a[i]); drive sum(a[i]) to 0 by maximizing -sum(a[i]) with Simplex.
(2a) If the optimal value of sum(a[i]) is > 0 (i.e. -sum(a[i]) cannot reach 0), the original problem is infeasible.
(2b) Otherwise set a[i] = 0 in the final tableau of the relaxed problem, and substitute the resulting equations of the phase-I final tableau into the original objective function.
(2c) Combine the revised objective function with the equations from phase I, and perform Simplex on the tableau thus obtained.
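
A minimal phase-I feasibility check for the example system, sketched with SciPy (our code; it uses a single artificial variable a0, as on the earlier slide, rather than one per row):

    # Phase I as an LP: minimize a0 subject to A x - a0 <= b, x, a0 >= 0.
    # The original system is feasible iff the optimum of a0 is 0.
    from scipy.optimize import linprog

    A = [[ 2, -1,  2],     # the earlier example, written as A x <= b
         [ 2, -3,  1],     # with b = (4, -5, -1)
         [-1,  1, -2]]
    b = [4, -5, -1]

    A_relaxed = [row + [-1] for row in A]      # append the a0 column
    res = linprog(c=[0, 0, 0, 1], A_ub=A_relaxed, b_ub=b)
    print("feasible:", res.status == 0 and abs(res.fun) < 1e-9)
    print("feasible point:", res.x[:3])        # e.g. x* = (0, 11/5, 8/5)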

Example
For the example problem we start phase II by substituting the phase-I equations into the original objective function z == x[1] - x[2] + x[3], which yields the tableau

    z    == -3/5 + (1/5) x[1] - (1/5) s[2] + (2/5) s[3]
    s[1] ==  3   -       x[1]              -       s[3]
    x[3] ==  8/5 - (1/5) x[1] + (1/5) s[2] + (3/5) s[3]
    x[2] == 11/5 + (3/5) x[1] + (2/5) s[2] + (1/5) s[3]

For this tableau Simplex generates the solution x* = {0, 14/5, 17/5} with z = 3/5.

Redundant Constraints
It can happen that not all artificial variables are parametric (non-basic) after phase I, so we cannot simply set them to zero. Let a[j] be such a basic artificial variable; the intercept of its equation must be 0, otherwise the problem is infeasible. Two possibilities:
- All variables on the right-hand side of a[j]'s equation are artificial: delete this equation and the variable a[j]. This happens if one of the original constraints was redundant.
- At least one non-artificial variable x[i] has a non-zero coefficient: pivot a[j] out of the basis (x[i] enters).
Repeat until all artificial variables are non-basic, and hence zero.

Minimization
Of course, Simplex can also handle minimization problems, because minimizing c'x is the same as maximizing (-c)'x and negating the resulting optimum. Other inequalities and equalities are also easily translated to standard form: a constraint a'x >= b becomes -a'x <= -b, and an equality a'x == b becomes the pair a'x <= b and -a'x <= -b. This way of handling equations, however, increases the size of the tableau significantly (two rows per equality constraint).
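
A small sketch (ours) of this mechanical conversion for constraints given with mixed senses:

    # Convert constraints with senses '<=', '>=', '==' into the standard
    # form A x <= b used by the tableau (equalities become two rows).
    def to_standard_form(constraints):
        """constraints: list of (coefficients, sense, rhs) triples."""
        A, b = [], []
        for coeffs, sense, rhs in constraints:
            if sense in ("<=", "=="):
                A.append(list(coeffs)); b.append(rhs)
            if sense in (">=", "=="):
                A.append([-v for v in coeffs]); b.append(-rhs)
        return A, b

    # x1 + x2 == 10 and x1 - x2 >= 2 become three '<=' rows:
    print(to_standard_form([([1, 1], "==", 10), ([1, -1], ">=", 2)]))
    # -> ([[1, 1], [-1, -1], [-1, 1]], [10, -10, -2])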

Complexity
Linear programming is known to be solvable in polynomial time (e.g. by reduction to LSI, linear strict inequalities). Simplex, however, is exponential in the worst case: one can construct a polytope with exponentially many vertices (the Klee-Minty cube) such that Simplex may trace all of its vertices. In practice, Simplex is nevertheless "well-behaved".

Summary
Today we have looked at:
Classes of Optimization Problems
Linear Programming
The Simplex Algorithm

Exercise 1: Manual Simplex
Execute the Simplex algorithm manually on the following example:

Exercise 2: Solve LPs in MiniZinc
Model and solve the example from the slides using MiniZinc.