Interior-Point Methods for LP


Paper review of ENGG*6140 Optimization Techniques: Interior-Point Methods for LP. Yanrong Hu, Ce Yu. Mar. 11, 2003.

Outline
- General introduction
- The original algorithm description
- A variant of Karmarkar's algorithm: a detailed simple example
- Another example
- More issues

General introduction. Interior-point methods for linear programming were the most dramatic new development in operations research (linear programming) during the 1980s. The field was opened in 1984 by N. Karmarkar, a young mathematician at AT&T Bell Labs. His algorithm runs in polynomial time and has great potential for solving huge linear programming problems beyond the reach of the simplex method. Karmarkar's method stimulated development in both interior-point and simplex methods.

General introduction. The big difference between interior-point methods and the simplex method lies in the nature of the trial solutions:
- Simplex method: trial solutions are CPF (corner-point feasible) solutions; worst-case complexity: the number of iterations can increase exponentially in the number of variables n.
- Interior-point methods: trial solutions are interior points (points inside the boundary of the feasible region); polynomial-time complexity.

Karmarkar's original algorithm. Karmarkar assumes that the LP is given in the canonical form of the problem:

  Min z = cX
  s.t. AX = 0
       1X = 1
       X ≥ 0

Assumptions: the center of the simplex, X = (1/n, …, 1/n), is feasible, and the optimal objective value is 0. To apply the algorithm to an LP problem in standard form,

  Min Z = CX
  s.t. AX ≥ b
       X ≥ 0,

a transformation is needed.

Karmarkar's original algorithm: an example of transformation from standard form to canonical form.

Standard form:
  Min z = y1 + y2          (z = cy)
  s.t. y1 + 2y2 ≥ 2        (AY ≥ b)
       y1, y2 ≥ 0

Canonical form:
  Min z = 5x1 + 5x2
  s.t. 3x1 + 8x2 + 3x3 - 2x4 = 0    (AX = 0)
       x1 + x2 + x3 + x4 = 1         (1X = 1)
       xj ≥ 0, j = 1, 2, 3, 4

Karmarkar's original algorithm: the principal idea. The algorithm creates a sequence of points having decreasing values of the objective function. In each step, the current point is brought into the center of the simplex by a projective transformation.

The steps of Karmarkar's algorithm are:

Step 0. Start with the center point x0 = (1/n, …, 1/n) as the solution point and compute the step-length parameters.

Step k. Define the projective transformation that brings the current point to the center of the simplex, compute the projected gradient and move in its direction, then map the new point back to the original coordinates.

A variant: the affine variant of Karmarkar's algorithm rests on three concepts.

Concept 1: Shoot through the interior of the feasible region toward an optimal solution.
Concept 2: Move in a direction that improves the objective function value at the fastest feasible rate.
Concept 3: Transform the feasible region to place the current trial solution near its center, thereby enabling a large improvement when Concept 2 is implemented.

An example. The problem:

  Max Z = x1 + 2x2
  s.t. x1 + x2 ≤ 8
       x1, x2 ≥ 0

The optimal solution is (x1, x2) = (0, 8) with Z = 16.

An example. The algorithm begins with an initial solution that lies in the interior of the feasible region; arbitrarily choose (x1, x2) = (2, 2) to be the initial solution. Concept 1: shoot through the interior of the feasible region toward an optimal solution. Concept 2: move in a direction that improves the objective function value at the fastest feasible rate. That direction is perpendicular to (and toward) the objective function line: (3, 4) = (2, 2) + (1, 2), where the vector (1, 2) is the gradient of the objective function, i.e. its vector of coefficients.

An example. The algorithm actually operates on the augmented form. Letting x3 be the slack variable for x1 + x2 ≤ 8:

  Max Z = x1 + 2x2
  s.t. x1 + x2 + x3 = 8
       x1, x2, x3 ≥ 0

The augmented form can be written in general as:

  Max Z = c^T X
  s.t. AX = b
       X ≥ 0

The initial solution (2, 2) becomes (2, 2, 4), and the gradient of the objective function becomes (1, 2, 0).
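The augmented form can be checked numerically; a small sketch in NumPy with the example's data:

```python
import numpy as np

# Augmented form of Max Z = x1 + 2x2 s.t. x1 + x2 <= 8:
# x3 is the slack variable, so x1 + x2 + x3 = 8 and x >= 0.
A = np.array([[1.0, 1.0, 1.0]])
b = np.array([8.0])
c = np.array([1.0, 2.0, 0.0])   # objective gradient (slack costs 0)

x0 = np.array([2.0, 2.0, 4.0])  # initial solution (2, 2) plus its slack 4
assert np.allclose(A @ x0, b)   # satisfies the equality constraint
assert np.all(x0 > 0)           # strictly inside the boundary
print(c @ x0)                   # objective value at the initial point: 6.0
```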

An example: using the projected gradient to implement Concepts 1 and 2, shown graphically on the augmented form. Adding the gradient to the initial solution leads to (3, 4, 4) = (2, 2, 4) + (1, 2, 0), which is infeasible. To remain feasible, the algorithm projects the point (3, 4, 4) down onto the feasible triangle. To do so, the projected gradient (the gradient projected onto the feasible region) is used, and the next trial solution moves in the direction of the projected gradient. Here the feasible region is the triangle with optimum (0, 8, 0), the initial solution is (2, 2, 4), and the gradient of the objective function is c^T = (1, 2, 0).

An interior-point algorithm: using the projected gradient to implement Concepts 1 and 2. A formula is available for computing the projected gradient:

  Projection matrix: P = I - A^T (A A^T)^-1 A
  Projected gradient: cp = P c

Now we are ready to move from (2, 2, 4) in the direction of cp to a new point. A parameter α determines how far we move: a large α brings the solution too close to the boundary, while a small α means more iterations. We have chosen α = 0.5, so the new trial solution moves to x = (2, 3, 3).
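The projection formula can be verified directly on this example; a minimal sketch (the resulting step reproduces the slide's new point (2, 3, 3), here taken as a unit step along cp):

```python
import numpy as np

A = np.array([[1.0, 1.0, 1.0]])    # constraint row of the augmented form
c = np.array([1.0, 2.0, 0.0])      # objective gradient

# Projection matrix onto the null space of A, then the projected gradient.
P = np.eye(3) - A.T @ np.linalg.inv(A @ A.T) @ A
cp = P @ c                         # equals (0, 1, -1)

# Moving along cp keeps A x = 8 satisfied: (2, 3, 3) = (2, 2, 4) + cp.
x_new = np.array([2.0, 2.0, 4.0]) + cp
print(cp, A @ x_new)               # cp = (0, 1, -1); A x_new = (8)
```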

An interior-point algorithm: centering scheme for implementing Concept 3 (transform the feasible region to place the current trial solution near its center, thereby enabling a large improvement when Concept 2 is implemented). Why: the centering scheme keeps turning the direction of the projected gradient to point more nearly toward an optimal solution as the algorithm converges toward this solution. How: simply change the scale (units) of each variable so that the trial solution becomes equidistant from the constraint boundaries in the new coordinate system. Define D = diag{x}; this brings x to the center in the new coordinates.

An interior-point algorithm. With the initial trial solution (x1, x2, x3) = (2, 2, 4), D = diag{2, 2, 4}. In the new coordinate system x~ = D^-1 x, the problem becomes Max Z = 2x~1 + 4x~2 s.t. 2x~1 + 2x~2 + 4x~3 = 8, x~ ≥ 0, and the trial solution becomes (1, 1, 1).
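The rescaling can be checked numerically; a sketch with D = diag{2, 2, 4}:

```python
import numpy as np

A = np.array([[1.0, 1.0, 1.0]])
c = np.array([1.0, 2.0, 0.0])
x = np.array([2.0, 2.0, 4.0])   # current trial solution

D = np.diag(x)                  # D = diag{x}
x_tilde = np.linalg.inv(D) @ x  # current point in the new coordinates
A_tilde = A @ D                 # rescaled constraint row
c_tilde = D @ c                 # rescaled objective gradient

print(x_tilde)                  # (1, 1, 1): the trial solution is centered
print(A_tilde, c_tilde)         # (2, 2, 4) and (2, 4, 0)
```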

Summary and illustration of the algorithm. Step 1. Given the current trial solution (x1, x2, …, xn), set D = diag{x}. Iteration 1: given the initial solution (x1, x2, x3) = (2, 2, 4), D = diag{2, 2, 4}, and x is brought to the center (1, 1, 1) by x~ = D^-1 x.

Step 2. To compute the projected gradient, rescale the problem: A~ = AD and c~ = Dc. Iteration 1 (cont.): A~ = (2, 2, 4) and c~ = (2, 4, 0).

Step 3. Compute the projected gradient: P = I - A~^T (A~ A~^T)^-1 A~ and cp = P c~. Iteration 1 (cont.): cp = (1, 3, -2).

Step 4. Determine how far to move by identifying v, defined as the absolute value of the negative component of cp having the largest absolute value; here v = |-2| = 2. Then, in the new coordinates, the algorithm moves from the current (centered) trial solution to the next one by x~ = (1, …, 1) + (α/v) cp; with α = 0.5 this gives x~ = (1.25, 1.75, 0.5).

Step 5. Return to the original coordinates by calculating x = D x~; here x = (2.5, 3.5, 2). This completes Iteration 1. The new solution is used to start the next iteration.
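Iteration 1 can be reproduced end to end. A minimal sketch of one step of the affine variant, following Steps 1-5 as summarized above (maximization, AX = b, α = 0.5; the helper name is ours):

```python
import numpy as np

def affine_scaling_step(A, c, x, alpha=0.5):
    """One iteration of the affine (rescaling) variant for Max c.x, Ax=b, x>0."""
    n = len(x)
    D = np.diag(x)                       # Step 1: D = diag{x}
    At, ct = A @ D, D @ c                # Step 2: rescaled A and c
    P = np.eye(n) - At.T @ np.linalg.inv(At @ At.T) @ At
    cp = P @ ct                          # Step 3: projected gradient
    v = -cp.min()                        # Step 4: |most negative component|
    x_tilde = np.ones(n) + (alpha / v) * cp
    return D @ x_tilde                   # Step 5: back to original coordinates

A = np.array([[1.0, 1.0, 1.0]])
c = np.array([1.0, 2.0, 0.0])
x1 = affine_scaling_step(A, c, np.array([2.0, 2.0, 4.0]))
print(x1)                                # (2.5, 3.5, 2), as in Iteration 1
```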

Iteration 2. Given the current trial solution x = (2.5, 3.5, 2), x is brought to the center (1, 1, 1) in the new coordinates and moves to (0.83, 1.4, 0.5), corresponding to (2.08, 4.92, 1.0) in the original coordinates.

Iteration 3. Given the current trial solution x = (2.08, 4.92, 1.0), x is brought to the center (1, 1, 1) in the new coordinates and moves to (0.54, 1.30, 0.50), corresponding to (1.13, 6.37, 0.5) in the original coordinates.

An example: effect of the rescaling at each iteration. The optimal solution slides toward (1, 1, 1) while the other BF solutions tend to slide away.

More iterations. Starting from the current trial solution x and following the steps, x keeps moving toward the optimum (0, 8). When the trial solution is virtually unchanged from the preceding one, the algorithm has virtually converged to an optimal solution, so it stops.
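Iterating the same step until the trial solution stops moving reproduces this convergence; a sketch (α = 0.5 as in the example; the stopping tolerance is an assumption of ours):

```python
import numpy as np

A = np.array([[1.0, 1.0, 1.0]])
c = np.array([1.0, 2.0, 0.0])
x = np.array([2.0, 2.0, 4.0])
alpha = 0.5

for _ in range(50):
    D = np.diag(x)
    At, ct = A @ D, D @ c
    cp = ct - At.T @ np.linalg.solve(At @ At.T, At @ ct)  # projected gradient
    v = -cp.min()
    if v < 1e-9:                 # virtually no move left: converged, stop
        break
    x = D @ (np.ones(3) + (alpha / v) * cp)

print(x[:2])                     # (x1, x2) approaches the optimum (0, 8)
```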

Another example. The problem:

  Max Z = 5x1 + 4x2
  s.t. 6x1 + 4x2 ≤ 24
       x1 + 2x2 ≤ 6
       -x1 + x2 ≤ 1
       x2 ≤ 2
       x1, x2 ≥ 0

Augmented form:

  Max Z = 5x1 + 4x2
  s.t. 6x1 + 4x2 + x3 = 24
       x1 + 2x2 + x4 = 6
       -x1 + x2 + x5 = 1
       x2 + x6 = 2
       xj ≥ 0, j = 1, 2, 3, 4, 5, 6

Another example. Starting from the initial solution x = (1, 1, 14, 3, 1, 1), the trial solutions at each iteration again move through the interior toward the optimum.
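The stated initial solution can be checked against the augmented form; a small sketch:

```python
import numpy as np

# Augmented-form data for the second example; verify that the stated
# initial solution is feasible and strictly interior.
A = np.array([[ 6, 4, 1, 0, 0, 0],
              [ 1, 2, 0, 1, 0, 0],
              [-1, 1, 0, 0, 1, 0],
              [ 0, 1, 0, 0, 0, 1]], dtype=float)
b = np.array([24.0, 6.0, 1.0, 2.0])
x0 = np.array([1.0, 1.0, 14.0, 3.0, 1.0, 1.0])

assert np.allclose(A @ x0, b)   # all four equality constraints hold
assert np.all(x0 > 0)           # strictly interior
print("initial point is interior-feasible")
```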

More issues. Interior-point methods are designed for dealing with big problems. Although the claim that they are much faster than the simplex method is controversial, many tests on huge LP problems show that they outperform it. After Karmarkar's paper, many related methods were developed to extend the applicability of Karmarkar's algorithm, e.g. infeasible interior-point methods, which remove the assumption that there always exists a nonempty interior, and methods that apply to LP problems in standard form.

More issues. Further lines of work include methods for finding an initial solution and estimating the optimal solution; methods working with primal-dual problems; studies of the moving step (long vs. short steps); and studies of the efficient implementation and complexity of the various methods. Karmarkar's paper not only started the development of interior-point methods but also encouraged rapid improvement of simplex methods.

Reference
[1] N. Karmarkar, A New Polynomial-Time Algorithm for Linear Programming, Combinatorica 4 (4), 1984, pp. 373-395.
[2] M.J. Todd, Theory and Practice for Interior-Point Methods, ORSA Journal on Computing 6 (1), 1994, pp. 28-31.
[3] I. Lustig, R. Marsten, D. Shanno, Interior-Point Methods for Linear Programming: Computational State of the Art, ORSA Journal on Computing 6 (1), 1994, pp. 1-14.
[4] Hillier, Lieberman, Introduction to Operations Research (7th edition), pp. 320-334.
[5] Taha, Operations Research: An Introduction (6th edition), pp. 336-345.
[6] E.R. Barnes, A Variation on Karmarkar's Algorithm for Solving Linear Programming Problems, Mathematical Programming 36, 1986, pp. 174-182.
[7] R.J. Vanderbei, M.S. Meketon, B.A. Freeman, A Modification of Karmarkar's Linear Programming Algorithm, Algorithmica 1 (4), 1986, pp. 395-407.
[8] D. Gay, A Variant of Karmarkar's Linear Programming Algorithm for Problems in Standard Form, Mathematical Programming 37, 1987, pp. 81-90.
more…

Thank You! Yanrong Hu, Ce Yu. Mar. 11, 2003.