The Simplex Algorithm. 虞台文, Intelligent Multimedia Lab, Graduate Institute of Computer Science and Engineering, Tatung University

Content: Basic Feasible Solutions; The Geometry of Linear Programs; Moving from Bfs to Bfs; Organization of a Tableau; Choosing a Profitable Column; Pivot Selection; Anticycling Algorithm; The Two-Phase Simplex Algorithm

Linear Programming: Basic Feasible Solutions

The Goal of the Simplex Algorithm. Any LP can be converted into the standard form: minimize c'x subject to Ax = b, x ≥ 0.
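The conversion can be sketched in code. This is a minimal illustration, not from the slides; the function name `to_standard_form` and the sample data are hypothetical. Inequality constraints Ax ≤ b become equalities by appending one zero-cost slack variable per row.

```python
def to_standard_form(c, A, b):
    """Convert min c'x s.t. Ax <= b, x >= 0 into standard form
    min c_std'x s.t. A_std x = b, x >= 0 by appending one slack
    variable per inequality (illustrative sketch)."""
    m = len(A)
    # slack variables cost nothing
    c_std = list(c) + [0] * m
    A_std = []
    for i, row in enumerate(A):
        # append the i-th identity column for the i-th slack
        A_std.append(list(row) + [1 if j == i else 0 for j in range(m)])
    return c_std, A_std, list(b)

c_std, A_std, b_std = to_standard_form([-1, -2], [[1, 1], [1, 3]], [4, 6])
# A_std = [[1, 1, 1, 0], [1, 3, 0, 1]]
```

The slack columns form an identity block, which is exactly what makes an initial basis immediately visible later on.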

Convex Polytope. The constraint set F = {x : Ax = b, x ≥ 0} is convex. F may be: 1. bounded, 2. unbounded, or 3. empty.

The Basic Idea of the Simplex Algorithm. A basic feasible solution sits at a corner of F. The algorithm finds the optimum by moving from corner to corner of the convex polytope in a cost-descending manner.

The Basic Idea of the Simplex Algorithm (continued). Two questions must be answered: How do we find an initial feasible solution? How do we move from corner to corner?

Assumption 1. Assume that A is of rank m. Then there is a basis B = (A_B(1), …, A_B(m)) of m linearly independent columns, so B has an inverse.

Basic Solution. The basic solution corresponding to B sets x_B = B^(-1)b and all nonbasic components to zero. A basic solution may be infeasible (some component may be negative).
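Computing x_B = B^(-1)b can be sketched as follows, assuming exact arithmetic with Python's `fractions`; the helper name `basic_solution` and the sample data are hypothetical, and the Gaussian elimination is illustrative rather than the textbook's method.

```python
from fractions import Fraction

def basic_solution(A, b, basis):
    """x_B = B^(-1) b for the columns listed in `basis`; all other
    components are zero. Exact Gaussian elimination; assumes the
    chosen columns are linearly independent (illustrative sketch)."""
    m = len(A)
    # m x m basis matrix augmented with b
    M = [[Fraction(A[i][j]) for j in basis] + [Fraction(b[i])] for i in range(m)]
    for col in range(m):
        piv = next(r for r in range(col, m) if M[r][col] != 0)  # partial pivot
        M[col], M[piv] = M[piv], M[col]
        M[col] = [v / M[col][col] for v in M[col]]
        for r in range(m):
            if r != col and M[r][col] != 0:
                f = M[r][col]
                M[r] = [vr - f * vc for vr, vc in zip(M[r], M[col])]
    x = [Fraction(0)] * len(A[0])
    for i, j in enumerate(basis):
        x[j] = M[i][-1]
    return x

A = [[1, 1, 1, 0], [1, 3, 0, 1]]
x = basic_solution(A, [4, 6], basis=[2, 3])  # slack basis: x = [0, 0, 4, 6], feasible
# basis=[0, 2] gives x = [6, 0, -2, 0]: a basic but infeasible solution
```

The second basis illustrates exactly the slide's point: a basic solution need not be feasible.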

Example: tableaus over x1, …, x7. One basic solution shown is feasible; another is infeasible.

Basic Feasible Solution. The constraints Ax = b, x ≥ 0 define a nonempty convex set F. If a basic solution is in F, it is called a basic feasible solution (bfs).

The Existence of BFS's. F may be bounded, unbounded, or empty. If F is nonempty, it contains at least one bfs. How do we find it?

Assumptions. 1. A is of rank m. 2. F is nonempty. 3. c'x is bounded below for x ∈ F. These ensure that bfs's exist and that there is a bounded optimal solution. Are all bfs's vertices of the convex polytope defined by F?

The Simplex Algorithm: The Geometry of Linear Programs

Linear Subspaces of R^d. S ⊆ R^d is a subspace of R^d if it is closed under vector addition and scalar multiplication. The solution set S = {x ∈ R^d : Ax = 0} of a set of homogeneous linear equations is a subspace of R^d.

Dimensions. For S = {x ∈ R^d : Ax = 0}, dim S = d − rank(A).

Affine Subspaces. An affine subspace is a translated linear subspace. E.g., the solution set {x ∈ R^d : Ax = b} of a set of nonhomogeneous linear equations is an affine subspace.

Dimensions. A nonempty affine subspace {x : Ax = b} has dimension d − rank(A).

Subsets of R^d. The following subsets are neither subspaces nor affine subspaces: a line segment, the first quadrant, a halfspace.

Dimensions. The dimension of any subset of R^d is the smallest dimension of any affine subspace which contains it. E.g., a line segment has dimension 1; the first quadrant of R^2 has dimension 2.

The Feasible Space of LP. Under the three assumptions (A of rank m; F nonempty; c'x bounded below on F), the constraints Ax = b, x ≥ 0 define a nonempty convex set F.

Hyperplane/Halfspace. An affine subspace of R^d of dimension d − 1 is called a hyperplane {x : a'x = b}. A hyperplane defines two closed halfspaces {x : a'x ≥ b} and {x : a'x ≤ b}. Fact: halfspaces are convex.

Convex Polytopes. The intersection of a finite number of halfspaces, when it is bounded and nonempty, is called a convex polytope, or simply a polytope. Since halfspaces are convex, so is any polytope.

Example: a polytope in R^3 with vertices (0, 0, 0), (2, 0, 0), (2, 2, 0), (0, 2, 0), (0, 0, 3), (1, 0, 3), (2, 0, 2), (0, 1, 3).

More on Polytopes. Geometric views of a polytope: the convex hull of a finite set of points; the intersection of a finite number of halfspaces, when it is bounded and nonempty. The algebraic view: the feasible space defined by an LP in standard form. Is there any relation between the two?

LP → Polytopes. Split A into its first n − m columns (variables x1, …, x_(n−m)) and its last m columns (x_(n−m+1), …, x_n); if rank(A) = m, row operations can always bring A to the form [D | I]. Eliminating the basic variables x_(n−m+1), …, x_n (each equals b_i minus a combination of the others, and must stay ≥ 0) leaves a polytope description Dx̃ ≤ b, x̃ ≥ 0 in the remaining variables.

Polytopes → LP. Given a polytope {x̃ : Dx̃ ≤ b, x̃ ≥ 0} with m inequalities, introduce a slack variable for each inequality to turn the m inequalities into m equalities; the result is an LP feasible set in standard form.

Polytopes and LP. The two descriptions are therefore interchangeable: slack variables convert a polytope into the feasible set of an LP, and eliminating basic variables converts it back.

Polytopes & F of LP. Dx̃ ≤ b, x̃ ≥ 0 defines a polytope P; Ax = b, x ≥ 0 defines a feasible set F. Some points of P are vertices; some points of F are bfs's. Are there any relations between them?

Theorem 1. Let P be a convex polytope and F the feasible set of the corresponding LP. Then x̃ is a vertex of P if and only if the corresponding x is a bfs of F.

Theorem 1, Proof (⇐ direction): see the textbook.

Theorem 1, Proof (⇒ direction). Fact: if x̃ is a vertex of P, then it cannot be a strict convex combination of two distinct points of P.

Theorem 1, Proof (⇒), continued. Let x be the feasible point corresponding to the vertex x̃ and define B = {A_j : x_j > 0, 1 ≤ j ≤ n}. We want to show x has at most m nonzero components, i.e., that the vectors in B are linearly independent. Suppose not: then there are d_j, not all zero, with Σ_(A_j ∈ B) d_j A_j = 0. For sufficiently small θ > 0, both x + θd and x − θd remain feasible (the positive components stay positive), and x is their midpoint, contradicting the vertex fact. Hence the vectors in B are linearly independent and |B| ≤ m. Since rank(A) = m, we can always augment B to m linearly independent columns; using this B as the basic columns renders x a bfs.

Discussion. If |B| < m, there may be many ways to augment B to m linearly independent vectors, so two different bases B and B' may correspond to the same bfs. Conversely, two different bfs's must have different corresponding bases.

Example: the polytope with vertices (0, 0, 0), (2, 0, 0), (2, 2, 0), (0, 2, 0), (0, 0, 3), (1, 0, 3), (2, 0, 2), (0, 1, 3), revisited.

Example. In the corresponding F of the LP, two different bases B and B' determine the same bfs, i.e., x = (2, 2, 0, 0, 0, 3, 0)'.

Degeneracy. A bfs is called degenerate if it contains more than n − m zeros.
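The definition translates directly into a check. A sketch (the helper name is illustrative); the bfs x = (2, 2, 0, 0, 0, 3, 0)' is the one from the example, with n = 7 and m = 4:

```python
def is_degenerate(x, n, m):
    """A bfs of an LP with n variables and m constraints (rank m)
    is degenerate when it has more than n - m zero components."""
    return sum(1 for v in x if v == 0) > n - m

# the bfs from the example is determined by two bases, so Theorem 2
# below predicts degeneracy: it has 4 zeros while n - m = 3
print(is_degenerate([2, 2, 0, 0, 0, 3, 0], n=7, m=4))  # True
```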

Theorem 2. If two distinct bases correspond to the same bfs x, then x is degenerate. Proof: let B and B' be two distinct bases determining the same bfs x. Any component of x that is nonbasic in either B or B' is zero, so the nonzero components lie in B ∩ B', which has fewer than m elements. Hence x has more than n − m zeros and is degenerate.

Discussion. Since two distinct bases can correspond to the same bfs, changing the basis may keep the bfs unchanged.

More on Theorems 1 and 2. Vertices of the polytope correspond to bfs's of the LP, so changing vertex corresponds to changing bfs. But a bfs is determined by a chosen basis, and changing the basis need not change the bfs. Does changing vertices correspond to changing bases?

Costs. Under the correspondence between P and F, the cost c'x expressed in the remaining variables differs from the original cost only by a constant, so the two problems have the same optima.

Theorem 3 There is an optimal bfs in any instance of LP. Furthermore, if q bfs’s are optimal, their convex combinations are also optimal.

Proof. Let x0 ∈ P be an optimal solution and let x1, …, xN be the vertices of P. Then x0 = Σ_i α_i x_i with α_i ≥ 0 and Σ_i α_i = 1. Let j be the index of a lowest-cost vertex. Then c'x0 = Σ_i α_i c'x_i ≥ Σ_i α_i c'x_j = c'x_j, so x_j is optimal.

Proof (continued). Let y1, …, yq be the optimal vertices and let y = Σ_i β_i y_i with β_i ≥ 0 and Σ_i β_i = 1. Then c'y = Σ_i β_i c'y_i equals the optimal cost, so every such convex combination y is also optimal.

Linear Programming: Moving from Bfs to Bfs

Facts. The optimal solution can be found among the vertices of the corresponding polytope, and the bfs's of the LP and the vertices of the polytope are closely related. The algorithm to solve an LP therefore moves from vertex to vertex, or equivalently from bfs to bfs.

The BFS's. The columns of A split into basic columns and nonbasic columns; the bfs is determined by the set of basic columns, i.e., B. Moving from bfs to bfs means changing this set.

Move from Bfs to Bfs. Let x0 be the bfs determined by B, and denote its basic components by x_i0, i = 1, …, m, so that b = Σ_i x_i0 A_B(i). For any A_j ∉ B we also have the decomposition A_j = Σ_i x_ij A_B(i).

Move from Bfs to Bfs (continued). m + 1 columns are involved: one A_j from outside B enters, and one A_B(i) leaves. Who enters? Who leaves? For θ > 0, b = Σ_i (x_i0 − θ x_ij) A_B(i) + θ A_j; θ must be positive, and we make exactly one of the coefficients x_i0 − θ x_ij zero while keeping the others nonnegative. The one we drive to zero must have x_ij > 0. Why?

Move from Bfs to Bfs (continued). Suppose A_j is to enter B. To make one coefficient zero and keep the others nonnegative, choose θ0 = min{ x_i0 / x_ij : x_ij > 0 }. Two questions: 1. What if x_i0 = 0 for some i? 2. What if all x_ij ≤ 0?
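The choice of θ0 is a minimum-ratio test, which can be sketched as follows (names and data hypothetical); returning (None, None) signals that every x_ij ≤ 0, i.e. the direction is unbounded:

```python
def ratio_test(x0, xj, basis):
    """Minimum-ratio rule: x0 holds the basic values x_i0, xj the
    entering column's coefficients x_ij. Returns (theta0, leaving
    column index) or (None, None) when all x_ij <= 0 (unbounded)."""
    best = None
    for xi0, xij, col in zip(x0, xj, basis):
        if xij > 0:
            r = xi0 / xij
            if best is None or r < best[0]:
                best = (r, col)
    return best if best is not None else (None, None)

theta0, leaving = ratio_test([2, 1, 3], [1, 1, 1], basis=[4, 5, 6])
# theta0 == 1 and column 5 leaves the basis
```

If some x_i0 = 0 with x_ij > 0, then θ0 = 0: the basis changes but the bfs does not, which is exactly the degenerate case discussed earlier.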

Example: the same polytope as before, with vertices (0, 0, 0), (2, 0, 0), (2, 2, 0), (0, 2, 0), (0, 0, 3), (1, 0, 3), (2, 0, 2), (0, 1, 3).

Example. In the corresponding F of the LP, the ratios x_i0 / x_ij are compared; choosing θ = θ0 = 1 makes A6 leave B, moving the bfs from one vertex (θ = 0) to an adjacent one (θ = 1).

Linear Programming: Organization of a Tableau

Example. The constraint data are organized in a tableau over x1, …, x5 with an Ans column; elementary row operations (e.g., R1, R2 − R1, R3 − R1) update it.

Example (continued). Suppose we choose A1 to enter the basis, i.e., A1 as the pivot column. For each row with a positive entry in the pivot column, form the ratio of the Ans entry to that entry (here 2/2 and 1/3) and take the minimum; pivoting on that row and clearing the pivot column yields the next tableau.

Linear Programming: Choosing a Profitable Column

Cost Change from Bfs0 to Bfs1. Let bfs0 = x0 be obtained using a basis B, with cost z0 = Σ_i x_i0 c_B(i). Let A_j ∉ B be a candidate to enter the basis, c_j the unit cost of x_j, and A_j = Σ_i x_ij A_B(i) its decomposition. The cost difference from bringing one unit of A_j into bfs0 is c_j − z_j, where z_j = Σ_i x_ij c_B(i).

Choose Profitable Columns. For each A_j ∉ B, compute the relative cost (with respect to B) c̄_j = c_j − z_j; A_j is profitable when c̄_j < 0. Bringing θ0 units of A_j into the basis changes bfs0 to bfs1 and changes the cost by θ0 c̄_j. Which column is the most profitable if more than one relative cost is negative?
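The relative cost c̄_j = c_j − z_j can be sketched as a one-liner (hypothetical helper, illustrative numbers):

```python
def relative_cost(cj, xj, cB):
    """Relative cost c_j - z_j, where z_j = sum_i x_ij * c_B(i) and
    xj is the decomposition of column A_j in the current basis."""
    return cj - sum(x * c for x, c in zip(xj, cB))

# negative relative cost: bringing A_j in lowers the total cost
assert relative_cost(1, [2, 1], [3, 0]) == -5
# nonnegative relative cost: the column is not profitable
assert relative_cost(4, [1, 1], [1, 2]) == 1
```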

Relative Costs. With c_B = (c_B(1), …, c_B(m))', the relative cost vector is c̄' = c' − c_B' B^(-1) A; the tableau keeps these values in its cost row.

Optimality Criterion. Let B be the basis for the bfs, say x0, and c̄ the relative cost vector. If c̄ ≥ 0, the bfs is optimal.

The Tableau. One entry of the tableau records the negative of the cost of the given x; columns whose relative cost is below zero are candidates to enter the basis.

The Tableau (layout). Rows 1, …, m hold the constraint coefficients a_ik, an identity block for the basic columns, and the right-hand sides b_i in the Ans column; the cost row holds c1, …, c_n and −z in the Ans column.

Assumption. The tableau starts in this canonical form, so the initial bfs is x0 = (0, …, 0, b1, …, bm)': the first n − m variables are inactive (zero) and the last m are the active, basic variables.

Initialization. The cost-row entries of the basic (pivot) columns must be cleared: apply R_(m+1) ← R_(m+1) − Σ_i c_B(i) R_i. After this, the Ans entry of the cost row equals the negative of the cost of the initial bfs.

Choosing a Pivot Column. If all relative costs are nonnegative, we are done. We usually choose the most negative column as the pivot column, but this is not necessarily the best choice.

Choosing the Pivot. Suppose A_j is the pivot column and the pivot is in the i-th row. For each row with a_ij > 0, compute b_i / a_ij; among them, choose the row with minimum b_i / a_ij as the pivot. Then θ0 = min_i b_i / a_ij units of A_j will enter the basis.

Clear the Pivot Column / Bfs0 → Bfs1 / Cost Improvement. Divide the pivot row R_i by a_ij, then subtract a_kj copies of the new R_i from every other row, including the cost row. The Ans column now reads off bfs1: x_j = b_i / a_ij, and each remaining basic value becomes b_k − b_i a_kj / a_ij. Because the relative cost c̄_j is negative, the cost-row Ans entry −z increases by −c̄_j b_i / a_ij ≥ 0, i.e., the cost does not increase (and strictly decreases when b_i > 0).

The Cost Column. The entries marked "useless" in the tableau need not be maintained during the iterations.

Example. Starting from the initial tableau over x1, …, x5, initialization clears the basic columns' cost entries via the row operations R1, R2 − R1, R3 − R1, producing the relative costs and identifying the basis B.

x1x1 x2x2 x3x3 x4x4 x5x5 66 33 3 11 2 1 /2 3/3 Choosing the Pivot

x1x1 x2x2 x3x3 x4x4 x5x5 Ans 66 33 3 11 2 1 Clear the Pivot Column x1x1 x2x2 x3x3 x4x4 x5x5 Ans   

The Solution. Reading the Ans column: x2 = 0.5, x4 = 2.5, x5 = 1.5, with the optimal cost in the corner entry.

Linear Programming: Pivot Selection

The Pivot Selection. Steepest descent: the aforementioned rule (most negative relative cost c̄_j). Greatest decrement: compute the total decrease θ0 c̄_j for each candidate column and choose the most negative one. All-variable gradient: see the next slide.

All-Variable Gradient. Bringing one unit of x_j into the bfs moves x0 to x', changing all the basic variables as well; normalize the cost change by the length of the move x' − x0. Compute this normalized change for each candidate A_j ∉ B and choose the most negative one.

Linear Programming: Anticycling Algorithm

Cycling. A sequence of degenerate pivots on a tableau over x1, …, x7 can return to a basis visited before: the bfs never changes, the cost never improves, and the algorithm loops forever.

Bland's Anticycling Algorithm. Suppose in the simplex algorithm we choose the entering column by taking the lowest-numbered favorable column, and, when ties occur in the ratio test, choose the lowest-numbered favorable column to leave the basis. Then the algorithm terminates after a finite number of steps.
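A compact sketch of the simplex iteration with Bland's rule, assuming the tableau is already in canonical form with a known starting basis. All names and the sample LP are illustrative; this is a teaching sketch, not a production solver:

```python
from fractions import Fraction

def simplex_bland(T, basis):
    """Simplex with Bland's rule on a canonical tableau T (last row
    holds the relative costs, last column the Ans values). The
    lowest-numbered column with negative relative cost enters; ties
    in the ratio test are broken by the lowest-numbered basic column."""
    T = [[Fraction(v) for v in row] for row in T]
    m, n = len(T) - 1, len(T[0]) - 1
    while True:
        # Bland: lowest-numbered favorable column enters
        j = next((k for k in range(n) if T[m][k] < 0), None)
        if j is None:
            break                              # all relative costs >= 0: optimal
        i = best = None
        for r in range(m):
            if T[r][j] > 0:
                ratio = T[r][n] / T[r][j]
                if best is None or ratio < best or (ratio == best and basis[r] < basis[i]):
                    i, best = r, ratio
        if i is None:
            raise ValueError("LP is unbounded")
        piv = T[i][j]                          # pivot and clear the column
        T[i] = [v / piv for v in T[i]]
        for r in range(m + 1):
            if r != i and T[r][j] != 0:
                f = T[r][j]
                T[r] = [a - f * b for a, b in zip(T[r], T[i])]
        basis[i] = j
    x = [Fraction(0)] * n
    for r in range(m):
        x[basis[r]] = T[r][n]
    return x, -T[m][n]                         # bfs and its cost

# min -3x1 - x2  s.t.  2x1 + x2 + x3 = 4,  x1 + 3x2 + x4 = 6
T = [[2, 1, 1, 0, 4], [1, 3, 0, 1, 6], [-3, -1, 0, 0, 0]]
x, z = simplex_bland(T, basis=[2, 3])
# x == [2, 0, 0, 4], z == -6
```

On nondegenerate problems Bland's rule may take more pivots than the most-negative rule; its value is the finiteness guarantee.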

Linear Programming: The Two-Phase Simplex Algorithm

Example. Recall the earlier tableau over x1, …, x5: the initialization R1, R2 − R1, R3 − R1 worked because a basis B was already visible in the tableau. Why was it available?

LP in Canonical Form. Minimize c'x subject to Ax ≤ b, x ≥ 0. Adding slack variables converts this to standard form, and the slacks provide an immediate bfs x = (0, …, 0, b1, …, bm)'. But what if b_i < 0 for some i?

Example. After adding slacks, some right-hand sides are negative, so the initial basic solution is infeasible.

Phase I. Introduce an artificial variable x6 and minimize it subject to the modified constraints. Pivot on the most negative relative-cost column using the minimum-ratio rule; after a few pivots the artificial cost reaches 0 with x1 = 4/3, x2 = 1/3, x5 = 2/3, which is a bfs of the original problem.

Phase I (discussion). The Phase I optimum x1 = 4/3, x2 = 1/3, x5 = 2/3 has artificial cost 0. Under what condition can we conclude that the problem is infeasible?

Phase II. Drop the artificial column, restore the original cost row, and re-clear the basic columns' cost entries. You can start the original simplex algorithm from here; in this example the tableau is already optimal at x1 = 4/3, x2 = 1/3, x5 = 2/3.

Two-Phase Simplex Algorithm. LP in standard form: minimize c'x subject to Ax = b, x ≥ 0. Negate both sides of any constraint with b_i < 0; therefore we may assume all b_i ≥ 0. Phase I: minimize ξ = Σ_i x_i^a subject to Ax + x^a = b, x ≥ 0, x^a ≥ 0; if the original problem is feasible, the optimal ξ must be zero, and Phase I ends with a bfs of the original problem. Phase II: starting from the bfs found in Phase I, run the simplex algorithm on the original cost c'x.
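Setting up the Phase I problem can be sketched as follows (the helper name `phase_one_problem` and the sample constraint data are illustrative):

```python
def phase_one_problem(A, b):
    """Build the Phase I problem: negate rows with b_i < 0 so all
    right-hand sides are nonnegative, append one artificial variable
    per row, and set the cost to the sum of the artificials.
    Returns (c1, A1, b1, artificial column indices). Sketch only."""
    m, n = len(A), len(A[0])
    A1, b1 = [], []
    for row, bi in zip(A, b):
        if bi < 0:                      # make the right-hand side nonnegative
            row, bi = [-v for v in row], -bi
        A1.append(list(row))
        b1.append(bi)
    for i in range(m):                  # identity columns for the artificials
        A1[i] += [1 if j == i else 0 for j in range(m)]
    c1 = [0] * n + [1] * m              # minimize the sum of artificials
    return c1, A1, b1, list(range(n, n + m))

c1, A1, b1, art = phase_one_problem([[1, -1], [-1, -2]], [2, -3])
# second row is negated: A1 = [[1, -1, 1, 0], [1, 2, 0, 1]], b1 = [2, 3]
```

The artificial columns form an identity block, so the Phase I tableau starts in canonical form with the artificials as the initial basis.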

Example. Phase I of a problem over x1, …, x5 with artificial variables x6, x7, x8.

Phase I. Successive pivots chosen by the minimum-ratio rule reduce the artificial cost to zero, driving the artificial variables out of the basis.

Phase II. Drop the artificial columns x6, x7, x8, restore the original cost row over x1, …, x5, and re-clear the basic columns' cost entries.

The Solution: x2 = 0.5, x4 = 2.5, x5 = 1.5. We can now enter Phase II of the two-phase simplex algorithm; however, we are already at the optimum.