C&O 355 Lecture 6. N. Harvey

Outline
– Proof of Optimality
– Does the algorithm terminate?
– Bland’s Rule
– Corollaries

1. What is a corner point? (BFS and bases)
2. What if there are no corner points? (Infeasible)
3. What are the “neighboring” bases? (Increase one coordinate)
4. What if no neighbors are strictly better? (Might move to a basis that isn’t strictly better (if δ = 0), but whenever x changes it’s strictly better)
5. How can I find a starting feasible basis?
6. Does the algorithm terminate?
7. Does it produce the right answer?

Local-Search Algorithm
  Let B be a feasible basis (If none, Halt: LP is infeasible)
  For each entering coordinate k ∉ B
    If “benefit” of coordinate k is > 0
      Compute y(δ)  (If δ = ∞, Halt: LP is unbounded)
      Find leaving coordinate h ∈ B (with y(δ)_h = 0)
      Set x = y(δ) and B' = B ∖ {h} ∪ {k}
  Halt: return x
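
To make the loop concrete, here is a minimal Python/NumPy sketch of the local-search algorithm for an equational-form LP max { c^T x : Ax = b, x ≥ 0 }. It is an illustrative implementation, not the lecture's reference code: the function name, the tolerance, and the smallest-index choices of entering and leaving coordinates (anticipating Bland's rule, discussed below) are my own choices, and it assumes a starting feasible basis B is already known (finding one is covered at the end of the lecture).

```python
import numpy as np

def local_search(A, b, c, B, tol=1e-9):
    """Local-search (simplex) loop for  max { c^T x : Ax = b, x >= 0 },
    starting from a feasible basis B: a list of m column indices with
    A_B invertible and A_B^{-1} b >= 0.
    Returns ("optimal", x, B) or ("unbounded", None, None)."""
    A, b, c = np.asarray(A, float), np.asarray(b, float), np.asarray(c, float)
    m, n = A.shape
    B = sorted(B)
    x = np.zeros(n)
    x[B] = np.linalg.solve(A[:, B], b)            # BFS determined by B
    while True:
        AB_inv = np.linalg.inv(A[:, B])
        r = c - A.T @ (AB_inv.T @ c[B])           # benefits: r_k = c_k - c_B^T A_B^{-1} A_k
        entering = [k for k in range(n) if k not in B and r[k] > tol]
        if not entering:
            return "optimal", x, B                # every benefit <= 0  =>  x is optimal
        k = min(entering)                         # smallest-index entering coordinate
        d = np.zeros(n)                           # direction of the move: y(delta) = x + delta*d
        d[B] = -AB_inv @ A[:, k]
        d[k] = 1.0
        drop = [i for i in B if d[i] < -tol]      # candidates for the leaving coordinate
        if not drop:
            return "unbounded", None, None        # delta = infinity: LP is unbounded
        delta = min(-x[i] / d[i] for i in drop)   # largest step keeping y(delta) >= 0
        h = min(i for i in drop if -x[i] / d[i] <= delta + tol)  # smallest-index leaving coord
        x = x + delta * d                         # x = y(delta)
        x[h] = 0.0                                # make the leaving coordinate exactly zero
        B = sorted(set(B) - {h} | {k})            # B' = B \ {h} ∪ {k}

# Example use (toy data): max x1 + x2  s.t.  x1 + x2 <= 4,  x1 + 3*x2 <= 6,  x >= 0,
# written in equational form with slack variables x3, x4.
A = [[1, 1, 1, 0],
     [1, 3, 0, 1]]
b = [4, 6]
c = [1, 1, 0, 0]
print(local_search(A, b, c, B=[2, 3]))            # the slack basis is feasible since b >= 0
```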

The Benefits Vector
Recall: the benefit of coordinate k is c_k − c_B^T A_B^{-1} A_k.
Let’s define a benefits vector r to record all the benefits.
Define: r_k = c_k − c_B^T A_B^{-1} A_k for every k, i.e., r = c − A^T (A_B^{-T} c_B).
Note: For k ∉ B, r_k is the benefit of coordinate k. For k ∈ B, r_k = 0.
Claim: For any point z with Az = b, we have c^T z = c^T x + r_{B̄}^T z_{B̄}, where B̄ denotes the complement of B (the nonbasic coordinates).
Proof: Let y = A_B^{-T} c_B, so r = c − A^T y. Then c^T z = r^T z + y^T Az = r^T z + y^T b, and in particular c^T x = r^T x + y^T b = y^T b (since r_B = 0 and x_i = 0 for i ∉ B). Subtracting, c^T z = c^T x + r^T z = c^T x + r_{B̄}^T z_{B̄}, because r vanishes on B. ∎
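
A quick numerical sanity check of this Claim on small made-up data (the matrix A, the basis B, and the point z below are purely illustrative):

```python
import numpy as np

# Small illustrative data: an equational-form LP  max { c^T x : Ax = b, x >= 0 }
# together with a feasible basis B.
A = np.array([[1.0, 1.0, 1.0, 0.0],
              [1.0, 3.0, 0.0, 1.0]])
b = np.array([4.0, 6.0])
c = np.array([1.0, 1.0, 0.0, 0.0])
B = [0, 3]

AB_inv = np.linalg.inv(A[:, B])
x = np.zeros(4)
x[B] = AB_inv @ b                            # the BFS determined by B
r = c - A.T @ (AB_inv.T @ c[B])              # benefits vector: r_k = c_k - c_B^T A_B^{-1} A_k
assert np.allclose(r[B], 0.0)                # r vanishes on the basic coordinates

z = np.array([3.0, 1.0, 0.0, 0.0])           # some point with Az = b
assert np.allclose(A @ z, b)
nonbasic = [k for k in range(4) if k not in B]
# Claim: c^T z = c^T x + r_B̄^T z_B̄  (only nonbasic coordinates contribute)
assert np.isclose(c @ z, c @ x + r[nonbasic] @ z[nonbasic])
print("identity holds:", c @ z, "=", c @ x + r[nonbasic] @ z[nonbasic])
```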

Quick Optimality Proof
Suppose the algorithm terminates with BFS x and basis B.
Claim: x is optimal.
Proof: The loop terminates ⇒ every k ∉ B has benefit ≤ 0 ⇒ r ≤ 0.
For any feasible point z, we have
    c^T z = c^T x + r_{B̄}^T z_{B̄} ≤ c^T x,
since z_{B̄} ≥ 0 (z is feasible) and r_{B̄} ≤ 0 (the loop terminated).
⇒ x is optimal. ∎

1. What is a corner point? (BFS and bases)
2. What if there are no corner points? (Infeasible)
3. What are the “neighboring” bases? (Increase one coordinate)
4. What if no neighbors are strictly better? (Might move to a basis that isn’t strictly better (if δ = 0), but whenever x changes it’s strictly better)
5. How can I find a starting feasible basis?
6. Does the algorithm terminate?
7. Does it produce the right answer? (Yes)

Local-Search Algorithm
  Let B be a feasible basis (If none, Halt: LP is infeasible)
  For each entering coordinate k ∉ B
    If “benefit” of coordinate k is > 0
      Compute y(δ)  (If δ = ∞, Halt: LP is unbounded)
      Find leaving coordinate h ∈ B (with y(δ)_h = 0)
      Set x = y(δ) and B' = B ∖ {h} ∪ {k}
  Halt: return x

Does Algorithm Terminate?
Depends on the rule for choosing the entering / leaving coordinates.
For some rules, examples exist where the algorithm cycles forever [Hoffman 1951].
How can this happen?
– If coordinate k has benefit > 0 but δ = 0, then the basis changes but the BFS doesn’t.
– So the BFS x is degenerate (it is defined by multiple bases); see the small example sketched below.
– Geometrically, there are > m tight constraints at x.
– “Unlikely coincidences” exist among the constraints.
Oldest rule to avoid cycling: “Perturbation method” [Dantzig, Orden, Wolfe 1954]
– Basic idea: Add random noise to the vector b. Then no unlikely coincidences will occur among the constraints. If the noise is very small, an optimal basis of the noisy problem is also an optimal basis of the original problem.
Amazingly simple rule: “Smallest index rule” [Bland 1977]
– Choose the entering coordinate k to be the smallest, i.e., k = min{ i : coordinate i has positive benefit }.
– Choose the leaving coordinate h to be the smallest, i.e., h = min{ i : coordinate i has y(δ)_i = 0 }.
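
These “unlikely coincidences” are easy to manufacture. Here is a small made-up example in Python (not from the lecture): after adding slack variables, the vertex (1, 1) of { x1 ≤ 1, x2 ≤ 1, x1 + x2 ≤ 2 } is a degenerate BFS, determined by several different bases.

```python
import numpy as np

# Illustrative data: equational form of
#   max x1 + x2   s.t.   x1 <= 1,  x2 <= 1,  x1 + x2 <= 2,  x >= 0,
# with slack variables s1, s2, s3.  The vertex (x1, x2) = (1, 1) has three
# tight constraints in two dimensions, so its BFS is degenerate.
A = np.array([[1.0, 0.0, 1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0, 1.0, 0.0],
              [1.0, 1.0, 0.0, 0.0, 1.0]])
b = np.array([1.0, 1.0, 2.0])

for B in ([0, 1, 2], [0, 1, 3], [0, 1, 4]):        # three different bases
    x = np.zeros(5)
    x[B] = np.linalg.solve(A[:, B], b)             # basic part of the BFS
    print("basis", B, "-> BFS", x)                 # all three print [1. 1. 0. 0. 0.]
# The same BFS is determined by multiple bases; at such a point a pivot can
# have delta = 0, so the basis changes while the BFS does not.
```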

Cycling
Suppose the algorithm cycles.
Let F = { i : i is an entering coordinate at some step on the cycle }.
Claim 1: All BFS on the cycle are the same BFS x, and x_i = 0 ∀ i ∈ F.
Proof: If the BFS changes then the objective value strictly increases (positive benefit). The objective value never decreases, so we can’t cycle back to a worse BFS. So every step on the cycle has δ = 0 ⇒ y(δ) = x.
Now consider any i ∈ F. Since i enters during the cycle and we return to the starting basis, i must also leave during the cycle. A leaving coordinate always has y(δ)_i = 0. But y(δ) = x, so x_i = 0. ∎

Bland’s Rule Works
Claim: Bland’s rule never cycles.
Proof Idea:
– Suppose it cycles.
– Let F = { i : i is an entering coordinate at some step on the cycle }.
– Let v = the maximum i in F.
– Consider the steps when v is entering or leaving.
– If v is entering/leaving, then every other i ∈ F is ineligible (because we always choose the smallest i to enter/leave).
– We can infer some strong properties about all other i ∈ F.
– Use those strong properties to get a contradiction.
– How? Non-obvious… Define an auxiliary LP and show that it has an optimal solution, yet it is also unbounded.

Let B be a basis on the cycle just before v enters, and let r be the benefits vector at this step.
  ⇒ v has a positive benefit (r_v > 0).
  No other i ∈ F is eligible to enter ⇒ every other i ∈ F has non-positive benefit (r_i ≤ 0 ∀ i ∈ F ∖ {v}).
Let B' be a basis on the cycle just before v leaves (some u ∈ F enters), and let d be the direction we are trying to move in at this step.
  Recall: y(δ) = x + δd, where d_{B'} = −A_{B'}^{-1} A_u, d_u = 1, and d_i = 0 ∀ i ∉ B' ∪ {u}.
  Recall: the leaving coordinate is a minimizer of min{ −x_i / d_i : i s.t. d_i < 0 }. Since v leaves, d_v < 0.
  Recall: x_v = 0 (by Claim 1).
  No other i ∈ F is eligible to leave ⇒ every other i ∈ F is not a minimizer of this ratio test. But x_i = 0 too (Claim 1), so if d_i were < 0 then i would attain the minimum ratio 0; hence d_i ≥ 0 ∀ i ∈ F ∖ {v}.
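
Since these facts are exactly what Claims 2 and 3 on the next slides rely on, here is a compact recap in LaTeX (my own summary of the statements above, nothing new):

```latex
% B: basis just before v enters;  B': basis just before v leaves, with u entering
\begin{align*}
&\text{At } B:    && r_v > 0, \qquad r_i \le 0 \ \ \forall i \in F \setminus \{v\} \\
&\text{At } B':   && d_v < 0, \qquad d_i \ge 0 \ \ \forall i \in F \setminus \{v\}, \qquad
                     d_i = 0 \ \ \forall i \notin B' \cup \{u\} \\
&\text{Always:}   && x_i = 0 \ \ \forall i \in F \ \ \text{(Claim 1)}, \qquad
                     x_i = 0 \ \ \forall i \notin B \ \ \text{(}x\text{ is the BFS of }B\text{)}
\end{align*}
```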

Auxiliary LP
Consider the following LP, where B is the basis just before v enters, and F, v, u are as above:

    (AuxLP)   max  c^T z
              s.t. Az = b
                   z_v ≤ 0
                   z_i ≥ 0   ∀ i ∈ F ∖ {v}
                   z_i = 0   ∀ i ∉ B ∪ F
                   (z_i unrestricted for i ∈ B ∖ F)

Let x be the BFS (of the original LP) for all bases in the cycle.
Claim 2: x is an optimal solution of (AuxLP).
Claim 3: We can increase x_u → ∞ without ruining feasibility. So (AuxLP) is unbounded.
Together, Claims 2 and 3 are contradictory, which proves that Bland’s rule never cycles.

Claim 2: x is optimal for (AuxLP).
Subclaim: x is feasible for (AuxLP).
  – x_i = 0 ∀ i ∉ B, since B is a basis determining x.
  – x_i = 0 ∀ i ∈ F, by Claim 1.
  So x satisfies Ax = b and every sign constraint of (AuxLP).
For any z with Az = b, we have c^T z = c^T x + r_{B̄}^T z_{B̄}.
Subclaim: For z feasible for (AuxLP), r_{B̄}^T z_{B̄} ≤ 0.
  – For a coordinate i ∉ B ∪ F, we have z_i = 0.
  – For a coordinate i ∈ F ∖ {v}, we have r_i ≤ 0 but z_i ≥ 0.
  – For coordinate v, we have r_v > 0 but z_v ≤ 0.
  So every term of r_{B̄}^T z_{B̄} is ≤ 0.
So c^T z ≤ c^T x ⇒ x is optimal. ∎

Claim 3: (AuxLP) is unbounded.
Claim: B' is a basis for (AuxLP) and it determines the BFS x.
Claim: coordinate u has positive benefit. (The definition of benefit does not depend on the sign constraints.)
If we increase x_u to ε, we must adjust x_{B'} to preserve the “Az = b” constraint.
  – The new point is z(ε) = x + εd.
  – The d vector is the same as before: it does not depend on the sign constraints.
We know: d_v < 0, d_i ≥ 0 ∀ i ∈ F ∖ {v}, x_i = 0 ∀ i ∈ F, and d_i = 0 ∀ i ∉ B' ∪ {u}.
So z(ε)_v ≤ 0 and z(ε)_i ≥ 0 for i ∈ F ∖ {v}. Moreover i ∉ B ∪ F ⇒ i ∉ B' ∪ {u} ⇒ z(ε)_i = 0.
So z(ε) is feasible ∀ ε ≥ 0. Since u has positive benefit, c^T z(ε) = c^T x + ε·(benefit of u) → ∞ as ε → ∞, so (AuxLP) is unbounded. ∎

1. What is a corner point? (BFS and bases)
2. What if there are no corner points? (Infeasible)
3. What are the “neighboring” bases? (Increase one coordinate)
4. What if no neighbors are strictly better? (Might move to a basis that isn’t strictly better (if δ = 0), but whenever x changes it’s strictly better)
5. How can I find a starting feasible basis?
6. Does the algorithm terminate? (If Bland’s rule is used)
7. Does it produce the right answer? (Yes)

Local-Search Algorithm
  Let B be a feasible basis (If none, Halt: LP is infeasible)
  For each entering coordinate k ∉ B
    If “benefit” of coordinate k is > 0
      Compute y(δ)  (If δ = ∞, Halt: LP is unbounded)
      Find leaving coordinate h ∈ B (with y(δ)_h = 0)
      Set x = y(δ) and B' = B ∖ {h} ∪ {k}
  Halt: return x

Corollaries
Corollary (“Fundamental Theorem of LP”): For any LP, exactly one of the following outcomes holds:
– An optimal solution exists
– The problem is infeasible
– The problem is unbounded
Corollary: For any equational-form LP, if an optimal solution exists, then some BFS is optimal.
More coming soon…

Finding a starting basis
Consider the LP max { c^T x : x ∈ P } where P = { x : Ax = b, x ≥ 0 }.
How can we find a feasible point?
Trick: Just solve a different LP! (See the sketch below.)
– Note: c is irrelevant. We can introduce a new objective function.
– WLOG b ≥ 0 (we can multiply constraints by −1).
– Get a “relaxation” of P by introducing more “slack variables”: Q = { (x,y) : Ax + y = b, x ≥ 0, y ≥ 0 }.
– Note: (x,0) ∈ Q ⟺ x ∈ P. Can we find such a point?
– Solve the new LP min { Σ_i y_i : (x,y) ∈ Q }.
– If the optimal value is 0, then the x-part of the optimal solution lies in P. If not, P is empty!
– How do we find a feasible point for the new LP? (x,y) = (0,b) is a trivial solution!
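
To make the trick concrete, here is a small Python sketch that builds the auxiliary LP and checks whether its optimal value is 0. For brevity it hands the auxiliary LP to SciPy’s linprog instead of the local-search algorithm above, and the function name and the example data are my own illustrative choices; in the lecture’s setting one would run the local-search algorithm itself, starting from the trivial basis given by (x, y) = (0, b).

```python
import numpy as np
from scipy.optimize import linprog

def find_feasible_point(A, b):
    """Phase I: find some x in P = { x : Ax = b, x >= 0 }, or return None if P is empty,
    by solving the auxiliary LP   min { sum_i y_i : Ax + y = b, x >= 0, y >= 0 }."""
    A = np.asarray(A, float)
    b = np.asarray(b, float)
    m, n = A.shape
    A = A * np.where(b < 0, -1.0, 1.0)[:, None]    # WLOG b >= 0: multiply rows by -1
    b = np.abs(b)
    A_aux = np.hstack([A, np.eye(m)])              # constraints of Q:  Ax + y = b
    c_aux = np.concatenate([np.zeros(n), np.ones(m)])
    res = linprog(c_aux, A_eq=A_aux, b_eq=b, bounds=(0, None), method="highs")
    assert res.status == 0                         # the auxiliary LP always has an optimum:
                                                   # (x, y) = (0, b) is feasible, objective >= 0
    if res.fun <= 1e-9:                            # optimal value 0  =>  y = 0 at the optimum
        return res.x[:n]                           # ... so the x-part lies in P
    return None                                    # optimal value > 0  =>  P is empty

# Example with made-up data:  x1 + x2 = 4,  x1 - x2 = 2,  x >= 0  contains the point (3, 1).
print(find_feasible_point([[1, 1], [1, -1]], [4, 2]))
```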

1. What is a corner point? (BFS and bases)
2. What if there are no corner points? (Infeasible)
3. What are the “neighboring” bases? (Increase one coordinate)
4. What if no neighbors are strictly better? (Might move to a basis that isn’t strictly better (if δ = 0), but whenever x changes it’s strictly better)
5. How can I find a starting feasible basis? (Solve an easier LP)
6. Does the algorithm terminate? (If Bland’s rule is used)
7. Does it produce the right answer? (Yes)

Local-Search Algorithm
  Let B be a feasible basis (If none, Halt: LP is infeasible)
  For each entering coordinate k ∉ B
    If “benefit” of coordinate k is > 0
      Compute y(δ)  (If δ = ∞, Halt: LP is unbounded)
      Find leaving coordinate h ∈ B (with y(δ)_h = 0)
      Set x = y(δ) and B' = B ∖ {h} ∪ {k}
  Halt: return x