Chap 3. The simplex method


1 Chap 3. The simplex method
Standard form problem: minimize $c'x$ subject to $Ax = b$, $x \ge 0$, where $A$ is $m \times n$ with full row rank.
From earlier results, we know that if the LP has a finite optimal value, there exists an extreme point (b.f.s.) optimal solution. The simplex method searches b.f.s.'s to find an optimal one.
If the LP is unbounded, there exists an extreme ray $d^i$ of the recession cone $K = \{x : Ax = 0,\ x \ge 0\}$ such that $c'd^i < 0$. The simplex method finds such a direction $d^i$ when the LP is unbounded, hence providing a proof of unboundedness.

2 3.1 Optimality conditions
A strategy for an algorithm: given a feasible solution $x$, look in the neighborhood of $x$ for a feasible point that gives an improved objective value. If no such point exists, we are at a local minimum. In general, a local minimum is not a global minimum. However, if we minimize a convex function over a convex set (a convex program), a local minimum is a global minimum, which is the case for the linear programming problem. (HW later)
Def: Let $P$ be a polyhedron. Given $x \in P$, $d \in \mathbb{R}^n$ is a feasible direction at $x$ if $\exists\, \theta > 0$ such that $x + \theta d \in P$.
Given a b.f.s. $x \in P$ ($x_B = B^{-1}b$, $x_N = 0$ for $N$: nonbasic), where $B = [A_{B(1)}, A_{B(2)}, \dots, A_{B(m)}]$, we want to find a new point $x + \theta d$ that satisfies $Ax = b$ and $x \ge 0$ and gives an improved objective value.

3 Consider moving to $x + \theta d = (x_B, x_N) + \theta (d_B, d_N)$,
where $d_j = 1$ for some nonbasic variable $x_j$, $j \in N$, $d_i = 0$ for the other nonbasic variables, and $x_B \leftarrow x_B + \theta d_B$.
We require $A(x + \theta d) = b$ for $\theta > 0$ $\Rightarrow$ need $Ad = 0$ (an iff condition for $A(x + \theta d) = b$, $\theta > 0$)
$\Rightarrow 0 = Ad = \sum_{i=1}^{n} A_i d_i = \sum_{i=1}^{m} A_{B(i)} d_{B(i)} + A_j = B d_B + A_j$
$\Rightarrow d_B = -B^{-1}A_j$.
Assuming the columns of $A$ are permuted so that $A = [B, N]$ and $x = (x_B, x_N)$, $d = (d_B, d_N)$, the direction $d = (-B^{-1}A_j,\ e_j)$ is called the $j$-th basic direction.

4 Note that there exist $(n-m)$ basic directions when $B$ is given.
Recall that $\{d : Ad = 0\}$ is the null space of $A$, and a basis for it is given by the columns of $P \begin{bmatrix} -B^{-1}N \\ I_{n-m} \end{bmatrix}$, where $P$ is a permutation matrix such that $AP = [B, N]$:
$$P \begin{bmatrix} -B^{-1}N \\ I_{n-m} \end{bmatrix} = P \begin{bmatrix} -B^{-1}A_1 & \cdots & -B^{-1}A_j & \cdots & -B^{-1}A_{n-m} \\ e_1 & \cdots & e_j & \cdots & e_{n-m} \end{bmatrix}$$
(here $A_j$ runs over the nonbasic columns). Each column gives a basic direction. These $n-m$ basic directions constitute a basis for the null space of $A$ (from earlier). Hence we can move along any direction $d$ that is a linear combination of these basis vectors while satisfying $A(x + \theta d) = b$, $\theta > 0$.
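As a quick check of this null-space claim, here is a small numpy sketch (the matrix and index sets are hypothetical example data): stacking the $n-m$ basic directions as columns gives a matrix whose columns all satisfy $Ad = 0$:

```python
import numpy as np

A = np.array([[1.0, 1.0, 1.0, 0.0],
              [1.0, 0.0, 0.0, 1.0]])
basic, nonbasic = [2, 3], [0, 1]
B, N = A[:, basic], A[:, nonbasic]

D = np.zeros((A.shape[1], len(nonbasic)))
D[nonbasic, :] = np.eye(len(nonbasic))     # identity block on nonbasic rows
D[basic, :] = -np.linalg.solve(B, N)       # -B^{-1} N block on basic rows
print(np.allclose(A @ D, 0))               # True: columns lie in null(A)
```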

5 However, we also need to satisfy the nonnegativity constraints, in addition to $Ax = b$, to remain feasible.
Since $d_j = 1 > 0$ for the nonbasic index $j$ and $x_j = 0$ at the current solution ($x_j$ is a nonbasic variable), moving along $(-d^j)$ violates $x_j \ge 0$ immediately. Hence we do not consider moving along the $(-d^j)$ direction.
Therefore, the directions we can move along are the nonnegative linear combinations of the basic directions, i.e., the cone generated by the $n-m$ basic directions.
Note that if a basic direction satisfies $d_B = -B^{-1}A_j \ge 0$, it is an extreme ray of the recession cone of $P$ (recall HW).
In the simplex method, we choose one of the basic directions as the direction of movement.

6 Two cases:
(a) The current solution $x$ is nondegenerate: $x_B > 0$ guarantees that $x_B + \theta d_B > 0$ for some $\theta > 0$.
(b) $x$ is degenerate: some basic variable $x_{B(i)} = 0$, and it may happen that the $i$-th component of $d_B = -B^{-1}A_j$ is negative. Then $x_{B(i)}$ becomes negative if we move along $d$, so we cannot take $\theta > 0$. Details later.

7 Figure 3.2: $n = 5$, $n - m = 2$. $x_1, x_3$ nonbasic at E; $x_3, x_5$ nonbasic at F ($x_4$ basic at 0).
[Figure: a two-dimensional feasible region bounded by the lines $x_1 = 0, \dots, x_5 = 0$, with vertices E, F, G.]

8 Now consider the cost function:
We want to choose a direction that improves the objective value ($c'(x + \theta d^j) - c'x = \theta c'd^j < 0$):
$$c'd^j = (c_B',\ c_N') \begin{pmatrix} -B^{-1}A_j \\ e_j \end{pmatrix} = c_j - c_B'B^{-1}A_j \equiv \bar{c}_j \quad \text{(called the reduced cost)}$$
If $\bar{c}_j < 0$, $j \in N$, the objective value improves if we move to $x + \theta d^j$ for some $\theta > 0$. ($N$: index set of nonbasic variables)
Note) For the $i$-th basic variable, $\bar{c}_i$ may be computed using the same formula: $\bar{c}_i = c_i - c_B'B^{-1}A_{B(i)} = c_i - c_B'e_i = 0$ for all $i \in B$. ($B$: index set of basic variables)
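A minimal numpy sketch of the reduced-cost computation (the helper name `reduced_costs` and the example data are assumptions for illustration); it computes $\bar{c}' = c' - c_B'B^{-1}A$ for all columns at once, so the basic entries come out as 0 as noted above:

```python
import numpy as np

def reduced_costs(A, c, basic):
    B = A[:, basic]
    y = np.linalg.solve(B.T, c[basic])   # y' = c_B' B^{-1} (simplex multipliers)
    return c - A.T @ y                   # cbar_j = c_j - y' A_j for every j

A = np.array([[1.0, 1.0, 1.0, 0.0],
              [1.0, 0.0, 0.0, 1.0]])
c = np.array([-1.0, -2.0, 0.0, 0.0])
print(reduced_costs(A, c, basic=[2, 3]))  # entries 2 and 3 are 0
```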

9 Thm 3.1: (optimality condition)
Consider a b.f.s. $x$ with basis matrix $B$. Let $\bar{c}$ be the reduced cost vector.
(a) If $\bar{c} \ge 0$ $\Rightarrow$ $x$ is optimal. (sufficient condition for optimality)
(b) $x$ optimal and nondegenerate $\Rightarrow$ $\bar{c} \ge 0$.
Pf) (a) Assume $\bar{c} \ge 0$ and let $y$ be an arbitrary point in $P$. Let $d = y - x$ $\Rightarrow$ $Ad = 0$ $\Rightarrow$ $Bd_B + \sum_{i \in N} A_i d_i = 0$ $\Rightarrow$ $d_B = -\sum_{i \in N} B^{-1}A_i d_i$.
$$c'd = c'y - c'x = c_B'd_B + \sum_{i \in N} c_i d_i = \sum_{i \in N} \left( c_i - c_B'B^{-1}A_i \right) d_i = \sum_{i \in N} \bar{c}_i d_i \ge 0$$
($\bar{c}_i \ge 0$, and $d_i \ge 0$ since $y_i \ge 0$, $x_i = 0$ for $i \in N$ and $d = y - x$.)
(b) Suppose $x$ is a nondegenerate b.f.s. and $\bar{c}_j < 0$ for some $j$. Then $x_j$ must be a nonbasic variable, and we can obtain an improved solution by moving to $x + \theta d^j$ for $\theta > 0$ small. Hence $x$ is not optimal. $\square$

10 Note that the condition $\bar{c} \ge 0$ is a sufficient condition for optimality of a b.f.s. $x$, but it is not necessary: necessity holds only when $x$ is nondegenerate.
Def 3.3: A basis matrix $B$ is said to be optimal if
(a) $B^{-1}b \ge 0$
(b) $\bar{c}' = c' - c_B'B^{-1}A \ge 0'$
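The two conditions of Def 3.3 translate directly into a small numpy check (a sketch; the function name and the tolerance are my own choices):

```python
import numpy as np

def is_optimal_basis(A, b, c, basic, tol=1e-9):
    """Def 3.3: (a) B^{-1} b >= 0 and (b) c' - c_B' B^{-1} A >= 0'."""
    B = A[:, basic]
    x_B = np.linalg.solve(B, b)             # (a) primal feasibility
    y = np.linalg.solve(B.T, c[basic])
    cbar = c - A.T @ y                      # (b) reduced costs
    return bool(np.all(x_B >= -tol) and np.all(cbar >= -tol))
```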

11 3.2 Development of the simplex method
(Assume nondegenerate b.f.s.'s for the time being.)
Suppose we are at a b.f.s. $x$ and have computed $\bar{c}_j$, $j \in N$. If $\bar{c}_j \ge 0$ for all $j \in N$, the current solution is optimal. Otherwise, choose $j \in N$ such that $\bar{c}_j < 0$ and form the vector $d$ (the $j$-th basic direction). (For a maximization problem, choose $j \in N$ such that $\bar{c}_j > 0$.)
($d_j = 1$, $d_i = 0$ for $i \ne B(1), \dots, B(m), j$, and $d_B = -B^{-1}A_j$)
We want to find $\theta^* = \max\{\theta \ge 0 : x + \theta d \in P\}$. The cost change is $\theta^* c'd = \theta^* \bar{c}_j$.
The vector $d$ satisfies $A(x + \theta d) = b$; we also want to satisfy $(x + \theta d) \ge 0$.

12 (a) If $d \ge 0$, then $(x + \theta d) \ge 0$ for all $\theta \ge 0$. Hence $\theta^* = \infty$.
(b) If $d_i < 0$ for some $i$, then $(x_i + \theta d_i) \ge 0$ $\Rightarrow$ $\theta \le -x_i / d_i$.
For nonbasic variables, $d_i \ge 0$, $i \in N$. Hence we only need to consider the basic variables:
$$\theta^* = \min_{i = 1, \dots, m \,:\, d_{B(i)} < 0} \left( -\frac{x_{B(i)}}{d_{B(i)}} \right) \quad \text{(called the minimum ratio test)}$$
Let $y = x + \theta^* d$. We have $y_j = \theta^* > 0$ for the entering nonbasic variable $x_j$. (We assumed nondegeneracy, hence $x_{B(i)} > 0$ for all basic variables.)
Let $l$ be the index of the basic variable selected in the minimum ratio test, i.e., $-x_{B(l)}/d_{B(l)} = \min_{i = 1, \dots, m \,:\, d_{B(i)} < 0} \left( -x_{B(i)}/d_{B(i)} \right) = \theta^*$. Then $x_{B(l)} + \theta^* d_{B(l)} = 0$.
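A sketch of the minimum ratio test (the helper name `ratio_test` is hypothetical): given the basic values $x_B$ and $d_B = -B^{-1}A_j$, it returns $\theta^*$ and the position $l$ of the leaving variable, or $\theta^* = \infty$ when no component of $d_B$ is negative:

```python
import numpy as np

def ratio_test(x_B, d_B):
    """Return (theta*, l); l is None when d >= 0, i.e. theta* = inf."""
    ratios = [(-x_B[i] / d_B[i], i) for i in range(len(d_B)) if d_B[i] < 0]
    if not ratios:
        return np.inf, None            # case (a): d >= 0, LP unbounded
    theta_star, l = min(ratios)        # case (b): smallest ratio wins
    return theta_star, l

theta, l = ratio_test(np.array([2.0, 3.0]), np.array([-1.0, -3.0]))
print(theta, l)   # theta* = 1.0, basic position l = 1 leaves
```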

13 Replace $x_{B(l)}$ in the basis with the entering variable $x_j$.
The new basis matrix is
$$\bar{B} = \begin{bmatrix} | & & | & | & | & & | \\ A_{B(1)} & \cdots & A_{B(l-1)} & A_j & A_{B(l+1)} & \cdots & A_{B(m)} \\ | & & | & | & | & & | \end{bmatrix}$$
Also replace the set $\{B(1), \dots, B(m)\}$ of basic indices by $\{\bar{B}(1), \dots, \bar{B}(m)\}$, given by
$$\bar{B}(i) = \begin{cases} B(i), & i \ne l \\ j, & i = l \end{cases}$$

14 Thm 3.2:
(a) $A_{\bar{B}(i)}$, $i \ne l$, and $A_j$ are linearly independent. Hence $\bar{B}$ is a basis matrix.
(b) $y = x + \theta^* d$ is a b.f.s. with basis $\bar{B}$.
Pf) (a) Suppose $A_{\bar{B}(i)}$, $i = 1, \dots, m$, are linearly dependent. $\Rightarrow$ $\exists\, \lambda_1, \dots, \lambda_m$, not all zero, such that $\sum_{i=1}^{m} \lambda_i A_{\bar{B}(i)} = \bar{B}\lambda = 0$ $\Rightarrow$ $\sum_{i=1}^{m} \lambda_i B^{-1}A_{\bar{B}(i)} = 0$. Hence the vectors $B^{-1}A_{\bar{B}(i)}$ are linearly dependent. But $B^{-1}A_{\bar{B}(i)} = e_i$ for $i = 1, \dots, m$, $i \ne l$, and $B^{-1}A_j = -d_B$ with $-d_{B(l)} \ne 0$ by definition. Hence $B^{-1}A_j$ and $B^{-1}A_{\bar{B}(i)}$, $i \ne l$, are linearly independent, a contradiction.
(b) We have $y \ge 0$, $Ay = b$, and $y_i = 0$ for $i \notin \bar{B}$. The columns of $\bar{B}$ are linearly independent. Hence $y$ is a b.f.s. $\square$

15 See the text for a complete description of an iteration of the simplex method.
Thm 3.3: Assume the standard-form polyhedron $P \ne \emptyset$ and that every b.f.s. is nondegenerate. Then the simplex method terminates after a finite number of iterations. At termination, there are two possibilities:
(a) an optimal basis $B$ and an optimal b.f.s., or
(b) a vector $d$ satisfying $Ad = 0$, $d \ge 0$, and $c'd < 0$, in which case the optimal cost is $-\infty$.
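Putting the pieces together, here is a sketch of one full iteration under the nondegeneracy assumption; it reuses the hypothetical helpers `reduced_costs`, `basic_direction`, and `ratio_test` from the earlier sketches and is not the text's pseudocode verbatim:

```python
import numpy as np

def simplex_iteration(A, c, x, basic):
    """One iteration: returns ('optimal', x, basic), ('unbounded', d, basic),
    or ('step', x_new, basic_new)."""
    cbar = reduced_costs(A, c, basic)
    entering = [j for j in range(A.shape[1])
                if j not in basic and cbar[j] < -1e-9]
    if not entering:
        return 'optimal', x, basic                # all cbar_j >= 0
    j = entering[0]                               # any j with cbar_j < 0
    d = basic_direction(A, basic, j)
    theta, l = ratio_test(x[basic], d[basic])
    if l is None:
        return 'unbounded', d, basic              # d >= 0 and c'd = cbar_j < 0
    x = x + theta * d                             # move to the new b.f.s.
    basic = list(basic)
    basic[l] = j                                  # x_j replaces x_B(l)
    return 'step', x, basic
```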

16 Remarks
$Ax = b$ $\Rightarrow$ $[B, N](x_B, x_N) = b$, $d = (d_B, d_N) = (-B^{-1}A_j,\ e_j)$
1) Suppose $x$ is a nondegenerate b.f.s. and we move to $x + \theta d$, $\theta > 0$. Consider the point $y = x + \theta^* d$ with $\theta^* > 0$ and $y$ feasible. (Nondegeneracy of $x$ guarantees the existence of such $\theta^* > 0$.) Then $A(x + \theta^* d) = b$, and writing $y = (y_B, y_N)$:
$y_B = x_B + \theta^* d_B > 0$ for sufficiently small $\theta^* > 0$
$y_N = x_N + \theta^* d_N = 0 + \theta^* e_j = (0, \dots, \theta^*, 0, \dots, 0)$
Since $(n - m - 1)$ of the constraints $x_j \ge 0$ are active and the $m$ constraints $Ax = b$ are active, $(n - 1)$ constraints are active at $(x + \theta^* d)$ (and the active constraints are linearly independent), and no more inequalities are active.

17 (continued) Hence $y$ lies in the face defined by the active constraints, which is one-dimensional since the equality set of the face is $(n-1)$-dimensional. So $y$ is in a one-dimensional face of $P$ (an edge) and in no other proper face of it.
When $\theta^*$ is such that at least one of the basic variables becomes 0 (say $x_l$), the entering nonbasic variable replaces $x_l$ in the basis, the new basis matrix is nonsingular, and the leaving basic variable $x_l = 0$ $\Rightarrow$ $x_l \ge 0$ becomes active. Hence we get a new b.f.s., which is a 0-dimensional face of $P$.
In a nondegenerate simplex iteration, we start from a b.f.s. (a 0-dimensional face), then follow an edge (a 1-dimensional face) of $P$ until we reach another b.f.s. (a 0-dimensional face).

18 (continued) 2) If $d = (d_B, d_N) = (-B^{-1}A_j,\ e_j) \ge 0$, then $x + \theta d \ge 0$ for all $\theta > 0$, hence feasible.
The recession cone of $P$ is $K = \{y : Ay = 0,\ y \ge 0\}$. ($P = K + Q$)
Since $d \in K$ and $(n-1)$ independent rows are active at $d$, $d$ is an extreme ray of $K$ (recall HW), and $c'd = c_j - c_B'B^{-1}A_j < 0$ $\Rightarrow$ the LP is unbounded.
Hence, given a basis (b.f.s.), finding an extreme ray $d$ (a basic direction) in the recession cone with $c'd < 0$ provides a proof of unboundedness of the LP.
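A returned direction $d$ can be checked as an unboundedness certificate in a few lines of numpy (a sketch; the function name and tolerances are my own):

```python
import numpy as np

def certifies_unbounded(A, c, d, tol=1e-9):
    """True iff A d = 0, d >= 0, and c'd < 0, so cost -> -inf along x + theta d."""
    return (np.allclose(A @ d, 0.0, atol=tol)
            and bool(np.all(d >= -tol))
            and c @ d < -tol)
```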

19 Simplex method for degenerate problems
If degeneracy is allowed, two possibilities arise:
(a) The current b.f.s. is degenerate $\Rightarrow$ $\theta^*$ may be 0 (if, for some $l$, $x_{B(l)} = 0$ and $d_{B(l)} < 0$). Perform the iteration as usual with $\theta^* = 0$. The new basis $\bar{B}$ is still nonsingular (the solution does not change; only the basis changes), so the current solution is a b.f.s. with a different basis $\bar{B}$. (Note that we may have a nondegenerate iteration even though we have a degenerate solution.)
(b) Although $\theta^*$ may be positive, more than one of the original basic variables may become 0 at the new point. Only one of them exits the basis, and the resulting solution is degenerate. (This happens when there are ties in the minimum ratio test.)

20 Figure 3.3: $n - m = 2$. $x_4, x_5$ nonbasic ($g, f$ are the basic directions). Then pivot with $x_4$ entering and $x_6$ exiting the basis ($h, -g$ are the basic directions). Now if $x_5$ enters the basis, we follow the direction $h$ until $x_1 \ge 0$ becomes active, in which case $x_1$ leaves the basis.
[Figure: boundary lines $x_1 = 0, \dots, x_6 = 0$, a degenerate vertex $x$ with directions $g$, $-g$, $f$, $h$, and a point $y$.]

21 Cycling: a sequence of basis changes that leads back to the initial basis (only the basis changes; the solution does not). Cycling may occur in the presence of degeneracy, so finite termination of the simplex method is not guaranteed. Special rules for selecting the entering and/or leaving variable are needed to avoid cycling (later).
Although cycling hardly ever occurs in practice, prolonged runs of degenerate iterations happen frequently, especially in well-structured problems. Hence how to escape degenerate iterations as early as possible is of practical concern.

22 Pivot Selection (rules (a) and (e) are sketched in code below)
(a) Smallest (largest) coefficient rule: choose $x_j$ with $j = \arg\min_{j \in N} \{\bar{c}_j : \bar{c}_j < 0\}$.
(b) Largest increase rule: choose $x_j$ with $\bar{c}_j < 0$ such that $\theta^* \bar{c}_j$ is most negative.
(c) Steepest edge rule.
(d) Maintain a candidate list.
(e) Smallest subscript rule (avoids cycling).
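As an illustration, rules (a) and (e) are a few lines each (a sketch; the function names are my own). The smallest subscript rule is the one that guarantees no cycling:

```python
def smallest_coefficient(cbar, nonbasic):
    """Rule (a): among j in N with cbar_j < 0, pick the most negative cbar_j."""
    cands = [j for j in nonbasic if cbar[j] < 0]
    return min(cands, key=lambda j: cbar[j]) if cands else None

def smallest_subscript(cbar, nonbasic):
    """Rule (e), Bland's rule: the smallest index j with cbar_j < 0."""
    cands = [j for j in sorted(nonbasic) if cbar[j] < 0]
    return cands[0] if cands else None
```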

23 Review of calculus
Purpose: interpret the value $c'd^j \equiv \bar{c}_j$ in a different way and derive the logic of the steepest edge rule.
Def: Let $p > 0$ be an integer and $h : \mathbb{R}^n \to \mathbb{R}$. Then $h(x) \equiv o(\|x\|^p)$ if and only if $\lim_{x^k \to 0} \frac{h(x^k)}{\|x^k\|^p} = 0$ for all sequences $\{x^k\}$ with $x^k \ne 0$ for all $k$ that converge to 0.
Def: $f : \mathbb{R}^n \to \mathbb{R}$ is called differentiable at $x$ if and only if there exists a vector $\nabla f(x)$ (called the gradient) such that $f(z) = f(x) + \nabla f(x)'(z - x) + o(\|z - x\|)$, or in other words, $\lim_{z \to x} \frac{f(z) - f(x) - \nabla f(x)'(z - x)}{\|z - x\|} = 0$. (Fréchet differentiability)

24 ๐‘“โ€ฒ(๐‘ฅ;๐‘ฆ)โ‰ก lim ๐œ†โ†“0 ๐‘“ ๐‘ฅ+๐œ†๐‘ฆ โˆ’๐‘“(๐‘ฅ) ๐œ† if it exists.
Def : ๐‘“: ๐‘… ๐‘› โ†’๐‘…. One sided directional derivative of ๐‘“ at ๐‘ฅ with respect to a vector ๐‘ฆ is defined as ๐‘“โ€ฒ(๐‘ฅ;๐‘ฆ)โ‰ก lim ๐œ†โ†“0 ๐‘“ ๐‘ฅ+๐œ†๐‘ฆ โˆ’๐‘“(๐‘ฅ) ๐œ† if it exists. Note that โˆ’ ๐‘“ โ€ฒ ๐‘ฅ;โˆ’๐‘ฆ = lim ๐œ†โ†‘0 ๐‘“ ๐‘ฅ+๐œ†๐‘ฆ โˆ’๐‘“(๐‘ฅ) ๐œ† . Hence the one-sided directional derivative ๐‘“โ€ฒ(๐‘ฅ;๐‘ฆ) is two-sided if and only if ๐‘“โ€ฒ(๐‘ฅ;โˆ’๐‘ฆ) exists and ๐‘“ โ€ฒ ๐‘ฅ;โˆ’๐‘ฆ =โˆ’๐‘“โ€ฒ(๐‘ฅ;๐‘ฆ). Def : ๐‘–โˆ’๐‘กโ„Ž partial derivative of ๐‘“ at ๐‘ฅ : ๐œ•๐‘“ ๐‘ฅ ๐œ• ๐‘ฅ ๐‘– = lim ๐œ†โ†’0 ๐‘“ ๐‘ฅ+๐œ† ๐‘’ ๐‘– โˆ’๐‘“(๐‘ฅ) ๐œ† if it exists (two sided) ( Gateuax differentiability ) ( ๐‘“ is called Gateaux differentiable at ๐‘ฅ if all (two-sided) directional derivatives of ๐‘“ at a vector ๐‘ฅ exist and ๐‘“โ€ฒ(๐‘ฅ;๐‘ฆ) is a linear function of ๐‘ฆ. F differentiability implies G differentiability, but not conversely. We do not need to distinguish F and G differentiability for our purposes here.) Linear Programming 2018

25 Suppose $f$ is Fréchet differentiable at $x$. Then for any $y \ne 0$,
$$0 = \lim_{\lambda \downarrow 0} \frac{f(x + \lambda y) - f(x) - \nabla f(x)'(\lambda y)}{\lambda \|y\|} = \frac{1}{\|y\|} \left( f'(x; y) - \nabla f(x)'y \right).$$
Hence $f'(x; y)$ exists and $f'(x; y) = \nabla f(x)'y$ (a linear function of $y$). Fréchet differentiability then implies $f'(x; -y) = -f'(x; y)$, so $f'(x; y)$ is two-sided.
In particular, $\nabla f(x)'e_i = \lim_{\lambda \to 0} \frac{f(x + \lambda e_i) - f(x)}{\lambda} = \frac{\partial f}{\partial x_i}(x)$. Hence $\nabla f(x) = \left( \frac{\partial f}{\partial x_1}(x), \dots, \frac{\partial f}{\partial x_n}(x) \right)$.
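For the linear objective $f(x) = c'x$ of an LP, $\nabla f(x) = c$ and $f'(x; d) = c'd$ exactly; a quick numeric sanity check (the vectors and step size are hypothetical example data):

```python
import numpy as np

c = np.array([1.0, -2.0, 0.5])
x = np.array([1.0, 1.0, 1.0])
d = np.array([0.3, 0.4, -0.2])
t = 1e-6
fd = (c @ (x + t * d) - c @ x) / t    # finite-difference quotient
print(fd, c @ d)                      # the two agree up to roundoff
```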

26 In the simplex algorithm, the moving direction is $d = (-B^{-1}A_j,\ e_j)$ for $x_j$ entering. Then
$$f'(x; d) = \nabla f(x)'d = c'd = (c_B',\ c_N') \begin{pmatrix} -B^{-1}A_j \\ e_j \end{pmatrix} = c_j - c_B'B^{-1}A_j = \bar{c}_j.$$
Hence the rate of change $c'd$ of the objective function when we move in the direction $d$ from $x$ is the directional derivative, i.e., $c_j - c_B'B^{-1}A_j$ is the rate of change of $f$ along $d$.
But $f'(x; d)$ is sensitive to the size (norm) of $d$: $f'(x; kd) = \nabla f(x)'(kd) = k f'(x; d)$. To make a fair comparison among the basic directions, use the normalized vector $d / \|d\|$ to compute $f'(x; d)$.

27 $$\frac{\nabla f(x)'d^j}{\|d^j\|} = \frac{\bar{c}_j}{\|d^j\|} = \frac{\bar{c}_j}{\sqrt{\|B^{-1}A_j\|^2 + 1}} \qquad \left( \|d\| = \sqrt{\textstyle\sum_i d_i^2},\ d = (-B^{-1}A_j,\ e_j) \right)$$
Hence, among the basic directions with $\bar{c}_j < 0$, choose the one with the smallest normalized directional derivative. (Steepest edge rule: choose the basic direction that makes the smallest angle with the $(-c)$ vector.)
The drawback is that we need to compute $\|d^j\|$ (additional effort). But once $\|d^j\|$ is computed, it can be updated efficiently in subsequent iterations. The rule is competitive (especially in its dual form) against other rules in real implementations.

