Chap 3. The simplex method
Standard form problem: minimize $c'x$ subject to $Ax=b$, $x \ge 0$, where $A$ is $m \times n$ with full row rank. From earlier results, we know that there exists an extreme point (b.f.s.) optimal solution if the LP has a finite optimal value. The simplex method searches the b.f.s.'s to find an optimal one. If the LP is unbounded, there exists an extreme ray $d$ of the recession cone $K = \{x : Ax = 0,\ x \ge 0\}$ such that $c'd < 0$. The simplex method finds such a direction $d$ if the LP is unbounded, hence providing a proof of unboundedness.
3.1 Optimality conditions
A strategy for an algorithm: given a feasible solution $x$, look in the neighborhood of $x$ for a feasible point that gives an improved objective value. If no such point exists, we are at a local minimum point. In general, a local minimum point is not a global minimum point. However, if we minimize a convex function over a convex set (a convex program), a local minimum point is a global minimum point, which is the case for the linear programming problem. (HW later)

Def: $P$ is a polyhedron. Given $x \in P$, a vector $d \in \mathbb{R}^n$ is a feasible direction at $x$ if $\exists\ \theta > 0$ such that $x + \theta d \in P$.

Given a b.f.s. $x \in P$ ($x_B = B^{-1}b$, $x_j = 0$ for $j$ nonbasic, where $B = [A_{B(1)}, A_{B(2)}, \dots, A_{B(m)}]$), we want to find a new point $x + \theta d$ that satisfies $Ax = b$ and $x \ge 0$ and gives an improved objective value.
Consider moving to $x + \theta d = \binom{x_B}{x_N} + \theta \binom{d_B}{d_N}$, where $d_j = 1$ for some nonbasic variable $x_j$, $j \in N$, $d_i = 0$ for the other nonbasic variables, and $x_B \to x_B + \theta d_B$.

We require $A(x + \theta d) = b$ for $\theta > 0$
⇒ need $Ad = 0$ (an iff condition for $A(x+\theta d) = b$, $\theta > 0$)
⇒ $0 = Ad = \sum_{i=1}^{n} A_i d_i = \sum_{i=1}^{m} A_{B(i)} d_{B(i)} + A_j = B d_B + A_j$
⇒ $d_B = -B^{-1} A_j$

Assuming that the columns of $A$ are permuted so that $A = [B, N]$ and $x = (x_B, x_N)$, $d = (d_B, d_N) = (-B^{-1}A_j, e_j)$ is called the $j$-th basic direction.
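To make this concrete, here is a minimal NumPy sketch (not from the slides) that forms the $j$-th basic direction; the function name and the choice to solve $B d_B = -A_j$ rather than invert $B$ are illustrative assumptions.

```python
import numpy as np

def basic_direction(A, basis, j):
    """j-th basic direction: d_j = 1, d_B = -B^{-1} A_j, all other entries 0.

    `basis` is the list of basic column indices [B(1), ..., B(m)];
    solving B d_B = -A_j avoids forming B^{-1} explicitly.
    """
    d = np.zeros(A.shape[1])
    d[j] = 1.0                                         # entering nonbasic variable
    d[basis] = np.linalg.solve(A[:, basis], -A[:, j])  # d_B = -B^{-1} A_j
    return d
```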
Note that there exist $(n-m)$ basic directions when $B$ is given. Recall that $\{d : Ad = 0\}$ is the null space of $A$ and a basis for it is given by the columns of $P\begin{bmatrix} -B^{-1}N \\ I_{n-m} \end{bmatrix}$, where $P$ is a permutation matrix for $AP = [B, N]$:

$P\begin{bmatrix} -B^{-1}N \\ I_{n-m} \end{bmatrix} = P\begin{bmatrix} -B^{-1}A_1 & \cdots & -B^{-1}A_j & \cdots & -B^{-1}A_{n-m} \\ e_1 & \cdots & e_j & \cdots & e_{n-m} \end{bmatrix}$

Each column here gives a basic direction. Those $n-m$ basic directions constitute a basis for the null space of the matrix $A$ (from earlier). Hence we can move along any direction $d$ which is a linear combination of these basis vectors while satisfying $A(x + \theta d) = b$, $\theta > 0$.
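A quick numerical check of this fact, with made-up data: the $n-m$ basic directions all satisfy $Ad = 0$ and are linearly independent. The matrix $A$ and the basis below are hypothetical examples.

```python
import numpy as np

# Hypothetical data: m = 2, n = 4, A of full row rank, basic columns {2, 3} (0-indexed).
A = np.array([[1., 1., 1., 0.],
              [1., 2., 0., 1.]])
basis, nonbasic = [2, 3], [0, 1]

D = np.zeros((4, 2))
for k, j in enumerate(nonbasic):                       # one basic direction per nonbasic j
    D[j, k] = 1.0
    D[basis, k] = np.linalg.solve(A[:, basis], -A[:, j])

assert np.allclose(A @ D, 0)                           # each column satisfies Ad = 0
assert np.linalg.matrix_rank(D) == 4 - 2               # the n-m columns are independent
```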
However, we also need to satisfy the nonnegativity constraints in addition to $Ax = b$ to remain feasible. Since $d_j = 1 > 0$ for the nonbasic index $j$ and $x_j = 0$ at the current solution ($x_j$ is a nonbasic variable), moving along $-d$ would make $x_j \ge 0$ violated immediately. Hence we do not consider moving along the direction $-d$. Therefore, the directions we can move along are the nonnegative linear combinations of the basic directions, i.e., the cone generated by the $n-m$ basic directions. Note that if a basic direction satisfies $d_B = -B^{-1}A_j \ge 0$, it is an extreme ray of the recession cone of $P$ (recall HW). In the simplex method, we choose one of the basic directions as the direction of movement.
Two cases:
(a) The current solution $x$ is nondegenerate: $x_B > 0$ guarantees that $x_B + \theta d_B > 0$ for some $\theta > 0$.
(b) $x$ is degenerate: some basic variable $x_{B(i)} = 0$. It may happen that the $i$-th component of $d_B = -B^{-1}A_j$ is negative. ⇒ Then $x_{B(i)}$ becomes negative if we move along $d$, so we cannot take $\theta > 0$. Details later.
[Figure 3.2: $n = 5$, $n - m = 2$; the figure shows the constraints $x_1 = 0, \dots, x_5 = 0$ and vertices E, F, G.]
$x_1$, $x_3$ nonbasic at E. $x_3$, $x_5$ nonbasic at F ($x_4$ basic at 0).
Now consider the cost function. We want to choose a direction that improves the objective value: $c'(x + \theta d) - c'x = \theta c'd < 0$.

$c'd = (c_B', c_N')\binom{-B^{-1}A_j}{e_j} = c_j - c_B' B^{-1} A_j \equiv \bar{c}_j$ (called the reduced cost)

If $\bar{c}_j < 0$, $j \in N$, then the objective value improves if we move to $x + \theta d$ for some $\theta > 0$. ($N$: index set of nonbasic variables)

Note) For the $i$-th basic variable, $\bar{c}_{B(i)}$ may be computed using the above formula: $\bar{c}_{B(i)} = c_{B(i)} - c_B' B^{-1} A_{B(i)} = c_{B(i)} - c_B' e_i = 0$, for all $i$. ($B$: index set of basic variables)
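A short sketch (assumptions: NumPy, dense data, illustrative function name) of how $\bar{c}$ can be computed in practice, via the multipliers $y' = c_B' B^{-1}$ instead of forming $B^{-1}$:

```python
import numpy as np

def reduced_costs(A, c, basis):
    """Reduced cost vector: c̄' = c' - c_B' B^{-1} A.

    Solving B' y = c_B gives y' = c_B' B^{-1} (the simplex multipliers),
    so c̄_j = c_j - y' A_j; entries for basic j come out as 0.
    """
    y = np.linalg.solve(A[:, basis].T, c[basis])
    return c - A.T @ y
```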
Thm 3.1: (optimality condition)
Consider a b.f.s. $x$ with basis matrix $B$. Let $\bar{c}$ be the reduced cost vector.
(a) $\bar{c} \ge 0$ ⇒ $x$ is optimal (sufficient condition for optimality).
(b) $x$ is optimal and nondegenerate ⇒ $\bar{c} \ge 0$.

Pf) (a) Assume $\bar{c} \ge 0$. Let $y$ be an arbitrary point in $P$ and let $d = y - x$ ⇒ $Ad = 0$ ⇒ $B d_B + \sum_{i \in N} A_i d_i = 0$ ⇒ $d_B = -\sum_{i \in N} B^{-1} A_i d_i$. Then
$c'd = c'y - c'x = c_B' d_B + \sum_{i \in N} c_i d_i = \sum_{i \in N} (c_i - c_B' B^{-1} A_i) d_i = \sum_{i \in N} \bar{c}_i d_i \ge 0$
($\bar{c}_i \ge 0$, and $d_i \ge 0$ since $y_i \ge 0$, $x_i = 0$ for $i \in N$ and $d = y - x$).
(b) Suppose $x$ is a nondegenerate b.f.s. and $\bar{c}_j < 0$ for some $j$. Then $x_j$ must be a nonbasic variable, and we can obtain an improved solution by moving to $x + \theta d$ for $\theta > 0$ small. Hence $x$ is not optimal. □
Note that the condition $\bar{c} \ge 0$ is a sufficient condition for optimality of a b.f.s. $x$, but it is not necessary. The necessity holds only when $x$ is nondegenerate.

Def 3.3: A basis matrix $B$ is said to be optimal if
(a) $B^{-1} b \ge 0$
(b) $\bar{c}' = c' - c_B' B^{-1} A \ge 0'$
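Def 3.3 translates directly into a predicate. The sketch below is illustrative only; the tolerance for floating-point arithmetic is an assumption beyond the definition.

```python
import numpy as np

def is_optimal_basis(A, b, c, basis, tol=1e-9):
    """Check Def 3.3 with a small tolerance for floating-point arithmetic."""
    B = A[:, basis]
    x_B = np.linalg.solve(B, b)                         # (a) B^{-1} b >= 0
    cbar = c - A.T @ np.linalg.solve(B.T, c[basis])     # (b) c̄' >= 0'
    return bool((x_B >= -tol).all() and (cbar >= -tol).all())
```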
3.2 Development of the simplex method
(Assume a nondegenerate b.f.s. for the time being.) Suppose we are at a b.f.s. $x$ and have computed $\bar{c}_j$, $j \in N$. If $\bar{c}_j \ge 0$ for all $j \in N$, the current solution is optimal. Otherwise, choose $j \in N$ such that $\bar{c}_j < 0$ and form the vector $d$ (the $j$-th basic direction). (For a maximization problem, choose $j \in N$ such that $\bar{c}_j > 0$.)
($d_j = 1$, $d_i = 0$ for $i \notin \{B(1), \dots, B(m), j\}$, and $d_B = -B^{-1} A_j$)
We want to find $\theta^* = \max\{\theta \ge 0 : x + \theta d \in P\}$. The cost change is $\theta^* c'd = \theta^* \bar{c}_j$. The vector $d$ satisfies $A(x + \theta d) = b$; we also want $(x + \theta d) \ge 0$.
(a) If $d \ge 0$, then $(x + \theta d) \ge 0$ for all $\theta \ge 0$. Hence $\theta^* = \infty$.
(b) If $d_i < 0$ for some $i$, then $(x_i + \theta d_i) \ge 0$ ⇒ $\theta \le -x_i / d_i$. For nonbasic variables, $d_i \ge 0$, $i \in N$; hence we only consider basic variables:
$\theta^* = \min_{i=1,\dots,m:\ d_{B(i)} < 0} \left( -\frac{x_{B(i)}}{d_{B(i)}} \right)$ (called the minimum ratio test)

Let $y = x + \theta^* d$. We have $y_j = \theta^* > 0$ for the entering nonbasic variable $x_j$ (we assumed nondegeneracy, hence $x_{B(i)} > 0$ for all basic variables). Let $\ell$ be the index of the basic variable selected in the minimum ratio test, i.e., $-\frac{x_{B(\ell)}}{d_{B(\ell)}} = \min_{i=1,\dots,m:\ d_{B(i)} < 0} \left( -\frac{x_{B(i)}}{d_{B(i)}} \right) = \theta^*$. Then $x_{B(\ell)} + \theta^* d_{B(\ell)} = 0$.
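The minimum ratio test as a small sketch (the function name and return convention are assumptions, not from the text):

```python
import numpy as np

def min_ratio_test(x_B, d_B):
    """Return (θ*, ℓ): the step length and the position of the leaving variable.

    Only basic components with d_B(i) < 0 restrict the step; if there are
    none, θ* = ∞ and the LP is unbounded along this direction (ℓ = None).
    """
    neg = np.where(d_B < 0)[0]
    if neg.size == 0:
        return np.inf, None
    ratios = -x_B[neg] / d_B[neg]
    k = int(np.argmin(ratios))
    return ratios[k], int(neg[k])
```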
Replace $x_{B(\ell)}$ in the basis with the entering variable $x_j$. The new basis matrix is
$\bar{B} = [A_{B(1)}, \dots, A_{B(\ell-1)}, A_j, A_{B(\ell+1)}, \dots, A_{B(m)}]$
Also replace the set $\{B(1), \dots, B(m)\}$ of basic indices by $\{\bar{B}(1), \dots, \bar{B}(m)\}$ given by
$\bar{B}(i) = \begin{cases} B(i), & i \ne \ell, \\ j, & i = \ell. \end{cases}$
Thm 3.2: (a) $A_{B(i)}$, $i \ne \ell$, and $A_j$ are linearly independent. Hence $\bar{B}$ is a basis matrix.
(b) $y = x + \theta^* d$ is a b.f.s. with basis $\bar{B}$.

Pf) (a) Suppose $A_{\bar{B}(i)}$, $i = 1, \dots, m$, are linearly dependent ⇒ $\exists\ \lambda_1, \dots, \lambda_m$, not all zero, such that $\sum_{i=1}^m \lambda_i A_{\bar{B}(i)} = \bar{B}\lambda = 0$ ⇒ $\sum_{i=1}^m \lambda_i B^{-1} A_{\bar{B}(i)} = 0$. Hence the vectors $B^{-1} A_{\bar{B}(i)}$ are linearly dependent. But $B^{-1} A_{B(i)} = e_i$ for $i = 1, \dots, m$, $i \ne \ell$, while $B^{-1} A_j = -d_B$, and by definition $-d_{B(\ell)} \ne 0$. Hence $B^{-1} A_j$ and $B^{-1} A_{B(i)}$, $i \ne \ell$, are linearly independent, a contradiction.
(b) We have $y \ge 0$, $Ay = b$, and $y_i = 0$ for $i \notin \bar{B}$. The columns of $\bar{B}$ are linearly independent, hence $y$ is a b.f.s. □
See the text for a complete description of an iteration of the simplex method.

Thm 3.3: Assume the standard form polyhedron $P \ne \emptyset$ and that every b.f.s. is nondegenerate. Then the simplex method terminates after a finite number of iterations. At termination, there are two possibilities:
(a) an optimal basis $B$ and an optimal b.f.s., or
(b) we have found a vector $d$ satisfying $Ad = 0$, $d \ge 0$, and $c'd < 0$, and the optimal cost is $-\infty$.
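Putting the pieces together, one iteration might look like the following sketch. It is illustrative only: dense linear algebra, no anti-cycling rule, and an arbitrary choice of the entering index are all assumptions beyond the slides.

```python
import numpy as np

def simplex_iteration(A, b, c, basis, tol=1e-9):
    """One iteration under the assumptions of Thm 3.3 (nondegenerate b.f.s.).

    Returns ('optimal', basis), ('unbounded', d) with d a certificate
    (Ad = 0, d >= 0, c'd < 0), or ('continue', new_basis).
    """
    m, n = A.shape
    B = A[:, basis]
    x_B = np.linalg.solve(B, b)
    cbar = c - A.T @ np.linalg.solve(B.T, c[basis])
    candidates = [j for j in range(n) if j not in basis and cbar[j] < -tol]
    if not candidates:
        return 'optimal', list(basis)
    j = candidates[0]                          # any j with negative reduced cost will do
    d_B = np.linalg.solve(B, -A[:, j])         # basic part of the j-th basic direction
    if (d_B >= -tol).all():                    # d >= 0: extreme ray with c'd = c̄_j < 0
        d = np.zeros(n); d[j] = 1.0; d[basis] = d_B
        return 'unbounded', d
    neg = np.where(d_B < -tol)[0]
    ell = int(neg[np.argmin(-x_B[neg] / d_B[neg])])   # minimum ratio test
    new_basis = list(basis)
    new_basis[ell] = j                         # x_B(ℓ) leaves, x_j enters
    return 'continue', new_basis
```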
Remarks: $Ax = b$ ⇔ $[B, N]\binom{x_B}{x_N} = b$, $d = \binom{d_B}{d_N} = \binom{-B^{-1}A_j}{e_j}$

1) Suppose $x$ is a nondegenerate b.f.s. and we move to $x + \theta d$, $\theta > 0$. Consider the point $y = x + \theta^* d$, with $\theta^* > 0$ and $y$ feasible (nondegeneracy of $x$ guarantees the existence of such $\theta^* > 0$). Then $A(x + \theta^* d) = b$ and, with $y = (y_B, y_N)$:
$y_B = x_B + \theta^* d_B > 0$ for sufficiently small $\theta^* > 0$
$y_N = x_N + \theta^* d_N = 0 + \theta^* e_j = (0, \dots, \theta^*, 0, \dots, 0)$
Since $(n - m - 1)$ of the constraints $x_i \ge 0$ are active and the $m$ constraints $Ax = b$ are active, we have $(n - 1)$ constraints active at $(x + \theta^* d)$ (and the active constraints are linearly independent), and no more inequalities are active.
(continued) Hence $y$ is in the face defined by the active constraints, which is one-dimensional since the equality set of the face is $(n-1)$-dimensional. So $y$ is in a one-dimensional face of $P$ (an edge) and in no other proper face of it. When $\theta^*$ is such that at least one of the basic variables becomes 0 (say $x_l$), the entering nonbasic variable replaces $x_l$ in the basis, the new basis matrix is nonsingular, and the leaving basic variable $x_l = 0$ ⇒ $x_l \ge 0$ becomes active. Hence we get a new b.f.s., which is a 0-dimensional face of $P$.

For a nondegenerate simplex iteration, we start from a b.f.s. (a 0-dimensional face), then follow an edge (a 1-dimensional face) of $P$ until we reach another b.f.s. (a 0-dimensional face).
(continued) (2) If $d = \binom{d_B}{d_N} = \binom{-B^{-1}A_j}{e_j} \ge 0$, then $x + \theta d \ge 0$ for all $\theta > 0$, hence feasible. The recession cone of $P$ is $K = \{y : Ay = 0,\ y \ge 0\}$. ($P = K + Q$) Since $d \in K$ and $(n-1)$ independent rows are active at $d$, $d$ is an extreme ray of $K$ (recall HW), and $c'd = c_j - c_B' B^{-1} A_j < 0$ ⇒ the LP is unbounded. Hence, given a basis (b.f.s.), finding an extreme ray $d$ (basic direction) in the recession cone with $c'd < 0$ provides a proof of unboundedness of the LP.
Simplex method for degenerate problems
If degeneracy is allowed, there are two possibilities:
(a) The current b.f.s. is degenerate ⇒ $\theta^*$ may be 0 (if, for some $i$, $x_{B(i)} = 0$ and $d_{B(i)} < 0$). Perform the iteration as usual with $\theta^* = 0$. The new basis $\bar{B}$ is still nonsingular (the solution does not change, only the basis changes), hence the current solution is a b.f.s. with a different basis $\bar{B}$. (Note that we may have a nondegenerate iteration even though we have a degenerate solution.)
(b) Although $\theta^*$ may be positive, more than one of the original basic variables may become 0 at the new point. Only one of them exits the basis, and the resulting solution is degenerate. (This happens when we have ties in the minimum ratio test.)
[Figure 3.3: $n - m = 2$; the figure shows the constraints $x_1 = 0, \dots, x_6 = 0$, points $x$ and $y$, and directions $f$, $g$, $h$, $-g$.]
$x_4$, $x_5$ nonbasic ($f$, $g$ are the basic directions). Then pivot with $x_4$ entering and $x_6$ exiting the basis ($h$, $-g$ are now the basic directions). If $x_5$ then enters the basis, we follow the direction $h$ until $x_1 \ge 0$ becomes active, in which case $x_1$ leaves the basis.
Cycling: a sequence of basis changes that leads back to the initial basis (only the basis changes; the solution does not). Cycling may occur if there exists degeneracy, in which case finite termination of the simplex method is not guaranteed. We need special rules for selecting the entering and/or leaving variable to avoid cycling (later). Although cycling hardly occurs in practice, prolonged runs of degenerate iterations may happen frequently, especially in well-structured problems. Hence how to get out of degenerate iterations as early as possible is of practical concern.
Pivot Selection
(a) Smallest (largest) coefficient rule: choose $x_j$ attaining $\min_{j \in N} \{\bar{c}_j : \bar{c}_j < 0\}$ (largest for maximization)
(b) Largest increase rule: choose $x_j$ with $\bar{c}_j < 0$ such that the improvement $\theta^* |\bar{c}_j|$ is maximal
(c) Steepest edge rule
(d) Maintain a candidate list
(e) Smallest subscript rule (avoids cycling)
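For illustration, rules (a) and (e) as sketches; `cbar` holds the reduced costs and `nonbasic` the nonbasic indices (the names and return convention are assumptions):

```python
def smallest_coefficient_rule(cbar, nonbasic):
    """(a): enter the variable with the most negative reduced cost."""
    j = min(nonbasic, key=lambda i: cbar[i])
    return j if cbar[j] < 0 else None          # None: current basis already optimal

def smallest_subscript_rule(cbar, nonbasic):
    """(e) Bland's rule: smallest eligible index; known to prevent cycling."""
    for j in sorted(nonbasic):
        if cbar[j] < 0:
            return j
    return None
```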
Review of calculus
Purpose: interpret the value $c'd \equiv \bar{c}_j$ (for the $j$-th basic direction $d$) in a different way and derive the logic for the steepest edge rule.

Def: Let $p > 0$ be an integer and $h: \mathbb{R}^n \to \mathbb{R}^m$. Then $h(x) \equiv o(\|x\|^p)$ if and only if $\lim_{k \to \infty} \frac{h(x_k)}{\|x_k\|^p} = 0$ for all sequences $\{x_k\}$ with $x_k \ne 0$ for all $k$ that converge to 0.

Def: $f: \mathbb{R}^n \to \mathbb{R}$ is called differentiable at $x$ if and only if there exists a vector $\nabla f(x)$ (called the gradient) such that
$f(z) = f(x) + \nabla f(x)'(z - x) + o(\|z - x\|)$,
or in other words, $\lim_{z \to x} \frac{f(z) - f(x) - \nabla f(x)'(z - x)}{\|z - x\|} = 0$ (Fréchet differentiability).
Def: $f: \mathbb{R}^n \to \mathbb{R}$. The one-sided directional derivative of $f$ at $x$ with respect to a vector $y$ is defined as
$f'(x; y) \equiv \lim_{\theta \downarrow 0} \frac{f(x + \theta y) - f(x)}{\theta}$, if it exists.
Note that $-f'(x; -y) = \lim_{\theta \uparrow 0} \frac{f(x + \theta y) - f(x)}{\theta}$. Hence the one-sided directional derivative $f'(x; y)$ is two-sided if and only if $f'(x; -y)$ exists and $f'(x; -y) = -f'(x; y)$.

Def: The $i$-th partial derivative of $f$ at $x$: $\frac{\partial f(x)}{\partial x_i} = \lim_{\theta \to 0} \frac{f(x + \theta e_i) - f(x)}{\theta}$, if it exists (two-sided). (Gâteaux differentiability)
($f$ is called Gâteaux differentiable at $x$ if all (two-sided) directional derivatives of $f$ at the vector $x$ exist and $f'(x; y)$ is a linear function of $y$. F-differentiability implies G-differentiability, but not conversely. We do not need to distinguish F- and G-differentiability for our purposes here.)
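A small numerical illustration (made-up data): for the linear objective $f(x) = c'x$ of the LP, the difference quotient matches $c'y$ for any $x$, as the definition predicts.

```python
import numpy as np

def one_sided_dir_deriv(f, x, y, theta=1e-7):
    """Difference quotient approximating f'(x; y) = lim_{θ↓0} (f(x+θy) - f(x))/θ."""
    return (f(x + theta * y) - f(x)) / theta

c = np.array([1.0, -2.0])
f = lambda x: c @ x                        # a linear objective
x = np.array([3.0, 1.0])
y = np.array([0.5, 2.0])
print(one_sided_dir_deriv(f, x, y))        # ≈ -3.5
print(c @ y)                               # exactly c'y = -3.5, matching the theory
```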
Suppose $f$ is F-differentiable at $x$. Then, for any $y \ne 0$,
$0 = \lim_{\theta \downarrow 0} \frac{f(x + \theta y) - f(x) - \nabla f(x)'(\theta y)}{\theta \|y\|} = \frac{1}{\|y\|} \left( f'(x; y) - \nabla f(x)' y \right)$
Hence $f'(x; y)$ exists and $f'(x; y) = \nabla f(x)' y$ (a linear function of $y$). If $f$ is F-differentiable, then this implies $f'(x; -y) = -f'(x; y)$ from the above, hence $f'(x; y)$ is two-sided. In particular,
$\nabla f(x)' e_i = \lim_{\theta \to 0} \frac{f(x + \theta e_i) - f(x)}{\theta} = \frac{\partial f}{\partial x_i}(x)$
Hence $\nabla f(x) = \left( \frac{\partial f}{\partial x_1}(x), \dots, \frac{\partial f}{\partial x_n}(x) \right)$.
In the simplex algorithm, the moving direction is $d = \binom{-B^{-1}A_j}{e_j}$ for $x_j$ entering. Then
$f'(x; d) = \nabla f(x)' d = c'd = (c_B', c_N')\binom{-B^{-1}A_j}{e_j} = c_j - c_B' B^{-1} A_j = \bar{c}_j$.
Hence the rate of change $c'd$ of the objective function when we move in the direction $d$ from $x$ is the directional derivative. So $\bar{c}_j = c_j - c_B' B^{-1} A_j$ is the rate of change of $f$ when we move in the direction $d$.

But $f'(x; d)$ is sensitive to the size (norm) of $d$: $f'(x; \lambda d) = \nabla f(x)'(\lambda d) = \lambda f'(x; d)$. To make a fair comparison among basic directions, use the normalized vector $d / \|d\|$ to compute $f'(x; d)$.
$\nabla f(x)' \frac{d}{\|d\|} = \frac{\bar{c}_j}{\|d\|} = \frac{\bar{c}_j}{\sqrt{\|B^{-1}A_j\|^2 + 1}}$  $\left( \|d\| = \sqrt{\|d_B\|^2 + 1},\ d = \binom{-B^{-1}A_j}{e_j} \right)$

Hence, among the basic directions with $\bar{c}_j < 0$, choose the one with the smallest normalized directional derivative. (Steepest edge rule: choose the basic direction which makes the smallest angle with the vector $-c$.) The problem here is that we need to compute $\|d\|$ (additional effort is needed). But once $\|d\|$ is computed, it can be updated efficiently in subsequent iterations. The rule is competitive (especially in its dual form) against other rules in real implementations.
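A naive sketch of the rule (illustrative only; as noted above, a real implementation would update the norms $\|d\|$ across iterations rather than recompute them from scratch):

```python
import numpy as np

def steepest_edge_rule(A, c, basis, tol=1e-9):
    """Enter the variable whose basic direction makes c̄_j / ||d|| most negative."""
    B = A[:, basis]
    cbar = c - A.T @ np.linalg.solve(B.T, c[basis])
    best_j, best_val = None, 0.0
    for j in range(A.shape[1]):
        if j in basis or cbar[j] >= -tol:
            continue                                 # only candidates with c̄_j < 0
        u = np.linalg.solve(B, A[:, j])              # B^{-1} A_j
        val = cbar[j] / np.sqrt(u @ u + 1.0)         # normalized directional derivative
        if val < best_val:
            best_j, best_val = j, val
    return best_j                                    # None means c̄ >= 0: optimal
```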