Chapter 6. Large Scale Optimization
6.1 Delayed column generation

Consider $\min c'x$, $Ax = b$, $x \ge 0$, where $A$ has full row rank and a very large number of columns. It is impractical to carry all columns from the start. Instead, start with a few columns and a basic feasible solution of the restricted problem

  $\min \sum_{i \in I} c_i x_i$, $\quad \sum_{i \in I} A_i x_i = b$, $\quad x \ge 0$,

where $I$ is the index set of the variables (columns) currently at hand, and generate (find) entering nonbasic columns only as needed ((delayed) column generation). If the reduced cost $\bar{c}_j < 0$, then $x_j$ can enter the basis; hence solve $\min_j \bar{c}_j$ over all $j$. If $\min_j \bar{c}_j < 0$, an entering variable (column) has been found. If $\min_j \bar{c}_j \ge 0$, no entering column exists, so the current basis is optimal. When an entering column is found, add it to the restricted problem, solve the restricted problem to optimality again, and continue searching for entering columns.

Linear Programming 2018
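As a sketch of the pricing test described above, the helper below computes reduced costs $\bar{c}_j = c_j - p'A_j$ with $p' = c_B' B^{-1}$ over an explicitly enumerated column pool. The costs, columns, and basis here are made-up toy data for illustration, not an instance from the text.

```python
# Pricing step of delayed column generation (toy data, basis of size 2).
# Reduced cost of column j: cbar_j = c_j - p'A_j, where p' = c_B' B^{-1}.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def price_columns(c, columns, basis, B_inv):
    """Return (min reduced cost, column index) over columns not in the basis."""
    c_B = [c[i] for i in basis]
    # p' = c_B' B^{-1}: p_k is c_B dotted with the k-th column of B^{-1}.
    m = len(B_inv)
    p = [dot(c_B, [B_inv[r][k] for r in range(m)]) for k in range(m)]
    best = (float("inf"), None)
    for j, A_j in enumerate(columns):
        if j in basis:
            continue
        cbar = c[j] - dot(p, A_j)
        best = min(best, (cbar, j))
    return best

# Basis = columns 0, 1 (the identity), so B^{-1} = I and p = (1, 1).
c = [1.0, 1.0, 1.0]
columns = [[1, 0], [0, 1], [2, 1]]
cbar, j = price_columns(c, columns, basis=[0, 1], B_inv=[[1, 0], [0, 1]])
print(cbar, j)  # column 2 has reduced cost 1 - (2 + 1) = -2 < 0, so it enters
```

Since the minimum reduced cost is negative, the loop would add column 2 to the restricted problem and re-solve.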
6.2. Cutting stock problem

[Figure: a raw of width W = 70 cut into finals of widths 17, 17, 17, 15, with the remainder as scrap.]
Rolls of paper of width $W$ (called raws) are to be cut into smaller pieces (called finals). $b_i$ rolls of width $w_i$, $i = 1, 2, \dots, m$, need to be produced. How should the raws be cut so as to minimize the number of raws used while satisfying the orders (equivalent to minimizing scrap)?

Example: with $W = 70$, three finals of width $w_1 = 17$ and one of width $w_2 = 15$ can be produced from one raw. This way of cutting is represented as the pattern $(3, 1, 0, \dots, 0)$. A pattern $(a_{1j}, a_{2j}, \dots, a_{mj})$ is feasible if $\sum_{i=1}^{m} a_{ij} w_i \le W$.
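The feasibility condition for a pattern is a one-line check; the following minimal sketch uses the widths from the slide's example.

```python
# A cutting pattern (a_1, ..., a_m) is feasible if sum_i a_i * w_i <= W.

def is_feasible_pattern(pattern, widths, W):
    return sum(a * w for a, w in zip(pattern, widths)) <= W

W = 70
widths = [17, 15]
print(is_feasible_pattern([3, 1], widths, W))  # 3*17 + 1*15 = 66 <= 70 -> True
print(is_feasible_pattern([4, 0], widths, W))  # 4*17 = 68 <= 70 -> True
print(is_feasible_pattern([3, 2], widths, W))  # 3*17 + 2*15 = 81 > 70 -> False
```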
Formulation:

  $\min \sum_{j=1}^{n} x_j$
  s.t. $\sum_{j=1}^{n} a_{ij} x_j = b_i$, $\quad i = 1, \dots, m$
     $x_j \ge 0$ and integer, $\quad j = 1, \dots, n$,

where $a_{ij}$ is the number of $i$-th finals produced in the $j$-th pattern, $x_j$ is the number of raws to be cut using cutting pattern $j$, and $n$ is the total number of possible cutting patterns, which can be very large. (Equivalently, $\sum_{j=1}^{n} A_j x_j = b$, where $A_j$ is the vector denoting the $j$-th cutting pattern.)

We need an integer solution, but the LP relaxation can be used to find a good approximate solution when the solution values are large: round the solution down to obtain an integer solution, then use a few more raws to meet the unsatisfied demand. For an initial b.f.s., for $j = 1, \dots, m$, let the $j$-th pattern consist of one final of width $w_j$ and none of the other widths.
After computing the dual vector $p' = c_B' B^{-1}$ from the optimal solution of the restricted problem, we try to find an entering nonbasic variable (column). A candidate entering column (pattern) is any nonbasic variable with reduced cost $1 - p'A_j < 0$ (i.e., $p'A_j > 1$); hence solve $\min_j (1 - p'A_j)$, equivalently $\max_j p'A_j$, over all possible patterns (an integer knapsack problem):

  $z = \max \sum_{i=1}^{m} p_i a_i$
  s.t. $\sum_{i=1}^{m} w_i a_i \le W$, $\quad a_i \ge 0$ and integer, $\quad i = 1, \dots, m$.

If $z > 1$, we have found a cutting pattern (nonbasic variable) that can enter the basis. Otherwise ($z \le 1$), the current optimal solution of the restricted problem is optimal for the whole problem. Here $p_i$ can be interpreted as the value of the $i$-th final at the current solution (current basis $B$). The knapsack problem can be solved by a dynamic programming algorithm (we may assume the $w_i$ are positive integers; knapsack is NP-hard, so no polynomial-time algorithm is known).
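For a tiny instance the pricing knapsack can be solved by brute-force enumeration, which makes the $z > 1$ test concrete. The data below are illustrative: with the unit starting patterns above, the starting basis is the identity and the duals are $p = (1, 1)$.

```python
from itertools import product

def price_pattern(p, w, W):
    """max p'a s.t. w'a <= W, a >= 0 integer; brute force, fine for tiny m."""
    ranges = [range(W // wi + 1) for wi in w]
    best_val, best_a = 0.0, None
    for a in product(*ranges):
        if sum(ai * wi for ai, wi in zip(a, w)) <= W:
            val = sum(pi * ai for pi, ai in zip(p, a))
            if val > best_val:
                best_val, best_a = val, a
    return best_val, best_a

# Duals from the unit starting patterns: p = (1, 1); widths 17, 15; W = 70.
z, a = price_pattern([1.0, 1.0], [17, 15], 70)
print(z, a)  # z = 4 > 1, so the pattern a can enter the basis
```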
Let $F(v)$ be the optimal value of the knapsack problem when the capacity is $v$, and let $w_{\min} = \min_i w_i$. Then

  $F(v) = 0$ for $v < w_{\min}$,
  $F(v) = \max_{i=1,\dots,m} \{\, F(v - w_i) + p_i : v \ge w_i \,\}$ for $v \ge w_{\min}$.

To see this, suppose $a^0$ is an optimal solution when the capacity is $v - w_i$; then $a^0 + e_i$ is a feasible solution when the capacity is $v$. Hence $F(v) \ge F(v - w_i) + p_i$ for all $i = 1, \dots, m$ with $v \ge w_i$. Conversely, suppose $a^*$ is an optimal solution for capacity $v \ge w_{\min}$; then there exists some $i$ with $a_i^* > 0$ and $v \ge w_i$, so $a^* - e_i$ is a feasible solution for capacity $v - w_i$. Thus $F(v - w_i) \ge F(v) - p_i$, i.e., $F(v) \le F(v - w_i) + p_i$ for some $i$.
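A minimal implementation of this recursion, with backtracking to recover an optimal pattern, might look like this (the instance reuses the illustrative duals $p = (1, 1)$ and widths 17, 15):

```python
def knapsack_dp(p, w, W):
    """F(v) = 0 for v < w_min; F(v) = max_i { F(v - w_i) + p_i : v >= w_i }."""
    F = [0.0] * (W + 1)
    choice = [None] * (W + 1)          # item attaining the max at capacity v
    for v in range(min(w), W + 1):
        for i, wi in enumerate(w):
            if v >= wi and F[v - wi] + p[i] > F[v]:
                F[v], choice[v] = F[v - wi] + p[i], i
    # Backtrack through the recursion to recover a pattern a with F(W) = p'a.
    a, v = [0] * len(w), W
    while choice[v] is not None:
        a[choice[v]] += 1
        v -= w[choice[v]]
    return F[W], a

z, a = knapsack_dp([1.0, 1.0], [17, 15], 70)
print(z, a)  # z = 4.0, matching the brute-force pricing value
```

The two nested loops make the $O(mW)$ pseudopolynomial running time discussed below visible directly.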
The actual solution is recovered by backtracking through the recursion.
The running time of the algorithm is $O(mW)$, which is not polynomial in the length of the encoding; it is called pseudopolynomial (polynomial in the data $W$ itself). Note: the running time would be polynomial if it were polynomial in $m$ and $\log_2 W$, but $W = 2^{\log_2 W}$, which is not polynomial in $\log_2 W$.

Many practical problems can be naturally formulated in a way similar to the cutting stock problem, especially 0-1 IPs with many columns. For the cutting stock problem we only obtained a fractional solution. For a 0-1 IP, however, a fractional solution can be of little help, and we need a mechanism to find an optimal integer solution (the branch-and-price approach: column generation combined with branch-and-bound).
6.3. Cutting plane methods

The dual of column generation is constraint generation. Consider

  $\max p'b$, $\quad p'A_j \le c_j$, $\quad j = 1, \dots, n$,  (1)

where $n$ can be very large. Instead solve

  $\max p'b$, $\quad p'A_j \le c_j$, $\quad j \in I \subseteq \{1, \dots, n\}$,  (2)

and get an optimal solution $p^*$ to (2). If $p^*$ is feasible to (1), then it is also optimal to (1). If $p^*$ is infeasible to (1), find a violated constraint in (1), add it to (2), and reoptimize (2). Repeat.
Separation problem: given a polyhedron $P$ (described by possibly many inequalities) and a vector $p^*$, determine whether $p^* \in P$; if $p^* \notin P$, find a (valid) inequality violated by $p^*$. Here, solve $\min_j \left( c_j - p^{*\prime} A_j \right)$ over all $j$. If the optimal value is $\ge 0$, then $p^* \in P$; if it is $< 0$, then $c_j < p^{*\prime} A_j$ for the minimizing $j$ (a violated constraint).
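Over an explicit list of constraints, a separation oracle is a single scan for the most negative $c_j - p^{*\prime}A_j$; the constraint data below are made up for illustration.

```python
def separate(p_star, columns, c):
    """Return (j, violation) for a most-violated constraint p'A_j <= c_j,
    or None if p_star satisfies them all (i.e., p_star is in P)."""
    best_j, best_val = None, 0.0
    for j, (A_j, c_j) in enumerate(zip(columns, c)):
        val = c_j - sum(pi * ai for pi, ai in zip(p_star, A_j))  # c_j - p*'A_j
        if val < best_val:
            best_j, best_val = j, val
    return None if best_j is None else (best_j, best_val)

columns = [[1, 0], [0, 1], [2, 1]]
c = [1.0, 1.0, 1.0]
print(separate([0.25, 0.25], columns, c))  # all c_j - p'A_j >= 0 -> None
print(separate([1.0, 1.0], columns, c))    # constraint 2: 1 - 3 = -2 < 0
```

In the infeasible case, the returned inequality would be added to (2) before reoptimizing.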
6.4. Dantzig-Wolfe decomposition
Dantzig-Wolfe decomposition uses the decomposition (resolution) theorem to represent a specially structured LP problem in a different form; column generation is then used to solve the problem. Consider an LP of the form

  $\min\ c_1'x_1 + c_2'x_2$
  s.t. $D_1 x_1 + D_2 x_2 = b_0$
     $F_1 x_1 = b_1$   (1)
     $F_2 x_2 = b_2$
     $x_1, x_2 \ge 0$,

where $x_1, x_2$ have dimensions $n_1, n_2$ and $b_0, b_1, b_2$ have dimensions $m_0, m_1, m_2$. Let $P_i = \{ x_i \ge 0 : F_i x_i = b_i \}$, $i = 1, 2$, and assume $P_i \ne \emptyset$. Note that the nonnegativity constraints guarantee that $P_i$ is pointed, i.e., its lineality space is $L = \{0\}$, so by the resolution theorem $P_i$ is the sum of the convex hull of its extreme points and the cone generated by its extreme rays.

Linear Programming 2018
The problem becomes

  $\min\ c_1'x_1 + c_2'x_2$
  s.t. $D_1 x_1 + D_2 x_2 = b_0$   (2)
     $x_1 \in P_1$, $x_2 \in P_2$.

Any $x_i \in P_i$ can be represented as

  $x_i = \sum_{j \in J_i} \lambda_i^j x_i^j + \sum_{k \in K_i} \theta_i^k w_i^k$, $\quad \lambda_i^j, \theta_i^k \ge 0$, $\quad \sum_{j \in J_i} \lambda_i^j = 1$,

where the $x_i^j$ are the extreme points and the $w_i^k$ the extreme rays of $P_i$. Plugging this into (2) gives the master problem

  $\min\ \sum_{j \in J_1} \lambda_1^j c_1'x_1^j + \sum_{k \in K_1} \theta_1^k c_1'w_1^k + \sum_{j \in J_2} \lambda_2^j c_2'x_2^j + \sum_{k \in K_2} \theta_2^k c_2'w_2^k$
  s.t. $\sum_{j \in J_1} \lambda_1^j D_1 x_1^j + \sum_{k \in K_1} \theta_1^k D_1 w_1^k + \sum_{j \in J_2} \lambda_2^j D_2 x_2^j + \sum_{k \in K_2} \theta_2^k D_2 w_2^k = b_0$   (dual vector $q$)
     $\sum_{j \in J_1} \lambda_1^j = 1$   (dual variable $r_1$)
     $\sum_{j \in J_2} \lambda_2^j = 1$   (dual variable $r_2$)
     $\lambda_i^j, \theta_i^k \ge 0$, $\quad \forall\, i, j, k$.

Linear Programming 2018
Alternatively, the constraints of the master problem can be viewed column-wise as

  $\sum_{j \in J_1} \lambda_1^j \begin{pmatrix} D_1 x_1^j \\ 1 \\ 0 \end{pmatrix} + \sum_{k \in K_1} \theta_1^k \begin{pmatrix} D_1 w_1^k \\ 0 \\ 0 \end{pmatrix} + \sum_{j \in J_2} \lambda_2^j \begin{pmatrix} D_2 x_2^j \\ 0 \\ 1 \end{pmatrix} + \sum_{k \in K_2} \theta_2^k \begin{pmatrix} D_2 w_2^k \\ 0 \\ 0 \end{pmatrix} = \begin{pmatrix} b_0 \\ 1 \\ 1 \end{pmatrix}.$

The new formulation has many variables (columns), but it can be solved by the column generation technique. The actual solution $(x_1, x_2)$ can be recovered from $\lambda$ and $\theta$: each $x_i$ is expressed as a convex combination of extreme points of $P_i$ plus a conical combination of extreme rays of $P_i$.
Decomposition algorithm
Start with a restricted master problem with a few columns, providing an initial b.f.s. to the restricted master. Suppose we have an optimal b.f.s. to the restricted master problem with dual vector $(q, r_1, r_2)$, where $q \in \mathbb{R}^{m_0}$ and $r_1, r_2 \in \mathbb{R}$. Then the reduced costs are (for $\lambda_1^j$, $\theta_1^k$)

  $c_1'x_1^j - (q', r_1, r_2) \begin{pmatrix} D_1 x_1^j \\ 1 \\ 0 \end{pmatrix} = (c_1' - q'D_1)\, x_1^j - r_1$,
  $c_1'w_1^k - (q', r_1, r_2) \begin{pmatrix} D_1 w_1^k \\ 0 \\ 0 \end{pmatrix} = (c_1' - q'D_1)\, w_1^k$.

An entering variable (column) is identified if the reduced cost of some variable not currently present in the restricted master is $< 0$. Hence solve

  $\min\ (c_1' - q'D_1)\, x_1$, $\quad x_1 \in P_1$   (called the subproblem).
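Given duals $(q, r_1)$ from the restricted master, the reduced cost of a candidate extreme-point column can be computed directly. All numbers below are hypothetical illustrative data.

```python
def reduced_cost_point(c1, q, D1, x, r1):
    """(c_1' - q'D_1) x - r_1 for an extreme point x of P_1."""
    n = len(c1)
    qD1 = [sum(q[t] * D1[t][s] for t in range(len(D1))) for s in range(n)]
    return sum((c1[s] - qD1[s]) * x[s] for s in range(n)) - r1

c1 = [2.0, 3.0]
D1 = [[1.0, 1.0]]        # a single coupling constraint
q, r1 = [1.0], 1.0
print(reduced_cost_point(c1, q, D1, [1.0, 0.0], r1))  # (2-1)*1 - 1 = 0
print(reduced_cost_point(c1, q, D1, [0.0, 1.0], r1))  # (3-1)*1 - 1 = 1
```

Neither candidate has negative reduced cost here, so neither would enter the basis (case (c) below, restricted to these two points).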
Three outcomes of the subproblem:
(a) The optimal cost is $-\infty$: the subproblem returns an extreme ray $w_1^k$ with $(c_1' - q'D_1)\, w_1^k < 0$. Generate the column for $\theta_1^k$, i.e., $(D_1 w_1^k, 0, 0)'$.
(b) The optimal cost is finite and $< r_1$: the subproblem returns an extreme point $x_1^j$ with $(c_1' - q'D_1)\, x_1^j < r_1$. Generate the column for $\lambda_1^j$, i.e., $(D_1 x_1^j, 1, 0)'$.
(c) The optimal cost is $\ge r_1$: then $(c_1' - q'D_1)\, x_1^j \ge r_1$ for all extreme points $x_1^j$ and $(c_1' - q'D_1)\, w_1^k \ge 0$ for all extreme rays $w_1^k$, so there is no entering variable among the $\lambda_1^j$, $\theta_1^k$.
Perform the same for $\lambda_2^j$, $\theta_2^k$. The method can also be used when there are more than two blocks, or just one block, in the constraints (see ex. 6.2, 6.3).
Starting the algorithm
Find extreme points $x_1^1$ of $P_1$ and $x_2^1$ of $P_2$. We may assume $D_1 x_1^1 + D_2 x_2^1 \le b_0$ (if not, multiply both sides of the corresponding constraint by $-1$), then solve the phase-1 problem

  $\min \sum_{t=1}^{m_0} y_t$
  s.t. $\sum_{i=1,2} \left( \sum_{j \in J_i} \lambda_i^j D_i x_i^j + \sum_{k \in K_i} \theta_i^k D_i w_i^k \right) + y = b_0$
     $\sum_{j \in J_1} \lambda_1^j = 1$, $\quad \sum_{j \in J_2} \lambda_2^j = 1$
     $\lambda_i^j \ge 0$, $\ \theta_i^k \ge 0$, $\ y_t \ge 0$, $\quad \forall\, i, j, k, t$.

Initial b.f.s.: $\lambda_1^1 = \lambda_2^1 = 1$, $\lambda_i^j = 0$ for $j \ne 1$, $\theta_i^k = 0$ for all $k$, and $y = b_0 - D_1 x_1^1 - D_2 x_2^1$. From an optimal solution of phase 1 (objective value 0), we can find an initial b.f.s. for phase 2.
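The starting slack $y = b_0 - D_1 x_1^1 - D_2 x_2^1$ is a direct computation; the matrices and extreme points below are hypothetical, chosen only so the sign assumption holds.

```python
def initial_slack(b0, D1, x1, D2, x2):
    """y = b0 - D1 x1 - D2 x2 for the phase-1 starting b.f.s."""
    def matvec(D, x):
        return [sum(D[t][s] * x[s] for s in range(len(x))) for t in range(len(D))]
    Dx1, Dx2 = matvec(D1, x1), matvec(D2, x2)
    return [b - u - v for b, u, v in zip(b0, Dx1, Dx2)]

b0 = [5.0, 4.0]
D1 = [[1.0, 0.0], [0.0, 1.0]]
D2 = [[2.0, 0.0], [0.0, 1.0]]
x1, x2 = [1.0, 1.0], [1.0, 2.0]
y = initial_slack(b0, D1, x1, D2, x2)
print(y)  # [2.0, 1.0]; y >= 0, so (lambda_1^1, lambda_2^1, y) is a b.f.s.
```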
Termination and computational experience
There is fast improvement in early iterations, but convergence becomes slow in the tail of the sequence. The revised simplex method is more competitive in terms of running time; decomposition is suitable for large, structured problems. There is ongoing research on improving the convergence speed: adding a column is equivalent to adding a violated constraint in the dual problem, and the dual extreme point solution oscillates heavily in the dual space, which is not desirable (hence stabilized column generation). Think in the dual space: how can a dual optimal solution be obtained quickly? Column generation is also used to solve the LP relaxation of Dantzig-Wolfe decomposed integer programs; when an integer solution is needed, this leads to branch-and-price. An advantage of the decomposition approach for large integer programs also lies in the capability to handle (isolate) difficult structures in the subproblem (e.g., constrained shortest path, robust knapsack problem types). Recall the alternative formulation of the communication path problem in Chapter 1: it is a D-W decomposition.
Bounds on the optimal cost
Thm 6.1: Suppose the optimal cost $z^*$ is finite. Let $z$ be the cost of the current b.f.s. (an upper bound on $z^*$), $r_i$ the dual variable value for the $i$-th convexity constraint, and $z_i$ the finite optimal cost of the $i$-th subproblem. Then

  $z + \sum_i (z_i - r_i) \le z^* \le z$.

pf) Modify the current dual solution into a dual feasible solution by decreasing the value of each $r_i$ to $z_i$. The dual of the master problem is

  $\max\ q'b_0 + r_1 + r_2$
  s.t. $q'D_1 x_1^j + r_1 \le c_1'x_1^j$, $\quad \forall\, j \in J_1$
     $q'D_1 w_1^k \le c_1'w_1^k$, $\quad \forall\, k \in K_1$
     $q'D_2 x_2^j + r_2 \le c_2'x_2^j$, $\quad \forall\, j \in J_2$
     $q'D_2 w_2^k \le c_2'w_2^k$, $\quad \forall\, k \in K_2$.
(continued) Suppose we have a b.f.s. to the master problem with cost $z$ and duals $(q, r_1, r_2)$; then $q'b_0 + r_1 + r_2 = z$. If the optimal cost $z_1$ of the first subproblem is finite, then

  $\min_{j \in J_1} \left( c_1'x_1^j - q'D_1 x_1^j \right) = z_1$, $\quad \min_{k \in K_1} \left( c_1'w_1^k - q'D_1 w_1^k \right) \ge 0.$

(Note that currently $z_1 = \min_{j \in J_1} (c_1' - q'D_1)\, x_1^j < r_1$, since the reduced cost $(c_1' - q'D_1)\, x_1^j - r_1 < 0$ for the entering variable.) If we use $z_1$ in place of $r_1$, the first two sets of dual constraints become feasible; similarly, use $z_2$ in place of $r_2$. The cost of this dual feasible solution is $q'b_0 + z_1 + z_2$, so

  $z^* \ge q'b_0 + z_1 + z_2 = q'b_0 + r_1 + r_2 + (z_1 - r_1) + (z_2 - r_2) = z + (z_1 - r_1) + (z_2 - r_2)$. $\square$
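Once $z$, the $r_i$, and the subproblem costs $z_i$ are at hand, the bound of Thm 6.1 is plain arithmetic; the values below are hypothetical.

```python
def dw_bounds(z, r, subproblem_costs):
    """Return (lower, upper) bounds on z* from Thm 6.1:
    z + sum_i (z_i - r_i) <= z* <= z."""
    lower = z + sum(zi - ri for zi, ri in zip(subproblem_costs, r))
    return lower, z

lo, hi = dw_bounds(z=10.0, r=[3.0, 2.0], subproblem_costs=[2.5, 1.0])
print(lo, hi)  # 10 + (2.5 - 3) + (1 - 2) = 8.5 <= z* <= 10
```

Such bounds give a practical stopping test: terminate column generation once the gap `hi - lo` is small enough.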