
Chapter 3 Dynamic Programming.


1 Chapter 3 Dynamic Programming

2 Learning Objectives
After completing this chapter, students will be able to:
1. Understand the overall approach of dynamic programming.
2. Use dynamic programming to solve the shortest-route problem.
3. Develop dynamic programming stages.
4. Describe important dynamic programming terminology.
5. Describe the use of dynamic programming in solving knapsack problems.

3 Chapter Outline
1. Introduction
2. Shortest-Route Problem Solved by Dynamic Programming
3. Dynamic Programming Terminology
4. Dynamic Programming Notation
5. Knapsack Problem

4 1. Introduction
Dynamic programming is a quantitative analytic technique applied to large, complex problems that require a sequence of decisions. Dynamic programming divides a problem into a number of decision stages; the outcome of a decision at one stage affects the decisions at each subsequent stage. The technique is useful in a large number of multi-period business problems, such as:
- smoothing production employment,
- allocating capital funds,
- allocating salespeople to marketing areas, and
- evaluating investment opportunities.

5 Assignment # 1
The dynamic programming technique is useful in a large number of multi-period business problems, such as smoothing production employment, allocating capital funds, allocating salespeople to marketing areas, and evaluating investment opportunities.
Task: Write a report on using the dynamic programming technique to solve some multi-period business problems. Give examples with real data applications.
Due date: after 2 weeks.

6 Dynamic Programming vs. Linear Programming
Dynamic programming differs from linear programming in two ways. First: there is no single algorithm (like the simplex method) that can be programmed to solve all dynamic programming problems.

7 Second: linear programming is a method that gives single-stage (i.e., one time period) solutions. Dynamic programming, in contrast, allows a difficult problem to be broken down into a sequence of easier sub-problems, which are then evaluated by stages. Example: dynamic programming can determine the optimal solution over a one-year time horizon by breaking the problem into 12 smaller one-month problems and solving each one optimally. Hence, it uses a multistage approach.

8 Four Steps of Dynamic Programming
1. Divide the original problem into sub-problems called stages.
2. Solve the last stage of the problem for all possible conditions or states (backward procedure).
3. Working backward from that last stage, solve each intermediate stage. This is done by determining optimal policies from that stage to the end of the problem.
4. Obtain the optimal solution for the original problem by solving all stages sequentially.
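The four steps above can be sketched as a small backward recursion. The following Python sketch is illustrative only (the data layout and the name `solve_backward` are assumptions, not from the chapter): each stage maps a state to its feasible decisions, and each decision carries a cost and the resulting next-stage state.

```python
def solve_backward(stages, terminal_states):
    """Steps 1-4: stages are listed first-decision-first; we solve them in reverse."""
    # Step 2: start at the end; reaching a terminal state costs nothing more.
    value = {s: 0 for s in terminal_states}
    policy = {}
    # Step 3: work backward, solving each stage for all of its states.
    for stage in reversed(stages):
        new_value = {}
        for state, decisions in stage.items():
            # Best decision = min of (decision cost + optimal cost-to-go of next state).
            cost, decision = min(
                (c + value[nxt], d) for d, (c, nxt) in decisions.items()
            )
            new_value[state] = cost
            policy[state] = decision
        value = new_value
    # Step 4: value now holds the optimal cost from each first-stage state.
    return value, policy

# Tiny two-stage example (hypothetical data): states A -> {B, C} -> T.
stages = [
    {"A": {"A-B": (1, "B"), "A-C": (4, "C")}},          # first decisions
    {"B": {"B-T": (5, "T")}, "C": {"C-T": (1, "T")}},   # last decisions
]
value, policy = solve_backward(stages, ["T"])
print(value["A"], policy["A"])  # 5 A-C
```

The shortest-route network on the following slides has exactly this structure, with three stages instead of two.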

9 Solving Types of Dynamic Programming Problems
There are two types of DP problems: network and non-network. The shortest-route problem is a network problem that can be solved by dynamic programming. The knapsack problem is an example of a non-network problem that can be solved using dynamic programming.

10 2- SHORTEST-ROUTE PROBLEM SOLVED BY DYNAMIC PROGRAMMING
George Yates is to travel from Rice, Georgia (node 1) to Dixieville, Georgia (node 7). George wants to find the shortest route. There are several small towns between Rice and Dixieville. The road map is on the next slide. The circles (nodes) on the map represent cities such as Rice, Athens, Georgetown, Dixieville, and Brown. The arrows (arcs) represent highways between the cities.

11 [Figure preview: the road map between Rice and Dixieville; see Figure M2.1 on the next slide.]

12 Highway Map between Rice and Dixieville
[Figure M2.1: highway map with nodes 1-7 (Rice, Lakecity, Athens, Hope, Georgetown, Brown, Dixieville) and the arc distances in miles.]

13 We could solve this small problem by inspection, but it is instructive to see how dynamic programming handles it, as preparation for more complex problems.

14 Step 1: Divide the problem into sub-problems or stages.
Figure M2.2 (next slide) reveals the stages of this problem. In dynamic programming (backward procedure), we start with the last part of the problem, Stage 1, and work backward to the beginning of the problem or network, which is Stage 3 in this problem. Table M2.1 (two slides ahead) summarizes the arcs and arc distances for each stage.

15 The Stages for George Yates Problem
[Figure M2.2: the network of Figure M2.1 divided into Stage 1 (nodes 5 and 6 into node 7), Stage 2 (nodes 2, 3, and 4), and Stage 3 (node 1).]

16 Table M2.1: Distance Along Each Arc
STAGE   ARC   ARC DISTANCE (miles)
1       5-7   14
1       6-7   2
2       4-5   10
2       3-5   12
2       3-6   6
2       2-5   4
2       2-6   10
3       1-2   (not legible)
3       1-3   5
3       1-4   (not legible)
Table M2.1

17 Step 2: Solve The Last Stage – Stage 1
Solve Stage 1, the last part of the network. This is usually trivial. Find the shortest path to the end of the network: node 7 in this problem. The objective is to find the shortest distance to node 7.

18 At Stage 1, the paths from nodes 5 and 6 to node 7 are the only paths. Also note in Figure M2.3 (next slide) that the minimum distances are enclosed in boxes at the nodes entering Stage 1, node 5 and node 6.

19 George Yates, Stage 1
[Figure M2.3: Stage 1 of the network. The minimum distance to node 7 is 14 miles from node 5 (arc 5-7) and 2 miles from node 6 (arc 6-7); these minimums are boxed at nodes 5 and 6.]

20 Stage 1 results:
BEGINNING NODE   SHORTEST DISTANCE TO NODE 7   ARCS ALONG THIS PATH
5                14                            5-7
6                2                             6-7

21 Step 3: Moving backward, solve the intermediate stages.
Moving backward, now solve for Stages 2 and 3. At Stage 2 use Figure M2.4. (next slide):

22 Solution for Stage 2
[Figure M2.4: Stage 2 of the network. The minimum distance to node 7 is 24 miles from node 4 and 8 miles from node 3.]

23 Figure M2.4 Analysis
If we are at node 4, the shortest (and only) route to node 7 is arcs 4-5 and 5-7, with a total minimum distance of 24 miles (10 + 14). At node 3, the shortest route is arcs 3-6 and 6-7, with a total minimum distance of 8 miles = min{(12 + 14), (6 + 2)}. If we are at node 2, the shortest route is arcs 2-6 and 6-7, with a minimum total distance of 12 miles = min{(4 + 14), (10 + 2)}.

24 Stage 2 results:
BEGINNING NODE   SHORTEST DISTANCE TO NODE 7   ARCS ALONG THIS PATH
4                24                            4-5, 5-7
3                8                             3-6, 6-7
2                12                            2-6, 6-7

25 For Stage 2, we have:
1. State variables are the entering nodes: (a) node 2, (b) node 3, (c) node 4.
2. Decision variables are the arcs or routes: (a) 4-5, (b) 3-5, (c) 3-6, (d) 2-5, (e) 2-6.
3. The decision criterion is the minimization of the total distance traveled.
4. The optimal policy for any beginning condition is shown in Figure M2.6.

26 [Figure M2.6: Stage 2.]
State variables are the entering nodes. Decision variables are all the arcs. The optimal policy is the arc, for any entering node, that minimizes the total distance to the destination at this stage.

27 Solution for the Third Stage
[Figure: Stage 3 of the network. The minimum distance to node 7 from node 1 is 13 miles (5 + 8, via arc 1-3).]

28 Stage 3 results:
BEGINNING NODE   SHORTEST DISTANCE TO NODE 7   ARCS ALONG THIS PATH
1                13                            1-3, 3-6, 6-7

29 Step 4: Final Step
The final step is to find the optimal solution after all stages have been solved. To obtain the optimal solution at any stage, only consider the arcs to the next stage and the optimal solution at the next stage. For Stage 3, we only have to consider the three arcs to Stage 2 (1-2, 1-3, and 1-4) and the optimal policies at Stage 2.
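The whole three-stage calculation can be reproduced in a few lines. This is an illustrative Python sketch, not from the text, using the arc distances stated in the slides; the distances of arcs 1-2 and 1-4 are not legible in this transcript, so they are omitted, which does not change the optimum since the slides show the best route uses arc 1-3.

```python
# Arc distances (miles) recoverable from Table M2.1 and the stage analysis.
# Arcs 1-2 and 1-4 are omitted: their distances are not given in this transcript.
ARCS = {
    1: {3: 5},
    2: {5: 4, 6: 10},
    3: {5: 12, 6: 6},
    4: {5: 10},
    5: {7: 14},
    6: {7: 2},
}

def shortest_to_7(node):
    """Backward recursion: f(node) = min over arcs of (arc distance + f(next node))."""
    if node == 7:
        return 0, [7]
    best = None
    for nxt, dist in ARCS[node].items():
        tail, path = shortest_to_7(nxt)
        if best is None or dist + tail < best[0]:
            best = (dist + tail, [node] + path)
    return best

print(shortest_to_7(1))  # (13, [1, 3, 6, 7]): 13 miles via arcs 1-3, 3-6, 6-7
```

The intermediate values agree with the stage tables: `shortest_to_7(4)` gives 24, `shortest_to_7(3)` gives 8, and `shortest_to_7(2)` gives 12.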

30 3. DYNAMIC PROGRAMMING TERMINOLOGY
Stage: a period or a logical sub-problem.
State variables: possible beginning situations or conditions of a stage. These have also been called the input variables.
Decision variables: alternatives or possible decisions that exist at each stage.
Decision criterion: a statement concerning the objective of the problem.
Optimal policy: a set of decision rules, developed as a result of the decision criterion, that gives optimal decisions for any entering condition at any stage.
Transformation: normally, an algebraic statement that reveals the relationship between stages.

31 Shortest Route Problem Transformation Calculation
In the shortest-route problem, the following transformation can be given:
(Distance from the beginning of a given stage to the last node) = (Distance from the given stage to the previous stage) + (Distance from the beginning of the previous stage to the last node)

32 4. Dynamic Programming Notation
In addition to terminology, mathematical notation can also be used to describe any dynamic programming problem. Here, an input, decision, output and return are specified for each stage. This helps to set up and solve the problem. Consider Stage 2 in the George Yates Dynamic Programming problem. This stage can be represented by the diagram shown in Figure M2.7 (as could any given stage of a given dynamic programming problem).

33 Input, Decision, Output, and Return for Stage 2 in George Yates's Problem
[Figure M2.7: Stage 2 with input s2, decision d2, output s1, and return r2.]
sn = input to stage n (M2-1)
dn = decision at stage n (M2-2)
rn = return at stage n (M2-3)
Note that the input to one stage is also the output from another stage; e.g., the input to Stage 2, s2, is also the output from Stage 3. This leads us to the following equation:
sn-1 = output from stage n (M2-4)

34 Transformation Function
A transformation function allows us to go from one stage to another:
tn = transformation function at stage n (M2-5)
The following general formula lets us move from one stage to the next using the transformation function:
sn-1 = tn(sn, dn) (M2-6)
The total return function keeps track of the total profit or cost accumulated at each stage:
fn = total return at stage n (M2-7)
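To make the notation concrete, here is an illustrative Python rendering of the Stage 2 input, decision, output, and return for the shortest-route example, mirroring the return (M2-3) and transformation (M2-6) equations. The function names `t2` and `r2` and the arc dictionary are my encoding of the figure, not the text's.

```python
# Stage 2 arcs of the shortest-route network and their distances in miles.
ARC_DISTANCE = {(4, 5): 10, (3, 5): 12, (3, 6): 6, (2, 5): 4, (2, 6): 10}

def t2(s2, d2):
    """Transformation (M2-6): the output state s1 is the node the chosen arc leads to."""
    tail, head = d2
    assert tail == s2, "decision must be an arc leaving the current state"
    return head

def r2(s2, d2):
    """Return (M2-3): the return of a decision is the distance of the chosen arc."""
    return ARC_DISTANCE[d2]

s2 = 3            # input to Stage 2: we are at node 3
d2 = (3, 6)       # decision: take arc 3-6
s1 = t2(s2, d2)   # output: node 6, which becomes the input to Stage 1
print(s1, r2(s2, d2))  # 6 6
```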

35 Dynamic Programming Key Equations
sn = input to stage n
dn = decision at stage n
rn = return at stage n
sn-1 = input to stage n-1 (output from stage n)
tn = transformation function at stage n
sn-1 = tn[sn, dn] (general relationship between stages)
fn = total return at stage n

36 5. KNAPSACK PROBLEM

37 5. KNAPSACK PROBLEM The “knapsack problem” involves the maximization or minimization of a value, such as profits or costs. Like a linear programming problem, there are restrictions. Imagine a knapsack or pouch that can only hold a certain weight or volume. We can place different types of items in the knapsack. Our objective is to place items in the knapsack to maximize total value without breaking the knapsack because of too much weight or a similar restriction.

38 Types of Knapsack Problems
Examples: choosing items to place in the cargo compartment of an airplane, or selecting which payloads to put on the next NASA space shuttle. The restriction can be volume, weight, or both. Some scheduling problems are also knapsack problems. For example, we may want to determine which jobs to complete in the next two weeks. The two-week period is the knapsack, and we want to load it with jobs in such a way as to maximize profits or minimize costs. The restriction is the number of days or hours during the two-week period.

39 Roller’s Air Transport Service Problem
Roller's Air Transport Service ships cargo by plane in the United States and Canada. The remaining capacity on one of the flights from Seattle to Vancouver is 10 tons. There are four different items to ship. Each item has a weight in tons, a net profit in thousands of dollars, and a total number available. This information is presented in Table M2.2.
TABLE M2.2: Items to Be Shipped
ITEM   WEIGHT (tons)   PROFIT/UNIT ($1,000s)   NUMBER AVAILABLE
1      1               3                       6
2      4               9                       1
3      3               8                       2
4      2               5                       2


41 TABLE M2.3: Relationship Between Items and Stages
ITEM   STAGE
1      4
2      3
3      2
4      1

42 Figure M2.8: Roller's Air Transport Service Problem
[Stage 4 (Item 1): input s4, decision d4, return r4, output s3 -> Stage 3 (Item 2): decision d3, return r3, output s2 -> Stage 2 (Item 3): decision d2, return r2, output s1 -> Stage 1 (Item 4): decision d1, return r1, output s0.]

43 Item data by stage:
ITEM   STAGE   WEIGHT/UNIT (tons)   PROFIT/UNIT ($1,000s)   MAXIMUM VALUE OF DECISION
1      4       1                    3                       6
2      3       4                    9                       1
3      2       3                    8                       2
4      1       2                    5                       2

44 The Transformation Functions
[Figure: Stage 4 (Item 1) with input s4, decision d4, output s3, return r4.]
The output of Stage 4 (s3) is the weight remaining in the plane after Stage 4:
s3 = (remaining weight before Stage 4, s4) - (weight taken at Stage 4, 1 x d4)
The general transformation function for the knapsack problem is:
sn-1 = (an x sn) + (bn x dn) + cn
where an, bn, and cn are coefficients (an = 1 and cn = 0 for this problem).

45 COEFFICIENTS OF THE TRANSITION FUNCTION
sn-1 = (an x sn) + (bn x dn) + cn
STAGE   ITEM   an   bn   cn
4       1      1    -1   0
3       2      1    -4   0
2       3      1    -3   0
1       4      1    -2   0
(bn is the negative of the item's weight per unit.)

46 [Figure: the four stages with their transformation functions.]
s3 = s4 - 1 x d4 (Stage 4, Item 1)
s2 = s3 - 4 x d3 (Stage 3, Item 2)
s1 = s2 - 3 x d2 (Stage 2, Item 3)
s0 = s1 - 2 x d1 (Stage 1, Item 4)

47 The Return Function
The general form of the return function is:
rn = (an x sn) + (bn x dn) + cn
where an, bn, and cn are the coefficients for the return function. For this example an = cn = 0, so:
rn = bn x dn (bn = profit/unit, dn = units shipped)

48 The return values:
rn = bn x dn, where bn is the profit/unit and the decision dn is bounded by the number available:
STAGE   ITEM   bn   BOUNDS ON dn
4       1      3    0 <= d4 <= 6
3       2      9    0 <= d3 <= 1
2       3      8    0 <= d2 <= 2
1       4      5    0 <= d1 <= 2
Example: r4 = 3 x d4.

49 The return function at each stage (profit/unit x units shipped):
r4 = 3 x d4
r3 = 9 x d3
r2 = 8 x d2
r1 = 5 x d1

50 STAGE 1
Item 4: weight/unit = 2 tons, maximum units = 2, profit = 5/unit.
Table columns: s1 = tons available (all possibilities, 0 to 10); d1 = units shipped; r1 = d1 x (profit/unit); s0 = tons left for Stage 0; f0 = profit for Stage 0 = 0 (nothing further is shipped); f1 = r1 + f0 = total profit.


52 STAGE 1
Item 4: weight/unit = 2 tons, maximum units = 2, profit = 5/unit.
Optimal results (d1 and f1) for each state s1:
s1 = 0: d1 = 0, r1 = 0, f1 = 0
s1 = 1: d1 = 0, r1 = 0, f1 = 0
s1 = 2: d1 = 1, r1 = 5, f1 = 5
s1 = 3: d1 = 1, r1 = 5, f1 = 5

53 STAGE 1 (continued)
Item 4: weight/unit = 2 tons, maximum units = 2, profit = 5/unit.
s1 = 4: d1 = 2, r1 = 10, f1 = 10
Rows s1 = 5, 6, ..., 10 are the same as row s1 = 4.

54 STAGE 2
Item 3: weight/unit = 3 tons, maximum units = 2, profit = 8/unit.
f2 = r2 + f1, where f1 is the Stage 1 optimum for the tons remaining (s1 = s2 - 3 x d2):
s2 = 0 or 1: d2 = 0, f2 = 0
s2 = 2: d2 = 0, f2 = 5
s2 = 3: d2 = 1, f2 = 8
s2 = 4: d2 = 0, f2 = 10
s2 = 5: d2 = 1, f2 = 13
s2 = 6: d2 = 2, f2 = 16

55 STAGE 2 (continued)
Item 3: weight/unit = 3 tons, maximum units = 2, profit = 8/unit.
s2 = 7: d2 = 1, f2 = 18
s2 = 8: d2 = 2, f2 = 21
s2 = 9: d2 = 2, f2 = 21
s2 = 10: d2 = 2, f2 = 26

56 STAGE 3
Item 2: weight/unit = 4 tons, maximum units = 1, profit = 9/unit.
f3 = r3 + f2 (s2 = s3 - 4 x d3):
s3 = 0 or 1: d3 = 0, f3 = 0
s3 = 2: d3 = 0, f3 = 5
s3 = 3: d3 = 0, f3 = 8
s3 = 4: d3 = 0, f3 = 10 (vs. 9 + 0 = 9 with d3 = 1)
s3 = 5: d3 = 0, f3 = 13
s3 = 6: d3 = 0, f3 = 16 (vs. 9 + 5 = 14)
s3 = 7: d3 = 0, f3 = 18 (vs. 9 + 8 = 17)
s3 = 8: d3 = 0, f3 = 21 (vs. 9 + 10 = 19)
s3 = 9: d3 = 1, f3 = 9 + 13 = 22 (vs. 21)
s3 = 10: d3 = 0, f3 = 26 (vs. 9 + 16 = 25)

57 STAGE 4
Item 1: weight/unit = 1 ton, maximum units = 6, profit = 3/unit.
Only s4 = 10 needs to be evaluated; f4 = r4 + f3, where f3 comes from the Stage 3 table with s3 = 10 - d4:
d4 = 0: f4 = 0 + 26 = 26
d4 = 1: f4 = 3 + 22 = 25
d4 = 2: f4 = 6 + 21 = 27
d4 = 3: f4 = 9 + 18 = 27
d4 = 4: f4 = 12 + 16 = 28 (optimal)
d4 = 5: f4 = 15 + 13 = 28 (optimal)
d4 = 6: f4 = 18 + 10 = 28 (optimal)
Three possible decisions (d4 = 4, 5, or 6) give the same highest profit.
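The four stage tables can be generated by a single recursion. This is a hedged Python sketch (the names and layout are mine, not the text's) of f_n(s_n) = max over d_n of [r_n + f_(n-1)(s_(n-1))], using the item data from Tables M2.2 and M2.3.

```python
# Items in stage order: Stage 1 = item 4, ..., Stage 4 = item 1.
# Each tuple: (weight per unit in tons, profit per unit in $1,000s, max units).
STAGE_ITEMS = [
    (2, 5, 2),   # Stage 1: item 4
    (3, 8, 2),   # Stage 2: item 3
    (4, 9, 1),   # Stage 3: item 2
    (1, 3, 6),   # Stage 4: item 1
]
CAPACITY = 10

def total_return(stage, s):
    """f_n(s): best profit from stages 1..stage, given s tons still available."""
    if stage == 0:
        return 0  # f0 = 0: nothing further is shipped
    w, p, max_units = STAGE_ITEMS[stage - 1]
    # r_n = p * d_n and s_(n-1) = s_n - w * d_n, per the transformation tables.
    return max(
        p * d + total_return(stage - 1, s - w * d)
        for d in range(min(max_units, s // w) + 1)
    )

print(total_return(4, CAPACITY))  # 28: the optimal profit f4, in $1,000s
```

Intermediate values match the slides: `total_return(1, 10)` is 10, `total_return(2, 10)` is 26, and `total_return(3, 10)` is 26.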

58 One possible optimal solution:
Read the stage tables backward from Stage 4. Taking d4 = 6 gives r4 = 18 and output s3 = 10 - 6 = 4; then look up s3 = 4 in the Stage 3 table, and so on through Stages 2 and 1.

59 One possible optimal solution:
STAGE 4 (Item 1): s4 = 10, d4 = 6, r4 = 18, s3 = 4, f4 = 28
STAGE 3 (Item 2): s3 = 4, d3 = 0, r3 = 0, s2 = 4, f3 = 10
STAGE 2 (Item 3): s2 = 4, d2 = 0, r2 = 0, s1 = 4, f2 = 10
STAGE 1 (Item 4): s1 = 4, d1 = 2, r1 = 10, s0 = 0, f1 = 10

60 FINAL SOLUTION
STAGE (n)   ITEM   OPTIMAL DECISION (dn)   OPTIMAL RETURN (rn)
4           1      6                       18
3           2      0                       0
2           3      0                       0
1           4      2                       10
Total                                      28

61 FINAL SOLUTION (One Possible Solution)
The table on the previous slide is only one possible optimal solution; two alternative optimal solutions follow.

62 Second Possible Optimal Solution:
STAGE 4 (Item 1): s4 = 10, d4 = 5, r4 = 15, s3 = 5, f4 = 28
STAGE 3 (Item 2): s3 = 5, d3 = 0, r3 = 0, s2 = 5, f3 = 13
STAGE 2 (Item 3): s2 = 5, d2 = 1, r2 = 8, s1 = 2, f2 = 13
STAGE 1 (Item 4): s1 = 2, d1 = 1, r1 = 5, s0 = 0, f1 = 5
Optimal solution: Item 1 = 5 units, Item 2 = 0, Item 3 = 1 unit, Item 4 = 1 unit; Profit = 15 + 8 + 5 = 28.

63 Third Possible Optimal Solution:
STAGE 4 (Item 1): s4 = 10, d4 = 4, r4 = 12, s3 = 6, f4 = 28
STAGE 3 (Item 2): s3 = 6, d3 = 0, r3 = 0, s2 = 6, f3 = 16
STAGE 2 (Item 3): s2 = 6, d2 = 2, r2 = 16, s1 = 0, f2 = 16
STAGE 1 (Item 4): s1 = 0, d1 = 0, r1 = 0, s0 = 0, f1 = 0
Optimal solution: Item 1 = 4 units, Item 2 = 0, Item 3 = 2 units, Item 4 = 0; Profit = 12 + 16 = 28.

64 Solution Using Software
Mathematical model (integer programming):
Maximize f = 3x1 + 9x2 + 8x3 + 5x4
subject to:
x1 + 4x2 + 3x3 + 2x4 <= 10
x1 <= 6, x2 <= 1, x3 <= 2, x4 <= 2
xj >= 0 and integer for all j.
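Because the decision bounds are small, the integer program above can be verified by brute-force enumeration. This illustrative Python check confirms the dynamic programming result:

```python
from itertools import product

# Enumerate all feasible (x1, x2, x3, x4) within the model's bounds.
best = 0
for x1, x2, x3, x4 in product(range(7), range(2), range(3), range(3)):
    if x1 + 4*x2 + 3*x3 + 2*x4 <= 10:          # weight capacity: 10 tons
        best = max(best, 3*x1 + 9*x2 + 8*x3 + 5*x4)
print(best)  # 28
```

This matches the optimal total return of 28 ($28,000) found by the stage-by-stage procedure.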

65 Solution Using QM for Windows

66 [QM for Windows output screenshot]

67 [QM for Windows output screenshot]

68 [QM for Windows output screenshot]

69 Using Excel
Maximize f = 3x1 + 9x2 + 8x3 + 5x4
subject to: x1 + 4x2 + 3x3 + 2x4 <= 10; x1 <= 6, x2 <= 1, x3 <= 2, x4 <= 2; xj >= 0 and integer for all j.

70 Cell F5: = SUMPRODUCT(B5:E5;B2:E2)

71 [Excel worksheet screenshot]

72 Integer Variables

73 Solution

74 Lab Exercise Solve the Knapsack Example using Excel and QM for Windows.

75 GLOSSARY
Decision Criterion. A statement concerning the objective of a dynamic programming problem.
Decision Variable. The alternatives or possible decisions that exist at each stage of a dynamic programming problem.
Dynamic Programming. A quantitative technique that works backward from the end of the problem to the beginning in determining the best decision for a number of interrelated decisions.

76 Glossary (continued)
Optimal Policy. A set of decision rules, developed as a result of the decision criteria, that gives optimal decisions at any stage of a dynamic programming problem.
Stage. A logical sub-problem in a dynamic programming problem.
State Variable. A term used in dynamic programming to describe the possible beginning situations or conditions of a stage.
Transformation. An algebraic statement that shows the relationship between stages in a dynamic programming problem.


