DYNAMIC PROGRAMMING
A short list of categories
Algorithm types we will consider include:
- Simple recursive algorithms
- Backtracking algorithms
- Divide and conquer algorithms
- Dynamic programming algorithms
- Greedy algorithms
- Branch and bound algorithms
- Brute force algorithms
- Randomized algorithms
Simple recursive algorithms I
A simple recursive algorithm:
- Solves the base cases directly
- Recurs with a simpler subproblem
- Does some extra work to convert the solution to the simpler subproblem into a solution to the given problem

I call these “simple” because several of the other algorithm types are inherently recursive.
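As a minimal sketch of this pattern (the function and example values are ours, not from the slides):

```python
def sum_list(items):
    """Sum a list of numbers with a simple recursive algorithm."""
    # Base case: solve directly.
    if not items:
        return 0
    # Recur with a simpler subproblem (the tail of the list),
    # then do extra work (add the head) to build the full answer.
    return items[0] + sum_list(items[1:])

print(sum_list([3, 1, 4, 1, 5]))  # 14
```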
Divide and Conquer

A divide and conquer algorithm consists of two parts:
- Divide the problem into smaller subproblems of the same type, and solve these subproblems recursively
- Combine the solutions to the subproblems into a solution to the original problem

Traditionally, an algorithm is only called “divide and conquer” if it contains at least two recursive calls.
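A classic instance with two recursive calls is merge sort; a minimal Python sketch (ours, for illustration):

```python
def merge_sort(a):
    """Divide and conquer: two recursive calls, then a combine step."""
    if len(a) <= 1:               # base case
        return a
    mid = len(a) // 2
    left = merge_sort(a[:mid])    # first recursive call
    right = merge_sort(a[mid:])   # second recursive call
    # Combine: merge the two sorted halves.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([5, 2, 8, 1, 9, 3]))  # [1, 2, 3, 5, 8, 9]
```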
Greedy algorithms

An optimization problem is one in which you want to find, not just a solution, but the best solution. A “greedy algorithm” sometimes works well for optimization problems.

A greedy algorithm works in phases. At each phase:
- You take the best you can get right now, without regard for future consequences
- You hope that by choosing a local optimum at each step, you will end up at a global optimum
Example: Counting money
Suppose you want to count out a certain amount of money, using the fewest possible bills and coins. A greedy algorithm to do this would be: at each step, take the largest possible bill or coin that does not overshoot.

Example: To make $6.39, you can choose:
- a $5 bill
- a $1 bill, to make $6
- a 25¢ coin, to make $6.25
- a 10¢ coin, to make $6.35
- four 1¢ coins, to make $6.39

For US money, the greedy algorithm always gives the optimum solution.
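A minimal Python sketch of this greedy strategy (the function name is ours; the denominations are just the ones used in the example above):

```python
# Denominations in US cents, largest first: $5 bill, $1 bill, then coins.
DENOMINATIONS = [500, 100, 25, 10, 5, 1]

def make_change(amount_cents):
    """Greedy: at each step take the largest denomination that fits."""
    result = []
    for d in DENOMINATIONS:
        count, amount_cents = divmod(amount_cents, d)
        result.extend([d] * count)
    return result

print(make_change(639))  # [500, 100, 25, 10, 1, 1, 1, 1] -> $6.39
```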
Dynamic programming algorithms
A dynamic programming algorithm remembers past results and uses them to find new results. Dynamic programming is generally used for optimization problems:
- Multiple solutions exist; we need to find the “best” one
- Requires “optimal substructure” and “overlapping subproblems”
  - Optimal substructure: an optimal solution contains optimal solutions to subproblems
  - Overlapping subproblems: solutions to subproblems can be stored and reused in a bottom-up fashion

This differs from divide and conquer, where subproblems generally need not overlap.
Fibonacci sequence

Fibonacci sequence: 0, 1, 1, 2, 3, 5, 8, 13, 21, …

Fi = i if i ≤ 1
Fi = Fi-1 + Fi-2 if i ≥ 2

If solved by a recursive program, much replicated computation is done. It should instead be solved by a simple loop.
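A minimal Python sketch contrasting the two approaches (the function names are ours):

```python
def fib_recursive(i):
    """Direct recursion: exponential time, much replicated computation."""
    if i <= 1:
        return i
    return fib_recursive(i - 1) + fib_recursive(i - 2)

def fib_loop(i):
    """Simple loop: each value computed once, in order -- linear time."""
    a, b = 0, 1            # F0, F1
    for _ in range(i):
        a, b = b, a + b    # slide the window forward
    return a

print([fib_loop(i) for i in range(9)])  # [0, 1, 1, 2, 3, 5, 8, 13, 21]
```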
The shortest path

To find a shortest path in a multi-stage graph, apply the greedy method: the shortest path from S to T has total length 8.
The shortest path in multistage graphs

For example, the greedy method cannot be applied to this case: (S, A, D, T) = 23. The real shortest path is (S, C, F, T) = 9.
Introduction

Dynamic programming is an algorithm design technique for optimization problems: often minimizing or maximizing. It solves problems by combining the solutions to subproblems that contain common sub-subproblems.
DP can be applied when the solution of a problem includes solutions to subproblems
- We need to find a recursive formula for the solution
- We can recursively solve subproblems, starting from the trivial case, and save their solutions in memory
- In the end we’ll get the solution of the whole problem
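A minimal sketch of the “save their solutions in memory” idea (top-down memoization), using Fibonacci again; the names are ours:

```python
def fib_memo(i, memo=None):
    """Top-down DP: recurse, but store each subproblem's answer."""
    if memo is None:
        memo = {}
    if i <= 1:                 # trivial case, solved directly
        return i
    if i not in memo:          # solve each subproblem only once
        memo[i] = fib_memo(i - 1, memo) + fib_memo(i - 2, memo)
    return memo[i]

print(fib_memo(40))  # 102334155, computed in linear time
```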
Steps to Designing a Dynamic Programming Algorithm
1. Characterize optimal substructure
2. Recursively define the value of an optimal solution
3. Compute the value bottom-up
4. (If needed) Construct an optimal solution
Difference Between Dynamic Programming and Divide & Conquer:
Divide-and-conquer algorithms split a problem into separate subproblems, solve the subproblems, and combine the results for a solution to the original problem.
- Examples: Quicksort, Mergesort, Binary search
- Divide-and-conquer algorithms can be thought of as top-down algorithms

Dynamic programming splits a problem into subproblems, some of which are common, solves the subproblems, and combines the results for a solution to the original problem.
- Examples: Matrix Chain Multiplication, Longest Common Subsequence
- Dynamic programming can be thought of as bottom-up
Difference Between Dynamic Programming and Divide & Conquer (cont.):
Divide & Conquer:
- Subproblems are independent
- Solutions are simple compared to dynamic programming
- Can be used for any kind of problem
- Only one decision sequence is ever generated

Dynamic Programming:
- Subproblems are not independent
- Solutions can often be quite complex and tricky
- Generally used for optimization problems
- Many decision sequences may be generated
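To make the dynamic programming side concrete, a minimal bottom-up sketch of the Longest Common Subsequence length computation named above (the code and the test strings are our illustration):

```python
def lcs_length(x, y):
    """Bottom-up DP for Longest Common Subsequence length.
    L[i][j] = LCS length of x[:i] and y[:j]."""
    m, n = len(x), len(y)
    L = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if x[i - 1] == y[j - 1]:
                L[i][j] = L[i - 1][j - 1] + 1            # extend a common subsequence
            else:
                L[i][j] = max(L[i - 1][j], L[i][j - 1])  # reuse overlapping subproblems
    return L[m][n]

print(lcs_length("ABCBDAB", "BDCABA"))  # 4 (e.g. "BCBA")
```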
Principle of optimality
Principle of optimality: Suppose that in solving a problem, we have to make a sequence of decisions D1, D2, …, Dn. If this sequence is optimal, then the last k decisions, 1 ≤ k ≤ n, must also be optimal.

Example: the shortest path problem. If i, i1, i2, …, j is a shortest path from i to j, then i1, i2, …, j must be a shortest path from i1 to j.

In summary, if a problem can be described by a multistage graph, then it can be solved by dynamic programming.
Dynamic programming: forward approach and backward approach

Note that if the recurrence relations are formulated using the forward approach, then the relations are solved backwards, i.e., beginning with the last decision. On the other hand, if the relations are formulated using the backward approach, they are solved forwards.

To solve a problem by using dynamic programming:
- Find out the recurrence relations
- Represent the problem by a multistage graph
- Multistage graph
- Computing a binomial coefficient
- Matrix-chain multiplication
- Longest Common Subsequence
- 0/1 Knapsack
- The Traveling Salesperson Problem
- Warshall’s algorithm for transitive closure
- Floyd’s algorithm for all-pairs shortest paths
FORWARD METHOD

cost(4,I) = c(I,L) = 7
cost(4,J) = c(J,L) = 8
cost(4,K) = c(K,L) = 11
cost(3,F) = min{ c(F,I) + cost(4,I), c(F,J) + cost(4,J) } = 17
cost(3,G) = min{ c(G,I) + cost(4,I), c(G,J) + cost(4,J) } = 12
cost(3,H) = min{ c(H,J) + cost(4,J), c(H,K) + cost(4,K) } = 18
cost(2,B) = min{ c(B,F) + cost(3,F), c(B,G) + cost(3,G), c(B,H) + cost(3,H) } = 20
cost(2,C) = min{ c(C,F) + cost(3,F), c(C,G) + cost(3,G) } = 15
cost(2,D) = min{ c(D,H) + cost(3,H) } = 27
cost(2,E) = min{ c(E,G) + cost(3,G), c(E,H) + cost(3,H) } = 18
cost(1,A) = min{ c(A,B) + cost(2,B), c(A,C) + cost(2,C), c(A,D) + cost(2,D), c(A,E) + cost(2,E) } = 21

The shortest route is A-C-G-I-L, with length 21.
BACKWARD METHOD

bcost(2,B) = c(A,B) = 7
bcost(2,C) = c(A,C) = 6
bcost(2,D) = c(A,D) = 5
bcost(2,E) = c(A,E) = 9
bcost(3,F) = min{ c(B,F) + bcost(2,B), c(C,F) + bcost(2,C) } = 11
bcost(3,G) = min{ c(B,G) + bcost(2,B), c(C,G) + bcost(2,C), c(E,G) + bcost(2,E) } = 9
bcost(3,H) = min{ c(B,H) + bcost(2,B), c(D,H) + bcost(2,D), c(E,H) + bcost(2,E) } = 14
bcost(4,I) = min{ c(F,I) + bcost(3,F), c(G,I) + bcost(3,G) } = 14
bcost(4,J) = min{ c(F,J) + bcost(3,F), c(G,J) + bcost(3,G), c(H,J) + bcost(3,H) } = 16
bcost(4,K) = min{ c(H,K) + bcost(3,H) } = 22
bcost(5,L) = min{ c(I,L) + bcost(4,I), c(J,L) + bcost(4,J), c(K,L) + bcost(4,K) } = 21

The shortest route is A-C-G-I-L, with length 21.
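For illustration only, a minimal Python sketch of the forward method. The edge costs of the A..L graph above were given in a figure that is not reproduced here, so the sketch uses a small hypothetical graph instead; the stage index in cost(stage, node) is dropped because the node alone identifies its stage:

```python
def forward_cost(graph, target):
    """Forward method: cost(v) = min over edges (v, w) of c(v, w) + cost(w),
    with the relations solved backwards from the target node."""
    cost = {target: 0}

    def solve(v):
        # Memoize: each node's cost is computed only once.
        if v not in cost:
            cost[v] = min(c + solve(w) for w, c in graph[v])
        return cost[v]

    return solve

# Hypothetical 3-stage graph (not the slide's figure).
graph = {
    "S": [("A", 1), ("B", 2)],
    "A": [("T", 6)],
    "B": [("T", 4)],
    "T": [],
}
cost = forward_cost(graph, "T")
print(cost("S"))  # min(1 + 6, 2 + 4) = 6
```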