
Slide 1: Programming & Data Structures, Lecture 15: Dynamic Programming. GRIFFITH COLLEGE DUBLIN

Slide 2: Dynamic Programming
An essential characteristic of the divide-and-conquer algorithms we have seen is that they partition the problem into independent subproblems; solving the subproblems solves the original problem. This paradigm depends entirely on the subproblems being independent, but what if the subproblems are not independent? When the subproblems are not independent the situation is complicated, primarily because a direct recursive implementation of even the simplest algorithms of this type can require unthinkable amounts of time.

Slide 3: Fibonacci Numbers
We have already talked about Fibonacci numbers. These numbers are defined as:
Fib(0) = 0
Fib(1) = 1
Fib(n) = Fib(n-1) + Fib(n-2)
Fibonacci numbers have many useful properties and appear often in nature. We can implement them with a recursive function quite easily.

Slide 4: Recursive Fibonacci
Fib(x)
  if x < 1 then return 0
  if x = 1 then return 1
  return Fib(x-1) + Fib(x-2)
endalg
The problem is that this implementation runs in exponential time. It is spectacularly inefficient! For example, if the computer takes about a second to compute Fib(N), then we know it will take more than a minute to compute Fib(N+9) and more than an hour to compute Fib(N+18).
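As a rough sketch, the pseudocode above translates into C along these lines (the function name fib is an assumption, not something from the slides):

/* Direct translation of the recursive definition. Runs in exponential time
   because fib(x-1) and fib(x-2) repeat almost all of each other's work. */
long fib(int x)
{
    if (x < 1) return 0;    /* Fib(0) = 0 (also guards against negative x) */
    if (x == 1) return 1;   /* Fib(1) = 1 */
    return fib(x - 1) + fib(x - 2);
}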

Slide 5: Iterative Fibonacci
If we implement the function iteratively, storing each value in an array, we can compute it in linear time:
Fib(F, x)
  F[0] = 0
  F[1] = 1
  for i = 2 to x
    F[i] = F[i-1] + F[i-2]
  endfor
endalg
These numbers grow very large very quickly, so an array of 47 entries (indices 0 to 46) is enough to hold every value that fits in a signed 32-bit integer. In fact we can dispense with the array if we want and just keep track of the two previous numbers.
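A hedged C version of the iterative algorithm, together with the two-variable variant mentioned at the end of the slide (the names and the 32-bit assumption are illustrative):

/* Bottom-up Fibonacci, assuming 0 <= x <= 46 so every value fits in a signed 32-bit int. */
int fib_iter(int x)
{
    int f[47];
    f[0] = 0;
    f[1] = 1;
    for (int i = 2; i <= x; i++)
        f[i] = f[i - 1] + f[i - 2];
    return f[x];
}

/* The array is not essential: only the two previous values need to be kept. */
int fib_pair(int x)
{
    int prev = 0, curr = 1;            /* Fib(0) and Fib(1) */
    for (int i = 2; i <= x; i++) {
        int next = prev + curr;
        prev = curr;
        curr = next;
    }
    return x < 1 ? 0 : curr;
}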

Slide 6: Analysis
The recursive implementation takes about a minute to calculate Fib(40), whereas the iterative solution is almost instantaneous. This technique gives us an immediate way to get numerical solutions for any recurrence relation. A recurrence is a recursive function with integer values. Our analysis of the Fibonacci series suggests that we can evaluate any such function by computing all the values in order, starting at the smallest and using previously computed values at each step. We call this technique bottom-up dynamic programming.

Slide 7: Bottom-Up
Bottom-up dynamic programming is an algorithm-design technique that has been used successfully for a wide range of problems. The problem with the recursive implementation is that each recursive call ignores the values calculated in earlier calls, and so exponential duplication occurs. The first nine Fibonacci numbers are 0, 1, 1, 2, 3, 5, 8, 13, 21. If we examine how F(8) = 21 is calculated by the recursive implementation, we can get a feel for what is happening.

Slide 8: Calculating F(8) Recursively
Fib(4), whose value is 3, is calculated 5 times in this implementation. If we could remember each value once it has been calculated, we could remove these duplicate calculations.
[Figure: the recursion tree for Fib(8), in which the same subproblems appear over and over again]

Slide 9: Top-Down Approach
This is what is done with top-down dynamic programming: get the algorithm to save each value it calculates, and at each call check whether the value has already been calculated.
static knownF[MAX] = unknown
Fib(x)
  if knownF[x] <> unknown then
    return knownF[x]
  endif
  if x < 1 then t = 0
  if x = 1 then t = 1
  if x > 1 then t = Fib(x-1) + Fib(x-2)
  knownF[x] = t
  return knownF[x]
endalg
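A C sketch of the same memoisation idea, assuming 0 <= x <= 46 and using -1 as the "unknown" marker (knownF mirrors the pseudocode; the other names are assumptions):

#define MAXN 47
#define UNKNOWN (-1)

static int knownF[MAXN];          /* memo table: knownF[x] caches Fib(x) */

void init_known(void)             /* call once before the first fib_memo call */
{
    for (int i = 0; i < MAXN; i++)
        knownF[i] = UNKNOWN;
}

int fib_memo(int x)
{
    if (knownF[x] != UNKNOWN)     /* already computed: just look it up */
        return knownF[x];
    int t;
    if (x < 1)       t = 0;
    else if (x == 1) t = 1;
    else             t = fib_memo(x - 1) + fib_memo(x - 2);
    knownF[x] = t;                /* remember the value for later calls */
    return t;
}

Each value is now computed at most once, so the running time drops from exponential to linear.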

Slide 10: Storing Intermediate Values
Implemented in this top-down dynamic way, the algorithm now runs in linear time. By design, dynamic programming eliminates all recomputation in any recursive program.
[Figure: the call tree for Fib(8) once intermediate values are stored; each value is computed once and then simply looked up]

Slide 11: Knapsack Problem
Consider a warehouse of capacity M and a load of N types of items of varying size and value, which we want to allocate to the warehouse. The problem is to find the combination of items that should be chosen to maximise the total value of all the items in the warehouse. There are many applications in which solutions to the knapsack problem are important; for example, a transport company might wish to know the best way to load a ship, truck or cargo plane. In some of these cases other factors complicate the problem, and in some cases the problems become infeasible.

Slide 12: Knapsack Problem
In a recursive solution to the problem, each time we choose an item we assume that we can (recursively) find an optimal way to pack the rest of the warehouse.
Knap(N, cap)
  max = 0
  for i = 1 to N
    space = cap - items[i].size
    if space >= 0 then
      t = Knap(N, space) + items[i].val
      if t > max then max = t
    endif
  endfor
  return max
endalg
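A self-contained C sketch of this recursive knapsack; the Item type, the item data and NITEMS are illustrative assumptions, and, as in the pseudocode, each item type may be used any number of times:

typedef struct { int size; int val; } Item;

#define NITEMS 4                                   /* N in the slides */
static Item items[NITEMS] = { {3, 4}, {4, 5}, {7, 10}, {8, 11} };   /* example data */

/* Recursive (exponential-time) knapsack: best total value achievable with capacity cap. */
int knap(int cap)
{
    int max = 0;
    for (int i = 0; i < NITEMS; i++) {
        int space = cap - items[i].size;
        if (space >= 0) {
            int t = knap(space) + items[i].val;    /* pack item i, fill the rest optimally */
            if (t > max)
                max = t;
        }
    }
    return max;    /* the same capacities are solved over and over again */
}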

Slide 13: Knapsack Algorithm
This algorithm works by calculating, for each item (recursively), the maximum value we could achieve by including that item, and then taking the maximum of all those values. However, this algorithm, like the simple recursive Fibonacci solution, runs in exponential time and so is not feasible. Once more the reason is massive recomputation, and once more a dynamic programming approach can be useful. To use top-down dynamic programming we need to remember the intermediate solutions already calculated. (Note: N = number of items.)

Slide 14: Dynamic Programming Algorithm
static maxKnown[MAX] = unknown
Knap(M, N)
  if maxKnown[M] <> unknown then
    return maxKnown[M]
  endif
  max = 0
  for i = 1 to N
    space = M - items[i].size
    if space >= 0 then
      t = Knap(space, N) + items[i].val
      if t > max then max = t, maxi = i endif
    endif
  endfor
  maxKnown[M] = max
  itemKnown[M] = items[maxi]
  return max
endalg
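A memoised C sketch of this slide, again with illustrative item data; MAXCAP, the -1 "unknown" marker and the initialisation convention are assumptions:

typedef struct { int size; int val; } Item;

#define NITEMS 4
static Item items[NITEMS] = { {3, 4}, {4, 5}, {7, 10}, {8, 11} };   /* example data */

#define MAXCAP 1000          /* assumed upper bound on the warehouse capacity M */
#define UNKNOWN (-1)

static int maxKnown[MAXCAP + 1];    /* best value for each capacity; initialise to UNKNOWN */
static int itemKnown[MAXCAP + 1];   /* index of the item chosen at that capacity */

/* Top-down dynamic-programming knapsack: each capacity is solved at most once. */
int knap_dp(int M)
{
    if (maxKnown[M] != UNKNOWN)
        return maxKnown[M];
    int max = 0, maxi = -1;
    for (int i = 0; i < NITEMS; i++) {
        int space = M - items[i].size;
        if (space >= 0) {
            int t = knap_dp(space) + items[i].val;
            if (t > max) { max = t; maxi = i; }
        }
    }
    maxKnown[M] = max;
    itemKnown[M] = maxi;     /* lets us trace back which item was packed at each capacity */
    return max;
}

Each of the M+1 capacities is filled in at most once, and the loop over the N items costs N work per capacity, which is where the NM running-time bound on the next slide comes from.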

Slide 15: Issues
For the knapsack problem the running time is proportional to NM. Bottom-up dynamic programming could also be used for this problem. In top-down dynamic programming we save known values; in bottom-up dynamic programming we precompute the values. Dynamic programming is an effective algorithmic technique, but it becomes ineffective if the number of possible function values is too large to save. Also, if the function arguments are floating-point values, then we cannot save the values by indexing into an array. This is a fundamental problem, and no good solution is known for problems of this type.

Slide 16: Summary
The divide-and-conquer paradigm depends on the subproblems being independent. If they are not independent, then massive recomputation can occur. Dynamic programming is a technique that remembers intermediate calculations. This can be done in a bottom-up or top-down manner: in top-down dynamic programming we save known values; in bottom-up dynamic programming we precompute the values.

