Lecture 15: Programming & Data Structures, Dynamic Programming. GRIFFITH COLLEGE DUBLIN.



Slide 2: Dynamic Programming

An essential characteristic of the divide-and-conquer algorithms we have seen is that they partition the problem into independent subproblems; solving the subproblems solves the original problem. This paradigm depends entirely on the subproblems being independent. But what if they are not? When the subproblems are not independent the situation is complicated, primarily because a direct recursive implementation of even the simplest algorithm of this type can require unthinkable amounts of time.

Slide 3: Fibonacci Numbers

We have already talked about Fibonacci numbers. These numbers are defined as:

    Fib(0) = 0
    Fib(1) = 1
    Fib(n) = Fib(n-1) + Fib(n-2)

Fibonacci numbers have many useful properties and appear often in nature. We can implement them with a recursive function quite easily.

Slide 4: Recursive Fibonacci

    Fib(x)
        if x < 1 then return 0
        if x = 1 then return 1
        return Fib(x-1) + Fib(x-2)
    endalg

The problem is that this implementation runs in exponential time. It is spectacularly inefficient! For example, if the computer takes about a second to compute Fib(N), then we know it will take more than a minute to compute Fib(N+9) and more than an hour to compute Fib(N+18).
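The pseudocode above translates directly into Python (a sketch, not part of the original slides); the exponential blow-up is easy to observe by timing calls with increasing arguments:

```python
def fib(x):
    # Direct translation of the slide's recursive pseudocode.
    # Exponential time: fib(x-1) and fib(x-2) recompute the same
    # subproblems over and over, because nothing is remembered.
    if x < 1:
        return 0
    if x == 1:
        return 1
    return fib(x - 1) + fib(x - 2)

print(fib(8))  # → 21
```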

Slide 5: Iterative Fibonacci

If we implement the function iteratively, storing each value in an array, we can compute it in linear time:

    Fib(F, x)
        F[0] = 0
        F[1] = 1
        for i = 2 to x
            F[i] = F[i-1] + F[i-2]
        endfor
    endalg

These numbers grow very large very quickly, so an array holding values up to Fib(46) is sufficient: Fib(46) is the largest Fibonacci number that fits in a 32-bit integer. In fact we can dispense with the array entirely and just keep track of the two previous numbers.
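Both variants described on the slide can be sketched in Python (an illustrative translation, not the lecture's own code): the table-filling version and the two-variable version that discards the array.

```python
def fib_iter(x):
    # Bottom-up: fill a table from the smallest value upward,
    # using previously computed entries at each step.
    f = [0] * (x + 2)          # +2 so f[1] exists even when x = 0
    f[1] = 1
    for i in range(2, x + 1):
        f[i] = f[i - 1] + f[i - 2]
    return f[x]

def fib_pair(x):
    # The table is unnecessary: only the two previous numbers
    # are ever consulted, so two variables suffice.
    a, b = 0, 1
    for _ in range(x):
        a, b = b, a + b
    return a

print(fib_iter(46))  # → 1836311903, the largest Fibonacci number in a 32-bit int
```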

Slide 6: Analysis

The recursive implementation takes about a minute to calculate Fib(40), whereas the iterative solution is almost instantaneous. This technique gives us an immediate way to get numerical solutions for any recurrence relation (a recurrence is a recursive function with integer values). Our analysis of the Fibonacci series suggests that we can evaluate any such function by computing all the values in order, starting at the smallest and using previously computed values at each step. We call this technique bottom-up dynamic programming.

Slide 7: Bottom-Up

Bottom-up dynamic programming is an algorithm-design technique that has been used successfully for a wide range of problems. The problem with the recursive implementation is that each recursive call ignores the values calculated in earlier calls, so an exponential amount of duplicated work occurs. The first nine Fibonacci numbers are 0, 1, 1, 2, 3, 5, 8, 13, 21. If we examine how F(8) = 21 is calculated by the recursive implementation, we can get a feel for what is happening.

Slide 8: Calculating F(8) Recursively

Fib(4), whose value is 3, is calculated 5 times in this implementation. If we could remember each value once calculated, we could remove these duplicate calculations.

Slide 9: Top-Down Approach

This is what top-down dynamic programming does: make the algorithm save each value it calculates, and at each call check whether the value has already been calculated.

    static knownF[MAX] = unknown

    Fib(x)
        if knownF[x] <> unknown then
            return knownF[x]
        endif
        if x < 1 then t = 0
        if x = 1 then t = 1
        if x > 1 then t = Fib(x-1) + Fib(x-2)
        knownF[x] = t
        return knownF[x]
    endalg
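A Python sketch of this memoized version (the table size and sentinel value are illustrative choices, not from the slides):

```python
UNKNOWN = -1                     # sentinel: "not yet computed"
MAX = 47                         # indices 0..46 cover 32-bit results
known_f = [UNKNOWN] * MAX

def fib_memo(x):
    # Top-down DP: consult the table first, recurse only on a miss,
    # and record the result before returning.
    if known_f[x] != UNKNOWN:
        return known_f[x]
    if x < 1:
        t = 0
    elif x == 1:
        t = 1
    else:
        t = fib_memo(x - 1) + fib_memo(x - 2)
    known_f[x] = t
    return t

print(fib_memo(40))  # → 102334155, now computed in linear time
```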

Slide 10: Storing Intermediate Values

Implemented in this top-down dynamic way, the algorithm now runs in linear time. By design, dynamic programming eliminates all recomputation in any recursive program.

Slide 11: Knapsack Problem

Consider a warehouse of capacity M and a load of N types of items of varying size and value, which we want to allocate to the warehouse. The problem is to find the combination of items that maximises the total value of all the items in the warehouse. There are many applications for which solutions to the knapsack problem are important: for example, a transport company might wish to know the best way to load a ship, truck, or cargo plane. In some of these cases other factors complicate the problem, and in some cases the problems become infeasible.

Slide 12: Knapsack Problem

In a recursive solution to the problem, each time we choose an item we assume that we can (recursively) find an optimal way to pack the rest of the warehouse:

    Knap(cap)
        max = 0
        for i = 1 to N
            space = cap - items[i].size
            if space >= 0 then
                t = Knap(space) + items[i].val
                if t > max then max = t endif
            endif
        endfor
        return max
    endalg
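A Python sketch of this recursive solution, using a hypothetical item set chosen only for illustration. Note that, as the pseudocode implies, an item may be chosen more than once (the unbounded variant of the problem):

```python
# Hypothetical item set for illustration: (size, value) pairs.
items = [(3, 4), (4, 5), (7, 10), (8, 11), (9, 13)]

def knap(cap):
    # For every item that still fits, pack it and (recursively) pack
    # the remaining space optimally; keep the best total value found.
    # Exponential time: subresults are recomputed massively.
    best = 0
    for size, val in items:
        space = cap - size
        if space >= 0:
            t = knap(space) + val
            if t > best:
                best = t
    return best

print(knap(17))  # → 24
```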

Slide 13: Knapsack Algorithm

This algorithm works by calculating, for each item (recursively), the maximum value we could achieve by including that item, and then taking the maximum of all those values. However, this algorithm, like the simple recursive Fibonacci solution, runs in exponential time and so is not feasible. Once more the reason is massive recomputation, and once more a dynamic programming approach can be useful. To use top-down dynamic programming we need to remember the intermediate solutions already calculated. (Note: N is the number of items.)

Slide 14: Dynamic Programming Algorithm

    static maxKnown[MAX] = unknown

    Knap(M)
        if maxKnown[M] <> unknown then
            return maxKnown[M]
        endif
        max = 0
        for i = 1 to N
            space = M - items[i].size
            if space >= 0 then
                t = Knap(space) + items[i].val
                if t > max then max = t, maxi = i endif
            endif
        endfor
        maxKnown[M] = max
        itemKnown[M] = items[maxi]
        return max
    endalg
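The memoized version can be sketched in Python as follows (the item set and capacity bound are hypothetical, carried over from the earlier illustration; `item_known` mirrors the slide's itemKnown table for recovering which item achieved each maximum):

```python
UNKNOWN = -1                     # sentinel: "not yet computed"

def make_knap(items, max_cap):
    # maxKnown caches the best value achievable for each capacity;
    # itemKnown records which item achieved it.
    max_known = [UNKNOWN] * (max_cap + 1)
    item_known = [None] * (max_cap + 1)

    def knap(cap):
        if max_known[cap] != UNKNOWN:
            return max_known[cap]
        best, best_item = 0, None
        for size, val in items:
            space = cap - size
            if space >= 0:
                t = knap(space) + val
                if t > best:
                    best, best_item = t, (size, val)
        max_known[cap] = best
        item_known[cap] = best_item
        return best

    return knap, item_known

knap, item_known = make_knap([(3, 4), (4, 5), (7, 10), (8, 11), (9, 13)], 17)
print(knap(17))  # → 24, now in time proportional to N*M
```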

Slide 15: Issues

For the knapsack problem the running time is proportional to NM. Bottom-up dynamic programming could also be used for this problem. In top-down dynamic programming we save known values; in bottom-up dynamic programming we precompute the values. Dynamic programming is an effective algorithmic technique, but it becomes ineffective if the number of possible function values is too high to save. Also, if the values involved are floating-point, then we cannot save the values by indexing into an array. This is a fundamental problem, and no good solution is known for problems of this type.

Slide 16: Summary

The divide-and-conquer paradigm depends on the subproblems being independent. If they are dependent, massive recomputation can occur. Dynamic programming is a technique that remembers intermediate calculations. This can be done in a bottom-up or top-down manner: in top-down dynamic programming we save known values; in bottom-up dynamic programming we precompute the values.