Fundamental Structures of Computer Science

15-211 Fundamental Structures of Computer Science: Dynamic Programming Ananda Guna February 10, 2005

Basic Idea of Dynamic Programming The name (which comes from control theory) has not much to do with "dynamic" or with "programming" Solve problems by building up solutions to subproblems Basically reduces exponential-time algorithms to polynomial time

Example 1 - Fibonacci The Fibonacci sequence: f(0) = 0, f(1) = 1, f(n+2) = f(n+1) + f(n) Posed by Leonardo Pisano, aka Leonardo Fibonacci (1202): how many rabbits can be produced from a single pair in a year's time? Assume: a new pair of offspring each month; each pair becomes fertile after one month; rabbits never die

Memoization

Fibonacci – the recursive program public static long fib(int n) { if (memo[n] != -1) return memo[n]; if (n<=1) return n; long u = fib(n-1); long v = fib(n-2); memo[n] = u + v; return u + v; }

Memoization Remember previous results When computing the value of a function, return the saved result rather than calculating it again The "function" must be a true function: no side effects, and it returns the same value each time Saving the values: array or hashtable Trade-offs: time to retrieve vs. time to compute; storage space vs. time to compute
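As a sketch of the hashtable option above (the class name FibMap and its HashMap cache are my own, not from the lecture), a map trades some lookup time for not having to size an array in advance:

```java
import java.util.HashMap;
import java.util.Map;

public class FibMap {
    // Hashtable-based memo: maps n -> fib(n) for every value computed so far.
    static final Map<Integer, Long> memo = new HashMap<>();

    public static long fib(int n) {
        if (n <= 1) return n;                 // base cases f(0)=0, f(1)=1
        Long cached = memo.get(n);            // retrieving beats recomputing
        if (cached != null) return cached;
        long result = fib(n - 1) + fib(n - 2);
        memo.put(n, result);                  // save for later calls
        return result;
    }

    public static void main(String[] args) {
        System.out.println(fib(50));          // prints 12586269025
    }
}
```

Without the memo this recursion would take on the order of 2^n calls; with it, fib(50) finishes instantly.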

Fibonacci – the recursive program public static long fib(int n) { if (memo[n] != -1) return memo[n]; if (n<=1) return n; long u = fib(n-1); long v = fib(n-2); memo[n] = u + v; return u + v; } The caller initializes the table first: memo = new long[n+1]; for(int i=0; i<=n; i++) memo[i] = -1;

Memoization The name was coined by Donald Michie, Univ. of Edinburgh (1960s) Fib is the perfect example Also useful for: game searches, evaluation functions, web caching

Fibonacci – optimizing the memo table public static long fib(int n) { if (n<=1) return n; long last = 1; long prev = 0; long t = -1; for (int i = 2; i<=n; i++) { t = last + prev; prev = last; last = t; } return t; } Reduces the memo table to two entries

Fibonacci – static table static long[] tab; public static void buildTable(int n) { tab = new long[n+1]; tab[0] = 0; tab[1] = 1; for (int i = 2; i<=n; i++) tab[i] = tab[i-1] + tab[i-2]; } public static long fib(int n) { return tab[n]; } Amortize the table-building cost against multiple calls to fib.

Dynamic Programming Build up to a solution Solve all smaller subproblems first Combine solutions to get answers to larger subproblems Issues: Are there too many subproblems? Can answers be combined?

Example 2 - Knapsack Problem Imagine a homework assignment with a few different parts, A thru G:

Part:   A  B  C  D  E  F  G
Value:  7  9  5 12 14  6 12
Time:   3  4  2  6  7  3  5

You have 15 hours. Which parts should you complete in order to get maximum credit?

Problem Class: Operations Research Prioritize tasks to maximize the outcome What about a "greedy algorithm"?

A Dynamic Programming Approach Consider a general problem with N parts 1, 2, …, N, where time[i] and value[i] are the time and value of part i. Let T be the total time available. Idea: create a table A where A[i][t] denotes the max value we can get if we use items 1..i and allow t time.

Knapsack problem ctd.

Part:   A  B  C  D  E  F  G
Value:  7  9  5 12 14  6 12
Time:   3  4  2  6  7  3  5

Table A, with one row per item count i = 0..N and one column per time budget t = 0..T:

     t:  0  1  2  3  4  5  6  7  8  9 ... 15
 i = 0:
     1:
     2:
     3:
     4:
   ...

A has size (N+1) by (T+1).

The big question How do we figure out the next row from the previous one? valueArray[i][t] = MAX( valueArray[i-1][t], // don't use item i valueArray[i-1][t - time[i]] + value[i] ) // use it

Pseudo Code for(t=0; t <= T; ++t) valueArray[0][t] = 0; for(i=1; i <= N; ++i) { for(t=0; t < time[i]; ++t) valueArray[i][t] = valueArray[i-1][t]; for(t=time[i]; t <= T; ++t) valueArray[i][t] = MAX( valueArray[i-1][t], valueArray[i-1][t - time[i]] + value[i] ); }
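The pseudocode above can be fleshed out into a runnable Java sketch (the class name Knapsack and the method buildTable are mine, not from the slides), using the homework parts A–G from the earlier slide:

```java
public class Knapsack {
    // Bottom-up 0-1 knapsack: A[i][t] = best value using items 1..i within time t.
    // Items are 1-indexed to match the slides; index 0 of value/time is a dummy.
    public static int[][] buildTable(int[] value, int[] time, int T) {
        int N = value.length - 1;
        int[][] A = new int[N + 1][T + 1];     // row 0 is all zeros: no items
        for (int i = 1; i <= N; i++) {
            for (int t = 0; t <= T; t++) {
                A[i][t] = A[i - 1][t];          // option 1: don't use item i
                if (t >= time[i])               // option 2: use item i
                    A[i][t] = Math.max(A[i][t],
                                       A[i - 1][t - time[i]] + value[i]);
            }
        }
        return A;
    }

    public static void main(String[] args) {
        int[] value = {0, 7, 9, 5, 12, 14, 6, 12};  // parts A..G
        int[] time  = {0, 3, 4, 2, 6, 7, 3, 5};
        int[][] A = buildTable(value, time, 15);
        System.out.println(A[7][15]);  // best total credit in 15 hours: 34
        System.out.println(A[4][11]);  // the table entry for i=4, t=11: 24
    }
}
```

For these inputs the best achievable credit in 15 hours is 34 (parts A, B, F, G, using exactly 15 hours), and the i=4, t=11 entry is 24 (parts A, C, D).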

Questions? Complete the table for i=4, t=11. What is the running time?

Work area

Code ComputeValue(N,T) // T = time left, N = # items still to choose from { if (T <= 0 || N == 0) return 0; if (time[N] > T) return ComputeValue(N-1,T); // can't use Nth item return max(value[N] + ComputeValue(N-1, T - time[N]), ComputeValue(N-1, T)); } What is the runtime of this code?

Memoizing As we calculate values, make a memo of them ComputeValue(N,T) // T = time left, N = # items still to choose from { if (T <= 0 || N == 0) return 0; if (arr[N][T] != unknown) return arr[N][T]; // OK, we haven't computed it yet. Compute and store. if (time[N] > T) arr[N][T] = ComputeValue(N-1,T); else arr[N][T] = max(value[N] + ComputeValue(N-1, T - time[N]), ComputeValue(N-1, T)); return arr[N][T]; } What is the runtime of this code?
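A runnable Java version of the memoized code above (the class name KnapsackMemo is mine, and -1 stands in for "unknown"):

```java
import java.util.Arrays;

public class KnapsackMemo {
    static int[] value, time;
    static long[][] memo;   // memo[n][t] == -1 means "not computed yet"

    // Top-down: best value choosing among items 1..n with t time left.
    public static long computeValue(int n, int t) {
        if (t <= 0 || n == 0) return 0;
        if (memo[n][t] != -1) return memo[n][t];
        long best;
        if (time[n] > t) best = computeValue(n - 1, t);   // can't use item n
        else best = Math.max(value[n] + computeValue(n - 1, t - time[n]),
                             computeValue(n - 1, t));
        memo[n][t] = best;                                // store before returning
        return best;
    }

    public static void main(String[] args) {
        value = new int[]{0, 7, 9, 5, 12, 14, 6, 12};     // parts A..G
        time  = new int[]{0, 3, 4, 2, 6, 7, 3, 5};
        memo = new long[8][16];
        for (long[] row : memo) Arrays.fill(row, -1);
        System.out.println(computeValue(7, 15));          // prints 34
    }
}
```

Each (n, t) pair is computed at most once, so the runtime drops from exponential to O(N*T).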

Fractional Knapsack Problem aka Supermarket Shopping Spree

FKP formulation Let V(k, A) be the max shopping cart value when choosing from items 1…k with a cart of capacity A. Then V(k, A) = max( V(k-1, A), V(k-1, A - sk) + vk ) Possibility 1: the max value is the same as when choosing from items 1…k-1 only Possibility 2: the max value includes selecting item k, so the best choice from items 1…k-1 must leave room sk for it

The memoization table Let v1=10, s1=3, v2=2, s2=1, v3=9, s3=2. Rows are k = 1..n, columns are capacities A = 0..C:

     A:  0   1   2   3  ...   C
 k = 1:  0   0   0  10  ...  10
     2:  0   2   2  10  ...  12
     3:  0   2   9  11  ...

Using dynamic programming Key ingredients: Simple subproblems. Problem can be broken into subproblems, typically with solutions that are easy to store in a table/array. Subproblem optimization. Optimal solution is composed of optimal subproblem solutions. Subproblem overlap. Optimal solutions to separate subproblems can have subproblems in common.

Example 3 - Matrix multiplication Four matrices; compute ABCD. A: 50 x 10, B: 10 x 40, C: 40 x 30, D: 30 x 5 Matrix multiplication is associative, so the product can be computed in many ways: (((AB)C)D), (A((BC)D)), ((AB)(CD)), (A(B(CD))), etc. Cost of multiplication: for M1: p x q and M2: q x r, the naïve algorithm requires pqr scalar multiplications

Matrix multiplication Four matrices; compute ABCD. A: 50 x 10, B: 10 x 40, C: 40 x 30, D: 30 x 5 Example costs: (((AB)C)D): 50x10x40 + 50x40x30 + 50x30x5 = 87,500 multiplications (A(B(CD))): 40x30x5 + 10x40x5 + 50x10x5 = 10,500 multiplications

Matrix multiplication How to find the best association? Can this be solved using a greedy approach?

Dynamic programming How best to associate A1 x A2 x ... x An? Determine the best costs of all possible segments A_left, ..., A_right How many possible orderings are there? Splitting at position i gives (A1 x ... x Ai)(Ai+1 x ... x An), so T(N) = sum over i=1..N-1 of T(i)·T(N-i), with T(1) = T(2) = 1: T(3) = 2, T(4) = 5, ... These are the Catalan numbers – exponential growth. But there are only O(N²) distinct segments, which is what dynamic programming exploits.
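Computing the best cost of every segment can be sketched in Java as follows (a standard O(N³) formulation of matrix-chain multiplication; the class name MatrixChain and method minCost are mine):

```java
public class MatrixChain {
    // dims encodes the chain: matrix i is dims[i-1] x dims[i], for i = 1..n.
    // cost[l][r] = min scalar multiplications to compute A_l x ... x A_r.
    public static long minCost(int[] dims) {
        int n = dims.length - 1;
        long[][] cost = new long[n + 1][n + 1];   // cost[l][l] = 0 by default
        for (int len = 2; len <= n; len++) {      // solve shorter segments first
            for (int l = 1; l + len - 1 <= n; l++) {
                int r = l + len - 1;
                cost[l][r] = Long.MAX_VALUE;
                for (int k = l; k < r; k++) {     // split: (A_l..A_k)(A_k+1..A_r)
                    long c = cost[l][k] + cost[k + 1][r]
                           + (long) dims[l - 1] * dims[k] * dims[r];
                    if (c < cost[l][r]) cost[l][r] = c;
                }
            }
        }
        return cost[1][n];
    }

    public static void main(String[] args) {
        // A: 50x10, B: 10x40, C: 40x30, D: 30x5 from the example
        System.out.println(minCost(new int[]{50, 10, 40, 30, 5}));  // prints 10500
    }
}
```

For the four matrices above this finds 10,500 multiplications, matching the (A(B(CD))) ordering, and it examines each of the O(N²) segments once instead of all Catalan-many parenthesizations.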

Using dynamic programming 4 ingredients needed: An optimization problem. Simple subproblems: the problem can be broken into subproblems whose solutions are easy to store in a table or array. Subproblem optimization: the optimal solution is composed of optimal subproblem solutions. Subproblem overlap: optimal solutions to separate subproblems can have subproblems in common.

Dynamic programming Are there too many subproblems? Can answers be combined?

Next Week More on Dynamic Programming