DESIGN & ANALYSIS OF ALGORITHM 13 – DYNAMIC PROGRAMMING
Informatics Department, Parahyangan Catholic University

INTRODUCTION
We have seen several algorithm design principles, such as divide-and-conquer, brute force, and greedy. Brute force is widely applicable, but inefficient. Divide-and-conquer and greedy algorithms are fast, but applicable only to very specific problems. Dynamic Programming sits in between: it is widely applicable while still providing polynomial time complexity.

DYNAMIC PROGRAMMING (DP)
Similar to divide-and-conquer, DP solves a problem by combining the solutions of its sub-problems. The term “programming” here refers to a tabular method: DP solves each sub-problem once, then saves its solution in a table.

FINDING THE N-TH FIBONACCI NUMBER

F(n) = 0                  if n = 0
F(n) = 1                  if n = 1
F(n) = F(n-1) + F(n-2)    if n > 1

Recursive solution:

FIBONACCI(n)
  if (n==0) return 0
  else if (n==1) return 1
  else return FIBONACCI(n-1) + FIBONACCI(n-2)
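For reference, here is the recursive solution as a runnable Python sketch (the function name fib_recursive is illustrative, not from the slides):

def fib_recursive(n):
    # Direct translation of the FIBONACCI pseudocode above
    if n == 0:
        return 0
    elif n == 1:
        return 1
    else:
        return fib_recursive(n - 1) + fib_recursive(n - 2)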

FINDING THE N-TH FIBONACCI NUMBER
Recursive solution:
[Figure: recursion tree for F(5). The same sub-problems are recomputed repeatedly, e.g. F(3) twice and F(2) three times.]

FINDING THE N-TH FIBONACCI NUMBER
Memoization: maintain a table to store the solutions of sub-problems.

Solution with memoization:

// Initially: Arr[0] = 0
//            Arr[1] = 1
//            Arr[2..n] = -1
FIBONACCI(n)
  if (Arr[n] != -1) return Arr[n]
  else
    Arr[n] = FIBONACCI(n-1) + FIBONACCI(n-2)
    return Arr[n]
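A runnable Python sketch of the memoized version (using a dictionary instead of a pre-initialized array; names are illustrative):

def fib_memo(n, memo=None):
    # memo maps an already-solved sub-problem to its stored answer
    if memo is None:
        memo = {0: 0, 1: 1}
    if n not in memo:
        memo[n] = fib_memo(n - 1, memo) + fib_memo(n - 2, memo)
    return memo[n]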

FINDING THE N-TH FIBONACCI NUMBER
Solution with memoization:
[Figure: recursion tree for F(5) with memoization, together with the table of stored values. Each sub-problem is computed only once; repeated calls are answered from the table.]

FINDING THE N-TH FIBONACCI NUMBER
Bottom-up solution: use the natural ordering of sub-problems and solve them one by one, starting from the “smallest” one.

FIBONACCI(n)
  Arr[0] = 0
  Arr[1] = 1
  if (n>1)
    for i=2 to n do
      Arr[i] = Arr[i-1] + Arr[i-2]
  return Arr[n]
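The bottom-up version as a runnable Python sketch (names are illustrative):

def fib_bottom_up(n):
    if n == 0:
        return 0
    arr = [0] * (n + 1)
    arr[1] = 1
    # Solve sub-problems in order of increasing size
    for i in range(2, n + 1):
        arr[i] = arr[i - 1] + arr[i - 2]
    return arr[n]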

TIME COMPLEXITY
Recursive solution:

FIBONACCI(n)
  if (n==0) return 0
  else if (n==1) return 1
  else return FIBONACCI(n-1) + FIBONACCI(n-2)

Every instance makes ≤ 2 recursive calls and the recursion tree has height n. Therefore, the time complexity is O(2^n).

TIME COMPLEXITY
Bottom-up solution:

FIBONACCI(n)
  Arr[0] = 0
  Arr[1] = 1
  if (n>1)
    for i=2 to n do
      Arr[i] = Arr[i-1] + Arr[i-2]
  return Arr[n]

There is a single loop that iterates ≤ n times, each iteration doing a constant amount of work, so the time complexity is O(n).

ROD CUTTING PROBLEM
Serling Enterprises buys long steel rods and cuts them into shorter rods, which it then sells. Each cut is free; however, rods of different lengths sell for different prices. The management of Serling Enterprises wants to know the best way to cut up the rods. We assume that we know, for each length i, the selling price price[i] of a rod of that length.
Example: n = 4. There are (n-1) possible cut locations, hence 2^(n-1) ways of cutting.

ROD CUTTING PROBLEM
Example: n = 4

length : 1  2  3  4
price  : 1  5  8  9

Ways of cutting and their total selling prices:
2 + 2 : 5 + 5 = 10   BEST
1 + 3 : 1 + 8 = 9
1 + 1 + 2 : 1 + 1 + 5 = 7
1 + 1 + 1 + 1 : 4

ROD CUTTING PROBLEM
Consider a rod of length n, and suppose we cut off a piece of length i. We are then left with a rod of length n-i. Naturally, we want to optimize the selling price of the remaining rod.

ROD CUTTING PROBLEM
Recursive solution:

ROD-CUTTING(n)
  if (n==0) return 0
  else
    best = -∞
    for i=1 to n do
      best = MAX(best, price[i] + ROD-CUTTING(n-i))
    return best
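A runnable Python sketch of the recursive solution (assumes price is a list with price[i] = price of a rod of length i, price[0] unused; names are illustrative):

import math

def rod_cutting(n, price):
    if n == 0:
        return 0
    best = -math.inf
    # Try every possible length i for the first piece
    for i in range(1, n + 1):
        best = max(best, price[i] + rod_cutting(n - i, price))
    return best

# e.g. rod_cutting(4, [0, 1, 5, 8, 9]) returns 10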

ROD CUTTING PROBLEM
Recursive solution:
[Figure: recursion tree for RC(4); sub-problems such as RC(2), RC(1), and RC(0) are solved repeatedly.]
Same problem as recursive Fibonacci.

R OD C UTTING P ROBLEM // Initially : Arr[0] = 0 // Arr[1..n] = -∞ ROD-CUTTING (n) if Arr[n] ≥ 0 return Arr[n] else best = -∞ for i=1 to n do best = MAX(best, price[i]+ROD-CUTTING(n-i)) Arr[n] = best return best // Initially : Arr[0] = 0 // Arr[1..n] = -∞ ROD-CUTTING (n) if Arr[n] ≥ 0 return Arr[n] else best = -∞ for i=1 to n do best = MAX(best, price[i]+ROD-CUTTING(n-i)) Arr[n] = best return best Solution with memoization: Exercise Write a bottom-up solution for Rod Cutting problem !

TIME COMPLEXITY
Recursive solution:

ROD-CUTTING(n)
  if (n==0) return 0
  else
    best = -∞
    for i=1 to n do
      best = MAX(best, price[i] + ROD-CUTTING(n-i))
    return best

Every instance makes ≤ n recursive calls and the depth of the recursion tree is O(n), giving the loose bound O(n^n). In fact, counting calls exactly (each call recurses on every smaller length) shows that the running time is Θ(2^n), still exponential.
What is the time complexity of the bottom-up solution?

SHORTEST PATH IN DAG
DAG = Directed Acyclic Graph
[Figure: an example weighted DAG with nodes S, A, B, C, D, E.]

SHORTEST PATH IN DAG
[Figure: the same DAG with its nodes laid out in topological order: S, A, C, B, D, E.]
Sort the nodes using topological sort.

SHORTEST PATH IN DAG
Starting from S, suppose we want to reach node D. The only ways to reach D are through B or C, so
dist(D) = MIN(dist(B) + 1, dist(C) + 3)
where 1 and 3 are the weights of the edges entering D.

SHORTEST PATH IN DAG
A similar relation can be written for every node. As we have seen before, it is best to compute dist bottom-up, that is, from the “left-most” node to the “right-most” node in topological order.

DAG-SHORTEST-PATH()
  initialize all dist[.] to ∞
  dist[S] = 0
  for each vertex v except S, in left-to-right order do
    for each edge (u,v) do
      dist[v] = MIN(dist[v], dist[u] + weight(u,v))
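A runnable Python sketch of this algorithm (the graph representation and names are illustrative; vertices are assumed to be given already in topological order):

import math

def dag_shortest_path(order, incoming, source):
    # order: list of vertices in topological (left-to-right) order
    # incoming: maps v to a list of (u, weight) pairs, one per edge (u, v)
    dist = {v: math.inf for v in order}
    dist[source] = 0
    for v in order:
        if v == source:
            continue
        for u, w in incoming.get(v, []):
            dist[v] = min(dist[v], dist[u] + w)
    return dist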

DYNAMIC PROGRAMMING
DAG shortest path is a very general technique: many other problems can be modeled as DAG problems.
Example #1: Fibonacci (each F(i) is a node, with incoming edges from F(i-1) and F(i-2)).
Example #2: Rod Cutting (each remaining length is a node, with one edge for each choice of the next cut).

PROPERTIES OF DYNAMIC PROGRAMMING
The problem can be divided into smaller sub-problems.
Optimal substructure: an optimal solution to the problem contains within it optimal solutions to sub-problems.
Overlapping sub-problems: the space of sub-problems is “small”, in the sense that a recursive algorithm for the problem solves the same sub-problems over and over rather than always generating new sub-problems.

DYNAMIC PROGRAMMING VS. DIVIDE-AND-CONQUER
[Figure: side-by-side comparison of how divide-and-conquer splits a problem into disjoint sub-problems while dynamic programming's sub-problems overlap, and so on.]

LONGEST INCREASING SUBSEQUENCE
Given a sequence of numbers a_1, a_2, a_3, …, a_n, a subsequence is any subset of these numbers taken in order, of the form a_i1, a_i2, …, a_ik where 1 ≤ i1 < i2 < … < ik ≤ n, and an increasing subsequence is one in which the numbers get strictly larger. The task is to find an increasing subsequence of greatest length.
Example: 5, 2, 8, 6, 3, 6, 9, 7 (one longest increasing subsequence is 2, 3, 6, 9, of length 4).

LONGEST INCREASING SUBSEQUENCE
How do we model this problem as a DAG? Any number a_i can precede a_j iff a_i < a_j.
Consider the second “6” in the example sequence. An LIS that ends there must be one of:
- the subsequence consisting of “6” alone
- an LIS that ends at “5”, followed by “6”
- an LIS that ends at “2”, followed by “6”
- an LIS that ends at “3”, followed by “6”

LONGEST INCREASING SUBSEQUENCE
How do we find the LIS of the sequence 5, 2, 8, 6, 3, 6, 9, 7?
Does the optimal solution always end at “7”?

// Initially L[1..n] = 1
LONGEST-INCREASING-SUBSEQUENCE()
  for j=2 to n do
    for each i<j such that a_i < a_j do
      if (L[j] < 1 + L[i]) then L[j] = 1 + L[i]
  return maximum of L[1..n]

This algorithm only gives the length of the LIS. How do we find the actual sequence?

RECONSTRUCTING A SOLUTION
We can extend the dynamic programming approach to record not only the optimal value of each sub-problem, but also the choice that led to that value.

// Initially L[1..n] = 1
// Initially prev[1..n] = 0
LONGEST-INCREASING-SUBSEQUENCE()
  for j=2 to n do
    for each i<j such that a_i < a_j do
      if (L[j] < 1 + L[i]) then
        L[j] = 1 + L[i]
        prev[j] = i
  return maximum of L[1..n] and array prev
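Combining both slides, a runnable Python sketch that returns the actual subsequence (0-based indexing; names are illustrative):

def longest_increasing_subsequence(a):
    n = len(a)
    if n == 0:
        return []
    L = [1] * n        # L[j]: length of the longest increasing subsequence ending at j
    prev = [-1] * n    # prev[j]: index preceding j in that subsequence (-1 = none)
    for j in range(1, n):
        for i in range(j):
            if a[i] < a[j] and L[j] < 1 + L[i]:
                L[j] = 1 + L[i]
                prev[j] = i
    # Walk prev[] back from the best endpoint to rebuild the sequence
    j = max(range(n), key=lambda k: L[k])
    lis = []
    while j != -1:
        lis.append(a[j])
        j = prev[j]
    return lis[::-1]

# e.g. longest_increasing_subsequence([5, 2, 8, 6, 3, 6, 9, 7]) returns [2, 3, 6, 9]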

EXERCISE: YUCKDONALD'S
Yuckdonald’s is considering opening a series of restaurants along Quaint Valley Highway (QVH). The n possible locations lie along a straight line, and the distances of these locations from the start of QVH are, in miles and in increasing order, m_1, m_2, …, m_n. The constraints are as follows:
- At each location, Yuckdonald’s may open at most one restaurant.
- The expected profit from opening a restaurant at location i is p_i > 0.
- Any two restaurants must be at least k miles apart, where k is a positive integer.