Design & Analysis of Algorithm: Dynamic Programming


Informatics Department, Parahyangan Catholic University

Introduction

We have seen several algorithm design principles, such as divide-and-conquer, brute force, and greedy. Brute force is widely applicable but inefficient. Divide-and-conquer and greedy algorithms are fast, but apply only to very specific problems. Dynamic Programming sits in between: it is widely applicable while still providing polynomial time complexity.

Dynamic Programming (DP)

Similar to divide-and-conquer, DP solves a problem by combining the solutions of its sub-problems. The term "programming" here refers to a tabular method: DP solves each sub-problem only once, then saves its solution in a table.

Finding the n-th Fibonacci number

    F(n) = 0                   if n = 0
    F(n) = 1                   if n = 1
    F(n) = F(n-1) + F(n-2)     if n > 1

Recursive solution:

    FIBONACCI(n)
        if (n == 0) return 0
        else if (n == 1) return 1
        else return FIBONACCI(n-1) + FIBONACCI(n-2)
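A direct transcription of this pseudocode into runnable Python (a minimal sketch; the function name fib is our own):

    def fib(n):
        # Base cases of the recurrence.
        if n == 0:
            return 0
        if n == 1:
            return 1
        # Each call spawns two further calls: exponential running time.
        return fib(n - 1) + fib(n - 2)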

Finding the n-th Fibonacci number

Recursive solution: the recursion tree for F(5) recomputes the same sub-problems over and over. F(3) is computed 2 times, F(2) 3 times, F(1) 5 times, and F(0) 3 times.

Finding the n-th Fibonacci number

Memoization: maintain a table to store each sub-problem's solution.

Solution with memoization:

    // Initially: Arr[0] = 0
    //            Arr[1] = 1
    //            Arr[2..n] = -1
    FIBONACCI(n)
        if (Arr[n] != -1)
            return Arr[n]
        else
            Arr[n] = FIBONACCI(n-1) + FIBONACCI(n-2)
            return Arr[n]
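The same idea in runnable Python (a sketch; a dictionary plays the role of the array Arr):

    def fib_memo(n, memo=None):
        # memo is the table of solved sub-problems, seeded with the base cases.
        if memo is None:
            memo = {0: 0, 1: 1}
        if n in memo:
            return memo[n]
        # Solve the sub-problem once and record its solution in the table.
        memo[n] = fib_memo(n - 1, memo) + fib_memo(n - 2, memo)
        return memo[n]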

Finding the n-th Fibonacci number

Solution with memoization: the recursion tree is pruned, since the second F(3) immediately returns the stored value 2 and the second F(2) returns the stored value 1. The finished table:

    n     0  1  2  3  4  5
    F(n)  0  1  1  2  3  5

Finding the n-th Fibonacci number

Bottom-up solution: use the natural ordering of sub-problems, solving them one by one starting from the "smallest" one.

    FIBONACCI(n)
        Arr[0] = 0
        Arr[1] = 1
        if (n > 1)
            for i = 2 to n do
                Arr[i] = Arr[i-1] + Arr[i-2]
        return Arr[n]
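In runnable Python (a sketch mirroring the pseudocode):

    def fib_bottom_up(n):
        # Fill the table from the smallest sub-problem upward.
        arr = [0] * (n + 1)
        if n > 0:
            arr[1] = 1
        for i in range(2, n + 1):
            arr[i] = arr[i - 1] + arr[i - 2]
        return arr[n]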

Time complexity

Recursive solution:

    FIBONACCI(n)
        if (n == 0) return 0
        else if (n == 1) return 1
        else return FIBONACCI(n-1) + FIBONACCI(n-2)

Every instance makes ≤ 2 recursive calls and the recursion tree has height n, so the tree has at most 2^n nodes. Therefore, the time complexity is O(2^n).

Time complexity

Bottom-up solution:

    FIBONACCI(n)
        Arr[0] = 0
        Arr[1] = 1
        if (n > 1)
            for i = 2 to n do
                Arr[i] = Arr[i-1] + Arr[i-2]
        return Arr[n]

There is a loop that iterates ≤ n times, each iteration doing a constant amount of work. So the time complexity is O(n).

Rod Cutting Problem

Serling Enterprises buys long steel rods and cuts them into shorter rods, which it then sells. Each cut is free; however, rods of different lengths sell for different prices. The management of Serling Enterprises wants to know the best way to cut up the rods. We assume we know the price table:

    length  1  2  3  4  5   6   7   8   9   10
    price   1  5  8  9  10  17  17  20  24  30

Example: n = 4. There are (n-1) possible cut locations, hence 2^(n-1) ways of cutting.

Rod Cutting Problem

Example: n = 4. The eight ways of cutting and their total prices:

    4              = 9
    1 + 3          = 1 + 8 = 9
    2 + 2          = 5 + 5 = 10   <- BEST
    3 + 1          = 8 + 1 = 9
    1 + 1 + 2      = 1 + 1 + 5 = 7
    1 + 2 + 1      = 1 + 5 + 1 = 7
    2 + 1 + 1      = 5 + 1 + 1 = 7
    1 + 1 + 1 + 1  = 4

Rod Cutting Problem

Consider a rod of length n, and suppose we cut off a piece of length i. We are then left with a rod of length n-i. Naturally, we want to optimize the selling price of the remaining rod.

Rod Cutting Problem

Recursive solution:

    ROD-CUTTING(n)
        if (n == 0) return 0
        else
            best = -∞
            for i = 1 to n do
                best = MAX(best, price[i] + ROD-CUTTING(n-i))
            return best
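A runnable Python sketch of this recursion, using the price table above (valid for n up to 10; PRICE[0] is a padding entry so that PRICE[i] is the price of a rod of length i):

    PRICE = [0, 1, 5, 8, 9, 10, 17, 17, 20, 24, 30]

    def rod_cutting(n):
        if n == 0:
            return 0
        # Try every length i for the first piece, recurse on the remainder.
        return max(PRICE[i] + rod_cutting(n - i) for i in range(1, n + 1))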

Rod Cutting Problem

Recursive solution: the recursion tree for RC(4) computes RC(2) twice, RC(1) four times, and RC(0) eight times. This is the same problem of redundant recomputation as recursive Fibonacci.

Rod Cutting Problem

Solution with memoization:

    // Initially: Arr[0] = 0
    //            Arr[1..n] = -∞
    ROD-CUTTING(n)
        if Arr[n] ≥ 0
            return Arr[n]
        else
            best = -∞
            for i = 1 to n do
                best = MAX(best, price[i] + ROD-CUTTING(n-i))
            Arr[n] = best
            return best

Exercise: write a bottom-up solution for the Rod Cutting problem!
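For comparison with your answer, one possible bottom-up sketch in Python (again assuming the PRICE table defined earlier):

    def rod_cutting_bottom_up(n):
        # arr[j] = best obtainable price for a rod of length j.
        arr = [0] * (n + 1)
        for j in range(1, n + 1):
            arr[j] = max(PRICE[i] + arr[j - i] for i in range(1, j + 1))
        return arr[n]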

Time complexity

Recursive solution:

    ROD-CUTTING(n)
        if (n == 0) return 0
        else
            best = -∞
            for i = 1 to n do
                best = MAX(best, price[i] + ROD-CUTTING(n-i))
            return best

Every instance makes ≤ n recursive calls and the depth of the recursion tree is O(n), giving the crude bound O(n^n); counting the calls exactly shows the running time is Θ(2^n), still exponential.

What is the time complexity of the bottom-up solution?

Shortest Path in DAG

DAG = Directed Acyclic Graph

Example: a weighted DAG with source S and vertices A, B, C, D.

Shortest Path in DAG

First, order the vertices S, A, B, C, D, E using a topological sort, so that every edge points from left to right.

Shortest Path in DAG

Starting from S, suppose we want to reach node D. The only way to reach D is through B or C, so

    dist(D) = MIN(dist(B) + 1, dist(C) + 3)

Shortest Path in DAG

A similar relation can be written for every node. As we have seen before, it is best to compute dist bottom-up, that is, from the "left-most" node to the "right-most" node:

    DAG-SHORTEST-PATH()
        initialize all dist[.] to ∞
        dist[S] = 0
        for each vertex v except S, in left-to-right order do
            for each edge (u,v) do
                dist[v] = MIN(dist[v], dist[u] + weight(u,v))
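A runnable Python sketch (assumptions of ours: the graph is given as an adjacency map of incoming edges, and order is a topological ordering that starts with the source):

    import math

    def dag_shortest_path(incoming, order, source):
        # incoming[v] = list of (u, weight) pairs for edges u -> v.
        dist = {v: math.inf for v in order}
        dist[source] = 0
        # Process vertices in topological (left-to-right) order, so every
        # dist[u] is final by the time edge (u, v) is relaxed.
        for v in order:
            for u, w in incoming.get(v, []):
                dist[v] = min(dist[v], dist[u] + w)
        return dist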

Dynamic Programming

DAG shortest path is a very general technique; we can model many other problems as DAG problems, with one node per sub-problem and one edge per choice.
Example #1: Fibonacci (nodes 1..5, with edges into node i from nodes i-1 and i-2).
Example #2: Rod Cutting (nodes 1..4, with an edge for every possible cut).

Properties of Dynamic Programming

The problem can be divided into smaller sub-problems.
Optimal substructure: an optimal solution to the problem contains within it optimal solutions to sub-problems.
Overlapping sub-problems: the space of sub-problems is "small", in the sense that a recursive algorithm for the problem solves the same sub-problems over and over, rather than always generating new sub-problems.

Dynamic Programming vs. Divide-and-Conquer

Both solve a problem by combining solutions of sub-problems, but in divide-and-conquer the sub-problems are disjoint, whereas dynamic programming pays off precisely when sub-problems overlap and their solutions can be reused.

Longest Increasing Subsequence

Given a sequence of numbers a_1, a_2, a_3, …, a_n, a subsequence is any subset of these numbers taken in order, of the form a_{i1}, a_{i2}, …, a_{ik} where 1 ≤ i1 < i2 < … < ik ≤ n, and an increasing subsequence is one in which the numbers get strictly larger. The task is to find the increasing subsequence of greatest length.

Example: 5, 2, 8, 6, 3, 6, 9, 7

Longest Increasing Subsequence

How do we model this problem as a DAG? Make each element a node, and let a_i precede a_j (draw an edge from a_i to a_j) iff i < j and a_i < a_j.

Sequence: 5, 2, 8, 6, 3, 6, 9, 7

Consider the second "6". An LIS that ends there must be one of:
the subsequence consisting of "6" alone;
an LIS that ends at "5", followed by "6";
an LIS that ends at "2", followed by "6";
an LIS that ends at "3", followed by "6".

Longest Increasing Subsequence

How do we find the LIS of the sequence 5, 2, 8, 6, 3, 6, 9, 7? Does the optimal solution always end at "7"?

    // Initially L[1..n] = 1
    LONGEST-INCREASING-SUBSEQUENCE()
        for j = 2 to n do
            for each i < j such that a_i < a_j do
                if (L[j] < 1 + L[i]) then
                    L[j] = 1 + L[i]
        return maximum of L[1..n]

This algorithm only gives the subsequence's length. How do we find the actual subsequence?

Reconstructing a solution

We can extend the dynamic programming approach to record not only the optimal value for each sub-problem, but also the choice that led to that value:

    // Initially L[1..n] = 1
    // Initially prev[1..n] = 0
    LONGEST-INCREASING-SUBSEQUENCE()
        for j = 2 to n do
            for each i < j such that a_i < a_j do
                if (L[j] < 1 + L[i]) then
                    L[j] = 1 + L[i]
                    prev[j] = i
        return maximum of L[1..n] and array prev
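A runnable Python sketch of both the length computation and the reconstruction (function and variable names are our own; here prev uses -1, rather than 0, to mark "no predecessor"):

    def longest_increasing_subsequence(a):
        n = len(a)
        length = [1] * n    # length[j] = LIS length ending at a[j]
        prev = [-1] * n     # prev[j] = index of a[j]'s predecessor in that LIS
        for j in range(1, n):
            for i in range(j):
                if a[i] < a[j] and length[j] < 1 + length[i]:
                    length[j] = 1 + length[i]
                    prev[j] = i
        # Walk back from the endpoint of a best LIS to recover the subsequence.
        j = max(range(n), key=lambda k: length[k])
        lis = []
        while j != -1:
            lis.append(a[j])
            j = prev[j]
        return lis[::-1]

    print(longest_increasing_subsequence([5, 2, 8, 6, 3, 6, 9, 7]))  # [2, 3, 6, 9]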

Exercise: Yuckdonald's

Yuckdonald's is considering opening a series of restaurants along Quaint Valley Highway (QVH). The n possible locations are along a straight line, and the distances of these locations from the start of QVH are, in miles and in increasing order, m_1, m_2, …, m_n. The constraints are as follows:
At each location, Yuckdonald's may open at most one restaurant.
The expected profit from opening a restaurant at location i is p_i > 0.
Any two restaurants must be at least k miles apart, where k is a positive integer.