Dynamic Programming: Sequence of decisions. Problem state. Principle of optimality. Dynamic programming recurrence equations. Solution of recurrence equations.

Sequence Of Decisions As in the greedy method, the solution to a problem is viewed as the result of a sequence of decisions. Unlike the greedy method, decisions are not made in a greedy and binding manner.

0/1 Knapsack Problem
Let x_i = 1 when item i is selected and let x_i = 0 when item i is not selected.

   maximize   sum_{i=1..n} p_i x_i
   subject to sum_{i=1..n} w_i x_i <= c
   and        x_i = 0 or 1 for all i

All profits and weights are positive.
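As a sanity check on this formulation, a tiny instance can be solved by brute force, trying every assignment of the x_i values (a minimal sketch; the class name and the item data are made up for illustration):

```java
// Brute-force check of the 0/1 knapsack formulation:
// try every assignment of x_1..x_n and keep the best feasible profit.
public class KnapsackBruteForce {
    static int bestProfit(int[] p, int[] w, int c) {
        int n = p.length, best = 0;
        for (int mask = 0; mask < (1 << n); mask++) {  // each mask encodes one x vector
            int profit = 0, weight = 0;
            for (int i = 0; i < n; i++)
                if ((mask & (1 << i)) != 0) { profit += p[i]; weight += w[i]; }
            if (weight <= c) best = Math.max(best, profit);  // feasible: weight within c
        }
        return best;
    }

    public static void main(String[] args) {
        int[] p = {6, 10, 12};   // profits (illustrative values)
        int[] w = {1, 2, 3};     // weights
        System.out.println(bestProfit(p, w, 5));  // items 2 and 3 fit: profit 22
    }
}
```

This tries all 2^n subsets, which is exactly the exponential behavior the dynamic programming development below improves on.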

Sequence Of Decisions
Decide the x_i values in the order x_1, x_2, x_3, …, x_n.
Decide the x_i values in the order x_n, x_{n-1}, x_{n-2}, …, x_1.
Decide the x_i values in the order x_1, x_n, x_2, x_{n-1}, …
Or any other order.

Problem State
The state of the 0/1 knapsack problem is given by
 - the weights and profits of the available items
 - the capacity of the knapsack
When a decision on one of the x_i values is made, the problem state changes:
 - item i is no longer available
 - the remaining knapsack capacity may be less

Problem State
Suppose that decisions are made in the order x_1, x_2, x_3, …, x_n. The initial state of the problem is described by the pair (1, c).
 - Items 1 through n are available (the weights, profits and n are implicit).
 - The available knapsack capacity is c.
Following the first decision the state becomes one of the following:
 - (2, c) … when the decision is to set x_1 = 0.
 - (2, c - w_1) … when the decision is to set x_1 = 1.

Problem State
Suppose that decisions are made in the order x_n, x_{n-1}, x_{n-2}, …, x_1. The initial state of the problem is described by the pair (n, c).
 - Items 1 through n are available (the weights, profits and first item index are implicit).
 - The available knapsack capacity is c.
Following the first decision the state becomes one of the following:
 - (n-1, c) … when the decision is to set x_n = 0.
 - (n-1, c - w_n) … when the decision is to set x_n = 1.

Principle Of Optimality
An optimal solution satisfies the following property: no matter what the first decision, the remaining decisions are optimal with respect to the state that results from this decision.
Dynamic programming may be used only when the principle of optimality holds.

0/1 Knapsack Problem
Suppose that decisions are made in the order x_1, x_2, x_3, …, x_n. Let x_1 = a_1, x_2 = a_2, x_3 = a_3, …, x_n = a_n be an optimal solution.
If a_1 = 0, then following the first decision the state is (2, c). a_2, a_3, …, a_n must be an optimal solution to the knapsack instance given by the state (2, c).

x_1 = a_1 = 0
If not, this instance has a better solution b_2, b_3, …, b_n:

   maximize   sum_{i=2..n} p_i x_i
   subject to sum_{i=2..n} w_i x_i <= c
   and        x_i = 0 or 1 for all i

with sum_{i=2..n} p_i b_i > sum_{i=2..n} p_i a_i.

x_1 = a_1 = 0
Then x_1 = a_1, x_2 = b_2, x_3 = b_3, …, x_n = b_n is a better solution to the original instance than is x_1 = a_1, x_2 = a_2, x_3 = a_3, …, x_n = a_n. So x_1 = a_1, x_2 = a_2, x_3 = a_3, …, x_n = a_n cannot be an optimal solution … a contradiction with the assumption that it is optimal.

x_1 = a_1 = 1
Next, consider the case a_1 = 1. Following the first decision the state is (2, c - w_1). a_2, a_3, …, a_n must be an optimal solution to the knapsack instance given by the state (2, c - w_1).

x_1 = a_1 = 1
If not, this instance has a better solution b_2, b_3, …, b_n:

   maximize   sum_{i=2..n} p_i x_i
   subject to sum_{i=2..n} w_i x_i <= c - w_1
   and        x_i = 0 or 1 for all i

with sum_{i=2..n} p_i b_i > sum_{i=2..n} p_i a_i.

x_1 = a_1 = 1
Then x_1 = a_1, x_2 = b_2, x_3 = b_3, …, x_n = b_n is a better solution to the original instance than is x_1 = a_1, x_2 = a_2, x_3 = a_3, …, x_n = a_n. So x_1 = a_1, x_2 = a_2, x_3 = a_3, …, x_n = a_n cannot be an optimal solution … a contradiction with the assumption that it is optimal.

0/1 Knapsack Problem Therefore, no matter what the first decision, the remaining decisions are optimal with respect to the state that results from this decision. The principle of optimality holds and dynamic programming may be applied.

Dynamic Programming Recurrence
Let f(i, y) be the profit value of the optimal solution to the knapsack instance defined by the state (i, y):
 - Items i through n are available.
 - Available capacity is y.
For the time being, assume that we wish to determine only the value of the best solution; later we will worry about determining the x_i values that yield this maximum value. Under this assumption, our task is to determine f(1, c).

Dynamic Programming Recurrence
f(n, y) is the value of the optimal solution to the knapsack instance defined by the state (n, y):
 - Only item n is available.
 - Available capacity is y.
If w_n <= y, f(n, y) = p_n. If w_n > y, f(n, y) = 0.

Dynamic Programming Recurrence
Suppose that i < n. f(i, y) is the value of the optimal solution to the knapsack instance defined by the state (i, y):
 - Items i through n are available.
 - Available capacity is y.
Suppose that in the optimal solution for the state (i, y), the first decision is to set x_i = 0. From the principle of optimality (we have shown that this principle holds for the knapsack problem), it follows that f(i, y) = f(i+1, y).

Dynamic Programming Recurrence
The only other possibility for the first decision is x_i = 1. The case x_i = 1 can arise only when y >= w_i. From the principle of optimality, it follows that f(i, y) = f(i+1, y - w_i) + p_i.
Combining the two cases, we get
 - f(i, y) = f(i+1, y) whenever y < w_i,
 - f(i, y) = max{f(i+1, y), f(i+1, y - w_i) + p_i} whenever y >= w_i.

Recursive Code

/** Return f(i, y), the best profit using items i..n with capacity y. */
private static int f(int i, int y)
{
   if (i == n) return (y < w[n]) ? 0 : p[n];
   if (y < w[i]) return f(i + 1, y);
   return Math.max(f(i + 1, y), f(i + 1, y - w[i]) + p[i]);
}

Recursion Tree
[Recursion tree for f(1, c): the root f(1, c) calls f(2, c) and f(2, c - w_1); level 3 contains f(3, c), f(3, c - w_1), f(3, c - w_2), f(3, c - w_1 - w_2); the tree continues downward to calls such as f(4, c - w_1 - w_3) and f(5, c - w_1 - w_3 - w_4).]

Time Complexity
Let t(n) be the time required when n items are available.
t(0) = t(1) = a, where a is a constant.
When n > 1, t(n) <= 2t(n-1) + b, where b is a constant.
So t(n) = O(2^n).
Solving dynamic programming recurrences recursively can be hazardous to run time.
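The 2^n growth can be observed directly by instrumenting the recursion with a call counter. This is a hypothetical sketch: all weights and profits are set to 1 and the capacity to n, so that both branches are explored at every step and the recursion tree is a full binary tree with 2^n - 1 nodes.

```java
// Count the calls made by the plain recursive f when both branches
// are always taken (w[i] = p[i] = 1 for all i, capacity = n).
public class CallCounter {
    static int n;
    static long calls;

    static int f(int i, int y) {              // plain recursion, no memoization
        calls++;
        if (i == n) return (y < 1) ? 0 : 1;   // base case with w[n] = p[n] = 1
        if (y < 1) return f(i + 1, y);
        return Math.max(f(i + 1, y), f(i + 1, y - 1) + 1);
    }

    static long countCalls(int items) {
        n = items;
        calls = 0;
        f(1, items);                          // capacity = n, so y >= 1 everywhere
        return calls;
    }

    public static void main(String[] args) {
        System.out.println(countCalls(10));   // 2^10 - 1 = 1023 calls
        System.out.println(countCalls(20));   // 2^20 - 1 calls
    }
}
```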

Reducing Run Time
[The same recursion tree for f(1, c), repeated here with the duplicated f(i, y) computations highlighted.]

Time Complexity
Level i of the recursion tree has up to 2^{i-1} nodes. At each such node an f(i, y) is computed. Several nodes may compute the same f(i, y).
We can save time by not recomputing already computed f(i, y)s. Save computed f(i, y)s in a dictionary:
 - The key is the (i, y) value.
 - f(i, y) is computed recursively only when (i, y) is not in the dictionary.
 - Otherwise, the dictionary value is used.
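The dictionary scheme might be sketched with a HashMap keyed on an encoding of the (i, y) pair. The class name, the key encoding, and the sample data here are assumptions for illustration; the slides specialize the dictionary to an array next.

```java
import java.util.HashMap;
import java.util.Map;

// Memoized recursion: each distinct f(i, y) is computed at most once.
public class KnapsackMemo {
    static int n;
    static int[] p, w;                             // p[1..n], w[1..n]
    static Map<Long, Integer> memo = new HashMap<>();

    static int f(int i, int y) {
        long key = (long) i * 1_000_000L + y;      // encode (i, y); assumes y < 1_000_000
        Integer cached = memo.get(key);
        if (cached != null) return cached;         // dictionary hit: no recomputation
        int result;
        if (i == n) result = (y < w[n]) ? 0 : p[n];
        else if (y < w[i]) result = f(i + 1, y);
        else result = Math.max(f(i + 1, y), f(i + 1, y - w[i]) + p[i]);
        memo.put(key, result);
        return result;
    }

    public static void main(String[] args) {
        n = 3;
        p = new int[]{0, 6, 10, 12};               // index 0 unused, as in the slides' code
        w = new int[]{0, 1, 2, 3};
        System.out.println(f(1, 5));               // prints 22 (take items 2 and 3)
    }
}
```

A HashMap works even for non-integer or very large capacities, at the cost of hashing overhead; with small integer weights an array is simpler and faster.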

Integer Weights
Assume that each weight is an integer. The knapsack capacity c may also be assumed to be an integer.
Only f(i, y)s with 1 <= i <= n and 0 <= y <= c are of interest.
Even though level i of the recursion tree has up to 2^{i-1} nodes, at most c+1 of them represent different f(i, y)s.

Integer Weights Dictionary
Use an array fArray[][] as the dictionary: fArray[1:n][0:c].
fArray[i][y] = -1 iff f(i, y) has not yet been computed.
This initialization is done before the recursive method is invoked; it takes O(cn) time.
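The allocation and -1 initialization described above might look like this (a sketch; the helper name and the sizes are illustrative):

```java
import java.util.Arrays;

public class FArrayInit {
    // Allocate the dictionary and mark every entry "not yet computed" (-1).
    static int[][] makeFArray(int n, int c) {
        int[][] fArray = new int[n + 1][c + 1];  // rows 1..n, columns 0..c are used
        for (int[] row : fArray)
            Arrays.fill(row, -1);                // O(cn) time, as the slide notes
        return fArray;
    }

    public static void main(String[] args) {
        int[][] fArray = makeFArray(3, 5);       // illustrative sizes
        System.out.println(fArray[1][0]);        // prints -1
    }
}
```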

No Recomputation Code

private static int f(int i, int y)
{
   if (fArray[i][y] >= 0) return fArray[i][y];   // already computed
   if (i == n)
   {
      fArray[i][y] = (y < w[n]) ? 0 : p[n];
      return fArray[i][y];
   }
   if (y < w[i]) fArray[i][y] = f(i + 1, y);
   else fArray[i][y] = Math.max(f(i + 1, y), f(i + 1, y - w[i]) + p[i]);
   return fArray[i][y];
}
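The slides defer recovering the x_i values that achieve f(1, c). One standard traceback, sketched here with illustrative data (it is not from the slides), walks the filled fArray from state (1, c), taking item i exactly when skipping it would lose profit:

```java
// Compute fArray via the memoized f, then trace back the x_i values.
// Field names follow the slides' code; the sample data is illustrative.
public class KnapsackTraceback {
    static int n = 3, c = 5;
    static int[] p = {0, 6, 10, 12};   // p[1..n], index 0 unused
    static int[] w = {0, 1, 2, 3};     // w[1..n]
    static int[][] fArray = new int[n + 1][c + 1];

    static int f(int i, int y) {
        if (fArray[i][y] >= 0) return fArray[i][y];
        if (i == n) return fArray[i][y] = (y < w[n]) ? 0 : p[n];
        if (y < w[i]) return fArray[i][y] = f(i + 1, y);
        return fArray[i][y] = Math.max(f(i + 1, y), f(i + 1, y - w[i]) + p[i]);
    }

    // x_i = 1 exactly when dropping item i from the remaining capacity loses profit.
    static int[] traceback() {
        int[] x = new int[n + 1];                            // x[1..n]
        int y = c;
        for (int i = 1; i < n; i++) {
            if (fArray[i][y] == fArray[i + 1][y]) x[i] = 0;  // skipping i is optimal
            else { x[i] = 1; y -= w[i]; }                    // item i was taken
        }
        x[n] = (y >= w[n] && fArray[n][y] == p[n]) ? 1 : 0;  // decide last item directly
        return x;
    }

    public static void main(String[] args) {
        for (int[] row : fArray) java.util.Arrays.fill(row, -1);
        System.out.println(f(1, c));                                 // prints 22
        System.out.println(java.util.Arrays.toString(traceback()));  // prints [0, 0, 1, 1]
    }
}
```

Every fArray entry the traceback reads was filled while computing f(1, c), because each computed f(i, y) with i < n first computes f(i+1, y); so the walk takes only O(n) extra time.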

Time Complexity
t(n) = O(cn). (The analysis is done in the text.)
This is good when cn is small relative to 2^n. For example, with n = 3, c = …, w = [100102, …, 6327], p = [102, 505, 5]: 2^n = 8 while cn = ….