ITCS 6114 Dynamic Programming: 0-1 Knapsack Problem

Similar presentations
Dynamic Programming 25-Mar-17.

MCS 312: NP Completeness and Approximation algorithms Instructor Neelima Gupta
Knapsack Problem Section 7.6. Problem Suppose we have n items U={u 1,..u n }, that we would like to insert into a knapsack of size C. Each item u i has.
Chapter 5 Fundamental Algorithm Design Techniques.
Analysis of Algorithms
CPSC 335 Dynamic Programming Dr. Marina Gavrilova Computer Science University of Calgary Canada.
Overview What is Dynamic Programming? A Sequence of 4 Steps
COMP8620 Lecture 8 Dynamic Programming.
9/27/10 A. Smith; based on slides by E. Demaine, C. Leiserson, S. Raskhodnikova, K. Wayne Adam Smith Algorithm Design and Analysis L ECTURE 14 Dynamic.
Review: Dynamic Programming
Introduction to Algorithms
Dynamic Programming (II)
Greedy vs Dynamic Programming Approach
Dynamic Programming Reading Material: Chapter 7..
0-1 Knapsack Problem A burglar breaks into a museum and finds “n” items Let v_i denote the value of ith item, and let w_i denote the weight of the ith.
Dynamic Programming1. 2 Outline and Reading Matrix Chain-Product (§5.3.1) The General Technique (§5.3.2) 0-1 Knapsack Problem (§5.3.3)
© 5/7/2002 V.S. Subrahmanian1 Knapsack Problem Notes V.S. Subrahmanian University of Maryland.
Dynamic Programming Dynamic Programming algorithms address problems whose solution is recursive in nature, but has the following property: The direct implementation.
CSC401 – Analysis of Algorithms Lecture Notes 12 Dynamic Programming
KNAPSACK PROBLEM A dynamic approach. Knapsack Problem  Given a sack, able to hold K kg  Given a list of objects  Each has a weight and a value  Try.
Dynamic Programming1 Modified by: Daniel Gomez-Prado, University of Massachusetts Amherst.
Dynamic Programming Reading Material: Chapter 7 Sections and 6.
Fundamental Techniques
Dynamic Programming 0-1 Knapsack These notes are taken from the notes by Dr. Steve Goddard at
1 Dynamic Programming Jose Rolim University of Geneva.
Lecture 7 Topics Dynamic Programming
1 The Greedy Method CSC401 – Analysis of Algorithms Lecture Notes 10 The Greedy Method Objectives Introduce the Greedy Method Use the greedy method to.
Cutting Planes II. The Knapsack Problem Recall the knapsack problem: n items to be packed in a knapsack (can take multiple copies of the same item). The.
Dynamic Programming – Part 2 Introduction to Algorithms Dynamic Programming – Part 2 CSE 680 Prof. Roger Crawfis.
Recursion and Dynamic Programming. Recursive thinking… Recursion is a method where the solution to a problem depends on solutions to smaller instances.
1 0-1 Knapsack problem Dr. Ying Lu RAIK 283 Data Structures & Algorithms.
CSC401: Analysis of Algorithms CSC401 – Analysis of Algorithms Chapter Dynamic Programming Objectives: Present the Dynamic Programming paradigm.
Introduction to Algorithms Chapter 16: Greedy Algorithms.
Greedy Methods and Backtracking Dr. Marina Gavrilova Computer Science University of Calgary Canada.
6/4/ ITCS 6114 Dynamic programming Longest Common Subsequence.
Topic 25 Dynamic Programming "Thus, I thought dynamic programming was a good name. It was something not even a Congressman could object to. So I used it.
CS 3343: Analysis of Algorithms Lecture 18: More Examples on Dynamic Programming.
1 Dynamic Programming Topic 07 Asst. Prof. Dr. Bunyarit Uyyanonvara IT Program, Image and Vision Computing Lab. School of Information and Computer Technology.
Dynamic Programming.  Decomposes a problem into a series of sub- problems  Builds up correct solutions to larger and larger sub- problems  Examples.
CS 3343: Analysis of Algorithms Lecture 19: Introduction to Greedy Algorithms.
Dynamic Programming … Continued 0-1 Knapsack Problem.
CS 361 – Chapter 10 “Greedy algorithms” It’s a strategy of solving some problems –Need to make a series of choices –Each choice is made to maximize current.
Dynamic Programming … Continued
Dynamic Programming. What is Dynamic Programming  A method for solving complex problems by breaking them down into simpler sub problems. It is applicable.
Analysis of Algorithms CS 477/677 Instructor: Monica Nicolescu Lecture 17.
Introduction to Algorithms: Brute-Force Algorithms.
Algorithm Design Methods
Dynamic Programming.
CS Algorithms Dynamic programming 0-1 Knapsack problem 12/5/2018.
Dynamic Programming Dr. Yingwu Zhu Chapter 15.
Dynamic Programming 1/15/2019 8:22 PM Dynamic Programming.
Dynamic Programming Dynamic Programming 1/15/ :41 PM
CS6045: Advanced Algorithms
Dynamic Programming Dynamic Programming 1/18/ :45 AM
Merge Sort 1/18/ :45 AM Dynamic Programming Dynamic Programming.
Dynamic Programming Merge Sort 1/18/ :45 AM Spring 2007
Longest Common Subsequence
Merge Sort 2/22/ :33 AM Dynamic Programming Dynamic Programming.
CSC 413/513- Intro to Algorithms
Lecture 4 Dynamic Programming
Merge Sort 4/28/ :13 AM Dynamic Programming Dynamic Programming.
0-1 Knapsack problem.
Dynamic Programming Merge Sort 5/23/2019 6:18 PM Spring 2008
Knapsack Problem A dynamic approach.
Algorithm Course Dr. Aref Rashad
Presentation transcript:

ITCS 6114 Dynamic Programming: 0-1 Knapsack Problem

0-1 Knapsack problem
- Given a knapsack with maximum capacity W and a set S consisting of n items.
- Each item i has some weight w_i and benefit value b_i (all w_i, b_i, and W are integer values).
- Problem: how do we pack the knapsack to achieve the maximum total value of packed items?

0-1 Knapsack problem: a picture
[Figure: a knapsack with maximum weight W = 20 next to a table of items, each listed with its weight w_i and benefit value b_i.]

0-1 Knapsack problem
- The problem, in other words, is to find a subset of the items whose total weight is at most W and whose total benefit is as large as possible.
- The problem is called a "0-1" problem because each item must be entirely accepted or rejected.
- Another version of this problem is the "fractional knapsack problem", where we can take fractions of items.
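In symbols, with x_i = 1 when item i is packed and x_i = 0 otherwise (a standard way to write it, not copied from the slide):

```latex
\max \sum_{i=1}^{n} b_i x_i
\quad \text{subject to} \quad
\sum_{i=1}^{n} w_i x_i \le W,
\qquad x_i \in \{0, 1\},\ i = 1, \dots, n.
```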

0-1 Knapsack problem: brute-force approach
Let's first solve this problem with a straightforward algorithm:
- Since there are n items, there are 2^n possible combinations of items.
- We go through all combinations and find the one with the largest total value whose total weight is less than or equal to W.
- The running time will be O(2^n).
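As a small sketch of this brute-force idea (the name knapsack_bruteforce and its signature are only illustrative, not from the slides), the following Python routine enumerates all 2^n subsets:

```python
from itertools import combinations

def knapsack_bruteforce(weights, benefits, W):
    """Try all 2^n subsets of items; keep the best one whose total weight fits in W."""
    n = len(weights)
    best = 0
    for r in range(n + 1):
        for subset in combinations(range(n), r):
            total_weight = sum(weights[i] for i in subset)
            total_benefit = sum(benefits[i] for i in subset)
            if total_weight <= W and total_benefit > best:
                best = total_benefit
    return best

# Example using the four items that appear later in these slides.
print(knapsack_bruteforce([2, 3, 4, 5], [3, 4, 5, 6], W=5))  # prints 7
```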

0-1 Knapsack problem: brute-force approach
- Can we do better?
- Yes, with an algorithm based on dynamic programming.
- We need to carefully identify the subproblems.
Let's try this: if items are labeled 1..n, then a subproblem would be to find an optimal solution for S_k = {items labeled 1, 2, ..., k}.

Defining a Subproblem
If items are labeled 1..n, then a subproblem would be to find an optimal solution for S_k = {items labeled 1, 2, ..., k}.
- This is a valid subproblem definition.
- The question is: can we describe the final solution (S_n) in terms of the subproblems (S_k)?
- Unfortunately, we can't do that. An explanation follows.

Defining a Subproblem
Max weight: W = 20. Items (weight, benefit): item 1 (2, 3), item 2 (4, 5), item 3 (5, 8), item 4 (3, 4), item 5 (9, 10).
For S_4, the best subset is {1, 2, 3, 4}: total weight 14, total benefit 20.
For S_5, the best subset is {1, 2, 3, 5}: total weight 20, total benefit 26.
The solution for S_4 is not part of the solution for S_5!

Defining a Subproblem (continued)
- As we have seen, the solution for S_4 is not part of the solution for S_5.
- So our definition of a subproblem is flawed and we need another one!
- Let's add another parameter: w, which will represent the weight limit for each subset of items.
- The subproblem then will be to compute B[k, w].

Recursive Formula for Subproblems
Recursive formula for the subproblems:
    B[k, w] = B[k-1, w]                                   if w_k > w
    B[k, w] = max( B[k-1, w], B[k-1, w - w_k] + b_k )     otherwise
It means that the best subset of S_k that has total weight w is one of the two:
1) the best subset of S_{k-1} that has total weight w, or
2) the best subset of S_{k-1} that has total weight w - w_k, plus item k.

Recursive Formula
- The best subset of S_k that has total weight w either contains item k or it does not.
- First case: w_k > w. Item k can't be part of the solution, because if it were, the total weight would be greater than w, which is unacceptable.
- Second case: w_k <= w. Then item k can be in the solution, and we choose the case with the greater value.
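The two cases translate directly into a top-down reading of the recurrence. This is a minimal Python sketch (the name knapsack_memo is illustrative, not from the slides), memoized so each pair (k, w) is computed only once:

```python
from functools import lru_cache

def knapsack_memo(weights, benefits, W):
    """B(k, w) = best benefit using only items 1..k with weight limit w."""
    @lru_cache(maxsize=None)
    def B(k, w):
        if k == 0 or w == 0:
            return 0
        if weights[k - 1] > w:                 # first case: item k cannot fit
            return B(k - 1, w)
        return max(B(k - 1, w),                # second case: skip item k ...
                   benefits[k - 1] + B(k - 1, w - weights[k - 1]))  # ... or take it
    return B(len(weights), W)

print(knapsack_memo([2, 3, 4, 5], [3, 4, 5, 6], W=5))  # prints 7
```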

The 0-1 Knapsack Algorithm

for w = 0 to W
    B[0, w] = 0
for i = 0 to n
    B[i, 0] = 0
for i = 1 to n
    for w = 1 to W
        if w_i <= w                               // item i can be part of the solution
            if b_i + B[i-1, w - w_i] > B[i-1, w]
                B[i, w] = b_i + B[i-1, w - w_i]
            else
                B[i, w] = B[i-1, w]
        else
            B[i, w] = B[i-1, w]                   // w_i > w
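A direct Python translation of this pseudocode, sketched here with the illustrative name knapsack_01; it returns the whole table B so the trace-back discussed later can use it:

```python
def knapsack_01(weights, benefits, W):
    """Fill the table B[i][w]: best benefit using items 1..i with weight limit w."""
    n = len(weights)
    B = [[0] * (W + 1) for _ in range(n + 1)]       # rows i = 0..n, columns w = 0..W
    for i in range(1, n + 1):
        w_i, b_i = weights[i - 1], benefits[i - 1]  # item i lives at index i-1
        for w in range(1, W + 1):
            if w_i <= w:                            # item i can be part of the solution
                B[i][w] = max(B[i - 1][w], b_i + B[i - 1][w - w_i])
            else:                                   # w_i > w
                B[i][w] = B[i - 1][w]
    return B
```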

Running time
- Initializing row 0 ("for w = 0 to W: B[0, w] = 0") takes O(W); initializing column 0 takes O(n).
- The main loop runs the inner "for w = 1 to W" body, which is O(W), once for each of the n items.
- What is the running time of this algorithm? O(n*W).
- Remember that the brute-force algorithm takes O(2^n).

2/19/ Example Let’s run our algorithm on the following data: n = 4 (# of elements) W = 5 (max weight) Elements (weight, benefit): (2,3), (3,4), (4,5), (5,6)

Example (2)
for w = 0 to W: B[0, w] = 0 -- row i = 0 of the table is filled with 0 for every w = 0..5.

Example (3)
for i = 0 to n: B[i, 0] = 0 -- column w = 0 of the table is filled with 0 for every i = 0..4.

Example (4)
i = 1, b_i = 3, w_i = 2; w = 1, w - w_i = -1.
Since w_i > w, item 1 cannot fit: B[1, 1] = B[0, 1] = 0.

Example (5)
i = 1, b_i = 3, w_i = 2; w = 2, w - w_i = 0.
b_i + B[0, 0] = 3 > B[0, 2] = 0, so B[1, 2] = 3.

Example (6)
i = 1, b_i = 3, w_i = 2; w = 3, w - w_i = 1.
b_i + B[0, 1] = 3 > B[0, 3] = 0, so B[1, 3] = 3.

Example (7)
i = 1, b_i = 3, w_i = 2; w = 4, w - w_i = 2.
b_i + B[0, 2] = 3 > B[0, 4] = 0, so B[1, 4] = 3.

Example (8)
i = 1, b_i = 3, w_i = 2; w = 5, w - w_i = 3.
b_i + B[0, 3] = 3 > B[0, 5] = 0, so B[1, 5] = 3. Row 1 is now 0, 0, 3, 3, 3, 3 (for w = 0..5).

Example (9)
i = 2, b_i = 4, w_i = 3; w = 1, w - w_i = -2.
Since w_i > w, B[2, 1] = B[1, 1] = 0.

Example (10)
i = 2, b_i = 4, w_i = 3; w = 2, w - w_i = -1.
Since w_i > w, B[2, 2] = B[1, 2] = 3.

Example (11)
i = 2, b_i = 4, w_i = 3; w = 3, w - w_i = 0.
b_i + B[1, 0] = 4 > B[1, 3] = 3, so B[2, 3] = 4.

Example (12)
i = 2, b_i = 4, w_i = 3; w = 4, w - w_i = 1.
b_i + B[1, 1] = 4 > B[1, 4] = 3, so B[2, 4] = 4.

Example (13)
i = 2, b_i = 4, w_i = 3; w = 5, w - w_i = 2.
b_i + B[1, 2] = 4 + 3 = 7 > B[1, 5] = 3, so B[2, 5] = 7. Row 2 is now 0, 0, 3, 4, 4, 7.

Example (14)
i = 3, b_i = 5, w_i = 4; w = 1..3.
Since w_i > w in each case, row 3 copies row 2: B[3, 1] = 0, B[3, 2] = 3, B[3, 3] = 4.

Example (15)
i = 3, b_i = 5, w_i = 4; w = 4, w - w_i = 0.
b_i + B[2, 0] = 5 > B[2, 4] = 4, so B[3, 4] = 5.

Example (16)
i = 3, b_i = 5, w_i = 4; w = 5, w - w_i = 1.
b_i + B[2, 1] = 5 < B[2, 5] = 7, so B[3, 5] = B[2, 5] = 7. Row 3 is now 0, 0, 3, 4, 5, 7.

Example (17)
i = 4, b_i = 6, w_i = 5; w = 1..4.
Since w_i > w in each case, row 4 copies row 3: B[4, 1] = 0, B[4, 2] = 3, B[4, 3] = 4, B[4, 4] = 5.

Example (18)
i = 4, b_i = 6, w_i = 5; w = 5, w - w_i = 0.
b_i + B[3, 0] = 6 < B[3, 5] = 7, so B[4, 5] = B[3, 5] = 7.
The completed table (rows i = 0..4, columns w = 0..5):
i = 0: 0 0 0 0 0 0
i = 1: 0 0 3 3 3 3
i = 2: 0 0 3 4 4 7
i = 3: 0 0 3 4 5 7
i = 4: 0 0 3 4 5 7
The maximum benefit that fits in the knapsack is B[4, 5] = 7.

Comments
- This algorithm only finds the maximum possible value that can be carried in the knapsack.
- To know which items make up this maximum value, an addition to this algorithm is necessary.
- Please see the LCS algorithm from the previous lecture for an example of how to extract this data from the table we built.
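As one possible addition (a sketch of the usual trace-back idea, analogous to the LCS one, assuming the knapsack_01 sketch above returned the full table), the chosen items can be recovered by walking the table from B[n][W] upward:

```python
def chosen_items(B, weights, W):
    """Trace back through the finished table and return the 1-based item numbers taken."""
    items = []
    w = W
    for i in range(len(B) - 1, 0, -1):   # i = n, n-1, ..., 1
        if B[i][w] != B[i - 1][w]:       # value changed, so item i was included
            items.append(i)
            w -= weights[i - 1]
    return sorted(items)

# For the example above, chosen_items(B, [2, 3, 4, 5], 5) returns [1, 2]
# (weight 2 + 3 = 5, benefit 3 + 4 = 7).
```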

Conclusion
- Dynamic programming is a useful technique for solving certain kinds of problems.
- When the solution can be recursively described in terms of partial solutions, we can store these partial solutions and re-use them as necessary.
- Running time (dynamic programming algorithm vs. naïve algorithm):
  - LCS: O(m*n) vs. O(n * 2^m)
  - 0-1 Knapsack problem: O(W*n) vs. O(2^n)