0-1 Knapsack Problem

A burglar breaks into a museum and finds n items. Let v_i denote the value of the ith item, and let w_i denote its weight. The burglar carries a knapsack capable of holding a total weight W, and wishes to carry away the most valuable subset of items subject to the weight constraint (he wants to steal the diamonds before the gold). We assume that the burglar cannot take a fraction of an object: each object must be taken entirely or left behind. If a fraction of an object could be taken for the same fraction of its value and weight, the problem would be very easily solved.

0-1 Knapsack: Formal Definition

Given values $\langle v_1, v_2, \ldots, v_n \rangle$, weights $\langle w_1, w_2, \ldots, w_n \rangle$, and a capacity $W > 0$, we wish to determine the subset $T \subseteq \{1, 2, \ldots, n\}$ (of objects to "take") that maximizes

$$\sum_{i \in T} v_i$$

subject to the constraint

$$\sum_{i \in T} w_i \le W$$

0-1 Knapsack: Brute-Force Algorithm

A brute-force algorithm would try every possible combination of items that fits in the knapsack and see which one yields the highest total value. How many possible sets are there? The number of non-empty subsets of the n items:

Subsets containing 1 item: $\binom{n}{1} = n$
Subsets containing 2 items: $\binom{n}{2}$
Subsets containing 3 items: $\binom{n}{3}$
…
Subsets containing n items: $\binom{n}{n} = 1$

$$\sum_{k=1}^{n} \binom{n}{k} = 2^n - 1$$

An exponential number of alternatives.
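
To make the exponential blow-up concrete, here is a minimal brute-force sketch in C (my addition, not from the slides); it enumerates every subset as a bitmask, so it is only usable for small n:

/* Brute-force 0-1 knapsack: try every subset (encoded as a bitmask)
   and keep the best one that fits. O(2^n * n) time, practical only
   for small n. Items are 0-indexed here, unlike the 1-indexed slides. */
int knapsack_brute(const int v[], const int w[], int n, int W) {
    int best = 0;
    for (unsigned long mask = 0; mask < (1UL << n); mask++) {
        int val = 0, wt = 0;
        for (int i = 0; i < n; i++)
            if (mask & (1UL << i)) { val += v[i]; wt += w[i]; }
        if (wt <= W && val > best) best = val;
    }
    return best;
}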

0-1 Knapsack: Greedy Algorithm

The greedy approach is to compute the gain per unit of weight (v_i/w_i) for each item i, and to fill the knapsack starting with the item having the highest gain, continuing this way until the knapsack is full. Example next…

0-1 Knapsack: Greedy Algorithm

Example (knapsack capacity W = 60):

Item   Value   Weight   Gain (v_i/w_i)
 1      $30       5          6.0
 2      $20      10          2.0
 3     $100      20          5.0
 4      $90      30          3.0
 5     $160      40          4.0

Greedy solution to the fractional problem: items 1 and 3, plus 35/40 of item 5 (worth $140), for $30 + $100 + $140 = $270.
Greedy solution to the 0-1 problem: items 1 and 3; item 5 no longer fits, so item 4 is taken instead, for $30 + $100 + $90 = $220.
Optimal solution to the 0-1 problem: items 3 and 5, for $100 + $160 = $260.
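
For the fractional variant, the greedy strategy described above is easy to code. Here is a minimal sketch (mine, not from the slides), assuming the items are already sorted by decreasing v_i/w_i:

/* Greedy algorithm for the FRACTIONAL knapsack. Assumes items are
   pre-sorted by v[i]/w[i] in decreasing order (0-indexed arrays).
   Returns the total value, possibly including a fraction of the
   last item taken. On the example above it returns 270. */
double fractional_knapsack(const int v[], const int w[], int n, int W) {
    double total = 0.0;
    int remaining = W;
    for (int i = 0; i < n && remaining > 0; i++) {
        if (w[i] <= remaining) {               /* whole item fits */
            total += v[i];
            remaining -= w[i];
        } else {                               /* take the fitting fraction */
            total += (double)v[i] * remaining / w[i];
            remaining = 0;
        }
    }
    return total;
}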

0-1 Knapsack: Greedy Algorithm

The greedy algorithm does not work for the 0-1 problem because we cannot take a fraction of an item; if fractions were allowed, the greedy algorithm would find the optimal solution. How do we solve 0-1 knapsack then? It turns out this problem is NP-complete, so we cannot hope to find an efficient algorithm in general. However, if we make the same sort of assumption that we made in counting sort, namely that the weights are reasonably small integers, we can come up with an efficient algorithm.

Formulation of the Problem

We have objects {1, 2, 3, …, n} and a knapsack of capacity W. Here is how we formulate the problem: assume you have a function Knapsack(v, w, n, W) that computes the optimal value.
Leave the last object: If we choose not to take object n, then the optimal value comes from filling a knapsack of capacity W with the remaining objects {1, 2, …, n-1}. This is Knapsack(v, w, n-1, W).
Take the last object: If we take object n, then we gain a value of v[n] but use up w[n] of our capacity. With the remaining W - w[n] capacity in the knapsack, we fill it in the best possible way with the objects {1, 2, …, n-1}. This is v[n] + Knapsack(v, w, n-1, W - w[n]), and it is only possible if w[n] <= W.

0-1 Knapsack: Divide-and-Conquer Algorithm

Knapsack(v[1..n], w[1..n], n, W) {
  if (n <= 0) return 0;
  if (W <= 0) return 0;

  // Consider leaving object n
  leave_val = Knapsack(v, w, n-1, W);

  // Consider taking object n (only possible if it fits)
  take_val = -INFINITY;
  if (w[n] <= W)
    take_val = v[n] + Knapsack(v, w, n-1, W - w[n]);

  return max(leave_val, take_val);
} // end-Knapsack

Running time? $O(2^n)$ in the worst case. Notice that the function makes two calls to itself, and at each call n is decremented by 1. So the height of the function-call tree is potentially n, giving rise to potentially $2^n$ function invocations.

0-1 Knapsack: DP Solution

The problem with the divide-and-conquer algorithm is that it solves the same subproblems over and over again. This is what DP is all about: making divide-and-conquer algorithms more efficient. The idea is to store the solutions to subproblems in a table and retrieve them from that table the next time we need to solve the same subproblem. This way each subproblem is solved just once, giving rise to efficient algorithms.
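
As an illustration of this idea (my sketch, not from the slides; the bounds MAX_N and MAX_W are hypothetical), the recursive algorithm can be memoized directly: cache each (n, W) result the first time it is computed, so every subproblem is solved once.

#include <string.h>

/* Memoized (top-down) version of the recursive algorithm above.
   memo[i][j] caches the result of Knapsack(v, w, i, j); -1 means
   "not computed yet". Arrays v[] and w[] are 1-indexed, as in the
   slides. MAX_N and MAX_W are illustrative bounds. */
#define MAX_N 100
#define MAX_W 1000
static int memo[MAX_N + 1][MAX_W + 1];

int knapsack_memo(const int v[], const int w[], int n, int W) {
    if (n <= 0 || W <= 0) return 0;
    if (memo[n][W] >= 0) return memo[n][W];    /* reuse stored answer */
    int best = knapsack_memo(v, w, n - 1, W);  /* leave object n */
    if (w[n] <= W) {                           /* take object n if it fits */
        int take = v[n] + knapsack_memo(v, w, n - 1, W - w[n]);
        if (take > best) best = take;
    }
    return memo[n][W] = best;
}

/* Initialize once before the first call: memset(memo, -1, sizeof memo); */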

0-1 Knapsack: DP Solution

How do we store solutions to subproblems? Construct an array V[0..n, 0..W]. For 1 <= i <= n and 0 <= j <= W, the entry V[i, j] will store the maximum value of any subset of objects {1, 2, …, i} that fits into a knapsack of capacity j. If we can compute all the entries of this array, then the entry V[n, W] will contain the maximum value of any subset of all n objects that fits into the full knapsack of capacity W.

Computing V[i, j]

Observe that V[0, j] = 0 for all 0 <= j <= W: no items, no value. For i >= 1 we consider two cases:
Leave object i: If we choose not to take object i, then the optimal value comes from filling a knapsack of capacity j with the remaining objects {1, 2, …, i-1}. This is just V[i-1, j].
Take object i: If we take object i, then we gain a value of v_i but use up w_i of our capacity. With the remaining j - w_i capacity in the knapsack, we fill it in the best possible way with the objects {1, 2, …, i-1}. This is v_i + V[i-1, j - w_i], and it is only possible if w_i <= j.

0-1 Knapsack: Final Formulation

Combining all observations, we have the following recurrence:

$$V[i,j] = \begin{cases} 0 & \text{if } i = 0 \text{ and } 0 \le j \le W \\ V[i-1,j] & \text{if } w_i > j \\ \max\{V[i-1,j],\ v_i + V[i-1,j-w_i]\} & \text{if } w_i \le j \end{cases}$$

0-1 Knapsack: Algorithm

Knapsack(v[1..n], w[1..n], n, W) {
  Allocate V[0..n][0..W];
  for j = 0 to W do V[0, j] = 0;             // Initialization: no items, no value

  for i = 1 to n do {
    for j = 0 to W do {
      leave_val = V[i-1, j];                 // Total value if we leave item i
      if (j >= w[i])                         // Enough capacity to take item i?
        take_val = v[i] + V[i-1, j - w[i]];  // Total value if we take item i
      else
        take_val = -INFINITY;                // Cannot take item i
      V[i, j] = max(leave_val, take_val);    // Final value
    } // end-for
  } // end-for
  return V[n, W];
} // end-Knapsack

0-1 Knapsack: Running Time

It is easy to see that the algorithm takes (n+1)(W+1) = O(nW) steps. The algorithm computes the maximum attainable knapsack value in V[n, W], but does not describe which items are taken. That can be added by recording, for each entry V[i, j] in the matrix, whether we obtained it by taking the ith item or by leaving it. With this information it is possible to reconstruct the optimal knapsack contents.
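
One way to do this without extra storage (a sketch of mine, not from the slides) is to trace backwards through the finished table: whenever V[i][j] differs from V[i-1][j], item i must have been taken.

/* Reconstruct the optimal subset from the filled DP table V
   ((n+1) x (W+1), allocated as an array of row pointers; items are
   1-indexed as in the slides). Sets taken[i] = 1 if item i is in
   the optimal knapsack. */
void reconstruct(int n, int W, int **V, const int w[], int taken[]) {
    int j = W;
    for (int i = n; i >= 1; i--) {
        if (V[i][j] != V[i-1][j]) {  /* value changed, so item i was taken */
            taken[i] = 1;
            j -= w[i];
        } else {
            taken[i] = 0;
        }
    }
}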

0-1 Knapsack: Example

Values of the objects: <10, 40, 30, 50>
Weights of the objects: <5, 4, 6, 3>
Capacity of the knapsack: W = 10

                          Capacity j
Item  Value  Weight    0   1   2   3   4   5   6   7   8   9  10
  0     -      -       0   0   0   0   0   0   0   0   0   0   0
  1    10      5       0   0   0   0   0  10  10  10  10  10  10
  2    40      4       0   0   0   0  40  40  40  40  40  50  50
  3    30      6       0   0   0   0  40  40  40  40  40  50  70
  4    50      3       0   0   0  50  50  50  50  90  90  90  90

Final result: V[4, 10] = 90 (for taking items 2 and 4).
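
For completeness, here is a self-contained, runnable C translation of the bottom-up pseudocode above (my version, 0-indexed internally, not from the slides), applied to this example; it prints 90:

#include <stdio.h>

/* Bottom-up 0-1 knapsack. V[i][j] = max value achievable with the
   first i items and capacity j. Items are 0-indexed in the arrays,
   so row i of the table corresponds to item i of the slides. */
int knapsack_dp(const int v[], const int w[], int n, int W) {
    int V[n + 1][W + 1];                      /* C99 variable-length array */
    for (int j = 0; j <= W; j++) V[0][j] = 0; /* no items, no value */
    for (int i = 1; i <= n; i++) {
        for (int j = 0; j <= W; j++) {
            int leave_val = V[i-1][j];
            int take_val = (j >= w[i-1]) ? v[i-1] + V[i-1][j - w[i-1]] : -1;
            V[i][j] = (take_val > leave_val) ? take_val : leave_val;
        }
    }
    return V[n][W];
}

int main(void) {
    int v[] = {10, 40, 30, 50};
    int w[] = {5, 4, 6, 3};
    printf("V[4, 10] = %d\n", knapsack_dp(v, w, 4, 10)); /* prints 90 */
    return 0;
}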