Dynamic Programming: 0-1 Knapsack
These notes are taken from the notes by Dr. Steve Goddard (DynamicProgramming.pdf).

Recall: Dynamic Programming
Basic idea:
◦ Optimal substructure: the optimal solution to the problem contains optimal solutions to its subproblems.
◦ Overlapping subproblems: there are few distinct subproblems in total, but many recurring instances of each.
◦ Memoization (strictly, bottom-up table filling is called tabulation): solve bottom-up, building a table of solved subproblems that are used to solve larger ones.
Variations:
◦ The “table” could be 3D, triangular, a tree, etc.

0-1 Knapsack Problem
Given a knapsack with maximum capacity W and a set S consisting of n items.
Each item i has a weight w_i and a value v_i.
Problem: solve the version of 0-1 Knapsack in which W, w_i, and v_i are all integer values.
◦ How do we pack the knapsack to achieve the maximum total value of packed items?
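Stated formally (a standard formulation, added here for reference; it is not spelled out on the original slides): choose an indicator x_i ∈ {0, 1} for each item so as to

    maximize    Σ_{i=1..n} v_i · x_i
    subject to  Σ_{i=1..n} w_i · x_i ≤ W.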

Brute-Force Approach
Since there are n items, there are 2^n possible combinations of items.
Go through all combinations and find the one with maximum value and total weight less than or equal to W.
Running time is Θ(2^n), since all 2^n subsets must be examined.
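A minimal brute-force sketch in Python (illustrative only; the function name and input format are chosen here, not taken from the slides):

from itertools import combinations

def knapsack_brute_force(items, W):
    # items is a list of (weight, value) pairs; try every one of the 2^n subsets
    best_value = 0
    for r in range(len(items) + 1):
        for subset in combinations(items, r):
            weight = sum(w for w, v in subset)
            value = sum(v for w, v in subset)
            if weight <= W and value > best_value:
                best_value = value
    return best_value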

Can We Do Better?
Use an algorithm based on dynamic programming.
We need to carefully identify the subproblem.
Try this:
◦ If items are labeled 1, …, n, then a subproblem would be to find an optimal solution for S_k = {items labeled 1, …, k}.

Defining a Subproblem
If items are labeled 1, …, n, then a subproblem would be to find an optimal solution for S_k = {items labeled 1, …, k}.
This is a reasonable subproblem definition.
The question:
◦ Can we define the final solution (S_n) in terms of subproblems (S_k)?
Unfortunately, we can’t do that.

Defining a Subproblem
The solution for S_4 is not necessarily part of the solution for S_5 (the original slide illustrates this with a small example).
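A concrete instance (chosen here for illustration; it may differ from the figure on the original slide): take W = 20 and items (weight, value) 1: (2, 3), 2: (3, 4), 3: (4, 5), 4: (5, 8), 5: (9, 10). The best load from S_4 uses all four items (weight 14, value 20), but the best load from S_5 is {1, 3, 4, 5} (weight 20, value 26), which drops item 2. So the optimal solution for S_5 does not simply extend the optimal solution for S_4.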

Defining a Subproblem
So our definition of a subproblem is flawed and we need another one!
Let’s add another parameter: w, which represents the maximum total weight allowed for each subproblem (0 ≤ w ≤ W).
The subproblem then will be to compute V[k, w]: the maximum value achievable using a subset of the first k items with total weight at most w.

Recursive Formula for Subproblems
The recurrence (written out below, after the case analysis) says that the best subset of S_k that has total weight at most w is one of:
◦ the best subset of S_{k-1} that has total weight at most w, OR
◦ the best subset of S_{k-1} that has total weight at most w − w_k, plus item k.

Recursive Formula
The best subset of S_k that has total weight at most w either contains item k or not.
First case: w_k > w
◦ Item k can’t be part of the solution, since if it were, the total weight would exceed w, which is unacceptable.
Second case: w_k ≤ w
◦ Item k may or may not be in the solution, so we choose whichever case gives the greater value.
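Written out (reconstructed here from the two cases above, since the formula image on the original slide is not in the transcript):

    V[k, w] = 0                                         if k = 0 or w = 0
    V[k, w] = V[k-1, w]                                 if w_k > w
    V[k, w] = max( V[k-1, w], v_k + V[k-1, w - w_k] )   if w_k ≤ w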

Optimal Substructure
The 0-1 Knapsack Problem exhibits optimal substructure:
◦ Consider the most valuable load weighing at most W pounds.
   – If we remove item k from the load, what do we know about the remaining load?
   – Answer: the remaining load must be the most valuable load weighing at most W − w_k that the thief could take, excluding item k (otherwise swapping in that better load would improve the original load, a contradiction).

How many possible subproblems?

Roughly nW of them: one subproblem V[k, w] for each k = 0, …, n and w = 0, …, W.
Use a table of size (n+1) × (W+1).

0-1 Knapsack Algorithm

for w = 0 to W
    V[0, w] = 0
for k = 1 to n
    V[k, 0] = 0
for k = 1 to n
    for w = 0 to W
        if w[k] <= w                                  // item k can be part of the solution
            if v[k] + V[k-1, w - w[k]] > V[k-1, w]
                V[k, w] = v[k] + V[k-1, w - w[k]]
            else
                V[k, w] = V[k-1, w]
        else
            V[k, w] = V[k-1, w]                       // w[k] > w
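A runnable version of the same table-filling algorithm, sketched in Python (the function name and the convention of padding index 0 are choices made here):

def knapsack_01(weights, values, W):
    # weights[k], values[k] describe item k for k = 1..n; index 0 is a dummy entry
    n = len(weights) - 1
    V = [[0] * (W + 1) for _ in range(n + 1)]          # V[k][w], initialized to 0
    for k in range(1, n + 1):
        for w in range(W + 1):
            if weights[k] <= w and values[k] + V[k-1][w - weights[k]] > V[k-1][w]:
                V[k][w] = values[k] + V[k-1][w - weights[k]]
            else:
                V[k][w] = V[k-1][w]
    return V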

Example
Let’s try this on the following data:
◦ n = 4 (number of items)
◦ W = 5 (maximum weight the knapsack can hold)
◦ Items (weight, value): 1: (2, 3), 2: (3, 4), 3: (4, 5), 4: (5, 6)
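Running the sketch above on this data (the values below follow directly from the algorithm):

weights = [0, 2, 3, 4, 5]      # index 0 is a dummy; items are 1..4
values  = [0, 3, 4, 5, 6]
V = knapsack_01(weights, values, 5)
print(V[4][5])                 # prints 7: take items 1 and 2 (weight 5, value 3 + 4)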

Example (2)–(6): Initialization of the table (row k = 0 and column w = 0 set to 0), then filling one row per item. (The step-by-step tables on the original slides are images and are not in this transcript.)
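For reference, the filled table for this example, computed here by applying the algorithm above (V[4][5] = 7 is the answer):

          w=0  w=1  w=2  w=3  w=4  w=5
   k=0      0    0    0    0    0    0
   k=1      0    0    3    3    3    3
   k=2      0    0    3    4    4    7
   k=3      0    0    3    4    5    7
   k=4      0    0    3    4    5    7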

Comments
This algorithm finds only the maximum possible value that can be carried in the knapsack:
◦ i.e., the value V[n, W].
To know which items make up this maximum value, we need to backtrack through the table.
◦ Remember, we did the same thing with LCS.

How to Find the Actual Items
Let k = n and w = W.
Repeat while k > 0:
◦ If V[k, w] ≠ V[k-1, w] then
   – mark the k-th item as in the knapsack
   – w = w − w_k, k = k − 1
◦ Else
   – k = k − 1    // the k-th item is not in the knapsack
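A sketch of the same backtracking in Python, continuing the example above (the function name recover_items is chosen here):

def recover_items(V, weights):
    # walk the filled table backwards and return the indices of the chosen items
    k, w = len(V) - 1, len(V[0]) - 1
    chosen = []
    while k > 0:
        if V[k][w] != V[k-1][w]:       # item k must be in the optimal solution
            chosen.append(k)
            w -= weights[k]
        k -= 1
    return chosen

print(recover_items(V, weights))       # prints [2, 1] for the example above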

Finding the Items: the original slides trace this backtracking procedure step by step on the example table.

Running Time?
O(nW)
◦ This is pseudo-polynomial: the input encodes W in about log_2 W bits, so nW can be exponential in the size of the input (e.g., W = 10^9 fits in 30 bits but forces a table with billions of entries).
◦ The O(nW) complexity therefore does not contradict the fact that the (decision version of the) 0-1 knapsack problem is NP-complete, since W, unlike n, is not polynomial in the length of the input to the problem.
◦ Also remember that this DP solution only works if the weights are integers!