
Algorithm Design and Analysis
Lecture 14: Dynamic Programming
Knapsack
Independent Set on grid (exercise)
Adam Smith
9/27/10 A. Smith; based on slides by E. Demaine, C. Leiserson, S. Raskhodnikova, K. Wayne

Midterm 1
It's on Thursday, 8:15pm in Willard 75.
– BYO coffee
You can have one (1) double-sided sheet of notes on colored paper.
Everything up to and including today's lecture is fair game.
Sample exams from previous years are on the web. The sample "midterm 2" exams contain questions on material we haven't yet covered:
– Network flow, Randomization, …
I will try to post (all?) homework solutions…

Knapsack

Knapsack Problem
Given n objects and a "knapsack."
– Item i weighs w_i > 0 kilograms and has value v_i > 0.
– Knapsack has capacity of W kilograms.
– Goal: fill knapsack so as to maximize total value.
Ex: { 3, 4 } has value 40.
Many "packing" problems fit this model:
– Assigning production jobs to factories
– Deciding which jobs to do on a single processor with bounded time
– Deciding which problems to do on an exam
(The slide shows a table of item numbers, values, and weights; W = 11.)

Knapsack Problem
Given n objects and a "knapsack."
– Item i weighs w_i > 0 kilograms and has value v_i > 0.
– Knapsack has capacity of W kilograms.
– Goal: fill knapsack so as to maximize total value.
Ex: { 3, 4 } has value 40.
(Same item table as on the previous slide; W = 11.)
Greedy: repeatedly add item with maximum ratio v_i / w_i.
Example: { 5, 2, 1 } achieves only value = 35 ⇒ greedy not optimal.
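For concreteness, here is the greedy computation worked out on the instance the figure appears to show (the standard example with values 1, 6, 18, 22, 28 and weights 1, 2, 5, 6, 7 for items 1 through 5; these numbers are an assumption, since the table itself lives only in the slide image). The ratios v_i / w_i are 1, 3, 3.6, about 3.67, and 4, so greedy takes item 5 (value 28, weight 7), finds that items 4 and 3 no longer fit, and then takes items 2 and 1, for total value 28 + 6 + 1 = 35 at weight 10. The optimum { 3, 4 } has weight 5 + 6 = 11 and value 18 + 22 = 40, so greedy is not optimal here.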

Dynamic programming: attempt 1
Definition: OPT(i) = maximum profit subset of items 1, …, i.
– Case 1: OPT does not select item i.
  OPT selects best of { 1, 2, …, i-1 }.
– Case 2: OPT selects item i.
  Without knowing what other items were selected before i, we don't even know if we have enough room for i.
Conclusion: Need more sub-problems!

Adding a new variable
Definition: OPT(i, w) = max profit subset of items 1, …, i with weight limit w.
– Case 1: OPT does not select item i.
  OPT selects best of { 1, 2, …, i-1 } with weight limit w.
– Case 2: OPT selects item i.
  New weight limit = w - w_i.
  OPT selects best of { 1, 2, …, i-1 } with the new weight limit.
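Written out, the two cases give the following recurrence (in the slides' own notation; it matches the bottom-up code two slides below):

    OPT(i, w) = 0                                             if i = 0
    OPT(i, w) = OPT(i-1, w)                                   if w_i > w
    OPT(i, w) = max{ OPT(i-1, w), v_i + OPT(i-1, w - w_i) }   otherwise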

Adding a new variable
How many different possible inputs for the recursive procedure?
– n choices for the length i of the list
– W choices for the weight limit
Fill a table with n × W entries.
– Need entries for i-1 before we fill entries for i.

Bottom-up algorithm
Fill up an n-by-W array.

Input: n, W, w_1, …, w_n, v_1, …, v_n

for w = 0 to W
    M[0, w] = 0
for i = 1 to n
    for w = 0 to W
        if (w_i > w)
            M[i, w] = M[i-1, w]
        else
            M[i, w] = max { M[i-1, w], v_i + M[i-1, w - w_i] }
return M[n, W]
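A minimal Python sketch of the same table-filling algorithm (function and variable names are mine, not from the slides):

    def knapsack(values, weights, W):
        """0-1 knapsack by bottom-up dynamic programming.
        M[i][w] = max value achievable using items 1..i with weight limit w.
        Item i (1-indexed in the recurrence) has value values[i-1] and
        weight weights[i-1] in the 0-indexed Python lists."""
        n = len(values)
        M = [[0] * (W + 1) for _ in range(n + 1)]  # row 0: no items, value 0
        for i in range(1, n + 1):
            for w in range(W + 1):
                if weights[i - 1] > w:
                    # item i does not fit under limit w
                    M[i][w] = M[i - 1][w]
                else:
                    # either skip item i, or take it and give up w_i of capacity
                    M[i][w] = max(M[i - 1][w],
                                  values[i - 1] + M[i - 1][w - weights[i - 1]])
        return M

    # The optimal value is M[n][W]; the whole table is returned so the
    # chosen item set can be traced back afterwards.

As in the pseudocode, the running time and space are proportional to n × W.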

Knapsack table
(The slide shows the filled table for the example instance: rows labeled ∅, { 1 }, { 1, 2 }, { 1, 2, 3 }, { 1, 2, 3, 4 }, { 1, 2, 3, 4, 5 }, columns labeled by weight limits 0 through W = 11, with the item value/weight table alongside.)
OPT: { 4, 3 }, value = 40.
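The table and the optimal set can be reproduced with the sketch above, on the same assumed instance as before (values 1, 6, 18, 22, 28; weights 1, 2, 5, 6, 7; W = 11):

    values = [1, 6, 18, 22, 28]   # assumed instance; the numbers are only in the slide image
    weights = [1, 2, 5, 6, 7]
    W = 11

    M = knapsack(values, weights, W)
    print(M[len(values)][W])      # 40

    # Trace back through the table to recover an optimal item set.
    opt, i, w = [], len(values), W
    while i > 0:
        if M[i][w] != M[i - 1][w]:    # item i was taken
            opt.append(i)
            w -= weights[i - 1]
        i -= 1
    print(opt)                        # [4, 3], matching OPT = { 4, 3 }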

How do we turn this into a proof of correctness?
Dynamic programming (and divide and conquer) lends itself directly to induction.
– Base cases: OPT(i, 0) = 0, OPT(0, w) = 0 (no items!).
– Inductive step: explain the correctness of the recursive formula.
  If the following values are correctly computed:
  – OPT(i', w') for every i' and every w' < w
  – OPT(0, w), …, OPT(i-1, w)
  then the recursive formula computes OPT(i, w) correctly.
  – Case 1: …, Case 2: … (from previous slide).

About proofs
A proof is a rigorous argument about a mathematical statement's truth.
– Should convince you
– Should not feel like a shot in the dark
– What makes a proof "good enough" is social
– (Truly rigorous, machine-readable proofs exist but are too painful for human use.)
In real life, you have to check your own work.
– Ideal: all problems should have grades 100% or 20%.

Time and Space Complexity
Given n objects and a "knapsack."
– Item i weighs w_i > 0 kilograms and has value v_i > 0.
– Knapsack has capacity of W kilograms.
– Goal: fill knapsack so as to maximize total value.
What is the input size? In words? In bits?
(Same item table as before; W = 11.)

Time and space complexity
Time and space: Θ(nW).
– Not polynomial in input size!
– "Pseudo-polynomial."
– Decision version of Knapsack is NP-complete. [KT, chapter 8]
Knapsack approximation algorithm: there is a poly-time algorithm that produces a solution with value within 0.01% of optimum. [KT, section 11.8]
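A back-of-the-envelope answer to the input-size question (my own note, not from the slides): each weight and value is written in binary, so the input occupies roughly n(log_2 V + log_2 W) + log_2 W bits, where V is the largest value. The running time Θ(nW) = Θ(n · 2^(log_2 W)) is therefore exponential in the number of bits used to write W; an algorithm that is polynomial in the numeric value W rather than in its bit length is what "pseudo-polynomial" means.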

Review
Weighted independent set on the chain
– Input: a chain graph of length n with values v_1, …, v_n
– Goal: find a heaviest independent set
Let OPT({1, …, n}) = heaviest IS among {1, …, n}.
Write down a recursive formula for OPT.
– How many different subproblems?
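For reference, the standard answer (my summary, with OPT(i) denoting the weight of a heaviest independent set among the first i nodes of the chain):

    OPT(0) = 0
    OPT(1) = v_1
    OPT(i) = max{ OPT(i-1), v_i + OPT(i-2) }   for i ≥ 2

There are n+1 subproblems, one per prefix, so the bottom-up algorithm runs in O(n) time.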

Exercise
Do the same with a 2 × n grid graph.
Three types of subproblems:
– grid(i)
– gridTop(i)
– gridBottom(i)

Exercise
– grid(i): maximum independent set in the subgraph consisting of only the first i pairs of nodes
– gridTop(i): maximum independent set in the subgraph consisting of the first i pairs of nodes plus the top node of the (i+1)-st pair
– gridBottom(i): maximum independent set in the subgraph consisting of the first i pairs of nodes plus the bottom node of the (i+1)-st pair
(The slide's picture shows a 2 × n grid with the first i pairs marked.)

Recursive formulas for subproblems
Say t_i, b_i are the weights of the top and bottom nodes in the i-th pair.
grid(i) = max(b_1, t_1) if i = 1, else max( gridBottom(i-1), t_i + gridBottom(i-2) )
gridBottom(i) = b_1 if i = 0, else max( grid(i), b_{i+1} + gridTop(i-1) )
gridTop(i) = t_1 if i = 0, else max( grid(i), t_{i+1} + gridBottom(i-1) )
(The base cases are the single added node of the first pair; the indices follow the definitions on the previous slide, where gridTop(i) and gridBottom(i) include a node of the (i+1)-st pair.)
The answer is grid(n), and the bottom-up algorithm takes O(n) time.
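A bottom-up Python sketch of these recurrences (my own code, following the definitions above; t[i] and b[i] hold the weights of the top and bottom nodes of pair i+1 in 0-indexed lists):

    def max_weight_is_2xn(t, b):
        """Maximum-weight independent set in a 2-by-n grid graph,
        via the grid / gridTop / gridBottom subproblems."""
        n = len(t)
        if n == 0:
            return 0
        grid = [0] * (n + 1)      # grid[i]: best IS among the first i pairs
        grid_top = [0] * n        # grid_top[i]: first i pairs + top node of pair i+1
        grid_bottom = [0] * n     # grid_bottom[i]: first i pairs + bottom node of pair i+1
        grid_top[0], grid_bottom[0] = t[0], b[0]
        grid[1] = max(t[0], b[0])
        for i in range(1, n):
            # the "half-added" pair i+1 has weights t[i], b[i]
            grid_top[i] = max(grid[i], t[i] + grid_bottom[i - 1])
            grid_bottom[i] = max(grid[i], b[i] + grid_top[i - 1])
            # grid(i+1): either avoid t_{i+1} (gridBottom(i)), or take it (t_{i+1} + gridBottom(i-1))
            grid[i + 1] = max(grid_bottom[i], t[i] + grid_bottom[i - 1])
        return grid[n]

    # Example with made-up weights: a 2-by-3 grid, top row 3, 1, 4, bottom row 2, 5, 1.
    print(max_weight_is_2xn([3, 1, 4], [2, 5, 1]))   # 12, achieved by t_1, b_2, t_3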