CS 3343: Analysis of Algorithms
Lecture 18: More Examples on Dynamic Programming

Review of Dynamic Programming
We've learned how to use DP to solve:
–a special shortest path problem
–the longest common subsequence problem
–general sequence alignment
When should I use dynamic programming?
–The theory is a little hard to apply
–More examples would help

Two steps to dynamic programming
–Formulate the solution as a recurrence relation of solutions to subproblems.
–Specify an order to solve the subproblems so you always have what you need.

A special shortest path problem
[Figure: a grid graph with start S at one corner and goal G at position (m, n) in the opposite corner]
–Each edge has a length (cost).
–We need to get to G from S.
–Can only move right or down.
Aim: find a path with the minimum total length

Recursive thinking Suppose we’ve found the shortest path It must use one of the two edges: –(m, n-1) to (m, n)Case 1 –(m-1, n) to (m, n)Case 2 If case 1 –find shortest path from (0, 0) to (m, n-1) –SP(0, 0, m, n-1) + dist(m, n-1, m, n) is the overall shortest path If case 2 –find shortest path from (0, 0) to (m-1, n) –SP(0, 0, m, n-1) + dist(m, n-1, m, n) is the overall shortest path We don’t know which case is true –But if we’ve find the two paths, we can compare –Real shortest path is the one with shorter overall length m n

Recursive formulation
Let F(i, j) = SP(0, 0, i, j), so F(m, n) is the length of the shortest path from (0, 0) to (m, n).

F(m, n) = min { F(m-1, n) + dist(m-1, n, m, n),
                F(m, n-1) + dist(m, n-1, m, n) }

Generalize to any (i, j), for i = 1..m, j = 1..n:

F(i, j) = min { F(i-1, j) + dist(i-1, j, i, j),
                F(i, j-1) + dist(i, j-1, i, j) }

Boundary condition: i = 0 or j = 0. Easy to figure out manually.
Number of subproblems = m * n; this determines the structure of the DP table.
Data dependency determines the order to compute: each (i, j) needs (i-1, j) and (i, j-1) first.
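
As a concrete illustration, here is a minimal Python sketch of this recurrence. The edge costs are assumed to be given as two hypothetical arrays (not from the slides): right_cost[i][j] for the edge (i, j) to (i, j+1), and down_cost[i][j] for the edge (i, j) to (i+1, j).

    def grid_shortest_path(right_cost, down_cost, m, n):
        # F[i][j] = length of the shortest right/down path from (0, 0) to (i, j)
        INF = float('inf')
        F = [[INF] * (n + 1) for _ in range(m + 1)]
        F[0][0] = 0  # boundary: the empty path
        for i in range(m + 1):
            for j in range(n + 1):
                if i > 0:  # case 2: arrive from above, edge (i-1, j) -> (i, j)
                    F[i][j] = min(F[i][j], F[i-1][j] + down_cost[i-1][j])
                if j > 0:  # case 1: arrive from the left, edge (i, j-1) -> (i, j)
                    F[i][j] = min(F[i][j], F[i][j-1] + right_cost[i][j-1])
        return F[m][n]

The two loops visit (i, j) in row-major order, so F[i-1][j] and F[i][j-1] are always available, exactly the order the data dependency requires.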

Longest Common Subsequence
Given two sequences x[1..m] and y[1..n], find a longest subsequence common to them both.
–x: A B C B D A B
–y: B D C A B A
–BCBA = LCS(x, y)
Note: "a" longest common subsequence, not "the"; LCS(x, y) is functional notation, but not a function.

Recursive thinking
Case 1: x[m] = y[n]. There is an optimal LCS that matches x[m] with y[n].
–Find out LCS(x[1..m-1], y[1..n-1])
Case 2: x[m] ≠ y[n]. At most one of them is in the LCS.
–Case 2.1: x[m] not in LCS. Find out LCS(x[1..m-1], y[1..n])
–Case 2.2: y[n] not in LCS. Find out LCS(x[1..m], y[1..n-1])

Recursive thinking
Case 1: x[m] = y[n]
–LCS(x, y) = LCS(x[1..m-1], y[1..n-1]) || x[m]   (reduce both sequences by 1 char, then concatenate x[m])
Case 2: x[m] ≠ y[n]
–LCS(x, y) = LCS(x[1..m-1], y[1..n]) or LCS(x[1..m], y[1..n-1]), whichever is longer (reduce either sequence by 1 char)

Recursive formulation
Let c[i, j] be the length of LCS(x[1..i], y[1..j]), so c[m, n] is the length of LCS(x, y).

c[m, n] = c[m-1, n-1] + 1               if x[m] = y[n]
          max { c[m-1, n], c[m, n-1] }  otherwise

Generalize, for i = 1..m, j = 1..n:

c[i, j] = c[i-1, j-1] + 1               if x[i] = y[j]
          max { c[i-1, j], c[i, j-1] }  otherwise

Boundary condition: i = 0 or j = 0. Easy to figure out manually.
Number of subproblems = m * n. Order to compute? Each (i, j) depends on (i-1, j-1), (i-1, j), and (i, j-1).
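
A minimal Python sketch of this recurrence (0-based string indexing; the table is filled row by row, which satisfies the data dependency):

    def lcs_length(x, y):
        m, n = len(x), len(y)
        # c[i][j] = length of LCS(x[1..i], y[1..j]); row 0 and column 0 are the boundary
        c = [[0] * (n + 1) for _ in range(m + 1)]
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                if x[i-1] == y[j-1]:                 # case 1: last characters match
                    c[i][j] = c[i-1][j-1] + 1
                else:                                # case 2: drop a character from x or y
                    c[i][j] = max(c[i-1][j], c[i][j-1])
        return c[m][n]

    print(lcs_length("ABCBDAB", "BDCABA"))  # 4, e.g. "BCBA"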

Another DP example
You work in the fast food business. Your company plans to open new restaurants in Texas along I-35.
–Towns along the highway are called t_1, t_2, …, t_n
–A restaurant at t_i has estimated annual profit p_i
–No two restaurants can be located within 10 miles of each other due to some regulation
Your boss wants to maximize the total profit. You want a big bonus.

Brute-force
–Each town is either selected or not selected
–Test each of the 2^n subsets
–Eliminate subsets that violate the constraint
–Compute the total profit for each remaining subset
–Choose the one with the highest profit
Time: Θ(n 2^n)

Natural greedy 1
Take the first town; then take the next town more than 10 miles away, and so on.
Can you give an example where this algorithm doesn't return the correct solution?
[Figure: counterexample with town profits 100k and 500k]

Natural greedy 2
Always take the town with the highest profit that is not within 10 miles of another selected town.
Can you give an example where this algorithm doesn't return the correct solution?
[Figure: counterexample with town profits 300k and 500k]

A DP algorithm Suppose you’ve already found the optimal solution It will either include t n or not include t n Case 1: t n not included in optimal solution –Best solution same as best solution for t 1, …, t n-1 Case 2: t n included in optimal solution –Best solution is p n + best solution for t 1, …, t j, where j < n is the largest index so that dist(t j, t n ) ≥ 10

Recurrence formulation
Let S(i) be the total profit of the optimal solution when the first i towns are considered (not necessarily selected); S(n) is the optimal solution to the complete problem.

S(n) = max { S(n-1),
             S(j) + p_n }   where j < n is the largest index with dist(t_j, t_n) ≥ 10

Generalize:

S(i) = max { S(i-1),
             S(j) + p_i }   where j < i is the largest index with dist(t_j, t_i) ≥ 10

Number of subproblems: n. Boundary condition: S(0) = 0.
Dependency: S(i) depends on S(i-1) and S(j) for some j < i.
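
A minimal Python sketch of this recurrence. It assumes (beyond the slides) that the towns are given as a sorted list of mile markers plus a parallel list of profits; a binary search finds the largest compatible index j.

    import bisect

    def max_profit(positions, profits):
        # positions: sorted mile markers of towns t_1..t_n; profits: p_1..p_n
        n = len(positions)
        S = [0] * (n + 1)                     # S[0] = 0 is the boundary condition
        for i in range(1, n + 1):
            # largest j < i with positions[i-1] - positions[j-1] >= 10
            j = bisect.bisect_right(positions, positions[i-1] - 10)
            S[i] = max(S[i-1],                # town i not selected
                       S[j] + profits[i-1])   # town i selected
        return S[n]

    print(max_profit([0, 6, 12], [3, 5, 4]))  # 7: select t_1 and t_3

Because S is nondecreasing, taking the largest valid j is enough; no inner loop over all j is needed.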

Example
[Worked example: a table with rows Distance (mi) and Profit (100k) per town, plus a dummy S(0) = 0 entry, filled in using
S(i) = max { S(i-1), S(j) + p_i }, j < i & dist(t_j, t_i) ≥ 10.
The numeric entries were lost in transcription.]
Natural greedy 1 yields 25; natural greedy 2's total was lost in transcription. Optimal: 26.

Complexity
Time: Θ(nk), where k is the maximum number of towns within 10 miles to the left of any town
–In the worst case, Θ(n^2)
–Can be improved to Θ(n) with some preprocessing tricks
Memory: Θ(n)

Knapsack problem
Each item has a value and a weight.
Objective: maximize value. Constraint: the knapsack has a weight limit.
Three versions:
–0-1 knapsack problem: take each item or leave it
–Fractional knapsack problem: items are divisible
–Unbounded knapsack problem: unlimited supply of each item
Which one is easiest to solve? We study the 0-1 problem today.

Formal definition (0-1 problem)
–Knapsack has weight limit W
–Items are labeled 1, 2, …, n (arbitrarily)
–Items have weights w_1, w_2, …, w_n (assume all weights are integers; for practical reasons, only consider w_i < W)
–Items have values v_1, v_2, …, v_n
Objective: find a subset of items, S, such that Σ_{i∈S} w_i ≤ W and Σ_{i∈S} v_i is maximal among all such (feasible) subsets.

Naïve algorithms
Enumerate all subsets
–Optimal, but exponential time
Greedy 1: take the item with the largest value
–Not optimal (give an example)
Greedy 2: take the item with the largest value/weight ratio
–Not optimal (give an example)

A DP algorithm Suppose you’ve find the optimal solution S Case 1: item n is included Case 2: item n is not included Total weight limit: W wnwn Total weight limit: W Find an optimal solution using items 1, 2, …, n-1 with weight limit W - w n wnwn Find an optimal solution using items 1, 2, …, n-1 with weight limit W

Recursive formulation
Let V[i, w] be the optimal total value when items 1, 2, …, i are considered for a knapsack with weight limit w, so V[n, W] is the optimal solution.

V[n, W] = max { V[n-1, W-w_n] + v_n,
                V[n-1, W] }

Generalize:

V[i, w] = max { V[i-1, w-w_i] + v_i,   (item i is taken)
                V[i-1, w] }            (item i not taken)
V[i, w] = V[i-1, w]      if w_i > w    (item i cannot be taken)

Boundary condition: V[i, 0] = 0, V[0, w] = 0. Number of subproblems = ?
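
A minimal Python sketch of this recurrence; the table has (n+1)(W+1) entries, which answers the subproblem count. Weights and values are passed as 0-based lists, so item i corresponds to weights[i-1] and values[i-1].

    def knapsack_01(weights, values, W):
        n = len(weights)
        # V[i][w] = best value using items 1..i with weight limit w
        V = [[0] * (W + 1) for _ in range(n + 1)]  # boundary: V[0][w] = V[i][0] = 0
        for i in range(1, n + 1):
            for w in range(1, W + 1):
                if weights[i-1] > w:               # item i doesn't fit
                    V[i][w] = V[i-1][w]
                else:
                    V[i][w] = max(V[i-1][w - weights[i-1]] + values[i-1],  # take item i
                                  V[i-1][w])                               # skip item i
        return V[n][W]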

Example
n = 6 (# of items), W = 10 (weight limit)
Items (weight, value): [the item table was lost in transcription]

[Figure: the DP table V[i, w] for the six items is filled in row by row; each entry is computed as
V[i, w] = max { V[i-1, w-w_i] + v_i (item i is taken), V[i-1, w] (item i not taken) }, or V[i, w] = V[i-1, w] if w_i > w.
The numeric entries were lost in transcription.]

[Figure: tracing back through the completed table.]
Selected items: 6, 5, 1. Total weight: 10. Total value: 15.
Optimal value: 15

Time complexity
Θ(nW). Polynomial?
–Pseudo-polynomial: works well if W is small
Consider the following items (weight, value): (10, 5), (15, 6), (20, 5), (18, 6), with weight limit 35
–Optimal solution: items 2 and 4 (value = 12)
–Enumeration: 2^4 = 16 subsets
–Dynamic programming: fill up a 4 x 35 = 140-entry table
What's the problem?
–Many entries are unused: no such weight combination
–Top-down may be better

A few more examples

Longest increasing subsequence
Given a sequence of numbers, find a longest subsequence that is non-decreasing.
–It has to be a subsequence of the original list
–It has to be in sorted order
=> It is a common subsequence of the original list and the sorted list
[Example: an original list, its sorted copy, and their LCS; the numbers were lost in transcription.]
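
A minimal sketch of this reduction in Python, reusing the LCS recurrence from above against a sorted copy of the input (duplicates are kept, which is what makes the non-decreasing case work); this runs in O(n^2):

    def longest_nondecreasing_subsequence(a):
        b = sorted(a)  # the answer is a common subsequence of a and sorted(a)
        n = len(a)
        c = [[0] * (n + 1) for _ in range(n + 1)]
        for i in range(1, n + 1):
            for j in range(1, n + 1):
                if a[i-1] == b[j-1]:
                    c[i][j] = c[i-1][j-1] + 1
                else:
                    c[i][j] = max(c[i-1][j], c[i][j-1])
        return c[n][n]  # length of the longest non-decreasing subsequence

    print(longest_nondecreasing_subsequence([5, 2, 8, 6, 3, 6, 9, 7]))  # 4, e.g. 2 3 6 9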

Events scheduling problem
A list of events to schedule (or shows to see)
–e_i has start time s_i and finishing time f_i
–Indexed such that f_i < f_j if i < j
–Each event has a value v_i
Schedule to get the largest value
–You can attend only one event at any time
Very similar to the new restaurant location problem
–Sort events according to their finish time
–Consider: is the last event included or not?
[Figure: timeline of events e_1 through e_9]

Events scheduling problem
Let V(i) be the optimal value that can be achieved when the first i events are considered.

V(n) = max { V(n-1),       (e_n not selected)
             V(j) + v_n }  (e_n selected), where j < n is the largest index with f_j < s_n

[Figure: timeline of events e_1 through e_9, marking s_7/f_7, s_8/f_8, and s_9/f_9]
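
A minimal Python sketch of this recurrence. It assumes (beyond the slides) that events arrive as (start, finish, value) triples; they are sorted by finish time, and a binary search over finish times finds the index j:

    import bisect

    def max_schedule_value(events):
        # events: list of (start, finish, value) triples
        events = sorted(events, key=lambda e: e[1])  # index events by finish time
        finishes = [f for _, f, _ in events]
        n = len(events)
        V = [0] * (n + 1)                            # V[0] = 0 is the boundary
        for i in range(1, n + 1):
            s_i, _, v_i = events[i-1]
            j = bisect.bisect_left(finishes, s_i)    # count of events with f_j < s_i
            V[i] = max(V[i-1],                       # e_i not selected
                       V[j] + v_i)                   # e_i selected
        return V[n]

    print(max_schedule_value([(0, 3, 5), (2, 5, 6), (4, 7, 5)]))  # 10: events 1 and 3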

Coin change problem
Given some denominations of coins (e.g., 2, 5, 7, 10), decide if it is possible to make change for a value (e.g., 13), or minimize the number of coins.
Version 1: unlimited number of coins of each denomination
–Unbounded knapsack problem
Version 2: use each denomination at most once
–0-1 knapsack problem
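
A minimal Python sketch of version 1, minimizing the number of coins with an unlimited supply. Because each denomination can be reused, the subproblem is indexed by value alone rather than by (item, weight):

    def min_coins(denoms, target):
        INF = float('inf')
        # best[v] = fewest coins summing to value v, or INF if v cannot be made
        best = [0] + [INF] * target
        for v in range(1, target + 1):
            for d in denoms:
                if d <= v and best[v - d] + 1 < best[v]:
                    best[v] = best[v - d] + 1
        return best[target]  # INF signals that no change is possible

    print(min_coins([2, 5, 7, 10], 13))  # 4, e.g. 2 + 2 + 2 + 7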

Use DP algorithms to solve new problems
–Directly map a new problem to a known problem
–Modify an algorithm for a similar task
–Design your own
Think about the problem recursively:
–The optimal solution to a larger problem can be computed from the optimal solutions of one or more subproblems
–These subproblems can be solved in a certain manageable order
–Works nicely for naturally ordered data such as strings, trees, and some special graphs
–Trickier for general graphs
The textbook has some very good exercises.