CS583 Lecture 12, Jana Kosecka. Dynamic Programming: Longest Common Subsequence, Matrix Chain Multiplication; Greedy Algorithms. Many slides here are based on D. Luebke's slides.

Review: Dynamic Programming. Another strategy for designing algorithms is dynamic programming. It is a metatechnique, not a single algorithm (like divide & conquer). Applicable when the problem breaks down into recurring small subproblems.

Review: Dynamic Programming. A problem-solving methodology (like divide and conquer). Idea: divide into subproblems, solve the subproblems. Applicable to optimization problems. Ingredients: 1. Characterize the structure of an optimal solution. 2. Recursively define the value of an optimal solution. 3. Compute the values of optimal solutions bottom up. 4. Construct an optimal solution from the computed information.

Review: Longest Common Subsequence. Longest common subsequence (LCS) problem: given two sequences x[1..m] and y[1..n], find the longest subsequence which occurs in both. Ex: x = {A B C B D A B}, y = {B D C A B A}. {B C} and {A A} are both subsequences of both. What is the LCS? Brute-force algorithm: for every subsequence of x, check whether it is a subsequence of y. How many subsequences of x are there? What will be the running time of the brute-force algorithm?

Review: LCS. Application: comparison of two DNA strings. Ex: X = {A B C B D A B}, Y = {B D C A B A}. Longest common subsequence: X = A B C B D A B, Y = B D C A B A (here the LCS is B C B A, of length 4). A brute-force algorithm would compare each subsequence of X with the symbols in Y.

Review: LCS Algorithm. Brute-force algorithm: 2^m subsequences of x to check against n elements of y: O(n 2^m). We can do better. For now, let's only worry about the problem of finding the length of the LCS; when finished we will see how to backtrack from this solution to the actual LCS. Notice the LCS problem has optimal substructure. Subproblems: LCSs of pairs of prefixes of x and y.

LCS recursive solution. We start with i = j = 0 (empty prefixes of x and y). Since X_0 and Y_0 are empty strings, their LCS is always empty (i.e., c[0,0] = 0). The LCS of an empty string and any other string is empty, so for every i and j: c[0,j] = c[i,0] = 0. In general, letting c[i,j] be the length of an LCS of the prefixes X_i and Y_j: if X_i == Y_j then c[i,j] = c[i-1,j-1] + 1, else c[i,j] = max(c[i-1,j], c[i,j-1]).
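
As a sketch (mine, not from the original slides), the recurrence can be coded directly as a naive recursive Python function; it takes exponential time because the same (i, j) subproblems are recomputed, which motivates the table-based solutions on the following slides.

def lcs_length_naive(x, y):
    # Direct translation of the recurrence; exponential time
    # because the same (i, j) subproblems are recomputed.
    def c(i, j):
        if i == 0 or j == 0:
            return 0
        if x[i - 1] == y[j - 1]:        # x, y are 0-indexed Python strings
            return c(i - 1, j - 1) + 1
        return max(c(i - 1, j), c(i, j - 1))
    return c(len(x), len(y))

print(lcs_length_naive("ABCB", "BDCAB"))   # 3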

LCS Example. The example slides fill in the table c[i,j] step by step for X = ABCB (rows, i = 1..4) and Y = BDCAB (columns, j = 1..5), using the rule: if X_i == Y_j then c[i,j] = c[i-1,j-1] + 1, else c[i,j] = max(c[i-1,j], c[i,j-1]). The completed table:

        j=0   B   D   C   A   B
  i=0    0    0   0   0   0   0
  A      0    0   0   0   1   1
  B      0    1   1   1   1   2
  C      0    1   1   2   2   2
  B      0    1   1   2   2   3

The length of the LCS is c[4,5] = 3.

LCS Algorithm Running Time. The LCS algorithm calculates the value of each entry of the array c[m,n]. So what is the running time? O(mn), since each c[i,j] is calculated in constant time and there are mn entries in the array.
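
A minimal bottom-up sketch in Python (mine, not from the slides); x and y are 0-indexed strings and c is the (m+1) x (n+1) table from the slides:

def lcs_length(x, y):
    m, n = len(x), len(y)
    # c[i][j] = length of an LCS of x[:i] and y[:j]
    c = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if x[i - 1] == y[j - 1]:
                c[i][j] = c[i - 1][j - 1] + 1
            else:
                c[i][j] = max(c[i - 1][j], c[i][j - 1])
    return c

print(lcs_length("ABCB", "BDCAB")[4][5])   # 3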

Review: How to find the actual LCS. So far we have just found the length of the LCS, not the LCS itself. We want to modify this algorithm so that it outputs the longest common subsequence of X and Y. Each c[i,j] depends on c[i-1,j] and c[i,j-1], or on c[i-1,j-1]. For each c[i,j] we can record how it was acquired: for example, if c[i,j] = c[i-1,j-1] + 1 = 2 + 1 = 3, the value came from the diagonal (a match).

How to find the actual LCS, continued. So we can start from c[m,n] and go backwards. Whenever c[i,j] = c[i-1,j-1] + 1, remember x[i] (because x[i] is part of the LCS). When i = 0 or j = 0 (i.e., we have reached the beginning), output the remembered letters in reverse order.

Review: Finding LCS. Trace back through the table for X = ABCB, Y = BDCAB, starting at c[4,5] = 3: each diagonal step taken on a match contributes one letter. LCS (reversed order): B C B. LCS (straight order): B C B (here the LCS happens to be a palindrome).
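
A sketch combining table construction with the backtrack (my Python; the table-building half repeats the earlier sketch so the function is self-contained):

def lcs(x, y):
    m, n = len(x), len(y)
    c = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if x[i - 1] == y[j - 1]:
                c[i][j] = c[i - 1][j - 1] + 1
            else:
                c[i][j] = max(c[i - 1][j], c[i][j - 1])
    # Backtrack from c[m][n] to recover the subsequence itself.
    out, i, j = [], m, n
    while i > 0 and j > 0:
        if x[i - 1] == y[j - 1]:           # diagonal step: part of the LCS
            out.append(x[i - 1])
            i, j = i - 1, j - 1
        elif c[i - 1][j] >= c[i][j - 1]:   # move toward the larger neighbor
            i -= 1
        else:
            j -= 1
    return "".join(reversed(out))          # letters were collected backwards

print(lcs("ABCB", "BDCAB"))   # BCB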

Review: Optimal Substructure of LCS. Observation 1: optimal substructure. A simple recursive algorithm will suffice; draw a sample recursion tree from c[3,4]. What will be the depth of the tree? Observation 2: overlapping subproblems. Find some places where we solve the same subproblem more than once.

Review: Structure of Subproblems. For the LCS problem: there are few subproblems in total, and many recurring instances of each (unlike divide & conquer, where subproblems are unique). How many distinct subproblems exist for the LCS of x[1..m] and y[1..n]? A: mn.

Memoization. Memoization is another way to deal with overlapping subproblems. After computing the solution to a subproblem, store it in a table; subsequent calls just do a table lookup. We can modify the recursive algorithm to use memoization: there are mn subproblems. How many times is each subproblem wanted? What will be the running time of this algorithm? The running space?
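
A sketch of the memoized variant (assuming Python's functools.lru_cache as the lookup table; any dictionary would do):

from functools import lru_cache

def lcs_length_memo(x, y):
    @lru_cache(maxsize=None)           # the table: one entry per (i, j)
    def c(i, j):
        if i == 0 or j == 0:
            return 0
        if x[i - 1] == y[j - 1]:
            return c(i - 1, j - 1) + 1
        return max(c(i - 1, j), c(i, j - 1))
    return c(len(x), len(y))           # O(mn) time and space

print(lcs_length_memo("ABCB", "BDCAB"))   # 3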

Review: Dynamic Programming. Dynamic programming: build the table bottom up. Same table as memoization, but instead of starting at (m,n) and recursing down, start at (1,1).

Review: Dynamic Programming. Summary of the basic idea. Optimal substructure: an optimal solution to the problem consists of optimal solutions to subproblems. Overlapping subproblems: few subproblems in total, many recurring instances of each. Solve bottom up, building a table of solved subproblems that are used to solve larger ones. Variations: the "table" could be 3-dimensional, triangular, a tree, etc.

Matrix Chain Multiplication. Given a sequence of matrices A_1, A_2, ..., A_n and their dimensions p_0, p_1, ..., p_n (matrix A_i is p_{i-1} x p_i), what is the optimal order of multiplication? Example of why two different orders matter: for A_1 (10 x 100), A_2 (100 x 5), A_3 (5 x 50), the order (A_1 A_2) A_3 costs 10*100*5 + 10*5*50 = 7500 scalar multiplications, while A_1 (A_2 A_3) costs 100*5*50 + 10*100*50 = 75,000. Brute-force strategy: examine all possible parenthesizations. The solution to the recurrence counting parenthesizations is the sequence of Catalan numbers, which grows as Omega(2^n).

Matrix Chain Multiplication. Substructure property: the total cost is the cost of solving the two subproblems plus the cost of multiplying the two resulting matrices. Optimal substructure: find the split that yields the minimal total cost. Idea: try to define it recursively.

Matrix Chain Multiplication. Define the cost recursively: let m[i,j] be the minimum cost of multiplying A_i ... A_j. Then m[i,i] = 0, and for i < j: m[i,j] = min over i <= k < j of { m[i,k] + m[k+1,j] + p(i-1)p(k)p(j) }.

Matrix Chain Multiplication. Option 1: compute the cost recursively, remembering the good splits. Draw the recurrence tree for a small instance, e.g., RecursiveMatrixChain(p, 1, 4).

Matrix Chain Multiplication. Look up the pseudo-code in the textbook. The core is the recursive call: C = RecursiveMatrixChain(p, i, k) + RecursiveMatrixChain(p, k+1, j) + p(i-1)p(k)p(j). Prove by substitution that the recursive solution would still take exponential time.

Matrix Chain Multiplication. Idea: memoization. Look at the recursion tree: many of the sub-problems repeat. Remember them and reuse them in later calls. How many sub-problems do we have, and why? One for each pair (i,j) with 1 <= i <= j <= n, i.e., Theta(n^2). Compute the solution to all subproblems bottom up, memoizing the intermediate costs m[i,j] in a table.

Matrix Chain Multiplication

MatrixChainOrder(p)
  n = length(p) - 1
  for i = 1 to n
    m[i,i] = 0                     % initialize
  for l = 2 to n                   % l is the chain length
    for i = 1 to n - l + 1         % first compute all m[i,i+1], then m[i,i+2], ...
      j = i + l - 1
      m[i,j] = inf
      for k = i to j - 1
        q = m[i,k] + m[k+1,j] + p(i-1)p(k)p(j)
        if q < m[i,j] then
          m[i,j] = q
          s[i,j] = k               % remember the k with minimum cost
  return m and s
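
A runnable Python translation of the pseudocode above (a sketch; here p is the dimension list, so matrix A_i is p[i-1] x p[i], and indices 1..n are used to match the slides):

import math

def matrix_chain_order(p):
    n = len(p) - 1
    # m[i][j] = minimum cost of computing A_i..A_j; s[i][j] = best split point k
    m = [[0] * (n + 1) for _ in range(n + 1)]
    s = [[0] * (n + 1) for _ in range(n + 1)]
    for l in range(2, n + 1):                 # l is the chain length
        for i in range(1, n - l + 2):
            j = i + l - 1
            m[i][j] = math.inf
            for k in range(i, j):
                q = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                if q < m[i][j]:
                    m[i][j] = q
                    s[i][j] = k               # remember the k with minimum cost
    return m, s

m, s = matrix_chain_order([10, 100, 5, 50])
print(m[1][3], s[1][3])   # 7500 2, i.e., the optimal order is (A1 A2) A3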

Matrix Chain Multiplication Example. (The worked example table from this slide is not preserved in the transcript; the usage example above shows the computation for three matrices.)

Dynamic Programming. What is the structure of the subproblem? Common pattern: an optimal solution requires making a choice which leads to an optimal solution of a subproblem. The hard part: what is the optimal subproblem structure? How many subproblems are there? How many choices do we have for which subproblem to use? Matrix chain multiplication: 2 subproblems, j - i choices. LCS: 3 subproblems, 3 choices. Subtleties (graph examples): shortest path vs. longest path.

Greedy Algorithms. A greedy algorithm always makes the choice that looks best at the moment (think of everyday examples). The hope: a locally optimal choice will lead to a globally optimal solution. Examples: minimum-weight spanning trees and Dijkstra's shortest-path algorithm are greedy. Dynamic programming can be overkill; greedy algorithms tend to be easier to code.

Activity-Selection Problem. Problem: get your money's worth out of a carnival. You buy a wristband that lets you onto any ride; there are lots of rides, each starting and ending at different times. Your goal: ride as many rides as possible. (Another, alternative goal that we don't solve here: maximize time spent on rides.) Welcome to the activity-selection problem. In general: how to schedule activities which require use of a common resource, with the goal of selecting a maximal set of compatible activities.

Activity-Selection. Formally: given a set S of n activities, with s_i = start time of activity i and f_i = finish time of activity i, find a max-size subset A of compatible activities. Assume (wlog) that f_1 <= f_2 <= ... <= f_n, i.e., the finish times are sorted. Activities are compatible if their intervals do not overlap.

Activity Selection Example. A table of activities i with their start times s_i and finish times f_i (the numeric values from the slide are not preserved in the transcript).

Activity Selection. Optimal substructure: let S_ij be the set of activities compatible with both a_i and a_j; we need to find a maximal compatible subset of S_ij. Suppose that subset contains activity a_k; this suggests the recursive definition c[i,j] = c[i,k] + c[k,j] + 1, with c[i,j] = 0 when S_ij is empty.

Activity Selection: Optimal Substructure. Suppose A is the solution set of the problem. Let k be the minimum activity in A (i.e., the one with the earliest finish time). Then A' = A - {k} is an optimal solution to S' = {i in S : s_i >= f_k}. In words: once activity #1 is selected, the problem reduces to finding an optimal solution for activity selection over the activities in S compatible with #1. Proof: if we could find a solution B' to S' with |B'| > |A - {k}|, then B' U {k} would be compatible and |B' U {k}| > |A|, a contradiction since we said A is an optimal solution to the problem.

Activity Selection. Dynamic programming strategy: 1. Identify the subproblems. 2. Recursively define the cost. 3. Fill in the cost table bottom up (in tabular form).

Activity Selection. Converting the dynamic programming solution to a greedy solution. Observation: given the activity a_m with the earliest finish time in the current subproblem: 1. that activity will be in some maximal-size subset of mutually compatible activities of the subproblem; 2. the subproblem of activities finishing before a_m starts is empty, so choosing a_m leaves the later subproblem as the only non-empty one. Proof: see the book. Conclusion: the activity we choose is always the one with the earliest finish time. This is the greedy choice; show that it always maximizes the number of scheduled activities.

Recursive Alg.

RecursiveActivitySelector(s, f, i, j)
  m = i + 1
  while m <= j and s[m] < f[i]      % skip activities that start before a_i finishes
    do m = m + 1
  if m <= j
    then return {a_m} U RecursiveActivitySelector(s, f, m, j)
    else return the empty set

Initial call: RecursiveActivitySelector(s, f, 0, n), with a fictitious activity a_0 whose finish time is f_0 = 0.
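
A Python sketch of the recursive selector (my translation; the sample data is the textbook's example activity table, with a sentinel activity 0 whose finish time is 0):

def recursive_activity_selector(s, f, i, n):
    # s[1..n], f[1..n] hold start/finish times sorted by finish time;
    # index 0 is the sentinel activity with f[0] = 0.
    m = i + 1
    while m <= n and s[m] < f[i]:      # skip activities that overlap a_i
        m += 1
    if m <= n:
        return [m] + recursive_activity_selector(s, f, m, n)
    return []

s = [0, 1, 3, 0, 5, 3, 5, 6, 8, 8, 2, 12]
f = [0, 4, 5, 6, 7, 9, 9, 10, 11, 12, 14, 16]
print(recursive_activity_selector(s, f, 0, 11))   # [1, 4, 8, 11]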

Recursive Alg. Trace of RecursiveActivitySelector(s, f, 0, n) on the example activity table: the first call sets m = 1 and selects a_1; each later call skips the activities that start before the previously selected activity finishes (the step-by-step table from the slide is not preserved in the transcript).

Activity Selection: Repeated Subproblems. Consider a recursive algorithm that tries all possible compatible subsets to find a maximal set, and notice the repeated subproblems. The decision tree asks: is activity 1 in A? On the "yes" branch we continue with S', on the "no" branch with S - {1}; asking next whether activity 2 is in A yields S'', S' - {2}, and S - {1,2}, and the same subproblem S'' appears on more than one branch.

Greedy Choice Property. Dynamic programming? Memoize? Yes, but... The activity selection problem also exhibits the greedy choice property: a locally optimal choice leads to a globally optimal solution. Theorem 17.1: if S is an activity-selection problem sorted by finish time, then there exists an optimal solution A subset of S such that {1} is in A. Sketch of proof: if there is an optimal solution B that does not contain {1}, we can always replace the first activity in B with {1} (why? because activity 1 finishes no later, so the rest of B stays compatible). Same number of activities, thus optimal.

Activity Selection: A Greedy Algorithm. So the actual algorithm is simple: sort the activities by finish time; schedule the first activity; then schedule the next activity in the sorted list that starts after the previous activity finishes; repeat until no activities remain. The intuition is even simpler: always pick the compatible ride that will be over soonest.
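
An iterative version as a minimal Python sketch (assuming (start, finish) pairs; the function name is my own):

def greedy_activity_selector(activities):
    # activities: list of (start, finish) pairs
    schedule = []
    last_finish = float("-inf")
    for start, finish in sorted(activities, key=lambda a: a[1]):
        if start >= last_finish:    # compatible with everything chosen so far
            schedule.append((start, finish))
            last_finish = finish
    return schedule

print(greedy_activity_selector([(1, 4), (3, 5), (0, 6), (5, 7), (8, 11)]))
# [(1, 4), (5, 7), (8, 11)]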

Huffman Coding. Design of optimal codes. Example. Idea: how do we design an optimal code? The notion of a prefix code. A greedy algorithm for constructing optimal codes.
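
The slide only outlines the topic, so as a hedged sketch: the standard greedy construction repeatedly merges the two least-frequent subtrees using a min-heap (my Python; the insertion counter just breaks ties between equal frequencies, and the sample frequencies are the textbook's six-letter example):

import heapq

def huffman_codes(freq):
    # freq: dict mapping symbol -> frequency.
    # Heap entries are (frequency, insertion counter, tree),
    # where a tree is either a symbol or a (left, right) pair.
    heap = [(f, i, sym) for i, (sym, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        f1, _, t1 = heapq.heappop(heap)   # the two least-frequent subtrees
        f2, _, t2 = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, count, (t1, t2)))
        count += 1
    codes = {}
    def walk(tree, prefix):
        if isinstance(tree, tuple):       # internal node: branch on 0/1
            walk(tree[0], prefix + "0")
            walk(tree[1], prefix + "1")
        else:
            codes[tree] = prefix or "0"   # lone-symbol edge case
    walk(heap[0][2], "")
    return codes

print(huffman_codes({"a": 45, "b": 13, "c": 12, "d": 16, "e": 9, "f": 5}))
# {'a': '0', 'c': '100', 'b': '101', 'f': '1100', 'e': '1101', 'd': '111'}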

Review: The Knapsack Problem The famous knapsack problem: A thief breaks into a museum. Fabulous paintings, sculptures, and jewels are everywhere. The thief has a good eye for the value of these objects, and knows that each will fetch hundreds or thousands of dollars on the clandestine art collector’s market. But, the thief has only brought a single knapsack to the scene of the robbery, and can take away only what he can carry. What items should the thief take to maximize the haul?

Review: The Knapsack Problem. More formally, the 0-1 knapsack problem: the thief must choose among n items, where the i-th item is worth v_i dollars and weighs w_i pounds. Carrying at most W pounds, maximize value. Note: assume v_i, w_i, and W are all integers. "0-1" because each item must be taken or left in its entirety. A variation, the fractional knapsack problem: the thief can take fractions of items. Think of the items in the 0-1 problem as gold ingots, and in the fractional problem as buckets of gold dust.

Review: The Knapsack Problem and Optimal Substructure. Both variations exhibit optimal substructure. To show this for the 0-1 problem, consider the most valuable load weighing at most W pounds. If we remove item j from the load, what do we know about the remaining load? A: the remainder must be the most valuable load weighing at most W - w_j that the thief could take from the museum, excluding item j.

Solving the Knapsack Problem. The optimal solution to the fractional knapsack problem can be found with a greedy algorithm. How? The optimal solution to the 0-1 problem cannot be found with the same greedy strategy. Greedy strategy: take items in order of dollars/pound. Example: 3 items weighing 10, 20, and 30 pounds; the knapsack can hold 50 pounds. Suppose item 2 is worth $100. Assign values to the other items so that the greedy strategy will fail.
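
A minimal sketch of the fractional greedy (my Python; items are (value, weight) pairs taken in decreasing value-per-pound order):

def fractional_knapsack(items, capacity):
    # items: list of (value, weight); take greedily by value/weight ratio
    total = 0.0
    for value, weight in sorted(items, key=lambda it: it[0] / it[1], reverse=True):
        if capacity <= 0:
            break
        take = min(weight, capacity)     # take as much of this item as fits
        total += value * (take / weight)
        capacity -= take
    return total

# With item 2 worth $100 and these (hypothetical) values for the others,
# the greedy fills the remaining capacity with a fraction of item 3.
print(fractional_knapsack([(60, 10), (100, 20), (120, 30)], 50))   # 240.0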

The Knapsack Problem: Greedy vs. Dynamic. The fractional problem can be solved greedily. The 0-1 problem cannot be solved with a greedy approach; as you have seen, however, it can be solved with dynamic programming.
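
For completeness, a bottom-up DP sketch for the 0-1 problem (my Python; assumes integer weights, as the slides note, and reuses the hypothetical values from the fractional example, which answer the earlier exercise by defeating the greedy strategy):

def knapsack_01(values, weights, W):
    n = len(values)
    # best[j] = max value achievable with capacity j using the items seen so far
    best = [0] * (W + 1)
    for i in range(n):
        for j in range(W, weights[i] - 1, -1):   # descending: each item used once
            best[j] = max(best[j], best[j - weights[i]] + values[i])
    return best[W]

# Greedy-by-ratio would take items 1 and 2 for $160; the optimal 0-1 load
# is items 2 and 3 for $220.
print(knapsack_01([60, 100, 120], [10, 20, 30], 50))   # 220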